title: string
paper_decision: string
review_1: string
rebuttals_1: string
review_2: string
rebuttals_2: string
review_3: string
rebuttals_3: string
review_4: string
rebuttals_4: string
global_rebuttals: string
dataset_source: string
conference_year: int64
review_5: string
rebuttals_5: string
review_6: string
rebuttals_6: string
review_7: string
rebuttals_7: string
review_8: string
rebuttals_8: string
Privacy-Preserving Logistic Regression Training with A Faster Gradient Variant
Reject
Summary: This paper proposes a second-order version of Nesterov's accelerated gradient (NAG) descent and Adagrad for logistic regression by incorporating an approximation to the Hessian. The authors call this the "quadratic gradient". Specifically, a diagonal approximation to the Hessian for logistic regression is proposed. Some empirical results are shown to illustrate the benefit of the proposed method over vanilla NAG and Adagrad. Strengths: The proposed approximation of the diagonal Hessian may be interesting for NAG. Weaknesses: Unfortunately, this paper has several weaknesses. **1.** **Limited novelty**: The proposed approximation to the Hessian in Section 3.2 seems like a trivial and incremental extension of the idea of reference [4] discussed in Section 3.1. **2.** **Lack of clarity**: The proposed methods are unclear to me and the presentation needs to be heavily improved. * The enhanced NAG method described in line 150 is unclear to me -- what is $G$ here and is $\alpha_t$ the step-size here? Moreover, Algorithm 1 seems different from the discussions in Section 3.3. What are $\alpha_0$ and $\alpha_1$? They don't look like the quantity $\alpha_t$ introduced in Section 3.3. Why is $\alpha_1$ chosen to be $0.5(1 + \sqrt{1 + 4 \alpha_0^2})$? I don’t understand lines 31 and 37 in Algorithm 1 and what are $\gamma$ and $\eta$ here? What is the role of $W$ in lines 34 and 35 – it is not being used at all. In summary, the enhanced NAG method/Algorithm 1 has been presented poorly and I don't understand the method at all. * What is the enhanced Adagrad algorithm? The two equations after line 154 which are supposed to explain the enhancement are not very clear. What is the difference between $G^{(t)}$ and $g^{(t)}$ in these equations? Is $G = \tilde{B}^{-1} g$ here? There is no algorithm summarizing it like Algorithm 1 for enhanced NAG. 
Also, suddenly for Adagrad, the authors have a negative sign in front of the gradient which corresponds to minimizing the function whereas for NAG and the previous discussions in the paper, maximizing the function has been considered. Please stick to either minimization or maximization for consistency. **3.** **Premise of enhanced Adagrad:** One way to interpret Adagrad is that it tries to maintain a diagonal approximation of the Hessian inverse and applies it to the gradients (a.k.a. preconditioning). So I'm not sure why applying a second approximation of the Hessian inverse on the *already preconditioned* gradients makes sense intuitively. Additionally, the authors themselves point out that enhanced Adagrad cannot be applied to general optimization problems (line 182) due to "learning-rate explosion". Then why introduce this method at all? **4.** **Setup and experiments**: The datasets on which experiments are performed are not standard benchmarking datasets in the ML community and appear to have very few features (looking at Table 2). There are no test set statistics provided. I'd be more convinced if the authors showed empirical results in a *standard logistic regression setup without any kind of encryption* (which frankly seems irrelevant to me in this paper) on benchmarking ML datasets. ---- *Some general comments*: The introduction on logistic regression can be compressed. It is standard to consider the *negative* log-likelihood objective and apply gradient *descent* to minimize it. A couple of small typos -- in line 96, I guess it should be "$\bar{h}_{k i}$ is the $k^\text{th}$ element in the $i^\text{th}$ row of the Hessian" and in line 216, it should be "public". Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: Please see the Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 1 poor Contribution: 1 poor Limitations: Not in too much detail but as I mentioned in Weaknesses, the authors point out that enhanced Adagrad cannot be applied to general optimization problems. No foreseeable negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
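For concreteness, the "quadratic gradient" the review asks about can be sketched from its description in the paper and the rebuttal below: $G = \bar B g$, where $\bar B$ is diagonal with $\bar B_{kk} = 1/(\epsilon + \sum_i |\bar h_{ki}|)$ and $\bar h$ is a fixed bound on the Hessian. The sketch below is one possible reading, not the authors' code; in particular, using the Böhning–Lindsay bound $-\frac{1}{4}X^\top X$ as the fixed Hessian is an assumption.

```python
import numpy as np

def quadratic_gradient(X, y, w, eps=1e-8):
    """One reading of the 'quadratic gradient' G = B_bar * g for logistic
    regression: B_bar is diagonal with entries 1/(eps + sum_i |h_bar_ki|),
    where h_bar are entries of a fixed bound on the Hessian (assumed here
    to be the Bohning--Lindsay bound -X^T X / 4)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))                  # sigmoid predictions
    g = X.T @ (y - p)                                 # gradient of the log-likelihood
    H_bar = -0.25 * (X.T @ X)                         # fixed Hessian bound (assumption)
    B_diag = 1.0 / (eps + np.abs(H_bar).sum(axis=1))  # diagonal of B_bar
    return B_diag * g                                 # elementwise preconditioned gradient
```

Since $\bar B$ is diagonal, applying it costs only an elementwise product, which is what makes the construction attractive under homomorphic encryption.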
Rebuttal 1: Rebuttal: $\textbf{Response}$ We would like to thank the reviewers for their input. Their comments have been thoroughly considered, and altering the manuscript in accordance with these comments will significantly improve the quality of our paper in the next submission. **C1: The enhanced NAG method described in line 150 is unclear to me -- what is $G$ here and is $\alpha_t$ the step-size here? Moreover, Algorithm 1 seems different from the discussions in Section 3.3. What are $\alpha_0$ and $\alpha_1$? They don't look like the quantity $\alpha_t$ introduced in Section 3.3. Why is $\alpha_1$ chosen to be $0.5(1 + \sqrt{1 + 4\alpha_0^2})$ ? I don’t understand lines 31 and 37 in Algorithm 1 and what are $\gamma$ and $\eta$ here? What is the role of $W$ in lines 34 and 35 – it is not being used at all. A1: We apologize for not clearly presenting the proposed method. $G$ here refers to the quadratic gradient, namely $\bar B g$, and $\alpha_t$ here is the step size (the learning rate). We misused the symbol between Algorithm 1 and the discussion in Section 3.3. NAG has many variants, and we simply adopted the one chosen by the baseline work. In this variant, $\alpha_0$ and $\alpha_1$ are its parameters, and $\alpha_1$ is set to $0.5(1 + \sqrt{1 + 4\alpha_0^2})$. $\gamma$ is also a parameter of this NAG variant. $\alpha_0$, $\alpha_1$ and $\gamma$ are commonly set in this way. $\eta$ is the step size (the learning rate) in Algorithm 1. NAG computes the expected weight of the current iteration using the previous weight. Hence, we need an intermediate variable to store the last-step weight, which is the role of $W$ in lines 34 and 35. It is used to update the current weight in line 34. **C2: What is the enhanced Adagrad algorithm? The two equations after line 154 which are supposed to explain the enhancement are not very clear. What is the difference between $G^{(t)}$ and $g^{(t)}$ in these equations? Is $G = \tilde B^{-1} g$ here?
A2: The enhanced Adagrad algorithm is similar to the raw Adagrad algorithm. $G^{(t)}$ refers to the quadratic gradient at step $t$, while $g^{(t)}$ is the raw gradient at the $t$-th step. Yes, $G$ here is $\tilde B^{-1} g$. **C3: One way to interpret Adagrad is that it tries to maintain a diagonal approximation of the Hessian inverse and applies it to the gradients (a.k.a. preconditioning). So I'm not sure why applying a second approximation of the Hessian inverse on the already preconditioned gradients makes sense intuitively. Additionally, the authors themselves point out that enhanced Adagrad cannot be applied to general optimization problems (line 182) due to "learning-rate explosion". Then why introduce this method at all? A3: We have limited expertise with Adagrad, but we did observe that the raw gradient and the quadratic gradient share similar performance. We want to apply quadratic gradients to gradient methods beyond NAG, and Adagrad is the starting point for Adagrad-like methods. The enhanced Adagrad method did outperform the raw one in some cases, but we didn't realize its fundamental limitations until we tried to apply it to general optimization problems. **C4: The datasets on which experiments are performed are not standard benchmarking datasets in the ML community and appear to have very few features (looking at Table 2). There are no test set statistics provided. A4: We have tested the proposed methods on larger datasets (MNIST) with a testing set. The result is also positive. However, we would like to leave it to future work, just as the baseline work did. **C5: It is standard to consider the negative log-likelihood objective and apply gradient descent to minimize it. A5: We apologize for deliberately not using the conventional method of minimizing the loss function. There are three reasons for this: (a) the direct goal of logistic regression is to maximize the log-likelihood objective.
Turning this problem into minimizing the negative log-likelihood function might introduce complexity; (b) it is exactly maximizing the log-likelihood objective that helps to develop the idea of the quadratic gradient, and we wish to emphasize this; and (c) we would like to highlight the versatility of the quadratic gradient, whose basic form is just the (simplified) fixed Hessian method. When applying (fixed Hessian) Newton's method, the update formula always subtracts something ($W = W - H^{-1}g$), even for maximization problems. The proposed gradient variant makes Newton's method convenient to use. Most attention has been paid to the gradient descent method, but the gradient ascent method can also be used with this gradient. --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks for the rebuttal. I have read it and would like to keep my score. My question about limited novelty has not been addressed though. --- Reply to Comment 1.1.1: Title: Response to reviewer cD3c Comment: Thank you for your feedback. We admit that the primary idea behind this work may initially appear to be a minor and incremental extension of the concept presented in reference [4]. However, this extension, despite its seemingly small change, was developed after a thorough examination of the Simplified Fixed Hessian (SFH) method [4]. The original SFH method, along with the Fixed Hessian Newton's method, introduces a compelling idea but still carries certain limitations. For instance, it is not applicable to datasets like MNIST and cannot be employed for numerical optimization problems. Whatever the verdict on novelty, we believe its practical implications are noteworthy. The proposed gradient variant has the potential to combine the strengths of first-order gradient methods and second-order Newton's methods, thereby enabling the creation of a variety of faster algorithms.
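Pulling the rebuttal's answers together, the enhanced NAG update can be sketched as below. This is a hedged reconstruction, not the authors' implementation: the $\alpha$/$\gamma$ recursion is the standard NAG variant the rebuttal alludes to, and the fixed Hessian bound $-\frac{1}{4}X^\top X$ and the step-size schedule $1 + 10/(t+1)$ are assumptions drawn from elsewhere in the discussion threads.

```python
import numpy as np

def enhanced_nag(X, y, iters=10):
    """Sketch of the enhanced NAG variant: ascend the log-likelihood using
    the quadratic gradient B_bar * g in place of the raw gradient, with
    step size 1 + 10/(t+1) (the assumed baseline schedule plus 1)."""
    d = X.shape[1]
    # Diagonal of B_bar: reciprocal of eps + row-wise absolute sums of a
    # fixed Hessian bound (assumed to be the Bohning--Lindsay -X^T X / 4).
    B = 1.0 / (1e-8 + np.abs(-0.25 * X.T @ X).sum(axis=1))
    w = np.zeros(d)      # current weights
    v = w.copy()         # lookahead (momentum) point
    alpha = 1.0
    for t in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ v))
        G = B * (X.T @ (y - p))                # quadratic gradient at the lookahead
        eta = 1.0 + 10.0 / (t + 1)             # enhanced step-size schedule
        w_next = v + eta * G                   # gradient *ascent* step
        alpha_next = 0.5 * (1 + np.sqrt(1 + 4 * alpha**2))
        gamma = (1 - alpha) / alpha_next
        v = (1 - gamma) * w_next + gamma * w   # momentum combination
        w, alpha = w_next, alpha_next
    return w
```

Note the ascent sign in the update, consistent with the authors' choice to maximize the log-likelihood rather than minimize its negative.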
Summary: This paper proposes a new approach to improve the gradient used by first-order optimization methods in logistic regression by utilizing a constant bound on the Hessian matrix. The authors demonstrate how to use their method under a fully Homomorphic-Encryption scenario. They test their method on many real-world datasets under non-private settings and Homomorphic-Encryption settings. Strengths: 1. The paper is written clearly. 2. The experiments all use real-world datasets, which have strong practical implications. Weaknesses: 1. The `quadratic gradient` method is not very new. As the authors have mentioned in Section 3.1 (Line 95), most parts of the method were proposed by Bonte and Vercauteren. I understand that the missing non-negative restriction is important for using the convergence results by Böhning and Lindsay (Line 92). However, using the absolute value is a rather straightforward solution. * A more interesting and critical question remaining to be answered is why this proposed `quadratic gradient` method is faster, as the authors claimed in the conclusion (Line 238). * Another problem with this method is that its usage is restricted to logistic regression: The authors provided a choice of $\bar{H}$ for logistic regression, while it may be very hard to generalize it to other problems, especially neural network training. 2. The experiments have not shown much advantage of using the `quadratic gradient` method. In Table 1 and Table 2, the accuracy and AUC of the proposed method are almost always lower than those of the compared baseline method [12]. I understand that the learning time is reduced, but it was not a main problem in [12] as shown in the tables, and we are not sure if there is a tradeoff between the learning time and the accuracy. * The description of the experiment details in Section 5 is very short. The authors suggest the readers refer to [12].
I think it would be better to have the details in the supplementary material, along with a discussion of the weaknesses of [12] in these experiment settings and why the proposed method solves those weaknesses. Minor weaknesses: 1. The maximum likelihood estimation (MLE) commonly refers to the estimate for $\beta$. The value of the loss function is the negative log-likelihood. That said, the y-axis of the figures could be corrected. Also, the objective function is usually the mean of the loss for each data point, not the sum. 2. Typo: Line 216, pulbic -> public. --- I have read the rebuttal, which answered my questions but did not fully address my concerns on the weaknesses. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. The cited paper [7] also used the `quadratic gradient`. Is it a follow-up paper to this paper? If yes, it is better to have a dedicated discussion of the relationship between this paper and [7], e.g. about the difference and the novelty. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Response}$ We would like to thank the reviewers for their input. Their comments have been thoroughly considered, and altering the manuscript in accordance with these comments will significantly improve the quality of our paper in the next submission. **C1 : The cited paper [7] also used the quadratic gradient. Is it a follow-up paper to this paper? If yes, it is better to have a dedicated discussion of the relationship between this paper and [7], e.g. about the difference and the novelty. A1 : Yes, the cited paper [7] is a follow-up paper to this work. The underlying theory of the gradient variant presented in this manuscript suggests its application to a variety of optimization algorithms, but such exploration is beyond the scope of this paper. The cited paper [7] also discusses other related issues. **C2 : A more interesting and critical question remaining to be answered is why this proposed quadratic gradient method is faster as the authors claimed in the conclusion (Line 238). A2 : This manuscript shows the proposed gradient variant can be used for the momentum-based gradient method (NAG) and the Adagrad method. The cited paper shows it can also be applied to the Adam method, which is a hybrid of the first two. It might be able to enhance other gradient methods as well. **C3 : Another problem with this method is that its usage is restricted to logistic regression: The authors provided a choice of $\bar{H}$ for logistic regression, while it may be very hard to generalize it to other problems, especially neural network training. A3 : The choice of constant $H$ for logistic regression has a similar performance to that derived directly from the varying Hessian itself. It would be very hard to find such a constant choice for other problems. However, we can build the proposed gradient variant $\bar B g$ directly from the Hessian.
Basically, the raw quadratic gradient method, without using any first-order gradient methods, is the (Simplified) Fixed Hessian method, except that the fixed Hessian in this case is not fixed. **C4: The experiments have not shown much advantage of using the quadratic gradient method. In Table 1 and Table 2, the accuracy and AUC of the proposed method are almost always lower than those of the compared baseline method [12]. I understand that the learning time is reduced, but it was not a main problem in [12] as shown in the tables, and we are not sure if there is a tradeoff between the learning time and the accuracy. A4: The compared baseline method [12] has better accuracy and AUC than the proposed method. This is because the baseline method performs 7 iterations while the proposed method only performs 3 iterations, which is where the reduction in learning time for the proposed method comes from. Perhaps we should adopt the bootstrapping operation in order to perform the same number of iterations as the baseline, but that would surely increase the learning time. --- Rebuttal Comment 1.1: Title: Thank you for the response! Comment: A1: Got it. A2: If I understand your statement correctly, this 'faster' claim comes from the intuition that NAG, Adagrad, or Adam should be faster than vanilla SGD. I think some theoretical analysis would help support this claim. A3: Yes. Therefore, I wanted to say that this paper, as an extension of the simplified fixed Hessian, is restricted to the logistic regression setting. A4: Yes. Usually, the baseline represents the best possible performance of the previous method, to be compared against the best of your method. You are free to choose the hyperparameters to demonstrate various aspects of advantage in your method. Based on my comments above, I will keep my score at 3.
--- Reply to Comment 1.1.1: Title: Response to reviewer LggN Comment: A2: This 'faster' claim (Line 238) refers to the empirical experimental results showing that the proposed enhanced gradient methods outperform their corresponding raw gradient counterparts. For instance, the enhanced NAG using the quadratic gradient outperforms the raw NAG. An exception is the enhanced Adagrad, which sometimes may not surpass the performance of the raw Adagrad method due to the inherent nature of Adagrad. A4: OK, I see. Thank you for your feedback.
Summary: This paper proposes the quadratic gradient for privacy-preserving logistic regression. Such gradient is used together with Nesterov’s accelerated gradient (NAG) and Adagrad on Homomorphic Encryption techniques. Strengths: This paper tackles the privacy-preserving (in the sense of encryption) logistic regression. The paper is clear with introduction and motivation. The algorithm is easy-to-follow and well-supported by experiments. Weaknesses: Overall, it is hard to understand the privacy concerns in this work as there are many privacy-preserving techniques available. Without an example or experiment of privacy attack, the motivation of using HE in the first place is not backed up. In addition, only [12] is compared in the empirical results. While the new method seems promising, the lack of other baselines makes it hard to understand the limitations and benefits of the new algorithm. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See weakness. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: NA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Response}$ We would like to thank the reviewers for their input. Their comments have been thoroughly considered, and altering the manuscript in accordance with these comments will significantly improve the quality of our paper in the next submission. **C1 : Overall, it is hard to understand the privacy concerns in this work as there are many privacy-preserving techniques available. Without an example or experiment of privacy attack, the motivation of using HE in the first place is not backed up. In addition, only [12] is compared in the empirical results. While the new method seems promising, the lack of other baselines makes it hard to understand the limitations and benefits of the new algorithm. ** A1 : There are other privacy-preserving techniques available, each with its own advantages. HE might not be the most efficient choice, as it is very time-consuming. It may indeed be insufficient to use [12] as the only baseline.
Summary: The paper proposes a new gradient method that can be efficiently used under homomorphic encryption. The proposed method replaces the gradient $g$ by an approximation of $H^{-1}g$, where $H$ is the Hessian. This approximation is done using a specific diagonal matrix, which speeds up convergence while remaining usable under homomorphic encryption. Strengths: 1. The paper proposes a new method for privacy-preserving logistic regression under FHE, that achieves reasonable results with less computation than existing methods. 2. The proposed method is quite versatile as it can be applied to a variety of optimization algorithms. Weaknesses: 1. Experimental results are not convincing. Contrary to the paper's claims, the proposed method often performs way worse than existing ones (especially on iDASH, Edinburgh and pcs). Datasets are also very small, and therefore do not account for how the method scales with the dimension. Given this is an empirical paper, it seems a bit insufficient. 2. The fixed-Hessian method seems to be closer to preconditioning (see e.g. [1]), where gradients are linearly transformed before being used, than to a proper second order method. 3. The proposed method does not seem to be too novel. In particular, the paper refers to [2] (which itself refers to a paper with the same title as this manuscript), which proposes a very similar method. [1] Preconditioned Stochastic Gradient Descent, Xi-Lin Li 2015. [2] Quadratic Gradient: Uniting gradient algorithm and newton method as one, by Chiang, 2022. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Are there settings where the difference in runtime between the proposed method and existing ones is more significant? What would cause this difference to get larger? 2. When normalizing the data, this has an impact on the Hessian matrix (and therefore on $\bar B$). In particular, one could imagine that after normalization, $X^T X$ is close to the identity.
Could you comment on the effect of normalization on $\bar B$? 3. Line 108, it is claimed that with a proper learning rate, the fixed Hessian Newton method converges: can you give evidence/proof to support this claim? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: Limitations are not discussed in the paper. In particular, experiments only consider a specific setting, with a choice of parameters that seems arbitrary (e.g., 3 vs. 7 iterations), and some claims are not supported with evidence. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Response}$ We would like to thank the reviewers for their input. Their comments have been thoroughly considered, and altering the manuscript in accordance with these comments will significantly improve the quality of our paper in the next submission. **C1 : Are there settings where the difference in runtime between the proposed method and existing ones is more significant? What would cause this difference to get larger? A1 : We didn't test various settings of the proposed method. We initially thought that the baseline work adopted the learning rate $1/(t+1)$, so we used $1 + 1/(t+1)$ as the learning rate for the proposed method. Later we found that they adopted the learning rate $10/(t+1)$, and that for the proposed method the setting $1 + 1/(t+1)$ slightly outperforms $1 + 10/(t+1)$. Indeed, this paper only shows empirical results. We suspect that better parameters exist for the enhanced NAG method, but exploring the optimal parameters of the proposed method is currently beyond our capability. So, our focus in this work is to find a faster algorithm than the baseline work. **C2 : When normalizing the data, this has an impact on the Hessian matrix (and therefore on $\bar B$). In particular, one could imagine that after normalization, $X^TX$ is close to the identity. Could you comment on the effect of normalization on $\bar B$? A2 : After normalizing the data, we expect the approximate diagonal Hessian matrix to have smaller diagonal elements and therefore $\bar B$ to have larger diagonal elements, which seems helpful to the training process. It is suggested in paper [2] that the eigenvalues of the Hessian matrix, which in this case are related to the diagonal elements of $\bar B$, can help to find a safe learning rate for the raw gradient descent method. As a result, we might be able to safely use a larger learning rate after normalizing the data.
However, it might be difficult to tell the overall effect of normalization on the training process. In the iteration formulas, the update term $H^{-1} g$ for the normal Newton's method and for the proposed method is not determined by $H$ alone, and we might not know the effect of normalization on the gradient $g$. Even if not normalizing the data has a detrimental effect on $\bar B$, it would likely have a similar effect on Newton's method or the raw gradient descent method. **C3 : Line 108, it is claimed that with a proper learning rate, the fixed Hessian Newton method converges: can you give evidence/proof to support this claim? A3 : This is a thorny problem that has puzzled us for a long time. That is why we assumed that the new learning rate for our gradient variant descends to 1 in a bounded number of steps, so that the later stage is just the raw Simplified Fixed Hessian Newton's method. It is too difficult for us to prove the convergence of the proposed methods, including the enhanced NAG method and the enhanced Adagrad method. Proving their convergence might help to find the optimal parameters, if, of course, they converge. The proof of convergence for the raw first-order gradient method might help. $\textbf{References}$ [2] Quadratic Gradient: Uniting gradient algorithm and newton method as one, by Chiang, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for your answer. I am not sure I understand what you mean about the step size schedule: what are the final results? The results of [2] seem highly influenced by the current draft, which is a problem. Claims that are not supported by evidence should also be removed from the draft. [2] Quadratic Gradient: Uniting gradient algorithm and newton method as one, by Chiang, 2022. --- Reply to Comment 1.1.1: Title: Response to reviewer KaEy Comment: Glad to be of help, and I apologize for any misunderstandings.
We intended to compare the performance of the raw NAG method with the enhanced version, with as little modification as possible at the algorithmic level. Therefore, after substituting the raw gradient with the quadratic gradient, we simply added 1 to the step size schedule used by the baseline work for the enhanced NAG method. We conducted a comparison using two sets of configurations: $1 + 1/(t+1)$ for the enhanced NAG method versus $1/(t+1)$ for the raw NAG method, and $1 + 10/(t+1)$ for the enhanced NAG method versus $10/(t+1)$ for the raw NAG method. The step size $1 + 1/(t+1)$ for the enhanced NAG method showed slightly better performance than $1 + 10/(t+1)$ for the same method, and both outperformed their corresponding raw NAG counterparts. Ultimately, we opted for $1 + 10/(t+1)$ since the baseline utilized $10/(t+1)$. We are uncertain whether other step size schedules offer better performance. Untested claims will be removed from this manuscript. Thank you for your valuable feedback.
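The schedule pairs compared in this thread reduce to simple functions of the iteration counter $t$ (the functional forms are taken directly from the discussion; the constant $c \in \{1, 10\}$):

```python
def raw_step(t, c=10.0):
    """Baseline NAG step size: c/(t+1), with c = 1 or 10 as in the thread."""
    return c / (t + 1)

def enhanced_step(t, c=10.0):
    """Enhanced NAG step size: 1 added to the baseline schedule."""
    return 1.0 + c / (t + 1)
```

Note that `enhanced_step` decays toward 1 rather than 0, matching the authors' assumption elsewhere that the learning rate for the gradient variant descends to 1.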
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Path following algorithms for $\ell_2$-regularized $M$-estimation with approximation guarantee
Accept (poster)
Summary: A novel grid point selection scheme and stopping criterion for general path-following algorithms are proposed in this paper. However, the presentation of this paper is poor, and the contributions are not clear. Strengths: A novel grid point selection scheme and stopping criterion for general path-following algorithms are proposed in this paper. This work provides some interesting theoretical results. For example, it theoretically shows that the approximation error of the proposed solution path can be upper bounded by the sum of the interpolation and optimization errors, and establishes a global approximation-error bound for the solution path. Weaknesses: This work provides some interesting theoretical results; however, I am dissatisfied with the writing and organization of this paper. 1. The writing of this paper is poor. In order to enhance the readability, I suggest the author discuss the definition of "regularized M-minimization problems" in the introduction, perhaps with some formulation. 2. The contributions of this paper are not clear. This work shows some interesting theoretical results; I suggest the author clearly clarify the contributions in the Introduction. 3. The experimental results are all put in the Appendix in the current manuscript. I am unsure why the authors did not put the results in the main text, especially considering that there are many empty spaces on the ninth page of the main text. 4. The format of the appendix may not comply with the requirements of NeurIPS. I recommend that the authors make corrections accordingly. 5. To further improve the readability of this paper, I suggest the author add a "Related Work" section to introduce works related to this one. The organization of this work is also poor; I suggest the author carefully revise the whole paper. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. The writing of this paper needs improvement.
It is suggested to enhance readability by discussing the definition of "regularized M-minimization problems" in the introduction, possibly with some formulation. 2. Lack of clarity regarding the contributions of this paper. Although this work presents some interesting theoretical results, it is recommended that the author explicitly clarify the contributions in the Introduction. 3. Placement of experimental results in the Appendix. Uncertain why the authors chose not to include the results in the main text, especially considering the available space on the ninth page. 4. Non-compliance of the appendix format with NeurIPS requirements. It is advised that the authors make the necessary corrections to adhere to the guidelines. 5. Readability and organization concerns. To enhance the paper's readability, it is suggested to add a "Related Work" section to introduce works relevant to this study. Furthermore, the entire paper's structure requires careful modification. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 1 poor Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments and suggestions. We summarize our responses below. **Weaknesses and Questions** 1. Thanks for the suggestion. Following your suggestion, we introduced the definition of regularized $M$-estimation earlier in the Introduction section in our revision. 2. Thanks for the suggestion. We have made a few changes to the Introduction section to better highlight our contributions. In particular, the key contributions are summarized in the third and fourth paragraphs in Section 1. 3. Thanks for the suggestion. We now include the *runtime versus global approximation error* plots for the simulation studies (Figure S1 and Figure S3 of the current supplementary file) in the main file. We may move additional plots in the Appendix to the main text if the paper is accepted. 4. Thanks for the suggestion. We have now changed the format of the supplementary material by using the NeurIPS template. 5. Thanks for the suggestion. Most of the discussions of related work can be found in the last few paragraphs of the Introduction section starting from line 62 of the current main text file. The discussions involve summarizing relevant work as well as highlighting their differences and new contributions of our work compared to existing works. As such, we feel that the contributions of the article should be quite clear to the reader, and a separate Related Work subsection may not be necessary. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I tend to maintain my scores.
Summary: This paper proposes a new path-following algorithm for $\ell_2$-regularized M-estimation with an approximation guarantee. The algorithm includes a grid point selection scheme and an adaptive stopping criterion to optimize the trade-off between model fit and complexity in machine learning algorithms. The paper also provides a comparison of the proposed scheme to a standard path-following scheme through a simulation study using ridge regression and $\ell_2$-regularized logistic regression. Strengths: 1. This paper proposes a new path-following algorithm for $\ell_2$-regularized M-estimation with an approximation guarantee. 2. This work is generally technically sound, with rigorous statistical analysis. 3. The article is well written and the mathematical derivations are easy to follow. Weaknesses: 1: The main body does not include the experimental results. 2: The experiments were solely conducted on simulation datasets, and evaluating the method on public datasets could enhance the results. 3: The experiments do not include other baselines among solution path algorithms. 4: Can the proposed method be extended to other regularized M-estimation techniques, such as Lasso or Group Lasso? 5: It is important to discuss relevant existing work on solution path algorithms, including notable studies such as [1], as well as approximate solution path algorithms, like [2]. [1] Ryan Joseph Tibshirani, Jonathan E Taylor, Emmanuel Jean Candes, and Trevor Hastie, "The solution path of the generalized lasso," The Annals of Statistics, 2011. [2] Runxue Bao, Bin Gu, and Heng Huang, "Efficient Approximate Solution Path Algorithm for Ordered Weighted $L_1$-Norm with Accuracy Guarantee," ICDM 2019. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: see weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments and suggestions. We summarize our responses below: **Weaknesses and Questions** 1. Thanks for the suggestion. We have moved all the *runtime versus global approximation error* plots for the simulation studies (Figure S1 and Figure S3 of the current supplementary file) to the main text. We may move additional plots in the Appendix to the main text if the paper is accepted. 2. Thanks for your suggestions. We have added a real-world data example in the revision (please see plots in Section 1 of our author rebuttal pdf). 3. Thanks for the suggestion. Our current baseline algorithm selects equally spaced grid points (on a log scale), which is widely adopted by many popular packages such as glmnet. 4. Unfortunately, it seems that our proposed method can only be easily extended to differentiable regularizers. Extensions to nonsmooth regularizers do not seem straightforward, and will be left for future work. 5. Thanks for pointing out additional related works. We have included them and some discussions in the revision. --- Rebuttal Comment 1.1: Comment: Thank you very much for the response. --- Reply to Comment 1.1.1: Comment: Thanks for the reply. Kindly let us know if your concerns have been fully resolved. We will also be very happy to answer any remaining questions you may have. --- Reply to Comment 1.1.2: Title: Does our response address your questions? Comment: Thanks again for the efforts and time you put into reading and commenting on our work. We wonder whether our response has addressed your concerns and questions. We would appreciate the opportunity to engage further if needed. We also kindly ask you to consider stronger support for the paper if your concerns have been addressed. Thanks!
Summary: This paper considers the ($\ell_2$-regularized) $M$-estimation problem of computing solutions to the regularized objective $(e^t - 1) L(\theta) + (1/2) \| \theta \|_2^2$ for all $t$, denoted $\theta(t)$. The paper proposes a method for approximating $\theta(t)$ for carefully chosen values of $t$, such that interpolating between these points approximately solves the problem. When $L$ is convex and differentiable, the paper proves that (under certain bounds) $1/\sqrt{\epsilon}$ values of $t$ suffice to obtain a curve of $\epsilon$-approximate solutions to the regularized objective. The paper discusses how this improves upon prior bounds, how the scheme has desirable properties for implementation, and provides experiments implementing the algorithm. Further, the paper discusses extensions of their work to non-convex optimization. Strengths: The $M$-estimation problem that the paper considers is natural and feels fundamental. This paper provides a seemingly useful set of theoretical (algorithms and bounds) and empirical (experiments) results on this fundamental problem. Further, the paper argues how they improve upon prior work on this problem. Additionally, the paper is fairly well-written, suggests an interesting direction for future work, and could invite further study of the problem. Weaknesses: The problem considered and the approach to it feel quite natural. This is not necessarily an issue, except that it isn't clear from the writing whether the results go beyond what one might obtain by following the first approaches to the problem one might think of. For example, if one simply computes the derivative of the regularized objective's minimizer with respect to $t$ and then bounds the terms, does that more or less directly suggest the approach of the paper? More broadly, I think the paper could benefit from discussing the proof strategy and any obstacles that had to be overcome in achieving the result. 
Additionally, I thought it would be helpful if the paper were a little clearer about what assumptions need to be made to achieve, e.g., the claimed $O(1/\sqrt{\epsilon})$ rate. What other quantities does this rate depend on exactly? As discussed briefly in the following "questions" section, I think it would be helpful to discuss all the quantities that the proposed method depends on. Finally, the $M$-estimation problem seems like one that could be studied, in part under different names, in a number of different areas. Ultimately, it is in some sense asking to approximately follow a certain induced curve, which seems like something that could have arisen in a number of contexts. Consequently, more literature review or discussion to raise confidence in the novelty of the proposed method would be helpful. Technical Quality: 3 good Clarity: 3 good Questions for Authors: My main questions are those raised in the "Weaknesses" category. Additionally, below are some more detailed suggestions, questions, and comments: * Line 16: "Corroborate with our theoretical" --> "Corroborate our" * Lines 62-99: It would be helpful to know what exactly is hidden in the $O(\cdot)$ notation. * Line 112: why write the $n$ in $L_n$? Does it arise anywhere else in the paper? * Theorem 1: some more discussion of the proof strategy would be nice. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper doesn't discuss weaknesses and limitations, but the paper is primarily about the theoretical and empirical analysis of an algorithm for solving a sequence of regularized optimization problems. Consequently, the societal impact is unclear. 
Further, the weaknesses and questions discussed reflect limitations for which further discussion might be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
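As a hedged illustration of the "first approach" this review alludes to (standard implicit differentiation under the notation of the summary; a sketch, not the paper's actual derivation):

```latex
% Optimality condition defining the path \theta(t) for the objective
% (e^t - 1) L(\theta) + \tfrac{1}{2}\|\theta\|_2^2:
(e^t - 1)\,\nabla L(\theta(t)) + \theta(t) = 0 .
% Differentiating implicitly in t gives the path velocity
\theta'(t) = -\bigl[(e^t - 1)\,\nabla^2 L(\theta(t)) + I\bigr]^{-1} e^t\,\nabla L(\theta(t)),
% so a bound on \|\theta'(t)\|_2 controls how fast the path moves and hence
% how finely t must be gridded for a given interpolation error.
```

Whether bounding these terms directly recovers the paper's grid point selection scheme is exactly the question the review raises.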
Rebuttal 1: Rebuttal: Thanks for your valuable comments and suggestions. We summarize our responses below: **Weaknesses** * Thanks for the suggestion, we have added the proof strategy for Theorem 1. Basically, the proof of Theorem 1 starts with relating the local approximation error to the suboptimality of $\theta_k$ and $\theta_{k+1}$ at some $t \in [t_k, t_{k+1}]$. The suboptimality can then be further bounded by the squared norm of the gradient, leveraging the fact that the objective function is $e^{-t}$-strongly convex. Finally, a triangle inequality is used to bound the norm of the gradient by quantities that depend on $\|g_k\|_2$ and $\|\theta_k\|_2$. * The hidden quantities behind our claimed number of iterations $\mathcal O(1/\sqrt{\epsilon})$ include parameters $c_2$, $\alpha_{\max}$, $t_{\max}$, as well as the problem-dependent constants $\|\theta(t_{\max})\|_2$ and $\|\nabla L_n(0)\|_2$. * Basically, a larger $c_2$ or $\alpha_{\max}$ will lead to fewer iterations, while a larger $t_{\max}$ will require more iterations. The problem-dependent constants $A = \|\theta(t_{\max})\|_2$ and $B = \|\nabla L_n(0)\|_2$ impact the number of iterations jointly through the quantity $\nu_1 = B/(2(A+B))$. The number of iterations needed will increase as $\nu_1$ increases. * Thanks for the suggestion, we have included some additional related works and additional discussions in the Introduction section. **Questions** * Fixed. * See the response in the **Weaknesses** section above. * Since the loss function is often formulated based on the training examples, we use the notation $L_n$ to reflect that $n$ examples are considered when we define the loss function. For example, in Section 4.2 of our current main text file, the empirical loss function for logistic regression is $L_n(\theta) = n^{-1} \sum_{i=1}^n \log(1+\exp(-Y_i X_i^\top\theta))$. * Thanks for the suggestion, we have added the proof strategy for Theorem 1 in the revision. 
The basic idea can be found in the **Weaknesses** section above. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for your response. I appreciate your answers and my overall evaluation remains as is. --- Reply to Comment 1.1.1: Title: Thanks for the reply Comment: Thanks for the reply. If you have any further questions or concerns regarding our work, please don't hesitate to let us know.
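The strong-convexity step in this proof sketch can be illustrated by the standard Polyak-Lojasiewicz-type bound (a generic sketch under the stated $e^{-t}$-strong convexity, not the paper's exact statement): for a $\mu$-strongly convex, differentiable $F$ with minimizer $\theta^\star$,

```latex
F(\theta) - F(\theta^\star) \;\le\; \frac{1}{2\mu}\,\|\nabla F(\theta)\|_2^2,
\qquad \text{with } \mu = e^{-t} \text{ in the setting above,}
```

so controlling the gradient norm at the grid points controls the suboptimality, and hence the local approximation error, along each interval $[t_k, t_{k+1}]$.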
Summary: Submission 9422 proposes a novel grid point selection scheme and stopping criterion for general path-following algorithms, without trying to compute/derive the whole solution spectrum. The analysis of the authors indicates that, under certain assumptions, their proposed approximate path-tracking algorithm (with linear interpolation) can approximate the exact solution path, and they provide theoretical bounds. The theoretical conclusions have been verified to some extent through some simple numerical simulations. Strengths: - Solution path (hyperparameter optimization) is an interesting and important topic in both the theory and application communities of ML. - The paper is well written and clearly states its contributions, notation, and results. Most of their derivations are easy to follow (for me). Weaknesses: - The numerical simulations of this work are too weak and might not be convincing. - the scale of the experiments is too small. The authors are suggested to consider more objectives, since their analysis can be readily extended to general differentiable functions - lack of experiments on real-world benchmark datasets, e.g., UCI, libsvm - the code (currently) has not been submitted and lacks *sufficient* details for reproducibility - Lack of some existing research work on the situation where the exact paths are not piecewise linear, e.g., - [1] Pierre Garrigues, and Laurent Ghaoui. *An homotopy algorithm for the Lasso with online observations.* Advances in neural information processing systems 21 (2008). - [2] Xingyu Qu, Diyang Li, Xiaohan Zhao, and Bin Gu. *GAGA: Deciphering Age-path of Generalized Self-paced Regularizer.* Advances in Neural Information Processing Systems 35 (2022). - ... I also suggest that the authors further highlight the intrinsic differences from recent work and the technical challenges faced by their settings. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Why not use the vanilla $C(t):=t$ to replace the current $C(t):=e^t-1$ in your analysis? - I suggest the authors plot the *path visualization* of $\tilde{\theta}(t)$ as well as $\theta(t)$, to help readers who may not be familiar with the solution path area. Also, we can then observe how the computed path (via Algo. 1) approximates the ground truth. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: - some technical limitations of the current method were mentioned in some parts of the paper, but there was no detailed/thorough discussion. - there is no need to discuss potential negative social impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments and suggestions. We summarize our response as follows. **Weaknesses** * Thanks for your suggestions on numerical simulations. Following your suggestion, we have added a real-world data example in our numerical studies section (please see plots in Section 1 of our author rebuttal pdf). We will make all the code publicly available if the article gets accepted. * Thanks for pointing out additional related works. We have included those works and added some additional discussions in our revision. **Questions** * We agree that our current choice of $C(t) = e^t - 1$ is not essential and can be replaced by $C(t) = t$. The primary rationale behind choosing $C(t) = e^t - 1$ is that grid points are typically equally spaced on the log scale. Moreover, it makes some of the algebra much simpler and cleaner in our theoretical derivations. * Thanks for the suggestion. We have included the visualization of $\tilde{\theta}(t)$ and $\theta(t)$ for simple examples in the supplementary material in the revision (please see plots in Section 2 of our author rebuttal pdf). We will move this part to the main text if the article gets accepted. --- Rebuttal Comment 1.1: Title: Thank you for your explanation and additional numerical simulations... Comment: I also have experience running solution path algorithms on datasets such as *a9a*. I hope the authors could provide more real-world datasets in the final version, which may make your results more convincing. In addition, in Figure S2, the ground truth and the method proposed in this work almost overlap, which may not be very convincing (because if I simply ran the same algorithm twice, it would also produce the same figures). It is best to change some settings/conditions to clearly demonstrate how the solution path algorithm approaches the ground truth. 
Given the overall reviews and rebuttal, I think this work could be a good contribution, and will upgrade my score 5 -> 6 --- Reply to Comment 1.1.1: Title: Thank you for upgrading your score. Comment: Thanks for the additional suggestions! We will provide results on more real-world datasets in our final version, and present new plots on $\theta(t)$ versus $\tilde{\theta}(t)$ to more clearly demonstrate how our solution path algo approaches the ground truth. If you have any further questions or concerns regarding our work, please don't hesitate to let us know.
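A small hedged sketch of the log-scale rationale discussed in this exchange (the grid endpoints and number of points here are illustrative toy values, not the paper's settings):

```python
import numpy as np

# Equally spaced grid points t_k, with regularization weight C(t) = e^t - 1.
t_grid = np.linspace(0.0, 5.0, 6)
weights = np.expm1(t_grid)  # C(t) = e^t - 1, computed stably near t = 0

# For t not too close to 0, C(t) ~ e^t, so the weights are approximately
# equally spaced on the log scale -- matching the convention used by
# packages such as glmnet for their regularization paths.
log_gaps = np.diff(np.log(weights[1:]))  # skip t = 0, where C(t) = 0
```

Here `log_gaps` approaches the constant spacing of `t_grid` as `t` grows, which is why equally spaced `t` with `C(t) = e^t - 1` behaves like a log-scale grid of regularization weights.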
Rebuttal 1: Rebuttal: Thanks for all of your valuable comments and suggestions. Following your suggestions, we have made some changes to our submission, which we feel greatly improved the presentation. Main changes include: * We moved the formal definition of $M$-estimation earlier in the Introduction section, and made a few changes to the Introduction section to better highlight our contributions. * We included suggested related literature and added additional discussions of the existing works. * We moved the *runtime versus global approximation error* plots of our numerical studies into the main file. * We added plots to visualize our approximated solution path as well as the ground truth solution in the Appendix (also see plots in Section 2 of our author rebuttal pdf). We will include them in the main file if our paper gets accepted. * We added a real data analysis example in Section 4 (also see plots in Section 1 of our author rebuttal pdf). We will place the entire real data analysis section in the allowed additional content page if our paper gets accepted. Pdf: /pdf/aaf59e5c3721f17440fd2731c31d362ab22b73de.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Taylor TD-learning
Accept (poster)
Summary: This paper proposes an algorithm for improving model-based actor-critic methods in continuous state and action spaces. When using Dyna-style updates, the algorithm can compute an expected update over a small noise distribution using a linearization to reduce the variance of the performed update. This method is shown to be theoretically-sound in a simplified setting and a variety of experiments demonstrate the effectiveness of the proposed approach on standard continuous-control benchmark tasks. Strengths: - The main idea is interesting, computing an explicit expected update over a small region by using a linearization. - The experiments are done well with meaningful baseline algorithms. In particular, the supporting experiments other than standard learning curves are a welcome addition: The variance analysis comparing sampled updates to the proposed one and the ablation study on cosine similarity vs. the inner product (in the appendix). - The theoretical result in the linear setting is nice even if it is in a simplified setting and the proofs are correct as far as I can tell. Weaknesses: There are some aspects of the presentation that I think could be improved. These are more minor points overall though. - For example, I would avoid using the term "Monte-Carlo" estimates when discussing sampling of TD updates since it can lead to some confusion due to "Monte-Carlo" often being used to refer to MC estimates of the return in contrast to the bootstrap estimates that TD uses. I would consider using the phrase "sampled TD update" instead of "MC TD update". E.g. line 31, line 42, ... - I would also suggest including a line about continuous state and actions in the introduction to clarify the problem setting since the proposed algorithm would mainly be applicable there. - Section 3.1 describing the updates was a bit difficult to follow at first. 
Some of the notation such as using $\Delta_{Exp}$ was slightly confusing since it suggests that it is the overall expected update even though it's only the expected update over $\xi_a$. An alternative could be $\Delta_{\mu_a}$. The expectation $E_{s,a}[\Delta \theta(s,a)]$ is slightly unclear as to what distribution is used for $(s,a)$ here. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Why does aligning the gradients w.r.t. the action of the Q-value and the TD error make sense? Is there an interpretation for this? It seems like the inner product between those gradients could be a meaningful quantity more broadly, e.g., as an evaluation metric for the quality of a critic. - What is $\eta$ in equation (11)? It wasn't described in the text or the appendix. - This method could potentially also be used in cases when the simulator is differentiable to start with. Have you experimented with this at all? - Minor point: I would consider modifying the title of the paper to a more descriptive name which could reference model-based RL or Dyna-style updates. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments! > For example, I would avoid using the term "Monte-Carlo" estimates when discussing sampling of TD updates since it can lead to some confusion due to "Monte-Carlo" often being used to refer to MC estimates of the return in contrast to the bootstrap estimates that TD uses. I would consider using the phrase "sampled TD update" instead of "MC TD update". E.g. line 31, line 42, ... Thanks for spotting this clash with pre-existing RL terminology. We will switch to your suggested terminology. > I would also suggest including a line about continuous state and actions in the introduction to clarify the problem setting since the proposed algorithm would mainly be applicable there. Thank you for the suggestion. This is included in the original abstract, and we will include a note about it in the Introduction too. > Section 3.1 describing the updates was a bit difficult to follow at first. Some of the notation such as using $\Delta_\text{Exp}$ was slightly confusing since it suggests that it is the overall expected update even though it's only the expected update over $\Xi_\text{a}$. An alternative could be $\Delta_{\mu_a}$. We'll consider these notational changes. We hoped that the $(s, \mu_\text{a})$ function arguments in $\Delta_\text{Exp}(s, \mu_\text{a})$ would emphasise that this is taking an expectation only over the noise in the action, for fixed state, $s$ and mean action, $\mu_\text{a}$. The issue with $\Delta_{\mu_a}$ is that we contrast the exact expectation, $\Delta_\text{Exp}$, with the Taylor-series approximation, $\Delta_\text{Ta}$. But they're both exact/approximate expected updates over $\Xi_\text{a}$ with fixed $\mu_a$, so $\Delta_{\mu_a}$ wouldn't make the required distinction. We could change $\Delta_\text{Exp} \rightarrow \Delta_\text{Exact}$? Alternatively, we could further clarify the definition of $\Delta_\text{Exp}$ in Eq. 7? Let us know if you have a preferred option. 
> The expectation $E_{s,a}[\Delta \theta(s,a)]$ is slightly unclear as to what distribution is used for $(s,a)$ here. We will add a sentence clarifying the distribution here. To clarify, this distribution refers to the overall visited state distribution, $s \sim d^\pi$, and the initial policy distribution, $a \sim \pi^\text{init}$. > Why does aligning the gradients w.r.t the action of the Q-value and the TD error make sense? Is there an interpretation for this? It seems like the inner product between those gradients could be a meaningful quantity more broadly e.g. as an evaluation metric for the quality of a critic. First, it is important to emphasise that our updates should just be understood as the expectation of standard TD updates (Eq. 7) over a distribution over initial actions, $a$, centred at $\mu_\text{a}$. In that context, it is possible to understand how the "alignment" emerges. Specifically, let's consider a random action in the direction of $\nabla_a \delta_\theta$, e.g. $a = \mu_\text{a} + \epsilon \nabla_a \delta_\theta$. That action has a bigger, positive prediction error (as we've moved along the gradient of $\delta_\theta$). And under standard TD updates, a bigger positive prediction error should lead to a bigger positive update to $Q_\theta$. This interaction of bigger positive prediction errors leading to bigger positive updates to $Q_\theta$ underlies the "alignment" intuition. It's a super-interesting idea to use this quantity as an evaluation metric! It certainly could work! But we worry that doing a sufficiently thorough investigation of the properties of this metric is a significant piece of work requiring its own paper. > What is $\eta$ in equation (11)? It wasn't described in the text or the appendix. Good catch! We have deleted the $\eta$'s. (They were intended to represent a learning rate in a previous version of the derivation, but we subsequently removed it to simplify the derivations). 
> This method could potentially also be used in cases when the simulator is differentiable to start with. Have you experimented with this at all? Agreed, Taylor TD has broad applications to differentiable simulator settings. For instance, [1] highlights the importance of learning accurate value functions with a differentiable simulator in order to perform policy updates beyond a short horizon of returns. Taylor TD would greatly aid the process of learning accurate value functions with little-to-no added computational cost on top of the differentiable simulator. However, in the current manuscript, we decided to focus on the usual (and potentially harder) RL setting where the model must be learned from the transitions (e.g. as in Janner et al., 2019 - MBPO). We'd be happy to run any additional experiments if the reviewer can suggest a specific setting. [1] Xu, J., Makoviychuk, V., Narang, Y., Ramos, F., Matusik, W., Garg, A., \& Macklin, M. (2022). Accelerated policy learning with parallel differentiable simulation. arXiv preprint arXiv:2204.07137. > Minor point: I would consider modifying the title of the paper to a more descriptive name which could reference model-based RL or Dyna-style updates. Something like "Taylor TD-learning in model-based reinforcement learning"? We'd be happy to make that change. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the response and consideration. About the presentation: > Alternatively, we could further clarify the definition of in Eq. 7? Let us know if you have a preferred option. Perhaps clarifying the definition would be sufficient and re-iterating that $\mu_a$ is fixed and the only randomness is over $\Xi_a$. I agree there's some difficulty in choosing the right notation here and leave it to your discretion. >Something like "Taylor TD-learning in model-based reinforcement learning"? We'd be happy to make that change. That sounds good. I appreciate the explanation of the alignment of TD-error and $Q$ value gradients. 
It makes sense to me although it doesn't necessarily explain why this direction would be useful for policy optimization. There may be some deeper reason here to be found. I am satisfied with the response, and the references to risk-sensitive RL by Reviewer 9c2H would be a nice addition too. I would also encourage some investigation regarding the alignment of TD-error and $Q$ values to be done, even if it is preliminary. Overall, I will still recommend acceptance.
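The expected-update idea discussed in this thread can be illustrated with a hedged 1-D toy (the functions `phi`, `delta` and the constants below are hypothetical stand-ins, not the paper's networks; the first-order expansion of the product is a generic sketch of the Taylor TD construction):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1   # std of the action noise xi_a
mu_a = 0.3    # mean action

# Toy smooth stand-ins: grad_theta Q(a) -> phi(a), TD error -> delta(a).
phi = lambda a: np.sin(a)
dphi = lambda a: np.cos(a)
delta = lambda a: 1.0 + 0.5 * a
ddelta = lambda a: 0.5

def sampled_update():
    """One 'sampled TD update': draw a ~ N(mu_a, sigma^2), return delta * grad_theta Q."""
    a = mu_a + sigma * rng.standard_normal()
    return delta(a) * phi(a)

# First-order Taylor (analytic) expected update around mu_a:
# E[delta(a) phi(a)] ~ delta(mu_a) phi(mu_a) + sigma^2 * ddelta(mu_a) * dphi(mu_a);
# the sigma^2 inner-product term is the "gradient alignment" contribution.
taylor_update = delta(mu_a) * phi(mu_a) + sigma**2 * ddelta(mu_a) * dphi(mu_a)

samples = np.array([sampled_update() for _ in range(20000)])
# taylor_update tracks the sampled mean while having zero variance per call.
```

In this toy the analytic update matches the empirical mean of the sampled updates closely, while each individual sampled update has substantial variance, mirroring the variance-reduction argument above.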
Summary: This paper, 1. Proposes Taylor TD, a model-based RL algorithm that uses a Taylor series expansion to analytically estimate the expected TD update over a distribution of nearby state-action pairs. This reduces variance compared to standard MC TD updates. 2. Provides theoretical analysis showing the variance of Taylor TD updates is lower than that of standard TD updates. Also proves stability of Taylor TD with linear function approximation. 3. Empirically demonstrates lower variance of Taylor TD on several RL tasks. 4. Combines Taylor TD with TD3 in an algorithm called TaTD3. Shows strong performance of TaTD3 relative to model-free and model-based baselines on MuJoCo benchmarks. Strengths: 1. The core idea of using Taylor expansions to estimate expected TD updates is novel and well-motivated from a theoretical perspective. This analytic approach to reducing variance is interesting. 2. The method preserves the convergence guarantees of TD-learning under linear function approximation, as shown formally. This is an important theoretical contribution. 3. The variance reduction analysis provides evidence that Taylor TD reduces variance versus standard TD methods. 4. The algorithm is straightforward to implement on top of existing model-based RL frameworks. Weaknesses: 1. In Figure 2, Taylor TD3 seems to provide only very small performance improvements over the baselines in most environments (4 out of 6). The gains are noticeable in only 2 environments. Given that Taylor TD3 is much more complex, is the minimal improvement in the majority of environments concerning? The results seem to imply the benefits may be limited to certain environments. Some discussion of why the gains are so marginal in certain tasks would be useful. 2. Some derivations are not clear. It is not clear what loss function the authors are using to learn the model. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. 
In the Taylor TD update derivations (Equations 12, 17 to 18), the Hessian terms present after the Taylor expansions disappear in the final update equation. Can you clarify the assumptions or steps that lead to the Hessian terms dropping out? As these terms arise directly from the Taylor approximations but then vanish, an explicit explanation should be provided about how and why they drop out of the final update form. 2. It is also not clear how the loss function (equation 18) is derived. Could you please provide the derivation? Algorithm 1 includes an update equation for the model parameters $w$ using some loss $L$, but does not define what this loss function $L$ is (unless I am mistaken). Please specify what objective or loss function is used to optimize the model parameters $w$ based on the observed environment transitions. 3. The current evaluations of Taylor TD are limited to a small set of MuJoCo continuous control tasks. Did the authors consider testing on additional environments that evaluate aspects like partial observability, sparse rewards, and higher task dimensionality? A more diverse test suite could provide greater insight into the strengths and weaknesses of Taylor TD across different conditions. What other test beds do you think could be useful for thorough analysis? What are the challenges that you foresee? 4. The hyperparameters indicate high values for $\lambda_a$ and very low values for $\lambda_s$. Why is that the case? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors mentioned the limitations in the last section and I do not foresee any negative societal impacts. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments! > In Figure 2, Taylor TD3 seems to provide only very small performance improvements While we agree with the reviewer's assessment, it is important to note that no single baseline performs competitively with TaTD3 across all environments. In particular, MBPO is competitive with TaTD3 on HalfCheetah-v2, Walker2d-v2 and Ant-v2, but learns considerably more slowly on the most complex environment, Humanoid-v2. In contrast, MAGE performs well on HalfCheetah-v2 and the most complex environment, Humanoid-v2, but fails in the other environments: Walker2d-v2, Ant-v2 and Hopper-v2. TaTD3, by comparison, performs consistently (close to) the best in all these environments. It should also be noted that Taylor TD3 is not necessarily more complex than the two strongest baseline algorithms, MBPO and MAGE. For instance, although MBPO does not require any extra gradient term, it uses additional compute to iterate through the model predictions over multiple steps for each update (up to 25x). > Some derivations are not clear. It is not clear what loss function the authors are using to learn the model. We thank the reviewer for pointing this out. We neglected to include a section in the appendix where we described how the model was learned. To clarify, the reward and transition model were learned by maximum likelihood based on the observed transitions, as in most model-based RL approaches (e.g. Janner et al., 2019 - MBPO). A section describing this process will be added in the appendix of the camera-ready version of the paper. > In the Taylor TD update derivations (Equations 12, 17 to 18), the Hessian terms present after the Taylor expansions disappear We have a choice about the order of the Taylor expansion. For instance, we could choose to do a first-order expansion, which would include gradient but not Hessian terms, or we could choose to do a second-order Taylor expansion, which would include Hessian terms. 
We choose the first-order approach. We did play with the second-order terms, but we found they added considerable additional complexity for little obvious benefit. Additionally, we were able to get theoretical guarantees on the first-order approach (Appendix B), and it wasn't clear we'd be able to get the same guarantees on the more complex second-order methods. By the Hessian, are you referring to $\Sigma_\text{a}$, which is present e.g. in Eq 11, but not in Eqs 12, 17 to 18? $\Sigma_\text{a}$ isn't the Hessian; it is the covariance of the Gaussian distribution over initial actions, $a$, in the Bellman update (see Eqs 4-6). We get to choose this covariance, and in going from 11 to 12, we choose $\Sigma_\text{a} = \lambda_\text{a} I$. We will re-emphasise this point in the camera-ready text. > It is also not clear how the loss function (equation 18) is derived. The key derivation of the critic updates is in Sec. 3.1 (Eq. 12) for the actions and Sec. 3.2 (Eq. 17) for the states (see Appendix A for more details). To get to Eq. 18 requires two steps. First, we apply the usual RL implementation trick of writing the updates in terms of a gradient of a loss, with stop-gradient operations to avoid taking the gradient of $\delta$ (Appendix D). Second, we replace the dot product in Eq 12 with a cosine similarity. This gives a useful normalization, which does seem to improve performance (see Appendix I.2). We agree that these steps were not described clearly enough in the main text, so we will update the camera-ready to spell this out. The reward and transition model were learned by maximum likelihood based on the observed transitions, as in most model-based RL approaches (e.g. Janner et al., 2019 - MBPO). We will add an Appendix describing the model-learning to the camera-ready. > The current evaluations of Taylor TD are limited to a small set of MuJoCo continuous control tasks. 
We agree with the reviewer that a more diverse test suite could provide greater insight into the strengths and weaknesses of Taylor TD across different conditions. Partial observability is complex, as it requires latent states in the model. We suspect it should be possible, but anticipate that it is a "research-project" level exercise that is out of scope for this paper. Additionally, Humanoid is typically considered a high-dimensional control task within the RL literature, given the 376-dimensional state space and the 17-dimensional action space. It should also be noted that the set of MuJoCo continuous control tasks that we chose represents the standard benchmark on which RL algorithms are tested for continuous control. For instance, the baseline algorithms we tested Taylor TD against were themselves tested on 6 similar MuJoCo continuous control environments in the original papers, and this is the case for most continuous control RL algorithms (e.g., SAC, TD3). Nevertheless, we are happy to run further experiments for the camera-ready if you can suggest specific benchmarks. > The hyperparameters indicate high values for \lambda_a and very low values for \lambda_s. In general, we found the action-based expansion to bring the largest benefits to performance, although the state-based expansion is still useful (i.e., see Appendix I.1). We think one reason for this may be that we know Gaussian distributions over $a$ work well when evaluating the Bellman updates. However, we don't know whether the same applies for the distribution of states; a Gaussian distribution may only work well for very nearby states. We should also stress that any direct comparison between the magnitude of the state covariance (\lambda_s) and the action covariance (\lambda_a) may not be very meaningful. This is because the scales of the actions and states tend to be fairly different in the tested environments, making any comparison between $\lambda_s$ and $\lambda_a$ harder to assess. 
We will add this key discussion to Section A.6 of the camera-ready. --- Rebuttal Comment 1.1: Title: Thank you Comment: Dear Authors, I appreciate the thorough responses to all my questions. My concerns on derivation, loss function and others have been addressed. I adjusted the score accordingly. Thanks
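As a toy illustration of the first-order Taylor machinery discussed in this thread (this is our own sketch, not code from the paper): `f` and `h` below are invented smooth stand-ins for the TD error $\delta(s, a)$ and one component of the critic gradient $\nabla_\theta Q(s, a)$, both viewed as functions of the action, and the perturbation covariance is $\lambda I$, mirroring the choice $\Sigma_\text{a} = \lambda_\text{a} I$. Expanding each factor to first order and taking the Gaussian expectation leaves a $\lambda \, \nabla f \cdot \nabla h$ cross term but no Hessian terms.

```python
import math
import random

random.seed(0)
lam = 0.01          # plays the role of lambda_a: scale of the action-noise covariance
mu = (0.3, -0.5)    # mean (deterministic) action, 2-D for illustration

# Invented smooth stand-ins for delta(s, a) and one component of grad_theta Q(s, a).
def f(a): return math.sin(a[0]) + a[1] ** 2
def h(a): return a[0] * a[1]
def grad_f(a): return (math.cos(a[0]), 2 * a[1])
def grad_h(a): return (a[1], a[0])
def dot(u, v): return u[0] * v[0] + u[1] * v[1]

# First-order Taylor estimate of E_{xi ~ N(0, lam*I)}[ f(mu+xi) * h(mu+xi) ]:
# the cross term of the two first-order expansions contributes lam * grad_f . grad_h;
# Hessian terms would only enter with a second-order expansion of each factor.
taylor = f(mu) * h(mu) + lam * dot(grad_f(mu), grad_h(mu))

# Monte-Carlo reference: average many noisy evaluations of the same product.
sd = math.sqrt(lam)
samples = []
for _ in range(200_000):
    a = (mu[0] + random.gauss(0, sd), mu[1] + random.gauss(0, sd))
    samples.append(f(a) * h(a))
mc_mean = sum(samples) / len(samples)
mc_var = sum((s - mc_mean) ** 2 for s in samples) / len(samples)
print(f"Taylor: {taylor:.4f}  MC mean: {mc_mean:.4f}  per-sample MC variance: {mc_var:.5f}")
```

The dropped Hessian terms also contribute at order $\lambda$ (via $\tfrac{\lambda}{2}\,\mathrm{tr}$ of each factor's Hessian), so the match is close but not exact; this is the truncation the rebuttal refers to when choosing a first-order over a second-order expansion.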
Summary: The authors present a variance reduction trick in TD learning by taking the analytical expectation of the gradient using a first-order Taylor expansion. Standard RL replaces the expectation over the gradient of the critic (Q-value) with a sampled value from the replay buffer or online learning samples. The paper proposes to replace this expected gradient of the Q-value function with its first-order Taylor series expansion. They propose Taylor expansions over both states and actions. ** I have read the rebuttal of the authors. Strengths: Originality 1. The work is proposing novel research in the direction of stabilizing the critic updates by reducing the variance using a first-order Taylor approximation of the expected gradients. Quality 1. The paper could be improved with respect to how the experiments for Fig 1 are conducted, and by providing better clarity on the points raised in the questions section later. Significance 1. The work is definitely targeting a significant problem that could have impact on RL algorithms in general. This work is stabilizing the critic updates by reducing the variance. Weaknesses: The weaknesses/questions are mentioned in the next section. If the authors provide justification for the below, I am willing to change the score. Add references for other variance-reduction work in the direction of variance reduction of policy updates and inherent stochasticity (risk-sensitive RL). They are very related to your proposed research. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. It is not clear in the paper how the model (transition, reward) is learnt using the $L_\theta$ objective in Eq 18? Algo 1 shows the update of $w$ (the model parameter) with the gradient of $L_\theta$ without showing how $w$ influences it. 2. Why is $\mu_a$ in Eq 19 sampled from a deterministic target policy? Why do you enforce this determinism on the policy being learnt? 3. In Fig 1, what is the variance computed over? 
Is it the same $(s,a)$ pair, with multiple gradients then computed over it? Is it batch data, comprising the same data for both MC TD and Taylor-TD, with the variance then computed over updates? Because if the variance is computed over different state-action pairs, then the high variance could be a consequence of a given $(s,a)$ pair having high stochasticity in the dynamics. It wouldn't necessarily mean that updates have less variance because of the Taylor approximation. Please explain what this variance is computed over. 4. What is the effect of reducing the variance in the critic update vs. that in the actor update (Ref [1])? Have you tried an ablation study for the same - which one is better, or what effects do they have on learning separately vs. together? 5. Provide some connections with risk-sensitive RL and robust RL - inherent stochasticity vs. imperfect knowledge? It would be good to connect to the literature in that area (and add references for the same)! 6. A comparison with an expected-SARSA-style update in tabular environments? That would help in understanding how the expected update over the gradients helps over expected SARSA. (Would be good to know more insights on this, just something additional) References -- [1] Variance Reduction for Policy-Gradient Methods via Empirical Variance Minimization Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes, the authors have included the limitations section. The negative societal impact section doesn't apply to this line of research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments! > Add references for the other variance reduction work ... Risk-sensitive RL and robust RL Thanks for these suggestions! We will add your reference [1], and we did a literature review on risk-sensitive RL approaches (e.g., [2],[3],[4],[5]), which we will add to the camera-ready. Please do suggest other papers! We agree Taylor TD may be applicable to risk-sensitive RL. In the presence of a good model of the transitions, Taylor TD may be able to approximate the values of risky actions around safe (e.g. deterministic) actions, without actually needing to take those actions, thanks to the Taylor expansion of the TD objective. That said, Taylor TD does not directly estimate the uncertainty induced by imperfect knowledge (i.e., epistemic uncertainty), so it may be less applicable to robust RL. We would also like to stress that, in the related work section of the original manuscript, we discussed Expected Policy Gradients (Ciosek and Whiteson, 2018), Mean Actor Critic (Asadi et al., 2017) and "all-action" policy gradient (Petit et al., 2019) as related methods. These methods tackle the variance at the level of the policy rather than the critic updates by also integrating over the stochasticity induced by the action distribution. [1] Variance Reduction for Policy-Gradient Methods via Empirical Variance Minimization [2] Tamar et al. Policy gradients with variance related risk criteria. ICML (2012) [3] Bellemare et al. A distributional perspective on reinforcement learning. ICML (2017) [4] Lim & Malik. Distributional Reinforcement Learning for Risk-Sensitive Policies. NeurIPS (2022) [5] Fu. Risk-Sensitive Reinforcement Learning via Policy Gradient Search. arXiv (2018). > It is not clear in the paper how the model (transition, reward) is learnt We thank the reviewer for pointing this out. We neglected to include a section in the appendix where we described how the model was learned. 
To clarify, the reward and transition model were learned by maximum likelihood based on the observed transitions, as in most model-based RL approaches (e.g., Janner et al., 2019 - MBPO). We will add an Appendix describing the model-learning to the camera-ready. Note, Eq 18 refers to the critic loss (derived from the Taylor state and action expansion). > Why is $\mu_\text{a}$ in Eq 19 sampled from a deterministic target policy? There are really two separate questions here. First, there is the question of whether $a$, the initial action in the TD update, is stochastic/deterministic. In our setting, $a$ is always stochastic, which helps learn a broader Q function with performance benefits over deterministic initial actions (e.g., see the poor performance of the Dyna-TD3 baseline algorithm). However, stochastic initial actions also increase the TD-update variance relative to deterministic initial actions. Taylor-TD mitigates this increased variance by analytically approximating the expected TD update arising from stochastic initial actions. Second, there is the question of whether the target policy (which we use to generate $a'$ in the TD update) is stochastic/deterministic. It turns out that Taylor TD can be used with deterministic or stochastic target policies. We used deterministic target policies, because in a deterministic environment, the optimal policy is deterministic (or at least, _an_ optimal policy is deterministic). > In Fig 1, what is the variance computed over? We agree that this wasn't quite clear enough in the original manuscript, and will clarify in the camera-ready. We used exactly the same initial state-action pairs when comparing standard (MC) TD and Taylor-TD updates. The additional variance in standard (MC) TD updates comes from the usual approach of adding a small amount of Gaussian noise to the initial action, $a$, in the TD update; while in Taylor-TD, we analytically integrate over a Gaussian distribution over actions (and states). 
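A 1-D toy sketch of this distinction (our own illustration, not the paper's code; `u` is an arbitrary smooth stand-in for the TD update as a function of the noisy initial action): the sampled update is unbiased but varies from draw to draw, while the first-order Taylor estimate is deterministic, trading zero sampling variance for a small truncation bias.

```python
import math
import random

random.seed(2)
lam = 0.01          # variance of the Gaussian noise added to the initial action
mu = 0.7            # deterministic initial action

# Invented smooth stand-in for the TD update as a function of the initial action.
def u(a): return math.exp(-a) * math.sin(3 * a)

# Sampled ("MC") update: one noisy draw per update, as in standard TD
# with Gaussian exploration noise on the initial action.
def sampled_update():
    return u(mu + random.gauss(0, math.sqrt(lam)))

# First-order Taylor update: E[u(mu + xi)] ~ u(mu) + u'(mu) * E[xi] = u(mu),
# since E[xi] = 0.  Deterministic, so zero variance across repeated updates.
taylor_update = u(mu)

n = 100_000
draws = [sampled_update() for _ in range(n)]
mean = sum(draws) / n
var = sum((d - mean) ** 2 for d in draws) / n
print(f"sampled: mean {mean:.4f}, variance {var:.5f} | taylor: {taylor_update:.4f}, variance 0.0")
```

Averaged over the same initial state-action pair, both estimators agree up to an $O(\lambda)$ truncation term, but only the sampled one contributes noise to each individual update, which is the kind of variance the Fig. 1 comparison measures.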
> What is the effect of reducing the variance in the critic update vs. that in the actor update (Ref [1])? We thank the reviewer for bringing this paper to our attention; we will add it to the related work. We would expect that the improvements from each method would compound, as one is reducing variance in the actor update, while the other is reducing variance in the critic update: we will attempt to perform this experiment for the camera-ready deadline. That said, it is important to note an additional complication. In particular, TaTD can be used with deterministic or stochastic target policies, and our current experiments use deterministic target policies, as we are mostly working in deterministic environments. However, [1] can offer no benefits with deterministic target policies, as [1] reduces variance from stochastic target policies. > Comparison with an expected-SARSA-style update in tabular environments? First, as noted in the Abstract, Taylor-TD requires continuous state and action spaces for the required derivatives, so the connection to tabular settings is unclear. More fundamentally, expected SARSA computes the expected $Q(s', a')$ under the distribution over _next_ actions, $a'$, given the current policy. In contrast, Taylor-TD computes the expected TD update under a distribution over the _initial_ action, $a$. Thus, it is possible to combine expected SARSA and Taylor-TD, and as they reduce different components of the variance, we would expect them to be complementary. However, it is important to note that expected SARSA only makes sense when we have a stochastic policy, which implies a need to compute the expectation of $Q(s', a')$ under a distribution over _next_ actions, $a'$. In contrast, Taylor-TD allows the use of either stochastic or deterministic target policies. As noted above, we use deterministic target policies (as an optimal policy in a deterministic environment is deterministic). 
And if we use a deterministic target policy, there is no need for an expectation over next actions: we can just compute $Q(s', a')$. --- Rebuttal Comment 1.1: Title: Thanks for the responses Comment: Thanks for addressing some of my concerns. I had some additional input. Please clarify more on the third point mentioned below. 1. "These methods tackle the variance at the level of the policy rather than critic updates by also integrating over the stochasticity induced by the action distribution." -> There exists work in the risk-sensitive RL literature that tackles variance estimation by directly using the Bellman operator for the variance. It would be good to include them in the references too, and to provide clarity on how your work differs from these works. I agree that your work minimizes the variance of the value function estimation by using a Taylor approximation. [1] Sobel, M. J. 1982. The variance of discounted Markov decision processes. Journal of Applied Probability 19(4): 794–802. [2] Jain, Arushi, et al. "Variance penalized on-policy and off-policy actor-critic." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 9. 2021. [3] Tamar, A.; Di Castro, D.; and Mannor, S. 2013. Temporal difference methods for the variance of the reward to go. In International Conference on Machine Learning, 495–503. 2. “It is not clear in the paper how the model (transition, reward) is learnt” -> If the transition and reward are learnt using maximum likelihood, please change the notation in Algo 1, in the model update step where the same \mathcal{L} was used. The same \mathcal{L} is used in Eq 18, which makes it very confusing. Further, also describe the notation in the intro/Taylor TD section - how the model was learnt. 3. Why is the standard TD method called (MC) TD? Because in the TD method, we are still estimating the target by bootstrapping with a 1-step Q value. I can't understand why MC is there. In this work, the “standard TD style” is also used. 
Could you clarify the difference between the two? --- Reply to Comment 1.1.1: Title: Response Comment: Thanks for your continued engagement! [1,2,3] look like great papers. In fact, they're super-relevant for some of our other work, so I will pass them on. And of course, we will include them in the related work for this paper. But importantly, they are doing something quite different from this paper. [1,2,3] are all considering variance _in the return_, induced either by stochastic _rewards_ or _transitions_. In contrast, this paper is not considering variance in the return, nor does it consider variance that arises through stochastic rewards or transitions. Instead, we are considering variance in the _TD update_ induced by stochasticity in the _choice of initial action_ (and visited states). [1] Sobel, M. J. 1982. The variance of discounted Markov decision processes. Journal of Applied Probability 19(4): 794–802. This paper presents formulas "for the variance and higher moments of the present value of single-stage rewards in a finite Markov decision process". [2] Jain, Arushi, et al. "Variance penalized on-policy and off-policy actor-critic." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 9. 2021. This paper proposes "on-policy and off-policy actor-critic algorithms that optimize a performance criterion involving both mean and variance in the return." [3] Tamar, A.; Di Castro, D.; and Mannor, S. 2013. Temporal difference methods for the variance of the reward to go. In International Conference on Machine Learning, 495–503. (The title is a reasonable summary.) > If the transition and reward are learnt using maximum likelihood, please change the notation in Algo 1, the model update step where the same \mathcal{L} was used. The same \mathcal{L} is used in Eq 18 which makes it very confusing. Further, also describe the notation in intro/Taylor TD section - how the model was learnt. Good catch. 
We'll definitely update this as part of the more general revisions regarding the model learning. In particular, we will use $\mathcal{L}^{\text{model}}$ as the model loss, and $\mathcal{L}^{\text{critic}}$ as the critic loss (e.g. in Eq. 18). > Why is the standard TD method called (MC) TD? Because in the TD method, we are still estimating the target by bootstrapping with a 1-step Q value. I can't understand why MC is there. In this work, the “standard TD style” is also used. Could you clarify the difference between the two? We're using Monte-Carlo (MC) as opposed to Taylor (Ta) to emphasise that there are two ways to compute the expectation in Eq. 7. We could draw many samples of $\xi_i$ and compute an empirical average. That's a Monte Carlo approach. Alternatively, we could use our proposed approach of computing the analytic expectation for a first order Taylor series expansion. We realise that there is a bit of a clash with an alternative use of "Monte-Carlo" in RL (specifically Monte-Carlo estimates of the return). Following Reviewer 9ew5's suggestion we will switch to using "sampled TD update" rather than "MC TD" to avoid this potential source of confusion.
Summary: The authors are proposing a method for reducing the variance of TD-learning updates for model-based RL applied to problems with continuous state-action spaces. The method relies on a Taylor expansion of the noise terms in the action and initial-state distributions, reducing their contribution to the variance of the TD update. The authors demonstrate variance reduction both theoretically and empirically, as well as performance on par with SOTA model-based methods such as MBPO (Janner et al, 2019) and MAGE (D'Oro et al, 2020). For empirical evaluation, the authors apply the proposed update adjustment to TD3 with Dyna. Strengths: The empirical results seem pretty strong, though I would have found results with longer training times to be more convincing (about 2x longer, i.e. as long as was used in other work such as Janner et al, 2019). I appreciated the strong theoretical backing, i.e. demonstrating lower variance and stability guarantees (though it should be noted that I did not check the proofs as this work falls outside of my expertise). Weaknesses: Minor: - the authors should be clearer in the main body of the paper about the additional computational demands of the proposed method - some of the particularly interesting results can only be found in the Appendix. For example, the claim that the method performs better on large state-action spaces seems like an important one, and it was good to see it explored in a more controlled setting in Appendix F. Similarly, we can only find ablations (importance of state expansion and cosine similarity) in Appendix I (also it would be good to see these results on all 6 environments). I hope the authors can find a way to move these results to the main body of the paper (personally I found those to be more insightful than the comparison in Fig 3). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Figure 1, does the variance of each method change during training? 2. 
In Figure 2, how do the results look if the models are trained for another 2x steps? What about a comparison in terms of training time? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No concerns regarding potential negative societal impact, and the limitations were discussed to a reasonable amount. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments! > the authors should be clearer in the main body of the paper about the additional computational demands of the proposed method. We raised this point in the Limitations section of the original manuscript, and provided a reference to the Appendix for the actual computational costs in terms of training time, highlighting that the additional computational cost is not that large (i.e., standard TD learning is only 20% faster on average). We will bring this table to the main text in the camera-ready. However, standard TD learning is perhaps the fastest baseline. Additional analysis showed that while Taylor-TD was a bit slower than TD, it was much closer to MAGE (for Walker 2d, Taylor-TD took 68 s while MAGE took 63 s). > In Figure 2, how do the results look like if the models are trained for another 2x steps? What about comparison in terms of training time? These runs require quite a bit of additional compute time, so we have prioritised Ant, as that's the only one that seems not to have saturated yet. In line with the rest of the results for Ant, we found that MBPO (5260 at 250k steps) seemed to be doing a bit better than TaTD3 (4860 at 250k steps). For the camera-ready, we will run Ant out to 300k steps, and increase the number of iterations in the rest of the environments. > some of the particularly interesting results can be only found in the Appendix. For example, the claim that the method performs better on large state-action spaces seems like an important one, and it was good to see it explored in a more controlled setting in Appendix F. Similarly, we can only find ablations (importance of state expansion and cosine similarity) in Appendix I (also it would be good to see these results on all 6 environments). I hope the authors can find a way to move these results to the main body of the paper (personally I found those to be more insightful than the comparison in Fig 3). Thanks! 
We agree that the Appendix contains a number of interesting results, and to the extent that is possible within space constraints, we will move some of these results into the main text. > In Figure 1, does the variance of each method change during training? We have run preliminary experiments assessing the variance reduction from Taylor TD at different stages of training. We found that for completely untrained networks, there was little benefit, likely because the untrained model provides poor gradient estimates. However, a beneficial variance reduction, similar in magnitude to those in the main-text, emerges early on in training, and remains for the rest of the training run. For the camera-ready paper, we will modify Fig. 1 to show the full time-course of the variance reduction through training. --- Rebuttal Comment 1.1: Title: More details about runtimes Comment: We have got some numbers around the runtimes for our method (TaTD3) against the competitive baselines (MBPO and MAGE). MAGE has about the same runtime as our method (TaTD3), while MBPO is a lot slower. | Method | Pendulum | Walker | Ant | Humanoid | | --- | ---- | ---- | --- | --- | | MAGE | 36 s | 63 s | 75 s | 127 s | | MBPO | 52 s | 133 s | 158 s | 235 s | | TaTD3 | 38 s | 68 s | 72 s | 117 s |
NeurIPS_2023_submissions_huggingface
2023
Exact Generalization Guarantees for (Regularized) Wasserstein Distributionally Robust Models
Accept (poster)
Summary: The paper provides theoretical results characterizing the generalization capabilities of methods based on Wasserstein distributionally robust approaches. In particular, the results presented extend the conditions under which the performance guarantees are not affected by the curse of dimensionality and are applicable to general classes of models. Strengths: The paper shows that the usage of a Wasserstein radius of order 1/sqrt(n) can provide generalization bounds in situations more general than those considered in existing works (linear models). In addition, the results presented also cover regularized versions of WDRO. The more general results are obtained using a novel type of proof based on a concentration bound for the dual problem, which is of independent interest. Weaknesses: The paper's contribution with respect to the state of the art needs to be better described. In particular, the extension to non-linear models of the 1/sqrt(n) scaling. The problematic dimension-dependent scaling arises in Wasserstein methods, while other techniques based on robust risk minimization have been shown to provide performance guarantees with the 1/sqrt(n) scaling. It would be good if the authors describe this fact and the related work. If I am not mistaken, Examples 3.6 and 3.7 correspond to cases for which the right scaling was already proven in previous works. In order to better assess the paper's contribution, it would be good if the authors discuss interesting examples for which the paper provides the right scaling while existing results cannot. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Would it be possible to include numerical results describing the theoretical results presented? The choice of the radius in practice is often problematic in Wasserstein methods. The theoretical results provide the scaling of such a radius but not a concrete recipe to choose it. 
It would be useful to explore choices of such a radius with the right scaling that result in small error and provide performance guarantees. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper adequately describes the limitations of the methods proposed, mostly in terms of the specific assumptions needed for the results to hold. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments and suggestions. Here is a detailed response to all remarks and questions. - *"The paper contribution with respect to the state of the art needs to be better described. In particular, the extension to non-linear models of the scaling 1/sqrt(n)."* Thank you for this important suggestion; we will clarify this point in the introduction. There are two points to make. - To the best of our knowledge, the only existing *exact* generalization guarantees with the right $1 / \sqrt n$ scaling are in the work of Shafieezadeh-Abadeh et al. (2019), Thm. 39. These guarantees are shown in the restricted context of linear models with a Lipschitz loss (e.g. robust linear regression, support vector machines, logistic regression) and leverage a closed form of the robust risk to get an *exact* upper bound, as in our results. - The work of An and Gao (2021), which is the closest to ours, covers non-linear models, but provides generalization guarantees with additional error terms. - *"other techniques based on robust risk minimization have been shown to provide performance guarantees with the scaling 1/sqrt(n). It would be good if the authors describe this fact and the related work."* Other DRO neighborhoods indeed also provide similar generalization guarantees with the right scaling. In the revision, we will refer to, in particular, MMD DRO [1, 3]. A result very similar to ours -- an exact upper bound on the true loss by the empirical robust risk -- indeed exists for MMD DRO [3, Cor. 3.1]. As with other generalization guarantees for MMD DRO or its variants [2, 4], the radius of the MMD ball is indeed only required to scale as $1 / \sqrt n$. Adding this discussion in the introduction will help us underline that, in this work, we focus on advancing the theoretical understanding of WDRO. 
- *"it would be good if the authors discuss interesting examples for which the paper provides the right scaling while existing results cannot."* Thank you for this suggestion. We will improve the example section with additional examples, namely kernel methods and neural networks, see the main rebuttal. For these models, our work is the first to give *exact* generalization guarantees in WDRO. - *"Would it be possible to include numerical results describing the theoretical results presented?"* We provide some numerical simulations on linear and logistic regression in the main rebuttal: we show that, both for standard and regularized WDRO, the robust risk with respect to the empirical training distribution is an upper bound on the test loss with probability almost one, provided the radius $\rho$ is large enough. - *"The choice of the radius in practice is often problematic in Wasserstein methods [...]"* We fully agree that choosing the Wasserstein radius remains a major challenge in practice. Though some works have started addressing this question (Esfahani and Kuhn, 2018; Blanchet et al., 2021), this issue is part of our future work. We will mention this in the conclusion. References for Kernel DRO: [1] Zhu JJ, Jitkrittum W, Diehl M, Schölkopf B. Kernel distributionally robust optimization: Generalized duality theorem and stochastic approximation. AISTATS 2021. [2] Zhu JJ, Kouridi C, Nemmour Y, Schölkopf B. Adversarially robust kernel smoothing. AISTATS 2022. [3] Staib M, Jegelka S. Distributionally robust optimization and generalization in kernel methods. Advances in Neural Information Processing Systems, 2019. [4] Zeng Y, Lam H. Generalization bounds with minimal dependency on hypothesis class via distributionally robust optimization. Advances in Neural Information Processing Systems, 2022. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses, which mostly address my comments/questions. 
I believe the paper deserves to be published, and it will be improved in the camera-ready version. --- Reply to Comment 1.1.1: Comment: Thank you for your kind words, and once again, we are grateful for your detailed comments and suggestions. If there are still specific points you would like us to expand upon, please feel free to let us know.
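Schematically, the kind of exact guarantee with the right $1/\sqrt{n}$ scaling discussed in this thread can be written as follows (a simplified sketch in the paper's spirit, not a verbatim statement of any theorem; $C$ is an unspecified constant):

```latex
% With probability at least 1 - \delta over the n-sample, once the radius
% exceeds a dimension-free threshold, the empirical robust risk exactly
% upper-bounds the true risk, uniformly over the class \mathcal F:
\rho \;\ge\; C\,\sqrt{\frac{1 + \log(1/\delta)}{n}}
\quad\Longrightarrow\quad
\mathbb{E}_{P}[f] \;\le\; \hat{\mathcal R}_{\rho}(f)
\quad \text{for all } f \in \mathcal F ,
```

where $\hat{\mathcal R}_{\rho}$ denotes the robust risk built on the empirical distribution; the key point is that the threshold scales as $1/\sqrt{n}$ rather than the $1/n^{1/d}$ rate coming from Wasserstein concentration.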
Summary: This paper presents generalization bounds for Wasserstein DRO and entropic regularized Wasserstein DRO (also called Sinkhorn DRO in Wang et al.) formulations. Those generalization bounds do not suffer from the curse of dimensionality. The theoretical analysis is also supported by two examples in Section 3.4. Strengths: - The theoretical analysis is interesting from two aspects. First, the authors reveal that the radius selection of WDRO to make the empirical robust loss dominate the true loss does not suffer from the curse of dimensionality. The analysis follows different techniques from the existing literature, such as Gao et al., Blanchet et al., etc. Second, the technique is general enough that it also applies to entropic regularized Wasserstein DRO (also called Sinkhorn DRO in Wang et al.) formulations. This is the first work that investigates the statistical properties of such formulations. - The authors also present two examples in machine learning to demonstrate that the technical assumptions hold and the proposed theoretical analysis applies. Weaknesses: - The writing of this paper could be potentially improved: 1. There should be a comma in Eq. (1), the equation between lines 83-84, Eq. (4), the equation between lines 194-195, and the equation between lines 301-302. 2. There should be a period in Eq. (10). 3. The contribution and related work parts in the introduction section should be separated. 4. It is a little bit confusing to first introduce the KL-divergence regularized WDRO risk in Eq. (5-6) and only then state that it corresponds to the Sinkhorn ambiguity set in lines 194-195. The authors should put them together in Section 2.2. 5. The notation could be potentially improved. For example, in Eq. (7) the authors use $\hat{\mathcal{R}}$ to refer to the risk based on the empirical distribution $P_n$. I would suggest replacing the notation $P_n$ with $\hat{P}_n$ for consistency. Further, in Eq. 
(7) I think the authors mean that $\rho$ should at least scale in the order of $\sqrt{(1+\log(1/\delta))/n}$, so why not write $\Omega(\sqrt{(1+\log(1/\delta))/n})\le \rho$ instead of $O(\sqrt{(1+\log(1/\delta))/n})\le \rho$? The same applies to the equation between lines 199-200. - It is great that the authors present statistical analysis for entropic regularized Wasserstein DRO. I would suggest the authors add some explanation or a numerical example to demonstrate the benefit of introducing entropic regularization. Will it bring extra benefits over standard WDRO? - The analysis is limited to the quadratic cost function, which could be restrictive. From my own trial and reading, I think the major difficulty for generalization is that it is difficult to apply the Laplace approximation technique for a general p-th power of the norm function. In other words, it is difficult to obtain the p-th-power-of-norm counterpart of Lemma A.3 and Lemma G.1. If so, I suggest the authors add an explanation of the difficulty of the extension. - Some literature is missing. For example, readers may wonder why one would consider adding entropic regularization to the WDRO problem and what the applications are. I suggest the authors make the following revisions: 1. update the reference [J. Wang, R. Gao, and Y. Xie. Sinkhorn distributionally robust optimization. arXiv preprint arXiv:2109.11926, 2021] to [J. Wang, R. Gao, and Y. Xie. Sinkhorn distributionally robust optimization. arXiv preprint arXiv:2109.11926, 2023]. In the updated version, the authors demonstrate that one can find a $\delta$-optimal solution to the general entropic regularized WDRO problem with complexity $\tilde{O}(1/\delta^2)$. So one major benefit of adding entropic regularization is computational tractability; 2. add several application papers on entropic regularized WDRO to the literature review: (i) Dapogny, Charles, et al. "Entropy-regularized Wasserstein distributionally robust shape and topology optimization." 
Structural and Multidisciplinary Optimization 66.3 (2023): 42. (ii) Song, Jun, et al. "Provably Convergent Policy Optimization via Metric-aware Trust Region Methods." arXiv preprint arXiv:2306.14133 (2023). (iii) Wang, Jie, and Yao Xie. "A data-driven approach to robust hypothesis testing using sinkhorn uncertainty sets." 2022 IEEE International Symposium on Information Theory (ISIT). IEEE, 2022. (iv) Wang, Jie, et al. "Improving sepsis prediction model generalization with optimal transport." Machine Learning for Health. PMLR, 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We warmly thank the reviewer for the numerous suggestions, comments and references. It is a pleasure for us to read that "The theoretical analysis is interesting from two aspects [...]" and that "it is great that the authors present statistical analysis for entropic regularized Wasserstein DRO [...]". Here is a point-by-point response. - About the writing: thank you for the suggestions, which we will implement in the revision. - *"I would suggest the authors add some explanation or numerical example to demonstrate the benefit of introducing entropic regularization."* Our work theoretically shows that regularized WDRO enjoys similar generalization guarantees as standard WDRO, with less restrictive assumptions: Thm. 3.4 for regularized WDRO requires fewer assumptions than both Thm. 3.1 and Thm. 3.3 on standard WDRO to obtain similar generalization results. On the numerical side, we will provide, in the revision, a discussion of the interest of entropic regularization. First, we will underline that J. Wang, R. Gao, and Y. Xie (2023) illustrate that the out-of-sample performance of regularized WDRO is on par with, if not better in some cases than, standard WDRO. We confirm this with some additional numerical simulations on linear and logistic regression in the main rebuttal, to be added to the revision. - *"The analysis is limited to quadratic cost function, [...] I suggest the authors add explanation for the difficulty of extension."* The reviewer is indeed right: it is not clear how to apply the Laplace approximation when the squared norm is replaced by the norm to the power $p$, and so, in particular, how to extend Appendix A.3. Moreover, the analysis of Appendix D.1 also relies heavily on $p$ being equal to 2, and more work would be needed to determine whether it can be extended or not. On the other hand, Appendix D.2 seems to extend to general exponents, as does Appendix C when $\epsilon = 0$ (similarly to An and Gao (2021)). 
We will add this remark to the conclusion of the revision. - About the additional references: we will gladly add them to the revision and update the reference [J. Wang, R. Gao, and Y. Xie, 2023]. --- Rebuttal Comment 1.1: Title: After reading the rebuttal Comment: I have read the rebuttal and I am happy to raise my score to 7. --- Reply to Comment 1.1.1: Comment: Thank you again for your suggestions and comments, which will help improve our work!
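For readers unfamiliar with the Laplace approximation invoked in this exchange, a schematic one-dimensional version of the technique (a generic textbook statement, not the exact content of Lemma A.3) reads:

```latex
% Laplace's method: for smooth \phi with a unique nondegenerate interior
% maximizer x^* (\phi''(x^*) < 0), as \varepsilon \to 0,
\int e^{\phi(x)/\varepsilon}\,dx
  = e^{\phi(x^*)/\varepsilon}
    \left(\sqrt{\frac{2\pi\varepsilon}{|\phi''(x^*)|}} + o(\sqrt{\varepsilon})\right),
\qquad\text{so}\qquad
\varepsilon \log \int e^{\phi(x)/\varepsilon}\,dx
  \;\xrightarrow[\varepsilon \to 0]{}\; \phi(x^*).
```

One plausible reading of the difficulty discussed above: with a quadratic transport cost, the exponent keeps a quadratic leading term near its maximizer, so the nondegeneracy condition $\phi''(x^*) < 0$ is natural; with a general $p$-th power of the norm, the curvature at the maximizer can vanish, and the expansion above no longer applies as stated.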
Summary: This work proves generalization guarantees for Wasserstein DRO models that only require the radius of order $O(n^{-1/2})$ under mild assumptions for general classes of models. This provides concentration results that do not suffer from the curse of dimensionality. Strengths: The theoretical contribution is the main strength. The empirical concentration of Wasserstein distance suffers from the curse of dimensionality, and this paper is able to prove the results (under some assumptions) that do not have this curse of dimensionality issue and provide statistical guarantees on the performance of WDRO solutions. Weaknesses: There is no significant weakness in this paper. Nevertheless, I think adding some discussion or examples for which the assumptions and thus the results in this paper do not hold can be beneficial; it can show failure cases and may also motivate future directions for the extension. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It might be better to split Section 3 into two shorter sections for better readability. And the discussion may also be extended by adding examples of failure cases with potential methods of relaxation. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their reading, comments and suggestions. - *"adding some discussion or examples for which the assumptions and thus the results in this paper do not hold can be beneficial;"* We agree that such examples were missing in the submission. In the main rebuttal, we provide more examples of parametric models, in particular kernel methods and neural networks. We also point out which cases our framework fails to cover in this context, see e.g. the discussion at the end of the kernel example. We will use all this material to enrich the example section in the revision. - *"It might be better to split Section 3 into two shorter sections for better readability."* Thank you for this suggestion which is well aligned with the addition of new examples. In the revision, we will create a section 4 dedicated to examples.
Summary: This paper provides generalization guarantees for Wasserstein DRO for a general class of functions, in which the radius scales as $1/\sqrt{n}$ and does not suffer from the curse of dimensionality. Moreover, these guarantees hold for any distribution in the neighbourhood of the true distribution, so that they still apply when the distribution shifts at testing time. The results in this paper hold for both the constrained and regularized versions of Wasserstein DRO. The authors also provide a proof sketch that explains the main ideas and techniques used in the proof, and apply their results to logistic and linear regression. Strengths: 1. This paper provides novel generalization guarantees such that the robustness radius does not suffer from the curse of dimensionality. To the best of my knowledge, the results in this paper are novel and make a non-trivial contribution to the DRO community. Moreover, the authors consider the regularized version of Wasserstein DRO and provide similar guarantees as well. 2. Most parts of the paper are well-written. The necessary background is clearly explained, and theorems are accompanied by detailed explanations of related definitions and concepts. Weaknesses: In Section 3.4, the authors consider logistic and linear regression as applications of their theorems. It would be better if more complicated and popular parametric models could be included in this section to justify the main assumptions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How does the setting considered in this paper compare with other works? Could you give a brief and high-level discussion of why the generalization guarantee is dimension-independent in your setting? 2. Is it possible to obtain similar generalization guarantees for DRO with $\phi$-divergence? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: This paper does not have potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their reading, comments and questions. We very much appreciate the comment "the results in this paper are novel and make a non-trivial contribution to the DRO community." Here are point-by-point answers to your questions. - *"It would be better if more complicated and popular parametric models can be included in this section to justify the main assumptions."*: In the main rebuttal, we discuss additional examples of parametric models, in particular kernel regression and neural networks. We will add them to the revision of the paper. - *"How does the setting considered in this paper compare with other works?"* We will provide a more detailed comparison of our setting with the closest previous works: - Blanchet et al. (2022), Blanchet and Shapiro (2023): these works consider the parametric setting, where $f(\theta, \xi)$ is twice differentiable with Lipschitz gradient and satisfies a uniform quadratic growth condition, and study the asymptotic concentration properties of the WDRO solution in the neighborhood of a minimizer of the true risk. - Gao (2022), An and Gao (2021): these works are the closest to ours. They consider piecewise differentiable loss functions on a compact set with Hölder gradient (note, however, that for their bounds to have vanishing errors, the points of non-differentiability have to be sufficiently far from the data distribution). Note also that they obtain generalization guarantees with additional error terms compared to our bounds; see the discussion in our paper, line 181. - *"Could you give a brief and high-level discussion of why the generalization guarantee is dimension-independent in your setting?"* Our bound indeed does not suffer from the curse of dimensionality: the minimal radius needed to obtain generalization bounds scales as $1 / \sqrt n$ instead of $1 / n^{1/d}$ (Esfahani and Kuhn, 2018). 
The main difference with (Esfahani and Kuhn, 2018) is that we consider the WDRO objective as a whole, instead of proceeding in two steps: 1) considering the Wasserstein distance independently and invoking concentration results on the Wasserstein distance, and 2) plugging this result into the WDRO problem. - *"Is it possible to obtain similar generalization guarantee for DRO with $\phi$-divergence?"* Though some of their generalization properties have been studied (e.g., Blanchet and Shapiro (2023)), to the best of our knowledge, it is not possible to obtain exact upper bounds like ours for $\phi$-divergences. A possible explanation may be the following: unlike Wasserstein uncertainty sets, uncertainty sets defined with $\phi$-divergences only contain distributions whose support is included in that of $P_n$.
Rebuttal 1: Rebuttal: We thank the reviewers for suggesting adding non-linear examples to the paper. We discuss three examples (kernel models, neural networks and families of invertible mappings) that we will add to the revision of the paper. We also discuss a numerical illustration of our main theorems that we will also add to the revision. ## Kernel ridge regression We present the example of kernel ridge regression and show that both Thm. 3.3 and 3.4 apply. Our work is the first to provide *exact* generalization bounds that do not suffer from the curse of dimensionality for these non-linear models in WDRO. Take a kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb R$ with $\mathcal{X}$ compact and $k$ smooth (e.g., Gaussian, polynomial...). We consider the following class of loss functions: $$\left\\{(x, y) \in \mathcal X \times \mathbb R \mapsto \frac{1}{2} \left(\sum_{i = 1}^m \alpha_i k(x, x_i) - y\right)^2 + \frac{\mu}{2} ||\alpha||^2_2: (\alpha_1,\dots,\alpha_m) \in A_m,\, (x_1,\dots,x_m) \in \mathcal{X}_m\right\\}$$ where $m$ is a fixed integer, $A_m$ is a compact subset of $\mathbb R^m$, $\mathcal X_m$ can be any closed subset of $\mathcal X^m$ and $\mu \geq 0$ is the regularization parameter. A typical choice for $\mathcal X_m$ would be the data points of the training set. This class fits into our framework of parametric models of $\S3.4$ by setting $\xi = (x, y)$, $\Xi$ to some compact subset of $\mathcal X \times \mathbb R$, $\theta=(\alpha_1,\dots,\alpha_m,x_1,\dots,x_m)$, $\Theta = A_m \times \mathcal X_m$ and $$f(\theta, \xi) = \frac{1}{2} \left(\sum_{i = 1}^m \alpha_i k(x, x_i) - y\right)^2 + \frac{\mu}{2} ||\alpha||^2_2$$ - With that setting, Assumption 2 is readily satisfied so that Thm 3.4 applies. - To apply Thm 3.3, we further need to assume that Assumption 4 holds. This non-degeneracy assumption is common in the WDRO literature (e.g., Blanchet et al. (2022); Blanchet and Shapiro (2023); Gao (2022); An and Gao (2021)). - However, Thm. 
3.1 cannot be applied directly as it is not yet clear what conditions on $k$ could ensure that Assumption 5 is satisfied. Moreover, as in other related works on WDRO, it is also not obvious how to extend this framework to cover non-smooth kernels (e.g., Laplace). Finally, note that kernel logistic regression is also covered by our framework by combining the arguments above with the logistic regression example, Example 3.6. ## Smooth neural networks We present the example of neural networks: as in the case of kernels, we show that both Thm. 3.3 and 3.4 apply to smooth neural networks. Again, our work is the first to provide *exact* generalization bounds that do not suffer from the curse of dimensionality. Denote by $\mathcal{NN}(x, \theta, \sigma)$ a multilayer perceptron that takes $x$ as input, has weights and biases $\theta$ and a smooth activation function $\sigma$ (e.g., GELU, tanh, ...). Choose $\ell(\hat y, y)$ a smooth loss function. Then, we consider the family of losses $\left\\{ (x, y) \mapsto \ell(\mathcal{NN}(x, \theta, \sigma), y): \theta \in \Theta \right\\}$ with $\Theta$ some compact set. Provided that the inputs $(x, y)$ lie in a compact set $\Xi$, the situation is the same as for kernels. - Thm. 3.4 applies since Assumption 2 is readily satisfied. - Thm. 3.3 applies provided the non-degeneracy assumption, Assumption 4, is satisfied. - We do not know how to ensure that Assumption 5 of Thm. 3.1 is satisfied for general neural networks, nor how to extend this framework to cover non-smooth activation functions (e.g., ReLU). ## Family of diffeomorphisms and generative modelling In this example, we show how the three theorems, Thm. 3.1, 3.3 and 3.4, apply. 
Consider a parametric function of the form $f(\theta, \xi) = h(g(\theta, \xi))$ where $g : \Theta \times \Xi \to \Xi$ is such that for any $\theta$, $g(\theta, \cdot) : \Xi \to \Xi$ is a diffeomorphism and $h : \Xi \to \mathbb R$ satisfies a mild technical assumption (the standard Morse-Bott condition, see e.g. (Arbel and Mairal, 2022, A.1)). Normalizing flows, widely used in generative modeling and sampling, are diffeomorphisms and thus lead to loss functions of this form. For these functions, we readily see that Thm. 3.4 applies since Assumption 2 is readily satisfied, and Thm. 3.3 applies provided the non-degeneracy assumption, Assumption 4, is satisfied. To apply Thm. 3.1, we show that Assumption 5 holds, as follows. 1. The function $h$ alone satisfies the first item of Assumption 5 since it is continuous and $\Xi$ is compact. $f$ then also satisfies it since $g$ is Lipschitz in $\xi$ uniformly in $\theta$. 2. We now show that the second item of Assumption 5 is also satisfied. In $\S A.5$, we showed that the second item of Assumption 5 is implied by the so-called parametric Morse-Bott assumption of Arbel and Mairal (2022), and Lemma 1 in $\S A.2$ of their paper shows that this family does satisfy that assumption. Hence, Assumption 5 is satisfied and Thm. 3.1 also applies. ## Numerical illustration We present numerical experiments supporting our theoretical results. On linear and logistic regression models, we illustrate that, provided the radius $\rho$ is large enough, the robust loss on the training distribution is indeed an upper bound on the true loss. 
For $f(\theta, \xi)$ as defined in Examples 3.6 and 3.7, we estimate the following probability, as in (Esfahani and Kuhn, 2018, $\S 7.2.A$): $$P\left(\hat{\mathcal R}^\varepsilon_{\rho^2} (f(\hat\theta_n, \cdot))\geq E_{P}[f(\hat\theta_n, \xi)]\right)\quad \text{where} \quad \hat\theta_n = \operatorname{argmin}_{\theta \in \Theta}\hat{\mathcal{R}}^\varepsilon_{\rho^2}(f(\theta, \cdot))$$ We observe on the plots (cf. pdf) that, for $\rho$ large enough, the above probability is close to 1, for both models and for both the standard and regularized cases (as guaranteed by Theorems 3.1 and 3.4). Pdf: /pdf/b8f03158e5aa4930cb935821f352927b8cef2c91.pdf
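The coverage-probability estimation described above can be sketched in a few lines. This is an illustrative stand-in, not the paper's code: it fixes a hypothetical 1-Lipschitz loss and uses the classical type-1 Wasserstein surrogate (empirical risk plus Lipschitz constant times the radius) as the robust risk, where the paper uses the quadratic cost and optimizes over $\theta$; the point is only to see the coverage probability climb towards 1 once $\rho$ passes the $O(1/\sqrt n)$ scale.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_trials = 5, 100, 2000

# A fixed 1-Lipschitz loss f(xi) = |<w, xi>| with ||w|| = 1 (hypothetical choice).
w = rng.standard_normal(d)
w /= np.linalg.norm(w)

def f(xi):
    return np.abs(xi @ w)

# "True" risk E_P[f], estimated once on a very large sample from P = N(0, I_d).
true_risk = f(rng.standard_normal((1_000_000, d))).mean()

def coverage(rho):
    """Fraction of n-sample trials in which the W1 robust surrogate
    (empirical risk + Lipschitz constant * rho) dominates the true risk."""
    hits = 0
    for _ in range(n_trials):
        xi = rng.standard_normal((n, d))
        robust_risk = f(xi).mean() + 1.0 * rho  # Lipschitz constant L = 1 here
        hits += robust_risk >= true_risk
    return hits / n_trials

# Coverage should climb towards 1 once rho grows past O(1/sqrt(n)).
print(coverage(0.0), coverage(1.0 / np.sqrt(n)), coverage(5.0 / np.sqrt(n)))
```

At $\rho = 0$ the surrogate is just the empirical risk, so coverage hovers around one half; a radius a few multiples of $1/\sqrt n$ already pushes it to essentially 1, mirroring the behaviour reported for the plots.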
NeurIPS_2023_submissions_huggingface
2023
$SE(3)$ Equivariant Convolution and Transformer in Ray Space
Accept (spotlight)
Summary: This paper presents a series of methods on SE(3) equivariant convolution and transformer, which can operate on ray space. The experimental results show that the proposed method can help establish SE(3) reconstruction of signed distance functions and neural radiance fields. I'm not an expert in equivariant networks; I'm sorry, but I may only be able to provide a general review. Strengths: - The paper looks solid in theory. - Adequate transformation settings in the experiments demonstrate the equivariance property of the proposed network modules. Weaknesses: - The entire method description section is obscure to me. - It is a bit hard to capture the overall idea when reading the sentences from Line 44 to Line 70. - The qualitative results (visual comparisons) are limited. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What are the model sizes and training times compared to DISN (for SDF reconstruction) and IBRNet (for novel view synthesis)? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It looks like the limitations are properly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. “The entire method description section is obscure to me.”** As mentioned in the response to Reviewer VaKN, we will change the order of presentation in the method section. We will first define the two problems of generalized rendering and reconstruction from multiple views and then elaborate on how to make their main ingredients (convolution and transformer) equivariant. The definition of equivariance convolution requires several concepts from representation theory. We will introduce first the definition and then elaborate on the concepts so that even if the concepts themselves are quite complex, the motivation for using them is clear. We employed visualization techniques to vividly illustrate the key concepts outlined in the paper, with additional visual aids provided in the appendix. We will try to make them more comprehensible. **2. “It is a bit hard to capture the overall idea when reading the sentences from Line 44 to Line 70.”** We recognize that lines 53-61 describing the definition of equivariance might be confusing and we will rewrite them drawing parallels from other applications of equivariance, as follows. Everybody understands what equivariance is in 2D image operations. The output should be transformed the same way as the input image. Exact equivariance holds only when rotations are in multiples of 90 degrees because of the orthogonal sampling grid of the image. Things become a bit more complicated in 3D tasks when the input is a point cloud. There, equivariance means that if all input points are transformed, then the output (segmentation, reconstruction) should be transformed as well. Exact equivariance in this case means that all points will be transformed rather than a new sampling of the environment from a rotated sensor configuration. In our case, the input consists of features on rays in 3D. 
Exact equivariance, here, means that when all rays undergo the same transformation because of a change of reference frame, the output features on rays or points will be transformed the same way. This differs from rotating the object, because the rays (and content) captured after the rotation will be different from the content before the rotation. In lines 44 to 51, we extend the equivariant convolution in ray space to the equivariant transformer in ray space, which leverages the kernel in the equivariant convolution. We provide two cases of cross-attention: one is from rays to points, where the query feature is attached to points and the key and value features are attached to the rays; the other is from rays to rays, where the query feature is attached to the target ray and the key and value features are attached to the source rays. Proceeding from lines 52 to 70, we delve into the practical applications of our model in 3D reconstruction and novel view synthesis, particularly within the realm of multiple views. We initially elucidate the specifics of equivariance within the context of multi-view settings. Subsequently, we detail how we seamlessly integrate the equivariant convolution and transformer modules to cater to the distinct requirements of these two tasks. **3. “The qualitative results (visual comparisons) are limited.”** We have introduced an additional visual comparison in Figure 3, which is illustrated in the rebuttal PDF. Due to space constraints within the PDF, the number of visualizations presented is limited. Nevertheless, we are dedicated to enriching the visual aspect of the paper by including more comparative visualizations to enhance clarity and understanding in the revised paper. **4. “What are the model sizes and training times compared to DISN (for SDF reconstruction) and IBRNet (for novel view synthesis)?”** For a fair comparison, our model size (number of parameters) is similar to that of DISN (about 16.2M parameters). 
Training duration extends to around 1.2 times that of DISN. For novel view synthesis, our model size (number of parameters) is similar to that of IBRNet (about 9.04M parameters). Training time experiences an approximately 1.8-fold increase compared to IBRNet due to the heightened complexity of calculations within each layer.
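The 2D analogy for equivariance used in this rebuttal ("the output should be transformed the same way as the input image", exact only for 90-degree rotations) can be checked on a toy example: an isotropic 3x3 mean filter commutes with 90-degree rotations of the image grid. This is only a generic illustration of the concept, not the paper's ray-space operators.

```python
import numpy as np

def mean3x3(img):
    """3x3 box filter on the 'valid' interior (no padding). The kernel is
    isotropic, so filtering commutes with 90-degree rotations of the grid."""
    h, w = img.shape
    return sum(img[i:h - 2 + i, j:w - 2 + j]
               for i in range(3) for j in range(3)) / 9.0

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))

# Equivariance check: rotate-then-filter equals filter-then-rotate.
lhs = mean3x3(np.rot90(img))
rhs = np.rot90(mean3x3(img))
print(np.allclose(lhs, rhs))  # the two orders agree up to float rounding
```

With an anisotropic kernel (or a rotation that is not a multiple of 90 degrees, which resamples the grid), the two orders would no longer agree, which is exactly the distinction the rebuttal draws between transforming the input and re-capturing the scene.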
Summary: The aim of this paper is to build equivariance constraints into neural rendering and 3D reconstruction networks. As far as I can tell from the paper, the representation used is a mapping from a ray space to a vector space. The paper describes equivariant convolution and transformer layers that operate over the ray space. Finally, these layers are inserted into existing neural rendering and 3D reconstruction networks to impose equivariance over re-parameterization of the coordinate space in which we represent camera extrinsics. Strengths: 1. The paper establishes a theory of equivariance in the ray space. This is an important contribution that could guide the design of future equivariant 3D scene representation approaches. 2. Adding equivariance leads to robustness to rotations and translations in 3D reconstructions. While it does not improve the quality of neural rendering, it makes renders consistent under coordinate frame transformations. Weaknesses: Overall, the paper lacks clarity and is not self-contained *at all*. I would suggest a major rewrite that adds a background section, restructures the methods section and adds needed details to the experiments section. 1. The paper needs a background section. The background should cover at least the definition of rays (using your parameterization), ray space and the IBRNet. A background section is much more important than a 2-page introduction. 2. The methods section is missing a clear overview at the start about what neural components we need and what equivariance constraints we want to build into these components. The methods section directly jumps into dense exposition with frequent references to the 34-page appendix. 3. The proposed model that is actually used in the experiments section is relegated to the appendix. 4. It is not clear why IBRNet is the only baseline for novel view reconstruction. What about NeRF [1], Equivariant NeRF [2] or SRT [3]? 
The proposed method should be compared with other equivariant novel view reconstruction methods. 5. It is unclear if Section 4.2 involves novel view reconstruction or only rendering the same images in different coordinate frames. If the latter is the case, novel view reconstruction seems like a vital experiment. 6. There are very few qualitative examples of the rendered frames, both in the main paper and in the appendix. There should be a much more extensive qualitative comparison of image rendering across several SOTA methods. **References:** [1] https://arxiv.org/abs/2003.08934 [2] https://arxiv.org/abs/2006.07630 [3] https://arxiv.org/abs/2111.13152 Technical Quality: 3 good Clarity: 1 poor Questions for Authors: 1. How well does the proposed method perform on novel view reconstruction? 2. How does the proposed method compare to Equivariant NeRF or the Scene Representation Transformer? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 4 excellent Limitations: The limitations of the proposed method are not sufficiently addressed except for a brief mention in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1:** We agree with the reviewer, and we will present the ray space definition and the context of IBRNet in the introduction. The mathematical definition and background of rays were initially placed in the appendix, a decision we acknowledge as not ideal. **W2:** We will rearrange and rewrite the methods section in the following way: first present the generalized rendering and reconstruction from multiple views, along with the neural components of their architectures (convolutional and attentional) and their inputs and outputs. Then present the convolution and attention formulas and explain why we need the tools from representation theory that are necessary to define convolution on homogeneous spaces, the master tool of our approach. All this material exists. In the introduction (lines 26-27), we state the purpose of the paper clearly: we address the problem of learning geometric priors that are SE(3)-equivariant with respect to transformations of the reference coordinate frame. Further in the introduction, between lines 28 and 30, we clarify that our input entails a light field, a function characterized by its orientation within 3D space, whose values encompass radiance or a range of features derived from pixel data. Transitioning to lines 60-70 of the introduction, we provide a detailed exposition of the neural components or features that play a pivotal role in each stage of the 3D reconstruction and neural rendering tasks. We acknowledge the reviewer's input regarding the order of presentation, recognizing its potential to enhance the paper's overall comprehensibility. **W3:** The main contribution of the paper is the novel introduction of the equivariant convolution and transformer in a light field. We believe that NeurIPS is a conference that appreciates novel methodological contributions, which is why we emphasized their description rather than the models of the tasks we applied them to.
This is also the case in all novel equivariance papers published at NeurIPS. We still presented the architectures of the rendering and reconstruction models in Figures 6 and 7. We would prefer to devote any available space to explaining the background as requested in your previous point. Of course, if we have space we will elaborate more on the models in the main paper. **W4 (Q2):** We chose IBRNet as a baseline because it is the classic generalized NeRF method, which makes use of a conventional transformer of rays, similar to approaches in the literature mentioned in your question. Our architecture is based on IBRNet, and we only replace the aggregation method with equivariant convolution and replace the conventional transformer with the equivariant transformer. Therefore, it is fair to take IBRNet as the baseline to show the robustness of our proposed basic operations (convolution, transformer). NeRF is not relevant to our approach because it performs a single-scene optimization without using any prior. Any time we change the reference frame we have to rerun NeRF because NeRF is not equivariant to the choice of coordinate system where we compute density and color. The equivariance in the paper Equivariant Neural Rendering is not the equivariance we mean in our paper: Equivariant Neural Rendering enforces only approximate equivariance via a loss function rather than by design as we do. We will definitely add it to the citations. In the rebuttal PDF, we added a comparison to SRT (Scene Representation Transformer), as outlined in Table 3. We present the quantitative outcomes obtained from the MultiShapeNet dataset. Notably, the performance of both IBRNet and our model lags behind that of SRT. We want to emphasize that our central contribution is to propose an equivariant convolution and transformer on ray space, which can be generally embedded into broad models in 3D learning.
SRT is not strictly equivariant: it assumes that the views are upright, is inconsistent with respect to the permutation of the cameras, and depends on the camera ID embedding. By randomly choosing the first camera frame as the canonical frame, it approximately achieves data-driven equivariance. Our framework is sufficiently generic to allow us to take architectures like SRT and convert them to be equivariant as long as they consist of transformer (or convolutional) modules. **W5 (Q1):** This is a misunderstanding. We perform novel view rendering (=reconstruction) from completely different novel 3D poses (Section 3.4). **W6:** We will augment the paper with additional qualitative results to enhance the overall presentation, and we have included extra visualizations in the rebuttal PDF. Although the page constraints of the rebuttal PDF limited the number of visualizations we could include, rest assured that we will incorporate an expanded set of results in the forthcoming revised version. **Limitation:** The primary constraint of this approach stems from the finite sampling of the light field. Sparse view-based sampling inadequately addresses substantial object displacements accompanied by significant changes in perspective, resulting in the breakdown of equivariance; this also explains the suboptimal performance on the MultiShapeNet dataset due to non-comprehensive light field sampling. We will expand upon this aspect in a more detailed section within the forthcoming revised paper. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the rebuttal and the additional comparisons! I raised my score. You are right that NeRF does not fit into your experimental setup, my mistake. It is interesting that SRT does so much better; e.g. https://arxiv.org/abs/2304.00947 took a step towards making SRT invariant to the reference frame. --- Reply to Comment 1.1.1: Comment: Thank you for appreciating our rebuttal and providing valuable feedback.
The significant performance disparity between SRT and our approach, as well as IBRNet, can be attributed to i) the order-of-magnitude difference in model size (74M parameters vs. our 9.04M), ii) the global nature of SRT's set-latent representations, and iii) SRT's larger encoder, encompassing a CNN and a state-of-the-art Vision Transformer applied across image patches. We appreciate the valuable reference you have provided, and we will be sure to cite it. While both our method and RePAST address the frame problem through relative pose, there are differences in the approach. Our model maintains theoretical equivariance even in cases of individual camera rotations around their axes or minor individual rotations accompanied by small content changes. In RePAST, patch tokens are reliant on the camera frame, while the relative pose is tied to the query camera frame. Although we have not conducted experiments with cameras rotated around their axes for neural rendering, we have done so for 3D reconstruction, showcasing the robustness of our model.
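The strict, by-design frame equivariance discussed in this thread can be illustrated with a small numerical check. The sketch below is our own illustration, not the authors' code: it verifies that the relative geometry between two rays (the angle between their directions and the closest distance between their lines) is unchanged under an arbitrary SE(3) transform of the reference frame, so features built only from such relative quantities are independent of the frame choice.

```python
# Numerical check (our illustration, not the authors' implementation):
# the relative geometry between two rays -- angle between directions and
# closest line-to-line distance -- is invariant to a global SE(3) transform
# of the reference frame. Networks depending only on such relative
# quantities give the same answer for any choice of world frame.
import numpy as np

def ray_relative_geometry(o1, d1, o2, d2):
    """Angle between ray directions and closest line-to-line distance."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    angle = np.arccos(np.clip(d1 @ d2, -1.0, 1.0))
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < 1e-9:                      # parallel lines
        dist = np.linalg.norm(np.cross(o2 - o1, d1))
    else:                                             # skew or intersecting
        dist = abs((o2 - o1) @ n) / np.linalg.norm(n)
    return angle, dist

def random_se3(rng):
    """Random proper rotation (via QR) and random translation."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1                                 # make det(q) = +1
    return q, rng.normal(size=3)

rng = np.random.default_rng(0)
o1, d1, o2, d2 = (rng.normal(size=3) for _ in range(4))
R, t = random_se3(rng)

before = ray_relative_geometry(o1, d1, o2, d2)
after = ray_relative_geometry(R @ o1 + t, R @ d1, R @ o2 + t, R @ d2)
assert np.allclose(before, after)                     # frame choice is irrelevant
```

The assertion holds for any seed: rotations preserve angles and norms, and the translation cancels in the origin difference.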
Summary: This paper introduces a method for leveraging geometric priors in 3D reconstruction and novel view rendering when the input views are insufficient. The authors propose learning priors from 2D images using a 2D canonical frame and a 3D canonical frame. They achieve coordinate frame equivariance by introducing an SE(3)-equivariant convolution and transformer in the 3D ray space. The paper demonstrates the efficacy of their approach in tasks such as 3D reconstruction and neural rendering using multiple views, showcasing robust results without transformation augmentation. Strengths: The proposed method in this paper offers several strengths: 1. Geometric Priors: The approach leverages geometric priors to enhance 3D reconstruction and novel view rendering. By incorporating prior knowledge, it improves the quality of results, particularly when the input views have limited coverage and inter-view baselines. 2. Equivariance to Coordinate Frame Transformations: The method ensures equivariance to coordinate frame transformations, allowing it to handle different orientations and positions of the cameras. This enables more accurate and consistent reconstruction and rendering across various coordinate frames. 3. SE(3)-Equivariant Convolution and Transformer: The introduction of SE(3)-equivariant convolution and transformer in the 3D ray space provides a powerful tool for learning and exploiting geometric priors. It allows for effective feature extraction and representation, leading to improved reconstruction and rendering results. 4. Adaptability to Different Tasks: The method demonstrates adaptability to different tasks, including equivariant 3D reconstruction and equivariant neural rendering. It can be tailored to specific applications, making it versatile and applicable in various scenarios. 5. 
Robustness without Transformation Augmentation: The approach achieves robust results even in datasets with roto-translated transformations, without the need for additional transformation augmentation techniques. This reduces the complexity and computational requirements while maintaining high-quality outputs. Overall, these strengths make the proposed method valuable for improving 3D reconstruction and novel view rendering by incorporating geometric priors and addressing challenges related to coordinate frame transformations. Weaknesses: 1. I appreciate the solid math and notation the authors adopted in this paper, which makes the writing theoretically sound. However, this may make the reading very difficult to follow when many mathematical explanations irrelevant to the method itself appear in the paper (like the whole paragraph from lines 131-138, which in my understanding could all be moved to the appendix). I believe you could tighten your writing and leave more space for the qualitative/quantitative experiments section. 2. I appreciate the notation explanations in the appendix. However, when I first read terms like "type-1 features", it confused me and I needed to look back to see what I had missed before. A more intuitive name or a brief explanation of this important terminology is needed, I think. This is just a recommendation on the writing and won't change my final rating. 3. More experiments (ablations, baselines) are needed to demonstrate the soundness of this paper as a new 3D representation learning framework. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I appreciate the soundness of your math, but in Sec 3.1, could you briefly explain what the convolutions from rays to points and from rays to rays should handle and why they are necessary? Any ablation on these designs, like using either convolution alone in the overfitting/generalized NeRF setting? 2.
This paper spends a large paragraph explaining how to achieve equivariance in the light field space using convolution and transformer. However, I still don't quite see the advantages behind it, e.g., why in the airplane dataset, without the equivariance design, Fvor is still superior to the proposed method under R/Y/SO(3) transformations? Also, IBRNet surpasses the proposed method over many datasets and metrics. 3. I wonder, in Sec 4.2, should the original NeRF & PixelNeRF and other strong baselines that use canonical poses be included in the comparisons? Besides, I wonder whether this method could benefit the noisy-pose reconstruction setting, e.g., comparing with "SPARF: Neural Radiance Fields from Sparse and Noisy Poses". Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Already discussed in the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1:** Indeed, lines 131-138 introduce material that is beyond the standard toolbox even for a mathematically avid NeurIPS reader. Nevertheless, this material is essential for establishing the definition of the generalized convolution as depicted in Equation 3 of the paper. We agree with the reviewer that we can, though, introduce the convolution and still refer to the appendix to explain the terms in the convolution. **W2:** We really appreciate these comments, and we could easily replace "type-0" with "scalar" and "type-1" with "vector", thus avoiding the cryptic representation-theory terms. **W3:** We conducted ablation studies during the initial submission, and the results are presented in Tables 1 and 2 on page 28 of the appendix. We could move them to the main paper once space is saved by moving theoretical segments to the appendix. In Table 1 we showed ablations on the importance of convolution and transformer as well as the vector (type-1) features. In addition, we provide an ablation in the rebuttal PDF. We replaced the equivariant transformer for points along the ray with the canonical transformer. Our rationale for not exclusively altering the equivariant convolution and transformer from rays to rays lies in the fact that such a change would eliminate the presence of a feature specifically tailored for equivariance in subsequent modules. Consequently, we would be left with the option of applying a conventional transformer to this feature, resulting in a setup identical to IBRNet, which we have already compared against. In order to apply a conventional transformer to the equivariant feature, we first convert the equivariant feature to an invariant (scalar) feature, which keeps the model equivariant, but the performance is inferior to that of the full model. This shows that the equivariant feature contains more information and that the equivariant transformer is therefore more expressive.
**Q1:** We need ray-to-ray convolution where both the input and output consist of fields distributed across rays. Conversely, when the input constitutes a field distributed across rays, like a radiance (feature) field, but the output takes the form of a field across 3D Euclidean space, such as an occupancy field or Signed Distance Function (SDF), ray-to-point convolution is necessary. The selection between ray-to-ray and ray-to-point convolutions is contingent upon the specific task at hand. The pivotal characteristic of the proposed convolution lies in its equivariance to SE(3). Equivariant ray convolution is essential, providing consistency under SE(3) transformations and overcoming challenges in 3D vision with unknown poses of objects or scenes. As stated in W3, we performed the ablation by replacing the equivariant transformer over points along the ray. In addition, IBRNet serves as our baseline, obtained by replacing the equivariant ray-to-ray convolution and transformer with IBRNet's conventional view aggregation and replacing the equivariant transformer over points along the ray with IBRNet's ray transformer. Through this analysis, we ascertain that equivariant convolution and transformer not only exhibit robustness to transformations but also highlight the increased potency of the equivariant transformer when equivariant features are maintained within the model. **Q2:** It is important to highlight that, across all settings, the object pose is provided to Fvor, which significantly aids the 3D reconstruction process by incorporating point positions into the method. In the Y/Y setting, akin to I/I, Fvor outperforms our method due to the provision of more informative data. Regarding the SO(3)/SO(3) and R/R settings within the airplane category, we attribute the observed trend to the unique characteristics of airplanes.
Being slender objects with distinct features, airplanes make it relatively straightforward for non-equivariant networks to memorize transformations through training augmentation or data-driven methods. This can explain the comparable performance between our method and Fvor in such scenarios. For the neural rendering experiment, IBRNet surpasses our method in the I/I setting, which is understandable, since it uses absolute positional encoding while ours uses relative poses and constrained kernels. IBRNet is comparable to our method in the I/SO(3) setting on the Diffuse Synthetic 360 dataset. We think this is because the synthetic data is less sensitive to ray direction and has sparser source views, which makes it less dependent on ray directions. The only difference between IBRNet and our method is how the geometric relations of the rays are handled; when the task depends much more on image features than on geometric information, this advantage is weakened. **Q3:** NeRF does not aim to learn a 3D prior by using the radiance field as input, but rather to fit a scene by regressing its radiance field. NeRF's application is primarily geared towards individual scenes, requiring network retraining or optimization reruns whenever the reference frame for densities and colors changes. In this sense, a comparison with the NeRF-based SPARF is not adequate. We will cite it nevertheless. Our rebuttal PDF includes a robust baseline, NeuRay, as illustrated in Table 2 and Figure 2 of the PDF. NeuRay, like our method, builds upon the IBRNet baseline but goes further by incorporating a depth map or cost volume to estimate point visibility. We conduct a comparative analysis with NeuRay (Neural Rays for Occlusion-aware Image-based Rendering) on the Real Forward-facing LLFF dataset. Our observations reveal that our method demonstrates comparable performance to NeuRay in the I/I setting.
In the I/SO(3) configuration, NeuRay experiences a performance decline and inconsistency, whereas our method remains robust against rotations.
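The input/output structure of the ray-to-point operation described in the answer to Q1 (a field over rays in, a field over 3D points out) can be caricatured as a distance-weighted pooling. The sketch below is hypothetical and ours, not the paper's operator: the actual method uses learned SE(3)-equivariant kernels, and `sigma` and the Gaussian weighting are illustrative choices.

```python
# Hypothetical sketch (not the paper's operator) of ray-to-point structure:
# features living on rays are pooled at a 3D query point, weighted by the
# point-to-ray distance. The real method uses learned SE(3)-equivariant
# kernels; sigma and the Gaussian weights are our illustrative choices.
import numpy as np

def point_to_ray_distance(p, origin, direction):
    """Distance from point p to the line through `origin` along `direction`."""
    d = direction / np.linalg.norm(direction)
    v = p - origin
    return np.linalg.norm(v - (v @ d) * d)            # drop the along-ray part

def ray_to_point_aggregate(p, origins, directions, features, sigma=0.5):
    """Gaussian-weighted pooling of per-ray features at query point p."""
    dists = np.array([point_to_ray_distance(p, o, d)
                      for o, d in zip(origins, directions)])
    w = np.exp(-0.5 * (dists / sigma) ** 2)
    return (w[:, None] * features).sum(axis=0) / (w.sum() + 1e-12)
```

Because point-to-ray distances are preserved by rigid transforms, scalar (type-0) features pooled this way are automatically invariant to the choice of reference frame.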
Summary: This paper proposes to study geometric priors for 3D reconstruction and neural rendering from a novel perspective of multi-view equivariance. A theoretically sound definition of ray neighborhoods under SE(3) is obtained by characterizing the ray space with group theory. Then, the SE(3)-equivariant convolution and cross-attention operators are presented. In the experiments, the authors evaluate the proposed mathematical framework on two tasks, multi-view 3D object reconstruction and neural rendering. For multi-view 3D object reconstruction, the presented SE(3)-equivariant network outperforms Fvor and DISN in most settings for chairs, airplanes, and cars on the ShapeNet dataset. For neural rendering, the proposed method obtains better results in the I/SO(3) setting, justifying its design. Overall, I think this paper brings new insights for understanding the ray space by using group theory. Strengths: 1. This paper presents a theoretical perspective for understanding ray spaces when learning geometric priors for multi-view 3D reconstruction. The thorough theoretical analysis presented in this paper is new to me. 2. Based on the theoretical findings, the authors presented equivariant convolution and transformer in ray space and justified their design on two tasks with positive results obtained. 3. This paper is well written with a detailed appendix to understand their theory. Weaknesses: 1. The core of this paper is studying the geometric relationship between rays to define the ray neighborhoods, so that convolution and attention can be induced in the ray space. If I understand correctly, what's the relationship between the point correspondences and the neighbors in the ray space? For example, are the corresponding rays near or the same in the ray space if we have a pair of keypoint matches $x$ and $x'$ for a pair of input images? 2.
As for the generalized neural rendering task, the authors stated that their model queries a target ray and obtains neighboring rays from source views in the first step. To my knowledge, such operations can be done by following epipolar geometry. Thus I am curious about the difference between the operation used in this paper and alternative solutions using epipolar geometry. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I am confused about the sentence "given only the relative poses of the cameras, we show how to learn priors...". Why do the authors need to highlight the relative poses in the paper? It seems strange to me. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors have discussed their limitations and broader impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. “The core of this paper is studying the geometric relationship between rays to define the ray neighborhoods, and then, convolution and attention can be induced in the ray space. If I understand correctly, what's the relationship between the point correspondences and the neighbors in the ray space? For example, are the corresponding rays near or the same in the ray space if we have a pair of keypoint matches x and x’ for a pair of input images?”** We really appreciate this question. Ultimately, it can be argued that every IBR or reconstruction method inherently boils down to addressing a correspondence challenge. When a pair of input images share matching keypoints, denoted as x and x', these keypoints represent neighboring rays. Figure 3 of the paper illustrates that when considering a specific ray x, its neighboring rays traverse within a confined cylinder. This cylinder utilizes x as its central axis and the maximum neighboring distance d as its radius. In the context of a two-view setup, as depicted in Figure 9 in the appendix, for a given ray x within view A, the neighboring rays can be categorized into two distinct groups. The first group encompasses rays originating from view A, where the angle between these rays and x is smaller than the specified threshold for the neighboring maximum angle. The second group consists of rays emanating from view B, located in close proximity to the epipolar line associated with x in view B. This proximity is attributed to the fact that the neighboring region corresponds to the projection of the confined cylinder onto view B. In this configuration, keypoint correspondence entails that subsequent to convolution and attention, querying the density of the keypoint would yield a significantly elevated value. **2. “As for the generalized neural rendering task, the authors stated that their model queries a target ray and obtains neighboring rays from source views in the first. 
To my knowledge, such operations can be done by following the epipolar geometry. Thus I am curious about the difference between the operation used in this paper and the alternative solutions using epipolar geometry.”** This question is also spot on! As outlined in the first question and depicted in Figure 9 of the appendix, the neighboring rays within the source view of the target ray are situated in the vicinity of the epipolar line associated with the target ray in that source view. When we set the maximum neighboring distance to 0 and the angle threshold to $\pi$, the resulting neighboring rays precisely align with the rays located on the epipolar line. **3. “I am confused about the sentence "given only the relative poses of the cameras, we show how to learn priors...". Why do authors need to highlight the relative poses in the paper? It is strange for me.”** Certainly, some clarification is needed to contextualize this concept within classical multi-view geometry. In approaches such as classical space carving or plane-sweep reconstruction, a crucial step involves the selection of a reference frame for voxels or sweeping planes, followed by an optimization process. In those cases, there is no question of equivariance, and we often choose one of the camera frames as the reference frame. However, in learning methods that use a prior (encoded in the weights of the network), this prior depends on the choice of the reference frame, and by default generalized reconstruction or rendering methods are not equivariant to the choice of this frame. In calibration cases, where only relative poses are given by an SfM method (instead of a world coordinate system from standard calibration), choosing one of the camera frames as the reference frame will alter the inference of the network if the network is not equivariant. Our method guarantees that this will not happen.
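The cylinder-shaped neighborhood described in the first answer, together with its epipolar limiting case from the second, can be sketched as follows. This is an illustrative test of ours, not the paper's code; `max_dist` and `max_angle` stand in for the paper's distance and angle thresholds.

```python
# Illustrative neighborhood test for two rays (ours, not the paper's code):
# a candidate ray (o2, d2) is a neighbor of the central ray x = (o1, d1) if
# the closest distance between the two lines is below max_dist and the angle
# between their directions is below max_angle. With max_dist -> 0 and an
# unrestricted angle threshold (pi), only rays meeting x survive, e.g. the
# rays on the epipolar line of x in a second view.
import numpy as np

def is_neighbor(o1, d1, o2, d2, max_dist, max_angle):
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    angle = np.arccos(np.clip(d1 @ d2, -1.0, 1.0))
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < 1e-9:                      # parallel lines
        dist = np.linalg.norm(np.cross(o2 - o1, d1))
    else:                                             # skew or intersecting
        dist = abs((o2 - o1) @ n) / np.linalg.norm(n)
    return dist <= max_dist and angle <= max_angle
```

With `max_dist = 0` and `max_angle = np.pi`, only rays whose lines meet the central ray pass the test, which in a second view corresponds to the rays on the epipolar line of x.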
--- Rebuttal Comment 1.1: Title: Reply to authors' rebuttal Comment: I wish to convey my gratitude to the authors for their comprehensive rebuttal. Following a thorough review of the rebuttal materials and additional comments, I find the technical contributions of the paper to be solid. I am pleased to note that the authors have acknowledged the need for revisions to enhance clarity. Their commitment to addressing these concerns is commendable. I am satisfied with the provided rebuttal and look forward to the revised version. --- Reply to Comment 1.1.1: Comment: We really appreciate you taking the time to review our rebuttal materials and additional comments. Your feedback is highly valued. We are dedicated to enhancing the paper's clarity as per the concerns highlighted in the reviews, going beyond the revisions presented in the attached PDF.
Rebuttal 1: Rebuttal: We extend our gratitude for the invaluable feedback and suggestions provided by the reviewers. We are pleased to note that all reviewers appreciated the contribution (2 excellent, 3 good) and the soundness (1 excellent, 4 good). All reviewers listed multiple strengths of the approach, and none of them doubted its novelty. The most negative reviewer (VaKN) says: “The paper establishes a theory of equivariance in the ray space. This is an important contribution that could guide the design of future equivariant 3D scene representation approaches.” Reviewer MuuL writes: “The paper proposes a novel approach for reconstruction from multiple views using equivariant shape priors….The paper is well written and easy to follow.” Reviewer sFoS says: “This paper presents a theoretical perspective to understand the ray spaces for learning geometric priors for multi-view 3D reconstruction. The thorough theoretical analysis presented in this paper is new to me…This paper is well written with a detailed appendix to understand their theory.” Reviewer gW8w lists 5 strengths and says: “Overall, these strengths make the proposed method valuable for improving 3D reconstruction and novel view rendering by incorporating geometric priors and addressing challenges related to coordinate frame transformations.” Two main concerns were raised: the lack of clarity in the presentation and the lack of comparisons to approaches beyond IBRNet. We take the feedback about the presentation into serious consideration and are committed to addressing this concern in the revised version, as described in the individual responses to the reviewers. We will convert the introduction section into a background section including necessary definitions that we had in the appendix. We will clarify the definition of equivariance in light fields.
We will change the order of the presentation so that the example tasks are described first in order to motivate the use of equivariant modules like convolution and transformer. Last, we will add more qualitative results to show the performance of our approach. The second primary concern raised pertains to the rationale behind comparing our approach with IBRNet, along with suggestions for incorporating comparisons with other state-of-the-art methods and conducting further ablation studies. We have selected IBRNet as our baseline due to its status as a classic and widely recognized generalized NeRF method. IBRNet employs the conventional ray transformer, a technique also prevalent in the literature referred to in the question. Our architecture builds upon the foundations of IBRNet, with a key modification involving the replacement of the combining method with equivariant convolution and the substitution of the conventional transformer with an equivariant counterpart. This reasoned approach justifies IBRNet's selection as the baseline, effectively showcasing the robustness of our fundamental operations, namely convolution and transformer. We have added a comparative analysis with NeuRay (Neural Rays for Occlusion-aware Image-based Rendering) on the Real Forward-facing LLFF dataset and with SRT (Scene Representation Transformer) on the MultiShapeNet dataset. We further added more ablations, all included in the rebuttal PDF. We are committed to acknowledging and citing all the valuable references the reviewers have contributed to the paper. Pdf: /pdf/5d290a1a5cff3627c03d2f3c2ac6c21692f4a78c.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes a novel approach for reconstruction from multiple views using equivariant shape priors. The paper proposes the $SE(3)$-equivariant generalized convolution as the fundamental operation on a light field whose values may be radiance or features. It takes input ray features and produces output ray features and point features using two different $SE(3)$-equivariant convolutions. The paper demonstrates $SE(3)$-equivariance by obtaining robust results on roto-translated datasets without performing transformation augmentation. To say it upfront, I do not have a background in equivariant networks and consider the paper outside my area of expertise. I find it difficult to understand the technical details of the paper. As such, I can only provide some high-level feedback to the paper and hope the AC and other reviewers can provide more detailed evaluations. Strengths: 1. The paper is well written and easy to follow. 2. The experiment results show that the provided models can outperform previous methods on various tasks. 3. This novel general equivariant representation framework for light fields can inspire further work on 3D vision and graphics tasks. Weaknesses: 1. The figures in this paper could be redesigned; some of them (Figures 4 and 5) are so small that it is difficult to read the text and symbols they contain. In addition, the effect shown in Figure 9 is not intuitive. 2. The paper demonstrated good reconstruction results on ShapeNet. Can the method proposed in this paper reconstruct surfaces on real datasets, such as those used in SparseNeuS? 3. Regarding the neural rendering experiments, the paper only shows results comparing with IBRNet, which doesn't seem to be the latest method. Can the authors provide the results of more experiments? (a. Local Implicit Ray Function for Generalizable Radiance Field Representation b. Learning to Render Novel Views from Wide-Baseline Stereo Pairs c.
Neural Rays for Occlusion-aware Image-based Rendering) Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see Weaknesses. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Please see Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. “The figures in this paper could be redesigned; some of them (Figure 4, 5) are so small that it is difficult to read the text and symbols they contain. In addition, the effect shown in Figure 9 is not intuitive.”** We will improve the design of Figures 4 and 5, and make them larger while saving space by relocating theoretical content from the paper to the appendices. We hope that this will make the figures legible and increase the understanding of the reader. In reference to Figure 9, in the top row of images, a noticeable distinction arises between the images generated by IBRNet(I) and IBRNet(SO(3)), as they contain a non-existent black area within the pillar. By contrast, our results exhibit no such black area. Furthermore, when comparing IBRNet(I) and IBRNet(SO(3)), the latter presents several blurred transverse lines, whereas our method remains robust against rotations. For the second row, our performance closely matches that of IBRNet in non-rotated scenarios. However, when the reference frame is rotated, our method maintains its quality, while IBRNet's output becomes increasingly blurred, particularly in the bottom right corner. We are committed to enlarging the figures to enhance their visual clarity and intuitiveness. Because the rebuttal PDF is limited to one page, the figures are still small there, but they will be shown at a legible size in the revised paper. **2. “The paper demonstrated good reconstruction results on Shapenet. Can the method proposed in this paper reconstruct surfaces on real datasets such as SparseNeuS.”** We invested considerable effort in the theoretical formulation of equivariant convolution and attention in a light field, and we were planning to conduct experiments on real datasets in a subsequent paper. We believe that the community will benefit from our exposition of a novel convolution and attention instead of squeezing it into a few paragraphs so that we make the real experiments fit.
While we recognize the significance of real-world reconstruction, ShapeNet has been an established dataset for surface reconstruction in hundreds of papers in computer vision. **3. “Regarding neural rendering experiments, the paper only shows results compared with IBRNet, which doesn't seem to be the latest method. Can the authors provide the results of more experiments? (a. Local Implicit Ray Function for Generalizable Radiance Field Representation b. Learning to Render Novel Views from Wide-Baseline Stereo Pairs c. Neural Rays for Occlusion-aware Image-based Rendering)”** We selected IBRNet as our baseline due to its status as a classic generalized NeRF method. This choice is rooted in the fact that IBRNet employs the conventional ray transformer, a technique also referenced in the pertinent literature. Our architecture builds upon IBRNet's foundation, with modifications limited to replacing the aggregation mechanism with an equivariant convolution and the conventional transformer with an equivariant transformer. Therefore, using IBRNet as a baseline is justifiable to showcase the resilience of our proposed fundamental operations (convolution, transformer). We have added a comparative analysis with NeuRay (Neural Rays for Occlusion-aware Image-based Rendering) on the Real Forward-facing LLFF dataset. Referencing Table 2 and Figure 2 in the provided PDF, our method demonstrates comparable performance to NeuRay in the I/I setting. In the I/SO(3) configuration, NeuRay experiences a performance decline and inconsistency, whereas our method remains robust against rotations. Unfortunately, we were unable to locate publicly available code for "Local Implicit Ray Function for Generalizable Radiance Field Representation." Without the code, a direct comparison would not be fair.
Furthermore, "Learning to Render Novel Views from Wide-Baseline Stereo Pairs," a method centered on rendering from only two views, is not a suitable comparison for our approach, which assumes that ample information can be extracted through light field sampling. Nevertheless, it would be worth investigating whether expanding the ray neighborhood in our technique could compensate for sparse light field sampling. We will cite these valuable references.
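The rotation robustness claimed above rests on the equivariance of the underlying operations. As a toy illustration (my own sketch, unrelated to the paper's actual light-field operators), a weighted sum of direction vectors with scalar weights commutes with rotation: aggregating rotated inputs gives the same result as rotating the aggregated output.

```python
import math

# Toy equivariance check (illustrative only): a weighted sum of 2-D vectors
# with scalar weights is rotation-equivariant, i.e. f(R v) = R f(v).
def aggregate(vectors, weights):
    """Weighted sum of 2-D vectors; scalar weights keep it equivariant."""
    return (
        sum(w * v[0] for w, v in zip(weights, vectors)),
        sum(w * v[1] for w, v in zip(weights, vectors)),
    )

def rotate(v, theta):
    """Rotate a 2-D vector by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

vectors = [(1.0, 0.0), (0.5, 2.0), (-1.5, 0.25)]
weights = [0.2, 0.5, 0.3]
theta = 0.7

# Aggregate then rotate, versus rotate each input then aggregate.
out_then_rotate = rotate(aggregate(vectors, weights), theta)
rotate_then_out = aggregate([rotate(v, theta) for v in vectors], weights)
```

Because both the aggregation and the rotation are linear maps, the two orders of operation agree exactly; a non-equivariant aggregation (e.g. one using fixed global axes in a nonlinear way) would fail this check.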
InsActor: Instruction-driven Physics-based Characters
Accept (poster)
Summary: This work proposes InsActor, a framework for instruction-driven character animation. Given a human instruction and/or waypoints, InsActor first leverages a diffusion model to generate state sequences of the character. Next, it uses a skill embedding model to convert the state sequence into physically plausible trajectories with states and actions. Strengths: 1. The paper presents the motivations, setups, and methods quite well. The diffusion model for state sequence generation and the conversion from state sequences to physical trajectories are described in sufficient detail. 2. The paper provides sufficient experiments and comparisons to show that it outperforms the baseline method. The hierarchical design is also justified with ablation studies. Weaknesses: 1. The work appears to have limited novelty, as it is somewhat a straightforward combination of character motion synthesis with diffusion methods such as [13] and a low-level trajectory optimization framework. 2. This work seems to miss a comparison with a straightforward method for the low-level policy, since the second stage can be formulated as a standard trajectory optimization problem. Since Brax is also a differentiable simulator, trajectory optimization should be easy to set up and, in principle, it can be used to obtain an optimal solution in terms of distances between the states generated from the first stage and states from a physically plausible trajectory. I think this work could include results and analysis from trajectory optimization or explain why it is not feasible here. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Suggestion on writing: - line 92: I believe the angular velocity $q$ of a link is not the derivative of rotation, and the symbol $q$ clashes with the notation of joint position $q$. This description needs to be clarified to avoid confusion.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The paper has provided adequate discussion of limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. We would like to address your concern as follows: > Q1: The work appears to have limited novelty, as it is somewhat a straightforward combination of character motion synthesis with diffusion methods such as [13] and a low-level trajectory optimization framework. This is a great point for us to clarify! We would like to highlight that this paper focuses on studying a scalable framework for the new task of text-to-motion generation, rather than on any individual component. We have conducted rigorous evaluation and fair comparisons of various design choices, using a state-of-the-art motion planning algorithm (MotionDiffuse) and a state-of-the-art tracking algorithm (DiffMimic) on the largest text-to-motion dataset available, to identify the best framework, which we believe is our core contribution in this work. We also believe that our framework will benefit from better motion generation models and motion trackers in the future. There are two main points we would like to highlight about InsActor: 1. Scalability: Building a scalable system for human-instructed animation generation is highly non-trivial. Compared with previous works, InsActor understands more general human language in terms of motion database size: the most related work, PADL, uses 131 individual clips totaling approximately 9 minutes, while InsActor works on the HumanML3D dataset with 14,616 motions totaling 28.59 hours (before filtering out physically impossible motions such as sitting on a chair). Training a general low-level policy that handles such varied motions is extremely difficult, as it may take hours to train a low-level policy to perform a single motion, e.g., a backflip. InsActor is the first system to work at such a data scale. 2.
Thoroughness: As the first attempt at large-scale, physically-aware, language-guided animation generation, InsActor evaluates and benchmarks various alternatives to gain insights that push the task forward. With extensive experiments, we show that directly outputting actions and state sequences as in Diffuser [13] or Decision Diffuser [1], even with state-of-the-art generative models, may fail in this task, which requires accurate continuous control. Instead, a proper decomposition of the task should be performed to derive an efficient solution. Thus, InsActor is not merely a combination but an important baseline for future work, as also acknowledged by reviewer s1C4. InsActor is scalable as the data scale grows and provides insights into the problem of language-guided animation generation. > Q2: This work seems to miss a comparison with a straightforward method for the low-level policy, since the second stage can be formulated with a standard trajectory optimization problem. Since Brax is also a differentiable simulator, trajectory optimization should be easy to set up and in principle, it can be used to obtain an optimal solution in terms of distances between the states generated from the first stage and states from a physically plausible trajectory. I think this work could include results and analysis from trajectory optimization or explain why it is not feasible here. We chose not to use trajectory optimization for several reasons: 1. Computational Efficiency: While employing standard trajectory optimization alongside the differentiable simulator is possible, optimizing trajectories for long motion sequences can be time-consuming: each generated trajectory necessitates its own optimization, leading to potential delays. In contrast, a trained policy enables fast inference. Given the emphasis on engaging with human users within our work's scope, we chose not to adopt trajectory optimization for the sake of computational efficiency. 2.
Robustness: Trajectory optimization methods such as [A] typically yield a single trajectory. However, our envisioned use cases for InsActor involve animation and gaming scenarios. In these dynamic environments, robustness is important: inherent noise in state estimation and random perturbations, such as pushes acting on the character, are pervasive. Consequently, the efficacy and robustness of the low-level policy become pivotal in ensuring InsActor's robustness during real-world deployment. Simple trajectory optimization is unable to handle such cases. In addition, we attach the tracking error of the learned low-level policy in the PDF file, which is low and on par with previous motion tracking works. 3. Flexibility: A learned policy can adapt to various situations and environments. Once trained, a policy can generalize its behavior to new situations that it has not explicitly encountered during training. This is important for future development of InsActor, such as enabling object interaction and accommodating more intricate animations. The current framework of InsActor enables seamless integration of object interaction through the low-level tracking module. In summary, the design is mainly due to the consideration of the application scenario and use cases. Nevertheless, we agree with the reviewer that including such a comparison would make the experimental results more thorough, and we will add it in the final version, as the time limit of the rebuttal period did not allow us to complete it. Additional References: [A] Gärtner et al., “Differentiable Dynamics for Articulated 3d Human Motion Reconstruction”, CVPR 2022. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I would like to thank authors for the explanation. I agree that the main contribution of the work is a scalable pipeline for character motion synthesis, and I would keep my original **accept** recommendation.
However, after discussion with other reviewers, I think the quality of character motion could still be improved (penetration and jittering artifacts). Without improving the quality, I do not think computational efficiency or flexibility need to be prioritized. Therefore, I still encourage including trajectory optimization (potentially adding stronger contact constraints to reduce artifacts) to see if it would help improve the synthesis quality.
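For readers less familiar with the trajectory-optimization baseline debated above, here is a minimal sketch of the general technique (my own toy example, not the paper's Brax setup): gradient descent on an action sequence through a hand-differentiated 1-D point-mass simulator, minimizing the squared distance between rolled-out positions and a target state sequence.

```python
# Toy trajectory optimization through a differentiable simulator: a 1-D point
# mass (x'' = a), with the exact gradient of the tracking loss computed by a
# backward (adjoint) pass. Illustrative sketch only, not the paper's system.
def rollout(actions, dt=0.1):
    """Simulate the point mass and return the visited positions."""
    x, v, xs = 0.0, 0.0, []
    for a in actions:
        x += dt * v   # position update uses the previous velocity
        v += dt * a   # velocity update from the control input
        xs.append(x)
    return xs

def loss_and_grad(actions, targets, dt=0.1):
    """Squared tracking loss and its exact gradient w.r.t. the actions."""
    xs = rollout(actions, dt)
    loss = sum((x - t) ** 2 for x, t in zip(xs, targets))
    gx, gv = 0.0, 0.0
    grad = [0.0] * len(actions)
    for t in reversed(range(len(actions))):
        gx += 2.0 * (xs[t] - targets[t])  # loss gradient w.r.t. x_{t+1}
        grad[t] = dt * gv                 # a_t only enters through v_{t+1}
        gv += dt * gx                     # backprop v_t -> (x_{t+1}, v_{t+1})
    return loss, grad

def optimize(targets, steps=500, lr=0.05):
    """Plain gradient descent on the action sequence."""
    actions = [0.0] * len(targets)
    for _ in range(steps):
        _, grad = loss_and_grad(actions, targets)
        actions = [a - lr * g for a, g in zip(actions, grad)]
    return actions

targets = [0.05 * (i + 1) for i in range(20)]  # a short position ramp to track
init_loss, init_grad = loss_and_grad([0.0] * 20, targets)
opt_loss, _ = loss_and_grad(optimize(targets), targets)
```

As the rebuttal notes, every new target sequence requires re-running this optimization loop, whereas a trained policy amortizes that cost into a single forward pass; it also yields one open-loop trajectory with no feedback against perturbations.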
Summary: This paper tackles the problem of generating text-conditioned character animation that is physics-based. It proposes a two-stage approach that first generates a kinematic motion conditioned on text using diffusion, and then tracks this motion with a physics-based motion VAE in the learned latent space. Experiments on the common KIT-ML and HumanML3D benchmarks show improved performance over prior work in physics-based motion from text, and show the ability to specify goal waypoints for the motion to hit while following human text prompts. Strengths: Physics-driven text-to-motion is an important problem and conditioning on free-form text input has not really been tackled in the literature, so the paper is novel in that respect. Diffusion has shown promising results recently, but these are all kinematic. The proposed idea of using high-level diffusion followed by physics-based tracking is simple and solid. It would make a good first baseline for future work in this area. Technically, InsActor uses a physics-based motion VAE to track kinematic motion which is novel, and it uses differentiable physics to train rather than a learned world model or RL. If this approach really does work to track general motions in the HumanML3D dataset, it would contribute an alternative to recent RL approaches that can be difficult to generalize. The proposed method is evaluated on HumanML3D and KIT, which are the most relevant benchmarks for the text-to-motion task. Also, the DReCon baseline in Tables 1 and 2 is an important baseline that uses the alternative state-based approach to tracking. The supplementary video and figures are visually pleasing, and I appreciate that the supp video is extensive and shows many results. The shown demo is also cool, and demonstrates fast generation capabilities (relative to other diffusion approaches). 
Weaknesses: To better motivate the need for physics in text-to-motion, there should be a comparison between the proposed InsActor and a state-of-the-art kinematic diffusion model like MDM [33] added to Tables 1 and 2. Currently, some numbers in these tables, e.g. FID and diversity, are worse than those reported in MDM, and it’s not clear why that’s the case since the high-level planner in InsActor is very similar. Does this high-level planner perform worse than MDM? Or does the physics-based motion tracking somehow have a large effect on motion quality and diversity? The high-level policy ablation reported in Table 4 takes a step in this direction, but it is trained on rollouts from the motion VAE instead of directly on mocap data as done in MDM and other text-to-motion diffusion models. Looking at video results at 0:46 and 01:54, I don’t think the high-level diffusion planner is on par with recent models like MDM. Both with and without waypoint guidance there are some significant artifacts like jittering and skating, some of which seem to be affecting the final motion from InsActor (e.g. some noisy popping of limbs and unnatural sliding). I understand this planner is not necessarily the main contribution, but I think poor kinematic motions from the planner undermine the comparison to the target-state tracking policy DReCon, which may perform better when operating on, e.g., outputs from MDM that better reflect realistic motion. Since the low-level motion VAE model for tracking (Sec 4.2) is a key contribution of the work, it’s very important to justify that it is necessary by showing that the DReCon baseline is still inferior when operating on more reasonable kinematic inputs. The methods section (Sec 4) is missing some details that could improve understanding and reproducibility: * The task states described in L92 are all local, so how is the global root trajectory modeled in the diffusion model of Sec 4.1?
* L154: if the pose state is in the local frame, how is inpainting performed to ensure motion meets a global target waypoint? * What is the architecture of the diffusion model? Is it using 1D convolutions as in Diffuser or a transformer as in other human motion diffusion models? $\mu$ and $\Sigma$ in Eqn 2 are never defined. In general, I’m wondering why not use a SOTA motion diffusion model out-of-the-box for this high-level planning component? * L185: is the low-level motion VAE trained directly on outputs of the diffusion model or on mocap data from the dataset? If on mocap, why is the encoder (Eqn 4) expected to produce reasonable results when operating on noisy and unrealistic pose transitions? * Similarly, Sec 5.3 shows robustness to perturbations from boxes, but is this kind of perturbation seen in training of the low-level policy too? If not, how does this robustness arise without using RL for training (i.e. without some exploration). An evaluation on the low-level tracking component by itself would be very helpful. E.g. reporting tracking errors for the latent policy from InsActor compared to DReCon for both motion-captured and diffusion-generated motions. The current metrics in Tables 1 and 2 were designed for kinematic text-to-motion models, and I would think are mostly influenced by the diffusion planner which is the same for InsActor and DReCon, so a tracking-only evaluation could help parse the difference in performance. There are also open-source RL physics-based trackers that may be worth considering, e.g. [Luo et al., Dynamics-Regulated Kinematic Policy for Egocentric Pose Estimation, NeurIPS 2021]. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Overall, I think a general physics-based text-to-motion model is an important and novel direction, and the hierarchical approach of diffusion planning with physics-based tracker could be a strong baseline going forward. 
But I’m mainly concerned that the quality of the output from the high-level diffusion planner is compromising the comparison to DReCon, and therefore the need for a latent motion VAE tracker has not been fully justified. I would really like to see how InsActor performs when plugging in a SOTA diffusion model like MDM [33] as the planner. Moreover, an evaluation of standalone tracking performance would make the comparison between the two tracking approaches (latent vs state-based) much clearer. Some other comments and suggestions that didn’t have an influence on my rating: * The related work (Sec 2) is missing relevant physics-based human animation methods and a discussion of why they are difficult to scale up to the general text-to-motion task. E.g. [Peng et al., ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters, SIGGRAPH 2022], [Won et al., A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters, SIGGRAPH 2020], etc. * Concurrent work PhysDiff [Yuan et al., arXiv 2022] is an alternative approach to adding physicality to text-to-motion diffusion and could be discussed in future revisions of the paper. Also related, concurrent work Trace and Pace [Rempe et al., CVPR 2023] gives controllability over physics-based characters with guidance of a diffusion planner. ===================== After Rebuttal ============================ After considering other reviews and discussions with authors and between reviewers, I have decided to slightly raise my score and am leaning towards accept. I think the paper lays out a compelling kinematic diffusion + physics-based tracking idea that can serve as a baseline and inspire improvement in each component of the system including differentiable simulation, motion diffusion, and physics-based tracking.
The evaluations show that this hierarchical approach works better than state+action diffusion, and that tracking with a latent model is more robust than direct state-based tracking for planned motions from MotionDiffuse. However, I am still quite concerned about the qualitative results and would really encourage the authors to update the paper text to discuss these qualitative issues such that future work can pursue important directions (e.g., the choice of Brax and differentiable simulation in general rather than RL, and the noisy plans from MotionDiffuse especially in the waypoint setting). It would also be good to clarify why in Table 2 of the rebuttal doc, motion quality (FID) drops significantly from kinematic “Planner” (b) output to full physics-based InsActor (c), indicating that adding physics-based tracking is not necessarily improving motion realism despite it being physically constrained (unlike in PhysDiff). I encourage the authors to show some video results of the planner output vs InsActor on regular text-to-motion (not the waypoint setting) to demonstrate the difference and potential advantages/disadvantages of using the latent tracking technique vs RL and more reliable simulators as in DReCon. ========================================================== Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Limitations are sufficiently discussed in Sec 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments. Due to the character limit, we would like to address your major concerns here. For more detailed questions (e.g. tracking details), we are glad to discuss them during the discussion phase. > Q1: To better motivate the need for physics in text-to-motion, there should be a comparison between the proposed InsActor and a state-of-the-art kinematic diffusion model like MDM [33] added to Tables 1 and 2. Why are FID and diversity worse than those reported in MDM? The high-level policy ablation reported in Table 4 takes a step in this direction, but it is trained on rollouts from the motion VAE instead of directly on mocap data as done in MDM and other text-to-motion diffusion models. This is a great point for us to clarify! Unfortunately, since MDM has a different output domain than our state trajectories, we cannot directly compare with MDM in Tables 1/2. However, we would like to clarify that our high-level planner is adapted from MotionDiffuse [A], a state-of-the-art text-to-motion model that is on par with MDM. Quantitatively, Table 2 in the appended PDF shows that our planner achieves strong generation performance. As we report in the general response, there is indeed a gap between the kinematic and physically simulated settings, which demonstrates the challenge of this new task. The gap stems from the fact that the kinematic motion generator is not physically plausible, which is not reflected in previous evaluation metrics such as FID. This issue has also been concurrently observed in the recent work PhysDiff [B], which proposed a trajectory optimization approach to fix the generated motion. Unlike PhysDiff, which operates in the kinematic setting, our character moves in a simulated environment, so physical plausibility is more rigorously ensured. In addition, motion generation models tend to produce sequences that appear appealing but are physically implausible.
In the process of grounding the state sequence in physics, certain compromises are made on visual aesthetics in order to adhere to physical realities. The "high-level" ablation study in Table 4 refers to the framework proposed in Diffuser [13], where a generative model predicts both the state sequence and the action sequence. This study shows the importance of our low-level skill embedding and does not reflect the capability of our high-level planner. > Q2: Looking at video results at 0:46 and 01:54, I don’t think the high-level diffusion planner is on par with recent models like MDM. Since the low-level motion VAE model for tracking (Sec 4.2) is a key contribution of the work, it’s very important to justify that it is necessary by showing that the DReCon baseline is still inferior when operating on more reasonable kinematic inputs. Qualitatively, we do notice that our generated plans have more jittering than motions generated by either MDM or MotionDiffuse. This could be because MotionDiffuse applies temporal smoothing in its visualization, whereas we did not smooth our plans in ours. However, we show in Table 1 in the appended PDF that plan smoothing has a minimal effect on the tracking result. Regarding the concern that a poor kinematic motion undermines the comparison to DReCon, we would like to highlight that producing an executable plan can be very difficult for generative planners. This has also been discussed in the recent work PhysDiff [B], which shows that the physical plausibility of motions generated by MDM leaves substantial room for improvement. In particular, we observe that the number of invalid state transitions increases under the waypoint-conditioned setting. The mismatch between global motion and locomotion makes it extremely difficult for DReCon-like trackers to track. Note that DReCon has only been verified on a small-scale 10-minute motion database with high-quality motion data generated by Motion Matching.
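The temporal smoothing mentioned above (applied by MotionDiffuse for visualization) can be as simple as an exponential moving average over the pose sequence. The following generic sketch (my own illustration, not the authors' code) smooths a jittery 1-D signal and measures jitter as the mean absolute frame-to-frame difference.

```python
# Generic temporal-smoothing sketch: exponential moving average (EMA) applied
# to a jittery scalar trajectory, as one might do per pose dimension.
def ema_smooth(sequence, alpha=0.3):
    """EMA smoothing: smaller alpha gives stronger smoothing (more lag)."""
    smoothed = [sequence[0]]
    for x in sequence[1:]:
        smoothed.append(alpha * x + (1.0 - alpha) * smoothed[-1])
    return smoothed

def jitter(seq):
    """Mean absolute frame-to-frame difference, a simple jitter measure."""
    return sum(abs(b - a) for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

# A jittery signal: a slow ramp plus alternating high-frequency noise.
raw = [0.1 * t + (0.2 if t % 2 == 0 else -0.2) for t in range(50)]
smooth = ema_smooth(raw)
```

Such smoothing reduces visual jitter at the cost of lag behind fast motion, which is one reason the rebuttal checks (Table 1 of their PDF) whether smoothing the plans actually changes the tracking result.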
> Q3: The methods Sec 4 is missing some details that could improve understanding and reproducibility: An evaluation on the low-level tracking component by itself would be very helpful. There are also open-source RL physics-based trackers that may be worth considering, e.g. [Luo et al., Dynamics-Regulated Kinematic Policy for Egocentric Pose Estimation, NeurIPS 2021]. We thank the reviewer for pointing this out. Due to the character limit, we will address your main concern by providing an evaluation of the tracking module, and will add more details regarding the tracking module in future iterations or the discussion period. As shown in Figure 1 in the appended PDF, our motion tracker achieves excellent tracking performance, with a pose error smaller than 0.05m. We also show in Table 2 in the appended PDF that our implemented DReCon tracker achieves an FID of 0.086 when tracking the test dataset. Although motion tracking is not our main contribution, we highlight that our low-level policy is trained on a very large motion database, and achieving this tracking performance is challenging. Thanks for bringing the work by Luo et al. to our attention! However, although that work provides an RL-based motion tracker, its setting is a simplified version of ours. Concretely, in their setting there is a residual force, i.e., a "hand of god," which is an external force acting on the character. Although the low-level control can be simplified and the motion quality improved with this additional external force, the generated motion is not technically physically plausible due to the invisible force. In contrast, in our setting all forces are produced by the character itself, which follows the standard physics-based character animation setting and hence is more challenging. Additional References: [A] Zhang et al., “MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model”, ArXiv, 2022.
[B] Yuan et al., “PhysDiff: Physics-Guided Human Motion Diffusion Model”, ICCV, 2023. --- Rebuttal Comment 1.1: Title: Followup question Comment: I would like to thank the authors for their thorough response to my points and questions. Could you please expand on why the results from MDM/MotionDiffuse (in Table 1 of the attached document) are not directly comparable to the output of "Planner" in Table 2? Both methods output a set of joint positions/rotations that can be used to compute the metrics, correct? Is it because InsActor uses a different skeleton topology than the SMPL body used in MDM? Thanks! --- Reply to Comment 1.1.1: Title: Answer to followup question Comment: Thank you for your question! Yes, the difference in topology is part of the reason. Additionally, variations in body proportions and motion representation prevent us from directly adopting the evaluation code from text-to-motion works. **Topology Difference:** Our character has a topology that slightly differs from a standard SMPL model, as it lacks the middle joint of the spine, referred to as "spine 1." **Body Proportion Difference:** The character we used originates from DeepMimic [23]. Its body proportions differ from the template skeleton employed in the standard evaluation code. This leads to variations in forward/inverse kinematics results. **Motion Representation Difference:** InsActor utilizes a standard state representation in physics-based character animation. This includes link position, rotation, linear velocity, and angular velocity in the global frame. It's worth noting that a link refers to a rigid segment of an articulated body, such as an arm or leg segment, connected by joints. Conversely, the motion representation in text-to-motion works relies on joint information in the local frame and excludes per-joint angular velocity. Due to these differences in generation space, we constructed our evaluation pipeline from scratch, following the guidelines of Guo et al. [7].
This process included training the contrastive models instead of leveraging pretrained ones. As a result, the numerical outcomes aren't directly comparable. Furthermore, converting from the InsActor representation to the text-to-motion representation is feasible. However, such conversions are lossy; for instance, converting InsActor's global link positions to SMPL's local joint positions discards information. Nonetheless, we acknowledge that employing the text-to-motion codebase to assess the converted InsActor output offers a direct comparison with purely kinematic baselines and might yield valuable insights. We will include this converted comparison in a future version.
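The global-to-local conversion being discussed (global link positions to root-relative joint positions) can be sketched generically. The 2-D planar case and the yaw-only heading removal below are illustrative assumptions of how such a conversion could look, not the authors' actual evaluation pipeline.

```python
import math

# Generic sketch of converting global joint positions into a root-relative,
# heading-normalized frame (illustrative assumption, not the paper's code).
def to_root_relative(positions, root, heading):
    """Subtract the root position and undo the root's yaw (heading) rotation.

    positions: list of (x, y) global positions; root: (x, y); heading: radians.
    """
    c, s = math.cos(-heading), math.sin(-heading)
    local = []
    for x, y in positions:
        dx, dy = x - root[0], y - root[1]
        # Rotate the offset by -heading so "forward" maps to the local x-axis.
        local.append((c * dx - s * dy, s * dx + c * dy))
    return local

root = (2.0, 3.0)
heading = math.pi / 2  # character facing the global +y direction
# A joint one unit in front of the root along its heading direction.
global_joints = [(2.0, 4.0)]
local_joints = to_root_relative(global_joints, root, heading)
```

In the local frame the example joint lands one unit along the forward axis, i.e. near (1.0, 0.0); the inverse mapping (rotate by +heading, add the root) is what such a conversion discards when velocities or extra joints have no counterpart in the target representation.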
Summary: This work proposes InsActor, a physics-based character control framework that enables controlling agents to follow high-level instructions. It employs a high-level diffusion motion planner and a low-level character controller to achieve mapping between text-based instructions and physics-based motion control. By creating waypoints and desired state transitions, the diffusion motion planner specifies the motion plan that the low-level motion controller follows. The low-level motion controller consists of a pre-trained VAE encoder to translate diffusion state into actionable latents for control. Strengths: - I find the idea of using a higher-level motion diffusion model to specify waypoints and states, and then using a lower-level motion controller to follow them, intuitive and easy to understand. This formulation is flexible and creates the ability to have both high-level control of motion through text and slightly lower-level control through waypoints. - The low-level skill discovery module based on diffMimic itself seems like a significant contribution, as it has the potential to be used in many tasks and enables character control. Using a VAE to translate invalid state transitions into actionable latent codes for stable control has great potential in other downstream tasks. - Experiments show that the methods outperform the in-house implementation of the state-of-the-art methods PADL and DReCon. Weaknesses: - The main weakness I find in this work is its qualitative results. The simulated character is jittery, floaty, and appears to have foot sliding, which is unexpected for a physics-based method. For instance, at the 1:43 mark of the video, InsActor's feet quickly shuffle in a way that should be impossible in a physics simulator. The character seems overall drunk when walking, and there are occasional high-frequency jitters in the root (or camera?), for instance, at 00:48. I am not sure what could have caused this issue: is it the policy? 
or the setting/fidelity of the Brax simulator? - Similar to the previous point, the visual results of the implemented PADL and DReCon are far inferior to the results shown in their original papers. The movement is unnatural and jittery. I understand that there is no official implementation provided, but similar scenarios could be recreated in InsActor to create visual comparisons. PADL's generated motion is stable and physically realistic, unlike the ones shown in the provided video. As motion generation is largely visual, the provided quantitative results do not really provide that much insight into the real performance of the method. - Visualization should be provided for the high-level diffusion planner. Since states are directly generated by the planner, they should be visualized and compared with existing models such as the Motion Diffusion Model (MDM). The lower-level controller, combined with the latent skill decoder $p_\psi$, forms an imitator that follows specified states. How well does the diffusion model generate the states, and how “invalid” are they? - The claim of "long, unstructured human commands" is overstated. The tested text instructions are still short, clear, and close to the ones in the dataset. - There are many missing details about the performance of the lower-level skill discovery module. If the whole HumanML3D dataset is used for training, how well can the skill embedding encapsulate the motion described in the dataset? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: My main concern is the fidelity of the generated motion. Why does the character appear to be floaty and jittery? Is it the learned policy or the simulator settings? --- After rebuttal, my main concern about the fidelity of the generated motion remains. However, after discussion with other reviewers and the authors, I believe that this issue could be a problem of the underlying simulator and should not undermine the overall contribution of the framework.
Thus, I raise my score to borderline accept. --- Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. We would like to address your concerns as follows: > Q1: The main weakness I find in this work is its qualitative results. The simulated character is jittery, floaty, and appears to have foot sliding, which is unexpected for a physics-based method. Our results are directly exported from the Brax simulation engine, which conforms to the physics-based character animation setting. However, we do observe some artifacts in our qualitative results. They may come from multiple sources: 1. Imperfect motion retargeting. For the main results in the paper and the teaser video, we retarget the simulated motion to a character (a white Y-Bot from Adobe Mixamo) for better visual appearance. However, to preserve the motion content, we copy the rotation angles and add a vertical offset, which cannot fully account for the difference in bone lengths and introduces additional physical implausibility such as foot floating and foot penetration. 2. The Brax simulator smooths contacts to allow differentiability, which is a known issue [28]. This approximation can lead to foot sliding or incorrect collisions. Brax is relatively new and differentiable physics simulation is still in rapid development; therefore, we believe that this issue can be addressed by a better differentiable physics simulator (DPS). 3. Imperfect motion planning and motion tracking policy. Although we are using a state-of-the-art text-to-motion diffusion generator (MotionDiffuse), the generated motion plan can contain jittery and impossible state transitions. For motion tracking, although we used the state-of-the-art motion tracking algorithm (DiffMimic), tracking a very large motion database is non-trivial and can lead to performance degradation. > Q2: Similar to the previous point, the visual results of the implemented PADL and DReCon are far inferior to their originally shown results. Our task is much more challenging than those of DReCon and PADL. 
DReCon tracks 10 minutes of motion capture, PADL tracks 9 minutes, while our model uses HumanML3D, a dataset with 28.59 hours of diverse motions. This large dataset presents significant tracking challenges. Since neither PADL nor DReCon has an official implementation, to compare fairly, we reproduce them using DiffMimic. Our tracker achieves low pose tracking error on motion datasets, enabling a high-quality reproduction. We refer the reviewer to Table 2 in the uploaded PDF file for more details. > Q3: Visualization should be provided for the high-level diffusion planner. How well does the diffusion model generate the states and how “invalid” are they? We have provided some plan visualizations in the supplementary video at 00:45 and 01:54. Quantitatively, Table 2 in the appended PDF shows that our planner achieves strong generation performance. Examples of "invalid" state transitions are demonstrated in our supplementary video at 00:45, revealing artifacts like floating, foot sliding, and jittering. These issues are even more pronounced in the waypoint heading task, as demonstrated in our supplementary video at 01:54. As a result, directly tracking these plans can easily lead to failure, as shown in the DReCon results displayed alongside, whereas InsActor overcomes this with skill embeddings. Notably, the lack of physical plausibility in kinematic motion generation, addressed in PhysDiff [A], aligns with our finding that MDM lacks it. Incorporating physics-based optimization enhances MDM's scores on physical plausibility, supporting our observation of infeasible state transitions. > Q4: The claim of "long, unstructured human commands" is overstated. The tested text instructions are still short, clear, and close to the ones in the dataset. Our claim regarding previous works' inability to handle "long, unstructured human commands" refers to the longer and more complex language we address. 
For example, PADL processes sentences like "jump and swing sword down" and "shield charge forward," using Multiple Choice Questions to structure the input. In contrast, InsActor directly interprets intricate language, such as "the person raises their left foot up to their knee and then kicks their right foot out, then returns the foot to their knee," as demonstrated in our supplementary video. Additionally, InsActor comprehends more general human language thanks to the larger motion database. While PADL operates on 131 individual clips totaling around 9 minutes, InsActor employs the HumanML3D dataset with 14,616 motions spanning 28.59 hours. We acknowledge the potential misunderstanding arising from the "long, unstructured human commands" phrasing and will address this in our next version. Nevertheless, leveraging our extensive text-motion dataset, we anticipate our language understanding capability to improve with more descriptive datasets in the future. > Q5: There are many missing details about the performance of the lower-level skill discovery module. If the whole HumanML3D dataset is used for training, how well can the skill embedding encapsulate the motion described in the dataset? In the general response, we show that the tracking error is low and on par with previous motion tracking works. In addition, the FID is 0.086 when tracking ground-truth motions. We agree that using a single policy network to encapsulate all motion skills on a large motion dataset can be suboptimal. As previous works in motion tracking have shown [B], a mixture-of-experts (MoE) ensemble can largely improve the model capacity and better capture dynamic moves like break dancing. However, adapting that system to skill embeddings is non-trivial and out of the scope of this paper, which focuses on understanding human language instructions. Nonetheless, we believe that a stronger skill embedding module is an important next step and will be a crucial addition to our framework. 
Additional References: [A] Yuan et al., “PhysDiff: Physics-Guided Human Motion Diffusion Model”, ICCV 2023 [B] Won et al., “A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters”, SIGGRAPH 2020 --- Rebuttal Comment 1.1: Title: Reviewer Response Comment: I thank the authors for the detailed response! My concern about the motion quality remains. I understand that artifacts such as penetration, foot sliding, or wrong collisions could be a result of the Brax simulator, but the ones shown in the videos are very obvious and jarring. At a bare minimum, collision and penetration need to be properly modeled to ensure physical plausibility; otherwise, what is the purpose of using a simulator? Claiming that the motion is physically plausible while the generated motion has large non-physical artifacts seems disingenuous. If the physics simulator (Brax) is the issue, then more investigation needs to be made or a different simulator used. I understand that DPS is under rapid development, but enjoying its benefits (being differentiable) while not considering/discussing its drawbacks seems to be misleading for the community. Purely diffusion-based methods for motion generation seem to generate much smoother and better-looking motion overall. Results shown in the MotionDiffuse paper seem to be better than the results shown here (00:45 and 01:54). What could be the cause? For the tracking error, I think acceleration error and velocity error, in addition to MPJPE, are needed for a better picture. How many sequences can be tracked successfully? --- Reply to Comment 1.1.1: Title: Answer to Reviewer Response Comment: Thanks for opening up about your concerns! > Claiming that the motion is physically plausible while the generated motion has large non-physical artifacts seems disingenuous. 
All of our results are fully simulated in Brax, a well-received physics engine, so we respectfully disagree that claiming the motion is physically plausible is disingenuous. It is noteworthy that physics simulation engines like Bullet and Isaac Gym do not guarantee perfect physics simulation either, which does not undermine the significance of the research built on top of them. > If the physics simulator (Brax) is the issue, then more investigation needs to be made or a different simulator used. We agree that alternatives to Brax could be explored, which may lead to better visual results. However, the training efficiency of DiffMimic, which benefits from the differentiability of Brax, is key to conducting research on a large-scale language-to-motion dataset. Note that, as described in ScaDiver, training a tracker with a non-differentiable physics simulator like PyBullet on the AMASS dataset takes tremendous effort: training a single expert policy takes up to 6 days, and training the mixture-of-experts controller takes another 10 days. Despite the possibility that alternative physics simulators could reduce the artifacts, Brax is a reasonable choice for us at this point, considering that it has significantly better training efficiency and our research focus is not on motion tracking. > I understand the artifacts such as penetration, foot sliding, or wrong collision could be a result of the Brax simulator, but the ones shown in the videos are very obvious and jarring. As stated in the rebuttal, Brax is not the only source of visual artifacts. Foot penetration/floating results from the naive motion retargeting from the simulated character to the visualization character rendered in __Blender__. The jittering is a result of imperfect motion plans. Brax is mainly responsible for the foot sliding problem. We do not observe foot penetration/floating or jittering when tracking ground-truth motions with Brax. 
> At a bare minimum, collision and penetration need to be properly modeled to ensure physical plausibility; otherwise, what is the purpose of using a simulator? Physics simulation also enables interaction with the physical surroundings, which purely kinematic generation inherently falls short of. > I understand that DPS is under rapid development, but enjoying its benefits (being differentiable) while not considering/discussing its drawbacks seems to be misleading for the community. We agree that more limitations of using DPS should be discussed in the paper. We will improve on that in future versions. > Purely diffusion-based methods for motion generation seem to generate much smoother and better-looking motion overall. Results shown in the MotionDiffuse paper seem to be better than the results shown here (00:45 and 01:54). What could be the cause? As explained in the rebuttal, MotionDiffuse uses additional smoothing in the visualization: *This could be caused by the fact that MotionDiffuse uses temporal smoothing in the visualization, but we did not smooth our plans in our visualization.* Specifically, the temporal smoothing function is applied at text2motion/tools/visualization.py#L22 in their official repository. Therefore, the visualization in MotionDiffuse does not have jittering. Note that this smoothing module is __not__ applied in the evaluation of MotionDiffuse. Following the same protocol, we did not apply the smoothing module in our experiments. Please see the attached PDF for how adding this smoothing module does not affect the final simulated results. > For the tracking error, I think acceleration error and velocity error, in addition to MPJPE, are needed for a better picture. How many sequences can be tracked successfully? The definition of success in tracking can be vague. The fall rate successfully converges to zero after a few hours of training, where falling is defined as the pelvis dropping below 0.2 meters. 
In particular, we would like to refer the reviewer to the attached PDF, which shows that DReCon achieves a very low FID using the tracking module when tracking the HumanML3D test set. From this perspective, the tracking module can track most sequences successfully, as indicated by the FID. Thanks for the suggestion of MPJPE, acceleration error, and velocity error. We are aware that they are common metrics in human pose estimation, where the latter two penalize unnatural jittering. We will consider additionally including these metrics to justify our tracking performance. Given the limited time and the restricted rebuttal policy, we are unable to provide additional demo videos/visualizations or experiments to fix the issue. We thank you for your valuable questions and will further address them in our revisions.
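The pose metrics debated in the exchange above (MPJPE, plus velocity and acceleration errors, which penalize jitter) can be sketched as follows. This is an illustrative reconstruction, not code from the paper; it assumes joint positions are given as NumPy arrays of shape (frames, joints, 3) sampled at a fixed frame rate.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: average Euclidean distance
    between predicted and ground-truth joint positions."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def velocity_error(pred, gt, fps=30.0):
    """Error on first-order frame differences (joint velocities)."""
    v_pred = np.diff(pred, axis=0) * fps
    v_gt = np.diff(gt, axis=0) * fps
    return np.linalg.norm(v_pred - v_gt, axis=-1).mean()

def acceleration_error(pred, gt, fps=30.0):
    """Error on second-order frame differences (joint accelerations);
    large values indicate high-frequency jitter."""
    a_pred = np.diff(pred, n=2, axis=0) * fps ** 2
    a_gt = np.diff(gt, n=2, axis=0) * fps ** 2
    return np.linalg.norm(a_pred - a_gt, axis=-1).mean()
```

A constant positional offset between prediction and ground truth yields a non-zero MPJPE but zero velocity and acceleration error, which is exactly why the reviewer asks for the latter two: they isolate jitter from static offset.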
Summary: The paper proposes an approach for generating physically plausible human motion from open text prompts. The approach combines high-level trajectory generation with guided diffusion and a low-level skill model encoded in a latent space to correct for physical plausibility during execution in simulation with PD control. The proposed approach is compared against existing motion generation approaches with a set of quantitative metrics and qualitative evaluation. Strengths: Overall this work soundly presents an improvement to SoTA in the space of human motion generation from text prompts. The use of guided diffusion to produce high-level trajectories is not surprising, nor is its effectiveness when combined with low-level skill models as a method of correcting physical implausibilities and errors. The paper is well written and clearly describes the architecture, training procedure, and use of guidance for waypoint following. Evaluation is sound and clearly demonstrates the strengths of the approach. I appreciate the use of multiple quantitative metrics and stochastic evaluation of many samples. This work benefits from the existence and availability of its component parts (Brax, ControlVAE, motion diffusion models), but connects them to achieve fairly good results. I’m always happy to see motion generation connected to physics simulation instead of remaining kinematic. The results of this work seem fairly robust to perturbation, which is a good signal for the usefulness of future iterations of the approach. Weaknesses: While the resulting motion is fairly good, it isn’t yet achieving the level of fidelity to make it generally useful in generative contexts (e.g. game or animation characters). This work achieves its success from connecting other existing techniques, though the overall system is a sound contribution. The stack is trained considering only the character’s joint space and a specific character’s physical constraints. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Should cite Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion for its similar architecture (guided-diffusion trajectories + low-level physical controller) https://arxiv.org/abs/2304.01893 Table 3's ordering of metrics is different from the Metrics text section. Swap Multimodal Distance and FID in one or the other. Is it feasible to generalize this stack to different body shapes, dynamics, to include objects or other kinds of waypoints? Would it be possible/better to have some model-based or explicit low-level skill experts for specific sub-tasks (e.g. locomotion) to achieve higher fidelity in addition to general/open motion skills? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Discussion of limitations was pretty good and I appreciated the limitation notes throughout the paper. I did not see any mention of generalizing to other character morphologies (even different human body shapes). I currently believe this work is using a single embodiment as a strong assumption for achieving accuracy and physical plausibility. I would be interested in hearing more about the feasibility of generalization for applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. We would like to address your concerns as follows: > Q1: While the resulting motion is fairly good, it isn’t yet achieving the level of fidelity to make it generally useful in generative contexts (e.g. game or animation characters) This is a great point for us to elaborate on! As our approach delivers the first large-scale instruction-driven physics-based character baseline, we also identify several aspects that could improve motion fidelity: 1. Simulation quality. Although all of our results come from physics simulation, the differentiable simulator may not generate high-fidelity motions. We note that the Brax engine makes approximations to achieve differentiability. As a result, our resulting motion may have some artifacts, as observed. This is a known issue in the state-of-the-art motion tracker DiffMimic that we use. Nonetheless, we believe that this can be resolved by future efforts in developing a better differentiable physics engine. 2. Data scale. Being a data-driven approach, we recognize that the data scale is an important bottleneck. Despite using the largest text-to-motion dataset, HumanML3D, it is far smaller than billion-scale text-to-image datasets. We believe that our approach serves as the first scalable instruction-driven physics-based character animation baseline and will continuously improve as more data becomes available. > Q2: This work achieves its success from connecting other existing techniques, though the overall system is a sound contribution. The stack is trained considering only the character’s joint space and a specific character’s physical constraints. Action generation in the joint space is the commonly adopted protocol in prior works [22, A], as it is general enough to achieve various kinds of motions. We use the same protocol in order to align with previous works. 
> Q3: Should cite Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion for its similar architecture. Thanks for bringing this paper to our attention! This is an important related work for InsActor that we shall discuss. We will add it to the manuscript in the final version. > Q4: Is it feasible to generalize this stack to different body shapes, dynamics, to include objects or other kinds of waypoints? We believe that our framework can be extended to include objects in the states. The high-level controller can generate motion plans conditioned on object states, and the low-level controller will execute the interaction. For example, given a chair, a plan for a character to sit on the chair can be generated and executed without specifically training for the task. However, extending our current waypoint encoding system to encode object information, including different types, sizes, and 3D locations, is highly non-trivial. In addition, the current motion data scale for human-scene interactions may not match the complexity of the problem. Therefore, we do not consider object interactions in this work. Nonetheless, we do notice recent works like [A] that show promising results on physics-based character-object interaction, and view it as vital future work. We believe our approach will serve as an important baseline that is extendable to character-scene joint modeling and expandable with additional human-scene motion data. We answer the question regarding different body shapes in Q6. > Q5: Would it be possible/better to have some model-based or explicit low-level skill experts for specific sub-tasks (e.g. locomotion) to achieve higher fidelity in addition to general/open motion skills? We agree that a single policy network for all motion skills on a large dataset isn't optimal. 
Previous motion tracking works have demonstrated that an MoE ensemble enhances model capacity and captures dynamic moves more effectively [B, C]. In these works, the motion dataset is categorized into groups, and an expert policy is trained for each group. A higher-level policy then determines which expert policy to deploy. Although an expert ensemble is a promising approach to achieve high motion fidelity, implementing an expert system from scratch requires large engineering effort and can take a long time to train [C]. Since this work is focused less on motion tracking and more on language understanding, we choose DiffMimic, which can be trained efficiently as shown in Figure 1 in the attached PDF. > Q6: I did not see any mention of generalizing to other character morphologies (even different human body shapes). 1. Different morphology: This direction is intriguing but remains challenging due to current data limitations. Effective high-level planner training relies on ample text-motion pairs. However, data for non-human character morphologies is scarce, hindering high-level planner training. Cross-morphology transfer attempts [D] are intricate due to dataset motion diversity, and would constitute a significant independent contribution. 2. Varied human bodies: Body variation is an interesting topic, and there are promising ways to achieve this level of generalization. For example, we can train the diffusion model with SMPL shape conditioning at the high level and train a shape-invariant controller following [D]. However, as the first work to consider language-guided physical motion generation, our emphasis is on language understanding. We leave it for future study. We identify these as crucial future work and will add a discussion in future versions. Additional References: [A] Peng et al., “AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control”, TOG 2021 [B] Wagener et al., 
“MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control”, NeurIPS 2022 [C] Won et al., “A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters”, SIGGRAPH 2020 [D] Won et al., “Learning Body Shape Variation in Physics-based Characters”, TOG 2019 --- Rebuttal Comment 1.1: Comment: I want to thank the authors for their responses to my questions. The responses satisfied my queries and I remain positive about this work. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer Mekg Comment: We are glad to hear that our response addressed your concerns! Thank you for your recognition of our paper and your valuable suggestions.
Rebuttal 1: Rebuttal: We thank the reviewers for the encouragement and insightful feedback. We are glad that the reviewers found * the problem “interesting” (R-ndRp) and “important” (R-s1C4); and * the results “good” (R-Mekg) and “promising” (R-ndRp), and that InsActor has the potential to be used in many tasks (R-K1zb); and * the evaluation informative (R-ndRp, R-Mekg, R-K1zb, R-7eMQ); and * the hierarchical design of InsActor “clever” (R-ndRp), “intuitive” (R-K1zb), and “simple and solid” (R-s1C4). We conducted additional experiments to address the reviewers' concerns. We refer the reviewers to the uploaded PDF file for more details. The experiments include: 1. A comparison between our diffusion planner and MDM. We show that our planner is built on a state-of-the-art text-to-motion generator. 2. More ablations on planning and tracking. We further verify the performance of our planner, show that there is a motion quality gap between the plan and plan tracking, and demonstrate the effectiveness of the tracker in our DReCon baseline. 3. Quantitative performance of the low-level control. We show the training curve of our skill mapping module, which converges to a sufficiently low pose error. We would like to address some common issues as follows: 1. Clarification of the high-level planner: * We adapted the high-level planner from MotionDiffuse [40], which is on par with MDM on standard text-to-motion generation benchmarks. * Some visual artifacts in plans are caused by not using temporal smoothing for visualization, which has minimal impact on the final results. * We emphasize that in the waypoint heading setting, generating an executable plan is particularly challenging and a compact skill mapping is necessary. 2. Generalization of the method: * Generalization to human-object interaction. We acknowledge the significance of character-object interaction in animation generation. Our approach serves as an important baseline, expandable with additional object/scene modeling. 
We view human-object interactions as vital future work. * Generalization to other morphologies. Effective high-level planner training relies on ample text-motion pairs. However, data for non-human character morphologies is limited; thus, transferring to different morphologies is non-trivial. 3. Tracking module details: * The tracking error of the low-level policy is low and on par with previous motion tracking works. * Compared with standard trajectory optimization, the design of InsActor is advantageous in terms of computational efficiency, robustness during deployment, and flexibility for future developments. While we agree with the importance of some limitations the reviewers raised (e.g. lack of object interaction), we hope our rebuttal highlights why addressing these limitations constitutes a full, separate contribution (e.g., requiring more data). We kindly ask the reviewers to let us know if further clarification or information is needed. Pdf: /pdf/af26dafd0871691c06795ec221480655a5351ae8.pdf
Dataset source: NeurIPS_2023_submissions_huggingface
Conference year: 2023
Summary: The paper presents an approach for generating animations using a language-conditioned neural network, in particular a diffusion model. The underlying goal is to generate physics-based human character animation in an easy and intuitive fashion. The approach leverages a diffusion model to generate high-level states of the animated character, which are then refined in a lower-level module. This module is realized via an autoencoder which generates skill embeddings representing the state transitions in a compact fashion. A decoder is then used to synthesize the final action of the agent. The approach also uses an inpainting strategy to ensure that the agent goes through critical phases or points, i.e., this seems similar to the difference between keyframes and in-betweens in traditional animation. The paper compares the introduced approach to other recent methods on the topic, e.g., PADL, and concludes, among other things, that more faithful animations (wrt the original language query) are generated. The contributions of the paper are the combination of diffusion and autoencoder models for motion generation, the inpainting strategy, and the waypoint system. Strengths: The paper addresses a really interesting and important problem that is very exciting. It is also one of the first papers to use diffusion models in this context, and the results seem promising. The combination of in-painting with diffusion is also very clever (but details are missing). A BIG plus compared to other similar systems is that it does not require RL. For example, PADL requires multiple policies to be learned and later combined, with each policy requiring 7 simulated years to be trained! That does not seem to be the case here since this is a purely supervised approach - even though substantial computation is required for all deep learning of course. I really appreciated the use of contrastive models to evaluate the faithfulness of generated motions. 
Diversity was also an interesting measure, since in animation it is critical that the character shows a range of motions. Generally, the paper is written in a way that can be followed (with caveats; see below). ** Update after rebuttal ** I have increased my score to "weak accept". I appreciate the details provided by the authors - this helped better contextualize and understand the paper. What is clear from the rebuttal process is that it is hard to judge the visual fidelity alone, despite the substantial effort the authors put into the evaluation section. Especially the choice of visualizing the resulting animations via retargeting to a robot (in Blender) may have negatively impacted the reviewers' opinion. The animations in the latter part of the video are actually more convincing in my opinion. However, as pointed out before, the animations are not well-adapted to the environment, i.e., foot skating, actions on objects, etc. In future submissions on the topic, I strongly advise addressing this topic, even if only partially. Weaknesses: A major question that the paper is not addressing is the generation of actions on objects and surroundings. A critical element of any character animation is to be able to specify an object acted upon, e.g., "push the red object on the table". That is not addressed at all in this paper, while it is addressed in other papers in robotics (the manipulation papers) and also in animation (PADL). In PADL, a module specifically identifies the target object and animations are synthesized accordingly. In the robotics papers [Lynch2020, Stepputtis2020], the actions of the agents are also conditioned on an image of the surroundings, and target objects are visually identified in the image. Potentially that could be included in InsActor through the waypoint system (maybe?), but it is so far not addressed in the paper. 
Personally, this is the biggest flaw of the paper, since at the moment it would not be appealing to me or many others to use this system. Along these lines, the presented example results are not very convincing with regard to the complexity of the generated motions. Another weakness of the paper is that it seems important details are left out or not well justified. For example, the waypoint encoding is interesting but should be explained with an example. I am particularly unsure what this entails for the user - do we have to place the character at those waypoints and set all of its joint values? This would require the user to actually become an animator and generate keyframes. That could be a serious limitation if true. Right now I am assuming we only need to position the basic character at the waypoint (i.e., modify the position and orientation of the agent), not move its limbs (i.e., modify joint angles). The paper states "...replace the Gaussian noise in the first and last 25% frames with the noisy states of the character standing at the starting position and target position respectively." Personally, I would really have liked to see a long sequence in which the agent performs a number of different actions in a row. At SIGGRAPH they often have these cute teaser figures in which a humanoid walks, crouches, jumps, etc. over a period of time. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What exactly is the transition function? I am assuming that it is the differentiable simulator, right? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: As described above, the lack of interaction with the environment and linguistic goals is a major limitation that has not been brought up in the paper. 
The system is not able to execute commands like "lift the green cup" or "move to the lion statue". That substantially limits the applications of the approach. It is also surprising that the authors mention problems like foot skating at the beginning but do not describe how they are resolved later in the paper. While I understand that in a physics-based system the dynamics of the environment can make some of these considerations obsolete, there is still room for broken animations if this is not adequately addressed. Hence, I would ask the authors to address this point in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for comprehensive comments. We would like to address your concerns as follows: > Q1: A major question that the paper is not addressing is the generation of actions on objects and surroundings. Potentially that could be included into InsActor through the waypoint system (maybe?) We agree that enabling character-object interaction is important in animation generation. Although it is a more challenging setting, we believe that our framework will be compatible with it by additionally including objects in the states. The high-level controller can generate motion plans conditioned on object states and the low-level controller will execute the interaction. For example, given the information of a chair, a plan for a character to sit on the chair will be generated and executed without specifically training for the task. However, extending our current waypoint encoding system to encode object information including different types, sizes, and 3D locations is highly non-trivial. In addition, the current motion data scale for human-scene interactions may not match the complexity of the problem. Therefore, we do not consider object interactions in this work. Nonetheless, we do notice recent works like [A] that show promising results on physics-based character-object interaction, and view it as vital future work. We believe our approach will serve as an important baseline that is extendable to character-scene joint modeling and expandable with additional human-scene motion data. > Q2: Another weakness of the paper is that it seems important details are left out or not well justified. For example, the waypoint encoding is interesting but should be explained with an example. This is a great point for us to clarify! As described in the main paper, the general idea of waypoint conditioning is to encode a sequence of mean poses at the starting and the ending locations.
Specifically, we first set the positions and rotations of all body links to zero, which yields a mean pose after denormalization. Then, we add a normalized position offset to move the mean pose to the starting point and the ending point. At both the start and the end, we pad the mean pose for 25% of the sequence length. In the diffusion sampling process, we fix the noise-injected mean poses following the inpainting technique. All the rest of the plan, including velocity and angular velocity in the beginning and ending states, will be generated by the diffusion planner. We will add an illustrative figure with a concrete example in future versions. > Q3: Personally, I would really have liked to see a long sequence in which the agent performs a number of different actions in a row. At Siggraph they often have these cute teaser figures in which a humanoid walks, crouches, jumps etc. over a period of time. Thanks for the great suggestion! InsActor is able to perform different actions in a row with the combination of language conditioning and waypoint encoding. Figure 1 included in the paper is achieved by doing so. Concretely, Fig. 1 includes four text prompts which instruct the character to sequentially crouch, jump, walk, walk like a zombie to a waypoint, and finally end with a kick. We have also shown the corresponding animation in the supplementary material. > Q4: What exactly is the transition function? I am assuming that is the differentiable simulator, right? Yes, the transition function is the differentiable simulator/dynamics. Additional References: [A] Hassan et al., “Synthesizing Physical Character-Scene Interactions”, SIGGRAPH, 2023 --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed and careful response to both my questions and the questions of the other reviewers! I am very impressed by the deep discussions that were possible in this rebuttal. Truly appreciated!
What is clear from the rebuttal process is that it is hard to judge the visual fidelity alone, despite the substantial effort the authors put into the evaluation section. In particular, the choice of visualizing the resulting animations via retargeting to a robot (in Blender) may have negatively impacted the reviewers' opinion. The animations in the latter part of the video are actually more convincing in my opinion. More generally, I think all reviewers agree that the animations are not well-adapted to the environment, i.e., foot skating, actions on objects, etc. As a result, it seems like something is missing in this framework. Even a partial solution to this problem could have substantially added to the appeal of the paper and would have made it a clear 'accept'. That being said, I appreciate the effort the authors put into the rebuttal - it definitely helped me better understand the nuances of the approach. I also think that (as mentioned before) the quality of the stick figure animations (second part of the video) is fairly convincing and would even be appreciated by the computer animation community. I also acknowledge that the authors basically reimplemented the two other methods they compared against, since they did not find a public implementation. This can take a lot of time and effort. Based on the above, I will slightly increase my score. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer ndRp Comment: Thank you for raising the score! We truly appreciate your suggestions for future improvements and your acknowledgment of our efforts in establishing a systematic evaluation pipeline and re-implementing baseline approaches.
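As a reader's aid, the inpainting-style waypoint conditioning the authors describe (a mean pose at zero features, a position offset to the start/target, and the first/last 25% of frames fixed to noisy versions of those states) could be sketched roughly as follows. The function name `waypoint_condition`, the `[T, D]` plan layout, and the assumption that the first three channels hold the root position are all illustrative, not the paper's actual implementation.

```python
import numpy as np

def waypoint_condition(noise, start_pos, target_pos,
                       noise_scale=1.0, pad_frac=0.25, rng=None):
    """Replace the Gaussian noise in the first and last 25% of frames with
    noisy states of a mean pose standing at the start/target positions.

    Assumes a normalized plan of shape [T, D] whose first 3 channels are a
    (hypothetical) root position; all-zero features denote the mean pose.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    T, D = noise.shape
    pad = int(T * pad_frac)
    start_state = np.zeros(D)
    start_state[:3] = start_pos   # mean pose moved to the starting point
    end_state = np.zeros(D)
    end_state[:3] = target_pos    # mean pose moved to the target point
    cond = noise.copy()
    cond[:pad] = start_state + noise_scale * rng.standard_normal((pad, D))
    cond[-pad:] = end_state + noise_scale * rng.standard_normal((pad, D))
    return cond
```

In an actual diffusion sampler these padded frames would be re-fixed at every denoising step, following the inpainting technique; the middle frames, including velocities, are left to the planner.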
A new perspective on building efficient and expressive 3D equivariant graph neural networks
Accept (poster)
Summary: This paper presents the expressive power of equivariant graph neural networks in the context of 3D local isomorphism from a 2D local isomorphism perspective. Similar to the subgraph isomorphism of GNNs, the paper proposes three types of isomorphism in 3D space: Tree Isometric, Triangular Isometric, Subgraph Isometric. Based on the above definitions, the paper defines geometric Weisfeiler-Lehman (WL) tests. Based on the conclusions in [1], this paper enhances the expressive power of equivariant GNNs by incorporating mutual 3D substructures. On the other hand, to further enhance the expressive power of the equivariant graph neural network, this paper introduces an SE(3)-invariant encoder to exploit mutual 3D structures. Through a theoretical analysis (Theorem 4.1), the paper demonstrates that simple aggregation of local messages cannot approximate global interactions, thus the authors further enhance the model's expressive power through the frame transition matrix. The effectiveness of the proposed model is demonstrated by experimental results on scalar and vector properties of molecules on QM9. [1] A New Perspective on "How Graph Neural Networks Go Beyond Weisfeiler-Lehman?" Strengths: 1. To the best of my knowledge, this is the first paper to analyze the expressive power of equivariant neural networks from the 3D isomorphism perspective. This could inspire future work to analyze the expressive power of current equivariant neural networks from this perspective. 2. Based on the analysis proposed in the paper on the expressivity of equivariant neural networks, the paper incorporates mutual 3D substructures into 3D GNNs (LSE in this paper) to further enhance their expressive power. The effectiveness of the proposed analysis and methods has been demonstrated experimentally. 3. To reduce the computational cost of neural sheaf diffusion, the paper proposes a novel frame transition matrix to express global geometric information.
The computational complexity analysis in line 304 validates the effectiveness of the proposed method. Weaknesses: 1. I have to say that the notation in this paper is confusing, which imposes a burden on me. For instance, h is defined as a node feature on line 81, and f is a tensor-valued function on line 70. However, f is then used as a bijection on line 112. Does this mean the definition of h on line 112 is different from that on line 81? 2. Although the analysis of the expressivity of 3D GNNs is the first from the 3D isomorphism perspective, this paper seems to be based on the analysis in [1]. Moreover, the method proposed by the analysis in [1] can incorporate structural properties into many GNNs, such as GCN, GIN, and GAT. Can this paper similarly incorporate mutual 3D substructures into other 3D GNNs? 3. Recently, some work [2] has analyzed the expressive power of GNNs from the perspective of the attention mechanism, so how can we analyze the expressive power of 3D GNNs from the attention mechanism? [1] A New Perspective on "How Graph Neural Networks Go Beyond Weisfeiler-Lehman?" [2] Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism Technical Quality: 3 good Clarity: 3 good Questions for Authors: See my comments on weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: In my personal opinion, one of the limitations of this paper is that the notation is a bit confusing, and the paper's layout is somewhat disorganized, making it hard to follow. I would suggest that the authors provide a guideline at the end of the introduction to help readers understand the role of each section. 
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes Flag For Ethics Review: ['No ethics review needed.']
Rebuttal 1: Rebuttal: ## Response to Reviewer b3yr - **W1**: I have to say that the notation in this paper is confusing, which imposes a burden on me. For instance, h is defined as a node feature on line 81, and f is a tensor-valued function on line 70. However, f is then used as a bijection on line 112. Does this mean the definition of h on line 112 is different from that on line 81? Sorry for the heavy notation. The $h$ on line 112 and line 81 denotes the node features. However, the repeated use of $f$ in lines 70 and 112 may cause a burden for readers. Do you think it’s better if we use $\varphi$ to denote the bijection map rather than $f$? - **W2**: Although the analysis of the expressivity of 3D GNNs is a first, this paper seems to be based on the analysis in [1]. Moreover, the method proposed by the analysis in [1] can incorporate structural properties into many GNNs, such as GCN, GIN, and GAT. Can this paper similarly incorporate mutual 3D substructures into other 3D GNNs? Thank you for your insightful question. Indeed, [1] has laid the groundwork with a novel perspective for crafting potent GNNs on 2D graphs. Building upon this inspiration, we extend our analysis to the realm of 3D graphs. The primary distinction between 2D and 3D graphs lies in the inclusion of not only nodes (with features) and edges (with or without features), but also 3D coordinates for each node. **It's exactly the 3D symmetry that makes our extension nontrivial.** Regarding various 3D GNNs: Yes, our proposed method can seamlessly integrate 3D geometric information into diverse 3D GNNs, provided that geometric features such as distances, angles, or equivariant steerable features are integrated. Specifically, the message passing block illustrated in Figure 3 (between LSE and FTE) can be applied to any equivariant 3D GNN (e.g., SchNet, GVP-GNN, EGNN, or invariant graph attention message passing). The equivariance of our method is guaranteed by Theorem 5.1.
Furthermore, in our experimental evaluation, we employ a previous SOTA method (PaiNN) as the backbone message passing scheme and enhance its capabilities by incorporating our LSE and FTE blocks. The adaptability of our approach across different 3D GNN architectures underscores its versatility and potential impact on advancing the field. - **W3**: Recently, some work [2] has analyzed the expressive power of GNNs from the perspective of the attention mechanism, so how can we analyze the expressive power of 3D GNNs from the attention mechanism? Thanks for bringing up this nice work. In fact, LEFTNet also implements the structural coefficients in Theorem 3.1 as a set of weights that are multiplied with the node features. Note that the attention mechanism can also be seen as defining a set of weights; the difference is that the attention coefficients go through a softmax normalization. Even though we do not see an immediate way to transfer the analysis in [2] to 3D equivariant GNNs, we do see it as a promising direction to enhance our understanding of 3D equivariant GNNs and improve interpretability. We will leave this as future work. - **W4**: In my personal opinion, one of the limitations of this paper is that the notation is a bit confusing, and the paper's layout is somewhat disorganized, making it hard to follow. I would suggest that the authors provide a guideline at the end of the introduction to help readers understand the role of each section. Thanks for the great suggestion! We will reorganize the structure with a roadmap at the end of the introduction. [1] A new perspective on "how graph neural networks go beyond weisfeiler-lehman?" [2] Interpretable and generalizable graph learning via stochastic attention mechanism. --- Rebuttal Comment 1.1: Comment: Thanks for your response. For the notation, I think $\varphi$ is ok. My concerns have been addressed. And I suggest improving the writing and reorganizing some parts.
--- Reply to Comment 1.1.1: Title: Thanks Comment: Thanks again for your suggestion! We will make sure to improve the organization in the revised version.
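The contrast drawn in the rebuttal above — structural coefficients and attention coefficients are both per-neighbor weights on messages, differing mainly in the softmax normalization — can be illustrated with a minimal sketch. The toy feature vectors and the helper `aggregate` are hypothetical, not code from either paper.

```python
import numpy as np

def softmax(scores):
    # softmax normalization: weights become positive and sum to one
    z = np.exp(scores - scores.max())
    return z / z.sum()

def aggregate(neighbor_feats, weights):
    # weighted sum of neighbor messages, one weight per neighbor
    return (weights[:, None] * neighbor_feats).sum(axis=0)

neighbor_feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
scores = np.array([2.0, 1.0, 0.5])

# structural coefficients: used as unnormalized multiplicative weights
struct_out = aggregate(neighbor_feats, scores)
# attention coefficients: same scores, but softmax-normalized first
attn_out = aggregate(neighbor_feats, softmax(scores))
```

The structural weights can rescale the aggregated message arbitrarily (encoding, e.g., substructure counts), while the softmax constrains the weights to a convex combination.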
Summary: This paper introduces local structure encoding and frame transition encoding for more expressive representation learning in 3D graph neural networks. The local structure encoding is inspired by observations in the proposed local hierarchy of 3D graph isomorphisms; specifically the observation that subgraph isomorphism has greater discriminative power than triangle or tree isomorphism. The frame transition encoding is inspired by the observation that not all types of invariant interactions between disjoint clusters can be expressed as functions of the union of those clusters. The authors show that the proposed 3D GNN exceeds the performance of or performs on par with a multitude of existing models. Strengths: * Originality: The proposed work appears original. The authors introduce a classification of local and global structure, and use it to classify existing 3D GNNs. They also show how to incorporate the most expressive types of local and global structural elements into a new 3D GNN and show that it outperforms/performs on par with existing methods. * Quality/Clarity: The proposed work is of good quality. The paper is well written, preliminary ideas are presented in an accessible way; the local hierarchy of 3D graph isomorphisms is paired with a very nice figure; the work is well motivated; the proposed model outperforms/performs on par with existing methods. * Significance: The problem of building expressive representations in GNNs is challenging and of significance to the community. Weaknesses: * Quality: see questions Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * SpookyNet is not cited or discussed although it shows very strong performance in the analysis. This method appears to have local and global features, it would be interesting to see where it lands in the proposed classification. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors discuss some limitations of the method Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer gxFS Thanks for your meticulous examination and insightful feedback. **Q1**: SpookyNet is not cited or discussed although it shows very strong performance in the analysis. This method appears to have local and global features, it would be interesting to see where it lands in the proposed classification. **Response**: Thank you for bringing this to our attention. We apologize for the oversight. In Table 2, we have compared our method with SpookyNet, but regrettably, we neglected to cite it in the main text. We will correct it in the revised version. SpookyNet is a strong baseline with both local and nonlocal interaction blocks. But **the meaning of “local” and “nonlocal” in SpookyNet is different from the local-to-global analysis in our paper.** - Specifically, we focus on the expressive power of 3D GNNs, in other words, the ability to distinguish different 3D structures. Here local means local 3D structures, and global means the whole input 3D structure, as used in [1]. - While in SpookyNet, **the local interaction aims to incorporate neighboring information into central nodes**, which is essentially what an MPNN layer can do. The nonlocal interaction aims to capture long-range interactions between nodes which cannot be captured by an L-layer GNN; therefore, they **use attention to consider interactions between all pairs of nodes**. This local and nonlocal idea is similar to the method in [2]. In short, we focus on the ability to distinguish different local and global structures, while SpookyNet focuses on incorporating local and global information. These two are different. For example, a SchNet-like model which considers edge distances during message passing indeed can capture local interactions, but it cannot distinguish different local structures.
It is worth mentioning that instead of simply using a SchNet-like model to capture local interactions, SpookyNet uses a more powerful model, as shown in Figure 3 of their paper. Specifically, it first constructs some basis functions based on Bernstein polynomials and spherical harmonics. These basis functions are then used to update node features in the local interaction block. Putting SpookyNet in Table 4, we can say that it satisfies LSE, but only partially satisfies FTE by using the nonlocal interaction. Note that it doesn't update equivariant features: the final outputs of the local interaction block are only invariant features. [1] ComENet: Towards complete and efficient message passing for 3D molecular graphs [2] Recipe for a general, powerful, scalable graph transformer --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will keep my score.
Summary: The authors analyze the expressive power of 3D equivariant GNNs and introduce a new expressive equivariant GNN architecture based on local node- and edge-wise frames. Strengths: The proposed way to achieve equivariance is well grounded and conceptually very interesting and novel. The construction is also based on solid theoretical motivation and achieves good results in practical experiments. The paper is well written. Weaknesses: The theory about the local hierarchy of 3D Graph Isomorphism (Section 3) follows relatively straightforwardly from existing works on equivariant GNN expressive power and subgraph GNN expressive power. That being said, the overall package is still very solid. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The local frame construction reminded me a bit of Frame Averaging by Puny et al. (ICLR 2022). It would be interesting to see how they stack up. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer 7xD9 Thanks for your meticulous examination and insightful feedback. **Response to the Weakness**: We appreciate your discerning observation and insightful feedback regarding the theoretical aspect of the local hierarchy of 3D Graph Isomorphism in Section 3. Indeed, the foundation of this theory draws upon existing works on 2D GNN expressive power and subgraph GNN expressive power. However, it's important to note that even within the context of established theories, the synthesis and customization of these ideas to our specific problem domain contribute to the overarching value of our work. We would like to emphasize the coherent theoretical logic that emerges from the synergy between Section 3, Section 4, and the neural sheaf interpretation in the appendix. To enhance the clarity of this logical progression, we will incorporate a roadmap at the conclusion of the introduction, highlighting the interconnectedness of these sections. **Q1: Frame Averaging by Puny et al. (ICLR 2022)** A1: Thanks. After carefully checking, we believe Puny et al. (ICLR 2022) [1] provided another way of building equivariant frames. However, their frames come from a PCA decomposition and are essentially global. The averaging is then a weighted sum over these global frames (a finite approximation of integration over the Lie group). We will add a discussion of this work to our related work section. [1] Frame Averaging for Invariant and Equivariant Network Design --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will keep my score.
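To make the contrast above concrete, a PCA frame in the spirit of Puny et al. can be sketched as follows. This toy version ignores the sign ambiguities of the eigenvectors (which frame averaging resolves by averaging over all sign choices), so it is illustrative rather than a faithful reimplementation.

```python
import numpy as np

def global_pca_frame(X):
    """A single global frame from PCA of the centered point cloud: the
    principal axes of the covariance, ordered by decreasing variance."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    return eigvecs[:, ::-1]                  # columns = axes, largest variance first

X = np.random.default_rng(0).standard_normal((10, 3))
F = global_pca_frame(X)
```

Because the frame is computed from all points at once, it is inherently global — unlike the node- and edge-wise local frames used in this paper.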
Summary: This paper investigates expressive message passing architectures for processing 3D geometric graphs with permutation and Euclidean symmetries. The authors first provide an investigation of a local hierarchy of isomorphism separability. Then, the authors show cases where powerful local invariant models may fall short of encoding global structures due to information loss during propagation, and aim to reduce the information loss by turning to equivariant message passing incorporating frame transition matrices that are related to neural sheaf diffusion. Based on the theoretical framework, the authors propose a message passing architecture that is expressive in terms of the local isomorphism separability as well as incorporating frame transition. Experimental results on invariant molecular property prediction on QM9 and equivariant force prediction on MD17 shows that the proposed approach outperforms previous equivariant neural networks. Strengths: S1. This paper addresses an important problem of understanding the theoretical expressiveness of geometric graph neural networks and developing architectures that are provably expressive. S2. The local isomorphism hierarchy proposed in Section 3 is novel as far as I can tell (in context of 3D geometric graphs) and correct. S3. The visual example shown in Figure 1 was helpful in understanding the proposed hierarchy of local isomorphism. S4. The connection of the proposed approach to sheaf diffusion mentioned in Appendix I was interesting. S5. The experimental results and ablation study on QM9 and MD17 seems to support the main arguments. Weaknesses: W1. The equality in Eq. 3 seems not correct as it neglects permutation symmetry. If we consider positional features represented as matrices $\mathbf{X}\in\mathbb{R}^{n\times 3}$ (Line 77), the precise symmetry under consideration is a direct product of permutation group and Euclidean group $S_n \times SE(3)$ (Dym et al., 2020). 
I think it should be clarified that the equality in Eq. 3 is up to permutation of nodes. W2. In Line 111, I think {-tree, -triangular, -subgraph} should be {tree-, triangular-, subgraph-}. Same applies to Line 603-604 and Line 612. W3. In the definition of tree and triangular isometries (Line 114-118), is there any reason to use the terms tree and triangular? I think the definitions are a straightforward extension of the subtree and overlap isomorphism proposed in Wijesinghe et al., 2022, respectively, and I see no reason to use different names. Especially, for the triangular isometry, I found the name misleading since the isometry condition is imposed on the entire set of triangles that share $e_{ui}$, rather than individual triangles. W4. In the definition of triangular isometry (Line 116), the condition is specified for each edge in the intersection of the local subgraphs $e_{iu}\in\mathbf{S}\_{i-j}$. Please correct me if I am wrong, but to make sense, I think the condition should be for each edge in one of the local subgraphs, $e_{iu}\in\mathbf{S}\_{i}$. W5. In Line 606, the term structure completeness is used without proper definition (although it is emphasized by boldface). Without a clear definition, I was not able to properly understand or verify the correctness of the argument in Line 608. W6. The argument in Line 162-164 is ambiguous; do you mean a composition of Atomistic Cluster Expansion and an invariant map constructed using scalarization by edge-wise equivariant frames is a universal approximator of SE(3) invariant functions, so it can serve as $\phi$ in Theorem 3.1? Is the proof of universality available in the literature? W7. For LSE, based on Appendix H, it seems that the hierarchy of local isomorphism proposed in this paper is a subcase of the geometric WL test proposed in Joshi et al., 2023.
Since this paper provides a 3-way classification while GWL provides a more sensitive separation of the expressive power of geometric GNNs based on the orders of node tuples and feature tensors, one can argue that the local isomorphism hierarchy proposed in this paper is of less practical interest compared to GWL. W8. For FTE, in Section 4, frame transition is introduced to solve the limitation of SE(3) invariant messages as outlined in Theorem 4.1. But as far as I know, a majority of practical equivariant geometric GNNs in the literature are already using equivariant (tensor) messages (Thomas et al., 2018; Fuchs et al., 2020; Satorras et al., 2022; Geiger et al., 2022; Kohler et al., 2019), and the expressive power of some of them (TFN (Thomas et al., 2018) and SE(3)-Transformers (Fuchs et al., 2020)) is shown to be universal, i.e., maximally expressive (Dym et al., 2020). Is the motivation of frame transition also relevant for these architectures? Also, are there specific reasons to favor the proposed frame-based propagation over these already equivariant message passing algorithms, despite the added complexity of incorporating node- and edge-wise frames? W9. In Line 214, does the frame here refer to frames in Riemannian manifold theory? Since there is no proper definition (Eq. 5 provides a description of a key property but is not a definition), I was confused about what exactly the frames mentioned here are and was unable to verify whether Line 216-221 is correct. It seems the precise definition and descriptions of the equivariant frame are completely deferred to Appendix B and F although it serves as a main component of the proposed architecture, which makes the paper hard to read and understand; I think key parts of the proposed algorithm should be described in the main text in a self-contained manner. W10. Overall, I find that the writing of the paper has room for improvement in terms of organization and readability.
There are many cross-references back and forth across the paper, so I couldn't serially read through each section without going through a large portion of the entire text multiple times. For example, Line 236-237 (Section 4) refers to Figure 3 (Section 5), Line 555 (Appendix A) refers to Eq. 21 (Appendix F), Line 592 (Appendix C) refers to Figure 3 (Section 5), Algorithm 1 (Appendix C) refers to Eq. 4 (Section 3), and so on. W11. In Eq. 7, I don't see how the equation describes equivariance. I think it should be something like $\mathbf{m}(g\mathbf{x}\_u) = \sum\_{i=0}\^l\mathcal{M}\^i(g)\mathbf{m}\_i(\mathbf{x}\_u)$, is this a typo? W12. Typo in Line 1 of Algorithm 1, gragh -> graph W13. Specification of the output is missing in Algorithm 1, and the output in Algorithm 2 is not clear since normal and boldfaced h are mixed in the algorithm. Furthermore, in Line 7 of Algorithm 1, how do you scalarize the mutual 3D structure, which is a subgraph, based on edge frame $F_{ij}$? I don't see how this can be done in Eq. 8 which defines scalarization. Similarly, in Line 6 of Algorithm 2, how do you scalarize the mutual 3D structure without a frame? Also, if Algorithm 2 is LSE only, why should one compute edge-wise frames in Line 5? W14. In the proof of Theorem 5.1, I can see that the proposed parameterization is universal, but does it correctly guarantee equivariance as well (as specified in Theorem 5.1)? The current proof seems to clearly show universality, but I am not sure about equivariance.
Dym et al., On the universality of rotation equivariant point cloud networks (2020) Wijesinghe et al., A new perspective on "how graph neural networks go beyond weisfeiler-lehman" (2022) Joshi et al., On the Expressive Power of Geometric Graph Neural Networks (2023) Thomas et al., Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds (2018) Fuchs et al., SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks (2020) Satorras et al., E(n) Equivariant Graph Neural Networks (2022) Geiger et al., e3nn: Euclidean Neural Networks (2022) Kohler et al., Equivariant Flows: sampling configurations for multi-body systems with symmetric energies (2019) Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The questions are included in the weakness section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have addressed the limitations of their work in Section 7. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer Wt9q > typos (W2, W4, W11, W12, W13 (Alg.1 and 2)) Thanks for your meticulous examination and useful feedback. We will fix them in the revised version. > Other questions **Due to the 6k character limit, we only provide short answers. We are happy to explain more if you have further questions. In addition, discussions about frame transition and its realizations (W8 and W9) are provided in the [general response](https://openreview.net/forum?id=hWPNYWkYPN&noteId=1Xmyobc4x8).** - W1: Eq.3 permutation symmetry: We will clarify that Eq. 3 is up to the permutation of nodes. - W3: name of the isomorphisms The sub-tree isometry corresponds to our tree isomorphism as the name suggests. However, the overlap isomorphism is subtle, in the sense that overlap is an obscure terminology for formal definitions. This obscurity becomes more severe when both 2D and 3D structures are present. **Therefore, we propose the more intuitive “triangular” as an alternative.** But you are right that the triangular isomorphism is based on the entire set of triangles that share a common edge (a bundle of triangles). We will add a remark to directly point out that the triangular isomorphism corresponds to 3D overlap isomorphism. - W5: definition of structure completeness Sorry for the ambiguity. The structure completeness in the Appendix refers to the ability to classify non-isomorphic 3D structures (the formal definition is in [1]). In other words, if a neural network can classify non-isomorphic 3D graphs into different categories, we say that this neural network is structure complete. The reason we add the restrictive prefix 'structure' is to emphasize that this completeness doesn't imply that a neural network has the universality of approximating any continuous function, as Sec. 4 discussed. - W6: about Line 162-164 1 and 2 below are two parallel ways to achieve a local universal invariant approximator: 1.
Applying local frames first transforms all equivariant quantities into invariant scalars (this procedure doesn't lose information, as the transform is invertible; see [2]), and then the rest of the argument follows from the universality of MLPs. 2. The Atomistic Cluster Expansion (ACE) can express equivariant functions universally, and the invariant encoder can then be built by adding a projection layer (which transforms the equivariant output into invariant scalars). This layer can be found in [3]; since ACE is composed of tensor products of equivariant vectors of all orders, and this tensor-product type of universality has also been proved in [3], we omit some details. - W7: compare to GWL: Thanks for asking this insightful question. First of all, our setup is different from [4] but highly related. We seek to implement expressive (and efficient) equivariant graph neural networks that avoid higher-order message passing schemes. Thus, we only considered one-hop message passing. The high-level philosophy of our local approach is similar to that of [5], which also goes beyond WL by encoding local structural coefficients. On the other hand, [4]'s theory is inspired by the classical potential theory of many-body interaction systems, which is complementary to our geometric view. Secondly, inspired by [5], we aim to develop a local hierarchy of isomorphisms that could characterize the expressiveness of 3D equivariant GNNs at a finer scale. We actually believe our local isomorphism hierarchy would be of practical interest to the community in the sense that it lets us quickly design efficient algorithms (e.g., LEFTNet) that improve expressiveness within the one-hop region. We do agree that a concrete technical connection between the local isomorphisms and GWL would be interesting for future work (analogous to Theorem 2 in [5] for the 2D case), as GWL is also a recent work.
- W13: Alg. 1: We will add the detailed output, which should be the output block (see the framework in Fig. 3) that takes all node features (invariant & equivariant) for the final prediction. Alg. 2: Typo here; all features should be invariant. Line 6 of Alg. 2: Typo here; it should be the same as line 7 of Alg. 1. The scalarization step (details in Fig. 3 and Lines 258-265): The scalarization of a subgraph is realized by scalarizing all the 3D position vectors within this subgraph. The mutual 3D structure $S_{i-j}$ is associated with edge $e_{i-j}$, and we build an equivariant frame for each edge. Then, we scalarize all the 3D particles inside $S_{i-j}$ through this edge frame by Eq. 8. In summary, LSE requires edge equivariant frames, and FTE requires node frames in our implementation. - W14: equivariance in Theorem 5.1: Since the key ingredient of the equivariance proof is similar to that in the appendix of [2] (also obtained by chasing the lower part of the commutative diagram in Line 786), we omitted some details on the equivariance side of the proof. We are happy to fill in the details now. Roughly speaking, equivariance is proved by showing that the composition of scalarization, the MLP, and tensorization is equivariant. The equivariance of the composition of scalarization and the MLP is immediate, since scalarization turns equivariance into invariance, and the MLP is then a transformation of invariant scalars. On the other hand, the tensorization defined in Lines 779-782 is a pairing of equivariant tensors and invariant scalars, which is equivariant by definition. Combining the two steps, we have proved that our implementation of FTE is equivariant. **Ref** [1] ComENet: Towards complete and efficient message passing for 3D molecular graphs.
[2] SE(3) equivariant graph neural networks with complete local frames [3] On the universality of rotation equivariant point cloud networks [4] On the expressive power of geometric graph neural networks [5] A new perspective on "how graph neural networks go beyond Weisfeiler-Lehman" **Note that we provide the response to W8-W10 in the general response.** --- Rebuttal Comment 1.1: Comment: Dear Reviewer Wt9q, Thanks again for your additional questions. We are eager to hear your thoughts about our response! We are also happy to discuss further if you still have any concerns. Sincerely, Authors
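As a concrete picture of the scalarization step discussed in the W13 answer above, here is a minimal numpy sketch (our own simplified construction, not the paper's exact Eq. 8): build an orthonormal frame from two edge vectors via Gram-Schmidt and project every 3D position onto its axes, turning equivariant vectors into rotation-invariant scalars.

```python
import numpy as np

def gram_schmidt_frame(a, b):
    """Orthonormal frame (e1, e2, e3) from two independent 3D vectors,
    e.g. two edge vectors attached to an edge of the 3D graph."""
    e1 = a / np.linalg.norm(a)
    u2 = b - (b @ e1) * e1
    e2 = u2 / np.linalg.norm(u2)
    e3 = np.cross(e1, e2)          # right-handed, so the frame rotates with the input
    return np.stack([e1, e2, e3])  # rows are the frame axes

def scalarize(positions, frame):
    """Project each 3D position onto the frame axes.
    A global rotation rotates positions and frame together,
    so these projections are rotation-invariant scalars."""
    return positions @ frame.T
```

Because the frame is itself equivariant, the projected coordinates do not change under any global rotation of the system, which is the invariance the W13 answer relies on.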
Rebuttal 1: Rebuttal: ## General Response We thank all the reviewers for their valuable comments and appreciate that all the reviewers find our study novel and interesting for 3D Equivariant GNNs. As mentioned by the reviewers, - the problem we are focusing on is challenging and of significance to the community (Reviewer Wt9q and gxFS), - our work is novel and original (all reviewers), - we have solid theoretical analysis (Reviewer 7xD9), - our paper is well-written with nice figures (Reviewer Wt9q, 7xD9, and gxFS), - our method achieves good results, and the ablation study can support the main arguments (all reviewers). We also would like to thank Reviewer Wt9q and Reviewer b3yr for pointing out some typos and repeated notation that may cause misunderstandings for general ML audiences. We will fix them in the revised version. In addition, we will add a roadmap at the end of the introduction, and rearrange the order of the appendix to avoid some cross-references, following reviewer Wt9q and b3yr’s suggestion. Here we add additional explanation about frame transition (FT) (**suggested by Reviewer Wt9q**), since FT is also one of our main contributions. W8: about frame transition: why do we need FT? Our introduction of frame transition (FT) addresses a universal local-to-global phenomenon intrinsic to equivariant neural networks. In this sense, FT is a concept designed for almost **all equivariant graph neural networks that implement local message passing as backbone models**. On the other hand, this concept assumes significance in the context of symmetry-aware processing, and there won’t be any frame (or coordinate) ‘transitions’ in non-equivariant neural networks used for 3D point clouds, which operate without considering symmetry. We acknowledge your observation regarding the prevalent use of equivariant architectures that implicitly incorporate aspects of frame transition. 
In fact, Table 4 of our work elucidates how these architectures partly encode frame transition information, thus **offering a principled basis for their empirical success**. Two of the relevant equivariant models you mentioned have inadvertently been omitted. We will promptly rectify this oversight by including them in Table 4. To complete our logical framework, we emphasize that invariant architectures can fully capture frame transition information as well. You will find a computationally efficient realization of this principle in Section H of the supplementary materials, where we explore the incorporation of neural sheaf and combined node-edge equivariant frames for a pure invariant encoding of FT. Regarding the **preference for our proposed frame-based propagation over existing equivariant message passing algorithms**, we underscore three key practical advantages: **1**. Our approach explicitly encodes Frame Transition (FT), offering **both equivariant and invariant methods** for FT encoding, as mentioned earlier. Furthermore, the expressiveness of our FT-encoding is guaranteed by theorem 5.1; **2**. Optimization Benefits: Our empirical findings demonstrate enhanced stability and faster training for our LSE + FTE modules. While expressiveness gauges approximation errors, optimization challenges persist in neural networks [2]. Hence, we advocate exploring diverse FT realizations to navigate optimization complexities; **3**. Multi-Modality Pretraining Ease: We observe that incorporating invariant architectures with other neural networks is seamless due to invariant scalar hidden layers. These layers can seamlessly integrate with invariant representations from other modalities [1]. Consequently, the invariant encoding of FT could confer advantages in such multi-modal pretraining scenarios. 
[1] Learn to Combine Modalities in Multimodal Deep Learning [2] DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators W9: about frame transition: definition of frames We will move the definition of equivariant frames to the main text. In line 214, the orthonormal frames and especially the orthonormal property are defined in terms of the ordinary Euclidean metric. Therefore, no Riemannian metric is needed. We only mention Riemannian geometry and the frame bundle in the limitations and future work section, as our local frame has a root in the Riemannian context as a basis of the tangent space attached to spatial points. W10: paper organization Apologies for the recurrent cross-references throughout the paper. We have taken steps to enhance this aspect. Given that our paper encompasses both theoretical groundwork and theory-inspired algorithm design, and the theory itself encompasses two intricately linked components—1. Local Analysis and 2. Local-to-Global Intermediate Analysis—cross-references become unavoidable and imperative to ensure the rigor of the article. To mitigate potential confusion, our approach involves furnishing intuitive explanations for each mentioned concept while reserving detailed arguments for the appendix (please consult the supplementary materials' version of the appendix, which is further refined). As part of our improvement strategy, we intend to: 1. Enhance the paper's organization by providing a more structured "roadmap" at the conclusion of the introduction section. 2. Relocate the **initial portion of Appendix F (as featured in the supplementary materials) to Line 556** to eliminate the need for a cross-reference.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Hardware Resilience Properties of Text-Guided Image Classifiers
Accept (poster)
Summary: The paper studies the hardware resiliency of image classifiers, that is, misclassification rates under random bit flipping. The authors show that initializing the last classification layer using CLIP embeddings can greatly improve hardware resiliency. To obtain CLIP embeddings, for every class, GPT-3 produces C different text prompts which are then averaged to produce a single embedding per class. Specifically, the original classification layer of dim $B \times C$ is replaced by a latent layer of dim $B \times E$ and a projection layer of dim $E \times C$. The projection layer is initialized with embeddings from a CLIP text encoder. Results are shown on VGG and ResNet on 2 metrics (Top2Diff and $\Delta$Loss, which is the difference in cross-entropy between the original prediction and the misclassified prediction). Post hoc, the authors observe that their proposed method has better saliency properties and lower last-layer activations. Strengths: * The paper proposes a simple change with low overhead that improves hardware reliability rates. * Since the CLIP embedding dimension is generally lower than the classifier bottleneck dimension, a side effect is that the number of parameters is also reduced. * Results are shown on a number of different convolutional architectures. Weaknesses: The positioning of the paper is a bit confusing. While finetuning pretrained CLIP models on ImageNet is not new in some sense, the paper seems to claim (e.g., in Figure 1 and Section 4.2) that this work proposes a new approach. The paper may be better positioned as an investigation of CLIP finetuning for hardware reliability. I would like to discuss this aspect with the other reviewers. I have a few more questions about the clarity of some metrics and experiments used in this work, which may be useful to the broader computer vision community. Please see the questions section below.
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Metrics -------- * It is a bit unclear to me what the "GoldenEye" benchmark actually is. The authors say "We use PyTorchFI [36] to perform the random bit flip, and we perform 4096 unique error injection experiments per layer, totaling more than 3.6 million experiments across all our models". Does the GoldenEye benchmark prescribe the exact way this error injection is done? * I suggest that the authors report the classification flip rate as well in Table 1. The 14x number is interesting but a bit unintuitive. Measuring how many decisions are flipped when a bit is flipped may be more intuitive. * The Top2Diff metric is unclear. "Top2Diff metric is simply the difference in classification accuracy between the top (correct) class and the second-highest class". I'm unsure what the classification accuracy of the second-highest class means. Did you mean logits? Experiments --------------- * Major: What is the difference between RandomInit in Table 2 and the baseline in Table 1? If the projection layer is initialized randomly, does this not default to a baseline classifier? * Is layer 53 in Figure 3 the final classification layer, meaning that the ResNet-50 baseline is highly confident? If yes, can the authors provide some intuition on why small perturbations at the output layer change its decision drastically? * According to Figure 3, could an explicit additional L2 loss term at the outputs of the ResNet-50 baseline have the same effect as CLIP initialization? * L305: The authors say that they inject 2000 random errors in weights across the network. Is the output saliency map averaged across these 2000 different maps? * It might be nice to show some experiments on Vision Transformers, but this is more of a nice-to-have. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
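To make the architectural change summarized in the review above concrete (a $B \times C$ classifier replaced by a $B \times E$ latent layer plus an $E \times C$ projection initialized from per-class text embeddings), here is a minimal numpy sketch; the text-embedding matrix is a random stand-in for real CLIP features, and all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
B, E, C = 2048, 512, 1000  # backbone dim, text-embedding dim, number of classes

# Baseline head: a single B x C linear layer, randomly initialized.
W_base = rng.normal(scale=0.01, size=(B, C))

# Proposed head: a trained B x E latent layer followed by an E x C
# projection whose columns are initialized from per-class text embeddings.
text_emb = rng.normal(size=(C, E))           # stand-in for CLIP text features
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)
W_latent = rng.normal(scale=0.01, size=(B, E))
W_proj = text_emb.T                          # E x C initialization

features = rng.normal(size=(4, B))           # a batch of backbone features
logits_base = features @ W_base              # shape (4, C)
logits_ours = features @ W_latent @ W_proj   # shape (4, C)
```

Note the parameter trade-off: the single B*C matrix becomes B*E + E*C parameters, which shrinks the head whenever E is smaller than the backbone bottleneck, matching the reviewer's observation.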
Rebuttal 1: Rebuttal: **q1: Positioning of the paper** **a1:** We would like to clarify that we don't exactly *finetune* CLIP models. We augment a standard, randomly initialized image model with an additional projection head initialized with rich textual features. The rest of the model is randomly initialized, as with any standard image classification training setup on the ImageNet dataset. Our major contribution is the analysis showing that this simple additional projection + initialization, applied to any image classification architecture, can immediately improve the model's resilience to hardware errors. We thank the reviewer for pointing this out. We will further clarify in the final version of the paper that our major contribution focuses on the analysis of the inherent resilience gains obtained by utilizing rich text features. **q2: GoldenEye clarifications** **a2:** Yes, we use the GoldenEye benchmark as provided in their repository for our evaluation. At a high level, GoldenEye is a wrapper around the PyTorchFI error injection framework that additionally provides an analysis of the results in terms of DeltaLoss resilience. To that end, we configure GoldenEye to perform 4096 unique error injection experiments per layer and use it to compare a baseline model with our proposed technique in terms of reliability. By using GoldenEye, the 3.6 million error injection experiments are performed under the hood to provide us with strong statistical guarantees on the model's resilience to hardware errors. **q3: Classification flip rate vs. DeltaLoss** **a3:** The GoldenEye benchmark provides us with both the classification flip rate and the DeltaLoss metric. We report the DeltaLoss metric because the authors of [39] show that it converges asymptotically faster with fewer error injections than the classification flip rate, signifying that it is a stronger and preferable metric for hardware resiliency analysis.
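For intuition about the error model behind these injection experiments, here is a minimal, library-free sketch of flipping a single bit of a float32 value (a stand-in for what frameworks like PyTorchFI do internally; the helper is ours, not the GoldenEye/PyTorchFI API).

```python
import struct

def flip_bit(value, bit):
    """Return the float32 obtained by flipping one bit of `value`
    (bit 0 = mantissa LSB, bits 23-30 = exponent, bit 31 = sign)."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

# Flipping the sign bit negates the value: flip_bit(1.0, 31) == -1.0
# Flipping the top exponent bit of 1.0 yields +inf: flip_bit(1.0, 30) == inf
```

Flipping a high exponent bit can turn a moderate weight into a huge or infinite value, which is one intuition for why large neuron magnitudes (discussed around Figure 3 of the paper) correlate with catastrophic output corruptions.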
In terms of the 14x number: it is the same whether reported through DeltaLoss or the classical flip rate; the only difference is that the statistical guarantees are stronger via DeltaLoss at fewer error injection experiments (Figure 3 of [39] helps highlight this point). **q4: Top2Diff metric is unclear** **a4:** The Top2Diff metric is computed on the *softmax* outputs, not the logits. We apologize for the lack of clarity and will fix it. **q5: Difference between RandomInit in Table 2 and the baseline in Table 1?** **a5:** The baseline in Table 1 is the default model as defined in the pytorch/torchvision library. The RandomInit model in Table 2 is our design with the additional projection but no textual initialization, used to ablate whether the textual initialization helps. **q6: Figure 3, last-layer clarifications** **a6:** To clarify: Figure 3 does not refer to confidence. It shows empirical data for the maximum neuron value across the dataset, which we correlate with the single-bit flip error model as a reason for the high probability of a new (erroneous) value corrupting the output. We provide more detail in Appendix D of the supplementary material. **q7: Can an explicit L2 loss term be used at the output instead?** **a7:** To see if adding an explicit L2 loss on the output helps, we ran an experiment with this additional loss on the baseline ResNet-50 output. The results are in the table below: |Backbone|Acc. Baseline|Acc. Last Layer L2 Norm|Improvement in Reliability (Last Layer)|Improvement in Reliability (Overall)| Improvement in Top2Diff| |:-:|:-:|:-:|:-:|:-:|:-:| |ResNet-50|75.64|75.13|-1.14x|-0.82x|-0.26%| We see that the explicit loss term reduces accuracy/resilience instead of improving it. We believe the reason is that the training scheme already uses weight decay, which is itself an implicit penalty on the model weights; hence, the additional penalty does not help much in terms of resilience.
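A minimal sketch of the Top2Diff metric as clarified in a4, i.e., the gap between the two largest softmax probabilities (the helper name is ours):

```python
import numpy as np

def top2diff(logits):
    """Gap between the highest and second-highest softmax probability."""
    z = np.asarray(logits, dtype=float)
    p = np.exp(z - z.max())  # numerically stable softmax
    p /= p.sum()
    second, first = np.sort(p)[-2:]
    return first - second
```

A confident prediction has Top2Diff near 1, while a near-tie has Top2Diff near 0, which is why a small Top2Diff flags predictions that a single bit flip can more easily overturn.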
**q8: Figure 2 Clarifications** **a8:** For this experiment, there is only one saliency map generated by performing 2000 perturbations in the inference of an image. We chose this as a “large” number to show visual differences in the saliency between the two techniques. No averaging is needed. For error injection experiments, we perform a single bit flip in the entire network, and show that it can, in fact, alter the final classification. **q9: Additional Models** **a9:** We sincerely thank the reviewer for their suggestion to help improve this work. Included below, we have evaluated our proposed method on recent and state-of-the-art image classification models, to complement our CNN-based evaluation. Additionally, adding these models to our work strongly supports our original claims in the paper that our technique is general and can support any vision classification model type where we have the training recipe available. |Backbone|Acc. Baseline|Acc. Ours|Improvement in Reliability (Last Layer)|Improvement in Reliability (Overall)|Improvement in Top2Diff| |-|:-:|:-:|:-:|:-:|:-:| |FocalNet-T [1] (NeurIPS'22)|80.23|80.77|3.87x|2.61x|2.61%| |FocalNet-S [1] (NeurIPS'22)|82.01|82.52|4.73x|3.50x|3.10%| |Swin-V2-T [2] (CVPR'22)|80.97|80.02|1.65x|1.07x|2.85%| |Swin-V2-S [2] (CVPR'22)|82.71|82.86|3.51x|2.60x|3.04%| |MaxVit-T [3] (ECCV'22)|82.98|83.08|3.38x|2.63x|2.62%| |MobileNet-V2 [4] (CVPR'18)|71.87|71.83|3.92x|2.43x|5.36%| **References** [1] J. Yang, C. Li, X. Dai, and J. Gao, ‘Focal Modulation Networks’, in NeurIPS, 2022. [2] Z. Liu et al., ‘Swin Transformer V2: Scaling Up Capacity and Resolution’, in CVPR, 2022. [3] Z. Tu et al., ‘MaxViT: Multi-Axis Vision Transformer’, in ECCV, 2022. [4] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, ‘MobileNetV2: Inverted Residuals and Linear Bottlenecks’, in CVPR, 2018. [39] A. Mahmoud et. al., ‘Optimizing Selective Protection for CNN Resilience’, in ISSRE, 2021. 
--- Rebuttal Comment 1.1: Title: Updated rating Comment: I agree with the authors that this paper does not finetune CLIP models per se, but using CLIP textual features as the last layer is not entirely new, and Section 4.2 implies that this is a contribution of this paper. This is misleading and can be fixed by some minor editing. However, thanks to the authors for the additional experiments and the rebuttal. All my other concerns/questions are addressed, so I updated my rating to 6. --- Reply to Comment 1.1.1: Title: Thank you for the comments Comment: We thank the reviewer for their comment. We value your suggestion and will update the final manuscript accordingly in Section 4.2 to clarify the contribution. Kind regards, Submission13786 Authors
Summary: This paper presents a software-based approach to enhance classifier resilience to hardware errors. It leverages GPT-3 to enhance text descriptions for the target classes, followed by utilizing the CLIP text encoder to generate text embeddings. These embeddings initialize the classifier head, enabling it to learn robust representations by leveraging the generalization abilities of CLIP and GPT-3. Strengths: 1. Using vision-language models to improve model resistance to hardware errors is an interesting idea. 2. The method is simple and effective. Experiments on ImageNet with VGG and ResNet show the model reliability increases significantly. 3. The paper is well-written and easy to follow. Figure 1 provides the data shapes at each step, making it easy to understand. 4. The experiment section includes rich ablation studies and intuitive explanations. Weaknesses: 1. In Line 216, the delta isn't converted to a math symbol. 2. It seems that Top2Diff is not used correctly. The reference paper "Optimizing Selective Protection for CNN Resilience" defines Top2Diff as the difference between the top two class confidences. However, Line 231 assumes the top-1 class is a correct prediction. We don't know ground-truth labels during model deployment, so we can't tell whether the top-1 prediction is correct. But we can still compute Top2Diff, right? This is also related to the ablation study in Section 7.4. When removing images with certain Top2Diff, do you remove only the images with correct predictions or all the images, regardless of whether the predictions are correct? 3. The experiments don't use more recent backbones such as BEiT and Swin Transformer; ResNet and VGG are somewhat outdated. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Do you train the classifier head, i.e., the projection layer, during training? If yes, do the head and the backbone use the same learning rate?
If the head is updated significantly during training, would it lose the ability to guide the backbone learning representations? 2. In Table 1, why do some backbones (ResNet-18 and ResNet-34) have increased parameters, whereas others' parameters reduce? 3. For Figure 3, do you use the training set or validation set of ImageNet? Which set makes more sense for this purpose? Why use the maximum absolute value of one neuron rather than the average of absolute values of all neurons of one layer? The maximum value may favor more outliers. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: It would be better to use some more recent vision backbones such as BEiT, DINO-v2, MAE, and Swin Transformer. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **q1: Top2Diff clarifications** **a1:** We thank the reviewer for highlighting this point. To clarify, we use exactly the same procedure for hardware resiliency evaluation as the referenced paper [39]. It is true that we do not have a “ground truth” at runtime, but [39] argues that a strong indicator of the probability that an error has occurred is the difference between the top-2 class confidences. During the evaluation, we focus only on the “correct” subset of images for error injection, because it does not make sense to evaluate whether an error changed an incorrect class into a correct one. At deployment time, where we do not have the ground truth available, we would rely on our analysis of the model's robustness, but would also need correction mechanisms. While our proposed technique does not propose a detection-and-recovery mechanism like [39], we introduce a technique that reduces the baseline probability of errors occurring in the first place, which would subsequently help lower the cost of techniques such as ILR and FLR proposed by [39]. **q2: Evaluation on newer models** **a2:** We sincerely thank the reviewer for their suggestion to help improve this work. Included below, we have evaluated our proposed method on recent and state-of-the-art image classification models, to complement our CNN-based evaluation. Additionally, adding these models to our work strongly supports our original claims in the paper that our technique is general and can support any vision classification model type where the training recipe is available. We find that the new models agree with our original assessments from the paper: our technique not only helps improve the reliability of the last layer and the overall model but also has a negligible impact on the accuracy (and in some cases actually improves it), as was a primary goal in this research paper. |Backbone|Acc. Baseline|Acc.
Ours|Improvement in Reliability (Last Layer)|Improvement in Reliability (Overall)|Improvement in Top2Diff| |-|:-:|:-:|:-:|:-:|:-:| |FocalNet-T [1] (NeurIPS'22)|80.23|80.77|3.87x|2.61x|2.61%| |FocalNet-S [1] (NeurIPS'22)|82.01|82.52|4.73x|3.50x|3.10%| |Swin-V2-T [2] (CVPR'22)|80.97|80.02|1.65x|1.07x|2.85%| |Swin-V2-S [2] (CVPR'22)|82.71|82.86|3.51x|2.60x|3.04%| |MaxVit-T [3] (ECCV'22)|82.98|83.08|3.38x|2.63x|2.62%| |MobileNet-V2 [4] (CVPR'18)|71.87|71.83|3.92x|2.43x|5.36%| **q3: Do you train the classifier head during training?** **a3:** Yes, we train the classifier head during training, and yes, we use the same learning rate for both the head and the backbone to ensure we stay consistent with the overall training recipe. **q4: Why do ResNet-18/34 increase params, whereas ResNet-50 decreases (Table 1)?** **a4:** This goes back to the DNN architecture design of the different ResNet versions. As explained in lines 251-256, the second-to-last layer feeding into the projection layer determines the change in the number of parameters for our model. If we take a look at Table 1 of the ResNet paper [5], we see that ResNet-18 and ResNet-34 end on a 3x3 512 layer, while the other ResNets end on a 1x1 2048 layer. More specifically: For the baseline ResNet-18/34, the last layer has 512x1000 (num classes) = 512000 parameters \ For our ResNet-18/34, the last layers have 512x512 (embed size) + 512x1000 (proj layer) = 262144 + 512000 parameters \ Hence, the parameter count increases by 0.26M. For the baseline ResNet-50/101/152, the last layer has 2048x1000 (num classes) = 2048000 parameters \ For our ResNet-50/101/152, the last layers have 2048x512 (embed size) + 512x1000 (proj layer) = 1048576 + 512000 parameters \ Hence, the parameter count decreases by 0.49M. **q5: Fig. 3 elaboration** **a5:** We use the validation set, as we are evaluating the network's reliability outside the training regime. We conducted an analysis of the mean magnitude of neuron values for each layer, as per the reviewer's request.
Complementing Figure 3 in the main paper, which uses the absolute max value, we added a figure in the rebuttal PDF showing the difference between the mean absolute value per layer for the baseline and our model. We see a similar trend between both figures, with the baseline model having a large value, especially in the final layer. This rationale is expounded upon in detail in Appendix D. **References:** [1] J. Yang, C. Li, X. Dai, and J. Gao, ‘Focal Modulation Networks’, in NeurIPS, 2022. [2] Z. Liu et al., ‘Swin Transformer V2: Scaling Up Capacity and Resolution’, in CVPR, 2022. [3] Z. Tu et al., ‘MaxViT: Multi-Axis Vision Transformer’, in ECCV, 2022. [4] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, ‘MobileNetV2: Inverted Residuals and Linear Bottlenecks’, in CVPR, 2018. [5] K. He, X. Zhang, S. Ren, and J. Sun, ‘Deep Residual Learning for Image Recognition’, in CVPR, 2016. [39] A. Mahmoud et al., ‘Optimizing Selective Protection for CNN Resilience’, in ISSRE, 2021. --- Rebuttal Comment 1.1: Title: Request for Comments Comment: Dear Reviewer eFF3, Thanks again for your effort in reviewing our paper and giving us a helpful chance to improve the paper's quality. We hope that our response can address your concerns. Considering that the discussion period will end on Aug 21st, we would like to know if you have any other questions about our paper, and we are glad to have a discussion with you in the following days. If our response has addressed your concerns, would you mind considering re-evaluating our work based on the updated information? Best regards, Submission13786 Authors
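The parameter accounting in a4 above can be double-checked with a few lines of arithmetic (embedding size 512, 1000 ImageNet classes; the helper name is ours):

```python
EMBED, CLASSES = 512, 1000

def head_params(feat_dim):
    """Parameter counts of the baseline head vs. the latent + projection head."""
    baseline = feat_dim * CLASSES              # single feat_dim x 1000 layer
    ours = feat_dim * EMBED + EMBED * CLASSES  # latent layer + projection layer
    return baseline, ours

base18, ours18 = head_params(512)    # ResNet-18/34: 512-d backbone features
base50, ours50 = head_params(2048)   # ResNet-50/101/152: 2048-d backbone features
# ours18 - base18 == 262144  (~ +0.26M parameters)
# base50 - ours50 == 487424  (~ -0.49M parameters)
```

The crossover happens because the extra feat_dim x 512 latent layer costs less than the 2048 x 1000 layer it replaces only when the backbone feature dimension is large.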
Summary: This paper provides a method to enhance the reliability of image classification models against hardware errors. For this purpose, the authors propose to combine textual and visual information to improve the reliability of neural networks by up to $14\times$, in comparison with traditional error detection and correction techniques. The authors verify their method on ImageNet classification with several models. Strengths: The proposed method of utilizing textual information to enhance the reliability of vision models is interesting, and the authors conducted experiments on ImageNet classification with several models. Weaknesses: The proposed method requires textual information during inference, and thus the collection of textual information would determine the final performance. For example, if the input image contains information outside the classes in the ImageNet dataset, it is unclear how to use the proposed method during inference, except by retraining the models. Also, the authors only verify on early CNN architectures, not including efficient models or attention models, so the effectiveness of the proposed method might be questionable. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Could the authors explain how to apply the proposed method in cases where the image contains information outside the pretrained classes, such as ImageNet in the paper? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors did not talk about limitations, and the question asked above can be one limitation of the proposed method.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **q1: Does the proposed method require textual information during inference?** **a1:** No, the model only uses the textual information during *training*, to make it more resilient. Please note that in this paper we focus on a *closed-set* classification setting. We only use the text information to provide a rich initialization to the additional projection head. During inference, the model *only* uses the input image to perform its classification. As such, our technique produces an entirely drop-in solution where legacy models can be replaced with our new model, without breaking any previous abstractions of a system at inference time. **q2: Evaluation on newer models** **a2:** We sincerely thank the reviewer for their suggestion to help improve this work. Included below, we have evaluated our proposed method on recent and state-of-the-art image classification models, to complement our CNN-based evaluation. Additionally, adding these models to our work strongly supports our original claims in the paper that our technique is general and can support any vision classification model type where we have the training recipe available. We find that the new models agree with our original assessments from the paper, that our technique not only helps improve the reliability of the last layer and the overall model but also has a negligible impact on the accuracy (and in some cases actually improves it), as was a primary goal in this research paper. |Backbone|Acc. Baseline|Acc. 
Ours|Improvement in Reliability (Last Layer)|Improvement in Reliability (Overall)|Improvement in Top2Diff| |-|:-:|:-:|:-:|:-:|:-:| |FocalNet-T [1] (NeurIPS'22)|80.23|80.77|3.87x|2.61x|2.61%| |FocalNet-S [1] (NeurIPS'22)|82.01|82.52|4.73x|3.50x|3.10%| |Swin-V2-T [2] (CVPR'22)|80.97|80.02|1.65x|1.07x|2.85%| |Swin-V2-S [2] (CVPR'22)|82.71|82.86|3.51x|2.60x|3.04%| |MaxVit-T [3] (ECCV'22)|82.98|83.08|3.38x|2.63x|2.62%| |MobileNet-V2 [4] (CVPR'18)|71.87|71.83|3.92x|2.43x|5.36%| **q3: How does the proposed method work when images contain information outside pre-trained classes?** **a3:** To clarify, we do not work in an *open-set/zero-shot classification* setting. The proposed technique aims at improving the resilience of image classification models in the *closed-set classification* supervised setting. We only use the textual features to provide a rich initialization to the new projection head added to the network which improves the overall resilience of the network. Afterward, we follow a standard supervised training and testing setting. **References:** [1] J. Yang, C. Li, X. Dai, and J. Gao, ‘Focal Modulation Networks’, in NeurIPS, 2022. [2] Z. Liu et al., ‘Swin Transformer V2: Scaling Up Capacity and Resolution’, in CVPR, 2022. [3] Z. Tu et al., ‘MaxViT: Multi-Axis Vision Transformer’, in ECCV, 2022. [4] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, ‘MobileNetV2: Inverted Residuals and Linear Bottlenecks’, in CVPR, 2018. --- Rebuttal Comment 1.1: Title: Request for Comments Comment: Dear Reviewer DM9r, Thanks again for your effort in reviewing our paper and giving us a helpful chance to improve the paper's quality. We hope that our response can address your concerns. Considering that the discussion period will end on Aug 21st, we would like to know if you have any other questions about our paper, and we are glad to have a discussion with you in the following days. 
If our response has addressed your concerns, would you mind considering re-evaluating our work based on the updated information? Best regards, Submission13786 Authors
Summary: This paper studies how CLIP can be used to mitigate the effect of hardware failures on image classification models. The authors do this by incorporating embeddings from the CLIP text encoder of class-based inputs (queried through GPT) to initialize the classification layer. The authors evaluate this technique by comparing how it affects accuracy on the downstream task, as well as some hardware reliability evaluation. Strengths: - Presentation is clear and explains why this is an important problem to work on for someone not familiar with the area - Method is simple and generalizes to standard image classification models - Evaluation metrics make sense (maintaining accuracy, investigating reliability) - Ablations are interesting and cover initial questions from reviewing the results section Weaknesses: - Since you are using the CLIP text encoder, I wish there were more exploration in that space. I think a natural question that is in-scope is how robust CLIP models are, both zero-shot and fine-tuned, to hardware failure. - Evaluation is limited to just ImageNet - Confidence (top2diff) might not be an informative metric and doesn't necessarily make a better model - perhaps something related to calibration could be helpful Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses - mostly concerned about evaluation and comparison Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **q1: How robust are CLIP models to hardware failure?** **a1:** We perform error injection on the CLIP model in the zero-shot setting below and compare it against the baseline (a standard model trained on ImageNet in a supervised manner). We use the ResNet-50 backbone version of CLIP. |Backbone|Acc. Baseline|Acc. CLIP ZeroShot|Improvement in Reliability (Last Layer)|Improvement in Reliability (Overall)|Improvement in Top2Diff| |-|:-:|:-:|:-:|:-:|:-:| |ResNet-50|75.64|58.18|-5.38x|-3.71x|-2.85%| We see that zero-shot CLIP has much lower resilience than the supervised model trained on ImageNet. We believe that two reasons contribute to this. Firstly, CLIP has not seen the exact dataset distribution that the supervised baseline has. Secondly, the CLIP model uses both a vision and a text encoder during inference, so it has a larger number of parameters susceptible to bit flips that can cause a misclassification. **q2: Beyond ImageNet evaluation.** **a2:** We thank the reviewer for highlighting this point and have included additional evaluation spanning multiple datasets (CIFAR10, CIFAR100, Food101, and STL10) for two networks: ResNet-50 [1] and FocalNet-T [2]. Our results, shown below in the table, validate that our technique is general and can work across an array of model types and datasets. Furthermore, as we have shown in the paper, we did not have to modify any hyperparameters in the process, suggesting the ease of our technique as well as the increased benefit from a reliability point of view. Additionally, adding these new datasets further supports our claims made in Section 7.4 of the paper, that our technique has negligible impact on model training accuracy, whilst still providing us with a large upside in resilience. |Dataset|Backbone|Acc. Baseline|Acc.
Ours|Improvement in Reliability (Last Layer)|Improvement in Reliability (Overall)|Improvement in Top2Diff| |-|:-:|:-:|:-:|:-:|:-:|:-:| |CIFAR10|ResNet-50|95.07|95.29|2.04x|1.71x|6.70%| |CIFAR10|FocalNet-T|94.76|94.94|2.47x|1.30x|3.58%| |CIFAR100|ResNet-50|78.23|78.53|2.19x|1.65x|3.69%| |CIFAR100|FocalNet-T|77.06|79.21|3.21x|1.58x|2.90%| |FOOD101|ResNet-50|83.13|83.97|2.66x|2.15x|2.78%| |FOOD101|FocalNet-T|85.64|85.91|3.28x|2.85x|1.70%| |STL10|ResNet-50|47.73|52.68|2.10x|1.91x|2.45%| |STL10|FocalNet-T|62.74|63.78|2.23x|1.72x|1.96%| **q3: Confidence (Top2Diff) as an informative metric for improving a model?** **a3:** To clarify: Top2Diff is an *analysis* metric, and it is measured post-training (as the authors of [39] suggest), as opposed to an optimization/cost metric *for* training. The metric itself points to the propensity that an error could sufficiently cause a misclassification at the output, and as such, by increasing the gap between the first- and second-highest class confidences, a hardware error manifestation becomes less likely to corrupt the output. Our insight that Top2Diff can be used to understand the relationship between resilience and accuracy is strongly supported by our original experiments in Section 7.4, plus the addition of the new datasets, which we thank the reviewer for suggesting we include in our paper.
Additionally, with regard to model calibration, we compare and contrast Model Calibration with the Top2Diff Metric in the table below: |Aspect|Model Calibration|Top2Diff| |-|-|-| |Nature|Technique to adjust predicted probabilities to match true probabilities.|Analysis metric quantifying the difference between the top two predicted class probabilities.| |Purpose|Ensure well-calibrated predicted probabilities for accurate confidence estimates.|Indicate susceptibility of predictions to errors and assess the resilience-accuracy relationship.| |Timing|Performed post-training to refine predicted probabilities.|Measured post-training to analyze model behavior.| |Impact|Refines predicted probabilities to align with true probabilities.|Quantifies potential impact of errors on predictions and resilience-accuracy trade-off.| While both calibration and Top2Diff address the reliability and accuracy of machine learning models, calibration focuses on refining the probability estimates to align with reality, while Top2Diff serves as a post-training metric to measure the potential impact of errors on the model's predictions and understand its resilience-accuracy trade-off. We would like to clarify that while calibration could potentially be used as an inference-based technique to improve model resilience, our goal as outlined in Lines 43-47 was to introduce a low-entry and entirely training-based routine to significantly reduce the inherent and underlying vulnerability of a model, after which many inference-side techniques could potentially be appended for even stronger resilience (including calibration, as proposed by this reviewer, or other selective protection mechanisms as outlined in Lines 39-43). **References:** [1] K. He, X. Zhang, S. Ren, and J. Sun, ‘Deep Residual Learning for Image Recognition’, in CVPR, 2016. [2] J. Yang, C. Li, X. Dai, and J. Gao, ‘Focal Modulation Networks’, in NeurIPS, 2022. [39] A. Mahmoud et al., ‘Optimizing Selective Protection for CNN Resilience’, in ISSRE, 2021. --- Rebuttal Comment 1.1: Title: Request for Comments Comment: Dear Reviewer 4rtm, Thanks again for your effort in reviewing our paper and giving us a helpful chance to improve the paper's quality. We hope that our response can address your concerns. Considering that the discussion period will end on Aug 21st, we would like to know if you have any other questions about our paper, and we are glad to have a discussion with you in the following days. If our response has addressed your concerns, would you mind considering re-evaluating our work based on the updated information? Best regards, Submission13786 Authors --- Rebuttal 2: Comment: Thank you for your thorough response. I have updated my score to account for the authors' response. --- Rebuttal Comment 2.1: Title: Thank you Comment: We would like to express our gratitude to the reviewer for their comment. We appreciate your effort in reviewing our paper and rebuttal. Kind regards, Submission13786 Authors
Rebuttal 1: Rebuttal: We thank the reviewers for all their feedback and comments. We respond to common themes across all reviewers in this global rebuttal, and then answer further specific questions per reviewer individually. **Q1. Vision model evaluation for hardware resilience, and the use of CLIP-text features** One of the fundamental points of this work is that we use textual features to enhance the resilience of *closed-set image classification models*. One common misconception we gathered was that textual information is used during inference. Rather, we operate entirely within a closed-set regime in this paper and use the textual information from CLIP only to initialize the projection layer during the *training* of our model. Thus, at deployment time, there is no need for explicit textual information, as that has already been incorporated into the model via our technique of pre-training the final layer of the model. To that end, our proposed model can directly replace any classical model and would have the same pros and cons (in the context of in-distribution and out-of-distribution performance), but operate at a higher level of resilience to single-bit perturbations and hardware errors. In summary, our approach involves refraining from fine-tuning CLIP models directly. Instead, we enhance a conventional image model, which starts as a randomly initialized framework, by incorporating an extra projection head initialized with textually enriched features. The remaining segments of the model are also subject to random initialization, following the conventional image classification training process using the ImageNet dataset. Notably, our primary contribution lies in demonstrating that this uncomplicated augmentation through an extra projection and initialization step, when applied to any image classification architecture, can swiftly enhance the model's ability to withstand hardware errors. **Q2.
Top2Diff Discussion** We credit the authors of [39] for the introduction of the Top2Diff metric in the context of hardware reliability and would like to further clarify its use here. Top2Diff is a measurement between the top softmax value and the 2nd highest softmax value. In Section IV-B of [39], the authors show that this metric is good for hardware reliability, intuitively because it indicates a higher threshold that an error needs to overcome for the output to change its classification. For example, a Truck with 52% confidence and Bird with 48% confidence indicates a Top2Diff of 4%, while a 90% Truck to 10% Bird indicates an 80% Top2Diff. The idea is that “overcoming” a 4% difference is a much lower threshold than an 80% difference, implying that a single bit flip during the first Truck example is more likely to produce an output misclassification. To further clarify, we do not use Top2Diff during training at all, and simply measure it as a proxy to understand the reliability of a model to hardware errors. The error injection experiments produced can be considered the primary metric for resilience (which is measured by DeltaLoss), and Top2Diff is helpful to gather insight into the model only. **Q3. Additional Results** We included additional results as requested by multiple reviewers, including: - Adding more models including modern transformers (FocalNet, Swin-V2, MaxVit, and MobileNet-V2) - Adding additional datasets (CIFAR10/100, STL10, and Food101) - Including an additional metric (absolute mean) to complement our original metric (absolute max) in Figure 3 as requested by one of the reviewers. This is presented in the rebuttal PDF file. |Backbone|Acc. Baseline|Acc. 
Ours|Improvement in Reliability (Last Layer)|Improvement in Reliability (Overall)|Improvement in Top2Diff| |-|:-:|:-:|:-:|:-:|:-:| |FocalNet-T [1] (NeurIPS'22)|80.23|80.77|3.87x|2.61x|2.61%| |FocalNet-S [1] (NeurIPS'22)|82.01|82.52|4.73x|3.50x|3.10%| |Swin-V2-T [2] (CVPR'22)|80.97|80.02|1.65x|1.07x|2.85%| |Swin-V2-S [2] (CVPR'22)|82.71|82.86|3.51x|2.60x|3.04%| |MaxVit-T [3] (ECCV'22)|82.98|83.08|3.38x|2.63x|2.62%| |MobileNet-V2 [4] (CVPR'18)|71.87|71.83|3.92x|2.43x|5.36%| |Dataset|Backbone|Acc. Baseline|Acc. Ours|Improvement in Reliability (Last Layer)|Improvement in Reliability (Overall)|Improvement in Top2Diff| |-|:-:|:-:|:-:|:-:|:-:|:-:| |CIFAR10|ResNet-50|95.07|95.29|2.04x|1.71x|6.70%| |CIFAR10|FocalNet-T|94.76|94.94|2.47x|1.30x|3.58%| |CIFAR100|ResNet-50|78.23|78.53|2.19x|1.65x|3.69%| |CIFAR100|FocalNet-T|77.06|79.21|3.21x|1.58x|2.90%| |FOOD101|ResNet-50|83.13|83.97|2.66x|2.15x|2.78%| |FOOD101|FocalNet-T|85.64|85.91|3.28x|2.85x|1.70%| |STL10|ResNet-50|47.73|52.68|2.10x|1.91x|2.45%| |STL10|FocalNet-T|62.74|63.78|2.23x|1.72x|1.96%| Pdf: /pdf/36c3ba419850f72cc5de90ad572057ddf9b34c1a.pdf
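The Top2Diff measurement discussed in the rebuttals above is straightforward to compute from a model's softmax output. The following is a minimal illustrative sketch (not the authors' evaluation code); the numbers mirror the Truck/Bird example from the rebuttal:

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of raw class scores
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top2diff(probs):
    # gap between the highest and second-highest class probabilities;
    # a larger gap means a single perturbation (e.g. a bit flip) must
    # overcome a higher threshold to flip the predicted class
    top = sorted(probs, reverse=True)
    return top[0] - top[1]

# the Truck-vs-Bird example: 52% vs 48% gives a low 4% margin,
# while 90% vs 10% gives a robust 80% margin
fragile = top2diff([0.52, 0.48])  # approx. 0.04
robust = top2diff([0.90, 0.10])   # approx. 0.80
```

As the authors note, this is a post-training analysis metric, not a training objective; it is measured on the model's predictions to gauge how likely an error manifestation is to change the output class.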
NeurIPS_2023_submissions_huggingface
2023
A Pseudo-Semantic Loss for Autoregressive Models with Logical Constraints
Accept (poster)
Summary: The authors propose a pseudo-semantic loss for deep generative models with logical constraints. The authors consider autoregressive generative distributions (modeled by RNNs, Transformers, etc.), which are more expressive, and go beyond the standard approach of enforcing the constraints on fully-factorized distributions. However, the likelihood of the constraint is hard to compute for auto-regressive distributions, so the authors propose an approximation at a random, local point, yielding a "pseudolikelihood". Then they show the utility of that pseudolikelihood in constructing a pseudo-semantic loss, which improves standard tasks such as finding the shortest path on Warcraft maps and solving partially solved 9x9 Sudokus. As a "killer app," the authors propose to use their method on detoxifying generations from LLMs. They show results on detoxifying GPT-2. Strengths: S1: The problem of adding constraints with semantic losses is important, and has a clear impact on relevant problems, such as reducing toxicity. S2: The approximations in the paper are sound and seem to be of good fidelity and low deviation from the ground truth. S3: The experiments are performed on a variety of tasks, showing the promise of the method. Weaknesses: W1: The authors need to give more examples of constraints, logical circuits, and how the loss encourages the reuse of sub-problems. See my questions below for more concrete suggestions for improvement. Furthermore, I do not think that Figure 2 is a good example. The choice of numbers at the leaves is arbitrary (should be discussed). The computation of the probabilities on the left is not clear. It's not clear how one relates A, B, and C to the constraint with Cat, Animal, and Dog. Some simplification and/or explanation of the steps is necessary. W2: The reader does not get a sense of how scalable the approach is. It is acknowledged in the paper that the circuits' complexity is a bottleneck (260-263).
I would expect more discussion on the circuits used in the paper, and how they could scale. What more complicated problems should we expect to solve by using the proposed pseudo-semantic loss? W3: Have you tried other constraints for toxicity? Was avoiding tokens from the list the first thing you tried? Why not try more options to check for more gains? Again, it's important to discuss the circuits' design and the issues that arise. Minor: M1: line 115 - do you mean $\mathbf{w}_i$ in the softmax? I do not see where the index $j$ comes from. M2: line 9 in Algorithm 1 - I think you mean "seq" vs. "seq_len" and "cats" vs. "num_cat"? M3: line 323: one again <- once again. M4: line 508: generate 100k <- generate 100k samples (or generations)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1: Not sure I understood how your approach reuses sub-problems. Could you give examples from the tasks you use? Q2: Could you give examples of reducing toxicity by looking at samples from the GPT model? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: the authors adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for engaging with our work and their valuable feedback. We are happy to see them acknowledge the importance of the problem tackled and the value of both the proposed approximation and experimental evaluation. [“It's not clear how one relates A, B, and C to the constraint with Cat, Animal, and Dog.“] We agree that a mapping from the variables A, B and C to Cat, Animal and Dog is missing. We will modify the example to either make the mapping explicit, or only use one or the other. The implied mapping is that variable A maps to Cat, variable B maps to Dog and variable C maps to Animal. We will make the mapping clear and/or modify the example accordingly. [“The authors need to give more examples of constraints, logical circuits”] A logical constraint is simply any sentence in Boolean logic. For instance, a very simple constraint might be $y_3 = \text{dog}$, which simply states that all sentences must have “dog” as their third token. In the example we presented, our constraint simply asserts that if we predict a “cat” or a “dog”, we necessarily need to predict an animal (every “cat” is an animal, and every “dog” is an animal; if we acknowledge the existence of either in an image, logic necessitates that we also acknowledge the existence of an animal in the image). A logical circuit is a computational graph for compactly representing the solutions of a constraint, that is, all possible assignments under which the constraint holds true. There, every AND node represents a (partial) solution to the constraint and every OR node represents a distribution over mutually exclusive and exhaustive (partial) solutions. More concretely, focusing on the upper-right AND node, we can see that it represents the Boolean function $(\text{Animal} \lor \lnot \text{Animal}) \land (\lnot \text{Cat} \land \lnot \text{Dog})$, meaning that if we predict neither a dog nor a cat, then we're free to predict or not predict an animal.
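The circuit semantics described above (AND nodes multiply the values of their children, OR nodes sum over mutually exclusive cases) can be sketched in a few lines. The constraint "A or B" and the leaf probabilities below are made up for illustration; they are not the paper's actual circuits:

```python
import math

# a node is ('leaf', name), ('and', [children]) or ('or', [children])
def eval_circuit(node, leaf_probs):
    kind, payload = node
    if kind == 'leaf':
        return leaf_probs[payload]
    vals = [eval_circuit(child, leaf_probs) for child in payload]
    # AND nodes multiply (partial solutions are combined),
    # OR nodes sum (cases are mutually exclusive and exhaustive)
    return math.prod(vals) if kind == 'and' else sum(vals)

# circuit for the constraint "A or B", written as an OR over the
# three mutually exclusive satisfying assignments
circuit = ('or', [
    ('and', [('leaf', 'A'), ('leaf', 'B')]),
    ('and', [('leaf', 'A'), ('leaf', 'not_B')]),
    ('and', [('leaf', 'not_A'), ('leaf', 'B')]),
])

# independent leaf probabilities, chosen arbitrarily for the example
probs = {'A': 0.6, 'not_A': 0.4, 'B': 0.5, 'not_B': 0.5}

# bottom-up evaluation yields the probability the constraint holds:
# 1 - P(not A) * P(not B) = 1 - 0.4 * 0.5 = 0.8
result = eval_circuit(circuit, probs)
```

Feeding the leaves conditional probabilities from a model, as in the paper's Figure 2, follows the same bottom-up evaluation.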
[“how the loss encourages the reuse of sub-problems”] We simply mean that the solution of a problem depends on already-computed solutions to other subproblems, in a dynamic-programming fashion. Take for example the problem of choosing $k$ out of $n$ elements, which has a simple structure: the probability of selecting $k$ out of $n$ elements is simply the probability of selecting $k-1$ out of $n-1$ elements AND selecting the current element, OR selecting $k$ out of $n-1$ elements AND NOT selecting the current element. [“The choice of numbers at the leaves is arbitrary (should be discussed). The computation of the probabilities on the left is not clear”] Back to our example: the numbers on the second row on the left, denoted by an arrow "eval", are the likelihoods assigned by the auto-regressive model to these joint assignments, i.e. they are the output of a neural network. The row below that, denoted by an arrow "norm", refers to the conditional probabilities, obtained through normalizing the joint by the marginal, e.g. $p(a \mid b,c) = p(a,b,c) / p(b,c)$. On the right, the only numbers we supply are those at the leaves $A, \lnot A, B, \lnot B, C$ and $\lnot C$. These correspond exactly to the conditionals obtained on the last row on the left. [“The reader does not get a sense of how scalable the approach is. It is acknowledged in the paper that the circuits' complexity is a bottleneck (260-263). I would expect more discussion on the circuits used in the paper, and how they could scale. What more complicated problems should we expect to solve by using the proposed pseudo-semantic loss?”] The size of the logical circuit can indeed, in the worst case, grow exponentially in the size of the constraint. That being said, there are some problems that exhibit a surprising amount of structure, theoretically guaranteeing compact circuit representations, examples of which are the $k$-subset constraint and perfect matching on planar graphs.
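The choose-$k$-of-$n$ recurrence above translates directly into a small dynamic program. Here is a sketch assuming a fully-factorized distribution, with hypothetical per-element selection probabilities:

```python
def prob_exactly_k(probs, k):
    # dp[j] = probability that exactly j of the elements processed so far
    # are selected; each new element either IS selected (AND its
    # probability, reusing the j-1 subproblem) OR is NOT selected
    # (AND one minus it, reusing the j subproblem)
    dp = [1.0] + [0.0] * k
    for p in probs:
        for j in range(k, 0, -1):  # backwards so each element counts once
            dp[j] = dp[j] * (1 - p) + dp[j - 1] * p
        dp[0] *= 1 - p
    return dp[k]

# three fair coin flips, probability that exactly two come up heads:
# C(3, 2) * 0.5**3 = 0.375
p = prob_exactly_k([0.5, 0.5, 0.5], 2)
```

Each entry of `dp` is computed once from previously computed entries, which is the subproblem reuse the rebuttal refers to; a circuit for the $k$-subset constraint has the same shape.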
For many other problems, we can obtain compact logical circuits for many instances of interest, e.g. entity-relation extraction, NLI transitivity, MNIST addition, and many more. There are cases in practice, of course, when the circuits grow beyond what is computationally feasible, in which case we can resort to approximating the logical circuit [1, 2], which should work seamlessly with our distributional approximation. [“What more complicated problems should we expect to solve by using the proposed pseudo-semantic loss?”] We expect pseudo-semantic loss to be of use in any task where we have domain knowledge relating the outputs of a classifier (also known as structured-prediction problems), and where the output distribution induced by the classifier goes beyond the fully-factorized distribution. [“other constraints for toxicity?”] We have not explored other constraints for toxicity. It is generally not easy to capture toxicity using a logical constraint, as the trait of being toxic can be attributed to latent factors beyond just the presence of certain keywords, e.g. a condescending tone. Instead, our hope was that steering the model's distribution away from sentences containing toxic words would serve to steer it away from toxic sentences in general, as both are typically correlated. This is a direction that we hope to explore in future work. [“examples of reducing toxicity”] Please see the PDF included in the general response. [“Typos”] Thank you for pointing out the typos. We will be sure to correct them in the camera-ready. References: [1] Kareem Ahmed, Kai-Wei Chang and Guy Van den Broeck. 2023. Semantic Strengthening of Neuro-Symbolic Learning. In Proceedings of International Conference on Artificial Intelligence and Statistics (2023). [2] Robin Manhaeve, Giuseppe Marra, Luc De Raedt. 2021. Approximate Inference for Neural Probabilistic Logic Programming.
In Proceedings of the 18th International Conference on Principles of Knowledge Representation and Reasoning. --- Rebuttal Comment 1.1: Title: Thanks for the clarifications. Comment: The authors have addressed my questions and comments well. I am maintaining my positive score. Regarding the discussion with the other reviewers, there was a question of whether RNNs/LSTMs are autoregressive. Yes, they are. I think the reviewers responded well to that.
Summary: The paper presents a new way of computing a loss function measuring the degree of satisfaction of logical constraints for autoregressive models. This is an important task, as autoregressive models are now more and more used, and calculating the degree of satisfaction of even simple constraints made of a single literal is #P-hard (due to the fact that different solutions depend on different conditionals). The proposed solution has been evaluated on 3 different tasks: 1. Warcraft shortest path: the task is to generate a minimum-cost path from the upper left to the lower right vertices. 2. Sudoku: the task is to generate a solution to a given partially filled sudoku. 3. LLM detoxification: the task is to steer the LLM away from toxic prompted generations. Strengths: - The paper is very well written and easy to follow. - The introduced approach is very interesting and presents improvements in the experiments. - The problem solved is very important as auto-regressive models are becoming more and more important (mostly due to the spread of LLMs) Weaknesses: - The authors claim that they are the first to learn with constraints under auto-regressive generative models. While this might be true, there has been a lot of work on steering the generation of auto-regressive LLMs at inference time (see, e.g., [1,2]). I think that discussing the pros and cons of learning to satisfy the constraints during training vs steering the generation during inference might be very beneficial for the community. Especially given the fact that the authors have an NLP task in their experiments. - On this note, it might be interesting to see how their method performs against such methods in the LLM detoxification task. - I think that the current title is a bit misleading. Not all deep generative models are auto-regressive.
So I would change the title from "A Pseudo-Semantic Loss for DGM with Logical Constraints" to "A Pseudo-Semantic Loss for auto-regressive models with Logical Constraints". References: [1] Meng et al. Controllable text-generation with neurally-decomposed oracle, NeurIPS 2022. [2] Lu et al. NeuroLogic Decoding: (Un)supervised Neural Text generation with predicate logic constraints. ACL 2022 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I found the entire paper very clear. However, I have a doubt about this step at equation 7. Why can you write: $\mathbb{E}_{y \sim \tilde p}[\mathbb{1}(y \models \alpha)] \approx \mathbb{E}_{y\sim p} \mathbb{E}_{\tilde y \sim \tilde p_y}[\mathbb{1}(\tilde y \models \alpha)]$? Can you give some more details about this step? - Algorithm 1: what is the difference between seq and seq\_len? In the same way, what is the difference between cats and num\_cat? Also, expand works on non-singleton dimensions. Since you have Python code in the paper, it needs to be as you'd write it. - In the example, suppose that instead of having generated $abc$, the model has generated the phrase "I love dogs", the samples that are 1 Hamming distance away then would be "love dogs", "I dogs" and "love dogs"? Or something else? - Again in the example, you use a constraint with Cat, Dog and Animal as variables. Can you rewrite it using $a,b,c$? Also, in appendix A, it would be nice to have a step-by-step guide showing how the circuit is built from the constraint. - Minor: in the related work it shouldn't be HMCCN but C-HMCNN P.S. I really like the paper, if all the questions can be answered well, then I will increase the score. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful comments. We are delighted they found the paper easy to follow, the problem to be timely, and the approach to be interesting. ["The authors claim that they are the first to learn with constraints under auto-regressive generative models. While this might be true, there has been a lot of work on steering the generation of auto-regressive LLMs at inference time (see, e.g., [1,2]). I think that discussing the pros and cons of learning to satisfy the constraints during training vs steering the generation during inference might be very beneficial for the community. Especially given the fact that the authors have a NLP task in their experiments… On this note, it might be interesting to see how their method performs against such methods in the LLM detoxification task."] There is indeed a large body of work on controllable text generation, which we are happy to acknowledge in the related works section of our camera-ready. We would like to point out, however, that by “learning with constraints” we mean it in the specific sense of learning to maximize the training data likelihood, through cross-entropy, subject to some constraint. This can be achieved through the addition of a *regularization* or *penalty* term that *also* ensures the network's outputs satisfy the constraint. In such a setting, all the work that we are aware of only considers fully-factorized distributions. Please see the general response regarding how learning with constraints relates to constrained generation. [“I think that the current title is a bit misleading. Not all deep generative models are auto-regressive."] It is true that not all deep generative models are auto-regressive. Our intention was to hint at the fact that our approach is applicable to any likelihood-based model where the output distribution is not fully-factorized.
However, we agree that maybe “auto-regressive” would be more befitting, especially given our experiments. [“However, I have a doubt about this step at equation 7”] This step can be understood as performing the analogue of a first-order Taylor series expansion of a discrete function with no analytical form about a point of interest, the point here being a model sample. [“In the example, suppose that instead of having generated abc, the model has generated the phrase "I love dogs", the samples that are 1 hamming distance away then would be "love dogs", "I dogs" and "love dogs"? Or something else?”] They would be all possible completions of "_ love dogs", "I _ dogs" and "I love _", where _ denotes a blank to be filled in. We can, however, save some computations by only considering the tokens of interest. More specifically, in the detoxification experiment, we only consider the ~800 possibly-toxic tokens, in addition to a single ``non-toxic'' token, to fill in each blank. [“Algorithm 1: what is the difference between seq and seq\_len? In the same way, what is the difference between cats and num\_cat? Also, expand works on non-singleton dimensions. Since you're having python code in the paper then it needs to be as you'd write it.”] These are unfortunate typos that will be corrected in the camera-ready. We will map seq_len and num_cats to seq and cats, respectively. Regarding expand, you are absolutely correct: it only operates on singleton dimensions, meaning our code would require unsqueezing the last two dimensions before calling expand, which we elected to omit for brevity's sake. But we do agree that it makes sense to present working PyTorch code, and will modify the code accordingly. [“Again in the example, you use a constraint with Cat, Dog and Animal as variables. Can you rewrite it using a,b,c”] Variable A maps to Cat, variable B maps to Dog and variable C maps to Animal. We will make the mapping clear and/or modify the example accordingly.
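The Hamming-distance-1 neighborhood described above can be enumerated in a few lines; the vocabulary below is a toy stand-in for the model's token set:

```python
def hamming1_neighbors(tokens, vocab):
    # all sequences that differ from `tokens` in exactly one position,
    # i.e. all ways to fill one blank in "_ love dogs", "I _ dogs",
    # "I love _" with a different token
    neighbors = []
    for i, tok in enumerate(tokens):
        for alt in vocab:
            if alt != tok:
                neighbors.append(tokens[:i] + [alt] + tokens[i + 1:])
    return neighbors

sample = ["I", "love", "dogs"]
vocab = ["I", "love", "dogs", "cats"]  # toy vocabulary
nbrs = hamming1_neighbors(sample, vocab)
# 3 positions x 3 alternative tokens each = 9 neighbors
```

In practice, as noted above, the blanks need not range over the full vocabulary; restricting the alternatives to the tokens of interest (e.g. the possibly-toxic tokens plus a single catch-all token) shrinks the neighborhood considerably.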
[“Also, in appendix A, it would be nice to have the step by step guide showing how the circuit is built from the constraint.”] Thank you for the suggestion. We will add it to the camera-ready. [“Minor: in the related work it shouldn't be HMCCN but C-HMCNN”] Will make sure to fix it in the camera-ready. We are happy to answer any more questions or concerns you might have. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their answers. However, I still do not understand why they cannot compare against approaches (e.g., neurologic decoding) proposed for controllable text generation in the LLM detoxification example. Actually, I think it would be very interesting to: 1. compare the approaches, and 2. try to see what happens when used together. This should be possible because the LLM detoxification example includes only the simple constraint "a list of profanity, slurs, and swear words should not appear as part of the model generations", which can also be dealt with by methods such as NeuroLogic decoding. Also, in the general answer to all reviewers, the authors write that they can deal with non-lexical constraints. Can the authors give an example of a non-lexical constraint that they can deal with in the *text generation* domain? Finally, will the authors change the title? If so, how? --- Reply to Comment 1.1.1: Comment: Thanks again for your response. [neurologic decoding] Here are the numbers comparing against neurologic, after painstakingly getting the code to work (the code base was quite outdated). Results that are not significantly worse than the competition, as determined by a t-test, are boldfaced.
| Method | Full | Toxic | Non-toxic |
|-------------------------|------------------|------------------|------------------|
| GPT-2 | 0.11 +- 0.15 | 0.69 +- 0.13 | 0.09 +- 0.19 |
| GPT-2 + neurologic | 0.08 +- 0.14 | 0.66 +- 0.13 | **0.06 +- 0.08** |
| GPT-2 + word banning | 0.12 +- 0.16 | 0.69 +- 0.13 | 0.09 +- 0.11 |
| PseudoSL | **0.06 +- 0.09** | 0.59 +- 0.04 | **0.06 +- 0.08** |
| PseudoSL + neurologic | **0.05 +- 0.10** | 0.68 +- 0.15 | **0.05 +- 0.07** |
| PseudoSL + word banning | **0.06 +- 0.09** | **0.58 +- 0.01** | **0.06 +- 0.08** |

We observe that while neurologic does lower the toxicity of the GPT-2 model, it fails when used in conjunction with our fine-tuned model. We do, however, show that a simple decoding method, where we set the probabilities of toxic words to 0 at decoding time, improves upon our results, which is in line with our expectations given previous work. As a side note, these results are reported on a random subset of size 1k of the dataset. Attempting to run neurologic decoding on the entire dataset, 100k, using the maximum batch size we could fit on a 48GB GPU yielded an estimated time of 165 hours. [Title] We will be changing the title to "A Pseudo-Semantic Loss for Autoregressive Models with Logical Constraints". [Constraints] Relative to the neurologic paper's definition of "lexical constraints, i.e. which words should or shouldn’t be included in the output text", we can specify more general constraints on the generated sentences such as "every fourth word has to be the same" or "every sentence needs to start with a noun". Essentially, any constraint that can be expressed in propositional logic can be dealt with. Thanks again for continuing to engage with us, and we hope this convinces the reviewer to push their scores towards an accept.
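The "word banning" decoding method mentioned above, zeroing the probability of toxic tokens at decoding time and renormalizing, can be sketched as follows (a hypothetical three-token vocabulary; in practice the mask would cover the ~800 toxic tokens of a real vocabulary):

```python
import math

def ban_words(logits, banned_ids):
    """Next-token distribution with banned tokens forced to zero probability."""
    masked = [-math.inf if i in banned_ids else l for i, l in enumerate(logits)]
    m = max(l for l in masked if l != -math.inf)  # for numerical stability
    exps = [0.0 if l == -math.inf else math.exp(l - m) for l in masked]
    z = sum(exps)
    return [e / z for e in exps]

# Token 1 plays the role of a toxic word: it gets zero mass,
# and the remaining mass is renormalized to sum to one.
probs = ban_words([2.0, 1.0, 0.5], banned_ids={1})
```

Because this only touches the decoding step, it composes with any trained model, which is why it can be stacked on top of the fine-tuned model in the table above.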
Summary: The method proposes a new neuro-symbolic loss for autoregressive models. The proposed method approximates the true expectation of the constraint satisfaction using a pseudolikelihood term computed only in a neighbourhood of the model sample. Strengths: I think the paper proposes an interesting extension of neurosymbolic methods on auto-regressive models addressing some tractability problems. Moreover, I really appreciated the toxic language experiment, and it really shows an important application of neurosymbolic methods in the current AI scenario. Weaknesses: My first concern/doubt is about the exploitation of RNNs as auto-regressive probabilistic model (i.e. p(y_i | y_{<i})). While RNNs have an autoregressive nature in the latent space, they do not model an auto-regressive probabilistic model. What I mean is that, in terms of the y random variables (i.e. output variables), RNNs are fully factorised models conditioned on the parameters and the inputs, which have to be considered as observed variables in a PGM sense. Therefore it is not clear to me whether and how an LSTM is used for modelling p(y_i | y_{<i})), and whether many of the approximations are needed for an LSTM classifier. This question is also related to the baselines of the experiments, as one would be interested in knowing how pure semantic loss would behave on an RNN/LSTM. My second doubt/question is about the fact that pseudolikelihood approximations have been quite standard in statistical relational learning (e.g. MLNs/PSL inference based on pseudolikelihood) and it is not clear to me how the proposed approximations are positioned/inspired/different w.r.t. these methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) How is the LSTM used to model p(y_i | y_{<i})? 2) Do you have any intuition / experiment on how an RNN/LSTM behaves with pure semantic loss? 3) How is the proposed method linked to the use of pseudolikelihoods in statistical relational models (e.g.
MLN learning with pseudolikelihood?) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No explicit limitations are mentioned in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the valuable feedback and for engaging with our work. We are particularly happy with their appreciation for the language detoxification experiment, which we believe has great potential for real-life impact. ["My first concern/doubt is about the exploitation of RNNs as auto-regressive probabilistic model (i.e. $p(y_i | y_{<i}))$. While RNNs have an autoregressive nature in the latent space, they do not model an auto-regressive probabilistic model. What I mean is that, in terms of the y random variables (i.e. output variables), RNNs are fully factorised models conditioned on the parameters and the inputs, which have to be considered as observed variables in a PGM sense. Therefore it is not clear to me whether and how an LSTM is used for modelling $p(y_i | y_{<i}))$, and whether many of the approximations are needed for an LSTM classifier"] It is true that the outputs $y_i$ are \emph{conditionally independent} given the hidden states $h_i$. This, however, does not mean that $p(y_1, \ldots, y_n)$ fails to factorize as $\prod_i p(y_i | y_{<i})$. It also does not buy us much since we do not have oracle access to the hidden states $h_i$ ahead of time. Rather, for some weight matrices $\mathbf{U}, \mathbf{W}$ and $\mathbf{V}$ we can only obtain $\mathbf{h_i} = g(\mathbf{U} \mathbf{h_{i-1}} + \mathbf{W} \mathbf{e_i})$ and consequently $p(y_{i} | y_{<i}) = \mathsf{Softmax}(\mathbf{V} \mathbf{h_i})$ only having processed $h_{i-1}$, and inductively, the entire prefix. Intuitively, the main computational hurdle remains: the probability of $y_i$ depends on the entire prefix $y_0, \ldots, y_{i-1}$. Computing the probability of a Boolean constraint consisting of even a single term asserting e.g.
$y_i$ = dog, which reduces to computing the \emph{marginal} probability $p(y_i=\text{dog})$, requires summing over all $50256^{i-1}$ prefixes ($50256$ being the number of possible tokens at every time step), a summation which is highly intractable even for a prefix of size $10$. ["How is the LSTM used to model $p(y_i | y_{<i})$?"] Consequently, the RNN/LSTM is trained in precisely the same manner as you would train an autoregressive model, i.e. we simply train the model to minimize the error in predicting the true next word in the training sequence given the prefix, using cross-entropy as the loss function (see e.g. https://web.stanford.edu/~jurafsky/slp3/9.pdf, section 9.2 for a treatment of RNNs as language models). The only modification is that we also condition on the input (Warcraft map image and the input Sudoku puzzle in the shortest path and Sudoku tasks, respectively) in a fashion very similar to CNN-BiLSTM models, throughout the entire sequence. ["Do you have any intuition / experiment on how an RNN/LSTM behaves with pure semantic loss?"] One intuition for our approach is that it is the analogue of a first-order approximation to discrete functions with no analytical form. As such, our approach makes use of first-order information to approximate the value of the discrete function about a given point, in our case the model sample. One naïve approach to employing semantic loss would be to assume the distribution is a fully-factorized one and use the conditional probabilities of the model sample $p(y_i | y_{<i})$ as marginal probabilities $p(y_i)$. It is unclear, however, what the semantics of the loss would be in this case, or in what way the number it yields relates to the original quantity.
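The contrast between the exact marginals and the naïve fully-factorized surrogate can be made concrete at toy scale. The sketch below uses a hypothetical two-token vocabulary with hand-picked conditionals, purely for illustration; it computes an exact marginal by brute-force enumeration of all prefixes, which is exactly the computation that blows up as $|V|^{i}$ for real vocabularies:

```python
import itertools

VOCAB = ("dog", "cat")

def cond_prob(token, prefix):
    """A toy autoregressive conditional p(y_i = token | prefix)."""
    p_dog = 0.9 if prefix and prefix[-1] == "dog" else 0.3
    return p_dog if token == "dog" else 1.0 - p_dog

def marginal(position, token):
    """Exact p(y_position = token), summing over every possible prefix."""
    total = 0.0
    for prefix in itertools.product(VOCAB, repeat=position):
        p_prefix = 1.0
        for i, t in enumerate(prefix):
            p_prefix *= cond_prob(t, prefix[:i])
        total += p_prefix * cond_prob(token, prefix)
    return total

# Exact marginal: p(y_1 = dog) = 0.3*0.9 + 0.7*0.3 = 0.48.
# The naive surrogate instead reads off the conditional along one sample,
# e.g. cond_prob("dog", ("dog",)) = 0.9, far from the true marginal.
```

With two tokens the enumeration is instant; at a 50k-token vocabulary the number of prefixes already exceeds $10^{42}$ for a prefix of length $9$, which is the intractability the rebuttal points to.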
Nonetheless, on attempting to train with semantic loss, we achieved an exact match of $63$% and a consistency of $75$%, the former metric being slightly better than the baseline and the latter being slightly worse (see numbers in the paper). [How is the proposed method linked to the use of pseudolikelihoods in statistical relational models] Our approach was inspired by such methods, although the end goals are in a sense at odds. Precisely, in statistical relational learning, maximizing the pseudolikelihood in place of the likelihood sidesteps the computation of the, very often intractable, partition function. In our case, however, the partition function *is* precisely what we're interested in computing, or at the very least estimating. Towards that end, the role of pseudolikelihood is as a stepping stone to help “massage” the autoregressive distribution into a form that is amenable to the reuse of computations, in a fashion similar to dynamic programming. To see what we mean by reuse of computations, consider, e.g., the easiest case: when we have a fully-factorized distribution, the probability of every sentence where $y_i$ is true depends on the same probability $p(y_i)$. [“No explicit limitations are mentioned in the paper.”] Please see section F of the appendix for some potential limitations of our proposed approach. --- Rebuttal 2: Comment: We wanted to ensure we've satisfactorily addressed the reviewer's concerns. To that end, we wanted to emphasize that, just like the output distribution defined by GPT, the output distribution defined by an RNN is auto-regressive. Simply put, to obtain the hidden state at time $t$, the RNN needs to have processed the entire prefix up to time $t-1$. $h_t$, and consequently $y_t$, is then a function of the current token $x_t$ and the prefix through $h_{t-1}$. This is very similar to the way in which, in GPT, $y_t$ is a function of the current token and the prefix through the attention at time $t$.
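The prefix dependence argued above can be seen in a minimal sketch: a toy scalar-state RNN with made-up weights standing in for the matrices $\mathbf{U}, \mathbf{W}, \mathbf{V}$. The next-token distribution cannot be produced without folding in the entire prefix, and two prefixes sharing their last token still yield different distributions:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def rnn_next_token_dist(prefix, U=0.5, W=0.8, V=(1.0, -1.0)):
    """p(y_i | y_{<i}) from a toy 1-d RNN: h = tanh(U*h + W*x).

    The hidden state, and hence the output distribution, is a function
    of the ENTIRE prefix: there is no way to obtain it without first
    processing every earlier token, in order.
    """
    h = 0.0
    for x in prefix:
        h = math.tanh(U * h + W * x)
    return softmax([v * h for v in V])

# Same last token, different prefixes -> different next-token distributions.
d1 = rnn_next_token_dist([1, 0])
d2 = rnn_next_token_dist([0, 0])
```

This is the sense in which the output distribution is autoregressive even though the $y_i$ are conditionally independent given the hidden states: the hidden state itself is a deterministic summary of the whole prefix.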
--- Rebuttal Comment 2.1: Comment: I think this depends on how the RNN is designed, and this was my main question. If the output at time (t-1) is fed as input to timestep (t), then, yes, I agree that there exists a conditional dependence of the variable y_t on y_{t-1}. This is indeed what you would do for text generation. But, this is not a general property of recurrent networks. As for the general definition, when the x and y spaces are disjoint, the only dependence of the output is on the state h_{t-1} and the history input x_{0:t}, not on the previous output y, which would make it autoregressive on this space. Reading the answer of the authors, it seems this is the way it is implemented. I missed this detail in the paper and I think it is fundamental to give credit to the experimental campaign. I will therefore change my score accordingly.
Summary: This paper proposes adding a pseudo-semantic loss into the training of autoregressive models, so that the model can learn logical constraints. Specifically, this approach includes a data augmentation by perturbation (e.g., by Hamming distance), and then adding a penalty loss for generation steps that are not following given constraints. Strengths: I don't see obvious strengths. Weaknesses: * The writing of this paper is of low quality. For example, the meaning of Figure 1 is vague. As a front-page figure, it is even harder to understand than the algorithm pseudo-code. * The most important claim of this paper, "the first approach to learning with constraints under auto-regressive generative models", is actually not true. Instead, there is already a bunch of previous work on the topic of "generation with constraints", such as [1] and its followup work. [1] NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints (NAACL 2021) Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: I don't have a question in this stage. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 1 poor Contribution: 1 poor Limitations: I don't see the discussion of limitations in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your response. [The writing of this paper is of low quality. For example, the meaning of Figure 1 is vague. As a front-page figure, it is even harder to understand than the algorithm pseudo-code.] The figure is just meant to convey our approach in broad strokes. An autoregressive model defines a distribution over the output space, shown in black. It places probability mass both on sentences that violate the constraint (white area under the curve) and on those that satisfy it (pink area under the curve). Our aim throughout the paper is to shift the distribution, so the autoregressive model places all its mass only on sentences that satisfy the constraint. Typically, we can achieve this by simply maximizing the probability of the constraint w.r.t. the output distribution. The problem is that computing the probability of the constraint w.r.t. the output distribution is intractable when the distribution is autoregressive. What we propose is a local, tractable approximation of the original distribution (shown in grey) under which we can approximate the probability of the constraint. It is local in the sense of being correct within a neighborhood of a sample (the red cross). This is similar to how we compute a linear approximation of a function about a point, except here we have a discrete function with no analytical form. [The most important claim of this paper, "the first approach to learning with constraints under auto-regressive generative models", is actually not true. Instead, there is already a bunch of previous work on the topic of "generation with constraints", such as [1] and its followup work.] There has indeed been a large body of work on controllable text generation, which we’re happy to acknowledge in the related works section of our camera-ready.
We would like to point out, however, that when specifying “learning with constraints”, we mean it in the specific sense of learning to maximize the training data likelihood, through cross-entropy, subject to some constraint. This can be achieved through the addition of a *regularization* or *penalty* term that *also* ensures the network's outputs satisfy the constraint. In such a setting, all the work that we are aware of only considers fully-factorized distributions. Please see the general response regarding how learning with constraints relates to constrained generation. [“I don't see the discussion of limitations in this paper.”] Please see section F of the appendix for some potential limitations of our proposed approach. We are happy to engage with the reviewer regarding any misunderstandings regarding the paper's contribution as well as the presentation. Our hope is that this paper is accessible to a wide audience. --- Rebuttal 2: Comment: We wanted to re-emphasize the key difference between our work and "generation with constraints" works such as NeuroLogic. Our proposed approach is a **training time** approach that biases the function learnt from the data to respect certain constraints by adding a regularization term to the cross-entropy loss. It **does not** interfere with the generation process at test time. "Generation with constraints" approaches, on the other hand, **do not interfere with the training process**, and only modify an **already trained** model's predictions at test time. --- Rebuttal Comment 2.1: Title: Re: Official Comment by Authors Comment: Thanks for your clarification on the difference between your work and NeuroLogic.
However, I would like to maintain my score for now, because I am still not convinced regarding the validation of the big claim "the first approach to learning with constraints under auto-regressive generative models": * "Learning" is a rather high-level concept, and people don't always understand it as "training", especially when it doesn't appear together with "inference", so it might be better to claim it as "training generative models with constraints". * Even considering training-time approaches only, there are still a bunch of work closely related, especially [1] and its follow-up work. After a quick reading of [1], I even feel the proposed approach in this paper can be seen as a special case of [1]. Specifically, [1] adapts a posterior regularization formulation to text generation. It has a search space approximation and a regularization term in its objective function, corresponding to the perturbation of y and the penalty loss term in this paper, respectively. There is more related work mentioned in [2], such as [3]. * Missing discussion and citation for all the above closely related work, including NeuroLogic, raises an overall concern on the thoroughness of the background introduction in this paper. [1] Prior Knowledge Integration for Neural Machine Translation using Posterior Regularization (ACL 2017) \ [2] A Survey of Knowledge-Enhanced Text Generation (ACM Computing Survey, 2022) \ [3] Deep Generative Models with Learnable Knowledge Constraints (NeurIPS 2018) --- Reply to Comment 2.1.1: Comment: Thank you for your response and help towards improving the paper. ["might be better to claim it as "training generative models with constraints""] Thank you for your suggestion. We will make sure to clarify it in the paper. ["considering training-time approaches only, there are still a bunch of work closely related…, I even feel the proposed approach in this paper can be seen as a special case of [1]. "] Thank you for pointing out this related work.
We are happy to discuss it in our related works section. It is unclear, however, how applicable it is to our setting. The approach in [1] rests upon the framework of posterior regularization [4], which requires that we design feature functions encoding the constraint. Such feature functions need to *factorize* over the tokens of the sentence (see section 2.5 in [4]). That might be doable for very simple constraints such as those considered in [1], but not in general. Setting aside that issue, the approach in [1] minimizes an extra KL-divergence between the autoregressive distribution and a log-linear variational distribution. For the constraints considered, computing such a KL-divergence is intractable. Even approximating the KL-divergence is hard, due to the computational hardness of sampling from the constraint ([1] samples from the autoregressive distribution and thresholds using the feature function, which does not sample from the correct distribution). There is also no notion of perturbations; these are simply samples from the distribution. We would also argue that log-linear models defined by a set of linear constraints as variational distributions have been in use since posterior regularization was first introduced. [1] is simply an instantiation of such a framework. In our case, however, our distributional approximation is novel: it is a first-order approximation of the true autoregressive distribution. Consequently, we are **exactly computing the probability of the constraint being satisfied w.r.t. our proposed approximate distribution**. ["Missing discussion and citation"] We are happy to cite any and all related work. We were of course aware of neurologic and other inference time approaches, but felt they might be orthogonal to the approach proposed here. Having discussed it with the reviewers, however, we will happily include a discussion of such works. Moreover, our work tends to focus on developing methodology for *general* constraints.
And despite our best efforts, due to the massive number of works, it can be easy to miss methods geared towards very specific tasks from other communities. We greatly appreciate the reviewer bringing these to our attention. ["Claims"] Having said all of the above, fundamentally, the issue at hand seems to be the reviewer's disagreement with our overarching claim. We are happy to relax our claim to something along the lines of "Unlike previous works that are only able to approximately handle simple constraints under relaxations of autoregressive distributions, our approach injects non-trivial constraints, that don't easily factorize, as part of the training process, computing the probability of the constraint exactly w.r.t an approximate distribution". Thanks again for engaging with us, and we hope this convinces the reviewer to push their scores towards an accept. [References]: [4] Posterior Regularization for Structured Latent Variable Models (JMLR 2010)
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their valuable feedback towards improving our paper. We are happy to see the reviewers' excitement regarding the posed problem, the proposed solution as well as the empirical evaluation, with an emphasis on the LLM detoxification experiment. There have been several comments regarding controllable language generation and its relevance to our approach. There has indeed been a lot of work on controllable text generation, which we’re happy to acknowledge and discuss in our camera-ready. We would like to point out, however, that both approaches are not at odds. Indeed, they have been shown to be *complementary* by previous work, both old [1] and new [2] [3], theoretically and empirically, with methods combining both paradigms attaining the highest performance. In the specific context of language detoxification, [4] has shown complementing domain-adaptive training, such as our approach, with decoding-time constraints to yield the best results. It is, however, non-trivial to generalize such decoding methods beyond *lexical constraints*, which seem to be the focus of all such approaches, to settings such as Sudoku or Warcraft. Reviewers have also suggested adding a step-by-step construction of the logical circuit in our examples, which we are happy to add in the camera-ready. We will now address each reviewer's individual concerns. References [1] Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007. Guiding Semi-Supervision with Constraint-Driven Learning. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. [2] Kareem Ahmed, Eric Wang, Guy Van den Broeck and Kai-Wei Chang. 2021. Leveraging Unlabeled Data for Entity-Relation Extraction through Probabilistic Constraint Satisfaction. ArXiv abs/2103.11062. [3] Kaifu Wang, Hangfeng He, Tin D. Nguyen, Piyush Kumar and Dan Roth. 2023. On Regularization and Inference with Label Constraints.
In Proceedings of the 40th International Conference on Machine Learning. [4] Boxin Wang, Wei Ping, Chaowei Xiao, Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Bo Li, Anima Anandkumar and Bryan Catanzaro. 2022. Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models. ArXiv abs/2202.04173. Pdf: /pdf/cd79305c8d35711a24679b642eb08459bd3f0f6d.pdf
NeurIPS_2023_submissions_huggingface
2023
A Hierarchical Training Paradigm for Antibody Structure-sequence Co-design
Accept (poster)
Summary: This paper introduces a novel approach called the hierarchical training paradigm (HTP) to address the antibody co-design problem. It leverages both geometric neural networks and large-scale protein language models and proposes four levels of training stages to efficiently exploit the evolutionary information encoded in the abundant protein sequences and complex binding structures. Strengths: 1. The paper is well-written and easy to follow. 2. The authors explored using multi-modal data from different domains to enhance the antibody sequence-structure co-design performance, which is a novel perspective. 3. Extensive experiments show that HTP significantly outperforms previous methods. Weaknesses: 1. The code is not provided at the current stage. 2. The reported baseline results are inconsistent with the original papers. For example, in Table 1, the CDR-H3 AAR of MEAN is only 22.56%, which is much lower than the 36.38% in the original paper. Therefore, the comparison may be unfair. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our gratitude for your valuable feedback and constructive comments on our paper. We appreciate your recognition of the strengths of our work and acknowledge the weaknesses pointed out. We have carefully reviewed your feedback and would like to address each concern: 1. Code Availability: We apologize for not providing the code at the current stage. We fully understand the importance of code reproducibility for advancing research in the field and will release the code and associated resources upon acceptance. 2. Inconsistent Baseline Results: We appreciate your observation regarding the inconsistency of the reported baseline results with the original papers. It is worth noting that we follow DiffAb [A] and adopt a completely different dataset split from MEAN [B]. To be specific, the selected data points in SAbDab are divided into training and test data based on their release date and CDR sequence identity. The test split includes protein structures released after December 24, 2021, and structures with any CDR similar to those released after the date (sequence identity higher than 50%). Antibodies in the test set are further clustered with 50% CDR sequence identity to remove duplicates, finally resulting in 20 antibody-antigen structures. The training split contains complexes not involved during the curation of the test split. Meanwhile, MEAN [B] uses the split setting of RefineGNN [C], which separates the entire dataset into training, validation, and test sets according to the clustering of CDRs to maintain the generalization test. Then they divide all clusters into training, validation, and test sets with a ratio of 8:1:1. As a consequence, the split setting of DiffAb [A] poses a greater challenge than that of MEAN [B] and RefineGNN [C]. This statement can be verified via the discrepancy in their reported results. 
For instance, RefineGNN [C] achieves an AAR of 39.40%, 37.06%, and 18.88% for CDR-H1, CDR-H2, and CDR-H3 in the MEAN paper [B], but only reaches an AAR of 27.77%, 27.04%, and 8.00% for CDR-H1, CDR-H2, and CDR-H3 in the DiffAb paper [A]. This phenomenon indicates that the same algorithm can encounter a decline in its performance when facing a more difficult data-splitting mechanism. Based on this fact, we believe our reproduction of several important baselines is implemented well and the comparison in our manuscript is fair. However, we are still thankful for your concerns and will add a few sentences to clarify the influence of dataset splitting over the model performance. [A] Luo, Shitong, et al. "Antigen-specific antibody design and optimization with diffusion-based generative models for protein structures." Advances in Neural Information Processing Systems 35 (2022): 9754-9767. [B] Kong, Xiangzhe, Wenbing Huang, and Yang Liu. "Conditional antibody design as 3d equivariant graph translation." arXiv preprint arXiv:2208.06073 (2022). [C] Jin, Wengong, et al. "Iterative refinement graph neural network for antibody sequence-structure co-design." arXiv preprint arXiv:2110.04624 (2021). --- Rebuttal Comment 1.1: Title: Reply to the authors Comment: I have read the reply and appreciate the author's reply. My concerns are mostly resolved. Thanks!
Summary: In this study, the authors point out that the efficacy of existing co-design methods is predominantly limited by the small number of antibody structures. They propose a hierarchical training paradigm (HTP), a novel unified prototype to exploit multiple biological data resources, aiming to fully release the potential of geometric graph neural networks (GGNNs) for the sequence and structure co-design problem. Strengths: - This study is well-motivated, and sheds light on integrating big data (including sequence data and protein complex data) into antibody design. - The writing is clear and the pipeline is easy to follow. Weaknesses: - Missing comparison with the SOTA method, dyMEAN, whose AAR of CDR-H3 is 43.65% (Table 4). (End-to-End Full-Atom Antibody Design, ICML 2023) - Some results are inconsistent with the reference. In the original paper of MEAN, the AAR of CDR-H3 is reported to be 36.38% or 39.87% under different evaluation settings (Table 1, MEAN), while in this manuscript, MEAN's AAR is 22.56% (Table 1). - Some results are confusing. The performance on fixed-backbone design (Table 2) is lower than that on co-design (Table 1). For example, the AARs of CDR-L1/L2/L3 on co-design are 91.13%/89.80%/73.82%, while the performance on fixed-backbone design is only 77.49%/74.15%/76.96%. The AARs of CDR-H3 are both 40.98% on the two tasks. Fixed-backbone design should be much easier than the co-design task. Can you give an explanation? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - In line 590, Appendix, is the weight decay 1e-5? - Missing References: Wang et al, On Pre-training Language Model for Antibody, ICLR 2023 - How do you split the epitope from the antigen? Is there any threshold? - In this work, you build a graph with a cutoff of 8A, and initialize the antibody with the center of the residues before/after the CDR.
Is it possible that the initialized antibody is far from the antigen, i.e., larger than 8A, such that the antibody can not interact with the antigen? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review of our study. We appreciate your positive feedback on the motivation of our work and the clarity of the writing, as well as your constructive comments that will help us improve the quality and accuracy of the paper. We have carefully considered each of the points you raised and would like to address them accordingly: * Comparison with dyMEAN dyMEAN [A], indeed, is an extraordinary work for antibody sequence-structure co-design and extends the framework of MEAN [B]. However, the problem setting of dyMEAN [A] is completely different from most existing co-design architectures (i.e., MEAN [B], RefineGNN [C], HERN [D], DiffAb [E], and ours). To be explicit, dyMEAN [A] assumes that the 1D incomplete antibody sequences (without CDRs) are given but no antibody structures are provided. To resolve this dilemma, dyMEAN proposes an end-to-end pipeline to replace the previous multi-stage schema: IgFold for structure prediction + HDock for docking on the target epitope + MEAN [B] for binding CDR generation. In contrast, we follow the conventional setting of RefineGNN [C], HERN [D], DiffAb [E], and MEAN [B], and assume that the incomplete antibody structures (without CDRs) are offered. Therefore, it is hard to directly compare the effectiveness of those co-design approaches with dyMEAN [A]. In Table 1 of the dyMEAN paper [A], the reported results for DiffAb [E], MEAN [B], and HERN [D] all adopt the above-mentioned pipeline (IgFold -> HDock -> CDR generation -> Rosetta) rather than only employing those algorithms. Thus, we are unable to make a direct, fair comparison with dyMEAN in our experimental section. * Inconsistent Results of MEAN Thank you for bringing to our attention the discrepancy in the reported baseline results compared to the original papers. We acknowledge the importance of this observation and would like to clarify the differences in the dataset-splitting mechanism used in our work compared to previous studies. 
In our study, we followed the dataset-splitting approach of DiffAb [E], which uses a completely different methodology compared to MEAN [B]. Specifically, we utilized the SAbDab dataset and split the data points into training and test sets based on their release date and CDR sequence identity. The test set includes protein structures released after December 24, 2021, and structures with any CDR similar to those released after that date (sequence identity higher than 50%). To eliminate duplicates, we further clustered antibodies in the test set with 50% CDR sequence identity, resulting in a final set of 20 antibody-antigen structures. On the other hand, the training split contains complexes that were not involved in the curation of the test split. In contrast, MEAN [B] employs the split setting of RefineGNN [C], which divides the entire dataset into training, validation, and test sets based on the clustering of CDRs to maintain generalization. Subsequently, all clusters are divided into training, validation, and test sets in an 8:1:1 ratio. Due to the different data-splitting mechanisms, the split we adopt from DiffAb [E] poses a greater challenge than that of MEAN [B] and RefineGNN [C]. This challenging dataset split is likely responsible for the observed discrepancies in the reported results. For instance, RefineGNN [C] achieves an AAR of 39.40%, 37.06%, and 18.88% for CDR-H1, CDR-H2, and CDR-H3 in the MEAN paper [B]. However, in the DiffAb paper [E], the AAR for CDR-H1, CDR-H2, and CDR-H3 drops to 27.77%, 27.04%, and 8.00%, respectively. This difference indicates that the same algorithm may experience a decline in performance when facing a more challenging data-splitting mechanism. In light of this fact, we believe that our reproduction of several important baselines is well-implemented, and the comparison in our manuscript is fair, considering the more difficult dataset splitting used in our work. 
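For concreteness, the date-plus-identity split described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual code; `cdr_identity` is a hypothetical stand-in for a real sequence-alignment routine, and the entry format is assumed:

```python
# Hypothetical sketch of the DiffAb-style split: a structure goes to the
# test split if it was released after the cutoff date, or if any of its
# CDRs is >50% identical to a CDR released after that date.
from datetime import date

def split_by_date_and_identity(entries, cutoff_date, cdr_identity, thr=0.5):
    """entries: list of (pdb_id, release_date, [cdr_seq, ...])."""
    # All CDR sequences belonging to structures released after the cutoff.
    post = [cdr for _, d, cdrs in entries if d > cutoff_date for cdr in cdrs]
    train, test = [], []
    for pdb_id, d, cdrs in entries:
        leaks = any(cdr_identity(c, p) > thr for c in cdrs for p in post)
        (test if d > cutoff_date or leaks else train).append(pdb_id)
    return train, test
```

A subsequent clustering pass (not sketched here) would then deduplicate the test split at 50% CDR identity.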
However, we are grateful for your concerns, and we will add a few sentences to our paper to clarify the influence of dataset splitting on the model performance. * Performance Discrepancy in Fix-Backbone Design We appreciate your astute observation regarding the contrast in performance between the co-design and fix-backbone design tasks. Indeed, we concur with your insight that fix-backbone design should be relatively less complex since it doesn't entail predicting the precise positions of CDRs. In our initial implementation for CDR-L1/L2/L3, we admit that we did not strictly follow a grid search of the entire hyperparameter space. Instead, we opted for early stopping during training for convenience once we observed an acceptable result that outperformed all existing baselines. We acknowledge that this approach might not have fully explored the optimal combination of different hyperparameters. In response to your valuable feedback, we have re-run the experiments and conducted a thorough grid search to find the best combination of hyperparameters for CDR-L1/L2/L3 in fix-backbone design. The updated results are listed below, and as you can see, the overall performance of CDR-L1/L2/L3 in fix-backbone design is indeed better than that of the sequence-structure co-design task:

| Metric | CDR-L1 AAR | CDR-L1 Perplexity | CDR-L2 AAR | CDR-L2 Perplexity | CDR-L3 AAR | CDR-L3 Perplexity |
|--------|------------|-------------------|------------|-------------------|------------|-------------------|
| HTP | 93.62$\pm$1.5 | 1.09$\pm$0.08 | 91.46$\pm$1.7 | 1.58$\pm$0.10 | 80.71$\pm$1.1 | 2.66$\pm$0.10 |

Additionally, we extend our gratitude for identifying a typographical error in Table 2. To rectify this, we'd like to clarify that the AAR for CDR-H3 in the fix-backbone design task stands at 43.25. We sincerely appreciate your attention to detail and your insistence on rigor in our experimental setup. Your feedback has led us to reevaluate our approach and perform a comprehensive grid search, which has yielded more reliable and accurate results. 
--- Rebuttal Comment 1.1: Title: Rebuttal by Authors (Continued) Comment: * Weight Decay in Line 590 Yes, you are correct. The weight decay in line 590, as described in the appendix, is set to 1e-5. We will ensure that this information is clearly stated in the revised version. * Missing References We apologize for the oversight in missing the reference to EATLM [F], which is an excellent study introducing a new pre-trained antibody language model. It leverages an ancestor germline prediction (AGP) task and a mutation position prediction (MPP) task to enable evolution awareness. Regrettably, however, at the time we submitted our manuscript to NeurIPS, EATLM had released its code but had not made the pretrained weights publicly available. In fact, the pretrained language model weights are still not accessible today (see https://github.com/dqwang122/EATLM). Therefore, we were unable to utilize EATLM and examine its benefits for our antibody design problem. As a remedy, we only compare our algorithm with some existing antibody-specific language models, including AntiBERTa [H] and AbLang [G]. But as you suggested, we will include some necessary discussion about EATLM in the revised version of the paper. * Epitope-Antigen Split The splitting of the epitope from the antigen is an essential step in our method. In HERN [D], they selected the $m\in [20, 40, 80]$ closest residues to the antibody as the epitope. However, for our epitope-based CDR coordinate initialization, we adopt the widely acknowledged approach of determining the epitope residues based on their proximity to the antibody residues. That is, we recognize antigen residues as epitopes if any of their heavy atoms is within 8A of a heavy atom from any antibody chain (either heavy or light). This aligns with several important studies in epitope prediction [I]. We will provide additional details and clarify this threshold used for the epitope-antigen split in the revised manuscript. 
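The 8A heavy-atom criterion described above can be sketched as follows. This is an illustrative reconstruction, not the authors' pipeline: the data structures are hypothetical, and a real implementation would parse PDB files (e.g., with Biopython) rather than take coordinate tuples:

```python
# Illustrative epitope selection: an antigen residue counts as epitope
# if any of its heavy atoms lies within `cutoff` of any heavy atom of
# any antibody chain. Input formats are assumptions for this sketch.
from math import dist

def epitope_residues(antigen_atoms, antibody_atoms, cutoff=8.0):
    """antigen_atoms: iterable of (residue_id, (x, y, z)) heavy atoms;
    antibody_atoms: iterable of (x, y, z) antibody heavy atoms."""
    antibody_atoms = list(antibody_atoms)
    epitope = set()
    for res_id, coord in antigen_atoms:
        if res_id in epitope:
            continue  # residue already flagged by another of its atoms
        if any(dist(coord, other) <= cutoff for other in antibody_atoms):
            epitope.add(res_id)
    return epitope
```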
* Graph Initialization We acknowledge your concern about the possibility of initializing the antibody far from the antigen, potentially exceeding the predefined cutoff. To better understand the impact of graph connectivity between antigens and initialized CDRs, we computed the explicit distance between each initialized CDR and its antigen, and the results show that 15.26% of CDRs cannot interact with the antigen within a threshold of 8A. This phenomenon points to a possible direction for further promoting the performance of our model by building interactions between the initialized CDR and the antigen. For instance, we can enlarge the receptive field of the initialized CDR from 8A to 12A or 16A. Or instead, we can connect the initialized CDR to the $k$ closest residues in the antigen. [A] Kong, Xiangzhe, Wenbing Huang, and Yang Liu. "End-to-End Full-Atom Antibody Design." arXiv preprint arXiv:2302.00203 (2023). [B] Kong, Xiangzhe, Wenbing Huang, and Yang Liu. "Conditional antibody design as 3d equivariant graph translation." arXiv preprint arXiv:2208.06073 (2022). [C] Jin, Wengong, et al. "Iterative refinement graph neural network for antibody sequence-structure co-design." arXiv preprint arXiv:2110.04624 (2021). [D] Jin, Wengong, Regina Barzilay, and Tommi Jaakkola. "Antibody-antigen docking and design via hierarchical structure refinement." International Conference on Machine Learning. PMLR, 2022. [E] Luo, Shitong, et al. "Antigen-specific antibody design and optimization with diffusion-based generative models for protein structures." Advances in Neural Information Processing Systems 35 (2022): 9754-9767. [F] Wang, Danqing, Y. E. Fei, and Hao Zhou. "On pre-training language model for antibody." The Eleventh International Conference on Learning Representations. 2023. [G] Olsen, Tobias H., Iain H. Moal, and Charlotte M. Deane. "AbLang: an antibody language model for completing antibody sequences." Bioinformatics Advances 2.1 (2022): vbac046. [H] Leem, Jinwoo, et al. 
"Deciphering the language of antibodies using self-supervised learning." Patterns 3.7 (2022). [I] Tubiana, Jérôme, Dina Schneidman-Duhovny, and Haim J. Wolfson. "ScanNet: an interpretable geometric deep learning model for structure-based protein binding site prediction." Nature Methods 19.6 (2022): 730-739.
Summary: This paper proposes a hierarchical training paradigm for antibody sequence-structure codesign. It incorporates different sources of data, including general protein sequences, antibody sequences, general protein-protein complexes, and antibody-antigen complexes. The motivation is that there are a lot more data on general proteins, and pre-training the model on these non-antibody data may provide an additional boost to model performance. Specifically, it uses the pre-trained ESM-2 language model to incorporate all the knowledge learned from general protein sequence data. It then fine-tunes it on all antibody sequences in the Observed Antibody Space (OAS). The fine-tuned language model is used to calculate features for antibody and antigen sequences. Next, the model is trained on all protein-protein complexes in DIPS. Lastly, the model is fine-tuned on antibody-antigen complexes in SAbDab. The method shows substantial improvement on antibody design benchmarks compared to existing baselines. Strengths: * This paper proposes to incorporate multiple sources of biological data for model pre-training. If implemented properly, this is an important contribution to the field. * The evaluation is comprehensive. It includes all the recent baselines proposed in the field. The improvement over existing methods is substantial (but it can be due to potential data leakage, see discussion below). Weaknesses: * The main limitation of this paper is potential data leakage. ESM-2 is trained on all protein sequences in the UniRef database (September 2021 version). The test set includes sequences released after December 2021, as well as structures with any CDR similar to those released after this date (with sequence identity higher than 50%). Therefore, it is quite possible that the training set of ESM-2 includes antibody sequences similar to the test set. 
Likewise, OAS may also contain antibody sequences similar to the test set, and the authors did not perform any filtering to ensure this does not happen. Similarly, DIPS also contains antibody-antigen structures. Even though it only contains structures released before 2019, it may still contain antibody or antigen sequences similar to the test set. Can the authors provide evidence that there is no potential data leakage? * The model architecture is an adaptation of existing models (e.g., EGNN). The technical innovation of the model architecture is relatively weak (though this is not the focus of the paper). * The antibody-antigen complex test set contains only 21 structures, which is too small for evaluation. Expansion of the test set is necessary (at least 100 structures are needed). Technical Quality: 1 poor Clarity: 3 good Questions for Authors: * Can you plot sequence similarity between all the different training data sources (general protein sequences in UniRef50, general protein-protein complexes in DIPS, antibodies in OAS)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations and negative societal impact in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and comments on our paper. We appreciate your acknowledgment of the potential importance of incorporating multiple sources of biological data for model pre-training. We have carefully considered your concerns and would like to address them as follows: * Data Leakage Or Not (1) We acknowledge the concern you raised regarding the potential high sequence similarity between the different datasets used in our proposed approach. In response, we have conducted a comprehensive analysis of the sequence similarity between the various pretraining data sources and the test set in SAbDab. This analysis includes general protein sequences from UniRef50, general protein-protein complexes from DIPS, and antibodies from OAS. We have plotted the sequence similarity distributions and have attached the corresponding figures in the uploaded PDF in the general author rebuttal. Below, we present the statistical findings from this analysis: | Dataset | Mean | Std | Min. | 25% | 50% | 75% | Max. | |-----------------|-------------------|-------------------|-------------------|-------------------|-------------------|-----|------| | UniRef50 | 0.051 | 0.021 | 0.002 | 0.041 |0.052 | 0.059 | 0.260 | | DIPS | 0.188 | 0.036 | 0.000 | 0.183 | 0.198 | 0.208 | 0.429 | | OAS | 0.246 | 0.017 | 0.200 | 0.235 | 0.243 | 0.254 | 0.401 | From the statistical results, it is evident that the maximum sequence similarity values for these three datasets are all below 0.5. Based on this evidence, we can confidently conclude that neither UniRef50, DIPS, nor OAS contains sequences that are substantially similar to the SAbDab test set. Consequently, we firmly believe that there is no potential data leakage in our hierarchical training paradigm. 
(2) Furthermore, we would like to address the broader question of whether it is acceptable to utilize additional publicly available unlabeled data for model pretraining, even if there is some distributional overlap with the downstream data distribution. Our viewpoint aligns with common practices in various domains of AI, including computer vision, natural language processing, and AI for scientific research. It is indeed a reasonable practice to leverage self-supervised learning on independently collected databases to enable deep learning models to capture a broader feature space. This broader exposure often leads to enhanced generalization capabilities, especially when dealing with out-of-distribution samples, such as proteins from different families or clusters [A]. It is noteworthy that several prior studies in computational biology and bioinformatics have successfully employed pretrained language models like ESM or ProtTrans to enhance model performance. These studies did not intentionally exclude samples belonging to distributions closely related to the test set. For instance, DiffDock [B] utilizes ESM-2 to derive residue embeddings for receptors, and RDE [C] pretrains a Graph Transformer on PDB-REDO for side-chain angle recovery, which is then applied to predicting mutation effects (ddG) on Skempi. CLEAN [D], featured in Science, initializes residue features with ESM-2 to achieve superior accuracy, reliability, and sensitivity in assigning Enzyme Commission (EC) numbers compared to existing tools. Beyond language models, even structurally pretrained algorithms often assess their effectiveness without strictly filtering out overly similar structures. For example, GearNet [E] employs pretraining on the AlphaFold protein structure database (805K structures) and evaluates its model across diverse downstream tasks like Enzyme Commission (EC) number prediction, Gene Ontology (GO) term prediction, Fold classification, and Reaction classification. 
Similarly, PromptProtein [F] pretrains on UniRef50, PDB (200K structures), and STRING datasets, demonstrating efficacy in tasks involving Gene Ontology and Enzyme Commission numbers. All the aforementioned examples strongly support our assertion that pretraining on publicly available sequential or structural databases can indeed empower deep learning models effectively. Crucially, this empowerment is feasible as long as the approach avoids any use of downstream test data and corresponding labels. In our HTP approach, datasets like OAS, UniRef, and DIPS are independently collected and do not reference any data points in SAbDab. This ensures that regardless of the specific downstream problem or test data employed, our pretraining steps in HTP can consistently transfer prior knowledge to various real-world problems. Thank you for your thoughtful considerations and queries. We are confident that our approach adheres to rigorous standards of integrity and robustness, and we appreciate the opportunity to clarify these points. [A] Erhan, Dumitru, et al. "Why does unsupervised pre-training help deep learning?." Proceedings of the thirteenth international conference on artificial intelligence and statistics. JMLR Workshop and Conference Proceedings, 2010. [B] Corso, Gabriele, et al. "Diffdock: Diffusion steps, twists, and turns for molecular docking." ICLR 2023. [C] Luo, Shitong, et al. "Rotamer Density Estimator is an Unsupervised Learner of the Effect of Mutations on Protein-Protein Interaction." ICLR 2023. [D] Yu, Tianhao, et al. "Enzyme function prediction using contrastive learning." Science 379.6639 (2023): 1358-1363. [E] Zhang, Zuobai, et al. "Protein representation learning by geometric structure pretraining." ICLR 2023. [F] Wang, Zeyuan, et al. "Multi-level Protein Structure Pre-training via Prompt Learning." The Eleventh International Conference on Learning Representations. 2022. 
--- Rebuttal Comment 1.1: Comment: * Model Architecture We value your feedback regarding the technical innovation of our model architecture. While our primary emphasis lies in the integration of multiple data sources for model pre-training, we recognize the significance of introducing a clear and novel architectural framework. However, it's important to highlight that our model does present notable advancements over the standard EGNN. To illustrate, we've introduced distinct intra- and inter-graph message-passing schemes to discern interactions within the same graph from those across different graphs. An enhanced self-attention mechanism has been employed to more efficiently aggregate inter-graph information. Moreover, our model strategically updates the coordinates of residues solely within the Complementarity Determining Regions (CDRs), the segments intended for design. Meanwhile, the positions of other regions remain fixed. We will refine our paper to more prominently underscore these distinctive model adaptations, particularly how we have tailored the EGNN to suit the specific requirements of sequence-structure co-design. * Test Set Size: We readily acknowledge the limitation posed by the relatively small size of our test set, comprising only 21 antibody-antigen complex structures. Our approach to dataset splitting aligns with the strategy outlined in DiffAb [A], wherein data points are allocated into training and test sets based on release dates and CDR sequence identities. Specifically, the test set encompasses protein structures released after December 24, 2021, along with structures that share CDR similarity with post-date releases (sequence identity exceeding 50%). We've taken care to address duplicate entries by clustering antibodies within the test set using a 50% CDR sequence identity threshold, ultimately culminating in a final collection of 20 unique antibody-antigen structures. 
Conversely, our training split consists of complexes that were not involved in curating the test split. We acknowledge that alternate data splitting methods, such as those employed by RefineGNN [B] and MEAN [C], may yield larger test datasets. However, it's important to note that our chosen data-splitting approach, as verified, poses a more rigorous challenge compared to RefineGNN and MEAN. This stringent approach enables us to comprehensively evaluate the efficacy of diverse co-design algorithms. While we appreciate the significance of expanding the test set size for a more comprehensive evaluation, we will take steps to enhance its diversity and representativeness in our future work. We aim to augment the test set size to encompass a minimum of 100 structures, thereby bolstering the reliability and generalizability of our evaluation outcomes. Thank you for your insightful feedback, which serves to enhance the clarity and comprehensiveness of our manuscript. Should you have any further questions or recommendations, please don't hesitate to communicate them to us. Title: Rebuttal by Authors (Continued) --- Rebuttal Comment 1.2: Title: sequence similarity between sabdab and uniref50/dips | data leakage Comment: Dear authors, The sequence similarity results reported about SAbDab and UniRef50 are surprising. Looking e.g., at https://www.rcsb.org/sequence/6FE4, a random sequence within SAbDab, one can see a reference to the UniProtKB accession: p09386. Generally, SAbDab sequences are contained within UniProtKB. Further, UniRef50 is built by clustering sets of sequences from UniProtKB. Further, as noted within the DIPS paper (https://arxiv.org/pdf/1807.01297.pdf), DIPS contains antibody-antigen complexes. Can you explain how you have computed the sequence similarity? A common pitfall when reading in sequences from PDBs is that they contain gaps hindering sequence comparison. The proper way to do this is by checking the sequence submitted with the PDB. 
--- Reply to Comment 1.2.1: Title: Response to Area Chair Comment: Dear Area Chair SSEw: Thank you for your insightful comments and concerns regarding our recent publication. We appreciate the opportunity to address the points you've raised in your review. Regarding the unexpected outcomes in the sequence similarity findings between SAbDab and UniRef50, we acknowledge your surprise and commend your meticulous attention to this particular aspect. Allow us to elucidate the methodology we employed for the computation of sequence similarity: (1) Initially, we meticulously verified the indices of antigen-antibody samples within the SAbDab test set, specifically: ['5xku_C_B_A', '7chf_A_B_R', '7chf_H_L_R', '5tlj_D_C_X', '7che_H_L_R', '5tlk_B_A_X', '5tlk_F_E_Y', '5w9h_H_I_G', '7bwj_H_L_E', '5tlj_B_A_X', '5tl5_H_L_A', '7d6i_B_C_A', '8ds5_C_B_A', '7chb_H_L_R', '5w9h_E_F_D', '5w9h_B_C_A', '7che_A_B_R', '5tlk_H_G_Y', '5tlk_D_C_X'], and subsequently, we eliminated duplicate entries. (2) Following this, we acquired the corresponding sequences from the SAbDab test set in their FASTA formats. Notably, we extracted sequences from the FASTA files as opposed to PDB structures, as the latter can introduce gaps that impede accurate sequence comparisons. However, for sequences within DIPS, we directly extracted their sequences from their PDB structures, a process that might artificially lower the computed sequence similarity. (3) To perform sequence alignment and calculate the similarity between sequences from various pretraining data sources (UniRef50, OAS, DIPS) and those within the SAbDab test set, we utilized the **pairwise2** module from the **Biopython** library. The code for computing sequence similarity and generating distributional plots is accessible through the following anonymous link: https://anonymous.4open.science/r/HTP/Similarity.ipynb. If you come across any issues, we would be delighted to rectify any errors and recompute the similarities. 
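As a rough illustration of step (3): with match score 1, mismatch 0, and no gap penalties (the scoring used by Biopython's `pairwise2.align.globalxx`), the optimal global-alignment score equals the longest-common-subsequence length, so the identity screen can be sketched in pure Python. The function names and the normalization by the longer sequence length are our assumptions for this sketch, not the authors' exact code:

```python
# Illustrative stand-in for the pairwise2-based similarity screen.
# alignment_score computes the LCS length, which coincides with the
# globalxx global-alignment score (match=1, mismatch=0, free gaps).

def alignment_score(a: str, b: str) -> int:
    """LCS length via the standard O(len(a)*len(b)) dynamic program."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def identity(a: str, b: str) -> float:
    """Matched positions over the longer sequence length, in [0, 1]."""
    return alignment_score(a, b) / max(len(a), len(b))

def max_identity_to_test_set(query: str, test_seqs) -> float:
    """Screen one pretraining sequence against every test-set sequence."""
    return max(identity(query, t) for t in test_seqs)
```

A leakage check would then flag any pretraining sequence whose `max_identity_to_test_set` exceeds the chosen threshold (0.5 in the analysis above).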
Noteworthy is the fact that the process of iteratively computing the similarity between UniRef50 (with over 50 million sequences) and the SAbDab test set is notably time-consuming, exceeding 4 weeks. Our attempt to expedite this computation using **pandarallel** was regrettably unsuccessful. As a solution, given the limited time frame for rebuttal, we opted to randomly select a subset of UniRef50 rather than utilizing the entire dataset. Admittedly, this approach doesn't capture the complete landscape of sequence similarity distribution. Nevertheless, we are steadfast in our commitment to continue computing similarity for the entire dataset and will incorporate updated plots in the Appendix of our paper to provide a comprehensive understanding of the interplay between pretraining data resources and downstream datasets. It's pertinent to mention that our test split encompasses protein structures released after December 24, 2021, along with structures featuring any CDR similarity to those released after this date (with a sequence identity exceeding 50%). Notably, the example of 6FE4, released on 2018-03-07, is not included in our test set. Additionally, as noted by Reviewer 81S2, DIPS only comprises structures released before 2019, which might contribute to the lower similarities observed between DIPS and our test set. We wish to reiterate our appreciation for your insightful review, which greatly contributes to the refinement of our work. Your input is highly valuable and will undoubtedly assist in enhancing the interpretation of our results and their potential significance within the wider scientific community.
Summary: Antibody sequence-structure co-design and fix-backbone design is a very appealing task for both industry and academia, especially in the context of drug design. The paper introduces a hierarchical training paradigm (HTP) as a potential solution to this problem. Moreover, the proposed approach deals with the major issue of small dataset size for training. The experiments demonstrate the effectiveness and contribution of HTP. Strengths: Originality: The problem addressed in this paper is important and up to date. Structural biology in general, and structural immunology in particular, lacks large amounts of data; therefore, it's important to find ways to train models on small datasets. The authors propose a novel way to overcome this problem. The related work is cited and discussed. Quality: The submission is well written and organized. The claims are supported by appropriate experiments. The methods and equations used are explained and sufficient. The authors provide an ablation study and a discussion of related work. Clarity: The text is well written and organized. The paper contains all the necessary citations. Significance: The results are important and useful for cases with limited training data (for example, structural biology). The submission provides a comparison to previous and related work and shows the impact and advantages of the current paper. Weaknesses: Please review currently available approaches for antibody design and fix-backbone protein design more thoroughly. For example: https://arxiv.org/abs/2110.04624 https://arxiv.org/abs/2207.06616 https://www.biorxiv.org/content/10.1101/2022.07.10.499510v5.abstract https://arxiv.org/abs/2302.00203 Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Are you going to make HTP open source? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The paper adequately addresses limitations and future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive review of our paper. We greatly appreciate your feedback and are pleased to know that you find the work strong in several aspects. Firstly, we are glad to hear that you acknowledge the originality and importance of the problem addressed in our paper. We agree that the field of structural biology, especially structural immunology, often suffers from limited data availability, making it challenging to train accurate models. Our proposed novel approach aims to overcome this issue, and we are delighted that you find it valuable. Furthermore, we are grateful for your comments on the quality of the submission. We put significant effort into ensuring that the claims made in the paper are well-supported by appropriate experiments. We are also pleased that you found the ablation study and discussion of related work to be satisfactory. Regarding the weaknesses you pointed out, we sincerely appreciate your suggestions for reviewing currently available approaches for antibody design and fix-backbone protein design. The provided links [A, B] are very relevant. Though we have already included them as baselines in Section 3 for comparison, there is still room for us to thoroughly assess these approaches to strengthen our paper's discussion sections. We believe that including these references will enhance the paper's completeness and provide a more comprehensive view of the state-of-the-art in the field. [A] Jin, Wengong, et al. "Iterative refinement graph neural network for antibody sequence-structure co-design." arXiv preprint arXiv:2110.04624 (2021). [B] Luo, Shitong, et al. "Antigen-specific antibody design and optimization with diffusion-based generative models for protein structures." Advances in Neural Information Processing Systems 35 (2022): 9754-9767. 
As for your question about making our method open source, we are pleased to confirm that we intend to release the code and data associated with our work upon acceptance. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal! I'm satisfied with the additions and changes.
Rebuttal 1: Rebuttal: We extend our sincere gratitude to all four reviewers for your insightful and constructive feedback on our proposed hierarchical training paradigm (HTP) for antibody sequence-structure co-design and fix-backbone design. We are encouraged by your positive reception of our work and your recognition of its relevance to both industry and academia, especially in the context of drug design. Your commendation of the soundness, presentation, and contribution of our paper is invaluable to us. -------------------- * Potential Data Leakage As recommended by Reviewer 81S2, we have undertaken an exhaustive examination of the sequence similarity existing between the diverse pretraining data sources and the test set present in SAbDab. This comprehensive analysis encompasses general protein sequences sourced from UniRef50, general protein-protein complexes extracted from DIPS, and antibodies extracted from OAS. The outcomes of this analysis have been visually represented through sequence similarity distributions, and we have included the relevant figures within the uploaded PDF. The statistical findings unequivocally demonstrate that the highest sequence similarity values across these three datasets are consistently below 0.5. This empirical evidence serves as a strong basis for our resolute assertion that neither UniRef50, DIPS, nor OAS encompasses sequences that exhibit significant similarity to the SAbDab test set. With this robust evidence at hand, we maintain our firm confidence that the hierarchical training paradigm we have employed is free from any potential data leakage concerns. * Inconsistent Results We value the insightful observation of Reviewers eNMD and H5io regarding the disparity between the reported baseline outcomes and the original papers. It's crucial to recognize that we have adhered to the methodology outlined in DiffAb [A], which entails a distinct dataset split approach compared to MEAN [B]. 
Specifically, our division of data points within SAbDab for training and testing is rooted in release dates and CDR sequence identity. The test partition encompasses protein structures released after December 24, 2021, along with structures that bear any CDR similarity to post-date releases (with a sequence identity surpassing 50%). Antibodies within the test set are further grouped using 50% CDR sequence identity to eliminate duplicates, ultimately culminating in 20 antibody-antigen complexes. Meanwhile, MEAN [B] employs the RefineGNN [C] split configuration, involving segregation of the entire dataset into training, validation, and test subsets, organized as per CDR clustering to ensure generalization. Each cluster is then distributed into training, validation, and test sections, with an 8:1:1 ratio. This distinctive approach in DiffAb [A] engenders a greater challenge than that of MEAN [B] and RefineGNN [C], a fact substantiated by the variance in their reported results. For example, RefineGNN [C] attains an AAR of 39.40%, 37.06%, and 18.88% for CDR-H1, CDR-H2, and CDR-H3 in the MEAN paper [B], but registers an AAR of 27.77%, 27.04%, and 8.00% for the respective CDRs in the DiffAb paper [A]. This discrepancy underscores that the identical algorithm can witness diminished performance when confronted with a more demanding dataset-splitting framework. Armed with this empirical insight, we are confident in the meticulous replication of crucial baselines in our work, ensuring fairness in our comparisons. Although we stand by the fairness of our approach, we value your concerns and will augment our manuscript with clarifications elucidating the impact of dataset splitting on model performance. [A] Luo, Shitong, et al. "Antigen-specific antibody design and optimization with diffusion-based generative models for protein structures." Advances in Neural Information Processing Systems 35 (2022): 9754-9767. [B] Kong, Xiangzhe, Wenbing Huang, and Yang Liu. 
"Conditional antibody design as 3d equivariant graph translation." arXiv preprint arXiv:2208.06073 (2022). [C] Jin, Wengong, et al. "Iterative refinement graph neural network for antibody sequence-structure co-design." arXiv preprint arXiv:2110.04624 (2021). * Code Availability We deeply appreciate the significance of ensuring code reproducibility to facilitate the progress of research within the field. Rest assured, we are committed to releasing the code and its associated resources upon the acceptance of our work. ----------- We genuinely appreciate the time and effort you've invested in reviewing our work. Your feedback will undoubtedly contribute to the enhancement of our research. We are committed to addressing all your concerns and look forward to sharing our revised manuscript with you. Pdf: /pdf/8ce645ec55ac08633d881c29f913a23bd71abeca.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Adversarial Examples Are Not Real Features
Accept (poster)
Summary: This paper builds upon the work of [12] "Adversarial examples are not bugs, they are features" written by Ilyas et al. The authors of [12] introduced the concept of robust and non-robust features. According to [12], non-robust features alone can be useful for the classification task. The authors of the current paper show that such features however are not extendable to different tasks. They show this through linear probing and auto encoders. Furthermore, according to the authors of [12], robust features alone are able to create a robust dataset. However, the findings of this paper contradict this notion. Strengths: 1. This paper extends the definition of what it means for a feature to be robust vs non-robust. Instead of relying on a single classification task, they extend the definition to a task-wise perspective. This allows them to define "Absolute Useful" and "Relative Useful" features. Intuitively, relative usefulness captures the importance of a feature better than absolute usefulness. Using these definitions, the authors then define $\rho$-useful and $\gamma$-useful features. 2. The authors then ask a series of compelling questions. They try to answer these questions empirically. In the first experiment they find that, under the task-wise definition, non-robust features perform poorly (though still non-negligibly). This indicates that non-robust features are actually not useful. Next they find that the essence of non-robust features is task-specific and they should not be considered real features. The next question is the most interesting one. The authors find that robust features alone are not enough for robust training. This directly contradicts the finding in [12]. Finally, they show that different tasks capture different non-robust features. These findings together shed more light on the nature of adversarial examples and the features related to them. Weaknesses: 1. 
Even though the paper is pretty well written, there are some weaknesses in the presentation. These include several typos, e.g., in (13) and (14) I believe the function $U(g, D, T)$ is mentioned incorrectly instead of $R(g, D, T)$. Similarly, in line 104 Figure 3 is mentioned, but I think that the authors meant Figure 2. Adding to these, I could not find a mention of Figure 1 in the text. This means that the evaluation framework is not properly explained. Some other similar details are also missing, reducing the readability of the manuscript. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In Figure 2, the clean dataset always performs better than the rest. Why is this always the case? 2. What is the authors' intuition behind non-negligible features having non-negligible performance in Figure 2? Does this mean that some of these features can be transferred across some specific tasks? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: 1. As mentioned above, the presentation of this paper can be improved. In my opinion doing so would elevate the paper further. 2. The authors ask four questions in the manuscript and answer them through experiments. It would be useful to mention these questions in the intro and comment on the usefulness of each of them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### Title: Response to reviewer NVbH Dear Reviewer NVbH, Thanks for your careful reading, and we are glad to hear that you appreciate the novelty and insights of this work. Below, we address your main concerns. --- **Q1.** Some suggestions regarding the presentation. **A1.** Thanks for your careful reading. We will fix these issues in the revision. --- **Q2.** I could not find a mention of Figure 1 in the text. This means that the evaluation framework is not properly explained. **A2.** Sorry for the confusion. We will add more descriptions of Figure 1 to explain the evaluation framework. Below is an outline: - First, we train a feature extractor $f_{T}$ with different kinds of tasks $T$ (SSL and SL ones) on a specific dataset; - Second, we perform linear probing by training a linear classification head on top of frozen features using labeled data; - Third, we evaluate the accuracy and robustness of the composed classifier (feature extractor + linear head) on test data. --- **Q3.** In Figure 2, the clean dataset always performs better than the rest. Why is this always the case? **A3.** We note that this phenomenon is similar in Ilyas et al.'s experiments (except that non-robust features perform much worse in ours). There could be two main reasons. **First**, the clean dataset contains both robust and non-robust features, which is supposed to provide more information for classification. **Second**, different from clean features using the original images, the (non-)robust dataset is constructed through iterative optimization from random inputs using an imperfect NN classifier. This process will inevitably lose information and introduce some noise into the constructed inputs, leading to performance degradation to some extent. --- **Q4.** What is the authors' intuition behind non-negligible features having non-negligible performance in Figure 2? Does this mean that some of these features can be transferred across some specific tasks? 
**A4.** Indeed, it suggests that the human-perceptible (non-negligible) features have good transferability across tasks, while human-imperceptible features do not. Previously, Ilyas et al. argued that despite the fact that robust features are perceptible and non-robust features are not, non-robust features are still useful for classification. On the contrary, we show that their counter-intuitive understanding is questionable to some extent, as these non-robust features are largely useless on SSL tasks where natural and robust features work well. This indicates a fundamental difference between robust and non-robust features, and that only the human-perceptible (robust) features are truly useful; it justifies the rationality of human perception instead. --- Please let us know if you have further concerns/questions. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: I thank the authors for taking the time to address the reviewers' concerns. In particular the addition of the TinyImageNet-200 results was very helpful. I would encourage the authors to make those results a part of the main manuscript. After reading the other reviews and the rebuttals I have decided to update my rating.
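The three-step linear-probing protocol outlined in A2 above (frozen features, a trained linear head, then evaluation) can be sketched numerically. The snippet below is a minimal numpy stand-in: the least-squares head on one-hot targets is an illustrative simplification of the SGD-trained linear classifier actually used, and the toy Gaussian "features" are made up for illustration.

```python
import numpy as np

def linear_probe_accuracy(train_feats, train_labels, test_feats, test_labels, n_classes):
    """Fit a linear head on frozen features (least squares on one-hot targets)
    and report test accuracy -- a simplified stand-in for an SGD-trained probe."""
    targets = np.eye(n_classes)[train_labels]
    F = np.hstack([train_feats, np.ones((len(train_feats), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(F, targets, rcond=None)               # solve min ||F W - Y||^2
    Ft = np.hstack([test_feats, np.ones((len(test_feats), 1))])
    preds = (Ft @ W).argmax(axis=1)
    return float((preds == test_labels).mean())

# Toy check: two well-separated Gaussian clusters are linearly separable,
# so the probe should classify them perfectly.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 1.0, (50, 8)), rng.normal(6.0, 1.0, (50, 8))])
labels = np.repeat([0, 1], 50)
acc = linear_probe_accuracy(feats, labels, feats, labels, n_classes=2)
```

In the paper's setting, `feats` would be the frozen SSL representations of train/test images rather than synthetic clusters.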
Summary: This work analyzes the robustness of deep networks under various tasks, analyzes how adversarial attacks transfer between models trained on different tasks, and does so through a newly proposed framework. Strengths: This paper delves into the robustness of models trained under different tasks (SSL training) and runs thorough analysis on CIFAR10 datasets to compare these models. As SSL-trained models continue to be used, this paper's research direction is interesting and useful. The newly proposed framework seems reasonable for comparing robustness. Weaknesses: While I like the general idea of this paper, the main thrust of it was quite difficult to follow. While I am familiar with the work "Adversarial Examples Are Not Bugs, They Are Features" (or [12]), it wasn't until nearly finishing my first read-through that I realized the submitted manuscript is heavily based on its work. It also wasn't clear that "tasks" in this paper do not represent different datasets (with different objectives) as in multi-task learning, but instead refer to self-supervised learning. These previous points could be made much clearer. Furthermore, it appears that the paper may have been written under time constraints, as there are multiple instances of typographical errors throughout the text ("datatset" and "hekp" for example). Taking the time to carefully proofread the content would greatly enhance its clarity and overall presentation (e.g., Fig 3 referenced on page 4, but shown on page 6). Lastly, many technical details are left out of the paper which would seriously help understanding: a brief overview of each of the SSL methods, how the datasets are constructed, and where linear probing is performed. The proposed "theory" framework appears to be heavily based off [12], and it is more of a list of definitions rather than offering any theoretical insights. 
While these definitions are useful, theoretical insight could be given by analyzing how different (SSL / non-SSL) tasks could affect the robustness more comprehensively. Some of the large claims in this paper, that the robust datasets are in fact not robust, are not quite backed up by the experiments of the paper. The robust dataset was generated by using a classifier, to then train a classifier. I'm not sure why it's surprising that using that robust-classifier-specific dataset doesn't give robustness to SSL-trained models. Here's a natural experiment: 1. Train an SSL model on the clean dataset 2. Fit a linear probe for classification 3. Use that linear probe to generate the robust dataset following [12] 4. Train a new SSL model on that new robust dataset 5. Attack that model I would be very curious to see what Table 3 would look like after following the above steps. Finally, results on using CIFAR-10 have questionable generalization. It is a little surprising that the "Restricted ImageNet" from [12] was not used for experiments. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: See Weaknesses for items to address. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: See above. No ethical statement; a little could be said about dataset robustness and safety. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### Title: Response to reviewer nwrB We appreciate your careful reading and constructive comments on the presentation details. We will revise the paper carefully and take your suggestions into consideration in the revision. Below, we address your concerns on the paper content. --- **Q1.** While these definitions are useful, theoretical insight could be given by analyzing how different (SSL / non-SSL) tasks could affect the robustness more comprehensively. **A1.** Thanks for the suggestion. We can give an intuitive theoretical example, as in Ilyas et al., to illustrate the differences between SSL features and SL features. Consider a simple case where $X = USV^\top \in\mathbb{R}^{N\times M}$ is the SVD of the input features of $N$ samples with $M$-dimensional features. If there is a spurious correlation between the eigenvectors with the smallest singular values and the class labels, they are useful features for classification. However, these features will be discarded by PCA, which extracts only the top eigenvectors. This shows that the features learned by an SSL task and an SL task can be quite different, and some features may be useful for one task only. Things are much more complex with real-world data and NN-based SSL methods. As shown in our experiments, although natural features transfer well between SSL and classification, non-robust features are mostly ineffective under various SSL training, showing strong task dependence. --- **Q2.** Some of the large claims in this paper, that the robust datasets are in fact not robust, are not quite backed up by the experiments of the paper. The robust dataset was generated by using a classifier, to then train a classifier. **A2.** We highlight that we have evaluated the robust-dataset classifier on both classification (SL->SL) and SSL (SL->SSL) tasks. 
The former experiment follows **the same evaluation protocol as Ilyas et al. [1]'s experiments to validate the robust dataset (Sec 3.1 in their paper)**, that is, to perform standard training on the robust dataset and evaluate model robustness. The *only* change we made was to replace their PGD/CW attack with AutoAttack. From Table 3, quoted below, we can see that the model robustness vanishes under AutoAttack. It clearly shows that robust datasets are not really robust. | Attack Method | PGD-500 | CW-500 | PGD-1000 | CW-1000 | AutoAttack | | ---- | :----: | :----: | :----: | :----: | :----: | | **Robust Accuracy** | 32.86% | 32.44% | 32.59% | 32.44% | **0.21%** | > Why is it surprising that using that robust-classifier-specific dataset doesn't give robustness to SSL-trained models? Here's a natural experiment [...] This experiment further studies the "universal robustness" (defined in Sec 3.1) of the robust dataset, i.e., whether its robustness can generalize across different tasks. We find that, similarly, SSL models using robust datasets also exhibit no robustness. In all, robust datasets show no robustness on either classification or SSL tasks. We also collected new results following the experimental setup that you suggest, using a robust dataset generated by SSL (SSL -> SSL). The results are consistent with the SL->SL and SL->SSL results, and still no nontrivial robustness is observed. | Model | Clean Accuracy | Robust Accuracy | | ---- | :----: | :----: | | SimCLR | 60.52% | 0.02% | | MAE | 28.67% | 0.24% | | ResNet-50 | 71.04% | 0.06% | | DenseNet-121 | 72.34% | 0.10% | [1] Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry; Adversarial Examples Are Not Bugs, They Are Features; Advances in Neural Information Processing Systems 32 (NeurIPS 2019) --- **Q3.** Finally, results on using CIFAR-10 have questionable generalization. 
It is a little surprising that the "Restricted ImageNet" from [12] was not used for experiments. **A3.** Restricted ImageNet is about 1/10 the size of ImageNet, and extracting robust/non-robust features requires massive computation. Due to the limited rebuttal time, we instead add experimental results on TinyImageNet-200 [1], which contains 100,000 64x64 images divided into 200 classes. As shown in Table A in our rebuttal PDF, non-robust features still obtain much lower accuracy than natural and robust features, which aligns well with our experiments on CIFAR-10. We also evaluate the robustness of ResNet-50 and DenseNet-121 with standard training on the constructed robust dataset on TinyImageNet. Similar to the CIFAR-10 results, in Table B (Rebuttal PDF), the trained models are also non-robust under AutoAttack. [1] Ya Le and Xuan S. Yang. Tiny imagenet visual recognition challenge. Stanford University. 2015 [2] Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry; Adversarial Examples Are Not Bugs, They Are Features; Advances in Neural Information Processing Systems 32 (NeurIPS 2019) --- Please let us know if you have further concerns/questions. --- Rebuttal Comment 1.1: Title: Thanks authors Comment: I'm mostly convinced by your rebuttal. I am still not a fan of the word "task" to describe different SSL losses, and I do worry about the general readability of the paper. But those are not content issues. I'll raise my score. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thanks for appreciating our response! We understand that the word "task" can be a bit of a stretch for describing different SSL objectives. We are considering changing it to something more specific, e.g., "different paradigms for representation learning". Please let us know if you have better options. Also, we will certainly revise the writing carefully to be more clear and readable, and take your valuable suggestions into consideration. 
--- Rebuttal 2: Title: Your invaluable input is needed Comment: Dear Reviewer nwrB, thanks for your time reviewing our paper. We have meticulously prepared a detailed response addressing the concerns you raised. Could you please have a look to see if there are further questions? Your invaluable input is greatly appreciated. Thank you once again, and we hope you have a wonderful day!
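The SVD argument given in A1 of the rebuttal above (label-relevant information carried by a small-variance direction that PCA discards) can be illustrated with a toy numpy example. All numbers below are made up for illustration; the construction simply plants the label in the smallest-variance column.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 5
y = rng.integers(0, 2, size=N) * 2 - 1               # labels in {-1, +1}

# Four high-variance directions carrying no label information,
# plus one tiny-variance direction that encodes the label exactly.
X = np.hstack([rng.normal(scale=10.0, size=(N, M - 1)),
               (0.01 * y)[:, None]])

# PCA keeps only the top singular directions of the centered data.
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
pc1 = X @ Vt[0]                                      # top principal component

corr_pc1 = abs(np.corrcoef(pc1, y)[0, 1])            # near 0: label info discarded by PCA
corr_small = abs(np.corrcoef(X[:, -1], y)[0, 1])     # 1: label info sits in the small direction
```

A supervised linear classifier can exploit the last column perfectly, while a PCA-style unsupervised objective throws it away, matching the rebuttal's point that SL and SSL tasks can select very different features.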
Summary: This work challenges the theory on "robust" and "non-robust" features from Ilyas et al. With an extended and more generalized formulation, the authors show that "non-robust" features are indeed very task-specific, and that even supposedly "robust" features are mostly task-specific and hardly provide robustness. The authors experiment with various self-supervised models on CIFAR10 (and its robust/non-robust variants) and answer multiple interesting research questions. Strengths: - The original work from Ilyas et al. did mention that the "robust" features are task-specific, but this work explicitly tests and shows that "robust" features are not necessarily truly robust, and only really "task-robust". In some sense, there are 2 levels of generalization- this work shows that robustness at the second stage (same X, Y distribution) does not imply robustness at the first stage (same X, different Y distribution). - I like Eq (9) - providing a formulation that is relative to the choice of $g$. - The conclusion in L199-200 is fitting, and ideally should have been a stress point in the original paper on robust features. The authors here have done a good job of making this point explicit to clear up the misconception around "robust" features. The fact that the original paper did claim robust features to be a property of the dataset is in direct contradiction to results from this work. Weaknesses: - My biggest concern is the interpretation of "robust" and "non-robust" features. The authors challenge the claim made by Ilyas et al. about the actual robustness of features. However, the cited paper does not claim that the "robust" features they identify are universally robust, only that they are robust for the classification task at hand. This is not surprising- 'eye color' would be a robust feature for person identification, but not for smile detection. 
These features are statistical patterns that are useful for the **given** task but not of any use otherwise, so it is not surprising that they do not generalize to multiple tasks. Even on L112, the authors claim "we believe that the existence of non-robust features is task-reliant", which is what Ilyas et al. also say. - L144: "...meaning that the adversarial perturbations are almost meaningless to DDPM". This can also mean that the attack is not potent enough. Also, there is no reason to believe that adversarial perturbations should transfer across tasks. The objective when adding perturbations for one task is unrelated to the wanted objective for another task- perturbations meant to fool smiling/non-smiling have no reason to influence classification scores for straight/curly hair, for instance. The former would most likely perturb areas around the face, while the latter would likely look at features close to the head. - All experiments are focused on CIFAR-10 and its robust/non-robust variants. I would like to see at least one more dataset to be more confident in the generalization of claims made in the paper. ## Minor comments - This paper assumes that the readers are familiar with the work on "robust features" referenced throughout the paper. Please give a brief summary of the referenced paper (from MadryLab) - their crux and notion of "robust"/"non-robust" features in the Introduction itself. - L19: "...and it becomes natural for papers to use terms like..." - please give some examples of papers that indeed do this. - L35: "experimente" -> "experiment" - L41: Please provide a list of contributions. Although posing these research questions is a good way to pique interest, they should not be left as unanswered questions until the end of the paper. - Eq (12) seems to be missing the term $T$ - Figure 2 missing axis labels - L164: "...exceeds 80%" - what is the ASR of a robust model for the same attack? 
- L169-172: "First, the attack method.......from gradient obfuscation" - this might mean that the "robust" features are not entirely robust and have some noisy non-robust features in them too. - Figure 4: please provide a heatmap legend. This is an interesting figure- why is cosine similarity of perturbations a good metric? Why not look at transferability rates instead? The latter would be a better and more direct indicator. - References formatting is mixed: conference names are mixed in lower-case, capitalized, etc. For instance, "Are adversarial examples inevitable" appears in ICLR but the arxiv version is cited. Some conference names are in full while for others abbreviations are used. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In Eq (11), the same $\epsilon_0$ seems to be added for all tasks? This does not seem optimal at all, since different tasks may require different levels of noise. For instance, a coarse binary classifier (happy/sad) would require much more noise than a more fine-grained task (person identification). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### Title: Response to Reviewer GnEu Dear Reviewer GnEu, We appreciate your careful reading and your acknowledgment of our contributions on the definitions and evaluation of robust features. Below, we address your main concerns, especially those concerning the task-reliance of robust features. --- **Q1.** My biggest concern is the interpretation of "robust" and "non-robust" features [...] **A1.** Thanks for your insightful comments, and we take your point that a feature could be useful for one task while useless for another. However, this does NOT contradict our theory. In fact, we do not expect a feature that generalizes across every possible task (e.g., eye color and smile); instead, we define robust features **for a given set of tasks $\mathcal{T}$** (see L73). The term "universally robust" could be a little stretched, but essentially, we define it as "robust for every task in $\mathcal{T}$" (see L82). For a valid discussion, in this work we consider the case when $\mathcal{T}$ contains **a set of relevant tasks where naturally trained features transfer well across different tasks**. Specifically, we consider the transfer from SSL pretraining tasks (CL, MAE, diffusion) to supervised tasks (classification), and it is a known fact that features learned by SSL are very useful for classification tasks (with high linear probing accuracy). Due to this high relevance, one would expect that, like natural features, non-robust features should also transfer if they are truly useful for classification (as Ilyas et al. suggested). Instead, our experiments give the quite surprising result that their transferability is much worse than that of natural / robust features. This supports our claim that non-robust features may not be truly useful features but mainly task-specific spurious features. We will state this relationship and make the dependence on the task set $\mathcal{T}$ clearer to avoid potential confusion. 
Please let us know if there is more to clarify. --- **Q2.** L144: "...meaning that the adversarial perturbations are almost meaningless to DDPM". **A2.** We address your concerns on this statement point by point. > This can also mean that the attack is not potent enough. We note that these non-robust features generated by PGD-1000 are very useful in the classification task, as they can 1) mislead prediction, and 2) yield a good classifier when trained on the non-robust dataset (Ilyas et al.). The fact that they no longer work on DDPM reveals a clear discrepancy in feature usefulness between the two tasks, and thus justifies our argument here. > Also, there is no reason to believe that adversarial perturbations should transfer across tasks. Following the discussion in A1, diffusion models can also be seen as an SSL pretraining method, and we obtain >80% transferred linear classification accuracy with diffusion-learned features (Fig 1). In view of the good transferability of natural features, the intransferability of non-robust features between the two tasks reveals that non-robust features are very different from natural features. --- **Q3.** All experiments are focused on CIFAR-10 and its robust/non-robust variants. I would like to see at least one more dataset to be more confident in the generalization of claims made in the paper. **A3.** Within the limited rebuttal time, we reproduce some main results on a larger dataset, TinyImageNet [1], which contains 100,000 64x64 images divided into 200 classes. As shown in Table A in our rebuttal PDF, non-robust features still obtain much lower accuracy than natural and robust features, which aligns well with our experiments on CIFAR-10. We also evaluate the robustness of ResNet-50 and DenseNet-121 with standard training on the constructed robust dataset on TinyImageNet. Similar to the CIFAR-10 results, in Table B (Rebuttal PDF), the trained models are also non-robust under AutoAttack. [1] Ya Le and Xuan S. Yang. 
Tiny imagenet visual recognition challenge. Stanford University. 2015 --- ***Remark.*** Due to the character limit, we address some of your key concerns among the minor points below and will fix the writing problems in the revision following your suggestions. **Q4.** ASR of a robust model under AutoAttack. **A4.** According to RobustBench [1], the ASR of the SOTA robust model on the CIFAR-10 dataset is **29.31%**, and for a medium-level adversarially trained model the ASR is about **50%**. [1] Croce et al. RobustBench: a standardized adversarial robustness benchmark. NeurIPS'21. --- **Q5.** Figure 4: please provide a heatmap legend. This is an interesting figure. Why is cosine similarity used rather than transferability of adversarial perturbations? **A5.** We will add the legend. Cosine similarity measures the relationship between adversarial perturbations in the input space. As you suggested, transferability is also a good metric for similarity, and the results can be seen in Figure A in the Rebuttal PDF, which are mostly consistent with previous results: the transferability between SL models is good, but poor between SSL and SL models. --- **Q6.** In Eq (11), is the same $\epsilon_0$ added for all tasks? This might not be optimal, since different tasks may require different levels of noise. **A6.** No, $\varepsilon_0$ here denotes a sample-wise and task-wise perturbation generated independently for each sample pair $(x,y)$ at each task $T$. Rigorously, it should be written $\varepsilon_{x,T}$. We will clarify the notation in the revision. --- Please let us know if you have further concerns/questions. --- Rebuttal Comment 1.1: Title: Concerns addressed Comment: The authors have addressed my concerns quite well- I will update my rating to reflect this as soon as the option becomes available. I have no further questions for the authors :) Good luck!
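The metric discussed in A5 above (cosine similarity between input-space adversarial perturbations) reduces to a one-liner on flattened perturbation tensors. The sketch below uses random-noise placeholders in place of real attack outputs; the shapes and values are illustrative only.

```python
import numpy as np

def perturbation_cosine(delta_a: np.ndarray, delta_b: np.ndarray) -> float:
    """Cosine similarity between two flattened input-space perturbations."""
    a = delta_a.ravel().astype(float)
    b = delta_b.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder "perturbations" with an image-like shape (C, H, W).
rng = np.random.default_rng(0)
d1 = rng.normal(size=(3, 32, 32))
d2 = rng.normal(size=(3, 32, 32))

sim_self = perturbation_cosine(d1, d1)   # identical directions: similarity 1
sim_rand = perturbation_cosine(d1, d2)   # independent noise: similarity near 0
```

Averaging this quantity over sample pairs for perturbations produced against two different models gives one cell of the kind of heatmap shown in Figure 4.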
Summary: In this study, the authors dispute the argument from "Adversarial examples are not bugs, they are features," where it was suggested that adversarial examples exist due to non-robust yet useful image features. They used self-supervised learning algorithms on both robust and non-robust datasets to test this theory. The findings contradicted the hypothesis, revealing that the self-supervised models trained on non-robust datasets didn't generalize well, indicating that non-robust features aren't universally useful across different training setups. Additionally, it was demonstrated that non-robust features, while beneficial for classification, aren't helpful for reconstruction, highlighting their task-specific utility. Moreover, models trained solely on robust features lacked robustness, showing high vulnerability to AutoAttack. This contradicts the original study's findings, as these models failed to exhibit robustness even when trained exclusively on robust data. Lastly, an examination of cosine similarity between the adversarial attack directions of various self-supervised models showed significant differences, suggesting adversarial attacks aren't easily transferable between different training setups. This indicates that non-robust features are generally not very useful and are model-specific rather than dataset-specific. Strengths: - This paper principally examined the arguments made by [1], by training models in a self-supervised manner on the non-robust and robust datasets from [1]. - I think the finding that models trained on robust features alone are not robust is already a very important one. As [1] is a paper that the community has been very interested in, it is important that other papers try to reproduce and validate its results. [1] Ilyas, Andrew, et al. "Adversarial examples are not bugs, they are features." Advances in Neural Information Processing Systems 32 (2019). Weaknesses: - I think the authors should really expand on 6.2 and Table 3. 
The results are truly surprising and strongly contradict the original finding in [1]. For example, the authors suggest that the use of AutoAttack is the main cause of the discrepancies with the findings of the original paper. The authors should verify whether the current model is vulnerable under PGD and CW attacks. If the current model is also vulnerable under PGD and CW attacks with more steps, which I suspect it is, then what are the conditions for reproducing the original finding, and how does the authors' setting differ from it? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors show whether the current model in Table 3 is vulnerable to both PGD and CW attacks? Also, could the authors ablate the experimental differences between the original paper and the current one? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations adequately Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### Title: Response to Reviewer WcXm Dear Reviewer WcXm, We appreciate your careful reading of our work as well as your recognition of our finding that standard training on a robust dataset does not yield true robustness. Below, we address your main concerns. --- **Q1.** The authors should verify whether the current model is vulnerable under PGD and CW attacks with more steps. If it is, what are the conditions for reproducing the original finding, and how does the authors' setting differ from it? **A1.** In this experiment, we follow the original setting of [1], and the only modification is to change the attacker from PGD/CW to AutoAttack. As shown in the table below, under the same setting, we can reproduce [1]'s results and find that the model is indeed **robust under PGD/CW attack**, even after 1000 iterations. However, under AutoAttack, that model has only 0.21% robust accuracy. Therefore, [1]'s evaluation actually gives a false sense of robustness, and their arguments on robust datasets are questionable since the resulting model is essentially non-robust. We will add this comparison to Sec 6.2 and Table 3 in revision. *Robustness of the PreActResNet-18 model obtained from standard training on the robust dataset of [1] under different attacks on CIFAR-10.* | Attack Method | PGD-500 | CW-500 | PGD-1000 | CW-1000 | AutoAttack | | ---- | :----: | :----: | :----: | :----: | :----: | | **Robust Accuracy** | 32.86% | 32.44% | 32.59% | 32.44% | **0.21%** | References: [1] Adversarial Examples Are Not Bugs, They Are Features; Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry; Advances in Neural Information Processing Systems 32 (NeurIPS 2019) --- Please let us know if there is more to clarify. We are happy to take your further questions in the discussion stage.
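For readers unfamiliar with the attacks compared above, the core of an L-infinity PGD evaluation can be sketched in a few lines. This is a generic illustration on a toy differentiable (logistic) model, not the paper's evaluation code; all names and hyper-parameters here are hypothetical:

```python
import numpy as np

def pgd_linf(x, y, w, eps=0.3, alpha=0.05, steps=40):
    """Minimal L-infinity PGD against a logistic model p(y=1|x) = sigmoid(w.x).
    Ascends the log-loss by stepping in the sign of the input gradient,
    projecting back into the eps-ball around the clean input after each step."""
    x_adv = x.copy()
    for _ in range(steps):
        margin = y * (w @ x_adv)                 # y in {-1, +1}
        grad = -y * w / (1.0 + np.exp(margin))   # d/dx of log(1 + exp(-margin))
        x_adv = x_adv + alpha * np.sign(grad)    # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps) # project into the eps-ball
    return x_adv

# A correctly classified point (w.x > 0, label +1) flips under a large budget.
w = np.array([1.0, -1.0])
x = np.array([1.0, 0.0])
x_adv = pgd_linf(x, +1, w, eps=2.0, alpha=0.5, steps=20)
```

Robust accuracy is then the fraction of test points still correctly classified after such an attack; AutoAttack ensembles several parameter-free attacks on top of this idea, which is why it often breaks models that appear robust under plain PGD/CW.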
--- Rebuttal 2: Title: Your invaluable input is needed Comment: Dear Reviewer WcXm, thanks for your time reviewing our paper. We have meticulously prepared a detailed response addressing the concerns you raised. Could you please have a look to see if there are further questions? Your invaluable input is greatly appreciated. Thank you once again, and we hope you have a wonderful day! --- Rebuttal Comment 2.1: Comment: I am satisfied with the updated response, and as a result I have increased my score
Rebuttal 1: Rebuttal: The supplementary material for rebuttal. Pdf: /pdf/22ed6540caab3544be6eb7a41e382f0f2303468c.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Imagine That! Abstract-to-Intricate Text-to-Image Synthesis with Scene Graph Hallucination Diffusion
Accept (poster)
Summary: In this paper, text-to-image synthesis under the abstract-to-intricate setting is studied. Firstly, the input prompt is hallucinated and expanded into feasible specific scene structures by the proposed SGH mechanism. Then, text-to-image synthesis is implemented through a diffusion-based synthesizer by gradually incorporating the semantic scene structure induced from the SGH. Extensive experiments on COCO, especially under the abstract-to-intricate text-to-image setting, show that the method can synthesize images reasonably and accurately from simple text. Strengths: 1. The proposed abstract-to-intricate T2I is a vital research topic in the field of text-to-image synthesis, since existing large models require delicate and well-designed prompts for controllable image synthesis. 2. Abstract-to-intricate T2I is further promoted by the designed SGH mechanism, avoiding the vision distraction and wrong focus introduced by text-enriched prompts. 3. Extensive experiments including thorough metrics are conducted to prove the effectiveness, and the corresponding analysis and discussion are reasonably stated. Weaknesses: 1. Existing experiments compared to the text-based enrichment methods are only conducted on the manually selected dataset COCO-A2I with Place Norm and Progressive Verbs, which is not representative and convincing enough. 2. There exists an unfair comparison in the T2I results on COCO, since the best FID and CLIP score of Frido are respectively 8.12 and 0.7915, and the best T2I baseline on COCO is not Frido. As far as I know, Make-A-Scene [1] achieves a 7.55 FID, which is not discussed. 3. Scene graphs consist of two parts: nodes and edges representing semantics, and bounding boxes referring to the sizes and locations of objects. As proved by previous works, incorporating visual guidance into T2I training is beneficial. Why is bounding-box information not included for training? 4.
Overclaims and inaccurate descriptions in the contributions: "We propose a diffusion-based model with SG guidances for highly controllable and scalable image generation." Scene graph hallucinations might include unexpected concepts, which are not controlled by users. 5. Unclear captions and inconsistent descriptions. The caption of Fig. 2 is unclear and lacks descriptions of each subfigure. 6. Citations about scene graph-to-image synthesis and scene graph generation are not thoroughly included [2-4]. 7. Typos and inconsistent descriptions: In line 226, "diffusioninspired". In line 245, Frido-G is inconsistent with the description in Tab. 1. [1] Gafni O, Polyak A, Ashual O, et al. Make-A-Scene: Scene-based text-to-image generation with human priors. ECCV 2022: 89-106. [2] Sitong Su, Lianli Gao, Junchen Zhu, Jie Shao, and Jingkuan Song. Fully Functional Image Manipulation Using Scene Graphs in A Bounding-Box Free Way. ACM Multimedia 2021: 1784-1792. https://doi.org/10.1145/3474085.3475326 [3] Lyu X, Gao L, Guo Y, et al. Fine-grained predicates learning for scene graph generation. CVPR 2022: 19467-19475. [4] Herzig R, Bar A, Xu H, et al. Learning canonical representations for scene graph to image generation. ECCV 2020: 210-227. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Please provide results compared to text-based enrichment methods on the COCO dataset. 2. Please discuss results compared to the best Frido configuration and other T2I baselines like Make-A-Scene. 3.
Why are the bounding boxes of scene graphs not included as visual guidance in training? Would there be a gain from adding bboxes? 4. Please fix the writing issues and answer questions 4, 5, 7 of the Weaknesses. 5. Citations are missing as described in 2, 6 of the Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your time and valuable comments. Your suggestions will surely help consolidate our paper. In the following, we present a point-to-point response to address your concerns. If you feel our responses effectively address your concerns, please kindly reconsider your evaluation. --- **Q1: Existing experiments compared to the text-based enrichment methods are only conducted on manually selected dataset COCO-A2I with Place Norm and Progressive Verbs, which is not representative and convincing enough. Please provide results compared to text-based enrichment methods on COCO dataset.** **A:** Here, we also provide the results by comparing them with the text-based enrichment methods on the overall COCO dataset, where pre-trained Frido is employed to generate images. Compared with the best baselines, the two enrichment approaches fail to exhibit superior results on the three evaluation metrics, indicating that text-based enrichment not only fails to ease the difficulty of existing models in image synthesis, but also causes a performance decrease in existing models. Moreover, these results further demonstrate the effectiveness of our proposed method. | Model | FID | IS | CLIP| | :-----: | :----: | :----: | :----: | |Frido|11.24|26.84|70.46| |SD-PG|15.78|24.76|65.74| |SPY|12.63|25.34|66.95| --- **Q2: There exists an unfair comparison in T2I results on COCO, since the best FID and CLIP score of Frido are respectively 8.12 and 0.7915. And the best T2I baseline on COCO is not Frido. As far as I know, make-a-scene[1] achieves a 7.55 FID, which is not discussed.** **A:** Sorry for the mistake of not providing the full comparison with Frido. We conducted a re-evaluation using the open-source checkpoint provided by the authors and the same evaluation code. The experimental results follow.
Comparatively, our model outperforms Frido-CLIP and exhibits superior performance, particularly on the COCO-A2I dataset, showcasing significant improvements. This again validates the effectiveness of the proposed method. Result on COCO: |Model|FID|IS|CLIP| | :-----: | :----: | :----: | :----: | |Ours|10.19|29.96|74.83| |Frido-CLIP|10.87|27.30|73.46| Result on COCO-A2I: |Model|FID|IS|CLIP| | :-----: | :----: | :----: | :----: | |Ours|31.25|28.63|71.29| |Frido-CLIP|38.45|19.22|69.20| As for the Make-A-Scene model you indicated here, it is actually trained on the total collection of **CC12m**, **CC**, **YFCC100m**, **Redcaps**, and **COCO**, amounting to 35M text-image pairs, a much larger data volume than ours, which enables it to achieve a much lower FID score. Therefore, direct comparisons with Make-A-Scene would be unfair. Also, evaluating Make-A-Scene is infeasible for us, as it does not offer an open-source checkpoint. --- **Q3: Scene graphs consist of two parts, which are nodes and edges representing semantics, and bounding boxes referring to the sizes and locations of objects. As proved by previous works, incorporating visual guidance into T2I training is beneficial. Why is bounding-box information not included for training?** **A:** In this work, we placed the major emphasis on modeling the object nodes and the semantic relationships between objects and attributes. The experimental results demonstrate that employing just such information has notably improved the quality of image generation. That being said, we totally agree that incorporating layout information (e.g., bounding boxes) is a promising way to gain more controllable and higher-quality image generation [1], which we consider as future work.
[1] LayoutGPT: Compositional Visual Planning and Generation with Large Language Models --- **Q4: Overclaims and inaccurate description in contributions: "We propose a diffusion-based model with SG guidances for highly controllable and scalable image generation." Scene graph hallucinations might include unexpected concepts, which are not controlled by users.** **A:** Actually, by **controllable generation** we mainly refer to the highly structured SG representations (i.e., the 'subject-predicate-object' triplets), which enable precise control over the image generation process. Your point is also reasonable to a certain extent: hallucinations may come with unexpected outcomes. But in our work, by training with a sufficient volume of supervised data, our SGH module learns to (or at least tends to) produce valid imaginations (as we evaluated and answered in _Q2_ of section 4.4, line 304). So this is quite different from the hallucination exhibited by existing LLMs; they are two separate concepts. We will improve our expressions to make this clearer and more accurate. --- **Q5: Unclear captions and inconsistent descriptions. The caption of Fig. 2 is unclear and lacks descriptions of each subfigure. Typos and inconsistent descriptions: In line 226, "diffusioninspired". In line 245, Frido-G is inconsistent with the description in Tab. 1.** **A:** Thanks for going through the paper this carefully; we will correct them all in the revision. --- **Q6: Citations about scene graph-to-image synthesis and scene graph generation are not thoroughly included.** **A:** Thanks for indicating the missing literature; we will carefully check it and consider including it in the related work. --- Rebuttal Comment 1.1: Comment: Thanks for your response. This rebuttal addressed some of my concerns. However, there are lots of OVERCLAIMED statements in this paper.
For example, 'We are the first to study the novel T2I setup of intricate image synthesis from succinct abstract texts.' Actually, this kind of setup can be easily modified from existing T2I tasks. I will keep my rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer#GuSJ, Thanks for your feedback. We'd like to further clarify this point. While a T2I model could be directly adopted from existing work for the abstract-to-intricate (A2I) T2I task, as you stated, it largely fails in this A2I setting, as extensively evaluated in our experiments. As A2I T2I has not been studied before, this makes it the critical focus of this work. That is, we have dedicated significant effort to addressing this matter, proposing an efficient model that we substantiated through comprehensive experiments. Thus, there is no doubt that we are the first to address this problem. Besides, would you mind specifying which of the concerns remain unaddressed in our response? We would be more than willing to offer additional clarification. Best. --- Rebuttal 2: Comment: Dear Reviewer#GuSJ, We would like to thank you again for your efforts and valuable feedback. Your comments are essential to help us improve the quality of our work. To address your main concerns about the rationale of the COCO-A2I data construction and the unfair comparison, we have, during the rebuttal, run the experiments under a more reasonable setting as you suggested, and presented the updated results. We kindly hope that you can take some time to check our response and re-evaluate our paper based on our replies. If you have any further concerns or questions, please do not hesitate to let us know. We will be happy to address them promptly. Best Regards.
Summary: This paper studies a new setup of generating intricate images from abstract prompts. To overcome the issues of vision distraction and wrong binding when using text as the condition to generate images, the authors propose a two-stage pipeline: first generating a scene graph from the abstract text input, and then, conditioned on the synthesized scene graph, another model generates the image. The proposed diffusion model with SG guidance showcases controllability and interpretability, and it achieves new SoTA results in the abstract-to-intricate T2I setup. Overall, the paper is well-written and the proposed method is interesting and novel. Strengths: The introduction section is easy to follow; they provide examples to show the issues of current T2I generation and motivate the scene graph representation as a potential solution. The proposed SGH model is interesting and well adapted from VQ-diffusion-style image modelling to the problem of scene graph modelling. The experiments are convincing, with strong performance on COCO, outperforming the competitive baselines LDM, VQ-diffusion, and Frido; they also construct an abstract-to-intricate dataset from COCO and demonstrate SoTA performance in this setup. Weaknesses: One claim of the paper is that SG guidance helps image generation with strong semantic controllability, but it is not clear to me which experiments support this claim. In Table 3, it seems like replacing the HSI module with GCN encoding only drops the performance a little bit, so it is questionable whether the design of HSI is necessary. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It seems like using SG as condition/guidance will lower the FID compared to text conditioning, and also, in the qualitative results, details like faces will be blurry. Any idea how to improve this and an explanation of the cause? After enriching the SG, if users want to modify some nodes, how would the user interact with the SG/model?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As pointed out by the author, the proposed method depends on the generation quality of the SG, while a large-scale SG dataset is rare. Plus, since the pipeline needs to modify the conditioning of the LDM, how to better leverage pre-trained large T2I model is worth investigating. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for acknowledging the strengths of our work. Your support motivates us to push forward and further enhance the paper. In the following, we respond to your questions one by one. --- **Q1: One claim of the paper is that SG guidance helps image generation with strong semantic controllability, it is not clear to me which experiments can support this claim.** **A:** Actually, we have evaluated this claim; please kindly refer to _Q1_ in section 4.5 (line 288). To evaluate the semantic controllability of SGs in image generation, we make comparisons with baselines (i.e., LDM, Frido) without SG guidance by measuring the semantic similarity (measured by the _CLIP_ score) and the semantic structural alignment (measured by the _TriRec._ metric). As shown in Figure 6, the system with SG guidance consistently performs much better than the baselines in terms of those two metrics, indicating that SG guidance enhances the semantic controllability of the T2I process and leads to highly faithful generation. --- **Q2: In Table 3, it seems like replacing the HSI module with GCN encoding only drops a little bit of the performance, it is questionable if the design of HSI is necessary.** **A:** It was our mistake to miscalculate the numbers in Table 3. We incorrectly noted the performance drop as 1.17. The actual performance drop should be 4.71 (= 35.96 - 31.25) on the COCO-A2I dataset when replacing the HSI module with GCN. This notable decrease actually underscores the indispensability of the HSI module. Thanks for pointing this out. We will correct it. --- **Q3: It seems like using SG as condition/guidance will lower the FID compared to text conditioning and also in qualitative results, details like faces will be blurry.
Any idea how to improve this and an explanation of the cause?** **A:** FID measures the Fréchet Distance between the distributions of the synthetic images and real-world images within the feature space. In light of this, a lower FID score corresponds to the generation of more realistic images. Therefore, the model additionally using SG as condition/guidance achieves a lower FID score than the model conditioned only on text, demonstrating that the former excels at image generation. As seen from the qualitative results, compared to the baselines, our proposed model significantly enhances the quality of face synthesis. Besides, we kindly note that the images have been compressed to some extent when compiled into the PDF file, resulting in blurriness and a decrease in resolution. Upon acceptance, we will show the high-resolution result images to the public on our project page. --- **Q4: After enriching the SG, if users want to modify some nodes, how would the user interact with the SG/model?** **A:** Good idea, which seems very promising in our setting. Actually, using SG guidance for user-driven diffusion-based image editing is feasible, because the SG is a highly structured representation that enables more interactive manipulation for users and more controllable image generation. In our designed framework, the idea can be implemented by taking the user-modified SG as the condition and then employing the hierarchical scene integration module to directly fuse the SG feature representations into the diffusion model, without SG hallucination. However, our current system does not cover this point. Thanks for your input; we will mention this part as future work. --- **Q5: As pointed out by the author, the proposed method depends on the generation quality of the SG, while a large-scale SG dataset is rare.
Plus, since the pipeline needs to modify the conditioning of the LDM, how to better leverage the pre-trained large T2I model is worth investigating.** **A:** Thank you for your insights. Indeed, the size of the SG dataset plays a pivotal role in the imagining ability of the proposed scene graph hallucination (SGH) module. Actually, we have leveraged the richly annotated Visual Genome (VG) dataset (62k annotated images) for pre-training the SGH module, and then we fine-tune the SGH on the COCO dataset (83k). We believe the amount of data for training the SGH module is quite substantial. Based on the experiments, the proposed SGH module is capable of imagining reasonable SGs and facilitates the generation of high-quality images. Of course, it is well believed that with more data, the performance of SGH can be further boosted, producing more accurate and visually enriched imaginations. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. I retain my original rating and encourage the authors to open-source the code for the research community. --- Reply to Comment 1.1.1: Comment: Dear Reviewer #QdRJ, Thanks again for your acknowledgment. Sure, we will release the codes and resources upon acceptance. Best.
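For context on the FID metric discussed in A3 above: FID is the Fréchet distance between Gaussians fitted to real and generated feature sets, FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2 (Sigma_r Sigma_g)^{1/2}). A minimal numpy sketch is below; this is an illustration, not the evaluation code used in the paper (real implementations extract Inception-v3 features and typically use scipy.linalg.sqrtm):

```python
import numpy as np

def fid(feats_real, feats_gen):
    """Frechet distance between Gaussians fitted to two feature sets
    of shape (n_samples, dim)."""
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    diff = mu_r - mu_g
    # Tr((cov_r cov_g)^{1/2}) equals the sum of square roots of the
    # (real, nonnegative) eigenvalues of the product of the two PSD matrices.
    eig = np.linalg.eigvals(cov_r @ cov_g)
    tr_sqrt = np.sqrt(np.clip(eig.real, 0, None)).sum()
    return float(diff @ diff + np.trace(cov_r) + np.trace(cov_g) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 4))  # stand-in for extracted image features
```

A pure mean shift of 2 per dimension in 4-D yields FID close to 16, since the covariance terms cancel and only ||mu_r - mu_g||^2 remains.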
Summary: The paper proposes a novel approach to text-to-image (T2I) synthesis, specifically focusing on generating intricate visual content from simple abstract text prompts, a.k.a. the abstract-to-intricate (A2I) setting. The proposed mechanism, named scene-graph hallucination (SGH), expands the initial scene graph (SG) of the input prompt with more feasible specific scene structures using a discrete diffusion technique. A continuous-state diffusion model then serves as the T2I synthesizer, navigated by the semantic scene structure induced from the SGH module. Additionally, this paper devises a scene sampling mechanism to generate various scene graphs. They also construct a more challenging benchmark dataset, called COCO-A2I, to effectively evaluate models under the abstract-to-intricate setting. Experiments on two benchmark datasets show that leveraging SG imagination helps image generation, where Salad can hallucinate certain scene clues to facilitate abstract-to-intricate T2I generation. The study also contributes to a better understanding of the efficacy and rationale behind scene graph features, with potential applications in other tasks such as image editing and text-to-video generation. Strengths: Overall, I enjoy the work very much; it studies conditioned image synthesis from an interesting and realistic perspective, with robust and novel technical methods. The paper seems very solid to me, with thorough evaluations from many angles, and the details are given quite sufficiently (with long and informative appendix content). While this is a very technical paper, there is immense interest in diffusion models under such a novel perspective. I believe this paper has the potential to unlock interesting future research. **Interesting and meaningful perspective.** The paper studies how to generate intricate images from succinct abstract prompts. This can be a very interesting and realistic perspective in many scenarios.
The issue essentially lies in the natural modality asymmetry between language and vision, where strict one-to-one correspondences do not exist. This becomes particularly challenging when T2I systems attempt to capture the nuanced content of the prompts and generate corresponding high-quality images, thereby highlighting the need for deep semantic understanding in these systems. **Innovative methodology.** The authors propose a novel approach to the task of abstract-to-intricate T2I synthesis, where the image-generating process is controlled and navigated by the underlying semantic scene structure. On the one hand, it matches the human intuition for handling this task, which is pretty interesting and makes a lot of sense to me. If the system can hallucinate concrete textual clues that correspond more closely to the visual scenes, the generation process becomes much easier. Technically, the devised method, called scene-graph hallucination (SGH), expands the initial scene graph (SG) of the input prompt by iteratively evolving new scene elements via a discrete diffusion model, which is theoretically sound and empirically validated through experimental results. On the other hand, the hierarchical scene integration mechanism is able to ensure highly effective guidance from the semantic scene features. **Solid evaluations and convincing analyses.** The results are presented clearly and concisely, showcasing a significant improvement on the abstract-to-intricate T2I task at hand. Some in-depth analyses are provided to offer substantial evidence and valuable insights from different perspectives to support the claims made, such as exploring the effectiveness and rationale behind the employed scene graph features. Also, the comprehensive implementation details will enhance reproducibility and facilitate the smooth adoption of the proposed approach. The codes are provided.
**Well-structured presentation and clear writing.** This paper is exceptionally clear and well-written (except for some notation problems), with good illustrations, making it easy to understand. The details are given quite sufficiently, with long and informative appendix content. I've gone through the appendix and found almost everything I wanted to see. Weaknesses: While I think the paper is solid, it can be significantly improved further if the following issues (major or minor) are considered properly: (ordered by appearance in the paper) 1. In Figure 1, the issues of vision distraction and wrong binding appear to represent the same issue, where the resulting images deviate from the original user intention. Besides, it would be beneficial to maintain consistent terminology throughout. For example, in Figure 1(a), it is referred to as 'vision distraction', whereas in the article, it is mentioned as "visual distraction." 2. Although the authors devise a scene sampling mechanism to generate various scene graphs for diversified image syntheses during inference, the diversity of scene graphs remains inadequate. This could be due to the limited number of categories of objects, attributes, or relationships in the SGH module, which restricts the model's imagination and exploration capacity. In real scenarios, the objects and attributes can be quite multifarious, for example, the color of T-shirts. 3. Potential lack of comparison with existing scene graph completion work. The performance of scene graph hallucination has a huge impact on the performance of the abstract-to-intricate T2I task. It would be interesting to see a performance comparison with existing work on scene graph hallucination, such as [1,2]. Moreover, another intuitive method is the pipeline method, i.e., first scene graph hallucination and then scene-graph-to-image generation based on the imagined SG. Therefore, some evaluation experiments should be provided to make a comparison with the pipeline method. 4.
The authors may possibly overlook the imagination ability of the T2I model itself. For example, existing T2I models can synthesize an image based on the prompt 'a man'. Is the appearance of the man in the generated image, such as hair color, attributed to the imagination of the T2I model? In other words, the authors fail to provide a clear definition of when the model needs to imagine. Furthermore, they do not delve into how detailed the scene graph needs to be in order to generate intricate images effectively. For instance, in Figure 1(b), the enriched SG does not fully correspond to the image, as the screen on the wall is not shown in the enriched SG, but the model is still able to generate it. 5. Some unclear and confusing annotations: - In the bottom right of Figure 2, it is unclear whether $a_{n, 1}$ is a single value, i.e., one attribute of the object $o_1$, or many values. The same issue holds for $r_{n,k}$. - In line 150, the distribution should be $p_{\theta}(s_t|s_{t+1}, y)$. - Is $\mathcal{B}_{s_t}$ a row one-hot vector or a column one-hot vector? This should be clarified. Besides, there should be a transition symbol in Eq. 2. - In Eq. 1, the $\mathcal{L}_{vlb}$ is not explicitly designed for the optimization of conditional image generation. Thus, some clarification should be added; the correct loss function is demonstrated in Appendix B.1, specifically Equations 19-22. - No illustration of the $d$ in Eq. 7. - In the Implementation section, the version of CLIP mentioned is not consistent with the information provided in Appendix C.5. - No demonstration of the hyper-parameters top-A and the temperature $\eta$ in the inference. - In Table 3, the NTD-CA should be Eq. 5, not Eq. 17. - In Line 314, not 'object-object' pairs but 'subject-predicate-object' triplets. - In Line 738, '2.005' should be written as '2,005'. - In Table 5, the second 'Max' should be 'Avg.' 6.
The paper lacks in-depth analysis of the circumstances under which the proposed SGH module fails to generate reasonable scene graphs, leaving a gap in understanding its limitations. 7. In Section 4.5, question Q2 is interesting, but the way it is answered strikes me as rather indirect. Figure 7 merely shows that the SGH is able to induce intricate SGs. As discussed in Q1, there is a strong semantic alignment between the input prompts and the generated images guided by the SG. Consequently, it becomes relatively straightforward to achieve a high TriRec. score by comparing the induced SG with the SG of the generated image. Nonetheless, this response lacks directness and comprehensiveness regarding whether SGH is capable of producing reasonable SGs. Besides, the concept of a "reasonable" SG remains ambiguous. For example, if 'table-in-room' is a reasonable scene triplet, what about the triplet 'table-in-ocean'? If a reasonable SG means an SG that conforms to natural conditions, can the proposed model generate some abstract or unconventional images? [1] Garg S, Dhamo H, Farshad A, et al. Unconditional scene graph generation. 2021. [2] Agarwal R, Chandra T S, Patil V, et al. GEMS: Scene Expansion using Generative Models of Graphs. 2023 Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Is there any further explanation about the order of the Text-CA, Graph-CA, NTD-CA? 2. During the SG hallucination, other visual scenes are also introduced. How does the proposed method deal with the visual distraction issues? 3. How to ensure that the objects, attributes, and relationships in the initial SGs always remain in the imagined SGs? 4. Do the main results in Tables 1 & 2 adopt the scene sampling mechanism? If so, are they the average of multiple experimental runs? 5. In HSI, which semantic level has a significant impact on task performance? 6. How to leverage the VG dataset to train the SGH module?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: I do not foresee any potential negative impact from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 10: Award quality: Technically flawless paper with groundbreaking impact, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate that you went through our work so thoroughly, and we are excited to receive such strong support, which will definitely push us forward. In the following, we answer your questions one by one. --- **Q1: In Figure 1, the issues of vision distraction and wrong binding appear to represent …** **A:** Our aim is to analyze this phenomenon at a detailed level: while the problems in the generated images look alike, there are nuanced variations in their underlying causes. We appreciate your careful review, and in the revision we will ensure consistent use of terminology. --- **Q2: Although the authors devise a scene sampling mechanism to …, the diversity of scene graph remains inadequate.** **A:** Based on the current experimental results, we see that the imagination module has good imaginative capacity, which helps generate intricate images from abstract prompts. However, we also acknowledge that the model's imagination and exploration capabilities are limited. As part of our future work, we will explore how to alleviate this limitation. --- **Q3: lack of comparison… Moreover, experiments should…** **A:** Following [2], we compute the Maximum Mean Discrepancy (MMD) for node and edge types, as well as sub-graph similarities (referred to as NSPDK*), to compare our method with baselines on scene graph expansion. As shown in the table, our method consistently outperforms the baselines, indicating that the graphs generated by our model are more meaningful and better aligned with the observed SG distribution. 
|Metric|GraphRNN|SceneGraphGen|GEMS|Ours|
| :-----: | :----: | :----: | :----: | :----: |
| Node ($\times 10^4 \downarrow$)|5.44|5.92|5.19|4.85|
| Edge ($\times 10^2 \downarrow$)|22.38|0.83|1.13|0.63|
| NSPDK* ($\times 10^2 \downarrow$)|22.60|0.73|1.21|0.69|

--- **Q4: The authors may possibly overlook the imagination ability of the T2I model itself …** **A:** As discussed in Q2, our work focuses on enhancing generative models' capacity to comprehend abstract text by envisioning key objects that concretize it, rather than exhaustively generating all image details. Additionally, we take the imagined SGs as skeletons, which means the SGs and images are not in one-to-one correspondence. Consequently, the enriched scene graph may not precisely match the generated image. --- **Q5: Some unclear and confusing annotations.** **A:** Thank you so much for the careful review; we will revise these points and carefully double-check all the content. --- **Q6: The paper lacks an in-depth analysis … leaving a gap in understanding its limitations.** **A:** Because of dataset constraints, our method encounters difficulties with infrequent abstract words that are challenging to concretize with specific key objects, such as 'yearn' and 'freedom' in the prompt 'A person who yearns for freedom', resulting in failures of high-quality image generation. We outline a potential vulnerability assessment in Appendix A.3; please refer to it. --- **Q7: In section 4.5, the question Q2 is interesting but strikes me as a rather weird way to answer the question…** **A:** Firstly, we assume that the SGs of gold images entail reasonable scenes. Therefore, if the SGs induced by SGH exhibit high alignment with the SGs of the gold images, i.e., achieve a high TriRec. score, it indicates the capability of SGH to induce reasonable SGs, directly addressing question Q2. 
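For reference, the MMD evaluation quoted above (following the graph-generation protocol of [2]) can be sketched with a plain Gaussian-kernel MMD between two sets of node- or edge-type histograms. The function names and bandwidth below are our own illustrative choices, not the authors' evaluation code:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel between two histogram vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def mmd_squared(set_a, set_b, sigma=1.0):
    """Squared MMD between two sets of histograms.

    set_a, set_b: lists of 1-D numpy arrays, e.g. node-type
    distributions of generated vs. reference scene graphs.
    A smaller value means the two distributions are closer.
    """
    k_aa = np.mean([gaussian_kernel(x, y, sigma) for x in set_a for y in set_a])
    k_bb = np.mean([gaussian_kernel(x, y, sigma) for x in set_b for y in set_b])
    k_ab = np.mean([gaussian_kernel(x, y, sigma) for x in set_a for y in set_b])
    return k_aa + k_bb - 2 * k_ab
```

Identical sets yield an MMD of zero, and the value grows as the generated graph statistics drift away from the reference set.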
Secondly, our proposed method, thanks to the scene sampling strategy, has the potential to generate unconventional images, because we take the top-A category candidates and sample over them instead of always picking the best category prediction for each node during inference. --- **Q8: Is there any further explanation about the order of the Text-CA, Graph-CA, NTD-CA?** **A:** In our experiments, we also tried other orders of the three condition injection modules. We observe that the order illustrated in Figure 2 shows the best performance for SG hallucination. --- **Q9: During the SG hallucination… How does the proposed method deal with visual distraction issues?** **A:** In this work, two mechanisms address visual distraction. One is rational SG imagination, which aims to align the imagined SGs with the gold images while minimizing the introduction of unwanted information. The other is controllable image generation, where the image generation is guided in a fine-grained manner by the objects, attributes, and relationships specified in the SGs. --- **Q10: How to ensure that the objects … remain in the imaged SGs?** **A:** Indeed, it is not essential to ensure a strict correspondence between the initial scene graphs (SGs) and the imagined SGs, since the imagined SGs involve many more concrete objects, while the initial SGs may include certain abstract objects. --- **Q11: Have the main results in Table 1&2 adopted the scene sampling mechanism? So is this the average of multiple experimental results?** **A:** Yes. We present the average of multiple experimental runs. --- **Q12: In HIS, which semantic level has a significant impact on task performance?** **A:** Information at different semantic levels contributes to task performance to different extents, with semantic information at the relation level showing a notable influence on the task. 
This is attributed to the fact that information at the relation level encompasses both object-level details and higher-level information. --- **Q13: How to leverage the VG dataset to train the SGH module?** **A:** As there are no captions for images in the VG dataset, we first extract the seed graph as the initial graph using the algorithm in [1]. Then, we transform the seed graph into a token sequence as the text prompt. Finally, we use the VG data to train our SGH module. [1] GEMS: Scene Expansion using Generative Models of Graphs. WACV 2023 --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: I thank the authors for the detailed response, which has effectively addressed most of my concerns. But I still have a few lingering points to discuss: 1. I remain somewhat perplexed about how the proposed model determines which input prompts need to engage the imagination process. 2. I've also glanced at the comments by other reviewers. With regard to the task definition, I find Reviewer DqEY's perspective quite compelling. Text-to-image generation inherently involves a complex transition from abstract concepts to intricate visualizations due to the inherent asymmetry between language and vision. Why does this work solely concentrate on place nouns and progressive verbs? Put differently, there seems to be a lack of a comprehensive definition of abstract terms. If this is caused by the COCO data distribution, i.e., place nouns and progressive verbs being the most common cases, then my suggestion would be that the authors further enrich the proposed COCO-A2I data, maybe say XXX-A2I, by including and collecting more scenarios. 3. The capacity for imagination within the proposed method appears to be limited by the generalization and diversity of the training dataset, right? Could the authors provide more insight into potential strategies for enhancing imaginative capabilities while concurrently mitigating the risk of hallucination? 
Overall, I consider this paper to be of great value. The addressed problem is intriguing and interesting, and, with well-conducted experiments, the proposed method demonstrates a marked superiority over existing methods. Therefore, I will maintain my earlier assessment. --- Reply to Comment 1.1.1: Comment: Thank you so much for the prompt responses and the high recognition of our work. In the following, we respond to your questions one by one. *** **Q1: I remain somewhat perplexed about how the proposed model determines what input prompts are needed to engage in the imagination process.** **A:** During training, we utilize the scene graphs derived from the generated images as supervised data to optimize the imagination model. In essence, when discrepancies arise between the scene graph associated with the input prompt and the desired scene graph, the model endeavors to enhance its ability to envision the input prompt, subsequently expanding it into a scene graph that harmonizes more cohesively with the semantic scene structure of the target image. Notably, since a pronounced discrepancy exists between the abstract input and the scene graph of the target image, the model allocates greater focus to mastering imagination for abstract words. *** **Q2: I've also glanced at the comments by other reviewers. With regard to the task definition, I find Reviewer DqEY's perspective quite compelling. Text-to-image generation inherently involves a complex transition from abstract concepts to intricate visualizations due to the inherent asymmetry between language and vision. Why does this work solely concentrate on place nouns and progressive verbs? Put differently, there seems to be a lack of a comprehensive definition of abstract terms. 
If this is caused by the COCO data distribution, like the place nouns and progressive verbs are the most common cases, then my suggestion could be, authors are encouraged to further enrich the proposed COCO-A2I data, maybe say XXX-A2I, by including and collecting more scenarios.** **A:** Thanks for the reviewer's valuable and constructive suggestions. Actually, we have already embarked on enriching the A2I data by collecting more data through the following approaches: 1) we leverage existing image-caption pairs to collect more instances under various scenarios, such as [CC12M](https://github.com/google-research-datasets/conceptual-12m) or [CommonPool](https://arxiv.org/abs/2304.14108); 2) we construct various abstract input prompts by designing different templates and filling them with different types of abstract words. *** **Q3: The capacity for imagination within the proposed method appears to be limited by the generalization and diversity of the training dataset, right? Could the authors provide more insight into the potential strategies available to enhance imaginative capabilities while concurrently mitigating the risk of hallucination?** **A:** We believe there are promising approaches to enhance the imaginative capacity of the proposed method. Firstly, training the model with more diverse datasets: we can leverage existing large-scale datasets to further optimize the scene graph imagination module, enabling the model to perform scene imagination over a wider range of abstract content. Secondly, extending the depth of imagination: this involves expanding the categories of objects, attributes, and relationships within scene graphs that the model can currently imagine, which will empower the model to envision a broader array of intricate scene components. Thirdly, integrating with LLMs: we can explore LLMs' potential to comprehend and expand scene graphs, as LLMs have shown multifaceted abilities across various tasks. 
Meanwhile, we can leverage knowledge retrieved from Wikipedia or other authoritative platforms to mitigate hallucination issues in LLMs. *** Thanks again for your interaction. If you have further inquiries, please do not hesitate to reply to this thread.
Summary: The paper proposes a new setting (or sub-domain) of text-to-image generation (T2I), namely abstract-to-intricate T2I. To tackle the new setting, the authors propose a method (Salad) to enrich the scene graphs parsed from the text prompts based on the discrete diffusion models. The enriched scene graphs are used as guidance to generate complicated scenes that align better with the initial concise prompts. They report quantitative results and analysis experiments to show the effectiveness of the two-stage system in multiple metrics. Strengths: - The paper addresses a practical and important problem. Users of the T2I system, in reality, have to write unreasonably long text prompts to generate images with plausible scenes/styles/semantics. Abstract-to-intricate T2I could benefit the users with a more faithful and efficient generation process. - The scene graph hallucination stage achieved with the discrete diffusion model is interesting. Enriching the text prompt in the scene graph space seems like a more controllable and stable process that can preserve the faithfulness of relations and attributes. - The authors conduct an extensive analysis of the components of the system to demonstrate its effectiveness. Weaknesses: - Task definition. While I understand the concept of abstract-to-intricate, I think the work lacks a more rigorous definition of the task. It seems unclear why places and progressive verbs are included and other nouns or verbs are discarded in the COCO-A2I. I tried the failure prompt in Fig. 1 on Stable Diffusion, and I got good results most of the time. So what makes these prompts an A2I problem instead of a faithfulness problem of all T2I models? - My major concern is about the experimental setup. It seems that the Salad system is using the pre-trained Stable Diffusion as the image generator compared to other methods like Frido and LDM-G. 
The comparison could be unfair as all these baselines are trained with much fewer images and inherently have higher FID scores on MSCOCO. In addition, the authors use Frido for text-based enrichment approaches. - The writing needs improvement. There are multiple typos throughout the paper and some inaccurate sentences. For instance, line 133 "attribute (o)", line 226 "baselines: stable diffusioninspired by [6].", line 235-236 "For the SIS module, we load the parameters of Stable Diffusion3 (v1.4) as the initialization.", line 238 "UNet" (UNet in Stable Diffusion?). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - What is the SGH module's output and the SIS step's input? Fig. 2 seems like you are inputting the scene graph matrix into the UNet, while Sec. 3.2 and Fig. 3 show that you convert the graphs into prompts and feed them to the attention layers. - Did you fine-tune the Stable Diffusion model by saying "For the SIS module, we load the parameters of Stable Diffusion3 (v1.4) as the initialization."? - Why do all models achieve such high CLIP scores? Is the CLIP score the cosine similarity between image-text pairs or the CLIP R-precision metric? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for going through our paper carefully and providing valuable constructive feedback, which will surely benefit our work. In the following, we address your concerns, and we sincerely hope you will raise your evaluation if you feel we have relieved them. --- **Q1: Task definition. I think the work lacks a more rigorous definition of the task. It seems unclear why places and progressive verbs are included and other nouns or verbs are discarded in the COCO-A2I. I tried the failure prompt in Fig. 1 on Stable Diffusion, and I got good results. So what makes these prompts an A2I problem instead of a faithfulness problem of all T2I models?** **A:** Thank you for pointing this out. Here we give a rigorous definition of the setting: this work is dedicated to improving the quality of intricate image generation conditioned on succinct abstract prompts, where the prompts are concise and highly abstract and often encapsulate complex scenes. Abstract words fall into many categories, such as _place nouns_, _progressive verbs_, _social events_ (e.g., party, concert, conference), and _experience nouns_ (e.g., adventure, journey, festival). For the COCO data, we conducted a pilot study and found that two types of abstract words, _place nouns_ and _progressive verbs_, are the most common, with a proportion of up to 95%, while other types account for a very small portion. Thus, we mainly selected these two types for the COCO-A2I dataset. The results in Fig. 1 were obtained with the Stable Diffusion (v1.4) checkpoint available at the time of our project. The SD model has since been updated several times and trained on more data, achieving better results. Therefore, if you try it now, you may well obtain good results. That being said, there is no guarantee of good performance when using SD on such abstract prompts. 
To gain an empirical result, we randomly selected 50 abstract prompts from the COCO-A2I dataset and input them into the latest Stable Diffusion (v2.1), with a success rate of only 54% in generating satisfactory results. Therefore, endowing T2I generative models with imagination ability is still non-trivial. At the same time, we acknowledge that the A2I problem is inherently part of the faithfulness problem of T2I models. --- **Q2: My major concern is about the experimental setup. Salad system is using the pre-trained Stable Diffusion as the image generator compared to other methods like Frido and LDM-G. The comparison could be unfair as all these baselines are trained with much fewer images. In addition, the authors use Frido for text-based enrichment approaches.** **A:** We acknowledge this oversight regarding fairness. We were following the common practice of latent stable diffusion, i.e., leveraging the pre-trained stable diffusion to ensure the stability of training and the diversity of its application scenarios. That being said, to obtain a fair comparison, during the rebuttal period we re-trained our model on the same COCO training set as the baselines, without loading the checkpoint as initialization. The results are shown below, and our conclusions still hold: the superiority of the proposed Salad model remains significant, especially on the COCO-A2I dataset.

Result on COCO:
| Model | FID | IS | CLIP|
| :-----: | :----: | :----: | :----: |
|Frido | 11.24 | 26.84 |70.46|
| Ours | 10.57 | 28.16 |71.37|

Result on COCO-A2I:
| Model | FID | IS | CLIP|
| :-----: | :----: | :----: | :----: |
|Frido | 40.36|18.36|68.53|
| Ours | 33.61|24.16|70.34|

--- **Q3: The writing needs improvement. There are multiple typos throughout the paper and some inaccurate sentences.** **A:** We appreciate very much that you went through the paper carefully. We will carefully correct them all. --- **Q4: What is the SGH module's output and the SIS step's input? Fig. 
2 seems like you are inputting the scene graph matrix into the UNet, while Sec. 3.2 and Fig. 3 show that you convert the graphs into prompts and feed them to the attention layers.** **A:** Sorry for the confusion; we will provide a clearer description in the revision. 1) The output of the SGH module is three node matrices, i.e., object nodes $s_t^o$, attribute nodes $s_t^a$, and relation nodes $s_t^r$, as shown at the bottom right of Fig. 2. The values of the three matrices are the labels of the nodes. We can convert the three matrices into a scene graph without any further calculation. 2) The SIS step takes as input a sequence of tokens derived from the scene graph at different semantic levels. For more details, please refer to Appendix B.2. --- **Q5: Did you fine-tune the Stable Diffusion model by saying "For the SIS module, we load the parameters of Stable Diffusion3 (v1.4) as the initialization."?** **A:** Yes, we load the SD (version 1.4) weights and then fine-tune them together with our SGH module on the COCO data. The merits of employing the off-the-shelf checkpoint as initialization are multifaceted. Firstly, starting from existing pre-trained parameters yields more stable training and faster convergence. Secondly, reusing existing SD weights is the default practice in the community, which also avoids re-training the backbone from scratch. --- **Q6: Why do all models achieve such high CLIP scores? Is the CLIP score the cosine similarity between image-text pairs or the CLIP R-precision metric?** **A:** The CLIP score [1] is defined as the cosine similarity between image-text pairs, as employed in the baselines. Specifically, for a generated image with visual CLIP embedding $v$ and an input prompt with textual CLIP embedding $c$, the CLIP score is computed as $\mathrm{CLIP}(v, c) = w \cdot \max(\cos(v, c), 0)$, where $w$ is a re-scaling parameter. We used the officially released code for accurate evaluation. 
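The rescaled-cosine form of the CLIP score quoted above can be sketched as follows. This is a minimal illustration over precomputed embeddings; `w = 2.5` is the re-scaling value used in the CLIPScore paper, and the function name is our own:

```python
import numpy as np

def clip_score(v, c, w=2.5):
    """CLIP score between an image embedding v and a text embedding c:
    w * max(cos(v, c), 0), following the CLIPScore formulation."""
    cos = np.dot(v, c) / (np.linalg.norm(v) * np.linalg.norm(c))
    return w * max(cos, 0.0)
```

Because well-aligned image-text pairs have cosine similarities well above zero and the score is rescaled by `w`, reported CLIP scores cluster in a high, narrow range across models.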
[1] CLIPScore: A Reference-free Evaluation Metric for Image Captioning. EMNLP 2021 --- Rebuttal 2: Title: Looking forward to your feedback Comment: Dear Reviewer#DqEY, We would like to express our sincere appreciation for your efforts and valuable feedback. Your comments are essential for improving the quality of our work. To address your main concerns about the COCO-A2I data construction and the experimental setup, we ran experiments during the rebuttal under a more reasonable setting and presented the updated results. We kindly hope that you can take some time to check our response and re-evaluate our paper based on our replies. If you have any further concerns or questions, please do not hesitate to let us know; we will be happy to address them promptly. Best Regards. --- Rebuttal Comment 2.1: Title: Thank you for your response Comment: Thank you for addressing some of my concerns. While I still have concerns about the experiment setup, I am on the fence and have increased my rating to borderline accept. --- Reply to Comment 2.1.1: Comment: Dear Reviewer#DqEY, Thank you so much for your kind recognition of this work. All your feedback will be incorporated into the revision. If you still have concerns, please let us know; we would be more than willing to offer additional clarification. Best.
Rebuttal 1: Rebuttal: # General Response to All Reviewers Dear reviewers, Thank you all for your time in writing valuable and constructive comments. Your feedback will definitely help us enhance the quality of our paper, and we are committed to incorporating your suggestions in our revision. Meanwhile, we are greatly encouraged that the reviewers find the new task setting **'practical and important', 'vital' and 'interesting and meaningful'** (reviewers #DqEY, #mzvT and #GuSJ), **the proposed method novel, interesting and effective** (reviewers #QdRJ, #mzvT and #KZmg), and **our experiments solid and comprehensive** (reviewers #QdRJ, #mzvT, #GuSJ and #DqEY). Your support means a lot to us! At this juncture, we would like to re-emphasize the significance of this work. While existing state-of-the-art (SoTA) text-to-image (T2I) generation systems (e.g., Stable Diffusion) have shown incredible capability in creating high-quality images, research on abstract-to-intricate (A2I) T2I has been largely overlooked, even though it is a very important setting in the real world. Against this background, this work contributes the following key aspects: - We are the first to study the novel T2I setup of **intricate image synthesis from succinct abstract texts**, i.e., A2I T2I, for which we collect a dataset, COCO-A2I. - We solve A2I T2I with a novel scene graph (SG) hallucination framework implemented with the discrete diffusion technique. - We construct a diffusion-based T2I system that employs SG representations for highly controllable and scalable image generation. - Our system empirically shows great advantages over existing SoTA baselines on A2I T2I generation. As recognized by reviewer #mzvT, the paper _'studies the specific conditioned image synthesis from an interesting and realistic perspective, with robust and novel technical methods'_, and _'will have the potential to unlock interesting future research'_. 
We firmly believe this work will have a broad impact on future research in the community. Thus, we will release all the code and resources upon acceptance. In response to the reviewers' comments, we have thoroughly reviewed our paper, performed additional experiments, and prepared a comprehensive response. We will fix all the typos and improve the manuscript according to your comments. We hope that our responses adequately address your concerns. We also kindly hope that reviewer #DqEY and reviewer #GuSJ (both with borderline rejections) will raise their evaluations if our responses effectively address their concerns, and we look forward to your recognition. Best regards.
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper targets the abstract setting for text-to-image generation (T2I). The authors propose scene-graph hallucination (SGH) to perform imagination over the scene graph of the input prompt and make up the missing information. SGH can then perform better T2I with the completed scene graph. Experiments on the COCO dataset demonstrate that SGH can significantly bridge the gap of abstract-to-intricate T2I. Strengths: + This paper is well-written and easy to follow. + The goal of abstract-to-intricate T2I is important since we cannot expect users to always input a full prompt. This setting has the potential to improve practical applications. + The usage of scene graphs is effective and can lead to better imagination for T2I than a Chain-of-Thought (CoT) language prompt. + They provide detailed ablation studies from different aspects (Table 3/4 and Fig. 6/7) as well as rich qualitative examples (Fig. 5 and Fig. 12/13 in Appendix). Weaknesses: - Maybe I missed it, but what is the motivation for using scene graphs instead of LLM-completed prompts? Does imagination over scene graphs work more robustly than an LLM? - Since they rely on imagination to deal with the abstractness issue, the risk of hallucination also rises. It would be better to evaluate this issue and the trade-off between it and the T2I performance. How can this deficiency be avoided or mitigated in the proposed framework? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I do not have additional questions. Please see the Weaknesses and refer to other reviewers. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The hallucination issues over scene graphs should be carefully addressed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for taking the time to provide valuable feedback, especially your recognition of our work, such as 'well-written and easy to follow', 'goal of abstract-to-intricate T2I is important', and 'effective'; your support means a lot to us. We present a point-by-point response below to address your concerns. --- **Q1: Maybe I missed it, but what is the motivation for using scene graphs instead of LLM-completed prompts? Does the imagination over scene graphs work more robustly than LLM?** **A:** The adoption of scene graphs over LLM-completed prompts is driven by two primary motivations. Firstly, employing scene graphs facilitates greater content controllability. LLM-completed prompts often introduce additional adjectives and attributives, or concatenate raw sentences to provide tangible explanations and contexts. However, such supplementary content inserted by LLMs lacks controllability and can introduce even more abstract words, as exemplified by 'enthusiastic' and 'confident' in Figure 1, ultimately posing more challenges for downstream generative models. In contrast, scene graph-based imagination allows for enhanced visual control and a stable process, as the hallucination process expands the initial scene graph with specific scene structures, where the imagined objects, attributes, and relationships are drawn from limited pre-defined sets. Secondly, employing scene graphs reduces the difficulty of downstream image generation. Scene graphs offer a concise and precise means of depicting objects and their interrelationships, empowering fine-grained control over the semantic scene during image generation. --- **Q2: Since they rely on imagination to deal with abstractive issues, the hallucination situation also raises. It will be better to evaluate this issue, and how is the trade-off between this and the T2I performance. 
How to avoid or mitigate this deficiency in the proposed framework?** **A:** As addressed in **Q1**, our scene graph imagination process aims to enrich the initial scene graphs with more specific scene structures, where objects, attributes, and relationships are selected from limited pre-defined sets, facilitating controlled imagination and mitigating undesired hallucinations. To evaluate whether the proposed scene graph imagination module induces reasonable scene graphs (SGs), we have assessed the structure alignment, specifically the recall rate (TriRec.) of the 'object-object' pairs, between the induced SGs and the SGs of gold images, which are known to represent reasonable scenes. Table 4 shows that 82.01% of 'object-object' pairs in the induced SGs align closely with those in the gold SGs, signifying the capacity of our model to synthesize valid and coherent scene content. Moreover, the impact of the hallucination issue in T2I generation is relatively less significant than in LLMs. In our T2I generation, hallucinations are relatively acceptable, particularly when dealing with abstract scenarios, as demonstrated by the experimental results. In contrast, LLMs, having no such constraint, tend to produce more uncontrollable and undesirable content hallucinations. That being said, the hallucination issue does need to be addressed in some scenarios in our system, such as the generation of images that emphasize real scenes, which is also a potential research direction for our future work.
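The TriRec. alignment check described above reduces to a set-based recall over scene-graph triplets: count how many gold triplets also appear in the induced SG. A minimal sketch (the function name and tuple format are our own assumptions, not the authors' code) might be:

```python
def triplet_recall(induced, gold):
    """Recall of gold scene-graph triplets recovered in the induced SG.

    induced, gold: iterables of (subject, predicate, object) tuples.
    Returns the fraction of gold triplets found among the induced ones.
    """
    gold = set(gold)
    if not gold:
        return 1.0  # nothing to recover
    hit = len(gold & set(induced))
    return hit / len(gold)
```

A high value then indicates that the hallucinated SG reproduces most of the structure of the gold image's SG.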
null
null
null
null
null
null
Rubik's Cube: High-Order Channel Interactions with a Hierarchical Receptive Field
Accept (poster)
Summary: This paper proposes the Rubik's cube convolution operator to model high-order channel-wise interactions. The Rubik's cube convolution applies a spatial-shifting mechanism across channel-wise groups, which is zero-FLOP and zero-parameter. Moreover, only point-wise convolution and dot products are applied in the Rubik's cube convolution. The results on image denoising, low-light image enhancement, guided image super-resolution, and image deblurring show the effectiveness of the proposed Rubik's cube convolution. Strengths: 1. The idea of the Rubik's cube convolution is reasonable. This is a promising step to complement research on modeling high-order information along the channel dimension. 2. The design of the Rubik's cube convolution is simple yet efficient and easy to follow, since it only includes shift, dot-product, and point-wise convolution operations. 3. The paper is well-written. The authors provide extensive analysis of the proposed method. For example, the illustrations of the high-order hierarchical receptive field in Figures 1, 3, and 4, and the corresponding analysis, are convincing. 4. The main results and ablation study are extensive and demonstrate the superiority of the proposed method. 5. In the supplementary material, the authors provide more experiments on high-level and low-level tasks, further showing the effectiveness of the Rubik's cube convolution. Weaknesses: 1. The FLOPs of the Rubik's cube convolution are not provided. Although the parameter count of the Rubik's cube convolution is small (Sec. 3.3), the FLOPs may be large since there are many dot products in it. 2. Although the spatial-shifting mechanism is zero-FLOP and zero-parameter, it is a high-latency operation. Therefore, the actual latency of the Rubik's cube convolution may be high, hindering its application. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. The proposed Rubik's cube convolution structure is similar to the g^nConv proposed in HorNet [17]. 
Clarify the difference between the Rubik’s cube convolution and g^nConv. 2. In Figure 6 and Line 230, the receptive field of the network progressively expands as the number of shifted pixels increases. However, the performance of RubikConv-p is best when p=1. Please give some explanation. 3. The authors only provide visual comparisons for the low-light image enhancement task and the image denoising task. More visual results on guided image super-resolution and image de-blurring should be provided. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have discussed the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Efficiency of the proposed RubikConv.** We report the model size, FLOPs (for an image with 400\*600\*3 pixels), and average running time on the LOL test set (15 images of 400\*600\*3 pixels) in Table 5. The running time is measured on a workstation with an NVIDIA RTX 3090 GPU. We only replace two standard convolution layers in the DRBN baseline with the proposed RubikConv; thus, the extra running time introduced by the RubikConv is negligible. Since the RubikConv only requires convolution with a 1x1 kernel, the FLOPs and parameters are fewer than the baseline, while DRBN-RubikConv achieves a 0.58 dB performance improvement. Table 5: The quantitative results, FLOPs, and average running time of the DRBN baseline on the LOL test set. | Model | Config | PSNR | SSIM | FLOPs (G) | Running time (s) | |-------|-----------|---------|--------|-----------|------------------| | | Original | 19.7931 | 0.8361 | 39.037 | 0.256 | | DRBN | Conv1x1 | 19.8648 | 0.8340 | 38.445 | 0.255 | | | RubikConv | 20.3769 | 0.8400 | 38.563 | 0.263 | **2. Discussion with g^nConv.** First, the g^nConv only formulates high-order spatial interaction at the same position, i.e., a two-order interaction is built by the dot product between (i, j) in feature x1 and (i, j) in feature x2. In contrast, the proposed RubikConv conducts high-order interaction between the center point and the surrounding points in the channel interaction, i.e., high-order interaction is formulated by the dot product among (i, j) in feature x_c1 of x, (i-1, j) in feature x_c2 of x, (i+1, j) in feature x_c3 of x, (i, j-1) in feature x_c4 of x, and (i, j+1) in feature x_c5 of x. Second, the receptive field of g^nConv is constant in the channel dimension, while the proposed RubikConv embraces a Rubik’s-cube-like hierarchical receptive field benefitting from the shifting operation. **3.
Clarification on the performance of RubikConv-p.** We clarify that p=2 is the best number of shifted pixels in the ablation studies (see Figure 7 in the manuscript). We explain this from two perspectives. (1) As illustrated in Figure 7 in the manuscript, with a larger shift (p > 2), the effective receptive field focuses more on the horizontal and vertical directions and less on the four corners (upper left, upper right, bottom left, and bottom right), which leads to insufficient feature extraction for the corner pixels. (2) There is a trade-off between the insufficient feature extraction caused by large shifts and preserving the completeness of the information. Although the first branch in the RubikConv is kept unchanged to preserve the original information, it cannot compensate for the insufficient feature representation when the shift is large. Therefore, the performance drops when the number of shifted pixels is larger than 2. We will present this explanation in the revised version. **4. Visualization.** Thanks for your suggestion. We will provide more visual results in the revised version. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. The authors provide comparisons of FLOPs and running time to prove the efficiency of RubikConv, and they analyze the difference between RubikConv and g^nConv. Overall, the authors have addressed my concerns. I also read the comments and rebuttals from the other reviewers. The additional experiments further demonstrate the effectiveness of the method. Therefore, I would like to keep my original rating of Accept.
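The shift-then-interact mechanism described in point 2 of the rebuttal (an unchanged first channel group, four spatially shifted groups, point-wise convolution, and an element-wise dot-product interaction) can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' implementation: the group count of five, the shift offsets, and the placement of the two 1x1 convolutions (here plain channel-mixing matrices `w1`, `w2`) are assumptions based on the description above.

```python
import numpy as np

def shift2d(g, dy, dx):
    """Shift a (c, H, W) block by (dy, dx) pixels with zero padding."""
    out = np.zeros_like(g)
    h, w = g.shape[1], g.shape[2]
    dst_y = slice(max(dy, 0), h + min(dy, 0))
    dst_x = slice(max(dx, 0), w + min(dx, 0))
    src_y = slice(max(-dy, 0), h + min(-dy, 0))
    src_x = slice(max(-dx, 0), w + min(-dx, 0))
    out[:, dst_y, dst_x] = g[:, src_y, src_x]
    return out

def rubik_conv(x, w1, w2, p=1):
    """Sketch of a Rubik's-cube-style block: shift, 1x1 conv, dot-product.

    x: feature map of shape (C, H, W) with C divisible by 5.
    w1, w2: (C, C) matrices standing in for learned 1x1 convolutions.
    p: number of pixels each shifted group moves.
    """
    c = x.shape[0] // 5
    groups = [x[i * c:(i + 1) * c] for i in range(5)]
    # First group stays unchanged; the other four shift up/down/left/right.
    offsets = [(0, 0), (-p, 0), (p, 0), (0, -p), (0, p)]
    y = np.concatenate(
        [shift2d(g, dy, dx) for g, (dy, dx) in zip(groups, offsets)], axis=0)
    # Point-wise (1x1) convolution mixes channels at zero spatial cost.
    y = np.einsum('oc,chw->ohw', w1, y)
    # High-order channel interaction: element-wise product between the
    # shifted, mixed features and the original input, then a second 1x1 conv.
    return np.einsum('oc,chw->ohw', w2, y * x)
```

With identity weights and an all-ones input, the unshifted group passes through unchanged while the shifted groups show zero-padded borders; stacking such blocks is what produces the hierarchical, Rubik's-cube-like receptive field discussed in the rebuttal.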
Summary: This paper proposes the Rubik’s cube convolution operator, which is very simple and efficient, requiring zero additional FLOPs/parameters. This novel component improves several low-level vision networks, enabling high-order channel interaction and enlarging the receptive field of standard convolutions. Simply dividing feature maps into groups and performing shifting operations makes existing convolutional layers capture high-order channel interaction beyond first-order interaction. The experimental results show that multiple low-level vision tasks are improved by the proposed Rubik’s cube. However, one limitation the authors could address is that the performance gains and the reduction in parameters are marginal. Strengths: Rubik’s cube convolution requires no additional operations or parameters. It can also be dropped into any standard CNN architecture. The effective receptive field of the proposed component is successfully enlarged compared to a vanilla CNN, as shown in Fig. 3. The main experimental results, such as Tabs. 1, 2, 3, and 4, show the superiority of the proposed component over a trivial 1x1 convolution replacement. As a result, the performance of various image restoration networks can be enhanced (though the marginal gains will be mentioned under "weaknesses"). Moreover, the ablation studies (Tab. 5) demonstrate that the hyper-parameters for the Rubik’s cube convolution have been carefully decided. This paper is well-presented and easy to follow. Weaknesses: 1. The performance gains of the Rubik’s cube convolution are somewhat marginal. In particular, low-light enhancement and deblurring could be further improved. The authors are encouraged to consider components more robust across different tasks. Alternatively, if the task-by-task differences in improvement are explained, and insights for better variants of the Rubik’s cube convolution can be drawn from these explanations, the reviewer thinks this paper can be compelling. 2.
Furthermore, the number of parameters saved by the novel architecture is also marginal. So, (if possible) the reviewer requests an additional experiment that substantially reduces the number of parameters of existing networks (e.g., to less than 60% of “Original”) by changing channels or depths, and then replaces the existing CNN or FFN layers with Rubik’s cube convolution. If the reduced “Original” models with Rubik’s cube convolution are still better than the “Original” models while requiring a small number of parameters, the robustness of this work can be reinforced. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How about applying Rubik’s cube convolution to the QKV projection before self-attention of Transformers? As the authors know, almost all image restoration fields are dominated by Transformer-based methods, and self-attention is the core component of Transformers. However, this work seems to apply the novel component only to the FFN of Transformers. So, the reviewer wonders how much difference could be made by replacing elements that directly influence the self-attention operation (e.g., the QKV projection) with Rubik’s cube convolution. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness and question. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. About the improvement differences and the variants of RubikConv.** Thanks for the suggestion. Relationship modeling in the channel dimension has been proven effective in the literature for tasks like low-light image enhancement and image de-blurring. Previous works, such as the bright channel prior [1, 2, 3] for low-light image enhancement and the dark channel prior [4] and extreme channels prior [5] for image de-blurring, have explored the channel dimension and made significant progress. In this paper, our intention is to demonstrate the simplicity of RubikConv in achieving high-order channel interaction compared to these designs and its applicability as a generic formulation across various tasks. The main goal is not to achieve the best results within each task with dedicated components. Nevertheless, to demonstrate that RubikConv can work well with other components, we show an experiment on low-light image enhancement. In particular, we customized the exposure-invariant feature extraction of SID by adding an instance normalization operation in the first branch, while keeping the remaining operations unchanged. The instance normalization maps different exposure features to an exposure-invariant feature space without introducing extra computational costs. This variant is named RubikConv-IN. The results in Table 1 show that it can further improve the PSNR/SSIM from 20.4972/0.7979 to 20.6708/0.8079 on the LOL dataset. Table 1: Quantitative comparisons of variant RubikConv on low-light image enhancement. | Model | Config | LOL | | Huawei | | |-------|--------------|---------|--------|---------|---------| | | | PSNR | SSIM | PSNR | SSIM | | | Original | 20.1062 | 0.7895 | 20.1742 | 0.6659 | | SID | RubikConv | 20.4972 | 0.7979 | 20.2044 | 0.6655 | | | RubikConv-IN | 20.6708 | 0.8079 | 20.3251 | 0.6702 | [1] Tao L, Zhu C, Song J, et al.
Low-light image enhancement using CNN and bright channel prior, 2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2017. [2] Lee H, Sohn K, Min D. Unsupervised low-light image enhancement using bright channel prior, IEEE Signal Processing Letters, 2020, 27: 251-255. [3] Zhao Z, Xiong B, Wang L, et al. RetinexDIP: A unified deep framework for low-light image enhancement, IEEE Transactions on Circuits and Systems for Video Technology, 2021, 32(3): 1076-1088. [4] Pan J, Sun D, Pfister H, et al. Blind image deblurring using dark channel prior, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 1628-1636. [5] Yan Y, Ren W, Guo Y, et al. Image deblurring via extreme channels prior, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4003-4011. **2. About the robustness of RubikConv in networks with fewer parameters.** Thanks for your suggestion. We conduct experiments to further validate the robustness of the proposed RubikConv on the low-light image enhancement task. Firstly, we implement a baseline with fewer parameters (about 50% of the baseline) by reducing channels and depths, named “Original-S”. Then, we replace its standard convolution layers with the proposed RubikConv, named “S-RubikConv”. The quantitative performance on LOL [33] and Huawei is shown in Table 2. Although the PSNR/SSIM of “Original-S” dropped from 19.7931/0.8361 to 19.2439/0.8175, the performance of “S-RubikConv” surpassed “Original” and achieved 19.8146/0.8365 on the LOL dataset.
Table 2: Quantitative performance of DRBN with fewer parameters on low-light image enhancement. | Model | Config | LOL | | Huawei | | | |-------|-------------|---------|--------|---------|--------|---------| | | | PSNR | SSIM | PSNR | SSIM | #Params | | | Original | 19.7931 | 0.8361 | 20.1549 | 0.6851 | 0.55M | | DRBN | Original-S | 19.2439 | 0.8175 | 19.9054 | 0.6758 | 0.21M | | | S-RubikConv | 19.8146 | 0.8365 | 20.1643 | 0.6857 | 0.19M | **3. Replacing the QKV projection of ViT with RubikConv.** We attempted to replace the convolution layer in the QKV projection with the proposed RubikConv (named RubikConv-QKV); however, the performance improvement is marginal. We conjecture that the self-attention operation itself realizes a global receptive field, so the proposed RubikConv with a hierarchical receptive field will not further improve the performance. Thus, we conducted an experiment by replacing the FFN of Restormer with RubikConv. Due to the limited time, it is only performed on the image de-blurring task. Table 3: Quantitative comparison of replacing the QKV projection in Restormer with the RubikConv on image de-blurring. The model is only trained on the GoPro training set and directly tested on the GoPro testing set, HIDE, and RealBlur datasets. | Model | Config | GoPro | | HIDE | | RealBlur-J | | RealBlur-R | | |-----------|---------------|---------|--------|---------|--------|------------|--------|-------------|---------| | | | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | | | Original | 32.9117 | 0.9603 | 30.1568 | 0.9405 | 28.9636 | 0.8792 | 36.2017 | 0.9572 | | Restormer | RubikConv-QKV | 32.9134 | 0.9603 | 30.1706 | 0.9406 | 28.9641 | 0.8790 | 36.2029 | 0.9572 | | | RubikConv | 32.9305 | 0.9608 | 30.1977 | 0.9411 | 28.9698 | 0.8796 | 36.2136 | 0.9575 | --- Rebuttal Comment 1.1: Comment: **[1]**. Did you apply the instance normalization to your RubikConv or to other parts of the existing model (SID)?
From your explanation, I understand that a new component is inserted into another component of the existing model. But what I meant by "other robust components" was variants of RubikConv tailored to different tasks to improve them. This was because the improvements of RubikConv on low-light enhancement (especially Huawei vs. LOL) and deblurring are smaller than those on other tasks. Since I understand that more experiments to address this concern are not possible due to the time limit, please discuss why the Rubik's cube is more or less effective on various tasks. 2. This experimental result is impressive. 3. While this result shows the marginal impact of RubikConv on the QKV projection, the conclusion that RubikConv on the QKV projection is ineffective due to the inherent global dependency of self-attention is not well supported. In the ablation study of the Restormer paper, they showed that introducing a 3x3 depth-wise convolution, which also expands the receptive field, before self-attention was very effective. Of course, the QKV projection with depth-wise convolution (original Restormer) is further improved by the proposed RubikConv (despite very marginal gains). Therefore, I think this marginal-improvement issue is more related to the nature of the deblurring task itself than to self-attention's global dependency. In other words, this part is connected with **[1]** of this comment. Please carefully address why RubikConv shows different performance on various tasks and present some insights from it, as I mentioned in **[1]**. --- Reply to Comment 1.1.1: Comment: **Clarification about the variant (RubikConv-IN) for low-light image enhancement.** RubikConv-IN is a customized variant for low-light image enhancement. It is designed to build upon the capabilities of the original RubikConv, further enhancing its performance. For the original RubikConv, the first group is kept unchanged while the last four are spatially shifted in distinct directions.
In contrast, RubikConv-IN incorporates an additional instance normalization within the first group, retaining the same structure for the final four groups as the original RubikConv. By applying instance normalization, RubikConv-IN efficiently maps different exposure features into an exposure-invariant feature space. This process establishes an exposure-invariant space for feature extraction, thus contributing to further performance improvement. **Performance on various tasks.** The proposed RubikConv is a general operator. Its effectiveness has been validated across various low-level tasks, demonstrating either marginal or substantial performance improvements. We provide two reasons for the marginal improvement on Huawei -> LOL: (1) Data distribution disparity: The distribution of exposure within the Huawei dataset is notably more diverse than the LOL dataset. This diversity presents a challenge for the enhancer network, as learning the mapping from distinct underexposure levels to normal-light conditions is inherently more intricate than mapping similar underexposure levels to normal lighting. Hence, compared to the LOL dataset, the improvement on the Huawei dataset is marginal. (2) Degradation level: The Huawei dataset is collected from real-world environments by reducing ISO and using a shorter exposure time. The authenticity of noise present in the real-world setting introduces complexities that contribute to the increased challenge in enhancing the Huawei dataset. Consequently, the noise removal on the Huawei dataset is notably more intricate than the LOL dataset. To elucidate the marginal improvement observed in image de-blurring, we present two reasons: (1) Characteristics of the de-blurring task: The blurriness evident in blurry images often exhibits a complex and multi-directional nature. It is important to note that the proposed RubikConv, while effective, currently focuses solely on modeling blurriness in the horizontal and vertical directions. 
In our subsequent endeavors, we will explore a more comprehensive representation that encompasses the intricate directional relationships among all adjacent pixels (upper left, upper right, bottom left, bottom right, as well as the directions of up, down, left, and right). (2) Performance upper bound: Both Restormer and FFTformer (in response to Reviewer CEjA) represent the forefront of algorithmic advancements. They have effectively reached the pinnacle of performance for the image de-blurring task. Notably, Restormer with multiple transformer blocks has substantial model capacity, encompassing a formidable 25.31M parameters. Therefore, the substitution of FFNs with RubikConv, coupled with the concurrent reduction of trainable parameters from 25.31M to 24.92M, yields only a marginal performance improvement.
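The exposure-invariant mapping that RubikConv-IN adds to the first (unshifted) group, as clarified above, amounts to instance normalization over each channel's spatial dimensions. A minimal parameter-free NumPy sketch follows; the omission of learnable affine terms is an assumption on our part, not something the rebuttal specifies:

```python
import numpy as np

def instance_norm(g, eps=1e-5):
    """Per-channel instance normalization over spatial dims (H, W).

    Each channel is standardized independently, so features captured at
    different exposure levels are mapped to a common scale; this is the
    exposure-invariant mapping attributed to RubikConv-IN's first group.
    """
    mean = g.mean(axis=(1, 2), keepdims=True)
    var = g.var(axis=(1, 2), keepdims=True)
    return (g - mean) / np.sqrt(var + eps)
```

Because the statistics are computed per channel and per instance, the operation introduces no trainable parameters and negligible cost, matching the rebuttal's claim of "no extra computational costs".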
Summary: This paper proposes the Rubik’s Cube, which can replace the standard convolution layer in the traditional paradigm. With the shift operation and high-order channel interaction, the Rubik’s Cube can generate a hierarchical receptive field and activate the potential of channel interactions. Experimental results show that the Rubik’s Cube enhances performance across a variety of low-level vision tasks. Strengths: 1. The Rubik’s Cube proposed in this paper can be widely applied in frameworks where convolution layers are used. If this operation can prove its effectiveness in multiple tasks, it could make a lot of sense. 2. The Rubik's Cube is easy to implement and requires fewer parameters than the original convolutional layer. It's simple yet effective and efficient. Weaknesses: 1. The experimental results on GaoFen2 confuse me, since in terms of ERGAS, RubikConv makes a very big improvement (from 0.9576 to 0.5486). This increase does not fit well with the other experimental results. 2. Many of the experiments in this paper do not adopt the current best methods; many were conducted with methods from two or three years ago. To make the experimental results more reliable, the paper should provide comparisons with the best methods as much as possible. 3. Since the Rubik's Cube splits the original convolutional layer into multiple operations, in addition to the number of parameters it would be better to provide inference time to better demonstrate the efficiency improvement of the Rubik's Cube. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors should validate the effectiveness of the proposed Rubik’s cube convolution on broader low-level tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Results on Gaofen2.** Thanks for your reminder. It is a typo and we will revise it. The performance of MutNet on GaoFen2 is shown in Table 1. Table 1: Quantitative comparisons of guided image super-resolution. | Model | Config | PSNR | SSIM | SAM | ERGAS | |--------|-----------|---------|--------|--------|--------| | | Original | 47.1699 | 0.9569 | 0.0192 | 0.5626 | | MutNet | Conv1x1 | 47.1668 | 0.9563 | 0.0185 | 0.5584 | | | RubikConv | 47.3274 | 0.9885 | 0.0103 | 0.5486 | **2. Comparison with the best algorithms.** In the manuscript, we conducted experiments on the image denoising and de-blurring tasks using Restormer, which achieved performance that closely approximated the best results on both restoration tasks. In addition, we made an effort to integrate the proposed RubikConv into leading algorithms, such as SNR [1] for low-light image enhancement, FFTformer [2] for image de-blurring, and UAPN [3] for guided image super-resolution. The quantitative results presented below further demonstrate the effectiveness of our method. Table 2: Quantitative comparison of low-light image enhancement. | Model | Config | LOL | | Huawei | | |-------|-----------|---------|--------|---------|---------| | | | PSNR | SSIM | PSNR | SSIM | | | Original | 24.5276 | 0.8407 | 21.4308 | 0.7065 | | SNR | Conv1x1 | 24.4360 | 0.8273 | 21.2811 | 0.6906 | | | RubikConv | 24.6507 | 0.8426 | 21.5094 | 0.7113 | Table 3: Quantitative comparison of image de-blurring. The model is only trained on the GoPro training set and directly tested on the GoPro testing set, HIDE, and RealBlur datasets.
| Model | Config | GoPro | | HIDE | | RealBlur-J | | RealBlur-R | | |-----------|-----------|---------|--------|---------|--------|------------|---------|-------------|---------| | | | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | | | Original | 34.0694 | 0.9527 | 31.2796 | 0.9476 | 29.5407 | 0.8860 | 36.8165 | 0.9607 | | FFTformer | Conv1x1 | 33.6308 | 0.9455 | 30.8762 | 0.9403 | 28.8752 | 0.8789 | 36.1029 | 0.9352 | | | RubikConv | 34.0867 | 0.9533 | 31.2969 | 0.9480 | 29.5549 | 0.8863 | 36.8353 | 0.9612 | Table 4: Quantitative comparison of guided image super-resolution. | Model | Config | WorldView-II | | | | GaoFen2 | | | | |-------|-----------|--------------|--------|--------|--------|----------|--------|--------|---------| | | | PSNR | SSIM | SAM↓ | ERGAS↓ | PSNR | SSIM | SAM↓ | ERGAS↓ | | | Original | 41.7156 | 0.9657 | 0.0227 | 0.9506 | 47.4635 | 0.9895 | 0.0100 | 0.5382 | | UAPN | Conv1x1 | 41.6826 | 0.9615 | 0.0231 | 0.9557 | 47.4091 | 0.9890 | 0.0104 | 0.5392 | | | RubikConv | 41.7675 | 0.9660 | 0.0220 | 0.9454 | 47.5260 | 0.9898 | 0.0097 | 0.5380 | [1] Xu X, Wang R, Fu C W, et al. SNR-aware Low-Light Image Enhancement, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. [2] Kong L, Dong J, Ge J, et al. Efficient Frequency Domain-based Transformers for High-Quality Image Deblurring, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. [3] Zheng K, Huang J, Zhou M, et al. Deep Adaptive Pansharpening via Uncertainty-aware Image Fusion, IEEE Transactions on Geoscience and Remote Sensing, 2023. **3. Efficiency of the proposed RubikConv.** We report the model size, FLOPs (for an image with 400\*600\*3 pixels), and average running time on the LOL test set (15 images of 400\*600\*3 pixels) in Table 5. The running time is measured on a workstation with an NVIDIA RTX 3090 GPU.
We only replace two standard convolution layers in the DRBN baseline with the proposed RubikConv; thus, the extra running time introduced by the RubikConv is negligible. Since the RubikConv only requires convolution with a 1x1 kernel, the FLOPs and parameters are fewer than the baseline, while DRBN-RubikConv achieves a 0.58 dB performance improvement. Table 5: The quantitative results, FLOPs, and average running time of the DRBN baseline on the LOL test set. | Model | Config | PSNR | SSIM | FLOPs (G) | Running time (s) | |-------|-----------|---------|--------|-----------|------------------| | | Original | 19.7931 | 0.8361 | 39.037 | 0.256 | | DRBN | Conv1x1 | 19.8648 | 0.8340 | 38.445 | 0.255 | | | RubikConv | 20.3769 | 0.8400 | 38.563 | 0.263 |
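The FLOPs argument above (1x1 kernels plus zero-cost shifts) can be made concrete with the standard multiply-accumulate count for a stride-1, 'same'-padded convolution. The layer sizes below are illustrative assumptions, not taken from DRBN:

```python
def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulate count of a k x k convolution (bias ignored),
    assuming stride 1 and 'same' padding: one (c_in * k * k)-length dot
    product per output channel per spatial position."""
    return h * w * c_in * c_out * k * k

# The spatial shift itself costs zero MACs and zero parameters, so a
# branch built from 1x1 convolutions uses 9x fewer MACs than the 3x3
# convolution it replaces at the same channel widths.
macs_3x3 = conv_macs(400, 600, 64, 64, 3)
macs_1x1 = conv_macs(400, 600, 64, 64, 1)
```

This is why, in Table 5, replacing standard convolutions with RubikConv slightly lowers total FLOPs even though the block adds dot-product interactions, which are elementwise and comparatively cheap.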
null
null
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Near Optimal Reconstruction of Spherical Harmonic Expansions
Accept (poster)
Summary: ### Result ### The paper studies the problem of recovering a function from a finite number of noisy observations, for the class of "square-integrable functions on the unit sphere" (denoted by $L^2(\mathbb{S}^{d-1})$, where $\mathbb{S}^{d-1}$ is the unit sphere in $\mathbb{R}^d$). The paper shows, by developing an algorithm, that the number of samples required for recovery up to an $\epsilon$-multiplicative error is proportional to $\beta_{q, d}$, where $q$ is the degree of spherical harmonics desired and $\beta_{q, d}$ is the dimension of the space of degree $q$ spherical harmonics on the sphere of dimension $d-1$. This sample complexity is optimal, as shown by the paper via lower bounds. --------------- ### Broader Contributions ### To achieve the result above, the paper makes the following conceptual and technical contributions. First, the problem at hand is a regression problem with potentially infinite-dimensional continuous cost functions; inspired by the approach of Avron, Kapralov, Musco, Musco, Velingker, and Zandieh (AKMMVZ19), the authors discretize this problem via the leverage scores of the regression matrix. Second, the authors utilize connections between spherical harmonics and zonal harmonics to enable the implementation of leverage score sampling of the regression matrix in question. This leverage score sampling result is novel. Strengths: ### Result's Strength ### I think the optimality of the stated result of the paper makes the paper mathematically strong. ---------------------------------------- # During the Rebuttal Phase # The authors' explanations, repeated readings of the paper, and Reviewer BcP8's review and questions have helped me understand and appreciate the paper better. I'm therefore raising the score from 4 to 6 and confidence from 2 to 3. Weaknesses: ### Difficulty in reading. ### I found the paper quite difficult to read though I could tell it was written well.
I think this is simply because of the deeply technical nature of the problem and my unfamiliarity with the topic (at least to the level of generality of this paper) within the context of machine learning. I am not sure if I have any concrete feedback to this end but due to this particular reason I feel this paper might be a much better fit (in terms of reaching a wide audience who'd actually understand and appreciate the results) at a mathematical/optimization journal. That said, I acknowledge, based on my unfamiliarity with the topic, that my judgement could be completely misplaced. I'll be grateful to the authors if they could situate their work more in the context of machine learning. I'll also be happy to keep reading the submission and improving my understanding through the rebuttal period. ---------------------- Technical Quality: 3 good Clarity: 2 fair Questions for Authors: ### Questions ### 1. Would the authors be able to situate their results in the context of machine learning? For instance, in lines 20-21, the potential applications mentioned seem to be leaning towards physics/astronomy; it would be quite interesting to see such applications in machine learning (e.g., published in prior ICML/NeurIPS conferences) and also some that might be somewhat more recent. 2. I'm actually slightly confused about the main result (Theorem 1 in the full version of the submission; lines 53-56 in the main_full.pdf) for the following reason: In "Randomized algorithms for matrices and data" by Mahoney (2011), we see that the (essentially optimal) sample complexity of least-squares regression is $d/\epsilon^2$, obtained by leverage score sampling. Here, $d$ can be thought of as the "intrinsic dimension" of the projection matrix in question; I'm surprised the result in the paper (Theorem 1 in the full submission) seems to have a better dependence of $\epsilon^{-1}$ on the sample complexity despite (seemingly) being a more general result. 
Would the authors be able to comment on this? -------------------------- ### Suggestions ### 1. I think it would really help with readability if the authors could present "specific cases" of their statements wherever possible. As an example, the Definition 3, Equation 8 is essentially the definition of the sensitivities function (introduced by Langberg and Schulman; for a recent paper with this fact explicitly stated, see, for example, "Sharper Bounds for $\ell_p$ Sensitivity Sampling" by Woodruff and Yasuda); similarly, the minimum characterization of the leverage score function is essentially the one seen in Lemma 2 of "Uniform Sampling for Matrix Approximation" by Cohen, Lee, Musco, Musco, Peng, and Sidford). It would make it easier for readers to see these (possibly more familiar) statements first followed by the generalized versions stated in the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's detailed and insightful comments. > `Would the authors be able to situate their results in the context of machine learning? For instance, in lines 20-21, the potential applications mentioned seem to be leaning towards physics/astronomy; it would be quite interesting to see such applications in machine learning (e.g., published in prior ICML/NeurIPS conferences) and also some that might be somewhat more recent.` Our main focus is recovering (unknown) functions defined on the sphere, a critical task in scenarios where rotational invariance is a fundamental property. In real-life machine learning applications, this property becomes very important as a foundational requirement for modeling 3D point clouds. Notable examples that appeared in top-tier machine learning conferences include molecular/atom systems, where understanding the underlying functions within a spherical context can significantly enhance predictive modeling and simulation accuracy [1,2,3,4]. Other examples are in the field of computer vision, specifically in the recognition, classification, and reconstruction of 3D objects [5,6,7,8]. [1] Eickenberg, Michael, et al. "Solid harmonic wavelet scattering: Predicting quantum molecular energy from invariant descriptors of 3D electronic densities." NeurIPS 2017 [2] Frank, Thorben, et al. "So3krates: Equivariant attention for interactions on arbitrary length-scales in molecular systems." NeurIPS 2022 [3] Zitnick, Larry, et al. "Spherical channels for modeling atomic interactions." NeurIPS 2022 [4] Liu, Yi, et al. "Spherical message passing for 3d molecular graphs." ICLR 2022 [5] Gardner, James, et al. "Rotation-Equivariant Conditional Spherical Neural Fields for Learning a Natural Illumination Prior." NeurIPS 2022 [6] Melnyk, Pavlo, et al. "Steerable 3D spherical neurons." ICML 2022 [7] Gerken, Jan, et al. "Equivariance versus augmentation for spherical images."
ICML 2022 [8] Shakerinava, Mehran, and Siamak Ravanbakhsh. "Equivariant networks for pixelized spheres." ICML 2021 > `I'm actually slightly confused about the main result (Theorem 1 in the full version of the submission; lines 53-56 in the main_full.pdf) for the following reason: In "Randomized algorithms for matrices and data" by Mahoney (2011), we see that the (essentially optimal) sample complexity of least-squares regression is d/eps^2, obtained by leverage score sampling. Here, d can be thought of as the "intrinsic dimension" of the projection matrix in question; I'm surprised the result in the paper (Theorem 1 in the full submission) seems to have a better dependence of eps^{-1} on the sample complexity despite (seemingly) being a more general result. Would the authors be able to comment on this?` In our proof of Theorem 1, we invoked Theorem 6.3 from the paper [Chen, Price'19]. Your intuition is correct that in order to achieve a subspace embedding guarantee for the design matrix in the regression problem with an approximation factor $\epsilon$ (i.e. to preserve all singular values of the Gram matrix to within a factor of $1 \pm \epsilon$), it is generally necessary to have about $d \log d / \epsilon^2$ leverage score samples. The quadratic dependence on $1/\epsilon$ arises from the birthday paradox, and the $\log d$ factor comes from the coupon-collector problem. The approximate regression results in Mahoney (2011) are derived from a subspace embedding guarantee with error factor $\epsilon$. However, when approximately solving the regression problem, it is possible to use fewer samples since we do not need to guarantee a subspace embedding with an error parameter $\epsilon$. Instead, it suffices to have a subspace embedding that preserves the eigenvalues of the design matrix up to some constant factor, like $1 \pm 1/2$. 
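The constant-factor subspace embedding discussed in this reply can be illustrated numerically: sampling rows proportionally to their leverage scores and rescaling them preserves the spectrum of the Gram matrix up to small distortion. The sketch below uses a random Gaussian design and generous constants; it is an illustration, not the construction from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 20000, 5, 2000
A = rng.normal(size=(n, d))

# Leverage scores: squared row norms of U from the thin SVD; they sum to d.
U, _, _ = np.linalg.svd(A, full_matrices=False)
lev = (U ** 2).sum(axis=1)
p = lev / lev.sum()  # leverage score sampling distribution

# Sample s rows with prob. p_i and rescale by 1/sqrt(s p_i):
# B^T B is then an unbiased estimate of A^T A.
idx = rng.choice(n, size=s, p=p)
B = A[idx] / np.sqrt(s * p[idx, None])

# The spectrum of B^T B matches that of A^T A up to a constant factor.
ev_A = np.linalg.eigvalsh(A.T @ A)
ev_B = np.linalg.eigvalsh(B.T @ B)
assert np.all(np.abs(ev_B / ev_A - 1) < 0.5)
```

With $s$ on the order of $d \log d / \epsilon^2$ the distortion drops to $1 \pm \epsilon$, which is exactly the trade-off discussed in the reply above.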
For more precise details, please refer to the definition of "well-balanced sample" in Definition 2.1 of [Chen, Price'19], which is a sufficient condition for approximately solving linear regression. [Chen, Price'19]: "Active regression via linear-sample sparsification." COLT 2019. > `I think it would really help with readability if the authors could present "specific cases" of their statements wherever possible. ... It would make it easier for readers to see these (possibly more familiar) statements first followed by the generalized versions stated in the paper.` Your suggestion regarding enhancing readability is highly appreciated. To make it easier for readers to understand the concepts, we'll include references that they're familiar with. We'll follow your advice by adding specific cases to all our definitions. We agree with you that this will help readers start with recognizable statements before moving on to the generalized versions in our paper. --- Rebuttal Comment 1.1: Title: Thank you so much! Requesting some clarification. Comment: Dear authors, Thank you so much for that very detailed clarification. Your responses, plus Reviewer BcP8's very detailed review have helped me situate the problem a lot better. Working my way through the paper (and pattern-matching the objects to RandNLA concepts I know), I can appreciate the work more now. I have a small suggestion: It looks (from the appendix) like Lemmas 1, 2, and 3 and Theorem 3 are all previously known classical results. I think it would be better to cite where they appeared in the statements of these lemmas/theorems in the main body itself, just to have a clear separation between known facts and ones you show. I was hoping for a clarification: What would you say is the key technical insight in your paper? Is it that the leverage scores of the projection operator are all constant, thereby enabling uniform sampling? I still wouldn't say I understand the paper entirely but I definitely see it better now. 
For this reason, I'm increasing my score; but I strongly feel that the paper needs a lot of additional writing (perhaps in the appendix) for it to be comprehensible to the general NeurIPS audience (and also for its results to be appreciated and adapted widely). Thank you again! --- Reply to Comment 1.1.1: Title: Response to Reviewer Comment: Thank you very much for taking the time to consider our rebuttal response and for providing valuable feedback. We're pleased to hear that the additional clarification and Reviewer BcP8's insights have helped you better situate our result. Regarding your suggestion about citing the sources of Lemma 1, 2, 3, and Theorem 3, we appreciate your point and agree that it would enhance the clarity of our results and contributions. We will certainly make this adjustment in the revised version of the paper to provide a more cohesive presentation. In response to your question about the key technical insight of our paper, you've accurately captured one of our central contributions. The uniformity of leverage scores of the projection operator, which makes uniform sampling nearly optimal, is indeed a central element of our work. We will make this insight more explicit in the revised paper. We greatly appreciate your suggestion to include more explanatory content, especially in the appendix, in order to make our work more accessible to a broader audience. In our revised paper, we will take into account both your feedback and that of the other reviewers. We will also focus on providing more context and explanatory material in the appendix to ensure the broader audience can engage effectively with our findings. Once again, thank you for your thoughtful review and constructive feedback. We greatly appreciate your efforts in helping us improve the quality and accessibility of our paper. Your input is invaluable to us.
Summary: A technique is proposed to recover spherical harmonic expansions for functions defined on a d-dimensional sphere from a set of function evaluations. Spherical harmonic expansions are recovered by solving an optimization problem via a kernel approach, which is accompanied by theoretical guarantees. Numerical experiments demonstrate phase transitions close to the theoretical bounds. Strengths: A solid theoretical analysis is presented to derive a new approach to recovering spherical harmonic expansions from the evaluation of functions on the d-dimensional sphere. It is shown that functions should be evaluated uniformly randomly over the sphere. The approach presented is accompanied by numerous theoretical results and guarantees. Experimental results validate the theory presented, with phase transitions in success probabilities close to the theoretical bounds. Weaknesses: While the method is interesting, it does not seem that NeurIPS is the appropriate venue for this work. While the field of deep learning on the sphere is an active area of research (e.g. [Cohen et al.](https://arxiv.org/abs/1801.10130)), this contribution would appear to be somewhat orthogonal to that body of literature. No connection is made in the submitted manuscript beyond the final, somewhat cryptic, comment: "We believe our finding would appeal to the readership of the community". Furthermore, the connection to previous literature on the topic of fast spherical harmonic transforms and efficient sampling on the sphere is very poor. No reference is made to the central works of [Driscoll & Healy](https://www.sciencedirect.com/science/article/pii/S0196885884710086) and [McEwen & Wiaux](https://ieeexplore.ieee.org/document/6006544) and the follow-up articles of their groups. The use of the term "near optimal" in the title and throughout the article is somewhat overstated. The proposed method is "near" optimal up to a logarithmic factor. 
I would typically expect near optimal to be up to a constant factor. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: How is the proposed method relevant for the field of deep learning on the sphere? Can it offer a differentiable transform that can be integrated into approaches such as [Cohen et al.](https://arxiv.org/abs/1801.10130)? What is meant by the comment: "it is generally intractable to compute an orthogonal basis for the space of spherical harmonics, which renders the generalized Fourier series expansion in Lemma 2 primarily existential"? This comment is repeated twice in the manuscript. Could this please be elaborated? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 2 fair Contribution: 2 fair Limitations: No special negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback on our paper. We greatly appreciate your insights and suggestions. > `It does not seem that NeurIPS is the appropriate venue for this work. While the field of deep learning on the sphere is an active area of research (e.g. Cohen et al.), this contribution would appear to be somewhat orthogonal to that body of literature.` We appreciate your feedback on our work's relevance to deep learning on the sphere, inspired by research like Cohen et al. Although our work touches on some deep learning aspects, it's not exclusively focused on them. NeurIPS covers a broad range of research areas, with deep learning being just one facet among twelve distinct research domains highlighted in the NeurIPS 2023 Call for Papers. Our work aligns more closely with "Probabilistic methods," a specific area specified in the Call for Papers. We believe our research introduces a fresh perspective and valuable insights to the NeurIPS community. While we recognize potential concerns about its interdisciplinary nature, we firmly believe that our contributions can foster novel ideas and collaborations across research threads, ultimately enhancing the scientific landscape. > `No reference is made to the central works of Driscoll & Healy and McEwen & Wiaux and the follow-up articles of their groups.` Rest assured, we fully recognize the significance of these foundational works and in our final submission version, we will provide a comprehensive review of the key works by Driscoll & Healy and McEwen & Wiaux. Additionally, we will include references to the relevant follow-up articles to ensure we properly acknowledge their contributions. > `The use of the term "near optimal" in the title and throughout the article is somewhat overstated.` We want to highlight the common practice of describing "near optimal" performance up to a logarithmic factor, which is widely recognized in algorithmic literature. 
To provide some context, many prominent works in the field have adopted this perspective. For instance, the paper by Lin, Tianyi, Chi Jin, and Michael I. Jordan titled "Near-optimal algorithms for minimax optimization" (COLT 2020) employs the $\tilde{O}$ notation to discuss near-optimal solutions. Similarly, the study by Emmanuel Candès and Terence Tao titled "The power of convex relaxation: Near-optimal matrix completion" (IEEE Transactions on Information Theory 2010) employs this notation to describe the performance of their proposed approach. We believe that the use of "near optimal" to indicate optimality up to a logarithmic factor provides a more nuanced and accurate representation of our method's performance and aligns with the broader trend in algorithmic analysis. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Many thanks for the response to my queries. I have reviewed the comments by all reviewers and all authors' responses. Thank you for the justification of the term "near optimal", which given its common use seems in line with the related literature. The article is technically very solid and clear (although in my second review I noticed the dagger operator in Algorithm 1 is not defined), however my initial assessment of the manuscript remains mostly unchanged. I remain unconvinced that NeurIPS is the appropriate venue for this work. While NeurIPS does indeed include a focus on probabilistic methods, I still do not view the article as a good fit under that topic. After reviewing the authors' responses and other reviewer comments and responses I have raised my overall recommendation from 3 (Reject) to 4 (Borderline reject). I congratulate the authors on an excellent piece of work. In my humble opinion, however, I do not believe NeurIPS is the appropriate venue for this work.
Summary: The paper studies the approximation of a function $f \in L_2(\mathbb{S}^{d-1})$ from its evaluations, via a degree-q spherical harmonic expansion. To this end, an efficient kernel regression based algorithm is proposed which recovers such a degree-q expansion of f from the evaluations of f on $\mathbb{S}^{d-1}$. In particular, the number of evaluations needed scales nearly linearly in the dimension of the space of spherical harmonics of degree at most $q$. The main idea is to exploit connections between spherical harmonics and zonal harmonics, and the fact that the zonal harmonics are the reproducing kernels of the space of degree $l$ spherical harmonics. Some numerical simulations are provided on synthetic examples to demonstrate the performance of the algorithm. Strengths: 1. The paper is written well overall with a well-defined problem statement, and a clear description of related work. 2. The results hold for any dimension $d$ which does not seem to have been handled previously. As noted in the related work, previous results typically applied to small, fixed values of $d$. In that respect, I think the results are quite strong. Weaknesses: 1. It’s a bit unclear to me whether the results hold only in the noiseless case, or is the method actually robust to noise. For instance, if the function evaluations are corrupted with iid centred Gaussian noise, what can be said about the recovery error? The abstract mentions that the algorithm provides robust recovery, but I am not sure if this is what is proven. 2. While I understand the main contributions are theoretical, are there any real examples on which the method can be evaluated? At the moment, experiments are only conducted on synthetic examples. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In the optimization problem after line 42, isn’t the solution non-unique? If $g^*$ is a solution, then $g^* + g’$ for any g’ in the null space of the operator is also a solution? 
Also, isn’t f a solution of this optimization problem? 2. Just to clarify my understanding, my first thought was to simply do linear regression in the basis of the lower degree spherical harmonics. But I suppose this is intractable since computing such a basis is computationally hard for even moderate values of q as noted in the paper. Is this correct? And so, this is why we can only hope for an approximate solution of Problem 1? 3. In Definition 3, couldn’t we define the leverage function for any operator mapping $ L_2(\mathbb{S}^{d-1})$ to itself? Or is it specifically defined for $\mathcal{K}_d^{(q)}$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I do not see the limitations discussed anywhere but this probably does not apply to this paper since it is essentially theoretical, and the sample complexity bounds are shown to be nearly optimal. Perhaps the running time which is super-quadratic in the number of samples could be improved upon? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
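A finite-dimensional analogue of Q1 may be useful: for a rank-deficient least-squares problem, the minimizers are non-unique (they differ by null-space components), yet the fitted values $Ax$ are unique. A small sketch (illustrative; the matrix and variable names below are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
# Rank-deficient least squares: A is 20 x 5 with rank 3,
# so it has a nontrivial null space.
A = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 5))
b = rng.normal(size=20)

# Minimum-norm solution, plus a shift by a null-space vector.
x_star = np.linalg.pinv(A) @ b
_, _, Vt = np.linalg.svd(A)
null_vec = Vt[-1]  # right singular vector for a (numerically) zero singular value
x_other = x_star + null_vec

# Both are minimizers: the fitted values and residuals coincide.
assert np.allclose(A @ x_star, A @ x_other)
assert np.isclose(np.linalg.norm(A @ x_star - b), np.linalg.norm(A @ x_other - b))
```

This mirrors the authors' later point that any optimal $g$ suffices, since only the projection of the solution is used.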
Rebuttal 1: Rebuttal: Thank you for your valuable feedback on our paper. Below we answer your concerns and questions. > `It’s a bit unclear to me whether the results hold only in the noiseless case, or is the method actually robust to noise. For instance, if the function evaluations are corrupted with iid centred Gaussian noise, what can be said about the recovery error? The abstract mentions that the algorithm provides robust recovery, but I am not sure if this is what is proven.` In our paper, we consider a specific noise model to address the robustness of our method. We assume that the unknown function $f$, whose values we can query, is not necessarily a low-degree spherical harmonic and may contain high-degree components. These higher-degree components in the spherical harmonic expansion are treated as noise in our model. To clarify, the noise we consider in our study is different from the typical iid noise that can corrupt measurements of the function. Instead, we assume that the noise values are drawn from an underlying and unknown $L_2(S^{d-1})$ function. Under this noise model, our algorithm can successfully recover the function up to a $(1+\epsilon)$ factor of the noise's $2$-norm. Our results indeed hold under the above-specified noise model, which treats the higher degree components of $f$ as noise. Within this noise framework, we have proven the robustness of our algorithm. > `While I understand the main contributions are theoretical, are there any real examples on which the method can be evaluated?` Our method can be applied to any learning problem on the sphere. Spherical functions play a crucial role in problems with a rotational-invariance property, where real-world machine learning applications include 3D object detection [1, 4, 5, 6], lighting estimation from images [2], and predicting atomic energies and forces [3]. [1] Melnyk, Pavlo, et al. "Steerable 3D spherical neurons." ICML 2022 [2] Gardner, James, et al. 
"Rotation-Equivariant Conditional Spherical Neural Fields for Learning a Natural Illumination Prior." NeurIPS 2022 [3] Zitnick, Larry, et al. "Spherical channels for modeling atomic interactions." NeurIPS 2022 [4] Gardner, James, et al. "Rotation-Equivariant Conditional Spherical Neural Fields for Learning a Natural Illumination Prior." NeurIPS 2022 [5] Gerken, Jan, et al. "Equivariance versus augmentation for spherical images." ICML 2022 [6] Shakerinava, Mehran, and Siamak Ravanbakhsh. "Equivariant networks for pixelized spheres." ICML 2021 > `Just to clarify my understanding, my first thought was to simply do linear regression in the basis of the lower degree spherical harmonics. But I suppose this is intractable since computing such a basis is computationally hard for even moderate values of q as noted in the paper. Is this correct? And so, this is why we can only hope for an approximate solution of Problem 1?` You’re right about solving a linear regression in the basis of the low-degree spherical harmonics. While this approach could solve the problem, it still requires discretizing the linear regression to deal with basis vectors which are continuous functions. One of our main objectives is to minimize the number of samples needed to accurately recover the unknown function. However, solving the exact linear regression demands inner products between the basis vectors and the input function, leading to an infinite number of required samples from the input function. Furthermore, even though it's possible to construct a basis for spherical harmonics, as shown in Theorem 5.1 (https://arxiv.org/pdf/1304.2585.pdf), these basis functions are exceptionally complex. As a result, numerical calculations, even in moderate dimensions and with a moderate degree q, become quite challenging and intractable. On the other hand, our approach is based on kernel regression, which ensures numerical stability even in high dimensions and for large degree q. 
This makes our method more practical and easier to implement compared to using spherical harmonics. > `Can the leverage function be defined for any operator?` Yes, you are right: the leverage function can be defined for any compact operator (please see Definition 3 in https://arxiv.org/pdf/1812.08723.pdf). We will modify our Definition 3 to make it clear that the leverage function is defined for any compact operator. --- Rebuttal Comment 1.1: Title: Replying to authors Comment: Thank you for your response to my queries. I still have the following questions, it would be great if the authors could clarify them. 1. It would be helpful if the authors could answer Q1 in my review regarding the non-uniqueness of the solution of the optimization problem. 2. I am hesitant to consider the noise model in the paper as truly ``noise'' since the latter term is typically reserved for external stochastic/adversarial noise in the samples. In that respect, I am not sure if it is correct to claim that the method is really robust to external noise. Moreover, it would be interesting to at least discuss the technical difficulties encountered from a theoretical perspective for establishing this result. The experiments section could also demonstrate the effect of iid Gaussian noise on the performance of the method. 3. Thanks for your response to my question Q2. While I understand the drawback of linear regression from the computational perspective, it's not clear to me why this would need infinite samples to work. I am drawing an analogy with what one does in the usual nonparametric regression setting for learning functions in $L_2[0,1]^d$ -- we can simply take a trigonometric basis (the first $m$ terms) and do finite dimensional linear regression. 4. Continuing point 3 above, I was wondering if the authors could comment on the precise running time of computing the first few (lower degree) basis functions. 
This would be relevant in terms of understanding the precise time complexity of implementing linear regression, and how it compares with the proposed method. --- Reply to Comment 1.1.1: Title: Response to Reviewer's questions Comment: We truly appreciate your time and effort in evaluating our work. We have carefully considered your concerns and questions and would like to offer our explanations below: `1. Question regarding the non-uniqueness of the solution of the optimization problem` Yes, you are right. The optimization problem does not have a unique solution and the uniqueness of the solution is not required. In fact, as you mentioned, if $g$ is a solution then $g+g’$ for any $g’$ in the null space of $\mathcal{K_d}^{(q)}$ is also an optimal solution. Our aim is to find any (approximately) optimal solution $g$ and then we will project it onto the space of spherical harmonics of degree at most $q$ by considering $\mathcal{K_d}^{(q)} g$, which will be (approximately) the spherical harmonic expansion of $f$. Also, $f$ itself is an optimal solution to the optimization problem in line 42. `2. I am hesitant to consider the noise model in the paper as truly ``noise'' since the latter term is typically reserved for external stochastic/adversarial noise in the samples` Our noise model encompasses any perturbation introduced into the “input signal” prior to the recovery process, which includes adversarial perturbations as well. We believe that our noise model aligns with common assumptions made in adversarial noise models. In these models, it is typically assumed that an adversary introduces noise to the input signal "prior to" the recovery process, and in our setting the recovery process does include collection of samples from the input signal. By randomizing sampling positions, the recovery algorithm prevents capturing excessive noise energy. 
Yet, if an adversary could see the random pattern used by the recovery algorithm, then it could concentrate noise energy in the sampled points to disrupt recovery. That being said, we've come to realize that we are able to upper-bound the norm of perturbations caused by iid Gaussian noise added to our measurements. Suppose that in Theorem 5 there are no higher degree spherical harmonics present in the expansion of the input function $f$, resulting in $f = f^{(q)}$. If we denote the noise vector as ${\bf e} \in R^s$ and the kernel matrix in Algorithm 1 by ${\bf K} \in R^{s \times s}$, we can demonstrate that the perturbation's norm in the output $y$ of our algorithm (as defined in Theorem 5) caused by this noise is as follows: $ || y - f^{(q)} ||_{S^{d-1}}^2 = (1/s) \cdot {\bf e}^T {\bf K}^+ {\bf K} {\bf K}^+ {\bf e} $ Now, note that $ {\bf e}^T {\bf K}^+ {\bf K} {\bf K}^+ {\bf e}$ is a nonnegative random variable with expected value $E[ {\bf e}^T {\bf K}^+ {\bf K} {\bf K}^+ {\bf e}] = {\tt tr} ({\bf K}^+ {\bf K} {\bf K}^+)$, thus by Markov’s inequality with 0.99 probability this random variable will be bounded by $O( {\tt tr} ({\bf K}^+ {\bf K} {\bf K}^+) )$. Thus, the total perturbation to the output $y$ is bounded by $ || y - f^{(q)} ||_{S^{d-1}}^2 \le O(1/s) \cdot {\tt tr} ({\bf K}^+ {\bf K} {\bf K}^+) $. Additionally, by considering the SVD of ${\bf K}$ and ${\bf K}^+$ one can see that ${\tt tr} ({\bf K}^+ {\bf K} {\bf K}^+) = 1/\lambda_1 + 1/\lambda_2 + \ldots + 1/\lambda_r$, where the $\lambda_i$’s are the nonzero singular values of the kernel matrix ${\bf K}$. Now if we let ${\bf P}$ be the quasi-matrix defined in Theorem 4 then we have that the singular values of the kernel matrix $ {\bf K} = {\bf P}^* {\bf P} $ are equal to those of ${\bf P} {\bf P}^*$. On the other hand, using matrix Chernoff inequalities we can show that all singular values of the operator ${\bf P} {\bf P}^*$ approximate the singular values of the projection operator $ \mathcal{K_d}^{(q)} $ up to a constant factor. 
So we have $ {\tt tr} ({\bf K}^+ {\bf K} {\bf K}^+) = O( {\tt rank}( \mathcal{K_d}^{(q)} ) ) = O(\beta_{q, d})$. Finally, because $s \ge \Omega( \beta / \epsilon )$ this implies that: $|| y-f^{(q)} ||_{S^{d-1}}^2 \le \epsilon $. We will add a formal and precise version of the above proof sketch to the final version of our paper. We will answer questions 3 and 4 in a separate comment that will follow. --- Reply to Comment 1.1.2: Title: Continuation of response to reviewer's questions Comment: `3. While I understand the drawback of linear regression from the computational perspective, it's not clear to me why this would need infinite samples to work. I am drawing an analogy with what one does in the usual nonparametric regression setting for learning functions in [0,1]^d we can simply take a trigonometric basis (the first m terms) and do finite dimensional linear regression.` We might be misunderstanding the reviewer’s question. Please inform us if the following explanation is addressing the right question. As we interpret it, the reviewer is proposing to construct a quasi-matrix $ Y$ which has $\beta_{q,d}$ columns made up of all basis functions for the space of spherical harmonics of degree $\le q$. Each column of Y is a spherical harmonic function and its columns together span the space of spherical harmonics. Now, our understanding is that you are suggesting to solve the subsequent linear regression problem: $\min_{x \in R^{\beta_{q,d}}} || Y x - f ||_{S^{d-1}}^2$ where $f$ is the input function. This is analogous to the least square problem we considered in line 42. Solving this least squares problem “exactly” using the normal equation requires computing $Y^* f$ which is nothing but the inner product of all our basis functions with the input function $f$. Essentially, we need to project the input function $f$ onto our basis functions. 
However, these basis functions are continuous, making the computation of these inner products reliant on knowing the value of $f$ across its entire domain. This essentially necessitates an infinite number of samples from the function $f$. It appears that addressing this regression problem would demand a discretization approach akin to the techniques we employed in our paper. `4. Continuing point 3 above, I was wondering if the authors could comment on the precise running time of computing the first few (lower degree) basis functions. This would be relevant in terms of understanding the precise time complexity of implementing linear regression, and how it compares with the proposed method.` Looking at Theorem 5.1 (https://arxiv.org/pdf/1304.2585.pdf), calculating the value of one basis function at a single point on the sphere requires calculating $d$ trigonometric functions and raising them to powers up to $d^2$. One additionally needs to calculate values of $d$ Gegenbauer polynomials of various degrees at $d$ different points. The overall time complexity of this is $O(q d^2)$. There are $\beta_{q,d}$ basis functions in total, so calculating all basis function values at a single point takes $O( q d^2 \cdot \beta_{q,d} )$ time. To solve the regression problem one needs to discretize using $s$ samples, and so the total time just to compute the discretized basis would be $O(q d^2 \cdot \beta_{q,d} \cdot s)$.
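The trace identity used in the noise-bound sketch above, ${\tt tr}({\bf K}^+ {\bf K} {\bf K}^+) = \sum_i 1/\lambda_i$ over the nonzero eigenvalues of ${\bf K}$, is easy to verify numerically on a synthetic low-rank PSD kernel matrix (an illustration, not the paper's kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
s, r = 30, 10

# A synthetic PSD kernel matrix K of rank r < s.
P = rng.normal(size=(s, r))
K = P @ P.T

# tr(K^+ K K^+) = tr(K^+) = sum of reciprocals of K's nonzero eigenvalues.
# rcond is set well above machine noise but below the true spectrum.
Kp = np.linalg.pinv(K, rcond=1e-10)
lhs = np.trace(Kp @ K @ Kp)
lam = np.linalg.eigvalsh(K)[-r:]  # the r nonzero eigenvalues (ascending order)
assert np.isclose(lhs, np.sum(1.0 / lam))
```

The intermediate simplification ${\bf K}^+ {\bf K} {\bf K}^+ = {\bf K}^+$ is a defining property of the Moore-Penrose pseudoinverse, which is why the sum of reciprocal nonzero eigenvalues appears.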
Summary: The paper develops a kernel regression based algorithm to recover the degree-$q$ expansion of $f \in L_2(\mathbb{S}^{d-1})$ by only evaluating $f$ on uniformly sampled points on $\mathbb{S}^{d-1}$. Strengths: The ideas used in this paper are deeply technical and the arguments are involved. It first re-formulates the problem as least squares regression and then uses sampling based on leverage scores to decide which samples to pick. The rest of the paper proves that the estimated bound is satisfied. Weaknesses: motivate the problem - how it could be applied in real life ** after rebuttal -- thank you for providing all these papers regarding the contributions of this workstream on the application side. I am happy to increase my score regarding the soundness of the paper. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > `Motivate the problem - how it could be applied in real life?` Many thanks for your valuable feedback on our paper. We understand the importance of motivating the problem and highlighting its real-life applications. In response to your comment, we would like to elaborate on the practical significance of the problem we address and its potential applications. Our main focus is recovering (unknown) functions defined on the sphere, a critical task in scenarios where rotational invariance is a fundamental property. In real-life applications, such as machine learning, this property is a foundational requirement for modeling 3D point clouds. Notable examples that appeared in top-tier machine learning conferences include molecular/atom systems, where understanding the underlying functions within a spherical context can significantly enhance predictive modeling and simulation accuracy [1,2,3,4]. Other examples are in the field of computer vision, specifically in the recognition, classification, and reconstruction of 3D objects [5,6,7,8]. [1] Eickenberg, Michael, et al. "Solid harmonic wavelet scattering: Predicting quantum molecular energy from invariant descriptors of 3D electronic densities." NeurIPS 2017 [2] Frank, Thorben, et al. "So3krates: Equivariant attention for interactions on arbitrary length-scales in molecular systems." NeurIPS 2022 [3] Zitnick, Larry, et al. "Spherical channels for modeling atomic interactions." NeurIPS 2022 [4] Liu, Yi, et al. "Spherical message passing for 3d molecular graphs." ICLR 2022 [5] Gardner, James, et al. "Rotation-Equivariant Conditional Spherical Neural Fields for Learning a Natural Illumination Prior." NeurIPS 2022 [6] Melnyk, Pavlo, et al. "Steerable 3D spherical neurons." ICML 2022 [7] Gerken, Jan, et al. "Equivariance versus augmentation for spherical images." ICML 2022 [8] Shakerinava, Mehran, and Siamak Ravanbakhsh. "Equivariant networks for pixelized spheres." ICML 2021
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: Consider the $d$-dimensional unit sphere $\mathbb{S}^{d-1}$, and any function $f:\mathbb{S}^{d-1}\rightarrow\mathbb{R}$ defined on the sphere with bounded L2 norm $\|f\|_{\mathbb{S}^{d-1}}$. This function $f$ can be evaluated at any point $\vec w \in \mathbb{S}^{d-1}$ on the sphere, but it is expensive to evaluate $f$, so we wish to do this as little as possible. The goal of this paper is to recover a near-optimal polynomial approximation $\tilde f$ to $f$, where $\tilde f$ is constrained to be a multivariate polynomial where the sum-of-degrees of each term is at most $q$. (For example, the sum-of-degrees of each term in $\tilde f(x,y,z) = x^2y^3 + z + x^5$ is at most 5.) Letting $\beta_{d,q}$ denote the number of free parameters in any $d$-variate polynomial with terms whose sum-of-degrees is at most $q$, this paper shows that $O(\beta_{d,q} \log \beta_{d,q} + \frac{\beta_{d,q}}{\varepsilon})$ evaluations of $f$ chosen uniformly at random on the sphere suffice to recover such a near-optimal $\tilde f$. This is proven in three parts: 1. The leverage function associated with this polynomial recovery problem is **exactly** the uniform distribution on the sphere 2. The randomized linear algebra toolkit shows this near-linear sample complexity suffices 3. A small kernel ridge regression problem can be solved to recover this $\tilde f$ from uniform random samples on the sphere They also prove an $\Omega(\beta_{d,q})$ sample complexity lower bound. Lastly, they include some synthetic experiments. Strengths: The paper is a nice application of well understood tools from randomized linear algebra (RandNLA) to spherical harmonics, which has not been explored by the RandNLA literature afaik. The real core strength of this paper is its novelty in connecting these two literatures in a simple and elegant way. Simplicity and elegance really are words that mark this paper. Almost everything is extremely clear and well written. There are basically no typos even! 
The flow of logic in the paper is very clear, the problem they solve seems important in the spherical harmonics literature (though I'm no expert in that domain), and the proofs are even pretty clean. To that last point, the theorems in this paper are proven either by very clean techniques that are now standard in the RandNLA literature, or by using well established classical facts about polynomials and spherical harmonics. I verified a decent amount of the math, and it all struck me as rather nice and clean. Since I'm not an expert in spherical harmonics, I can't perfectly speak to the significance of the result. Taking the authors' words at face value, it seems that prior work in spherical harmonics did not use such simple algorithms (Kernel Ridge Regression) and did not achieve optimal sample complexities. The paper may not carry a huge amount of new technical ideas to get their sample complexity and simple algorithm, but this result would only exist if someone who knew enough about both spherical harmonics and RandNLA decided to sit down and figure out if this all works together. For that view of novelty, in addition to the simplicity and elegance of the paper, I recommend accepting this paper. The experiments are perfectly fine for a theoretical paper. Nothing particularly strong about them, but nothing I'm left looking for. Weaknesses: The novelty, simplicity, and elegance is strong. But, the proof techniques are not especially novel. The proof techniques are standard relationships between operators and algorithms as explored by the RandNLA community for a good few years now. The only proof that doesn't seem directly tied to something already proven in the RandNLA literature is the proof that the leverage function for the regression problem is uniform on the sphere. Even then, this claim is pretty intuitive, since the problem statement is rotationally invariant. There's no reason for a sampling algorithm to care more about one point of the sphere than another. 
(This is in contrast to learning on an interval, where an algorithm may prefer to sample near the edges of the interval.) So, even this most novel proof is not terribly surprising, and the proof is short and simple. That said, I hesitate to really call this a "weakness". This isn't clearly a downside of the paper. While it's certainly nice for a paper to overcome new technical issues and proof difficulties, it's also nice for a paper to show that spherical harmonics smoothly fit into the RandNLA framework. All this is to say that the technical weight of the paper truly is in understanding two literatures and connecting them. The effort to connect the literatures seems to be low, but that's given that someone actually understands both literatures well. I'd say this overall comes out as a slight strength for the paper, but it's certainly a tradeoff in the reviewing process. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I list minor questions, typos, and recommended edits here. These are all soft recommendations, adapt whatever you want to. 1. [Line 53] Consider mentioning a failure probability here. 1. [Line 87] Replace "this paper" with "[GMMM21]" since the language is a bit too ambiguous at a glance. 1. [Line 173] Is this not a bijection? At least for traditional least squares regression, this seems to be a bijection? RandNLA people tend to use the former form, so it's nice to see if the RandNLA form is exactly equivalent to the form of Problem 2. 1. [Line 213] Consider removing the $\cdot$ between P and v? This notation feels a bit clunky and odd to me? 1. [Line 220] Cite [AKM+] or [SA] or [CP from COLT19] here, or some other paper that draws a connection between the semi-infinite regression and kernel ridge regression. This section reads like the idea of the kernel trick here is a new contribution. 1. [Line 226] Add "Then," before "Algorithm 1" 1. 
[Line 236] Discuss why this isn't trivial that we need $\beta_{d,q}$ samples to learn $\beta_{d,q}$ parameters in a polynomial. Certainly, when we're looking at interpolating a polynomial on the real line, we need at least $q+1$ samples to learn a degree $q$ polynomial exactly. This feels similar to the argument made in this lower bound, but I don't understand what the rest of the formalization is needed for. 1. [Line 250] Replace "accurately" with "exactly" (I think that's more technically accurate?) 1. [Lines 263-274] I got really lost here. I think that If I'd read the lower bound of AKM, then I might be able to stitch together the parts of this lower bound, but the reason any of this construction is made is unclear to me. The notation is also pretty hard to track. I don't really get what the construction is trying to get at. I don't really see why the top rows of Q need to span the queries made so far. This all feels odd to me. 1. You don't need to do this. But, if you want to, I think you could make a lower bound of $\Omega(\frac1\varepsilon)$ by following the technique in Section 5.4 of [here](https://arxiv.org/pdf/2211.06790.pdf). It'd be interesting to see if this technique holds on the sphere, because at a glance it seems like it should. It'd give an overall lower bound of only $\Omega(\beta_{d,q} + \frac1{\varepsilon})$ though, which isn't the most compelling rate. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
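The recovery pipeline this review summarizes (sample uniformly on the sphere, then solve a least squares problem over a low-degree polynomial basis) can be sketched in a few lines. The following toy is my own illustration in $d = 2$ with an explicit monomial basis, not the paper's algorithm or its kernel ridge regression solver:

```python
import numpy as np

def uniform_sphere_samples(n, d, rng):
    """Draw n points uniformly at random from the unit sphere S^{d-1}."""
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def monomial_features(X, q):
    """All monomials x^a * y^b with a + b <= q (d = 2 kept for brevity)."""
    cols = [X[:, 0] ** a * X[:, 1] ** b
            for a in range(q + 1) for b in range(q + 1 - a)]
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
q = 3
f = lambda X: X[:, 0] ** 2 * X[:, 1] + 0.5 * X[:, 1]   # hidden degree-3 polynomial

X_train = uniform_sphere_samples(200, 2, rng)           # uniform samples suffice here
A = monomial_features(X_train, q)
coef, *_ = np.linalg.lstsq(A, f(X_train), rcond=None)   # min-norm least squares fit

X_test = uniform_sphere_samples(50, 2, rng)
err = np.max(np.abs(monomial_features(X_test, q) @ coef - f(X_test)))
```

Note the basis is rank-deficient on the sphere (since $x^2 + y^2 = 1$), but any null-space direction vanishes identically on the circle, so the min-norm solution still reproduces $f$ on test points.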
Rebuttal 1: Rebuttal: Many thanks for your valuable feedback on our paper. Below we answer your questions. >`Discuss why this isn't trivial that we need \beta_{d,q} samples to learn \beta_{d,q} parameters in a polynomial.` You're absolutely correct in pointing out that there are $\beta_{d,q}$ degrees of freedom, thus any deterministic algorithm that reconstructs such polynomials needs at least $\beta_{d,q}$ samples. Our lower bound proof shows that even a "randomized" algorithm that succeeds with only constant probability needs to take $\beta_{d,q}$ samples. Since our upper bound is established using a randomized algorithm, it was crucial to complement it with a randomized lower bound to create a well-rounded and balanced analysis of the problem. >`I think that If I'd read the lower bound of [AKM+], then I might be able to stitch together the parts of this lower bound, but the reason any of this construction is made is unclear to me. The notation is also pretty hard to track. I don't really get what the construction is trying to get at. I don't really see why the top rows of Q need to span the queries made so far. This all feels odd to me.` We apologize for any confusion caused by the clarity of our lower-bound section. The limited space may have contributed to the lack of detailed explanations. To provide further clarification, our hard instance is based on a random vector ${\bf v}$ following an isotropic Gaussian distribution in dimension $\beta_{d,q}$. In line 273, our aim is to demonstrate that if an algorithm reconstructs a function $\tilde{f}^{(q)}$ using only $r < \beta_{d,q}$ samples, then even after conditioning on the samples observed by the algorithm, vector ${\bf v}$ will still possess at least one degree of freedom and will not be entirely deterministic. This intuitively holds true because ${\bf v}$ consists of $\beta_{d,q}$ independent random Gaussian entries. 
Thus, when conditioning on $r < \beta_{d,q}$ samples taken by an algorithm, the conditional value of ${\bf v}$ will remain random, allowing us to extract at least one Gaussian random variable from it using an orthonormal transformation denoted as $Q^r$. We will provide additional details and improve the notation for better readability. > `Regarding minor typos:` Thanks for pointing them out. We will address them all and implement your recommended edits. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Hi, thanks for the response! If it's trivial that at least $\beta_{d,q}$ samples are deterministically needed, why doesn't Yao's Minimax Principle imply that any randomized algorithm also needs at least $\beta_{d,q}$ samples? (Admittedly, I'm sometimes a bit unclear about when Yao's can apply, but this seems like such a setting, and if it applies then Yao's would require a very short proof). --- Reply to Comment 1.1.1: Title: Response to reviewer's question Comment: Many thanks for your prompt response. By Yao’s minimax principle, we can assume that the recovery algorithm is deterministic and requires constant probability recovery over a random ensemble of input functions, instead of considering a randomized algorithm and requiring constant probability recovery for “any” fixed given function in the ensemble. In light of this, our objective is to demonstrate that any deterministic algorithm aiming to recover a constant fraction of functions in an input function ensemble needs at least $\beta_{d,q}$ samples. Now let us state the degrees of freedom argument precisely: `degrees of freedom argument:` In order to specify a spherical harmonic of degree <= q unambiguously, one needs to know the values of $\beta_{d,q}$ free parameters. This implies that any deterministic algorithm seeking to recover “all” inputs with probability 1 must utilize at least $\beta_{d,q}$ samples. 
Now recall that our aim is to prove that a deterministic recovery algorithm with a constant success probability over a random ensemble of input functions requires a minimum of $\beta_{d,q}$ samples. It's important to recognize that the degrees of freedom argument does not preclude the potential existence of an algorithm capable of recovering only a “constant fraction” of inputs within the entire ensemble using fewer than $\beta_{d,q}$ samples. It's worth noting that the requirement for a deterministic algorithm to achieve recovery for a constant fraction of input functions from the ensemble is a strictly weaker criterion compared to the more stringent demand of recovering “all” possible inputs. Therefore, one needs to analyze a random hard input distribution in order to prove the lower bound holds for even an algorithm with a constant success probability. Hopefully, our explanation clarifies our lower-bound results. Please let us know if we can provide further clarification.
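The deterministic degrees-of-freedom argument spelled out in this exchange can be checked numerically in the simplest univariate case: $q + 1$ evaluations pin down a degree-$q$ polynomial exactly, while $q$ evaluations leave a nonzero "ghost" polynomial that vanishes on every sample. A minimal sketch, my own and not taken from the paper:

```python
import numpy as np

q = 4
rng = np.random.default_rng(1)
coef = rng.standard_normal(q + 1)        # q + 1 free parameters of the hidden polynomial
xs = np.linspace(-1.0, 1.0, q + 1)       # q + 1 distinct evaluation points
V = np.vander(xs, q + 1)                 # invertible Vandermonde matrix
recovered = np.linalg.solve(V, V @ coef) # exact deterministic recovery from q + 1 values

# With only q evaluations the system is underdetermined: there is a nonzero
# degree-q polynomial vanishing at all q sample points, so the samples cannot
# distinguish it from the zero polynomial.
ghost = np.poly(xs[:q])                  # monic degree-q polynomial with roots xs[:q]
residuals = np.polyval(ghost, xs[:q])    # zero at every observed point
```

The rebuttal's point is that this argument only rules out algorithms that recover all inputs; handling a constant success probability over a random ensemble requires the Gaussian hard-instance construction instead.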
null
null
null
null
null
null
Game Solving with Online Fine-Tuning
Accept (poster)
Summary: The paper aims to address the game solving problem: providing a game theoretic value to all states in a game. To address the task the paper extends AlphaGo to the game solving case using an approach that distributes game playing scenarios to solvers on parts of the full game tree. Building off prior distributed game solvers, the new technique uses a self-play learning algorithm to improve estimates of the difficulty of solving new game states during the game solving process (as opposed to using a pretrained and frozen estimation method). The work distribution manager incorporates several new heuristics to select the most promising nodes to solve next and avoid wasted computation. These improvements reduce computation time in 7x7 Killall-Go to roughly 1/4 of a non-learning baseline algorithm. Ablations show the heuristics chosen individually and jointly improve the overall solver performance. Strengths: ## originality Modest. The core architecture of distributed game solving and the core AlphaGo algorithm are both established in the literature. The originality of the paper stems from devising a way to do online learning during AlphaGo distributed solving in such a way that state value estimates remain useful throughout the search estimation process. This is not trivial, but is necessarily a highly targeted application. ## quality Good. The results show substantial improvements over baselines and the ablations are thorough. ## clarity Good. The substantial background that is needed to understand both distributed game solving and AlphaGo is clearly introduced. Reasoning for each of the major heuristics is clear and the results are unpacked well for an audience not familiar with the target domain. ## significance Modest. Game solving is a somewhat niche topic, so the audience will be limited by that reach. 
The work itself is building on two prior methods - the improvements are clear, but are sufficiently complex that it is not immediately obvious they will generalize to other games or applications. That said, reducing to 1/4 the computation needed on a fair benchmark is a substantial improvement! Weaknesses: One weakness (hard to address) is the results are only in terms of solving a single game. The game playing literature is awash with game benchmarks and has created a norm of multi-game evaluation to demonstrate generality. This may not apply to game solving, but the lack of multi-game evaluation makes it hard to tell how general the algorithms are. Are there any easy alternative scenarios (like the cited Hex or Rubik's Cube) that would show the algorithm can be generalized to other scenarios? These would be an opportunity to compare against other established baselines, perhaps on smaller or simpler problems. Below are detailed suggestions around scalability. The paper would benefit from addressing those concerns as it helps show the potential of the technique over the long run or for larger problems (even if the paper does not directly measure those cases). The lifelong learning claim would benefit from direct evidence that the manager remembers what the trainer forgets. See the questions below for more detail. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - [Q1] How do the core algorithms scale? - How does performance change with more or fewer workers? - What are the demands in compute and memory? How do they scale with the size of the problem and/or workers? - [Q2] Does critical position prioritization break down when MCTS thrashes between expanding new shallow node and deeper nodes in the tree? - Often tree search can suffer from a thrash between vastly different branches on the shallow parts of the tree. At a glance this would seem to be a more substantial problem for game solving as the full tree must be solved. 
- Is this not a problem in practice? Or is there a way to quantify this effect and measure its impact? - [Q3] Table 1 - What is going on with SB? - Online-SP and online-CP both do very well compared to the baseline. But online-SP+CP does substantially worse than either alone. - I don't have sufficient knowledge of Go (let alone Killall-Go) to tell why this might be expected or how to evaluate the case. It is interesting as the one scenario that defies the general patterns. - [Q4] Figure 4 - Include all 4 variants in the bar plot, not only baseline and online-CP. Also, consider changing the color scheme to be colorblind friendly. - [Q5] What is the direct evidence that the manager remembers what the OLT trainer forgets? - Is there any way to probe the checkpoints to demonstrate this phenomenon? - The claim is intuitive, but the results would benefit from a direct test / evidence for this claim. Below are some other less important questions / suggestions: - For PCN state estimates: - How effective is a naive heuristic that computes a similarity between new positions and a database of solved positions? (for solved positions using OLT) - This would be an alternative to the OLT / PCN approach. I'm not deeply familiar with the game solving literature and so am not sure if this is already a common / established practice. - Could the fixed PCN be used to generate compressed/embedded state vectors to improve this matching? - Could that apply to the learning process to prioritize selection of training samples? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper does not address limitations nor negative societal impact. 
Limitations merit some discussion, specifically around generalization / applicability to other game solving scenarios, classes of games solved (ex: turn-based, 2 player, ...), or computational requirements. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful reviews and constructive comments. We address your concerns and questions below. All numbered citations in our rebuttal refer to the sources in the paper’s bibliography. > results are only in terms of solving a single game Please refer to “Generalizability to other games” in the Rebuttal (at the beginning) for all reviewers. > [Q1] How does performance change with more or fewer workers? To evaluate the scalability of our distributed game solver, we ran an additional experiment by running the baseline solver with different numbers of workers on opening *KA*. Specifically, we use 384, 192, 96, and 48 workers, using 8, 4, 2, and 1 GPU, respectively. Every 48 workers share one GPU. The results are shown in Table 1 in the additional one-page PDF (in rebuttal for all reviewers). Overall, solving is around 1.8 times faster each time the number of workers is doubled (up to 384 workers due to our machine limitation). The results show that it can potentially speed up further with more workers, such as 768 workers. We will add this experiment in the appendix. > [Q1] What are the demands in compute and memory? We provided details of hardware specifications in Appendix B.1. For the memory consumption: (a) the manager requires 20G RAM for expanding every 1M nodes, (b) every 48 workers together in one process require 30G RAM at most. Note that workers use the same amount of memory regardless of problem size. They are limited to 100,000 nodes per job; the job result is “unsolved” if a solution is not obtained within that limit. Specifically, for the baseline solver with 384 workers: * solving *KA* used 2,103 seconds and 243G memory (3G for the manager and 240G for the workers) * solving *KB* used 156,583 seconds and 410G memory (170G for the manager and 240G for the workers) However, if running the baseline solver with only 48 workers, solving *KA* used 12,151 seconds but only required 32G (2G for the manager and 30G for the workers). 
Overall, the settings can be varied depending on available machines. > [Q2] Does critical position prioritization break down when MCTS thrashes between expanding new shallow node and deeper nodes in the tree? The critical positions may suddenly change if the manager changes its focus to an as yet unexplored subtree. For example, in Fig. 2 (f) in the appendix, the average length of critical positions drops sharply around iterations 135 and 185. However, since we only maintain the most recent 1,000 critical positions in the critical queue, old critical positions are kept temporarily for a short time and are soon replaced by new critical positions. In addition, we ran an additional experiment using online-CP to solve opening *KA* with different critical queue sizes, from 100 to 10,000. The results are shown in Table 2 of the rebuttal one-page PDF. For larger queues, more positions are retained. Nevertheless, the solving times for the five different queue sizes are all around 2,100s, demonstrating that this does not have a noticeable impact in practice. > [Q3] Table 1, What is going on with SB? In a relatively small problem such as *SB*, online learning may not have sufficient time to fine-tune the PCN models. Besides, online-SP+CP introduces additional overhead by sending both solved and critical positions, compared to online-SP and online-CP. We suspect this overhead is the potential reason that online-SP+CP performs worse than the other online solvers in opening *SB*. > [Q4] Figure 4 We will change to use colorblind-friendly options. Due to the space limit, the bar plot will become too crowded and almost unreadable if it includes all 4 variants. We can add a new bar plot in the appendix including all variants. > [Q5] What is the direct evidence that the manager remembers what the OLT trainer forgets? 
Forgetting is not a critical issue in our context since the forgotten knowledge (solved positions) is saved (remembered) in the manager solution tree, and the worker/trainer won’t need to solve/evaluate these positions again. This is the reason why we claim that the whole distributed solver (including the manager, workers, and trainer) can be seen as a life-long learning system. > How effective is a naive heuristic that computes a similarity between new positions and a database of solved positions? This is an interesting idea, and possibly feasible if a heuristic can be trained to effectively compute similarity (defined as close for two positions sharing the same winning strategy, or far for very different strategies). Even for positions that are close but do not share the exact same strategy, parts of it can be reused via simulation [Kawano]. However, to the best of our knowledge, naive heuristics to compute the similarity between two positions in Go haven’t been very effective. Given a Go position, the results might be completely different simply by adding or removing one stone. Instead of using similarity to match positions, it is common to build a transposition table in game solvers. The trick is to do partial matching with solved positions. For all solved positions, common patterns are extracted and stored in a database. If a new position exactly matches the same patterns in the database, the same winning strategy can be repeated. [Kawano] Kawano, Yasuhito. "Using similar positions to search game trees." Games of No Chance 29 (1996): 193-202. > Could the fixed PCN be used to generate compressed/embedded state vectors to improve this matching? If we understand your questions correctly, PCN already serves the role of learning a good representation (embedding). For example, in our online-SP method, given a series of solved positions, PCN should be able to further generalize to other unsolved positions and help prioritize selection. 
If you are proposing further improvements on PCN training, we are always looking for new ways to improve and would love to read some papers in this direction of research. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough responses and additional experiments! On the questions: > [Q1] How does performance change with more or fewer workers? Those are great speedups to see! The only suggestion I would offer is to include the speedup factor vs the baseline (48 workers) as an additional column. > [Q1] What are the demands in compute and memory? Thank you for providing more details. These are good rough estimates to include in an appendix (beyond the hardware used). > [Q2] Does critical position prioritization break down when MCTS thrashes between expanding new shallow node and deeper nodes in the tree? Great! The followup with longer queues but comparable solving times addresses my question. That is a good result to know as the thrash problem seems theoretically a concern on some problems. Likely the structure of the game prevents this from being too extreme. > [Q3] Table 1, What is going on with SB? Thank you for offering the explanation. In effect it seems that generally the overheads are small from using SP or CP or SP+CP, but in this rare case that was a dominant factor. > [Q5] What is the direct evidence that the manager remembers what the OLT trainer forgets? "Forgetting is not a critical issue in our context since the forgotten knowledge (solved positions) is saved (remembered) in the manager solution tree, and the worker/trainer won’t need to solve/evaluate these positions again." Ah, of course! This is a good point to clarify in the text as some readers may miss the detail. Other points: > However, to the best of our knowledge, naive heuristics to compute the similarity between two positions in Go haven’t been very effective. I see. I am less familiar with Go solving and this explanation helps. 
My intent was to suggest a naive baseline for experiments to evaluate against, so it sounds like this would be so simple as to not work. > If you are proposing further improvements on PCN training, we are always looking for new ways to improve and would love to read some papers in this direction of research. I did not have other prior work in mind, but was reflecting on the implementation in the paper and possible extensions or alterations that might help. My thought was to use the PCN as part of a database retrieval method to augment the search algorithm with known previous solutions. Feel free to disregard the comment, it may be an ill-posed idea. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your insightful comments and suggestions. We are committed to making the necessary revisions based on your feedback. Please let us know if you have any further concerns or ideas. We are eager to engage in any additional discussions during the reviewer-author discussion period.
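The bounded critical queue described in this rebuttal (only the most recent 1,000 critical positions are retained, so stale positions from abandoned subtrees age out) amounts to a fixed-capacity FIFO buffer. A minimal sketch with illustrative names, not the authors' implementation:

```python
from collections import deque

class CriticalQueue:
    """Fixed-capacity buffer of recent critical positions; older entries from
    abandoned subtrees are evicted automatically. Names are illustrative."""

    def __init__(self, maxlen=1000):
        self._buf = deque(maxlen=maxlen)   # deque drops the oldest entry when full

    def push(self, position):
        self._buf.append(position)

    def batch(self, k):
        """Most recent k positions, e.g. for one fine-tuning step."""
        return list(self._buf)[-k:]

cq = CriticalQueue(maxlen=3)
for pos in ["a", "b", "c", "d"]:
    cq.push(pos)
recent = cq.batch(2)   # position "a" has already been evicted
```

This matches the queue-size experiment in the rebuttal: a larger `maxlen` retains more positions, but eviction of stale entries is automatic either way.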
Summary: This paper proposes a parallel setup for solving games, which includes online fine-tuning of trained proof cost networks to improve their estimations of the proof cost of nodes specifically for subsets of the state space that the prover is currently focusing on. Experiments show significant reductions in the number of nodes and computation time required to solve numerous 7x7 Killall-Go openings, with improvements also appearing to scale well with problem difficulty. Strengths: - Relatively straightforward and simple, but good idea. - Good empirical results. - Paper very well written, easy to follow. Weaknesses: - Game solving is a fairly niche topic that probably is mostly of real interest to a relatively small subset of the NeurIPS community (but I think it's fine, not a reason for rejection). - The main contribution (online fine-tuning) seems relatively limited in novelty. As far as I'm aware it's novel specifically within the game solving setting, but outside of that, online fine-tuning is not a groundbreaking idea. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: ### Suggestions - "Game solving is a much higher challenge than game playing" --> this is slightly strange phrasing, in particular the "higher". Something like "Game solving is a more difficult challenge than [...]" would seem more natural. ### Questions - "A two-player zero-sum game is considered solved if there exists a winning strategy for a player which guarantees a winning outcome." --> as far as I'm aware this is not correct. A game is solved if we know the outcome under perfect play (either from any state or from the initial state). It may be that this outcome is a draw (or a loss) for the first player though, it need not necessarily be a winning outcome with a winning strategy. 
I would appreciate a clarification to this question, especially because I also raised this comment when reviewing the same paper submitted to a previous venue, and at the time the authors said the manuscript would be revised accordingly. Which evidently did not happen. If I'm wrong here that's fine, but then please clarify. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Rebuttal 1: Rebuttal: Thank you for your thoughtful reviews and constructive comments. We address your concerns and questions below. > The main contribution (online fine-tuning) seems relatively limited in novelty. Please refer to “Novelty and contribution” in the rebuttal for all reviewers (at the top) for more information. > Something like "Game solving is a more difficult challenge than [...]" would seem more natural. We will revise the manuscript. > It may be that this outcome is a draw (or a loss) for the first player though, it need not necessarily be a winning outcome with a winning strategy. Thank you for pointing this out. Taking your previous comment re: draws into consideration, we added a footnote this time around (footnote 2 at the bottom of page 2), which, admittedly, is not as front and center as it could be. It is true that the solution may be a loss for the first player. That situation is covered in our statement “winning strategy for _a_ player”. In the two-player zero-sum case, a loss for the first player is a win for the second player. --- Rebuttal Comment 1.1: Comment: > Thank you for pointing this out. Taking your previous comment re: draws into consideration, we added a footnote this time around (footnote 2 at the bottom of page 2), which, admittedly, is not as front and center as it could be. It is true that the solution may be a loss for the first player. That situation is covered in our statement “winning strategy for a player”. In the two-player zero-sum case, a loss for the first player is a win for the second player. Ok, I see, thanks. But my problem is not just with the possibility of draws. The paper still says that a **game** is solved **if there exists a winning strategy**. My problems with this definition are: 1. It's not just about whether **there exists** a winning strategy (because for sure there exists one, assuming no draws, if you also choose to interpret "a player" as "either player"). 
It's about that **we have to know** that there exists one, **and we have to know for which player this is the case**. 2. To consider **a game** to be solved, we have to consider at least the game's initial position. If I understand correctly, you do not consider the initial position of any game, but only a bunch of "mid-game" states reached after specific openings. Maybe what you're really looking for is a definition of what a **solved position** is, rather than a **solved game**. 3. If you choose to reinterpret each of the considered openings as "initial states" of new "games", and then apply the definition of solved game to this, even then your definition of solved (plus my corrections from point (1) above) would only suffice for the notion of **ultra-weak solving**. Like how we know for the game of Hex on any board size that there exists a winning strategy for the first player (or the second if pie rule is used), but we don't actually know how to construct such a winning strategy. But this is not what you are doing in this paper. In this paper, you are finding/constructing full winning strategies for specific positions. That is a stronger notion of solving, it's **weakly solving** positions (or games, if you choose to interpret all positions as separate games), which also has a different definition. This definition should not just be about existence of winning strategies, but also requires knowing all the moves required to guarantee the win. --- Reply to Comment 1.1.1: Comment: Thanks, we agree with all three of your points. We wanted to keep the definitions short and clear, so we could move on to presenting our method, but of course correctness is most important. Would the following changes be acceptable? A two-player zero-sum game is considered *solved* if **we know of a** winning strategy for **either player** which guarantees a winning outcome$^2$, regardless of how the opponent plays; i.e. 
the player must have at least one action that leads to a win, for all actions by the opponent. footnote 2: We only consider “weak solutions” [1] in this paper, where different opening positions are treated as independent sub-games. Draws are also not considered, but can be determined via two searches, one for each player. If both outcomes are losses, then it must be a draw. [1] H Jaap van den Herik, Jos W H M Uiterwijk, and Jack Van Rijswijck. Games solved: Now and in the future. Artificial Intelligence, 134(1):277–311, 2002. Please let us know if you have any further concerns or questions. We are eager to engage in any additional discussions during the reviewer-author discussion period.
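The draw-determination procedure described in footnote 2 above (two one-sided searches, one for each player) can be written out as a short sketch. This is our illustration only; `weakly_solve` is a hypothetical solver interface, not part of the paper.

```python
# Illustrative sketch of footnote 2's draw determination: run one search per
# player; if neither has a winning strategy, the position must be a draw.
# `weakly_solve(position, player)` is a hypothetical solver returning True
# iff `player` has a constructible winning strategy from `position`.

WIN, LOSS, DRAW = "win", "loss", "draw"

def game_value(position, weakly_solve):
    """Value of `position` from the first player's perspective."""
    if weakly_solve(position, player=1):
        return WIN   # first player has a winning strategy
    if weakly_solve(position, player=2):
        # In a two-player zero-sum game, a second-player win is a
        # first-player loss.
        return LOSS
    return DRAW      # both searches came back negative: a draw
```

For example, passing a solver that finds a win only for player 2 yields `"loss"`, and a solver that finds a win for neither player yields `"draw"`.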
Summary: The paper addresses the problem of computing provably optimal solutions to game instances. A naive game solving algorithm must therefore address all the possible actions an opponent may pick before emitting a judgement about the game instance value. A leading paradigm when developing this category of algorithms is that of employing strong agents to evaluate game states and thereby inform heuristics that aid in exploring the game tree. The paper focuses on developing an AlphaZero-based heuristic for game-solving. The main issues are that a self-play learning algorithm is a very robust player on path, but its values do not distinguish among actions leading to games of different lengths (useful in order to solve subgames while exploring less) and they lack prediction quality when considering positions that it would not normally reach in game (and that instead are reached as part of the game solving proof algorithm). To address those issues, the paper proposes: - the training of Proof Cost Networks (PCN) via the AlphaZero self-play process in order to better inform the expansion heuristics about the complexity of solving a subgame. This is in contrast to naively using a value network that hints at whether the subgame is possibly lost/won, but does not include any complexity estimation. - the fine-tuning of the PCN using selected subgames which have been encountered during the proof process. - When using solved positions (i.e. solved subgames) to fine-tune, those subgames' proof complexity is set to 0 (since their value is already proven). Then a fine-tuning batch mixing both solved games and self-play games is employed to update the network. This allows pushing the heuristic towards game instances which have already been solved, thus efficiently reusing already computed subgame proofs. - When using critical positions (i.e.
recently explored subgames) to fine-tune, the fine-tuning batch is updated by adding experience sampled from applying self-play starting from those positions. This allows improving the PCN's performance on the nodes that will probably be expanded in the future. Strengths: * Clear and exhaustive experimental evaluation * Strong experiment results * Clear explanation of the heuristics introduced Weaknesses: * Purely experimental work applied to a specific subset of Go instances. This weakens the generality of the approach because the same heuristics may not apply to other game instances * Many dependencies on previous work. These weaken the overall clarity of the paper for people who are not familiar with the past literature * Misleading nomenclature. Online Learning is misleading terminology given the actual technique employed. In particular, Online Learning in games usually refers to *Online convex optimization* techniques, while the paper refers to a Continual Learning technique. My suggestion would be to switch to Online Fine-tuning to avoid the strong unintended overlap in terminology. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The following questions highlight parts of the paper which felt unclear or could benefit from extra detail. 1. It is not clear how the PCN is trained and its relation to the AlphaZero training process. My high-level intuition from reading [14] is that the MCTS algorithm on top of which AlphaZero is based is modified in the Q-values used in the exploration of the game tree. In particular, $\bar n(s)$ and $\bar m(s)$ quantities are estimated from episodes of sampled experience (approximating in an "importance weighting"-like fashion) and then used to direct self-play towards smaller areas of the tree.
My opinion is that including extra details in this direction in Section 2.3 could help readers to have an algorithmic understanding of what is meant by the *sampling of self-play strategy* which is employed throughout Section 3. 2. When using solved/critical positions as heuristics, are the self-play games re-sampled at the moment, or taken from some buffer of experience used to train the first version of the PCN? 3. Any idea on the theoretical lower bound achievable by having a perfect heuristic? I.e., is it possible to evaluate the empirical increase in quality of the PCN-based heuristics in terms of the gap between baseline and a perfect heuristic? 4. How different are the performances of alpha-beta search or PNS algorithms for the problem of game solving? On top of these clarifications, I'd like to hear the authors' opinion regarding the weaknesses I've highlighted in the previous section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The only limitation of the work is the focus on Go instances and MCTS routines, and those limitations are clearly expressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful reviews and constructive comments. We address your concerns and questions below. All numbered citations in our rebuttal refer to the sources in the paper’s bibliography. > Purely experimental work applied to a specific subset of Go instances. Please refer to “Generalizability to other games” in the rebuttal for all reviewers (at the top) for more information. > Many dependencies on previous work. The online learning methods (using solved/critical positions to fine-tune PCN models) in Section 3.2 and the techniques introduced in Section 3.3 are all novel methods. Please refer to “Novelty and contribution” in the rebuttal for all reviewers (at the top) for more information. > Misleading nomenclature. We settled on using the term “online learning” after reviewing chapter 2, “Related Learning Paradigms”, in the book “Lifelong Machine Learning” by Chen and Liu. Among other related learning paradigms, online learning (subsection 2.3) was the closest to our approach since: 1) our training data arrives sequentially; 2) the existing model is quickly updated when new data arrives to produce the best model so far; 3) we do not have access to a full set of training data as in traditional batch learning. To clarify how our method works, we also use the term “fine-tuning” throughout the paper. We are open, albeit a little reluctant, to making this change, since the term is part of our paper’s title. If a consensus among the reviewers is that the term is confusing, we will make the necessary changes. > Q1: It is not clear how the PCN is trained and its relation to the AlphaZero training process. Thank you for the suggestions. We will revise the manuscript accordingly. > Q2: When using solved/critical positions as heuristics, are the self-play games re-sampled at the moment, or taken from some buffer of experience used to train the first version of the PCN? We use replay buffers in our PCN training, as is the case with AlphaZero.
In the beginning, the self-play games generated by the first version of the PCN will still be sampled by the online trainer. However, after several training iterations, these self-play games will be gradually replaced by the new self-play games (where play starts from critical positions). > Q3: Any idea on the theoretical lower bound achievable by having a perfect heuristic? If we understand your idea of the perfect heuristic correctly, i.e. the heuristic can find a minimum solution tree for any given position, this theoretical lower bound is extremely difficult to obtain. In addition, since our online learning solver attempts to minimize the number of examined nodes dynamically, a more apt theoretical comparison for our online trainer is finding the minimum solution tree while the search is ongoing, taking into account the effort already spent. This non-stationary problem is even harder. To clarify, assume that in a perfect situation, we know solving moves A and B requires 2000 and 5000 nodes, respectively. Then, the perfect heuristics for A and B will be PH(A)=2000 and PH(B)=5000. However, given a scenario where a proof tree has already spent 4000 nodes on solving B but no nodes on solving A, the online trainer should theoretically adjust the PCN to predict PCN(A)=2000 and PCN(B)=1000, and guide the search to keep solving B. We hope this sufficiently explains why this kind of comparison is not feasible unless the entire search space is fully explored. Nonetheless, our empirical experiments strongly indicate that the quality of the PCN improves during solving. > Q4: How different are the performances of alpha-beta search or PNS algorithms for the problem of game solving? Previous research has shown that none of these algorithms (alpha-beta/PNS/DFPN/MCTS) dominates the others [14, 24, 25] for game solving.
We agree that it is very interesting to compare the performances of different search algorithms, and we are currently working on incorporating different search algorithms into our game solver. However, since our proposed online learning method is search-independent, we think this comparison is out of scope for this paper and leave it for future work. --- Rebuttal Comment 1.1: Comment: Thanks for your explanation. I think my only criticism about clarity will be properly addressed. The other questions were mainly curiosities for making the picture more complete, but I comprehend the difficulty in following those directions. Regarding the nomenclature issue, I think that this will mislead some people from the computational game theory literature such as myself, but none of the other reviewers pointed this out, so I desist. Regarding the generalization capabilities of this work, I don't think the rebuttal provided by the authors closes the issue, but it provides a reasonable argument regarding why this cannot be closed at the present time. Overall, I keep my score unchanged.
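The PH(A)/PH(B) thought experiment in the Q3 answer above reduces to simple arithmetic: the ideal online target is the *remaining* proof cost after subtracting effort already spent. A minimal sketch of that arithmetic (our illustration of the rebuttal's example, not the authors' actual training target):

```python
def remaining_cost(perfect_cost, effort_spent):
    """Ideal non-stationary target: nodes still needed to finish each proof."""
    return {move: max(cost - effort_spent.get(move, 0), 0)
            for move, cost in perfect_cost.items()}

# Numbers from the rebuttal's example: PH(A)=2000, PH(B)=5000, and a proof
# tree that has already spent 4000 nodes on solving B.
targets = remaining_cost({"A": 2000, "B": 5000}, {"B": 4000})
# targets == {"A": 2000, "B": 1000}, so the search should keep solving B.
```

With these targets, the cheapest remaining move is B, matching the rebuttal's conclusion that the search should continue on B despite B's larger total cost.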
Summary: The paper proposes an AlphaZero-based MCTS search procedure modified for game solving (vs the original game playing objective). The algorithm consists of a carefully-engineered distributed MCTS procedure leveraging GPU-based learned proof-cost-network estimates to grow the search tree in promising directions. The key idea is to interleave search steps with updating the PCN network, on a non-stationary distribution of states encountered during the search, via self-play. The main contributions are algorithmic and empirical. The experiments are conducted on a variant of Go. The results indicate the proposed method reduces the search space and overall computation time on the challenging task of solving 7x7 Killall-Go. UPDATE: I thank the authors for their detailed response. After reading the other reviews and comments, I'm now more inclined to recommend acceptance and have adjusted my review accordingly. Strengths: + The paper tackles a challenging problem of game solving, which has implications for combinatorial search problems. + The approach of adjusting the distribution of states explored during the self-play learning step towards those likely to be encountered soon by the outer MCTS search is intuitively clear. + The proposed approach consists of a number of algorithmic choices (distributed search, guided self-play, PCN threshold, manager vs worker roles, etc.) that improve search efficiency. Despite being somewhat heuristic and perhaps not entirely novel, each implementation detail is well explained, intuitively clear and fits nicely into the final algorithm. Overall, the approach seems to be a well-engineered solution to efficiently apply modern GPU-based search procedures to the challenging problem of game solving. + The paper includes a detailed experimental investigation. The results indicate that the proposed method clearly has better empirical performance. The utility of training on critical positions is demonstrated clearly.
While the baselines could be stronger and other improvements could be made (discussed below), overall, the empirical section is convincing that the proposed method does well on a challenging combinatorial search problem. Weaknesses: - The paper does not clearly formalize the online learning objective and does not include a mathematical analysis of the algorithm's performance (e.g., wrt regret). - The overall approach seems novel but I'm not sure which components of the algorithm are novel. The paper could do a better job of placing its contributions within the larger body of prior work on game solving. In its current form, it's a bit difficult to assess novelty. - It's difficult to assess how much this approach moves the needle on solving 7x7 Killall-Go or other games compared to prior efforts. Additional baselines and/or domains would help here but likely at large computational expense. - The paper could go into more detail analyzing the role of the critical node queue, given its importance to the overall performance. On a related note, the paper could do a better job of analyzing the overall stability of the updates to the PCN parameters / model (perhaps in the appendix). - Code hasn't been included as far as I can tell. The omission will make reproducibility challenging and is likely to limit the impact of the paper. - (minor) The description of the experimental results could be tightened a bit more. Examples with suggestions below. - L283: Comparing "within 40000 seconds" with "more than one day" could be "~10 hours", "24 hours" or 2.4x faster. - L286: "1.29 billion or so" could be "~1.29 billion". Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Could you please describe the parts of the proposed algorithm or technical ideas that are novel? - The baseline used is the algorithm without online learning. Are there other baselines for 7x7 Killall-Go from prior work?
- I assume that the reported runtimes include the time spent on updating the parameters. Please confirm this is the case. - I was a bit confused by L127-L127. The criteria for spawning a job (when $v_l < v_{thr}$) seems to control the distribution of the tree nodes sent to the solver, using the node's $v_{thr}$ value. Thus, increasing v_thr would allow more jobs to be spawned, with the newly included nodes likely being harder to solve. My question is about decreasing v_thr. I'd expect this to generate **fewer** solver jobs, which are likely easier to solve. But L128 claims otherwise, "smaller v_thr leads to easier but **more numerous** jobs". What am I missing? - Can you provide more details about the role of the critical queue? For example, are the nodes in it shallow or deep? How does node depth correlate with sampling frequency / time spent in the queue? How does the PCN network loss curve correlate with the contents of the queue? Additional empirical details, discussion and insight on this subject would be interesting. - Is it possible to include additional domains? Can you provide information on whether this approach works in other domains? - Is there a reason code wasn't included? Can it be included? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: To an extent. The paper does a better job explaining its strengths than its weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful reviews and constructive comments. We address your concerns and questions below. All numbered citations in our rebuttal refer to the sources in the paper’s bibliography. > Could you please describe the parts of the proposed algorithm or technical ideas that are novel? The online learning methods (using solved/critical positions to fine-tune PCN models) in Section 3.2 and the techniques introduced in Section 3.3 are all novel methods. Please refer to “Novelty and contribution” in the rebuttal for all reviewers (at the top) for more information. > The baseline used is the algorithm without online learning. Are there other baselines for 7x7 Killall-Go from prior work? Our baseline solver is built upon the state-of-the-art 7x7 Killall-Go solver [13, 38] but with some improvements/modifications as mentioned in appendix A.2. In Shih’s work, they solved 20 7x7 Killall-Go problems, but the most challenging problems are merely five-move openings (as opposed to the three-move openings in our paper). We didn’t compare our solver to them directly because this work mainly focuses on investigating different online learning methods (solved/critical positions) in a game-independent distributed game solver rather than solving 7x7 Killall-Go specifically. We also ran a quick experiment and compared a non-distributed version of our baseline to Shih’s solver for fairness. Shih’s solver uses a total of 692,338 nodes to solve the 20 problems in their paper, while our solver only requires 502,679 nodes (a 27.39% reduction in nodes). > I assume that the reported runtimes include the time spent on updating the parameters. Yes, the reported time is the total time from start to finding a solution, measured by the manager. > I was a bit confused by L127-L127. Higher $v_l$ indicates that solving this node requires a larger proof tree, i.e. it is more difficult to solve. Usually, nodes closer to the root are more difficult to solve.
The root generally has the highest proof cost estimate. Typically, there are also fewer nodes near the root of the tree (in contrast to the numerous leaf nodes). Therefore, smaller $v_{thr}$ leads to more jobs, whereas larger $v_{thr}$ leads to fewer. > Can you provide more details about the role of the critical queue? For example, are the nodes in it shallow or deep? How does node depth correlate with sampling frequency / time spent in the queue? Critical positions are unsolved leaf nodes that the manager recently selected. Whether the nodes in it are shallow or deep depends on the manager’s current proof tree. Fig. 5 (b) in the main text shows the average depth (length) of critical nodes of each iteration when solving opening *JA*. Figures for other openings are provided in Fig. 2 in the appendix. Generally, the critical nodes gradually become deeper when the manager digs into the search tree. However, if the manager changes its focus branch during solving (especially when some previous branches are solved), shallower nodes may become critical. For example, Fig. 2 (f) in the appendix shows such a phenomenon when solving opening *KB*. We observe that the manager changes its focus from move A to B (as shown in Fig. 1 of the one-page PDF) around iteration 135, resulting in a sudden change in the average depth of critical positions. A similar phenomenon happens again at around iteration 185. The online trainer uniformly samples critical positions from the critical queue. > How does the PCN network loss curve correlate with the contents of the queue? Since the online trainer is limited to fine-tuning the critical positions only, generally the loss curve decreases gradually during online learning. We provide two additional loss curves for the pre-trained PCN model and online PCN model while solving *JA* (as shown in Fig. 2 of the one-page PDF). Nevertheless, we have observed that the loss is not highly correlated with the critical queue. 
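The threshold intuition in the reply above (smaller $v_{thr}$ yields more but easier jobs, larger $v_{thr}$ fewer but harder ones) can be illustrated with a stylized model. This is our simplification, not the paper's implementation: assume a uniform binary proof tree in which a node at depth $d$ has cost estimate $v_0/2^d$, and the manager dispatches a subtree as a solver job at the shallowest depth where the estimate falls below $v_{thr}$.

```python
import math

# Stylized model: a smaller v_thr forces dispatch deeper in the tree, where
# the frontier is exponentially wider, hence more (but easier) jobs.
def jobs_for_threshold(v0, v_thr):
    # Shallowest depth d with v0 / 2**d < v_thr.
    depth = math.floor(math.log2(v0 / v_thr)) + 1
    return 2 ** depth  # frontier width at that depth = number of jobs

assert jobs_for_threshold(1024, 100) == 16  # small threshold: many easy jobs
assert jobs_for_threshold(1024, 600) == 2   # large threshold: few hard jobs
```

The model only assumes costs shrink with depth and the tree widens with depth; any such shape reproduces the qualitative trade-off the authors describe.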
> Is it possible to include additional domains? Can you provide information on whether this approach works in other domains? Please refer to “Generalizability to other games” in the rebuttal for all reviewers (at the top) for more information. > Is there a reason code wasn't included? Can it be included? The code for our game solver is built on top of an AlphaZero/MuZero game-playing framework under development and in preparation for publication. Specifically, we used its MCTS to construct our distributed game solver and expanded it for online PCN training with slight modifications. We were planning to release the code once the report for the framework is completed. We will gladly provide access to our code for your reference during the review process if you think it necessary. We fully agree that reproducibility is critical; once the framework is published or if this paper is accepted, we will release all related code to ensure reproducibility. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. After reading the other reviews and comments, I'm now more inclined to recommend acceptance and have adjusted my review accordingly.
Rebuttal 1: Rebuttal: Dear all reviewers, We appreciate your time and effort in reviewing our paper. We would like to address the common concerns raised by reviewers. The numbered citations refer to the sources in the paper’s bibliography. * Novelty and contribution First, to the best of our knowledge, we are the first to propose using online learning in game solving and present concrete and detailed methods (solved/critical positions). Furthermore, we provide sufficient experimental results to demonstrate that the online learning method can achieve substantial advantages in game solving in terms of solving time and nodes searched. Second, in terms of solving 7x7 Killall-Go, our online learning game solver is able to solve 16 three-move problems from four different opening groups, while the original state-of-the-art solver [13, 38] can only solve two five-move openings. In conclusion, our experiments not only extend solving 7x7 Killall-Go but also provide valuable insights for other domains in the community. * Generalizability to other games Our proposed online learning method (solved/critical positions) and manager job assignment are **game-independent** and can be easily applied to any other two-player zero-sum games such as Hex, Othello, Chess, etc. Beyond two-player zero-sum games, we believe that the online learning method can be extended to other problems with corresponding modifications, such as automated theorem proving, chemical synthesis, or other combinatorial search problems. However, we must emphasize that most worthwhile problems to solve often require every trick (i.e. domain-specific knowledge) to be applied. This effort is non-trivial and highly specific, but entirely orthogonal to online learning, so we did not include other domains in this paper. In addition, we attached a one-page PDF, including additional experiments for answering reviewer questions. * Figures 1 and 2 provide a detailed analysis of the critical queue.
(reviewer Q8Zg) * Table 1 shows the scalability of our distributed game solver. (reviewer wrov) * Table 2 shows the results of using different critical queue sizes in the online learning solver. (reviewer wrov) Pdf: /pdf/1a145cb0f788abe10a4140751d2abc0f47671227.pdf
NeurIPS_2023_submissions_huggingface
2023
Parts of Speech–Grounded Subspaces in Vision-Language Models
Accept (poster)
Summary: The paper presents an innovative solution to address the problem of polysemy within CLIP's embedding space. The authors propose a novel approach that involves decomposing CLIP embeddings into distinct subspaces, with each subspace representing a specific part of speech. This decomposition technique enables the isolation of different parts of speech within a given sentence, thereby facilitating subsequent manipulation in downstream tasks. The experimental results presented in the paper demonstrate the efficacy of the proposed approach in effectively eliminating properties associated with particular parts of speech during CLIP's text-to-image generation process. Strengths: The paper presents a novel approach to decompose the embedding space of CLIP. Theoretical analysis and experimental results provide compelling evidence that the proposed approach effectively disentangles properties related to different parts of speech within the embedding space. This work is of significant value to researchers, as comprehending and manipulating the embedding space learned by deep neural networks is both crucial and challenging. Understanding the features that embeddings can represent and learning how to manipulate them is essential to the improvement of DNNs. The paper is written clearly and is well-structured. Weaknesses: While the paper's exploration of subspace decomposition focuses on addressing the polysemy issues associated with part of speech in CLIP embeddings, it is important to note that there are instances of polysemy that cannot be disambiguated solely based on part of speech. For example, consider the word "crane," which can refer to both a bird and a machine. These instances present case-by-case ambiguities, and it remains unclear whether the proposed method can be extended to tackle such scenarios successfully as there are no universal subspaces that can disentangle all of them. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: In the context of addressing polysemy and disambiguation, would it be more straightforward to incorporate more detailed descriptions in the prompts? Could you please elaborate on the advantages of the proposed method over using prompts with additional details? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the praise of the paper! We address all comments and questions raised in the two sections below: - `it is important to note that there are instances of polysemy that cannot be disambiguated solely based on part of speech. […] it remains unclear whether the proposed method can be extended to tackle such scenarios successfully as there are no universal subspaces that can disentangle all of them.` * Please see **Fig. 1 of the new rebuttal PDF**, where we show our method can indeed be extended using the idea proposed in the paper of more specific visual subspaces to handle the reviewer’s example of a polysemous noun. In particular, we show one can learn a custom “animal” subspace following the same protocol in Sect. 3.1.2 of the main paper. When projecting CLIP representations of “a photo of a crane” onto this subspace, we produce images of the *bird*, rather than of the machinery. Conversely, projecting the CLIP representation of “A photo of a bass” onto this subspace’s orthogonal complement *removes* the representation of “bass” as a fish, synthesizing instead just images of bass *guitars*, following the examples in [1]. We hope this provides a further indication of the value of the proposed method and the main objective. We thank the reviewer for making the insightful comment that the parts of speech alone cannot disambiguate between certain instances of polysemy; we will address this in the paper’s limitations section and include these new results clarifying the further potential of the custom visual subspaces. - `In the context of addressing polysemy and disambiguation, would it be more straightforward to incorporate more detailed descriptions in the prompts?
Could you please elaborate on the advantages of the proposed method over using prompts with additional details?` * Whilst such manual prompt engineering requires additional effort relative to the automated subspace projections, and domain-specific understanding of each dataset (in the case of zero-shot learning for example), there are also more fundamental advantages to working on the CLIP representations directly. In particular, the ability to simply manually add additional details to the input prompts is not possible for tasks where it is a *user* with unconstrained natural language control over the input prompt. In such scenarios, it’s therefore necessary to perform the filtering/disambiguation on a later representation level such as with the proposed method. One example of this is the task of blocking stylistic imitation of Sect. 3.1.2, or when preventing the synthesis of e.g. gory imagery. Additionally, we highlight that we learn subspaces on the joint vision-language embeddings, allowing the method to also be applied to *image* representations--which do not permit the option to add additional text details to disambiguate/add desired context (for example, it’s the image representations that are projected onto the subspaces in the ZS experiments in section 3.2.2). This means the subspaces would also be applicable to tasks where the sole input is an image, such as CLIP-based image retrieval (e.g. to search based on either style or content similarity only, or to block the matching on NSFW imagery). --- **[1]:** Chefer, Hila et al. “The Hidden Language of Diffusion Models.” *ArXiv* abs/2306.00966 (2023). --- Rebuttal Comment 1.1: Comment: The authors' responses have addressed my questions. I've updated my rating to accept.
Summary: The paper gives a closed-form solution to project CLIP representations of images/text into a subspace with disentangled modes. The proposed method is demonstrated qualitatively in text-to-image generation, and quantitatively by zero-shot classification. Strengths: - The subspace projection proposed by the paper, with a closed-form solution, can be applied directly to models based on CLIP, without further training. - Qualitative results demonstrate the effectiveness to some extent. Weaknesses: - The motivation of this work is that CLIP's representation is biased and unpredictable. The paper proposes to learn sub-space representations for content and appearance. There are many diffusion-based works on guided generation for controlling content and appearance. However, the paper (claims to have wide applications in generation tasks) fails to compare to any current work on this topic. - Limited quantitative studies, except for zero-shot classification. In summary, the closed-form projection proposed in this paper is simple/fast and (qualitatively) effective in the examples shown. But the paper lacks comparisons to recent works on diffusion generation with controllable contents and appearances. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: N/A Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitation is discussed, but only for the dimensionality of the sub-space (a hyper-parameter k), which is superficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
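As context for the zero-shot evaluation mentioned in this review: CLIP-style zero-shot classification reduces to a cosine-similarity argmax over class text embeddings, and in the paper's setting the embeddings would first be projected onto a PoS subspace. A toy sketch (ours, with made-up orthogonal class embeddings standing in for real CLIP text features):

```python
import numpy as np

def zero_shot_classify(image_emb, class_text_embs):
    # all embeddings are assumed L2-normalised, so the dot product
    # equals the cosine similarity; return the best-matching class index
    sims = class_text_embs @ image_emb
    return int(np.argmax(sims))

# three orthogonal mock "class" text embeddings in a 16-d space
class_embs = np.eye(3, 16)

# a mock image embedding mostly aligned with class 2
image = np.zeros(16)
image[2], image[5] = 0.9, 0.1
image /= np.linalg.norm(image)

print(zero_shot_classify(image, class_embs))  # 2
```

Replacing `image` and `class_embs` with subspace-projected embeddings gives the projected variant of the evaluation.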
Rebuttal 1: Rebuttal: We thank the reviewer for their praise of the method as simple, fast, and effective. We address the two weaknesses raised below (where [LX] refers to line number X of the submitted paper): - `However, the paper (claims to have wide applications in generation tasks) fails to compare to any current work on this topic. [...] paper lacks comparisons to recent works on diffusion generation with controllable contents and appearances` * We kindly highlight to the reviewer that **this work is focused on grounding vision-language representations in PoS, with the applications mentioned above used specifically to evaluate said representations.** Therefore, we respectfully disagree that a comparison to the controllable content/appearance image generation literature is appropriate for evaluating the claims we make in the paper about the PoS subspaces. In more detail, the goal of the paper is to recover the appropriate PoS subspaces that can provide more fine-grained control over the modes of variation in CLIP representations (e.g. [L48]). We *evaluate* this claim qualitatively with TTIMs and quantitatively through class invariance metrics and ZS classification. We expand below on how this pertains to each experiment: **Visualising the CLIP representations (Fig. 3)** - As we state throughout the paper (e.g. lines [L18,L190]), the style/content experiments in Fig. 3 serve simply as a **means of validating the learnt CLIP representations** qualitatively. This evaluation protocol follows that of the related works [1, 2] exploring CLIP representations, which both use text-to-image models for the same visualisation purpose but do not compare to image synthesis methods/techniques themselves, for the same reasons we outline here.
For example, Reviewer-yh2S highlights that the `experimental results provide compelling evidence that the proposed approach effectively disentangles properties related to different parts of speech within the embedding space`, which is the intention of the qualitative text-to-image experiments in Fig. 3. **Theme erasing (Figs. 4&5)** - The custom theme CLIP subspaces of Sect. 3.1.2 are further evaluated qualitatively in Figs. 4&5 with the theme erasing application. As discussed in the supplementary material, there are two related preprints that first appeared on arXiv ~2 months before the submission deadline [3, 4] (we thus consider these “concurrent works” per the NeurIPS guidelines). However, both of these preprints work with specific submodules of the alternative Stable Diffusion TTIM; therefore, there is no straightforward way to adapt the approaches to the CLIP-based Paella model, nor to erase concepts from the CLIP feature space, for comparison to the proposed method. Finally, the reviewer quotes [L336] from our “limitations” section, where we state that: `Whilst the model has wide application for both generative and discriminative tasks, it is not able to perfectly separate the modes of variation for every possible image and text prompt`. We have accordingly changed this to read: `Although the recovered subspaces show wide applicability in downstream tasks, [...]` to better reflect the scope of the paper. - `Limited quantitative studies, except for zero-shot classification.` * We kindly draw the reviewer’s attention to the fact that zero-shot classification is *not* the only way we evaluate the method quantitatively. A quantitative “class invariance” score is presented for two different applications, in both Fig. 6 of the main paper and Fig. 6 of the supplementary material, to further demonstrate that the subspaces capture variation in just the specific categories of words of interest. ---------- **[1]:** Materzynska, Joanna et al.
“Disentangling visual and written concepts in CLIP.” *CVPR* (2022). **[2]:** Menon, Sachit et al. “Task Bias in Vision-Language Models.” *ArXiv* abs/2212.04412 (2022). **[3]:** Gandikota, Rohit et al. “Erasing Concepts from Diffusion Models.” *ArXiv* abs/2303.07345 (Mar 2023). **[4]:** Kumari, Nupur et al. “Ablating Concepts in Text-to-Image Diffusion Models.” *ArXiv* abs/2303.13516 (Mar 2023). --- Rebuttal Comment 1.1: Comment: Thanks for the response. I've now updated my score accordingly.
Summary: The following work proposes a geometry-aware approach to identifying subspace projections within the CLIP embedding space. The projections allow one to limit the CLIP embeddings to the subspaces corresponding to individual parts of speech (noun, adj... ). This allows for more fine-grained controllability when using CLIP embeddings for downstream tasks such as text-to-image synthesis. Notably, the authors take into account the non-euclidean nature of CLIP embeddings (located on the surface of a hypersphere) by mapping it to a specific tangent space first. Experiments demonstrate additional controllability with regard to visual style when given access to a part-of-speech partitioning of the CLIP embedding space. Strengths: - Principled approach to handling the non-euclidean nature of the CLIP embedding space. Personally I think this is an important topic that is often overlooked in many applied works building on top of CLIP and the recent improved VQGAN architecture, both of which employ a normalized embedding space. While the geometry-aware formulation is perhaps not the most novel contribution of this work, I believe it may serve as an important blueprint for future works relying on normalized embeddings. - Overall, the closed-form linear formulation of the subspace-solving objective provides a low-complexity but effective solution to the problem statement. Weaknesses: - As much as the geometry-aware formulation is mathematically justified, it would be nice to see some sort of experiment that demonstrates a significant loss in downstream task performance or even a simple plot based analysis as in the supplementary if one were to ignore it. - The visualization for lambda selection in supplementary figure 11 is unclear. The plotting software clearly overlays each point cloud on top of the others. As such, it is difficult to visually confirm the spread of all the point clouds except for the last one rendered, which I believe is the yellow adverb cloud.
In order to properly visualize this, I think we would have to have juxtaposed separate plots for each cloud with the same axis scaling and offset. - While I am aware that there are many standards for placement of related works, I think it would be best for an earlier placement before methods given the proposed formulation's close relationship to fisher discriminant analysis. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their praise of the paper and thorough review. We address the 3 stated weaknesses below: - `it would be nice to see some sort of experiment that demonstrates a significant loss in downstream task performance or even a simple plot based analysis as in the supplementary if one were to ignore it.` * We kindly draw the reviewer’s attention to Fig. 8 in the supplementary material, where we show that the submanifolds lead to higher downstream zero-shot accuracy than the Euclidean subspaces across almost all datasets considered. We highlight that this leads to gains of e.g. ~30% on flowers102 and almost 20% on CIFAR100 (which are non-trivial in the zero-shot setting). We will make sure to include these results in the main paper, using the extra page in the camera-ready version, to emphasise them. - `In order to properly visualize this, I think we would have to have juxtaposed separate plots for each cloud with the same axis scaling and offset.` * We thank the reviewer for the helpful suggestion to separate the PoS point clouds within each subplot of Fig. 11 of the supplementary material to better visualise the relative spread, which would indeed be more intuitive. Our only concern with this suggestion is that it would either lead to Figs. 11&12 spanning 8 pages (if we were to create 4 subplots for each current subplot) or make the figure much more cluttered and thus difficult for the reader to easily compare across values of lambda (if we instead subdivided each existing subplot into quadrants). We will endeavour to incorporate this latter subplot-division suggestion and/or explore additional ways of better visualising this for the camera-ready, such as using an opacity < 1.0 for the existing plots to allow one to better view the other PoS point clouds underneath that of the adverb.
- `I think it would be best for an earlier placement before methods given the proposed formulation's close relationship to fisher discriminant analysis.` * We agree with the reviewer that, all else equal, placing the related work much earlier would be preferred. The reason we eventually opted to place the related work after the methodology section is to be able to contrast the proposed objective more precisely with the related existing methods, having previously introduced the paper’s notation and proposed mathematical formulation. For example, in this position, we can discuss exactly how each $\mathbf{X}_i$ is manipulated by the different methods (the reader better understanding at this point what each matrix refers to in the context of the paper), and reference the equations describing the exact proposed formulation. We found this to be too confusing when the section was placed before the methodology. Given the close relationship of the component analyses, however, we will add an additional brief, more high-level discussion about the connections to the related work in the introduction, given the extra page allowed in any final version of the paper—we thank the reviewer for the helpful suggestion. --- Rebuttal Comment 1.1: Title: Concerns appropriately addressed Comment: I have read the authors' responses to all reviews and am satisfied with their responses. As such, I retain my original rating of "Accept".
Summary: This work proposes to learn subspaces for disentangling the visual representation in the CLIP space, based on the parts of speech of the prompt. A closed-form solution is presented in which the norm of the component corresponding to the embedding of the word of interest is maximized while the norm of the rest is minimized. Qualitative results are shown on the CLIP-based TTIM from LAION, where visual results are presented by killing one of the subspaces (noun or adjective), and quantitative results through performance on a class invariance metric and through zero-shot classification. The method performs better than the prior art. Strengths: + The work is very well written, clearly motivated and well presented. + This is the first work which attempts to disentangle the subspaces using POS in CLIP-based embedding models. + The method is easy to implement with the closed-form solution. + Qualitative and quantitative evaluation are performed to show the utility of POS-guided subspace projection. Weaknesses: - Effect of the prompts: From the quantitative results the role of the subspaces is not clear. For example, by removing the noun from "Van Gogh" it generates the painting. Painting is also a noun. Therefore, the distinction is not clear. - In the qualitative examples (Figure 5), the images also change and the semantics on removing one POS subspace do not guarantee that the original image's style is preserved. - It would be good to show the results with multiple samples from the given prompt on the original dataset to show if the method actually works or just picks up on the partial prompts which would also work with the baseline model. For example, "A multicolored Penguin" and "A penguin" are very general prompts and they can have similar results without the subspace projection. - "Disentanglement" in multimodal approaches has been presented in prior work [1,2]. 1. Fast, Diverse and Accurate Image Captioning Guided By Part-of-Speech. CVPR 2019. 2.
Diverse image captioning with context-object split latent spaces. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How would different partial prompts in the original baseline model compare with the proposed approach, as pointed to in the weaknesses? 2. Are there insights into preserving the underlying style of the image generated from the prompt? For example, removing only snow from the already generated "snowy" images of NYC? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their assessment of the paper as “clearly motivated” and “very well written”. We address below the weaknesses raised and answer the two questions asked: - `Effect of the prompts: From the quantitative results the role of the subspaces is not clear. For example, by removing the noun from "Van Gogh" it generates the painting. Painting is also a noun. Therefore, the distinction is not clear.` * We respectfully argue that the distinction is clear from our qualitative results. In particular, whilst “painting” is indeed a noun, we do not see any generated “object” instantiation of a painting present *within* the image in Row 2 of Fig. 1, but rather an image with the visual styles associated with the artist. Of course, any non-blank synthetic image will always contain *something* one could call a noun (and ultimately we still see an “image”, even though “image” is also a noun), **but crucially this doesn’t refute our claim made in the paper about the specific roles of the subspaces** (e.g. [L199]): that the adjective subspace captures “appearance-based variation” associated with a text prompt, and the noun subspace that of the “objects” described by a text prompt. As a concrete example, we see in Row 1 of Fig. 1 that the prompt “Vincent Van Gogh” produces a combination of both the artist themselves and images in their signature style. The former we think of as the “object” associated with the text prompt, and the latter as the visual styles associated with the prompt. We see precisely these two visual components removed when projecting onto the orthogonal complements of the two subspaces in turn. We hope this further clarifies the role of the subspaces.
- `In the qualitative examples (Figure 5), the images also change and the semantics on removing one POS subspace do not guarantee that the original image's style is preserved.` **+ Question 2:** `Are there insights into preserving the underlying style of the image generated from the prompt?` * In Fig. 5, our *goal* is to remove the visual styles associated with the text prompt, and thus the experiments support the claims as intended. We assume the reviewer intended to refer to Fig. 3 (and is asking why we get slightly different images after subspace projection): whilst we don’t claim to address local image editing nor preservation of the original image’s structure, we understand this to be a common phenomenon with TTIMs. For example, [1] states that: `"In particular, even the slightest change in the textual prompt may lead to a completely different output image"` (speaking of the SOTA TTIMs). So we view this tendency not as a limitation introduced by the proposed method, but rather as one we would inherit if using the method for image editing. If one’s goal were to preserve the image's original structure, incorporating ideas from [1] by freezing a subset of the cross-attention maps would be a sensible approach. We thank the reviewer for pointing this out, and we will add a discussion of this to our limitations section. - **[1]:** Hertz, Amir et al. “Prompt-to-Prompt Image Editing with Cross Attention Control.” *ICLR 2023.* - `It would be good to show the results [...] to show if the method actually works or just picks up on the partial prompts which would also work with the baseline model. For example, "A multicolored Penguin" and "A penguin" [...] can have similar results without the subspace projection.` **+ Question 1** * We politely draw attention to the fact that we already experiment with many examples of text prompts that are *not* decomposable into substrings (e.g. Figs. 1&3 of the main paper and Fig.
1 of the supplementary material), which support the claim that the method works as intended in separating the latent associations of appearance and content. As the reviewer correctly points out, however, for some simple descriptive example prompts (such as “a photo of a multicoloured penguin”) it is possible to manually break the prompt down into sub-prompts that describe each target component. We show in **Fig. 2 of the new rebuttal PDF** the outputs when doing this for the requested prompt. Such special cases of prompts can be seen to have a kind of “ground truth”, and thus we suggest that this comparison further validates that our method successfully produces the expected outcome in separating the two visual components. It’s important to note, however, that such **manual prompt engineering does not constitute a replacement for the proposed method**: for example, for the “style-blocking” tasks of Sect. 3.1.2, it is not possible to manually intervene at the text-prompt level given that a user has unconstrained natural language control over the input text description. The same holds true of any other task with user-specified text input, or when the input is an image (e.g. in style- or content-based image retrieval). - `"Disentanglement" in multimodal approaches has been presented in prior work [1,2].` * We thank the reviewer for bringing these two works to our attention. Whilst both works focus on the task of image captioning, which is unrelated and outside the scope of the paper, we will be sure to discuss how both relate to the proposed method in the revised manuscript. Briefly: [1]’s motivation is to produce “multiple, diverse captions that still properly describe the image”. This is similar in spirit to one problem with CLIP motivating our paper, in that there are multiple equally valid labels associated with an image.
However, [1] focuses solely on the task of image captioning, proposing a standalone method involving training a new series of networks for this specific task. [2] also proposes a new series of networks for the task of image captioning, attempting instead to disentangle the “object” in an image from the context in which it appears (as opposed to separating style from content, or using PoS). --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I have read the rebuttal, which addresses all the points from the reviewer comments. The paper makes good contributions to controllable generation. I have updated my score accordingly.
Rebuttal 1: Rebuttal: We thank all four reviewers for their thorough comments and positive assessment of the paper: - `Reviewer-3FWH` states that the work is “clearly motivated” and the method “easy to implement”. - `Reviewer-QWgZ` praises the “principled approach to handling the non-euclidean nature of the CLIP embedding space” that “may serve as an important blueprint for future work”. - `Reviewer-Qiru` highlights the simplicity, speed, and effectiveness of the method for some of the qualitative results. - `Reviewer-yh2S` notes the “work is of significant value to researchers”, and that the experiments and theoretical evidence provide “compelling evidence” of the proposed approach’s ability to disentangle properties relating to the PoS. In our initial response, we have addressed all weaknesses raised and questions asked by the reviewers, which we hope clarifies any confusion or concerns. We encourage all reviewers to view the **additional 1-page PDF** containing two new figures. **Fig. 1** clarifies the method’s ability to disambiguate between additional instances of polysemy (e.g. “crane”) with the custom subspaces, in reply to `Reviewer-yh2S`. **Fig. 2** shows comparisons in which images are synthesised from sub-prompts describing the individual modes of variation. This further confirms the method works as expected when we have something close to a “ground truth” available for what the disentanglement should look like (as requested by `Reviewer-3FWH`). We are grateful to all reviewers for their time and helpful ideas, and we believe the paper will be even stronger after incorporating their comments into any final version. Pdf: /pdf/14b68f12e11f4c2067e4f111e44801dfaece5e7a.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Self-Supervised Motion Magnification by Backpropagating Through Optical Flow
Accept (poster)
Summary: The paper proposes Lagrangian motion magnification using a pre-trained optical flow network. A magnification loss encourages the optical flow of the magnified frame to match $\alpha$ times the optical flow of the given frames; a color loss regularizes the color consistency between the given and magnified frames. Test-time adaptation improves the quality of magnification on out-of-domain data. Experiments show competitive performance compared to the prior art in terms of SSIM and the proposed evaluation metrics. Strengths: 1. Simple and effective algorithm to train a motion magnification network using an off-the-shelf optical flow network. 2. Given the off-the-shelf optical flow network, this approach enables training on large-scale unlabeled videos. 3. Targeted magnification and test-time adaptation might provide a better user experience. 4. The proposed method seems to be independent of the architecture of the neural network. Weaknesses: 1. The term should be used carefully. I am not sure that the proposed method can be named "self-supervised" because the off-the-shelf optical flow network used in the experiments is trained by supervised learning. If the authors want to use the term "self-supervised", a self-supervised optical flow network should be used in the main experiments, and the supervised one would be the strong baseline to compare against; it is not sufficient that a self-supervised optical flow network can be used in theory. 2. The evaluation metric is limited. To justify the underperformance in SSIM, the authors note this phenomenon in the last sentence of Table 3: "DeepMag explicitly trains for SSIM". The proposed algorithm is likewise optimized for the proposed evaluation metric, Motion Error. For the same reason, I cannot be convinced by the quantitative results. 3. It would be better to include the limitations of using the optical flow network. Optical flow networks are poor at estimating subtle motion.
This is related to the underperformance in the 0.04px subpixel test of Table 3. Thus, the limitation induced by the optical flow network should be investigated in an ablation study, because this subtle motion is important in magnification. 4. I think that the proposed method is more general than supervised or self-supervised learning, because this is determined by which optical flow network is used. How about using the synthetic data together? 5. Is the network architecture different from DeepMag? As for the control experiments, does the number of parameters affect the performance directly? 6. I think that DeepMag is sufficient as the strong baseline. However, I wonder why Warp Nearest and Bilinear are used, and the more advanced hand-crafted algorithms [A, B] are not used as baselines. Is it because the evaluation data might contain large motion? [A] Phase-Based Video Motion Processing (SIGGRAPH 2013) [B] Riesz Pyramids for Fast Phase-Based Video Magnification (CVPR 2014) Technical Quality: 3 good Clarity: 1 poor Questions for Authors: My main concerns are the used terminology and fair comparison. See the weaknesses part. ===== I update my rating from 4 to 6 because the authors will reflect the discussion below. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: Limitations and broader impacts are described at the end of the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
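To make the objective under discussion concrete: the review's summary describes a loss in which the flow to the magnified frame should equal $\alpha$ times the flow between the input frames. Below is a toy numpy sketch of that idea (our simplification, not the paper's code; a real implementation would backpropagate through a differentiable pretrained flow network such as RAFT rather than operate on fixed arrays):

```python
import numpy as np

def magnification_loss(flow_input_pair, flow_to_magnified, alpha):
    # L1 penalty: the flow from frame 1 to the magnified frame should be
    # alpha times the flow between the two input frames
    return float(np.mean(np.abs(flow_to_magnified - alpha * flow_input_pair)))

# toy 2x2 flow fields with (dx, dy) channels
flow_input = np.full((2, 2, 2), 0.5)      # flow between the two input frames
flow_magnified = np.full((2, 2, 2), 2.0)  # flow from frame 1 to magnified frame

loss = magnification_loss(flow_input, flow_magnified, alpha=4.0)
print(loss)  # 0.0: magnified motion is exactly alpha times the input motion
```

The color-consistency term mentioned in the summary would be an additional penalty comparing the magnified frame's colors against the input frames; it is omitted here for brevity.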
Rebuttal 1: Rebuttal: Thank you for your constructive and comprehensive feedback. We would like to address your points below: **The Term “Self-Supervised”** We term our method “self-supervised” because it does not need ground truth magnified motions, as opposed to previous work DeepMag. It is common practice in the ML and vision communities to label methods self-supervised even if they use supervised components somewhere in the system, particularly optical flow. Below are papers presented at reputable conferences that follow this standard: - "Self-Supervised Learning of Motion Capture." H. Fish Tung, H. Tung, E. Yumer, K. Fragkiadaki. NeurIPS 2017. - Self-supervised objective for predicting a 3D body mesh from a video, with a term that uses optical flow computed using FlowNet2.0. - "The Sounds of Motion." H. Zhao, C. Gan, W. Ma, A. Torralba. ICCV 2019. - Self-supervised method for sound source separation with motion as auxiliary information, extracted from a pretrained PWCNet. - "Self-Supervised Learning via Conditional Motion Propagation." X. Zhan, X. Pan, Z. Liu, D. Lin, C. Loy. CVPR 2019. - Self-supervised pretext task in which a network is required to predict optical flow from sparse motion signals, both computed from the supervised LiteFlowNet. - "Self-Supervised Learning of Audio-Visual Objects from Video." T. Afouras, A. Owens, J. Chung, A. Zisserman. ECCV 2020. - Self-supervised loss to find "audio-visual objects," which utilizes tracks computed using PWC-Net. - "Self-Supervised Representation Learning from Flow Equivariance." Y. Xiong, M. Ren, W. Zeng, R. Urtasun. ICCV 2021. - Learns representations by enforcing equivariance to a warping under optical flow, computed from RAFT. In addition, we train a version of our model using ARFlow, a fully unsupervised optical flow model, and present results in the attached PDF in Tables 2 and 3. Despite never fully finishing training, it performs well on metrics and beats DeepMag a significant amount of the time. 
We will include completed results in our manuscript. **Evaluation Metrics** We would be curious if the reviewer has an alternative metric in mind. Motion magnification is an incredibly hard task to measure quantitatively, with no standard benchmarks but many possible metrics with different trade-offs. As such, we aim to present a holistic, fair, and well-rounded view of the performance of our models and baselines. To do this we use both SSIM, which favors DeepMag and is used in their evaluations, and we introduce the Motion Error metric, which favors our method. We mitigate the advantage of our model by using *three* optical flow models to compute the Motion Error: RAFT, which is used during our training, and PWC-Net and GMFlow, which are never used by our method at all. Our method outperforms DeepMag in all instances. In addition, DeepMag has a significant advantage in the synthetic evaluations as it is being tested on in-domain data. This is because the processes used to generate the synthetic train and test sets are almost identical (both use PASCAL VOC objects on COCO backgrounds undergoing only translations). Despite this advantage we are still able to outperform DeepMag on the Motion Error metrics and perform well on the SSIM metrics. **Very Small Flows** We agree that our method is closely tied to the performance of the optical flow method used during training and will add more discussion of this in the limitations section. On the other hand, because our method does not rely on a specific optical flow method, it is free to take advantage of future progress in optical flow models. As for ablations on small motions, we point out that we already investigate small sub-pixel motions in depth in Table A3 of the appendix, where our method outperforms DeepMag on motion error. If there are specific ablations that you believe would be helpful, please let us know.
**Joint Supervised and Self-Supervised Training** Our method can certainly be combined with existing supervised approaches. However, our primary goal in this paper was to investigate our motion magnification objective. For this reason we opted to focus on the methods in isolation, as combining them could make evaluation difficult: it would not be clear how to disentangle the benefits of the different objectives. **Architectural Differences** Yes, the architectures are different. Because our method is not tied to a specific architecture, we use the simple and ubiquitous UNet. DeepMag uses a bespoke architecture designed specifically for motion magnification, consisting of an encoder, a decoder, and a manipulator. The encoder encodes frames into a "texture representation" and a "shape representation", and the manipulator modifies the shape representation to achieve motion magnification. In addition, we show results in the attached PDF of a smaller model in Tables 2 and 3. This model has 1.09 million parameters compared to DeepMag’s 0.92 million. Due to time constraints we were not able to fully train the model, but despite this it performs similarly to our original model as compared to DeepMag. We will add completed results to the paper. **Forward Warp Baselines** The forward warp baselines we use are the simplest Lagrangian motion magnification methods. Therefore these methods serve as a point of reference, or a lower bound, for the reader. In addition, they give a sense of how well using *just* optical flow can perform, as opposed to using it in our proposed objective. **Phase Based Baselines** DeepMag shows superior performance against the phase-based methods, so we believed it sufficient to compare against DeepMag. Moreover, evaluation of phase-based methods typically requires application of a temporal filter, which makes it quite tricky to design a fair comparison. For an example of this please see the DeepMag paper.
&nbsp; Thank you for your thorough review, and please let us know if we can further answer any questions you may have. --- Rebuttal Comment 1.1: Title: Response of Rebuttal by Authors Comment: I thank the authors for responding to the comments. **The Term "Self-Supervised"** I am not yet convinced by the authors' argument. For a meaningful discussion, I would like to hear the authors' opinion on the example below. Consider weakly-supervised semantic segmentation [A], which I think is a setting similar to this paper's: - Train Model A on task A by supervised learning - This paper: Optical flow - Weakly-supervised semantic segmentation: Classification - Apply Model A to ML algorithm B without using the ground truth of Task B - This paper: Motion Magnification - Weakly-supervised semantic segmentation: Segmentation In my opinion, "Weakly-Supervised Learning" is more appropriate for this paper because this work exploits the cheaper ground truth of optical flow rather than that of motion magnification. Can the authors provide their opinion on this? My main concern is the use of terminology in this work. If this is addressed, I will raise my score. Also, where can I find the result of the method trained with ARFlow in Tables 2 and 3? I would appreciate it if the authors could provide the line number. [A] Evaluation for Weakly Supervised Object Localization: Protocol, Metrics, and Datasets, TPAMI 2022 **Evaluation Metrics** **(i)** I understand the difficulty of evaluating motion magnification, much like the evaluation of a generative model. Since there exists no public benchmark for motion magnification, it might be difficult to request the construction of a real-world dataset (moving an object by exactly x and alpha x to make a data pair).
I think that "We do better on motion error, but lag slightly behind on SSIM, which DeepMag explicitly trains for" should be removed for the following reasons: (1) the superior performance on motion error is questionable because the authors' method is trained for this metric, and (2) this casts doubt on the validity of the evaluation metric itself. I believe removing it does not degrade the authors' work. **(ii) Joint Supervised and Self-Supervised Training** This question follows from the reported numbers for motion error and SSIM, since joint training should improve both metrics together. I agree with the authors' response. **Architectural Differences** I think that reporting the number of parameters is sufficient, though running the experiment would be the most direct way. **Additional paper** I suggest this paper: "Video Motion Magnification to Improve the Accuracy of Vision-Based Vibration Measurements". This paper can support the need for motion magnification, and suggests that a learning-based method would be superior to other methods.
Summary: This paper proposes a self-supervised model to solve the Lagrangian motion magnification problem without needing ground-truth labels. The network takes as input the two input frames and a magnification factor that ranges from 1 to 16, and outputs a generated frame that has magnified motion from the first frame. Off-the-shelf optical flow networks are used in loss computation for self-supervision. Test-time adaptation has been explored to enhance the generation quality. Experiments show promising results. Strengths: 1. Good writing; overall clear. The studied problem has received arguably less attention in the research community, but the related work section is detailed and well-structured, which especially helps the readers to catch up. 2. The method is very simple and easy to understand. 3. Experiments show promising results. 4. The authors promised to release full code upon acceptance. Weaknesses: 1. The target application of this task is not clear. Why is this task important? In which data domains or scenarios do we want it to work? If our goal is just to detect small motions, we can develop optical flow estimation methods that work specifically for small motions. Even for existing state-of-the-art optical flow networks, detecting small motions is generally not a big issue, and it should not be hard to find a way to visualize small optical flow. Why do we need to generate a video? Maybe adding some application examples in the introduction and some results on related datasets will help the reader better understand the background and goal of this task. 2. There is still some confusion about the method. See questions below. 3. Some minor edits. See additional comments below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. As you included the optical flow network in the whole computation graph with gradient computations, did you freeze its weights to make sure it does not shift away? If so, please state so explicitly in the paper. 2.
Line 185-186: Why is the positional encoding conditioned on the factor $\alpha$? How do you do it? It does not make sense to me if no explanations are given. 3. Test-time adaptation: this trick adds overhead at inference time. How efficient is it? 4. Did you tackle the occlusion issue? I believe this is an issue both for new frame generation and optical flow estimation. A simple warping using optical flow may be good enough if there are no occlusions at all. Additional comments: 1. Line 26-28: "The datasets and learning procedures that are used by these models are designed to be general-purpose, with a particular focus on ensuring that they apply to a variety of motions, objects, and scenes". Your cited methods are all supervised methods, which are usually trained on large synthetic datasets that could be totally different from real use cases. The generalization ability of these models is still an open question. I think a better idea is to weaken this statement. 2. Line 32-34: "And generate a new image pair whose predicted optical flow is $\alpha$ times as large as that of the input". Maybe add that the new image pair should share the same reference frame as the input, to avoid confusion. 3. Line 33-34: add $(\alpha \geq 1)$ to be more clear. Your method also makes sense even if $\alpha < 1$, so it is better to clarify which cases your method covers. 4. Line 38: "it" -> "our method". 5. Line 61: need a citation for "Eulerian approaches". 6. Line 69: "it uses it" -> "we use it". 7. Line 115: It is better to make the symbol $x$ bold like $\mathbf x$ since it is a vector. 8. Eq 1: Maybe add that this equation assumes no occlusions. 9. Fig 6 caption: "subset" -> "subsets". 10. Repeated references: [21] and [22], [37] and [38]. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Maybe need to add occlusions as a key limitation Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough and insightful review. We would like to address your questions as follows. **Importance of Motion Magnification** Motion magnification has applications in fields ranging from medical imaging to structural engineering to micro-expression analysis. It enables us to amplify subtle movements, thereby revealing dynamics that may have previously been hidden or hard to interpret. In medical imaging, motion magnification may be used to magnify minuscule motions within tissues or organs to assist medical experts in diagnosis. For engineers, motion magnification may serve as a powerful tool for detecting structural faults or weaknesses in buildings, bridges, or machinery. Motion magnification can also be applied to emphasize emotions, actions, or facial expressions that would otherwise be too small to detect. **Importance of Magnification Targeting** To magnify the motion of certain objects instead of motion in the entire frame, previous phase-based work used temporal filtering to select motion within specified frequency bands. This is a rather unintuitive interface for selecting objects to magnify, and is used mainly because frequency filtering is particularly natural for phase-based methods. In addition, there may be multiple objects in a video moving at similar frequencies, making it hard to isolate each object individually. We propose “targeted magnification” as a natural and intuitive way to allow users to magnify objects within a segmented area. With recent advances in object segmentation it is much easier and more straightforward for users to click and choose specific objects to magnify, giving precise control over which areas of the video are magnified. **Directly Using Optical Flow for Motion Detection** Visualization of optical flow is generally hard to decipher, and motion magnification offers a much better alternative.
Colorwheel visualizations, where directions are mapped to arbitrary colors, can be unintuitive to understand, especially when the motion changes quickly and many colors flash by rapidly. In addition, the magnitude of the motion is typically encoded by the saturation, making it hard for humans to finely differentiate between flow magnitudes. Overall, motion magnification is an elegant solution to the problem of detecting, visualizing, and understanding small motions in videos. We will include a discussion of these points in our paper. Please let us know if you have further questions. **Frozen Optical Flow Network** Yes, the optical flow network is frozen but is still backpropagated through to give a learning signal to the generation network. We will make this explicit in the paper. **Alpha Encoding** In order to pass the desired magnification factor, $\alpha$, to our network, we use a "positional embedding" scheme to encode the scalar $\alpha$ into a vector. The details of this implementation can be found in Section A2 of the appendix as well as in the code from the supplementary materials. In particular, we use a standard sinusoidal embedding scheme where $\alpha$ is passed through a series of sinusoids with exponentially increasing wavelength, and we stack the result into a vector. We apologize for any confusion and will edit the text in our paper so that this is clearer. **Test Time Adaptation Efficiency** The test time adaptation experiments presented in the paper were not specifically optimized for efficiency, nor was their runtime measured. We reran the experiments corresponding to the three examples from Figure 2 in the paper ("baby," "cats," and "pole") to provide more careful measurements of test time adaptation efficiency. Qualitative visualizations of improvement and quantitative measures of efficiency are in the attached PDF in Figure 1 and Table 1.
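As a rough illustration of the kind of sinusoidal alpha embedding described above (a minimal sketch only; the dimension, base frequency, and function name are illustrative assumptions, not the paper's exact implementation — see Section A2 of the appendix for the real details):

```python
import numpy as np

def embed_alpha(alpha, dim=16, base=10000.0):
    """Encode a scalar magnification factor as a sinusoidal vector,
    in the style of standard positional embeddings.
    `dim` and `base` are illustrative choices."""
    half = dim // 2
    # Exponentially increasing wavelengths <=> exponentially decaying frequencies.
    freqs = base ** (-np.arange(half) / half)
    angles = alpha * freqs
    # Stack sines and cosines into a single fixed-length vector.
    return np.concatenate([np.sin(angles), np.cos(angles)])
```

The resulting vector can then be fed to the network alongside the frame pair to condition the generation on the desired magnification factor.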
We find that even with a thousand gradient steps and tens of minutes, improvement can be seen on out-of-distribution data. We give a precise explanation of the test time adaptation procedure below, which will be added to the manuscript. Given a video for test time adaptation, we construct a dataset using every frame, with minor rotation and color augmentation. We then train for N epochs, where an epoch consists of all the frames in the video. The learning rate is set to 1e-4 (a third of the learning rate used during training) and the batch size is 8. Note that the training times vary based on the number of frames and the size of the video. **Occlusion and Disocclusion** This is an excellent observation. As can be seen in the supplemental video, our model produces qualitatively good results even in the presence of occlusions and disocclusions. This is because optical flow models are already designed to handle occlusions and disocclusions well, so our magnification loss also incentivizes correct predictions in these scenarios. The other component of our loss, the color loss, can be affected by occlusions and disocclusions. To mitigate this, in early experiments we tried masking occlusions in our loss: we check for occlusions by computing flow both forwards and backwards and performing a cycle consistency check, and apply the resulting occlusion mask to the color loss so that pixels that are occluded or disoccluded do not contribute to it. We found that results did not change significantly with occlusion masking, possibly because the magnification loss already handled occlusions and disocclusions adequately, and we therefore omitted this from our final method. Overall, driven by our “magnification loss”, we are able to handle occlusions and disocclusions adeptly, as evidenced by our qualitative results. We will revise our manuscript to include a discussion of this point. **Typos and Suggestions** Thank you for finding these.
We will revise our paper to include your suggestions and fix typos. &nbsp; Thank you for your thoughtful review, and please let us know if you have further questions or need further clarification. --- Rebuttal Comment 1.1: Comment: Thanks for the response! I still have questions on the following topics. **Importance of Motion Magnification** If the goal of the task is to detect and highlight small motions in different applications, we could just visualize the magnitude of the optical flow field. Ideally, most of the visualization will be totally black due to zero motion, and only the small-motion parts will be highlighted. This should be obvious enough if the goal is just to detect small motions, so why do we need to generate a new video with magnified motion? For medical workers or engineers, this type of visualization should be easier to read than the original color images. In addition, generating a new video with magnified motion will most likely create new occlusions, so part of the information in the original input may be lost. For example, what if the magnified motion covers a part of the background that also contains another small motion that needs to be detected? In comparison, a simple visualization of per-pixel motion such as optical flow should be able to highlight every small motion in the image. I suggest that the authors use some examples from target applications as demos, instead of the current "baby" and "cats" examples (which look like naive toy cases). That will help better explain the background applications of the task. **Occlusion and Disocclusion** Based on my experience in optical flow estimation, occlusion is still a very challenging issue and also a major source of error for most of the latest optical flow models, including RAFT. The correspondences of occluded pixels are not perceivable in the second frame, so the model can only "guess" the correct flow based on smoothness and other cues.
I think the main reason that occlusions do not hurt too much in your case is that the occlusion regions are also very small when the motions are generally small, so masking the occlusion region does not make a big difference. Maybe you could visualize your occlusion masks and argue in this direction. In any case, stating that an optical flow model can handle occlusions well could be a bit dangerous. The paper should still acknowledge the issues with occlusions, even if you can argue that they do not hurt your specific task too much. Many of the equations and losses do not work in occluded regions, so it will be confusing if you do not mention occlusions there at all. --- Reply to Comment 1.1.1: Comment: **Importance of Motion Magnification** We agree with you that if our goal were to detect or localize small motions we could just plot an optical flow field. The goal of motion magnification is instead to visualize these small motions. For very simple motions this may not be necessary and your proposal would work well, but often people need to understand complex motions that can’t be visualized well from optical flow alone. For example, in Figure 2c of our paper we motion magnify the “pole” clip, originally from [1]. Looking at the patch visualization it is clear that the column is vibrating, but more precisely, motion magnification shows that it is vibrating at *two different modes*: one lower frequency mode at a higher amplitude, and one higher frequency mode at a lower amplitude. The superposition of the two modes produces the distinct “squiggly sine wave” patch visualization. Understanding this from optical flow visualization alone would be quite difficult, but magnifying motions makes it abundantly clear. Other examples of motion magnification visualization can be found in the cited PNAS article [2], where the authors use and validate the technique for a number of scientific applications. This includes visualization of the modal shapes of a pipe (Fig.
S3) and a lift bridge (Fig. 3), vibrations in ear tissue (Fig. 2), and deformations in a metamaterial under forcing (Fig. 4). In addition, we point out that at this year’s SIGGRAPH conference the original Eulerian motion magnification paper [3] was awarded the “Test-of-Time Award,” highlighting the impact of this visualization technique. Finally, with respect to your suggestion of replacing the “baby” sequence in our paper, we note that it is a "classic" example used in prior work [3,4,5,6]. We will edit our draft to better explain the applications of motion magnification and include more practical applications of the technique, and we thank you for your suggestions. **Occlusion and Disocclusion** We also agree with you that occlusions and disocclusions are a challenge in the context of motion magnification methods. Our discussion above was not meant to imply that we’ve solved the problem of occlusions, but rather to point out our method does not ignore occlusions entirely and is able to reason about them through the optical flow model. As you correctly point out, these models are not perfect at predicting flow in occluded areas and therefore large occlusions may be challenging. However, because our method is agnostic to the form of tracking used we believe it can benefit from future progress in optical flow estimation. Additionally, empirical results demonstrate that motion is amplified well despite these concerns. This is likely due to your point that motions we consider are small. Overall, we thank you for your helpful thoughts on this problem and your insightful recommendations. We will revise our draft to include a thorough discussion of these points. &nbsp; Again, thank you for your suggestions and please let us know if you have any remaining questions or would like to discuss a point further! &nbsp; [1] “Structural Modal Identification through High Speed Camera Video: Motion Magnification.” Justin G. Chen, Neal Wadhwa, Young-Jin Cha, Frédo Durand, William T. 
Freeman, Oral Buyukozturk. *Proceedings of the 32nd International Modal Analysis Conference (2014).* http://people.csail.mit.edu/mrub/vidmag/papers/Chen_Imac_2014.pdf [2] “Motion microscopy for visualizing and quantifying small motions.” Neal Wadhwa, Justin G. Chen, Jonathan B. Sellon, Donglai Wei, Michael Rubinstein, Roozbeh Ghaffari, Dennis M. Freeman, Oral Büyüköztürk, Pai Wang, Sijie Sun, Sung Hoon Kang, Katia Bertoldi, Frédo Durand, and William T. Freeman. *Proc. Natl. Acad. Sci., 114 (44) (2017), pp. 11639-11644.* https://www.pnas.org/doi/full/10.1073/pnas.1703715114 [3] “Eulerian Video Magnification for Revealing Subtle Changes in the World.” Hao-Yu Wu, Michael Rubinstein, Eugene Shih, John Guttag, Frédo Durand, William T. Freeman. *ACM Transactions on Graphics, Volume 31, Number 4 (Proc. SIGGRAPH), 2012*. [4] “Phase-based Video Motion Processing.” Neal Wadhwa, Michael Rubinstein, Frédo Durand, William T. Freeman. *ACM Transactions on Graphics, Volume 32, Number 4 (Proc. SIGGRAPH), 2013.* [5] “Riesz Pyramids for Fast Phase-Based Video Magnification.” Neal Wadhwa, Michael Rubinstein, Frédo Durand, William T. Freeman. *IEEE International Conference on Computational Photography (ICCP), 2014.* [6] “Learning-based Video Motion Magnification.” Tae-Hyun Oh*, Ronnachai Jaroensri*, Changil Kim, Mohamed Elgharib, Frédo Durand, William T. Freeman, Wojciech Matusik. *European Conference on Computer Vision (ECCV), 2018.*
Summary: The paper introduces an optical-flow-based Lagrangian motion magnification method, learned through self-supervised learning. The architecture is very simple -- just a U-Net that inputs two temporally consecutive frames and outputs a motion-magnified image. To train the U-Net, the method uses an off-the-shelf optical flow method, estimates the motion between two frames, and treats this as the real motion of the scene. It then penalizes the difference between the estimated magnified motion (constant * estimated motion) and the motion between the reference image and the motion-magnified image (i.e. the output image). The method demonstrates both good quantitative and qualitative results. Strengths: + Comprehensive related work The paper provides a comprehensive literature survey, which helps understand previous related work and where this paper positions itself among them. + Implementation details Sec. 4.1, Sec. 4.2, and the supplementary material provide sufficient implementation details, so it's easy to understand the choices of hyper-parameters, training configuration, dataset curation, and training details. Parts of the source code are also included in the supplementary material, which all helps reproduce the proposed method. + In-depth evaluation Fig. 5, Fig. 6, Table 2, and Table 3 provide an in-depth evaluation of the proposed method and related methods on real-world and synthetic videos. Given that there is no public benchmark and the difficulty of evaluation on this topic, the paper does its best to provide sufficient evaluation. Weaknesses: - How to handle occlusion and disocclusion? It seems already stated in the limitation section, but I wonder if the method doesn't explicitly handle occlusion and disocclusion. If it doesn't, can it be a problem? Does the U-Net learn to handle them to some extent? When watching the supplementary video, the model doesn't seem to output hallucinated appearance around the disoccluded region, which seems good.
- Worse SSIM in Table 3 In Table 3, compared to DeepMag, all metrics are better except for SSIM. I wonder why this is the case. What makes DeepMag's SSIM better than the proposed method's? - Moving background? In the supplementary video (1m14s and 2m12s), I am wondering why the background is moving. Is it because the optical flow method hallucinates motion in the background, which is then used during training? Can this problem be resolved without using the target segmentation mask? And another question: what if $L_{mag}$ in Eq. (3) is applied only to the target segmented objects and the background motion is penalized towards zero? Could that produce better results and prevent the background from moving? There are some unresolved concerns, but the strengths outweigh the weaknesses for now. I would like to give Borderline Accept for now, but the rating could change after the discussion phase. --- All concerns are resolved. Thus I am updating my rating to 7. Accept. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Usage of a discriminator loss? I wonder if using an adversarial loss from a GAN would help produce a more realistic appearance where flow is not accurate, occlusion happens, or artifacts occur. - I wonder if the collected data, or at least the information on train/test splits, will be made available if the paper is accepted. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - Probably another limitation would be that the method's success depends on the off-the-shelf optical flow and segmentation methods.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your positive and comprehensive assessment of our paper. Below, we address your questions and concerns: **Occlusions and Disocclusions** This is an excellent observation. As noted, our model produces qualitatively good results even in the presence of occlusions and disocclusions. This is because optical flow models are already designed to handle occlusions and disocclusions well, so our magnification loss also incentivizes correct predictions in these scenarios. The other component of our loss, the color loss, can be affected by occlusions and disocclusions. To mitigate this, in early experiments we tried masking occlusions in our loss: we check for occlusions by computing flow both forwards and backwards and performing a cycle consistency check, and apply the resulting occlusion mask to the color loss so that pixels that are occluded or disoccluded do not contribute to it. We found that results did not change significantly with occlusion masking, possibly because the magnification loss already handled occlusions and disocclusions adequately, and we therefore omitted this from our final method. Overall, driven by our “magnification loss”, we are able to handle occlusions and disocclusions adeptly, as evidenced by our qualitative results. We will revise our manuscript to include a discussion of this point. **SSIM** There are two main advantages that DeepMag has when evaluated on SSIM. Firstly, DeepMag is trained with a loss that is relatively similar to SSIM. As a result it tends to score higher on this metric. Secondly, the SSIM metric is computed on the synthetic test set from DeepMag. This test set is generated with a procedure that is very similar to how the train set for DeepMag was generated, and is therefore quite in-domain for the DeepMag model.
On the other hand, our model was trained on real videos and the test set is therefore significantly out-of-domain both in terms of content (derived from COCO and PASCAL) and motion (only translations). We note that despite this domain gap we are able to outperform DeepMag on the Motion Error metrics on the synthetic test sets and achieve good SSIM results. **Moving Background in Supplemental Video** We believe that you are correct that the shimmering background comes from optical flow errors. The background of the cat video, which is extremely out-of-focus and features bokeh artifacts, is out-of-distribution and therefore particularly hard for the optical flow model. Our model is closely tied to the performance of the underlying optical flow model, and we will add a discussion of this in our limitations section. **Adaptive Magnification Loss** Your suggestion of only applying the magnification loss to segmented objects should work in removing background motion, and would certainly be an interesting extension of our loss for future work. It would however also add complexity to the training process as it would be necessary to generate segmentations and make foreground-background predictions. In addition it could remove motion that the user might want to keep. Therefore for this paper we opt to keep our method simple and just train to magnify all motion, and then allow the user to choose at inference time which areas of the video to magnify through our targeting procedure. **GAN Loss** A GAN loss could certainly help the performance of the method. However, we wanted to focus this work specifically on the performance of the self-supervised loss that we introduce. To this end we opted to omit a GAN loss as it would make it hard to disentangle the roles of the adversarial loss and the motion magnification loss. 
Even without a GAN loss we are able to produce magnified sequences that are competitive with or outperform existing supervised methods, demonstrating the effectiveness of our proposed method. **Dataset Information** We are more than happy to release information about the dataset upon acceptance. In fact, enough information to perfectly reconstruct the dataset should already be in the paper and the supplemental material. Table 1 contains the temporal strides used to sample frame pairs from the 5 constituent datasets, in addition to the number of frame pairs before and after motion filtering. Appendix A3 contains additional information about how the constituent datasets were sampled, details on how the motion filtering was performed, and an overview of how the test set was constructed. We plan on releasing our datasets and all of our code upon acceptance, including the code used to sample, filter, and compile our train and test datasets. **Limitations of Off-the-Shelf Models** We agree with the observation that our method is fundamentally limited by the quality of off-the-shelf models, and will add this to our limitations section. We note, however, that this dependence on off-the-shelf models also means that our method can benefit from improvements in optical flow models. &nbsp; Thank you for your insightful review, and please let us know if you have further questions or need further clarification. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses. They resolve my concerns. I will also read the other reviews and update my rating accordingly after the discussion period! --- Reply to Comment 1.1.1: Comment: Thank you for your reply! Please do not hesitate to contact us if you have more questions.
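The forward-backward cycle consistency check mentioned in the occlusion discussion above can be sketched as follows (an illustrative approximation only, with a nearest-neighbour lookup and a hypothetical threshold; the actual early experiments may have differed):

```python
import numpy as np

def occlusion_mask(flow_fwd, flow_bwd, thresh=1.0):
    """Mark pixels as occluded via a forward-backward cycle check.
    flow_fwd, flow_bwd: H x W x 2 flow fields between a frame pair.
    A pixel is flagged when following the forward flow and then the
    backward flow does not return close to the starting position."""
    H, W, _ = flow_fwd.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Nearest-neighbour lookup of the backward flow at the forward target.
    tx = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, W - 1)
    ty = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, H - 1)
    cycle = flow_fwd + flow_bwd[ty, tx]
    return np.linalg.norm(cycle, axis=-1) > thresh  # True = occluded
```

Such a mask would be applied so that flagged pixels do not contribute to the color loss, as in the early experiments described in the rebuttal.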
Summary: This paper uses the classical Lagrangian formulation to self-supervise the task of motion magnification. Thanks to the proposed self-supervision-based technique, the method can also be adapted at test time. As shown in Figure 1, the proposed method is simple: the optical flow vectors of videos before and after magnification are compared. The optical flow of the motion-amplified video is compared to the scaled (by the amplification factor) optical flow of the original video to derive the magnification loss. To make the output video color-consistent, a color loss is also used. Videos are provided in the supplementary material for qualitative analysis. Strengths: 1. The method presented in this paper is simple, straightforward, and meaningful. 2. The experimental evaluations validate the proposed method. The supplementary videos are helpful. 3. The source code is also provided in the supplementary material, which further highlights the simplicity. 4. Limitations of the method are well discussed, and failure cases are shown. 5. The paper is well-written and easy to follow. Weaknesses: 1. The proposed method largely depends on the pre-trained optical flow network. 2. Given the nature of the addressed problem, its evaluation is known to be difficult. This is reflected in the experiments. 3. The experiments are conducted on a relatively small number of video frames, and the paper discusses “out of distribution” data and “test time adaptation”. It would be interesting to see how the method generalizes when trained on a large number of videos, before proceeding to discuss the rest. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How does the proposed method behave with low-quality optical flow? Does the method improve its performance on difficult videos when trained on a larger amount of data? I can imagine that when trained on a large collection of videos, the method may generalize and hence also perform well in some difficult cases.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive and well thought-out comments. We would like to address the questions that you have: **Pre-Trained Optical Flow** We agree with the assessment that our model’s performance is closely tied to the performance of the underlying flow model, and will add a discussion of this in our limitations section. But we also note that because our method is agnostic to the form of tracking used (as long as it is differentiable), it is also able to take advantage of future improvements in optical flow models. **Low Quality Optical Flow** Our goal was to produce the best self-supervised motion magnification method possible. As such we use one of the best off-the-shelf optical flow algorithms, RAFT, and did not conduct any experiments analyzing the consequences of using worse flow. We would guess that with lower quality flow the motion magnification capabilities of models trained with our loss will worsen. With poor enough flow, training may not even converge. Conversely, with better flow we expect our method to correspondingly get better. **Larger Amount of Data** We agree with you and believe that more data should help the performance and generalization of models trained with our loss. For example, in our paper we train on a decently sized dataset of frame pairs. However, the distribution of this data is not broad enough to cover all of our test videos and we therefore see that test time adaptation can further improve the quality of our magnification results. With diverse enough data we believe that test time adaptation would become less necessary, or provide a smaller benefit, as our model generalizes better. &nbsp; Thank you for your thoughtful questions, and please let us know if you have further questions or need further clarification. --- Rebuttal Comment 1.1: Title: Discussion follow-up Comment: Thank you for your reply. I suggest to include the experiments with different quality of optical flow. 
I think these are simple experiments to conduct, and they should therefore be addressable in the camera-ready. Hence, I maintain my recommendation to accept this paper. --- Reply to Comment 1.1.1: Comment: Thank you for your reply and suggestion. We will add these experiments. Please let us know if you have any other advice.
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful, thorough, and constructive reviews which have helped to improve our manuscript greatly. Please find individual responses to your questions and comments below. We look forward to the discussion period. Pdf: /pdf/864f8da40d647972b05dc27467996f994b9bfbbf.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper focuses on learning a pair-wise motion magnification model in a self-supervised manner. The authors employ recent optical flow models to estimate the flow fields between the original and the motion magnified image pairs. The UNet concatenates a sinusoidal encoded magnification factor with the original images and generates the magnified image. The learning process of the UNet is facilitated by a loss function that enforces consistency between the original and magnified flow fields, as well as the consistency between the backward warped images. To demonstrate the effectiveness of the proposed method, the authors curate a large-scale real-world training set. They conduct evaluations both quantitatively on a synthetic dataset and qualitatively on real-world data. The results demonstrate superior performance over previous supervised methods learned on synthetic data. Strengths: - The paper is well organized and written. Sec.1 introduces the problem effectively and motivates the design of the method. Sec.2 provides a brief but comprehensive review of the previous methods. In addition to the overall organization, sufficient details (including the code) are given for a better understanding of the method, such as the footnote on page 5. - The method itself is simple and effective: - A simple UNet is a compact solution that avoids complicated operations, e.g., explicit optical flow estimation and inpainting. - The magnification factor is concatenated with the input image pair after sinusoidal encoding, which enables regionally varying magnification. This is difficult for a single magnification factor as in [33]. - The self-supervised learning losses enable online adaptation to a specific sequence for better quality. - The evaluation is comprehensive and achieves significant improvement over previous methods. - The curated large-scale real-world dataset will encourage further research in this topic. 
Weaknesses: I do not see a significant weakness in the paper since it is simple and effective. The only missing piece I can think of is that, since the model itself is simple, the authors could conduct some deeper analysis of the learned UNet to understand its underlying mechanism. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - In lines 83-84, the authors claim that the method's capabilities are more similar to those of Lagrangian methods. However, the internal workings of the model remain unknown. It would be interesting if the authors could conduct some analysis, such as investigating whether some UNet layers implicitly contain motion cues. - The warping loss compares warped images. What would happen if we additionally, or only, compare two warped images with I_0? - In the supplementary video, is the magnification factor applied only to the ground in the jumping sequence? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitations have been addressed adequately in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
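The sinusoidal encoding of the magnification factor mentioned in the summary above can be sketched roughly as below. The dimension and frequency schedule are illustrative guesses in the spirit of transformer positional encodings; the paper's exact constants may differ.

```python
import numpy as np

def sinusoidal_encoding(alpha, dim=8, base=10000.0):
    """Encode a scalar magnification factor alpha as concatenated
    sin/cos features at geometrically spaced frequencies."""
    freqs = base ** (-2.0 * np.arange(dim // 2) / dim)
    angles = alpha * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

enc = sinusoidal_encoding(16.0)  # encode a 16x magnification factor
assert enc.shape == (8,)
```

In the model described above, such a vector would be tiled spatially and concatenated with the input frames as extra channels, which is what allows regionally varying magnification factors.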
Rebuttal 1: Rebuttal: Thank you so much for your thoughtful and positive review. We appreciate that you believe our method to be simple yet effective, and are glad that you found our paper well-organized and well-written. Below, we address some of your questions and suggestions. **Analysis of UNet** We agree with you that a deeper analysis of the UNet could bring better insight into how the model encodes motion information. Given that our model is able to successfully magnify motion, some representation of this must exist in the network. However, analyzing and interpreting the internal activations of neural networks is still an open research problem. We believe our paper should focus primarily on our motion magnification objective, but are hopeful that our work could be a useful case study for future feature visualization or mechanistic interpretability research. **Lagrangian Label** We term our method Lagrangian because our loss explicitly tracks pixels through a video and tries to magnify those tracks, not because of how the neural network behaves. This is as opposed to Eulerian approaches, which aim to magnify motion by representing and manipulating motion as spatially fixed variables over time. These terms derive from the Lagrangian and Eulerian frames of reference in the field of fluids. **Reformulation of the Loss** Given a generated image, which we hope to be motion magnified, our color loss warps this generated image such that it matches the target reference image $I_0$. We then take a photometric loss between the warped generated image and the reference image, with the goal of encouraging the network to generate images that have similar color to the reference image. To be consistent with prior work like DeepMag, our network only predicts a single magnified frame, but it should be possible to predict multiple frames. In this case we would compare two or more warped frames to the reference image. 
We do agree though that there is much room for extensions to our loss function for future work. Please let us know if this answered your question or if you would like anything else clarified. **Supplementary Video** The "jumping" clip appears twice in the supplementary video. In the first instance the magnification is applied to the entire clip, not just the ground. In the second instance we target the model to magnify the ground but not the legs. We remove magnification of the legs in the "jumping" clip because the motion of the legs is too large to be magnified reasonably at a magnification factor strong enough to see the ground shake. &nbsp; Again, thank you so much for your positive review, as well as your questions and feedback. We look forward to answering any more questions you may have. --- Rebuttal Comment 1.1: Comment: Dear Authors: Thank you for the response and all my concerns have been addressed. --- Reply to Comment 1.1.1: Comment: Thank you for your reply! Please do not hesitate to contact us if you have more questions.
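The warping step behind the color loss discussed in this thread can be sketched as a backward warp. The snippet below is a non-differentiable nearest-neighbor toy with an assumed (u, v) flow convention; a trainable pipeline would use differentiable bilinear sampling (e.g. `grid_sample`) instead.

```python
import numpy as np

def backward_warp(img, flow):
    """Backward-warp img by sampling it at (x + u, y + v).
    img: (H, W[, C]) array; flow: (H, W, 2) array in (u, v) order.
    Nearest-neighbor with border clipping, for brevity."""
    H, W = img.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    xs2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    ys2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return img[ys2, xs2]

img = np.arange(16.0).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0  # every pixel samples one column to its right
assert backward_warp(img, flow)[0, 0] == img[0, 1]
```

A photometric loss between `backward_warp(generated, flow)` and the reference frame then penalizes color drift in the generated, magnified frame.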
Unified Embedding: Battle-Tested Feature Representations for Web-Scale ML Systems
Accept (spotlight)
Summary: This paper introduces a novel approach known as Feature Multiplexing, which allows multiple features to share a single representation space in machine learning systems. This is significant for web-scale systems which handle hundreds of features with vocabularies of up to billions of tokens, where the standard embedding approaches introduce a vast number of parameters. The authors propose a new solution called Unified Embedding, which simplifies feature configuration, adapts to dynamic data distributions, and is compatible with modern hardware. The empirical results from multiple web-scale search, ads, and recommender systems show superior performance compared to highly competitive baselines. Strengths: 1) The paper addresses a crucial problem in large-scale machine learning systems related to efficient and effective learning of feature embeddings. The proposed framework, Feature Multiplexing, is innovative, allowing multiple features to share the same representation space. 2) The authors provide a well-written and clear explanation of the concepts and the proposed solution. The paper is well-structured, with a good balance of theory, experimentation, and discussion. 3) The problem this paper addresses is of substantial significance, considering the scale at which modern machine learning systems operate. The introduction of Unified Embedding could lead to substantial advancements in web-scale ML systems, serving billions of users globally. Weaknesses: 1) The paper lacks details regarding the computational benefits of the proposed technique, specifically in terms of infrastructure gains, parameter size, hardware usage, and training time. Providing such details would make the comparison to the baseline more comprehensive and persuasive. (particularly the large scale experiments explained at the end) 2) Some specific analysis and explanations are missing. For example, why the Criteo dataset behaves differently from Avazu and Movielens is not explained. 
A more in-depth exploration would strengthen the understanding of the behavior of the proposed technique across datasets. 3) The authors could have provided more insights into why online deployment results are providing gains. A detailed explanation could better support the claim of real-world applicability and insights into future users. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) Could the authors provide more information on the impact of Feature Multiplexing and Unified Embedding on the ML infrastructure, particularly in terms of computational costs, training time, and hardware usage? 2) Can the authors elaborate on the specific behaviors of the Criteo dataset compared to Avazu and Movielens datasets? 3) Could the authors discuss the limitations and potential trade-offs of the proposed method? How might it affect the ease of extending the model and conducting future R&D? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have not adequately addressed the limitations and potential trade-offs of their proposed technique. Future work may be constrained or impacted by these unaddressed issues. For instance, the authors have not discussed the ease (or lack thereof) of extending the model with new features or conducting new R&D with the proposed method. They also have not explored the potential maintenance costs and impacts on model health and observability, which can be crucial for deploying such systems in real-world applications. Further information on these aspects could greatly benefit the audience's understanding of the practical viability and potential challenges of implementing the proposed technique. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We address the weaknesses and questions below. **W1 and Q1:** - **Regarding training time:** Embedding table size rarely affects the model training time in models of practical interest. As long as there is enough memory to support the embedding tables (i.e., enough CPU RAM or total accelerator memory), the training time is mostly governed by the forward and backward passes on the rest of the (upstream) network. - In the following table, we report the average number of steps / second for several methods on Criteo for the 25 MB table size with CPU training (higher is better). This is representative of total training because the number of steps to convergence is fairly stable across methods (250K batches for Criteo, 200K for Avazu, and 50K for Movielens).

|Method|Standard|Multiplexed|
|--|--|--|
|Collisionless|32.9|NA|
|Hashing Trick|29.2|29.4|
|(Multiple) Hash Embedding|33.5|31.3|
|HashedNet|44.1|39.8|
|ROBE-Z Embedding|30.0|34.4|
|PQ Embedding|37.6|40.3|
|QR Embedding|37.4|31.3|

- Note that because our shared cluster has large variations in job load / demand, these numbers have high variance ($\sigma > 5$). However, these results still support the conclusion that multiplexing has a minimal effect on training time. We observed similar behavior in large-scale experiments (see caption of Table 2). - **Hardware:** We use proprietary ML accelerators with dedicated embedding support - see [TPUv4](https://arxiv.org/pdf/2304.01433.pdf) and [MTIA v1](https://ai.meta.com/blog/meta-training-inference-accelerator-AI-MTIA/) for examples of this kind of system (to preserve anonymity during review, we will disclose the full details after the review process). For our academic evaluations, end-to-end CPU training took approximately 3-4 hours for Criteo, 4-5 hours for Avazu, and 30 minutes for Movielens. **W2 and Q2:** This is a good question. 
Criteo differs from Avazu and Movielens in terms of vocabulary distribution - it has a heavier-tailed vocabulary than Avazu or Movielens. This worsens the errors introduced by hash collisions because it is more likely for a colliding token to overwrite the shared embedding (all collisions are between heavy hitters). This is likely the cause of the performance gap between collisionless embeddings (where hash collisions do not occur) and the other methods on Criteo. See the rebuttal PDF for a plot of the vocabulary distributions. **W3:** In short, online systems in industry are regularly memory-bound. We can almost always improve model quality with larger embedding tables, but this requires more CPU/GPU/accelerator memory and is not worth the resource cost/revenue trade-off. Improving model quality for the same memory budget is an immediate win. Our offline experiments (from Table 1 / Figure 3) show that multiplexed embeddings lift the Pareto frontier, allowing us to either have better performance at a fixed size or equal performance with a smaller size. The following evidence suggests that this tradeoff drives our online performance gains: - Increases in model capacity often lead to better online performance (see [“Scaling Law for Recommendation Models”](https://arxiv.org/pdf/2111.11294.pdf)). Our theory and offline experiments show that multiplexing increases the effective capacity of a model, allowing a smaller model to behave like a larger one. - We observe (lines 353-355) that multiplexing provides the greatest benefits for problems with large and/or dynamic vocabulary. These are exactly the applications known to require higher model capacity (see [“Learning to Embed Categorical Features without Embedding Tables for Recommendation”](https://arxiv.org/abs/2010.10784)). **Q3:** While we discuss some limitations in the text (inline, due to the page limit), we agree that a description of our engineering and modeling tradeoffs would be beneficial to include. 
In the revision, we plan to summarize these points in a clearly-marked limitations section and provide a full discussion in the appendix. Thank you for raising these valuable questions. **Modeling tradeoffs:** Unified embedding gives up some flexibility in setting the per-feature embedding dimension, and other (more complicated) multiplexed methods are better on the Pareto-frontier for offline experiments. In exchange, we get more per-feature embedding capacity, a much simpler hyperparameter configuration, hardware-friendly algorithm, and easy feature engineering. For example, to add new features to a model without multiplexing, we must specify and tune the table size and dimension for a new set of embedding tables. With multiplexing, we can simply add a new feature to an existing table at the cost of a few additional lookups. Another option is to use the conventional scheme for new features, then regularly consider a model update that migrates several new features into the unified embedding (typically resulting in performance gains). **Engineering tradeoffs:** During our experience deploying unified embedding in a dozen applications, we have not had problems with observability, monitoring, or model health / stability. Instead, we find that unified embeddings have better load balancing and alleviate hot-spot issues (our embedding rows are distributed over cores). The high-level takeaway is that it is easier to balance one large table across accelerators than many small tables of different shapes. **Model health:** All of the industry online experiments in Table 2 assume healthy models as a prerequisite. These models train online and adapt to constant variations in data distributions. The consistent improvements from unified embedding in these systems is strong evidence that our methods do not interfere with model health or add maintenance costs. Finally, thank you for taking the time to do a detailed review of our paper. 
If you feel that we have addressed your concerns, we hope that you will consider raising the score. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response. I think it is crucial and adds a lot of practical value and impact to the paper. I strongly recommend these comments be embedded into the final version. I am happy to raise my score as well. I recommend accept.
Summary: The authors present a method for multiplexing embeddings of various features in recommender and similar applications, i.e., sharing the feature embeddings in order to save space and improve performance, especially at lower memory budgets. They provide a detailed overview of the relevant prior work, and give a strong theoretical and empirical analysis of the proposed method. They show the benefits of the method on three public data sets, and also show how the method helped in a large-scale production setting. Strengths: - Very important problem being addressed. - Good theoretical discussion of the method. - Good results shown in the production-level setup. Weaknesses: - In some places the explanations could be improved quite a lot. - The results are very mixed in some cases. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please find the detailed comments below: - The data set details should be explained better. E.g., in Fig 2 the authors mention 26 Criteo features, and it is unclear which ones they are referring to. This adds to the confusion. - This also holds for the other data sets that are considered. Adding more details there would make the experimental section much more readable. - The results in Table 1 show that the method only outperforms the other baselines on low-budget Criteo, while everywhere else it is worse. This is quite a weak result, and it is unclear why someone would use their method with such mixed performance. - Multiplexed versions of the other baselines should be better explained, at least briefly. The methods are just given, without much discussion. *************** UPDATE AFTER REBUTTAL ************** I would like to thank the authors for their responses. They do address my concerns, and I am happy to increase the score. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not discuss the limitations, and it would be good to add a short paragraph. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We address the questions and weaknesses below. In particular, we think there is a misunderstanding about our experiment results (which we would like to clarify). **Regarding datasets:** This is a good point - we agree that it would be valuable to have more detailed explanations here. We will elaborate in the revision. To clarify the confusion about Figure 2, Criteo is an online advertisement dataset of about 45 million examples (7 days of data), where each example corresponds to a user interaction (click/no-click decision) and contains an additional 39 features. 26 of these features are categorical and 13 of them are real-valued continuous variables. Figure 2 refers to the 26 categorical features in Criteo (the continuous variables are typically fed alongside the embedded categorical features, but do not use an embedding table - see the [DLRM paper](https://arxiv.org/abs/1906.00091) and the related literature for details). For the other datasets: Avazu is a collection of 11 days of advertisement click data (approximately 36 million examples), where each example contains 23 categorical features and no continuous features. Movielens is a traditional user-item recommendation problem, very similar to the well-known Netflix Prize task. These are all highly popular datasets that are widely used to evaluate recommendation algorithms. We plan to refer the reader to "BarsCTR: Open Benchmarking for Click-Through Rate Prediction," ([CIKM 2021](https://arxiv.org/abs/2009.05794)) for a very detailed description of Avazu and Criteo and to "The Movielens Datasets: History and Context" (ACM Transactions on Interactive Intelligent Systems, 2015) for a thorough description of Movielens. **Regarding experiments:** We think there is a misunderstanding here. 
Table 1 shows that multiplexed embeddings (our proposal) outperform all of the baselines at all memory budgets on all datasets (except for collisionless embeddings, which are infeasible in practice and included only as a headroom reference point - see the response to Reviewer Azh8 for a detailed discussion of this matter). Note that all of the methods below the horizontal bar are newly proposed, while existing baselines are listed above the bar. Figure 3 shows the top baseline (blue) against the top multiplexed method (red) and is another view of the same data behind Table 1. In practice, we use Multisize Unified Embeddings (see line 306), which are very similar to Multiplex PQ - one of the top performers from Table 1. We would also like to highlight that in large-scale industrial applications, the extreme low-budget case is standard practice because vocabulary sizes regularly exceed 10 billion, making collisionless schemes infeasible. Our performance improvement in this memory-constrained setting is therefore a major advantage. To summarize, all of the methods below the bar in Table 1 are ours (and are new), and we show very strong results in real-world settings where the vocabulary is on the order of millions to billions (Table 2). To remove this confusion, we plan to revise the names to be clearer and more descriptive: - What was previously called “Multisize Unified Embeddings” (in Section 5.2) will be referred to simply as “Unified Embedding.” - What was previously called “Unified Embedding” (in Table 1) will be referred to as “Multiplexed Hashing Trick.” See the rebuttal PDF for a draft revision of Table 1. **Regarding other multiplexed methods:** We describe the last 20 years of SOTA embedding methods in the appendix, but we agree that it would be valuable to have a detailed description of how to construct a multiplexed method given an existing embedding strategy. 
We will add the following description: “To construct a multiplexed version of an existing embedding scheme, we use a single instance of the scheme to index all of the features. For example, we can look up all feature values in a single multihash table, rather than using a separately-tuned multihash table for each feature. Practically, this is equivalent to salting the vocabulary of each feature with the feature ID and merging the salted vocabularies into a single, massive categorical vocabulary (which is then embedded as usual).” **Regarding limitations:** Due to space constraints, we discussed limitations throughout the text. Limitations of the theory are addressed in Section 4.3, modeling limitations (due to embedding width constraints, etc) in Section 3 (around line 150), and the constraints of our experimental setup in Section 5. However, after reading the reviews, it seems that this was very easy to miss. We plan to add the following paragraph to the revision: *Limitations:* While we expect deeper and more complicated network architectures to exhibit similar behavior, our theoretical analysis is limited to single-layer neural networks. The Pareto frontier from Section 5.1 is based on models that lag behind the current SOTA (we use 1-2 DCN layers + 1-2 DNN layers), though we do provide evidence that feature multiplexing is also effective for SOTA models in Table 2. Unified embeddings impose limitations on the overall model architecture, and all embedding dimensions must share a common multiple. Because of the latency involved with looking up more than 6 components, we are effectively limited to 5-6 discrete choices of embedding width. Finally, unified embeddings sacrifice the ability to aggressively tune hyperparameters on a per-feature basis in exchange for flexibility, ease-of-use, and simplified feature engineering. Finally, thank you for taking the time to review our paper. 
If you feel that we have addressed your concerns, we hope that you will consider raising the score. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses. They do address my concerns, and I have increased my score.
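The "salting" construction quoted in the rebuttal above can be sketched as a single shared-table lookup. The table size, hash function, and feature names below are hypothetical; the point is only that salting the token with its feature ID turns many per-feature vocabularies into one merged vocabulary over one table.

```python
import hashlib

TABLE_ROWS = 1_000_003  # one shared table; row count chosen arbitrarily here

def unified_lookup(feature_id: str, token: str) -> int:
    """Map a (feature, token) pair to a row of the single shared
    embedding table by salting the token with its feature ID
    before hashing, as the rebuttal describes."""
    salted = f"{feature_id}|{token}".encode()
    return int(hashlib.md5(salted).hexdigest(), 16) % TABLE_ROWS

# The same raw token under two different features is salted into two
# distinct keys, so features multiplex one table without sharing keys.
row = unified_lookup("user_country", "US")
assert 0 <= row < TABLE_ROWS
```

Adding a new feature then only requires choosing a new feature ID, rather than sizing and tuning a new table, which matches the feature-engineering benefit claimed in the rebuttal.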
Summary: This paper proposes that in web-scale machine learning systems, features from different fields can share the embedding matrix without significantly affecting the model's performance. The insight is that different feature fields are processed by different model parameters. Therefore, compared to intra-feature collisions, in the case of inter-feature collisions the projections of different features can tend to be orthogonal, which is beneficial to the final model performance. This point is confirmed by an empirical study based on logistic regression. The experimental results show that the multiplexed embedding scheme is more effective than the existing schemes that only consider inter-feature embedding sharing. ------ AFTER REBUTTAL: I've read and appreciated the authors' rebuttal. I understand that collisionless embeddings are considered the upper bound. However, it would be helpful if the advantages of feature multiplexing could be mentioned and demonstrated in the discussion and experiments. Thus I would like to keep the score. Strengths: - The overall organization and writing of the paper are excellent. - The proposed multiplexed feature embedding scheme is novel and its feasibility is verified both theoretically and experimentally. - The paper conducts experiments on multiple public datasets, and the overall results are promising. Weaknesses: - In the experiments, although the performance is better compared to the inter-feature embedding sharing scheme, it is worse than the Collisionless scheme on Criteo and very close on Avazu, but no relevant discussion is provided. - Compared to the Collisionless scheme, the advantages of using feature multiplexing do not seem to be fully discussed, nor are they reflected in the experimental results. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please share your views on the aforementioned weaknesses. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We address the questions raised in review below. **Regarding experiment results:** There seems to be a misunderstanding about collisionless embeddings. This method is not usually a feasible baseline, and it is included in our benchmark evaluation only as a headroom reference point. In industrial recommendation systems, where feature vocabularies can be massive and change dynamically due to churn, the hashing trick is the standard approach (e.g. Google [1], Facebook [2], Twitter [3]). Because of memory limits and dynamic vocabularies that are not fully known up-front, collisionless tables are often impossible to deploy. For example, our pCTR application in Table 2 would require > 1 TB of memory for a 32-dimensional collisionless table. Facebook’s DLRM model [4] requires a 40 TB table even after hashing. While Facebook does not disclose their total vocabulary size, a collisionless table would likely require space that is 1-2 orders of magnitude larger (> 100 TB in their case). We conducted our academic experiments on the 7-day Criteo dataset, where the collisionless table only requires 25 MB. When more realistic time periods are considered, the size of the vocabulary grows by several orders of magnitude (e.g., the collisionless table for the 30-day Criteo dataset is larger than 100 GB [5]). Hence, the hashing-based methods are the only realistic baselines in our evaluation. Collisionless embedding performance effectively represents an upper bound on what is practically attainable. We include them in our (small-scale) experiments as an optimistic reference point, to investigate the Pareto frontier and estimate the ideal model capacity when we are not limited by hash collisions. The experiments show that multiplexing achieves close to full capacity on Movielens / Avazu and provides the best performance on Criteo versus all baselines. **Criteo vs. 
Avazu / Movielens:** The performance gap on Criteo is likely a result of the vocabulary distribution. Criteo has a heavier-tailed vocabulary than Avazu or Movielens. This worsens the errors introduced by hash collisions because it is more likely for a colliding token to overwrite the shared embedding (all collisions are between heavy hitters). We have included a plot of the vocabulary distribution in the rebuttal PDF, which we hope will shed some light on this point. **Regarding our advantages:** We cover the algorithm-level advantages of feature multiplexing in Section 3 and the practical advantages in Section 5.2. Compared to collisionless embedding, these approaches are fast, implementable, and orders of magnitude cheaper to train and serve. We will add this discussion to the revision. Once again, thanks for reviewing our work. If we have addressed your questions and the weaknesses identified in the review, we hope that you will consider raising the score. **References** - [1] Learning to Embed Categorical Features without Embedding Tables for Recommendation (https://arxiv.org/pdf/2010.10784.pdf) - [2] Compositional Embeddings Using Complementary Partitions for Memory-Efficient Recommendation Systems (https://arxiv.org/pdf/1909.02107.pdf) - [3] "Model Size Reduction Using Frequency Based Double Hashing for Recommender Systems" [RecSys 2020](https://arxiv.org/pdf/2007.14523.pdf) - [4] "Software-Hardware Co-design for Fast and Scalable Training of Deep Learning Recommendation Models" [ISCA'22](https://arxiv.org/abs/2104.05158) - [5] "The trade-offs of model size in large recommendation models: 100GB to 10MB Criteo-tb DLRM model" [NeurIPS 22](https://arxiv.org/abs/2207.10731)
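As a rough, hedged illustration of the memory argument above (the bucket count and the 10B-token vocabulary below are illustrative placeholders, not figures from the paper beyond the stated 32-dim example), the hashing trick that makes table size independent of vocabulary size can be sketched as:

```python
import hashlib

# Hashed embedding lookup (the "hashing trick"): map an unbounded token
# vocabulary into a fixed-size table, accepting collisions in exchange
# for bounded memory. Sizes below are illustrative placeholders.
NUM_BUCKETS = 1_000_000   # fixed table rows, independent of vocabulary size
EMB_DIM = 32              # embedding width from the pCTR example

def bucket(token: str) -> int:
    """Deterministically hash a token string into a table row."""
    h = hashlib.md5(token.encode("utf-8")).hexdigest()
    return int(h, 16) % NUM_BUCKETS

def table_bytes(rows: int, dim: int, bytes_per_float: int = 4) -> int:
    """Memory footprint of a float32 embedding table in bytes."""
    return rows * dim * bytes_per_float

# A collisionless table must hold every distinct token; with a dynamic
# multi-billion-token vocabulary this is what pushes memory past 1 TB,
# while the hashed table stays at a fixed, small size.
collisionless = table_bytes(rows=10_000_000_000, dim=EMB_DIM)  # hypothetical 10B tokens
hashed = table_bytes(rows=NUM_BUCKETS, dim=EMB_DIM)

print(collisionless / 1e12, "TB vs", hashed / 1e6, "MB")  # → 1.28 TB vs 128.0 MB
```

The trade-off is exactly the one debated in the review: bounded memory at the cost of collisions between tokens that land in the same bucket.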
Summary: The paper introduces a novel "Feature Multiplexing" framework which uses a shared representation space (embedding table) for multiple sparse features. This approach aims to find a balance between model size and accuracy for industrial-scale recommender systems. Besides, the authors provide a theoretical analysis, highlighting that inter-feature collisions can be alleviated if features are projected using orthogonal weight vectors. Further gradient analysis reveals that these collisions are not uniformly detrimental; the adverse effects can be mitigated when features are processed by distinct parameters in a single-layer neural network. Strengths: **Pros**: - The paper presents a novel "Feature Multiplexing" framework, offering a straightforward and effective method to optimize the trade-off between model size and accuracy. - This framework promises considerable practical advantages, especially in the context of large-scale recommendation systems. - A theoretical analysis is provided, rigorously addressing the advantages of the Feature Multiplexing framework. It explains how inter-feature collisions can be reduced when the model uses orthogonal weight vectors to project distinct features. This analysis provides good theoretical grounding for a fairly practical design. Weaknesses: **Cons**: - Certain claims in the paper, such as "0.02% increase in test AUC is considered significant in Avazu and Criteo" and "+0.1% is considered significant in online systems," raise eyebrows. These claims appear to be based on subjective opinions rather than objective facts. - The authors have not provided source code for their work. This makes it difficult to reproduce and verify the claims made in the paper, which is a foundational principle of the NeurIPS community. If this work is industry-driven and the code cannot be released easily, perhaps the authors should consider venues more suited to applied data mining. 
- The content and focus of the paper lean heavily towards data mining and address real-world, industrial-scale problems. As such, it might be better suited for venues like KDD, WWW, or SIGIR. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not discuss the limitations at all. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We address the weaknesses below. **Regarding significance of results:** It is completely fair to be skeptical about the significance of a +0.1% improvement. However, seemingly small improvements can translate to huge numbers in large online systems. For example, Anil et al. state that “accuracy improvements above 0.1% are considered significant” in their [paper](https://arxiv.org/abs/2209.05310) “On the Factory Floor: ML Engineering for Industrial-Scale Ads Recommendation Models'' when discussing Google Ads models. This is an industry with $200B+ annual revenue. In “BarsCTR: Open Benchmarking for Click-Through Rate Prediction” ([CIKM 2021](https://arxiv.org/abs/2009.05794)), Zhu et al. summarize the literature: “existing studies from Google [8, 47] and Microsoft [28] reveal that an absolute improvement of 0.1% in logloss (or AUC) is considered as practically significant in real CTR prediction problems.” All of the results in Table 2 are statistically significant with $p < 0.05$, and 0.1% is very practically significant in industrial systems. This is also true in academic evaluations: the differences between logistic regression and SOTA on Avazu and Criteo are +1.7% and +2.1% AUC, respectively (see BarsCTR). The [DCNv2 paper](https://arxiv.org/abs/2008.13535) states, “For Criteo, a 0.001-level improvement is considered significant (see [13, 46, 50]).” The [AutoInt paper](https://arxiv.org/abs/1810.11921) claims that “a slightly higher AUC or lower LogLoss at 0.001-level is regarded significant for CTR prediction task.” Our +0.02% AUC number from Table 1 is based on the 95% confidence interval surrounding the mean AUCs, with $\sigma = 0.00022$ (approximate population standard deviation, estimated via ~20 runs). However, we really should have made this much clearer, so thank you for making this point in your review. 
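As a hedged sketch (the per-run AUC values below are synthetic placeholders, not numbers from the paper), the 95% confidence interval around a mean AUC and an unpaired Welch t-test over ~20 runs can be computed with the standard library as:

```python
import math
import statistics

def ci95(values):
    """95% confidence interval for the mean, normal approximation."""
    m = statistics.mean(values)
    se = statistics.stdev(values) / math.sqrt(len(values))
    return m - 1.96 * se, m + 1.96 * se

def welch_t(a, b):
    """Unpaired (Welch) t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        va / len(a) + vb / len(b)
    )

# Synthetic, deterministic stand-ins for per-run test AUCs (~20 runs each);
# "ours" is offset by +0.0002 to mimic a small but consistent improvement.
baseline = [0.7800 + 0.0002 * math.sin(i) for i in range(20)]
ours = [0.7802 + 0.0002 * math.cos(i) for i in range(20)]

lo, hi = ci95(ours)
t = welch_t(ours, baseline)
```

With a small run-to-run standard deviation (σ ≈ 0.0002, as reported above), even a +0.02% AUC difference yields a large t statistic, which is the substance of the significance claim.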
In the revision, we have performed an unpaired t-test between the values in Table 1, finding most differences to be statistically significant. You can find the revised table in the rebuttal PDF. **Regarding code:** We are in the process of releasing an open source implementation for the academic experiments to reproduce Table 1. Due to our internal publication review process, it wasn’t possible to release this by the paper submission deadline, but it will be present in the revision. **Regarding venue:** Embedding learning (or representation learning) is the heart of deep learning, and is a problem that occurs in NLP, vision, and bioinformatics as well as in data mining. Like many recent papers (e.g. ROBE-Z at NeurIPS 2022), we focus on SAR (search, ads, and recommendation) systems because they are the most compelling application of categorical representation learning. However, feature multiplexing is a general technique with many potential applications. Applications of embedding learning include scaling up the vocabulary size in transformers and improving the compute-memorization tradeoff of embedding retrieval-augmented transformers and LLMs. Finally, we note that the top baseline methods for this paper recently appeared at NeurIPS (Multihash Hash Embeddings in 2017, ROBE-Z in 2022) and similar venues (HashedNets and Hashing Trick at ICML). **Regarding limitations:** Due to space constraints, we discussed limitations throughout the text. Limitations of the theory are addressed in Section 4.3, modeling limitations (due to embedding width constraints, etc) in Section 3 (around line 150), and the constraints of our experimental setup in Section 5. However, after reading the reviews, it seems that this was easy to miss. We agree that it would be clearer to consolidate this in a separate section. 
We’re happy to add the following paragraph to the revision: *Limitations:* While we expect deeper and more complicated network architectures to exhibit similar behavior, our theoretical analysis is limited to single-layer neural networks. The Pareto frontier from Section 5.1 is based on models that lag behind the current SOTA (we use 1-2 DCN layers + 1-2 DNN layers), though we do provide evidence that feature multiplexing is also effective for SOTA models in Table 2. Unified embeddings impose limitations on the overall model architecture, and all embedding dimensions must share a common multiple. Because of the latency involved with looking up more than 6 components, we are effectively limited to 5-6 discrete choices of embedding width. Finally, unified embeddings sacrifice the ability to aggressively tune hyperparameters on a per-feature basis in exchange for flexibility, ease-of-use, and simplified feature engineering. Once again, thank you for taking the time to review our paper. If we have addressed the questions and weaknesses from your review, we hope that you will consider raising the score.
Rebuttal 1: Rebuttal: We'd like to thank all of the reviewers for their efforts. We have included a rebuttal PDF that contains: - Vocabulary distributions for Criteo, Avazu, and Movielens. This plot helps to explain the differences in behavior for embedding algorithms on Criteo vs. Avazu and Movielens. - A revised version of Table 1 that we hope will clear up some common confusions about our results. We would like to highlight that multiplexed embedding algorithms (our proposal) show improvements across the full memory-accuracy tradeoff for each of our three datasets. The revision of Table 1 also includes results for t-tests at the $p < 0.01$ and $p < 0.05$ significance levels, which demonstrate statistically significant differences between multiplexed vs. non-multiplexed embeddings. Please see the individual rebuttals for responses to specific reviewer questions. Pdf: /pdf/efe75c80882126247c4b216f0682fcd0b18c70ec.pdf
NeurIPS_2023_submissions_huggingface
2023
Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift
Accept (poster)
Summary: This paper proposes an augmentation technique to handle covariate shift during graph classification by creating new "environmental factors" while simultaneously preserving "stable" factors. Since unseen environmental factors occur during covariate shift, the authors propose using an adversarial augmentor to find augmentations that increase the "GCS." However, since unconstrained adversarial augmentation could harm the "stable" factors, the method includes a second network that learns a mask over the stable nodes/edges. The learned mask is used both to restrict the adversarial augmentation to environmental factors and in the loss. The method is shown to do well on a variety of benchmarks. **POST REBUTTAL:** I have read the other reviews and the authors' rebuttals. I thank the authors for their thorough responses! The explanations and additional experiments (non-random augs., graph transformers) are helpful and strengthen the paper. I encourage the authors to add these discussions and experiments to their final paper, as well as cite the suggested related work on task-relevant invariances. My concerns have been addressed and I have raised the score! Strengths: - Motivation: The method is well-motivated: the environmental feature discrepancy and stable feature consistency objectives for promoting performance under covariate shift make sense. - Strong experimental results. The proposed method is evaluated against many baselines on several different benchmarks. The evaluation of augmentation diversity and discrepancy is also useful. Weaknesses: - Limited Novelty: Individual pieces, as well as the motivation for the method, are not particularly novel. These concepts have been discussed in the context of designing augmentations in general and also in the context of creating augmentations for SSL (invariance/recoverability, style/content). Adversarial augmentation has been explored by other methods, and masking for explanation too. 
To this end, I don't think that the theoretical discussion adds too much. - Complexity: Including two networks, as well as an adversarial objective, is expensive and potentially more difficult to train. Moreover, the masking operation can be expensive, since the mask must be learned over all the nodes and edges. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The discussion of augmentation diversity in Table 3 is interesting. However, I was wondering if it would be possible to also compare to an augmentation method that is not random? Curious to see how well something like FLAG compares. - Quick (not important) question: how were hyper-parameters selected during tuning? - Quick question: can you add some run-times/model sizes to the appendix for later drafts? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See above. Negative societal impacts, etc are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable time and comments! We provide detailed responses below, and some necessary results are in our **one-page PDF** in the Global Response above. We sincerely hope that addressing your concerns will help change your rating of our work! ### Q1. Limited Novelty & Motivation are not novel & Adversarial augmentation has been explored by other methods. We appreciate your thoughts but respectfully argue that our method and motivation are novel. We list the following aspects. - **Reviewer Feedback.** The novelty and motivation have been praised by the three other reviewers, such as ***Reviewer ExMB***'s comment: "The adversarial augmentation method design is novel with good intuitions and it provides new insights into the important OOD area of graph learning."; ***Reviewer bjaC***: "The technique proposed by the authors is interesting, and the use of two components on graph data, is new."; and ***Reviewer reTY***: "I believe the extension and its implementation to this problem is sufficiently novel." - **Our Novelty & Motivation.** Covariate shift on graphs remains largely under-explored (comments from ***Reviewer ExMB***). In this paper, we provide a clear formulation for this new problem and design a new method to solve it with solid theoretical analyses. Further, we propose a new metric to measure graph covariate shift in diverse datasets (see Table 2 in our paper). Moreover, using the invariance principle with graph augmentation to handle graph covariate shift is new and unique. - **Comparisons with Similar Methods (refer to our one-page PDF).** We understand your concern that our augmentation is not novel. However, most adversarial augmentation strategies [1-5] are performed on Euclidean data (e.g., images). Due to the non-Euclidean nature of graph data, it is difficult to directly transfer these strategies to graphs. 
Therefore, it is non-trivial to design a new method in the graph domain (comments from ***Reviewer bjaC***). There are few studies applying adversarial augmentation or masking to graph data. ***In our one-page PDF (Table 3)***, we list several graph augmentation methods (ADGCL [6], RGCL [7], EERM, FLAG, GREA) that are most similar to ours, and highlight the main differences. ### Q2. Table 3 (augmentation diversity) is interesting & Compare non-random augmentation methods (e.g., FLAG). We compare with non-random methods such as FLAG, G-Mixup, ADGCL, and RGCL. Since ADGCL and RGCL are designed for SSL settings, we replace our augmentation module in AIA with theirs. From the results ***in our one-page PDF (Table 4)***, FLAG mainly focuses on node features, which limits its diversity, and it does not distinguish between stable and environmental features. G-Mixup, despite having high environmental diversity, struggles to maintain stable feature consistency. Others like ADGCL and RGCL also fall short of our performance. ### Q3. How to select hyper-parameters during tuning? We provide detailed hyper-parameters in the Appendix (Table 2). They are tuned within the following ranges: $\alpha, \beta \in \\{0.01, 0.005, 0.001\\}$; $\lambda_s \in \\{0.1, ..., 0.9\\}$; $\gamma \in \\{0.01, 0.1, 0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 5.0\\}$. The best parameters are chosen based on accuracy on the validation set. For basic settings like batch size and optimizer, we keep them consistent with the baseline models. ### Q4. Complexity & The adversarial objective and masking operation are expensive. & Can you add some run-times/model sizes to the appendix for later drafts? In Appendix F, we have already discussed our time complexity and model size. Here we discuss more details about complexity, model size, and running time. We would also be happy to include these details in our final version if the paper is accepted. 
- **Adversarial objective complexity.** In our training steps, a single step of updating the parameters of the adversarial generator achieves good performance, instead of using a multi-step approach similar to PGD. So our complexity is acceptable. - **Masking operation complexity.** Rather than computing scores between all pairs of nodes (i.e., $O(n^2)$), we predict a score for each edge, so the time complexity is $O(m)$. For node features, the time complexity is $O(n)$, so the total complexity of the masking operation is $O(n+m)$. Furthermore, we focus on graph classification. Among the OGB datasets, the largest has an average of only 243 nodes and 2261 edges per graph. Hence, our approach is well suited to handle them and will not incur significant additional time complexity. - **Running time and model size.** ***In our one-page PDF (Table 5)***, we report the running time and model size of ERM, DIR, CAL, and our AIA on an NVIDIA 3090. Our running time is about 1.5-2 times that of the base model. With two additional small networks, our model size is roughly 1.5 times the size of the original model. Our method is comparable to the current SOTA methods DIR and CAL in terms of running time and model size. Hence, we believe we achieve a better performance-complexity trade-off considering that ours achieves significant accuracy gains. In practical applications, we believe that these additional complexities are acceptable. Of course, **we accept your suggestion and will put these discussions and results in our paper**. And we will also find ways to reduce our complexity in future work. 
[1] Adversarial AutoAugment, ICLR 2020 [2] Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness, NeurIPS 2020 [3] AugMax: Adversarial Composition of Random Augmentations for Robust Training, NeurIPS 2021 [4] Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning, ICLR 2023 [5] Harnessing OOD Examples via Augmenting Content and Style, ICLR 2023 [6] Adversarial Graph Augmentation to Improve Graph Contrastive Learning, NeurIPS 2021 [7] Let Invariant Rationale Discovery Inspire Graph Contrastive Learning, ICML 2022
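The $O(n+m)$ masking cost described in the complexity response can be sketched schematically; the linear scorer and its weights here are hypothetical stand-ins for the learned mask generator, shown only to make the "one score per edge, not per node pair" point concrete:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def edge_mask_scores(edges, node_feats, w):
    """Predict a soft keep/drop score per edge from its endpoint features.

    One score per edge (O(m) total work), rather than scoring all node
    pairs (O(n^2)). `w` stands in for the learned generator's weights.
    """
    scores = []
    for u, v in edges:
        # Concatenate endpoint features and apply a linear scorer.
        z = sum(wi * xi for wi, xi in zip(w, node_feats[u] + node_feats[v]))
        scores.append(sigmoid(z))
    return scores

# Toy 3-node graph with 2-dim node features (all values hypothetical).
node_feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
edges = [(0, 1), (1, 2), (0, 2)]
w = [0.5, -0.25, 0.3, 0.1]  # hypothetical weights, dim = 2 * feature_dim

mask = edge_mask_scores(edges, node_feats, w)  # one soft score per edge
```

Node-feature masks would follow the same pattern with one score per node, giving the stated $O(n+m)$ total.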
Summary: The paper studies the covariate shift problem in graph classification tasks, as opposed to the more commonly studied correlation shift. The authors propose a novel data augmentation strategy, Adversarial Invariant Augmentation (AIA), guided by two principles: environmental feature discrepancy and stable feature consistency. The method leverages an adversarial augmenter to adversarially generate masks. A stable feature generator is also utilized to promote stable feature consistency. The proposed approach equips the graph classifier with an enhanced ability to identify stable features in new environments and effectively mitigates the covariate shift issue. **Post rebuttal**: I'm content with the rebuttal of the authors and will keep my rating. Strengths: - This paper addresses an under-explored perspective on graph OOD, namely covariate shift. It provides new insights into the important OOD area of graph learning. - This paper designs the AIA methodology corresponding to the two proposed principles. The adversarial augmentation method design is novel with good intuitions. - The experimental results are extensive, covering a list of baseline methods and datasets with diverse properties. AIA outperforms specialized graph generalization and augmentation algorithms. - The authors provide an in-depth analysis discussing the experiments with respect to the principles, which is persuasive. - The ablation study demonstrates the effectiveness of the designed components. - The structure of the paper is clear. The writing is easy to follow. Weaknesses: - Adversarial training is notoriously inefficient for neural network training, so leveraging adversarial training to solve the graph OOD problem may affect the training/convergence speed of GNNs. It would be better if the authors could provide a discussion of this overhead. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - What is the GNN backbone used in AIA for the experiments? 
- How will the choice of GNN backbones affect the final outcome? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The paper addresses the limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We gratefully thank you for the positive feedback and constructive comments! To address your concerns, we provide point-by-point responses, and some necessary results are in our **one-page PDF** in the Global Response above. ### Q1. Adversarial training is inefficient. In Appendix F (in the Supplementary Material), we have already discussed our optimization complexity and model size. In our training steps, a single step of updating the parameters of the adversarial generator achieves good performance, instead of using a multi-step approach similar to PGD. So our complexity is acceptable. ***In our one-page PDF (Table 5)***, we report the running time and model size of ERM, DIR [1], CAL [2], and our AIA on an NVIDIA 3090 (24GB GPU). Our running time is about 1.5-2 times that of the base model. With two additional small networks, our model size is roughly 1.5 times the size of the original model. Our method is comparable to the current SOTA methods DIR and CAL in terms of running time and model size. Hence, we believe we achieve a better performance-complexity trade-off considering that ours achieves significant performance improvements. In practical applications, we believe that these additional complexities are acceptable. Of course, we accept your suggestion and will put these discussions and results in our final version. And we will also find ways to reduce our complexity in our future work. ### Q2. What is the GNN backbone used in AIA for the experiments? For a fair comparison, we uniformly choose GIN [3] as the backbone for all algorithms. ### Q3. How will the choice of GNN backbones affect the final outcome? ***In our one-page PDF (Table 2)***, we selected three different backbone models (GCN [4], GCNII [5] and GAT [6]) for experiments to answer your question. In addition, we have also conducted experiments based on diverse graph transformer backbones ***in our one-page PDF (Table 1)***. 
Note that our observations and conclusions remain the same with the new results. If our paper is accepted, we promise to add these new results to the final version of the paper. [1] Discovering Invariant Rationales for Graph Neural Networks, ICLR 2022 [2] Causal Attention for Interpretable and Generalizable Graph Classification, KDD 2022 [3] How Powerful are Graph Neural Networks?, ICLR 2019 [4] Semi-Supervised Classification with Graph Convolutional Networks, ICLR 2017 [5] Simple and Deep Graph Convolutional Networks, ICML 2020 [6] Graph Attention Networks, ICLR 2018
Summary: The paper proposes a data augmentation technique for graph datasets. The model consists of two main components: an adversarial augmenter and a stable feature generator. The adversarial augmenter tries to generate new samples by adversarially generating dropping masks for the nodes and edges, within some augmentation cost. The stable feature generator, on the other hand, tries to identify subsets of graphs that are preserved among all the samples. The authors argue that this construction is more suitable for covariate shift settings compared to previous graph augmentation methods like DropEdge. Finally, the authors demonstrate the effectiveness of the technique in the experiments. Strengths: - Data augmentation on graphs is an important area that needs more innovation, as many prominent techniques for data augmentation in other domains, like vision, do not apply to graph data. - The paper provides a detailed explanation of the covariate shift problem in the graph setting. - The technique proposed by the authors is interesting. The use of two components, the stable feature generator and the adversarial augmenter, on graph data is new. - The technique trains the GNN, stable feature generator, and adversarial augmenter together in an alternating way. - The results demonstrate that the proposed technique outperforms baselines on real-world covariate shift datasets. Weaknesses: - First, I would like to clarify the authors' claim that "covariate shift is less explored type of OOD" in the introduction. Contrary to this claim, covariate shift (sometimes also called sample selection bias) is a well-studied problem with a tremendous amount of previous and current research work. Efforts to develop algorithms to correct the shift/bias have been made by many previous works. The most notable family of techniques is the importance weighting technique. I suggest the authors dive deeper into the literature on covariate shift/sample selection bias. 
- There are some unclear descriptions of the methods that I would like the authors to clarify. In particular, it's not clear to me how the stable feature generator can achieve the desired result of finding features (or subgraphs) that persist in most graphs in the dataset. I understand its mask construction tries to shelter some of the components in the graph from being perturbed by the augmenter. However, from the loss function, it is not clear how this construction ends up generating masks for common patterns in the dataset, rather than just creating a unique mask for each graph sample. - The connection of the paper to covariate shift is also a bit misleading. In the standard covariate shift/sample selection bias setting, there is no need for stable features to exist in the dataset. Covariate shift only requires the sample distribution P(x) between training and testing to be different, while the label conditional distribution P(y|x) remains the same (Shimodaira, 2000; Sugiyama et al., 2007). The testing sample distribution may or may not share some common characteristics with the training distribution. - The paper assumes the existence of stable features and actively finds them in the training dataset. In the standard covariate shift setting, even though some characteristics commonly occur in the training data, there is no guarantee that they also occur in the test data. - In short, the authors provide an interesting technique for data augmentation on graph data and demonstrate its effectiveness. However, attributing it to the covariate shift setting is misleading. Ref: - H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000. - Sugiyama, M., Krauledat, M., & Müller, K. R. (2007). Covariate shift adaptation by importance weighted cross validation. Journal of Machine Learning Research, 8(5). 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please answer my concerns in the previous section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No concern. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and comments! We have responded to your concerns as follows. We sincerely hope that addressing these concerns will help change your rating of our work! ### Q1. Covariate shift is a well-studied problem. - **We are talking about graph covariate shift.** We agree with you that covariate shift is well-studied in general settings [1-4]. But our paper looks at a different and new problem called "graph covariate shift." This is a new area in graph learning that has not been studied much, as Reviewer ExMB said, "*It is an important OOD area of graph learning*". When we say "covariate shift is a less explored type of OOD," we mean this specific graph learning context. We are very sorry if our wording seemed to refer to the general setting, and we will thoroughly fix all unclear parts in our paper to make this clear. - **Graph OOD is not the same as general OOD.** Traditional OOD problems usually deal with tasks like computer vision, where the inputs are variables or image features. But graphs are more complex, which means that graph OOD problems also have to deal with structural distribution shifts, not just shifts in the features. Therefore, it is non-trivial to study covariate shift in graph learning tasks (comments from Reviewer ExMB). Thanks for your suggestion! If our paper is accepted, we promise to add more discussion of covariate shift, such as [1-4]. ### Q2. The connection to covariate shift is misleading. & The paper assumes the "common patterns" should exist in both training and test distributions. ### Q3. How can the stable feature generator find stable features in the dataset? Your concerns (Q2 and Q3) might come from a misunderstanding about "stable features" and the invariance assumption. We first begin with the following two clarifications: - **Clarification 1: The Invariance Assumption.** OOD generalization is challenging and even impossible without any assumptions [5]. 
Hence, the invariance assumption has been introduced by [5] and applied by follow-up works [6-11] as a cornerstone assumption about data generation, which enables reasonable problem-solving and analysis for OOD problems. Specifically, a stable feature (or invariant feature) causally determines the label, and their relationship is invariant across distributions. Let's use the example of a cow in different backgrounds (grassy or sandy). The cow object is the "stable feature" since the relation from the cow object to the label is invariant across environments (various backgrounds). If we train our model (like an image classifier) on images with a grassy background, it should learn to focus on the cow (stable feature) and ignore the background. This way, it can work well even when we show it images with new environments (e.g., a sandy background). Similarly, stable features also widely exist in the graph domain (e.g., molecular functional groups [9-11]). - **Clarification 2: Stable Features Are Not Always Common Patterns.** The reviewer seems to think that stable features are always common patterns. We did not make this claim in our paper. This misunderstanding could come from the Motif dataset, where the stable features often look similar. It is a synthetic dataset that is commonly used in existing studies; works such as [10, 11] use Motif to intuitively check how well a method works. In our work, we only assume the causal mechanisms are invariant [5-11]. We do not claim that the stable feature itself always looks the same. For example, in the cow picture, the cow could be of different colors or shapes. In molecular graphs (e.g., Molhiv and Molbbbp in our paper), different parts might cause a certain property. Thanks, we will emphasize this in our final paper. Now let's move on to your two questions: **Answer to Q2:** In our paper (LINE-110), we provide a definition of covariate shift. Our work aims to address graph covariate shift based on the invariance assumption. 
This assumption for OOD has been explored in existing studies, such as OoD-Bench [8] and GOOD [9]. OoD-Bench proves that dealing with covariate shift is challenging and even impossible without any assumptions about data generation. GOOD creates many graph-domain datasets that exhibit graph covariate shift, also based on the invariance assumption. We use GOOD's datasets and therefore follow their settings and assumptions. **Answer to Q3**: From the above clarifications, we are not looking for "common patterns" but for "stable features". Stable features are substructures of the data that causally determine the label, and their relationship to the label is invariant across environments. The model can make correct predictions based on stable features. Now let's examine our loss: $\ell(f(T_{\theta_2}(g)), y) + \ell(f(\widetilde{g}), y)$. The first term, $\ell(f(T_{\theta_2}(g)), y)$, requires the model to make correct predictions based on the estimated stable features, and $\ell(f(\widetilde{g}), y)$ requires invariant predictions across different environments. We also confirmed through intuitive visualizations (see Figure 3) and experiments (see Table 3) that we can approximately find stable features. References: [1] A Theoretical Analysis on Independence-driven Importance Weighting for Covariate-shift Generalization, ICML 2022 [2] Causally Regularized Learning with Agnostic Data Selection Bias, ACM MM 2018 [3] Rethinking Importance Weighting for Deep Learning under Distribution Shift, NeurIPS 2020 [4] Covariate-Shift Generalization via Random Sample Weighting, AAAI 2023 [5] Invariant models for causal transfer learning, JMLR, 2018. [6] Invariant risk minimization, ArXiv, 2019. [7] OOD generalization via risk extrapolation, ICML, 2021. 
[8] OoD-Bench: Quantifying and Understanding Two Dimensions of OOD Generalization, CVPR 2022 [9] GOOD: A Graph OOD Benchmark, NeurIPS 2022 [10] Discovering Invariant Rationales for GNNs, ICLR 2022 [11] Causal Attention for Interpretable and Generalizable Graph Classification, KDD 2022 --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed rebuttal. First, I appreciate that the authors will revise the paper to clarify the focus on graph covariate shift. I do agree that in graphs, covariate shift is less studied compared to other OOD types. I still suggest that the authors provide more discussion to connect with the classical covariate shift setting, e.g., by citing works from the standard covariate shift literature. I appreciate the efforts from the authors to clarify my confusion. It partially answers my questions. However, I still have some concerns. I now understand more clearly the terminology of "stable feature" used by the authors. The stable features causally determine the label, and their relationship is invariant across distributions; and they are not necessarily always the common patterns. My concern is that the paper only considers a subset of covariate shift, not generic covariate shift. Let me illustrate my concern using the example given by the authors: > Let's use the example of a cow in different backgrounds (grassy or sandy). The cow object is the "stable feature" since the relation from the cow object to the label is invariant across environments (various backgrounds). If we train our model (like an image classifier) on images with a grassy background, it should learn to focus on the cow (stable feature) and ignore the background. This way, it can work well even when we show images with new environments (e.g., sandy background). Here, the authors only consider shifts in the backgrounds, i.e., the test set contains new environments that may not appear in the training set or are only represented by very few samples. 
Covariate shift in general does not have that restriction (i.e., the restriction that only background features change during deployment). It starts from the concept of "sample selection bias", where there is some underlying bias in the selection of training vs. testing data. Let me give a similar example of a covariate shift problem in a similar setting. > We have an animal classification problem where the foreground objects are the animals (stable features), and the background may vary. The classification is multiclass, for example ("cow", "camel", "lion", "jaguar", "llama"). During training, due to sample selection bias, we can only gather images of animals from the African continent and only very little data from outside Africa. However, in the deployment/test setting, the classifier is tasked to classify animals that mostly come from Latin America. In this setting, the distribution of sample data in the training and testing phases vastly differs (e.g., the training data will contain more lion and camel samples, whereas the test data contains more jaguar and llama samples). The relationship between the foreground objects (stable features) and the label in both training and test data does not change, but the sample distributions themselves are drastically different. This generic covariate shift setting is not considered by the model. --- Reply to Comment 1.1.1: Title: (New rebuttal by authors) Clarification of your misunderstanding of our "cow example". Comment: Thank you for taking the time to review our work and for your constructive feedback! We're glad to hear that many of your confusions have been addressed. As graph OOD is a new field, our study is one of the early explorations of covariate shift in it. If our paper is accepted, we'll expand on this topic, incorporating **a new Section** in our paper to discuss the covariate shift literature more thoroughly. 
Please see our responses to your remaining concerns: - **Regarding the cow example.** We apologize for any confusion caused by our cow example. This example is simply a tool to help explain "stable feature". It is not part of our main paper and is just an informal illustration for the rebuttal stage. Our work is not limited to situations like this example; instead, we follow the generic covariate shift setting. Below we clarify the misunderstanding: - **Two possible cases in covariate shift.** Our work is based on the generic definition of covariate shift: $p(x)$ differs between training and testing, i.e., $p_{train}(x)\neq p_{test}(x)$, but $p(y|x)$ remains constant. The shift in $p(x)$ can arise from: - **Case 1**: A shift in environmental features, $p_{train}(x_{env})\neq p_{test}(x_{env})$. Using the cow example, this is like having different background features in the training and test datasets. - **Case 2**: A shift in stable features, $p_{train}(x_{sta})\neq p_{test}(x_{sta})$. As in your animal classification example, the stable features themselves can vary between the training and test datasets. You suggest that we only considered the covariate shift caused by **Case 1**, i.e., only a subset of covariate shift. However, we'd like to emphasize that our study is not limited to this. We follow $p_{train}(x)\neq p_{test}(x)$, which also includes **Case 2**. We provide the following evidence. - **Case 2 in our real-world molecular dataset.** In Molbbbp, we estimate $p_{train}(x_{sta})$ and $p_{test}(x_{sta})$ from the training and test data, respectively. We use the chemical molecular property analysis toolkit RDKit to extract the functional groups, which can be regarded as stable features. We report their distributions (top 8) in the table below. We can see that **Case 2** clearly exists in our dataset. 
| | [OH] | [F] | \[Cl] | \[Br] | [C(=O)N] | [NH2] | [n1ccccc1] | \[#6]\[C](=[O])[#6] | | ----------- | :----: | :----: | :----: | :---: | :------: | :----: | :--------: | :-----------: | | Training data | 19.66% | 10.20% | 14.77% | 1.41% | 25.41% | 11.07% | 5.48% | 11.99% | | Test data | 14.77% | 5.57% | 19.35% | 2.23% | 15.61% | 16.28% | 8.92% | 6.13% | - **Our performance under Case 2.** We design new experiments to support our claim. Our setup adjusts the selection bias on the Motif dataset, primarily based on **Case 2** and **your example**. Specifically, the dataset comprises three classes: "house", "crane", and "cycle". In the training dataset, the proportion of "house" and "cycle" is set to $b$, whereas in the test dataset, their proportion is defined as $1/b$. This setup is very similar to your example: "training data will contain more lion and camel samples, where the test data contain more jaguar and llama samples". We observe that at $b=1$, the distributions $p_{train}(x_{sta})$ and $p_{test}(x_{sta})$ are nearly identical, while for $b<1$ these distributions diverge. We vary the selection bias $b$, and the results are shown in the following table. Our method achieves consistent improvements across all $b$. This shows that our work can indeed be applied to the generic covariate shift setting, rather than only handling changes in the background features. | Method | b=0.7 | b=0.5 | b=0.3 | b=0.1 | | ------ | :-------: | :-------: | :-------: | :-------: | | ERM | 63.88 | 52.34 | 49.56 | 51.59 | | CAL | 66.79 | 55.25 | 50.42 | 51.51 | | GREA | 64.40 | 59.69 | 54.23 | 55.49 | | Ours | **70.43** | **62.33** | **58.79** | **59.98** | **We apologize for any confusion stemming from the cow example in our rebuttal.** We will not include this informal example in the main paper. We deeply appreciate your constructive feedback, and in response, we've outlined the following revisions for our paper: 1. 
Clarify ambiguous wording, notably "covariate shift is a less explored type of OOD". 2. Introduce a new chapter/section dedicated to extensive studies of covariate shift/selection bias. 3. Provide a precise definition and description of covariate shift for enhanced clarity. 4. Incorporate all experimental and statistical insights from our rebuttal. If our paper is accepted, we promise to thoroughly improve the paper according to the above to-do list. Finally, thank you again for your reviews and precious time! We sincerely hope this reply resolves your remaining concerns and changes your negative rating of our work!
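To make the Case 2 shift concrete, the gap between $p_{train}(x_{sta})$ and $p_{test}(x_{sta})$ reported in the Molbbbp functional-group table above can be quantified with the total-variation distance. This is an illustrative sketch, not part of our method; the top-8 shares are renormalized to sum to one before comparison:

```python
def total_variation(p, q):
    """Total-variation distance between two discrete distributions."""
    p = [x / sum(p) for x in p]  # renormalize the top-8 shares
    q = [x / sum(q) for x in q]
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Top-8 functional-group shares (%) from the Molbbbp table in our rebuttal.
train = [19.66, 10.20, 14.77, 1.41, 25.41, 11.07, 5.48, 11.99]
test = [14.77, 5.57, 19.35, 2.23, 15.61, 16.28, 8.92, 6.13]

tv = total_variation(train, test)  # clearly nonzero, so Case 2 is present
```

A value of zero would mean identical stable-feature distributions; here the distance is clearly positive, matching the table.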
Summary: This paper proposes to enable GNNs to handle covariate shifts through the use of synthetic augmentations. An inherent challenge in constructing task-specific augmentations is to appropriately handle style (or environment features) and content (or stable features). While other existing works have also emphasized the need to promote task-relevant invariances (e.g., Analyzing data-centric properties for graph contrastive learning, NeurIPS 2022), this paper focuses on automatically identifying those features through an adversarial training strategy. Results on graph classification benchmarks demonstrate the benefits of AIA. **Post Author Rebuttal**: The authors have reasonably responded to all my concerns. Hence, I raise my score! Strengths: + The paper is well written and easy to follow. The problem is clearly laid out, and I personally liked the organization of the experiments in the form of research questions. + Though the proposed algorithm builds upon existing formalisms and ideas from the certified robustness literature, I believe the extension and its implementation for this problem are sufficiently novel. + The theoretical analysis is intuitive and well presented. + Experiment results and the hyperparameter study provide a convincing demonstration of the proposed approach. Weaknesses: 1. At the outset, the idea of splitting the adversarial augmenter and stable feature generator with independent (learnable) masks appears challenging to solve. There is a risk that the same entries can be picked as relevant in both masks. While the regularizer checks the ratio, would it be beneficial to include an explicit constraint to make the masks disjoint? This is often done in state-of-the-art source separation algorithms that aim to split the different sources from a given observation. 2. It would be beneficial if the authors could intuitively explain how this approach is able to handle size generalization. 3. More insights into the results would help. 
For example, on the Motif benchmark, the biggest benefits over the baselines are observed in the base setting. Why is that? 4. While GIN has been used for all experiments, will the benefits persist with a graph transformer? Technical Quality: 3 good Clarity: 3 good Questions for Authors: see weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for rating our paper as "well written and easy to follow", "clear", "sufficiently novel", and "well presented"! We respond to your concerns below. ### Q1: Concerns about the adversarial augmenter, stable feature generator, masks, and regularizer. - **Difference between the stable and adversarial masks.** We utilize two distinct modules to create the stable and adversarial masks. The stable mask aims to highlight the stable features in the graph data. Ideally, the mask value for stable features should be 1, and 0 elsewhere. The adversarial mask, on the other hand, is used to modify the data. With this mask, we aim to disrupt the graph features, such as by removing certain nodes or edges, in order to create out-of-distribution data. - **Preventing mask overlap.** We understand your concern about the potential for conflict between the two masks. To address this, we combine the masks in a way that the adversarial perturbation does not harm the stable features. This is explained in lines 207-218 of the original paper. Basically, the stable mask ($M_{sta}$) highlights the stable regions. Anything not covered by this mask, represented as $1-M_{sta}$, is the complementary part. The adversarial mask ($M_{adv}$) represents the adversarial perturbation. By applying the operation $(1-M_{sta})\odot M_{adv} + M_{sta}$, we apply the adversarial perturbation to the complementary parts "$(1-M_{sta})\odot M_{adv}$", while preserving the stable features "$+ M_{sta}$". This ensures that the final augmented data includes both new environmental features and the original stable features. - **Purpose of the regularization term.** The regularization term guides the masks to closely match our expectations (i.e., the mask values should be near 0 or 1). This prevents the masks from converging to a trivial solution during training, thus helping to avoid overlap or conflict between the masks. 
For the stable mask, we aim for it to highlight the stable feature and for its values to be close to 0 or 1. We set the stable ratio to $\lambda_s$ and design two regularization terms to enforce this. For the adversarial mask, we limit its ratio to 1 to prevent it from creating excessive perturbations, such as deleting all components to optimize the adversarial goal. ### Q2. How does this approach handle size generalization? Great question! Current studies, especially in invariant learning, suggest that size distribution shift occurs due to (size) differences in environmental features between training and testing. Let's take molecular graphs as an example. The size of scaffolds, such as carbon chains or rings, can vary greatly. Our method aims to maintain stable features while modifying environmental features to create new graphs. During training, the adversarial mask might alter the graph size by removing varying numbers of nodes or edges (as seen in Figures 2 and 3). This way, the model can handle graphs with different environmental feature sizes, reducing its sensitivity to graph size. Therefore, the model can generalize to unseen graphs of different sizes during testing. We appreciate your suggestion and will include this explanation in the final version for clarity. ### Q3. More insights into the results. For example, the biggest benefits are observed in the Motif (base) setting. You've made a great observation. Our method shows significant improvements on Motif (base) mainly because it exhibits a larger covariate shift. We've quantified this covariate shift (GCS) for different datasets (as shown in Table 2). The GCS of Motif (base) is 0.557, which is larger than that of the others. As our method is specifically designed to tackle covariate shift, and other baseline methods have limitations in this regard, our improvement is more noticeable on Motif (base). 
Further, upon reviewing our results in light of your suggestion, we've added two additional insights: - Our method outperforms GREA and attains the best performance in terms of size generalization, even though GREA, another data augmentation method, also performs well (second place on Motif and Molhiv). GREA uses two types of coarse-grained augmentation, namely environment replacement and removal, which may produce new graphs of varying sizes. In contrast, our method applies a more fine-grained environmental feature augmentation, specifically by removing certain nodes or edges. Therefore, it covers a broader range of sizes than GREA, which further improves performance. - Invariant learning methods like DIR, CAL, and DisC struggle to perform well because they find it challenging to generate new environmental features. However, covariate shift is very evident in these datasets (see Table 2). By creating new environments during data augmentation, our method ensures better generalization under covariate shift. Thank you for your suggestions. We will include these insights in the final version of our paper. ### Q4. Will the benefits persist with a graph transformer? Indeed, they do. We conducted experiments on Molbbbp using four different graph transformers [1-4], as shown **in our one-page PDF** (Table 1). The results show that our method continues to provide benefits across these diverse graph transformers. [1] Do Transformers Really Perform Bad for Graph Representation? NeurIPS 2021 [2] Representing Long-Range Context for Graph Neural Networks with Global Attention, NeurIPS 2021 [3] Rethinking Graph Transformers with Spectral Attention, NeurIPS 2021 [4] Recipe for a General, Powerful, Scalable Graph Transformer, NeurIPS 2022
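The mask-combination strategy from our answer to Q1 above, $(1-M_{sta})\odot M_{adv} + M_{sta}$, can be sketched element-wise on toy mask values; `combine_masks` is an illustrative helper, not our actual tensor implementation:

```python
def combine_masks(m_sta, m_adv):
    """Apply the adversarial perturbation only outside the stable region:
    (1 - M_sta) * M_adv + M_sta, element-wise."""
    return [(1 - s) * a + s for s, a in zip(m_sta, m_adv)]

# Toy masks over 5 graph components: entries 0 and 1 are stable.
m_sta = [1.0, 1.0, 0.0, 0.0, 0.0]
m_adv = [0.0, 0.3, 0.0, 1.0, 0.6]  # hypothetical adversarial mask

m_aug = combine_masks(m_sta, m_adv)
# Stable entries stay at 1 (preserved); all other entries follow m_adv.
```

Entries inside the stable region are forced to 1 regardless of the adversarial mask, so the perturbation can only touch the complementary part.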
Rebuttal 1: Rebuttal: # Global Response and One-page PDF from Authors We appreciate all the reviewers' efforts in reviewing this submission. We are delighted that our paper was noted for being "*well written and easy to follow*", "*clear*", "*sufficiently novel*" by **Reviewer reTY**; "*interesting*", "*new*" by **Reviewer bjaC**; offering "*new insights*", "*novel with good intuitions*" by **Reviewer ExMB**; and "*well-motivated*" by **Reviewer J9ZS**. To address the reviewers' concerns, we have included new experimental results and a method comparison table in our ***one-page PDF*** below. Here, we summarize the main points raised by the reviewers and our responses. - **[@Reviewer bjaC], Covariate shift & Invariance Assumption.** We understand that your main concern stems from a misunderstanding of our proposed "stable feature". We have provided a thorough explanation and clarified the relationship between our work and the invariance assumption. We apologize for any confusion caused by vague phrases such as "covariate shift is a less explored type of OOD", and will thoroughly revise our paper. Finally, we sincerely hope to engage in further discussions to address your concerns and improve your rating of our work. - **[@Reviewer J9ZS], Novelty & Motivation.** We understand that you question the novelty and motivation of our graph augmentation method. However, other reviewers have strongly recognized the novelty of our approach. Transferring general augmentation methods to graph data is challenging due to their non-Euclidean nature. We have also compared our approach with other similar methods in the graph domain (***Table 3 in PDF***). For your interest in our augmentation diversity experiments, we have included the results in ***Table 4 in PDF***. If our paper is accepted, we'd be glad to include all the running time and model size results (***Table 5 in PDF***) in our appendix. 
We sincerely hope that addressing these concerns will help change your rating of our work. - **[@Reviewer J9ZS, ExMB], Complexity & Running Time & Model Size.** Thanks to the modest size of the two extra networks we implemented and our one-step adversarial training, our approach maintains manageable complexity and model size. The running time and model size comparisons (***Table 5 in PDF***) show that our method achieves a preferable performance-complexity balance compared to other baselines. - **[@Reviewer reTY, ExMB], Different Backbones.** We conducted experiments using a broader range of backbone models (***Tables 1 and 2 in PDF***), including GIN, GCN, GCNII, GAT, and various graph transformers. The results confirm that our findings and conclusions remain consistent across different backbones. Pdf: /pdf/9d70c3e2023b5b3acfe917bcad3036d567febbca.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Face Reconstruction from Facial Templates by Learning Latent Space of a Generator Network
Accept (poster)
Summary: The paper presents a method to reconstruct high-resolution face images from feature vectors extracted by a face recognition (FR) system. The reconstructed face images can be used to attack FR systems to gain access under whitebox and blackbox scenarios. The paper also introduces five different scenarios that one can use to attack FR systems. The authors raise awareness of the need to protect template feature vectors stored in FR systems' databases to avoid possible adversarial attacks. Strengths: The paper evaluates various attack scenarios with five different face recognition models. The reconstructed face images have high quality. Some of them have similar IDs to the original images. Weaknesses: 1. The novelty of the proposed framework is limited. Most of the components are based on existing works, e.g., StyleGAN3 and WGAN. 2. The proposed attack scenarios are straightforward, as a combination of three possible feature extraction models. The authors should also rate the practicality of the scenarios in addition to their level of difficulty, since some scenarios cannot be achieved in practice. 3. In their problem formulation, the authors consider three different feature extraction models. However, there is an implicit assumption that the inputs and outputs of those models are the same. The authors should analyze cases where the inputs and outputs are different, e.g., handling various output dimensions and various pre-processing steps. 4. There are some errors in referring to tables in the paper, e.g., lines 281, 287, and 300. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See above weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors addressed one limitation of the reconstruction model. 
However, the authors should also address the other limitations mentioned in the weakness section above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time in reviewing our paper and for the comments. Below, we address the concerns raised by the reviewer: > The novelty of the proposed framework is limited. Most of the components are based on existing works, e.g., StyleGAN3 and WGAN. We acknowledge the reviewer's comment that our method leverages WGAN training to learn a mapping network that maps the facial templates to the intermediate latent space of StyleGAN. However, as shown in our experiments (especially our ablation study), training such a mapping network is not straightforward, and each part of our proposed method contributes to its performance. Therefore, our proposed face reconstruction method is not a trivial combination of existing techniques. Moreover, the proposed method achieves state-of-the-art results in template inversion attacks against five state-of-the-art face recognition systems on different face recognition datasets, which is not trivial either. > The proposed attack scenarios are straightforward, as a combination of three possible feature extraction models. The authors should also rate the practicality of the scenarios in addition to their level of difficulty, since some scenarios cannot be achieved in practice. Following the reviewer's suggestion, we prepared a table in our `general response` and further described the adversary's knowledge and the difficulty of each attack scenario. We will include this table in the final version of the paper. We would like to highlight that we define five different attacks against face recognition systems (based on the adversary's knowledge and the target system). In particular, we evaluate the transferability of the reconstructed face images and the vulnerability of SOTA face recognition models to template inversion attacks, which have not been investigated in the literature. 
To our knowledge, this is the first work that comprehensively evaluates the transferability of the reconstructed face images in template inversion attacks. > In their problem formulation, the authors consider three different feature extraction models. However, there is an implicit assumption that the inputs and outputs of those models are the same. The authors should analyze cases where the inputs and outputs are different, e.g., handling various output dimensions and various pre-processing steps. While we use three different face recognition models in our problem formulation, there is no issue if the pre-processing steps or dimensions are different in each of these face recognition models. For differences in inputs (face images), because each of these models operates independently on the given face image, the required pre-processing can be folded into the function of the face recognition model in our problem formulation. For differences in outputs (face templates), since the facial templates extracted by each model are compared to facial templates extracted by the same model, there is no conflict in the dimensions. The only point to note is that the input of our mapping network should have the same dimension as the templates of $F_{database}$. Let us consider the complete pipeline of our problem formulation as depicted in Figure 2 of the paper. The first face recognition model, $F_{database}$, uses its own pre-processing and extracts facial templates from face images captured by the camera of the face recognition system. These facial templates (extracted by $F_{database}$) are then used as input to our face reconstruction model. Therefore, the input of our mapping network should have the same dimension as the templates of $F_{database}$. In any case, the output of the face reconstruction network is a high-resolution face image. 
During training, this high-resolution face image is first pre-processed as required by $F_{loss}$, and the extracted templates are compared with the templates of the original image extracted by $F_{loss}$. During inference, the generated high-resolution face image is pre-processed as required by $F_{target}$. Therefore, there is no conflict in the inputs/outputs of our pipeline. In our experiments reported in the paper, all face recognition models except Swin take input at $112\times112$ resolution, while the Swin model takes input at $224\times224$ resolution. We acknowledge, however, that the dimensions of the facial templates extracted by all face recognition models in our experiments are the same and equal to 512. To show that our method can be used in the case of different facial template dimensions, and to showcase another face recognition model with different pre-processing, as a new experiment we used a new model (VGGFace) with a **different facial template dimension** (2048), a **different input image resolution** ($224\times224$), a **different normalisation**, and **different landmark coordinates for face alignment**. We used ArcFace as our $F_{loss}$ and evaluated the reconstructed face images in attacks against different face recognition systems (as $F_{target}$) on the LFW dataset: | | ArcFace | ElasticFace| Swin | |---|---|---|---| | $\text{FMR}=10^{-2}\%$ | 92.92 | 93.10 | 83.97 | | $\text{FMR}=10^{-3}\%$ | 86.61 | 82.39 | 72.89 | As the results in this table show, our proposed method can be applied when the inputs/outputs of the face recognition models in our problem formulation ($F_{database}$, $F_{loss}$, $F_{target}$) are different. > There are some errors in referring to tables in the paper, e.g. line 281, 287 and 300. We apologize for this error, which was caused during compilation before submission. We will fix these errors and carefully revise the paper for any typos/errors. 
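The dimension-handling argument above can be sketched minimally. The code below uses a random linear map as a hypothetical stand-in for the trained mapping network (the latent dimension of 512 is assumed for illustration); the point is only that the input layer must match the template dimension of $F_{database}$, while the output dimension is fixed by the generator's latent space:

```python
import random

GENERATOR_LATENT_DIM = 512  # assumed latent dimension, for illustration only

def make_mapping(template_dim, latent_dim=GENERATOR_LATENT_DIM, seed=0):
    """Hypothetical linear map: F_database template -> generator latent.

    A stand-in for the trained mapping network; only the input
    dimension depends on the face recognition model used as F_database.
    """
    rng = random.Random(seed)
    w = [[rng.gauss(0.0, 1.0) for _ in range(template_dim)]
         for _ in range(latent_dim)]

    def mapping(template):
        assert len(template) == template_dim  # must match F_database templates
        return [sum(wi * ti for wi, ti in zip(row, template)) for row in w]

    return mapping

# ArcFace-style 512-d templates and VGGFace-style 2048-d templates both fit;
# only the mapping's input layer changes, the rest of the pipeline is untouched.
map_512 = make_mapping(512)
map_2048 = make_mapping(2048)
```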
> Flag For Ethics Review We have already included a section in the supplementary material and discussed different ethical aspects. Regarding the datasets used, we have also included a section in our supplementary material and discussed the licenses. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I have read it and will respond with points for further discussion. --- Reply to Comment 1.1.1: Title: Reply by Authors Comment: We thank the reviewer for their time in reading our rebuttal. We would also like to mention that, in addition to our individual rebuttal answering the reviewer's questions, we have conducted new experiments as described in the `general response` based on the comments we received from reviewers, and we would like to *add these new experiments in our final version* to improve the quality of the paper. We are more than happy to continue our discussion with the respected reviewer and to address any remaining/new concerns.
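The attack-success criterion used throughout this thread (whether a reconstructed face matches the enrolled template of $F_{target}$ at a given threshold) can be sketched as a simple template comparison. This is an illustrative sketch that assumes cosine-similarity matching, a common choice for such face recognition models, not our exact evaluation code:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two template vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def attack_succeeds(reconstructed_template, enrolled_template, threshold):
    """A template inversion attack succeeds if the template of the
    reconstructed face matches the enrolled template at the system's
    decision threshold (calibrated for a target FMR)."""
    return cosine_similarity(reconstructed_template, enrolled_template) >= threshold
```

In a transferability (blackbox) attack, `reconstructed_template` would come from a different model ($F_{target}$) than the one whose templates were inverted.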
Summary: The paper studies the template inversion attack on FR systems. The paper involves both white- and black-box attacks. Moreover, five different attacks are considered. Strengths: This paper exposes a potential threat to FR systems in that attackers can reconstruct a victim's facial images from the features stored in the database. Comprehensive attack situations are considered with five different attacks. Weaknesses: - My main concern is the limited applicability of the proposed method. The quality of the face images highly depends on the power of the generator, i.e., StyleGAN3. Therefore, its biases, such as resolution, gender, race, and head pose biases, make it hard for the attack to generalize to other face images. For example, including the failure cases in Fig. 5: if the features stored in the database are from low-resolution images, non-frontal faces, or faces aligned with other alignment methods, the proposed method may also fail, as these are also biases/domain gaps of the generator. It would be good if the authors analyzed this aspect more, i.e., the large domain gap between the database and the generator. From the current setting, I have not seen any specific handling of this issue beyond a simple critic network. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See the Weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See the Weakness. Flag For Ethics Review: ['Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments. We are happy that the reviewer found our paper easy to follow. We appreciate the reviewer's comments on the strengths of our work. Below, we tried to address the concerns raised by the reviewer: > My main concern is about the limited application of the proposed method. The quality of the face images highly depends on the power of the generator, i.e. StyleGAN3. Therefore, the bias of it, such as resolution, gender, race, and head pose biases make the attack hardly generalize to other face images. For example, including the failure Fig. 5, if the features stored in the database are from low-resolution images, or non-frontal faces, or faces aligned with other alignment methods, the proposed method may also fail as these also are the biases/domain gap of the generator. It would be good if the authors analyzed more in this aspect that there is a large domain gap between the database and that in the generator. From the current setting, I have not seen any specific handle for this issue but a simple critic network. In our experiments, we tried to consider the gap between our training dataset and our evaluation dataset. For training our face reconstruction network, we used the FFHQ dataset, which includes high-quality images. In contrast, for evaluation, we used MOBIO and LFW datasets. The MOBIO dataset is collected with mobile devices, and LFW is an unconstrained dataset collected from the internet. Therefore, the quality of images in these two datasets is considerably different from the FFHQ dataset, which is used for our training. As a new experiment, we consider the IARPA Janus Benchmark C (IJB-C) dataset, which is one of the most challenging face recognition benchmarking datasets. 
The following table compares the performance of our method with previous methods in the literature (compared in Table 4 of the paper) in attack 3 (i.e., blackbox attack against the same system), using ArcFace as $F_{loss}$, against different state-of-the-art face recognition models on the IJB-C dataset:

| | M1 | M2 | M3 | M4 | M5 | Ours |
|---|---|---|---|---|---|---|
| ElasticFace | 0.32 | 4.1 | 0.13 | 5.41 | 16.90 | **35.91** |
| HRNet | 0.05 | 1.12 | 0.09 | 2.17 | 4.36 | **24.38** |
| AttentionNet | 0.13 | 1.27 | 0.21 | 2.86 | 4.82 | **26.35** |
| Swin | 2.40 | 15.11 | 2.45 | 20.73 | 30.91 | **45.00** |

(Note: M1, M2, M3, M4, and M5 are defined in the caption of Table 4 of the paper.)

As the results in this table show, our method still achieves superior performance compared to all face reconstruction methods in the literature. We should also note that we observe a drop in the performance of all methods in this table. This is mainly because the IJB-C dataset is a very difficult benchmarking dataset. To elaborate on this, we would like to compare the recognition performance of the ArcFace face recognition model (as an example) on IJB-C with its performance on the MOBIO and LFW datasets, as reported in the following table:

| | MOBIO | LFW | IJB-C |
|---|---|---|---|
| ${FMR}=10^{-2}\%$ | 100.00 | 97.60 | 95.29 |
| ${FMR}=10^{-3}\%$ | 99.98 | 96.40 | 90.90 |

As the values in this table show, IJB-C is a more challenging dataset, and face recognition systems also suffer a degradation in recognition performance on it.

> Flag For Ethics Review: Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)

We have already included a section in the supplementary material for the "Ethics Statement" and discussed different ethical aspects of our paper. --- Rebuttal Comment 1.1: Comment: The rebuttal mainly answered my questions and I keep my scores.
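To make the reported operating points concrete, the threshold-based metrics used in this discussion (false match rate, true match rate, and success attack rate) can be sketched as follows. This is our own illustrative sketch on synthetic similarity scores, not the authors' evaluation code; the score distributions and names here are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def threshold_at_fmr(impostor_scores, fmr):
    """Smallest decision threshold at which the false match rate <= fmr."""
    return float(np.quantile(impostor_scores, 1.0 - fmr))

# Synthetic cosine-similarity scores: impostor pairs score low, genuine pairs high.
impostor = rng.normal(0.0, 0.1, 10_000)
genuine = rng.normal(0.6, 0.1, 1_000)
# Scores of reconstructed face images matched against the enrolled templates.
reconstructed = rng.normal(0.5, 0.15, 1_000)

tau = threshold_at_fmr(impostor, fmr=0.01)  # operating point at FMR = 1%
tmr = float(np.mean(genuine >= tau))        # true match rate of the system
sar = float(np.mean(reconstructed >= tau))  # success attack rate of the inversion
```

At a fixed false match rate, the threshold is taken from the impostor score distribution; TMR and SAR are then simply the fractions of genuine scores and reconstructed-image scores that clear that threshold.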
--- Reply to Comment 1.1.1: Title: Reply by Authors Comment: We thank the reviewer for reading our rebuttal and are happy that our reply could mainly answer the reviewer's questions. We would like to add to our previous response that the quality of facial images mainly affects the feature extraction (i.e., the face recognition model), which indirectly affects the performance of our face reconstruction network. Therefore, degradation in the performance of face reconstruction methods that take facial templates as input is inevitable for templates extracted from low-quality face images, while such facial templates also degrade the recognition accuracy of the face recognition system. Regarding bias in the reconstructed face images for different demographics (such as age and ethnicity), we have already discussed in the "Limitations" section of our paper that such results are caused by inherent biases in the datasets used to train the face recognition model, the face generator model (StyleGAN), and our mapping network. Indeed, training the models, particularly the face generator model and the mapping network, with a balanced dataset can help mitigate such biases. Regarding the reviewer's comment on the alignment of the face images with different landmark coordinates, we would also like to mention that we implemented a new experiment in reply to R5 (Reviewer dWCe) and considered a challenging scenario where the pre-processing and alignment of face images for $F_{database}$, as well as the size of facial templates, are different from those of $F_{loss}$ and $F_{target}$. Our experiment shows that our method can still achieve high success attack rates in attacks against systems with different pre-processing and different input (face image) and output (facial template) dimensions. We ask the reviewer to kindly check these results and our description in our reply to R5 (Reviewer dWCe).
We would also like to mention that we have conducted new experiments, as described in the `general response`, based on comments we received from other reviewers, and would like to **add these new experiments to our final version** to improve the quality of the paper. We would like to ask the reviewer to also kindly read the new experiments described in the `general response` and consider whether they can help increase the reviewer's scores. In case our current reply or the new experiments described in the general response and in our reply to R5 raise any new doubts or questions, we would be more than happy to continue the discussion with the respected reviewer.
Summary: In this paper, the authors introduce a new method for reconstructing high-resolution realistic face images from facial templates within a face recognition (FR) system. They employ a pre-trained StyleGAN3 network and train a mapping from facial templates to the intermediate latent space of StyleGAN using a GAN-based framework. In particular, the proposed method is designed to work in both white-box and black-box settings under different adversary attack scenarios. The vulnerability of state-of-the-art (SOTA) FR systems to the proposed method is evaluated. Experimental results demonstrate that the face images reconstructed using the proposed method achieve the highest success rates in both white-box and black-box scenarios, and their transferability has been validated. Strengths: + Overall the paper is well organized and easy to read. + The idea of training a critic network to ensure the same distribution between $w \in W$ and ${\hat w} \in W$ is interesting. + Deep dive into five different template inversion (TI) attacks, and evaluation of the transferability of reconstructed face images in TI attacks. + Solid experimental validation based on StyleGAN with better performance than the SOTA methods. Weaknesses: - The investigation of face reconstruction is insufficient. The authors didn't mention any related Transformer and diffusion works, e.g., Face-Transformer (2023 Arxiv April) and VQ-DDM (CVPR'22). - The template inversion (TI) attack is too specific an adversarial attack. I would like to suggest the authors experimentally verify whether it is possible to extend the idea (e.g., the critic network) to other kinds of attacks. - The current paper depends heavily on StyleGAN3, which is not SOTA when considering the more advanced Transformer and diffusion models. Therefore, I would like to suggest the authors switch from StyleGAN3 to both Transformer and diffusion models to validate the claims made in the current paper.
- Typo: missing table numbers in lines 287 and 300. Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As mentioned by the authors, there exists a bias in reconstructing faces of certain demographic groups, such as elderly individuals or people with dark skin. This bias in the reconstructed face images can be attributed to the inherent biases present in the datasets used to train the face recognition (FR) model, the StyleGAN model, and the mapping network in the face reconstruction model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments. We are happy that the reviewer found our paper well-organized and easy to read. We appreciate the reviewer's comments on the strengths of our work. Below, we tried to address the concerns raised by the reviewer: > The investigation on face reconstruction is insufficient. The authors didn’t mention any work related to Transformer and diffusion works, e.g, Face-Transformer (2023 Arxiv April), and VQ-DDM (CVPR’22). We used StyleGAN since it is one of the most popular face generator models in the literature. However, our method can also be used with other face generator networks. As a new experiment, we used StyleSwin (CVPR, 2022), which is another face generator model based on transformers. As the results (reported in our `general response`) show, our method can be used with this face generator network too. We should note that, according to the NeurIPS guidelines, papers appearing less than two months before the submission deadline are considered contemporaneous to NeurIPS submissions. Therefore, Face-Transformer (April 2023, arXiv), which was published on arXiv less than one month before the NeurIPS submission deadline, is considered contemporaneous under the NeurIPS guidelines. In addition, according to the NeurIPS guidelines, authors are also excused for not being aware of works published only on arXiv. In any case, our new experiment shows that our method can be used with recent face generator models based on transformers too. We would also like to highlight that we have considered five different face recognition models in our experiments. One of the face recognition models in our experiments is based on the Swin backbone, which is a transformer-based network. Our experiment shows that the Swin face recognition model is also vulnerable to our attacks. > The template inversion (TI) attack is a too specific adversarial attack.
I would like to suggest the authors experimentally verify whether it is possible to extend the idea (e.g., critic network) to other kinds of attacks. We appreciate the reviewer's comment and will consider it in our future work. In fact, our method can be extended to any type of attack in which we would like to find/modify the intermediate latent codes while ensuring that the new latent codes lie on the original distribution of the intermediate latent space. > The current paper is heavily dependent on the StyleGAN3, which is not SOTA when considering the more advanced Transformer and diffusion models. Therefore I would like to suggest the authors to switch the current StyleGAN3 to the both Transformer and diffusion models to validate the claims made in the current paper. As described in our reply to the earlier point raised by the reviewer, and following the reviewer's suggestion, as a new experiment we used StyleSwin (CVPR, 2022), which is another face generator model based on transformers. The results reported in the `general response` show that our method can also be used with this face generator network too. We will add these results to the final version of the paper. We should note that since our proposed method maps to the intermediate latent space of the face generator model, it can be applied to different face generator models. During the rebuttal period, we investigated and showcased the application of our method with StyleSwin (which is a state-of-the-art face generator model based on transformers). Further experiments with other face generator networks can be explored in future work. > Typo: miss the table number in line 287, 300. We apologize for this error, which was caused during our compilation before submission. We will fix these errors and will carefully revise the paper for any typos/errors.
Summary: The paper proposes a high-resolution face reconstruction method for template inversion attacks. The authors make use of a GAN-based framework, StyleGAN3, by learning a mapping from facial templates to its intermediate latent space. They evaluate their method on five different attacks in whitebox and blackbox scenarios. Strengths: The method is evaluated in a wide range of TI attack scenarios. The method, its motivation, and the evaluation setting are described clearly and are easy to understand. Weaknesses: The novelty of the proposed method mainly lies in the introduction of the facial template mapping network, which is somewhat incremental. It would be helpful to understand the rationale behind selecting StyleGAN as the synthesis model. Additionally, exploring alternative synthesis networks could provide insights into how much of the success and limitations depend on the synthesis network. Also, it would be beneficial to explore the effect of fine-tuning the synthesis network of StyleGAN. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, the limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time in reviewing our paper and for their valuable comments. We are happy that the reviewer found our paper clear and easy to understand. Below, we tried to address the concerns raised by the reviewer: > The novelty of the proposed method mainly lies in the introduction of the facial template mapping network which is somewhat incremental. We acknowledge the reviewer's comment that we ultimately aim to train one mapping network in our proposed method to map the facial templates to the intermediate latent space of StyleGAN. However, as shown in our experiments (especially our ablation study), training such a mapping network is not straightforward. In particular, using a WGAN learning approach, we simultaneously train a critic network to help our mapping network learn the distribution of the intermediate latent space $\mathcal{W}$ of StyleGAN. In addition to the adversarial training in our proposed method to learn the mapping, we also train our mapping network with a multi-term loss function, including an identity loss on the generated images to preserve the identity of the reconstructed face images. Therefore, our proposed face reconstruction method is not a trivial combination of existing techniques. Moreover, the proposed method achieves state-of-the-art results in template inversion attacks against five state-of-the-art face recognition systems on different face recognition datasets, which is not trivial either. Furthermore, we define five different attacks against face recognition systems (based on the adversary's knowledge and the target system), and evaluate the transferability of the reconstructed face images and the vulnerability of state-of-the-art face recognition models to template inversion attacks. To our knowledge, this is the first work that comprehensively evaluates the transferability of the reconstructed face images in template inversion attacks.
> It would be helpful to understand the rationale behind selecting StyleGAN as the synthesis model. Additionally, exploring alternative synthesis networks could provide insights into how much of the success and limitations depend on the synthesis network. StyleGAN is one of the most popular face generator models in the literature. Its source code (and pretrained models) is available, and several works have used StyleGAN in different research problems. However, our method can also be used with other face generator networks. As a new experiment, we used StyleSwin (CVPR, 2022), which is another face generator model. As the results (reported in the `general response`) show, our method can be used with this face generator network too. > Also, it would be beneficial to explore the effect of fine-tuning the synthesis network of StyleGAN. We fix the parameters of the synthesis network of the face generator network (StyleGAN) and do not update them during training. If we updated the synthesis network of StyleGAN, we would need to apply adversarial training to the generated images too, which makes training more complicated. That means we would need to learn the distribution of real images again, which is more difficult than learning the distribution of the intermediate latent space of the pretrained StyleGAN. --- Rebuttal 2: Title: From AC & SAC: Please respond by August 21 Comment: Hi Reviewer T7AV, As all fellow reviewers have seen, your original review was too short, whereas the authors have provided a rebuttal to address your concerns. Therefore, it is very important that you help out by providing some meaningful feedback before August 21, which will help us move forward to a smooth decision. Your support is highly appreciated. Best, AC & SAC
Rebuttal 1: Rebuttal: We thank all reviewers for their time and valuable comments. We tried to address point-by-point the concerns raised by the reviewers in individual responses. For simplicity, we use the following numbers to refer to each reviewer in our responses: - R1: Reviewer S2tQ - R2: Reviewer T7AV - R3: Reviewer RLF3 - R4: Reviewer pmB2 - R5: Reviewer dWCe There are also some comments shared between reviewers, which we reply to in this general response and refer to in the individual responses: **Comparison of the Attack Scenarios:** R1 and R5 suggested providing a comparison of the attack scenarios defined in our paper. Following the reviewers' suggestion, we prepared Table 1, reported in our rebuttal pdf (attached), to further describe the adversary's knowledge and the difficulty of each attack scenario. We will include this table in the final version and provide more descriptions to compare the different attacks defined in our paper. We would like to highlight that, to our knowledge, this is the first work that comprehensively evaluates the transferability of the reconstructed face images in template inversion attacks. As a matter of fact, the transferability of reconstructed face images can lead to a critical threat where the adversary can use the reconstructed face image to enter another system in which the same user is enrolled. However, to our knowledge, the transferability of the reconstructed face images in template inversion attacks has not been investigated in the literature. **Using a Different Face Generator Network:** R1, R2, R3, and R5 asked about the rationale behind the choice of StyleGAN and suggested exploring alternative synthesis networks. StyleGAN is one of the most popular face generator models in the literature. Its source code (and pretrained models) is available, and several works have used StyleGAN in different research problems. However, our method can also be used with other face generator networks.
As a new experiment, we use StyleSwin (CVPR, 2022), which is another face generator model based on transformers. Figure 1 in our rebuttal pdf (attached) shows the reconstructed face images from ArcFace templates using StyleSwin in our method instead of StyleGAN. We used a similar mapping network and learned a mapping from facial templates to the intermediate latent space of StyleSwin. As these results show, our method can also be used with other face generator networks. We will add these results to the final version of the paper too. [StyleSwin] Zhang et al., "StyleSwin: Transformer-based gan for high-resolution image generation." CVPR 2022. **Analysis of the Important Features in the Reconstructed Face Images:** R1 raised the question of *what features between the template and synthetic images fool the recognition system* and suggested providing an analysis with visualization to investigate why (and where) the attack works. To answer this question raised by R1, we conducted a new experiment to explore what important information is encoded in the facial templates, and what features between the template and synthetic images fool the face recognition system. To this end, we applied the Grad-Cam algorithm using the face recognition model on the reconstructed face images to see which areas of the reconstructed face images are important and cause the facial templates of our reconstructed face images to be close to the original facial templates. Sample results of applying the Grad-Cam algorithm on our reconstructed face images are shown in Figure 2 of our rebuttal pdf file. As the results in this figure show, important areas that cause the reconstructed face images to have similar templates to the original ones correspond to areas such as eyes, nose, lips, etc. In particular, the area around the eyes seems to be the most important part in most of the reconstructed face images. 
These results also show that the general shape of the face (e.g., thin or chubby face), hair, texture, etc. are often not necessarily important in the reconstructed face images, and thus we can also conclude that these attributes are not well encoded in the templates extracted by face recognition models. We will add this analysis to the final version too. [Grad-CAM] Selvaraju et al., "Grad-CAM: Visual explanations from deep networks via gradient-based localization", ICCV 2017. Finally, we would like to kindly invite all reviewers to further discussion should they still have any doubts or concerns. Pdf: /pdf/cfe590d914467c408c5949fb8cb8bc1596c5a4e8.pdf
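For reference, the Grad-CAM aggregation step described in this response can be sketched as follows. This is the generic Grad-CAM computation (weight each feature map by its pooled gradient, sum, then ReLU), shown here on synthetic arrays rather than a real face recognition model; the toy inputs are our own assumptions:

```python
import numpy as np

def grad_cam_map(feature_maps, gradients):
    """Generic Grad-CAM heatmap from one convolutional layer.

    feature_maps, gradients: arrays of shape (K, H, W), where gradients
    holds d(matching score)/d(activations) for the same layer.
    """
    alphas = gradients.mean(axis=(1, 2))              # one weight per channel
    cam = np.tensordot(alphas, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalise to [0, 1] for display
    return cam

# Toy example: channel 0 fires on a small "eye region" and has positive gradient.
fmap = np.zeros((2, 8, 8)); fmap[0, 2:4, 2:4] = 1.0
grads = np.zeros((2, 8, 8)); grads[0] = 1.0
heat = grad_cam_map(fmap, grads)
```

In the experiment above, the gradients would come from backpropagating the template-matching score through the face recognition model; bright regions of the resulting heatmap then mark the areas (eyes, nose, lips) that drive template similarity.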
NeurIPS_2023_submissions_huggingface
2023
Summary: In this work, the authors propose a new method to execute the template inversion attack against face recognition systems. By simply having access to the feature vectors (latent representations) of the original faces, the adversary can reconstruct a face resembling the original, thereby showing that face recognition systems can be compromised. The method comprises a StyleGAN3 network, split such that a new mapping network attempts to synthesize a correct embedding given random noise and the victim's template vector, and a critic network discriminates between that output and StyleGAN's own mapping network via the WGAN algorithm. The authors further test their method in whitebox and blackbox scenarios, using a combination of several models for template vector leaking and transferability of reconstructing a face to attack a face recognition system. In the end, for the five attacks defined, the novel method achieves high attack success rates. Strengths: While the problem is not new, the paper takes a clever spin on existing methods and combines them to tackle five attack scenarios. Even in the most difficult attack scenario, the novel method achieves significantly higher attack success rates, but even more so, the better recognition models actually suffer the most. It is interesting and reasonable to propose face template matching via the GAN-based framework. The paper provides sufficient evaluation results for the five attack scenarios, as defined in the paper. Weaknesses: The paper contains grammatical errors throughout, but they should be fixable. For instance, in line 108, I believe you meant to say “critique” the generated omega-hat vectors. For Section 4.2, I believe it should be titled “Analysis”. The five attack scenarios as defined in the paper seem incremental, so they do not show the breadth of the method's applicability.
The synthesized images themselves are not analyzed, in the sense that the authors are not trying to answer the question of *what* features between the template and synthetic images fool the recognition system. I think having that aspect of the attack would both boost the novelty of the attack and provide an excellent visualization as to why (and literally where) the attack works. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Is there a particular reason why a model like StyleGAN3 was chosen? (Code availability, SOTA, etc.) 2. (Related to the Weakness section) Is it reasonable to think that the template vectors would be stored “raw” and not in an encrypted form? I do not think a database concerned with security would store the raw information. 3. It seems in many, if not all, images shown, the face subjects have smooth faces (this is somewhat related to elderly subjects yielding poor reconstruction results). Do facial features (e.g., wrinkles, pores, moles, etc.) have any influence on the system recognition succeeding or failing? Perhaps this could be attributed to StyleGAN's synthesis of images. 4. The paper should provide an insightful comparison of the attack scenarios, i.e., what scenario A has but B does not have, and vice versa. It looks like the evaluation of the blackbox scenario is enough since it covers all of the others. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have addressed the limitations of the work and its biases and have stated which data they have used for experiments (and how the data influences the outcomes presented). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments. Below, we tried to address the concerns raised by the reviewer: ### Reply to Weaknesses **Reply to Weakness 1:** We acknowledge the reviewer's comment and apologize for the typos and grammatical errors. We will fix these errors and will carefully revise the paper for any typos/errors. **Reply to Weakness 2:** We define five different attacks against face recognition systems (based on the adversary's knowledge and the target system) to provide a comprehensive vulnerability evaluation. In particular, we evaluate the transferability of the reconstructed face images in template inversion attacks, which has not been investigated in the literature. **Reply to Weakness 3:** We thank the reviewer for the valuable comment and interesting suggestion. We conducted a new experiment to gain a more in-depth insight into the reconstructed face images. Please kindly check our new experiment described in the `general response`. *** ### Reply to Questions **Reply to Questions 1:** StyleGAN is one of the most popular face generator models in the literature. Its source code (and pretrained models) is available, and several works have used StyleGAN in different research problems. However, our method can also be used with other face generator networks. As a new experiment, we used StyleSwin (CVPR, 2022), which is another face generator model. As the results (reported in the `general response`) show, our method can be used with this face generator network too. **Reply to Questions 2:** Data protection regulations consider biometric data as sensitive information that should be protected. However, many such regulations were issued only recently, and there are still many face recognition systems that store raw facial templates in their databases. We should note that typical encryption schemes (such as hashing) cannot be used to protect facial templates.
This is because biometric templates of the same subject are not exactly the same due to variations in measurements (e.g., light change, pose change, etc.), and thus exact matching (as used in hashing) cannot be applied in practice to face recognition systems. Hence, the protection of biometric templates is still a challenging problem, and there are several standards (e.g., ISO/IEC 24745) defining the requirements of template protection schemes. In our Ethics Statement, we cited some works on biometric template protection in the literature. **Reply to Questions 3:** To answer the question `Do facial features have any influence on the system recognition succeeding or failing [for the reconstructed face images]?`, we first need to answer the questions `Do face recognition models encode such facial features?` or, in other words, `Are facial features (e.g., wrinkles) useful for face recognition models to identify different subjects?`. To have a solid answer to these questions, we would need access to a rich dataset of facial features to investigate the effect of such features on the performance of face recognition models and our face reconstruction method. Unfortunately, we could not find any large-scale dataset with labels for such facial features. However, we would like to consider aging as a special case that causes facial features in people and investigate the effect of aging on the performance of face recognition models and our face reconstruction method. For instance, wrinkles often appear more (or become stronger) in elderly people, while the same person does not have (or has fewer) wrinkles in his/her younger images (as shown in Figure 3 of our rebuttal pdf file). To this end, we consider the AgeDB dataset, which contains 16,488 images of various famous people (a total of 568 distinct subjects). Every image is annotated with respect to identity and includes age attributes.
The minimum and maximum ages are 1 and 101, respectively, and the average age range for each subject is 50.3 years. This dataset has four different protocols, where in each protocol the age difference of each pair's faces is equal to a fixed, predefined value, i.e., 5, 10, 20, and 30 years. As a new experiment, we consider attack 1 (i.e., whitebox against the same model) against ArcFace and evaluate the performance of the face recognition model (in terms of true match rate) and our face reconstruction method (in terms of success attack rate) for the different age protocols in this dataset (at FMR=1%):

| | AgeDB-5 | AgeDB-10 | AgeDB-20 | AgeDB-30 |
|---|---|---|---|---|
| Face recognition (TMR) | 98.33 | 98.43 | 97.30 | 97.00 |
| Our face reconstruction (SAR) | 75.80 | 76.26 | 80.63 | 75.98 |

As the results on this dataset show, the performance of the face recognition model and of our face reconstruction method is comparable across the different age protocols of the AgeDB dataset. Therefore, it is reasonable to assume that facial attributes such as wrinkles may not be fully encoded in the facial templates, or even if some level of such information is present, changing these attributes does not significantly change the facial templates. **Reply to Questions 4:** Following the reviewer's suggestion, we prepared a table in our `general response` and further described the adversary's knowledge and the difficulty of each attack scenario. We will include this table in the final version of the paper. We would also like to mention that in research on the security of and attacks against AI systems, evaluations are often not limited to whitebox/blackbox scenarios (against the same system), and the transferability of an attack needs to be investigated to evaluate the robustness of samples generated by an adversary. However, to our knowledge, the transferability of the reconstructed face images in template inversion attacks has not been investigated in the literature.
As a matter of fact, the transferability of reconstructed face images can lead to a critical threat where the adversary can use the reconstructed face image to enter another system. --- Rebuttal Comment 1.1: Comment: Thank the authors for their responses. I don't have any more questions. I hope the mentioned issues can be addressed in the final version. I will keep my final score. --- Reply to Comment 1.1.1: Title: Reply by Authors Comment: We thank the reviewer for their feedback. We are happy that we could answer all the reviewer's questions in our rebuttal. We will definitely include the new analyses reported in our rebuttal in the final version of the paper to address the mentioned issues. We thank the reviewer for their time in reviewing our paper and for the valuable comments, which helped us improve the quality of our paper.
Block Broyden's Methods for Solving Nonlinear Equations
Accept (poster)
Summary: In this work the authors introduce block variants of both good and bad Broyden's methods, which exhibit explicit local superlinear convergence rates. The block good Broyden's method, in particular, demonstrates a faster convergence rate, independent of the condition number, compared to existing Broyden's methods. This is achieved by leveraging multiple rank modifications on the Jacobian estimator. On the other hand, the block bad Broyden's method directly estimates the inverse of the Jacobian, resulting in reduced computational costs during the iteration process. The theoretical findings offer new insights into why the good Broyden's method tends to outperform the bad Broyden's method in most cases. Empirical results further validate the superiority of the proposed methods and affirm the theoretical analysis conducted by the authors. Strengths: * The authors provide explicit convergence rates for the block good Broyden’s update and the block bad Broyden’s update * The block good Broyden’s update can approximate a nonsingular matrix A with a linear rate of $(1-k/d)^t$ and the “bad” update can approximate the inverse matrix $A^{−1}$ with a linear rate of $(1-k/(d \hat{\kappa}^2))^t$ * They propose the first explicit convergence rate for the block bad Broyden’s update. * The assumptions are supported by theory and experiments. Weaknesses: In the experiment section Figure 1 and Figure 2 are a little bit confusing, as for example Figure 1(a) and Figure 1(d) represent the experiments for the same N, so they can be grouped under the same subfigure. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Can you compare the proposed method with the results presented by [1]? * Why did you choose the Chandrasekhar H-equation to conduct the experiments for the given method? [1] Robert M. Gower and Peter Richtárik. Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms. arXiv preprint arXiv:1602.01768, 2016. 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and helpful comments. > In the experiment section Figure 1 and Figure 2 are a little bit confusing as for example Figure 1(a) and Figure 1(d) represents the experiments for the same N so they can be grouped under the same subfigure. **Response:** We thank the reviewer for pointing this out and will improve the figures in the revision based on the suggestion. > Can you compare the proposed method with the results presented by [1]? **Response:** We make a comprehensive comparison to the results presented in [1] following the reviewer's suggestion. Please refer to Section 2 in the global response. > Why did you choose Chandrasekhar H-equation to conduct the experiments for the given method? **Response:** The Chandrasekhar H-equation plays an important role in scientific computing [A, B]. As shown in [C] and [Section 5.6, 20], its discrete version can be used in a wide class of problems of analytical radiative transfer theory. Thus we chose it to verify the empirical performance of our methods. In addition, recent studies [24, 38] on nonlinear equations also perform experiments on the H-equation. **References** [A] Subrahmanyan Chandrasekhar. Radiative Transfer. Dover, New York, 1960. [B] Richard W. Leggett. A new approach to the H-equation of Chandrasekhar. SIAM Journal on Mathematical Analysis, 7(4):542-550, 1976. [C] C. T. Kelley. Approximate methods for the solution of the Chandrasekhar H-equation. Journal of Mathematical Physics, 23(11):2097-2100, 1982. --- Rebuttal Comment 1.1: Comment: I appreciate your responses and the explanation you provided for my questions. I have decided to keep my score as it is.
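For context, the discretized H-equation referenced in this exchange takes a simple closed form. The sketch below is illustrative, not the authors' code: it assumes the standard midpoint-rule discretization described in Kelley's work cited above as [C], and the function name and Picard-iteration solver are our own choices.

```python
import numpy as np

def h_equation_residual(H, c):
    """Residual of the discretized Chandrasekhar H-equation,
        F(H)_i = H_i - (1 - (c / 2N) * sum_j mu_i * H_j / (mu_i + mu_j))^{-1},
    on the midpoint grid mu_i = (i - 1/2) / N.  F(H) = 0 at the solution."""
    N = H.shape[0]
    mu = (np.arange(1, N + 1) - 0.5) / N
    # A[i, j] = mu_i / (mu_i + mu_j)
    A = mu[:, None] / (mu[:, None] + mu[None, :])
    return H - 1.0 / (1.0 - (c / (2 * N)) * (A @ H))

# Sanity check: for c = 0 the solution is H = 1 (zero residual).
# A plain Picard iteration H <- H - F(H) converges for moderate c.
H = np.ones(100)
for _ in range(500):
    H = H - h_equation_residual(H, 0.5)
```

Quasi-Newton methods such as the block Broyden's methods discussed in the rebuttal are applied to the root-finding problem $F(H)=0$ in place of this plain Picard iteration.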
Summary: This paper extends the Broyden family quasi-Newton method to the block setting and shows explicit local convergence rates under mild conditions. More specifically, the authors studied both the “good” and “bad” Broyden algorithms, namely the update on the Hessian/Jacobian and the inverse Hessian/Jacobian respectively. The authors provided some insights on why the “good” update is better than the “bad” update and provided numerical experiments to support their findings. Strengths: The theory is well-rounded and the method is consistent with existing works. The analysis of Algorithm 2 is interesting, which incorporates the condition number of the Jacobian and brings insights on why the convergence of the “bad” Broyden method is worse in practice. Weaknesses: (Please reply to the Questions section directly) First, the “good” Broyden family still computes the inverse of a matrix in the update; second, Assumption 4.1 is imposed on the iterates directly, which is not very good; third, it is not very clear why we consider the block update. Also the numerical experiments are not adequate. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Perhaps the biggest question I have toward the comparison of “good” and “bad” Broyden is that for the good Broyden method, it still computes the inverse $B_{t}^{-1}$, which means that the dependency on the condition number is implicitly incorporated. I’d appreciate hearing from the authors on how this problem is addressed (and how previous literature deals with this problem); 2. In Assumption 4.1, the assumption is imposed on the sequence $\{B_{t}\}$, which is not very good. Is there a possible safeguard mechanism so that the Jacobians are well-defined? This is pretty common in the quasi-Newton literature, such as [1]. 3. Another problem is that the block update seems to lack motivation. Isn’t the case $k=1$ already enough to show the dependency on $\kappa$ for the “bad” Broyden update? 
Certainly the $k$ in each of the convergence result and the numerical experiments show the efficiency of block updates, but usually block updates are for bigger targets such as parallelization or decentralized update. Could the authors give some comment on this direction; 4. The numerical experiments on Chandrasekhar H-equation is not sufficient to show the efficiency of the proposed method. It would be interesting to see applications and experiments on problems with real-world data. In particular, would the “bad” Broyden method be better when the problem dimension is relatively large? References: [1] Wang, Xiao, et al. "Stochastic quasi-Newton methods for nonconvex stochastic optimization." SIAM Journal on Optimization 27.2 (2017): 927-956. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation is well stated in weakness and question sections. I’m not aware of any potential negative social impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and helpful comments. > Perhaps the biggest question I have toward the comparison of “good” and “bad” Broyden is that for the good Broyden method, it still computes the inverse ${\bf B}_t^{-1}$, which means that the dependency on the condition number is implicitly incorporated. I’d appreciate hearing from the authors on how this problem is addressed (and how previous literature deals with this problem); **Response:** Notice that in the analysis of the good Broyden's method, we only care about the error between the estimator ${\bf B}_t$ and the Jacobian ${\bf J}_*$, i.e., $\|{\bf C}({\bf B}_t-{\bf J}_*)\|_F$, rather than the error between ${\bf B}_t^{-1}$ and ${\bf J}_*^{-1}$. Thus the convergence rate of the good Broyden's method is condition-number free. Previous works [25], [38] also established such condition-number-free superlinear rates for quasi-Newton methods on convex optimization and nonlinear equations respectively. Their analysis is also based on controlling the error between the estimator matrix and the exact Jacobian (or Hessian) matrix, which does not incorporate the condition number. On the other hand, the analysis of the bad Broyden's method considers the error between ${\bf H}_t$ and ${\bf J}_*^{-1}$, which incorporates the condition number in the convergence rate. > In Assumption 4.1, the assumption is imposed on the sequence ${\bf B}_t$, which is not very good. Is there a possible safeguard mechanism so that the Jacobians are well-defined? This is pretty common in the quasi-Newton literature, such as [1]. **Response:** We provide discussion on Assumption 4.1, illustrate that it is a reasonable assumption, and give some potential ways to eliminate this assumption. Please refer to the global response (Section 3). > Another problem is that the block update seems to lack motivation. 
Isn’t the case $k=1$ already enough to show the dependency on $\kappa$ for the “bad” Broyden update? Certainly the $k$ in each of the convergence results and the numerical experiments shows the efficiency of block updates, but usually block updates are for bigger targets such as parallelization or decentralized updates. Could the authors give some comments on this direction; **Response:** The block updates are very important for high-performance computing. They can increase the reuse rate of the data in cache and take advantage of parallel computing. For more details, please refer to [A]. We will provide discussion on this topic in the revision. > The numerical experiments on the Chandrasekhar H-equation are not sufficient to show the efficiency of the proposed method. It would be interesting to see applications and experiments on problems with real-world data. In particular, would the “bad” Broyden method be better when the problem dimension is relatively large? **Response:** We have added experiments based on the reviewer's suggestion; please refer to the global response (Section 1). We did not find that the block bad method would be better than the block good method when the dimension is relatively large. **References** [A] Tim Davis. Block matrix methods: Taking advantage of high-performance computers. Technical Report TR-98-024, Computer and Information Sciences Department, University of Florida, 1998. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses to my comments and questions. I still believe that the paper is a useful contribution to the field and will keep my score.
Summary: This paper studies block Broyden's methods for solving nonlinear equation systems and presents explicit local superlinear convergence rates. For the block good Broyden's method, the convergence rate is independent of the condition number of the Jacobian matrix at the solution, whereas that of the block bad Broyden's method depends heavily on the condition number. Numerical experiments validate the theoretical analysis. Strengths: - The authors provided explicit local superlinear convergence rates for the block good Broyden's method and the block bad Broyden's method. The rates improve previous results and reveal the advantage of block updates. - The established convergence results give new understanding of the performance difference between the good and bad Broyden's methods. - The paper is clearly written and well-organized. Weaknesses: - In Algorithm 1 and Algorithm 2, the Jacobian matrix is explicitly needed even when $k=1$, while that is not the case for classic Broyden's methods. - The convergence rate depends on the dimension. When $d\gg 1$, the rate $1-1/d$ is close to $1$ and the convergence will be slow. - Essentially the algorithms and analysis are the block version of previous work [38]. Although the comparison with [38] is included, it is still not so clear if this generalization (from rank $1$ to rank $k$) is straightforward. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - The rank $k$ is not defined in contribution 1 when it is first used. - What is the definition of $\hat{\kappa}$ in line 51, equation (8) and Table 2? - I suggest clarifying the meaning of $\kappa$ in line 59. - In line 78, $x^*$ is defined as 'the solution'. I suggest clarifying the uniqueness of the solution for problem (1). The nonlinear equation (1) may have multiple solutions. - What is the meaning of the bracket notation, for example that used in Table 1? 
- What is the meaning of $\operatorname{e}$'s in equation (11), (12), (17) and (18)? - Typos: line 24, 'large-scale'; it seems to be '$d$' (other than '$\operatorname{d}$') in equation (11), (17) and (18). - Typos: line 94 summarized **in** Table 2. - Typos: line 150, our BGB algorithm is better **than** greedy ... - Please check the format of the reference list, particularly the letter case (in [3], [5], [9], [13]) and the math symbol (in [15]), etc. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and helpful comments. > In Algorithm 1 and Algorithm 2, the Jacobian matrix is explicitly needed even when $k=1$, while that is not the case for classic Broyden's methods. **Response:** Our algorithms **do not** require the full information of the Jacobian matrix. When updating the Jacobian estimator by the block updates (line 6 of Algorithms 1 and 2), we only need to calculate $k$ columns of the Jacobian matrix, which are selected by the sampling matrix ${\bf U}_t$. Since $k\ll d$, it is cheap to obtain this partial information of the Jacobian. Using this partial information of the Jacobian can significantly improve the convergence rates compared to the classical Broyden's methods (see Table 1). The efficiency of the proposed methods is also validated in our experiments. As a result, although our methods need to calculate a bit more information per iteration, they require less running time than the classical ones (see Figure 1 (d), (e), (f)). > The convergence rate depends on the dimension. When $d\gg 1$, the rate $1-1/d$ is close to 1 and the convergence will be slow. **Response:** Our block good Broyden's method has a rate of $\mathcal{O}((1-k/d)^{t(t-1)/4})$, while the classical Broyden's method converges at a rate of $\mathcal{O}(1/{t}^{t/2})$. Hence, our method is faster than the classical one if the dimension satisfies $d \le \mathcal{O}(kt/\ln(t))$. On the other hand, such dimension-dependent rates commonly appear in greedy or random quasi-Newton methods, even for convex minimization problems [A, 24]. > Essentially the algorithms and analysis are the block version of previous work [38]. Although the comparison with [38] is included, it is still not so clear if this generalization (from rank 1 to rank $k$) is straightforward. **Response:** The generalization from rank-$1$ to rank-$k$ is not straightforward. 
None of the previous works on block updates demonstrates the superiority of using the rank-$k$ update over the rank-$1$ update. For example, [18] proposes the block bad Broyden's update for matrix approximation, but its analysis only provides an implicit linear rate ([Section 8.3, 18]), while our analysis (Theorem 3.2) clearly shows the advantage of using the rank-$k$ update. In addition, our work not only improves the convergence rates by using the block updates compared with [38], but also weakens the initial condition by carefully choosing the measure (line 135) and generalizing the lemma for the matrix approximation (Theorem 3.1 and Remark 3.3), where we allow any nonsingular matrix ${\bf C}$ in equations (5) and (6). We show that our initial condition is strictly weaker than that in [38] (lines 149-157). Also, [38] only gives the random or greedy versions of the good Broyden's method, while we present the block bad Broyden's methods, which are novel and show for the first time that the estimator matrix converges to ${\bf J}_*^{-1}$. > The rank $k$ is not defined in contribution 1 when it is first used. **Response:** $k$ is the rank of the difference between the updated matrix and the original matrix in the matrix approximation update in contribution 1. We will clarify this in the revision. > What is the definition of $\hat{\kappa}$ in line 51, equation (8) and Table 2? **Response:** In line 51, equation (8) and Table 2, we define $\hat{\kappa}=\sigma_{\max}({\bf A})/\sigma_{\min}({\bf A})$, where $\sigma_{\max}({\bf A})$ and $\sigma_{\min}({\bf A})$ are the largest and smallest singular values of a given non-singular matrix ${\bf A}$ respectively. > I suggest clarifying the meaning of $\kappa$ in line 59. **Response:** In line 59, we will define $\kappa$ as the condition number of ${\bf J}({\bf x}^*)$. > In line 78, ${\bf x}^*$ is defined as 'the solution'. I suggest clarifying the uniqueness of the solution for problem (1). The nonlinear equation (1) may have multiple solutions. 
**Response:** Thanks for the advice, we will clarify the uniqueness of the solution of equation (1). > What is the meaning of the bracket notation, for example that used in Table 1? **Response:** The notation $[d]$ means $\{1,2,\cdots,d\}$. > What is the meaning of $e$'s in equations (11), (12), (17) and (18)? **Response:** The notation $e$ in equations (11), (12), (17) and (18) represents the Euler constant, i.e., $e=2.718\ldots$. We thank the reviewer for the valuable suggestions on improving the presentation; we will incorporate them in the revision. We will fix the typos and check the format of the reference list. **References** [A] Anton Rodomanov and Yurii Nesterov. Greedy quasi-Newton methods with explicit superlinear convergence. SIAM Journal on Optimization, 31(1):785–811, 2021. --- Rebuttal Comment 1.1: Comment: Thank you for the response. After reading the other reviews and the rebuttal, I have decided to maintain my current score as it stands.
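To make the column-sampling point from these rebuttals concrete: when the sampling matrix ${\bf U}_t$ consists of $k$ columns of the identity, the block good Broyden's update for matrix approximation, ${\bf B}_{t+1}={\bf B}_t+({\bf A}-{\bf B}_t){\bf U}_t({\bf U}_t^{\top}{\bf U}_t)^{-1}{\bf U}_t^{\top}$, simply overwrites the $k$ selected columns of the estimator, which is why only $k$ columns of the target matrix are ever evaluated and why the expected squared error contracts by the factor $1-k/d$ per step. A minimal numerical sketch (illustrative names, not the authors' code):

```python
import numpy as np

def block_good_broyden_step(B, A, idx):
    # With U_t formed by k identity columns, the update
    #   B <- B + (A - B) U (U^T U)^{-1} U^T
    # reduces to replacing the selected columns of B by those of A,
    # so only k columns of A (the "Jacobian") are evaluated per step.
    B = B.copy()
    B[:, idx] = A[:, idx]
    return B

rng = np.random.default_rng(0)
d, k = 100, 10
A = rng.standard_normal((d, d))   # target matrix (stand-in for J_*)
B = np.zeros((d, d))              # initial estimator
errors = []
for t in range(30):
    idx = rng.choice(d, size=k, replace=False)
    B = block_good_broyden_step(B, A, idx)
    errors.append(np.linalg.norm(B - A, "fro"))
# In expectation, ||B_t - A||_F^2 shrinks by the factor (1 - k/d) each step,
# matching the (1 - k/d)^t linear rate discussed in the reviews.
```

Since each step copies exact columns of $A$, the Frobenius error is monotonically non-increasing along the iterates, and the randomness only affects which columns remain unmatched.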
Summary: The paper proposes variants of block Broyden's methods for solving nonlinear equations. Explicit convergence rates are established, and the numerical experiment validates the theoretical analysis. Strengths: The paper provides explicit convergence rates of the proposed block Broyden's methods. The theoretical analysis is sound, and the results improve some previous results related to Broyden's methods. The claims are supported by the numerical results. Weaknesses: Although this paper makes a contribution to the theoretical study of Broyden's method, there are some outstanding weaknesses that may outweigh the strengths. 1. The novelty of the proposed block Broyden's method is limited. The proposed methods are very similar to the ones given in [1] (Section 7 and Table 8.1 in [1]). The distinction should be clearly explained in the main paper. 2. The theoretical analysis is limited. Only local convergence is established. There is no discussion about the global convergence. 3. The implementation details of the algorithms are totally missing. It is likely that these algorithms cannot be efficiently applied to real-world applications. For example, each step in Algorithm 1 and Algorithm 2 needs information about the Jacobian matrix, which can incur significant overhead compared with the classical quasi-Newton methods based on secant equations (as shown in Figure 1). 4. The memory cost can be one of the biggest obstacles for the proposed methods to solving high-dimensional problems since the approximate Jacobian matrix or the approximate inverse Jacobian matrix of dimension $d^2$ needs to be maintained. However, there is no discussion about this severe issue. 5. The experimental part needs to be improved. The algorithms were only tested on a simple problem, so it is desirable to consider more different problems to make the claims more convincing. Besides, some well-known methods, e.g., the Jacobian-free Newton-Krylov method, were not compared. 
Regarding the writing, it is better to reorganize the materials to focus on the main contributions. For example, it is rather strange to analyze the convergence of the block Broyden's update first in Section 3, which is not the algorithm proposed in this paper. Some typos: 1. Line 29: approximate → approximates, update → updates 2. Line 150: better → better than 3. Line 209: The book [31] has no content about the Chandrasekhar H-equation. 4. Line 354: The equality is not correct. [1] R.M. Gower and P. Richtárik. Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms. SIAM J. Matrix Anal. Appl., 2017. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Assumption 4.1 seems to be a strong assumption. Is it possible that the nonsingularity of the matrices is guaranteed by the iterative schemes themselves? 2. Condition (10) and condition (16) require that the initial Jacobian approximation is sufficiently close to the exact Jacobian matrix. Is it a realistic assumption? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The biggest limitation of the proposed algorithms is the large memory usage and high computational cost, but there is no discussion of this issue. It is likely that these algorithms are not suitable for solving large-scale nonlinear equations in practice. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and helpful comments. > The novelty of the proposed block Broyden's method is limited. The proposed methods are very similar to the ones given in [1] (Section 7 and Table 8.1 in [1]). The distinction should be clearly explained in the main paper. **Response:** We present a comprehensive comparison to Gower and Richtárik's work and clearly explain the distinction. Please refer to the global response (Section 2). > The theoretical analysis is limited. Only local convergence is established. There is no discussion about the global convergence. **Response:** Although our paper only presents local convergence rates, we think such theoretical results are significant. We improve the local superlinear convergence for nonlinear equations: the state-of-the-art rate is $\mathcal{O}((1-1/d)^{t(t-1)/4})$, while ours is $\mathcal{O}((1-k/d)^{t(t-1)/4})$ (see Table 2). We also improve the local condition of the state-of-the-art method (see lines 149-157). On the other hand, our method only assumes the non-degeneracy of ${\bf J}({\bf x}_*)$ (Assumption 2.1) and the continuity of ${\bf J}({\bf x})$ along the path to ${\bf x}_*$ (Assumption 2.2). Hence, we think it is reasonable to focus on the local behavior around ${\bf x}_*$. To the best of our knowledge, no global superlinear rate has been established for solving nonlinear equations under such mild assumptions. Though global superlinear convergence of quasi-Newton methods has been established for convex optimization very recently [A], it is still unknown whether general nonlinear equation problems admit similar results. We conjecture that it is possible to generalize the idea of [A] and use a line search strategy to establish global convergence for solving nonlinear equations. We will study this in the future. We are happy to incorporate more discussion based on the above response in the revision. > The implementation details of the algorithms are totally missing. It is likely that ... 
**Response:** Thank you for your suggestion. We will add the implementation details of the algorithms in the revision. Actually, our algorithms **do not** require the full information of the Jacobian matrix. When updating the Jacobian estimator by the block updates (line 6 of Algorithms 1 and 2), we only need to calculate $k$ columns of the Jacobian matrix, which are selected by the sampling matrix ${\bf U}_t$. Since $k\ll d$, it is cheap to obtain this partial information of the Jacobian. Using this partial information of the Jacobian can significantly improve the convergence rates compared to the classical Broyden's methods (see Table 1). The efficiency of the proposed methods is also validated in our experiments. As a result, although our methods need to calculate a bit more information per iteration, they require less running time than the classical ones (see Figure 1 (d), (e), (f)). > The memory cost can be one of the biggest obstacles for the proposed methods to solving high-dimensional problems since the approximate Jacobian matrix or the approximate inverse Jacobian matrix of dimension $d^2$ needs to be maintained. However, there is no discussion about this severe issue. **Response:** Quasi-Newton methods always require $\mathcal{O}(d^2)$ space complexity to store the estimator of the Jacobian (or of its inverse) to achieve superlinear convergence rates and efficient computation. To reduce the space complexity, limited-memory quasi-Newton methods have been developed for solving convex optimization (a very special case of solving nonlinear equations). It would be interesting to study limited-memory methods for solving nonlinear equations in future work. We are happy to include this discussion in the revision. > The experimental part needs to be improved. **Response:** We have added experiments based on the reviewer's suggestion; please refer to the global response (Section 1). > Assumption 4.1 seems to be a strong assumption. 
Is it possible that the nonsingularity of the matrices is guaranteed by the iterative schemes themselves? **Response:** We provide discussion on Assumption 4.1, illustrate that it is a reasonable assumption, and give some potential ways to eliminate this assumption. Please refer to the global response (Section 3). > Condition (10) and condition (16) require that the initial Jacobian approximation is sufficiently close to the exact Jacobian matrix. Is it a realistic assumption? **Response:** The condition that the initial Jacobian approximation is sufficiently close to the exact Jacobian matrix is standard for establishing local convergence rates of Broyden's methods ([Theorem 11.5, 31], [Theorem 1, 24] and [Theorem 4.3, 38]). Furthermore, the initial condition of Algorithm 1 is weaker than that of the state-of-the-art method [38] (lines 150-161) for solving the nonlinear equations. On the other hand, we can use the matrix approximation technique in [18] to achieve a sufficiently accurate initial estimator of ${\bf J}({\bf x}_0)$ (or $[{\bf J}({\bf x}_0)]^{-1}$). Since we assume that ${\bf x}_0$ is close to ${\bf x}^*$, condition (10) (or condition (16)) is always satisfied and thus is a realistic assumption. **Response on the writing and typos:** We will incorporate the reviewer's valuable suggestions on improving the presentation in the revision and also fix the typos. * For line 209, the reference should be [20] (see Section 5.6 of [20]) rather than [31]. * For line 354, the equality should be fixed as $$ {\bf J}_*^{\top}{\bf J}_* = {\bf J}({\bf x})^{\top}{\bf J}({\bf x}) + ({\bf J}_*^{\top}{\bf J}_* -{\bf J}({\bf x})^{\top}{\bf J}({\bf x})). $$ **References** [A] Ruichen Jiang, Qiujiang Jin, Aryan Mokhtari. Online learning guided curvature approximation: A quasi-Newton method with global non-asymptotic superlinear convergence. Conference on Learning Theory, PMLR, 195:1962-1992, 2023. --- Rebuttal Comment 1.1: Comment: Thanks for your response. 
While the limitation of the theory is clarified to some extent, the main issue of memory cost is not addressed. I still believe the $\mathcal{O}(d^2)$ extra memory usage is the biggest obstacle that makes these methods not applicable for solving high-dimensional problems. Gower and Richtárik's work considers the inverse of a matrix, so it is acceptable to form the matrix explicitly. However, for solving the nonlinear equations considered in this manuscript, explicitly maintaining an approximation of the Jacobian or inverse Jacobian matrix in memory is very costly and should be avoided in the algorithms due to its poor scalability and potential failure when the machine resource is limited. In practice, the limited-memory Broyden's methods are often preferable to the full-memory Broyden's methods by balancing convergence and memory usage. With regard to experiments, the provided tests are small-scale, which may not be desirable since the nonlinear equations arising in realistic applications are often of high dimension. So I decided to keep the score unchanged. --- Reply to Comment 1.1.1: Comment: Thanks for your follow-up response. We politely disagree with the reviewer's comment on the limitation of memory cost. We provide a detailed rebuttal as follows. **1. Discussion on the $\mathcal{O}(d^2)$ extra memory usage** We think the $\mathcal{O}(d^2)$ extra memory usage is reasonable. * In general, a non-degenerate mapping $F:{\mathbb R}^d\to {\mathbb R}^d$ requires at least $\mathcal{O}(d^2)$ memory to store the information of $F$, which is unavoidable for nonlinear equations. This means the memory cost of $\mathcal{O}(d^2)$ in our algorithm (or the smaller cost in limited-memory methods) does not affect the order of the total memory cost. * Furthermore, even for solving the linear equations ${\mathbf A}{\mathbf x}=\mathbf{b}$ with ${\mathbf A}\in{\mathbb R}^{d\times d}$ and ${\mathbf b}\in{\mathbb R}^{d}$, we also require $\mathcal{O}(d^2)$ memory to store $\mathbf{A}$. 
This implies that taking a memory cost of $\mathcal{O}(d^2)$ to solve the more difficult nonlinear equations is reasonable. **2. Discussion on inverting the matrix and solving nonlinear equations** * As we mentioned above, the memory cost of $\mathcal{O}(d^2)$ cannot be avoided to store the information of a general nonlinear mapping. This is similar to how finding the inverse of $\mathbf{A} \in \mathbb{R}^{d\times d}$ requires $\mathcal{O}(d^2)$ memory to store the information of $\mathbf{A}$. * Even if we focus on the problem of inverting a matrix, we still improve the theoretical results of the Broyden's update provided by Gower and Richtárik. Please refer to Section 2 of the global response. **3. Discussion on the limited-memory Broyden's methods** To the best of our knowledge, limited-memory Broyden's methods lack explicit superlinear convergence rates like ours, and there is no analysis that clearly shows how to balance the convergence and memory usage of limited-memory Broyden's methods. Few works have made attempts at this point. For example, Ziani and Guyomarc’h [A] proposed a limited-memory Broyden's method with an adaptive number of curvature pairs. However, their method needs to increase the dimension of the approximation to $d$ to achieve asymptotic superlinear convergence, which means it also requires the extra $\mathcal{O}(d^2)$ memory. Additionally, they have not provided any explicit convergence rate nor any theory on how to balance convergence and memory. Of course, we will be happy to include more discussion in the rebuttal phase if the reviewer provides additional references on the theory of limited-memory methods. In summary, we think the study of limited-memory Broyden's methods is appropriately left to future work and cannot be viewed as a main weakness of our paper. **References** [A] Mohammed Ziani and Frédéric Guyomarc’h. An autoadaptative limited memory Broyden’s method to solve systems of nonlinear equations. 
Applied Mathematics and Computation 205.1 (2008): 202–211.
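To make the memory discussion above concrete, here is a minimal sketch (not the paper's implementation) of the full-memory "good" Broyden update, whose dense $d\times d$ estimate ${\bf B}$ is exactly the $\mathcal{O}(d^2)$ storage being debated; the linear test system is a hypothetical example.

```python
import numpy as np

def broyden_good(F, x0, iters=50, tol=1e-10):
    # Full-memory "good" Broyden method: keeps a dense d-by-d Jacobian
    # estimate B, which is exactly the O(d^2) storage discussed above.
    x = np.array(x0, dtype=float)
    B = np.eye(x.size)                   # d*d floats -> O(d^2) memory
    Fx = F(x)
    for _ in range(iters):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)      # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)   # rank-one secant update
        x, Fx = x_new, F_new
    return x

# Toy example (hypothetical): a nonsingular linear system A x = b,
# written as the root-finding problem F(x) = A x - b = 0.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
F = lambda x: A @ x - b
root = broyden_good(F, np.zeros(2))      # converges to (2, 3)
```

A limited-memory variant would instead keep only a short history of $(s, y)$ pairs, trading the $\mathcal{O}(d^2)$ matrix for $\mathcal{O}(md)$ storage, which is the trade-off discussed in the exchange above.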
Rebuttal 1: Rebuttal: We thank all the reviewers for their detailed and helpful comments. We respond to the common issues raised by the reviewers here. ### **1. Additional Experiments** We have compared the performance of the JFNK (Jacobian-Free Newton-Krylov) method with ours on the H-equation in Figure 1. We observe that our method outperforms JFNK since JFNK is unstable and very sensitive to the subspace dimension $r$. In addition, JFNK does not have superlinear convergence rates. We also test the proposed block Broyden's methods with different block sizes $k\in\{1, 80, 500\}$ on the H-equation, where we set a relatively large problem dimension $N=2000$, and present the results in Figure 2. To verify the efficiency of our methods on real-world data, we adopt quasi-Newton methods to solve the classical logistic regression problem $$ \min\_{{\bf x}\in \mathbb{R}^d} f({\bf x}) = \frac{1}{n}\sum\_{i=1}^n \ln(1+\exp(-b\_i{\bf a}\_i^{\top}{\bf x})) + \frac{\lambda}{2}\|\|{\bf x}\|\|^2, $$ which requires solving the following nonlinear equations: \begin{align*} \lambda {\bf x} - \frac{1}{n}\sum_{i=1}^{n}\frac{\exp{(-b_i{\bf a}_i^{\top}{\bf x})}}{1+\exp(-b_i{\bf a}_i^{\top}{\bf x})}\cdot b_i{\bf a}_i = {\bf 0}. \end{align*} We compare the proposed methods BGB and BBB with the GB-Cl, BB-Cl, GB-Ra, and JFNK methods. We do not compare with GB-Gr because it uses the greedy strategy in [38] to choose ${\bf U}\_t$, which requires access to the full Jacobian and is thus very expensive in practice. We set the initial Jacobian estimator ${\bf B}_0=\bf I$ for all cases, validate our methods on two real-world datasets, a9a and w8a, from the LIBSVM collection [A], and present the results in Figure 3. The results demonstrate that the proposed BGB method also outperforms the baselines significantly for logistic regression. We will add these additional empirical results to the revision. ### **2. 
Comparison to Gower and Richtárik's work** Gower and Richtárik's work [B] mainly focuses on approximately computing the inverse of a matrix. Their theoretical analysis only provides the approximation error of the inverse matrix. On the other hand, our work focuses on solving general nonlinear equations, which is more challenging. We provide a convergence analysis for solving nonlinear equations, which is not included in Gower and Richtárik's work at all. Even for matrix approximation, our theoretical results improve the results of [B]. - For the block good Broyden's update, we provide the rate of $\mathcal{O}((1-k/d)^t)$ while [Section 5, 18] (the arXiv version of [B]) only provides the rate of $\mathcal{O}((1-1/d)^t)$ for the case of $k=1$. - For the block bad Broyden's update, we provide the explicit rate of $\mathcal{O}((1-k/(d\hat{\kappa}^2))^t)$ while [Remark 7.2, B] (or [Section 8.3, 18]) only gives an implicit rate of $\mathcal{O}((1-\rho)^t)$ where $\rho = 1/\kappa_{2,F}({\bf A}{\bf S})$ can be arbitrarily small. - We evaluate our convergence property on a more general measure than that of Gower and Richtárik [18, B]. Their results hold for $\mathbb{E}[\|\|{\bf B}_t-{\bf A}\|\|_F^2]$ and $\mathbb{E}[{\|\|{\bf H}_t-{\bf A}^{-1}\|\|^2_F}]$ while our results hold for $\mathbb{E}[{\|\|{\bf C}({\bf B}_t-{\bf A})\|\|^2_F}]$ and $\mathbb{E}[{\|\|{\bf C}({\bf H}_t-{\bf A}^{-1})\|\|^2_F}]$ for any non-singular matrix ${\bf C}$. Please also refer to Table 2 and Remark 3.3 in our paper. ### **3. Discussion on Assumption 4.1** Assumption 4.1 is standard and widely used in the analysis of Broyden's methods (see Theorem 8.2.4 of [13] and Assumption 2 of [38]). We think it is reasonable to follow such a common assumption. It is possible to remove Assumption 4.1 if we adjust the initial conditions (10) (or (16)). 
Intuitively, based on the non-singularity of ${\bf J}\_\*$ and the point-wise smoothness of ${\bf J}(\cdot)$, we can find some local region $\Omega$ such that ${\bf J}({\bf x})$ is non-singular for all ${\bf x}\in\Omega$. When ${\bf B}\_0$ (or ${\bf H}\_0$) is sufficiently close to ${\bf J}\_\*$ (or ${\bf J}\_\*^{-1}$), we can guarantee the non-singularity of ${\bf B}\_0$. Then, based on Theorem 3.1 (or 3.2), ${\bf B}\_{t+1}$ stays close to ${\bf J}\_{t+1}$, which guarantees the non-singularity of ${\bf B}\_{t+1}$ throughout the iterative scheme. We thank reviewer XomP for pointing out the safeguard mechanism in [C], which can keep the Jacobians well-defined. However, the methods in [C] aim to solve minimization problems where the Jacobian (Hessian matrix) is symmetric, while the Jacobian matrix of nonlinear equations may be asymmetric. It would be interesting to study how to incorporate the mechanism of [C] into solving nonlinear equations in the future. We will include more discussion based on the above responses in the revision. **References** [A]. Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(27):1–27, 2011. [B]. Robert M. Gower and Peter Richtárik. Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms. SIAM Journal on Matrix Analysis and Applications, 38(4):1380–1409, 2017. [C]. Xiao Wang, Shiqian Ma, Donald Goldfarb, and Wei Liu. Stochastic quasi-Newton methods for nonconvex stochastic optimization. SIAM Journal on Optimization, 27(2):927–956, 2017. Pdf: /pdf/3916ffa372bfe5916a8345d1e80b19975f9b1992.pdf
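As an illustration of the reduction used in the additional experiments above, the following sketch (with synthetic data and hypothetical names, not the authors' code) writes the regularized logistic-regression gradient as a residual $F({\bf x})$ whose root is the minimizer, and drives $\|F({\bf x})\|$ toward zero with plain gradient descent.

```python
import numpy as np

def logistic_residual(x, A, b, lam):
    # F(x) = lam*x - (1/n) * sum_i sigmoid(-b_i a_i^T x) * b_i * a_i,
    # i.e. the gradient of the regularized logistic loss; its root is the minimizer.
    z = -b * (A @ x)                      # z_i = -b_i a_i^T x
    s = 1.0 / (1.0 + np.exp(-z))          # sigmoid(z_i) = exp(z_i) / (1 + exp(z_i))
    return lam * x - A.T @ (s * b) / len(b)

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 0.1                   # synthetic problem sizes (illustrative)
A = rng.standard_normal((n, d))           # rows are the feature vectors a_i
b = np.sign(rng.standard_normal(n))       # labels in {-1, +1}

x = np.zeros(d)
for _ in range(500):                      # plain gradient descent drives F(x) to 0
    x -= 0.5 * logistic_residual(x, A, b, lam)
residual_norm = np.linalg.norm(logistic_residual(x, A, b, lam))
```

A quasi-Newton method such as the Broyden variants discussed in the rebuttal would replace the fixed-step update with steps computed from a Jacobian estimate of this same residual.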
NeurIPS_2023_submissions_huggingface
2023
Equivariant Flow Matching with Hybrid Probability Transport for 3D Molecule Generation
Accept (poster)
Summary: The authors introduce a novel method named Equivariant Flow-Matching (EquiFM) for generating 3D molecules, aiming to enhance both categorical features (atom types) and continuous features (atom coordinates). The authors highlight the current limitations of diffusion models, particularly their instability and sampling inefficiencies. EquiFM improves upon these by using a flow-matching objective to stabilize the generative probability path of atom coordinates. Furthermore, it introduces a hybrid generative path to handle different modalities in the atomic feature space. This model utilizes an efficient ODE solver to enhance inference efficiency compared to existing SDE simulations. The authors report an improved performance with EquiFM, showing up to a 7% higher validity rate for large biomolecules and an average speed-up of 4.75x. Strengths: 1. Equivariant Optimal Transport (EOT) is a novel approach to atom alignment, minimizing straight-line distance between paired atoms across all rotations and alignments. 2. EOT-based training objective is invariant to initial translations and rotations, increasing the model's robustness against variances in sampled noise and data points. 3. The iterative algorithm proposed for obtaining the EOT map is grounded in proven techniques (Hungarian and Kabsch algorithms), enhancing the solution's efficacy. 4. The unique approach of aligning information quantity changes in the probability paths for the variables ensures better modeling of the joint variable. Weaknesses: 1. The application of the Hungarian and Kabsch algorithms for obtaining the EOT map may introduce computational overhead due to their iterative nature, affecting efficiency. 2. As the variables' probability paths are set independently, there might be cases when certain relationships or interactions between variables aren't accurately captured. 3. 
The document assumes prior knowledge of the topic and includes numerous technical terms and mathematical formulas, potentially making it inaccessible to a broader audience, for example, computational chemists and drug discovery scientists. 4. The use of a single rotation matrix for all possible rotations and alignments could potentially miss nuanced differences in complex data sets. 5. The model's complexity may lead to challenges in implementation, particularly in situations with less computational resources. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. What is the computational complexity of the EquiFM model, and how does it scale with increasing molecule size, which could limit the applicability of EquiFM to peptides and antibodies? 2. How might the performance of EquiFM change when applied to datasets with different distribution characteristics compared to QM9 and GEOM-DRUG? 3. What steps have been taken to ensure the chemical feasibility and synthetic accessibility of the generated molecules? 4. How was the model's hyperparameter configuration determined? Would a different configuration change the results significantly? 5. How would the model performance be affected if evaluated with other key metrics, such as novelty or drug-likeness? 6. Can the authors provide more detailed information on the hardware used for model training and inference, such as the specific GPU model, amount of RAM, and the number of parallel processes? 7. On what basis was the 4.75x speedup of the model calculated? Is this in comparison to a specific baseline model or an average across several? Could the authors elaborate on the exact methodology used for this calculation? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: While the study introduces an innovative approach to the generation of 3D molecules and provides a rigorous comparison with baseline models, there are several notable limitations that warrant further exploration. 1. A significant limitation of the current study is the relatively narrow scope of the references and baseline metrics utilized. Furthermore, the inclusion of more diverse baseline metrics would strengthen the validity of comparisons and aid in determining the true efficacy of the EquiFM model. For instance, discussing chemical similarity metrics between generated molecules and the training dataset could offer a more nuanced view of the model's capabilities. 2. The current evaluation metrics do not include measures such as synthetic accessibility, which assesses the ease with which a generated molecule can be physically synthesized in a laboratory. Tools like RDKit offer a proxy metric to estimate synthetic accessibility (https://github.com/rdkit/rdkit/blob/master/Contrib/SA_Score/sascorer.py), and the lack of such consideration is a notable gap, especially in relation to the DRUG dataset. 3. While the EquiFM model has shown promising results on the datasets used in this study, it remains unclear how it would perform on more diverse or complex molecular data. Expanding tests to include datasets like MOSES (https://github.com/molecularsets/moses) would allow a more comprehensive evaluation of the model's generalization capabilities. In conclusion, while the EquiFM model has shown promise in generating 3D molecules, there is room for substantial expansion and improvement in future studies, particularly in the breadth of references and baseline metrics, and the inclusion of practical and diverse evaluation metrics. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
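For readers unfamiliar with the EOT construction praised in the review above, here is a minimal sketch of its two ingredients on tiny toy clouds: the optimal rotation comes from the Kabsch algorithm, while the assignment step is brute-forced over permutations (standing in for the Hungarian algorithm the paper uses); all names and data are illustrative, not the authors' implementation.

```python
import itertools
import numpy as np

def kabsch(P, Q):
    # Rotation R minimizing sum_i ||R p_i - q_i||^2 for already-paired rows.
    U, _, Vt = np.linalg.svd(P.T @ Q)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    return Vt.T @ D @ U.T

def eot_map(X, Y):
    # Equivariant-OT map for tiny clouds: enumerate all pairings
    # (standing in for the Hungarian algorithm) and solve the optimal
    # rotation for each pairing with Kabsch, keeping the minimum cost.
    X = X - X.mean(0)
    Y = Y - Y.mean(0)                    # move both clouds to zero center of mass
    best = None
    for p in itertools.permutations(range(len(Y))):
        R = kabsch(X, Y[list(p)])
        cost = ((X @ R.T - Y[list(p)]) ** 2).sum()
        if best is None or cost < best[0]:
            best = (cost, p, R)
    return best

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 3))
X -= X.mean(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                        # make Q a proper rotation
Y = (X @ Q.T)[[2, 0, 3, 1]]              # rotate X, then shuffle the rows
cost, perm, R = eot_map(X, Y)            # recovers a zero-cost alignment
```

Brute-force enumeration is only feasible for a handful of atoms; the Hungarian algorithm replaces it with a polynomial-time assignment step in the iterative scheme the review refers to.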
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive and insightful comments. We address all your concerns about the computational complexity, the evaluation, and the details of the methods in the following paragraphs, and any further discussions are welcome. ### Q1. Computational Complexity and Applicability for Larger Molecules Due to the page limit, please refer to Q1 of the response to Reviewer zq9q for the detailed discussion of the complexity issues. We also provide extensive studies on the relationship between complexity and molecule size; the full curve can be found in the uploaded PDF. ### Q2. Limitations on Capturing the Relationship with Independent Probability Paths According to the conditional flow matching theorem (Theorem 1 in [3]), one can utilize the conditional probability path $p(x_t,h_t|x_1, h_1)$ to learn the joint marginal flow $p(x_1,h_1)$. It should be emphasized that this statement is applicable regardless of any requirements on the distribution $p(x_1,h_1)$ or the correlation between $x$ and $h$. Additionally, there are no restrictions on the form of the conditional probability path, which allows for a valid and unbiased choice of learning any joint distribution with $p(x_t|x_1,h_1) = p(x_t|x_1)$. To provide further intuition, it is worth noting that the commonly used diffusion path for image generation also fits into the above formula when considering $x$ and $h$ as two pixels within the same image. ### Q3. The limitation of using a single rotation matrix As the reviewer mentioned, a single rotation matrix could lack the capacity to capture details when the point cloud contains more nodes and a more complex structure. Fortunately, such a limitation can be feasibly addressed by extending the proposed EOT map to a hierarchical version. 
For example, we could first compute the global EOT map with a rotation matrix to align only the center positions of fragments/scaffolds, and then calculate the local EOT map among the nodes inside each fragment. ### Q4. The performance change of EquiFM on distributions with different characteristics EquiFM is likely to excel on datasets with diverse conformation distributions due to the EOT map's ability to reduce the learning space, minimizing optimization variance. However, modeling distributions where atom type and 3D structure are not strongly correlated, such as antibody variable regions, may pose challenges for the current version of EquiFM. ### Q5. How to ensure chemical feasibility and synthetic accessibility with EquiFM For a fair comparison with other fundamental models, we do not explicitly involve components for chemical feasibility and synthetic accessibility in the current version. However, EquiFM can be flexibly extended to ensure feasibility and synthetic accessibility with slight extra effort. For example, we could add an extra guidance component on the vector field, as in [2], to optimize the corresponding chemical feasibility and synthetic accessibility. ### Q6. Sensitivity to hyperparameter settings We experimented with different settings for several optimization-related hyperparameters, e.g., learning rate and batch size, and found no significant impact from these hyperparameters. ### Q7. Detailed configuration of the hardware The configuration of our server is: CPU: Intel(R) Xeon(R) Platinum 8362 CPU @ 2.80GHz / 10 cores; GPU: 1 Nvidia 3090 GPU with 24 GB GPU memory; Memory: 20 GB. ### Q8. On what basis was the 4.75x speedup of the model calculated? Here the 4.75x speedup is compared to EDM [1], an advanced diffusion model for this task with promising performance. 
Note that EDM uses a fixed number of diffusion steps, e.g., sampling every molecule uses 1000 forward calls of the EGNN in the paper. For our model, the generation process is essentially solving the neural ODE of the learned vector field. With adaptive ODE solvers (dopri5), this process can be accelerated by skipping repeated evaluations where the vector field does not change significantly. In this way, the adaptive ODE solver gains acceleration. As the two models share the same EGNN architecture, we calculate the acceleration factor as the number of forward calls of EDM divided by that of EquiFM. ### Q9. More extensive evaluation metrics As suggested, we add the following extra evaluation metrics: quantitative estimate of drug-likeness (QED), retrosynthetic accessibility (RA), medicinal chemistry filter (MCF), synthetic accessibility score (SAS), molecular weight (MW), LogP, and novelty. Besides, we evaluate the conformation energy distance to assess the quality of the 3D conformations. The experimental results can be found in the following: | Methods | QED ($\uparrow$) | RA ($\uparrow$) | MCF ($\uparrow$) | SAS ($\downarrow$) | $$\Delta$$ MW ($\downarrow$) | $$\Delta$$ LogP ($\downarrow$) | Conformation Energy Distance ($\downarrow$) | Novelty ($\uparrow$) | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | EDM | 0.608 | 0.441 | 0.621 | 4.054 | 0.566 | 23.71 | 0.2180 | 0.791 | | EquiFM | 0.627 | 0.519 | 0.693 | 3.893 | 0.478 | 19.54 | 0.2081 | 0.834 | Here $\Delta$ stands for the difference compared to the ground-truth distribution. To obtain valid molecules on GEOM-DRUGS, for both models we sample hydrogen coordinates with the help of RDKit, following previous work [1]. ### Q10. Extended experimental results on other datasets After carefully checking the data and introduction of MOSES, we find that MOSES mainly contains 2D information, e.g., SMILES and atom-bond graphs, while 3D conformation data is missing. 
Therefore, it would not be suitable for evaluating the proposed EquiFM, which is a 3D molecule generative model. [1]. Hoogeboom, Emiel, et al. Equivariant diffusion for molecule generation in 3D. ICML 2022. [2]. Bao et al. Equivariant Energy-Guided SDE for Inverse Molecular Design. ICLR 2023. [3]. Lipman et al. Flow Matching for Generative Modeling. ICLR 2023. --- Rebuttal Comment 1.1: Comment: Thank you for providing a comprehensive and detailed rebuttal to the review. The extensive explanations and additional experimental results presented in the rebuttal have effectively addressed the issues raised in the initial review. Based on the responses and the additional information provided, I am convinced of the technical soundness and contribution of the work; therefore I am willing to raise the score to 6. --- Reply to Comment 1.1.1: Title: Thanks for your feedback! Comment: Thank you very much for recognizing our work and providing valuable feedback! If there is any additional information that you might need, please don't hesitate to inform us.
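The conditional flow-matching recipe invoked in Q2 of the rebuttal above can be sketched in a few lines, assuming the paper's time convention ($p_0$ data, $p_1$ Gaussian prior) and the straight-line conditional path with a small $\sigma_{\min}$; the variable names are illustrative and this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_batch(x0, sigma_min=1e-4):
    # Conditional flow-matching regression pairs along the straight-line
    # path from data x0 (t = 0) to Gaussian noise x1 (t = 1); sigma_min
    # keeps p(x | x0) a narrow Gaussian around the data point at t = 0.
    n, d = x0.shape
    t = rng.uniform(size=(n, 1))
    x1 = rng.standard_normal((n, d))
    x_t = (1.0 - (1.0 - sigma_min) * t) * x0 + t * x1
    u_target = x1 - (1.0 - sigma_min) * x0    # d(x_t)/dt, the regression target
    return x_t, t, u_target

x0 = rng.standard_normal((8, 3))              # a toy batch of 3-D "coordinates"
x_t, t, u = cfm_batch(x0)
loss = np.mean((0.0 - u) ** 2)                # squared loss of a dummy zero model
```

The point made in Q2 is that the same per-modality recipe can be applied independently to $x$ and $h$ (i.e., $p(x_t|x_1,h_1)=p(x_t|x_1)$) while still learning the joint marginal flow.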
Summary: This paper addresses the molecular generation problem. The authors propose a conditional flow-matching-based method that employs different generative paths for coordinates and node-wise features. In addition, EGNN is used for the SE(3) invariant vector field, and an ICP-like algorithm is used for the equivariant optimal transport map. The proposed method outperforms the conventional method on the molecule generation benchmark tasks. Strengths: * The hybrid probability path is an attractive solution for the generative probability path of different modalities. * The equivariant OT map is a reasonable approach for 3D coordinate variables. * The proposed method can generate molecules not only with better quality but also faster. * The proposed method also outperforms a conventional method in conditional generation leading to many industrial applications. Weaknesses: * Since the proposed method is not clearly stated, the ablation study is not easy to understand, such as (E)OT+VP_{xxx} in Table 3. * The model architecture is poorly explained, so the generation of discrete variables, such as atomic types, is difficult to understand. * Some TeX references in the main text need to be corrected, such as Tab. 5.2 --> Table 3 in Section 5.4, Fig. 5.4 --> Fig. 3 in Section 5.5, and Appendix C --> B.4 in Section 4.3. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: * Do you have any original definitions in Section 3? If you do not mean to argue your originality in the section, it is better to clearly state that the section is a review of [26]. * What is the definition of $\sigma_{min}$? * Some contents seem to be missing. For example, the detail of Section 4.3 is not included in Appendix C, although there is a reference in the main text. * Does EOT+VP_{linear} in Table 3 mean EOT path on x and VP_{linear} on h? I cannot find the definition of + in (E)OT+VP_{xxx}. * Please explain how you generated atomic types in your model. 
Did EGNN output a one-hot vector representing atomic types? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: The authors did not address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the detailed and insightful comments! The responses to your concerns are listed below: ### Q1. The ablation study is not easy to understand, such as (E)OT+VP_{xxx} in Table 3 As mentioned by the reviewer, in Table 3, the term "$EOT+VP_{linear}$" does refer to the method where the EOT path is applied to x and $VP_{linear}$ is applied to h. Thank you for bringing this to our attention. We will ensure that the corresponding sections are carefully proofread in order to eliminate any confusion in the next version. ### Q2. The model architecture is poorly explained. The generation of discrete variables, such as atomic types, is difficult to understand We apologize for the confusion we may have caused. The model architecture/parameterization closely adheres to the EDM [1] paper, in order to ensure a fair comparison of the impact of the new training objective. During the generation of atom types, the EGNN produces continuous vectors at each timestep. The only additional step involved in generating discrete variables is the application of a quantized operation, such as $argmax$, to transform them into discrete vectors. ### Q3. Missing TEX references and contents, e.g. Details of section 4.3. Thank you for bringing this to our attention. We acknowledge that the missing details of Section 4.3 can be found in Appendix B.4, as correctly pointed out by the reviewer. We will thoroughly review and address the reference issues mentioned, such as replacing Tab. 5.2 with Table 3 in Section 5.4, updating Fig. 5.4 to Fig. 3 in Section 5.5, and correcting the reference to Appendix C to be B.4 in Section 4.3. These corrections will be diligently made in the updated version. ### Q4. it is better to clearly state that the section is a review of [26] Thank you for your suggestion, which we greatly appreciate. 
The content related to flow matching in Section 3 serves the purpose of providing essential background information and introducing relevant notations. In response to the reviewer's suggestion, we will explicitly clarify in that section that this content is intended to provide foundational knowledge rather than claim originality. By doing so, we hope to eliminate any potential misunderstanding regarding our contributions. Thank you for pointing this out, and we will make the necessary clarification in the revised version. ### Q5. What is the definition of $\sigma_{min}$ Please note that $\sigma_{min}$ serves the purpose of approximating each data point, specifically a delta distribution, by a Gaussian distribution with a narrow peak centered at the data point and a very small variance ($\sigma_{min}$). This approximation is employed to prevent corner cases and to facilitate a simpler mathematical representation of the probability path, as demonstrated in [1]. In response to the suggestion, we will enhance clarity by including this explanation in the upcoming version. [1] Lipman et al. Flow Matching for Generative Modeling. ICLR 2023 --- Rebuttal Comment 1.1: Title: Thank you Comment: I appreciate the response from the authors. The answers to my questions were satisfying, and I look forward to seeing the corrections of the pointed-out errors and additional explanations. I have also read the opinions of the other reviewers. The weaknesses of this paper primarily lie in the insufficient explanation. However, given its strong technical contribution, major revisions are not required. I believe this paper should be accepted, and I have raised my evaluation. --- Reply to Comment 1.1.1: Title: Thank you very much for your feedback! Comment: Thank you sincerely for your valuable feedback and recognition of our work! We want to assure you that we will address the insufficient explanation and fix the pointed-out errors as suggested in the next version. 
If there is any further information you may need, please feel free to let us know!
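The argmax quantization of atom types described in Q2 of the rebuttal above can be sketched in a few lines; the random logits stand in for EGNN outputs, so the example is purely illustrative.

```python
import numpy as np

# Quantizing continuous per-atom outputs into discrete atom types, as
# described in the rebuttal: take the argmax over the atom-type channels
# and re-encode the result as a one-hot vector. The random matrix below
# is a hypothetical stand-in for the EGNN's continuous output.
rng = np.random.default_rng(0)
n_atoms, n_types = 5, 4
h_cont = rng.standard_normal((n_atoms, n_types))  # continuous per-atom vectors
atom_type = h_cont.argmax(axis=1)                 # discrete type index per atom
h_onehot = np.eye(n_types)[atom_type]             # one-hot re-encoding
```

This quantization is applied only when producing discrete samples; during training the model operates on the continuous vectors directly.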
Summary: The contribution is a new 3D generative model for molecules. The model is trained using a novel flow-matching objective. The flow-matching objective for coordinates of atoms is novel in that coordinates of 'source' atoms are permuted and rotated to align with 'target' atoms. There is also some exploration of how to set probability paths for different parts of the molecule description, namely the atom types, charges, and coordinates. Strengths: Flow-matching represents a very promising technique to improve on generative models like Hoogeboom et al.'s EDM, in terms of both training and sampling speed. The idea of permuting and rotating source atom coordinates to straighten the flow that is matched is interesting, as is the question of how to set the relative corruption rates for different parts of a compound data type (coordinates, atom types, and charges). Figure 2 is beautiful. Weaknesses: The paper is not clearly written, and does not seem to have been proof-read (e.g., repetition in lines 26-29). Section 4.3 describes what one might try to achieve when choosing a probability path for $h$ but does not say how to do it. Is the answer that you calculate and plot the lines in Figure 4 and then pick the path whose line is visually closest to the $I(x_t, h_0)$ line? The ablation studies are not thorough. In particular, permuting and rotating atom coordinates as in definition 4.3 is a key novel feature of the proposed model, and the authors should show quantitatively what effect it has on the quality of generated samples. Line 331 refers to 'Tab 5.2' but I cannot find the table this refers to. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Lines 288-289: it looks like some words are missing ('the achieves' does not make grammatical sense) but it seems the authors are claiming that in GEOM-DRUGS, almost every molecule has one or more atoms with incorrect valency. Is that really true? 
I thought that GEOM-DRUGS was a collection of drug-like molecules in realistic conformations. In figure 3, it's surprising that RK4 with 4x the number of evaluations usually does worse than Euler. Why is RK4 so bad? In section 4.2 please could the authors spell out what is claimed to be equivariant with respect to what? Equation (8): this $\psi_t$ will have discontinuities with respect to $x$ at the values of $x$ where $\pi$ changes. Is this a problem? How are atom types and charges represented in $h$? Are they one-hot encoded as in Hoogeboom et al's EDM? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Among the limitations of previous models that the authors say they address is the inability to generate large molecules. However, the paper does not show any good generated molecules with more than 9 heavy atoms. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
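The Euler-versus-RK4 question raised in the review above comes down to function-evaluation (NFE) accounting: classic RK4 spends four evaluations per step, so at an equal NFE budget it takes a quarter as many steps. A minimal sketch on a smooth toy vector field (illustrative only; the review's observation concerns the discrete variable $h$, where this smoothness assumption fails):

```python
import numpy as np

def integrate(f, x0, n_steps, method="euler"):
    # Fixed-step integration of dx/dt = f(t, x) from t=0 to t=1, counting
    # function evaluations (NFE): Euler spends 1 per step, RK4 spends 4.
    x, t, h = x0, 0.0, 1.0 / n_steps
    nfe = 0
    for _ in range(n_steps):
        if method == "euler":
            x = x + h * f(t, x)
            nfe += 1
        else:                                        # classic RK4
            k1 = f(t, x)
            k2 = f(t + h / 2, x + h / 2 * k1)
            k3 = f(t + h / 2, x + h / 2 * k2)
            k4 = f(t + h, x + h * k3)
            x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            nfe += 4
        t += h
    return x, nfe

f = lambda t, x: -x                                  # toy field; exact flow is exp(-t)
x_euler, nfe_euler = integrate(f, 1.0, 40, "euler")  # 40 steps -> 40 NFE
x_rk4, nfe_rk4 = integrate(f, 1.0, 10, "rk4")        # 10 steps -> 40 NFE
```

On a smooth field, RK4 is far more accurate at the same NFE budget; the inversion observed in the paper's Figure 3 at small budgets is what the rebuttal attributes to the discrete nature of $h$.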
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and appreciate the effort in reviewing our work. We will address your concerns on the ablation studies, the evaluations, and the method details in the following paragraphs, and any further comments are welcome! ### Q1. Presentation and typos in Lines 26-29 Thanks for pointing this out. We will carefully proofread the draft as suggested in the updated version. ### Q2. Section 4.3 describes what one might try to achieve when choosing a probability path for $h$ but does not say how to do it. To clarify, the key ingredient of this section is proposing the use of different probability paths for different modalities. We introduce an inductive bias by aligning the mutual information term $I(h_t,h_0)$ with $I(x_t,h_0)$. This is based on the intuition that molecular/chemical information emerges when the coordinates $x_t$ are within a certain range of the ground truth, such as bond distances. We believe the probability path of atom types $h_t$ should also reflect this emergence of information. Thus, we use mutual information as a measurable quantity to describe this tendency. Matching the tendencies of the two curves is sufficient to achieve appealing results, considering the uncertainty/bias in approximating the mutual information $I(x_t,h_0)$. Following the reviewer's suggestion, we will introduce quantitative metrics to make the approach more rigorous in the next version. We have calculated several quantities to measure the tendencies of different probability trajectories, and these will be incorporated into the updated version. ### Q3. In section 4.2 please could the authors spell out what is claimed to be equivariant with respect to what? Apologies for the confusion caused. In Section 4.2, the equivariant optimal transport is defined as a point mapping between two point clouds that is optimal under all E(3) equivariant operations on either point cloud. 
We understand that this definition can be misleading in relation to the method name (Equivariant Flow Matching), which actually refers to the fact that the modeled vector field is equivariant with respect to E(3) operations on the input. To avoid any misunderstanding, we will provide additional clarifications to make this distinction clear in the next version. ### Q4. RK4 with 4x the number of evaluations usually does worse than Euler. Why is RK4 so bad? For Euler, one step of integration requires only one NFE, while 4 NFEs are needed for one step of RK4 as it is a fourth-order Runge-Kutta method. With a sufficient number of steps, RK4 can be better than Euler, as in Figure 3. The performance drop when the number of steps is insufficient could be due to the discrete nature of $h$. ### Q5. $\psi_t$ in Eq. 8 will have discontinuities with respect to x where $\pi$ changes. Is it a problem? This is indeed a great question. Firstly, the $\psi_t$ in Eq. 8 is a valid path for training continuous normalizing flows. This is due to the fact that $\psi_t$ is a time-dependent diffeomorphism [1], i.e., there exists a continuous vector field $v_t$ satisfying $\frac{d}{d t} \psi_t(x)=v_t\left(\psi_t(x)\right)$. Here $v_t = \pi^*\left(\mathbf{R}^* \mathbf{x}_1\right)- \mathbf{x}_0$. Meanwhile, as the reviewer mentioned, this $\psi_t$ could have discontinuities with respect to $x$. This could hurt the generalization/robustness during sampling even though the objective is unbiased. We believe that regularizing the discontinuities could be a reasonable future direction to explore for better performance. [1] Lipman et al. Flow Matching for Generative Modeling. ICLR 2023 ### Q6. How are atom types and charges represented in $h$? Are they one-hot encoded as in EDM? Yes, we follow the data representation in EDM. That is, atom types are represented by one-hot encoding and charges are represented as integer variables. ### On the Evaluation and Extra Ablation Studies ### Q7. The ablation studies are not thorough. 
In particular, the quantitative results of EOT maps. Thanks for pointing this out; as you mentioned, 'Tab 5.2' in Line 331 actually refers to Table 3. In Table 3, we conduct ablation studies on the effect of EOT maps by fixing the probability path on $h$ as the $VP_{\text{Linear}}$ path while enumerating the probability path on atom coordinates over $EOT$, $OT$, and $VP$. As the results show, the $EOT$ map consistently brings performance improvements over the variants without it, e.g., the vanilla $OT$ path or $VP$ path. To make the ablation studies more comprehensive, we add extra ablations in the following: | Metrics | $EOT+VP_{cos}$ | $EOT + VP_{sin}$ | $OT+VP_{cos}$ | $OT+VP_{sin}$ | | --- | --- | --- | --- | --- | | Atom Stability | 98.7 | 98.7 | 97.9 | 97.7 | | Mol Stability | 84.7 | 83.4 | 80.1 | 79.8 | We will update the ablation studies in the updated version. ### Q8. It is claimed that in GEOM-DRUGS, almost every molecule has one or more atoms with incorrect valency. As the reviewer mentioned, the GEOM-DRUGS dataset comprises realistic conformations of drug-like molecules. In this context, it is important to clarify that we are not suggesting any incorrect valency in the real molecules within GEOM-DRUGS. The objective of the benchmarked 3D molecule generation task is to generate atom types and coordinates, followed by the addition of bonds using a predefined module [1,2]. However, the process of adding bonds may introduce some bias. Consequently, using the same bond-adding process will result in ground-truth data with atom stability lower than 100%. It is worth noting that molecule stability is approximately calculated as the N-th power of atom stability, where N represents the number of atoms in the molecule. Thus, for large molecules in the GEOM-DRUGS dataset, molecule stability is estimated to be close to 0%. [1]. Hoogeboom, Emiel, et al. Equivariant diffusion for molecule generation in 3D. ICML 2022. [2]. Wu, et al. 
Diffusion-based Molecule Generation with Informative Prior Bridges. NeurIPS 2022.

---

Rebuttal Comment 1.1:
Comment: Thank you for the response. I like the main idea of the paper but I feel that the presentation is too poor to recommend acceptance.

---

Reply to Comment 1.1.1:
Title: Thanks a lot for your feedback!
Comment: We thank the reviewer for the response and valuable feedback. We assure you that the paper will be carefully proofread and revised as suggested. To this end, we provide the main revisions in the following:

- Line 26-Line 29 in the Introduction, _"However, ......, the empirical evaluation metrics such as validity, stability, and molecule size."_, will be updated as: "However, despite great potential, the performance is indeed limited considering several important empirical evaluation metrics such as validity, stability, and molecule size, due to the insufficient capacity of the underlying generative models."
- Before Line 188, _"The equivariant optimal transport ..... rotations and alignment."_, we will add the following sentences: "With $\mathbf{y}$ and $\mathbf{z}$ lying in the zero center-of-mass space, the mappings $\pi^{*}$ in Eq. 7 are optimal with respect to any E(3) equivariant operations on either side of the point clouds. Therefore, the mappings are referred to as equivariant optimal transport (EOT)."
- Section 3 will add statements to clarify the contribution before Line 92: "In this section, we provide an overview of the general flow matching method to introduce the necessary notations and concepts based on [1]."
- Line 119-Line 120, _"With the prior distribution_ ........ $\mathcal{N}\left(x\mid x_0,\sigma_{\min}^2 I\right)$", will be revised as: "With the prior distribution $p_1$ defined as a standard Gaussian distribution, the empirical data distribution $p_0(x\mid x_0)$ is approximated with a peaked Gaussian centered at $x_0$ with a small variance $\sigma_\text{min}$, i.e., $\mathcal{N}\left(x\mid x_0,\sigma_{\min}^2 I\right)$."
- Section 4.3 will be revised in line with the suggested changes.
- Line 209-Line 213, _"With the conditional, ......., the following examples"_, will be updated as: "In this section, we address the challenges posed by the multi-modality nature of 3D molecular data. Specifically, we focus on the distinct generation procedures required for various modalities, such as coordinates and atom types, within the flow-matching framework. It is crucial to recognize that altering atom types carries a different amount of chemical information compared to perturbing coordinates. To better understand this intuition, we provide the following corner case:"
- Line 217-Line 221, _"Now consider the case ........ quantity"_, will be revised as: "Here we consider the corner case that $\epsilon_\textbf{x} \to 0$ and $\epsilon_\textbf{h} \to 1$, i.e., no noise for atom types from timestep 0 to timestep $\epsilon_\textbf{h}$ and the maximum noise level from $\epsilon_\textbf{h}$ to timestep 1 (and reversely for $\epsilon_\textbf{x}$). Under such a probability path, the model will be encouraged to determine and fix the node type at around the $\epsilon_\textbf{h}$ step (a very early step in the whole generation procedure), even if the coordinates are far from reasonable 3D structures. However, this particular case may not be optimal. The subsequent steps of updating the structure could alter the bonded connections between atoms, leading to a potential mismatch between the valency of the atoms and the early-fixed atom types. Therefore, selecting a suitable inductive bias for determining the probability paths of different modalities is crucial for generating valid 3D molecules. In this paper, we utilize an information-theoretic inspired quantity as the measurement to identify probability paths for learning the flow matching model on 3D molecules."
- Line 232, after _"we design our probability path on $h$"_: "We follow the data representation in [2].
That is, atom types are represented by one-hot encoding and charges are represented as integer variables."
- Line 288-Line 291 in Section 5.2, _"It is worth noticing that, ....., and distances."_, will be updated as: "In the benchmarked 3D molecule generation task, the objective is to generate atom types and coordinates only. To evaluate stability, the bonds are subsequently added using a predefined module such as Open Babel, following previous works. It is worth noting that this bond-adding process may introduce biases and errors, even when provided with accurate ground truth atom types and coordinates. As a result, the atom stability evaluated on the ground truth may be less than 100%. Note that molecule stability is approximately the N-th power of the atom stability, where N is the number of atoms in the molecule. Consequently, for large molecules in the GEOM-DRUG dataset, the molecule stability is estimated to be approximately 0%."
- The TEX reference errors are fixed:
  - Tab 5.2 in Line 331 is corrected as Tab 3.
  - Fig. 5.4 in Line 343 is corrected as Fig. 3.
  - Appendix C in Line 238 is corrected as Appendix B.4.

[1] Lipman et al., Flow Matching for Generative Modeling. ICLR 2023
[2] Hoogeboom, Emiel, et al. Equivariant diffusion for molecule generation in 3d. ICML 2022.

Please feel free to let us know if you need any additional information!
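As a small addendum to Q4 above, the NFE accounting for Euler vs. RK4 can be checked on a toy ODE. This is an illustrative sketch only; the toy dynamics and step counts here are our own choices, not the paper's sampler:

```python
import math

def integrate(f, x0, t0, t1, n_steps, method="euler"):
    """Integrate dx/dt = f(t, x), counting NFEs: 1 per Euler step,
    4 per classical RK4 step."""
    x, t = x0, t0
    h = (t1 - t0) / n_steps
    nfe = 0
    for _ in range(n_steps):
        if method == "euler":
            x = x + h * f(t, x)
            nfe += 1
        else:  # classical fourth-order Runge-Kutta
            k1 = f(t, x)
            k2 = f(t + h / 2, x + h / 2 * k1)
            k3 = f(t + h / 2, x + h / 2 * k2)
            k4 = f(t + h, x + h * k3)
            x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            nfe += 4
        t += h
    return x, nfe

# dx/dt = x from x(0) = 1; the exact solution at t = 1 is e
f = lambda t, x: x
x_euler, nfe_euler = integrate(f, 1.0, 0.0, 1.0, 100, "euler")
x_rk4, nfe_rk4 = integrate(f, 1.0, 0.0, 1.0, 25, "rk4")  # same NFE budget
err_euler = abs(x_euler - math.e)
err_rk4 = abs(x_rk4 - math.e)
```

On this smooth toy problem, RK4 at an equal NFE budget is far more accurate than Euler, which underlines that the drop reported in the paper is specific to the data (e.g., the discrete nature of $h$), not to RK4 itself.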
Summary: The paper applies the Flow Matching framework to 3D molecule generation, achieving state-of-the-art performance on common benchmarks. The authors introduce several innovations to the FM framework to adjust it for the equivariant data setting with different data modalities within a single sample (e.g., 3D coordinates, atom types, charge). The two main modifications are (i) applying OT alignment between coordinates to shorten learned paths (similar to [1,2]), and (ii) using different probability paths for different modalities in the data. The proposed method improves both sampling speed and performance and achieves SOTA in both unconditional and conditional generation.

Strengths:
- The paper is well written. The problem setting, the motivation, and the proposed method are introduced clearly.
- The paper showcases an application of FM to an equivariant data domain.
- Achieves SOTA performance on common benchmarks.
- The experimental section is thorough and presents ablations justifying the algorithmic choices made and demonstrating the benefits of the flexibility of the FM framework.

Weaknesses:
- **Scalability** - the authors demonstrate the OT alignment on small molecules; how would the method scale to larger sets? How computationally prohibitive is it to compute the OT maps?
- **$S_n$ invariance** - A proof that the probability path implied by the EOT map is also $S_n$-invariant is missing from Proposition 4.4, meaning that for any permutation of the atoms in the molecule, the same probability should be returned.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
- Regarding the information alignment - have the authors tried to scale one of the modalities to have the same variance as the other? Can the authors provide some more motivation for using the mutual information?
**Missing related work**

I ask the authors to add a discussion of the following missing related works:
- Two closely related prior/concurrent works that applied OT alignment to improve sampling speed:
  - [[1]](https://arxiv.org/abs/2304.14772) Multisample Flow Matching: Straightening Flows with Minibatch Couplings, Pooladian et al. (ICML 2023)
  - [[2]](https://arxiv.org/abs/2302.00482) Conditional Flow Matching: Simulation-Free Dynamic Optimal Transport, Tong et al. (Preprint)
- Another concurrent work pursuing the same applications:
  - [[3]](https://arxiv.org/abs/2305.01140) Geometric Latent Diffusion Models for 3D Molecule Generation, Xu et al. (ICML 2023)

**Note:** These works do not diminish the contribution of this work. It is however necessary to mention and discuss the differences for the completeness of the paper.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments and the recognition of our work. We address your concerns in the following:

### Q1. How would OT maps scale to larger sets, and how computationally prohibitive is it?

We first want to clarify that OT maps are not needed during inference. It is true that OT maps add extra overhead during training; however, this does not limit scalability to larger sets. The proposed algorithm has a complexity of $O(n^2)$ for computing the OT map for a single molecule, where $n$ is the number of nodes. To understand the computational burden for different molecule sizes, we evaluated the average computing time for OT maps with varying node numbers. The table below shows the burden for three datasets/tasks. **The full curve is in the PDF.**

| - | Average atom number | Average EOT computing time per sample |
| --- | --- | --- |
| QM9 | 18 | 1.10 ms |
| GEOM DRUG | 47 | 1.99 ms |
| Antibody-CDR | 150 | 18.84 ms |

Even for the Antibody-CDR data, with an average of 150 atoms, the processing time is only 18.84 ms, which is acceptable in practice. Additionally, we can optimize the process further by leveraging its parallelizable nature. By enabling prefetch and multiprocessing, we can minimize the computational overhead further and make it virtually inconsequential.

---

### Q2. The missing proof for the $S_n$-invariant probability path of EOT.

Thank you for bringing this to our attention. A detailed proof of Proposition 4.4 can be found in Appendix B.3 of the submission, and we will include the reference in the updated version. Here we summarize the key steps of the proof and provide a simplified and intuitive explanation: the dynamics model we use, EGNN, is permutation-invariant, and zero CoM guarantees translation invariance. Thus, our goal is to demonstrate the invariance under rotations.
For any rotation $\mathbf{T}$ on the point cloud $\mathbf{x}$ with $N$ points, the new $\mathbf{R}^*$ corresponding to $\mathbf{T}\mathbf{x}$ (we denote it by $\mathbf{R}^*_\text{rot}$) exactly offsets the impact of $\mathbf{T}$. Formally, let $\mathbf{x}_0$ denote the target point cloud; we calculate $(\pi^*, \mathbf{R}^*)= \underset{(\pi, \mathbf{R})}{\operatorname{argmin}} ||\pi(\mathbf{R}\mathbf{x}^1, \mathbf{R}\mathbf{x}^2, \dots, \mathbf{R}\mathbf{x}^N) - \mathbf{x}_0||_2$ and $(\pi^*_\text{rot}, \mathbf{R}^*_\text{rot})= \underset{(\pi, \mathbf{R})}{\operatorname{argmin}} ||\pi(\mathbf{R}\mathbf{T}\mathbf{x}^1, \mathbf{R}\mathbf{T}\mathbf{x}^2, \dots, \mathbf{R}\mathbf{T}\mathbf{x}^N) - \mathbf{x}_0||_2$. Then $\pi^*_\text{rot}=\pi^*$ and $\mathbf{R}^*_\text{rot} = \mathbf{R}^*\mathbf{T}^{-1}$. (Refer to Appendix B.3 for the rigorous proof.) Since $p_1$ is invariant under rotations, and the transformation $\psi_t^{\mathrm{EOT}}$ satisfies $\psi_t^{\mathrm{EOT}}(\mathbf{x}) = (\sigma_{\min}+(1-\sigma_{\min})t)\pi^*(\mathbf{R}^*\mathbf{x})+(1-t)\mathbf{x}_0$ and $\psi_t^{\mathrm{EOT}}(\mathbf{T}\mathbf{x}) = (\sigma_{\min}+(1-\sigma_{\min})t)\pi^*_\text{rot}(\mathbf{R}^*_\text{rot}\mathbf{T}\mathbf{x})+(1-t)\mathbf{x}_0$, it follows that $\psi_t^{\mathrm{EOT}}$ is invariant.

---

### Q3. Can the authors provide some more motivation for using mutual information? Have the authors tried to scale one of the modalities to have the same variance as the other?

1. We explain the motivation for using mutual information. For atom types, the probability path determines when to predict the node type from timestep 0 (noise) to timestep 1 (data). In example 1 of line 214, one corner probability path has no noise from timestep 1 to timestep $1-\epsilon$ and the maximum noise level from $1-\epsilon$ to timestep 0. When $\epsilon$ is small, the model tends to fix the node type at around the $\epsilon$ step (early in the generation procedure), even if the coordinates are far from reasonable 3D structures.
This makes it difficult for the model to learn or generate following such a path: the structure updates in the following steps can change the bonded connections, causing a mismatch in valency with the early-fixed atom types. Figure 4 shows that as the coordinates approach the ground truth, the predictability of atom types increases significantly. To align with this observation, we use mutual information as a guiding bias to describe this tendency.

2. Yes, we have tried different inductive biases for matching the intermediate distributions, including scaling different modalities to the same variance as the reviewer suggested. We provide the results for several matching strategies in the following table.

| Metrics | Scaling the variance | Matching entropy in intermediate distribution | Matching Mutual Information (ours) | Without any matching |
| ------------------- | -------------------- | --------------------------------------------- | ---------------------------------- | -------------------- |
| Mol Stability (QM9) | 81.3 | 83.5 | 88.3 | 77.1 |

---

### Q4. Missing related works:

Thanks for pointing us to useful related work. [1] and [2] propose using a joint distribution to replace independent coupling pairs between noise and data. Our method differs from this previous research as follows: previous work focuses on general domains like image generation, without exploring joint distribution design for complex geometries. Additionally, our EOT is conducted at the sample level instead of the batch level, allowing for parallelization. [3] advances 3D molecule generation by extending the equivariant diffusion model with a geometric latent space, orthogonal to our research. However, applying EquiFM to a similar latent space may be possible in the future. We will update the references and discussions in the next version.

[1] Pooladian et al. Multisample Flow Matching: Straightening Flows with Minibatch Couplings. ICML 2023
[2] Tong et al.
Conditional Flow Matching: Simulation-Free Dynamic Optimal Transport (Preprint)
[3] Xu et al. Geometric Latent Diffusion Models for 3D Molecule Generation. ICML 2023

---

Rebuttal Comment 1.1:
Comment: I thank the authors for their answers!
- I recommend adding Q2, Q3, and Q4 to the revised version.
- In your answer to Q1, the complexity of solving OT should be $O(n^3)$, correct? Is it a typo in your answer? I would add this discussion as a limitation of the method: it does not scale to very large molecules.

Regarding reviewer tN9X's comments on clarity:
- I recommend adding citations where they could clarify what is done and to credit previous works, such as in Section 3 (mention at the beginning that you overview the Flow Matching paper by Lipman et al.) and in Section 4.1 (when you mention the zero-CoM space, cite Hoogeboom et al.), etc.

I still believe this work has a novel contribution, showing both the flexibility of the FM framework and backing it up with strong empirical performance on known benchmarks. I keep my score.

---

Reply to Comment 1.1.1:
Title: Thank you very much for your feedback!
Comment: Thank you so much for your valuable feedback and for recognizing our work!
- We assure you that we will incorporate the discussion on Q2, Q3, and Q4 as suggested in the revised version of our paper.
- Regarding Q1: you are right! Our method utilizes `scipy.optimize.linear_sum_assignment`, based on the Jonker-Volgenant algorithm, which indeed has a complexity of $O(n^3)$. We will include a thorough discussion of the challenges faced when dealing with very large molecules to provide a more objective statement.
- We are grateful for your suggestions on clarification. In the revised version, we will improve the citations to appropriately credit the previous works and ensure that our contribution is accurately described.

If there is any additional information that you might need, please don't hesitate to inform us.
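For concreteness, the assignment step discussed in this thread can be sketched with the same scipy routine. This recovers only the permutation $\pi^*$ for fixed coordinates; the joint search over rotations $\mathbf{R}^*$ is omitted, so it is an illustration of the solver call, not the full EOT procedure:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_permutation(x, y):
    """Find the permutation perm minimizing ||x[perm] - y||^2 using the
    Jonker-Volgenant solver in scipy (O(n^3)). Sketch of the pi* step
    only; the joint optimization over rotations R* is not shown."""
    # cost[i, j] = squared distance between point i of x and point j of y
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    perm = np.empty(len(x), dtype=int)
    perm[cols] = rows  # so that x[perm] aligns with y
    return perm

rng = np.random.default_rng(0)
y = rng.normal(size=(5, 3))       # target point cloud
x = y[rng.permutation(5)]         # x is y with its points shuffled
perm = ot_permutation(x, y)       # x[perm] should recover y exactly
```

With matched point clouds the optimal assignment has zero cost, so `x[perm]` reproduces `y`; with noisy clouds it returns the closest matching.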
Rebuttal 1:
Rebuttal: We would like to express our sincere appreciation to the reviewers and the area chairs for the effort and time spent reviewing our work and for the crucial, insightful, and constructive comments. We would like to highlight the main concerns of the reviewers and the corresponding responses here:
1. **Scalability issues.** We have provided extensive experiments and a detailed analysis of the relationship between the computation overhead and molecule size, and we show that this does not limit the scalability of our method.
2. **Evaluation metrics.** We have added several evaluation metrics to better compare the chemical properties of generated molecules.
3. **Ablation study.** We have provided an extra ablation study comparing the effects of different probability paths and the EOT maps.
4. **Presentation.** We will address all the typos and presentation issues as suggested by the reviewers.

Pdf: /pdf/8b1aaa5b9d5e2ac23e49b9a7a82b14499a3fdae6.pdf
NeurIPS_2023_submissions_huggingface
2023
Creating a Public Repository for Joining Private Data
Accept (poster)
Summary: The paper considers a scenario where a data owner wishes to publish a private view of their data set, so that others can evaluate join (aggregations) against this data, and related questions. Formally, the problem is to support the following: the data owner has a high-dimensional vector which is sparse, and wants to publish an object that will allow others to estimate the inner product of this vector with their own individual vectors. This is represented by the SumOverJoin problem presented in Subproblem 1.2. Building on this primitive, the paper then considers how to use this within an optimization setting, i.e., to search for a vector within some subspace that approximately optimizes some loss function. This is sufficient to pick some approximate models, e.g., logistic regression, with a privacy guarantee.

Strengths: The paper sets out a clearly defined problem, and shows how an efficient combination of sketches and noise can be applied to solve it. The algorithm is very amenable to implementation, and performs well in experiments. The proof that the optimization queries are accurate is quite technical and involved, demonstrating technical skill on the part of the authors.

Weaknesses: The scenario may be viewed as potentially narrow: the authors describe several alternative approaches that could be used, but which fail due to the requirement that the protocol is non-interactive (e.g., private set intersection approaches). I like the non-interactive property, but given the promotion of private set intersection protocols, it could be argued that it is not a compelling need. It may be debatable how novel the core contribution is: there is precedent for using sketches with DP noise. However, the extension to optimization queries goes beyond results I have seen before.
A few opportunities for improving presentation:
- "we can view the sketch as a function accepting as query an entire dataset" - I found this hard to parse.
- "we are not trying to compress a stream" - clarify that the objective is to take an input with n non-zeros over a domain of size D. The goal is to be proportional to n in size, but sublinear in D. Maybe discuss more what the solution would be if D were small.
- "secure hash functions should give the same guarantees" - perhaps more precise to say 'give the same performance'.

Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could the optimization results be interpreted in terms of defining a subspace embedding? I think the result follows if we can argue that the SumOverJoin bound holds for all vectors within a subspace.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: No limitations in the sense of negative societal impacts. The paper could say more about the technical limitations of the approach, e.g., extending to more complex non-linear transformations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Could the optimization results be interpreted in terms of defining a subspace embedding? I think the result follows if we can argue that the SumOverJoin bound holds for all vectors within a subspace. This sounds interesting. Could you say more? Do you mean a subspace in the vector space of functions $f(x, y)$? > The paper could say more about the technical limitations of the approach, e.g., extending to more complex non-linear transformations. This is a good point. Many real-world questions can't be expressed as a linear query. We will expand our future work section. > A few opportunities for improving presentation, e.g., $\ldots$ Perhaps more precise to say 'give the same performance' Thank you, we will incorporate these suggestions. >It may be debatable how novel the core contribution is: there is precedent for using sketches with DP noise. However, the extension to optimization queries goes beyond results I have seen before. While there is past work on private sketches, we are not aware of past work on joinable private sketches. Also, as you correctly point out, we are solving a more general question of joint optimization. ### Relation to PSI > The scenario may be viewed as potentially narrow: the authors describe several alternative approaches that could be used, but which fail due to the requirement that the protocol is non-interactive (e.g., private set intersection approaches). I like the non-interactive property, but given the promotion of private set intersection protocols, it could be argued that it is not a compelling need. Following [1], there have been many nice PSI developments over the last two decades. These include handling malicious parties [2], ensuring all parties receive the intersection [3], joining and then aggregating over a column [4], other downstream computations on the join [5], secret sharing of the intersection [6] and more. Indeed, PSI solves a similar, but different problem. 
Here is a comparison:
- **Interaction.** As you point out, our method requires no interaction. With PSI, two parties mutually agree to estimate the intersection of their inputs. With our method, parties do not have to pre-agree on a joint function, and can use the same sketch to estimate joint distributions or train neural networks using stochastic gradient descent.
- **Exact vs. Approximate Answers.** While the generality of our sketch implies that more functions can be estimated without interaction, this flexibility comes at the cost of more noise and therefore lower accuracy than SFE, which can either give exact answers or add just enough noise to satisfy DP.
- **Simple Implementation.** SFE protocols can be quite complex to implement. This submission proposes a simple implementation with two hash functions and some noise.
- **Data Discovery in the Wild.** We are targeting a data repository of one-time sketches enabling data discovery in the wild. The proposed method lowers the barrier to private joins. Data repositories such as the UCI ML repository and MIMIC (healthcare data) presently, rightly, conceal information about individuals, making joins challenging. We are excited to see the types of data discoveries that will be quickly and easily possible in a sketch-based public repository.

We can add an appendix providing more perspective on the strengths and weaknesses of both approaches.

[1] Freedman, M.J., Nissim, K. and Pinkas, B., 2004, May. Efficient private matching and set intersection. In International conference on the theory and applications of cryptographic techniques (pp. 1-19).
[2] Kissner, L. and Song, D., 2005, August. Privacy-preserving set operations. In Annual International Cryptology Conference (pp. 241-257).
[3] Gordon, S.D., Hazay, C. and Le, P.H., 2022. Fully Secure PSI via MPC-in-the-Head. Proceedings on Privacy Enhancing Technologies.
[4] Ion, M., Kreuter, B., Nergiz, E., Patel, S., Saxena, S., Seth, K., Shanahan, D. and Yung, M., 2017.
Private intersection-sum protocol with applications to attributing aggregate ad conversions. Cryptology ePrint Archive. [5] Buddhavarapu, P., Knox, A., Mohassel, P., Sengupta, S., Taubeneck, E. and Vlaskin, V., 2020. Private matching for compute. Cryptology ePrint Archive. [6] Falk, B.H., Nema, R. and Ostrovsky, R., 2022, June. A linear-time 2-party secure merge protocol. In International Symposium on Cyber Security, Cryptology, and Machine Learning (pp. 408-427). --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses to the review comments. Regarding subspace embeddings, I am referring to the definition in Woodruff's survey: https://arxiv.org/pdf/1411.4357.pdf The point here is that the basic results for sketches consider giving a guarantee that holds for a pair of vectors (a "for each" guarantee). Proving that the sketches form a subspace embedding means that the guarantee holds for every pair of vectors (a "for all" guarantee), which can be much stronger. The comparison and contrast to PSI is useful additional context, and would be nice to include in the paper. --- Reply to Comment 1.1.1: Comment: Thanks for the information about subspace embeddings. Those do seem similar in flavour to Theorem 6.4, because we do use sketching, and because we are interested in a uniform bound (over $\mathcal{F}$). We're having trouble getting the details to line up. The most obvious connection would be to make the subspace be the function class $\mathcal{F}$, since that is what we're trying to prove the uniform bound over. But $\mathcal{F}$ is not a vector subspace in general; instead we have a bound on its VC dimension. Also, it is worth noting that Theorem 6.4 isn't just about the dimensionality reduction from the count sketch. 
Suppose we modified our method to skip the CountSketch, and instead send a vector indexed by every possible (identity, label) pair (an exponentially large set, or even infinite) --- in other words, make the hash function h the identity function, so no collisions are possible --- and then add noise to each entry, as before, to preserve privacy. Then Theorem 6.4 would still be nontrivial to prove, since we still need to argue that the noise does not change the result of the optimization too much. Is a connection to subspace embeddings still possible?
Summary: The paper studies the problem of computing a function of a join of two datasets on a user key, where the two datasets are owned by independent parties that never share their data directly. Instead, the sender party gives out a DP sketch of the data that is later used by any receiver to compute an approximate answer to the query on the join. This is a very interesting formulation, as it allows practical exchanges of information when the data exchanged is so sensitive (e.g., a medical condition) that no party would want to participate otherwise. Contrary to other approaches (e.g., MPC, vertical federated learning), this is one-shot and does not require iterative algorithms or coordination. The method is based on a count sketch with added noise for privacy. The sender algorithm is simple to implement, efficient, and elegant. The receiver algorithm is also straightforward for linear queries. The authors report results also on optimization of arbitrary functions. This requires that an optimizer is known for a weighted dataset (a mild assumption). The algorithm for optimization is also very elegant, requiring only a sketch-based weighting for the examples (and a factor k blow-up in the number of examples to generate all possible labels). The authors implement their method and test it for learning classification functions for up to k = 10 classes. They use standard public datasets and compare their method with a sketch-based method, showing improved results.

Strengths:
+ the problem is very relevant to NeurIPS
+ the algorithms proposed are elegant and efficient
+ theoretical analysis shows utility and privacy guarantees for a wide range of applications

Weaknesses:
- The sender domain must be a small set {1, ..., k}, so this does not allow arbitrary regression (but is fine for classification with few classes). The factor k appears in the blow-up of space and time and in the utility/privacy tradeoff, so a small k is really needed.
- The comparison is only with one baseline.
I agree that this area is quite new, but it would be informative to compare with some other baselines under different computational models. E.g., in the case that a curator can be trusted with the label data, the equivalent privacy property is Label DP. It would be interesting to see how the results here compare with the best central Label DP algorithm, thus showing the price of not having a trusted curator.

Minor: /NR(h(id, y). Missing closing parenthesis.

Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors:
1) What is the largest k for which you find meaningful results in experiments?
2) Given the lack of baselines for this exact model, can you compare this result with SOTA baselines in some other relaxed or more restrictive settings, e.g., the trusted curator model (equivalent to running central Label DP on the join), or a local DP baseline (e.g., randomized response on the label)?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thanks for the comments and questions, based on which we did two new experiments, shown in Figure 2 of the attached pdf:

## What's the largest $k$ for which we get meaningful results? (Weakness 1, Question 1)

To investigate the effect of large $k$, we used the EMNIST `bymerge` dataset consisting of 760K images of handwritten digits and uppercase and lowercase letters, with 47 classes. The left side of Figure 2 shows test accuracy as a function of the number of labels $k$. For each run, we randomly chose $k$ out of 47 classes, and applied our method with ε=1. The figure shows that, as expected, the performance degrades significantly as $k$ increases, but the method is still viable with $k=45$. Note that in this experiment, the size of the dataset, and thus the join size, changes as we change $k$.

## Comparing to Label DP (Weakness 2, Question 2)

The right side of Figure 2 compares our method to Label DP for the EMNIST `digits` dataset of 240K handwritten digits. The definition of "neighbouring datasets" differs in our setting compared to Label DP: we use the "add/remove" definition where a single row is added or removed; for Label DP it is more appropriate to say a single value is changed. To account for this, we double the privacy budget for Label DP, so for example at ε=1 on the x-axis, we actually give Label DP a privacy budget of ε=2. The Label DP method benefits from having a trusted curator who is able to perform the join, and so we expect it to perform better than our method, where the parties must join using a non-interactive protocol. We were surprised to see our method perform better for small ε, and do not understand why this happens.

---

Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal
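For reference, the local randomized-response baseline raised in the review can be sketched as the textbook k-ary randomized response mechanism. This is included purely for illustration (it is not the central Label DP mechanism benchmarked in the rebuttal above):

```python
import math
import random

def k_rr(label, k, eps, rng):
    """k-ary randomized response: report the true label with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random other label.
    Each individual report satisfies eps-DP with respect to the label."""
    p_true = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p_true:
        return label
    other = rng.randrange(k - 1)  # pick one of the k - 1 other labels
    return other if other < label else other + 1

rng = random.Random(0)
k, eps = 10, 1.0
reports = [k_rr(3, k, eps, rng) for _ in range(20000)]
frac_true = sum(r == 3 for r in reports) / len(reports)
p_true = math.exp(eps) / (math.exp(eps) + k - 1)  # the expected fraction
```

A frequency estimator then inverts this known transition probability to debias counts; the large k appearing in the denominator of `p_true` is the same "factor k hurts the utility/privacy tradeoff" effect noted in the review's weaknesses.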
Summary: The paper considers the problem of publishing a privacy-preserving version of a dataset consisting of an identifier and a sensitive attribute. Additionally, it should be possible to join this published dataset by the identifiers with another dataset and compute aggregate statistics on the sensitive attributes in the joined database. This is a natural application that comes up in many scenarios, and a key innovation of this paper is the non-interactive nature of the solution. The privacy-preserving database needs to be published only once and can be reused by multiple recipients. The approach followed is quite interesting. A dataset D \subseteq U \times \{1, ..., k\} is processed as follows. For each entry (x,i) \in D store the value i + noise_i in the published table's h(x) row. In case of collisions, multiple entries are added to the same row. The noise in each row makes it differentially private. In addition to computing aggregate statistics, the paper also considers the task of performing learning tasks over the privacy-preserving dataset. Strengths: + The application considered is exciting. + The construction is relatively simple to understand. Weaknesses: - The abstract/intro does not talk about the amount of noise needed or the level of accuracy they get. Including this would be helpful in better understanding the results of the paper. - The paper does not define the correctness desired in this system. In particular, the joins in the proposed system will create false "positive matched rows." Does this mean that the correctness is quantified for a random receiver dataset? In particular, a particular receiver dataset can have a lot of false positive matches. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can you please formally define the correctness and privacy properties your system satisfies? - For my understanding, what is the magnitude of noise you expect in Z_\epsilon in Figure 5 of Appendix A for the case where the output of s is \in {0,1}?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: - The paper says that the published database has DP. Can the authors expand on what this means? Typically in DP, aggregate statistics on the database are published. However, here the entire database is published. Thus, one would expect that a much larger noise parameter will be needed. - The paper reports an increase in test results accuracy with lower noise. This makes sense but does not provide insight into the appropriate noise level. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
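The construction summarized in this review (each entry contributes a signed value to row h(x) of a published table, plus per-row noise) can be sketched in a few lines. This is our own minimal illustration, not the paper's code: the function names, the toy hash, and the inverse-CDF sampler are assumptions; the noise distribution TGeom($e^{-ε}$) follows the authors' rebuttal (Def. 3.3).

```python
import math
import random

def tgeom(p, rng):
    """Two-sided geometric noise, P(z) proportional to p^|z|, realized
    as the difference of two geometric variables on {0, 1, 2, ...}."""
    def g():
        u = 1.0 - rng.random()  # u in (0, 1]
        return int(math.floor(math.log(u) / math.log(p)))
    return g() - g()

def private_sketch(D, b, h, s, eps, seed=0):
    """Noisy signed count sketch of D = [(id, y), ...]: each row adds
    s(id, y) in {-1, +1} to bucket h(id), then TGeom(e^-eps) noise is
    added to every bucket independently."""
    rng = random.Random(seed)
    p = math.exp(-eps)
    C = [0] * b
    for ident, y in D:
        C[h(ident)] += s(ident, y)
    return [c + tgeom(p, rng) for c in C]
```

At ε=1 the per-bucket noise has variance $2e^{-1}/(1-e^{-1})^2 \approx 1.84$, matching the figure quoted in the rebuttal.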
Rebuttal 1: Rebuttal: Thank you for these insightful comments. ## How much noise to add / how to choose ε The reviewer points out we did not clearly describe how much noise should be added. Many of the points in the review relate to this topic; we'll address them in this section. ### Specifying noise and accuracy in abstract/intro (Weakness 1, Question 2) Thanks for pointing this out. In our Contributions paragraph (end of Section 1) we will add: > We find that with a reasonable privacy parameter (ε=1) we achieve over 92% accuracy on the EMNIST dataset, and applying logistic regression on the UCI Adult dataset, we get accuracy within 1% of an algorithm trained on the original dataset with no privacy. $Z_ε$ in Figure 5 in the paper is sampled from TGeom($e^{-ε}$) (Def. 3.3) which for ε=1 has variance 1.84. Our method takes $s(\cdot,\cdot)\in\{-1,1\}$, but for $s(\cdot,\cdot)\in\{0,1\}$ the answer would be the same (because the sensitivity would still be 1, so we would still sample from TGeom($e^{-ε}$)). ### In the experiments, how much noise is appropriate? (Limitation 2) Our experiments show accuracy improves with less noise, but how much noise is appropriate? When applying differential privacy, one typically begins by deciding what privacy parameter ε is appropriate. This is a hard question, and the answer depends on the requirements of the party publishing the data. Ponomareva et al. give some practical advice in "How to DP-fy ML: A Practical Guide to Machine Learning with Differential Privacy". In Section 5.2.1, they suggest ε≤1 gives a strong privacy guarantee, ε≤10 gives "a reasonable level of anonymization for many applications", and ε>10 is not a sufficient guarantee of privacy. Once ε is chosen, the amount of noise necessary to achieve that guarantee is computed according to the particular method being applied. In our case, noise sampled from TGeom($e^{-ε}$) is added to each entry of the sketch (see Defs. 
3.3, 3.5); this distribution has variance $2e^{-ε} / (1 - e^{-ε})^2$, which is about 1.84 when ε=1. To interpret our results, we suggest fixing ε=1 as a reasonable value; for example, when reading the right-hand side of Figs. 2 and 3, find ε=1 on the x-axis. ### Publishing a database with DP / wouldn't that require a lot of noise? (Limitation 1) The reviewer asks what it means to publish a database with DP, since typically only aggregate statistics are published. Technically, the answer comes down to the definition of differential privacy (lines 127-131 in Section 3, Preliminaries). We guarantee that for all pairs of datasets $D_1,D_2$ where $D_2$ is $D_1$ with one row added or removed, the output of our method is statistically almost the same on input $D_1$ as $D_2$, in the sense that for any set of outputs S (in our case, an output is a b-dimensional vector of integers as in Def. 3.5), the probability we output a vector in S is almost the same on input $D_1$ vs. $D_2$. As pointed out, many DP mechanisms involve publishing a few aggregate statistics with some noise added, but our method does not fall in this category. Instead, we publish a large collection of numbers, often more than there are rows in the dataset. Contrary to intuition, this does not require adding a correspondingly larger amount of noise to each of those numbers. In fact, the noise we add to each entry of the sketch depends only on the privacy parameter ε, and does not scale with the size of the sketch. The reason we can do this is that the count sketch has *sensitivity* 1. Adding an individual to the dataset adds or subtracts 1 to just one entry of the count sketch, and leaves the rest of the sketch unchanged. This reasoning applies to private count sketches in general; see e.g. "Differentially private linear sketches" by Zhao et al. 
Note that even though the noise we add to each number in the sketch is small, Algorithms 2 and 3 look at sums of many sketch entries, so the noise in the *outputs* of our algorithms will be large compared with methods that only look at a few aggregate statistics. ## Correctness and privacy properties (Weakness 2, Question 1) ### Correctness Thank you for pointing out that our theorem statements are missing the following information: Our theoretical guarantees (Theorems 5.1 and 6.4) apply to *any* sender and receiver datasets. We do not assume they are random. However, these theorems do assume the hash functions $h,s$ and noise values $Z$ are chosen randomly independently of the sender and receiver datasets. In particular, we do not allow the datasets to be chosen based on the particular hash functions. Because $h$ and $s$ are random, the false positive matched rows will also be random. We have designed our method so that in expectation they contribute 0 to the output of Algorithm 2, or to the score function used in Algorithm 3. This happens because for any two pairs (id,y) and (id',y'), $E[s(id,y)\cdot s(id',y')] = 0$. We will edit the theorem statements to make it clear they are quantified over all possible sender and receiver datasets, but that the randomness must be chosen independently of the datasets. ### Privacy The sketch $C$ output by Algorithm 1 is ε-differentially private. We briefly explain this in the sentence immediately after Definition 3.5, but should have made it more prominent, e.g. by marking it as a theorem or proposition or similar.
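The sensitivity argument above (adding an individual changes a single sketch entry by ±1, so the per-entry noise need not scale with the size of the sketch) is easy to check on the noiseless part of the sketch. The helper names and toy hash below are our own illustration, not the paper's notation:

```python
def noiseless_sketch(D, b, h, s):
    """Deterministic part of the count sketch: bucket -> signed count."""
    C = [0] * b
    for ident, y in D:
        C[h(ident)] += s(ident, y)
    return C

def sensitivity_of_adding(D, new_row, b, h, s):
    """Per-bucket differences between the sketches of D and of
    D + [new_row]: exactly one bucket changes, by exactly 1, which is
    why the noise per entry depends only on eps, not on b."""
    C1 = noiseless_sketch(D, b, h, s)
    C2 = noiseless_sketch(D + [new_row], b, h, s)
    return [abs(a - c) for a, c in zip(C1, C2)]
```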
Summary: Edit: Change from Weak Accept to Accept under the expectation that the reviewer discussions are incorporated into the manuscript. The manuscript presumes that two independent parties hold different data related to persons in a repository. As the data is vertically partitioned across the repositories, cross-referencing both repositories by joining on some person identifier yields a richer repository that interrelates more data dimensions which can be useful for data discovery/analysis. As sharing personal data with another party could potentially lead to a violation of user privacy, the manuscript adopts the setting where party A sends a differentially private (DP) sketch of their repository to a party B such that party B can generate an approximate/noisy join result where data dimensions coming from party A can be inaccurate. The manuscript proposes a novel DP sketch to improve the accuracy of the approximate/noisy join and compares it to differentially private linear sketches (Zhao et al @ NeurIPS'22). Strengths: S1) Method proposed in manuscript satisfies pure DP unlike prior work on DP linear sketches that satisfies zCDP (which can only be translated to ADP regimes with delta > 0). S2) Empirical results are promising: improvement over baseline and reasonable test accuracy for epsilon <= 1. S3) Claims seem to be substantiated and key concepts are presented clearly. Weaknesses: W1) Many-to-many relationships (e.g., pseudo-identifiers) and multi-way join not explicitly considered. W2) Not clear why the other party could not also receive a private sketch to improve the private sketch they provide to the other party (apart from complicating the setting). W3) Minor: p.9, l. 326: "as does private multi-dimensional sketches" => "as do private multi-dimensional sketches" Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1) How do the methods potentially generalise to pseudoidentifiers (e.g. 
non-unique person names) and multi-way joins with multiple private repositories (see W1). Q2) Could a private sketch of the 2nd repository (non-private from our point of view) be helpful to guide the sketching of the private repository to improve utility when using it for join with 2nd repository? (see W2) Q3) What does the last sentence refer to when it mentions multi-dimensional sketches? (could multiple categorical dimensions not simply be merged into one unless there are multi-dimensional range queries over numerical attributes?) Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Relevant limitations seem to be discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insights. We have done two new experiments based on them, shown in Figure 1 in the attached pdf and described below. ## Many-to-many relationships (W1, Q1) The reviewer points out our work is limited to datasets with unique keys. In practice, we imagine repeated keys will be common for two reasons: 1. Identities may not be unique, e.g. when joining on (first name, last name). 2. Datasets with multiple records per person, e.g. one row for each flight a person has taken. For case 1 (non-unique IDs) one solution is to apply our methods as described. Note that the sketch (Definitions 3.3, 3.4) is well-defined even if keys are repeated. However, this approach is theoretically unsatisfactory: Theorems 5.1 and 6.4 no longer apply when identities are not unique, and it is possible to construct adversarial examples where false matches steer Algorithms 2 or 3 in the wrong direction. Finding a better solution is a great direction for future work. The left side of Figure 1 in the attached pdf shows the performance of our algorithm in a simulation of case 1. As in the paper, we added a unique identity to digits from the EMNIST dataset (240K examples), and created sender and receiver datasets $D_S, D_R$ with labels and images respectively. Then we added extra rows to $D_S$ which duplicated the identities of existing rows, but with random labels, and applied our method to $D_S, D_R$ with $ε=1$. At $x=1$ no new rows are added and at $x=8$ each identity appears an average of 8 times in $D_S$, once with the true label and 7 times with random labels. We left $D_R$ unchanged. This simulates a situation where pseudo-identifiers may be repeated but are independent of the data. The test accuracy on the y-axis shows that our method is robust and gives useful results even when the randomly-labelled false matches significantly outnumber the true matches. 
For case 2 (multiple records per person), in order to preserve privacy, the noise added to the private count sketch would need to scale with the maximum number of records a person can have. It would be interesting to evaluate our method empirically with this change. It should be possible to adapt Theorems 5.1 and 6.4 to this setting, probably with some loss in the bound depending on the maximum number of rows per person. Depending on the problem being solved, another approach might be to pre-process datasets to combine each identity's data into a single row. For example, an individual's history of flights could be represented just by the number of flights. We will add an appendix discussing this. A new paper (and new insights) would be needed to do the topic justice. ## Multi-way joins (W1, Q1) Joins of more than two datasets are also common in practice, and we should have addressed this. We can adapt our method to allow multiple senders' datasets. For example, looking at Figure 1 in the paper, suppose we add a second sender with age bucketed as 0-15, 16-45, 46+. Then in addition to the "Cancer +1" and "Cancer -1" columns, the receiver could add three columns corresponding to these age buckets. We tried this experimentally using three columns from the UCI Adult dataset. The receiver has *education*, one sender has *relationship* and the other sender has *income*, and we added identifiers so all three can be joined. We had the receiver train a logistic regression model using *education* and *relationship* as features to predict *income* (more or less than 50K). The test accuracy for different values of ε is shown in blue on the right side of Figure 1 in the attached pdf. Note that the ε value is per sketch, so the total privacy cost of the three-way join (blue line) is 2ε. For comparison, the red line shows test accuracy if the receiver does a two-way join using only the *income* sketch, so their model's only feature is *education*. 
The difference between the red and blue lines shows that for sufficiently high ε the receiver is able to make use of the three-way join. We will add an appendix. ## Using another private sketch to improve the sketch being sent (W2, Q2) Having the receiver use some hints from the sender in order to improve their sketch is a great idea. The reason we did not explore this is that we are focussed exclusively on the "non-interactive" setting, in which the sender publishes their sketch to a repository not knowing who will download it. If interaction is allowed, then existing work on cryptographic multi-party computation comes into play, and the parties are likely to get more accurate answers by using a cryptographic protocol. It would be interesting to explore a "limited-interaction" setting in which only two rounds are allowed: R sends a message to S, then S sends a message to R. ## Multi-dimensional sketches (Q3) Our future work section was too terse. By "multi-dimensional sketches" we mean sketching datasets with more than one value column --- e.g. instead of a single label in {1..k}, the sender may wish to sketch several features. It is true that multiple categorical columns could be merged into one: for example, a label in {1..3} and a label in {1..5} could be combined into a number in {1..15}. Note the performance of our method degrades as the number of possible labels increases, so this approach can only be taken so far. --- Rebuttal Comment 1.1: Comment: Thank you for the thoughtful and comprehensive responses!
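The label-merging idea mentioned at the end of the rebuttal above (a label in {1..3} and a label in {1..5} combined into a number in {1..15}) is plain mixed-radix encoding. A tiny sketch with hypothetical helper names:

```python
def merge_labels(a, b, k_b):
    """Combine label a in {1..k_a} and label b in {1..k_b} into a
    single label in {1..k_a*k_b}, so two categorical columns can be
    sketched as one."""
    return (a - 1) * k_b + b

def split_labels(m, k_b):
    """Inverse of merge_labels: recover (a, b) from the merged label."""
    return (m - 1) // k_b + 1, (m - 1) % k_b + 1
```

As the rebuttal notes, this only goes so far, since the method's accuracy degrades as the number of possible labels grows.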
Rebuttal 1: Rebuttal: Thanks for the thoughtful comments and suggestions. The attached figure includes experiments responding to questions from reviewers eoei and dAeb. Pdf: /pdf/2bfe364b6d3a3fc048fb04ba044761430ccdf802.pdf
NeurIPS_2023_submissions_huggingface
2023
A Finite-Sample Analysis of Payoff-Based Independent Learning in Zero-Sum Stochastic Games
Accept (poster)
Summary: The authors propose a doubly smoothed best-response dynamics for two-player zero-sum Markov games, with matrix games as the degenerate case. Upper bounds on the Nash gap (with bias) and the regularized Nash gap (without bias) are presented. Strengths: The technical sections are concrete and solid. The idea of simultaneously smoothly updating in the value (q) space and the policy (simplex) space appears novel. In such a way, each player is facing a much more stable environment than simply smoothing in the value space. As a consequence, the convergence results are natural outcomes of such an algorithmic design. Moreover, Assumption 3.1 makes the results more appealing in the sense that it deviates from typically made assumptions in RL analysis. Weaknesses: Some major concerns: (1) The "first form of smoothing" in the policy space is very similar to the mixed strategy in fictitious self-play but not exactly the same. In fictitious self-play, the policies are mixed in the way of having $1-\alpha_k$ vs $\alpha_k$ probabilities to select the old policy vs the new best response. In comparison, here the authors simply averaged two stochastic strategies. Even with the explanations provided in the paper, it is still not very clear to me why the authors would want to deviate from the classical practice in learning Markov games (or extensive form games), which is backed by an enormous amount of practical implementations. (2) While the authors claim that the learning dynamics are independent for the two players, their learning rates are tied through a constant, which makes the claim somewhat questionable. Please elaborate more on your definition of being "independent". (3) The bound for the Nash gap is somewhat weak. Apart from the term $\ell_\tau$ that is exponential in $\tau$, the constant $c_2$ also has both $O(\tau)$ and $O(\tau^{-1})$ terms, which requires balancing. 
Overall, I think a corollary with the best choice of $\tau$ is highly desirable to at least let readers directly evaluate the strength of the bound. (4) The bound on the regularized Nash gap is not at all surprising -- it appears rather standard in minimax (saddle point) optimization with regularized geometry. So the real emphasis should be the Nash gap itself. Although I see the authors discuss the natural difficulty for softmax policy algorithms to converge without bad dependencies on the temperature $\tau$, it still feels quite vague why this is the case -- I believe a deeper illustration with math will greatly strengthen the paper -- it won't make the Nash gap look as unsatisfying as it appears now. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the Weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, there is no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the nice comments about our learning dynamics and the presentation of our technical sections. We next provide a point-by-point response to the reviewer's comments. **Comment:** The "first form of smoothing" in the policy space ... **Response:** To our knowledge, fictitious play refers to the learning dynamics where each player estimates its opponent's policy by taking an empirical average of the opponent's historical actions and then best responds to that estimated policy. It is not entirely clear to us whether the fictitious self-play the reviewer refers to is the same as the fictitious play we illustrated above. We would greatly appreciate it if the reviewer could provide more information or pointers to the references. On the other hand, it seems that the learning dynamics the reviewer describes ("the policies are mixed in the way of having $(1-\alpha_k)$ vs $\alpha_k$ probabilities to select the old policy vs the new best response") is exactly what we are doing here. Using zero-sum matrix games for illustration, the new policy from our policy update equation is a convex combination of the old policy and the new best response (estimated through the $q$-function): $\pi_{k+1}^i=\pi_k^i+\beta_k(\sigma_\tau(q_k^i)-\pi_k^i)=(1-\beta_k)\pi_k^i+\beta_k\sigma_\tau(q_k^i)$, which can exactly be interpreted as w.p. $1-\beta_k$ taking actions according to the old policy and w.p. $\beta_k$ taking actions according to the new best response (up to a smoothing using softmax). The reason that we choose the convex combination parameter to be $\beta_k\ll \alpha_k$ (which is the stepsize for the $q$-function) is to ensure that the policies evolve on a slower time scale. **Comment:** While the authors claim that ... **Response:** Our learning dynamics is independent in the sense that to carry out the algorithm, each player only needs to use its local information, i.e., its own actions and the realized payoffs. 
We agree with the reviewer that for our algorithm to achieve provable guarantees, the stepsizes used by the two players should be of the same order. We will clarify this in our revised manuscript. That being said, this type of condition on stepsizes is actually common in the existing literature studying independent learning dynamics. For example, even for asymptotic convergence, the results in Leslie and Collins (2005) and Sayin et al. (2021) require the stepsizes used by the two players to be of the same order. Leslie, D. S., & Collins, E. J. (2005). Individual Q-learning in normal form games. SIAM Journal on Control and Optimization, 44(2), 495-514. Sayin, M., Zhang, K., Leslie, D., Basar, T., & Ozdaglar, A. (2021). Decentralized Q-learning in zero-sum Markov games. Advances in Neural Information Processing Systems, 34, 18320-18334. **Comment:** The bound for Nash gap is somewhat weak ... **Response:** Please see the response to the common comments from the reviewers at the beginning of this page. **Comment:** The bound of regularized Nash gap is not at all surprising ... **Response:** We agree with the reviewer that the finite-sample bound is qualitatively similar to existing work studying saddle point optimization problems. However, there are two major differences in this work in terms of the algorithm and the sampling. One is that almost all existing results studying saddle point problems in optimization use gradient-based methods (including mirror descent), while our learning dynamics is of best-response type, which is more natural in the game setting. Second, the sample collection process in learning is significantly different from that in optimization. In particular, to estimate the marginalized payoff to the opponent, our learning dynamics introduces the $q$-function, which is updated using an asynchronous stochastic approximation algorithm based on the realized payoffs. 
The fact that the update for the $q$-function is asynchronous is the main reason for us to have a constant that is exponential in $\tau$ in the bound, which we will illustrate in more detail in the next paragraph. We next provide a detailed elaboration on why we have an exponential dependence on $\tau$ in the bound. For simplicity of illustration, consider the zero-sum matrix game setting. The update equation for the $q$-functions in Algorithm 1 performs an asynchronous update, as only one component (which corresponds to the action taken at time step $k$) of the $q$-function is updated in the $k$-th iteration. Therefore, if an action is never taken in the algorithm trajectory, we cannot hope for convergence, as the specific component of the $q$-function is never updated during learning. Similarly, if an action is rarely taken in the learning dynamics, we would expect the overall convergence rate to be slow. Therefore, the finite-sample bound should depend on the minimum frequency of taking actions in the learning process. This is captured by the quantity $\min_{1\leq k\leq K}\min_{a^i}\pi_k^i(a^i)$, which is lower bounded in Lemma D.1. Due to the exponential nature of softmax functions, the lower bound is also exponential in $\tau$, which eventually leads to the exponential dependence on $\tau$ in the finite-sample bound. We will add this discussion with more mathematical details to the next version of this work. --- Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors Comment: Thanks for your responses and I am generally happy with the changes the authors promise to make in the next version. Since some weaknesses still stand, I'm raising my score to 5. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the feedback; if you have any other questions, please let us know.
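The policy update discussed in this rebuttal thread, $\pi_{k+1}^i=(1-\beta_k)\pi_k^i+\beta_k\sigma_\tau(q_k^i)$, can be illustrated with a deterministic caricature for a zero-sum matrix game. This is our own sketch under a simplifying assumption: the exact marginal payoff $q^i=R^i\pi^{-i}$ replaces the paper's asynchronous, payoff-based stochastic-approximation estimate, so it is not the authors' Algorithm 1.

```python
import math

def softmax(q, tau):
    """Softmax with temperature tau: sigma_tau(q)_a proportional to exp(q_a/tau)."""
    top = max(q)
    e = [math.exp((x - top) / tau) for x in q]
    z = sum(e)
    return [x / z for x in e]

def doubly_smoothed_br(R1, pi1, pi2, beta, tau, iters):
    """Deterministic (expected-payoff) variant of the smoothed
    best-response policy update pi <- (1-beta)*pi + beta*sigma_tau(q).
    Player 2's payoff is -R1 (zero-sum); q is computed exactly here,
    whereas the paper estimates it from realized payoffs."""
    n, m = len(R1), len(R1[0])
    for _ in range(iters):
        q1 = [sum(R1[a][b] * pi2[b] for b in range(m)) for a in range(n)]
        q2 = [sum(-R1[a][b] * pi1[a] for a in range(n)) for b in range(m)]
        b1, b2 = softmax(q1, tau), softmax(q2, tau)
        pi1 = [(1 - beta) * p + beta * s for p, s in zip(pi1, b1)]
        pi2 = [(1 - beta) * p + beta * s for p, s in zip(pi2, b2)]
    return pi1, pi2
```

On matching pennies the iterates spiral into the uniform policy, which by symmetry is the smoothed (quantal-response-style) fixed point of this dynamics.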
Summary: This paper studied a best-response type learning dynamics in two-player zero-sum stochastic games called doubly smoothed best-response dynamics. The dynamics uses smoothed value function updates with softmax smoothed best-response, and combines minimax value iteration. This dynamics is payoff-based, convergent, rational, and symmetric. The authors also provided the first finite-sample analysis of this type of dynamics, showing convergence to Nash equilibrium up to smoothing bias. Strengths: 1. Finite-sample analysis (or convergence rate analysis) for payoff-based (or noisy bandit feedback-based) learning dynamics in stochastic games is an interesting question with practical importance. This paper contributes to this area by giving the first finite-sample analysis of a payoff-based best-response-type independent learning dynamics for zero-sum stochastic games. 2. This paper is well-written and easy to follow. I appreciate that the authors provided detailed discussion and high-level ideas of the algorithm and proof sketch for the main results. Weaknesses: The claim that the proposed dynamics is *convergent* and *rational* does not seem rigorous, since the smoothing bias $\tau$ persists over time and no convergence to the exact Nash equilibrium or best response is really proved. One possible fix might be introducing a decreasing parameter $\tau_t$ so that exact convergence can be shown. This might significantly slow down the convergence of other bias terms because of the exponential dependence on $\tau$. The proposed dynamics is thus very slow in terms of convergence to Nash equilibrium, and this is a weakness of the current work. Minor comments: 1. Line 277, $Q(s,a^{i}, a^{-i})$ has not been defined. 2. Under Line 284: one term should be $R^i(s, a^{i}, a^{-i})$. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Do Algorithms 1 and 2 provide a sublinear regret guarantee for one player when the other player might be an adversary? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the nice comments about our presentation. We next provide a point-by-point response to the reviewer's comments. **Comment:** The claim that the proposed dynamics is convergent and rational does not seem rigorous since the smoothing bias persists over time and no convergence to the exact Nash equilibrium or best response is really proved. One possible fix might be introducing a decreasing parameter so that exact convergence can be shown. This might significantly slow down the convergence of other bias terms because of the exponential dependence on $\tau$. The proposed dynamics is thus very slow in terms of convergence to Nash equilibrium and this is a weakness of the current work. **Response:** Please see the response to the common comments from the reviewers at the beginning of this page. **Comment:** Minor comments: Line 277, $Q(s,a^i,a^{-i})$ has not been defined. Under Line 284: one term should be $R(s,a^i,a^{-i})$. **Response:** In Line 277, we use $Q$ as a dummy variable to introduce the notation. We will change the notation to avoid confusion in the next version. $R(a,a^i,a^{-i})$ is a typo; we will correct it in our next version. **Comment:** Do Algorithms 1 and 2 provide a sublinear regret guarantee for one player when the other player might be an adversary? **Response:** For simplicity of illustration, consider the matrix-game setting. Our learning dynamics is closely related to the celebrated smoothed fictitious play for zero-sum games in the sense that their corresponding ODE is the same, which is the continuous smoothed best-response dynamics $\dot{\pi}^i=\sigma_\tau(R^i\pi^{-i})-\pi^i$, $i\in \{1,2\}$. It was shown in the existing literature (Benaïm and Faure 2013) that smoothed fictitious play (with a diminishing sequence of temperatures $\{\tau_k\}$) is consistent, which implies that the average regret goes to zero asymptotically. 
See (Benaïm and Faure 2013) Definition 1.1 for the definition of consistency and (Benaïm and Faure 2013) Theorem 1.8 for the result. Since our learning dynamics is a discrete and stochastic variant of the smoothed best-response dynamics, it is conceivable that our learning dynamics can also be consistent. Rigorously proving the result and explicitly characterizing the rate at which the regret goes to zero are interesting future directions. Benaïm, M., & Faure, M. (2013). Consistency of vanishingly smooth fictitious play. Mathematics of Operations Research, 38(3), 437-450. --- Rebuttal Comment 1.1: Title: Acknowledgment of Rebuttal Comment: Thank you for the detailed reply! I have no further questions. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the feedback.
Summary: The authors propose algorithms for learning the Nash equilibrium in two-player games and two-player stochastic games. The algorithm for games is effectively a single-time scale algorithm (doubly smoothed best response dynamics). Meanwhile, the algorithm for stochastic games (doubly smoothed best response dynamics with value iteration) is a two-time scale approach. They show that these algorithms can be independently implemented by each agent without any communication requirement, and that the rewards are based on actual payoffs obtained after each action, rather than on the full information setting. They also show that the players reach a policy that is a best response to the other players' (stationary) policies. Strengths: The aim to reach last-iterate convergence in zero-sum games and stochastic zero-sum games through independent learning is a challenging problem, and the authors provide not only a finite-sample guarantee but also show that the learned policies are best responses to the stationary policies of other agents and achieve "rational" learning (Bowling and Veloso, 2001). Also, the assumption about the existence of a joint policy that induces an irreducible and aperiodic Markov chain is relatively weaker than other existing assumptions, which assume that any policy always induces such an irreducible and aperiodic Markov chain or that any policy created by the algorithm's trajectory is uniformly geometrically ergodic. They use coupled Lyapunov drift inequalities to guarantee convergence and finite-sample guarantees. Weaknesses: 1) If the game has a non-interior Nash, do you still get convergence? Lemma D.1 seems to say that all strategies are played with a probability lower bounded by a certain value. 2) Is there any hope of getting a stronger probabilistic guarantee for at least the two-player zero-sum games setting? Essentially, to understand the convergence of the iterates (in distribution or w.h.p.) instead of in an expected sense, as derived in this paper? 
3) The authors state that prior two-time-scale approaches might require implicit coordination among players; however, their algorithm for stochastic games is also two-time-scale, and it would help if they could clarify how they avoid this coordination in the algorithm and how they account for it in the analysis. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: They have mentioned the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging comments about our work. We next provide a point-by-point response to the reviewer's comments. **Comment:** If the game has a non-interior Nash, do you still get convergence? Lemma D.1 seems to say that all strategies are played with a probability lower bounded by a certain value. **Response:** We want to clarify that, regardless of whether a Nash equilibrium is interior or not, our finite-sample bound always holds. However, the finite-sample bound in general does not imply asymptotic convergence to *zero*. Specifically, observe that in either Theorem 2.1 or Theorem 3.1, the last term on the right-hand side of the bound (which we call the smoothing bias in our paper) is asymptotically nonvanishing, and is proportional to $\tau$, which is the temperature used in defining the softmax operator. This term captures the error between the output of the algorithm and a Nash equilibrium because a Nash equilibrium can potentially be a pure strategy (i.e., a non-interior Nash) while our learned policies are always stochastic. **Comment:** Is there any hope of getting a stronger probabilistic guarantee for at least the two-player zero-sum games setting? Essentially to understand the convergence of the iterates (in distribution or w.h.p)? instead of an expected sense as derived in this paper). **Response:** In zero-sum matrix games, the policies produced from our algorithm do enjoy mean-square convergence to the Nash distribution, denoted by $(\pi_\tau^1,\pi_\tau^2)$. Recall that the Nash distribution is the unique minimizer of the regularized Nash gap defined right before Corollary 2.2.1; see also (Leslie and Collins 2005) for the definition of the Nash distribution. The uniqueness part follows from the regularized Nash gap being a strongly convex function (thanks to the regularizer). 
Therefore, by the quadratic growth property of strongly convex functions, we have $\sum_{i=1,2} \lVert \pi^{i}-\pi_{\tau}^{i} \rVert^2\leq c\, RNG(\pi^1,\pi^2)$ for some constant $c>0$, which, combined with Corollary 2.1.1, provides the mean-square convergence of $(\pi_k^1,\pi_k^2)$. A mean-square bound implies a high-probability bound via the Markov inequality (though the tail is polynomial instead of sub-Gaussian). Investigating whether a high-probability bound (with a sub-Gaussian or sub-exponential tail) is achievable is a future direction. Also, by the Markov inequality, mean-square convergence implies convergence in probability, which in turn implies convergence in distribution because the limit point $(\pi_\tau^1,\pi_\tau^2)$ is deterministic. We will include this result as a corollary in our next version. Leslie, D. S., & Collins, E. J. (2005). Individual Q-learning in normal form games. SIAM Journal on Control and Optimization, 44(2), 495-514. **Comment:** The authors state that prior two-time-scale approaches might require implicit coordination among players; however, their algorithm for stochastic games is also two-time-scale, and it would help if they could clarify how they avoid this in the algorithm and how they make up for it in the analysis. **Response:** To clarify, in Lines 620 - 628, we say that the proposed learning dynamics in these results are two-time-scale in the sense that there is a time-scale separation between the two players. Specifically, one player updates much faster than the other, making the learning dynamics *asymmetric* between the two, which also indicates implicit coordination. In contrast, our learning dynamics only require the update of each player's policy to be slower than the update of their $q$-functions, but crucially we do not assume a time-scale separation between the players, making our learning dynamics *symmetric*. We will clarify this point in our next version.
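The step from a mean-square bound to a high-probability bound via the Markov inequality can be sanity-checked numerically. The sketch below is purely illustrative (the error samples are synthetic stand-ins, not the paper's iterates): applying Markov's inequality to $X^2$ gives $P(X\ge\epsilon)=P(X^2\ge\epsilon^2)\le \mathbb{E}[X^2]/\epsilon^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a policy error ||pi_k - pi_tau|| (NOT the paper's iterates):
# any nonnegative random variable with a known mean-square bound works here.
x = np.abs(rng.normal(0.0, 0.1, size=100_000))

B = np.mean(x ** 2)   # empirical mean-square bound E[X^2]
eps = 0.2

empirical_tail = np.mean(x >= eps)  # P(X >= eps), estimated from samples
markov_bound = B / eps ** 2         # Markov on X^2: P(X >= eps) <= E[X^2]/eps^2

assert empirical_tail <= markov_bound
```

Note that the resulting tail bound decays only polynomially in $\epsilon$, consistent with the remark above that the tail is polynomial rather than sub-Gaussian.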
--- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I thank the authors for their rebuttal, and I understand this is mainly a theoretical study, but did the authors try their algorithms against well-known algorithms (MWU, OMWU) in simple games such as Rock, Paper, Scissors, etc.? I think it might be useful to shed light on the smoothing bias, the temperature, and how these affect the convergence. Other than that, my initial questions have been clarified. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the great suggestion. We will include numerical simulations on benchmark examples (such as the RPS game suggested by the reviewer) in our next version and compare our learning dynamics with existing algorithms such as MWU and OMWU. Moreover, we will also investigate the dependence on the temperature $\tau$ (which determines the smoothing bias) in our numerical simulations.
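As a rough preview of such a simulation: the sketch below runs a simple discrete smoothed best-response update on Rock-Paper-Scissors. The stepsize, temperature, and full-information update rule here are illustrative choices of ours, not the paper's exact payoff-based dynamics; with this kind of smoothing, both policies spiral into the (uniform) Nash distribution.

```python
import numpy as np

def softmax(q, tau):
    z = np.exp((q - q.max()) / tau)
    return z / z.sum()

# Rock-Paper-Scissors payoff matrix for player 1 (rows/cols: R, P, S).
R = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

tau, alpha = 0.5, 0.05          # illustrative temperature and stepsize
p1 = np.array([0.8, 0.1, 0.1])  # start far from the uniform Nash
p2 = np.array([0.1, 0.8, 0.1])

for _ in range(2000):
    br1 = softmax(R @ p2, tau)     # smoothed best response of player 1
    br2 = softmax(-R.T @ p1, tau)  # player 2 faces payoff matrix -R^T
    p1 = (1 - alpha) * p1 + alpha * br1
    p2 = (1 - alpha) * p2 + alpha * br2
```

For RPS the regularized Nash is uniform by symmetry, so shrinking $\tau$ here mainly speeds up the transient; in asymmetric games the gap between the smoothed fixed point and the true Nash (the smoothing bias) would also shrink with $\tau$.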
Summary: The authors focus on the problem of finite-sample convergence analysis of independent best-response-type learning dynamics in two-player zero-sum stochastic games. The dynamics are payoff-based and approximate the Shapley operator (which is known to be a contraction). Under the assumption that for any pair of policies the induced Markov chain is irreducible and aperiodic, it is shown that, in expectation, the Nash gap is at most epsilon after 1/eps steps of the algorithm. Finally, the authors provide rationality-type results (i.e., on the regret when one player follows the dynamics and the other plays a stationary policy). Strengths: I think that the 1/eps rate result is quite interesting and surprising (I would expect a rate of 1/eps^2). Moreover, the techniques seem highly non-trivial (design of a novel Lyapunov-type function). Weaknesses: Sometimes the write-up is not self-contained, but this is because of the limited space. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Can you please explain whether your results carry over to other settings like Markov potential games? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging comments about our work. **Question:** Can you please explain if your results carry over for other settings like Markov potential Games? **Response:** For potential games, it was shown in Swenson et al. (2018) that continuous best-response dynamics provably converges to a pure-strategy Nash equilibrium with an exponential rate. Since our learning dynamics is a discrete, smoothed, and stochastic variant of the best-response dynamics, it should converge for potential games when the stepsizes are appropriately chosen, while the analysis and the rate of convergence might be largely different. Rigorously proving this result could be an interesting future direction. In the Markovian setting, recall that the outer loop of our learning dynamics is an approximation of minimax value iteration, which works because minimax value iteration converges due to the contraction property of the minimax Bellman operator. For Markov potential games (MPGs), it is unclear if value iteration leads to convergence. However, since there exists a potential function for MPGs, existing results mostly use the gradient-based method, which works because the potential function naturally serves as a Lyapunov function. See for example Ding et al. (2022); Zhang et al. (2022). Swenson, B., Murray, R., \& Kar, S. (2018). On best-response dynamics in potential games. SIAM Journal on Control and Optimization, 56(4), 2734-2767. Ding, D., Wei, C. Y., Zhang, K., \& Jovanovic, M. (2022, June). Independent policy gradient for large-scale markov potential games: Sharper rates, function approximation, and game-agnostic convergence. In International Conference on Machine Learning (pp. 5166-5220). PMLR. Zhang, R., Mei, J., Dai, B., Schuurmans, D., \& Li, N. (2022). On the global convergence rates of decentralized softmax gradient play in markov potential games. Advances in Neural Information Processing Systems, 35, 1923-1935. 
--- Rebuttal Comment 1.1: Comment: Thank you for your response. I have no further questions. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the feedback.
Rebuttal 1: Rebuttal: We thank all reviewers for their time and effort in reviewing this work. We here provide responses to the common comments raised by the reviewers. The point-by-point response to each reviewer's individual comments is provided under the corresponding review. **Common Comments:** *Major comment from Reviewer zACq:* The claim that the proposed dynamics is convergent and rational seems not rigorous since ... *The (3)rd comment from Reviewer bFVC:* The bound for Nash gap is somewhat weak ... **Our Response:** We agree with the reviewers that the convergence bound is not asymptotically vanishing because of the presence of the smoothing bias, which will be made clear in the next version. In the next version, we will also explicitly choose the temperature $\tau$ to provide an overall (possibly slower) rate of convergence to a Nash equilibrium. The slow convergence to a Nash equilibrium is caused by a constant in the bound that is exponential in $\tau$, which dominates the other terms that are polynomial in $\tau$. We have acknowledged this as a weakness of this work in the conclusion section. In general, we believe that achieving a sharp (polynomial) last-iterate rate of convergence to a Nash equilibrium (without regularization) with best-response learning dynamics is a much more challenging problem. Recall that for fictitious play (which is the simplest best-response dynamics, and inspires our algorithm design), while Samuel Karlin conjectured an $\mathcal{O}(1/k^{1/2})$ rate of convergence in 1959, the state-of-the-art provable rate of convergence is the $\mathcal{O}(1/k^{1/(m+n-2)})$ (where $n=|\mathcal{A}^1|$ and $m=|\mathcal{A}^2|$) rate provided in Shapiro, H. N. (1958). Note that this provable rate, while being polynomial, is *dimension* dependent, and hence can be arbitrarily slow when $m$ and $n$ are large.
Therefore, establishing a dimension-independent polynomial rate of convergence for fictitious play has remained an *open problem* for more than $70$ years. Compared with fictitious play, our algorithm is payoff-based (more challenging than fictitious play, where the opponent's actions can be observed), and achieves a $1/K$ rate of convergence to the regularized Nash, while achieving a worse rate of convergence to the true Nash. This, to some extent, also reflects the challenge of establishing a dimension-independent polynomial rate of convergence for best-response-type learning dynamics. That being said, this work provides **the first last-iterate finite-sample analysis of best-response independent learning dynamics** in the literature. Investigating the possibility of achieving an improved rate of convergence is a compelling future direction. Shapiro, H. N. (1958). Note on a computation method in the theory of games. Communications on Pure and Applied Mathematics, 11(4), 587-593.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
UniTSFace: Unified Threshold Integrated Sample-to-Sample Loss for Face Recognition
Accept (poster)
Summary: I thank the authors for the interesting paper and their effort in expanding the field of face recognition. The paper proposes a new loss function, USS (and its variants), for face recognition. The motivation is to jointly learn a threshold that can be used during the verification process. The authors show the efficacy of UniTSFace when USS is used jointly with ArcFace or CosFace (margin-based softmax losses) on various evaluation sets. Strengths: - The performance gain on various datasets using UniTSFace is clearly shown. - The derivation of the USS loss starting from a contrastive objective is interesting. - The motivation of trying to learn one threshold that can be used on all evaluation datasets is interesting. Weaknesses: - The authors do not show whether learning the threshold t is really necessary. The readers cannot distinguish whether the performance gain comes from making the threshold t a learnable parameter or from the addition of the supervised contrastive learning objective, which is a loss that boosts performance when combined with margin-based softmax losses [1]. - The authors do not show whether the learned threshold is indeed better than the more common method, which is to calculate the optimal threshold specific to the validation dataset. - Figure 1 seems to suggest that, despite the L_uss loss, the variance in thresholds remains; therefore, it may be better to calibrate a different threshold for each dataset. [1] CoReFace: Sample-Guided Contrastive Regularization for Deep Face Recognition Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - It was not stated clearly in the text what "unified" refers to. If it is correct that the authors use the term unified because the learned threshold is used for all evaluation settings (in contrast to the normal scenario where the threshold is computed for each dataset), then it would be good to say so at the beginning. - Why does the unified threshold satisfy equation 3?
There may not exist a t that satisfies this for a given dataset. - How does UniTSFace compare with ArcFace + Supervised Contrastive Loss (InfoNCE) and ArcFace + L_naive (which does not learn t)? In such a case, t is computed for each validation dataset, as is customary in face recognition. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes. The limitation is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. The authors do not show whether learning the $t$ is really necessary.** Thanks for the suggestion. We have compared the performance of $L_{\text{uss}}$ and $L_{\text{naive}}$ in **Table A**; it is evident that $L_{\text{uss}}$, by adding the unified threshold $t$, achieves a remarkable enhancement over $L_{\text{naive}}$. We note that we dedicated extensive efforts to training the model with $L_{\text{naive}}$, but the model struggled to converge, resulting in inferior results. In **Tables A and B**, we further compared the performance of both $L_{\text{uss}}$ and $L_{\text{naive}}$ when employed in conjunction with CosFace and ArcFace. Similarly, we observed that the combination of ArcFace/CosFace with $L_{\text{naive}}$ resulted in notably inferior outcomes compared to the baseline ArcFace/CosFace. This starkly contrasts with the more favorable performance achieved by pairing ArcFace/CosFace with $L_{\text{uss}}$. These results show that the threshold $t$ plays a pivotal role in boosting performance. Moreover, we note that in Table 1 of our paper, we have compared our $L_{\text{uss}}$ with the other two baseline sample-to-sample losses, $L_{\text{soft}}$ and $L_{\text{bce}}$ (neither has a unified $t$). Our $L_{\text{uss}}$ again outperforms these two losses, which also shows the effectiveness of incorporating the $t$. **Q2. Whether the learned $t$ is indeed better than the more common method, which is to calculate the optimal threshold specific to the validation dataset.** Thanks for the question, and we will clarify this in the revision. Firstly, the learned $t$ is not directly used in testing. In our experiments, the optimal threshold is indeed calculated specifically for the validation dataset as well as the testing criteria. Secondly, though the unified $t$ learned by the model on the training dataset cannot be directly applied to the validation dataset, the model itself has learned compact and discriminative features.
Specifically, without this explicit $t$, the losses $L_{\text{naive}}$, $L_{\text{soft}}$, and $L_{\text{bce}}$ only promote proximity among intra-subject features and discrepancy among inter-subject features for each subject. With this explicit $t$, our USS loss aims to learn features that distinguish all the positive sample-to-sample similarities from all the negative ones on the whole training dataset. In other words, the learning objective of the USS loss is more stringent, and the features learned using our USS loss are therefore expected to be more discriminative. The results reported in Table 1 of the paper, as well as in **Tables A and B**, demonstrate the benefits of the features learned by adding the unified $t$. **Q3. Figure 1: the variance in thresholds remains; therefore, it is better to calibrate a different threshold for each dataset.** As per our response to **Q2**, the optimal testing threshold is indeed calibrated for each specific dataset. On the other hand, though there are variances in Fig. 1, we note that the distribution from $L_{\text{uss}}$ is much more compact than the other two counterparts, which qualitatively supports our claim that the model has managed to learn more compact and discriminative features due to the application of the $L_{\text{uss}}$ loss to learn a unified threshold. **Q4. It was not stated clearly in the text what unified refers to.** Thanks for the advice, and we will clarify this in the revision. In the case of training, the unified threshold $t$ distinguishes our $L_{\text{uss}}$ from the other sample-to-sample losses such as $L_{\text{naive}}$, $L_{\text{soft}}$, and $L_{\text{bce}}$, which only care about subject-level separation, i.e., intra-subject similarity being higher than inter-subject similarity. They do not require that, for all subjects, all the positive sample-to-sample similarities be larger than all the negative sample-to-sample similarities.
The $t$ in our $L_{\text{uss}}$, however, explicitly requires that all the positive sample-to-sample similarities be larger than all the negative sample-to-sample similarities for all samples. We therefore name our loss a unified threshold integrated loss. In the case of testing, the learned $t$ on the training set cannot be directly applied to the testing datasets. The optimal testing threshold will be calculated based on the specific dataset as well as the application scenario. **Q5. Why does the unified $t$ satisfy equation 3? There may not exist a $t$ that satisfies this for a given dataset.** We agree that there might not exist a unified $t$ that satisfies Eq. (3) for a given dataset that contains noisy samples or wrong labels. Ideally, if the dataset is clean and the face model is well trained, all positive sample-to-sample similarities should be close to 1 and all negative similarities should be close to -1. Then the $t$ will exist. Naturally, our USS loss is built on the assumption that there exists such a unified $t$ for a given dataset; it is then our objective to find the unified threshold satisfying Eq. (3) during the training of a facial model. We have demonstrated in lines 141-148 that the unified $t$ can be learned during the training of the model by adopting the USS loss, and the unified $t$ evolves into the learnable bias parameter $b=\gamma t$ in the USS loss. Even though the unified $t$ may not exist, the learning process toward this ideal condition still enables the model to learn more compact and discriminative features than the other losses, which can lead to better performance on various testing datasets. **Q6. Compare UniTSFace with ArcFace + InfoNCE / + L_naive.** As per the reviewer's request, we have conducted the related experiments and reported the results in **Table A**.
It is clear that ArcFace + $L_{\text{naive}}$ produced significantly inferior results to the plain ArcFace, and while ArcFace + InfoNCE improves the performance of ArcFace, it is still outperformed by our UniTSFace, demonstrating the effectiveness of our method. --- Rebuttal Comment 1.1: Comment: I appreciate the comprehensive reply from the authors. I have carefully reviewed the response. I no longer have any questions, and my issues have been resolved. I encourage the authors to further refine the paper based on reviewer feedback, and I'd like to once again thank them for their diligent efforts. --- Reply to Comment 1.1.1: Comment: Thanks for raising the scores and the valuable suggestions; we will further refine the paper according to all reviewers' comments when submitting the final version.
Summary: The paper proposes a unified threshold integrated sample-to-sample loss for face recognition, which addresses the limitations of existing methods in exploring the cross-sample relationship and setting a unified threshold. The proposed loss function achieves exceptional performance on multiple benchmark datasets. Strengths: 1) The paper introduces an approach called UniTSFace, which combines a unified threshold with a sample-to-sample loss for face recognition. This approach addresses the limitations of existing methods and proposes a new loss function that achieves exceptional performance on multiple benchmark datasets. 2) The experimental results of the paper demonstrate significant improvement, especially compared to the baseline, on highly discriminative datasets such as MFR. Weaknesses: 1) It is unnecessary to use complex symbolic formulas to express simple concepts, such as in lines 105 to 124. 2) Adding a sample-to-sample loss in the field of face recognition is not novel enough. 3) The training method of using a fixed threshold may appear to be consistent with testing, but it actually conflicts with it. In testing, different false acceptance rates (FAR) are used on different scenarios, meaning that for the same model, different thresholds will be used to determine the predicted results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the Weaknesses section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. It is unnecessary to use complex symbolic formulas to express simple concepts, such as in lines 105 to 124.** Thanks for the comments. The reviewer appears to be well-versed in face recognition research and reckons that these formulas present a simple concept. However, it is essential for us to cater to a wider range of readers, including beginners in this field, and to present preliminary concepts accurately, clearly, and comprehensively. Moreover, through these formulas, we aim to mathematically present our motivation for this work, i.e., explicitly learning a unified threshold to distinguish the positive sample-to-sample similarities from the negative ones on the training set, which is absent in existing sample-to-sample losses. Additionally, these notations are consistently used in sections 3.1, 3.2, and 3.3 to formulate the different losses, namely $L_{\text{naive}}$, $L_{\text{soft}}$, $L_{\text{bce}}$, and $L_{\text{uss}}$. Removing these notations/formulas would adversely affect the presentation of these losses and lead to misunderstandings. Therefore, we believe it is crucial to retain the formulas in lines 105-124 to maintain a coherent and informative paper that caters to both experienced researchers and newcomers to the field of face recognition. **Q2. Adding a sample-to-sample loss in the field of face recognition is not novel enough.** Sample-to-sample losses have been a crucial research topic in deep face recognition for the past decades. Researchers have dedicated substantial efforts to this area and have introduced prestigious losses and methods such as DeepID2, Triplet Loss, and the (N+1)-Tuplet Loss, just to name a few. The majority of them aim to maximize inter-subject discrepancy while minimizing intra-subject distances, which may require a meticulous sampling/pairing step for every mini-batch.
Moreover, in face verification tasks, a single threshold is required to distinguish positive facial pairs from negative ones. Unfortunately, none of the aforementioned methods incorporate such an explicit constraint. To address this limitation, we propose the USS, which explicitly incorporates a learnable unified $t$ during the training process. By encouraging the separation of positive and negative image pairs via the threshold $t$, we expect that the model is able to learn more discriminative features, which can thereafter perform better on unseen testing sets. Additionally, through our derivations, the $t$ can be embedded in the bias term $b=\gamma t$. After the model is trained, we are able to directly inspect the value of this $t$, which can help us quantitatively and qualitatively understand what the model is learning, as shown in Fig. 1 of the paper. Furthermore, the USS can be effortlessly extended to the marginal USS loss and can be used jointly with other sample-to-class losses. In our experiments, we have combined the USS loss with CosFace as UniTSFace, which surpassed other sophisticated combinations such as VPL, UNPG, and AnchorFace. In fact, as the reviewer pointed out, "**the experimental results of the paper demonstrate significant improvement**". Reviewer nR9r agrees with us: "**It is interesting to focus on the threshold for distinguishing positive from negative pairs, which is paid less attention.**" Reviewer qAqH also stated that "**The advantages of the USS loss over other losses are numerous**". Overall, we assert that our USS is novel and provides the research community with a much more general and effective solution for face recognition tasks. **Q3. The training method of using a fixed $t$ may appear to be consistent with testing, but it actually conflicts with it. In testing, different FARs are used in different scenarios, meaning that for the same model, different thresholds will be used.** We thank the reviewer for pointing this out.
We will clarify this part in the revised version. Firstly, we note that the threshold $t$ is not fixed during training but is a learnable parameter embedded in $b=\gamma t$. For different training datasets, the learned $t$ might be different. Secondly, it is true that different thresholds will be selected based on various criteria, such as different FARs. This, however, does not conflict with our goal of learning a unified threshold in our USS loss. We explain the reasons as follows: 1. The training and testing stages in the context of deep face recognition are two completely separate stages. During the training stage, a face model is supervised and trained using either sample-to-class or sample-to-sample loss functions. Once the model is trained, it will solely be employed to extract latent facial features for testing samples (which may not necessarily belong to the subjects in the training set). The extracted features will be used to evaluate the performance of the trained face model. In other words, the performance of a face model essentially relies on effective feature learning; that is, features corresponding to the same subject are brought closer together, while features from different subjects are pushed apart. In testing, based on the features extracted by the learned deep face models, different thresholds will be determined for the best performance in different scenarios. 2. We propose to explicitly learn a unified threshold $t$ through our USS loss. The training process toward the unified threshold ensures that the model not only promotes proximity among intra-subject features and discrepancy among inter-subject features, but also explicitly encourages all positive sample-to-sample similarities to exceed all negative sample-to-sample similarities. By imposing this more stringent constraint, the features learned using our USS loss are anticipated to be more discriminative. 3.
Though the learned unified $t$ might not be directly used in various testing scenarios, the ultimate goal of separating the negative from the positive pairs using a unified threshold is consistent in both training and testing. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I agree with the explanations provided by the authors for Q1 and Q2, but I still haven't received a very clear answer to the most important Q3. We can still only see the necessity of widening the distance between intra-class and inter-class distances, but for the main point of this paper, a unified threshold, there still hasn't been convincing evidence presented to demonstrate its necessity. --- Reply to Comment 1.1.1: Comment: **Q3'. We can still only see the necessity of widening the distance between intra-class and inter-class distances, but for the main point of this paper: a unified threshold, there still hasn't been convincing evidence presented to demonstrate its necessity.** We thank the reviewer for agreeing with our explanations provided for Q1 and Q2. As for the concerns raised in Q3, we apologize for not clearly stating the necessity of learning the unified threshold in our first rebuttal to Q3. We would like to clarify as follows: **1) "We can still only see the necessity of widening the distance between intra-class and inter-class distances."** We propose the unified threshold integrated sample-to-sample (USS) loss to learn the unified threshold $t$ on the training dataset. By integrating such an objective into the loss function, the model trained by our USS loss is guided to produce more discriminative features than models trained by other losses. The reviewer agrees on the necessity of widening the distance between intra-class and inter-class distances, which is actually consistent with learning a unified threshold, i.e., we need a threshold to separate them, and the combination with margins can further improve the performance.
Our USS loss is expected to separate all positive pairs from all negative pairs by the explicit threshold $t$, which is a **stricter** constraint than simply widening the distance between intra-class and inter-class distances. **2) "For the main point of this paper: a unified threshold, there still hasn't been convincing evidence presented to demonstrate its necessity."** We highlight that we have quantitatively and qualitatively demonstrated the necessity of learning the unified threshold, and we provide the evidence below. Firstly, in Table A, significant improvements have been achieved by $L_{\text{uss}}$ over $L_{\text{naive}}$, which does not learn an explicit threshold. Similar improvements can also be observed between $ArcFace + L_{\text{naive}}$ and $ArcFace + L_{\text{uss}}$ in Table A, as well as between $CosFace + L_{\text{naive}}$ and $CosFace + L_{\text{uss}}$ in Table B. These improvements clearly show the necessity of learning the unified threshold. Secondly, in Table 1 of the paper, we have compared $L_{\text{uss}}$ and $L_{\text{bce}}$, and their marginal extensions $L_{\text{uss-m}}$ and $L_{\text{bce-m}}$; the improvement achieved by our USS over the sample-to-sample BCE is also significant. We note that the only difference between $L_{\text{uss}}$ and $L_{\text{bce}}$ is the objective of a unified threshold $t$, and the results further show the necessity of learning the unified threshold. Thirdly, we depicted the optimal threshold distributions of the sample-to-sample Softmax loss, the BCE loss, and our USS loss in Figure 1 of the paper. We can clearly observe that the distribution learned by our USS has the smallest variance, and the learned unified threshold $t = 0.4896$ lies around the median of the interquartile range. This qualitative illustration also suggests the necessity of learning the unified threshold.
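To make the role of a unified threshold concrete, the snippet below gives a schematic, hypothetical rendering of a unified-threshold sample-to-sample objective (our own illustrative form, not the paper's exact $L_{\text{uss}}$): it softplus-penalizes positive similarities that fall below a shared threshold t and negative similarities that rise above it, so one global t must separate all pairs.

```python
import numpy as np

def unified_threshold_loss(pos_sims, neg_sims, t, gamma=32.0):
    """Schematic unified-threshold loss (illustrative, NOT the paper's exact L_uss):
    penalize positive similarities below t and negative similarities above t,
    using a softplus so the objective stays smooth in t (and in a learnable t)."""
    pos_term = np.log1p(np.exp(gamma * (t - pos_sims))).mean()
    neg_term = np.log1p(np.exp(gamma * (neg_sims - t))).mean()
    return pos_term + neg_term

# Well-separated similarities: every positive exceeds t = 0.5,
# every negative falls below it -> near-zero loss.
good = unified_threshold_loss(np.array([0.85, 0.9, 0.8]),
                              np.array([0.1, 0.2, 0.05]), t=0.5)

# Overlapping similarities that no single threshold separates -> large loss.
bad = unified_threshold_loss(np.array([0.3, 0.4]),
                             np.array([0.6, 0.7]), t=0.5)
```

In a training setting, t would be a learnable parameter (absorbed into a bias term, in the spirit of b = γt described above), so minimizing such a loss drives all positive pairs above, and all negative pairs below, a single shared threshold.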
Summary: This paper proposes UniTSFace, a new approach to face recognition that uses a unified threshold integrated sample-to-sample loss (USS loss). The USS loss features a unified threshold for distinguishing positive from negative pairs and can be enhanced with an auxiliary margin. The authors show that the USS loss can be integrated with sample-to-class based losses and evaluate its effectiveness on various benchmark datasets, demonstrating its suitability for real-world applications. The contributions of this work include introducing the USS loss, demonstrating that it can learn a unified threshold, and showing that it can be enhanced with an auxiliary margin and is compatible with existing sample-to-class based losses. In addition, UniTSFace outperforms state-of-the-art methods on multiple benchmark datasets, including the Megaface 1 dataset, and the proposed approach is evaluated on the large-scale WebFace42M dataset, which contains 42.5 million images of 2 million identities, demonstrating its efficacy for real-world applications. Strengths: 1. The advantages of the USS loss over other loss functions are numerous. First, it is highly efficient and can work seamlessly with sample-to-class-based losses. Second, it overcomes the limitations of previous sample-to-sample losses by explicitly incorporating a learnable threshold that separates positive and negative pairs. Third, it achieves state-of-the-art performance on multiple benchmark datasets, demonstrating its effectiveness in real-world face recognition applications. 2. The experiments in this paper are solid and have been conducted on the largest-scale dataset WebFace42M, achieving the state-of-the-art performance on the most challenging benchmark dataset MFR in the academic track, and the comparisons are fair. 3. The proposed method is original 4. There are no significant flaws in the experiments conducted in this work. 
Weaknesses: The paper does not explicitly mention any weaknesses or limitations of the proposed UniTSFace approach or the USS loss function. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. The paper mentions that the unified threshold is limited to a sample-to-sample loss. Have you considered extending the unified threshold to sample-to-class loss, and if so, what are the potential benefits and challenges of doing so? 2. The paper mentions that UniTSFace achieves state-of-the-art performance on multiple benchmark datasets. However, how does UniTSFace compare to other state-of-the-art methods in terms of computational efficiency and memory usage? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The paper does not explicitly mention any limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. The paper mentions that the unified threshold is limited to a sample-to-sample loss. Have you considered extending the unified threshold to sample-to-class loss, and if so, what are the potential benefits and challenges of doing so?** We thank the reviewer for pointing this out. We are actually in the process of integrating the unified threshold into a sample-to-class loss, which we term the USC loss here, in coordination with the USS loss, for convenience. Sample-to-class-based losses in face recognition are conventionally built on the Softmax loss, including the well-known Large Margin Softmax loss, normalized Softmax loss, and cosine-margin and angular-margin based Softmax losses. Such losses only learn a class proxy conveyed in the weight vector, instead of exploring cross-sample relationships. To incorporate the unified threshold into the sample-to-class Softmax loss, a straightforward approach is to directly introduce a unified threshold: $$L_{\text{usc}}(\textbf X^{(i)})= \log(1+e^{-\gamma\ g(\textbf x^{(i)}, \textbf c^{(i)})+b}) +\sum_{j\neq i\atop j=1}^N\log(1+e^{\gamma\ g(\textbf x^{(i)}, \textbf c^{(j)}) - b}),$$ where $b = \gamma t$ is a constant to be learned, $t$ hereby is the desired unified threshold, and $\textbf c^{(i)}$ is the feature proxy of the $i$-th class. Since the unified threshold in USC imposes a stricter constraint than the original normalized Softmax loss, i.e., we anticipate that such a unified threshold can separate all the positive sample-to-class similarities from the negative ones, the USC loss is expected to outperform its normalized Softmax counterpart in face recognition. Additionally, since the USC is essentially a sample-to-class-based classification loss, it can naturally be used in general object classification tasks. On the other hand, we must admit that the USC loss has its own shortcomings in face recognition.
Different from the sample-to-sample losses (such as $L_{\text{naive}}$ and $L_{\text{uss}}$) used in this submission, it cannot directly explore the relationships among the facial samples during the training of the facial model, as it only optimizes the class proxies and sample features. The key challenge towards robust face recognition is how to learn a representative class proxy $\textbf c^{(i)}$ that can benefit from the abundant training facial samples. Therefore, as we stated in the conclusion of the manuscript, effectively combining the two approaches into one framework will be our future work. **Q2. The paper mentions that UniTSFace achieves state-of-the-art performance on multiple benchmark datasets. However, how does UniTSFace compare to other state-of-the-art methods in terms of computational efficiency and memory usage?** We thank the reviewer for pointing this out. UniTSFace utilizes the ResNet architecture as its backbone and optimizes the parameters using the arithmetic average of the cosine-margin Softmax loss and the proposed USS loss. In the inference stage, the trained UniTSFace model is solely a convolutional ResNet used to extract latent facial features for testing images. Naturally, the computational consumption and memory usage depend entirely on the convolutional operations inherent to the selected ResNet architecture and the resolutions of the input images used during testing. Since all the methods under comparison in Tables 4 and 5 adopt the same ResNet-50, ResNet-100, and ResNet-200 backbones and the same input images for testing, the differences in computational and memory usage are negligible. This is affirmed in **Table C**, where we present the inference runtime across various methods. The table clearly indicates that the variations in inference times among different methods are negligible, measured in fractions of a millisecond.
By contrast, in the training stage, different models employ different loss functions to train the convolutional ResNet backbone, resulting in diverse computational overheads and memory usage. Theoretically, 1. compared to deep models based on purely sample-to-class losses, such as CosFace and ArcFace, UniTSFace requires additional $O(N)$ logarithmic/multiply-addition operations attributable to the USS loss. The joint USS loss entails additional memory usage to accommodate $N$ facial features, where $N$ denotes the number of subjects within the training dataset. 2. compared to deep models based on purely sample-to-sample losses, e.g., $L_{\text{soft}}$ or $L_{\text{bce}}$, UniTSFace only requires $O(N)$ extra logarithmic/multiply-addition operations to account for the cosine-margin Softmax loss. 3. compared to deep models trained with a combination of the two kinds of losses, like VPL, UNPG, and AnchorFace, the computational requirements and memory consumption of our UniTSFace remain on par. In experiments, since the forward and backward propagation of the convolutional operations (including the activation functions and normalization layers) accounts for the majority of computation and memory usage, we found that the computational differences between these methods during the training stage are minimal as well. As reported in **Table C**, the memory usage (in GB) and training speeds (in seconds per batch, i.e., seconds per 512 images) across different methods similarly exhibit minimal discrepancies.
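As a hedged illustration of the USC loss written out in the answer to Q1 above, the sketch below evaluates it for a single sample. All names here are hypothetical, $g$ is taken to be cosine similarity, and no claim is made that this matches the authors' actual implementation:

```python
import math

def cosine(a, b):
    # cosine similarity, used here as the score function g
    na = math.sqrt(sum(v * v for v in a))
    nb = math.sqrt(sum(v * v for v in b))
    return sum(p * q for p, q in zip(a, b)) / (na * nb)

def usc_loss(x, proxies, i, gamma, t):
    # USC loss for one sample x of class i, following the rebuttal's formula:
    #   log(1 + exp(-gamma*g(x, c_i) + b)) + sum_{j != i} log(1 + exp(gamma*g(x, c_j) - b)),
    # where b = gamma * t encodes the learnable unified threshold t
    b = gamma * t
    pos = math.log(1 + math.exp(-gamma * cosine(x, proxies[i]) + b))
    neg = sum(math.log(1 + math.exp(gamma * cosine(x, proxies[j]) - b))
              for j in range(len(proxies)) if j != i)
    return pos + neg
```

A sample whose similarity to its own class proxy exceeds $t$ and whose similarity to every other proxy falls below $t$ incurs a near-zero loss, which is exactly the "stricter constraint" the rebuttal describes.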
Summary: This paper presents an interesting idea by focusing on the unified threshold for distinguishing positive from negative pairs in face recognition. Although face recognition has taken big steps towards real applications driven by deep learning, this submission makes a new proposal for a sample-to-sample based loss with a unified threshold. Experiments on typical facial image datasets show the effectiveness. =============== After reading the authors' rebuttals to my concerns and discussions, I upgrade my rating to "weak accept". Strengths: +It is interesting to focus on the threshold for distinguishing positive from negative pairs, which has received less attention. +The derivation of the USS loss makes sense by defining the upper bound of the naive loss. +Experiments deploying USS into ArcFace and CosFace show the effectiveness of the proposed USS. The threshold range is narrowed, which shows that the certainty of the threshold is improved. Weaknesses: -From Table 1, it shows that the BCE loss is much worse than the USS loss. This seems strange, and I am concerned that the experiment may be wrong for this loss. The reasons should be discussed. -From Fig. 1, the threshold range is narrowed compared to other losses, but I am concerned whether it is correct to claim the word "unified". -Face recognition models have increased explosively with deep learning, and a number of advanced methods have been developed. The necessity of this submission can be discussed in the paper. The contribution of the proposed method to the community can be discussed by combining it with previous advanced face recognition models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the inference stage, how is the threshold determined? Is it pre-computed? It is better to clarify this point because the title mentions a unified threshold, which gives the first impression of a pre-trained threshold. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Not observed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. From Table 1, it shows that the BCE loss is much worse than the USS loss. This seems strange, and I am concerned that the experiment may be wrong for this loss. The reasons should be discussed.** Firstly, we believe that the results are correct, and we want to emphasize that they are not cherry-picked. For a fair comparison, in Table 1, we adopted the same experimental setting when we trained the model using the three different losses: $L_{\text{soft}}$, $L_{\text{bce}}$, and $L_{\text{uss}}$. Specifically, during our experiments, we found that the model converges easily using $L_{\text{soft}}$ and $L_{\text{uss}}$, and the training process is stable, which leads to more favorable results at the end. However, we found that the model is extremely hard to converge when using $L_{\text{bce}}$. Despite many efforts, such as prolonged iterations, the convergence remained problematic. We reckon that $L_{\text{bce}}$ has a separate bias term for each subject, resulting in $N$ different explicit thresholds $b_i = \gamma t_i$, which leads to unstable training. In contrast, $L_{\text{uss}}$ has only one explicit threshold, and $L_{\text{soft}}$ does not integrate any explicit threshold. **Q2. From Fig. 1, the threshold range is narrowed compared to other losses, but I am concerned whether it is correct to claim the word "unified".** We thank the reviewer for pointing this out. Firstly, our theoretical objective is to learn a unified threshold that satisfies Eq. (3) during the training of a facial model, which is consistent with the requirement of testing. Therefore, we first assume the existence of such a unified threshold, and then propose the unified threshold integrated sample-to-sample (USS) loss. We have proven through our analysis in lines 141-148 of the paper that, ideally, a model trained by USS could learn a unified threshold for the training dataset.
However, we must admit that achieving this ideal goal is subject to the model capacity, training hyper-parameters, and even the training dataset itself, which are all independent of our USS loss. For example, if the backbone network only uses one single linear neural layer, our USS loss definitely cannot guarantee a unified threshold either. In Fig. 1, however, using the same backbone architecture and training hyper-parameters, our USS loss is able to achieve a more compact threshold distribution than the other losses, which suggests the superiority of imposing the unified threshold and is consistent with our expectation. Therefore, we believe it is reasonable to retain the word "unified". **Q3. Face recognition models have increased explosively with deep learning, and a number of advanced methods have been developed. The necessity of this submission can be discussed in the paper. The contribution of the proposed method to the community can be discussed by combining it with previous advanced face recognition models.** We thank the reviewer for this advice. Significant advancements in deep face recognition have been introduced to the community in the past decade, including sample-to-class-based paradigms, sample-to-sample-based paradigms, and combinations of the two. However, none of them can explicitly learn a unified threshold to separate the positive sample-to-sample similarities from the negative similarities among all samples in the whole training dataset. An optimal threshold separating all positive sample-to-sample pairs from the negative ones is also in demand during testing. In our submission, we propose the USS loss to explicitly learn a unified threshold for the training dataset.
Though this unified threshold learned in the training stage cannot be directly applied to the testing stage, the model trained by USS is expected to extract more discriminative features and subsequently improve the face verification performance in various testing scenarios, as has been demonstrated through our extensive experiments. Furthermore, the proposed USS loss can be effortlessly extended to the Marginal USS loss and can also be used seamlessly in conjunction with other sample-to-class losses. In our experiments, we have combined the USS loss with the ArcFace and CosFace methods, and the combined approaches consistently surpass their respective individual counterparts. We denote the fusion of the CosFace and USS losses as "UniTSFace," which we have compared with other sophisticated combinations such as VPL, UNPG, and AnchorFace. The experimental results on multiple benchmark datasets further suggest the superiority of our UniTSFace. In conclusion, we believe our USS loss provides the research community with a much more versatile and effective solution for face recognition tasks. **Q4. In the inference stage, how is the threshold determined? Is it pre-computed? It is better to clarify this point because the title mentions a unified threshold, which gives the first impression of a pre-trained threshold.** We thank the reviewer for this suggestion. We will clarify this in the revised version. Yes, the threshold learned in the training stage cannot be directly used in testing. In the testing/inference stage, we first extract the features of all testing images using the trained models, and then the threshold is determined according to the specific testing criteria. For example, when reporting the 1:1 verification accuracy on LFW, CFP-FP, and AgeDB in Table 5, 10-fold validation is used. We first select the threshold that achieves the highest accuracy on the first 9 folds, and then adopt this threshold to calculate the accuracy on the held-out fold.
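The fold-based threshold selection described above can be sketched as follows. This is a minimal illustration under assumed names (`sims` holds pair similarities, `labels` the same/different ground truth); none of these names come from the paper:

```python
import numpy as np

def best_threshold(sims, labels, candidates):
    # pick the threshold that maximizes verification accuracy on the given pairs
    accs = [np.mean((sims > t) == labels) for t in candidates]
    return candidates[int(np.argmax(accs))]

def k_fold_accuracy(sims, labels, n_folds=10):
    # select the threshold on the other folds, then score the held-out fold
    folds = np.array_split(np.arange(len(sims)), n_folds)
    accs = []
    for k in range(n_folds):
        train = np.concatenate([f for i, f in enumerate(folds) if i != k])
        t = best_threshold(sims[train], labels[train], np.unique(sims[train]))
        accs.append(np.mean((sims[folds[k]] > t) == labels[folds[k]]))
    return float(np.mean(accs))
```

The reported accuracy is the average of the held-out fold accuracies, so the threshold is always chosen without looking at the fold being scored.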
--- Rebuttal Comment 1.1: Title: Concern on the inference stage Comment: Thanks for the authors' detailed response. About Q4, the authors said that in the testing phase, the features should be extracted for all testing images and 10-fold validation is used. But this seems not that practical in testing, and in a real application, we may not collect all testing images. How is the face ID predicted per image? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the question, and we would like to clarify as follows. 1) **Face recognition is different from general image classification.** General image recognition tasks are closed-set recognition, such as handwritten digit recognition in MNIST and natural image classification in ImageNet. Both the training and testing categories are fixed in such tasks, which means that each testing image must be classified into one of the existing classes. For example, every test image in MNIST must and will be classified as a number between 0 and 9 according to the classification probabilities. However, real-world face recognition is often an open-set recognition task, i.e., the training images and testing images are not always limited to the same classes (identities/individuals). For example, the training dataset used, CASIA-WebFace, has only about 10,575 identities, while globalized multi-racial (GMR), one of the testing sets, contains 242,143 identities (>> the 10,575 training IDs). Therefore, it is evident that we cannot directly input a testing image into the trained model and predict its ID according to classification probabilities. 2) **Face recognition can be divided into face verification and face identification.** Face recognition tasks can be divided into 1:1 verification and 1:n identification. In 1:1 verification, we are given two face images (one probe image and one template image) and are required to predict whether these two images belong to the same ID.
1:1 verification is the same as iPhone Face ID and Google Pixel Face Unlock: we have a template face image saved in the phone, and we need to distinguish whether the saved template face and the probe face in front of the camera belong to the same individual or not. In 1:n identification, we are given a probe face with an unknown ID and a gallery set that stores a set of faces of known individuals, and we are required to pick the right ID for the probe face image from the given gallery set. 1:n identification is often seen in movies, where the police take a picture of the person of interest and compare it with a local database. 3) **How to predict the face ID per image?** In 1:1 verification, we use the trained face model to extract features for both the probe image with an unknown ID and the template image with a known ID. Subsequently, we compare the feature similarity with a pre-defined unified threshold $\hat{t}$: if the similarity is greater than $\hat{t}$, the two images are assumed to have the same ID, and vice versa. In 1:n identification, we use the trained face model to extract features for the probe face image (unknown ID) as well as for all the images in the gallery set (known IDs). The probe face is then assigned the ID of the gallery image that has the highest feature similarity with the probe image. 4) **"This seems not that practical in testing; in a real application, we may not collect all testing images."** 1. For LFW, CFP-FP, and AgeDB in Table 5, we report the 1:1 verification accuracy with 10-fold validation. For IJB-C, we report the True Accept Rate (TAR) at False Accept Rate (FAR) = 1e-4 and 1e-5. For the MFR benchmarks, we report TARs at FAR=1e-4 for the Mask and Children test sets, and TARs at FAR=1e-6 for the GMR test sets. For MegaFace Challenge 1, we report Rank-1 accuracy for identification and TAR at FAR=1e-6 for verification.
These metrics are widely used as standard measures of the performance of different facial models, and we hereby follow the same settings for a fair comparison. 2. We agree that we cannot collect all testing images in real applications. Therefore, many efforts have been dedicated to collecting larger datasets to mimic real-world scenarios; for example, LFW (2007) only has 13,233 images from 5,749 identities, while GMR (2021) contains 1,624,305 images from 242,143 identities. 3. We note that for most real applications, we only need to extract and compare features for the probe image and the template image in 1:1 verification, or for the probe image and the faces in the gallery set in 1:n identification, both of which are practical. We hope our clarification answers the questions raised, and of course, we are open to continuing the discussion if the reviewer has further questions.
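The two protocols described in this reply can be sketched in a few lines. This is a hedged, self-contained illustration with hypothetical helper names, assuming cosine similarity as the feature comparison (the paper does not prescribe these function signatures):

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two feature vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_feat, template_feat, t_hat):
    # 1:1 verification: accept iff the similarity exceeds the threshold t_hat
    return cosine(probe_feat, template_feat) > t_hat

def identify(probe_feat, gallery_feats, gallery_ids):
    # 1:n identification: assign the ID of the most similar gallery face
    sims = [cosine(probe_feat, g) for g in gallery_feats]
    return gallery_ids[int(np.argmax(sims))]
```

In both protocols only the probe features and the template/gallery features are needed, matching point 3 of the reply above.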
Rebuttal 1: Rebuttal: We thank all reviewers for appreciating the state-of-the-art performance of our UniTSFace; especially the recommendations from reviewers qAqH and nR9r that "It is interesting to focus on the threshold for distinguishing positive from negative pairs, which is paid less attention," and "the advantages of our USS loss over other losses are numerous". We provide point-to-point responses to the main concerns raised by the reviewers. We here present the experimental results requested by the reviewers and attach them in the PDF file.

**Table A. Comparisons of different losses/methods on the MFR-Ongoing dataset. We note that the model trained with $L_{\text{naive}}$ encountered challenges in convergence and exhibited the least favorable results.**

| Method | MR-ALL | IJB-C | LFW | CFP | Age |
|----------------------------|---------------|--------------|--------------|--------------|--------------|
| $L_{\text{naive}}$ | 0.0 | 0.35 | 50.0 | 50.0 | 50.0 |
| $L_{\text{uss}}$ | 38.43 | 72.20 | 99.40 | 96.51 | 94.05 |
| ArcFace | 42.21 | 48.49 | 99.31 | 97.07 | 94.51 |
| ArcFace + InfoNCE | 45.47 | 88.00 | 99.40 | 97.11 | 94.71 |
| ArcFace + $L_{\text{naive}}$ | 0.48 | 3.52 | 98.21 | 80.42 | 84.13 |
| ArcFace + $L_{\text{uss}}$ | 48.76 | 89.06 | 99.58 | 97.40 | 94.73 |
| ArcFace + $L_{\text{soft-m}}$ | 46.03 | 88.16 | 99.45 | 97.20 | 95.03 |
| ArcFace + $L_{\text{bce-m}}$ | 17.28 | 65.10 | 96.38 | 74.40 | 82.81 |
| ArcFace + $L_{\text{uss-m}}$ | 48.92 | 89.56 | 99.40 | 97.22 | 95.20 |

**Table B.
Comparisons of different losses in combination with CosFace on the MFR-Ongoing dataset.**

| Method | MR-ALL | IJB-C | LFW | CFP | Age |
|----------------------------|---------------|--------------|--------------|--------------|--------------|
| CosFace | 45.12 | 56.65 | 99.36 | 97.30 | 94.98 |
| CosFace + $L_{\text{naive}}$ | 1.82 | 13.03 | 98.98 | 95.47 | 93.05 |
| CosFace + $L_{\text{uss}}$ | 49.75 | 89.62 | 99.41 | 96.78 | 95.30 |
| CosFace + $L_{\text{soft-m}}$ | 47.90 | 88.71 | 99.46 | 97.12 | 95.50 |
| CosFace + $L_{\text{bce-m}}$ | 15.53 | 61.53 | 96.15 | 73.20 | 80.36 |
| CosFace + $L_{\text{uss-m}}$ | 50.28 | 89.84 | 99.41 | 97.35 | 95.13 |

**Table C. Comparisons of different methods/losses in terms of the Inference Time (in milliseconds), Training Memory usage (in GB), and Training Speeds (in seconds per batch, i.e., seconds per 512 images). All experiments are conducted on the same machine with an Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz and a TITAN RTX (24GB).**

| Method | Inference Time (ms) | Training Memory (GB) | Training Speed (s/batch, s/512 images) |
|-------------------|--------------------------------|------------------------------|-------------------------------------------------|
| ArcFace | 14.42 | 9.20 | 0.93 |
| CosFace | 14.25 | 9.20 | 0.92 |
| $L_{\text{soft}}$ | 14.50 | 9.19 | 0.96 |
| $L_{\text{bce}}$ | 14.42 | 9.19 | 0.97 |
| $L_{\text{uss}}$ | 14.33 | 9.22 | 0.97 |
| VPL | 14.25 | 9.19 | 0.97 |
| UNPG | 14.66 | 9.24 | 0.96 |
| AnchorFace | 14.25 | 9.24 | 0.99 |
| UniTSFace | 14.60 | 9.22 | 0.97 |

Pdf: /pdf/ea923bcab92ec2efa28e649f26fc34cbe9e3129f.pdf
NeurIPS_2023_submissions_huggingface
2023
Distributionally Robust Linear Quadratic Control
Accept (spotlight)
Summary: The paper **Distributionally Robust Linear Quadratic Control** considers the case of finite-time linear quadratic optimal control with process and measurement noises and known time-varying dynamics. The novelty lies in the fact that the distributions of the initial state and noises are unknown but lie in an ambiguity set defined as a 2-Wasserstein ball centered at a known Gaussian distribution and with known radius. The objective is to minimize the expected LQR cost with the distributions chosen adversarially in that ball. The authors call this adversarial problem "distributionally robust Linear Quadratic Gaussian". It strictly generalizes LQG control, which assumes these distributions are known and Gaussian. The main contribution is twofold. The first part is theoretical, as the authors prove the existence of a Gaussian adversarial distribution, that is, the distributionally robust LQG reduces to classic LQG with unknown Gaussian distributions. A consequence is that the optimal controller is a linear state feedback, like in the classic LQG case. This leads to the second contribution: an algorithm to efficiently estimate the adversarial Gaussian distribution from data. The authors claim that, then, the LQG problem with estimated noise distributions can be solved efficiently by using classical methods. The authors illustrate convergence of their estimation algorithm on a simulated example. Strengths: The paper is interesting and the clear exposition makes the reasonings easy to follow. The appendix is well-managed, and I appreciate Appendix A on the solution of classical LQG with a Kalman filter. I like the idea of allowing for a whole family of noise distributions rather than assuming a fixed one. The main theoretical result helps mitigate the restrictiveness of the commonly-accepted "white noise assumption"; indeed, it shows that allowing for a larger family of noises does not provide any benefits (for distributions close enough to Gaussians).
This new problem formulation thus seems relevant. The subsequent algorithm to solve the distributionally robust LQG problem follows naturally and further justifies the interest of the theoretical result. Overall, the reasoning exposed is sound and well-motivated. Finally, I appreciate that the authors went the extra mile in Section IV by augmenting the theoretical result with a data-efficient algorithm. Weaknesses: The three main weaknesses of the paper are, in my opinion, 1. the lack of thoroughness of the simulation study; 2. the lack of discussion of the effect of hyperparameters; and 3. some arguments are unclear and should be made explicit (although I do not question their conclusions). I detail these three points. These points are not critical for acceptance, but I believe the paper would be improved by addressing them. W1) The simulation study shows convergence of the algorithm on a randomized use case. This is a good sanity check, but I would also appreciate a simulation of the case when the noise distribution is not normal but still within the Wasserstein ball. The theoretical result ensures that Nature's adversarial strategy _is_ normal, but an illustration of the case when Nature is sub-adversarial would be welcome. In particular, is the cost of the learned policy reduced compared to when the noise is normal? W2) The role of the choice of ball radii is not discussed. While there is obviously no notion of "optimal" radius, since it simply corresponds to a degree of robustness, I would like to see the evolution in performance with increasing radii. Intuition suggests that performance should decrease as robustness increases, but a confirmation in simulation would be welcome. W3) Some arguments are unclear to me, although their conclusions seem to be valid. In particular: 1. The authors make multiple claims on convexity of sets of probability measures. Examples are lines 112, 151, 189.
I understand that these claims are made by considering Borel measures as a subset of the vector space of signed Borel measures with standard addition and scalar multiplication. I believe this superset should be mentioned explicitly at least once, as it is rather unusual and notions of convexity of a metric space exist (and differ). This is also relevant on line 151, where $\mathcal{W}$ is claimed to be infinite-dimensional despite not being a vector space. 2. With this understanding of convexity, why is the set $\mathcal{W}$ non-convex, as claimed e.g. on lines 112, 151 and 189? As far as I understand, each set $\mathcal{W}_{z}$ is convex, with $z\in\{x_0, w_t, v_t\}$. Then, $\mathcal{W}$ should be convex as the Cartesian product of these sets. The only way I understand this non-convexity is if $\mathcal{W}$ is not _equal_ to the Cartesian product, but only isometric to this Cartesian product by the mapping that computes the marginal distributions. I believe this should be mentioned explicitly, at least in a footnote, as this question distracted me from more central claims of the paper for a while. 3. I am unsure about the claim on line 131 that the controller can compute the fictitious states $\hat x_0,\dots,\hat x_t$ from the real observations $y_0, \dots, y_t$ without knowing the initial state. Indeed, this would imply in particular that the controller can reconstruct the initial state. Since the time origin is arbitrary, any state could be reconstructed. This claim appears unnecessary for the rest of the argument and should be either removed or clarified in my opinion. 4. I find the formulation of the first sentence of Proposition 4.1 extremely confusing. In particular, what comes after "then" in the first sentence reads as a logical consequence of what precedes whereas it is actually a definition of the symbols $\mathbb{P}^\star$ and $V_t^\star$. I recommend reformulating. 5. In Proposition 4.2, I recommend avoiding the term "smooth".
While it has a precise meaning in a specific branch of mathematics, it is often only used informally in control. I would prefer the more standard "infinitely differentiable with $\beta$-Lipschitz gradient". Technical Quality: 3 good Clarity: 3 good Questions for Authors: My questions are detailed in the above paragraph on weaknesses. I would appreciate if the authors could respond to these. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful comments. W1) Thank you for suggesting an investigation into the expected cost under different distributions within the ambiguity set. We note that due to the construction of our distributionally robust control problem, the expected cost of our optimal policy under any distribution (Gaussian or not) in the ambiguity set cannot exceed its worst-case expected cost, i.e., the expected cost under the worst-case distribution. However, it is still valuable to investigate how the expected cost evolves under different distributions, so we conducted a numerical experiment along the lines of your suggestion. In this experiment, we consider the same setup as in Section 5, where the time horizon is set to $T=2$ and the common radius is set to $\rho = 10$. We generate different distributions that fall within our Wasserstein ambiguity set via a contamination model; specifically, for any $\varepsilon\in[0, 1]$, we compute the $\varepsilon$-contamination distribution ${\mathbb{P}}^{\varepsilon}$ as the Gaussian distribution with mean $0$ and covariance matrix $\Sigma^\varepsilon=\varepsilon\times\Sigma^\star+(1-\varepsilon)\times\hat\Sigma$, where $\hat \Sigma,\Sigma^\star$ denote the covariance matrices of $\hat{\mathbb P}$ and $\mathbb P^\star$, respectively. By leveraging the convexity of the squared Gelbrich distance and the equivalence between the Gelbrich and Wasserstein distances in the case of Gaussian distributions, we can show that $\mathbb{P}^\varepsilon$ belongs to the Wasserstein ambiguity set for all $\varepsilon \in [0, 1]$. We generate Gaussian distributions because checking whether an arbitrary distribution belongs to the Wasserstein ambiguity set is very challenging computationally (in fact, establishing whether an arbitrary discrete distribution falls within ${\mathcal{W}}$ is tantamount to solving a semi-discrete optimal transport problem, which is known to be #P-Hard [R1, Theorem 2.2]).
* Figure 1b depicts the expected cost of the robust optimal policy $u^*$ under the distribution $\mathbb{P}^\varepsilon$ as a function of $\varepsilon$. Note that the expected cost increases with $\varepsilon$ (as the contaminated distribution approaches the worst-case distribution). Additionally, the expected cost is linear in the contamination level $\varepsilon$, which is aligned with our theoretical result that the objective function of the robust problem admits a linear reformulation in the covariance matrices, as in (9). * Figure 1c shows the difference in expected costs between a policy $\hat{u}$ that is optimal under the nominal case (i.e., that minimizes the expected cost under $\hat{\mathbb P}$) and the robustly optimal policy $u^*$, with both policies evaluated under the contamination distribution $\mathbb{P}^\varepsilon$ for different values of $\varepsilon$. Note that as long as $\varepsilon\geq 0.05$, the robustly optimal policy outperforms the nominal one under $\mathbb{P}^\varepsilon$, therefore resulting in better performance for the vast majority of contamination levels. In addition, even for $\varepsilon\leq 0.05$, the performance of the robust policy is similar to that of the nominal one. [R1] B. Taskesen, S. Shafieezadeh-Abadeh, and D. Kuhn. "Semi-discrete optimal transport: Hardness, regularization and numerical solution." Math. Prog. 199.1-2 (2023): 1033-1106. W2) You are correct in noting that the optimal worst-case expected cost is nondecreasing in each of the radii, because increasing any of the radii expands the ambiguity set and relaxes nature's maximization problem.
Following your suggestion, we designed a new experiment to quantify the benefits of the robustly optimal policy $u^*$ in comparison to the nominal optimal policy $\hat u$ (which minimizes the expected cost under the nominal distribution) when both are assessed under (i) the nominal distribution and (ii) the corresponding worst-case distributions, respectively, for different radii. We consider the same setup as in Section 5, with a time horizon $T=2$. We vary the common radius $\rho$ from 0 to 10 and estimate the difference between the expected costs of $\hat{u}$ and the expected costs of $u^*$ under (i) the nominal distribution $\hat{\mathbb P}$ and (ii) their respective worst-case distributions, which depend on the radius $\rho$. The results, which are shown in Figure 3, illustrate that the gap in expected costs under the worst-case distributions drastically increases as $\rho$ increases, which indicates that the performance of the nominal optimal policy deteriorates rapidly in comparison to the robust policy in worst-case scenarios. In contrast, the gap in expected costs under $\hat{\mathbb P}$ barely changes with the radius $\rho$, which shows that the robust policy performs well in the nominal scenario, almost matching the nominal policy (which is optimal under that scenario). W3) * Indeed, the concept of convexity for sets is meaningful only when these sets are part of a linear (vector) space. In our revised manuscript, we will specify that when referring to the convexity of $\mathcal{W}$, where all measures are supported on $\mathbb R^d$, it is implied that $\mathcal{W}$ is a subset of the linear (vector) space of all signed measures on $\mathbb R^d$. * In our revised manuscript, we will use the $\otimes$ operator instead of $\times$ and make the distinction clear. * In the noise-free system (please see lines 127-128), the fictitious initial state is set to $\hat{x}_0 = 0$.
In addition, the control input $u_t$ can be computed from observations $y_0,\ldots,y_t$ and does not necessitate knowledge of the true initial state $x_0$. Consequently, the true initial state does not appear in or affect the noise-free system in any manner, and it is not possible to construct the true initial state from this system. We hope our response clarifies your concern. * We will clarify the last two points in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for the response and clarifications. I have no further questions at this stage and will retain my score.
Summary: The paper proposes a robust method for controlling LQ systems, where process and observation noise distribution laws are unknown, but noise samples are assumed to be independent, zero mean, and to have distributions lying close to a nominal Gaussian distribution in Wasserstein-2 space. A pair of relaxed settings is used to prove a strong duality between the minimax/maximin problems, showing that the worst-case distribution is Gaussian. A numerical method is proposed for computing this distribution for long horizons. Strengths: The paper itself is well-written and gives a thorough overview of the problem. The fact that the solution for the minimax problem is given by a Gaussian distribution / linear controller, even if not surprising considering the nature of the quadratic cost and $W_2$/Gelbrich distance, is still not clear from the outset and is important enough from a theoretical point of view. Its proof seems correct as well. Weaknesses: My main concern here is that the theoretical contribution is limited to one main result, which is confined to a Gaussian-centered ambiguity set, i.e. the real distribution is assumed to be close to a Gaussian one. This calls for an additional study of other nominal distributions (as the Gelbrich distance equals $W_2$ for other families of distributions [18], and the discussion may be confined to linear filters) or mixtures, or at least for a detailed discussion about the practical implications of this choice (beyond that of the 'concluding remarks' in Sec.6), i.e. about whether this hypothesis may (or might not) be practical in real problems. However, even if limited, the contribution is still novel and important enough to warrant acceptance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some minor comments/typos: It is somewhat misleading to call $\mathcal{W}$ a 'Wasserstein ball', e.g. l.33, while it is not even convex, but I guess that's forgivable since this set is defined clearly. l.93: the dimension should be $p \times T$. l.104: it would probably be more didactic to introduce the Wasserstein distance before using it to define $\mathcal{W}$. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors addressed the limitations, but a more detailed discussion about the assumption that the nominal distribution is Gaussian is required. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Extension of Theorem 3.5:** We sincerely appreciate your insightful comment, which has led us to a significant advancement in our work. Specifically, we managed to extend the applicability of our findings to cases where the nominal distribution is an elliptical distribution with finite first- and second-order moments. For more comprehensive information regarding this extension, please refer to the overall response. * **Wasserstein ball:** Thank you very much for bringing this to our attention. You are correct in asserting that the phrase "Wasserstein ball" could lead to confusion due to the non-convex nature of the ambiguity set $\mathcal W$. The phrase was chosen for its brevity and alignment with customary usage in the distributionally robust literature. We will emphasize the non-convex nature of our ambiguity set in the revised paper. * **Typo:** Thank you very much for pointing out this typo. We will correct this in the revised manuscript. * **Definition of Wasserstein distance:** Thank you for bringing this to our attention. We will move the definition of the Wasserstein distance to l.100 in the revised manuscript. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. I am excited to see you managed to improve your results. Although I believe the new results are sound and extend the theory, it is hard to judge since they haven't been reviewed. I will retain my recommendations.
Summary: The paper proposes a distributionally robust version of the output feedback linear quadratic control problem. The goal is to control a system with known partially observed linear dynamics in the face of stochastic disturbances. The stochastic disturbances (both measurement and state disturbances) are drawn from an unknown distribution. This distribution is assumed to belong to some ambiguity set centered at a known gaussian distribution. The disturbances drawn from any distribution in the ambiguity set are assumed to be mean zero, and independent across time, with measurement disturbances independent from the process disturbances. Furthermore, the marginal distribution for the measurement or process disturbance at each time is assumed to be close to the corresponding nominal marginal distribution (as measured by the 2-Wasserstein distance). Under this setting, the paper finds the optimal output feedback controller for the worst case distribution of disturbances in the ambiguity set. It is found that the worst case distribution of disturbances belonging to the ambiguity set described above is Gaussian. Therefore, the corresponding optimal controller is a Linear-Quadratic Gaussian controller designed for this worst case distribution. It is then shown that the worst case distribution may be determined in a computationally efficient manner. In particular, it is demonstrated that the Frank-Wolfe algorithm converges to the optimal covariance parameters for the worst case distribution. Furthermore, it is shown that each step of the Frank-Wolfe algorithm may be decomposed into simple, easily parallelizable components. Strengths: The formulation of the distributionally robust output feedback LQ control problem is novel. It is related to previously published results on distributionally robust state feedback LQ control. 
As acknowledged by the authors, the output feedback setting brings an extra technical challenge due to the dependence of the optimal state estimator upon the disturbance distribution. The problem and corresponding solution are clearly presented, and easy to follow. The authors highlight the rather surprising result that the worst case disturbance distribution from the prescribed ambiguity set is Gaussian. Given the appropriate context, the results here could be a meaningful step in a unification of worst case and stochastic control. Weaknesses: The contextualization of the setting studied relative to prior work could be improved. In particular, there is a large class of control synthesis approaches for settings where the noise distributions are either not available or are not Gaussian. In particular, consider H-infinity approaches [Zhou et al., Robust and Optimal Control, 1996], mixed H-2/H-infinity approaches [(Doyle et al., Optimal Control with Mixed H-2/H-infinity Performance Objectives, 1989) (Bernstein and Haddad, LQG control with an H-infinity performance bound, 1988)], adversarially robust control [Lee et al., Performance-Robustness Tradeoffs in Adversarially Robust Control and Estimation, 2023], and nonstochastic control [Hazan and Singh, Introduction to Online Nonstochastic Control, 2023]. It would be useful to discuss several of these results and contrast the setting with the distributionally robust setting. A thorough discussion about the choice of the ambiguity set and what distributions it can model would be beneficial. In particular, considering mean zero disturbances which are independent across time is restrictive. It fails to model e.g. colored noise. Despite the fact that it was possible to propose a relatively efficient method to optimize for the worst-case covariance of the noise distribution, the required computation time appears to scale quite poorly with the problem horizon, T.
Only short horizons, up to T=20, were considered in the simulations; however, many control problems have much longer horizons. This appears to limit the practical applicability. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Is there any concrete theoretical connection between distributionally robust linear quadratic control and the other methods for incorporating robustness mentioned above (e.g. mixed H-2/H-inf)? Are there any practical examples where the proposed method substantially outperforms conventional approaches for incorporating robustness to unknown disturbance distributions? Such an example would make the setting more compelling for practical use. From the experiments, is it possible to detect any clear trends regarding the worst case covariance relative to the nominal covariance of the ambiguity set? E.g., do we see or expect to see that the worst-case covariances are larger than the central covariances in the Loewner order? Combined with the theoretical results, such an observation might justify crude approximate approaches for distributionally robust control design in practice. For example, one could take the estimate of the covariance of the central distribution in the ambiguity set, and simply scale it up by some constant. The resulting covariances could then be used to design an LQG controller. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: There is no negative societal impact that the authors must address. Limitations to the method that would be worth addressing are mentioned in the weaknesses section.
To reiterate, it would be helpful to address: - What the ambiguity set can model, and what it cannot - In which settings the proposed method would outperform conventional methods from robust control theory for handling unknown disturbance distributions - Acknowledging the computational burden of the approach relative to e.g. LQG with a known distribution Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Extension of Theorem 3.5**: We sincerely appreciate your insightful comment, which has led us to a significant advancement in our work. Specifically, we managed to extend the applicability of our findings to cases where the nominal distribution is an elliptical distribution with finite first- and second-order moments. This extension also involves relaxing the assumption of independence among noise components to uncorrelatedness. Additionally, we can also relax the assumption that the distributions in the ambiguity set have a fixed mean. For more comprehensive information regarding this extension, please refer to the overall response. * **Scalability of Algorithm 1**: Thank you very much for pointing us in this direction. Your question helped us realize that our chosen matrix $A$—which was not a convergent matrix ($i.e.$, did not satisfy $\lim_{t \to \infty} (A^t)_{ij} = 0$ for all $i, j \in [n]$)—was causing numerical instability in the linearization oracle of Algorithm 1 for $T > 50$. To address this issue, we rescaled the $A$ matrix described in the numerical section by a factor of $0.1$, ensuring it is a convergent matrix. Subsequently, we reran our experiment according to the procedures outlined in Section 5, but this time for $T=100$. The results of this modified experiment are shared in Figure 1a of the attached PDF document. * **Relation to the relevant literature**: Indeed, our work is related to the aforementioned literature, which involves minimizing a worst-case objective functional ($e.g.$, transfer function, cost, regret), where the noise perturbations are selected adversarially [Zhou et al., Hazan and Singh], or considers mixtures of nominal and worst-case perspectives [Doyle et al., Bernstein and Haddad, Lee et al.]. In contrast to these approaches, we adopt a distributionally robust approach, evaluating performance based on the worst-case expected cost in view of all noise distributions close to a nominal one.
Although we share the same motivation of enhancing control policies' robustness against uncertainty, we are currently unable to identify any rigorous equivalence or theoretical relationship between our approach and the aforementioned methods. We will investigate further and will incorporate a comprehensive discussion of this related literature in the revised paper. * **Loewner order of worst-case covariance matrices**: Assume that we fix the optimal affine controller. Then, the expected cost can be reformulated as $\mathbb E_{\mathbb{P}}[\xi^\top S\xi]$ for some matrix $S \succeq 0$, where $\xi\sim {\mathbb{P}}$ encapsulates the uncertainties inherent to the model with $\mathbb E_{\mathbb{P}}[\xi]=0$ and $\mathbb E_{\mathbb{P}}[\xi \xi^\top]=\Sigma$. Let $\hat\Sigma$ denote the nominal covariance matrix. Then, by Theorem 3.5, maximizing the expected cost over a 2-Wasserstein ambiguity set $\mathcal{W}$ of radius $\rho$ is equivalent to maximizing $Tr(S \Sigma)$ over all $\Sigma\succeq 0$ satisfying $G(\Sigma, \hat \Sigma)\leq\rho$. In the following, we will show that the worst-case covariance matrix cannot be smaller than the nominal covariance matrix $\hat\Sigma$ with respect to the Loewner order. By applying a linear coordinate transformation, we may assume without loss of generality that $\hat\Sigma=I$. Consider now any matrix $\Sigma\succeq 0$ in the Gelbrich ball, that is, with $G(\Sigma, I)^2 = Tr(I+\Sigma-2\Sigma^{1/2})\leq \rho^2$. Using the spectral decomposition $\Sigma=\sum_{i=1}^n\lambda_i v_i v_i^\top$, where $\lambda_i\geq 0$ is the $i$-th eigenvalue and $v_i$ is the corresponding eigenvector (we may assume the eigenvectors are orthonormal), the squared Gelbrich distance can be reformulated as $Tr(I+\Sigma-2\Sigma^{1/2}) = \sum_{i=1}^n (1-\lambda_i^{1/2})^2$. Note that $\hat \Sigma = I$ has all eigenvalues equal to $1$.
In the following, we will show that if $\Sigma$ has any eigenvalue strictly less than 1, it attains a value of $Tr(\Sigma S)$ no larger than that of a feasible matrix $\Sigma'$ satisfying $G(\Sigma', I) \leq \rho$ whose eigenvalues are all greater than or equal to 1. Let $\Sigma'=\sum_{i=1}^n \max\{1,\lambda_i\}\, v_i v_i^\top$, which has the same eigenvectors as $\Sigma$, but with all eigenvalues smaller than 1 raised to 1; that is, the eigenvalues of $\Sigma'$ are at least as large as those of $\Sigma$. The matrix $\Sigma'$ is feasible because $Tr(I+\Sigma'-2(\Sigma')^{1/2}) = \sum_{i=1}^n (1-\max\{1,\lambda_i\}^{1/2})^2 \leq \sum_{i=1}^n (1-\lambda_i^{1/2})^2 = Tr(I+\Sigma-2\Sigma^{1/2}) \leq \rho^2$. In addition, the expected loss with respect to $\Sigma'$ is at least as large as the expected loss with respect to $\Sigma$: formally, $Tr(S\Sigma') \geq Tr(S\Sigma)$ because $S\succeq 0$ and because $\Sigma'\succeq \Sigma$ by construction. This shows that the worst-case covariance matrix cannot be smaller than the nominal covariance matrix in the Loewner order. Similar arguments can be used to show that increasing $\rho$ increases the worst-case covariance matrix in the Loewner order. In Figures 2a and 2b shared in the uploaded PDF file, for $n=m=p=2$, we illustrate the worst-case covariance matrices for 10 different values of $\rho$ split evenly over the range $[0,1]$ satisfying $\rho_{x_0} = \rho_{w_0} = \ldots= \rho_{w_{T-1}} = \rho_{v_0}=\ldots=\rho_{v_{T-1}}=\rho$. We can see empirically from the plots that the worst-case covariance matrices indeed inflate as the radius $\rho$ of the balls increases. Interestingly, Figure 2 indicates that scaling the empirical covariance matrix $\hat X_0$ might approximate its worst-case counterpart in this experiment, but the same scaling would not provide an accurate estimate for the worst-case covariance of $v_0$. --- Rebuttal Comment 1.1: Comment: Thank you - your response mostly addresses my questions.
While I will raise my score, I do want to see better physical justification for the chosen ambiguity set. --- Reply to Comment 1.1.1: Comment: Thank you very much for raising your score. We will provide additional justification for using Wasserstein ambiguity sets in the final version of the paper.
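The eigenvalue-clipping argument in the rebuttal above is easy to check numerically. The sketch below is editor-added (numpy only), with random stand-ins for $S$ and $\Sigma$ and with the nominal covariance normalized to $\hat\Sigma = I$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
S = A @ A.T                         # stand-in cost matrix, S >= 0
B = rng.standard_normal((n, n))
Sigma = 0.2 * (B @ B.T)             # candidate covariance with some eigenvalues < 1

# Clip every eigenvalue of Sigma up to 1, keeping the eigenvectors.
lam, V = np.linalg.eigh(Sigma)
Sigma_p = (V * np.maximum(1.0, lam)) @ V.T

def g2_to_identity(M):
    """Squared Gelbrich distance to I: Tr(I + M - 2 M^{1/2}) = sum_i (1 - sqrt(lam_i))^2."""
    ev = np.clip(np.linalg.eigvalsh(M), 0.0, None)
    return float(np.sum((1.0 - np.sqrt(ev)) ** 2))

# Clipping never moves the matrix farther from the nominal covariance I ...
assert g2_to_identity(Sigma_p) <= g2_to_identity(Sigma) + 1e-9
# ... and never decreases the objective, since S >= 0 and Sigma_p >= Sigma (Loewner).
assert np.trace(S @ Sigma_p) >= np.trace(S @ Sigma) - 1e-9
```

Both assertions mirror the two inequalities in the rebuttal's proof: feasibility of the clipped matrix and monotonicity of the trace objective.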
Summary: - The paper considers a standard LQ setup with uncertainty in the distributions of system noise, observation noise and initial state. - Described as a zero-sum game between the controller and nature, they show the optimal decisions for both players. Specifically, they show that the worst-case distribution is Gaussian and the optimal control law is linear. - They provide a numerically efficient approach to solve the distributionally robust LQ control problem based on the Frank-Wolfe algorithm. - Simulation experiments are provided to show the computational efficiency of the proposed algorithm compared to MOSEK. Strengths: The results provided in the manuscript make multiple important contributions. - The optimal control law still remains linear and the worst case distribution is still Gaussian. - The proof technique is novel in that it relies on the "purified states" instead of usual dynamic programming approaches; it is interesting to see how this approach can be used in other LQG problems. - Simulation results show the computational efficiency of the Frank-Wolfe algorithm over MOSEK. Weaknesses: - There are no obvious major weaknesses in the manuscript. Some potential minor weaknesses are currently mentioned in the form of questions in the comment below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In general the adaptive linear quadratic control results are provided under assumption of sub-Gaussian system noises (Assumption A1, http://proceedings.mlr.press/v19/abbasi-yadkori11a/abbasi-yadkori11a.pdf). Is it possible to extend these results to a sub-Gaussian nominal distribution? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors provide some limitations in form of possible extensions and future work in the last section. Social Impact: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful comment, which has led us to a significant advancement in our work. Specifically, we managed to extend the applicability of our findings to cases where the nominal distribution is an elliptical distribution with finite first- and second-order moments. This extension also involves relaxing the assumption of independence among noise components to uncorrelatedness. Therefore, our results extend to the sub-Gaussian distributions that are elliptical. For more comprehensive information regarding this extension, please refer to the overall response. --- Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: I thank the authors for their responses. After reading their rebuttal, I will retain my recommendation regarding this paper.
Rebuttal 1: Rebuttal: We express our gratitude to the reviewers for their thoughtful comments and questions. In the reviews, a common concern was raised about our main result in Theorem 3.5. This theorem establishes the optimality of a linear control policy and identifies the worst-case distribution as a Gaussian distribution. However, our proof was specifically formulated under the assumption of a Gaussian nominal distribution. Thanks to the insightful feedback provided by the reviewers, we have identified the potential for extending Theorem 3.5. Specifically, we can **extend Theorem 3.5** to situations where **the nominal distribution is an elliptical distribution with finite first- and second-order moments** and show that a linear control policy is optimal while the worst-case distribution is an elliptical distribution. In addition, we can **relax the independence assumption** of the noise components **to uncorrelatedness** in this case. Finally, we can also **drop the zero mean assumption for the nominal distribution** and **relax the assumption that the distributions in the ambiguity set have a fixed mean**. The class of elliptical distributions generalizes multivariate Gaussian distributions and spherical distributions [R1] and includes symmetric distributions with light and heavy tails. Examples of elliptical distributions include the Laplace, logistic, and $t$-distribution, among others. Therefore, even though we cannot extend our results readily to any sub-Gaussian distribution as asked by Reviewer RqGy, we can extend our results to some sub-Gaussian distributions that are also elliptical. There are two key reasons that allow us to extend Theorem 3.5 to situations where the nominal distribution is elliptical with finite first- and second-order moments. Firstly, when ${\mathbb{P}}$ is an elliptical distribution, the conditional expectation $\mathbb E_{\mathbb{P}}[\xi | \xi' = H\xi ]$ is linear in $\xi'$ [R2, Theorem 4]. 
This implies that under any fixed elliptical distribution, a linear control policy is optimal. Consequently, this allows us to establish a parallel lower-bound problem to (10), wherein $\mathcal W_{\mathcal N}$ is substituted by a subset of the ambiguity set $\mathcal W$ containing only elliptical distributions sharing the same characteristic function with the nominal one. This observation permits us to focus on linear policies within the inner minimization of the lower-bound problem without loss of generality. Secondly, the distance between two elliptical distributions sharing identical characteristic functions is equivalent to the Gelbrich distance between their corresponding mean vectors and covariance matrices [R3, Theorem 2.4]. This observation further empowers us to substitute the feasible set of the outer maximization with the Gelbrich ambiguity set. The remainder of our proof seamlessly adapts to elliptical distributions, as the other arguments do not rely on the Gaussian structure of the nominal distribution. Furthermore, we can also extend our numerically efficient method to compute the worst-case distribution, now elliptical, along with the optimal linear control policy, to situations where the nominal distribution is elliptical. Notably, Proposition 4.1's applicability naturally extends to the case where the worst-case distribution ${\mathbb{P}}^\star$ becomes an elliptical distribution sharing the same characteristic function with the nominal one. By leveraging our Frank-Wolfe algorithm, we can compute the covariance matrix of this distribution. Subsequently, solving a linear quadratic control problem using ${\mathbb{P}}^\star$ allows us to determine the optimal controller. Remarkably, even in this context where the nominal distribution is elliptical, the separation principle remains applicable [R4], and the recursive equations of the Kalman filter continue to hold for elliptical distributions [R5, R6].
The departure from the fixed mean assumption in the ambiguity set can be inferred from [R7, Theorem 2.7], [R8, Theorem 2.16], and [R7, Theorem 3.5]. However, to maintain focus on the core message of the paper and prevent undue complexity in terms of calculations and notation, we did not include this extension in the submitted version of our manuscript. We thank the reviewers for recommending that we explore this meaningful direction, which strengthens the main result of our paper. [R1] R. D. Lord, The use of the Hankel transform in statistics I: General theory and examples, Biometrika, vol. 41, no. 1/2, 1954, pp. 44–55. [R2] K.-C. Chu, Estimation and decision for linear systems with elliptical random processes, IEEE Conference on Decision and Control on Adaptive Processes, 1972, pp. 647–651. [R3] M. Gelbrich, On a formula for the $L^2$-Wasserstein metric between measures on Euclidean and Hilbert spaces, Mathematische Nachrichten, vol. 147, 1990, pp. 185–203. [R4] H. S. Witsenhausen, Separation of estimation and control for discrete time systems, Proceedings of the IEEE, vol. 59, no. 11, 1971, pp. 1557–1566. [R5] A. K. Basu and J. K. Das, A Bayesian approach to Kalman filter for elliptically contoured distribution and its application in time series models, Calcutta Statistical Association Bulletin, vol. 44, no. 1–2, 1994, pp. 11–28. [R6] F. J. Girón and J. C. Rojano, Bayesian Kalman filtering with elliptically contoured errors, Biometrika, vol. 81, no. 2, 1994, pp. 390–395. [R7] V. A. Nguyen, S. Shafieezadeh-Abadeh, D. Kuhn, and P. Mohajerin Esfahani, Bridging Bayesian and minimax mean square error estimation via Wasserstein distributionally robust optimization, Mathematics of Operations Research, vol. 48, no. 1, 2023, pp. 1–37. [R8] K.-T. Fang, S. Kotz, and K. W. Ng, Symmetric Multivariate and Related Distributions, Chapman & Hall, 1990. Pdf: /pdf/7b2a15b874478c38c81859ae5b62a7844aa62b08.pdf
NeurIPS_2023_submissions_huggingface
2023
Online Adaptive Policy Selection in Time-Varying Systems: No-Regret via Contractive Perturbations
Accept (poster)
Summary: The paper studies online adaptive policy selection for nonlinear time-varying discrete-time dynamical systems. The algorithm named GAPS is proposed and is shown to achieve optimal regret, which closes the regret gap between online convex optimization (OCO) and online policy selection. En route, a general proof framework based on an exponentially decaying perturbation property is developed that connects online policy selection with OCO. Numerical experiments are provided to demonstrate GAPS's superior performance over baselines. Strengths: The paper is well-written. The problem and the results seem significant. Weaknesses: A slight weakness is that the algorithm needs $\Omega(\log T)$ memory rather than memory constant in $T$. I did not spot any major weakness. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I hope the authors can provide clarification for two questions: 1. Can the authors comment on the necessity of assuming Definition 2.6 and Definition 2.7 for achieving optimal regret? 2. Does the variational intensity or a similar quantity also appear in the analysis of Theorem 3.3 (the convex setting)? What is $V$ in that setting? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed in Section 3 and the conclusion. In addition to the ones mentioned in the paper, the algorithm also needs the knowledge of problem parameters $\rho$ and $\epsilon$ (or $\rho$ and $V$) to set the learning rate and run optimally. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments and please find the response to your comments below. > The algorithm needs $\Omega(\log T)$ memory instead of memory constant in $T$. To the best of our knowledge, there is no algorithm that can achieve sublinear regret with $O(1)$ memory length in our setting. It is possible that this is a fundamental limit of the online policy selection problem. > The necessity of assuming Definitions 2.6 and 2.7. We discuss the necessity of Definitions 2.6 and 2.7 in turn below. $\varepsilon$-time-varying contractive perturbation (Definition 2.6): This assumption guarantees that the impact of a past decision decays quickly over time. Intuitively, general online policy selection is intractable without an assumption that limits the impact of a bad decision in the past. To see this, consider a setting where the dynamics is $x_{t+1} = x_t, \forall t \geq 2$. It can satisfy all of our assumptions except contractive perturbation. Any online algorithm may suffer a linear regret in this setting because it cannot foresee the future to choose $\theta_1$ optimally before the state “freezes” at time step $2$. So we argue that some kind of assumption in the spirit of “forgetting the past” is necessary for sublinear regret in online policy selection. Searching for more general assumptions in this spirit is a future research direction of great interest. $\varepsilon$-time-varying stability (Definition 2.7): This assumption guarantees that any slowly-time-varying policy parameter sequence can stabilize the system. Without this assumption, it is possible for the trajectory of GAPS to grow to an unbounded magnitude. This will break our approximation error bounds (Theorem 3.2) and regret bounds (Theorems 3.3 and 3.6) because the gradients and parameter updates are no longer uniformly bounded. > Does the variational intensity or a similar quantity also appear in the analysis of Theorem 3.3? No. 
Intuitively, a quantity like variational intensity appears in the regret bound when we allow the comparator policy parameters to change over time. For the metric of adaptive regret in Theorem 3.3, the comparator policy parameter is fixed (see equation (2)). In contrast, for the metric of local regret in Theorem 3.6, one can view any local minimizer of $F_t$ as the comparator policy parameter, which is changing over time. Thus, we need to introduce the variational intensity in the regret bound in Theorem 3.6. > The algorithm also needs the knowledge of problem parameters $\rho$ and $\varepsilon$ (or $\rho$ and $V$) to set the learning rate and run optimally. In the literature of online learning/optimization, it is common for the optimal learning rate to depend on the system parameters (see [31] for a survey). In the case when these system parameters are unknown, our Theorems 3.3 and 3.6 also provide regret guarantees for arbitrary learning rates. For example, even when $\rho$ is unknown, one can still achieve $O(\sqrt{T})$ regret in Theorem 3.3 with the learning rate $1/\sqrt{T}$ given that $T \gg 1/\varepsilon$. --- Rebuttal Comment 1.1: Comment: I appreciate the clarifications from the authors in their rebuttal and maintain my positive rating of this paper.
Summary: This paper proposes an algorithm, GAPS, for online adaptive policy selection in time-varying systems. The algorithm is shown to achieve optimal $O(\sqrt{T})$ regret based on the contractive perturbation property of the online policy-induced dynamics. Numerical results are provided to verify the performance of GAPS. Strengths: 1. The paper is well-written and organized. The insights behind the main results are effectively presented. 2. The optimal $O(\sqrt{T})$ regret can be achieved using partial derivatives of the dynamics and costs. Weaknesses: 1. The results require a slow change of the policy parameter sequences and an $\epsilon$-time varying contractive perturbation, which cannot handle sudden changes. 2. There is an additional assumption on the initial state. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is it possible to relax the projection step in the algorithm? 2. It would be interesting to explore the introduction of switching costs in this problem. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments and please find the response to your concerns below. > The results require a slow change of the policy parameter sequences and an $\varepsilon$-time-varying contractive perturbation. We require the policy parameter sequences to change slowly for two reasons: 1. Bound the approximation error introduced by using the efficient gradient estimator $G_t$ (see equation (4)). Note that when there is an “abrupt” change in the policy parameter, our construction of $G_t$ based on the actual trajectory may not approximate $\nabla F_t(\theta_t)$, which comes from using $\theta_t$ repeatedly since time $0$. 2. Satisfy $\varepsilon$-time-varying contractive perturbation (see line 257 in Theorem 3.2). Note that when $\varepsilon = +\infty$ as in the case of Examples 2.1 and H.1, this constraint is always satisfied. And even when $\varepsilon$ is a small constant, the constraint can be easily satisfied because we require $\eta$ to be $O(1/\sqrt{T})$ to achieve the optimal regret (see Corollaries 3.4 and 3.7). We don’t view our requirement for the policy parameters to change slowly as a limitation because this sequence is fully under the control of our algorithm. GAPS ensures that the parameter $\theta_t$ does not change too fast by using a carefully chosen learning rate $\eta$ (see Theorem D.5 for the formal statement). > There is an additional assumption on the initial state. We make this assumption to ensure the critical contractive perturbation property (Definition 2.6) is always satisfied at any state visited by our algorithm. Such an assumption is necessary because our $\varepsilon$-time-varying contractive perturbation holds only locally, and a large initial state $x_0$ can make an intermediate state $x_t$ leave the region $B_n(0, R_C)$ where contractive perturbation holds. This assumption can be relaxed when contractive perturbation holds globally (e.g., Example 2.1). > Is it possible to relax the projection step in the algorithm? 
Thank you for mentioning this point! The projection step is a standard way to handle the constraint in many first-order online optimization algorithms (see [31] for a survey). When the parameter set $\Theta$ is complicated, the projection step might be expensive to compute. An interesting future direction is to design a projection-free algorithm (see Chapter 7 in [31] for a survey) in the setting of online policy selection. > About introducing the switching costs. We didn’t see a direct motivation to introduce switching costs on the decisions (which are the policy parameters $\{\theta_t\}$ in our setting), because GAPS already guarantees that the policy parameter sequence changes slowly. However, it would be interesting to consider more general stage costs. For example, we can allow $c_t$ to also depend on some previous states and actions. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their detailed response. Thanks!
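The projection step discussed above can be sketched with a minimal projected online gradient descent loop. The box-shaped parameter set, the quadratic stage cost, and the $1/\sqrt{t}$ learning rate below are all illustrative assumptions of ours, not details from the paper:

```python
import math

def project_box(theta, lo=-1.0, hi=1.0):
    # Euclidean projection onto a box-shaped parameter set Theta.
    # For a box this is just coordinate-wise clipping, which is cheap;
    # for a more complicated Theta the projection can be expensive,
    # motivating projection-free alternatives.
    return [min(max(v, lo), hi) for v in theta]

def ogd_step(theta, grad, eta):
    # One projected online gradient descent update.
    return project_box([v - eta * g for v, g in zip(theta, grad)])

# Toy run: the stage cost is f(theta) = ||theta - target||^2,
# whose minimizer lies inside the box.
target = [0.8, -0.3]
theta = [0.0, 0.0]
for t in range(1, 101):
    grad = [2 * (v - c) for v, c in zip(theta, target)]
    theta = ogd_step(theta, grad, eta=1.0 / math.sqrt(t))

# theta ends up (numerically) at the minimizer [0.8, -0.3].
```

Because the projection onto a convex set is non-expansive, the update never moves the iterate farther from a comparator inside the set, which is why this step is standard in first-order online optimization.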
Summary: This paper studies online adaptive policy selection for nonlinear time-varying discrete-time dynamical systems. At time step $t \in\mathcal{T}$, the policy picks a control action $u_t$, and the next state and the incurred cost are given by $x_{t+1}=g_t\left(x_t, u_t\right), c_t:=f_t\left(x_t, u_t\right)$, where $g_t(\cdot, \cdot)$ is a time-varying dynamics function and $f_t(\cdot, \cdot)$ is a time-varying stage cost. The goal is to minimize the total cost $\sum_{t=0}^{T-1} c_t$. The regret definition in (2) is similar to [52](https://arxiv.org/pdf/1708.00075.pdf). They require two key properties to achieve sub-linear regret in these time-varying systems. Intuitively, Definition 2.6 requires two trajectories starting from different states (in a bounded ball) to converge towards each other if they adopt the same slowly time-varying policy parameter sequence, and Definition 2.7 requires that the policy class $\pi_{0:T-1}$ can achieve stability if the policy parameters $\theta_{0:T-1}$ vary slowly. Assumption 2.1 is standard (see [52](https://arxiv.org/pdf/1708.00075.pdf)) while Assumption 2.2 ensures the starting state stays within a Euclidean ball whenever the dynamics changes. Since the complexity of computing $\nabla F_t$ exactly grows proportionally to $t$, the key difference in their approach is that their algorithm GAPS uses $G_t$ to approximate $\nabla F_t\left(\theta_t\right)$ over a batch size of $B$. This results in solving only one MPC optimization problem. Finally, in their main Theorem 3.6, they show that a regret of $R^L(T)=O\left((1-\rho)^{-\frac{9}{2}}(1+V)^{\frac{1}{2}} T^{\frac{1}{2}}\right)$ is possible without any convexity assumption on $F_t$, where $V$ is the variation intensity of the time-varying system. Finally, they empirically validate their algorithm. Strengths: 1) The paper analyzes the Online Gradient Descent (OGD) algorithm for time-varying systems. 
They propose the GAPS algorithm, which uses the approximate gradient $G_t$ of the surrogate function $F_t$. This is also computationally faster than previous methods. 2) They theoretically analyze their algorithm and show that a regret bound of $R^L(T)=O\left((1-\rho)^{-\frac{9}{2}}(1+V)^{\frac{1}{2}} T^{\frac{1}{2}}\right)$ is possible without any convexity assumption on $F_t$, where $V$ is the variation intensity of the time-varying system. This improves over the previous bounds in this setting by a $\log T$ factor. 3) They show empirically that their algorithm is competitive. Weaknesses: 1) While Assumption 2.1 is standard, I think Assumption 2.2 is very strong. The implication of the $R_C>R_S+C\left\|x_0\right\|$ in Assumption 2.2 is not clear to me. Moreover, $\mathcal{G}$ is the set of all possible dynamics/policy sequences $\{g_t, \pi_t\}_{t \in \mathcal{T}}$ the environment/policy class may provide, and you assume that if $\{g, \pi\}$ is the dynamics/policy at an intermediate time step of a sequence in $\mathcal{G}$, then the time-invariant sequence $\{g, \pi\} \times T$ is also in $\mathcal{G}$. This seems to be a very strong assumption. Where do you use it? Are there other works that also require this assumption, or is this specific to the time-varying system to provide stability? 2) The key novelty in their method lies in using $G_t$ instead of $\nabla F_t$ and substituting the ideal sequence by the actual sequence $\theta_{0: t}$. However, doesn't this approach introduce additional variance in your estimation of the gradient? How do you control for that? Similarly, when you truncate your observation to $B$ timesteps rather than the ideal sequence, there must be approximation error creeping into your estimation of $F_t$ through $G_t$. How do you account for that? Also, it would be great if you can point out where in the theory you deal with these issues. 
3) It is not clear to me how the regret improvement occurs in Theorem 3.3 and Theorem 3.6 that results in a regret of $O(\sqrt{T})$ (and improves by a factor of $\log T$). The paper has limited discussion of how this happens, and I would like the authors to discuss/clarify this in more detail. It would also be great if the authors specifically point out where in their proof they use Assumptions 2.1 and 2.2 to get the improvement. Also, why does [52](https://arxiv.org/pdf/1708.00075.pdf) fail to achieve this bound? 4) It makes sense to me that the quantity $V$ occurs in the time-varying system, which is similar to the quantity in [Besbes et al.](https://arxiv.org/pdf/1307.5449.pdf). However, it is defined on $f,g$, and policy $\pi$. Shouldn't this $V$ only depend on the environment dynamics and cost $f,g$? Can you please elaborate on this? Also, please point out how it comes up in the proof of Theorem 3.6. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weakness section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: 1) The writing can be improved. I think some of the definitions and assumptions can be moved to Section 3. Also, the authors need to discuss the results more. It is not clear to me exactly what technical novelty over [52](https://arxiv.org/pdf/1708.00075.pdf) led to the regret bound of $O(\sqrt{T})$ which does not include the $\log T$ factor. 2) The two examples in the main paper seem to be slightly contrived (and I did not see the appendix). Can the authors give more real-life examples where their approach can be used? 3) See weakness section. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. Please see our global response about the generality and complexity of our assumptions. Detailed responses are below. > The implication of the $R_C > R_S + C\|x_0\|$ in Assumption 2.2 is not clear. The goal of Assumption 2.2 is to guarantee that the critical contractive perturbation and stability properties (Definitions 2.6 and 2.7) hold on the trajectory of GAPS when its learning rate is small enough. By assuming $R_C > R_S + C\|x_0\|$, we show that any state $x_t$ on the trajectory of GAPS satisfies $\|x_t\| \leq R_S + C\|x_0\|$ (eq. (25) in Appendix D.5), so one can apply contractive perturbation from any intermediate state visited by GAPS to bound the partial derivatives of multi-step dynamics/costs (Lemma D.3, Corollary D.4). > The assumption that “If $g, \pi$ is the dynamics/policy at an intermediate time-step of a sequence in G, then the time-invariant sequence $g, \pi$ repeating $T$ times is also in G” seems very strong. To clarify, we only need this repeating sequence of $g, \pi$ to satisfy contractive perturbation and stability, not to occur in real problem instances. We only use this assumption in the proof of Theorem 3.6 (in a rather technical way, see Appendix F). There is no fair comparison with previous works because Theorem 3.6 is the first regret bound for online policy selection with nonconvex surrogate costs. Also, this assumption is without loss of generality for time-invariant dynamics and policy classes. We will discuss this in the revision. > About variance in our gradient estimation introduced by using the actual parameter sequence $\theta_{0:t}$ and bounded buffer length $B$. $\varepsilon$-time-varying contractive perturbation (Definition 2.6) is the key property that enables us to bound the bias of our gradient estimation. (Our setting is nonstochastic, so it has no variance.) 
Intuitively, when the learning rate is small, this property guarantees that the actual state/action pair is close to the state/action pair achieved by applying $\theta_t$ repeatedly since time $0$. This intuition extends to the gradient estimation, formalized in Theorems D.5 and D.6 in the appendix. Discarding the partial derivatives from more than $b$ steps ago in the expression of $\nabla F_t(\theta_t)$ introduces only a small bias because the magnitude of these partial derivatives decays exponentially with respect to $b$ (see Corollary D.4). We will discuss this intuition in more detail. > What enables the regret to improve by a factor of $\log T$ in Theorems 3.3 and 3.6? As discussed after Corollary 3.4, the regret bounds in [1, 3] are loose by a factor of $B = O(\log T)$ because they apply general OCO with memory results. This treats the impact of all inputs $\theta_{t-B+1:t}$ to the OCO stage cost equally, introducing the factor of $B$. Our problem is more “structured” because the impact of a past parameter $\theta_{t-b}$ on the current $c_t$ decays exponentially with respect to $b$. We formally state this insight in Corollary D.4, which uses Assumptions 2.1 and 2.2. > Comparison with [52]. [52] studies online nonconvex optimization, so its regret bounds are not directly comparable with our main results for online policy selection. It is also challenging to use our gradient estimator $G_t$ (4) in the algorithms proposed by [52]. This is because $G_t$, constructed using the actual trajectory experienced by GAPS, is only a good approximation of $\nabla F_t(\theta)$ for $\theta$ that is at or very close to $\theta_t$. However, line 7 in Algorithm 1 of [52] queries for gradients that may be far from $\theta_t$. > Why should $V$ depend on $f, g,$ and policy $\pi$? $V$ depends on $\pi_t$ because the online agent can only pick the policy parameters $\theta_t$. The policy classes $\pi_t$ are given. 
Even under time-invariant costs and dynamics, time-varying policy classes can change how policy parameters affect the states and actions, which leads to changes in the surrogate cost functions $F_t$. In the proof, the terms that measure the variation on policy classes are introduced in equations (48-50) in the proof of Lemma F.4 in Appendix F.3, where we bound the variation of surrogate cost functions $F_t$ by the variation of $f_t, g_t, \pi_t$. > The two examples in the main paper seem to be slightly contrived… Can the authors give more real-life examples where their approach can be used? We are not sure if the Reviewer refers to Examples 2.1 and 2.2 or our two numerical experiments. We contend that Example 2.1 can be broadly useful in settings with a complex, but partially predictable disturbance process. The manuscript’s reference [10] gives examples of EV charging and trajectory tracking. Regarding Example 2.2, it models stabilizing a nonlinear system about an operating point by linearization, a standard textbook technique in control engineering. Our experiments are meant to clearly demonstrate the properties of GAPS that our theory predicts. Experiment 1 shows the fast adaptation predicted by our adaptive (vs. static) regret bound. Experiment 2 shows that we can handle nonlinear systems. Also, the inverted pendulum is a “wrong but useful” approximation of bipedal walking [Grizzle et al., 2014] and a standard benchmark problem. For real-life applications, GAPS can be instantiated in almost any system even if one cannot verify all assumptions. The only hard requirement is smoothness, for locally bounded derivatives. By analogy, gradient descent for optimization has strong theoretical guarantees mostly for convex problems, but good empirical performance in much broader settings. We are excited to extend GAPS to more complex systems in both theory and practice. The examples/experiments in this paper are a starting point, since we focus on theory for now. 
[Grizzle et al., 2014] J.W. Grizzle, C. Chevallereau, R.W. Sinnet, A.D. Ames. "Models, feedback control, and open problems of 3D bipedal robotic walking." Automatica (2014). --- Rebuttal Comment 1.1: Title: Response to author's rebuttal Comment: I thank the authors for their response. I have some further questions to understand the paper correctly: - Thank you for clarifying Assumption 2.2. - I want to dig deeper into the idea that the repeating sequence of $g, \pi$ satisfies contractive perturbation and stability. I understand that real-life problems may not satisfy this and that it is only required for the proof. However, it is important to know whether this assumption makes the proof trivial or can be removed in future work. Can the authors clarify how this is used in the proof? - Thank you for clarifying the variance of the gradient comment. However, do you have any intuition about how you are controlling the bias? - The writing style of this paper is unsatisfactory. For example, Corollary D.4, which actually discusses how we get the improvement, is relegated to the appendix. I think these things should be discussed in the main paper in detail, as this paper is more theoretical in nature. I have not checked the proof of Corollary D.4 in detail, but it is used to prove Theorem D.5 on bounding the actual stage cost and Theorem D.6 on bounding the bias. - The buffer length $B$ plays a crucial role in gradient concentration. Can the authors discuss how it is chosen, and how choosing it too small or large affects the proof? - Thank you for your clarification on $V$, and the experiments. --- Reply to Comment 1.1.1: Title: Response to the follow-up questions Comment: Thank you for providing valuable feedback on our rebuttal. Please find our response to your follow-up questions below. > How we use the assumption about the repeating sequence of $g, \pi$ in the proof. 
To show Theorem 3.6, we first show a local regret bound for online nonconvex optimization (Thm F.1) and then use Theorem 3.2 to transfer the regret to online policy selection. In the second step, we need to convert the measure of variation on $F_t$ defined for online optimization (see Thm F.1) to the variation intensity $V$ on $g_t, f_t, \pi_t$ defined for control (see Def 3.5). To do the conversion, we adopt an approach that requires the assumption about repeating $g, \pi$, whose insight is discussed below. We realize that $F_t$ is constructed by the sequence of dynamics/policies $$\pi_0, g_0, \pi_1, g_1, \ldots, \pi_{t-1}, g_{t-1}, \pi_t, f_t,$$ while $F_{t-1}$ is constructed by another sequence of dynamics/policies that is shorter: $$\pi_0, g_0, \pi_1, g_1, \ldots, \pi_{t-2}, g_{t-2}, \pi_{t-1}, f_{t-1}.$$ Although bounding the distance between $F_t$ and $F_{t-1}$ directly may be challenging, the comparison becomes much easier if we first compare each of them to the auxiliary sequences that repeat $\pi_t, g_t$ for $t$ and $t-1$ times with the help of the assumption (see equation (49)). We can compare repeating $\pi_t, g_t$ with different lengths easily under the assumption because they converge quickly to a limit, as shown in Lemma F.3. A formal statement and the detailed proof can be found in Lemma F.4 and Appendix F.3. It is interesting to see if an alternative approach can relax the assumption of repeating $g, \pi$. > Intuition about how to control the bias. The bias in our gradient approximation is controlled by choosing (1) a sufficiently small learning rate $\eta$ and (2) a sufficiently large buffer length $B$. To understand why this works intuitively, we can think about the two sources of the approximation bias. The first source of the bias is that, while we want to evaluate the current policy parameter $\theta_t$, the past policy parameters that lead to the current state differ from $\theta_t$. 
Under learning rate $\eta$, for a past time step $\tau$, the difference between the parameters can be bounded by $\|\theta_t - \theta_\tau\| = O((t-\tau)\eta)$. Thus, under contractive perturbation, the impact of this difference on the current state is $O(\rho^{t-\tau} (t-\tau)\eta)$. The total impact from all previous time steps can be bounded by $O(\eta)$ because the exponentially decaying term $\rho^{t-\tau}$ dominates the linear term $(t-\tau)$. Therefore, we can control this bias by choosing a small learning rate $\eta$. The second source of the bias comes from the truncation using a finite buffer length $B$. Under contractive perturbation, we know $\frac{\partial f_{t\mid 0}}{\partial \theta_\tau} = O(\rho^{t-\tau})$, so the sum of the discarded partial derivative terms under truncation is $O(\rho^B)$. Therefore, we can control this bias by choosing a large buffer length $B$. > Improving the writing style of this paper. Thank you for this valuable comment. From our discussion, we realized that more important theoretical insights could be highlighted in the main body to facilitate understanding. Space is a challenge due to page limits, but we will do our best to include more of the proof outline and add pointers to the appendix. > How does the buffer length $B$ affect the proof? The buffer length $B$, either small or large, does not affect our proofs because the bounds in Theorems 3.2, 3.3, and 3.6 take all possible values of $B$ into consideration. However, a small buffer length (e.g., constant) is not sufficient to achieve a sublinear regret, and one can see what regret a specific $B$ can achieve by substituting the value into the bounds in Theorems 3.3 or 3.6. We discuss the lower bounds of $B$ to achieve the optimal regret bounds in Corollaries 3.4 and 3.7. Note that choosing a larger buffer length $B$ will not make the regret bounds worse.
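The second bias source above (truncation with a finite buffer length $B$) can be made concrete with a toy scalar example. The contractive dynamics $x_{t+1} = \rho x_t + \theta$ with cost $c_t = x_t^2$ below is our own illustrative choice, not a system from the paper:

```python
rho, theta = 0.7, 0.3   # contraction factor rho < 1 and a fixed parameter
T, B = 60, 10           # horizon and buffer length

# Roll out x_{t+1} = rho * x_t + theta from x_0 = 0.
x = [0.0]
for _ in range(T):
    x.append(rho * x[-1] + theta)

t = T
# Gradient of c_t = x_t**2 with respect to a past parameter theta_tau:
# d c_t / d theta_tau = 2 * x_t * rho**(t - 1 - tau), decaying in t - tau.
full = sum(2 * x[t] * rho ** (t - 1 - tau) for tau in range(t))
# A length-B buffer keeps only the B most recent partial derivatives.
trunc = sum(2 * x[t] * rho ** (t - 1 - tau) for tau in range(t - B, t))

# The discarded terms form a geometric tail of order rho**B.
tail_bound = 2 * abs(x[t]) * rho ** B / (1 - rho)
assert 0 <= full - trunc <= tail_bound
```

Increasing $B$ shrinks this tail geometrically, matching the statement above that a larger buffer length never makes the regret bounds worse.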
Summary: The paper studies online adaptive policy selection for nonlinear systems. The algorithm proposed by the authors, GAPS, is a gradient-based algorithm that achieves the first optimal regret bound in the convex case, and the first local regret bound in the case when convexity does not hold. The authors provided numerical experiments. Strengths: The novel approach to the online control problem is interesting and closes the $\log T$ gap between the currently established bounds for online nonstochastic control and OGD. The paper is well-organized. Weaknesses: 1. The experiments do not compare the algorithm proposed by the paper to the existing algorithms in online nonstochastic control. It would be interesting to see the comparison against benchmarks in the online control literature, including GPC in Algorithm 1, https://arxiv.org/pdf/2211.09619.pdf. 2. Although the analysis is novel, the algorithm proposed is essentially OGD with an approximated gradient. The idea of truncation is also very similar to existing gradient-based algorithms in online control like GPC. 3. To compute the gradient estimator in GAPS, do we need access to all $\theta_t$'s, requiring storing all $\theta_t$? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the authors provide more justification for Definition 2.7 ($\epsilon$-time-varying stability)? How does this assumption compare with the standard assumptions made in existing literature? 2. One of the main contributions of the paper is that it closes the $\log T$ gap between OCO-M based control algorithms and the OGD regret guarantee. However, there are also OCO-M based algorithms that achieve an $O(\sqrt{T})$ bound, such as in https://arxiv.org/pdf/2210.09903.pdf. Can the authors compare GAPS to this work? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments and please find the response to your comments below. > It would be interesting to see the comparison against benchmarks in the online control literature including GPC. The Gradient Perturbation Control (GPC) in Hazan and Singh, [2022] can be viewed as a special case of our Ideal OGD Update (Definition 3.1) when applied to the disturbance-action controller class in linear time-varying systems. We discussed the major algorithmic differences between GAPS, Ideal OGD, and the finite-memory reduction approach [1] in Section 3. For the rebuttal, we compared GAPS to the Ideal OGD as well as the gradient approximation of [1]. Plots are shown in the rebuttal supplement. The setting is MPC with confidence coefficients for a 2D double integrator, as discussed in Appendix I.3 of the manuscript. In the computation time plot, we see that the oracle’s computation time grows quadratically and we must terminate it early. GAPS and the method of [1] both use constant time per step, but GAPS’ constant is smaller. On the regret plots, the three methods are indistinguishable. The final regret of GAPS and [1] differ by less than 0.02%, while the computation time of GAPS is over 15x faster. We can include this result in the final paper. > The idea of truncation is very similar to existing gradient-based algorithms in online control like GPC. As we discussed before, GPC can be viewed as a special case of our Ideal OGD Update (Definition 3.1) when applied to the disturbance-action controller class in linear time-varying systems. While Ideal OGD does not use truncation, GAPS and the finite-memory reduction approach [1, 3, 6] both use truncation, but they approximate the gradients of the surrogate costs in different ways. As we discussed in line 239, Section 3, the design of GAPS enables it to be implemented much more efficiently with less derivative information than existing approaches. 
> Does GAPS require storing all previous parameters $\{\theta_\tau\}_{\tau \leq t}$? No. The time- and space-efficient implementation of GAPS only requires storing $B$ partial derivatives from the $B$ previous time steps (see Algorithm 2 in Appendix B). In the revision, we will add a clarification under the simplest form of GAPS (Algorithm 1) that it does not store the previous parameters $\theta_{0:t}$ in the practical implementation. > Can the authors provide more justification for Definition 2.7 ($\varepsilon$-time-varying stability)? Intuitively, $\varepsilon$-time-varying stability holds if any slowly time-varying policy parameter sequence $\theta_{0:T}$ can achieve stability from state $0$. By Lemma 2.8, one only needs to verify this property for $\varepsilon = 0$ (any fixed policy parameter) to claim that this property holds for some strictly positive $\varepsilon$ in our setting. By assuming this property holds, we study the problem of optimizing the policy parameter $\theta_t$ while the stability issue has been handled by the policy class $\pi_t$. Handling the case where not all policy parameters can stabilize the system is challenging, and we leave it as a future direction (see Section 5 for a discussion). We believe our $\varepsilon$-time-varying stability property is more general than the disturbance-action controller (DAC) class applied to linear systems (e.g., see Section 6.2.4 of Hazan and Singh, [2022]), which is commonly used in many previous works [1, 3, 6]. Specifically, DAC satisfies $\varepsilon$-time-varying stability with $\varepsilon = +\infty$ when applied to linear time-varying systems, which means arbitrary policy parameter sequences can achieve stability (see Appendix H.1). In contrast, our algorithm and theoretical results also apply to settings where $\varepsilon$-time-varying stability only holds for small $\varepsilon$. 
An example of such settings is linear feedback control in nonlinear systems (see Example 2.2 and Appendix H.3). > Can the authors compare GAPS to Kumar et al., [2022]? Thank you for pointing out this related work! This work does indeed close the $\log T$ regret gap as well, but in a more restricted setting (as discussed below). Since this paper and the preprint version of our work appeared on arXiv within a week of each other, we will revise our submission to include this paper as concurrent related work. There are several major differences between Kumar et al., [2022] and our work. For comparison, one shall view the history vector $h_t$ in Kumar et al., [2022] as our state $x_t$, the decision vector $x_t$ as our policy parameter $\theta_t$, where the policy $\pi_t$ is always an identity function (i.e., the control action $u_t = \pi_t(\theta_t) = \theta_t$). Under this mapping of the notations, Kumar et al., [2022] studies a special case of our setting where the dynamics is linear time-invariant and the policy is identity. And one can verify that our Assumptions 1 and 2 hold under their Assumptions A1-A5 when our parameter set $\Theta$ (corresponds to their $\mathcal{X}$) is a convex compact set. The main regret upper bound in Theorem 3.1 of Kumar et al., [2022] should be compared with our Theorem 3.3 and Corollary 3.4 because our surrogate cost $F_t$ is convex in their setting. Both regret bounds are in the order of $O(\sqrt{T})$, while our metric of adaptive regret is stronger than their metric of static regret. As a practical matter, Kumar et al., [2022] uses a follow-the-regularized-leader type of algorithm, which is often (much) less computationally efficient than our gradient-based algorithm. One distinct contribution of Kumar et al., [2022] is a lower bound for online convex optimization with unbounded memory. [Hazan and Singh, 2022]: Hazan, Elad, and Karan Singh. "Introduction to online nonstochastic control." arXiv preprint arXiv:2211.09619 (2022). 
[Kumar et al., 2022]: Kumar, Raunak, Sarah Dean, and Robert D. Kleinberg. "Online Convex Optimization with Unbounded Memory." arXiv preprint arXiv:2210.09903 (2022). --- Rebuttal Comment 1.1: Title: Thank you for your clarifications. Comment: Thank you for your clarifications. I have no further questions.
Rebuttal 1: Rebuttal: Several reviewers were concerned that our assumptions are restrictive. We argue that they are perhaps more complicated, but *less* restrictive than the assumptions in the most closely related work (e.g., [1, 3, 7]). In particular, we relax the common linear-time-varying dynamics assumption, and our assumptions are local, which becomes important in nonlinear settings. The motivation of our assumptions is to generalize two key properties of linear systems under typical controllers: 1) the effect of past decisions on the current state decays exponentially fast, and 2) if initialized near the origin, they remain near the origin. These are generalized by contractive perturbation (Definition 2.6) and time-varying stability (Definition 2.7) respectively. The remaining assumptions are more technical in nature, but their main purpose is to ensure that the online controller 1) never leaves the region where contractive perturbation applies, and 2) the magnitude of the state does not grow to be unbounded. These two properties are critical for our analysis of GAPS, and are again generalizations of properties found in the literature (e.g., Examples 2.1, 2.2, and H.1). Please see our responses to the individual reviews for more details about the generality/strength of our assumptions. Pdf: /pdf/1cdd1628b9d318f39495413b5c3c749f13ccd938.pdf
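For intuition, the two generalized properties described above can be sketched informally as follows (our loose notation, not the exact statements of the paper's Definitions 2.6 and 2.7):

```latex
% Informal sketch in our own notation; see Definitions 2.6 and 2.7 for the
% precise statements. Suppose the policy parameters vary slowly, i.e.,
% \|\theta_{t+1} - \theta_t\| \le \varepsilon for all t.
\begin{align*}
  &\text{Contractive perturbation: trajectories started in a ball } B_n(0, R_C)
   \text{ forget their initial conditions:}\\
  &\qquad \|x_t - \hat{x}_t\| \le C\,\rho^{\,t - t_0}\,\|x_{t_0} - \hat{x}_{t_0}\|,
   \qquad \rho \in (0, 1).\\
  &\text{Time-varying stability: the trajectory started at the origin stays small:}\\
  &\qquad x_{t_0} = 0 \;\Longrightarrow\; \|x_t\| \le R_S < R_C
   \quad \text{for all } t \ge t_0.
\end{align*}
```

Under this reading, the remaining assumptions ensure the online controller's state never leaves $B_n(0, R_C)$, so the contraction always applies.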
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a new algorithm, Gradient-based Adaptive Policy Selection (GAPS), for online adaptive policy selection with time-varying dynamics and costs. For analysis, it proposes a general analytical framework for online policy selection. Under this framework, by restricting the problem and policy class to have the contractive perturbation property it identified, GAPS is shown to approximate an ideal online gradient descent algorithm. This results in better regret bounds compared to existing results. When convexity holds, GAPS is the first to achieve optimal regret in this setting; when convexity doesn’t hold, it gives the first local regret bound for online policy selection. Empirical results on two examples in the main text also illustrate GAPS’s better adaptivity to changing environments compared to baselines. Strengths: 1. The results seem to be significant. First, when assuming convexity, the regret bound of the proposed algorithm improves over existing work and fills a gap in the literature. It is the first to achieve the optimal regret of $O(\sqrt{T})$ under the discussed setting while requiring less information about the problem. Second, when the cost function is nonconvex, it gives the first local regret bound for online policy selection. 2. The proposed contractive perturbation property and its corresponding analytical framework are general and subsume an existing class (DAC) as well as some known downstream applications. They may be helpful for future research. 3. The paper is well-written, and the presentation of empirical results is clear and convincing. Weaknesses: Overall, the paper seems strong to me, and I have a minor suggestion: if possible, it would be nice to include the result of the online gradient descent (OGD) oracle in the numerical experiments. This may help the reader gain more understanding of GAPS’s performance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
Though the proposed contractive perturbation property includes some existing works as special cases, I wonder how likely or hard it is to find a more general property. Typos 1. In Line 216, it seems that some part is missing after “satisfies”. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As the paper discussed in the Conclusion and Future Directions section, the major limitation of this work may be that the assumptions on the contractive perturbation property and stability are quite strong. It requires the properties should hold for all policy parameters. But still, this work is quite complete, and relaxing the assumptions can be interesting future directions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments and please find the response to your concerns below. > It would be nice to have the result of online gradient descent (OGD) oracle in the numerical experiments. Thank you for the suggestion. For the rebuttal, we performed an experiment comparing GAPS to the OGD oracle as well as the gradient approximation from the manuscript’s reference [1], which we discussed in line 239 of the manuscript. Plots are shown in the rebuttal supplement. The setting is MPC with confidence coefficients for a 2D double integrator, as discussed in Appendix I.3 of the manuscript. In the computation time plot, we see that the oracle’s computation time grows quadratically, so we must terminate it early. GAPS and the method of [1] both use constant time per step, but GAPS’s constant is smaller. On the regret plots, the three methods are indistinguishable. The final regret of GAPS and [1] differ by less than 0.02%, while the computation time of GAPS is over 15x faster. We can include this result in the final paper. > I wonder how likely or hard it is to find a more general property than contractive perturbation. The goal of contractive perturbation is to guarantee that the impact of a previous decision decays quickly over time as long as the policy parameter is changing sufficiently slowly. We believe a similar decay property is needed to address the challenge of the indefinite impact of a past error in policy selection. An interesting direction towards relaxing it is to consider the case where only a subset of policy parameters satisfies this property. We will add a discussion about this intuition in the revision. > The assumptions on the contractive perturbation property and stability are quite strong. It requires the properties should hold for all policy parameters. Thanks for pointing this out! 
As we discussed in Section 5, an interesting future direction is to study what guarantees can be achieved when not all of the candidate policy parameters satisfy these assumptions. It is challenging to detect and rule out the policy parameters that violate these properties when $\Theta$ is a continuous parameter set, so we leave this direction as future work. --- Rebuttal Comment 1.1: Comment: Thank you for the new experiment and your explanation. The additional results look promising and answer my question about the experiments. I don't have other questions and maintain my assessment.
Summary: The paper studies online adaptive policy selection in systems with time-varying costs and dynamics. This paper proposes an algorithm that obtains an optimal regret bound in the convex case and a local regret bound in the non-convex setting under four assumptions: (1) the dynamics are contractive starting from a ball near 0 if the policy has small variations across time. (2) the dynamics starting from 0 never leave an even smaller ball if the policy has small variations across time. (3) the dynamics start from a point that never leaves the ball in assumption (1). (4) smoothness and Lipschitzness for the dynamics, policy function, and cost functions. The algorithm does not require oracle access to the dynamics. Strengths: The problems under investigation are interesting. Weaknesses: It seems that the restriction on the dynamics is quite severe. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Could you give some examples of dynamics where the assumptions hold? I am giving a low score mainly because I am not very sure that I understand the dynamics of interest. If there are good examples which show that the assumptions are not so severe as to be vacuous, then I will change my score. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments and please find the response to your concerns below. Our assumptions generalize the assumptions in the most closely related previous work on online control – please see the global rebuttal for details. The intricate nature of the contractiveness and stability ball assumptions comes from our desire to have local, instead of global, assumptions. Note that in our DAC and MPC examples (Appendix H.1 and H.2) the contractiveness ball has radius $\infty$. The example settings discussed in our paper have each appeared previously in related work with motivating applications: - Example 1: Learning-augmented model predictive control in linear time-varying systems (Example 2.1), which generalizes the setting studied in the previous work [10] on learning-augmented control. This setting has applications in EV charging and trajectory tracking [10], and we show that it satisfies all of our assumptions in Appendix H.2. - Example 2: Linear feedback control applied to time-varying nonlinear systems (Example 2.2), which was studied in [11, 46] on nonlinear control. This example is practical because it is a standard technique in control engineering to stabilize a nonlinear system about an operating point by linearizing the system and using linear control synthesis. We show it satisfies all of our assumptions in Appendix H.3. - Example 3: Disturbance-action controllers in linear time-varying systems (Appendix H.1), which have received much attention from previous works on no-regret online control [1, 3, 6]. We show they satisfy all of our assumptions in Appendix H.1. More discussion: The literature on nonlinear control contains many examples of a parameterized family of controllers for a time-invariant system, each of which renders the closed-loop dynamics exponentially stable about an equilibrium. 
One example is the well-known “computed torque control” family of feedback linearization controllers for robotic manipulators, where the feedback gains can be parameterized. These settings satisfy our assumptions in a neighborhood about the equilibrium, via our Lemma 2.8. Even with time-invariant dynamics, the time-varying costs (such as tracking a trajectory determined online) provide an online, possibly adversarial, setting where our algorithm is useful. --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. --- Reply to Comment 1.1.1: Comment: Thank you for reading and responding to our rebuttal. Please let us know if you have any further questions.
Summary: This paper presents a novel algorithm, Gradient-based Adaptive Policy Selection (GAPS), for online policy selection in time-varying systems. The authors introduce a general analytical framework for online policy selection via online optimization. The paper also provides theoretical guarantees for the performance of the GAPS algorithm under some assumptions on the stability of the dynamical system and on the convexity of the surrogate cost function. Complementary local bounds are also given in the case of nonconvex cost. Strengths: The paper is well-organized and easy to follow, with clear explanations of the theoretical concepts and practical implementation details. The authors provide detailed proofs of their theoretical results in the appendices, as well as numerical experiments to document the performance of the GAPS algorithm in two concrete example settings. On the math side, while I did not check the details of the proof, I find the underlying perturbative idea new and interesting, and at a high level the steps of the proof check out. Weaknesses: The main weakness of the paper is, given its novelty, that it is hard for the reader to understand how restrictive the assumptions placed on the dynamical system are. This holds for both assumptions of Theorem 3.3: convexity of $F$ and $\epsilon$-time-varying contractive perturbation/stability. It is good that the authors give examples of systems where these assumptions hold, and describe in Lemma 2.8 how time-invariant stability can be translated to these conditions, but I still find it a bit hard to understand how restrictive these conditions are. For instance, I imagine that in the case of a multistable dynamical system the contractive perturbation property would not hold? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: one main question listed in the weaknesses. 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Some of the limitations have been listed in the conclusions, but perhaps a more extended discussion about the applicability of the assumptions would be informative. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. Our major assumption (Assumption 2.2) is about the joint properties of both the dynamical system and the policy class when composed together in a closed loop. Thus, it is not particularly restrictive on the dynamical system when one has the freedom to choose/design the corresponding policy class. An example is the design of the disturbance-action controller (DAC) class for linear time-varying (LTV) systems (see Appendix H.1), which is a special case of our setting and has been studied by many previous works in online control [1, 3, 6-8]. Assumption 2.2 is also satisfied by other online control settings including learning-augmented model predictive control (MPC) for LTV systems [10] and linear feedback control in nonlinear systems [11, 46] (see Examples 2.1 and 2.2). We will add a discussion about this in the revision. The literature on nonlinear control contains many examples of a parameterized family of controllers for a time-invariant system, each of which renders the closed-loop dynamics exponentially stable about an equilibrium. For example, the well-known “computed torque control” feedback linearization controllers for robotic manipulators, where the feedback gains can be parameterized. These settings satisfy our assumptions in a neighborhood about the equilibrium, via our Lemma 2.8. Even with time-invariant dynamics, the time-varying costs (such as tracking a trajectory determined online) provide an online, possibly adversarial, setting where our algorithm is useful. We also want to emphasize that our result about the local regret of GAPS (Theorem 3.6) does not require the surrogate cost $F_t$ to be convex. The convexity assumption is required or satisfied by many previous works on online control [1, 3, 6, 53], and relaxing it is one of our major contributions. 
Lastly, a dynamical system with multiple stable equilibrium points can still satisfy the contractive perturbation property (Definition 2.6) because the property is only assumed locally in the ball $B_n(0, R_C)$. By controlling the step size of GAPS (Algorithm 1), we can guarantee that the state of GAPS always stays within $B_n(0, R_C)$ (see (25) in Appendix D.5). Therefore, it does not matter if there are other stable equilibrium points out of $B_n(0, R_C)$. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for their response. My positive assessment remains unchanged.
Knowledge Distillation for High Dimensional Search Index
Accept (poster)
Summary: The paper proposes a new method called KDindex to learn lightweight indexes by distilling knowledge from high-quality ANNS. The method outperforms existing learnable quantization-based indexes and state-of-the-art ANNS methods by learning to keep the ranking order, adding a reconstruction loss to minimize the compression error, and adopting a balanced partition strategy. Strengths: KDindex achieves good performance, outperforming existing learnable quantization-based indexes and some well-performing ANNS methods. Weaknesses: 1. This paper misses some related work, such as Poeem, JPQ, and MoPQ. 2. The idea of this paper is similar with Distill-VQ. It improves the Distill-VQ by adding reconstruction loss into the learning objective and adjusts the posting list according to a balance strategy. I would like to see the comparison with Distill-VQ. 3. For search efficiency and retrieval quality experiment, it does not compare with the state-of-the-art ANN algorithms in the ANN-Benchmarks, such as NGT-qg, qsgngt, and vamana. 4. It seems balance strategy does not help too much. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the above weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: It does not include limitation analysis. Please add a Limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. [Other related works] > **Comments:** This paper misses some related work, such as Poeem, JPQ, and MoPQ. Thank you for your valuable advice. The mentioned related works, including Poeem, JPQ and MoPQ, focus on the joint learning of both retrieval embedding models and quantization models, where the aim is to integrate the separate encoding and compression processes into an end-to-end training process. Different from these works, our work focuses on better compression methods given well-trained embedding vectors; we do not address the encoding models. We will elaborate on this difference in our work. 2. [Discussion with Distill-VQ] > **Comment:** The idea of this paper is similar with Distill-VQ. It improves the Distill-VQ by adding reconstruction loss into the learning objective and adjusts the posting list according to a balance strategy. I would like to see the comparison with Distill-VQ. Indeed, Distill-VQ incorporates the distillation method to jointly learn the query encoder and the index models. Distill-VQ can relax the requirement of ground-truth labels, while KDindex is designed for unlabeled data. However, there are several differences between Distill-VQ and KDindex. The first distinction lies in the dense embedding encoders. Distill-VQ necessitates learning query encoders for documents or images to align with the candidate document embeddings. For KDindex, all the high-dimensional embeddings are fixed, so it is a pure ANNS problem with no need to encode queries, since we have no content information. The second distinction lies in the nature of the distillation signals. Distill-VQ takes the similarity information of fixed document embeddings as signals by disclosing the scores between the original fixed embeddings and the reconstructed embeddings. 
However, KDindex aims to preserve the ranking information from a teacher index model, where the teacher index model directly provides the retrieved top-k results. Thus, the sampled candidate document set plays an important role in Distill-VQ, and it only represents local similarity information when the document collection is huge. To better compare the performance of the different distillation methods, we conduct experiments for illustration. For a fair comparison, all the embeddings are fixed in Distill-VQ and the index structure follows AQ. We adopt the in-batch sampling method with a batch size of 64 in Distill-VQ and calculate the distillation loss with the KL-divergence loss function. The following table shows the results:

| Models | SIFT1M Recall@10 | SIFT1M NDCG@10 | MS MARCO Doc Recall@10 | MS MARCO Doc MRR@10 |
| ------------ | --------- | ------- | --------- | ------ |
| Distill-VQ | 35.79 | 78.43 | 17.34 | 39.93 |
| KDindex (AQ) | 37.30 | 80.01 | 18.93 | 41.69 |

As shown in the table above, KDindex still outperforms Distill-VQ, demonstrating the superiority of the distillation method in KDindex. 3. [Graph-based ANN Benchmarks] > **Comments:** For search efficiency and retrieval quality experiment, it does not compare with the state-of-the-art ANN algorithms in the ANN-Benchmarks, such as NGT-qg, qsgngt, and vamana. The results of NGT-qg, qsgngt, Milvus and vamana are added in Fig 2, which can be referred to in Figure 6 of the supplemental PDF. These algorithms are graph-based, and full-precision vectors are stored on the nodes of the graph. These graph-based methods show better performance, especially at higher latency. All of these graph-based methods can act as the teacher model to train our KDindex. We note that KDindex is designed to enhance the performance of a compressed index with the help of a better-performing teacher index, rather than to defeat those graph-based methods. Compared with other compressed methods, such as ScaNN and BLISS, KDindex performs better. 
In future work, we will try more efficient methods as the teacher index to further improve the compressed index. 4. [Balance Strategy] > **Comments:** It seems balance strategy does not help too much. Compared with the other components, the relative 2.61% and 2.76% improvements in Recall@10 and MRR@10 from the posting balance strategy are smaller, but it still accounts for a meaningful retrieval improvement. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal! For Q2, Distill-VQ can also fix the query encoder and learn only the quantization codebook. It is also optimized to preserve the ranking order of top-k results. It uses the ListNet loss instead of the KL-divergence loss. Could you provide the results of Distill-VQ with the right setting? For Q3, where can I find the results? I cannot see the results in the supplement PDF. You can distill KDindex from the graph-based ANN methods. However, how can you outperform these indices when application scenarios require high recall, since quantization is a lossy compression solution?
Summary: This paper proposes to train a quantization-based approximate nearest neighbor search index using queries and their approximate kNN items, which are obtained using a teacher kNN search model. The authors propose loss functions to incorporate the ranking of kNN items as well as additional constraints to prevent trivial solutions and improve the quality of the approximation. The proposed framework yields significant improvement over baseline quantization methods as well as previous work on learning-based quantization approaches. Overall score updated from `6: Weak Accept` to `7: Accept` after author response. Strengths: - The proposed method yields strong empirical results, and the experiment section includes an ablation study, analysis of sensitivity to hyper-parameters, and comparison with state-of-the-art methods. - Each component of the proposed work is well-motivated and is accompanied with corresponding ablation results (although some of them have been moved to the appendix; I would encourage the authors to include some more details about baselines in the main paper). Weaknesses: - The presentation of the method and results can be improved further. - For instance, a clear description of the test-time inference process would be helpful. - Also, it is not intuitively clear why using query-item interactions can result in better quantization for kNN search. The goal of the quantizer is to accurately express the original distance function. Does the proposed method improve kNN search performance at the expense of quantization performance, or is the resulting quantization also a better approximation of the original distance function? - Missing information on indexing time. - While the proposed method does improve test-time performance, it would be interesting to see how much time it takes to index a given set of items. - (Minor) Distillation is perhaps not the right term here. - The model is trained using approximate nearest neighbors as per an expressive kNN search index. 
But unlike “standard” distillation papers, the teacher model is not used to provide any soft training signal. The only training signal from query-item interactions is the ranking of items, and this ranking is not induced by the teacher model but by the underlying similarity function. - Why did the authors not use exact kNN items to train the index? I understand that obtaining exact kNN items might be significantly slower than retrieving approximate kNN items using the teacher kNN index. But finding exact kNN items is a one-time cost and may turn out to be a small fraction of the overall running/training time, as the exact kNN search may not be as expensive as the subsequent training of the kNN index. - (Minor) The proposed method requires a set of training queries to index the items. In typical settings for kNN search, the indexing algorithm has access only to the set of items. It would be interesting to use a subset of items as (pseudo-)queries in order to learn such an index. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - What is the accuracy of the teacher model? Consider adding the performance of the teacher model in Table 1. - What is the intuition behind query-item interaction being helpful over just item-based quantizers? - Eq 6 is referenced in Algo 2 but I could not find it in the paper. Did the authors mean Eq 1? - The description of the `w/o distillation` ablation says that it “trains the encoder with knowledge distilled from the teacher model (HNSW in the experiment).” Can the authors elaborate on this point in line 229: “*The improvement of KDindex is more significant when the distribution between query space and database space is different.*”? - What do the authors mean in Line 231 by “The similarity could not be obtained by original quantization methods”? Is this some fundamental limitation of quantization-based methods? - It is not clear how the proposed method avoids alternating between updating codewords and query/item-to-codeword assignments. 
- In line 14 of Algo 2, as the codewords are updated, the index assignments computed in Line 5 will become outdated, right? - Are the index assignments (assignments of documents to codewords) also updated somehow together with the codewords in line 14? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. [Test-time inference process] > **Comment**: a clear description of test-time inference process would be helpful. For the comparison among quantization methods in Table 1, asymmetric distance computation (ADC) is used. To compare with more ANNS methods in Table 2, an inverted file system is combined with the asymmetric distance computation (IVFADC). This allows rapid access to a small fraction of the database indices and has been shown to be successful for very large scale search. More details can be found in [1]. For KDindex, Table 1 is based on ADC and Figure 2 is based on IVFADC. [1] Product quantization for nearest neighbor search. TPAMI 2020. 2. [Motivation of query-item interactions] > **Comment**: why using query-item interactions can results in better quantization for kNN search. According to previous work [1], the introduction of query-item interactions increases the probability of similar items being assigned to the same centroids. Similarly, we extend the theoretical results to KDindex, which adopts quantization methods rather than the hashing-based methods in [1]. For a training query $q$ and its corresponding ground truth $p$ in the database, the expected probability of the centroid containing $p$ given the query $q$ increases by a positive margin after reassignment, i.e., $\mathbb{E}[f'(e'(\boldsymbol{p}) \mid \boldsymbol{q})] \geq \mathbb{E}[f(e(\boldsymbol{p}) \mid \boldsymbol{q})]$, where $f(\cdot)$ denotes the scoring function given by the model and $e(\cdot)$ denotes the quantization function that maps points to centroids. The increase in this probability results in an increase in the quality of the retrieved candidates during inference. The theorem implies that the centroids containing the relevant points derive a higher aggregated probability, as they will contain other ground truths with higher probability. Consequently, query-item interaction is advantageous over item-based quantizers alone. 
The increase from interaction is higher in the beginning, as the relevant documents are quantized into the same centroids. As training goes on, the increase decays to 0 and the model converges to the optimal quantization. [1] BLISS: A Billion scale Index using Iterative Re-partitioning. SIGKDD 2022 3. [Missing information on indexing time] In the indexing stage, the codebooks are already learned, and the indexing time of KDindex is the same as that of the corresponding basic quantization method. We detail the time complexity in the global response PDF. 4. [Distillation Details] > **Comments:** Distillation is perhaps not the right term here Conventional knowledge distillation fits classification tasks, where a more powerful teacher model imparts soft labels to guide the learning of the student model. However, for ranking tasks such as recommendation and retrieval, ranking distillation methods have been proposed, such as RD [1] and RankDistil [2], to cater to the unique demands of ranking-oriented tasks. The ranking knowledge is distilled to the student model to achieve more accurate ranking performance. In this paper, KDindex adopts a more direct form of ranking signal from the teacher index: the approximate nearest neighbours are retrieved by the teacher index, and the student model aims to preserve the ranking performance with the quantized embeddings. These retrieved top-k items differ across training queries and thus enhance the ability to capture similarities and conduct effective searches. [1] Ranking distillation: Learning compact ranking models with high performance for recommender system, KDD 2020 [2] RankDistil: Knowledge distillation for ranking, ICML 2021 5. [Pseudo queries for training] Large scale datasets with high dimensional vectors for ANN search usually include training query vectors for learning and test query vectors to fit the dense retrieval scenario. 
To better illustrate the generalization ability, we conduct experiments with pseudo queries, where we select 100K and 367K items as query vectors from the SIFT1M and MS MARCO Doc datasets. The results of pseudo queries on SIFT1M approach the performance of true queries, while on MS MARCO Doc they are worse. The reason lies in the significantly different distributions of the query and database vectors, as shown in Figure 5 of the supplemental PDF. Thus, distillation information from query vectors is helpful for learning codebooks.

| KDindex (AQ) | SIFT1M | | | MS MARCO Doc | | |
| ------------ | ------- | --------- | ------- | ------------ | --------- | ------ |
| | #query | Recall@10 | NDCG@10 | #query | Recall@10 | MRR@10 |
| True query | 100,000 | 37.30 | 80.01 | 367,013 | 18.93 | 41.69 |
| Pseudo query | 100,000 | 37.15 | 79.43 | 367,013 | 16.64 | 38.25 |

6. [Performance of teacher model]

For Table 1, we perform ADC over these quantization-based methods. Since HNSW is a graph-based method, a comparison with these quantization-based methods would not be fair. We provide more details about training: we obtain the approximate top-K neighbors from the teacher model, whose performance approaches the ground truth at the cost of additional search latency. The times and performance used in the experiments are as follows (more details in Appendix B.3):

| Datasets | SIFT1M | GIST1M | MS MARCO Doc | MS MARCO Passage |
| -------------- | ------ | ------ | ------------ | ---------------- |
| Recall@10 | 0.9865 | 0.9859 | 0.9292 | 0.9182 |
| NDCG@10 | 0.9999 | 0.9999 | N/A | N/A |
| MRR@10 | N/A | N/A | 0.9493 | 0.9327 |
| Query time (s) | 0.5862 | 1.3082 | 1.4805 | 4.7689 |

For more comparison of the recall-time performance, please refer to Figure 6 in the global response PDF.

---

Rebuttal Comment 1.1:
Title: Follow-up after author response
Comment: I would like to thank the authors for the clarifications provided.

1.
Indexing Time Follow-up

I saw that the authors added indexing complexity details in the global author response PDF. While it is useful to understand the asymptotic time complexity of these methods, the actual time taken for indexing might be more meaningful, as the constants involved in training might be crucial. Specifically, for each quantization method, I am curious how much time the vanilla quantization method takes and how much time the corresponding KDindex variant takes. Also, how can training time be independent of the total number of items ($N$)?

2. Re: Advantage of using query-item interactions?

Do you think using query-item interactions helps in converging to better quantization parameters, while vanilla quantization methods might converge to sub-optimal parameters? In this case, by better I mean w.r.t. pure quantization loss. It would be interesting to see the quantization loss of both the vanilla quantization and the corresponding KDindex model, to understand whether the performance gain in k-NN search metrics for KDindex comes at the cost of some drop in quantization performance. Could you also elaborate on some of the questions in my original review?

---

Reply to Comment 1.1.1:
Title: Further Response to Reviewer aAUh
Comment: Thanks for the reviewer's valuable reply!

1. Indexing Time Follow-up

(1) **Actual time taken for indexing:** The indexing complexity of KDindex is the same as that of the basic quantization method, since the structure of the codebooks and the way to quantize are the same. In the inference stage, only the codebooks (and the rotation matrix in OPQ) are used. The neighbors in the distillation or balance strategy are unrelated to inference.
The actual indexing time on SIFT1M (# items = 1M) is as follows:

| indexing time (s) | basic quantization | KDindex |
| ----------------- | ------------------ | ------- |
| PQ | 34.7339 | 34.9440 |
| OPQ | 40.2778 | 41.5282 |
| AQ | 59.8159 | 59.1787 |

(2) **Independent of $N$:** In the training process, the codebooks are not learned using all items, so $N$ does not appear in the expression. The actual number of items used in training is determined by the training queries and their neighbors: for the $M$ training queries of each batch, a fixed number $K$ (a constant) of neighbors per query is used. We have not included the constant term in the expression.

2. Pure quantization loss

Using query-item interactions does help in converging to better quantization parameters. The pure quantization loss on SIFT1M is given below. More importantly, query-item interactions provide neighbor information that is beneficial for dividing the data points (keeping the same code for similar points and using different codes for points with large differences). The neighbor relationship is the underlying cause, while the reconstruction loss is its observable effect.

$L = \|x-Q(x)\|^2=\sum_{d=1}^{D}(x_d-Q(x)_d)^2$, where $D$ denotes the dimension, $x$ denotes the item vector, and $Q(x)$ denotes the quantized item vector.

| pure quantization loss per item $L$ | basic quantization | KDindex |
| ----------------------------------- | ------------------ | ----------- |
| PQ | 23521.02141 | 23190.76753 |
| OPQ | 21728.97262 | 20653.29408 |
| AQ | 19675.23785 | 19029.35791 |

Due to the word limit, we deleted some of the original answers, and we now add them as follows.

3. [Misleading Eq. (6)]

Eq. (6) appears in the Appendix and is actually the same as Eq. (1). We are sorry for this repeated reference.

4.
[Details about experiments]

> **Comment:** Description of w/o distillation ablation says "trains the encoder with knowledge distilled from the teacher model (HNSW in the experiment)."

Thank you for your careful review, and we are sorry for the misleading descriptions. We will modify them as follows:

(1) Quantization.
(2) Initialization is warmed up by quantization methods. It updates index assignments and centroids iteratively. (Details can be found in Appendix D. We obtain the pre-trained codebooks by iterative training.)

The following three experiments build on (2) Initialization and use the differentiable training manner:

(3) w/o Distillation loss denotes training without knowledge distilled from the teacher model (HNSW in the experiment). It optimizes the centroids and trains the encoder under the constraint of the reconstruction loss and the balance strategy.
(4) w/o Balance strategy denotes methods without the Sinkhorn-Knopp balance strategy.
(5) KDindex denotes methods that differentiably train models with the reconstruction loss, distillation loss, and balance strategy.

> **Comment:** Can authors elaborate on this point in line 229? "The improvement of KDindex is more significant when the distribution between query space and database space is different".

When the distributions of the query vectors and database vectors differ, the query information is more important for centroid learning. Knowledge distillation plays an important role in distilling query information from the teacher index into the student index. Thus, the more important the query information, the better distillation works. The distributions of the four datasets are attached in the PDF.

> **Comment:** What do authors mean in Line 231 by "The similarity could not be obtained by original quantization methods"? Is this some fundamental limitation of the quantization-based methods?
Original quantization methods only learn the database distribution, which is the fundamental limitation of quantization-based methods (such as PQ, OPQ, and AQ). To utilize the information in the query vectors, existing works (ScaNN and QUIP) sample part of the query vectors. Instead, KDindex learns from the teacher index.

5. [Details about codeword update and index assignments]

The general process includes the codebook updating and index assignments within a mini-batch. For better illustration, we detail the process with respect to each query within the mini-batch and present it with for loops. In practice, the calculation is aggregated within the mini-batch, and the codebooks are updated once per batch, followed by the reassignment of the indexes.
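As a toy illustration of the per-batch "assign, then update" process described above, the sketch below uses a plain k-means-style moving-average update over PQ subspaces. All names are hypothetical and the update rule is a simplification, not our exact differentiable implementation:

```python
import numpy as np

def assign_and_update(batch, codebooks, lr=0.1):
    """One mini-batch step: assign codes, then nudge codewords.

    batch:     (M, D) item vectors in the mini-batch
    codebooks: (B, W, D//B) codewords, updated in place
    Returns codes of shape (M, B), one codeword index per subspace.
    """
    B, W, sub = codebooks.shape
    codes = np.empty((batch.shape[0], B), dtype=np.int64)
    for b in range(B):
        sub_vecs = batch[:, b * sub:(b + 1) * sub]                  # (M, sub)
        # index assignment: nearest codeword in subspace b
        d2 = ((sub_vecs[:, None, :] - codebooks[b][None]) ** 2).sum(-1)
        codes[:, b] = d2.argmin(axis=1)
        # codebook update: move each used codeword toward its batch mean
        for w in np.unique(codes[:, b]):
            mean = sub_vecs[codes[:, b] == w].mean(axis=0)
            codebooks[b, w] += lr * (mean - codebooks[b, w])
    return codes

rng = np.random.default_rng(0)
books = rng.normal(size=(4, 8, 2))   # B=4 codebooks, W=8 codewords, D=8
items = rng.normal(size=(16, 8))     # one mini-batch of M=16 items
codes = assign_and_update(items, books)
print(codes.shape)                   # (16, 4)
```

In the real training loop, the assignment would additionally feed the reconstruction, distillation, and balance losses before the codewords are updated by gradients rather than by batch means.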
Summary: This work addresses the problem of learning a lightweight index for high-dimensional similarity search. A lightweight index is desirable in many applications that cannot afford a high-precision heavyweight index due to higher storage cost or computational constraints. Unlike past work on lightweight index learning, this work assumes the availability of a heavyweight index at training time and explores the possibility of learning the lightweight index under the guidance of the heavyweight index. It proposes a knowledge distillation framework for learning a lightweight index when no label information is available. The key idea is to use the top-K nearest neighbor results of every training query to guide the learning of the lightweight index. This enables the use of a ranking-oriented loss in the training of the lightweight index. The work employs two tricks to avoid trivial and imbalanced solutions: i) a reconstruction loss that minimizes the distance between the query/candidate and its codeword, and ii) balancing of the posting lists. The proposed framework is applied to four datasets and experimental results are discussed. For every query, a cross-entropy-like loss is computed with respect to the top-K approximate nearest neighbors returned by the heavyweight teacher search model. This loss downweighs the similarity score between the codeword and the candidate by the reciprocal of the candidate's rank.

Strengths: Experimental results show that knowledge distillation from a heavyweight index improves the retrieval performance of the lightweight indexes. The ablation study shows that two of the three strategies employed make a significant difference to the retrieval performance of the lightweight indexes.

Weaknesses: As the knowledge distillation process involves retrieving the top-K results for each query using the teacher search model, and this is repeated in each iteration, the computational cost of distillation is high.
Datasets used in the experiments are small in size (< 10M documents). No discussion of, or comparison with, LSH and learning-to-hash techniques.

Technical Quality: 2 fair
Clarity: 3 good

Questions for Authors:

- "Randomly initialize the centroids (codewords) and assign indexes" --> is it possible to take the help of the teacher index to initialize in a more informed manner?
- In Figure 2, what is the unit of time on the x-axis?
- In Figure 2, KDIndex stands for which specific student model? Is it PQ or OPQ or AQ?
- In Figure 2, HNSW is time-efficient compared to KDIndex in low to moderately high recall scenarios. However, in the high recall scenario, HNSW is significantly worse than KDIndex. How can this happen, as HNSW is the teacher and KDIndex is the student?
- In Figure 2, the best recall is achieved by ScaNN, though this comes at increased search latency. However, KDIndex plateaus off quickly and increased search latency doesn't seem to help.
- The recall numbers of KDIndex in Figure 2 for both the SIFT1M and GIST1M datasets don't match the recall numbers reported in Table 1. In fact, Recall@10 in Table 1 for KDIndex is less than 35 for SIFT1M and less than 22 for GIST1M, whereas Figure 2 reports Recall@10 as high as 0.8. What explains this discrepancy?
- Table 1 should report the results for the teacher search index HNSW to give an idea of the relative performance.
- In Section 4.4, it would be interesting to know the effect of B and W on storage and search latency. On what criteria was B = 8, W = 256 chosen as the optimal hyper-parameter setting? Was this the optimal setting for all the datasets?
- In Table 2, it would be interesting to include K = 1 and 2. As K = 5 and K = 10 give very similar recall and MRR, it would be good to find out if a smaller K also does similarly well.
- In Section 4.5 and Table 2, what exactly is initialization warmed by quantization methods? Are you initializing the centroids with those for AQ/PQ/OPQ instead of random initialization?
- Of the three strategies employed by KDIndex, Balance seems to give the least improvement in retrieval performance going by Table 3 (for instance, PQ Recall@10 8.64 vs 8.62). However, Balance adds significant complexity to the training algorithm. It would be good to report and compare the time taken for training KDIndex and w/o Balance in Table 3 to get a better understanding of the tradeoff between the incremental improvement in retrieval performance and the training complexity.
- In Table 4, KDIndex refers to which of KDIndex(AQ), KDIndex(PQ), KDIndex(OPQ)?
- In Table 4, why is the compression for SIFT1M much lower than for GIST1M and the other datasets (7 vs 63)?
- Why are the similarity functions for MS MARCO different from those for SIFT1M and GIST1M?
- What is the additional time complexity of KDIndex relative to AQ, PQ and OPQ?
- Why haven't the 10M SIFT dataset and 5M SIFT dataset been used in the experiments, as done by [38]?
- References [27] and [28] are one and the same!
- Figure 2: "trage off" should be "trade-off"

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The submission doesn't discuss the limitations of its work and the potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

1. [Computational cost of distillation]

> **Comments**: As the knowledge distillation process involves retrieving the top-K results for each query using the teacher search model and this is repeated in each iteration, the computational cost of distillation is high.

Thank you for the thoughtful consideration of KDindex training. The top-K results are fetched for all training queries once, right after the teacher index (HNSW) is trained, ensuring efficient retrieval. This step occurs only once in the real implementation, with the results cached for future use, forming a precomputed label generation step. When reviewing the Algorithm, notably line 17, primarily the I/O cost matters: the cached results are read from file, which reduces the computational load. After obtaining the top-K retrieved results, the time complexity of the distillation loss calculation is $O(BWK)$ per query. Compared with the quantization time of $O(BWD)$, the distillation cost is comparable and acceptable.

2. [Large datasets]

Please refer to response 3 for Reviewer LzAv.

3. [Discussion and comparison with LSH and learning-to-hash techniques]

> **Comments**: No discussion of and comparison with LSH and learning to hash techniques.

Thank you for your valuable suggestions. In fact, we have compared KDindex with the state-of-the-art hash-learning-based method, BLISS [2], which preserves the ground-truth ranking orders to learn bucket partitions for large-scale data. The experiments in Section 4.3 demonstrate that our proposed KDindex outperforms BLISS in both retrieval performance and efficiency. As for more discussion of hashing-based methods, they can be integrated into the discussion of quantization-based methods in the introduction. LSH (locality-sensitive hashing) is a data-independent, unsupervised method, similar to the clustering-based conventional quantization methods.
LSH approaches have the property that objects close to each other have a higher probability of colliding than objects far apart, across various distance metrics. The drawbacks of these approaches are the requirement for a large number of hash tables in order to achieve good search quality, and that these methods are unmindful of the distribution of the vectors, often leading to lop-sided partitions and long query times.

Learning to hash is also an effective method to compress high-dimensional data into low-dimensional binarized codes, where two types of loss functions are usually used. One is the reconstruction loss, which minimizes the distance between the original vectors and the encoded vectors. The other is the _ranking-based loss_, e.g., triplet loss and pair-wise loss, which encourages the model to learn a hash function that preserves the pairwise similarity relationships between positive and negative points. However, the ranking loss can be utilized only when the dataset *contains interaction information*, such as user clicks. In this paper, the training data has *no ground-truth labels (positive labels)*, so the explicit ranking loss cannot be adopted for optimization. The proposed KDindex incorporates extra ranking-based supervision signals through a teacher index to capture interactions between queries and items.

There are also numerous works that use knowledge distillation to improve the performance of hashing-based codes, such as [1], where ranking information is distilled from a graph-based network to enhance the performance of the hashing codes. However, these works rely on ground-truth labels (user-item interactions) to learn the ranking orders. This is different from our work, where label information is not accessible for learning.

Furthermore, our proposed framework can be adapted to a variety of compressed indexes.
The learning-to-hash method would be instantiated as the student index in subsequent work to show the generality of the model.

[1] Binarized collaborative filtering with distilling graph convolutional networks. IJCAI 2019.
[2] BLISS: A Billion scale Index using Iterative Re-partitioning. SIGKDD 2022

4. [Initialization of the codewords]

> **Comments:** Is it possible to take the help of the teacher index to initialize in a more informed manner?

Thank you for your valuable suggestion. A straightforward way is to exploit the neighbor information in the teacher index to aggregate similar items under the same index. But for the centroids, the size and scale are remarkably different from those in HNSW. This remains an open problem. KDindex is initialized with AQ/PQ/OPQ instead of random initialization: KDindex(AQ) denotes that the quantization method is warmed up by AQ, and the same goes for PQ and OPQ. We also report that "To accelerate the training, codebooks are warmed by original quantization methods such as PQ, OPQ, and AQ." in Appendix D.

5. [More descriptions about Figure 2]

The unit is seconds, which corresponds to the results on [ann-benchmarks](https://ann-benchmarks.com/). KDindex here stands for AQ.

> **Comments:** How can this happen as HNSW is the teacher and KDIndex is the student?

HNSW's performance can be enhanced by extending the search latency. In the experiments, we take the top-k neighbors from HNSW, whose Recall@10 approaches 0.99 at the expense of search time. Thus, the teacher model provides more powerful signals for training the student index.

> **Comments:** Best recall is achieved by ScaNN though this comes at increased search latency. However, KDIndex plateaus off quickly and increased search latency doesn't seem to help.

We increased the search latency for comparison, as shown in Figure 6 in the supplemented global response PDF. According to the figure, KDindex outperforms ScaNN and BLISS.

6.
[More choices of K]

We conduct experiments on the MS MARCO Doc dataset with $K=2$. The experiments are run 5 times. Recall@10 is $17.39\pm0.25$ and MRR@10 is $40.10\pm0.21$. Compared with more neighbors, e.g., $K=5$ or $K=10$, an extremely small number performs worse.

---

Rebuttal Comment 1.1:
Title: Reviews and Rebuttal
Comment: I've read the reviews and the rebuttal. I thank the authors for their clarification of some of the questions I had asked. I would appreciate it if the authors could address some questions in my review that they seem to have missed responding to.

---

Reply to Comment 1.1.1:
Title: Response to Reviewer uFQG (1/2)
Comment: Thanks for your appreciation and reply! We provide more responses.

1. [More descriptions about Figure 2]

> **Comments:** The recall numbers of KDIndex in Figure 2 for both SIFT1M and GIST1M datasets don't match with the recall numbers reported in Table 1. What explains the discrepancy between Tab. 1 and Fig. 2?

The discrepancy between Table 1 and Figure 2 results from the different search methods. In Tab. 1, the methods are quantization-based, and we index and search by asymmetric distance computation (ADC), following typical settings for quantization-based models [1]. In Fig. 2, to compare with other uncompressed methods, the inverted file system combined with asymmetric distance computation (IVFADC) is adopted to evaluate the performance under different latencies. Specifically, the utilization of IVF is consistent with the settings in ScaNN. We will add these details to the experiment sections.

[1] Product quantization for nearest neighbor search. TPAMI 2010

> **Comments:** Table 1 should report the teacher search index HNSW results to give an idea of the relative performance.

Please refer to response 6 for Reviewer aAUh.

2. [Effect of B and W]

> **Comments:** In Section 4.4, it would be interesting to know the effect of B and W on storage and search latency.
As for storage, it takes $B \times \log_2 W$ bits to store each index, which is linear in the number of codebooks ($B$) and logarithmic in the number of codewords ($W$). As for search latency, we report the search latency of KDindex(PQ) on the SIFT1M dataset based on the ADC retrieval method as follows:

| Time (s) | $B=4$ | $B=8$ | $B=16$ | $B=32$ |
| -------- | ------- | ------- | ------- | ------- |
| $W=64$ | 23.0896 | 27.0325 | 35.2343 | 52.7827 |
| $W=128$ | 23.0791 | 27.0185 | 35.2848 | 53.0061 |
| $W=256$ | 23.1086 | 27.0060 | 35.3733 | 52.8418 |
| $W=512$ | 23.1076 | 27.0620 | 35.5124 | 52.9954 |

The database vector $d$ is represented by $Q(d)$, where $Q(\cdot)$ denotes the quantization function. The distance $d(q,d)$ is approximated by $\tilde d(q,d) = d(q, Q(d)) = d(q, \sum_{b=1}^B c_{w_b}^b)$, where $B$ is the number of codebooks and $c_{w_b}^b$ is the $w_b$-th codeword of the $b$-th codebook. For each query, the number of distance computations against the $W$ centroids is $\frac{D}{B} \times B \times W$. The number of neighbors approximates $\frac{N}{W} \times B$ under the balance constraints in the inference stage. Thus, the total computational complexity is $D \times N \times B$, which grows linearly with $B$. Hence a larger number of codebooks $B$ corresponds to more latency, while a larger number of codewords has little influence on the search latency, which is consistent with the experimental results.

> **Comments:** On what criteria was B = 8, W = 256 chosen as the optimal hyper-parameter setting? Was this the optimal setting for all the datasets?

The hyper-parameters $B = 8$, $W = 256$ were selected so that the code for each item takes 64 bits ($B \times \log_2 W$). While larger values of $B$ and $W$ tend to enhance performance across different scenarios, the resource constraints imposed by storage limitations led us to adopt a consistent configuration for all datasets.

3.
[Concerns about the balance strategy]

> **Comments:** Of the three strategies employed by KDIndex, Balance seems to give the least improvement in retrieval performance going by Table 3 (for instance, PQ Recall@10 8.64 vs 8.62). However, Balance adds significant complexity to the training algorithm. It would be good to report and compare the time taken for training KDIndex and w/o Balance in Table 3 to get a better understanding of the tradeoff between the incremental improvement in retrieval performance and the training complexity.

The time complexity of the balance strategy based on Sinkhorn-Knopp is $O(MBW)$, where $M$, $B$, and $W$ denote the batch size (the number of queries within the batch), the number of subspaces, and the number of codewords in each codebook. It is acceptable compared to the time complexity of quantization ($O(MBWD)$). In the training process, the balance strategy costs about 57.49 ms per batch of 64 samples for B=8 and W=256, while the whole batch computation, including the forward pass, backward pass, and data I/O, takes about $1.65\pm0.04$ seconds.

4. [Details about Table 4]

> **Comments:** In Table 4, KDIndex refers to which of KDIndex(AQ), KDIndex(PQ), KDIndex(OPQ)?

KDindex in Table 4 refers to KDindex(AQ).

> **Comments:** In Table 4, why is compression for SIFT1M much lower than GIST1M and other datasets (7 vs 63)?

It is crucial to note that the dimension of GIST1M vectors is 960, whereas the dimension of SIFT1M vectors is 128. As both datasets are compressed to the same number of bits, the compression ratio is inherently influenced by the dimensionality of the vectors: higher-dimensional vectors yield higher compression ratios when compressed to the same number of bits.

---

Reply to Comment 1.1.2:
Title: Response to Reviewer uFQG (2/2)
Comment:

5. [Similarity functions]

> **Comments:** Why are the similarity functions for MS MARCO different from that for SIFT1M and GIST1M?
For general cases, ANN search relies on the L2 distance, as for the SIFT1M and GIST1M datasets. In document retrieval tasks, the similarity function is often the inner product, which is known as MIPS (Maximum Inner Product Search). KDindex performs well in both settings, which also indicates the good generalization of KDindex over different similarity functions.

6. [Additional complexity]

> **Comments:** What is the additional time complexity of KDIndex relative to AQ, PQ and OPQ?

The training procedures of KDindex and the basic quantization methods (AQ, PQ, and OPQ) are different, and therefore the complexity differs in the training phase. The indexing and inference procedures of KDindex and the quantization methods are the same, so there is no additional time complexity in the indexing and inference stages. More details are described in Table 8 in the global PDF.

| Methods | KDindex (AQ) | KDindex (OPQ) | KDindex (PQ) |
| ---------------- | ---------------------- | ----------------------- | --------------------- |
| Initialization | $O(MBWD)$ | $O(MWD^2)$ | $O(MWD)$ |
| Training (Full) | $O(MBWD + MBW + MBWK)$ | $O(MWD^2 + MBW + MBWK)$ | $O(MWD + MBW + MBWK)$ |
| Training (Final) | $O(MBWD)$ | $O(MWD^2)$ | $O(MWD + MBWK)$ |
| Indexing | $O(NBWD)$ | $O(NBW((D/B)+D^2))$ | $O(NWD)$ |

7. [Experiments with larger datasets]

> **Comments:** Why haven't the 10M SIFT dataset and 5M SIFT dataset been used in the experiments as done by [38]?

We conduct more experiments on Yandex DEEP1B, which has a larger scale. Please refer to response 3 for Reviewer LzAv.

8. [Discussion of limitations and potential negative societal impact]

KDindex mainly focuses on distilling knowledge from the teacher index and striking a trade-off between storage and search efficiency. In the future, we will try more student models, such as lattice quantization and learning-to-hash methods, to improve accuracy. And we will take labels into account to progressively improve retrieval performance.
More details can be found in Appendix E.
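For concreteness, the asymmetric distance computation discussed above can be sketched as follows. This is a simplified PQ-style illustration with hypothetical names, not the exact implementation: a per-query lookup table of squared sub-distances to every codeword is built once, and each database code is then scored by $B$ table lookups.

```python
import numpy as np

def adc_search(query, codebooks, codes, topk=10):
    """PQ-style ADC: score quantized items against an uncompressed query.

    query:     (D,) query vector (kept uncompressed, hence "asymmetric")
    codebooks: (B, W, D//B) codewords
    codes:     (N, B) codeword indices for the N database items
    """
    B, W, sub = codebooks.shape
    # lookup table: squared distance from each query sub-vector to each codeword
    lut = np.empty((B, W))
    for b in range(B):
        q_sub = query[b * sub:(b + 1) * sub]
        lut[b] = ((codebooks[b] - q_sub) ** 2).sum(axis=1)
    # approximate squared distance per item: sum of B table lookups
    dists = lut[np.arange(B), codes].sum(axis=1)                    # (N,)
    return np.argsort(dists)[:topk]

rng = np.random.default_rng(1)
books = rng.normal(size=(8, 256, 4))          # B=8, W=256, D=32
codes = rng.integers(0, 256, size=(1000, 8))  # N=1000 quantized items
q = rng.normal(size=32)
top = adc_search(q, books, codes, topk=5)
print(top)
```

Building the table costs $\frac{D}{B} \times B \times W$ operations per query, after which each item is scored with only $B$ additions, which is why the latency above grows with $B$ but barely with $W$.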
Summary: The paper proposes a method to compress indexes for ANNS using knowledge distillation. The authors propose to use a graph-based index as the teacher model and use the top-k nearest results obtained from the teacher index as the supervision signals to optimize the compression function. The student model is optimized to have the same ranking orders as the teacher model. Different from previous work, they use a differentiable training process that updates the centroids and indexes simultaneously per mini-batch.

Strengths: Vector search is an important direction. This work tries to compress the embedding index, which is a critical task that can save huge storage and also increase search performance. The results seem promising. The authors conduct experiments on 4 benchmarks and show that KDindex achieves a 40x index compression ratio and a 2x CPU speedup compared to the non-compressed method (HNSW).

Weaknesses: There are several important questions that are not answered in this paper. The paper doesn't mention the training time for KDindex. In addition to storage, the indexing time is also important. Another question is about how to index new documents: the student model training depends on a trained teacher model (HNSW). If a new document is added to the index corpus, will the distilled student generalize well to the new document? Also, it would be interesting to see larger benchmarks, e.g., those with 1B entries, and whether the distillation would still work.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors: Why not report the HNSW performance in Table 1?

Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

1. [Training and indexing time of KDindex]

> **Comments**: The paper doesn't mention the training time for the KDindex. In addition to the storage, the indexing time for the index is also important.

Thank you for your suggestion; we conduct an analysis of the training and indexing time complexity. The whole training process includes initialization with one of the conventional quantization-based methods, followed by the proposed learning-based approach. The various quantizers have an impact on the complexity analysis, and thus we discuss them separately.

First, we describe the notation for clarity. Denote by $D$ the item embedding dimension, $B$ the number of subspaces, $W$ the number of centroids in each codebook, $M$ the batch size (the number of queries in each batch), and $K$ the number of neighbors. $N$ is the number of items in the database. For ScaNN, $K_v$ denotes the number of centroids in VQ (vector quantization) and $K_p$ in PQ (product quantization).

In terms of the learning process, we analyze the time complexity of the forward training pass of each batch, which encompasses quantization, the balance strategy, and the distillation loss calculation. The quantization complexity is tied to the specific quantizer employed. The balance strategy has a complexity of $O(MBW)$ and the distillation has a complexity of $O(MBWK)$. We summarize the per-batch complexity in Table 8 of the global response PDF. Considering the training process, the complexity of the balance strategy, i.e., $O(MBW)$, is considerably lower than that of quantization. The number of neighbors is typically small, e.g., $K=10$, smaller than the dimension of the embeddings ($D>100$).

[1] Sinkhorn distances: Lightspeed computation of optimal transport. NeurIPS 2013

2. [How to index new documents]

> **Comments**: How to index new documents.
> The student model training depends on a trained teacher model (HNSW). What if there is a new document added to the index corpus, will the distilled student generalize well for the new document?

Given that our document corpus consists of over 1 million data points, the addition of a single new document is unlikely to significantly alter the overall data distribution. Unless the new document is notably distinct from the existing data, its influence on the distribution is minor. As a result, the relationships between documents, both in the teacher index and in the student model, tend to remain relatively stable even with the introduction of a new document. Thus, we can assign the new document to appropriate centroids according to the well-trained quantized index, i.e., $\arg\min_{i\in\{1,2,\dots,W\}} \| x - c_i^b \|^2$, where $i$ is the code (index) in the $b$-th codebook (subspace) for a new document $x$. The new document may then share the same index as similar documents.

More concretely, only a subset of items is retrieved by the teacher index, so the learning of the student index only has access to this subset. The other items are assigned indexes according to the well-learned quantization-based index for later inference. The size of this subset is closely related to the number of neighbors $K$. We vary $K$ in the experiments, as shown in Table 2, to demonstrate the effectiveness of the well-learned KDindex with a subset of items.

3. [Larger dataset: 1B benchmark]

> **Comments**: Larger benchmark, e.g. those have 1B entries, and see if the distillation would work.

We add the 1B-scale dataset Yandex Deep1B [1], which is an image descriptor dataset consisting of the projected and normalized outputs of the last fully-connected layer of a GoogLeNet model. The embeddings are pretrained on the ImageNet classification task. Due to RAM resource constraints, we randomly sample 20M points of the dataset.
The details are as follows:

| Datasets | #Database | #Train | #Test | Dim |
| ------------- | ---------- | --------- | ------ | ---- |
| Yandex Deep1B | 20,000,000 | 7,000,000 | 10,000 | 96 |

\#Train and \#Test here represent the numbers of queries. We compare KDindex with different quantizers in the following table, where KDindex outperforms all the original quantizers.

| Model | PQ | KDindex (PQ) | OPQ | KDindex (OPQ) | AQ | KDindex (AQ) |
| --------- | ----- | ------------ | ----- | ------------- | ----- | ------------ |
| Recall@10 | 9.61 | 12.47 | 16.84 | 18.77 | 17.81 | 18.57 |
| NDCG@10 | 33.85 | 37.71 | 45.34 | 49.32 | 53.63 | 55.15 |

[1] Efficient indexing of billion-scale datasets of deep descriptors. CVPR 2016

4. [Performance of HNSW]

> **Comments:** Why not report the HNSW performance in Table 1?

For Table 1, we perform ADC over these quantization-based methods. Since HNSW is a graph-based method, a comparison with these quantization-based methods would not be fair. We provide more details about training: we obtain the approximate top-$K$ neighbors from the teacher model, whose performance approaches a Recall@10 of 0.99 at the cost of additional search latency. The teacher's time and performance used in the experiments are as follows:

| Datasets | SIFT1M | GIST1M | MS MARCO Doc | MS MARCO Passage |
| --- | --- | --- | --- | --- |
| Recall@10 | 0.9865 | 0.9859 | 0.9292 | 0.9182 |
| NDCG@10 | 0.9999 | 0.9999 | N/A | N/A |
| MRR@10 | N/A | N/A | 0.9493 | 0.9327 |
| Query time (s) | 0.5862 | 1.3082 | 1.4805 | 4.7689 |

For more comparisons of the recall-time performance, please refer to Figure 6 in the global response PDF.

--- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. It's very informative and helpful. I'd like to update my score from 4 to 5.
One minor question: `Given that our document corpus consists of over 1 million data points, the addition of a single new document is unlikely to significantly alter the overall data distribution.` >> I think one key issue is that each application will have its own corpus. Do you think the same distilled student would work for a different corpus? --- Reply to Comment 1.1.1: Title: Response to Reviewer LzAv Comment: We sincerely appreciate your time in reviewing our work and considering our rebuttal. Your thoughtful reconsideration of the score is truly encouraging, and we are pleased to learn that you found our rebuttal informative and helpful. Regarding your minor concern: when a large number of new documents are introduced to the corpus for indexing, we do need to retrain the index structure. The distribution of queries in a dataset often differs from that of documents, and adding a certain number of documents makes the discrepancy between the two distributions even larger. The distilled student models are therefore learned separately for different corpora. However, your suggestion provides us with a valuable direction for future research: it prompts us to explore the potential of a unified model designed for comprehensive indexing, where the challenge lies in achieving alignment across diverse corpora. Thank you once again for your thoughtful feedback.
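The nearest-centroid assignment for new documents described in point 2 above (the per-codebook $\arg\min$) can be sketched as a minimal product-quantization-style encoder. This is an illustrative sketch of ours, not the paper's code; the variable names, shapes, and toy data are all assumptions:

```python
import numpy as np

def assign_codes(x, codebooks):
    """Return one centroid index per subspace (codebook) for a new document x."""
    B = len(codebooks)
    sub_dim = x.shape[0] // B
    codes = []
    for b, centroids in enumerate(codebooks):  # centroids: shape (W, sub_dim)
        sub = x[b * sub_dim:(b + 1) * sub_dim]
        # arg min_i || sub - c_i^b ||^2 over the b-th codebook
        dists = np.sum((centroids - sub) ** 2, axis=1)
        codes.append(int(np.argmin(dists)))
    return codes

# Toy setup: D = 8 dimensions, B = 2 subspaces, W = 4 centroids per codebook.
rng = np.random.default_rng(0)
D, B, W = 8, 2, 4
codebooks = [rng.normal(size=(W, D // B)) for _ in range(B)]
x_new = rng.normal(size=D)
codes = assign_codes(x_new, codebooks)  # one code per subspace
```

In the KDindex setting, `codebooks` would come from the well-trained quantized index and `x_new` would be the embedding of the newly added document, so a new document can be indexed without retraining.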
Rebuttal 1: Rebuttal: Dear reviewers, Thank you for all the time and effort you spent writing reviews for KDindex. We have polished the description and conducted more experiments following the reviews. The figures (dataset distributions and time-recall curves) and table (time complexity) are attached in the PDF. Authors. Pdf: /pdf/f24f33fab781b0982dd7704414860e04d087fce8.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Optimal Guarantees for Algorithmic Reproducibility and Gradient Complexity in Convex Optimization
Accept (spotlight)
Summary: This work proposes optimization algorithms that achieve optimal convergence rates and reproducibility in the convex optimization and convex-concave minimax settings. This work settles some of the open questions from previous work and extends the results to the minimax setting. Strengths: - Clear presentation. - Novel technical contributions. - Strong theoretical results and cleanly written proofs. Weaknesses: - Overall, the presentation is very clear, the results are rigorous, and I didn't see any major weaknesses. - Some details about Inexact-EG would have been helpful since it seems like a new method developed in this work. In particular, is it a direct extension of Devolder et al. to the minimax setting? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Regarding the $\mathcal{O}(\delta^2/\epsilon^{2/5})$ bound on the reproducibility, do you have some intuitions as to why improving upon this would be difficult? Even if it's a heuristic argument, it would help readers understand the main challenge. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: - They discussed them well in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the expert reviewer for recognizing our contribution and for the very positive feedback. 1. **Details about Inexact-EG.** > Inexact-EG seems like a new method developed in this work. Is it a direct extension of Devolder et al. to the minimax setting? Extragradient (EG) was proposed by Korpelevich in 1976 and has become popular in the minimax literature. Devolder et al. introduced an inexact oracle in the context of smooth convex minimization problems that is different from the inexact gradient oracle considered in this work. We provide a detailed discussion of the relationship between the two oracles in Appendix A.2. In this work, we analyze the Extragradient method under the inexact gradient oracle. Since the method does not use the true gradients but some inexact gradients, we denote it as Inexact-EG. In this sense, Inexact-EG is neither a new method nor a direct extension of Devolder et al. to the minimax setting. 2. **Sub-optimal reproducibility of Inexact-AGD.** > Are there any intuitions why improving upon the $\mathcal{O}(\delta^2/\epsilon^{2.5})$ reproducibility of AGD would be difficult? The reproducibility of the proposed framework depends on how accurately the strongly-convex sub-problem is solved. According to the lower bounds in Devolder et al., it is only possible to guarantee convergence to a neighborhood of the optimal point when the oracle has certain inexactness, and the size of this neighborhood depends on the condition number $\kappa:=\ell/\mu$. Since the strongly-convex parameter $\mu$ of our sub-problems is on the order of $\epsilon$ to ensure convergence to the original convex problem, we introduce additional $\epsilon$-dependence in the reproducibility as well. We argue that improving upon such dependence could be difficult as a result of the lower bounds provided in Devolder et al. More discussions can be found in Appendix A.2.
Probably a different algorithm design is required to attain optimal reproducibility of AGD. --- Rebuttal Comment 1.1: Comment: I see, thank you for your response -- please reflect them in the final version. This is a great work and technically sound. Congrats, and good luck!
Summary: The authors proposed and studied optimization algorithms in both the convex and the convex-concave minimax settings, where both the criteria of convergence and reproducibility are measured. The authors provided upper bounds on these criteria for the proposed algorithms that match nearly all of the lower bounds. While I'm not an expert on this subject (and I leave the judgement of correctness to other experts in this area), I believe this work has reached satisfying results, and I would recommend accept. Strengths: 1. The upper bounds on convergence and reproducibility match nearly all of the known lower bounds. 2. The presentation of the work is quite clean and readable. Weaknesses: N/A Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: N/A Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the contribution of this work.
Summary: The authors investigate the reproducibility problem of algorithms that solve convex and convex-concave minimax optimization problems. They propose new methods with better reproducibility guarantees while maintaining the theoretical state-of-the-art convergence rates. Strengths: *I want to acknowledge that I got this paper for review **after the deadline**, so I didn't have much time to check every detail, especially the proof.* The idea of using regularization is good. It leads to new theoretical state-of-the-art convergence rates and reproducibility guarantees. This is a solid contribution to the NeurIPS community. Weaknesses: 1. The new method requires $\epsilon$ and the distance $D$ between the starting point $x_0$ and $x^*$ (e.g. Theorem 3.3). I am not sure that the previous methods in Table 1 need these parameters. Do the authors discuss these important limitations? 2. Algorithm 1 is a well-known method in the optimization community. For instance, see https://arxiv.org/pdf/1603.05642.pdf. It is called the "regularization technique" or "regularization reduction." I believe that the authors should cite the previous works that consider Algorithm 1. 3. Wrong citation [55] in Theorem 3.3. The paper [55] doesn't provide an analysis of AGD for strongly convex functions. It is better to cite any of Nesterov's books. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: . Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: . 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and valuable suggestions. We will mention the limitation that our algorithm requires knowledge of $\epsilon$ and $D$, whereas previous methods do not. We will also add and correct the citations following the reviewer's suggestion. Thanks for the pointers. --- Rebuttal Comment 1.1: Title: Respond Comment: Can the authors also comment on the following weakness? > Algorithm 1 is a well-known method in the optimization community. For instance, see https://arxiv.org/pdf/1603.05642.pdf. It is called "regularization technique" or "regularization reduction." I believe that the authors should cite the previous works that consider Algorithm 1. Do the authors discuss the connection between these methods and their method? --- Reply to Comment 1.1.1: Comment: Thanks a lot for the follow-up discussion. We apologize for not clearly addressing your question before. Adding regularization is indeed a common and useful technique in the optimization literature. The work [AH16] mentioned by the reviewer is one important use case, where regularization is added to **boost convergence analysis**, i.e., to leverage the known and good convergence properties of algorithms on smooth strongly-convex functions and transfer them to other functions, including convex and nonsmooth cases. The algorithmic frameworks 1 and 2 in our paper only consider solving one auxiliary regularized strongly-convex problem, which is referred to as the **classical regularization reduction** in [AH16]. The algorithm is *biased* and requires the knowledge of $\epsilon$ and $D$ to control the bias term introduced by the regularization. The convergence guarantee also has an additional sub-optimal logarithmic term depending on $\epsilon$. In comparison, [AH16] propose to use a double-loop algorithm, where a sequence of auxiliary regularized strongly-convex problems with decreasing regularization parameters is solved. 
The decreasing regularization ensures the algorithm is *unbiased*, and the resulting convergence guarantee requires no knowledge of $\epsilon$ and does not have an additional logarithmic term. We realize that the same idea could apply to our case as well, where it is very possible to remove the additional sub-optimal logarithmic factor in our convergence rate as well as the requirement of knowing $\epsilon$. We want to thank the reviewer again for pointing this out. We will add this discussion in the conclusion section for our limitations and potential future work. In addition to boosting convergence such as [AH16] and Catalyst [LMH15], the regularization technique has also been demonstrated to be useful in improving stability and generalization [AK22, Zha+21], enhancing sensitivity and privacy guarantees [FKT20], etc. In this paper, we provide another use case by showing an improved convergence-reproducibility trade-off. We will add another paragraph in the related work section to discuss these related examples as well. **References** * **[AH16]** Zeyuan Allen-Zhu and Elad Hazan. Optimal black-box reductions between optimization objectives. Advances in Neural Information Processing Systems, 2016. * **[LMH15]** Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A universal catalyst for first-order optimization. Advances in Neural Information Processing Systems, 2015. * **[AK22]** Amit Attia and Tomer Koren. Uniform stability for first-order empirical risk minimization. Conference on Learning Theory, 2022. * **[Zha+21]** Junyu Zhang, Mingyi Hong, Mengdi Wang, and Shuzhong Zhang. Generalization bounds for stochastic saddle point problems. International Conference on Artificial Intelligence and Statistics, 2021. * **[FKT20]** Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: optimal rates in linear time. Symposium on Theory of Computing, 2020.
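To make the classical regularization reduction discussed in this thread concrete, here is a minimal numerical sketch under assumed toy choices (a least-squares objective, plain gradient descent as the base solver, and illustrative values of $r$ and the step size); it is not the paper's exact algorithm:

```python
import numpy as np

def grad_F(x, A, b):
    """Gradient of the smooth convex objective F(x) = 0.5 * ||Ax - b||^2."""
    return A.T @ (A @ x - b)

def regularized_solve(x0, A, b, r, steps=2000, lr=0.01):
    """Gradient descent on the strongly convex surrogate
    F_r(x) = F(x) + (r/2) * ||x - x0||^2 (classical regularization reduction)."""
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * (grad_F(x, A, b) + r * (x - x0))
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
x0 = np.zeros(5)
x_r = regularized_solve(x0, A, b, r=1e-3)            # biased solution of F
x_star = np.linalg.lstsq(A, b, rcond=None)[0]        # exact minimizer of F
```

Because the surrogate is $r$-strongly convex, its minimizer is unique and depends non-expansively on $x_0$, which is the property the reproducibility guarantees build on; the price is a bias controlled by choosing $r$ small.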
Summary: The paper considers the problem of ensuring reproducibility in convex optimization. It builds on a recent framework for understanding reproducibility initiated by Ahn et al. The paper considers both minimization and minimax optimization (the latter being a new setting investigated in this work). The key results of the paper are the following: 1. Minimization problems: The paper shows an improvement in the convergence-reproducibility tradeoffs under inexact initialization and inexact gradients compared to the results of Ahn et al. For inexact initialization, the paper shows that an L2-regularized version of AGD simultaneously obtains optimal convergence and optimal reproducibility. For inexact gradients, the same algorithm obtains sub-optimal reproducibility but with optimal convergence. 2. Minimax optimization: L2-regularized versions of existing algorithms achieve optimal reproducibility and near-optimal gradient complexity. Similar to Ahn et al., SGD attains optimal reproducibility and convergence under a stochastic gradient oracle. Strengths: 1. The problem of developing reproducible optimization algorithms is well-motivated and relevant. Various empirical studies have shown that randomness in initialization, training, data augmentation, and numerical instabilities can lead to models which make significantly different predictions on test points. 2. The paper demonstrates a valuable algorithmic principle of using L2 regularization to ensure reproducibility. All the results in the paper (apart from those for a stochastic gradient oracle) are obtained by incorporating L2 regularization into prior algorithms. The results strongly suggest that there could be deeper connections between stability and reproducibility, since L2 regularization is also a similarly useful technique for ensuring algorithmic stability. Investigating this could be an interesting direction of future work. 3. 
The paper proves stronger bounds across a number of settings compared to prior work. It also demonstrates that suspected instability issues of AGD are not a barrier to obtaining reproducibility guarantees for it. The paper also broadens the study of reproducibility to minimax optimization, with a similar message. Weaknesses: 1. The writing of the paper is decent overall, but could do with some improvements. The paper does not motivate reproducibility adequately on its own: some of the comments seem to concern reproducibility in science broadly rather than the particular issues which concern reproducibility in modern ML. Since the paper has a lot of different results, some more intuition behind the specific bounds that are obtained could be useful. For example, the paper could comment on the reproducibility obtained in different settings and why these bounds arise from the algorithm. 2. Though I can understand that there is not too much space given the number of results, some intuition for the technical ideas which go into the bounds would be good as well. In particular, what are the main ideas behind extending reproducibility to minimax optimization? Does the intuition for why L2 regularization works for reproducibility in minimization mostly carry over and give the bounds for minimax optimization? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Overall, this paper makes a good contribution on a relevant problem and I don't have any major concerns or questions. Some other suggestions are included above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: These are discussed adequately. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and insightful suggestions. 1. **Writing of the paper.** > The paper does not motivate reproducibility adequately on its own. Since the paper has a lot of different results, some more intuition behind the specific bounds and technical ideas obtained could be useful. We will try our best to motivate why reproducibility needs to be studied and provide more intuition about our results. For now, in addition to the theoretical work of Ahn et al. [1], we provide a set of previous empirical works on reproducibility to justify that reproducibility has become an important topic in modern machine learning, e.g., [40] for reproducibility issues in reinforcement learning and [59] for a report of NeurIPS 2019 reproducibility program. Although there are lots of different results in the paper, we think the main motivations and insights behind them are **consistent**. * For the smooth convex minimization setting, previous work suggests that GD is optimally reproducible but converges sub-optimally, while AGD converges optimally but is not reproducible. There seems to be a fundamental trade-off between convergence speed and reproducibility of algorithms. This motivates us to study whether it is possible to attain both optimal convergence and reproducibility at the same time. * For the smooth convex-concave minimax setting, we observe a similar behavior of GDA and EG that mirrors the minimization setting, and we also ask the same question here. The reason why GD/GDA can be optimally reproducible is that the gradient descent step is **non-expansive** when the objective is smooth and convex. When introducing certain momentum or extrapolation step to accelerate convergence speed, such non-expansiveness property often disappears, which makes the optimally convergent algorithms AGD/EG not reproducible. 
The main idea behind our (near)-optimal algorithmic framework to simultaneously attain the best of the two worlds is to add regularization and leverage **uniqueness and stability of the solutions to strongly-convex problems** (or the non-expansiveness of proximal point steps), e.g., Lemma 3.2. As a result, it is possible to improve the reproducibility of optimally convergent algorithms through convergence on the strongly-convex sub-problems while selecting the regularization parameter small enough to avoid too much approximation error introduced by the regularization term. 2. **Extension to minimax optimization.** > What are the main ideas behind extending reproducibility to minimax optimization? Does the intuition for why regularization works for reproducibility in minimization mostly carry over to minimax optimization? The intuition and technical ideas to use regularization mostly carry over from minimization to minimax optimization. In particular, similarly to Lemma 3.2, the saddle point $(x\_r^\*, y\_r^\*)$ of the strongly-convex-strongly-concave (SC-SC) function $F(x,y) + (r/2)\Vert x - x\_0\Vert^2 - (r/2)\Vert y - y\_0\Vert^2$ is also unique and satisfies that $$\Vert x\_r^\* - (x\_r^\*)'\Vert^2 + \Vert y\_r^\* - (y\_r^\*)'\Vert^2 \leq \Vert x\_0 - x\_0'\Vert^2 + \Vert y\_0 - y\_0'\Vert^2.$$ As a result, by converging closely enough on the SC-SC sub-problem, this property of the optimal solution can be leveraged to obtain the optimal reproducibility guarantee. In addition, the smooth SC-SC minimax problems can be solved efficiently by a large class of algorithms, which at the same time maintains fast convergence guarantees. More interestingly, since the (inexact) proximal point method already attains the (near)-optimal convergence rate for the smooth convex-concave minimax problems, it is possible for Algorithm 3 to be optimally reproducible and convergent at the same time for minimax problems. 
The same framework cannot be used to improve the trade-offs in the minimization setting because of its sub-optimal convergence for smooth convex minimization problems. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thank you, the proposed updates sound good and I'm happy to still recommend the paper for acceptance.
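The non-expansiveness of regularized saddle points used in the rebuttal above can be checked numerically. The bilinear objective $F(x, y) = x^\top C y$ below is an illustrative choice of ours (not the paper's general setting); for it, the saddle point of $F(x,y) + (r/2)\Vert x - x_0\Vert^2 - (r/2)\Vert y - y_0\Vert^2$ solves a linear system:

```python
import numpy as np

def regularized_saddle(C, x0, y0, r):
    """Saddle point of x^T C y + (r/2)||x - x0||^2 - (r/2)||y - y0||^2.
    First-order conditions: C y + r (x - x0) = 0 and C^T x - r (y - y0) = 0."""
    n, m = C.shape
    M = np.block([[r * np.eye(n), C], [C.T, -r * np.eye(m)]])
    rhs = np.concatenate([r * x0, -r * y0])
    z = np.linalg.solve(M, rhs)
    return z[:n], z[n:]

rng = np.random.default_rng(5)
C = rng.normal(size=(3, 4))
x0, y0 = rng.normal(size=3), rng.normal(size=4)
x0p, y0p = x0 + 0.2, y0 - 0.1                 # a perturbed initialization
xs, ys = regularized_saddle(C, x0, y0, r=0.5)
xsp, ysp = regularized_saddle(C, x0p, y0p, r=0.5)
```

Since the operator $(x, y) \mapsto (\nabla_x F, -\nabla_y F)$ is monotone, the map from $(x_0, y_0)$ to the regularized saddle point is non-expansive, which is exactly the inequality quoted in the rebuttal.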
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper studies the problem of reproducibility in convex optimization. The notion of reproducibility, borrowed from prior work, measures the "stability" of the output of a procedure under noisy initialization or gradient computation. For the smooth convex setting, they design an algorithm based on running accelerated gradient descent on a regularized objective, which achieves optimal reproducibility and convergence rate. This answers an open question from prior work. The authors further extend their results to the minimax optimization setting, deriving many new results. Strengths: 1. Reproducibility has become an important topic in modern machine learning. Since (convex) optimization is the dominant algorithmic paradigm for modern ML, it is important to formulate and study reproducibility in optimization. The topic of the paper thus is important and timely. 2. The paper obtains the optimal bounds on convergence and reproducibility for the smooth convex setting, something which prior work conjectured to be unattainable. This is an important contribution. Further, they managed to also get optimal and non-trivial rates for minimax optimization, a setting which has received considerable attention lately. 3. The paper is well written. Granted that it covers a lot of algorithms and results, the writing is to the point and the flow of ideas is natural. Weaknesses: 1. About the definition of reproducibility: since this is a new field, I presume that the community has not yet agreed upon a definition. However, it seems to me that the only paper using the definition in this paper is the prior work of Ahn et al. Does adhering to this definition indeed reflect reproducibility in practice, in some sense? Even in optimization settings, there are other sources of instability not accounted for in the analysis, for instance, truncation and rounding due to finite precision. 
Does adhering to reproducibility (say, with respect to initialization) while disregarding potential numerical instability arising in other steps give something meaningful in practice? 2. Related to the above, some experiments demonstrating the usefulness of the framework would strengthen the paper. In the current version, there are no experiments. 3. The underlying idea is very simple and has been used many times in (related) prior works -- regularization makes the problem strongly convex and thus leads to (various forms of) stability. Nonetheless, the authors build on this to provide non-trivial bounds for many settings. 4. Some technical details, which I presume are in prior work, are not covered in the main text. Something that confused me is how to define "optimal reproducibility", which is referred to many times in the paper. Some text explaining it, perhaps in the preliminaries, will be helpful. 5. What if we consider an inexact initialization as well as an inexact gradient, with the same $\delta$ say -- is it possible to say something about this from the algorithms proposed? 6. The authors analyze a number of algorithms in the minimax setting; as a result, this part of the paper looks rather dense with theorem statements. Some organization of what is to come will help the reader. From my understanding, Alg3, the Inexact Proximal Point Method, strictly improves over all others? If yes, this should be conveyed early on in this section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please answer the questions posed in the weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The work is limited to a certain notion of reproducibility, used only in one prior work, which may or may not remain the definition as the area matures. Further, even though the authors identify two sources of instability, initialization and gradient computation, they are studied separately. A unified analysis could perhaps reflect more about the practical aspects. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable suggestions. 1. **Definition of reproducibility.** > Does this definition indeed reflect reproducibility in practice? There are other sources of instability not accounted for in the analysis. Is disregarding potential numerical instability arising in other steps still meaningful in practice? Moreover, some experiments would strengthen the paper. We agree with the reviewer that there is no consensus on what is the right mathematical notion of reproducibility in the community. It would be impossible to find a perfect definition that accounts for all sources of instability in practice. In our humble opinion, the current definition adopted in our paper is a meaningful one and at least partially reflects practical needs. Here, the source of irreproducibility is modeled by inexact oracles and reproducibility is defined to be the deviation in algorithms' outputs under such inexact oracles. The numerical instability in practice can be modeled as inexact updates $x\_{t+1} = x\_t - \alpha\nabla F(x\_t) + \delta$ where $\delta$ represents all errors coming from truncation and rounding due to the finite precision. This could fit into the inexact gradient oracle model that we considered as $x\_{t+1} = x\_t - \alpha(\nabla F(x\_t) + \delta')$. Hence, our analysis could also apply to such sources of numerical instability. 2. **Lack of experiments.** > Some experiments demonstrating the usefulness of the framework would strengthen the paper. In the current version, there are no experiments. We are afraid that the reviewer might have missed checking our appendix in the supplementary material, where we have already provided some toy numerical experiments in Appendix D (along with Python codes in the supplementary material) to showcase the effectiveness of regularization on improving reproducibility for both minimization and minimax optimization. We kindly note that the supplementary material can be found in the zip folder. 
3. **The underlying idea is simple.** > The underlying idea is very simple and has been used many times in related works. Nonetheless, the authors build on this to provide non-trivial bounds for many settings. We take simplicity rather as a compliment especially when the simple algorithm yields the optimal guarantees. Although the idea of regularization has been developed before, they are often used for different purposes, e.g., to boost convergence [68], to improve stability [5], or to enhance privacy guarantees [71]. We provide an important use case of the regularization technique by showing an improved convergence-reproducibility trade-off. 4 and 6. **Organization of the paper.** > Some text explaining "optimal reproducibility" perhaps in the preliminaries will be helpful; The authors analyze a number of algorithms in the minimax setting. Some organization of what is to come will help the reader; Algorithm 3 should be conveyed early on in this section. Thanks for the suggestion. For "optimal reproducibility", we mean the algorithm attains the lower bound of reproducibility established in Ahn et al. We will add more discussions about it in the preliminaries section. In the minimax setting, the aim of providing analysis of GDA/EG is to mirror what is known in the minimization setting, i.e., the optimally reproducible algorithm GD converges sub-optimally, while the optimally convergent algorithm AGD is not reproducible. This motivates us to study whether both optimal results can be achieved at the same time. Following the same idea as the minimization setting, we also propose Algorithm 2 for minimax problems. Finally, since the inexact proximal point algorithm already achieves near-optimal convergence for smooth convex-concave minimax problems (but not for minimization problems), it is possible to have Algorithm 3 that strictly improves all the others. We will add organization paragraphs and more discussions at the beginning of sections. 5. 
**Combination of different oracle settings.** > What if we consider both the inexact initialization oracle as well as the inexact gradient oracle? A unified analysis can perhaps reflect more on the practical aspects. Thanks for the interesting question. It is possible and immediate to extend the current definition to consider both errors at the same time, and the resulting bounds will simply be the **summation of the two**. Taking gradient descent as a simple example, the deviation in iterates will be $$\Vert x\_t-x\_t'\Vert \leq \Vert x\_0 - x\_0'\Vert + 2\alpha\delta t,$$ where $\Vert x\_0 - x\_0'\Vert$ is the inexactness of initialization, $\delta$ is the inexactness of gradients, and $\alpha$ is the stepsize. This expression successfully unifies both inexact oracles and recovers either one by setting the other source of error to 0. The same holds true for the proposed regularization framework. We will add a discussion about this in our revision. --- Rebuttal Comment 1.1: Title: Thanks! Comment: I thank the authors for their detailed response and for pointing to the experiments in Appendix D. I encourage the authors to include some of the above discussion, especially the discussion around the definition of optimal reproducibility from Ahn et al., as well as the combination of the two settings, in the revised version. I increase my score to 7.
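The combined deviation bound for gradient descent discussed in the rebuttal above, $\Vert x_t - x_t'\Vert \leq \Vert x_0 - x_0'\Vert + 2\alpha\delta t$, can be checked on a toy problem. The least-squares objective and the unit-norm scaling of the gradient errors are illustrative assumptions of ours:

```python
import numpy as np

def inexact_gd(x0, steps, alpha, delta, A, b, rng):
    """GD on 0.5*||Ax - b||^2 with gradient errors of norm exactly delta."""
    traj = [x0.copy()]
    x = x0.copy()
    for _ in range(steps):
        g = A.T @ (A @ x - b)                # true gradient
        e = rng.normal(size=x.shape)
        e *= delta / np.linalg.norm(e)       # inexactness ||e|| = delta
        x = x - alpha * (g + e)
        traj.append(x.copy())
    return traj

rng = np.random.default_rng(2)
A = rng.normal(size=(10, 4))
b = rng.normal(size=10)
alpha, delta, steps = 0.01, 0.05, 50
x0 = np.zeros(4)
x0p = x0 + 0.1 * rng.normal(size=4)          # inexact initialization
run1 = inexact_gd(x0, steps, alpha, delta, A, b, np.random.default_rng(3))
run2 = inexact_gd(x0p, steps, alpha, delta, A, b, np.random.default_rng(4))
```

The bound holds here because, for a step size below $2/\ell$ on a smooth convex objective, the exact gradient step is non-expansive, so each iteration can add at most $\alpha(\delta + \delta)$ to the deviation between the two runs.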
Summary: In this paper, the authors introduce a novel algorithmic framework that can achieve near-optimal convergence while preserving optimal reproducibility, for minimizing smooth convex objectives and for minimax optimization of convex-concave objectives. Here, reproducibility under inexact initialization oracles, inexact deterministic gradient oracles, and inexact stochastic gradient oracles is considered. The key idea is to optimize a regularized strongly convex objective (or strongly-convex-strongly-concave objective for minimax optimization) using a given base algorithm, and to bound the error introduced by the regularization. The authors derive convergence and reproducibility guarantees that match or improve existing guarantees for different base algorithms applied to the proposed framework. Strengths: * The paper discusses theoretically improving the trade-off between optimal convergence and optimal reproducibility of algorithms, which is an important emerging research area. * The paper is fairly easy to read, and the methods and results are presented in a clear manner. * Using the proposed algorithmic framework, the authors show, contrary to what was previously believed, that the accelerated gradient descent (AGD) method can achieve near-optimal convergence while preserving optimal reproducibility, which seems like a non-trivial and interesting result. Weaknesses: * The paper covers only convex (convex-concave) objective minimization (minimax optimization), which prevents the results of this paper from being applied to many applications where reproducibility is a challenge (e.g. reinforcement learning), as mentioned in the introduction of the paper. * The paper considers a constrained optimization setting for minimax optimization, while most applications of optimization, such as machine learning, are deployed in an unconstrained setting. This might again prevent these results from being applied to many practical applications.
* The dependence of the convergence bounds on the diameter of $\mathcal{X}$ and $\mathcal{Y}$ in Theorems 4.4 and 4.6 makes the corresponding bounds too loose when $\mathcal{X}$ and $\mathcal{Y}$ are significantly large. Minor comments * Using $x^*_{r’}$ to denote $\underset{x\in\mathcal{X}}{\operatorname{argmax}}\\{ F(x) + (r/2) \Vert x - x_0' \Vert^2\\}$ might suggest that $x^*_{r’}$ corresponds to the optimum when $r’$ is used as the regularization parameter, which can be a bit confusing. * The introduction of Assumptions 3.1 and 4.1 seems abrupt, and some discussion of the assumptions (e.g., their implications and how they compare with prior work) seems missing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * What kind of modifications to the proposed algorithmic framework or the assumptions on the objectives would allow this framework (or a similar framework) to be applied to the non-convex setting? * Intuitively, why is it necessary to consider the constrained optimization setting for minimax optimization when obtaining these results using the proposed framework? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This work only considers the convex (convex-concave) setting, and some convergence results contain logarithmic factors, which are recognized by the authors in the conclusion. In addition, this work considers a constrained minimax optimization setting, which might limit the applicability of the corresponding results in practical applications.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the questions. Here are some clarifications. 1. **The convexity assumption on the objectives.** > The paper only covers convex objectives. What kind of modifications to the proposed algorithmic framework or the assumptions on the objectives will allow the results to be applied to the nonconvex setting? We focus on the convex case as a first step since it is the **most basic and fundamental** setting in optimization. We believe a solid understanding of the reproducibility in convex optimization will also shed insights for that of the more challenging nonconvex optimization. Note that some of the analysis and techniques used in this paper can be extended to the smooth nonconvex setting. For example, to track the difference between iterates along the trajectory of GD/GDA, we would obtain $\Vert z\_{t+1} - z\_{t+1}'\Vert \leq (1+\alpha\ell)\Vert z\_t - z\_t'\Vert$ without assuming convexity. The error can still be controlled when the stepsize $\alpha$ is small enough. For the proposed framework, extension to nonconvex functions is also possible. See [All18] and [Yan+20] for the convergence analysis of regularization/proximal point-based methods for nonconvex functions in the minimization and minimax settings respectively. However, for the reproducibility analysis, the non-expansiveness property, e.g., Lemma 3.2, will not hold any more without the convexity assumption. One potential way to alleviate it is to assume the negative comonotonicity [Gor+23] of the gradients. We leave a detailed study of the nonconvex setting to future work. 2. **The constrained optimization setting.** > The paper considers a constrained optimization setting for minimax optimization. Why is it necessary? Also, the dependence of the convergence bounds on the diameter of the constrained set makes the corresponding bounds too loose. 
The assumption that the domains $\mathcal{X}$ and $\mathcal{Y}$ are convex and compact for the minimax problems is to ensure the **existence of the saddle point**, as suggested by von Neumann's minimax theorem. The dependence on the diameters in the convergence bounds of convex-concave minimax problems is actually **inevitable** according to the lower bound in [OX21]. However, the reproducibility of the proposed framework under the inexact gradient setting crucially depends on the convergence guarantees of the sub-problems to the optimal solutions, which will introduce diameter dependence. It would be interesting to see whether such dependence in reproducibility can be relaxed. 3. **Other questions.** * **Notation.** Thanks for the suggestion. We will change the notation to $(x_r^*)'$ to avoid confusion. * **Assumptions 3.1 and 4.1.** The assumptions (convexity, smoothness, and bounded initial solution/bounded domain) are **standard** in the convex optimization literature. We will add more discussion to introduce them. * **The logarithmic factors.** The logarithmic factor comes from the complexity of inexactly solving the strongly convex sub-problems and is common for inexact proximal point-type methods. It could be possible to remove such a term by directly unwrapping the proposed framework to obtain a single-loop algorithm, at the cost of a much more involved and less intuitive analysis. **References** * **[All18]** Zeyuan Allen-Zhu. How to make the gradients small stochastically: Even faster convex and nonconvex SGD. Advances in Neural Information Processing Systems, 2018. * **[Yan+20]** Junchi Yang, Siqi Zhang, Negar Kiyavash, and Niao He. A catalyst framework for minimax optimization. Advances in Neural Information Processing Systems, 2020. * **[Gor+23]** Eduard Gorbunov, Adrien Taylor, Samuel Horváth, and Gauthier Gidel. Convergence of proximal point and extragradient-based methods beyond monotonicity: the case of negative comonotonicity.
International Conference on Machine Learning, 2023. * **[OX21]** Yuyuan Ouyang and Yangyang Xu. Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems. Mathematical Programming, 2021. --- Rebuttal Comment 1.1: Title: Rebuttal acknowledgment Comment: I thank the authors for the clarifications. In light of satisfactory clarifications, I raise my score.
On the Generalization Properties of Diffusion Models
Accept (poster)
Summary: This paper studies theoretical bounds on score-matching diffusion models' generalization ability, in terms of KL divergence between the true distribution and the learned distribution generated by the model. It is assumed that the score model is parametrized as a time-dependent (time refers to $t$ in the diffusion process) 2-layer random feature model, and that the training process is the continuous-time gradient flow with respect to the score matching loss $\tilde{\mathcal{L}}$. The authors characterize the generalization bound as a decreasing function of $n$ (the sample size) and $m$ (the model capacity; the hidden layer size), and shows that the error decreases polynomially in $n$ with appropriately chosen early-stopping time $\tau$, under the setup where the target distribution has compact support. They also study the effect of distance between the modes on the theoretical bound in the toy case of fitting a 2-Gaussian-mixture. Strengths: - The paper successfully builds the setup where the generalization of diffusion models is theoretically analyzable in terms of sample size, model capacity and training time, using common machineries used in ML theory. - The analysis carefully deals with the necessary technical components for bounding the generalization error for diffusion models and the proofs seem sound (legitimate arguments assigned to each component of the analysis). - Overall, I think the paper provides a good starting point for the theoretical study on the generalization of diffusion models and identifies some interesting questions on diffusion model training such as dimension dependency or the choice of optimal early stopping time. Weaknesses: - Of course, the investigated setup is limited (dealing with 2-layer random feature models) and is far away from what is being used in practice. To some extent this is understandable, considering that the similar limitation is shared among most of the relevant works in the field. 
However, one can still argue that the setup is obsolete as there are now more 'modern' tools like neural tangent kernels or mean-field limits for performing theoretical analyses on neural networks. - I think the paper has insufficient discussion regarding the optimal solution $\bar{\boldsymbol{\theta}}^\ast$. It does not discuss, e.g., the universal approximation property of the random feature parametrization with respect to the score-matching loss $\tilde{\mathcal{L}}$, which is a sort of modified $L^2$ norm. So it is confusing whether the authors are assuming that the true score function is representable as a continuous random feature model or not (i.e., whether $\bar{\tilde{\mathcal{L}}}(\bar{\boldsymbol{\theta}}^\ast)=0$ or not); the term $\bar{\tilde{\mathcal{L}}}(\bar{\boldsymbol{\theta}}^\ast)$ is present in Theorems 1, 2 while absent in Corollaries 1, 2. It is also confusing whether $\mathcal{H}$ in Theorem 1 coincides with the RKHS induced by $k_{\rho_{0}}$. - The value of Section 3.2.2 is unclear to me. It deals only with the toy case of a 1D Gaussian mixture with two modes. Additionally, it provides an upper bound that is adversely affected by $\mu$, but there is no guarantee of tightness, so it does not sufficiently "explain" the modes shift effect (to do so, one might rather need some sort of lower bound). - Experiments are minimal and loosely designed. They are performed over a single toy case (1D 2-Gaussian-mixture), and it is not reported how many repetitions were tried and whether the results were consistent. Also, the setup is inconsistent with the theory, which deals with the continuous-time gradient flow. Why use the Adam optimizer for the experiments? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the role of the time embedding function $\mathbf{e}$? Does it have any meaningful effect on the analysis?
- In Lemma 3, the authors use the convexity of the diffusion loss $\bar{\tilde{\mathcal{L}}}$ with respect to the trainable parameter $\bar{\mathbf{a}}$, and maybe this is why they consider the random feature model. But won't similar analyses be possible in more general setups using e.g., neural tangent kernels? - Please clarify the second point from the "Weaknesses" section. - In Theorem 2, how does variance of the Gaussian distributions comprising the mixture affect the bound? If the variance is not important, does simply scaling down all the data resolve the adverse modes shift effect claimed in Section 3.2.2? - In the equation right before Line 511 in the appendix, is the minus sign missing in front of $\frac{d}{d\tau} \tilde{\mathcal{L}}(\boldsymbol{\theta}(s))$? I think the subsequent lines using the integral Cauchy-Schwarz inequality also share the sign issue. - In the experiments, do the authors observe that their theory provide quantitative predictions on the optimal early-stopping time (the "transition point" in the training dynamics), or is it just a post-hoc observation? Were the results consistent over multiple runs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer uaj6 Thank you for your comprehensive review and valuable feedback to help improve the paper. We detail our response below, and please kindly let us know whether our response addresses your concerns. --- **Weakness 1**: There are two points regarding the setup concern: - First, we choose the random feature model (RFM) as the score network because it is simple to analyze, and still possesses universal approximation capability ([1]). We want to emphasize the insights gained from diffusion models rather than the neural network parametrization, which is relatively more developed in both theory and practice. - In addition, while the mathematical tools such as NTKs and mean fields may seem modern, they are more complex and require the infinite-width regime to achieve meaningful results, which is less practical in real-world applications. Nevertheless, employing these modern tools in the future work is valuable at least for theoretical completeness. --- **Weakness 2**: $\||\cdot\||_\mathcal{H}:=\||\cdot\||\_{\mathcal{H}\_{k\_{\rho\_0}}}$ (refer to **A3** in **Response to Reviewer ab6x** for more details). For the universal approximation, we discuss in two points: - First, the approximation is a separate problem that can be analyzed independently of training and generalization. While it is beyond the scope of current work, we have included the approximation error in the final estimates. When approximation by RFMs fails, the generalization error is supposed to be significant. - In addition, the RFM can approximate Lipschitz continuous functions on a compact domain (Theorem 6 in [1]). 
Notice that the forward diffusion process defines a random path $(\pmb{x}(t),t)\_{t \in [0,T]}$ contained in a rectangular domain $R_{T,\delta}:=[-C\_{T, \delta}, C\_{T, \delta}]^d \times [0,T] \subset \mathbb{R}^{d+1}$ with $C\_{T, \delta}:=C\_T (C_{\pmb{x}} + \sqrt{\log (1/\delta^2)})$ (by Lemma 1 and the boundedness of inputs), so one can apply Theorem 6 in [1] to bound Eq. (7) on the domain $R_{T,\delta}$ in $\mathbb{R}^{d+1}$ and obtain approximation results. --- **Weakness 3**: Three points: - Informally, there may be an inconsistency between the score network model and the target score function. In fact, given the target Gaussian mixture $p_0(x) = q_1 \mathcal{N}(x; -\mu, 1) + q_2 \mathcal{N}(x; \mu, 1)$, the score function is $$s_0(x)=-x+\frac{q_2 \mathcal{N}(x; \mu, 1)-q_1 \mathcal{N}(x; -\mu, 1)}{q_2 \mathcal{N}(x; \mu, 1)+q_1 \mathcal{N}(x; -\mu, 1)}\mu, $$ which gives $s_0(x) \approx -x+\mu$, $x\ge 0$ and $s_0(x) \approx -x-\mu$, $x\le 0$. Meanwhile, the score network is $$ s_{0,\theta}(x) \approx \left(\frac{1}{m} \sum\_{i: w\_i>0} a_i w_i\right) x+\left(\frac{1}{m} \sum\_{i: w\_i>0} a_i\pmb{u}_i^{\top} \pmb{e}(0)\right), $$ where "$\approx$" holds since $x=\Theta(\mu)$ (Eq. (38)). Both $s_0$ and $s\_{0,\theta}$ are linear functions, but they have unmatched scales in slopes and intercepts: $O(1)$ and $O(\mu)$ for $s_0$, but both $O(a)$'s for $s\_{0,\theta}$. That is, modeling Gaussian mixtures with large mode distances as RFMs may be inconsistent (a rigorous lower bound is needed in future work). - Our experimental results (Figures 3 and 4) demonstrate a significant gap in the density estimation performance for different modes shifts. - Our new simulation on MNIST with the U-net score network (refer to **Point 1** in **Summary of new results (as required)** and **Figure 1** in the **pdf attachment** for details) suggests that modes shift may adversely affect the performance of diffusion models in general.
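The closed-form mixture score used in the response above can be verified numerically against a finite-difference derivative of $\log p_0$, together with its near-linear behavior at the modes. This is a minimal sketch with illustrative weights $q_1, q_2$ and mode distance $\mu$, not the paper's code:

```python
import numpy as np

def gauss(x, mu):
    # Unit-variance Gaussian density N(x; mu, 1)
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

q1, q2, mu = 0.3, 0.7, 6.0  # illustrative mixture weights and mode distance

def p0(x):
    return q1 * gauss(x, -mu) + q2 * gauss(x, mu)

def score(x):
    # s_0(x) = -x + mu * (q2 N(x; mu, 1) - q1 N(x; -mu, 1)) / p0(x)
    return -x + mu * (q2 * gauss(x, mu) - q1 * gauss(x, -mu)) / p0(x)

# Check the closed form against a centered finite difference of log p0.
xs = np.linspace(-10.0, 10.0, 201)
h = 1e-5
fd = (np.log(p0(xs + h)) - np.log(p0(xs - h))) / (2.0 * h)
assert np.max(np.abs(score(xs) - fd)) < 1e-5

# Near the right mode, s_0(x) ~ -x + mu, so s_0(mu) ~ 0.
assert abs(score(mu)) < 1e-6
```

The slope of the score stays $O(1)$ while the intercept grows like $\mu$, which is the scale mismatch with the random feature model that the response points out.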
--- **Weakness 4**: The experiment is strengthened on a real dataset (see above). For consistency, we reproduce original figures trained with (S)GD for many repetitions, including Figure 2 (studying the KL divergence dynamics), and Figure 3, 4 (studying the modes shift effect). Refer to **Figure 2, 3, 4** in the **pdf attachment**. It is shown that our experimental results are consistent with the theory and replicated over multiple runs. A different optimizer (e.g. Adam) is used only for faster training. --- **Question 1**: The embedding function $\pmb{e}(t)$ is required to be bounded on a compact interval, and hence has no meaningful effect on the analysis. --- **Question 2**: The extension to NTKs is possible, since the training of RFMs follows a specific NTK regime (only the last layer is updated) in the output space (instead of parameter space). We leave these as the future work. --- **Question 3**: Please refer to the response to **Weakness 2**. --- **Question 4**: A standard variance is used now for convenience. In general, if $\text{var} = o(\mu)$ as $\mu \to +\infty$ (e.g. a bounded $\text{var}$), similar analysis and results are supposed to hold. However, this is different when $\text{var} = \Theta(\mu)$ as $\mu \to +\infty$, since the modes are not *separated* in this case, and we can not characterize modes shift by simply varying $\mu$. Simply scaling down inputs seems not to resolve the adverse modes shift effect, since the ground truth $\mu$ is unknown. One may use the input scale to approximate $\mu$ on toy datasets, but it is not that trivial in practice, particularly for real-world problems with multiple high-dimensional modes in varied scales. --- **Question 5**: The typo is fixed. --- **Question 6**: Currently, it is difficult to predict the optimal early-stopping time quantitatively due to upper bound estimates and unknown universal constants. 
However, the reproduced plot on the KL divergence dynamics shows consistency in transition points (refer to **Figure 2** in the **pdf attachment**). For other consistency concerns, refer to the response to **Weakness 4**. --- **References** [1] Daniel Hsu, Clayton H Sanford, Rocco Servedio, Emmanouil Vasileios Vlatakis-Gkaragkounis. On the approximation power of two-layer networks of random ReLUs. *Proceedings of Thirty Fourth Conference on Learning Theory*, PMLR 134:2423-2461, 2021. --- --- Rebuttal Comment 1.1: Title: A remaining question on specifying the role of approximation error Comment: I have read the rebuttal, and I thank the authors for the detailed response, including additional experiments to appropriately support the theory. I find most of the responses adequate. One remaining question is: although I acknowledge that universal approximation may be a separate problem, in the statements of Corollary 1 and Corollary 2, the bound (right hand side of $\lesssim$) only involves $(\log(d+1)/n)^{\frac{1}{4}}$ and lacks the term $\bar{\tilde{\mathcal{L}}}(\bar{\boldsymbol{\theta}}^*)$ (and also the $m$-dependent term and KL divergence term between $p^T$ and $\pi$). This is confusing, and it is the reason why I initially thought the authors are implicitly assuming universal approximation and infinite width, etc.. Therefore, unless I am missing something, I insist that the authors should carefully clarify what they are omitting in Corollaries 1 and 2, and precisely under which conditions/assumptions one can expect the *KL divergence between $p^0$ and the generated $p^{0, \hat{\boldsymbol{\theta}}_n (\tau^{es})}$ actually goes to zero, polynomially in $n$*. --- Reply to Comment 1.1.1: Title: Further discussion on error bounds Comment: Thanks for your detailed suggestion. We agree with you, and we omit other errors in the original version only to highlight the early-stopping estimates. 
Based on your suggestion, we plan to replace Corollary 1 (and 2) by a corresponding paragraph named "Discussion on error bounds" in the updated version, to discuss conditions under which the error terms in Theorem 1 (and 2) are negligible: - The first two terms: use original contents in Corollary 1 (and 2), which derive the early-stopping times and corresponding errors. They are core terms and we will highlight them as before. - The 3rd term: $m$-dependent, which is $o(1)$ when $m \gg 1$. - The 4th term: approximation error. We will include contents in above response (refer to the response to Weakness 2, Point 2) and the above-mentioned reference [1]. - The 5th term: KL divergence between $p_T$ and $\pi$. We cite a classical result in e.g. [2] (Theorem 3.20, Theorem 3.24 and Remark 3.26), which states that if $\pi$ satisfies the log-Sobolev inequality (e.g. $\pi$ is a Gaussian density), the KL divergence between $p_T$ and $\pi$ is exponentially small in $T$. Note that the conditions for 4th and 5th error term (Lipschitz continuous target score functions and log-Sobolev stationary distribution, respectively) are sufficient but *not necessary*, we only list the most common conditions for demonstration. --- **References** [2] Ramon van Handel. Probability in high dimension. Technical report, Princeton University, 2014.
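The exponential smallness of the 5th error term, KL$(p_T \Vert \pi)$, mentioned in the reply above can be illustrated in closed form for an Ornstein-Uhlenbeck forward process with a Gaussian start. This is a hypothetical one-dimensional example with assumed initial mean and variance; the standard Gaussian satisfies a log-Sobolev inequality, which gives an $e^{-2T}$ contraction rate:

```python
import numpy as np

def kl_gauss_to_std(m, s2):
    # KL( N(m, s2) || N(0, 1) ) in closed form
    return 0.5 * (m ** 2 + s2 - 1.0 - np.log(s2))

m0, s20 = 3.0, 4.0  # illustrative initial mean and variance
kl0 = kl_gauss_to_std(m0, s20)

# For dX = -X dt + sqrt(2) dW with X(0) ~ N(m0, s20), the marginal stays Gaussian:
#   p_t = N(m0 * e^{-t}, 1 + (s20 - 1) * e^{-2t}),  stationary pi = N(0, 1).
for t in [0.5, 1.0, 2.0, 5.0]:
    m_t = m0 * np.exp(-t)
    s2_t = 1.0 + (s20 - 1.0) * np.exp(-2.0 * t)
    kl_t = kl_gauss_to_std(m_t, s2_t)
    # Log-Sobolev for N(0, 1) implies KL(p_t || pi) <= e^{-2t} * KL(p_0 || pi).
    assert kl_t <= np.exp(-2.0 * t) * kl0 + 1e-12
```

So with a log-Sobolev stationary distribution, the divergence KL$(p_T \Vert \pi)$ is indeed exponentially small in $T$, matching the cited result.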
Summary: This paper provides some generalization bounds for Diffusion Models when seen as score-based models. These bounds are similar to previous literature for GANs and consider two scenarios. In the first one, the target distribution has compact support, and in the second case, it corresponds to a one-dimensional 2-mode Gaussian mixture. Strengths: This paper is well-written and provides some generalization bounds on a general and toy problem and a numerical experiment. The sketches of the proofs are well detailed and explained. Weaknesses: Some of the motivations of the paper can seem obscure and a bit misleading. For instance, from the introduction, it feels that the authors will provide a general result for mode shift distribution while they only provide a theorem for the case of a 1-dimensional mixture of 2 Gaussian. It seems fine to study that toy problem, but it should be advertised as is. Also, the first two paragraphs on Page 2 were a bit obscure. Technical Quality: 3 good Clarity: 3 good Questions for Authors: "early-stopping estimates" better generalize, do you have a practical approach to early stopping here? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer WQ2A Thank you for your comprehensive review and your valuable feedback to help us improve the paper. We detail our response below, and please kindly let us know whether our response addresses your concerns. --- > **Q1**: Some of the motivations of the paper can seem obscure and a bit misleading. For instance, from the introduction, it feels that the authors will provide a general result for mode shift distribution while they only provide a theorem for the case of a 1-dimensional mixture of 2 Gaussians. It seems fine to study that toy problem, but it should be advertised as is. Also, the first two paragraphs on Page 2 were a bit obscure. **A1**: We answer the questions on the introduction as follows: - For the analysis of modes shift distributions, we indeed investigate a specific setting in theory, with additional experiments on density estimation comparison (Figure 3 and Figure 4). Notably, we provide a new simulation result on a real dataset (refer to **Point 1** in **Summary of new results (as required)** and **Figure 1** in the **pdf attachment** for more details), which suggests that the adverse effect of modes shift on the performance of diffusion models may appear in general. However, this is still not a general rigorous mathematical analysis of the practical setting, and we will follow your advice to tone down the corresponding statements. We plan to modify lines 65-67 to: "This "uniform" bound is further extended to a *data-dependent* setting, where a sequence of unidimensional Gaussian mixture distributions with an increasing modes' distance $\mu$ is considered as the ground truth. This result characterizes the effect of ``modes shift'' quantitatively, which implies that..." in the final version.
- The first two paragraphs on Page 2 mainly convey the information that generative models may possess the memorization property, which has already been proven for bias potential models and GANs in theory and shown for large language models in applications, leading to potential privacy and copyright risks. This is our motivation to investigate the generalization properties of diffusion models. To clarify, we plan to modify these two paragraphs to the following. - "In theory, the generalization issues of generative modeling (or learning for distributions) may exhibit as the *memorization* phenomenon, if the modeled distribution is eventually trained to converge to the empirical distribution only associated with training samples. Intuitively, memorization arises for two reasons: (i) it is useful for the hypothesis space to be large enough to approximate highly complex underlying target distributions (universal convergence; [59]); (ii) the underlying distribution is unknown in practice, and one can only use a dataset with finitely many samples drawn from the target distribution. Rigorous mathematical characterizations of memorization are developed for bias potential models and GANs in [60] and [61], respectively. A natural question is, does a similar phenomenon occur for diffusion models? To answer this, a thorough investigation of the generalization properties of DMs/SGMs is required." - "In practice, the generalization capability of diffusion models is also an essential requirement, as memorization can lead to potential privacy and copyright risks when models are deployed. Similar to other generative models and large language models (LLMs) ([5, 64, 17, 6]), diffusion models can also memorize and leak training samples ([5, 42]), and hence can be subsequently attacked using specific procedures and algorithms ([26, 16, 56]).
Although there are defense methods developed to meet privacy and copyright standards ([10, 13, 55, 53]), these approaches are often heuristic, without providing sufficient quantitative understanding, particularly of diffusion models. Therefore, a comprehensive investigation of the generalization foundations of diffusion models, including both theoretical and empirical aspects, is of utmost importance in providing principled guidance in practice." --- > **Q2**: "Early-stopping estimates" better generalize; do you have a practical approach to early stopping here? **A2**: It is a common practice to use the test error to evaluate the generalization performance. For diffusion models, a straightforward approach is to compute the negative log-likelihood (averaged in bits/dim) on the test dataset *during training* with the instantaneous change-of-variables formula ([1]) and the probability flow ODE Eq. (4), where the true score function $\nabla\_{\pmb{x}} \log p\_{t}(\pmb{x})$ is replaced by $\pmb{s}\_{t, \pmb{\theta}(\tau)}(\pmb{x})$. --- **References** [1] Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David K. Duvenaud. Neural ordinary differential equations. In *Advances in Neural Information Processing Systems*, pp. 6571–6583, 2018. ---
Summary: The paper aims to characterize the generalization gap of score-based diffusion models. The setting is as follows. The input is $n$ samples from some unknown distribution $p_0$, and the goal is to approximate $p_0$ by a distribution $p_{0,\hat{\theta}_n}$ where the parameter $\hat{\theta}_n$ is a minimizer of the empirical loss function obtained by running gradient descent up to some time $\tau$. The loss function is the expected difference between the gradients of the log density functions of distributions along the trajectories of 2 SDEs (see equation (2)), one initialized at $p_0$ and the other at $p_{0,\hat{\theta}_n}$. The paper proves a bound on the KL divergence between the true distribution $p_0$ and the learned distribution $p_{0,\hat{\theta}_n}$ in terms of the training time $\tau$ for gradient descent, the sample size $n$, and the model parameter $m$. The bound is obtained by using [Song, Durkan, Murray, Ermon '21]'s main result to bound the aforementioned KL divergence in terms of the empirical loss plus a negligible term, and then bounding the empirical loss by the population loss. The paper states a general result (Theorem 1), then shows how to apply it to the specific case when the target distribution is a mixture of two 1-dimensional Gaussians with the same covariance. They quantitatively show that as the distance between the two modes of the Gaussians increases, the KL divergence between the true distribution and the learned one becomes worse, which matches the qualitative result given in [Koehler, Heckett, Risteski, ICLR '23]. Strengths: The paper explicitly bounds the generalization error, which is defined as the KL divergence between the true distribution and the learned one, in terms of the model parameters and the sample size.
Similar bounds were shown before by [Koehler, Heckett, Risteski, ICLR '23], but this previous work does not make explicit the dependency on the sample size and the model parameters, and further places additional requirements on the learned distribution. The results appear to be interesting, if correct. Weaknesses: The paper might be incorrect, or at least currently doesn't do a good job of convincing me otherwise. I believe the statement of Theorem 1/Theorem 3 in the appendix lacks the important assumption that the loss function is convex (see the top line of page 15 of the supplement (proof of Lemma 3) and my 1st question for details). However, the loss function of mixtures of Gaussians appears non-convex (or at least it's unclear to me why it should be convex), so Lemma 3 couldn't be used in the proof of Theorem 2 as the paper currently does. Aside from that, the paper is quite hard to understand, and the definitions of many terms are not easy to find. For ease of understanding, the authors should make clear the difference between $T$, the time to run the diffusion model so that $p_T$ is close to the known prior distribution $\pi$ (e.g. Gaussian), and $\tau$, the time to train the neural net for gradient estimation. These definitions, along with the definitions of $p_T$ and $\pi$, should be clearly stated in the statement of Theorem 1. The preconditions of the main theorem (Theorem 1) should be stated upfront in the main text instead of in the appendix of the supplement. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1/ In the proof of Lemma 3, at the top line of page 15 of the supplement, does "by convexity" refer to the fact that the loss function is a convex function? If so, this assumption should be clearly stated as part of Theorem 1 or Theorem 3. 2/ In the neural net setup, lines 156 and 157 of the supplement, does $A$ essentially play the role of $\theta$? If so, this should be clearly stated.
3/ If lemma 3 requires the assumption that the loss function is convex, then can the authors justify that the loss function for Gaussian mixtures is convex? Else, theorem 2 cannot use lemma 3 like it currently does. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: The paper's proof might be incorrect. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 4Zu3 Thank you for your detailed review and feedback to help us improve the paper. We detail our response below, and please kindly let us know whether our response addresses your concerns. --- > **Weakness 1**: Convexity concern. **A1**: The convexity arises from the following two points: 1. We analyze the score-matching loss Eq. (7), which is an $L^2$-metric between the score network model and the target score function; 2. The score network is defined as a random feature model Eq. (12) that is linear in the trainable parameters. Therefore, using some trace techniques, it is straightforward to show that the loss is quadratic, and hence convex, w.r.t. the trainable parameters $\text{vec}(\pmb{A})$ (i.e. the vectorization of $\pmb{A}$). Concretely, recalling $\pmb{s}\_{t, \pmb{\theta}}(\pmb{x}(t)) = \pmb{A} \sigma(\pmb{W}\pmb{x}(t) + \pmb{U} \pmb{e}(t))$, and letting $\pmb{s}\_{t}(\pmb{x}(t)):=\nabla\_{\pmb{x}(t)} \log p\_{t}(\pmb{x}(t))$, $\pmb{h}_1(\pmb{x},t):=\sqrt{\lambda (t)}\sigma(\pmb{W}\pmb{x} + \pmb{U} \pmb{e}(t))$, $\pmb{h}_2(\pmb{x},t):=\sqrt{\lambda (t)}\pmb{s}\_{t}(\pmb{x})$, we have $$ \tilde{\mathcal{L}} (\pmb{\theta};\lambda(\cdot)) = \mathbb{E}\_{t \sim \mathcal{U} (0,T)} \mathbb{E}\_{\pmb{x}(t) \sim p\_{t}} [\pmb{h}_1^{\top}(\pmb{x}(t),t)\pmb{A}^{\top} \pmb{A} \pmb{h}_1(\pmb{x}(t),t) - 2\pmb{h}_2^{\top}(\pmb{x}(t),t)\pmb{A}\pmb{h}_1(\pmb{x}(t),t)] + \text{constant}. 
$$ Since for any $\pmb{h}, \bar{\pmb{h}}, \pmb{B}$, we have $$ \begin{aligned} \mathbb{E}\_{t} \mathbb{E}\_{\pmb{x}(t)} [\pmb{h}^{\top}(\pmb{x}(t),t) \pmb{B} \bar{\pmb{h}}(\pmb{x}(t),t)] &= \mathbb{E}\_{t} \mathbb{E}\_{\pmb{x}(t)} [\text{trace}(\pmb{B} \bar{\pmb{h}}(\pmb{x}(t),t)\pmb{h}^{\top}(\pmb{x}(t),t))] \\\\ &= \text{trace}(\pmb{B} \mathbb{E}\_{t} \mathbb{E}\_{\pmb{x}(t)} [\bar{\pmb{h}}(\pmb{x}(t),t)\pmb{h}^{\top}(\pmb{x}(t),t)]), \end{aligned} $$ we further obtain $$ \tilde{\mathcal{L}} (\pmb{\theta};\lambda(\cdot)) = \text{trace}(\pmb{A}^{\top}\pmb{A} \pmb{B}_1) -2 \text{trace}(\pmb{A} \pmb{B}_2) + \text{constant}, $$ where $\pmb{B}_1:=\mathbb{E}\_{t} \mathbb{E}\_{\pmb{x}(t)} [\pmb{h}_1(\pmb{x}(t),t)\pmb{h}_1^{\top}(\pmb{x}(t),t)]$ and $\pmb{B}_2:=\mathbb{E}\_{t} \mathbb{E}\_{\pmb{x}(t)} [\pmb{h}_1(\pmb{x}(t),t)\pmb{h}_2^{\top}(\pmb{x}(t),t)]$. Here, $\pmb{B}_1$ is a positive semi-definite matrix, since $\pmb{v}^{\top}\pmb{B}_1\pmb{v}=\mathbb{E}\_{t} \mathbb{E}\_{\pmb{x}(t)} [(\pmb{v}^{\top}\pmb{h}_1(\pmb{x}(t),t))^2]\ge 0$ for any $\pmb{v}$. Notice that for any $\pmb{A}, \pmb{B}$, $$ \begin{aligned} \text{trace}(\pmb{A}^{\top}\pmb{A} \pmb{B}) &=\text{trace}(\pmb{A} \pmb{B}\pmb{A}^{\top}) =\sum\_{i,j}\pmb{B}\_{ij}(\pmb{A}\_{:,j})^{\top}\pmb{A}\_{:,i} =\text{vec}(\pmb{A})^{\top} (\pmb{B} \otimes \pmb{I}) \text{vec}(\pmb{A}), \\\\ \text{trace}(\pmb{A} \pmb{B}) &= \sum\_{j} (\pmb{A}\_{:,j})^{\top}(\pmb{B}^{\top})\_{:,j} =\text{vec}(\pmb{A})^{\top} \text{vec}(\pmb{B}^{\top}), \end{aligned} $$ where $\otimes$ denotes the Kronecker product. Hence $$ \tilde{\mathcal{L}} (\pmb{\theta};\lambda(\cdot)) = \text{vec}(\pmb{A})^{\top} (\pmb{B}_1 \otimes \pmb{I}) \text{vec}(\pmb{A}) - 2 \text{vec}(\pmb{B}_2^{\top})^{\top}\text{vec}(\pmb{A}) + \text{constant}. 
$$ It is straightforward to show that the eigenvalues of $\pmb{B}_1 \otimes \pmb{I}$ are the same as those of $\pmb{B}_1$, each repeated with multiplicity (see Lemma 0 below), implying that $\pmb{B}_1 \otimes \pmb{I}$ is also positive semi-definite. Therefore, $$ \nabla\_{\pmb{\theta}}^2 \tilde{\mathcal{L}} (\pmb{\theta};\lambda(\cdot)) = \nabla\_{\text{vec}(\pmb{A})}^2 \tilde{\mathcal{L}} (\pmb{\theta};\lambda(\cdot)) =2(\pmb{B}_1 \otimes \pmb{I}) $$ is positive semi-definite, i.e., the loss is convex in the trainable parameters. **Lemma 0**: Let $A\in \mathbb R^{n\times n}$, $B\in \mathbb R^{m\times m}$ have the eigenvalues $\\{\nu_i\\}_{i=1}^n$, $\\{\mu_j\\}\_{j=1}^m$, respectively. Then, the eigenvalues of $A \otimes B$ are $\nu_i\mu_j$, $i=1,\cdots,n$, $j=1,\cdots,m$. *Proof.* By Schur triangularization (every square matrix is similar over $\mathbb{C}$ to an upper triangular matrix), there exist invertible matrices $P, Q$ such that $A=P\Lambda P^{-1}$, $B=Q\Delta Q^{-1}$, where $\Lambda, \Delta$ are upper triangular matrices. Therefore, $$ \begin{aligned} A \otimes B = (P\Lambda P^{-1}) \otimes (Q\Delta Q^{-1}) = (P \otimes Q) (\Lambda \otimes \Delta) (P^{-1} \otimes Q^{-1}) = (P \otimes Q) (\Lambda \otimes \Delta) (P \otimes Q)^{-1}. \end{aligned} $$ That is, $A \otimes B$ and $\Lambda \otimes \Delta$ are similar. Since $\Lambda \otimes \Delta$ is still an upper triangular matrix with diagonal elements $\nu_i\mu_j$, $i=1,\cdots,n$, $j=1,\cdots,m$, this completes the proof. --- > **Weakness 2**: Definitions of notations and theorem statement. **A2**: To clarify, we have added a figure (refer to **Figure 5** in the **pdf attachment**) to illustrate the problem formulation and important notations, as is also suggested by **Reviewer ab6x** (see **Q1** and **A1** there). We hope this plot will further improve the readability. We will also add the conditions omitted from Theorem 1 back from the appendix in the updated version. --- > **Question 1**: Convexity. **A3**: As discussed in **A1**, the loss function is a convex function w.r.t. 
the trainable parameters ($\text{vec}(\pmb{A})$). We do not need an additional assumption on convexity. --- > **Question 2**: Trainable parameters. **A4**: Only the outer layer is trainable in the setting of random feature models. We will clarify and emphasize this point when introducing score networks in the updated version. --- > **Question 3**: Convexity with different target distributions. **A5**: As discussed in **A1**, the loss function is quadratic, and hence convex, w.r.t. the trainable parameters ($\text{vec}(\pmb{A})$), and this fact holds for any given target score function, including the Gaussian mixture. --- --- Rebuttal Comment 1.1: Comment: Thank you for the explanation. I realize that the convexity was due to the way the loss function is parametrized. However, I think a separate lemma and some brief explanation on this point should be added to the paper. As I have said in my previous comments, the writing stands to be improved and I don't think the paper is completely ready for publication at this point. I raise my overall score to 4 and reduce my confidence level to 3. --- Reply to Comment 1.1.1: Title: Specific suggestions Comment: Thanks for your comment. We are delighted to see that your (original) major concern on convexity has been resolved. We agree with your point to add a separate lemma and some brief explanation. Since all the content, including the explanation and proofs, has been provided in the above response (refer to **A1**), we are ready to place it in the appendix before the original Lemma 3 in the updated version. 
On the writing concern, as is shown in the above response (refer to **A2**), we have added a "formulation" figure (refer to **Figure 5** in the **pdf attachment**) to illustrate the setup and key notations, including - basic elements to perform a fundamental generalization analysis: the hypothesis space (diffusion process + random feature score network), concept space (different types of target distributions), loss objective (time-dependent score matching) and training algorithm (gradient flow); - mentioned (important) notations, such as the SDE time $t$ and its maximum $T$, training time $\tau$, and terminal state $p_T \approx \pi$ (a known prior). We feel that Figure 5 is clear enough to cover the details raised in your previous comments on the writing aspect. We would appreciate it if you can further provide more *specific* suggestions on the writing.
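The quadratic-loss argument in **A1** and Lemma 0 can be sanity-checked numerically. The sketch below is not from the paper: the dimensions, the random features standing in for $\pmb{h}_1$ and $\pmb{h}_2$, and the sample count are illustrative assumptions. It verifies that the empirical loss matches the quadratic form $\text{vec}(\pmb{A})^{\top}(\pmb{B}_1 \otimes \pmb{I})\text{vec}(\pmb{A}) - 2\,\text{vec}(\pmb{B}_2^{\top})^{\top}\text{vec}(\pmb{A})$ up to the dropped constant, and that the eigenvalues of $\pmb{B}_1 \otimes \pmb{I}$ are those of $\pmb{B}_1$ with multiplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 3, 4, 200           # output dim, feature dim, Monte Carlo samples (illustrative)

H1 = rng.normal(size=(n, m))  # rows stand in for h1(x(t), t)
H2 = rng.normal(size=(n, d))  # rows stand in for h2(x(t), t)

B1 = H1.T @ H1 / n            # empirical E[h1 h1^T]  (positive semi-definite)
B2 = H1.T @ H2 / n            # empirical E[h1 h2^T]

def loss(A):
    # empirical E[ h1^T A^T A h1 - 2 h2^T A h1 ]  (the additive constant is dropped)
    AH = H1 @ A.T             # rows are (A h1)^T
    return np.mean(np.sum(AH ** 2, axis=1) - 2 * np.sum(H2 * AH, axis=1))

A = rng.normal(size=(d, m))
vecA = A.flatten(order='F')   # column-stacking vectorization vec(A)

# quadratic form from A1:  vec(A)^T (B1 x I) vec(A) - 2 vec(B2^T)^T vec(A)
quad = vecA @ np.kron(B1, np.eye(d)) @ vecA - 2 * B2.T.flatten(order='F') @ vecA

# Lemma 0 in this special case: eigenvalues of B1 x I are those of B1, each d times
eig_kron = np.sort(np.linalg.eigvalsh(np.kron(B1, np.eye(d))))
eig_B1 = np.sort(np.repeat(np.linalg.eigvalsh(B1), d))
```

Both checks use the column-stacking convention for $\text{vec}(\cdot)$; since $\pmb{B}_1$ is symmetric, $\pmb{B}_1^{\top} \otimes \pmb{I} = \pmb{B}_1 \otimes \pmb{I}$ and the two trace identities in A1 coincide.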
Summary: This paper proves generalization bounds for diffusion models. With a suitable stopping time, they show that the generalization error goes to zero, with a specific upper bound on the rate that scales polynomially with the sample size and the model capacity. Theorem 1 is the main convergence result, while in Theorem 2 they extend their results to the data-dependent setting. The paper is concluded with some supporting experiments. Strengths: - well-motivated problem and novel formulation - nice theoretical results - enough citations to prior works Weaknesses: - while it's generally well-written, it could've been better (e.g., a figure could've been used for the problem formulation to make it more readable) Technical Quality: 3 good Clarity: 3 good Questions for Authors: This is a nice theoretical result. Here are some comments/questions. - Equation 3: there are two $dt$'s there, is that right, or is it a typo? Please make it clear - Theorem 1: do you need to know the RKHS to run the algorithm? Or do you just use it in the proofs? This is not very clear from the context - it is stressed that the upper bound on the generalization error is dimension-free, while there is dependence on $d$ in the bound (Equation 18) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer ab6x Thank you for your comprehensive review and your valuable feedback to help us improve the paper. We detail our response below, and please kindly let us know whether our response addresses your concerns. --- > **Q1**: While it's generally well-written, it could've been better (e.g., a figure could've been used for the problem formulation to make it more readable). **A1**: This is a useful suggestion to further improve the readability. We have followed it and plotted a figure (refer to **Figure 5** in the **pdf attachment**) to illustrate our problem formulation, including the diffusion process, (score matching) loss objectives, the random feature score network model, *different* target distributions, the gradient flow training dynamics and other important notations. We will also include this figure in the final version. --- > **Q2**: Equation 3: there are two $dt$'s there, is that right, or is it a typo? Please make it clear. **A2**: Yes, it is a typo. We have fixed it in the updated version. --- > **Q3**: Theorem 1: do you need to know the RKHS to run the algorithm? Or do you just use it in the proofs? This is not very clear from the context. **A3**: The RKHS norm $\||\cdot\||_\mathcal{H}$ in Theorem 1 is shorthand for $\||\cdot\||\_{\mathcal{H}\_{k\_{\rho\_0}}}$, which is defined in lines 163-167. In short, $\rho_0$ is the distribution used to initialize the inner parameters $(\pmb{w},\pmb{u})$, $k\_{\rho\_0}$ is the induced kernel, and $\mathcal{H}\_{k\_{\rho\_0}}$ is the induced RKHS. We use it in the proofs; note also that the RKHS norm is just a weighted $L^2$-norm averaged over $\rho_0$ (see the definition in lines 166-167), and hence can be easily estimated by the Monte Carlo method (e.g. an empirical mean) once $\rho_0$ is specified. --- > **Q4**: It is stressed that the upper bound on the generalization error is dimension-free, while there is dependence on $d$ in the bound (Equation 18). 
**A4**: By saying "dimension-independent", we mean escaping from the curse of dimensionality (CoD), i.e., the upper bound does not exponentially depend on the data dimension. We will remove the term "dimension-independent" to avoid misunderstanding in the final version. --- --- Rebuttal Comment 1.1: Title: Response Comment: I acknowledge the response provided by the authors, specifically the idea of adding new figures. Please also add the clarifications suggested in my comments to the new version of the paper (as promised here). I decided to keep my score unchanged. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for acknowledging our response and considering our proposed changes, including the addition of new figures. We appreciate your feedback and your valuable comments. We will incorporate all the suggestions you provided into the new version of the paper. Thank you once again for your time and insightful reviews.
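The Monte Carlo estimation mentioned in **A3** can be sketched as follows. This is a hypothetical illustration, not the paper's code: the coefficient function `a` and the choice of $\rho_0$ as a standard Gaussian are our own assumptions, used only to show how a weighted $L^2$-norm averaged over $\rho_0$ reduces to an empirical mean:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coefficient function a(w, u); rho_0 is taken to be a standard
# Gaussian on (w, u) purely for illustration -- both are assumptions.
def a(w, u):
    return np.sin(w) * np.exp(-u ** 2)

n = 100_000
w = rng.normal(size=n)   # (w, u) ~ rho_0
u = rng.normal(size=n)

# empirical mean estimating the weighted L2-norm squared  E_{rho_0}[a(w, u)^2]
norm_sq_est = np.mean(a(w, u) ** 2)
```

For this particular `a`, independence of $w$ and $u$ gives a closed form $\mathbb{E}[\sin^2 w]\cdot\mathbb{E}[e^{-2u^2}] = \frac{1-e^{-2}}{2}\cdot\frac{1}{\sqrt{5}} \approx 0.193$, so the Monte Carlo estimate can be checked against it.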
Rebuttal 1: Rebuttal: # Summary of new results (as required) We sincerely appreciate all reviewers for their insightful and constructive feedback. Besides answering questions in detail to address all of the reviewers’ comments, we want to summarize and highlight the new key results as follows; all of the results are included in the **pdf attachment** and will be added in the final version. 1. We provide a new simulation result (refer to **Figure 1** in the **pdf attachment**) on the MNIST dataset, modeling the score function with the commonly used U-Net architecture, which suggests that the adverse effect of mode shift on the performance of diffusion models may arise in general. The setup procedure is as follows: (i) construct datasets from MNIST with different distances between modes; (ii) train diffusion models and evaluate the respective performance at different scales of mode distances. - Concretely, letting $\mathcal{D}$ denote the whole MNIST dataset, we first perform a $K$-Means clustering on $\mathcal{D}$ to get $\mathcal{D}=\bigcup_{k=1}^K \mathcal{D}_k$, with $\bar{\pmb{x}}_k$ the center of $\mathcal{D}_k$, $k=1,\cdots,K$. Let $(i^*,j^*):=\arg\max\_{i \ne j} \||\bar{\pmb{x}}_i-\bar{\pmb{x}}_j\||$, and $\mathcal{D}\_{\text{farthest}}:=\mathcal{D}\_{i^*} \cup \mathcal{D}\_{j^*}$ ($\mathcal{D}\_{\text{nearest}}$ is similarly defined by the corresponding $\arg\min$ indices). By randomly selecting the same number of data samples and using the same (hyper-parameter) configuration, we train two separate diffusion models on $\mathcal{D}\_{\text{farthest}}$ and $\mathcal{D}\_{\text{nearest}}$, respectively, and then perform inference (sampling). The training loss curves are shown in **Figure 1 (a)**, and the sampling results are shown in **Figure 1 (b)** for $\mathcal{D}\_{\text{farthest}}$ and **Figure 1 (c)** for $\mathcal{D}\_{\text{nearest}}$. 
One can observe a clear performance gap: the diffusion model trained on $\mathcal{D}\_{\text{farthest}}$ exhibits a higher training loss and worse sampling quality than the model trained on $\mathcal{D}\_{\text{nearest}}$. 2. We reproduce the original Figure 2 (studying the KL divergence dynamics) in the current paper by using (S)GD for training with many repetitions (refer to **Figure 2** in the **pdf attachment**), as is suggested by Reviewer uaj6. It is shown that the original experimental results are consistent with the theory and stable over multiple runs. 3. We reproduce the original Figure 3 and Figure 4 (studying the mode shift effect) in the current paper by using (S)GD for training (refer to **Figure 3** and **Figure 4** in the **pdf attachment**), as is suggested by Reviewer uaj6. It is shown that the original experimental results are consistent with the theory. 4. We add a figure to illustrate our problem formulation (refer to **Figure 5** in the **pdf attachment**), as is suggested by Reviewer ab6x. Pdf: /pdf/8aa26c81d61b088cddd7094763983c84f4a2321a.pdf
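The construction of $\mathcal{D}\_{\text{farthest}}$ and $\mathcal{D}\_{\text{nearest}}$ described in point 1 can be sketched as follows. This is an illustrative re-implementation on synthetic 2-D blobs rather than MNIST, with a minimal Lloyd's-algorithm K-means standing in for whatever clustering routine the authors used; all names and constants are ours:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Synthetic stand-in for MNIST: K well-separated Gaussian blobs, 50 points each.
K = 4
true_centers = np.array([[0.0, 0.0], [0.0, 6.0], [6.0, 0.0], [10.0, 10.0]])
D = np.vstack([c + rng.normal(scale=0.3, size=(50, 2)) for c in true_centers])

# Minimal K-means (Lloyd's algorithm); initialized with one point per blob so
# that this toy example converges deterministically.
centers = D[::50][:K].copy()
for _ in range(20):
    labels = np.argmin(((D[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    centers = np.array([D[labels == k].mean(axis=0) for k in range(K)])

# Farthest / nearest pair of cluster centers, as in the rebuttal's construction.
pairs = list(combinations(range(K), 2))
dists = [np.linalg.norm(centers[i] - centers[j]) for i, j in pairs]
i_far, j_far = pairs[int(np.argmax(dists))]
i_near, j_near = pairs[int(np.argmin(dists))]

D_farthest = D[(labels == i_far) | (labels == j_far)]
D_nearest = D[(labels == i_near) | (labels == j_near)]
```

Two diffusion models would then be trained on `D_farthest` and `D_nearest` with identical hyper-parameters, which is the comparison reported in Figure 1 of the attachment.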
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Learning Modulated Transformation in GANs
Accept (poster)
Summary: The paper applies ideas from Spatial Transformer Networks in the context of generative models, specifically GANs, introduces a new module that gives the GAN generator a more natural way to generate content at spatially varying locations. The resulting models are better able to generate content with smooth geometric changes, such as videos, but more traditional non-video datasets also see an improvement. The method seems easy to implement and shows promise when applied to various network architectures and datasets. Strengths: The changes introduced are self-contained and minimally disruptive, and notably do not require changes to hyperparameters or training schemes. The numerical results look good across the board, and the technique seems compatible with various network architectures. Even though the proposed method seems straightforward to implement, the promised code release is still a plus. Weaknesses: The paper does not impart much intuition to the reader: what exactly are the new capabilities of these models? Are there smooth spatial movements upon latent-space interpolation etc? Are there any situations where the proposed method performs worse and should not be used? In a similar vein, visual comparisons to baselines are missing. Seeing FIDs decrease is nice, but understanding the properties of the new model is even better. Figure 4 is a step in this direction, but more is needed. Figure 2 is quite uninformative on its own - without comparisons to baseline models, the reader doesn't really learn anything (we already know vanilla SG2 works great for cats, for example). Failure cases are not discussed in detail - were there datasets where the proposed method did not help, or even hurt performance? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Line 42 talks about warping the input features, whereas line 112 talks about bilinear interpolation (presumably of the unwarped input feature map). 
Figure 1 also seems to indicate that the feature map remains unwarped. How is the operation implemented in practice? The text and figures need to be unambiguous. SG3 takes great care to use very high-quality resampling operations within the network, in order to minimize aliasing. Does the proposed bilinear interpolation negatively affect the equivariance of SG3? Is the interpolation operation otherwise compatible with the strict signal-processing requirements of SG3? A comment on this would be valuable. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are not discussed much and should be expanded upon. The paper makes it seem like the proposed method led to improvements in every single setting that was tried, which is unlikely. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. "What exactly are the new capabilities of these models?"** Compared to existing GANs that perform convolution at *fixed* locations (*i.e.*, shared by all samples), we offer the generator an additional degree of freedom through performing convolution at *variable* locations. For this purpose, we apply a learned transformation to feature maps before feeding them to the convolution operation, where the learned transformation is controlled by the latent code. Such a newly introduced lightweight plug-in alleviates the difficulty of modeling geometric variations within the dataset. Here, we would like to clarify that our design does not intend to enable new functionalities, but it is indeed possible to study whether MTM enables some functional byproducts during inference. Thanks for your suggestion, and we will leave it as future work. **Q2. "Figure 2 is quite uninformative on its own - without comparisons to baseline models."** Thanks. We include some qualitative comparisons in Fig. R2 (see **the newly uploaded one-page PDF**). We will also add the results in revision. **Q3. "Failure cases are not discussed in detail - were there datasets where the proposed method did not help, or even hurt performance?"** Thanks. So far, we have evaluated our approach on multiple datasets for image generation, video generation, and 3D-aware image synthesis, and observed consistent performance gains. Still, it is interesting to mention that, on the commonly used FFHQ dataset, applying our MTM seems to have neither a positive nor a negative effect. One possible guess is that FFHQ contains human faces that have similar shapes and are already well-aligned, containing limited geometric variations. In such a case, using MTM would bring additional computation overhead during training yet achieve on-par performance. We will add the discussion. | FFHQ-256 | FID | | :- | :-: | | StyleGAN2 | 3.72 | | *w/* MTM | 3.74 | **Q4. 
"How is the operation implemented in practice?"** In practice, feature warping is incorporated into the convolutional operation. Namely, when a kernel performs convolution, the desired features are obtained through bilinear interpolation, which is equivariant to warping them first. We will revise the text and the figures to clarify this. Also, the core implementation (PyTorch code) is already submitted as the supplementary material, and we will release the entire code to facilitate reproduction. **Q5. "Does the proposed bilinear interpolation negatively affect the equivariance of SG3?"** Thanks. Following the suggestion, we evaluate the generator with respect to its equivariance property to translation (measured by EQ-T, where higher number is better). The table below suggests that our MTM significantly improves the synthesis performance without sacrificing much equivariance property. We will add the results. | TaiChi-256 | FID | EQ-T | | :- | :-: | :-: | | StyleGAN2 + Fourier features | 23.89 | 9.36 | | StyleGAN3 | 21.36 | 45.87 | | *w/* MTM | 13.60 | 41.52 | --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response. As for Q1, I'm not confused about what the new capabilities are from a technical standpoint. What I'm wishing for is some intuition about the properties of these new models. As the authors of a new architecture, I expect you to play around with the models and try to figure out if they behave differently compared to the baseline. Maybe latent space interpolations show smoother movements of objects etc. As it stands, the paper seems to indicate that the FIDs improve, but that the models otherwise behave identically to the baselines. The other questions have been addressed, thanks. I will follow the discussions here and reconsider my rating. --- Reply to Comment 1.1.1: Title: Discussion Comment: Thanks for your reply, together with the clarification on Q1. 
Your suggestion about "playing around with the model supported by our new architecture to study whether it behaves differently compared to the baseline" is very instructive. Following your suggestion, we evaluate latent space interpolation with the model trained with our MTM, and report the comparison results against the baseline in the table below. We can tell that our method also helps improve the interpolation performance. We will also add some qualitative results in the revision (not sure whether this is allowed in the discussion period). | TaiChi-256 | FID | FID after Interpolation | | :- | :-: | :-: | | StyleGAN3 | 23.89 | 22.64 | | *w/* MTM | 13.60 | 14.51 | However, we need to admit that, taking the improved synthesis performance (*i.e.*, from 23.89 to 13.60) into account, we cannot directly conclude whether the "smoother movements" come from a more capable generator or from a better interpolation property. More detailed analyses would be needed and we leave them as follow-up work. You are right that our work, in its current form, primarily focuses on how to improve GANs in learning from data with large geometry variations. We believe that *this problem itself is already challenging and fundamental in GAN studies, to which we provide an effective and generalizable solution*.
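The claim in the answer to **Q4** above ("the desired features are obtained through bilinear interpolation, which is equivalent to warping them first") can be illustrated with a small numeric sketch. This is our own toy implementation, not the authors' PyTorch code; a single constant fractional offset and a 3x3 kernel are assumed:

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample a 2-D feature map at fractional coordinates (y, x)."""
    H, W = feat.shape
    y0 = min(max(int(np.floor(y)), 0), H - 2)
    x0 = min(max(int(np.floor(x)), 0), W - 2)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * feat[y0, x0] + (1 - dy) * dx * feat[y0, x0 + 1]
            + dy * (1 - dx) * feat[y0 + 1, x0] + dy * dx * feat[y0 + 1, x0 + 1])

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 8))
w = rng.normal(size=(3, 3))   # a 3x3 convolution kernel
off = (0.4, -0.3)             # one constant fractional offset (illustrative)

def deform_conv_at(py, px):
    # "interpolate inside the convolution": sample each tap at its shifted position
    return sum(w[i, j] * bilinear_sample(feat, py + i - 1 + off[0], px + j - 1 + off[1])
               for i in range(3) for j in range(3))

# warp the whole feature map first, then convolve at ordinary integer taps
warped = np.array([[bilinear_sample(feat, y + off[0], x + off[1]) for x in range(8)]
                   for y in range(8)])

def plain_conv_at(py, px):
    return sum(w[i, j] * warped[py + i - 1, px + j - 1] for i in range(3) for j in range(3))

interp_inside = deform_conv_at(3, 3)
warp_first = plain_conv_at(3, 3)
```

Away from the borders the two orderings coincide exactly for a constant offset; for spatially varying offsets (the actual MTM setting) the per-tap interpolation generalizes the pre-warp, which is presumably why it is folded into the convolution in practice.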
Summary: The paper introduces learnable modulated convolutions to the literature on GANs. Instead of using standard convolutions with fixed 3x3 kernels, the paper proposes to learn the kernel spatial offsets to allow flexibility in the receptive field of generators. The proposed module can be added to any GAN based on convolutions, and experiments demonstrate it for StyleGANv2, StyleGANv3, EG3D, StyleSV. The module introduces only marginal computational overhead but improves the synthesis of models substantially. Strengths: - To my knowledge, this paper brings deformable convolutions to the GAN literature for the first time. This method can be easily applied to almost any CNN-based GAN. It introduces only marginal computational overhead but sometimes gives big gains in FID. It can play a role in the further development of GANs, especially in video synthesis. - The paper is easy-to-follow, the ideas are intuitive, and the experiments are extensive. Weaknesses: [Novelty] - I do not understand what is the technical novelty introduced in Sec. 3. Learnable offsets, as shown in Sec. 3.1., have already been introduced in cited prior work (e.g., DCN). The Style Block used for latent modulation in Sec 3.2. has already been introduced in Karras et al. It seems that the method naively applies existing technology (DCN) to existing GANs without any modifications or novel analysis. It is therefore not clear what technical challenge the paper solves. [Comparison] - The presented motivation for introducing learnable offsets is the fact that GANs are too "local" and cannot handle more global dependencies. This problem has also been investigated in prior work (U-Net based discriminator, SAGAN, etc), and some solutions exist. With this motivation, I would expect a discussion and comparison of how well global coherency is preserved in generated images thanks to the introduced method, as well as comparisons to alternatives. 
[Comparison] - I would also expect more visualizations of what the module actually learns. For example, how big are the learned offsets usually in comparison to whole feature dimensions? [Comparison] - As a thought, would a simpler strategy without learnable parameters that increases the receptive field of convolutions (e.g., 5x5, 7x7 convolutions, strides) also help? [Experiment] - At the moment, there are no visual results to see the qualitative effect of the presented method on studied models (e.g., StyleGANv2 w/o ours vs with ours). I think this is an interesting yet missing analysis. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please comment on the Weaknesses. In addition, why are the baselines in Table 1 non-uniform? Would it be possible to show results for all the datasets for both StyleGANv2 and StyleGANv3? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations and Societal Impacts are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. About the technical novelty introduced in Sec. 3, and "it seems that the method naively applies DCN to existing GANs without any modifications or novel analysis; it is therefore not clear what technical challenge the paper solves".** Disagree. Unlike DCN, which was originally evaluated on discriminative tasks, MTM is particularly designed for generative tasks, which require decoding the sampling stochasticity into diverse realistic images. To this end, we condition the learning of the transformation on the sampled latent code. As shown in the table below, directly applying deformable convolution to GANs drastically harms the synthesis performance. By contrast, applying MTM to GANs brings a substantial performance gain. From this perspective, even though DCN and MTM share a similar philosophy, MTM surpasses DCN and works as a general and efficient module in GANs, which is further supported by the experiments on a range of generation tasks (including image generation, video generation, and 3D-aware image synthesis) and datasets. To the best of our knowledge, we are the first to *make the idea of spatial transformation work on generative models*, which is not that straightforward according to the table below. We hope our discovery could make MTM a basic operation in the future design of GANs. We will add the additional results and the discussion in the revision. | TaiChi-256 | FID | | :- | :-: | | StyleGAN3 | 21.36 | | *w/* deformable convolution | 192.65 | | *w/* MTM | 13.60 | **Q2. "I would expect a discussion and comparison of how well global coherency is preserved in generated images thanks to the introduced method, as well as comparisons to alternatives."** In generation tasks, global coherency should be well reflected by the overall synthesis quality, which is usually measured by FID and FVD. The consistent FID/FVD improvement on various tasks and datasets provides strong support for the effectiveness of our proposed MTM. 
Following your suggestion, we compare our approach with SAGAN, which introduces self-attention to the GAN generator. For a fair comparison, we use the same StyleGAN3 backbone and enhance the conventional convolution operation with either self-attention or our MTM. The table below demonstrates the superiority of MTM over SAGAN. | TaiChi-256 | FID | | :- | :-: | | StyleGAN3 | 21.36 | | *w/* SAGAN | 26.83 | | *w/* MTM | 13.60 | **Q3. "I would also expect more visualizations of what the module actually learns. For example, how big are the learned offsets usually in comparison to whole feature dimensions?"** Due to the lack of spatial correspondence between feature maps and the final synthesis, it is hard to directly visualize, on the generated image, the offsets learned by our MTM, which are at early layers (*i.e.*, small resolutions). We have included Fig. 4 in the manuscript, where the offsets are disabled during inference, to give a rough overview of what the module learns. Following your suggestion, we summarize the statistics of the learned offsets at resolutions of 36x36 and 52x52. As shown below, compared to the conventional convolution, which adopts a fixed receptive field, our MTM offers the model an additional degree of freedom to decode the sampling stochasticity. | TaiChi-256 | 36x36 | 52x52 | | :- | :-: | :-: | | StyleGAN3 | 3 ± 0 | 3 ± 0 | | *w/* MTM | 4.7 ± 0.22 | 5.4 ± 0.36 | **Q4. "As a thought, would a simpler strategy without learnable parameters that increases the receptive field of convolutions (e.g., 5x5, 7x7 convolutions, strides) also help?"** Following your suggestion, we conduct an experiment by replacing the 3x3 convolutional kernels in StyleGAN3 with 5x5 kernels. The results are listed below, where we can tell that larger kernels do not always lead to better performance. | TaiChi-256 | FID | | :- | :-: | | StyleGAN3 | 21.36 | | *w/* 5x5 convolution | 45.94 | | *w/* MTM | 13.60 | **Q5. 
"There are no visual results to see the qualitative effect of the presented method on studied models."** Thanks. We include some qualitative comparisons in Fig. R2 (see **the newly uploaded one-page PDF**). We will also add the results in revision. **Q6. "Why are the baselines in Table 1 non-uniform?"** Compared to StyleGAN2, StyleGAN3 targets the anti-alias property of the generator instead of improving the synthesis performance. In fact, StyleGAN2 has already achieved satisfying performances on many object-centric datasets, such as LSUN church and LSUN cat. But for the challenging TaiChi dataset, where the human body is far from aligned, StyleGAN2 struggles in learning such a complex distribution. Hence, we choose StyleGAN3 as the baseline on TaiChi dataset considering its effectiveness in learning from unaligned data. We also evaluate the generator with respect to its equivariance property to translation (measured by EQ-T, where higher number is better). The table below suggests that our MTM significantly improves the synthesis performance without sacrificing much equivariance property. Due to the limited time, however, we cannot conduct experiments on all datasets with both StyleGAN2 and StyleGAN3. | TaiChi-256 | FID | EQ-T | | :- | :-: | :-: | | StyleGAN2 + Fourier features | 23.89 | 9.36 | | StyleGAN3 | 21.36 | 45.87 | | *w/* MTM | 13.60 | 41.52 | --- Rebuttal Comment 1.1: Comment: I share some of reviewer zzjJ's concerns about novelty (Q1): the paper proposes delta_p = ModConv(x, z) (eq. 4) as the modulation mechanism, but the ModConv operation itself is not new, only the context in which it is applied. Was any testing performed to make sure the proposed mechanism is indeed better performing than alternatives (that also incorporate the latent vector)? As it stands it does indeed seem like two existing techniques were combined without much modification or analysis. --- Reply to Comment 1.1.1: Title: Discussion on Novelty Comment: Thanks for your comments. 
We hope the following discussions could help address the novelty concern. - Our MTM is **conceptually novel** in that we introduce a lightweight plug-in module into GANs that brings consistent performance gains across a range of architectures and datasets. The strong experimental results are appreciated by all reviewers, demonstrating the effectiveness and generalizability of our MTM. Hence, we hope our MTM could **play a fundamental role** in the future design of GANs, just like AdaIN introduced in StyleGAN. We believe our discovery, which has **never been explored before**, would be of great interest to most audiences in the field of GANs. - Our MTM is **logically well motivated** from the aspect of geometry variation modeling. To offer the generator in GANs an additional degree of freedom to handle geometry variation, we propose to introduce instance-aware learnable offsets to the convolution operation. In practice, this idea shares a similar philosophy with DCN (*i.e.*, deformable convolution), which has already been well evaluated and efficiently implemented in many discriminative tasks. Therefore, we choose to borrow the implementation of DCN instead of designing a new approach for offset learning in convolution. We believe the DCN authors already explored many alternatives before settling on deformable convolution as its final form. Our contribution is to incorporate the stochasticity into the offset learning process, which is essential to solving generative tasks. - We would also like to argue that making deformable convolution compatible with GANs is **not a very straightforward thing** (see the table transcribed below). As appreciated by Reviewer zzjJ, our work is the first to demonstrate the effectiveness of deformable convolution in the GAN literature, a discovery that we believe already serves as a strong contribution, let alone its simplicity and generalizability. 
| TaiChi-256 | FID | | :- | :-: | | StyleGAN3 | 21.36 | | *w/* deformable convolution | 192.65 | | *w/* MTM | 13.60 | Finally, with all due respect, we would like to point out that most fundamental designs are simple (such as the residual connection in ResNet and AdaIN in StyleGAN) as long as they are well motivated and empirically work. --- Rebuttal Comment 1.2: Title: Reply from Reviewer zzjJ on rebuttal. Comment: I sincerely thank the authors for their answers. I have no concerns about the performance gains that the proposed technology offers, and see potential for future usage of MTM in other GANs. I believe, however, that a NeurIPS paper should not only demonstrate that some (inherited from non-GAN literature) technology brings big improvements in GANs, but also provide novel lessons and some insights for the community. My current evaluation of this aspect is not high. In this regard, my two major concerns are: 1.1) The proposed MTM is taken from DCN. The idea (Equations 1,2,3) is exactly the same as in DCN. From the implementation perspective, the paper incorporates DCN into the modulated convolutions that were already part of StyleGAN. 1.2) I think that table Q1 is misleading. StyleGANs by default use noise modulation. In line 2 (w/ deformable convolution), the noise modulation is deactivated. Therefore, the high FID of 192.65 just shows that having at least some form of noise modulation is important for StyleGANs. It is not surprising that DCN+modulation performs better than just DCN. 2.1) The paper lacks explanations about the observed results. I agree with reviewer A6mm that "intuition about the properties of these new models" is missing. I agree with reviewer cDZK that the motivation for the module is explained vaguely and does not correspond to the results. 2.2) In the rebuttal, the answers for Q2 and Q4 are not informative. The results in the tables are not intuitive and not properly explained. 
Why is using 5x5 convolutions so harmful (while MTM also becomes somewhat 5x5 given the answer to Q3)? I still cannot understand the intuition of why the FID is improved so greatly with MTM, while techniques with a similar motivation do not improve it at all. To conclude, I still do not understand the novelty and explanation of "why" it works. I would be happy to continue the discussion with the following two questions: 1) Is it not true that the noise modulation being used comes straightforwardly from default StyleGAN modulation? If not, please explain the difference. 2) Please provide a concise comprehensive explanation of *why* using DCN brings such big gains in FID. --- Reply to Comment 1.2.1: Title: Clarification on the Implementation Comment: Thanks for your reply. After reading your further concerns, we think we have identified the gap between the message we want to deliver and your actual understanding. In the following, we would like to make some clarifications, which we will also include in the revision to avoid misunderstanding by readers. - First, we would like to reaffirm that there are **two** noise modulations in our proposed MTM, one for style modulation in the original StyleGAN and the other for transformation modulation, which distinguishes our MTM from the deformable convolution. Detailed explanations are as follows: 1. With $x$ and $z$ denoting the feature map and the latent code respectively, the conventional convolution can be formulated as $\texttt{Conv}(x)$. 1. Deformable convolution proposes to first learn an offset $t = \texttt{Conv}(x)$ from the feature map, which is then used to guide the convolution operation, written as $\texttt{DConv}(x, t) = \texttt{DConv}(x, \texttt{Conv}(x))$. 1. StyleGAN proposes style modulation, which uses the latent code to modulate the feature map, written as $\texttt{ModConv}(x, z)$. 1. 
In our MTM, we propose to incorporate deformation into GANs, where **both** the feature map and the learnable offsets are controlled by the latent code, as $\texttt{MTM}(x, z) = \texttt{DConv}(\texttt{ModConv}(x, z), \texttt{ModConv}(x, z))$. It is noteworthy that the learnable offset in DCN, $t = \texttt{Conv}(x)$, differs from the **stochasticity-aware** learnable offset in our MTM, $t = \texttt{ModConv}(x, z)$. - Second, we would like to argue that the modification from the learnable offset (*i.e.*, in DCN) to the stochasticity-aware learnable offset (*i.e.*, in MTM) is fundamental and essential. Recall our motivation that the convolution kernel in $\texttt{Conv}$ and $\texttt{ModConv}$ only interacts with the feature map at **fixed** locations. We would like to offer the model an additional degree of freedom to handle the geometry variation. Hence, we not only require the receptive field to be large but, more importantly, would like the receptive field to **vary across instances**. That is the reason why modulating the learnable offset with the latent code is important. - Third, you might have misinterpreted the results in Q1. For the experiment "*w/* deformable convolution", we do *not* disable the original style modulation in StyleGAN. Instead, we compare $\texttt{DConv}(\texttt{ModConv}(x, z), \texttt{ModConv}(x, z))$ with $\texttt{DConv}(\texttt{ModConv}(x, z), \texttt{Conv}(x))$ to validate the effectiveness of changing $t = \texttt{Conv}(x)$ to $t = \texttt{ModConv}(x, z)$. That is the reason why we claim in the previous response that generative models may follow a different philosophy from discriminative models. Directly applying DCN to GANs causes strong training instability. - Fourth, regarding the explanation of the results in Q2 and Q4, even though self-attention and a 5x5 kernel size could help enlarge the receptive field, the field is still **fixed** across instances. 
Instead, MTM provides a solution for the generator to vary the receptive field across instances. Now, we will answer the reviewer's follow-up questions. 1. The noise modulation being used does not only come from the default StyleGAN modulation, but is also applied to offset learning. 1. Using DCN in our MTM brings such big gains in FID because we offer the generator an additional degree of freedom to handle the geometry variation with a receptive field that varies across instances. 1. Main insights of this work: - Instance-wise variation is important for the design of GANs, which is also a clear difference between generative models and discriminative models. - The potential usage of MTM as a basic operation in the future design of GANs is already a sound contribution, which is also appreciated by the reviewer. We hope the above discussions could help address your concerns. Again, we will revise our manuscript according to our discussions to make the presentation clearer. Thanks again for your suggestions.
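To make the operator composition in the reply above concrete, here is a toy 1-D NumPy sketch (hypothetical shapes, kernels, and affine maps; the modulation and offset branches are simplified stand-ins for the actual StyleGAN/DCN layers, not the authors' implementation):

```python
import numpy as np

def mod_conv(x, z, w, affine):
    # Toy stand-in for ModConv(x, z): scale the kernel taps with a
    # latent-dependent style vector, then run a plain 1-D convolution.
    wz = w * (affine @ z)                 # latent-modulated kernel
    k, n = len(wz), len(x)
    pad = np.pad(x, (k // 2, k // 2))
    return np.array([pad[i:i + k] @ wz for i in range(n)])

def deform_conv(x, offsets, w):
    # Toy stand-in for DConv(x, t): each kernel tap samples x at a shifted
    # (integer, clipped) position instead of the fixed grid.
    k, n = len(w), len(x)
    out = np.zeros(n)
    for i in range(n):
        for j in range(k):
            p = int(np.clip(i + j - k // 2 + offsets[i, j], 0, n - 1))
            out[i] += w[j] * x[p]
    return out

def mtm(x, z, w, a_feat, a_off):
    # MTM(x, z) = DConv(ModConv(x, z), ModConv(x, z)): both the features
    # and the sampling offsets are modulated by the latent code z.
    feats = mod_conv(x, z, w, a_feat)
    off = np.round(mod_conv(x, z, w, a_off))      # one offset per position
    offsets = np.tile(off[:, None], (1, len(w)))  # broadcast over taps
    return deform_conv(feats, offsets, w)

rng = np.random.default_rng(0)
x, z = rng.normal(size=16), rng.normal(size=4)
w = np.array([0.25, 0.5, 0.25])
a_feat, a_off = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
y = mtm(x, z, w, a_feat, a_off)
```

Because the offsets depend on $z$, each sampled latent code induces its own sampling grid, which is the distinction drawn above between DCN's $t = \texttt{Conv}(x)$ and MTM's $t = \texttt{ModConv}(x, z)$.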
Summary: In this paper, the authors equip the generator in generative adversarial networks (GANs) with a plug-and-play module, termed the modulated transformation module (MTM). This module predicts spatial offsets under the control of latent codes, based on which the convolution operation can be applied at variable locations for different instances, and hence offers the model an additional degree of freedom to handle geometry deformation. Extensive experiments suggest that this approach can be faithfully generalized to various generative tasks, including image generation, 3D-aware image synthesis, and video generation, and is compatible with state-of-the-art frameworks without any hyper-parameter tuning. Strengths: The paper is well written and easy to understand. Weaknesses: 1. The proposed Spatial Temporal Latent Code Modulation is somewhat similar to deformable convolution. 2. The comparison methods in Table 1 are too old. StyleGAN2 and StyleGAN3 were published in 2020 and 2021, respectively, and need to be compared with more recent methods published in 2022 and 2023. 3. In Table 2, the authors also need to compare with more recent methods from 2023 such as [1,2]. 4. The model complexity should be compared with the SOTA methods, such as training time, inference time, model parameters, etc. [1] Xie, Jiaxin, Hao Ouyang, Jingtan Piao, Chenyang Lei, and Qifeng Chen. "High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 321-331. 2023. [2] Shi, Zifan, Yujun Shen, Yinghao Xu, Sida Peng, Yiyi Liao, Sheng Guo, Qifeng Chen, and Dit-Yan Yeung. "Learning 3d-aware image synthesis with unknown pose distribution." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13062-13071. 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. About "the proposed Spatial Temporal Latent Code Modulation is somewhat similar to deformable convolution".** About the rationale of performing convolution at variable locations, our philosophy is indeed similar to the family of spatial transformation networks, to which deformable convolution belongs. However, different from discriminative tasks, generative tasks require the model to learn data variations. That is the reason why we introduce latent code modulation into our MTM, which is able to link the sampling stochasticity to the generation process. The table below suggests that, without such a modulation, directly applying deformable convolution to GANs drastically harms the synthesis performance. To the best of our knowledge, we are the first to make the idea of spatial transformation work on generative models, and we also demonstrate its effectiveness and generalizability across various generation tasks, including image generation, video generation, and 3D-aware image synthesis. We hope our discovery could make MTM a basic operation in the future design of GANs. | TaiChi-256 | FID | | :- | :-: | | StyleGAN3 | 21.36 | | *w/* deformable convolution | 192.65 | | *w/* MTM | 13.60 | **Q2. About "the comparisons against StyleGAN2 and StyleGAN3 being too old".** Disagree. Even now, StyleGAN2 and StyleGAN3 still serve as strong baselines for GAN-related studies, especially for works that target architecture design. For example, the very recent text-to-image generation works [a][b] are developed from StyleGAN2 and StyleGAN3 as well. As in this work we would like to propose a general and fundamental operation for GANs, using StyleGAN2 and StyleGAN3 as our baselines is fair. Still, following your suggestion, we evaluate our MTM on GigaGAN [b], which is currently the most cutting-edge algorithm in this field. The results are shown below, where we can observe that MTM even manages to improve the performance of such a powerful model. 
We hope that the additional results could help address your concern. | ImageNet-64 | FID | | :- | :-: | | GigaGAN | 7.62 | | *w/* MTM | 6.73 | NOTE: The GigaGAN results are reproduced by ourselves due to the fact that the official implementation is not open-sourced. [a] StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis. Sauer *et al.* ICML'23. [b] Scaling up GANs for Text-to-Image Synthesis. Kang *et al.* CVPR'23. **Q3. About the comparison with more recent methods in 2023.** With all due respect, [c] solves a different task (*i.e.*, 3D GAN inversion) from this work, making it hard to compare our MTM (which aims to improve the GAN model itself) with it. The table below includes our comparison with PoF3D [d], where our approach achieves better performance. | FFHQ-256 | FID | | :- | :-: | | PoF3D [d] | 4.99 | | MTM (ours) | 4.07 | [c] High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization. Xie *et al.* CVPR'23. [d] Learning 3d-aware image synthesis with unknown pose distribution. Shi *et al.* CVPR'23. **Q4. "The model complexity should be compared with the SOTA methods, such as training time, inference time, model parameters, etc."** We transcribe Tab. 4 of our manuscript below, which indicates that our MTM works as a lightweight plug-in module. | ImageNet-128 | FID | Training time | Inference time | # Param. (MB) | | :- | :-: | :-: | :-: | :-: | | StyleGAN2 | 21.14 | 1.0× | 1.0× | 27.78 | | *w/* MTM | 19.16 | 1.2× | 1.0× | 28.55 | --- Rebuttal Comment 1.1: Comment: Q2: I agree with the authors here. SG2/3 are time-tested architectures that many more complicated methods build upon, so studying MTM in this setting makes sense. This is a non-issue to me, especially with the inclusion of the GigaGAN results.
Summary: The paper proposes a modulated transformation for GANs. Specifically, they propose learning the offset of each convolutional layer by learning additional convolution layers to predict offsets, where the inputs are the latent code and the current intermediate layer features. The proposed method improves the expressive power of the overall GAN framework and shows notable improvements in various situations, including GAN-based image, video, and 3D-aware generation. Strengths: - The paper is generally well-written and easy to follow. - The proposed method is simple yet effective while showing quite a strong performance in a variety of tasks. - The analysis (e.g., which layer should use this concept, the role of this learned transformation) is interesting. Weaknesses: - The paper argues the motivation of this paper is the limitation of AdaIN in handling complex data distributions (e.g., ImageNet). But for me, the motivation seems a bit unclear as to how this learned modulated transformation can mitigate this issue. It seems like the paper claims the proposed method helps to mitigate the limitation of AdaIN, which acts at every spatial location equally, by learning (possibly) non-local offsets for convolutions. Then one might expect the performance gain would be dramatic on complex datasets such as ImageNet, but it seems the gain is a bit marginal here and rather larger on other datasets or situations. Thus I suspect the improvement in performance simply comes from increased expressive power over the prior GAN architecture, not anything directly related to AdaIN. - As mentioned in the discussion section, I think it would be worth showing whether this type of architecture can improve the performance of other types of generative models, as the proposed method does not modify AdaIN itself and thus can be used in any type of convolutional network. 
For instance, can we expect better performance from 2D UNet-based diffusion models if we replace some convolutional layers with this method? - For 3D-aware generation, I expect the paper to show that EG3D+MTM also maintains great 3D consistency, rather than just showing the improvement in FID score. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How much does memory usage increase during training (and inference) with the proposed method, compared with the baselines? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper appropriately addresses the limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. About the motivation of mitigating the limitation of AdaIN.** This should be a misunderstanding. AdaIN has become a standard operation in GANs, which helps model the cross-instance variation. This work inherits the AdaIN design and does *not* intend to improve this operation. Instead, our point is that "the conventional convolution interacts with the feature map at *fixed* locations, resulting in limited expressive power of handling *spatial* variations". We suspect that this might be the reason why GANs usually work well on datasets where all instances have a similar shape (*e.g.*, human face), but perform poorly on datasets where instances have large shape variations (*e.g.*, human body movements in TaiChi). This motivates us to propose MTM, which allows the convolutional kernels to act on the feature map at *variable* locations. It is noteworthy that MTM is also controlled by the latent code (similar to AdaIN) such that different samples will be convolved at different locations. Experiments suggest that MTM indeed helps improve the performance of GANs on datasets with large shape variations, such as TaiChi (FID from 21.36 to 13.6) and LSUN churches (FID from 4.04 to 2.32). We also included some qualitative comparisons in Fig. R2 (see **the newly uploaded one-page PDF**). **Q2. About “performance gain on ImageNet is marginal.”** With all due respect, we do *not* think improving the FID of StyleGAN2 on ImageNet-128 from 21.14 to 19.16 is marginal. Considering the fact that ImageNet has a complex data distribution, improving the generative performance on ImageNet is not easy, especially given that we only introduce a lightweight plug-in module (*i.e.*, MTM) without any other modifications or hyper-parameter tuning. Meanwhile, StyleGAN2 only employs around 30 million parameters, which could be the major bottleneck under such a challenging setting. 
To verify this hypothesis, we evaluate our approach on a more capable GAN model (*i.e.*, GigaGAN [a]), and confirm that our MTM can also improve the performance of such a strong baseline (with 210 million parameters). We hope that the additional results could help address your concern. | ImageNet-64 | FID | | :- | :-: | | GigaGAN | 7.62 | | *w/* MTM | 6.73 | NOTE: The GigaGAN results are reproduced by ourselves because the official implementation is not open-sourced. [a] Scaling up GANs for Text-to-Image Synthesis. Kang *et al.* CVPR'23. **Q3. About “the improvement of the performance is simply from the increased expressive power, not directly related to AdaIN”.** You are correct that our MTM indeed offers a promising way to enhance the expressive power of generators in GANs, with only a few additional parameters and little computational overhead (see the table below). Meanwhile, as stated in Q1, this work does *not* target improving the AdaIN operation. In fact, AdaIN has inspired style modulation, which provides a good solution to modeling the cross-instance variation. Its core idea is to use the latent code to control the feature map modulation. However, style modulation usually works with conventional convolution, where the convolution position is shared across instances. Our key motivation is to offer the generator an additional degree of freedom (*i.e.*, the spatial convolutional locations), which is also controlled by the latent code. We are sorry for the misunderstanding we have caused, and we will revise the presentation to make our motivation clearer. | ImageNet-128 | FID | Training time | Inference time | # Param. (MB) | | :- | :-: | :-: | :-: | :-: | | StyleGAN2 | 21.14 | 1.0× | 1.0× | 27.78 | | *w/* MTM | 19.16 | 1.2× | 1.0× | 28.55 | **Q4. About applying MTM to 2D UNet-based diffusion models.** Diffusion models adopt a different philosophy from GANs to model the data variation. More concretely, there is no concept of "latent code" in diffusion models. 
Recall that the rationale behind MTM is to use the latent code to modulate the convolution positions for each instance. From this perspective, it is not straightforward to test MTM on diffusion models. **Q5. About the 3D consistency evaluation on the task of 3D-aware generation.** Thanks. Following EG3D, we calculate the depth error for 3D consistency evaluation. The results are listed in the table below, where a smaller depth error means better 3D consistency. We can tell that our MTM improves the synthesis performance (*i.e.*, measured by FID) without sacrificing the 3D consistency. We also include a new figure (Fig. R1) in **the newly uploaded one-page PDF** to visualize the geometry of synthesized samples from various viewing points. All these results will be included in the revision. | FFHQ-256 | FID | Depth Error | | :- | :-: | :-: | | EG3D | 4.32 | 0.328 | | *w/* MTM | 4.07 | 0.336 | **Q6. About the training/inference memory comparison against baselines.** Following the suggestion, we list the GPU memory cost of both training and inference stages in the table below. Thanks to the lightweight design of our MTM, the memory cost barely increases. | ImageNet-128 | Training (batch size 128) | Inference (batch size 1) | | :- | :-: | :-: | | StyleGAN2 | 32.17G | 2731M | | *w/* MTM | 32.34G | 2751M | --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the detailed response. For Q1 and Q3, I now understand the point, but please edit your introduction slightly if the paper is accepted to reflect your response, as two paragraphs mention AdaIN in the section and some readers might misunderstand the purpose of the work. For Q2, I am not saying the performance gain on ImageNet is marginal; I wanted to say the improvement seems "relatively" marginal. 
As you mentioned, ImageNet is a complex dataset and I expected the improvement on this dataset to be much larger considering the motivation; but it seems the improvement is larger on other fine-grained datasets (e.g., TaiChi). That being said, considering the results with GigaGAN, I understand the point. For Q4, I understand the point. For Q5, I wonder why the depth error becomes larger. To me, the FID improvement and the performance drop in depth error seem comparable, and I have a concern about whether this method really works well in 3D generation as well. At least the authors should provide some analysis of these results, not just stating "without sacrificing the 3D consistency.", because one can state the opposite side as well with the provided results: EG3D w/o MTM works better by achieving better 3D consistency without sacrificing FID. If my Q5-related concerns are addressed, I will raise my score. --- Reply to Comment 1.1.1: Title: Response to Q5 Comment: We are glad that our previous responses have addressed most of your concerns and we will make our introduction clearer to avoid misunderstanding in the revision. Thanks again for your suggestions. In the following, we would like to provide further explanations on Q5. - First, please allow us to recapitulate the **setting and instantiation** of 3D-aware image synthesis. Recall the formulation of EG3D, which first employs a 2D backbone to generate triplane features and then decodes the triplane features to a 2D image via volumetric rendering. We would like to clarify that the proposed MTM is directly applied to the 2D backbone, in the same way it is applied to 2D image generation (*e.g.*, TaiChi), and we inherit the rendering pipeline from EG3D. - Second, we provide **detailed analyses** of the comparison results regarding FID and depth error, which are transcribed below. 
(i) Learning 3D-aware image synthesis from 2D datasets is challenging and hence previous attempts usually observe a performance (*i.e.*, quality and diversity of the synthesized images) gap between 2D generators and 3D-aware generators. For example, the state-of-the-art 3D-aware model, EG3D (FID 4.32), is still left behind by the 2D model, StyleGAN2 (3.78). Our MTM helps *narrow down this gap (from 0.54 to 0.29) with only marginal computational overhead*. (ii) The reviewer is right that 3D consistency is an important metric to evaluate 3D-aware image synthesis. However, we need to recap that our module is only applied to the 2D backbone and enables 2D learnable deformation offsets, hence it could barely contribute to the learning of 3D geometry due to the lack of explicit 3D supervision or 3D modeling. Here, by saying "without sacrificing the 3D consistency", we would like to deliver the message that *we are encouraged that the introduction of our MTM does not weaken the model's capability of learning 3D geometry*. | FFHQ-256 | FID | Depth Error | | :- | :-: | :-: | | StyleGAN2 | 3.78 | - | | EG3D | 4.32 | 0.328 | | *w/* MTM | 4.07 | 0.336 | - Third, we would also like to point out the **limitation of existing approaches for 3D-aware image synthesis**, which heavily rely on a well-defined canonical space. Hence, existing methods are commonly evaluated on well-aligned datasets (such as human faces) and also require the ground-truth object pose of each training sample, hindering them from being applied to datasets with large shape variations. That is the reason why we could not evaluate the compatibility between our MTM and 3D-aware image synthesis on a more challenging dataset. - Fourth, inspired by your comments, we would like to discuss some **future work** of our MTM. For example, it is possible to introduce our MTM to both the 2D backbone and the triplane features, to further offer the generator a degree of freedom regarding 3D geometry deformation. 
We will include the above discussion in the revision to help readers better interpret the experimental results, as well as understand the scope of this work. Thank you for pointing this out to us.
Rebuttal 1: Rebuttal: Thank all reviewers for their valuable comments and suggestions. We additionally included some geometry of synthesized samples from various viewing points and qualitative comparisons in **the newly uploaded one-page PDF**. Pdf: /pdf/a8a586b04a9eb575a2bcd1fcb6c968d18eeee0af.pdf
NeurIPS_2023_submissions_huggingface
2023
Towards robust and generalizable representations of extracellular data using contrastive learning
Accept (poster)
Summary: The authors propose a contrastive learning method for obtaining representations of extracellular recordings which could be used for spike sorting and cell type classification. A Transformer-based encoder is used to generate low-dim representations of random views of spike waveforms which are then compared using a contrastive loss. The authors evaluate the proposed method on simulated and real extracellular datasets, showing that it outperforms PCA embeddings for spike sorting and WaveMap on zero-shot cell-type classification. Strengths: + A nice application of modern machine learning methods (contrastive learning + transformers) to a classical problem (spike sorting) + Clear presentation Weaknesses: - I found the notation in Section 3 to be a bit confusing. E.g. \max over m and t in line 136 of A_{n, M} which doesn't have m and t in its indices. Superscripts appear in line 149, even though W_{n, M} was defined without them in line 124. I think the notation could be simplified or at least made more consistent. - The experimental evaluation is limited. The authors show the proposed method outperforms PCA on spike sorting, but typically PCA is not the only tool used for spike sorting. I am not sure I understand how close CEED gets to state-of-the-art spike sorting methods or manual spike sorting based on a variety of tools (PCA, autocorrelations, visualisations, etc.) - Ablation experiments could be useful as well. For example, it would be interesting to see if any of the proposed augmentations are more useful than the others or the effect of the transformer architecture choices. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * How does CEED compare to state-of-the-art spike sorting methods? The ARI scores above 90 suggest it is quite similar to Kilosort in terms of spike sorting performance, would it be fair to say that CEED performs on par with state-of-the-art spike sorting methods? 
* Have you tried comparing to non-linear embeddings rather than PCA (e.g. UMAP or autoencoders)? Would you believe CEED would outperform these methods as well? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I found the limitations section to be adequate and clearly written. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their strong assessment of our work and for their detailed comments and questions. The review is very useful and led us to some changes that strengthened the paper significantly. In the following response, we address each point raised by the reviewer. - The reviewer correctly pointed out that the notation in Section 3 was sometimes confusing or inconsistent. We agree with this assessment and will make the notation clearer and more consistent in the final version of the paper. Specifically, we will make sure the superscripts and subscripts are used consistently and correctly. We will also try to remove unneeded subscripts. - The reviewer pointed out that the experimental evaluation is limited and that the baselines (PCA and denoised PCA) are insufficient given the rich literature of spike sorting methods. This is a valid concern and highlights a clarity issue with our paper. We want to emphasize that *CEED is not a spike sorting method, but rather a feature extraction method for spike sorting and cell-type classification.* While there are a number of spike sorting methods for MEAs, almost all of these methods utilize PCA for feature extraction (Klusta, HerdingSpikes2, Mountainsort4, SpykingCircus, Tridesclous, and even Kilosort uses a form of SVD/PCA). We argue that any method that uses PCA would benefit from switching to our more discriminable and robust features. Since directly incorporating CEED into these methods is a non-trivial coding challenge (especially given the short rebuttal window), we plan to add a new experiment to the final manuscript where CEED is incorporated into a simple spike sorting pipeline which includes detection, featurization, clustering, and template matching. We will use SpikeInterface for this analysis. - The reviewer asked how CEED compares to a state-of-the-art spike sorting method such as Kilosort. 
It is important to note that Kilosort has a number of additional processing steps (cluster splitting and merging, template matching, etc.) which make direct comparison challenging. However, as stated in the above paragraph, CEED is a feature extraction method which can be utilized by full spike sorting pipelines. - The reviewer asked if we have compared CEED to any non-linear methods (i.e. umap or autoencoders). This is a great question as without any comparisons to non-linear methods it is unclear if the performance gains come from the non-linearity or the proposed data augmentations and contrastive learning objective. For the cell-type classification experiments, we refer the reviewer to Figure 3 where we provide a direct comparison to a non-linear method, WaveMap, which utilizes UMAP. We show that CEED is slightly stronger than WaveMap even without training on the dataset they provide. For spike sorting, we did not compare to a non-linear baseline in the submitted manuscript. To correct for this, we now include results in Table 1 of the attached PDF and the General Response showing that CEED also significantly outperforms a non-linear baseline (an autoencoder) on spike sorting. We thank the reviewer for this suggested experiment as it strengthens the manuscript considerably. - The reviewer mentions that ablation experiments would be useful. We completely agree and have included an ablation study of CEED’s data augmentations (Tables 2 and 3) in the attached PDF. We also benchmark an MLP architecture for CEED which has comparable performance to our SCAM architecture (see Table 1, Column 1). --- Rebuttal Comment 1.1: Comment: Thank you very much for addressing my comments and providing additional results in the attached PDF. The rebuttal confirms my positive evaluation of the paper. I don't have any further questions at this point.
Summary: Rebuttal Update: I thank the authors for answering my questions and for the changes and additional experiments that they conducted. I have raised my score accordingly. This paper proposes a contrastive framework to do spike sorting and cell type identification. Strengths: The paper is well written. It is easy to follow and the motivation is clear. The experiments are sound. The data augmentations encode very useful inductive biases for spike sorting. Weaknesses: The paper lacks more extensive benchmarking. There is a whole zoo of spike sorting methods and a dedicated benchmark (e.g., https://www.nature.com/articles/s41592-020-0902-0) – limiting the comparison to PCA and one recent method for clustering is not satisfactory. Classical spike sorting builds on a lot of theory. Proposing a learned method instead is interesting, but it is important to check how this performs against classic methods more extensively out of distribution. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It seems like the training data already requires knowledge of the spikes. Does this mean that the model cannot be trained unsupervised from MEA recordings alone? Can you check if your learned features are actually becoming invariant to your data augmentations? I remember work showing that, in practice, this is often not the case. Please give a more extensive discussion of refs 41-43. How do they differ from your approach? Could one just use CEBRA to do the same? Is there no next token prediction in your transformer? If not, why do you use causal masks? Line 95, ref 45 is from 2015 – please look at more recent reviews on Spike Sorting. I am skeptical that PCA is still ubiquitous, that seems like a straw man. Lines 99-106: Please comment on functional, anatomical and genetic cell type classification approaches. Electrical profiles are only one way. 116 -are. Also, this only gives approx. 
invariance (see above) 121 Please define MEA 145-147 please clarify 161 as -> a 177 is -> are Are there no tokens in your transformer? I.e., no discrete symbols? Why did you pick K=5? This seems low? Please comment on hyperparameter search. 190 SimCLR did not invent this loss function. Please cite the original references. 192 Did you perform an ablation on the projection MLP? What happens? 205 Only 10 neurons? Does your approach scale to big/relevant modern datasets (i.e., Kilosort...)? 227 If you have ground truth labels, why not compute accuracy (after optimal permutation with, e.g., Hungarian algorithm)? Confidence: 4: You are confident in your assessment, but it is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I really want to know how classic spike sorting algorithms (including the whole BSS pipeline and source recovery) compare to your approach as you are moving more out of distribution. My intuition is that learned (deep) approaches break down faster in those regimes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their very detailed review and for their useful feedback. We hope to address their concerns point-by-point in the following response. - The reviewer pointed out that the manuscript lacks extensive benchmarking, especially given the large number of spike sorting methods currently available for MEAs. This is a valid concern and highlights a clarity issue with our paper. We want to emphasize that *CEED is not a spike sorting method, but rather a feature extraction method for spike sorting and cell-type classification*. While there are a number of spike sorting methods for MEAs, almost all of these methods utilize PCA for feature extraction (Klusta, HerdingSpikes2, Mountainsort4, SpykingCircus, Tridesclous, and even Kilosort uses a form of SVD/PCA). We argue that any method that uses PCA would benefit from switching to our more discriminable and robust features. Since directly incorporating CEED into these methods is a non-trivial coding challenge (especially given the short rebuttal window), we plan to add a new experiment to the final manuscript where CEED is incorporated into a simple spike sorting pipeline which includes detection, featurization, clustering, and template matching. We will use SpikeInterface for this analysis. - The reviewer correctly pointed out that the method needs more extensive benchmarking for out-of-distribution spike sorting datasets. We completely agree with this criticism and have added an experiment to quantify the performance of our method on out-of-distribution (OOD) data. We show that CEED outperforms all baselines (including a new non-linear autoencoder) on spikes from neurons outside the training set. Please see Table 1 in the attached PDF and the General Response for a summary of this analysis. - The reviewer asked if CEED requires knowledge of the spikes and cannot be trained without supervision. 
CEED does require spike times and channel positions for all the spikes in the training set. These are easy to obtain experimentally. However, CEED does not require spike sorted data and can be trained after a simple voltage-based thresholding detection step. We agree it would be an interesting future work to find useful embeddings of extracellular data without spike detection. - The reviewer asked us to check if the proposed data augmentations actually induce approximate invariance in CEED’s representations. This is an important point and something we visualize in supplementary figure 6. We further extend this analysis in the attached PDF by adding a visualization of embeddings from 3 different neurons with all data augmentations (please see Figure 1). - The reviewer asked us to provide more discussion of refs 41-43 and how they differ from our method. References 41-43 all applied their method to neural response data (processed spike trains, obtained after spike sorting) and not to raw detections from extracellular data (before spike sorting). To our knowledge, we are the first contrastive learning method for extracellular data and had to develop specific data augmentations for this data modality. A large difference with CEBRA is that CEBRA explicitly mentions that they do not utilize data augmentations which could make the learned representations sensitive to the nuisance variables that are prevalent when working with MEAs (e.g. collisions). We would be happy to add this explanation to the final manuscript. - The reviewer asked about SCAM’s causal mask. As each next token in SCAM is conditioned on all the previous tokens and the last token is then used as the representation, we thought a causal mask would be appropriate. - The reviewer asked if there are no tokens in our transformer, i.e., no discrete symbols. For CEED, the signal at each time point of the data corresponds to a single "token" in our model. 
However, this value is continuous unlike the discrete inputs to a language transformer. Correspondingly, instead of a look-up table typically used in language transformers to map the discrete tokens into a higher dimensional embedding, we simply use a trainable linear layer to map the signal at each time point to a high dimensional embedding. - The reviewer asked us to add discussion about functional, anatomical, and genetic cell-type classification. We will add this discussion in the final manuscript. - The reviewer asked why we chose K=5 for the representation. We chose K=5 because the clustering performance and our heldout KNN metric saturate at this dimensionality. We would be happy to add this analysis to the final version of the paper. - The reviewer asked if we performed an ablation of the projection MLP. We found that removing the MLP hurt the performance of our method. We would be happy to include this analysis in the final manuscript. - The reviewer asked if CEED can be scaled to big/relevant modern datasets which have potentially 100s of neurons. It is important to note that the clustering step in many spike sorting algorithms is performed on a local spatial neighborhood of channels which has anywhere from 1-10 neurons. Therefore, CEED can be used with this approach to cluster large extracellular datasets. We test this hypothesis in supplementary figure 5 where we show that CEED can be trained and tested on a much larger dataset (spikes from 400 neurons) and still has large performance gains over the baseline feature extraction methods. In Table 1 of the attached PDF, we also show that when CEED is trained on spikes from 100s of neurons, it is able to generalize well to OOD data. - The reviewer asked why we did not evaluate our spike sorting experiments using the accuracy after optimal permutation with the Hungarian algorithm. 
We can add this metric to the final manuscript, however, we do not expect the conclusions to change as this metric is very correlated with the ARI. - The reviewer pointed out a few errors in the writing (missed citations, spelling, and abbreviations) that we will address in the final manuscript.
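To make the relationship between the two metrics concrete, the sketch below (illustrative code, not taken from the paper or rebuttal) computes both the Hungarian-matched accuracy the reviewer asked for and the ARI on a toy clustering; on the same partition with permuted cluster ids, both metrics agree:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, confusion_matrix

def hungarian_accuracy(y_true, y_pred):
    # Match predicted cluster labels to ground-truth labels so that the
    # number of agreeing spikes is maximized, then report plain accuracy.
    cm = confusion_matrix(y_true, y_pred)
    rows, cols = linear_sum_assignment(-cm)  # negate counts to maximize
    return cm[rows, cols].sum() / cm.sum()

# Toy example: the same partition of six spikes, with permuted cluster ids.
y_true = np.array([0, 0, 0, 1, 1, 1])
y_pred = np.array([1, 1, 1, 0, 0, 0])

acc = hungarian_accuracy(y_true, y_pred)   # 1.0 after optimal matching
ari = adjusted_rand_score(y_true, y_pred)  # 1.0 as well
```

For well-separated clusterings the two scores move together, which is the correlation the response refers to.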
Summary: This paper presents a novel method for self-supervised learning of useful representations for data from extracellular, multielectrode recordings in electrophysiology. The method uses a transformer architecture with causal spatiotemporal attention masks, and contrastive learning based on a set of desirable and relevant invariances. The paper describes the proposed neural architecture and training method, and tests them on spike sorting and cell-type classification (two standard problems for this type of data), on synthetic as well as real data from publicly available databases. Strengths: - the design choices for the model (including the details of the invariances used for CL) are well explained. - the proposed method performs well, beating denoised PCA in spike sorting and WaveMap in cell-type classification. - the paper is overall well written. - the limitations of the method are clearly signposted. Weaknesses: - as reported by the authors, the method is currently slow to train and it is untested for spike sorting on more diverse data with waveforms that could have very different shapes than those in the training data (which would be the most useful case for general usage). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Could you give some more information about the practical computational and data requirements for the method? In particular, I could not find exact details on the hardware used for training (and the time required). It would also be great to have an additional analysis showing for instance how the results in figure 2 change as the amount of training data is changed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations of the method are discussed adequately. The paper mentions potential environmental issues related to the heavy computational requirements for model training, as well as possible mitigation strategies. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their strong assessment of our work and for their useful questions/feedback. We hope to address their concerns and questions in the following response. - The reviewer correctly pointed out that CEED is untested for spike sorting of more diverse data with waveforms that have different shapes than the training data. We agree with the reviewer that our experiments, which focused on CEED’s in-distribution (ID) performance, were insufficient for showing the general usability of the method. To address this, we include a new experiment to quantify the performance of our method on out-of-distribution (OOD) data. To strengthen our baselines, we also benchmark a non-linear baseline (an autoencoder) to show that the increase in performance comes from more than just the non-linearity in CEED. Please see Table 1 in the attached PDF and Global Response section for a summary of these results. We thank the reviewer for suggesting this experiment as we believe it strengthens the submission significantly. - The reviewer asked about the computational complexity and hardware used for training our method. For our spike sorting experiments, we utilized 16 NVIDIA V100s in parallel. Our runtime was 35 seconds per epoch for the 10 neuron, 200 spike, 11 channel model. For the 400 neuron, 200 spike, 11 channel model, the runtime was 3.1 minutes per epoch. - The reviewer mentioned that the computational complexity of our method is a weakness of the model. We note that this concern was shared across many of the reviewers so we provide a detailed response in the Global Response section. To briefly address the point here, we agree that our originally proposed model had significant computational complexity which limited its applicability to new datasets. 
In order to address this, we: (1) sped up the data augmentation significantly by only computing the augmentations on a small neighborhood of channels, (2) proposed an alternative MLP architecture that also has strong performance and is much faster to train (only requires 1 V100 GPU), (3) quantified CEED on OOD datasets to show that even without training, CEED can outperform the current baselines for spike sorting or cell-type classification. While we are still looking into ways to improve the computational complexity of SCAM, this is challenging to address as transformers are still an active area of research. Recent progress in quantization [1] and acceleration software [2] offer promising solutions to transformers’ runtime issues, but incorporating these methods into CEED would be a future direction. - The reviewer mentioned that it would be useful to show how the results in figure 2 change as the amount of training data is changed. This is a great suggestion and we would be happy to include this analysis in the final manuscript. As can be seen in Tables 3 and 4 of our attached PDF, CEED still outperforms all baselines when using only 200 spikes from each neuron (rather than the 1200 spikes we use in figure 2). [1] Liu, Zhenhua, et al. "Post-training quantization for vision transformer." NeurIPS 2021 [2] Ren, Jie, et al. "ZeRO-Offload: Democratizing Billion-Scale model training." 2021 USENIX Annual Technical Conference. 2021. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions and for the additional work! I confirm my score.
Summary: The paper proposes an approach based on transformers and contrastive learning for tackling problems from extracellular recordings, including spike sorting and cell type classification. Experiments show its validity over standard PCA and traditional approaches in the field. Note after rebuttal: I have appreciated the work on the revised manuscript, and raised the score accordingly. A concern on the rigor of the model selection is still present, though. Strengths: - The topic is fascinating - The paper is well written - The potential impact of the results is promising Weaknesses: - The experimental setup is not fully convincing Technical Quality: 3 good Clarity: 3 good Questions for Authors: - One of the main issues that I see in this work is also pointed out in the limitations section: the computational complexity of the approach. While the performance seems interesting, the much higher computational complexity compared to the baselines risks reducing the relevance of the analysis. I suggest the authors clearly indicate in the paper the number of trainable parameters, the computational complexity and an analysis of the computational cost required for training the proposed method, in comparison to the literature approaches. - As far as I could see, no model selection on the hyper-parameters is performed. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think the authors correctly indicated a major potential limitation of the approach, namely its huge computational costs compared to literature alternatives. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging that the paper is well-written, interesting, and that the results are promising. Since the reviewer has highlighted these strengths, we hope that by addressing their concerns, we can improve the rating. - The reviewer indicated that the experimental setup is not fully convincing. We agree with the reviewer that our spike sorting baselines and analyses could be improved. To address this, we include two new experiments to quantify (1) the performance of our method on out-of-distribution (OOD) data, (2) the performance of our method in comparison to a non-linear autoencoder. Please see Table 1 of the attached PDF and the Global Response section for a summary of these results. As morphoelectrical cell-type classification is still very much an open problem, we utilize the experimental setup and metric introduced in the Eric Kenji Lee 2021 paper. We feel this is a fair way of comparing our method to WaveMap. We do agree that quantifying our method and other cell-type classification methods with functional measures (cell-type selectivity, brain region cell-type distributions, etc.) is an exciting direction and would make for interesting future work. - The reviewer indicated that they did not see a section about model selection of the hyper-parameters. In supplement A.2, we discuss hyperparameters and how the learning rate was tuned using a heldout validation set (separate from the test set). We also did some hyperparameter tuning to choose the optimizer (‘adam’ vs. ‘sgd’) and representation size. We will include these details in the supplement of the final paper. We agree that more rigorous hyperparameter tuning could improve the performance of the model, however, we believe that it is a strength of the method that we already have strong results without too much hyperparameter tuning. - The reviewer expressed concerns about the computational complexity of our method. 
We note that these concerns were shared across other reviewers so we provide a detailed response in the Global Response section. To briefly address the point here, we agree that our originally proposed methods had significant computational complexity which limited their applicability to new datasets. In order to address this we: (1) sped up the data augmentation significantly by only computing the augmentations on a small neighborhood of channels, (2) proposed an alternative MLP architecture that also has strong performance and is much faster to train (only requires 1 GPU), (3) quantified CEED on OOD extracellular spikes to illustrate that even without training on new data, CEED’s features are more discriminable than the baselines’ features. While we are still looking into ways to improve the computational complexity of SCAM, this is challenging to address as transformers are still an active area of research. Recent progress in quantization [1] and acceleration software [2] offer promising solutions to transformers’ runtime issues, but incorporating these methods into CEED would be a future direction. [1] Liu, Zhenhua, et al. "Post-training quantization for vision transformer." NeurIPS 2021 [2] Ren, Jie, et al. "ZeRO-Offload: Democratizing Billion-Scale model training." 2021 USENIX Annual Technical Conference. 2021. --- Rebuttal Comment 1.1: Title: Follow up after rebuttal Comment: Thank you for your work. I have appreciated the work on the revised manuscript, and raised the score accordingly. A concern on the rigor (and completeness) of the model selection is still present, though. --- Reply to Comment 1.1.1: Title: Hyperparameter selection experiments Comment: We greatly appreciate that the reviewer raised their score because of our additional experiments. Based on their response, however, we realize that not including hyperparameter/model selection experiments was a weak point of our original manuscript and rebuttal. 
To address this, we have run additional experiments to understand the effect of the *representation size*, *learning rate*, *batch size*, and *number of hidden layers in the MLP*. For all experiments, we compute the KNN decoding accuracy (common in contrastive learning literature) and our GMM clustering ARI metric on a heldout validation dataset of 10 neurons. Our training set consists of 200 spikes from 400 neurons for all the models. The results are summarized below.

| Representation Dimension | 2D | 3D | 4D | 5D (current) | 6D | 10D |
|--------------------------|-----------|-----------|-----------|---------------|-----------|-----------|
| KNN | .63 | .88 | .94 | **.96** | **.96** | **.96** |
| GMM | .44 ± .01 | .67 ± .02 | .80 ± .06 | **.82 ± .07** | .79 ± .09 | .68 ± .04 |

| Learning Rate | 1e-4 | 1e-3 (current) | 5e-3 |
|---------------|-----------|----------------|-----------|
| KNN | **.97** | .96 | .95 |
| GMM | .81 ± .10 | **.82 ± .07** | .81 ± .06 |

| Batch Size | 128 | 256 | 512 (current) |
|------------|-----------|-----------|---------------|
| KNN | .95 | **.96** | **.96** |
| GMM | .80 ± .09 | .79 ± .08 | **.82 ± .07** |

| MLP Num Hidden Layers | 2 | 3 (current) | 4 |
|-----------------------|---------------|-------------|-----------|
| KNN | **.96** | **.96** | **.96** |
| GMM | **.83 ± .09** | .82 ± .07 | .82 ± .09 |

The MLP architecture we used for the rebuttal had a representation size of 5D, a learning rate of 1e-3, a batch size of 512, and 3 hidden layers. We hope these experiments bring confidence to the reviewer that our model choices were appropriate and that our model is not overly sensitive to any of the hyperparameters. It is important to note that the GMM clustering performance does suffer as the dimensionality of the representation size becomes large (10D). 
We believe this is a limitation of our specific clustering algorithm, however, as the KNN decoding accuracy is still quite high at 10D.
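The two validation metrics used throughout these experiments can be sketched as follows on synthetic, well-separated embeddings (all data and shapes here are illustrative stand-ins; the real metrics are computed on CEED's learned spike embeddings):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Stand-in for 5-D embeddings of 100 spikes from each of 3 neurons,
# placed around three well-separated, deterministic cluster centers.
n_per, dim = 100, 5
centers = 10.0 * np.eye(3, dim)
emb = np.concatenate([c + rng.normal(scale=0.5, size=(n_per, dim)) for c in centers])
labels = np.repeat(np.arange(3), n_per)

# Split into train/validation halves.
idx = rng.permutation(len(emb))
tr, va = idx[:150], idx[150:]

# KNN decoding accuracy on heldout embeddings.
knn = KNeighborsClassifier(n_neighbors=10).fit(emb[tr], labels[tr])
knn_acc = knn.score(emb[va], labels[va])

# GMM clustering ARI on heldout embeddings.
gmm = GaussianMixture(n_components=3, random_state=0).fit(emb[va])
gmm_ari = adjusted_rand_score(labels[va], gmm.predict(emb[va]))
```

On cleanly separated synthetic clusters like these, both scores are near 1; the interesting regime in the experiments above is how they degrade as representation size, learning rate, and batch size change.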
Rebuttal 1: Rebuttal: We thank all the reviewers for the detailed and useful reviews. Using this feedback, we have made improvements to CEED and run a number of new experiments which we detail below. We also provide some discussion of shared reviewer concerns below. Please see the attached PDF for the referenced tables and figure. - **Computational complexity**. There were concerns about CEED’s high computational complexity and the requirement that it needs multiple GPUs to train. To improve the computational complexity of CEED, we propose a few modifications to the original method. (1) We realized our original data augmentations were slower than expected because we were computing them on additional channels that were not used during training. By only computing the augmentations on a small neighborhood of channels, we see a noticeable per epoch runtime boost. (2) While our SCAM architecture showed high performance across a number of datasets, we want to emphasize that the CEED framework is general and can be utilized with other architectures that require less computational resources. To illustrate this, we train a number of MLPs using our data augmentation and contrastive learning scheme. We show that the performance of the MLPs is comparable to the original SCAM architecture (see Table 1, Column 1) with a significant reduction in computational complexity (see Table 2). We use the MLP architecture for all results in the attached PDF. While we are still looking into ways to improve the computational complexity of SCAM, this is challenging as transformers are still an active area of research. Recent progress in quantization [1] and acceleration software [2] offer promising solutions, but incorporating these methods into CEED would be a future direction. - **Out-of-distribution (OOD) training and a new non-linear baseline**. A few reviewers wanted to see non-linear baselines and OOD performance evaluation for our spike sorting experiments. 
We completely agree with both of these points and have added a new experiment to address this (see Table 1). In this experiment, we benchmark the performance of CEED, our original baselines, and a new non-linear autoencoder on a new 10 neuron OOD dataset. For this OOD dataset, we train each method with spikes from a large number of neurons (390) and then test on spikes from 10 heldout neurons (Table 1, Column 3). We show that CEED significantly outperforms the original baselines and the new autoencoder baseline for both the in-distribution (ID) and OOD datasets. - **Ablations of the data augmentations**. Some of the reviewers wanted us to ablate our data augmentations to see which ones were the most important for CEED. Please see Table 3 of the attached PDF for an ablation of three of our data augmentations (all augmentations are detailed in supplement A.1). The most impactful data augmentation for CEED’s performance was the max channel shift augmentation (.38 drop in the ARI). To further test the importance of this max channel shift augmentation, we ran another experiment (see Table 4) where we created a new training and testing dataset. Instead of extracting each spike at its maximum amplitude channel, we extract each spike on *its template's maximum amplitude channel*. This means that spikes from the same neuron will be centered on the same channel. We designed this experiment to evaluate the performance of CEED and our baselines without having to account for max channel shifts due to noise. For fair comparison, we turn off CEED’s max channel shift augmentation for this dataset. As can be seen, CEED still has the highest performance of all methods. We want to emphasize that this dataset utilizes ground-truth information (each neuron’s template) and is an ablation to better understand each method’s performance. We would be happy to add an ablation for all the data augmentations in the final version of the manuscript. 
- **Visualizing approximate invariances of CEED's embeddings**. A few reviewers wanted to see if CEED's embeddings were approximately invariant to the proposed data augmentations. Please see Figure 1 in the attached PDF for a visualization of CEED embeddings for 3 neurons under all different data augmentations. - **CEED vs. full spike sorting pipelines**. A few reviewers had questions about how CEED compares to full spike sorting pipelines such as Kilosort. We want to emphasize that *CEED is not a spike sorting method, but rather a feature extraction method for spike sorting and cell-type classification*. While there are a number of spike sorting methods for MEAs, almost all of these methods currently utilize principal components analysis (PCA) for feature extraction (Klusta, HerdingSpikes2, Mountainsort4, SpykingCircus, Tridesclous, and Kilosort use a form of SVD/PCA). We argue that any method that uses PCA would benefit from utilizing our more discriminable and robust features. Since directly incorporating CEED into these methods is a non-trivial coding challenge (especially given the short rebuttal window), we plan to add a new experiment to the final manuscript where CEED is incorporated into a simple spike sorting pipeline which includes detection, featurization, clustering, and template matching. We will use SpikeInterface for this analysis. [1] Liu, Zhenhua, et al. "Post-training quantization for vision transformer." NeurIPS 2021 [2] Ren, Jie, et al. "ZeRO-Offload: Democratizing Billion-Scale model training." 2021 USENIX Annual Technical Conference. 2021. Pdf: /pdf/8953d476b33b87f713b455d3f0d17daf92dcb00f.pdf
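For context, the PCA featurization step that the response argues CEED can replace is typically only a few lines in a sorting pipeline; the sketch below is a generic stand-in (random data and hypothetical shapes, not code from any of the cited sorters):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Hypothetical waveform matrix: 500 detected spikes, each an 11 channel x
# 121 sample snippet flattened to one row (a stand-in for real recordings).
waveforms = rng.normal(size=(500, 11 * 121))

# Classical featurization: project each spike onto the top principal
# components, and hand the low-dimensional features to a clustering step.
pca = PCA(n_components=5)
features = pca.fit_transform(waveforms)
```

A learned feature extractor such as CEED slots into the same place, producing a low-dimensional feature per spike for the downstream clusterer.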
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
VideoComposer: Compositional Video Synthesis with Motion Controllability
Accept (poster)
Summary: This work aims to allow users to flexibly compose a video with textual conditions, spatial conditions, and temporal conditions. It introduces a novel framework, namely VideoComposer, based on the paradigm of compositional generation. To be specific, it introduces the motion vector from compressed videos as an explicit control signal to provide guidance regarding temporal dynamics. Moreover, it develops a Spatio-Temporal Condition encoder (STC-encoder) that serves as a unified interface to effectively incorporate the spatial and temporal relations of sequential inputs, with which the model could make better use of temporal conditions and hence achieve higher inter-frame consistency. Extensive experiments demonstrate that VideoComposer can control the spatial and temporal patterns simultaneously within a synthesized video in various forms. Strengths: 1. It introduces motion vector as a more flexible user-guided signal. 2. It proposes Spatio-Temporal Condition encoder (STC-encoder) that serves as a unified interface to effectively incorporate the spatial and temporal relations of sequential inputs. 3. Extensive experiments show the effectiveness and superiority of VideoComposer. Weaknesses: 1. What is the difference between the roles of the ``Style`` of CLIP and ``Single Image`` of STC-encoder? They both seem to provide content to videos. 2. VideoComposer only obtains comparable performance with prior video generative models. Is it more efficient than previous methods? The authors could provide comparisons of training cost and inference time. 3. Lack of extensive visualization comparisons with existing video generative models. The authors are encouraged to provide extensive qualitative comparisons on the video generation task. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: What is the difference between the roles of the Style of CLIP and Single Image of STC-encoder? They both seem to provide content to videos.** Thank you for highlighting this point. - **Style condition**. The style condition mainly encapsulates the holistic characteristics of the input image, capturing elements like its color, style, and content. Analogous to the textual condition, it functions as a **global** descriptor, providing a generalized perspective of the image. When utilizing an image as this global condition, spatial pooling is employed to derive a singular embedding. This spatial reduction, however, causes the loss of detailed structural information, which can lead to generalized content interpretation. Thus, it is possible that the generated video can have varying poses (if containing humans or animals) and slight color shifts compared to the reference image. - **Single image condition**. The primary role of the single image condition is to serve as the initial frame for the video being generated. As such, it conveys **local** details and intricacies of the image, ensuring the video's initiation is aligned pixel-for-pixel with the provided image. In conclusion, both conditions influence the video's content and color, but the depth and granularity of influence differ. **Q2: VideoComposer only obtains comparable performance to prior video generative models. Is it more efficient than previous methods? The authors could give comparisons of training cost and inference time.** Thank you for raising this point. We want to clarify that the primary objective of VideoComposer is to enhance controllability and applicability, rather than to reduce training cost and inference time. Recognizing the value of the reviewer's concern, we also provide a brief discussion in terms of **efficiency**. - **Controllability and versatility**. One of the primary advantages of VideoComposer over previous methods is its augmented controllability. 
Both Make-A-Video and Video LDM primarily support video generation from text descriptions, limiting their capacity for customized, controllable video generation. In contrast, VideoComposer benefits from diverse conditional guidance, facilitating video generation utilizing user-defined textual, spatial, and temporal conditions. This versatility paves a path to truly compositional video generation tailored to specific requirements. - **Broad applicability**. VideoComposer's design isn't just about controllability, but also about applicability. Our model's capability to address multiple video generation tasks without the need for repeated re-training underscores its unique value. We have equipped one model with a multitude of application scenarios, marking a significant advantage over existing works. - **Efficiency**. While we can't provide direct numerical comparisons due to the unavailability of source code from competitive methods like Make-A-Video and Video LDM, we'd like to highlight certain design features that contribute to VideoComposer's efficiency. Specifically, unlike Make-A-Video, which leverages multiple cascaded models, VideoComposer adopts a unified approach, possibly reducing both the training cost and inference time. We anticipate that this design could allow VideoComposer to achieve comparable efficiency to existing methods such as Video LDM, since both are based on Stable Diffusion and extend it by adding additional temporal layers. In summary, while we acknowledge the request for direct comparisons in efficiency, we'd like to emphasize VideoComposer's controllability and versatility. These features make our model a crucial contribution to the current landscape of video synthesis. **Q3: Lack of extensive visualization comparisons with existing video generative models. The authors are encouraged to provide extensive qualitative comparisons on the video generation task.** Thanks for the suggestion. 
We show more qualitative comparisons with the existing methods Text2Video-Zero and Gen-1 in Figure R4. We observed that Text2Video-Zero suffers from appearance inconsistency and structural flickering due to the lack of temporal awareness. Meanwhile, Gen-1 produces a video with color inconsistency and structure misalignment (revealed by the orientation of the bird's head). The video generated by VideoComposer is faithful to the structure of the input depth sequence and maintains a continuous appearance. The above experiments demonstrate the superiority of our method in terms of controllability. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the authors' elaborate response; all my concerns have been well addressed. --- Reply to Comment 1.1.1: Title: Thanks for the reply Comment: Dear Reviewer Pp5z, We really appreciate your constructive feedback to improve our manuscript, thank you! Best regards, The Authors
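The global-vs-local distinction drawn in Q1 of this rebuttal can be made concrete: spatial pooling is permutation-invariant, so a pooled "style"-like embedding discards the structural layout that a per-pixel "single image"-like condition retains. The following NumPy sketch is purely illustrative (not the authors' code) and uses made-up feature shapes:

```python
import numpy as np

# Hypothetical illustration: a "style"-like global condition pools a feature
# map into a single embedding, discarding spatial layout, while a
# "single image"-like local condition keeps per-pixel detail.
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 16))       # H x W x C feature map

global_cond = feat.mean(axis=(0, 1))         # one C-dim embedding (pooled)
local_cond = feat                            # full spatial structure kept

# Spatially rearranging the pixels leaves the global condition unchanged ...
shuffled = np.roll(feat.reshape(64, 16), shift=1, axis=0).reshape(8, 8, 16)
assert np.allclose(shuffled.mean(axis=(0, 1)), global_cond)
# ... but the local condition is not invariant to the rearrangement.
assert not np.allclose(shuffled, local_cond)
```

This is why, as the rebuttal notes, a style condition can produce varying poses and slight color shifts: the pooled embedding simply cannot encode where structure was located.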
Summary: This work proposes a new method called VideoComposer for conditional video generation, especially for video-to-video translation. VideoComposer is constructed upon the Video Latent Diffusion Model and introduces an STC-encoder to integrate multiple spatial and temporal conditions such as RGB images, sketches, motion vector sequences, etc. The architecture design involves simple 2D convolutions and temporal transformer layers. The conditional features are fed into the U-Net input together with noise. The demonstrated results have good temporal consistency. Strengths: - This is one of the pioneering works in controllable video synthesis. The temporal consistency of the video results is impressive, considering that its conditioning modeling enables several editing abilities such as image-to-video translation and motion/depth/sketch-driven local/global video editing. - The joint training strategy is good for flexible inference within one model, e.g., video inpainting, without a second training run. - The paper organization and illustrations are easy to follow. Weaknesses: - The authors could have tried other design choices for integrating Condition Fusion as input into the U-Net, such as integration through cross-attention. - In line 215, it is claimed that “we observe that the inclusion of mask and style guidance can facilitate structure and style control.” However, the corresponding evidence should be presented for the style representation extracted by the CLIP image encoder and concatenated with the text embedding. - It seems that a single STC-encoder is used for all different conditions via random dropout. It would be interesting to see if different STC-encoder weights for different conditions are better. - The examples in Figure 6 with the reference image look like failure cases. Besides, the tiger texture and box shape are changed in Figure 8. It would be helpful to see more discussion and analysis on this part. 
- The ablation study of the STC-encoder is not presented in a fair way. The main benefit of using the STC-encoder comes from the video information condition instead of the network design. - Important comparisons and discussions with other methods, such as VideoP2P and vid2vid-zero mentioned in the related work, are not sufficient. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The comparisons with other methods are not sufficient and the ablation study is not well presented. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The societal impact has been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Integrating the Condition Fusion into U-Net through cross-attention.** The suggestion to use cross-attention for Condition Fusion in the U-Net is appreciated. While our current choice might not be the optimal one, improving the micro-design of conditioning is beyond the scope of this work and requires considerable computational overhead associated with pre-training on LAION and WebVid. Therefore, we finalize this design choice more from the perspective of empirical analysis. In VideoComposer, regarding the injection of global conditions, such as the textual condition and style condition, we opt for cross-attention mechanisms following Stable Diffusion, as they contain high-level and abstract information. Due to the quadratic complexity of cross-attention mechanisms, implementing cross-attention for Condition Fusion (which is not pooled in the spatial dimension) introduces considerable computational overhead, especially when compared against the relatively efficient concatenation approach. **Q2: Evidence should be provided to prove the statement in Line 215 that "mask and style guidance can facilitate structure and style control".** We appreciate the reviewer's observation. Our description in line 215 might have been ambiguous. To align with this statement, we add the corresponding result in Figure R3 by adding the style condition. The generated video adheres to the given text, style and mask sequence. We will revise the confusing descriptions in the next version. Feel free to raise questions if we have misunderstood the question. **Q3: Is the STC-encoder shared across all conditions? Would the STC-encoder perform better with distinct weights?** Thanks. We use separate STC-encoders for different conditions without weight sharing by default. 
In Figure R5, we compare the results using STC-encoders *w/* and *w/o* weight sharing on video inpainting, and find that the former causes performance degradation for conditions like mask sequence. We attribute this to the uniqueness of different conditions, and sharing weights may lead to modeling difficulties. We will further improve the description of the STC-encoder in the revision. **Q4: More analysis should be provided to clarify: (i) the examples in Figure 6 with the reference image look like failure cases; (ii) the tiger texture and box shape are changed in Figure 8.** We appreciate your observation about Figure 6 and Figure 8. First, we want to clarify that the role of the reference image is primarily as a global condition, offering stylistic features such as color and some aspects of content. As such, the generated videos tend to resemble the reference in terms of color and certain content attributes. According to our observation, we think the examples in Figure 6 exhibit such properties, thereby serving as successful cases. We acknowledge that in Figure 8, the tiger texture and box shapes are compromised as the video progresses. This stems from a lack of structural control since we only provide the motion condition and the textual condition. To enable temporally consistent generation, we augment the motion-controlled video generation with a simple structure control by adding an auxiliary single-sketch condition, as shown in Figure R6. This addition has substantially improved the shape consistency in the generated video, rendering results that outperform those presented in the main paper. **Q5: The ablation study of STC-encoder is not presented in a fair way. The main benefit of using STC-encoder comes from the video information condition instead of the network design.** Thank you for bringing up this concern. We might have failed to precisely communicate the setting of this ablation study. 
To clarify, in our ablation study of the STC-encoder, the baseline method (*w/o* STC-encoder) entails removing the temporal Transformer in the STC-encoder while retaining the spatial convolution, designed to verify the effectiveness of incorporating temporal modeling. The spatial convolution remains in place to ensure the dimensions of all input conditions are consistent. Under such a setting, VideoComposer and the baseline method both take advantage of the informative video conditions as input and, therefore, are compared in a fair way. As indicated in Line 242, such an ablation study underscores the significance of the STC-encoder's temporal modeling capacity during the input phase. In light of your feedback, we will supplement the details of the baseline method in the revised version to avoid confusion. **Q6: Comparisons and discussions with methods, such as VideoP2P and vid2vid-zero, are not sufficient.** Thank you for raising this point. VideoP2P and vid2vid-zero are customized for video editing, which requires access to the original reference video (*i.e.*, the video to be edited) in order to finely optimize the video editing model for every video to be manipulated. However, our VideoComposer can circumvent this time-consuming re-training, and video generation can be performed given video conditions without accessing the original video. For a more illustrative comparison with VideoP2P and vid2vid-zero, we present results in Figure R7. In this example, our objective is to transform the dog in the video into a tiger using an updated textual condition. Even though VideoP2P and vid2vid-zero specially optimize the model with the reference video, they still have difficulty maintaining structural consistency due to the lack of sequential structure guidance. In contrast, the video edited by VideoComposer can retain the structural alignment with the reference video while ensuring temporal continuity. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. 
I have read other reviews and authors' feedback. The rebuttal has addressed most of my concerns. Please add these additional experiments to the final paper/supp. I would keep my initial rating. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Dear Reviewer EXcw, Thank you for all feedback and positive comments. We will update our final version accordingly. Best, The Authors
Summary: VideoComposer is a tool designed to enhance video synthesis by incorporating textual, spatial, and temporal conditions. It uses motion vectors from compressed videos to guide temporal dynamics and employs a Spatio-Temporal Condition encoder to effectively integrate spatial and temporal relations of inputs. This improves inter-frame consistency and allows for greater control over the synthesized video's spatial and temporal patterns. Strengths: The VideoComposer offers better control over video synthesis, temporal guidance using motion vectors, improved inter-frame consistency with its Spatio-Temporal Condition encoder, versatility in accepting various forms of inputs, and high customizability, resulting in more precise and desired synthesized videos. Weaknesses: An ablation study could be conducted on VideoComposer, where each component is removed in turn to evaluate its impact on overall performance. This would help evaluate the value of training under multiple conditions versus a single condition. Additionally, comparing VideoComposer to a simpler method like Text2Video-Zero [a] with ControlNet [b] would demonstrate whether the increased complexity of VideoComposer yields significantly better results, hence justifying its sophistication. [a] Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators, L Khachatryan et al. [b] Adding Conditional Control to Text-to-Image Diffusion Models, L. Zhang et al. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: How are image-text pairs utilized in the training process of the model? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: What are the limitations? 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: An ablation study could be conducted on VideoComposer, where each component is removed in turn to evaluate its impact on overall performance. This would help evaluate the value of training under multiple conditions versus a single condition.** Thanks for your suggestion. We want to clarify that adding conditions will not necessarily improve the performance, and we do not claim this as a motivation for VideoComposer. Rather, our objective is to enhance the controllability by decomposing videos into various conditions. This approach allows VideoComposer to execute multiple tasks with a single model after just one-time training. Recognizing the validity of the reviewer's suggestion, we aim to validate whether adding more conditions will augment the controllability. Specifically, we seek to better reconstruct the original video by incrementally introducing the textual condition, depth map condition, and single image condition. As illustrated in Figure R8 of the attached one-page PDF file, we observe an improved alignment between the reference video (top row) and the generated videos. If the reviewer raises further concerns, we are more than willing to address them and clarify any ambiguities. **Q2: Additionally, comparing VideoComposer to a simpler method like Text2Video-Zero [a] with ControlNet [b] would demonstrate whether the increased complexity of VideoComposer yields significantly better results, hence justifying its sophistication.** Thanks for the valuable suggestion. To address this concern, we provide examples to demonstrate the superiority of the depth map-conditioned generation ability of VideoComposer. In Figure R4, we compare our VideoComposer with Text2Video-Zero and the existing state-of-the-art Gen-1. We observed that Text2Video-Zero suffers from appearance inconsistency and structural flickering due to the lack of temporal awareness. 
Meanwhile, Gen-1 produces a video with color inconsistency and structure misalignment (revealed by the orientation of the bird's head). The video generated by VideoComposer is faithful to the structure of the input depth sequence and maintains a continuous appearance. This shows the superiority of our VideoComposer in terms of controllability. **Q3: How are image-text pairs utilized in the training process of the model?** Thanks. We train VideoComposer jointly on video-text and image-text pairs by treating images as 'one-frame' videos. When using image-text pairs for training, the shape of the input noise is $1 \times h \times w \times c$, where the temporal length is 1 (*i.e.*, $F=1$). **Q4: What are the limitations?** Due to the page constraints, we include the discussion of limitations in Sec. C of the Supplementary Material. In brief, the limitations include **(i)** the use of a watermarked pre-training dataset that leads to visually unappealing videos and **(ii)** the resultant low-resolution videos due to computational constraints.
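The "image as a one-frame video" idea in Q3 of this rebuttal can be sketched in a few lines. This is an illustrative NumPy sketch with assumed latent shapes (the `temporal_length` helper is hypothetical, not part of VideoComposer):

```python
import numpy as np

# Sketch: joint image/video training by treating an image as a video
# with temporal length F = 1 (shape 1 x h x w x c), so both input types
# pass through one interface and differ only in the first axis.
h, w, c = 32, 32, 4                          # assumed latent size/channels

video_noise = np.random.randn(16, h, w, c)   # F x h x w x c, F = 16 frames
image_noise = np.random.randn(1, h, w, c)    # image as a "one-frame" video

def temporal_length(x):
    """Both inputs share one interface; only the first axis differs."""
    return x.shape[0]

assert temporal_length(video_noise) == 16
assert temporal_length(image_noise) == 1
```

The design choice is that no separate image branch is needed: the same temporal layers simply see a sequence of length one.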
Summary: The paper presents a method for compositional video synthesis. It introduces motion vectors from compressed videos as a control signal for temporal dynamics. The motion vector can be combined with other conditions such as sketch and depth map. Both qualitative and quantitative results show that the proposed method can control the spatial-temporal patterns. Strengths: + The motion-controlled generation result (Fig 8) using hand-crafted strokes is interesting. + Table A1 shows the effectiveness of the proposed method quantitatively compared to previous methods. + The paper is well written and easy to follow. Weaknesses: - There are a few GAN-based video synthesis approaches that are worth discussing in the related work. For example, MoCoGAN [1] approaches the problem by decomposing motion and content. - The two-stage training strategy needs more clarification. What is "compositional training", particularly in the second stage? How does it differ from the "text-to-video" generation in the first stage? - In lines 164-165, the authors "repeat the spatial conditions of a single image and single sketch along the temporal dimension". If the input condition is simply repeated, what's the point of applying a temporal Transformer? It will be equivalent to applying the spatial operation only and repeating in the latent space, but with higher computation cost, no? (for motion vectors, I totally agree that spatial-temporal modeling would be necessary.) - Motion vectors can be less meaningful in the background due to the lack of high-level semantics. It can also be clearly seen from the top row in Fig 4. I wonder if the authors treat the motion vector field equally for all locations. It seems that the generated results with motion conditions have a blurrier background. - From Figure 2 and Figure 1(d), my impression is that the conditions (say motion and depth) can be combined together. However, in the ablation studies (Table 2), only one condition is added at a time. 
Another ablation that studies all combinations of these conditions will be favored. [1] Tulyakov, Sergey, et al. "Mocogan: Decomposing motion and content for video generation." CVPR 2018. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In the video translation demo (2:11-2:16), the right example's output does not have a consistent color (white before jumping and brown afterwards). Is there any particular reason why the color consistency fails to hold in such a case? 2. I'd suggest moving some quantitative results (Table A1) to the main text. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have addressed the limitations in the supplementary materials. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Discuss the GAN-based methods like MoCoGAN in the Related Work.** We greatly appreciate your suggestion to improve the Related Work by comparing with GAN-based methods such as MoCoGAN. - Different motivations. MoCoGAN is an unconditional method that aims to improve the quality of video generation, while VideoComposer is a conditional approach to enhance the controllability during synthesis. - Different methodology. MoCoGAN decomposes videos into content and motion by sampling latent Gaussian noise from different spaces, but VideoComposer composes a video with textual, spatial, and temporal conditions in the input phase, resulting in a vast design space for customizable video creation. We will include MoCoGAN and other GAN-based methods in our next version. **Q2: Clarifying the two-stage training strategy.** We apologize for the unclear presentation. The two-stage training is designed to methodologically address the learning challenge. - In the first stage, VideoComposer focuses on learning the temporal dynamics by only leveraging the textual condition. This foundation allows for a focused understanding of temporal relationships within the video content. - In the second stage, VideoComposer performs **compositional training** by utilizing textual, spatial, and temporal conditions, building on the temporal modeling ability. The major difference between the two stages lies in this incorporation of additional conditions, leading to a comprehensive learning of synthesizing video content from multiple compositions. We have further detailed the explanation of this concept in the revised version. **Q3: Justifications for "repeat the spatial conditions of a single image and sketch".** Thanks for pointing out this observation. We greatly value your suggestion to improve the efficiency, and acknowledge that simply repeating the latent feature can be an alternative. 
Additionally, we also think it plausible to utilize the temporal transformer: - **Unified interface**: We design the STC-encoder as a unified interface to incorporate different conditions (*i.e.*, single and sequential inputs) without re-designing the architecture, which can equip VideoComposer with better extensibility. Thus, repeating the input adheres to this spirit well. - **Minimal computational cost**: The computational cost of the temporal transformer is negligible compared to the 3D UNet. Specifically, the computational overhead for inferring one STC-encoder is just 0.042\% of that required by the 3D UNet. **Q4: Question about the utilization of motion vectors and the resultant blurry background in Figure 4.** Thank you for the careful questions. This question touches on two aspects: - Treatment of Motion Vectors. In Figure 4, we treat the motion vector field equally for all locations. However, we offer flexibility during inference by using partial motion vectors. - Background Blurriness in Figure 4. We attribute it to the motion magnitude difference between the two examples. The first video in the tiger example of Figure 4 contains a minimal magnitude of motion, thus generating a clear background. In comparison, the second video in the tiger example of Figure 4 (*i.e.*, the example in Figure R1(b)) utilizes motion vectors displaying a large magnitude of motion, thereby easily resulting in blurriness. If we constrain the motion to a small magnitude as shown in Figure R1(a), we obtain a clearer background. **Q5: More ablation studies of combining more conditions should be provided in Table 2.** Thanks for raising this valuable concern. - **The purpose of Table 2**: As the reviewer mentioned, multiple conditions can be utilized together to guide the generated videos. However, we want to clarify that the aim of Table 2 is to demonstrate the usefulness of the STC-encoder in enhancing the temporal awareness of input conditions. 
By comparing results with and without STC-encoder with identical input conditions, we can highlight such effectiveness, using the metric of frame consistency. We do not expect that adding more conditions will consistently improve this metric. - **More comprehensive study**: Recognizing the validity of your suggestion, we have expanded Table 2 to encompass more conditions. Additional results presented in Table R1 demonstrate how the STC-encoder functions effectively with various combinations of conditions, confirming its versatility and applicability. **Table R1**: Quantitative ablation study of STC-encoder. "Conditions" denotes the conditions utilized for generation. |Methods|Conditions|Frame consistency| |:-|:-:|:-:| |*w/o* STC-encoder / VideoComposer|Text, sketch sequence, and depth sequence|0.911 / **0.918**| |*w/o* STC-encoder / VideoComposer|Text, sketch sequence, and motion vectors|0.912 / **0.919**| |*w/o* STC-encoder / VideoComposer|Text, depth sequence, and motion vectors|0.916 / **0.923**| |*w/o* STC-encoder / VideoComposer|Text, sketch sequence, depth sequence, and motion vectors|0.914 / **0.920**| **Q6: Color inconsistency of the tiger in video translation demo (2:11-2:16).** Thank you for the careful observations. The problem of color consistency is a long-standing challenge in video generation. In this particular example, we conjecture that this stems from **the lack of an explicit textual condition**: The web-scale pre-training data contains both white and brown tigers. The coarse description applied doesn't specifically differentiate between them. We also provide **one possible solution** to address this inconsistency: providing more specified textual instructions, particularly regarding the color of the tiger. By regenerating the video using an identical random seed, as shown in Figure R2 (below), we achieve more satisfactory color consistency in the resultant video. **Q7: Moving results in Tab. 
A1 to the main paper.** Thanks for the valuable suggestion. We will move it to the main paper to ensure a comprehensive comparison. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for responding to my questions. My concerns have been well addressed and I believe adding these to the revision would strengthen the paper. Therefore I would like to raise my rating. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Dear Reviewer akz1, Thanks for raising the rating. We appreciate your efforts in the reviewing process and all useful feedback to improve our manuscript. We will update our final manuscript to reflect all the modifications. Best, The Authors
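For reference, the frame-consistency metric reported in Table R1 of this rebuttal is not defined here; it is commonly computed as the average cosine similarity between embeddings (e.g., CLIP image embeddings) of consecutive frames. A minimal sketch under that assumption, with a hypothetical `frame_consistency` helper and random stand-in embeddings in place of a real embedding model:

```python
import numpy as np

# Illustrative sketch only: "frame consistency" as the mean cosine
# similarity between embeddings of consecutive frames. Random vectors
# stand in for real (e.g., CLIP) frame embeddings.
def frame_consistency(frame_embs):
    e = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    return float(np.mean(np.sum(e[:-1] * e[1:], axis=1)))

embs = np.random.randn(16, 512)              # 16 frames, 512-dim embeddings
score = frame_consistency(embs)
assert -1.0 <= score <= 1.0

# Identical frames yield a perfect score of 1.0.
same = np.tile(np.random.randn(1, 512), (16, 1))
assert np.isclose(frame_consistency(same), 1.0)
```

Under this definition, higher values (closer to 1.0, as in Table R1) indicate smoother frame-to-frame appearance.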
Rebuttal 1: Rebuttal: We appreciate the reviewers for their positive comments and constructive feedback on our paper. We are encouraged that VideoComposer is recognized for its merits, including - clarity in presentation [Reviewer akz1, EXcw] - superior performance both quantitatively and qualitatively over prior methods [Reviewer akz1, Pp5z] - introduction of innovative motion vectors [Reviewer Pp5z, BaoX] - a novel interface for handling inputs [Reviewer Pp5z, BaoX] - its heightened controllability and customizability [Reviewer akz1, BaoX, EXcw]. We also acknowledge the concerns raised and will address them comprehensively. These insightful reviews lead to multiple improvements of our original manuscript. In this rebuttal, we have included some new figures in **the newly uploaded one-page PDF**. Here, we briefly summarize the content of these figures to facilitate quick reference. - Figure R1 shows the effect of varying motion magnitude by sampling at different frame rates of the motion vector condition. [Reviewer akz1] - Figure R2 gives the video translation results, suggesting that we can leverage a detailed prompt to generate a more desired and color-consistent video. [Reviewer akz1] - Figure R3 gives the compositional video generation results, where the video is generated using text, image and mask sequence as conditions. [Reviewer EXcw] - Figure R4 compares VideoComposer with other existing methods such as Text2Video-Zero and Gen-1. [Reviewer BaoX, Pp5z] - Figure R5 shows the comparison of VideoComposer using STC-encoder with and without weight sharing on video inpainting task. [Reviewer EXcw] - Figure R6 includes the experiment of motion control with single sketch condition. [Reviewer EXcw] - Figure R7 compares VideoComposer with VideoP2P and vid2vid-zero. [Reviewer EXcw] - Figure R8 shows the comparison of video generation with single and multiple conditions. 
[Reviewer BaoX] We are open to discussions and are committed to addressing any concerns from the reviewers, ensuring the continual refinement of VideoComposer. Pdf: /pdf/4bebd2f206b9311d07e3651eaece13a08cad7171.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Large language models implicitly learn to straighten neural sentence trajectories to construct a predictive representation of natural language.
Accept (poster)
Summary: In the transformer architecture, a word sequence is first represented by a sequence of word embeddings. This sequence of vectors is then iteratively transformed by the successive transformer layers of the model. This paper proposes to characterize the trajectories taken by word embedding sequences. Features are introduced to better understand what happens during these transformations: the curvature along one dimension is estimated by looking at the angle of the "curve" (via the arccos) and correlating it with "surprisal". Strengths: It is important to understand what is going on inside transformers and what kinds of transformations are learned by the model. Looking at the trajectories is clearly a good idea. This notion of curvature tries to characterize the surprisal observed in the data based on an intuitive assumption. Some experimental results show that the curvature can indeed capture something of the iterative transformations. Weaknesses: While the starting idea is nice, the submission needs to be improved. Many important points remain unclear. Here is a list in reading order (more or less). - Concerning the dataset: UD is a multilingual dataset, what is the language? Why select only very short sentences? - For the models used, the description is really messy. Are the models retrained from scratch? Finetuned? All along the paper, we never know. - On the same topic, what does "untrained model" mean? Is it randomly initialized? Only pre-trained? Why is it a good basis for comparison? - This is a bit similar for the evaluation data: you could define the different datasets once and make clear references afterwards. - The decoding strategy may be important for some measurements. Maybe the greedy choice is not the best. - The definition of surprisal could be given and related to the perplexity/NLL, which can be the optimization criterion. - In the end, the statistical observations are not so impressive, or I did not clearly understand. The claims are not so clear at the end. 
It is difficult to really conclude that the prediction aims at linearizing the input through a series of transformations. - You could use LaTeX references with section numbers (see e.g. line 130). - The colors used for some figures (3 and 5) are really difficult to distinguish, even on a color-printed version or in the pdf. - The figure captions are very long! As a conclusion, I really liked the starting idea of the paper, and I think that it deserves further improvement before submission. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The notion of straightening is a nice idea and an important contribution of the paper. Maybe it deserves a discussion. Why is it estimated like that? Are there other features that can characterize the curvature? Maybe they are less tractable, but they are important. It could be nice to relate this notion to the manifold defined by the trajectories. Why is the change in curvature only measured relative to the first layer? Why only look at the cosine? The cosine cannot distinguish the sign of the angle. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Weaknesses: While the starting idea is nice, the submission needs to be improved. Many important points remain unclear. Here is a list in reading order (more or less). Concerning the dataset: UD is a multilingual dataset, what is the language? Why select only very short sentences? We focused only on English sentences in the UD corpus. We limited the number of words per sentence so that we don't get an averaging effect, since we are looking at an average metric. We used Universal Dependencies because we wanted to be able to quantify the sentences and phrases along multiple lexical, semantic and syntactic features; the UD corpus provides precomputed values for syntactic features. We further restricted our sentences to include only the 100K most common words in English (using the Google n-gram dataset). Our choice of sentence length was to ensure we have enough sensitivity in measuring average curvature: sentences that are very long would average out the differences that we are interested in. For the models used, it is really messy. Are the models retrained from scratch? Finetuned? All along the paper, we never know. We are sorry for overlooking this. Yes, the models were trained on the next-word-prediction objective from scratch. We will address this in the updated version of the paper. On the same topic, what does "untrained model" mean? Is it randomly initialized? Only pre-trained? Why is it a good basis for comparison? The untrained model is indeed randomly initialized (same procedure as Radford et al., 2020). We believe it is a good basis for comparison because the representation is not yet shaped by the training objective, only by the architecture and operations such as layer norm. The lack of evidence for curvature reduction in the untrained model further suggests that straightening is a consequence of the predictive objective that the model is trained on. 
This is a bit similar for the evaluation data: you could define the different datasets once and make clear references afterwards. We will update the manuscript accordingly. The decoding strategy may be important for some measurements. Maybe the greedy choice is not the best. For completeness we have included additional decoding strategies in supplementary figure 5. However, for the purpose of interrogating the internal representation of the model, we think greedy decoding is the most direct one. The definition of surprisal could be given and related to the perplexity/NLL, which can be the optimization criterion. We included supplementary figures 3G,H to address the reviewer's concern. We observed a similar relationship between surprisal and curvature for model-generated surprisal. In the end, the statistical observations are not so impressive, or I did not clearly understand. The claims are not so clear in the end. It is difficult to really conclude that the prediction aims at linearizing the input through a series of transformations. We would appreciate it if the reviewer could point us to specific observations so that we could address their concern. You could use LaTeX references with section numbers (see e.g. line 130). The colors used for some figures (3 and 5) are really difficult to distinguish, even on a color-printed version or in the pdf. We appreciate this comment and can address specific figures that the reviewer thinks would benefit from restructuring. The figure captions are very long! We will update the figure captions in the revised manuscript to fix this. As a conclusion, I really liked the starting idea of the paper and I think that it deserves further improvement before submission. We thank the reviewer for the positive outlook, and will integrate all their input to improve the work before submission. Questions: The notion of straightening is a nice idea and an important contribution of the paper. Maybe it deserves a discussion. Why is it estimated like that? 
Are there other features that can characterize the curvature? Maybe they are less tractable, but they are important. It could be nice to relate this notion to the manifold defined by the trajectories. We think the current estimation is the simplest way of parametrizing the neural sentence trajectory. We agree that connecting trajectories to the manifolds that govern their evolution in the internal representation of the model would be an exciting next direction. Why is the change in curvature only measured relative to the first layer? Why only look at the cosine? The cosine cannot distinguish the sign of the angle. We measured the curvature change from the first layer to be able to compare models. We have included the original curvature values in the supplementary figures. We always observed positive curvatures in model representations, as can be seen in supplementary figures 1, 2, 3, and 6.
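To make the curvature measure under discussion concrete, here is a minimal sketch, under the assumption that the paper follows the Hénaff et al. 2019 definition (angle, via arccos of cosine similarity, between consecutive difference vectors of the trajectory, averaged over the sentence); the function name and shapes are illustrative, not the authors' code:

```python
import numpy as np

def average_curvature(states):
    """Average curvature of a trajectory of hidden states, in degrees.

    states: array of shape (seq_len, dim), one representation per word.
    Curvature at step t is the angle between consecutive difference
    vectors s[t+1]-s[t] and s[t]-s[t-1] (Henaff-style definition).
    """
    diffs = np.diff(states, axis=0)                        # (seq_len-1, dim)
    diffs = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    cos = np.sum(diffs[:-1] * diffs[1:], axis=1)           # pairwise cosines
    cos = np.clip(cos, -1.0, 1.0)                          # numerical safety
    return np.degrees(np.arccos(cos)).mean()
```

The "curvature change" plotted in the paper would then be this average computed at layer L minus the same quantity at the first layer, so a straighter trajectory is always positive but its reported change can be negative.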
Summary: This paper hypothesizes that deep, causally-masked transformer models learn to predict by linearizing representational trajectories. This hypothesis is rooted in observations from the neuroscience literature. The hypothesis is tested through experiments that probe: (1) the degree to which representation curvature decreases with network depth, (2) the relationship between curvature and model performance, (3) the curvature of representations of model-generated text, (4) the relationship between text surprisal (entropy?) and curvature. Curvature is defined in the sense of pairwise cosine similarity between adjacent representations, averaged across sequences of text. Strengths: The straightening hypothesis is interesting, and the experiments convince me that transformers do exhibit straightening behavior. The experiments appear to be generally well executed (but see my questions below). Experiments #1 and #2 in particular clearly establish the pattern of increased straightening as a function of model depth, model size, and optimization steps. It seems possible that this observation of straightening could be important and exciting to the neuroscience community, but I do not have the right background to make that judgement. Weaknesses: It is not clear to me why the straightening hypothesis is important. Accepting that LLMs do indeed straighten trajectories, what should I do with this knowledge? The conclusion gestures at the possibilities with respect to interpretability of models and revealing "when and how they could fail and suggest ways to make models more efficient and robust." But the connections between straightening and these broader goals (which are undoubtedly of relevance to the broader NeurIPS community) are not clear to me. 
I am very open to an argument that more clearly makes the case that straightening is important: either from the perspective of its importance to the neuroscience community, or for its potential significance to the broader machine learning community (as hinted at in the conclusion). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Is there a connection between straightening and Koopman operator theory? For Koopman, non-linear dynamics are explicitly linearized in a latent Hilbert space. Could we view the behavior of trained autoregressive transformers as approximations to a Koopman operator? Do you believe this result is specific to the transformer architecture? Would we also expect to observe this straightening in state-space models, or causally masked convnets? Do we know whether text in the Universal Dependencies dataset might have been part of the GPT-2 training data? How might this affect the results if this data was indeed included during training? Why are the generated sequences for Experiment #2 so short (7 tokens, with a 3-token prompt)? Why measure surprisal in Experiment #4 using trigrams? Would an information measure like entropy be more natural? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: As the authors acknowledge, the stronger hypothesis--that straightening is a consequence of the predictive loss--is not supported by the experiments in this paper. This would require experiments involving, e.g., MLM or classification tasks to observe whether straightening also appears in these settings. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
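The trigram surprisal that the reviewer asks about is presumably the standard n-gram definition, surprisal(w_t) = -log2 P(w_t | w_{t-2}, w_{t-1}), averaged over the sentence; the paper does not give a formula, so this sketch is an assumption about the usual implementation (count-based estimate, no smoothing shown):

```python
import math
from collections import Counter

def sentence_surprisal(sentence, trigram_counts, bigram_counts):
    """Average trigram surprisal of a sentence, in bits.

    surprisal(w_t) = -log2 P(w_t | w_{t-2}, w_{t-1}), with the conditional
    estimated as count(w_{t-2}, w_{t-1}, w_t) / count(w_{t-2}, w_{t-1}).
    In practice smoothing (e.g. add-one over the vocabulary) is needed for
    unseen n-grams; it is omitted here for brevity.
    """
    total, n = 0.0, 0
    for i in range(2, len(sentence)):
        tri = tuple(sentence[i - 2:i + 1])
        bi = tuple(sentence[i - 2:i])
        p = trigram_counts[tri] / bigram_counts[bi]
        total += -math.log2(p)
        n += 1
    return total / n
```

Perplexity/NLL of a language model is the model-based analogue of the same quantity, which is why the reviewers suggest relating the two.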
Rebuttal 1: Rebuttal: Strengths: We agree with the reviewer on this point; to our knowledge, no prior studies in neuroscience have investigated geometrical properties of language networks related to straightening and connected them to the behavior of these models. Weaknesses: We thank the reviewer for raising this issue. We think that building a hypothesis at the level of representations, and not behavior, is the first step in understanding model behavior. Diagnosing failure modes of the model at the behavioral level is useful but not enough to reveal what change in the internal representation could avoid them. Our work instead develops a hypothesis at the level of representations. In figure 4, for example, we showed that model-generated sentences diverge from original sentences, and this is evident in the difference in curvature between the two conditions. One could foresee that, for example, a new decoding strategy could make the two curvatures more similar and thus bring model-generated sequences closer to ground-truth text. We demonstrated this in supplementary figure 5, as some decoding strategies are closer to ground truth than greedy sampling. Moreover, straightening can be used to design more efficient models. For example, Olshausen and Field (1996) showed that a sparsity prior over internal representations can push a neural network to develop receptive fields similar to those of the biological visual system. We aim to use straightening as an inductive bias over the model representation and train new models suited for next-word-prediction tasks. We will certainly elaborate on these points in the updated version of the work. Questions: Is there a connection between straightening and Koopman operator theory? For Koopman, non-linear dynamics are explicitly linearized in a latent Hilbert space. Could we view the behavior of trained autoregressive transformers as approximations to a Koopman operator? 
This is a very interesting proposal and we thank the reviewer for bringing it to our attention. It is beyond the scope of this work, but we can hint towards it in the discussion section. Do you believe this result is specific to the transformer architecture? Would we also expect to observe this straightening in state-space models, or causally masked convnets? In supplementary figure 3, we show suggestive evidence that state-space models exhibit similar behavior when trained on the next-word-prediction objective, and in supplementary figure 2 we showed that bidirectional models behave differently from unidirectional models. Unfortunately, we are not aware of causally masked convnet models for text. We would appreciate it if the reviewer could point us to one so we can test its curvatures. Do we know whether text in the Universal Dependencies dataset might have been part of the GPT-2 training data? How might this affect the results if this data was indeed included during training? Given that the training dataset for GPT-2 is not publicly available, we instead compared model behavior on OpenWebText and the UD corpus (supplementary figure 6); we do observe similar curvature reduction on both datasets. Why are the generated sequences for Experiment #2 so short (7 tokens, with a 3 token prompt)? We chose a 3-token prompt for two reasons: 1. We wanted to limit the amount of veridical information the model is exposed to before generating the sequence of tokens. 2. The definition of straightening requires at least 3 initial points, so we wanted to give the model the minimal amount of information in order to have it produce its prior over trajectories. Why measure surprisal in Experiment #4 using trigrams? Would an information measure like entropy be more natural? 
We included in supplementary figures 3G,H the correlation between model-generated surprisal and curvature, and observed a relationship similar to that in the main manuscript. We hope this addresses the question. Limitations: As the authors acknowledge, the stronger hypothesis--that straightening is a consequence of the predictive loss--is not supported by the experiments in this paper. This would require experiments involving, e.g., MLM or classification tasks to observe whether straightening also appears in these settings. We agree, and in future work we aim to address this. --- Rebuttal Comment 1.1: Comment: Thanks for answering my questions! > Unfortunately, we are not aware of causally masked convnet models for text. We would appreciate it if the reviewer could point us to one so we can test its curvatures. Hyena might be a good convolutional model to investigate: https://github.com/HazyResearch/safari Given the results for RWKV, I find it highly likely that the straightening phenomenon is architecture-independent; no need to rush to run additional experiments on my account (although they might be a nice addition to round out the results for the camera-ready). The straightening phenomenon is well documented by this paper, and apparently it is a general observation about the representations learned by autoregressive language models. It remains unclear to me whether this observation is important, but only time can answer that question. I believe that the observation of straightening is of broad interest to the NeurIPS community, and I am satisfied by the authors' responses to my questions. In light of this, I have raised my score.
Summary: This work investigates whether large language models learn to "straighten" the word-by-word representation of a sentence as it passes through the model layers. The word-by-word curvature of the sequence embeddings is defined as the angle between two consecutive word embeddings (i.e. arccos of the cosine similarity of consecutive word embeddings from a particular layer). The idea is that a "straighter" trajectory would enable generalization via extrapolation. This work tests a number of models from the GPT-2 family of various sizes and shows that the word-by-word sequence curvature decreases from the early to middle layers (relative to the first layer of the model), and then increases towards the later layers. Strengths: - Well written, clear, and concise manuscript - Investigates a topical question that will be of interest to many in the NeurIPS audience Weaknesses: W1. Several times throughout the manuscript (including in the abstract and title), it is claimed that the finding that larger models with better next-word prediction also have less curved trajectories in the early-to-mid layers suggests that models learn straighter trajectories in order to predict better. There is no evidence in this manuscript to support this claim. There is only evidence of correlation between the two, and not of causation. These claims need to be dialed way down and qualified. There can be other causes that lead to both straighter trajectories and better prediction performance. For example, the finding in the later part of the paper that sentence surprisal is correlated with curvature can be exactly this cause: it is possible that the most likely next word is the one that leads to the most "straight" trajectory. Therefore, a model which learns to predict the most likely next word (i.e. a language model) and achieves good performance will also have a straight trajectory. 
The interesting question is why the most likely next word would lead to a straight trajectory, but that is not answered by the current work. W2. The manuscript heavily leans on a hypothesis developed by previous work (Henaff et al. 2018/9) but it is not clear how the curvature measure defined in the current work is related to the one developed by previous work. This needs to be clarified. W3. A few possible confounders for the results. See Questions below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Major: Q1. What angles are considered to lead to a "straight" trajectory, and what are the individual values of the curvatures for each layer? Does straight mean closer to 0? Is the curvature considered to increase as the arccos increases? As in, is the curvature highest for arccos = pi, when the vectors point in completely opposite directions? Q2. Currently the average of the curvature between consecutive words/tokens in the sentence is considered, but how does the curvature actually evolve through the sentence? This question also somewhat relates to L127-129: “If the model is achieving next word prediction by reducing the curvature in the trajectory of its internal states over the course of a sentence, then we should observe a reliable decrease in the average curvature across the model layers.” The entailment is not clear to me here. If the model is reducing the curvature over the course of the sentence, then I would expect a decrease in curvature that relates to position within the sentence. It’s not clear why one would expect to see the suggested result across different layers of the network. Q3. Relatedly, how do the learned positional encodings of GPT-2 interact with the results in this work? Specifically, I am wondering if the reason that the randomly initialized model does not show any substantial changes in curvature is due to poor representations of position. 
Perhaps a better baseline would be a model that has pretrained positional encodings but a random initialization of other embeddings and parameters. Q4. It seems like the surprisal is measured entirely on the test corpus, but it should be tested w.r.t. the training corpus. The models are learning the statistics of language using the training corpus, so one would predict that these statistics are closer to the ones estimated by the models. Perhaps the correlation between surprisal and curvature can be even higher if the surprisal better captures the training statistics. Minor: - All the tested models produce token-level embeddings. The curvature computation in Section 2.2 discusses word-level embeddings. Is the curvature computation indeed done on the word level, and if so, how were the word-level embeddings produced? If the curvature computation is produced by aggregating the token-level embeddings within a word using a specific function, such as mean pooling, then how does this function affect the computed curvature? (e.g. when compared to max pooling or taking the last token in the word as the word-level token) - L6: work from 2019 is hardly recent - L90 + 6 lines (missing line numbers on page 3): typo “as the difference between to adjacent states” - Top Fig1, right hand side: words out of order “fox jumps over the” -> “fox jumps the over” - L234: the earliest citations for the middle layers of language models predicting brain recordings the best are Jain and Huth, 2018 NeurIPS (who show this for LSTM-based models) and Toneva and Wehbe, 2019 NeurIPS (who show this for larger transformer and LSTM-based models). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please discuss the fact that there can be other causes for the results you observe (see Weakness 1) Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1. We agree with the reviewer that causation is a harder question to answer, and we will emphasize in the updated version of the paper that this work shows evidence at the level of correlation. We intend to extend our work in the causal direction by training models, biasing their representations toward specific curvature values, and observing their next-word-prediction performance. However, we are connecting next-word prediction, as a behavior of the model, to straightening, as a hypothesis about the evolution of the neural trajectories that yield that behavior; the reviewer points to this fact here. We are certainly not claiming that straightening is the only feature that allows the model to develop linguistic competency. As the reviewer pointed out, we will make this point clear in the discussion. W2. The manuscript heavily leans on a hypothesis developed by previous work (Henaff et al. 2018/9) but it is not clear how the curvature measure defined in the current work is related to the one developed by previous work. This needs to be clarified. We are sorry about that; our work defines the curvature of the representation trajectory in the same manner as Henaff et al. 2019, and we will clarify this in the text. W3. A few possible confounders for the results. See Questions below. Questions: Major: Q1. We consider straightness with respect to the initial layer of the network. We have shown in the supplementary material that a random trajectory on average has a curvature close to 120 degrees, and this is indeed the case for untrained models and early layers (supplementary figure 6A). We consider a trajectory straighter when its average angle decreases with respect to the angle in the first layer. In none of the networks did we observe curvature close to pi. Q2. This is a fair point; however, here we are measuring an average curvature over the whole sentence, and not at the level of individual words. 
It is not straightforward to interpret the word-by-word changes in curvature. One reason is that the error in estimating the next state is cumulative, in the sense that the error in predicting the n+1-th token will affect the n+2-th token prediction and so on. As a result, the later part of the sentence could have more variation in curvature. Q3. This is a good point; we validated that positional information is not contributing to curvature in untrained models. To do so, we studied a transformer model with rotary positional encoding (Su et al. 2021; Biderman et al. 2023). Similar to the GPT-2 model with learnt positional embeddings, we observed that the untrained version of the model does not exhibit any reduction in curvature (supplementary figures 3C,D). Q4. This is a great point. We unfortunately don't have access to the training corpus for many of these models, but agree that it would be a good next step. We will try to address this by using models that are trained on publicly available corpora. Minor: • All the tested models produce token-level embeddings. The curvature computation in Section 2.2 discusses word-level embeddings. Is the curvature computation indeed done on the word level, and if so, how were the word-level embeddings produced? If the curvature computation is produced by aggregating the token-level embeddings within a word using a specific function, such as mean pooling, then how does this function affect the computed curvature? (e.g. when compared to max pooling or taking the last token in the word as the word-level token) For word-level embeddings we average the activations of the tokens that compose the word, and from that point on the curvature computation is identical between the two settings. 
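The token-to-word mean pooling described in this answer can be sketched as follows; the `word_ids` alignment (mapping each token to the index of the word it belongs to, as provided e.g. by a fast tokenizer) is an illustrative assumption, not the authors' code:

```python
import numpy as np

def word_embeddings(token_states, word_ids):
    """Mean-pool token-level hidden states into word-level embeddings.

    token_states: array of shape (num_tokens, dim).
    word_ids: sequence mapping each token to its word index (0, 1, ...).
    Each word's embedding is the average of the activations of the
    tokens that compose it, as described in the rebuttal.
    """
    word_ids = np.asarray(word_ids)
    n_words = word_ids.max() + 1
    return np.stack([token_states[word_ids == w].mean(axis=0)
                     for w in range(n_words)])
```

After this pooling, the curvature computation proceeds identically on the (num_words, dim) array as it would on token-level states; swapping the `mean` for max pooling or last-token selection is the comparison the reviewer asks about.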
• L6: work from 2019 is hardly recent • L90 + 6 lines (missing line numbers on page 3): typo “as the difference between to adjacent states” • Top Fig1, right hand side: words out of order “fox jumps over the” -> “fox jumps the over” • L234: the earliest citations for the middle layers of language models predicting brain recordings the best are Jain and Huth, 2018 NeurIPS (who show this for LSTM-based models) and Toneva and Wehbe, 2019 NeurIPS (who show this for larger transformer and LSTM-based models). We thank the reviewer for pointing these out and we will update the manuscript accordingly. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I agree with the rest of the reviewers that the straightening phenomenon is well examined by the authors and that it would be of interest to the NeurIPS audience. I am raising my score to a borderline accept, but I do believe that there is a lot that can be done to strengthen the impact of the paper. For instance, providing more intuition for why straightening is measured the way it is in the work vs. other possibilities (e.g. w.r.t. position in a sentence) would be very helpful. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for their insightful comment and the reconsideration of the manuscript. We also agree that we can strengthen the impact of the paper. With regard to measuring straightening, we think this way of measuring curvature is the simplest form, and it potentially affects the main effect. For example, we could have measured curvature using more parametric approaches (spline estimation) or over a longer temporal window (considering multiple words at a time), but our worry was that this would make the interpretation harder. We will make sure to add points to the methods and discussion to clarify our rationale and emphasize the reviewer's points.
Summary: This work examines a hypothesis regarding neural trajectory straightening as a mechanism by which neural language models achieve next word prediction. Specifically, this hypothesis connects the objective of next word prediction with extrapolation to the embedding of the next word in neural representation space. They define layer curvature based on prior work and find that (1) autoregressive LMs consistently reduce their curvature from early to middle layers, and this effect was only observed for trained models; (2) model size and training dataset size affected the model’s ability to reduce curvature; (3) model-generated sentences exhibited lower curvature compared to natural human-generated sentences; (4) average curvature correlated with average sentence surprisal in the middle layers of the model. Strengths: This paper explores an interesting hypothesis regarding the connection between an internal geometric property of Transformer-based LMs and their performance. If this was not examined in prior work, I find the questions posed in all 4 experiments novel and interesting, and the results non-trivial. In particular, I liked the thoroughness in testing both different model sizes and different training set sizes for the same model size, which did a good job of removing an important confounder in my opinion (though there was not enough discussion and experimentation regarding the 1B-token training set point that broke the trend). Weaknesses: I find that this paper can be strengthened from several different angles: - Discussion of prior work: I am not familiar with literature on geometric interpretability of language models. This paper is on this exact topic but does not convey sufficient background on related work. - Question scope: the focus on one specific geometric measure limits this paper’s strength; to me it seems that several related measures could be examined. 
Alternatively, though it’s intuitive, I find the focus on this specific geometric measure insufficiently motivated. - Depth of investigation: For each experiment, only the basic setup was run, and often there was not sufficient discussion of the outcome or follow-up experimentation (e.g., what happens in the second half of the network? Why did the 1B-token experiment not show the same trend as 1M, 10M, 100M?) - Several experimental design choices were not sufficiently motivated (Why only one dataset of 8,408 sentences? Why constrain sentences to be between 6 and 19 words long, and to not contain abbreviations or uncommon words?) - Writing and presentation: There were several clumsy sentence phrasings (e.g., first intro sentence) and some typos (e.g., mid-sentence capitalization on line 121). More importantly, some core quantities were not adequately presented (e.g., no formula given for the employed 3-gram surprisal metric), and the figures were generally pretty hard to decipher (e.g., What does figure 4A mean? I didn’t understand what quantity is referred to by the title: “The predictions of the representation straightening hypothesis”. What do the axes of this plot correspond to?). As mentioned above, I found the results sections not written well enough, often reiterating the premise and the intuition and not conveying and discussing the actual experimental outcome clearly enough. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Why only train and not compare to other models? - The y-axis in your reported plots is "curvature change". What was the distribution of the original absolute curvature angle in the first layer? If you report only a relative number, how can I know that a curvature drop bringing it close to zero doesn’t end up as a negative angle, rendering the conclusion on "straightening" incorrect? 
To be clear, I'm pretty sure / hopeful that the authors will have a good answer and that "straightening" indeed takes place, but this represents some weakness in the presentation. - I found this specific sentence very odd: "The results generalize to smaller models and other surprisal metrics (other n-gram metrics and PCFG-parser-based surprisal - not shown here)". Why do the authors mention other related results but not show them? - Why did you use n-gram surprisal and not LM perplexity in the fourth experiment? The former is model-agnostic and the latter is model-dependent, so maybe you should have run both, but I find perplexity very natural to use here. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Discussion of prior work: We thank the reviewer for bringing this issue to our attention; we will include a more thorough discussion of prior work in our revised manuscript. A number of prior works have used geometric approaches to understand the internal representations of language models. (Hewitt and Manning 2019) used a linear transformation to identify a projection of model representations that maximizes similarity to syntactic distances in a parse tree, and found that the middle layers of BERT provide the best representation. In another line of work, (Mamou et al. 2020) used manifold analysis to uncover how different features of words and sentences, such as part of speech, become separable in deep layers of language models like BERT and GPT. More recently, (Valeriani et al. 2023) investigated how geometric properties such as intrinsic dimension change across the layers of transformer models, finding that intrinsic dimension first increases before sharply decreasing in deeper layers of bidirectional transformer models. While these works provide insight into the geometric properties of these networks, to our knowledge no prior work has tested a representational-level hypothesis that connects the behavior of the network (next-word prediction) with the neural trajectories of individual sentences. Scope: The reviewer brings up a good point, and we will include more information regarding our motivation to focus on straightening. We focused on straightening specifically because it provides a testable prediction about how model behavior is shaped by its internal representations. Figure 4 is one such test, in which we used our geometric measure to reveal a possible mechanism that leads the model to deviate from natural language, namely that its generated sequences exhibit more straightening than human-produced language. Depth of investigation: We agree with the reviewer that there are aspects of the results that we did not discuss thoroughly.
We hope to investigate these phenomena in future work, and tried to address some of the follow-up experiments in the supplementary section. Experimental design: We thank the reviewer for bringing these to our attention, and will fix the errors and clarify the other topics in the updated version of the paper. 1. Choice of dataset: we used Universal Dependencies because we wanted to be able to quantify the sentences and phrases along multiple lexical, semantic, and syntactic features, and the UD corpus provides precomputed values for syntactic features. We further restricted our sentences to the 100K most common words in English (using the Google n-gram dataset). Our choice of sentence length was to ensure we have enough sensitivity in measuring average curvature; very long sentences would average out the differences we are interested in. 2. We will modify Figure 4A to emphasize that it is for illustrative purposes and will clarify the details. The axes in this figure correspond to state spaces built by units in layer P of the network. The trajectories represent the evolution of unit activity when the model is generating a sentence versus when it is exposed to a veridical sentence. If the unit activity favors a straight trajectory, then model-generated sentences would have lower curvature (red compared to blue). Figure 4B shows the results of an experiment in which we provided the model with 3 context tokens and recorded its generated sequence for 7 tokens, along with the model's representation of the generated sequence, and contrasted it with a condition in which we provided the full 10-token sequence to the model. We then compared the curvature of the generated sequences to the ground truth, and observed that model-generated sequences achieve lower curvature, in line with our prediction in Figure 4A. Questions: Why only train and not compare to other models? We added new models in Supplementary Figure 3 (RWKV, GPT-NeoX, OPT, GPT2-XL).
We observed a similar decrease in curvature in the trained versions of the models, as well as a similar correlation between curvature and surprisal. The y-axis in your reported plots is "curvature change". What was the distribution of the original absolute curvature angle in the first layer? If you report only a relative number, how can I know that the curvature drop that makes it close to zero doesn't end up at a negative degree, rendering the conclusion on "straightening" incorrect? To be clear, I'm pretty sure / hopeful that the authors will have a good answer and that "straightening" indeed takes place, but this represents some weakness in the presentation. We focused on curvature change so that we can compare models on their performance in curvature reduction. However, we agree that the absolute curvature values are important to consider as well. To this end we added Supplementary Figure 6 to clarify this. First, using simulation, we showed that for a trajectory over random points in space, the average curvature is distributed around 120 degrees. For many of the models we tested, early layers indeed have curvature in the range of 120 degrees, and there is a drop in curvature in the deep layers of the network. I found this specific sentence very odd: ... We are sorry for not including this in the main part of the manuscript. We included the additional surprisal metric, as well as the relationship between surprisal and curvature for a number of new models, in the supplementary information (Supplementary Figure 4). Why did you use n-gram surprisal and not LM perplexity in the fourth experiment? The first is model agnostic and the latter is model related, so maybe you should have run both, but I find perplexity very natural to use here. We picked a model-agnostic measure so we could test it across many models, as the reviewer mentioned. But we agree, and have included a new figure in the supplementary relating curvature to LM perplexity.
We included the results for the main model (GPT2-XL) and a few other models in Supplementary Figure 3. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing many of my comments and questions. Given their responses, I am raising my score. --- Reply to Comment 1.1.1: Comment: We are glad to hear that we were able to address the reviewer's questions, and appreciate their reconsideration of the work.
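The rebuttal above states that trajectories over random points have an average curvature of about 120 degrees. This can be reproduced with a short simulation. The sketch below is illustrative and is not the authors' code; it assumes the curvature metric is the turning angle between consecutive difference vectors of a trajectory (consecutive differences of i.i.d. points share a point and have correlation -1/2, so the expected angle is arccos(-1/2) = 120 degrees):

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_curvature_deg(traj):
    """Average turning angle (degrees) between consecutive difference vectors of a trajectory."""
    d = np.diff(traj, axis=0)                             # difference vectors d_i = x_{i+1} - x_i
    d = d / np.linalg.norm(d, axis=1, keepdims=True)      # normalize each difference vector
    cos = np.sum(d[:-1] * d[1:], axis=1).clip(-1.0, 1.0)  # cosine between consecutive differences
    return np.degrees(np.arccos(cos)).mean()

# Trajectories of 10 i.i.d. Gaussian points in a 512-dimensional space:
angles = [avg_curvature_deg(rng.standard_normal((10, 512))) for _ in range(200)]
print(np.mean(angles))  # concentrates near 120 degrees
```

This matches the baseline against which the rebuttal compares the early-layer curvature of the models.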
Rebuttal 1: Rebuttal: Pdf: /pdf/f7360df41b06309afb15fe6352db7e478bf0301d.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper provides evidence that autoregressive language models, specifically the GPT family, straighten the internal trajectory of word sequences, making them more linear, in order to better predict next words. They show that trained models decrease sequence curvature across layers, larger models straighten more, model-generated sentences are straighter, and curvature correlates with unpredictability. Strengths: The paper introduces computational evidence for the trajectory straightening hypothesis using a simple and intuitive curvature metric and backs it up with various experiments. It is also well-written and raises exciting questions about the workings/interpretability of these models. Weaknesses: The paper did not perform any ablation studies to see what is causing the trajectory straightening. Removing different components of the transformer, like the feed-forward layer, and seeing the effects on straightening may lead to some more insights. It is not clear if straightening depends on the transformer architecture specifically or also occurs in other model architectures like LSTMs or vanilla RNNs. Do similar dynamics occur in MLPs? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Does straightening occur in LSTM or GRU sequential models? Or is it unique to transformer self-attention? If we remove layers or transformer components like self-attention or the feed-forward layer, is straightening impeded? Does straightening also happen in other transformer-based models but with a different pretraining objective, such as BERT? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have a sufficient limitations section.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Weaknesses: The paper did not perform any ablation studies to see what is causing the trajectory straightening. Removing different components of the transformer, like the feed-forward layer, and seeing the effects on straightening may lead to some more insights. It is not clear if straightening depends on the transformer architecture specifically or also occurs in other model architectures like LSTMs or vanilla RNNs. Do similar dynamics occur in MLPs? We appreciate the reviewer for pointing this out. To address this, we performed ablations on the GPT2-XL autoregressive model (Supplementary Figure 1). We performed ablations at different layers of the network [5, 15, 25, 35, 45], and for each layer on individual modules [attention head, attention projection, MLP]. To ablate each module, we set its weights to the identity matrix and its biases to 1. Specifically, for the attention head we replace the Key weight matrix with the identity, for the attention projection we replace the weights with the identity, and for the MLP we replace the two weight matrices (h×4h and 4h×h, where h = hidden size = 1600) such that their effective weight is the identity. This approach allowed us to minimally disrupt the model and observe how the representation of the ablated layer is transformed. For computational considerations, in each experiment we tested 500 sentences sampled uniformly at random. We observed two main effects. First, ablation of early layers has more consequence for curvature in the succeeding layers. Second, ablation of the attention mechanism leads to the largest deficit in reducing curvature in the succeeding layers. This suggests that the attention mechanism is what causes the straightening. We also tested a recent transformer-like RNN model (Peng et al. 2023) and observed straightening similar to transformers (Supplementary Figure 3). Questions: Does straightening occur in LSTM or GRU sequential models? Or is it unique to transformer self-attention?
If we remove layers or transformer components like self-attention or the feed-forward layer, is straightening impeded? Does straightening also happen in other transformer-based models with a different pretraining objective, such as BERT? We thank the reviewer for bringing this issue up. This is a very important question that we intend to address in our continuation of this work. Our current results point towards the objective function as the main driver of straightening. As the reviewer suggested, we tested the straightening hypothesis in a bidirectional model (BERT-large-uncased, Supplementary Figure 2). We observed a different pattern of curvature across layers: early layers do not show the gradual drop in curvature that we observed in autoregressive transformer models; instead, there was a drop in curvature in the deeper layers. Importantly, there is no reliable relationship between surprisal and curvature across the layers. What could contribute to the curvature reduction in the deeper layers of BERT? It is possible that the masked language modeling objective still shares some similarities with an autoregressive objective. For example, when masked words are towards the late part of a token sequence, the model can use information from past words to predict the missing masked token. Limitations: The authors have a sufficient limitations section. --- Rebuttal Comment 1.1: Comment: Thank you authors for your diligent response. I agree with the other reviewers that the straightening phenomenon is interesting and would be of interest to the wider NeurIPS audience. However, the paper does need some work to improve the writing and the explanation of the results. I would recommend adding the result on "objective function as the main driver of straightening", with experiments showcasing the BERT results, to the main paper. Thus, I would like to keep my original score. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's insightful input.
We will indeed follow their suggestion to include a new section on "objective function as the main driver of straightening", in which we will discuss the results for BERT, emphasize how in autoregressive models a predictive objective function can lead to straightening of neural trajectories, and connect it to a mechanistic description of model behavior.
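The identity-weight MLP ablation described in the first rebuttal above can be illustrated with a toy construction. This is a sketch under stated assumptions, not the authors' code: it shows one way to pick the two MLP weight matrices (h×4h and 4h×h) so that their product is the identity, meaning the linear part of the ablated block passes activations through unchanged (the nonlinearity between the two matrices is ignored here, and h is kept small instead of 1600):

```python
import numpy as np

h = 4  # hidden size (1600 in GPT2-XL; small here for illustration)

# Two MLP weight matrices chosen so that W1 @ W2 equals the identity:
# W1 embeds the hidden state into the first h of the 4h intermediate units,
# W2 reads those same h units back out.
W1 = np.hstack([np.eye(h), np.zeros((h, 3 * h))])  # shape (h, 4h)
W2 = np.vstack([np.eye(h), np.zeros((3 * h, h))])  # shape (4h, h)

x = np.random.default_rng(1).standard_normal((5, h))  # a batch of activations
assert np.allclose(x @ W1 @ W2, x)  # the ablated (linear) MLP is a pass-through
```

Under this construction the ablated layer contributes no transformation of its own, which is what makes the downstream curvature comparison meaningful.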
Front-door Adjustment Beyond Markov Equivalence with Limited Graph Knowledge
Accept (poster)
Summary: Causal effect estimation from data often requires assumptions about the causal relationships, either through explicit causal graph structures or implicit conditional independence statements. When confounding exists, the front-door adjustment becomes important for estimating the causal effect of treatment on the outcome using post-treatment variables. This paper studies testable conditional independence statements to compute causal effects using a front-door-like adjustment without knowing the graph under limited structural information. The effectiveness of the method is demonstrated through experiments on both random graphs and real-world causal fairness benchmarks. Strengths: 1. The proposed method enables estimating causal effects without requiring knowledge of the causal graph. Instead, it utilizes front-door-like adjustments based on post-treatment variables, making it applicable even in scenarios with unobserved confounding. 2. The proposed method relies on conditional independence statements that can be directly tested from observational data. This allows for identifying causal effects using observable information without the need for specifying the entire causal graph. 3. The proposed method requires only limited structural side information, which can be obtained from an expert. This requirement is less demanding than specifying the entire causal graph, making the approach more practical and feasible. Weaknesses: 1. The algorithm presented in the paper relies on stronger assumptions, but the paper does not mention them, raising doubts about the soundness and completeness of the proposed method. My main doubts are listed in the Questions. 2. Figure 5 appears to have a mislabeled Y-axis. It seems to be "Average ATE errors". Minors: Line 59, criteria -> criterion Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Assumption 1 alone may not be sufficient. 
It might also be necessary to explicitly state that variable y is not a child node of variable t. 2. Theorem 3.1 appears to be incomplete. For example, we have $t\rightarrow b\rightarrow y$ and $t\leftrightarrow y$, where $\leftrightarrow$ denotes a latent confounder. In this case, b is a cause of y, and the conditional independence between b and y may no longer hold. 3. Some cases present challenges in identifying P(Y|do(t=t)) due to the complexity introduced by latent confounders. It is unclear how to exclude these non-identifiable cases and handle them appropriately. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and time. Below, we respond to their comments: 1. **Figure 5**: We thank the reviewer for pointing out this typo. We will correct this in the revised version. 2. **It might be necessary to explicitly state that $y$ is not a child node of $t$**: We thank the reviewer for bringing this up. First, note that if $y$ is a child of $t$ and a bi-directed edge exists between $t$ and $y$ (as stated in line 160), the causal query is not identifiable (see Theorem 4 of Tian and Pearl 2002, *A General Identification Condition for Causal Effects*). Moreover, in this scenario, our desired conditional independencies will not pass as there does not exist a set $\mathbf{z}$ for which $b \perp y | \mathbf{z}, t$ since $y \in b$. Thus, our algorithm will correctly return “I don’t know” as the answer. We will add a remark in the revised version to make this explicit. 3. **Theorem 3.1 appears to be incomplete ($b \perp y | \mathbf{z}, t$ may not hold for the graph $t \leftrightarrow y; t \rightarrow b \rightarrow y$)**: We thank the reviewer for mentioning this case. We note that for the traditional front-door graph that the reviewer suggested, our available side information only reveals the edges $t \leftrightarrow y$ and $t \rightarrow b$, i.e., it does not reveal the edge $b \rightarrow y$ as we do not assume the knowledge of the entire graph. **This is not sufficient to identify the causal effect from observational data as the edge between $b$ and $y$ cannot be oriented.** Specifically, the traditional front door graph is not distinguishable from the following two graphs: (a) $t \leftrightarrow  y; t \rightarrow  b \leftarrow  y$  and (b) $t \leftrightarrow  y; t \rightarrow  b \leftrightarrow  y$. For these two graphs, we have $\mathbb{P}(y|do(t)) = \mathbb{P}(y)$ which is different from the front-door adjustment formula. 
Therefore, yes, our algorithm cannot identify the causal effect in the traditional front-door graph, but correctly so, since it is not identifiable from the available side information. We will include this discussion in the revised version. 4. **Complexity introduced by latent confounders**: We note that our conditions are sufficient irrespective of the number or complexity of the latent confounders. This means that if a causal query is not identifiable, then our conditions will not hold. Thus, the algorithm may say “I don’t know” but it will not make a mistake. For more examples of graphs with multiple latent confounders beyond Figure 2, please have a look at Figures 8 and 11 in the supplementary. --- We hope that our response addresses the reviewer's concerns and that they would consider increasing their score. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for your response and clarification. I have read the authors' rebuttal and the other reviewers' comments. I will maintain my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading our rebuttal as well as the comments of the other reviewers. We are glad that the reviewer's questions were clarified.
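For reference, the classical front-door adjustment for the graph the reviewer raised ($t \rightarrow b \rightarrow y$ with $t \leftrightarrow y$) is the standard formula (stated here for the reader's convenience, not quoted from the paper):

$$\mathbb{P}(y \mid do(t)) = \sum_{b} \mathbb{P}(b \mid t) \sum_{t'} \mathbb{P}(y \mid b, t')\, \mathbb{P}(t'),$$

which in general differs from $\mathbb{P}(y|do(t)) = \mathbb{P}(y)$, the causal effect under the two indistinguishable graphs (a) and (b) discussed in the rebuttal above. This is why the edge orientation between $b$ and $y$ matters for identifiability.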
Summary: The authors propose a method for estimating causal effects without requiring knowledge of a fully specified causal graph, focusing on the case where unobserved confounding between treatment and outcome exists. This approach, using a front-door-like adjustment formula, makes a novel contribution in that it can estimate causal effects using only simple structural side information, which can be obtained from an expert and is less demanding than specifying the entire causal graph. The authors provide sufficiency proofs and demonstrate clear graphical criteria (a generalized front-door condition) for the proposed front-door-like adjustment formula. Strengths: The authors present a generalized formula that accounts for the variability of the front-door criterion based on the structure of the graph. They clearly provide sufficient conditions for the formula and demonstrate its validity. Hence, in realistic scenarios where unobserved variables may exist between treatment and outcome, this methodology can be effectively utilized, proving its utility. In order to facilitate understanding for readers, the paper includes comprehensive prerequisite knowledge. It is anticipated that the formula proposed in this paper will have high utility. Weaknesses: No specific weakness. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I once worked on the same problem a few years ago when I first read Entner's paper on the backdoor criterion, but I had no luck. So I am super happy to read this paper! I am wondering whether you can compute the variance of the ATE estimate for each selection of S, so that we can use inverse variance weighting instead of a simple average. Further, can you employ a double machine learning-like approach? (e.g., Jung et al.
2021, https://ojs.aaai.org/index.php/AAAI/article/view/17438) Suggestion: In consideration of the importance of Theorem 3.2 as a sufficient condition, it is recommended to include a brief proof sketch not only in the appendix but also in the main body of the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The author clearly presents the limitations of the methodology proposed in Appendix A.2. As stated by the author, Assumption 3 among the three assumptions is undoubtedly a strong assumption, which would require substantial domain knowledge to satisfy in practical situations. Therefore, it is necessary to exercise caution and ensure sufficient attention when applying the methodology suggested by the author in experimental settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and time. Below, we respond to their comments: 1. **Using inverse variance weighting + double machine learning**: We thank the reviewer for pointing out the possibility of variance and bias reduction using inverse variance weighting and double machine learning. Although it might be hard to non-parametrically estimate the variance for each selection S, it is perhaps possible under parametric assumptions such as linearity. We will add this remark as a possibility in the camera-ready version. 2. **Proof sketch of Theorem 3.2**: We thank the reviewer for their suggestion and will include a proof sketch of Theorem 3.2 in the main body in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The other reviewers' comments and the authors' responses are satisfactory. I will maintain my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading our rebuttal as well as the comments of the other reviewers. We are glad that the reviewer found our response satisfactory.
Summary: The paper investigates the problem of estimating the average treatment effect of variable "t" on variable "y" within the Pearlian framework. The paper proposes an algorithm that enables causal effect estimation using a front-door-like criterion while relying on only limited knowledge of the underlying graph structure. The core of the algorithm lies in the search for a subset of observables "z" that satisfies a series of independence criteria, thereby establishing a front-door-like formula using "z". The proofs employed in the paper leverage the do-calculus and the identifiability criterion of Tian and Pearl. In addition to its theoretical contributions, the paper also presents empirical demonstrations of the proposed approach across three distinct categories: (1) random Structural Causal Models; (2) synthetic data; and (3) real-world fairness benchmarks. Strengths: The paper addresses an important problem in the field of causality research by examining the limitations of existing algorithms for causal effect estimation. It specifically aims to improve on the assumption of having access to the underlying causal graph, which is often not readily available. The main contribution of the paper is the introduction of a method for identifying a set of observables 'z' that enables the generation of a front-door-like formula, thereby improving causal effect estimation under limited graph availability. The paper presents a clear problem statement and provides well-presented proofs. By offering an alternative perspective on causal effect estimation, the paper provides valuable insights for tackling this challenging problem. Overall, the paper makes a meaningful contribution to the field and opens avenues for further research.
Weaknesses: One potential weakness of the paper is that once the requirement of finding a subset "z" satisfying the front-door-like criterion (Eqn (9)) is fixed, the proofs and the proposed independence criteria are relatively straightforward, achievable using the rules of do-calculus and the identifiability criterion. It is also not clear to me how and why the exhaustive search runs fast for the random class of graphs generated in Section 4.1. The expected number of unobservables is of the order O(p); I am curious to know why the bi-directed edges are chosen with probability q/p. It would be convincing to see positive results for larger p with more unobservables. The incompleteness of the proposed algorithm and the mandatory requirement of Assumption 2 are the two major drawbacks of the paper (please refer to Limitations for a detailed discussion). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: As pointed out by the authors, Assumption 2 is mandatory for the approach to work. This observation is not surprising but somewhat disheartening. It would be valuable to explore whether there are any workarounds or alternative approaches to testing this assumption, possibly utilizing conditional independence (CI) tests. Given that the available knowledge is limited to only the children of "t," I am not sure this is possible to test. Further exploration or discussion of potential alternatives or extensions to address this limitation would enhance the paper's robustness. Regarding the choice of the random class of graphs for the benchmark, it would be beneficial to have an explanation or justification provided in the paper. Understanding the rationale behind the class of graphs considered would provide more insight. I also suggest providing a more detailed description of Algorithm 1 in the main paper, as that would enable readers to better understand the algorithm.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The primary limitation of the paper, in my opinion, is regarding the completeness of the algorithm. The fact that the proposed algorithm is not complete represents a significant drawback. A complete algorithm would have provided a more robust and comprehensive solution to the problem. As explained in the paper, Assumption 2 is crucial in order for the search to work which is another crucial downside of this approach. This reliance on a critical assumption may limit the generalizability of the approach to real-world scenarios where such assumptions may not hold. Indeed, I feel like finding a workaround or alternative approach to mitigate the reliance on Assumption 2 would greatly enhance the paper's technical as well as practical value. I believe issues regarding sample complexity are out of the scope of the paper and perhaps be considered for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and time. Below, we respond to their comments: 1. **How and why does the exhaustive search run fast for the random class of graphs generated in Section 4.1?**: We thank the reviewer for this question. We do not claim that the exhaustive search runs fast for the class of random graphs considered in Section 4.1, and we will clarify this in the revised version. The worst-case run-time would still be exponential in $p$. 2. **Why are the bi-directed edges chosen with probability $q/p$?**: We choose $q/p$ to be the probability with which a bi-directed edge exists between two observed nodes so that the expected number of unobserved nodes is $q/p \times p(p-1)/2 = q(p-1)/2$, as the reviewer mentioned. As per the reviewer's suggestion, we increased the number of unobserved nodes by choosing $q$ to be the probability with which a bi-directed edge exists between two observed nodes. In this case, the expected number of unobserved nodes is $qp(p-1)/2$. We ran simulations with $q \in$ {0.1, 0.15, 0.2}. As expected, with more confounding, there are almost no conditional independence (CI) tests to exploit, yet there are graphs (roughly 1% in all 3 settings) where our approach still applies. We note that any causal discovery-based approach is expected to suffer even more without any meaningful collection of CIs. We will add these results in the revised version. 3. **Rationale behind the class of graphs considered**: We thank the reviewer for the suggestion to add more explanation regarding the choice of the random class of graphs. Our choice ensures that every node has the same bounded in-degree in expectation. This is common in recent works, e.g., Addanki et al. 2020, *Efficient Intervention Design for Causal Discovery with Latents*, where every directed edge is added with the same probability as in our work. We are happy to address any more questions that the reviewer may have. 4.
**Detailed description of Algorithm 1**: We thank the reviewer for their suggestion and will include a detailed description of Algorithm 1 in the main body in the revised version. The variable $c_1$ is used to perform an average over different train-test splits. The variables $c_2$, $ATE^r_z$, and $ATE^r_s$ are used to perform an average over the different subsets $\mathbf{z}$ that satisfy our conditions for a specific train-test split. We will also add more explanation of how the two equations in the algorithm are equivalent to the first-moment versions of (9) and (10), respectively. 5. **Completeness of the algorithm**: To show the completeness of Theorem 3.2, one would have to show the following: for every class of graphs where our structural side information holds but our algorithm fails (i.e., at least one of the conditional independencies in (7) or (8) fails to hold), the causal effect is not unique, i.e., the causal effect cannot be identified by the front-door-like formulae in (9) and (10) while only using observational data and not knowing the underlying causal graph. Typically, this is done by explicitly constructing two causal graphs where the algorithm fails and the causal effects have different values. For example, Shpitser et al. 2008, *Complete Identification Methods for the Causal Hierarchy*, showed the completeness of their ID algorithm this way through explicit constructions. It is highly non-trivial to show the completeness of our algorithm using a similar approach where there are no specific graphs at hand. However, we do show the importance of (8) via the causal graphs in Figure 3. These graphs are such that our structural side information holds, (7) holds for both, but (8) only holds for the bottom one. Then, we show that the causal effect formulae for these graphs are indeed different (see lines 195-215).
While we agree that providing additional examples towards full completeness of our algorithm would strengthen our results, we reserve this for future work as it is a highly non-trivial extension. 6. **Workaround for Assumption 2**: We thank the reviewer for their comment about Assumption 2. In practice, we believe this assumption is more likely to be true than not. However, we agree with the reviewer that this assumption is a limitation and that working around it is an interesting direction for future work. We believe that our results could be derived under the weaker condition that there is a back-door path between $t$ and $y$ which is not blockable. On the other hand, if there is no unblockable back-door path between $t$ and $y$, it may be easier to find back-door adjustment sets. We will append this to the discussion on Assumption 2 in the limitations section (Appendix A.2). --- We hope that our response addresses the reviewer's concerns and that they would consider increasing their score. We believe it is indeed an advantage that our proofs are straightforward and clear in hindsight. However, note that it is not clear a priori that the proposed independence criteria would lead to a single (front-door-like) causal effect formula that spans multiple causal graphs which are not Markov equivalent, as we show in this work. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I would like to maintain my current score.
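As a side note, the expected-count arithmetic in point 2 of the rebuttal above ($q/p \times p(p-1)/2 = q(p-1)/2$ unobserved confounders) can be sanity-checked with a short simulation. This is our illustrative sketch, not code from the paper; the function name and parameters are our own.

```python
import random

def expected_bidirected_edges(p, q, trials=2000, seed=0):
    """Empirical mean number of bi-directed edges (one per unobserved
    confounder) when each of the p(p-1)/2 pairs of observed nodes gets
    a bi-directed edge independently with probability q/p."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        for i in range(p):
            for j in range(i + 1, p):
                if rng.random() < q / p:
                    total += 1
    return total / trials

# Closed form from the rebuttal: (q/p) * p(p-1)/2 = q(p-1)/2.
p, q = 20, 0.3
empirical = expected_bidirected_edges(p, q)
closed_form = q * (p - 1) / 2
assert abs(empirical - closed_form) < 0.1 * closed_form
```

The same function with $q$ (instead of $q/p$) as the edge probability reproduces the $qp(p-1)/2$ count used in the reviewer-suggested denser setting.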
Summary: This paper proposes a method for estimating causal effects between the treatment variable and the outcome variable using front-door adjustment beyond the Markov equivalence class. This method is applicable when there are unobserved confounders between the treatment and outcome variables and does not require knowledge of the entire causal graph, only limited graph knowledge. The authors introduce three assumptions, a causal identifiability theorem, and a generalized front-door condition to achieve estimation of the Average Treatment Effect (ATE). Through experiments, the paper demonstrates that the proposed framework provides identifiability on random graphs compared to PAG-based algorithms, exhibits lower error rates in ATE estimation compared to baselines, and shows practical applicability in causal fairness analysis. Strengths: - The authors propose testable conditional independence statements for front-door-like adjustment without graph knowledge under limited structural side information. - The experimental results show that the proposed method is effective on random graphs and real causal fairness benchmarks. Weaknesses: - It seems that the identification of the proposed method highly depends on Assumption 3. Assumption 3 requires knowledge of all direct descendant nodes $b$ of the treatment variable, which is too strong and difficult to achieve in practical scenarios. - Compared to PAG-based algorithms, the proposed method effectively provides identifiability. However, it requires expert knowledge to provide structural information, which may not necessarily demonstrate better applicability than PAG-based methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and time. Below, we respond to their comments: 1. **Assumption 3 requires knowledge of all direct descendant nodes**: We agree with the reviewer that requiring the knowledge of all the children of the treatment variable is crucial for our method. An important future direction could be to alleviate this requirement, say, to knowing only a subset of the children. For example, one could think of approximating the causal effect when only the children corresponding to weak edges are unknown. Such variations around our condition are promising directions for future work. We will append this to the discussion on Assumption 3 in the limitations section (Appendix A.2). 2. **Comparing the applicability of our method to PAG-based methods**: We thank the reviewer for bringing up this great point. We agree that in scenarios where the structural side information is not available, we may have to resort to PAG-based methods. We will clarify this in the revised version. Additionally, we note that the FCI algorithm (that produces a PAG) involves a sequence of adaptive conditional independence (CI) tests where the choice of the next test depends on the previous ones. Specifically, orientation stages tend to propagate errors in non-trivial ways (for example, https://arxiv.org/pdf/1607.03975.pdf shows how to handle these for the PC algorithm which has only three orientation rules). This gets very involved for the FCI algorithm which has many orientation rules and makes it difficult to control the false discovery rate for PAG-based methods. In contrast, the CI tests involved in our method could be carried out in parallel and therefore require little-to-no adaptivity. Thus, our method can be viewed as a way to mitigate the issues associated with adaptive testing by using structural side information. --- We hope that our response addresses the reviewer's concerns and that they would consider increasing their score.
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Customizable Image Synthesis with Multiple Subjects
Accept (poster)
Summary: This paper aims to generate controllable images with multiple subjects as constraints. A residual token embedding is learned to shift the raw subject to the customized subject. A layout prior is further provided as the spatial guidance for subject arrangement. The experimental results demonstrate the effectiveness of the proposed method under a variety of settings. Strengths: The idea of using residual token embedding for a specific subject is a simple and effective way to generate customized subjects. The residuals and layout priors could be further utilized to adjust the attention for multi-subject generation without retraining. Compared to existing works, the proposed method enables the generation of a greater number of subjects for multi-subject generation. Weaknesses: 1. The key contribution of this paper is the ability to generate a greater number of subjects compared to existing works. However, in general, the maximum number of subjects that this paper can deal with is 6. I was wondering about the comparison results that contain more than 6 subjects. 2. Different from existing works, this paper utilizes a predefined layout prior as the spatial guidance for multi-subject generation. Such result comparisons may be unfair, as the inputs of existing works do not contain the layout. Existing works that generate layouts from textual descriptions could be applied here to avoid predefining layouts. 3. I cannot find any quantitative results for the ablation study. In addition, how is the term that controls the relative weight of the text-embedding-preservation loss determined? 4. The authors state that they select Cones as the baseline for customization with similar subjects, while Fig. 4 shows the results of DreamBooth. I was wondering which baseline is actually used in this experiment. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. For an image with 6 or more subjects (e.g., Fig. 5), will the layout prior still be easy to obtain? 
It would be better to show all the predefined layouts in the figures. 2. How is the number of guidance steps needed to generate satisfactory results determined? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Author Response to Reviewer oYd1** We greatly appreciate all of your valuable suggestions, which play a pivotal role in enhancing the quality of our paper. Below we address all your concerns. #### **Q1: About generated results with more than 6 subjects.** **A1:** Thanks. As shown in Fig. 3 in our paper, when dealing with three subjects, our method has already demonstrated significant superiority over existing methods. As for more subjects, the results are limited by the capabilities of the pre-trained model itself. As shown in Fig. R5 **in the newly added PDF file**, involving more subjects usually decreases the final visual quality; we observe significantly more failure cases when generating five or more subjects. We will include this in the revision. Several recent works [1, 2, 3] point out that Stable Diffusion struggles to generate multiple subjects. A potential way to ease this issue may be to apply our method to a better text-conditioned diffusion model. #### **Q2: About avoiding predefining layout.** **A2:** Thanks. Generating layouts from textual descriptions automatically [4, 5, 6, 7] makes the entire process more convenient, and we will also include relevant discussions in the revision. #### **Q3: About showing quantitative results for the ablation study.** **A3:** Thanks. Here we present the quantitative results of the ablation experiments for the two operations in our layout guidance: strengthening and weakening attention activations. The visualizations of these results correspond to Fig. 6 in our paper. 
| Average CLIP Image Similarity | Ours | Only Strength | Only Weaken | | :-: | :-: |:-: |:-: | | Single-subject | 0.7949 | 0.8081 | 0.7988 | | Two-subject | 0.7075 | 0.6736 | 0.6568 | In addition, we conducted additional ablation experiments on the text preservation loss weight $\lambda$ we designed, and the quantitative results are as follows: | Average CLIP Image Similarity | $\lambda=0$ | $\lambda=0.5$ | $\lambda=1.0$ | $\lambda=2.0$ | | :-: | :-: |:-: |:-: | :-: | | Single-subject | 0.7900 | 0.7881 | **0.7949** | 0.7886 | | Two-subject | 0.6196 | 0.6719 | **0.7075** | 0.6631 | As shown in the table, completely omitting the text-embedding preservation loss leads to a collapse in the performance of multi-concept customization. In practical implementation, we selected a loss weight of 1.0 for this regularization term. #### **Q4: About the baseline used in Figure 4.** **A4:** Thanks. Actually, we choose Cones as a baseline. In Fig. R11 **in the newly added PDF file**, we conduct a new experiment to compare the generation capabilities of DreamBooth, Cones, and our method in challenging cases. #### **Q5: Discussion about layout prior.** **A5:** In practice, users can simply select the customized subjects in the text prompt by clicking on them and then place and resize the bounding boxes accordingly. We show some examples of the bounding boxes we used in Fig. R3 **in the newly added PDF file** and will add all the bounding boxes we used in the revision. #### **Q6: About how to determine guidance steps.** **A6:** As shown in the two columns on the right in Figure 6 in our paper, for simple combinations like "mug + teapot," satisfactory results could be achieved with 30 steps of guidance. However, for more challenging combinations such as "cat + dog," 50 steps of guidance were required to achieve better attribute binding results. [1] Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis. Feng *et al.* ICLR'23. 
[2] Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models. Chefer *et al.* SIGGRAPH'23. [3] MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation. Bar-Tal *et al.* ICML'23. [4] LayoutGPT: Compositional Visual Planning and Generation with Large Language Models. Feng *et al.* arXiv preprint arXiv:2305.15393. [5] Grounded Text-to-Image Synthesis with Attention Refocusing. Phung *et al.* arXiv preprint arXiv:2306.05427. [6] VisorGPT: Learning Visual Prior via Generative Pre-Training. Xie *et al.* arXiv preprint arXiv:2305.13777. [7] LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models. Lian *et al.* arXiv preprint arXiv:2305.13655. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' clarifications. My concerns have been addressed. I will increase my score to 6. Please include the additional evaluations and discussions in the revised version. --- Reply to Comment 1.1.1: Comment: Dear reviewer oYd1, thank you very much for your affirmation. Your suggestions regarding the ablation experiments and predefining the layout through an LLM contribute to enhancing the quality of our work. We will discuss and incorporate them in the revision. Once again, thank you for your patient response!
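The residual token embedding highlighted in this review thread can be illustrated with a toy gradient-descent sketch: only the subject token's embedding moves (base + learned residual), while a preservation term anchors every other token to its frozen value. The vocabulary size, target embedding, loss weight, and learning rate below are our own illustrative choices, not the paper's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
frozen = rng.normal(size=(10, 4))   # frozen text-embedding table (toy)
E = frozen.copy()                   # fine-tuned copy
subject = 3                         # index of the subject token
target = frozen[subject] + np.array([1.0, -0.5, 0.0, 0.25])  # desired embedding
lam, lr = 1.0, 0.1                  # preservation weight, step size
others = np.ones(10, bool); others[subject] = False

for _ in range(1000):
    grad = np.zeros_like(E)
    grad[subject] = 2 * (E[subject] - target)               # subject term
    grad[others] += lam * 2 * (E[others] - frozen[others])  # preservation term
    E -= lr * grad

residual = E[subject] - frozen[subject]        # the learned residual embedding
assert np.allclose(E[others], frozen[others])  # other tokens stay preserved
assert np.allclose(E[subject], target, atol=1e-3)
```

In this simplified setting the preservation term keeps the non-subject rows exactly frozen, which is the property the text-embedding-preservation loss is designed to encourage when multiple customized subjects are later combined.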
Summary: This paper proposes a method to achieve customizable image generation with multiple subjects. Subject generation is achieved by learning a residual prediction for the subject tokens. To make multiple subject tokens combinable, it proposes a text-embedding preservation loss that keeps the embedding of the category from the fine-tuned E and the frozen E the same except for the subject. To ensure the subjects do not conflict and to control the spatial layout, it additionally uses bounding boxes to modulate the cross-attention when generating the outputs. The proposed method is compared with baselines such as DreamBooth, CustomDiffusion, and Cones on the subjects previous works used. The results show promising customizable image generation with up to four or even more subjects. Strengths: - Proposes an efficient method to handle multiple instances of customization by learning a residual token from the base category. This makes the customization succeed without fine-tuning a large number of parameters. - The text-embedding preservation loss is proposed to make multiple subject tokens combinable. I really like this simple yet effective design of the loss. - Using layout to guide the generation process makes sense: it provides fine-grained control and avoids conflicts between subjects. - The paper is easy to follow and the presentation of the results is clear. Weaknesses: - Some important aspects of the method could be elaborated more clearly. For instance, how is the layout taken as input by the model? Is it also an input in the denoising steps, or is it just used to find the corresponding area in the attention maps? Why do we need $\eta(t)$ in Eq. 5? Are layouts used during the pretraining of the subject tokens? - Although the method has been compared with previous works, some ablation studies and discussion of the limitations are missing. For instance, the quantitative evaluation of the ablation studies. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - An ablation study for the proposed components, such as the text-embedding-preservation loss and the effects of layout-guided generation, is missing. - What are the limitations of the proposed method besides scaling up to even more subjects? - Typo: Line 245: Verift -> Verify. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors did not address the limitations and potential negative societal impact of their work. Subject generation can produce fake content and might be dangerous if not used carefully. The authors should discuss this aspect of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Author Response to Reviewer msmN** We greatly appreciate all of your valuable suggestions, which play a pivotal role in enhancing the quality of our paper. Below we address all your concerns. #### **Q1: More details about the layout guidance method.** **A1:** During inference, the layout provided by the user is not directly input into the model. The core of our proposed layout guidance method lies in editing the activation matrices of the cross-attention layers in each iteration of denoising to guide the model in generating the desired image. The layout serves as a reference for this editing operation. In Equation 5, $\eta(t)$ is a concave function designed as a noise scheduler referencing the pre-trained model. It gradually decreases as $t$ decreases (from $T$ to 1). Its role is to weaken the intensity of layout guidance during the sampling process as time step $t$ decreases. In the sampling process of the diffusion model, as $t$ decreases from $T$ to 1, the inference process can be understood as transitioning from determining high-level semantics to determining low-level semantics. The purpose of gradually decreasing $\eta$ with $t$ is to prevent excessive guidance from the layout when $t$ approaches 0, which could result in noticeable disharmonious artifacts in the generated images. When training the residual token embedding for a specific subject, the layout is not required. Only a set of multi-view reference images of this subject is needed. We will add a clear description of these details in the revision. #### **Q2: About supplementing the ablation study.** **A2:** Thanks. Here, we present the quantitative results of three ablation studies. Firstly, we compare the approach of directly learning a text embedding, as in Textual Inversion, with our design of learning residuals. The table below suggests that, without the residual design, the overall performance degrades given varying contexts. 
| Average CLIP Image Similarity | Ours | Learn Directly | | :-: | :-: |:-: | | Single-subject | **0.7949** | 0.6953 | | Two-subject | **0.7075** | 0.6092 | Secondly, we present the quantitative results of the ablation experiments for the two operations in our layout guidance: strengthening and weakening attention activations. The visualizations of these results correspond to Fig. 6 in our paper. | Average CLIP Image Similarity | Ours | Only Strength | Only Weaken | | :-: | :-: |:-: |:-: | | Single-subject | 0.7949 | **0.8081** | 0.7988 | | Two-subject | **0.7075** | 0.6736 | 0.6568 | Finally, we conducted additional ablation experiments on the text-preservation loss weight $\lambda$ we designed, and the quantitative results are as follows: | Average CLIP Image Similarity | $\lambda=0$ | $\lambda=0.5$ | $\lambda=1.0$ | $\lambda=2.0$ | | :-: | :-: |:-: |:-: | :-: | | Single-subject | 0.7900 | 0.7881 | **0.7949** | 0.7886 | | Two-subject | 0.6196 | 0.6719 | **0.7075** | 0.6631 | #### **Q3: About limitations of the proposed method.** **A3:** Besides scaling up to more subjects, our method cannot strengthen the interaction relationships between multiple subjects. The difficulty of the pre-trained model in generating complex interaction relationships is inherited by our method. This issue is shared by all existing customized methods. As shown in Fig. R9 **in the newly added PDF file**, in the case of straightforward interaction relationships, such as "sit" and "wear," both our approach and the pre-trained model achieve satisfactory generated results. However, for more intricate interaction relationships, such as "handshake," the performance of both our approach and the pre-trained model falls short of expectations. --- Rebuttal Comment 1.1: Comment: Thanks for the additional ablation study on the model. I have read the responses and all my questions are resolved. I am willing to adjust the rating to 6. 
--- Reply to Comment 1.1.1: Comment: Dear reviewer msmN, thank you for your valuable suggestions on our work! Your suggestions regarding the ablation study are highly significant to our research, and we will incorporate the relevant experiments in the revision. We sincerely appreciate your recognition of our work and your willingness to raise the rating!
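The layout guidance described in A1 of this thread (editing cross-attention activations inside and outside a user-provided box, with a concave schedule $\eta(t)$ that fades as $t$ goes from $T$ to 1) can be sketched as below. The square-root schedule, the multiplicative edit, and all constants are our own illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def eta(t, T, eta_max=0.5):
    """Concave guidance schedule: strong when t is near T (high-level
    semantics), fading as t approaches 1. The sqrt form and eta_max are
    our own choices, not the paper's exact scheduler."""
    return eta_max * np.sqrt(t / T)

def edit_attention(attn, box_mask, t, T):
    """Strengthen a subject token's cross-attention activations inside
    its layout box and weaken them outside (illustrative sketch only)."""
    s = eta(t, T)
    return attn * (1 + s * box_mask) * (1 - s * (1 - box_mask))

T = 50
attn = np.full((8, 8), 0.5)                    # toy attention map for one token
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0  # user-provided bounding box

early = edit_attention(attn, mask, t=T, T=T)   # strong guidance early
late = edit_attention(attn, mask, t=1, T=T)    # guidance fades late
assert early[3, 3] > attn[3, 3] > early[0, 0]  # boosted inside, damped outside
assert abs(late[0, 0] - attn[0, 0]) < abs(early[0, 0] - attn[0, 0])
```

The two assertions mirror the rebuttal's two claims: the edit both strengthens in-box and weakens out-of-box activations, and its effect shrinks as $t$ approaches 0.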
Summary: This paper introduces a novel method for multi-subject, subject-driven text-to-image generation. The authors develop an efficient system that embeds single-subject information effectively and seamlessly combines these separate subject embeddings to generate the final multi-subject image. The key concept is to learn a residual on top of the CLIP text-embedding for the subject token, thereby enriching it with subject-specific information. This residual is optimized so that it encapsulates specific information about the single subject without imposing further restrictions on the images. Another innovation is a test-time cross-attention manipulation technique. Notably, most previous personalization approaches experience object neglect or attribute confusion as the number of subjects increases. This paper addresses these issues by strengthening and weakening certain regions of the cross-attention maps, guided by user-provided layouts. Both quantitative and qualitative results demonstrate superior performance compared to the DreamBooth, Custom-Diffusion, and Cones baselines. Strengths: S1: The paper innovatively designs a preservation loss to optimize the text-embedding offset, enhancing class-information preservation and reducing overfitting. S2: The proposed layout manipulation effectively addresses object neglect and attribute confusion, enabling better scalability to larger numbers of subjects. S3: The method significantly reduces storage and computational costs from exponential to linear, enhancing accessibility for multi-subject image generation. Weaknesses: W1: Despite the novel use of text embedding, the paper exhibits a limitation in this area. Without model fine-tuning, the images don't preserve subject detail as effectively as one might hope (e.g., see the white dog in Figure 1), which would be problematic when the subject details are important, e.g., when the subject is a human (which is not tested in this paper). 
W2: The use of layout as guidance may also present some constraints. It relies heavily on user inputs, potentially limiting the system's flexibility and automation. Furthermore, it appears to be primarily suited to group photos, and may struggle with more diverse actions that involve intricate interactions. W3: Although the paper presents an effective approach to reduce computation and storage costs, the process is still relatively expensive, particularly when compared to methods such as tuning an encoder [1] or ELITE [2]. The requirement for fine-tuning limits the method's accessibility and widespread adoption. W4: The paper's approach to the evaluation of multi-subject generation raises some concerns. The authors mentioned that "For multi-subject generation, we calculate the image similarity of the generated images and each target subject separately and finally calculate the mean value." A potentially more meaningful approach could be to perform object detection first, matching the resulting detections with the reference subjects. Without this, the similarity between single-subject and multi-subject images doesn't seem very meaningful. [1] Gal, Rinon, et al. "Designing an encoder for fast personalization of text-to-image models." Siggraph 2023. [2] Wei, Yuxiang, et al. "Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation." arXiv preprint arXiv:2302.13848 (2023). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Here are a few minor comments and suggestions. 1. line 64 over -> out 2. eDiff-I [1] uses a similar cross-attention manipulation formulation for layout-to-image generation. See Section 4.3 of the paper. I suggest the authors add some discussion. 3. Figure 4 mismatches the text description. Which baseline is used here? DreamBooth or Cones? [1] Balaji, Yogesh, et al. "ediffi: Text-to-image diffusion models with an ensemble of expert denoisers." arXiv preprint arXiv:2211.01324 (2022). 
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors partially address the limitations. Please find other suggestions in the weakness and question section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Author Response to Reviewer Nfe2** We greatly appreciate all of your valuable suggestions, which play a pivotal role in enhancing the quality of our paper. Below we address all your concerns. #### **Q1: Ability to maintain subject detail.** **A1:** Thank you for the questions raised by the reviewer. As shown in Fig. R8 **in the newly added PDF file**, we show the generated results for the subjects robot and human, both of which attain comparable results. However, for subjects with a large number of intricate details, there is indeed a gap compared to model-tuning methods. Finding better conditions to enhance the preservation of subject details is a promising direction for future improvement. However, for multi-subject generation, existing model-tuning methods suffer from high training costs, subject disappearance, and attribute confusion, while our method exhibits superior performance. #### **Q2: Constraints of layout guidance.** **A2:** Actually, it is very easy to obtain a bounding box. Users can simply select the customized subjects in the text prompt by clicking on them and then place and resize the bounding boxes accordingly. We do not make any corrections to the attention maps corresponding to relational words; thus, our ability to represent the interactions between subjects relies entirely on the pre-trained model. As shown in Fig. R9 **in the newly added PDF file**, in the case of straightforward interaction relationships, such as "sit" and "wear," both our approach and the pre-trained model achieve satisfactory generated results. However, for more intricate interaction relationships, such as "handshake," the performance of both our approach and the pre-trained model falls short of expectations. Improving the representation of interaction relationships between subjects is a common challenge faced by all existing customized generation methods. 
Currently, there are also some recent works [1, 2] exploring better ways to represent interaction relationships. #### **Q3: Computation comparisons with fast customized methods.** **A3:** We are very grateful for the reviewer's suggestion, and we will add a discussion in the revision. Currently, some works [3, 4] focus on achieving faster customized generation by pretraining an image encoder and using it to encode a reference image during generation. However, these methods share some common issues: - They require collecting data in advance to train the encoder, which can limit their generalization [1]. - We compare our method with ELITE in Fig. R10 **in the newly added PDF file**. Compared to methods that fine-tune for each individual subject, these approaches may exhibit difficulty in preserving fine details. - In addition, these methods can only encode one image at a time; they cannot customize multiple subjects simultaneously. #### **Q4: Quantitative evaluation with a detection-based metric.** **A4:** Great point! We sincerely appreciate the valuable suggestion. Our evaluation metric for multi-subject generation is inherited from Custom Diffusion and Cones. However, we firmly believe that the evaluation metric you mentioned is more reliable. Therefore, we conduct an evaluation and show the results in the table below. |Subjects | DreamBooth |Custom Diffusion|Cones|Ours| | :-: |:-: |:-: |:-: |:-: | | 2| 0.7301 $\pm$ 0.0054 |0.7238 $\pm$ 0.0023 |0.7591 $\pm$ 0.0013 | **0.8107 $\pm$ 0.0004** | | 3| 0.6981 $\pm$ 0.0101 |0.7150 $\pm$ 0.0086 |0.7276 $\pm$ 0.0072 | **0.7752 $\pm$ 0.0031** | | 4| 0.6312 $\pm$ 0.0183 |0.6387 $\pm$ 0.0109 |0.6771 $\pm$ 0.0089 | **0.6987 $\pm$ 0.0042** | During the evaluation process, we apply GLIP [5] to detect the corresponding subjects in the generated images and crop the images accordingly. Then we calculate the CLIP image similarity for each customized concept and take the average. 
The results indicate that our method outperforms the baselines in terms of CLIP image similarity. Additionally, the evaluation shows that our method has a lower variance, indicating that it can generate all the required subjects in a more stable and consistent manner. #### **Q5: Discussion about the difference from eDiff-I.** **A5:** Different from existing methods [6] that strengthen the signal of the target subject within the layout-indicated area of the cross-attention map, we also propose to **weaken** the signal of irrelevant subjects in the same area. Such a design helps alleviate the issue of attribute mixing across different subjects, especially as the number of subjects increases. Fig. 6 in the submitted manuscript and the table below demonstrate the effectiveness of our weakening design both qualitatively and quantitatively. | CLIP Image Similarity | Ours | Only Strength | Only Weaken | | :-: | :-: |:-: |:-: | | Single-subject | 0.7949 | **0.8081** | 0.7988 | | Two-subject | **0.7075** | 0.6736 | 0.6568 | | Two-subject (w/ Detection) | **0.8107 $\pm$ 0.0004** | 0.7508 $\pm$ 0.004 | 0.7993 $\pm$ 0.0006 | #### **Q6: The baseline used in Figure 4.** **A6:** We sincerely apologize for our oversight. Actually, we choose Cones as the baseline. In Fig. R11 **in the newly added PDF file**, we conduct a new experiment to compare the generation capabilities of DreamBooth, Cones, and our method in challenging cases. [1] ReVersion: Diffusion-Based Relation Inversion from Images. Huang *et al.* arXiv preprint arXiv:2303.13495. [2] ProSpect: Expanded Conditioning for the Personalization of Attribute-aware Image Generation. Zhang *et al.* arXiv preprint arXiv:2305.16225. [3] Designing an encoder for fast personalization of text-to-image models. Gal *et al.* SIGGRAPH'23. [4] Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation. Wei *et al.* arXiv preprint arXiv:2302.13848. [5] Grounded language-image pre-training. 
Li *et al.* CVPR'22. [6] eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers. Balaji *et al.* arXiv preprint arXiv:2211.01324. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the reply. The new evaluation and comparison with eDiff-I make the paper stronger. I increase my score to 6. Regarding points 1, 2, 3, the limitations still hold, so I suggest the authors discuss this further in the revised version. --- Reply to Comment 1.1.1: Comment: Dear reviewer Nfe2, first and foremost, we want to extend our gratitude to you for your meticulous review and valuable feedback on our manuscript, particularly with regard to the evaluation metric of multi-subject generation. These suggestions are very important for refining our work! We will discuss the limitations in the revision. Once again, we want to express our sincere appreciation for your time, effort, and expertise.
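The detection-based metric adopted in A4 of this thread (detect each subject with GLIP, crop, embed with CLIP, then average per-subject image similarity) can be outlined as follows. The real GLIP/CLIP calls are replaced by a plug-in embedding stage, and the one-to-one crop-to-reference matching is a simplifying assumption of ours.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (a stand-in for
    CLIP image similarity; a real run would embed crops with CLIP)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def detection_based_score(crop_embeddings, reference_embeddings):
    """Mean per-subject similarity between detected crops of a generated
    image and the reference subjects. We assume the detector (GLIP in
    the rebuttal) has already matched each crop to its reference."""
    return float(np.mean([cosine_similarity(c, r)
                          for c, r in zip(crop_embeddings, reference_embeddings)]))

# Toy check: identical embeddings score 1.0, orthogonal ones score 0.0.
refs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
assert abs(detection_based_score(refs, refs) - 1.0) < 1e-9
assert abs(detection_based_score([[1.0, 0.0]], [[0.0, 1.0]])) < 1e-9
```

Cropping before computing similarity is what makes the metric meaningful for multi-subject images: each reference is compared only against the region where its subject was detected, rather than against the whole composite image.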
Summary: This paper studies how to efficiently represent a particular subject as well as how to appropriately compose different subjects. The authors find that the text embedding of the subject token can serve as a simple yet effective representation. To capture the features of a specific subject, the authors propose a text-embedding-preservation loss to learn a residual token embedding. Based on the residual embeddings, the authors employ layout as spatial guidance for subject arrangement in the attention maps. Both qualitative and quantitative experimental results demonstrate the superiority of the proposed method. Strengths: a) The paper is clear and easy to read. b) The proposed method shows superiority in challenging cases, compared with existing methods. c) Through both quantitative and qualitative results, the authors conducted detailed experiments to analyze and demonstrate the effectiveness of the proposed method. Weaknesses: a) What is the difference between the residual token embedding and textual inversion? If a textual inversion is trained for each concept to obtain a token embedding, and then the layout is used for guidance, what are the results like? b) In Table 2, the authors calculate the complexity of different methods. Regarding multiple subjects, what are the training time results between different methods? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: a) What is the difference between the residual token embedding and textual inversion? If a textual inversion is trained for each concept to obtain a token embedding, and then the layout is used for guidance, what are the results like? b) In Table 2, the authors calculate the complexity of different methods. Regarding multiple subjects, what are the training time results between different methods?
c) From the second row of figure 4, the color of the white puppy does not seem to have been well maintained. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Please refer to the weaknesses part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Author Response to Reviewer QhyH** We sincerely appreciate the affirmation from the reviewer for our work. It serves as a strong motivation for us! Below we address your concerns separately. #### **Q1: About difference between residual token embedding and Textual Inversion.** **A1:** Textual Inversion finds a single word embedding (input of the text encoder) to represent a user-provided subject. Different from Textual Inversion, we find a residual token embedding for each subject and add these residuals to their corresponding token/text embedding (output of the text encoder). The "residual" actually refers to the ability to transform a subject of one category into the customized subject that we need, for example, a "random dog" into the specific "customized dog". As shown in Fig. R6 **in the newly added PDF file**, we train the model using the method from Textual Inversion but learn a text embedding (output of the text encoder) and apply our layout guidance approach. We can see that simply employing Textual Inversion to learn a customized text embedding does not adequately fulfill the customization requirements. This approach fails to fully capture all the features of the reference subject, especially when dealing with multi-subject customization. Even with the utilization of our layout guidance method, better results cannot be achieved. #### **Q2: Training time results between different methods.** **A2:** We present the training time of different methods for single-subject generation in the table below. All experiments are completed on a single 80G A100. | Method | Textual inversion |DreamBooth |Custom diffusion|Cones|Ours| | :-: | :-: |:-: |:-: |:-: |:-: | | Training time| 30 minutes |15 minutes |10 minutes |10 minutes |20 minutes | It is important to note that our approach utilizes learned single-subject residual token embeddings, enabling seamless combinations without the need for retraining.
This helps us avoid the exponential training costs associated with other methods. In contrast, other methods require additional storage space and training time for each new combination of subjects, and their training time increases linearly with the number of customized subjects. #### **Q3: Color of white puppy.** **A3:** Our training data is sourced from the official DreamBooth dataset. In Fig. R7 **in the newly added PDF file**, we present the training dataset for the "white puppy". This dataset contains instances of blurred and poorly lit images, which, to some extent, influence the generated results. --- Rebuttal Comment 1.1: Title: Official Comment by QhyH Comment: I would like to express my gratitude for the diligent efforts made by the authors in addressing my questions. The authors have addressed my concerns, and I will maintain my score unchanged. --- Reply to Comment 1.1.1: Comment: Thanks for your valuable suggestions and feedback! We're also glad that we've addressed all your concerns. Lastly, we would like to express our gratitude for your time and insights.
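The residual-embedding idea discussed in A1 above, learning a shift on top of a base word's text embedding rather than optimizing the embedding directly, can be sketched with a toy gradient-descent loop (all vectors, loss weights, and the target feature here are illustrative stand-ins, not the authors' actual training code):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64
base = rng.normal(size=dim)           # text embedding of the base word, e.g. "dog"
target = base + rng.normal(size=dim)  # toy stand-in for the customized subject's feature
residual = np.zeros(dim)              # the residual token embedding to be learned

lr, lam = 0.1, 0.01  # lam: weight of a toy "preservation" penalty keeping the residual small
for _ in range(200):
    custom = base + residual
    # gradient of ||custom - target||^2 + lam * ||residual||^2 w.r.t. residual
    grad = 2 * (custom - residual * 0 - target) + 2 * lam * residual
    residual -= lr * grad

# The learned residual shifts "dog" toward the customized subject,
# while the penalty keeps it a small, composable offset.
err = np.linalg.norm(base + residual - target)
```

Because only the residual is stored per subject, separately learned residuals can later be combined without retraining, which is the source of the storage and training-time savings claimed above.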
Rebuttal 1: Rebuttal: ## Author Response to All: Dear reviewers, We thank all reviewers for their time and efforts in reviewing our paper. These constructive reviews bring multiple improvements to our manuscript. We are encouraged that the reviewers appreciate our method, including * innovative designs [Reviewer Nfe2] * a simple and effective method [Reviewer oYd1] * outperforming prior methods both quantitatively and qualitatively [Reviewer X4Jp, QhyH] * well written and easy to follow [Reviewer X4Jp, QhyH, msmN] * efficient storage and computational cost [Reviewer Nfe2, msmN] We have also made diligent efforts to address all the raised concerns point by point. In this rebuttal, we have incorporated some new figures to more effectively address the concerns. Kindly review the newly uploaded one-page PDF. * Figure R1 compares our method with Paint by Example. Our method can better preserve the identity of all customizable objects. [Reviewer X4Jp, Q2] * Figure R2 gives generated images using a specific semantic mask. We use a more specific semantic mask as a prior to get more fine-grained customizable objects. [Reviewer X4Jp, Q3] * Figure R3 gives generated results using overlapping boxes. [Reviewer X4Jp, Q3; oYd1, Q5] * Figure R4 gives generated results with inconsistent layouts and prompts. [Reviewer X4Jp, Q4] * Figure R5 gives generated results with more customized subjects. [Reviewer X4Jp, Q5; oYd1, Q1] * Figure R6 shows the effect of combining residual token embedding with textual inversion. [Reviewer QhyH, Q1] * Figure R7 includes examples of the white puppy. [Reviewer QhyH, Q3] * Figure R8 shows generated results of a robot and a human. [Reviewer Nfe2, Q1] * Figure R9 shows generated results with interactions. [Reviewer Nfe2, Q2; msmN, Q3] * Figure R10 compares our method with ELITE. [Reviewer Nfe2, Q3] * Figure R11 compares our method with DreamBooth and Cones on some challenging cases.
[Reviewer Nfe2, Q6] We are open to discussions and addressing any issues from reviewers. Your constructive comments can further help us to improve our method. Sincerely yours, Authors Pdf: /pdf/b6eb263584fce420fe41aec00a2d5d36292af8d2.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper presents a new method for generating new images of any combination containing given objects. It combines individually learned single-subject residuals for multi-subject generation without retraining. The authors proposed to use text-based embeddings to represent individual objects. Once a residual token is learned, they can add these residuals to the embedding and adjust the attention maps based on a given layout. The proposed method outperforms other existing methods in all multi-subject customized tasks, especially in three-subject and four-subject generation. Strengths: This is a timely paper that works on an interesting and challenging problem of controllable image synthesis based on the diffusion model. The authors proposed to compose multiple subjects by leveraging layout guidance. Such a prior is simple and intuitive to use. The paper is well written and easy to follow. They performed extensive evaluation/ablation studies and outperform other existing methods both quantitatively and qualitatively. They also conducted a user study to further evaluate the performance of their method. Weaknesses: This paper is a combination of many existing ideas, such as the text embedding vector from the zero-shot img2img paper, the text-embedding-preservation loss, and the cross-attention map. While the execution and presentation of these ideas are well done, I was hoping for a more original contribution and unique solution to the problem. Overall, despite this, the paper still demonstrates impressive results and meets the standard for acceptance. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Would the authors consider other image composition methods as baselines to this method? For example, the CVPR 2023 paper titled "ObjectStitch: Object Compositing with Diffusion Model"? 2.
Is there a way to apply more fine-grained control over the composition of multiple objects, for example, by applying a specific mask or changing the pose of each object in the new composition? Does the proposed method require non-overlapping boxes? It would be more informative to have examples of bounding box layouts on the side of each result. 3. What happens when the user-provided layouts are not consistent with the text description? Are there other potential failure cases for this method? Typo: Line 245 Verift -> Verify Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors address the limitations in the supplementary material, noting that their method is limited by the inherent capabilities of the base model. The paper also discusses the potential societal impact of user-specific image generation. I think they are valid. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Author Response to Reviewer X4Jp** Thank you for your positive comments and valuable feedback on our work! We are excited and encouraged by your support! Below we address your concerns separately. #### **Q1: About the main contributions.** **A1:** Thanks. As you have pointed out, using diffusion models for customizable image synthesis is currently a crowded field and hence many similar techniques have been proposed. Compared to existing studies, our approach enjoys two original designs. - Regarding **subject representation**, we propose to learn a **residual** on top of the text embedding of a "base word" (*e.g.*, from "dog" to the "customized dog"). Unlike prior works that directly optimize the word embedding for the customized subject, our residual design helps the customized subject blend well with various contexts. Our motivation is that, assuming the text encoder can already adequately encode "dog" with different surroundings, the learned embedding shift could harmoniously blend the "customized dog" with the same surroundings as well. The table below suggests that, without the residual design, the overall performance degrades given varying contexts. | Average CLIP Image Similarity | Ours | Learn Directly | | :-: | :-: |:-: | | Single-subject | 0.7949 | 0.6953 | | Two-subject | 0.7075 | 0.6092 | - Regarding **subject arrangement**, we introduce layout as a very abstract and easy-to-obtain prior to guide the generation process. Different from existing methods that strengthen the signal of the target subject within the layout-indicated area of the cross-attention map, we also propose to **weaken** the signal of irrelevant subjects in the same area. Such a design helps alleviate the issue of attribute mixing across different subjects, especially as the number of subjects increases. Fig. 6 of the submitted manuscript and the table below demonstrate the effectiveness of our weakening design both qualitatively and quantitatively.
| Average CLIP Image Similarity | Ours | Only Strengthen | Only Weaken | | :-: | :-: |:-: |:-: | | Single-subject | 0.7949 | 0.8081 | 0.7988 | | Two-subject | 0.7075 | 0.6736 | 0.6568 | We will clearly explain our core contributions in the revision. #### **Q2: Comparison with image composition methods.** **A2:** Thanks. Since [1], mentioned by the reviewer, has no official open-source implementation, we select [2] as a baseline for image composition methods. To compare the effect of the baseline and our method more intuitively, we use Paint by Example to inpaint the image generated by our method, with the reference image as input. We show the visualization results in Fig. R1 **in the newly added PDF file**. The generated results of our method have better visual similarity, and we will provide more comparisons in the revision. Besides, we consider multi-subject customized generation to differ from image composition in two aspects. - The purpose of multi-subject customized generation is to implant all user-provided subjects into the diffusion model, so that the model can generate various images of all subjects, vividly guided by prompts. However, the purpose of image compositing is to insert an object into another image in a realistic way, without guidance from prompts. - Specifically, existing effective image composition methods require collecting data pairs and training an image encoder. During inference, they utilize the reference image as a condition to generate the image. In addition, to compose multiple subjects with image composition methods, multiple iterations of inference are required, especially when there is an overlapping relationship among subjects. For example, in the case "a cat wearing sunglasses sitting on a chair", inpainting "chair", "cat", and "sunglasses" in turn is necessary. #### **Q3: More fine-grained control.** **A3:** Users actually have the flexibility to input a more specific mask, enabling them to achieve fine-grained control.
In the paper, we choose bounding boxes due to their ease of acquisition. Experimental results in Fig. R2 **in the newly added PDF file** demonstrate that our method can use a specific mask to guide sampling. Our method supports overlapping boxes, and we demonstrate the bounding boxes we adopted in Fig. R3 **in the newly added PDF file**. Specifically, the bounding boxes for the glasses and hat overlap with the bounding box for the dog. We will add more examples of the bounding boxes we used in the revision. #### **Q4: More failure cases.** **A4:** We add more failure cases **in the newly added PDF file**. Specifically, in Fig. R4 we show a case where the user-provided layout contradicts the text description. In Fig. R5, we show that our method is limited by the performance of the pretrained model when it comes to customizing the generation of more than 6 subjects. When dealing with 7 concepts, the performance of the pretrained model deteriorates. In our generated results, details of certain subjects are not fully preserved. With the generation of 9 subjects, the combination of the pretrained model and our guidance method contributes to some improvement in the generation quality. However, at this point, both the pretrained model and our method exhibit some subject disappearance, and our method fails to maintain the identity of all subjects under such circumstances. [1] ObjectStitch: Object compositing with diffusion model. Song *et al.* CVPR'23. [2] Paint by example: Exemplar-based image editing with diffusion models. Yang *et al.* CVPR'23.
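The strengthen/weaken layout guidance discussed in this rebuttal can be sketched as a simple bias on cross-attention logits: inside each subject's box, the logit of that subject's token is raised and the logits of the other subjects' tokens are lowered, before the softmax. All names, shapes, and the bias value here are illustrative, not the authors' actual implementation:

```python
import numpy as np

def layout_guidance(logits, boxes, bias=2.0):
    """logits: (H, W, T) cross-attention logits for T subject tokens.
    boxes[t] = (y0, y1, x0, x1): bounding box for subject token t.
    Strengthens token t inside its own box and weakens the
    irrelevant subject tokens in the same area."""
    out = logits.copy()
    num_tokens = logits.shape[-1]
    for t, (y0, y1, x0, x1) in enumerate(boxes):
        out[y0:y1, x0:x1, t] += bias            # strengthen the target subject
        for s in range(num_tokens):
            if s != t:
                out[y0:y1, x0:x1, s] -= bias    # weaken irrelevant subjects
    return out

H = W = 8
logits = np.zeros((H, W, 2))
boxes = [(0, 4, 0, 4), (4, 8, 4, 8)]  # boxes for two subjects
guided = layout_guidance(logits, boxes)
```

The weakening term is what suppresses attribute mixing: each spatial region is explicitly discouraged from attending to the other subjects' tokens, not merely encouraged to attend to its own.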
null
null
null
null
null
null
Scattering Vision Transformer: Spectral Mixing Matters
Accept (poster)
Summary: The article is about the Scattering Vision Transformer, or SVT, which is a new adaptation of transformers for computer vision tasks. Its unique feature is the use of a spectral scattering network, which captures fine-grained information about an image and addresses the issue of information loss caused by down-sampling operations. The SVT achieves state-of-the-art performance on the ImageNet dataset and outperforms other transformers in other vision tasks. Strengths: 1. The use of the DT-CWT, which is both shift-invariant and free from aliasing, is a robust and reliable approach. 2. The combination of tensor multiplication for low-frequency components and Einstein multiplication for high-frequency components is a compelling and efficient technique. 3. The experimental results demonstrate great promise for the proposed method. Weaknesses: 1. To enhance comprehension, a brief illustration of the DT-CWT, through either descriptive language or illustrative figures, should be provided in the method or background section. 2. Not all layers of the vision transformer are scatter layers. An explanation and study should be provided. 3. The paper would benefit from the inclusion of mathematical theorem proofs. I found it difficult to understand the benefits of the DT-CWT, particularly the decoupling of low and high frequencies. The paper's methods read more like a technical report than a compelling story of motivation, possibly due to the omission of some logical or preliminary details. 4. To further evaluate the performance, provide comparisons of speed and over-fitting tests on ImageNet V2. 5. There are some typos in the paper. For example, on line 260, "figure ??" is not clear. Additionally, the official reference style uses brackets [xx], whereas parentheses are used in the text. Please ensure consistency and accuracy in all references. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1.
Could you provide an explanation for why the scatter layer is used in shallower layers while the traditional attention layer is used in deeper layers? 2. Is there any evidence, such as references, mathematical induction, or related research studies, to support the idea of using tensor multiplication for the low pass and Einstein multiplication for the high pass? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There appear to be no potential negative social impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments, which we believe are insightful and shall help in improving the quality of the final submission. 1. A background section on the DT-CWT [46] shall be added to the final version of the paper to help readers understand it, before explaining the SVT method details. This has now been included in **the main rebuttal**. 2. We compare the performance of SVT when the architecture changes from all attention layers (PVTv2 [58]) to all spectral layers (GFNet [44]), as well as a few spectral layers with the remaining attention layers (SVT, ours). We observe that combining spectral and attention layers boosts the performance compared to all-attention and all-spectral transformers, as shown in Table 2 of the main paper. 3. We provide a mathematical characterization of the DT-CWT [46] below and show how it helps in decoupling low- and high-frequency components. We will include this in the background section of the paper. 4. We have conducted an experiment to measure the latency of SVT and compare it with ViT and GFNet; the results are given in Table 2 of the attached PDF, where we compare latency, FLOPS, number of parameters, and reconstruction loss. Table 2 shows the latency (in milliseconds) of SVT compared with convolution-type networks and attention-type, pool-type, MLP-mixer-type, and spectral-type transformer networks. We report latency per sample on an A100 GPU. 5. We have also conducted an experiment to test robustness by using the ImageNet-C dataset, the results of which are reproduced in Table 7 of the attached PDF. We also use an SVT pre-trained on the ImageNet-1K dataset and use the ImageNet-P dataset to measure the robustness of SVT compared with various transformers such as ViT-B and MLP-Mixer, as shown in Table 7 of the attached PDF. The table compares accuracy and mCE score on the ImageNet-C dataset.
SVT-H-B has a lower mCE score compared to other transformers. 6. Thanks for pointing out the typos in the paper; we shall ensure a thorough revision of the paper and make sure to address all language issues. 7. The ablation study was conducted to show that initial scatter layers followed by attention in deeper layers are more beneficial than having the scatter layers later and the attention layers initially (SVT-H-S-Inverse). We also compare transformer models based on alternating attention and scatter layers (SVT-H-S-Alternate). The results are documented in Table 3 of the attached PDF. From all these combinations we observe that initial scatter layers followed by attention in deeper layers are the most beneficial. 8. The use of tensor multiplication for the low pass and Einstein multiplication for the high pass is our contribution. It must be noted that the low-frequency components contain the energy of the signal, which requires all the frequency components to provide energy compaction, while the high-frequency components can be represented by only a few components, which can be achieved using Einstein multiplication. We have evaluated this empirically and shown in Table 7 of the main paper that SVT_{TTEE} performs best compared to the other alternatives. We found that using Einstein multiplication in channel and token mixing makes the transformer architecture more efficient in terms of FLOPS and number of parameters, without compromising on accuracy. We have also revised the contributions section of the camera-ready paper to reflect this. ## Mathematical Formulation of DTCWT \begin{equation*} x(t)= \sum_{n=-\infty}^{\infty}c(n)\phi(t-n) +\sum_{j=0}^{\infty}\sum_{n=-\infty}^{\infty}d(j, n)2^{j/2}\psi(2^{j}t-n). \end{equation*} where $c(n)$ are the scaling coefficients and $d(j,n)$ are the wavelet coefficients. \begin{equation*} c(n) =\int_{-\infty}^{\infty}x(t)\phi(t-n)dt, \quad d(j, n) =2^{j/2}\int_{-\infty}^{\infty}x(t)\psi(2^{j}t-n)dt.
\end{equation*} The complex-valued wavelet is given by \begin{equation*} \psi_{\rm c}(t)=\psi_{\rm r}(t)+{\rm j}\psi_{\rm i}(t), \end{equation*} where $\psi_{\rm r}(t)$ is real and even and ${\rm j}\psi_{\rm i}(t)$ is imaginary and odd. The complex coefficients are \begin{equation*} d_{\rm c}(j, n)=d_{\rm r}(j, n)+{\rm j}\ d_{\rm i}(j, n), \end{equation*} with magnitude and phase \begin{equation*} \vert d_{\rm c}(j, n)\vert =\sqrt{[d_{\rm r}(j,n)]^{2}+[d_{\rm i}(j,n)]^{2}}, \quad \angle d_{\rm c}(j,n)= \arctan \left({d_{\rm i}(j,n)\over d_{\rm r}(j,n)}\right). \end{equation*} Let $h_0(n), h_1(n)$ denote the low-pass and high-pass filters in the upper band, while $g_0(n), g_1(n)$ denote the same for the lower band. The wavelets corresponding to the upper and lower bands are denoted by $\psi_h(t), \psi_g(t)$. The filters are designed to form the complex wavelet by satisfying the Perfect Reconstruction (PR) conditions. The complex wavelet can be represented as $\psi(t):= \psi_h(t)+{\rm j}\psi_g(t)$, where $\psi_g(t)$ is approximately the Hilbert transform of $\psi_h(t)$, i.e., $\psi_g(t) \approx \mathcal{H}\{\psi_h(t)\}$. The two-scale relations are \begin{equation*} \psi_{h}(t)=\sqrt{2}\sum_{n}h_{1}(n)\phi_{h}(2t-n), \quad \phi_{h}(t)=\sqrt{2}\sum_{n}h_{0}(n)\phi_{h}(2t-n). \end{equation*} The two low-pass filters should satisfy a very simple property: one of them should be approximately a half-sample shift of the other: $g_{0}(n)\approx h_{0}(n-0.5) \Rightarrow \psi_{g}(t)\approx {\cal H}\{\psi_{h}(t)\}$. Since the filters are real, no complex arithmetic is required for implementing the DTCWT. It is just two times more expensive in 1-D. It is also easy to invert, as the two separate DWTs can be inverted.
The final Dual-Tree CWT can be designed using the following steps: - Approximate half-sample delay property - PR (orthogonal or biorthogonal) - Finite support (FIR filters) - Vanishing moments/good stopband - Linear-phase filters: only the complex filter responses need to be linear-phase; this can be achieved by taking $g_0(n)=h_0(N−1−n)$ --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: The response has addressed my concerns, so I am updating my rating to "accept." --- Reply to Comment 1.1.1: Title: Replying to Reviewer ghvx Comment: We thank Reviewer ghvx for the insightful comments and shall revise the paper thoroughly to address the review comments.
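The magnitude and phase definitions in the DTCWT formulation above can be checked numerically against NumPy's native complex operations; a small editorial sketch (random values stand in for actual wavelet coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
d_r = rng.normal(size=16)  # real-tree (upper-band) wavelet coefficients
d_i = rng.normal(size=16)  # imaginary-tree (lower-band) wavelet coefficients

d_c = d_r + 1j * d_i                  # complex coefficients d_c = d_r + j d_i
magnitude = np.sqrt(d_r**2 + d_i**2)  # |d_c| per the formula above
# arctan2 resolves the quadrant, matching angle(d_c) for all sign combinations
phase = np.arctan2(d_i, d_r)
```

It is this complex magnitude that is approximately shift-invariant, which is the key advantage of the dual-tree construction over an ordinary DWT.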
Summary: This paper proposed to use the DTCWT in the transformer model in the early stages, with the motivation of saving computation cost without information loss. The proposed scattering module has a low-pass and a high-pass branch. Tensor multiplication and Einstein multiplication are applied to the LF and HF branches, respectively. The proposed module replaces the first 2 stages in hierarchical ViT models. The proposed SVT model achieved superior performance on image classification on the ImageNet-1K dataset, as well as in downstream transfer learning. Strengths: Replacing low-level transformer modules with the proposed DTCWT-based module achieves a better accuracy and latency trade-off. Applying the DTCWT to transformer models may not have been explored yet by prior work. Weaknesses: The DTCWT has been used in previous CNN-based deep learning methods, e.g., "Uses of Complex Wavelets in Deep Convolutional Neural Networks". Existing work already shows that replacing early transformer stages with convolutional layers could give a better accuracy and latency trade-off. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: How does SVT compare with combining ViT with a CNN, i.e., an architecture with a CNN in the early stages and a transformer in the late stages? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The proposed module is a highly hand-engineered component. Adding manually designed components to deep learning models may contradict the motivation of removing inductive bias from the model and building generic learning-based architectures.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **Comparing SVT with ViT with initial CNN layers:** We have conducted an experiment where the initial layers of a ViT are convolutional and the later layers are attention layers, to compare against SVT. The results are captured in Table 4 of the attached PDF, where we compare SVT with transformers having initial convolutional layers, such as CVT, CMT, and HorNet. Transformers with initial convolutional layers do not perform as well as those with initial scatter layers: scatter-layer-based transformers achieve better performance at a lower computation cost, as shown in Table 4. We also measure the latency of various transformers, as shown in Table 2, where we compare latency, FLOPS, number of parameters, and reconstruction loss. Table 2 shows the latency (in milliseconds) of SVT compared with convolution-type networks and attention-type, pool-type, MLP-type, and spectral-type transformer networks. We report latency per sample on an A100 GPU. 2. SVT is a generic recipe for componentizing the transformer architecture and efficiently implementing transformers with fewer parameters and lower computational complexity with the help of Einstein multiplication. So, this can be viewed as a simple and efficient learning-based transformer architecture with minimal inductive bias. We provide the source code for SVT in the paper and the reviewer is free to verify the above claim.
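The parameter savings from Einstein multiplication claimed above can be illustrated with a toy block-diagonal channel-mixing layer built on `np.einsum` (shapes and names are illustrative, not the actual SVT code): instead of a full $d \times d$ mixing matrix, the channels are split into blocks that are mixed independently.

```python
import numpy as np

n, blocks, k = 16, 8, 8    # 16 tokens; channels split into 8 blocks of size 8
d = blocks * k             # total channel dimension = 64

rng = np.random.default_rng(0)
x = rng.normal(size=(n, blocks, k))   # token features with blocked channels
w = rng.normal(size=(blocks, k, k))   # per-block mixing weights

# Einstein multiplication: each channel block is mixed independently.
y = np.einsum("nbk,bkl->nbl", x, w)

dense_params = d * d             # full dense mixing matrix: 4096 weights
einsum_params = blocks * k * k   # block-diagonal version: 512 weights
```

With these toy shapes the blocked mixer uses 8x fewer weights (and proportionally fewer FLOPS) than a dense layer over the same 64 channels, which is the kind of efficiency gain the rebuttal attributes to Einstein multiplication.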
Summary: This paper proposes to use Dual-Tree Complex Wavelet Transforms to decompose images into high- and low-frequency components in vision transformers. With this technique, it claims to address the problem of attention complexity without the loss of information seen in Fourier- or DWT-based transformers. This claim is based on the invertibility of the Dual-Tree Complex Wavelet Transform. This paper claims to achieve state-of-the-art image classification performance on the ImageNet dataset with a significant reduction in the number of parameters and FLOPS, and comparable results in instance segmentation. Qualitative and quantitative results are provided. Strengths: 1. The most attractive aspect of this paper is the usage of Dual-Tree Complex Wavelet Transforms. The reviewer appreciates that the authors noticed the information loss in previous Fourier- or DWT-based transformers, and proposed to adopt Dual-Tree Complex Wavelet Transforms to solve this problem. The transform is invertible (though also redundant) and hence does not lead to a loss. 2. The paper provides a lot of technical details, and also the code to reproduce the results. 3. Extensive experiments are conducted in the paper. Weaknesses: 1. Clarity of Technical Contribution The paper's presentation raises some concerns about clarity and comprehension. The key technical contribution of the work is obscured by convoluted explanations and distracting statements. For example, in L81-L93, at the beginning of the method section, the paper describes in detail something like "In Vanilla SVT, Given an image, we split it into patches of size 16*16. We use a linear projection layer to get the embedding feature for each patch ......" These sentences seem to depict unique contributions of the proposed method but, upon closer inspection, merely explain standard operations in the Vision Transformer (ViT). Similarly, lines 102-109 are bewildering, appearing repetitive and unclear about the authors' intended message.
Moreover, the paper repeatedly emphasizes "tensor multiplication" and "Einstein multiplication" across various sections. These appear to be mere implementation details, not substantial technical contributions. The manuscript's current style is closer to that of a technical report than an academic paper. The authors are strongly encouraged to (a) explicitly state the key contribution of this work in the rebuttal and method section, and (b) condense lines 81-126 into a succinct, high-level introduction of the method, ideally within 10 lines. 2. Claims need support Some of the claims in the paper, while potentially plausible, lack sufficient support/proof. For example, L52, "the ability to separate low-frequency and high-frequency components of an image is also important". In L100, "SVT has improved robustness compared to most other transformers, which will also be established in the performance studies." These could be true, but the authors did not prove that the robustness is improved. 3. Minors There are some minor typos/problems to be fixed. For example, L130, "Where" to "where". On page 7, some table captions are bold and some are not. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: Overall the reviewer thinks using the DTCWT in vision transformers is worth exploring, and this is why the reviewer still votes for borderline accept at this stage. If the authors cannot provide an appropriate explanation/discussion of their contribution in the rebuttal period, the reviewer would downvote the score. Besides, the reviewer feels some details provided by the main paper are not interesting/exciting, e.g., Table 1. For future work, the reviewer would suggest the authors explore more insightful directions. For instance, the DTCWT enjoys invertibility but has redundancy at the same time. How would the network deal with such redundancy?
Moreover, the reviewer would suggest providing more quantitative or qualitative comparisons to Fourier- or DWT-based Transformers to analyse how the invertibility of DTCWT helps vision transformers understand visual contents, instead of just reporting the final numbers for tasks like classification. ____________________________________________________ The author response addresses the main concerns. Under the assumption that the authors will revise the manuscript as they promised, the reviewer agreed to raise the score to weak accept. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments, which shall result in significant improvements to the quality of the final camera-ready version of the paper. 1. One of the claims of the paper is the ability of DTCWT [46] to separate the low-frequency and high-frequency components of the image. This has now been included in the background section of the paper and is captured in **the main rebuttal section**. 2. **Invertibility vs redundancy trade-off:** Regarding the invertibility vs redundancy trade-off, we conduct an experiment to show that invertibility helps in comprehending the image and not just in contributing to the performance. We pass an image through a raw DTCWT, apply an inverse DTCWT operation, and compute the reconstruction loss. We ran the experiment with various values of J, which represents the orientation in SVT. We observe that the reconstruction loss of the image reduces with increasing values of J, which clearly shows that as the orientations increase, SVT is able to comprehend the image better – the orientations are able to represent the high-order properties of the image, which benefits SVT. We have compared different spectral transforms such as the Fourier Transform, the Discrete Wavelet Transform, and DTCWT. We observe that the reconstruction loss is lower for DTCWT than for the other spectral transforms, as captured in Table 1 below. In Table 1 we measured the reconstruction loss (MSE) for FFT, DWT stages 1, 2, 3, and DTCWT stages 1, 2, 3. It shows that the MSE loss decreases as we increase the level of decomposition (J) and the order of selectivity. DTCWT has a lower MSE than DWT. Similarly, the PSNR value of DTCWT is higher than that of DWT and Fourier. The Peak Signal-to-Noise Ratio (PSNR) is a measure of the ratio between the maximum possible power of an image and the power of corrupting noise that affects its representation. It is defined via the MSE and is expressed in decibels (dB). 
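The MSE-to-PSNR relationship just described can be sketched as follows (a minimal numpy illustration; the 8-bit peak value of 255 is an assumption, as the rebuttal does not state the image bit depth):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB, defined via the MSE of the reconstruction."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images: perfect reconstruction
    return 10.0 * np.log10(peak ** 2 / mse)

# A lower reconstruction MSE yields a higher PSNR, i.e. a better reconstruction.
ref = np.full((4, 4), 100.0)
assert psnr(ref, ref + 1.0) > psnr(ref, ref + 10.0)
```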
The higher the PSNR, the better the quality of the reconstructed image. In summary, for a reconstructed image to be considered of high quality, it should have a low MSE and a high PSNR. 3. It must be noted that Fourier Transforms cannot perform low-pass and high-pass separation, as mentioned in the GFNet paper. GFNet always uses only tensor multiplication, which may be inefficient compared to Einstein multiplication, which is efficient and reduces the number of parameters and the computational complexity. Thus, while there is some redundancy as an offset, SVT lags neither in performance nor in computational complexity, and gains better representational power. This is shown in Table 2, where we compare latency, FLOPS, number of parameters, and reconstruction loss. Table 2 shows the latency (in milliseconds) of SVT compared with convolution-type networks, attention-type transformer networks, pool-type transformer networks, MLP-type transformer networks, and spectral-type transformer networks. We report latency per sample on an A100 GPU. We have also visualized the filter coefficients of all six orientations in Supplementary Figure 1. 4. **Robustness:** We also use an SVT pre-trained on the ImageNet-1K dataset and use the ImageNet-C dataset to measure the robustness of SVT compared with various transformers such as ViT-B and MLP-Mixer, as shown in Table 7 of the attached PDF. The table compares the accuracy and mCE score on the ImageNet-C dataset. SVT-H-B has a lower mCE score compared to the other transformers. 5. The quality of writing will be improved considerably in the final version, and we shall also include a mathematical formulation of DTCWT. ## Mathematical Formulation of DTCWT \begin{equation*} x(t)= \sum_{n=-\infty}^{\infty}c(n)\phi(t-n) +\sum_{j=0}^{\infty}\sum_{n=-\infty}^{\infty}d(j, n)2^{j/2}\psi(2^{j}t-n). \end{equation*} where $c(n)$ are the scaling coefficients and $d(j, n)$ are the wavelet coefficients. 
\begin{equation*} c(n) =\int_{-\infty}^{\infty}x(t)\phi(t-n)dt, \quad d(j, n) =2^{j/2}\int_{-\infty}^{\infty}x(t)\psi(2^{j}t-n)dt. \end{equation*} The complex-valued wavelet is given by \begin{equation*} \psi_{\rm c}(t)=\psi_{\rm r}(t)+{\rm j}\psi_{\rm i}(t), \end{equation*} where $\psi_{\rm r}(t)$ is real and even and ${\rm j}\psi_{\rm i}(t)$ is imaginary and odd. The complex wavelet coefficients are \begin{equation*} d_{\rm c}(j, n)=d_{\rm r}(j, n)+{\rm j}\ d_{\rm i}(j, n), \end{equation*} with magnitude and phase \begin{equation*} \vert d_{\rm c}(j, n)\vert =\sqrt{[d_{\rm r}(j,n)]^{2}+[d_{\rm i}(j,n)]^{2}}, \quad \angle d_{\rm c}(j,n)= \arctan \left({d_{\rm i}(j,n)\over d_{\rm r}(j,n)}\right). \end{equation*} Let $h_0(n), h_1(n)$ denote the low-pass and high-pass filters in the upper band, while $g_0(n), g_1(n)$ denote the same for the lower band. The wavelets corresponding to the upper band and lower band are denoted by $\psi_h(t), \psi_g(t)$. The filters are designed to obtain the complex wavelet by satisfying the Perfect Reconstruction (PR) conditions. The complex wavelet can be represented as $\psi(t):= \psi_h(t)+{\rm j}\psi_g(t)$, where $\psi_g(t)$ is approximately the Hilbert transform of $\psi_h(t)$, i.e. $\psi_g(t) \approx \mathcal{H}\{\psi_h(t)\}$. \begin{equation*} \psi_{h}(t)=\sqrt{2}\sum_{n}h_{1}(n)\phi_{h}(2t-n), \quad \phi_{h}(t)=\sqrt{2}\sum_{n}h_{0}(n)\phi_{h}(2t-n) \end{equation*} The two low-pass filters should satisfy a very simple property: one of them should be approximately a half-sample shift of the other, $g_{0}(n)\approx h_{0}(n-0.5) \Rightarrow \psi_{g}(t)\approx {\cal H}\{\psi_{h}(t)\}$. Since the filters are real, no complex arithmetic is required for implementing the DTCWT. It is just two times more expansive in 1-D. It is also easy to invert, as the two separate DWTs can be inverted. 
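Since each of the two real DWTs is individually invertible, the reconstruction-loss measurement used in point 2 above can be illustrated with a minimal single-level Haar DWT (a sketch under the assumption of a 1-D signal and Haar filters; the actual experiment uses the DTCWT filter banks on images):

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar analysis: split x into low-pass (scaling) and high-pass (wavelet) coefficients."""
    pairs = x.reshape(-1, 2)
    low = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    high = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return low, high

def haar_idwt(low, high):
    """Single-level Haar synthesis: exactly inverts haar_dwt (perfect reconstruction)."""
    out = np.empty(2 * low.size)
    out[0::2] = (low + high) / np.sqrt(2)
    out[1::2] = (low - high) / np.sqrt(2)
    return out

signal = np.array([4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 16.0, 18.0])
low, high = haar_dwt(signal)
reconstruction_loss = np.mean((signal - haar_idwt(low, high)) ** 2)
print(reconstruction_loss)  # ~0.0: the analysis/synthesis pair satisfies perfect reconstruction
```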
The final Dual-Tree CWT can be designed using the following steps: - Approximate half-sample delay property - PR (orthogonal or biorthogonal) - Finite support (FIR filters) - Vanishing moments/good stopband - Linear-phase filters: only the complex filter responses need to be linear-phase; this can be achieved by taking $g_0(n)=h_0(N−1−n)$ --- Rebuttal Comment 1.1: Comment: After carefully reviewing the rebuttal and considering feedback from other reviewers, I am inclined to raise the score. I hope the authors can address and refine the manuscript as recommended by the reviewers. The current writing is distracting. --- Reply to Comment 1.1.1: Title: Replying to Reviewer kC2p Comment: We thank Reviewer kC2p for the insightful comments and shall revise the paper thoroughly to address the review comments.
Summary: The paper proposes SVT, a novel vision transformer model that addresses the challenges of attention complexity and capturing fine-grained information in images. SVT utilizes a spectral scattering network and the Dual-Tree Complex Wavelet Transform (DTCWT) to decompose image features into low-frequency and high-frequency components. The paper also introduces an efficient feature mixing technique using Einstein multiplication for the high-frequency components and tensor multiplication for the low-frequency components. Experimental results demonstrate that SVT achieves state-of-the-art performance on the ImageNet dataset with reduced parameters and computational complexity. Strengths: 1. The proposed SVT model presents an innovative approach to addressing attention complexity and capturing fine-grained information in images. The use of spectral scattering and DTCWT decomposition enables efficient representation and separation of frequency components. 2. The feature mixing technique using Einstein multiplication is a novel contribution that efficiently combines token and channel features, leading to improved performance. 3. The experimental results on the ImageNet dataset and other vision tasks demonstrate the superiority of SVT compared to existing vision transformers, such as LiTv2 and iFormer. The significant reduction in parameters and computational complexity further enhances the practicality and scalability of SVT. Weaknesses: 1. The English writing in this paper needs to be carefully reviewed, as there are several grammar errors. For example, in line 79, "lesser" should be "less"; in line 83, "Given" should be in lower case, "given". And in Figure 4's illustration, some parts are missing. There are many more such errors in the paper's writing. 2. I find the approach in this paper to be interesting but overly tricky. As we have seen with LLMs, many highly technical improvements become less significant when supported by the large parameter size and data volume of such models. 
Moreover, in the experimental section, the performance improvement brought by SVT does not appear to be significant. From my perspective, simple and elegant techniques often yield more meaningful improvements compared to complex transformations. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In the experiments, why not compare your method with more popular CNN-based methods like Yolo and Faster-RCNN? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Author has presented some limitations of the proposed methods and future plan. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. 1. **Language:** Thanks for pointing this out – we shall undertake a thorough revision of the paper to address all language issues. 2. SVT is a generic recipe for componentizing the transformer architecture and efficiently implementing transformers with fewer parameters and lower computational complexity with the help of Einstein multiplication. So, SVT can be viewed as a simple and efficient transformer architecture, as opposed to a complex transformation. The essence of SVT is the tensor multiplication in the low-pass branch and the Einstein multiplication in the high-pass branch, which together yield an efficient transformer architecture. SVT leverages DTCWT to obtain the phase information of the high-frequency components, which is not possible with DWT. 3. We have conducted an experiment to compare SVT's performance on object detection tasks against Mask-RCNN as well as Faster-RCNN – the results are tabulated in Table 6 in the attached PDF. This table provides performance results on the COCO val2017 dataset for the downstream task. Here we compare Faster-RCNN with the Mask R-CNN 1x [21] method. We have reported the bounding box AP (\emph{i.e.}, $AP^b$) for evaluation. The $AP^b$ scores for SVT using Mask R-CNN are better than those of Faster-RCNN. We shall include a comparison with Yolo in the final version. 4. **SVT Compared with LVM/LLM**: We wish to state the following on the reviewer's comment about large vision models (LVM/LLM): We have observed in recent papers that certain transformer models have a significantly larger number of parameters, with BiT-M having 928 million parameters and achieving 85.4% accuracy on ImageNet-1K, whereas ViT-H has 632 million parameters and achieves an accuracy of 85.1%. 
Comparatively, SVT-H-L has 54 million parameters and achieves 85.7% accuracy on ImageNet-1K – nearly 10X fewer parameters and FLOPS, yet with improved accuracy, as captured in Table 3 of the CvT paper, reference [60] of our paper.
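The parameter saving from Einstein multiplication mentioned in point 2 can be sketched with `numpy.einsum` (an illustrative block-diagonal channel mixing; the sizes below are hypothetical and not SVT's actual dimensions):

```python
import numpy as np

n, d, b = 16, 64, 4                        # tokens, channels, blocks (hypothetical sizes)
x = np.random.randn(n, b, d // b)          # token features with channels split into b blocks

# Dense mixing: a single d x d weight matrix -> d^2 = 4096 parameters.
w_dense = np.random.randn(d, d)

# Einstein (block-diagonal) mixing: b small (d/b) x (d/b) weights -> d^2 / b = 1024 parameters.
w_ein = np.random.randn(b, d // b, d // b)
y = np.einsum('nbi,bij->nbj', x, w_ein)    # channels are mixed only within their own block

print(w_dense.size, w_ein.size)            # 4096 1024: 4x fewer parameters for the blocked mix
assert y.shape == (n, b, d // b)
```

This illustrates why the blocked Einstein formulation reduces both the parameter count and the FLOPS relative to a dense mixing matrix, at the cost of not mixing across blocks.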
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive suggestions. We intend to incorporate the feedback to obtain an improved revision of our paper – we sincerely believe that the comments shall improve the quality of the paper significantly. We provide clarifications for the points raised by the reviewers: 1. We have added a separate background section and included a detailed explanation of the Dual-Tree Complex Wavelet Transform (DTCWT), as suggested by the reviewers. We have included a diagram in the attached PDF for your kind reference. The same shall be added to the camera-ready version of the paper. - The explanation also captures the ability of the DTCWT to separate low-frequency and high-frequency components of an image, as well as the use of DTCWT in vision transformers. - We also give a mathematical characterization of DTCWT. 2. We have added several experiments to support and validate the claims made in the paper. - **Initial Convolutional Layers VS Scatter Layers**: These experiments include a comparison of SVT with transformer architectures having initial convolutional layers and final attention layers. This is now substantiated with detailed experiments, where we compare SVT with HorNet, CMT, and CvT. The new table is numbered Table 5 in the attached PDF. - **Efficiency of the Einstein Multiplication based SVT Implementation**: We have also conducted a new experiment to measure the latency of SVT and compare it with the latency of other types of transformers. The table, which is captured in Table 2 in the attached PDF, shows that SVT has lower latencies in spite of having redundancies, which demonstrates the efficiency of the Einstein multiplication in the high-frequency components. SVT also has lower FLOPS and fewer parameters due to the efficient implementation. 
- **Qualitative and quantitative study of Invertibility**: We have conducted an additional experiment to compare the invertibility of DTCWT with the Fourier transform and DWT – we measure the reconstruction loss of the transformer based on all three and compare them in Table 1 of the attached PDF. We have also qualitatively visualized the invertibility property of DTCWT, Fourier, and DWT, shown in Figures 1 and 2. Table 1 captures the invertibility studies. - **Invertibility VS Redundancy Trade-off**: We have analyzed the trade-off between invertibility and redundancy in SVT, where we were able to show that if we increase the number of orientations and the number of stages, the reconstruction loss decreases, which is indicative of better invertibility at the cost of redundancy. The efficient implementation based on Einstein multiplication offsets the redundancy with improved performance and a reduced number of parameters and computational cost. We also show that SVT's orientations better represent the higher-order properties of the image. This is captured in Table 6 of the attached PDF. - **Robustness**: We have also conducted an experiment to measure the robustness of SVT by pre-training on ImageNet-1K and comparing it with other architectures like ResNet, ViT, and MLP-Mixer on the ImageNet-C dataset. This is captured in Table 7 of the attached PDF and clearly demonstrates that SVT achieves better robustness compared to other transformer architectures. - **Initial Scatter VS Initial Attention**: We have conducted an additional experiment to show that SVT with initial scatter layers and deeper attention layers performs better than SVT with initial attention layers and deeper scatter layers. This is captured in Table 3 in the attached PDF. - **Faster RCNN comparison**: We have included the SVT VS Faster RCNN comparison in Table 4. 3. We have revised the paper significantly to take care of grammatical and language issues, as suggested by the reviewers. 
## Overview of DTCWT and Decoupling of Low & High Frequencies The Discrete Wavelet Transform (DWT) replaces the infinitely oscillating sinusoidal functions with a set of locally oscillating basis functions known as wavelets (Kingsbury et al. [46]). A wavelet is a combination of a low-pass scaling function $\phi(t)$ and shifted versions of a band-pass wavelet function $\psi(t)$. It can be represented mathematically as given below: \begin{equation*} x(t)= \sum_{n=-\infty}^{\infty}c(n)\phi(t-n) +\sum_{j=0}^{\infty}\sum_{n=-\infty}^{\infty}d(j, n)2^{j/2}\psi(2^{j}t-n). \end{equation*} where $c(n)$ are the scaling coefficients and $d(j,n)$ are the wavelet coefficients. Kingsbury et al. [46] have identified four issues with the DWT, including oscillations, shift variance, aliasing, and lack of directionality. One solution to the above problems is the Complex Wavelet Transform (CWT). The CWT is inspired by the Fourier representation and has a complex-valued scaling function and a complex-valued wavelet function, as given below: $\psi_{\rm c}(t)=\psi_{\rm r}(t)+{\rm j}\psi_{\rm i}(t)$. The CWT is a doubly redundant tight frame in 1-D and is able to overcome the four shortcomings mentioned above. The DTCWT is a specific redundant type of CWT, which is based on two Filter Bank (FB) trees. The DTCWT uses two real DWTs, with the first one giving the real part of the transform and the second one giving the imaginary part. The two real DWTs use two different sets of filters, which are jointly designed to give an approximation of the overall complex wavelet transform and satisfy the Perfect Reconstruction (PR) conditions. Let $h_0(n), h_1(n)$ denote the low-pass and high-pass filters in the upper band, while $g_0(n), g_1(n)$ denote the same for the lower band. The wavelets corresponding to the upper band and lower band are denoted by $\psi_h(t), \psi_g(t)$. The filters are designed to obtain the complex wavelet by satisfying the PR conditions. 
Since the filters are real, no complex arithmetic is required for implementing DTCWT. It is just two times more expansive in 1-D. It is also easy to invert, as the two separate DWTs can be inverted. Pdf: /pdf/7626bf58a8eb58a264eb76bd7a188ff98a32c388.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper introduces the Scattering Vision Transformer (SVT), which utilizes a spectral scattering network to capture fine-grained information in images and addresses the invertibility issue. SVT incorporates a novel spectral mixing technique using Einstein multiplication for efficient channel and token mixing. The approach achieves state-of-the-art performance on the ImageNet dataset, significantly reducing the number of parameters and FLOPS. It also demonstrates competitive results in other vision tasks, including transfer learning on standard datasets. Strengths: ## Novelty The use of scattering networks and Fourier-like frequency processing is novel and innovative. The paper addresses a significant problem related to the texture processing ability of vision transformers, showcasing desirable novelty and originality. ## Quality & Clarity The paper is well-organized, providing clear preliminaries, assumptions, definitions, and solutions. The experiments are detailed and concrete. ## Significance SVT effectively separates low-frequency and high-frequency image components while reducing computational complexity through the Einstein multiplication-based mixing technique. It achieves state-of-the-art performance on image classification and instance segmentation tasks and shows comparable results in object detection tasks. Weaknesses: The computational costs and complexity limit the number of directional orientations used in SVT. Currently, SVT employs six orientations, but increasing the number of orientations would capture more semantic information at the expense of higher computational complexity. Optimization possibilities should be explored. Additionally, SVT's performance in domains such as speech and NLP remains unexplored. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments on the paper and for guiding us towards possible research directions. 1. Regarding the invertibility VS redundancy trade-off, we conduct an experiment to show that invertibility helps in comprehending the image and not just in contributing to the performance. We pass an image through a raw DTCWT, apply an inverse DTCWT operation, and compute the reconstruction loss. We ran the experiment with various values of J, which represents the orientation in SVT. We observe that the reconstruction loss of the image reduces with increasing values of J, which clearly shows that as the orientations increase, SVT is able to comprehend the image better – the orientations are able to represent the high-order properties of the image, which benefits SVT. We have compared different spectral transforms such as the Fourier Transform, the Discrete Wavelet Transform, and DTCWT. We observe that the reconstruction loss is lower for DTCWT than for the other spectral transforms, as captured in Table 1 below. We also capture the invertibility VS redundancy trade-off in Table 5. In Table 1 we measured the reconstruction loss (MSE) for FFT, DWT stages 1, 2, 3, and DTCWT stages 1, 2, 3. It shows that the MSE loss decreases as we increase the level of decomposition (J) and the order of selectivity. DTCWT has a lower MSE than DWT. Similarly, the PSNR value of DTCWT is higher than that of DWT and Fourier. The Peak Signal-to-Noise Ratio (PSNR) is a measure of the ratio between the maximum possible power of an image and the power of corrupting noise that affects its representation. It is defined via the MSE and is expressed in decibels (dB). The higher the PSNR, the better the quality of the reconstructed image. In summary, for a reconstructed image to be considered of high quality, it should have a low MSE and a high PSNR. 2. 
Optimization possibilities – we shall explore a few optimization possibilities to reduce the redundancy in the orientations of SVT in future work, including: - The selection of the relevant orientations which best capture image properties, instead of using all six orientations. - The use of symmetric and anti-symmetric pairs among the orientations – for instance, $15^\circ$ and $165^\circ$, as well as $45^\circ$ and $135^\circ$, are pairs that capture similar image properties, which we can leverage to optimize the number of orientations and reduce redundancy. - The use of a pyramidal decomposition of orientations – the finer layers require more orientations than the coarser layers. 3. We shall explore the applicability of SVT to speech data, as it is also a spectral signal. We shall also explore NLP datasets for SVT. 4. We are adding a visualization of the DTCWT filter characterization in terms of the low-frequency component and all six directional high-frequency components, as shown in Figures 1 and 2. Figure 1 shows the phase and magnitude of the FFT and the low-frequency component of DTCWT, whereas Figure 2 shows the phase and magnitude of the high-frequency components of DTCWT. It clearly indicates that DTCWT captures six orientations: $15^\circ$, $45^\circ$, $75^\circ$, $105^\circ$, $135^\circ$, and $165^\circ$. These cannot be captured by FFT or DWT.
null
null
null
null
null
null
Multi-modal Queried Object Detection in the Wild
Accept (poster)
Summary: Based on recent vision-language foundation models such as GLIP and Grounding DINO, the authors propose an improved multimodal query pipeline. A Gated Class-scalable Perceiver is used to apply cross-attention to both the language and vision query inputs. A masking strategy for text tokens is further proposed to improve the generalization ability. The proposed method achieves strong performance on the zero-shot LVIS and ODinW benchmarks. Strengths: - This paper first explores using both vision and language queries as inputs - The proposed method achieves superior experimental results over previous methods - The authors release the source code Weaknesses: - Overclaimed training efficiency, see Q1 in the Questions section. The pre-training time of GLIP should also be considered, which makes MQ-GLIP more time- and data-intensive. - Potential violation of the zero-shot setting, see Q2 in the Questions section. MQ-Det might improperly use exemplar images in a zero-shot setting, which is against the task definition. - Subpar full-shot performance, see Q3 in the Questions section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I want to verify one point: is the proposed method, such as MQ-GLIP, trained from scratch, or finetuned from the initial weights of GLIP? According to #175-176: "we freeze the entire pre-trained detector and only train the newly-added gated class-scalable perceiver modules", I assume that MQ-GLIP is finetuned on the pre-trained GLIP model. Then I think the authors overclaim their training efficiency. I think the pre-training time of GLIP cannot be ignored. For example, in #232, it should be "MQ-GLIP-T adds extra 2% training time and extra 12% data usage of GLIP-T." rather than "only requires 2%". In my opinion, MQ-GLIP increases the training time and training data compared to GLIP, since MQ-GLIP brings an extra fine-tuning stage. 2. 
When evaluating the zero-shot setting such as zero-shot LVIS, does MQ-Det require the exemplar images for each class of the LVIS dataset? I guess the answer is 'yes', since MQ-Det requires both the language and vision queries as inputs. But I think this may violate the zero-shot setting. In the zero-shot setting, the LVIS classes are treated as unseen classes, and the exemplar images of the unseen classes are not allowed to be accessed. The model can only use the textual descriptions or text attributes associated with the unseen classes. So I think MQ-Det achieves strong zero-shot performance via a cheating inference evaluation. I think using the exemplar images of test-set classes is allowed in the few-shot setting but is not allowed in the zero-shot setting, which weakens the contribution of this paper. 3. In Table 2, the proposed MQ-Det performs worse than DINO-Swin-L in the full-shot setting. Can the authors provide some explanation for this? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discussed their limitations in section C of the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. Below, please find our responses to each of the concerns or questions raised in the review: > **Training efficiency on time and data** We acknowledge that our model is built upon pretrained GLIP/GroundingDINO models, which indirectly utilizes their pretraining datasets and would account for more training time if trained from scratch. That is the reason we refer to our training process as modulated pretraining (lines 156-159) instead of pretraining. Meanwhile, we'd like to emphasize that the aforementioned issue does not contradict the efficiency of our method. Our efficiency lies in the fact that our approach allows current mainstream language-queried detectors to be equipped with multi-modal queries only through a lightweight modulating process. This offers a promising solution to overcome the limitations of insufficient granularity and ambiguous queries in current language-queried detectors. As an example, one may say that LoRA [1] (low-rank adaptation for efficient tuning on GPT) takes more training time and data than GPT if trained from scratch, but clearly, its efficiency lies in the fact that we do not need to train from scratch. We will make the following revisions to avoid misunderstanding: 1. We will provide more clarification on our data discrepancy in Table 1 and Table 2, for example: Table 1: xxxx. $^\dagger$ Modulating upon pretrained models indirectly utilizes their pre-training data, and potentially consumes more time if we take the training time of the pretrained language-queried detectors into consideration. | Model|...|Pre-train Data|...|Training Time| |-|-|-|-|-| |...|...|...|...|...| |MQ-GLIP-T|...|O365 (+GLIP$^\dagger$)|...|10$^\dagger$| |MQ-GroundingDINO-T|...|O365 (+GroundingDINO$^\dagger$)|...|10$^\dagger$| 2. 
Add more description of the data discrepancy after the last sentence in line 232: "It is worth noting that modulating upon pretrained GLIP/GroundingDINO indirectly utilizes their pretraining data. The efficiency here describes that our approach allows current mainstream language-queried detectors to be equipped with multi-modal queries only through a lightweight modulating process, avoiding training from scratch." 3. Add clarification on the training time and data in lines 13-16, lines 76-78, and lines 231-232. For example, "For instance, MQ-Det ... with merely additional 3% modulating time upon GLIP" in lines 13-16. [1] Hu, Edward J., et al. LoRA: Low-rank adaptation of large language models. ICLR 2022. > **Zero-shot setting** We acknowledge that our setting is different from the zero-shot setting in previous language-queried detectors, since we use additional visual exemplars along with texts as category descriptions. Our setting is derived from practical implementation, where users can detect their customized objects through textual descriptions, visual exemplars, or both without any fine-tuning. As an example, detecting different types of "mushrooms" in the Mushroom dataset of OdinW-13 can be much easier with visual exemplars than with ambiguous textual descriptions. It is hard to reach an absolutely fair comparison in this setting despite the considerable efforts we have made. The reason is that most methods, except ours, do not support visual exemplars as inputs, while comparing our finetuning-free method with other finetuned methods in a few-shot setting would also be unfair. To avoid misunderstanding, we will make the following revisions: 1. Modify the setting name: (a) The name of section 3.2.1: "Multi-modal queried detection without finetuning" (b) At the beginning of line 226: "We evaluate the model's ability ... in a **finetuning-free setting**, where users can detect their customized objects through textual descriptions, visual exemplars, or both without any finetuning. " 2. 
Add more clarification: (a) In the title of Table 1: "**Finetuning-free detection** results on the LVIS benchmark. Differently, we provide models with 5-shot instances as vision queries without any fine-tuning..." (b) Lines 13-15 in the abstract: "For instance, MQ-Det significantly improves the state-of-the-art open-set detector GLIP by +7.8% AP on the LVIS benchmark with multi-modal queries without any downstream fine-tuning..." (c) Similar modifications as (b) in line 52, lines 76-77, and lines 226-232. Additionally, we provide two more methods to acquire vision queries in the **finetuning-free setting (the zero-shot setting in the original version)** in Section A.1 of the Appendix, which do not have access to the target dataset. We would also like to emphasize that the **core merit of this work** is that we address the insufficient granularity and ambiguous queries that existing language-queried detectors suffer from via multi-modal queries, as recognized by Reviewer RdSN "The paper **tackles an important problem**: ..." and Reviewer JLFL "Leveraging both textual and visual info ... **makes lots of sense**". The main message from our original zero-shot comparison is that, equipped with multi-modal queries, previous language-queried detectors are allowed to detect objects with various granularity and demonstrate great performance improvement without any finetuning. This confirms our core merit and indicates that multi-modal queried object detection can be a promising future direction. > **Full-shot performance** In the full-shot setting, the reasons that MQ-Det performs worse than DINO-Swin-L are twofold. First, our approach is more suitable in few-shot scenarios, where the auxiliary information provided by the vision queries plays a vital role. This information has a weaker effect with sufficient data in the full-shot setting. Second, MQ-GLIP-T is modulated upon GLIP-T, with only the expectation of improving over GLIP-T. 
Given that DINO-L is a larger model and has a stronger architecture design (e.g., mixed query selection and the look-forward-twice module), it is understandable for MQ-GLIP-T to perform worse than it.

---

Rebuttal Comment 1.1:

Comment: Thanks to the authors for providing the rebuttal replies.

> Training efficiency on time and data

The authors agree that it should be “add extra” rather than “only requires” in #232. In Table 1, the authors may consider adding two columns, e.g., ‘training time’ and ‘modulating time’, ‘training data’ and ‘modulating data’, to avoid confusion.

> Zero-shot setting

While the authors acknowledge that their experiments do not conform to the zero-shot setting, they introduce a new term, the "finetuning-free setting". However, my concerns remain. For instance, the comparison in Table 1 is not fair because other methods did not use the 5 vision examples. Also, since the authors propose a new setting, they may need to put more effort into providing more baselines rather than the variants of MQ-GLIP-T in Appendix A.1 under their new setting. Reviewer 9Uan also expressed concerns about this particular point. I also share reviewer 9Uan's concern about the missing ablation studies on the number of vision examples used in the "finetuning-free setting".

> Full-shot performance

I am still confused about why the authors do not provide the MQ-GLIP-L results in the few-shot/full-shot setting. Overall, I will keep my initial rating.

---

Reply to Comment 1.1.1:

Comment: Thanks for the reviewer's response. Our new response is as follows.

> Q1: In Table 1, the authors may consider adding two columns, e.g., ‘training time’ and ‘modulating time’, ‘training data’ and ‘modulating data’, to avoid confusion.

Thanks for the reviewer's valuable recommendation. We will revise the table according to your suggestion.

> Q2: For instance, the comparison in Table 1 is not fair because other methods did not use the 5 vision examples.
Also, since the authors propose a new setting, they may need to put more effort into providing more baselines rather than the variants of MQ-GLIP-T in Appendix A.1 under their new setting.

1. The finetuning-free comparison between our multi-modal queried method and previous language-queried methods is reasonable because it verifies the superiority of multi-modal queries over single-modal queries, and shows that multi-modal queried object detection can be a promising future direction, rather than simply presenting a state-of-the-art model.

2. Thanks for the reviewer's suggestion. It is hard to find existing baselines that are suitable for the multi-modal queried object detection setting, as recognized in the reviewer's initial review: "This paper first explores using both vision and language queries as inputs". To this end, we modify GLIP/GroundingDINO through the method in OWL-ViT as baselines and conduct finetuning-free evaluation on LVIS MiniVal. Specifically, our modification contains three steps. 1) Acquiring naive vision queries: we feed the visual exemplars together with the 1,203 LVIS category texts into GLIP/GroundingDINO, then crop the corresponding regions on the output image features using an RoIPooler. The averaged cropped region features are treated as vision queries. 2) Constructing naive multi-modal queries: we average the classification logits of the original language queries and the naive vision queries as the results of multi-modal queries. 3) Detection evaluation: we separately use the original language queries, the naive vision queries, and the naive multi-modal queries for evaluation on LVIS MiniVal. The classification is conducted via dot-product similarity that is similar to OWL-ViT. The results are shown in the following table. For example, GLIP-T, GLIP-T-Img, and GLIP-T-MM denote GLIP with the original language queries, naive vision queries, and naive multi-modal queries, respectively. The number of vision queries is set to 5.
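For concreteness, the three-step construction above could be sketched roughly as follows before the results table. This is a minimal NumPy illustration under our own simplifying assumptions: random features stand in for real GLIP/GroundingDINO outputs, and all array names and dimensions are hypothetical, not the actual evaluation code.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, E, N = 256, 1203, 5, 10  # feature dim, LVIS categories, exemplars/class, proposals

# Hypothetical stand-ins for real model outputs: text embeddings of the K category
# names, and E cropped region features per category (as an RoIPooler would produce).
text_queries = rng.normal(size=(K, D))
region_feats = rng.normal(size=(K, E, D))

# Step 1: naive vision queries = average of the cropped region features.
vision_queries = region_feats.mean(axis=1)          # (K, D)

def classify(proposal_feats, queries):
    """Dot-product similarity between proposal features and category queries."""
    return proposal_feats @ queries.T               # (N, K) logits

proposals = rng.normal(size=(N, D))                 # candidate box features
text_logits = classify(proposals, text_queries)
vision_logits = classify(proposals, vision_queries)

# Step 2: naive multi-modal queries = average of the two classification logits.
mm_logits = 0.5 * (text_logits + vision_logits)
```

Step 3 then scores the proposals with each of the three logit sets (language, vision, multi-modal) separately.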
| Model | $AP$ | $AP_r$ | $AP_c$ | $AP_f$ |
| ------------------- | ---- | ------ | ------ | ------ |
| GroundingDINO-T | 25.7 | 15.2 | 21.9 | 30.9 |
| GroundingDINO-T-Img | 7.7 | 2.6 | 6.5 | 9.8 |
| GroundingDINO-T-MM | 14.7 | 8.2 | 12.6 | 17.9 |
| MQ-GroundingDINO-T | 30.2 | 21.7 | 26.2 | 35.2 |
| GLIP-T | 26.0 | 20.8 | 21.4 | 31.0 |
| GLIP-T-Img | 7.6 | 2.4 | 6.8 | 9.5 |
| GLIP-T-MM | 15.4 | 10.6 | 13.4 | 18.0 |
| MQ-GLIP-T | 30.4 | 21.0 | 27.5 | 34.6 |
| GLIP-L | 37.3 | 28.2 | 34.3 | 41.5 |
| GLIP-L-Img | 10.9 | 4.1 | 9.2 | 13.7 |
| GLIP-L-MM | 24.3 | 17.7 | 21.5 | 27.9 |
| MQ-GLIP-L | 43.4 | 34.5 | 41.2 | 46.9 |

The results show that directly using vision queries in language-queried detectors through some naive modification demonstrates poor detection performance, and combining such vision queries with language queries as multi-modal queries impairs the performance.

> Q3: I also share reviewer 9Uan's concern about the missing ablation studies on the number of vision examples used in the "finetuning-free setting".

We have provided the ablation results in the rebuttal. Please refer to the `Meta-parameters and the related work chapter` part of our initial response to Reviewer 9Uan.

> Q4: I am still confused about why the authors do not provide the MQ-GLIP-L results in the few-shot/full-shot setting.

We have provided the results in the paper. Please refer to Figure 3 in the paper and Table Ⅲ in the Appendix. We did not present the MQ-GLIP-L results in Table 2 only to keep the comparison fair.

---

Please feel free to let us know if you have other questions.
Summary: The paper proposes MQ-Det, a novel module that integrates both language and visual queries efficiently for object detection tasks. This module enhances each category token with vision queries, providing rich, detailed visual context to the text-based models. The proposed method is experimentally tested on the LVIS and ODinW datasets, obtaining state-of-the-art results. The method is versatile and can be applied to other state-of-the-art language-queried object detectors.

Strengths: The overall idea of the paper is easy to understand. The presented method, MQ-Det, is novel because it introduces a unique combination of language and visual queries (multi-modal queries) to enhance object detection in an efficient manner. Training the proposed module is not expensive and does not require a lot of data. The proposed method, MQ-Det, was evaluated using two state-of-the-art models, GLIP and GroundingDINO. The results demonstrated superior performance when MQ-Det was applied, showcasing the effectiveness of the MQ-Det approach.

Weaknesses: The comparison in the zero-shot and few-shot scenarios is not fair since this method uses "5 instances as vision queries for each category from the downstream training set", thus having access to some information from the target dataset, while all the other methods do not have any kind of access to that (for example, owl-vit for image-guided detection also doesn't use any fine-tuning, but when using vision queries from the target dataset they present the results as few-shot -- please see chapter 4.4 from owl-vit). The same observation holds for few-shot. Also, it is not clear how those 5 instances are picked and how that affects the final performance. The time and data comparison against the state of the art is not entirely fair since this method builds on top of GLIP and GroundingDINO. This method thus benefits from the training data and training time of GLIP/GroundingDINO, so the claims and comparisons may be misleading.
It is not clear if the method can be used with text only or whether some vision queries are always needed. Tied to that, it is not clear what MQ-GLIP-T-Txt from Table 1 represents. Can you mask the visual queries? The comparison between MQ-GLIP-T-Txt and GLIP-T is confusing. Various meta-parameters seem to be chosen randomly: for example, choosing 5 as the number of vision queries. Ultimately, while I do see the merits of this work, I find the comparisons to be misleading and not entirely fair. minor typos: visual detials

After rebuttal: The authors addressed my concerns properly, and with the promised modifications I consider that this paper meets NeurIPS standards.

Technical Quality: 3 good Clarity: 4 excellent

Questions for Authors: How are the vision queries chosen? How does this affect the performance? Choosing to have the related work chapter as the fourth one is unusual; is there any reason for that? Please see Weaknesses.

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: the paper discusses some of the limitations throughout the paper and does not address societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. Please find our responses below:

> **Comparison in the zero-shot scenario**

We acknowledge that our setting is different from the zero-shot setting in previous language-queried detectors, since we use additional visual exemplars along with texts as category descriptions. Our setting is derived from practical implementation, where users can detect their customized objects through textual descriptions, visual exemplars, or both without any fine-tuning. As an example, detecting different types of "mushrooms" in the Mushroom dataset of OdinW-13 can be much easier with visual exemplars than with ambiguous textual descriptions. It is hard to reach an absolutely fair comparison in this setting, despite the considerable effort we have made. The reason is that most methods, except ours, do not support visual exemplars as inputs, while comparing our finetuning-free method with other finetuned methods in a few-shot setting would also be unfair. To avoid misleading readers, we will make the following revisions:

1. Modify the setting name: (a) The name of section 3.2.1: "Multi-modal queried detection without finetuning" (b) In the beginning of line 226: "We evaluate the model's ability ... in a **finetuning-free setting**, where users can detect their customized objects through textual descriptions, visual exemplars, or both without any finetuning."

2. Add more clarification: (a) In the title of Table 1: "**Finetuning-free detection** results on the LVIS benchmark. Differently, we provide models with 5-shot instances as vision queries without any fine-tuning..." (b) Lines 13-15 in the abstract: "For instance, MQ-Det significantly improves the state-of-the-art open-set detector GLIP by +7.8% AP on the LVIS benchmark with multi-modal queries without any downstream fine-tuning..." (c) Similar modifications as (b) in line 52, lines 76-77, and lines 226-232.
Additionally, we provide two more methods to acquire vision queries in the **finetuning-free setting (the zero-shot setting in the original version)** in Section A.1 of the Appendix, which do not have access to the target dataset. We would also like to emphasize that our goal is to address the insufficient granularity and ambiguous queries that existing language-queried detectors suffer from. To achieve this goal, we propose to equip language-queried-only detectors with multi-modal queries. The main message from our original zero-shot comparison is that, equipped with multi-modal queries, previous language-queried detectors are allowed to detect objects with various granularity and demonstrate great performance improvement without any finetuning. This confirms our goal and indicates that multi-modal queried object detection can be a promising future direction.

> **Comparison in the few-shot scenario**

One thing we'd like to clarify is that the few-shot comparison is absolutely fair. The vision queries are strictly selected from the few-shot datasets, as described in lines 217-218. For example, in the 3-shot setting, we only use the 3-shot training samples as the vision queries for each category, without any additional data.

> **Comparison on training time and data**

We will provide more clarification to avoid misleading readers. Due to the character limit, please refer to the "`training efficiency on time and data`" part of our response to Reviewer wDpN for the detailed modifications we will make during revision.

> **Inference with masked vision queries**

Our approach supports inference with mixed queries, namely, only augmenting a part of the categories with multi-modal queries while leaving the other categories with single-modal queries. For example, as shown in the table, MQ-GLIP-T-Mix only provides "birdfeeder" with both textual descriptions and visual exemplars, while providing "armchair" with only visual exemplars and "straw" with only textual descriptions.
MQ-GLIP-T-Txt only uses text queries, which is equivalent to GLIP. We will add a clear definition of MQ-GLIP-T-Txt in line 238. MQ-GLIP-T-Img masks all input texts and only uses vision queries. The categories are from LVIS MiniVal, and the AP results in the finetuning-free setting are reported.

| Model | Armchair #19 | Straw #1024 | Birdfeeder #100 |
|-|-|-|-|
| MQ-GLIP-T | 45.8 | 12.0 | 2.8 |
| MQ-GLIP-T-Txt | 44.1 | 8.2 | 0.0 |
| MQ-GLIP-T-Img | 41.7 | 12.3 | 4.2 |
| MQ-GLIP-T-Mix | 41.8 (Img) | 8.9 (Txt) | 2.8 (Txt+Img) |

We did not include the mixed results in the initial submission because it is laborious to design a customized query type for each category in the massive benchmarks used in the paper. However, it is relatively easy for users to flexibly adjust the query types to meet their own needs during implementation. We leave this study to future work.

> **How do the vision queries affect the final performance?**

The 5 vision queries are randomly picked. We did not employ specific tricks for the selection of vision queries, as even the lower bound achieved through random sampling outperformed the language-queried baseline models. The detailed results with multiple random samplings can be found in the "`inference variance with error bars`" part of our response to Reviewer RdSN.

> **Meta-parameters and the related work chapter**

We did not conduct a specific search on meta-parameters since it is not the key point of this work. Nonetheless, here we provide an analysis of the number of vision queries in the finetuning-free setting. We observed a clear improvement in detection performance even with a minimal number of vision queries, e.g., 1 vision query.

| #Vision queries | 0 | 1 | 3 | 5 | 10 |
|-|-|-|-|-|-|
| LVIS AP (Finetuning-free) | 26.0 | 28.5 | 29.9 | 30.4 | 30.6 |

We present the related work chapter as the fourth one to encourage readers to focus on our method. A similar pattern can be found in [1], [2].

[1] Radford A, et al.
Learning transferable visual models from natural language supervision. ICML 2021. [2] Chen T, et al. A unified sequence interface for vision tasks. NeurIPS 2022.

---

Rebuttal Comment 1.1:

Comment: I appreciate the authors' efforts in addressing the concerns through their rebuttal and the supplementary experiments. I recognize the merits of the proposed method and commend the clarity of the manuscript's presentation. However, my reservations persist regarding the fairness of the zero-shot comparisons. Despite the authors' acknowledgment of this issue, the forthcoming modifications may only partially alleviate the reader's uncertainties. It's pertinent to note that certain methods, even in a zero-shot paradigm without any fine-tuning or weight updates, might inherently benefit from the availability of 5 samples, as exemplified by the "owl-vit" method.

---

Rebuttal Comment 1.2:

Comment: How do the results change in the zero-shot setup if only one example is used as opposed to 5?

---

Reply to Comment 1.2.1:

Comment: Thanks for the reviewer's response. Our new response is as follows.

> Q1: It's pertinent to note that certain methods, even in a zero-shot paradigm without any fine-tuning or weight updates, might inherently benefit from the availability of 5 samples, as exemplified by the "owl-vit" method.

1. In fact, "owl-vit" did not demonstrate the benefits from vision samples compared to its language-queried counterpart. In chapter 4.4 of "owl-vit", only the results of using 1 vision sample and 10 vision samples were compared, without including its language-queried results. To verify this, we conducted experiments using its Hugging Face implementation and compared its language-queried model, OWL-ViT-Txt, with its vision-queried model, OWL-ViT-Img, as shown in the table below. OWL-ViT-Img, MQ-GLIP-T, and MQ-GLIP-T-Img all use one identical vision sample as the vision query.
The results show that using vision samples alone in OWL-ViT showcases inferior open-world detection performance. Furthermore, "owl-vit" does not support language and vision queries as joint inputs, which neglects the complementary nature of vision and language, as outlined in our related work chapter.

| Model | Rabbit | Pothole | ODinW-13 |
| - | ------ | ------- | -------- |
| OWL-ViT-Txt (ViT-L/14) | 73.0 | 17.5 | 40.9 |
| OWL-ViT-Img (ViT-L/14) | 19.4 | 0.3 | 10.5 |
| MQ-GLIP-T | 74.4 | 13.2 | 43.9 |
| MQ-GLIP-T-Txt | 71.6 | 6.7 | 41.9 |
| MQ-GLIP-T-Img | 71.1 | 4.0 | 29.6 |

2. We are the first work to explore multi-modal queries in object detection, which supports jointly using textual descriptions and visual exemplars. The finetuning-free comparison between our multi-modal queried method and previous language-queried methods is reasonable because it verifies the superiority of multi-modal queries over single-modal queries, and shows that multi-modal queried object detection can be a promising future direction, rather than simply presenting a state-of-the-art model.

3. It is challenging to use multi-modal queries without any finetuning for previous language-queried detectors. We conduct further experiments to show that directly using vision queries in language-queried detectors through some naive modification demonstrates poor detection performance, and combining such vision queries with language queries as multi-modal queries impairs the performance. We modify GLIP/GroundingDINO through the method in OWL-ViT as baselines and conduct finetuning-free evaluation on LVIS MiniVal. Specifically, our modification contains three steps. 1) Acquiring naive vision queries: we feed the visual exemplars together with the 1,203 LVIS category texts into GLIP/GroundingDINO, then crop the corresponding regions on the output image features using an RoIPooler. The averaged cropped region features are treated as vision queries.
2) Constructing naive multi-modal queries: we average the classification logits of the original language queries and the naive vision queries as the results of multi-modal queries. 3) Detection evaluation: we separately use the original language queries, the naive vision queries, and the naive multi-modal queries for evaluation on LVIS MiniVal. The classification is conducted via dot-product similarity that is similar to OWL-ViT. The results are shown in the following table. For example, GLIP-T, GLIP-T-Img, and GLIP-T-MM denote GLIP with the original language queries, naive vision queries, and naive multi-modal queries, respectively. The number of vision queries is set to 5.

| Model | $AP$ | $AP_r$ | $AP_c$ | $AP_f$ |
| - | ---- | ------ | ------ | ------ |
| GroundingDINO-T | 25.7 | 15.2 | 21.9 | 30.9 |
| GroundingDINO-T-Img | 7.7 | 2.6 | 6.5 | 9.8 |
| GroundingDINO-T-MM | 14.7 | 8.2 | 12.6 | 17.9 |
| MQ-GroundingDINO-T | 30.2 | 21.7 | 26.2 | 35.2 |
| GLIP-T | 26.0 | 20.8 | 21.4 | 31.0 |
| GLIP-T-Img | 7.6 | 2.4 | 6.8 | 9.5 |
| GLIP-T-MM | 15.4 | 10.6 | 13.4 | 18.0 |
| MQ-GLIP-T | 30.4 | 21.0 | 27.5 | 34.6 |
| GLIP-L | 37.3 | 28.2 | 34.3 | 41.5 |
| GLIP-L-Img | 10.9 | 4.1 | 9.2 | 13.7 |
| GLIP-L-MM | 24.3 | 17.7 | 21.5 | 27.9 |
| MQ-GLIP-L | 43.4 | 34.5 | 41.2 | 46.9 |

> Q2: How do the results change in the zero-shot setup if only one example is used as opposed to 5?

Please refer to the `Meta-parameters and the related work chapter` part of our initial response for the results of MQ-GLIP-T on LVIS MiniVal. Also, the MQ-GLIP-T and MQ-GLIP-T-Img in the first table use one example for the ODinW-13 evaluation in the zero-shot setup, namely,

| Model | ODinW-13 (1 vision query) | ODinW-13 (5 vision queries) |
| - | - | - |
| MQ-GLIP-T | 43.9 | 45.6 |
| MQ-GLIP-T-Img | 29.6 | 31.9 |

---

If you have other concerns, please feel free to reply.
Summary: The paper introduces MQ-Det, a novel approach for open-vocabulary object detection that combines textual descriptions and visual exemplars as category queries. MQ-Det aims to address the limitations of existing text-queried object detectors by incorporating visual information and providing various granularity in the descriptions. The proposed architecture can be easily integrated with pre-trained language-queried detectors. To overcome the learning inertia problem caused by frozen detectors, MQ-Det employs a vision-conditioned masked language prediction strategy. This strategy involves randomly masking text tokens and allowing vision queries to independently predict objects, ensuring sufficient visual intervention during training. Overall, MQ-Det presents a simple yet effective architecture and training strategy that improves open-world object detection performance by incorporating multi-modal queries.

Strengths: 1. The paper is well-written and easy to follow: The authors present their research in a clear and organized manner. 2. The paper tackles an important problem: The authors address the challenge of improving the robustness and generalization of current language-query based object detectors by incorporating visual cues. This is a significant problem in the field of open-vocabulary detection, as relying solely on textual descriptions can lead to insufficient granularity and ambiguous queries. By introducing multi-modal queries, the paper provides a promising solution to overcome these limitations and enhance detection performance in real-world scenarios. 3. The proposed GCP module and training techniques have broader applicability: The GCP module introduced in the paper is a valuable contribution that can have implications beyond the scope of this study. For example, the conditional gating layer of the GCP module provides a framework for effectively integrating useful visual information (instead of noisy visual templates) into language-queried detectors.
Additionally, the random masking strategy addresses the learning inertia problem and enables better fusion of visual and textual cues. These findings can benefit future research. 4. The experiments are solid and prove the effectiveness of the proposed method.

Weaknesses: 1. The training data in Table 1 and Table 2 is confusing. It appears that the authors loaded the pretrained GLIP models, which were pretrained on datasets such as Objects365, GoldG, CC4M, and Cap24M. However, the presented training data in those tables only includes the Objects365 dataset for the proposed models. It is important to note that the performance of the model is achieved by utilizing all the mentioned pretraining datasets, rather than just the Objects365 dataset. Clarifying this discrepancy in the presentation of training data would provide a clearer understanding of the model's training process. 2. The evaluation of open-vocabulary or open-set object detection requires a clear separation of base and novel classes. Unfortunately, the paper does not explicitly evaluate the performance on these two sets separately. This omission makes it difficult to assess the model's generalization capability specifically on novel classes. To gain a deeper and better understanding of the model's performance in generalizing to novel classes, it is recommended to conduct separate evaluations and present the results accordingly.

Technical Quality: 4 excellent Clarity: 3 good

Questions for Authors: As mentioned in the paper, the selection of visual templates during inference plays a critical role in the success of the proposed approach. I would like to inquire about the impact of random sampling templates on the performance. Specifically, I am interested in whether the authors performed multiple inference runs, averaged the performance results, and presented the inference variance with error bars.
Could you please clarify whether this analysis was conducted and whether the paper provides information on the effect of random sampling templates on performance? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I do not see potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback. Please find our responses below:

> **Training data**

We thank the reviewer for the valuable reminder. Our model is indeed built upon pretrained GLIP/GroundingDINO models, which indirectly utilizes their pretraining datasets. We will provide the following clarification on the data discrepancy to avoid misleading readers.

1. We will clarify the data discrepancy in Table 1 and Table 2, for example: Table 1: xxxx. $^\dagger$ Modulating upon pretrained models indirectly utilizes their pre-training data, and potentially consumes more time if we take the training time of the pretrained language-queried detectors into consideration.

| Model | ... | Pre-train Data | ... | Training Time |
|-|-|-|-|-|
| ... | ... | ... | ... | ... |
| MQ-GLIP-T | ... | O365 (+GLIP$^\dagger$) | ... | 10$^\dagger$ |
| MQ-GroundingDINO-T | ... | O365 (+GroundingDINO$^\dagger$) | ... | 10$^\dagger$ |

2. Add more description on the data discrepancy after the last sentence in line 232: "It is worth noting that modulating upon pretrained GLIP/GroundingDINO indirectly utilizes their pretraining data. The efficiency here describes that our approach allows current mainstream language-queried detectors to be equipped with multi-modal queries only through a lightweight modulating process, avoiding training from scratch."

> **Explicit evaluation on an open-vocabulary detection setting**

We thank the reviewer for the constructive recommendation. We add the following experiment to further investigate the model's generalization performance. We first construct a novel category set from the 1,203 LVIS categories. Specifically, we remove the LVIS categories that exist in the 365 classes of Objects365 and finally obtain 986 novel categories that did not appear during our modulated pretraining. The remaining 217 categories are represented as base categories. Then, we conduct zero-shot inference on the separated categories to verify the generalization of multi-modal query learning.
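A rough sketch of how such a base/novel split might be derived from the category names (purely illustrative; the normalization rule below — lowercasing, dropping "()" parts and underscores, naive singularization — is our simplified assumption, and it ignores synonym lists):

```python
import re

def normalize(name: str) -> str:
    """Lowercase, drop '(...)' annotations and underscores, crude singularization."""
    name = name.lower().replace("_", " ")
    name = re.sub(r"\(.*?\)", "", name).strip()
    if name.endswith("s") and not name.endswith("ss"):
        name = name[:-1]          # e.g. "zebras" -> "zebra" (naive rule)
    return name

def split_base_novel(lvis_names, o365_names):
    """A category counts as 'base' if its normalized name appears in the Objects365 set."""
    o365 = {normalize(n) for n in o365_names}
    base = [n for n in lvis_names if normalize(n) in o365]
    novel = [n for n in lvis_names if normalize(n) not in o365]
    return base, novel

# Toy example with made-up category names, not the real LVIS/Objects365 lists:
base, novel = split_base_novel(["Zebras", "car_(automobile)", "dog"], ["car", "dog"])
```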
The results are shown in the following table. More details: We consider a category in LVIS as a base class if its name or synonyms appear in the category name set of Objects365. The names are all in lowercase and singular form, with all "()" and "_" removed.

| Model | $AP_{novel}$ | $AP_{base}$ | $AP_{all}$ |
| ------------------ | ------------ | ----------- | ---------- |
| GroundingDINO-T | 22.1 | 36.7 | 25.6 |
| GLIP-T | 20.8 | 42.0 | 26.0 |
| GLIP-L | 35.4 | 45.5 | 37.9 |
| MQ-GroundingDINO-T | 26.2 | 43.0 | 30.2 |
| MQ-GLIP-T | 26.5 | 42.8 | 30.4 |
| MQ-GLIP-L | 41.7 | 51.3 | 44.0 |

The results indicate that multi-modal queries generalize well to novel classes that do not exist in the modulated pretraining. Specifically, +4.1%, +5.7%, and +6.3% AP on novel classes for MQ-GroundingDINO-T, MQ-GLIP-T, and MQ-GLIP-L over their baselines, respectively. We will include this experiment in the revised version. It is worth noting that the separation of base and novel classes differs from previous works on open-vocabulary detection (OVD). The reason is that the testing categories of the previous separation are partially included in our training dataset Objects365. Therefore, we represent the classes in LVIS that do not exist in our modulated pretraining dataset Objects365 as novel classes. The frequency distribution of the separated LVIS dataset is shown in the following table:

| Classes | #Rare | #Common | #Frequent |
| ------- | ----- | ------- | --------- |
| Novel | 326 | 404 | 256 |
| Base | 11 | 57 | 149 |
| All | 337 | 461 | 405 |

> **Inference variance with error bars**

We did not report the averaged results of multiple runs in the initial submission. Here, we provide the averaged results with error bars from 3 inference runs using seeds of 3, 30, and 300 for both the zero-shot and few-shot settings. Models with different seeds randomly select different vision queries. We will include them in the revised version.
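The error bars could be computed along these lines (a trivial sketch; the per-seed AP values below are made-up placeholders, and treating the reported spread as a sample standard deviation is our assumption):

```python
import statistics

# Hypothetical per-seed LVIS AP results for seeds 3, 30, and 300 (illustrative only).
ap_per_seed = {3: 30.4, 30: 30.6, 300: 30.5}

values = list(ap_per_seed.values())
mean = statistics.mean(values)
spread = statistics.stdev(values)   # sample standard deviation across the 3 runs

print(f"{mean:.1f} +/- {spread:.1f}")
```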
Meanwhile, this work uses random sampling templates to investigate the feasibility of multi-modal queries. We leave the study of more complicated template sampling to future work.

LVIS MiniVal zero-shot:

| Model | $AP$ | $AP_r$ | $AP_c$ | $AP_f$ |
| ------------------ | ------------- | ------------- | ------------- | ------------- |
| MQ-GLIP-T | 30.5 $\pm$0.1 | 21.8 $\pm$0.8 | 27.4 $\pm$0.1 | 34.8 $\pm$0.2 |
| MQ-GLIP-L | 43.7 $\pm$0.3 | 34.8 $\pm$0.3 | 41.6 $\pm$0.4 | 47.2 $\pm$0.3 |
| MQ-GroundingDINO-T | 30.4 $\pm$0.2 | 21.8 $\pm$0.3 | 26.3 $\pm$0.1 | 35.4 $\pm$0.2 |

OdinW zero-shot:

| Model | OdinW-35 $AP_{avg}$ | OdinW-13 $AP_{avg}$ |
| ------------------ | ------------------- | ------------------- |
| MQ-GLIP-T | 20.7 $\pm$0.5 | 45.4 $\pm$0.6 |
| MQ-GLIP-L | 24.0 $\pm$0.4 | 54.1 $\pm$0.3 |
| MQ-GroundingDINO-T | 22.5 $\pm$0.3 | 51.0 $\pm$0.3 |

OdinW few-shot (3-shot):

| Model | OdinW-35 $AP_{avg}$ | OdinW-13 $AP_{avg}$ |
| --------- | ------------------- | ------------------- |
| MQ-GLIP-T | 43.1 $\pm$0.4 | 57.2 $\pm$0.5 |

---

Rebuttal Comment 1.1:

Comment: Thanks, authors, for the detailed response. It largely solves my concern. Thus, I keep my original rating in the current stage.

---

Reply to Comment 1.1.1:

Comment: Thanks for the reviewer's recognition and constructive suggestions.
Summary: In this paper, the authors proposed MQ-Det, which leverages both textual and visual information for object detection in the wild. The proposed plug-and-play GCP module is very compatible with existing mainstream architectures. The authors conducted extensive experiments on multiple benchmark datasets and showed improved performance over several baseline methods for both zero-shot and few-shot detection.

Strengths: 1. Leveraging both textual and visual info for open-vocabulary detection makes lots of sense, as they provide different levels of signal for recognizing objects, as mentioned in the draft. The proposed GCP module and masking-based training strategy is a novel technique to combine this information. 2. The authors conducted extensive experiments and ablation studies to sufficiently validate the proposed approach over multiple baseline methods on several benchmarking datasets. 3. Writing is good and easy to follow. The tables and figures are also easy to understand.

Weaknesses: 1. For the last row of Tab. 3 (b), where the joint input leads to lower performance, the authors' explanation is that "this task may introduce redundant information and rise the learning difficulty." This is not very clear to me, since redundancy could also lead to easier training rather than difficulty. Could you please elaborate more on this? 2. For Fig. 4, it is weird that the baseline 'No Vision Query' (blue box) model actually recognizes the two zebras as either elk or horned cow. These two zebras look very easy to recognize even without a visual cue for large VL models. Is this due to insufficient training of the baseline models or hand-picked examples to illustrate the idea? 3. As illustrated in Fig. 3, with an increasing amount of training data, the gap between MQ-Det and the other baseline methods is getting closer and closer. What would be a rule-of-thumb strategy for using MQ-Det versus other simpler methods?
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses section for details. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. Please find below our responses to the questions raised in the review: > **More explanation of the redundancy with joint input in the last row of Tab. 3 (b)** The learning difficulty derives from the simple concatenated input $cat(\hat v_{i}, t)$, which weights vision and language information equally when learning the scale of the vision queries. This may impede the gate from learning from the vision information, which contributes the most to the vision scales. In the future, we will explore a more principled combination of the two modalities to benefit performance. > **Question on the baseline 'No Vision Query' model in Fig. 4** This example is selected from the failure cases of GLIP, which is equivalent to the 'No Vision Query' model. The reasons for this failure are twofold. First, it is rather challenging to recognize "zebra" among all 1,203 categories in LVIS, since the model is more prone to mispredictions as the number of categories increases. Second, "zebra" is not included in the detection training set (Objects365), making it a novel class. Through the use of multi-modal queries, we can provide the model with more clues about the category, thereby improving detection performance. > **A rule-of-thumb strategy for using MQ-Det versus other simpler methods** We thank the reviewer for the thoughtful question. We investigated whether there exists a rule-of-thumb strategy for deploying MQ-Det. In the table below, we gradually increase the data size (number of shots) and record the performance gap between MQ-GLIP-T (multi-modal queries) and GLIP-T (language queries). We report the averaged results on the 13 benchmarks of OdinW-13 and on two specific datasets. 
| Performance Gap | 0 | 1 | 3 | 5 | 10 | 50 | full | | --------------------------------------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | | $AP_{MQ-GLIP-T}-AP_{GLIP-T}$ (OdinW-13) | 3.7 | 5.8 | 6.3 | 4.7 | 3.5 | 1.1 | 0.6 | | $AP_{MQ-GLIP-T}-AP_{GLIP-T}$ (Aquarium) | 2.2 | 3.2 | 2.6 | 2.4 | 1.5 | 1.4 | 0.9 | | $AP_{MQ-GLIP-T}-AP_{GLIP-T}$ (Pothole) | 6.7 | 5.8 | 6.2 | 7.1 | 7.3 | 2.3 | 0.2 | Here, we provide several points of implementation guidance for MQ-Det: - A rule of thumb for using MQ-Det is the following: MQ-Det is broadly suitable for low-shot scenarios (empirically, below 50 shots), but where the biggest improvement occurs varies across datasets. As exemplified in the table above, in OdinW-13 there is generally an obvious gap between multi-modal queries and language queries below 10 shots. For a relatively simple task like Aquarium, the gap narrows significantly with just 10-shot learning. However, for a more challenging task like Pothole, which is to detect potholes on the ground, the gap remains large even at 50 shots. - It is worth noting that employing MQ-Det is not laborious, for two reasons. First, our modulating process is efficient to train. Second, the GCP module and the masking strategy are easy to reproduce. We will open-source the complete code and provide more detailed guidance on how to incorporate MQ-Det into customized language-queried detectors, hoping that our work can find wider application. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. My main concerns are addressed, and I thus keep my original positive rating.
Rebuttal 1: Rebuttal: Dear reviewers, area chairs, senior area chairs, and program chairs, We sincerely thank you for the valuable comments. It is a pleasure that this work has been fully recognized by Reviewer RdSN (Accept) and Reviewer JLFL (Weak Accept), including "**a novel approach**", "**a simple yet effective architecture and training strategy**", "**The paper tackles an important problem**", and "**makes lots of sense**". The main concerns of the other two reviewers lie in the fairness of the zero-shot comparison and the unclear statement of the data usage. In this regard, we explain in detail that our setting is a bit different from the previous zero-shot setting and is derived from practical deployment, where users can use multi-modal queries (textual descriptions, visual exemplars, or both) to detect a wider range of objects without any finetuning. Given this setting gap, we have tried our best to conduct a fair comparison. We have also added detailed clarification of our data usage and our modulated training process to avoid misunderstanding. We look forward to a better appreciation of this manuscript, which incorporates our considerable efforts. Furthermore, the manuscript has been carefully revised according to the reviewers' suggestions. We have made our code open source, hoping to promote the development of open-world object detection. The following are our detailed responses. We greatly appreciate the constructive suggestions, which significantly help improve the quality of our paper.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Faster Differentially Private Convex Optimization via Second-Order Methods
Accept (poster)
Summary: This paper studies DP convex optimization using second-order methods. The authors present a private version of the cubic-regularized Newton method and prove it faster than first-order methods in the strongly convex case. They also provide an efficient second-order method for solving DP logistic regression problems. Strengths: This is the first paper to study second-order methods for DP convex optimization. The authors present algorithms for both general strongly convex functions and logistic regression, which are very novel. Algorithm 3 for DP logistic regression exploits the special structure of the objective function, which is very interesting. Weaknesses: The algorithm for solving general strongly convex functions seems straightforward. It is a combination of cubic-regularized Newton and DPGD. The requirement of solving a subproblem by DPGD at each iteration is not satisfying and may make the total computational cost even worse than that of a first-order method. Can the authors present the total number of iterations (including the inner loop) and compare it with the first-order DP method? Although the authors present an efficient method for solving DP logistic regression, Algorithm 3 seems quite different from the meta-algorithm for general strongly convex optimization problems. Can the authors show the connection between Algorithm 3 and Algorithms 1 and 2? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See the weaknesses part. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments on our submission. We respond to the weaknesses raised. > **W1**: The algorithm for solving the general strongly convex functions seems straight forward: We respectfully disagree. The most natural way to privatize the non-private Newton’s method is to add noise directly to the gradient and Hessian, as was proposed by [ABL21]. However, to achieve optimal excess error with this method we need an additional assumption that the Hessian of the loss function is a low rank matrix; our algorithm does not suffer from such a limitation. Please refer to Remark 4.5 for further discussion. On the practical side we tried many noise adding schemes before settling on the double noise method (Algorithm 3) and we found that this significantly outperformed the simpler approach of adding noise directly to the Hessian matrix. > **W2** Algorithm for strongly convex functions and its total iteration cost: We discuss this in [our common response](https://openreview.net/forum?id=h2lkx9SQCD&noteId=ZhStwLJeHt). The total iteration cost depends on the subproblem solver. In particular, as discussed in our common response, the total iteration complexity is $\log\log(n) * \log(n)$. Therefore, the iteration cost is competitive with first-order methods. The win is in the oracle complexity. > **W3**: Connection between Algorithm 1 and Algorithm 3 in the paper: Logistic loss is *not strongly convex* in the unconstrained setting. Also, the main limitation of the proposed cubic Newton's method is that each iteration requires solving a nontrivial subproblem. These are the main reasons why we develop a new algorithm for private logistic regression with significantly improved *wall clock time* (in seconds) compared to other baselines, as shown in Table 1. Note that our proposed algorithm is not limited to the logistic loss; we provide a generalization of Algorithm 3 in Appendix C.6. Please see Remark 5.5. 
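For readers unfamiliar with the cubic-regularized subproblem discussed in the review and rebuttal above, the following non-private sketch shows what one outer step has to solve: minimizing the model $m(s) = g^\top s + \tfrac12 s^\top H s + \tfrac{M}{6}\|s\|^3$ by gradient descent. The function name, step size, and iteration count are our illustrative choices, not the paper's.

```python
import numpy as np

def cubic_subproblem_gd(g, H, M, lr=0.1, iters=1000):
    """Approximately minimize the cubic model
        m(s) = g.s + 0.5 * s^T H s + (M/6) * ||s||^3
    by plain gradient descent.  The gradient of the cubic term
    (M/6)||s||^3 is (M/2)||s|| * s."""
    s = np.zeros_like(g)
    for _ in range(iters):
        grad = g + H @ s + 0.5 * M * np.linalg.norm(s) * s
        s = s - lr * grad
    return s
```

In the private meta-algorithm, this inner solve is instead carried out with a DP solver such as DP-GD, which is where the extra inner-loop iteration count discussed in the rebuttal comes from.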
--- Rebuttal Comment 1.1: Comment: Thanks for your detailed reply and I would like to keep my score unchanged. --- Reply to Comment 1.1.1: Comment: Thanks for your reply! We will revise our paper and incorporate your constructive comments.
Summary: This work investigates the use of second-order methods in differentially private convex optimization for machine learning. The authors propose a private variant of the regularized cubic Newton method and demonstrate its quadratic convergence and optimal excess loss for strongly convex loss functions. They also design a practical second-order differentially private algorithm for unconstrained logistic regression, which outperforms other baselines in terms of excess loss and is significantly faster than DP-GD/DP-SGD, achieving a speedup of 10-40 times. Strengths: This paper is good, as it has historically been hard to use second-order information while preserving the DP property. The contribution is novel and the proof seems to be correct. Weaknesses: 1. Maybe extend the method to use Laplace noise? 2. The presentation of the paper is a little awkward. For example, the formal DP theorems and the proof for Alg. 3 appear only in the Appendix. It is really hard to find the corresponding reference. This paper might need to be rewritten. 3. Experiments: 1. Compare the true running time. It takes a lot of time to compute the CLIP and ADD operations, as they require computing an SVD. So instead of reporting how many steps are needed, please present the true computational time to show the real running-time improvement over DP-SGD and DP-GD. 2. Compare memory usage. SVD needs tons of memory. 3. DP-SGD brings randomness and sometimes accelerates the training procedure. Please add a comparison with DP-SGD. 4. Compare different learning rates (I know there is no learning rate in Alg. 3, but one exists in DP-(S)GD). It would be interesting to see the impact of different learning rates (10, 0.1, or 0.01). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: See Weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review. We respond to the weaknesses and questions below. > **W1**: Laplace noise and pure DP: We can potentially use Laplace noise to design algorithms with the stronger pure-DP guarantee. However, it has been observed both empirically and theoretically that the excess loss of convex optimization under pure DP is much higher than under relaxed notions such as zero-concentrated DP (zCDP). For instance, for DP convex optimization, by Theorem 3.2 and Theorem 2.4 in [BST14] we can see that pure-DP algorithms exhibit an excess-loss dependence of $O(d)$ on the dimension $d$, while zCDP algorithms have a $\sqrt{d}$ dependence. [BST14] Bassily, Raef, Adam Smith, and Abhradeep Thakurta. "Private empirical risk minimization: Efficient algorithms and tight error bounds." 2014 IEEE 55th Annual Symposium on Foundations of Computer Science. IEEE, 2014. > **W2**: Presentation of the paper: Thanks for the suggestion! We will state the privacy proof more formally as a theorem in the main body of the paper. We would greatly appreciate any other suggestions for improving the presentation. > **W3-Part1**: Comparison of the true runtime: In Section 6, we have already compared the runtime (in seconds) of our algorithm with DP-(S)GD. Please refer to Table 1 and Fig. 2 in the main body and Table 4 in the appendix. For instance, for the Covertype dataset our algorithm is 30 times faster than DP-GD at achieving $\varepsilon=1$. This is remarkable, since the per-iteration cost of second-order methods is higher than that of DP-(S)GD. > **W3-Part2**: Comparison of memory usage: It is correct that SVD may need a large amount of memory. The memory usage varies with the chosen SVD implementation; we used numpy's linalg.eigh in our experiments. To address this concern, we will provide memory-usage specifics for SVD on each dataset in the revised paper. 
For instance, for the Adult dataset with dimension 100, the memory usage is 52 MiB, and for Fashion-MNIST with dimension 784, the memory usage is 67 MiB. > **W3-Part3**: Comparison with DP-SGD: In Section 6.1, we have already compared the minibatch variant of our algorithm with DP-SGD and shown that it converges faster. Fig. 4 summarizes the results. The subsampled variant of our algorithm achieves the same excess error as DP-SGD with an $8$-$10 \times$ faster runtime over all the datasets, even though the batch sizes of our algorithm are larger than those of DP-SGD (please see Section 6.1). > **W3-Part4**: Different learning rates for DP-GD: Thanks for the suggestion. The **attached PDF (Item 1)** in [our common rebuttal](https://openreview.net/forum?id=h2lkx9SQCD&noteId=ZhStwLJeHt) shows the results for different learning rates on the Adult dataset. We will include the results for all the datasets in the next revision of the paper. The main observation is that a larger learning rate helps DP-GD in the initial phase of optimization. However, after getting close to the optimal point, a large learning rate leads to a higher excess error. Also, in Line 331 of the submitted paper, we provided numerical results comparing our algorithm with DP-GD where, at each iteration, we perform a line search to select the learning rate. We call this variant DP-GD-Oracle; obviously, this variant does not satisfy DP. Nevertheless, Figure 3 shows our algorithms converge faster than DP-GD-Oracle, which is not even a DP algorithm.
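For reference, the DP-GD baseline compared against in this rebuttal works roughly as follows (a minimal full-batch sketch; the function and parameter names are ours, not the paper's): per-example gradients are clipped in L2 norm, averaged, and perturbed with Gaussian noise calibrated to the clipping norm.

```python
import numpy as np

def dp_gd_step(w, per_example_grads, lr, clip_norm, sigma, rng):
    """One full-batch DP-GD step: clip each per-example gradient to
    L2 norm clip_norm, average, add Gaussian noise whose scale is
    proportional to the clipped sensitivity clip_norm / n, and take
    a gradient step."""
    n, d = per_example_grads.shape
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noisy_mean = clipped.mean(axis=0) + rng.normal(0.0, sigma * clip_norm / n, size=d)
    return w - lr * noisy_mean
```

The learning rate `lr` is exactly the hyperparameter whose sensitivity the Item 1 experiments above explore; the paper's second-order algorithm avoids it entirely.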
Summary: The paper introduces the use of second-order methods in differentially private optimization. In particular, two algorithms are proposed. One is based on cubic-regularized Newton and works for strongly convex functions. The other is designed specifically for the logistic regression problem. Numerical experiments on logistic regression are provided. Strengths: The topic of using second-order methods in differentially private (DP) optimization is under-explored. In this sense, the paper can be a good starting point and introduce the idea to the DP community. The paper is well written and easy to follow, with theoretical results that did not exist before and are supported by numerical experiments. Weaknesses: 1. The theoretical justification of the benefit of using second-order methods is not strong enough. For the considered smooth strongly convex setting, first-order methods can already achieve the same rate in linear time $O(n)$ [arXiv:1703.09947, arXiv:1802.05251, arXiv:2005.04763, arXiv:2102.05855, arXiv:2206.00363]. In comparison, since the subproblem of cubic-regularized Newton cannot be solved efficiently due to nonsmoothness, the total gradient complexity of the proposed method is $NT\sim n^2$. The main improvement is mostly in the oracle complexity, but with more assumptions on the second-order information. Even for the oracle complexity, the proposed algorithm achieves $\sqrt{L_2}/\mu^{3/4}+\log\log(n)$, which is hard to compare with the complexity achieved by first-order methods (e.g., $(\sqrt{L_1}/\sqrt{\mu})\log(n)$ for private versions of Nesterov's accelerated gradient descent [arXiv:2102.04704, arXiv:2206.00363]). I am also wondering if an improved lower bound is possible with second-order information. 2. The convergence analysis of the logistic regression case seems incomplete. I could not find (even in the appendix) any specific rate, complexity, or comparison with first-order methods. 
The proposed method requires the local strong convexity assumption at the optimum. Does it hold for the logistic regression loss? 3. The proposed method requires computing the inverse of the second-order information, which could be hard for large-scale experiments. Also, all the numerical experiments in this paper use small models and datasets, where computing this inverse does not cost too much. Given that DP-GD already completes the task within 1 minute, such a 10-40$\times$ improvement is not so surprising. 4. What does $T^*$ mean in Table 1, i.e., what is the stopping criterion that defines $T^*$? Why do Figures 3 & 4 only show the first 10 steps or 0.6 s? I guess the noise added for the proposed method is computed according to the theory. Are any numerical privacy-accounting methods used in the experiments to justify the effectiveness of the added noise? In case the privacy analysis is wrong and less noise is added than required, this might not be a fair comparison. 5. Minor: It might be good to also summarize related work on non-DP second-order optimization algorithms. What does "privacy budget for direction" mean in $\lambda_{0,t}$ in line 249? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review. We respond to the weaknesses raised below. >**W1**: Justification of using second-order methods and comparison with first-order methods Our goal in this paper is to understand whether second-order methods are compatible with DP. For the theory, we use oracle complexity, which is the standard measure of convergence rate in the non-private literature. The subproblem we solve is a cubic function, which is indeed smooth *over the constraint set.* This observation lets us use more efficient algorithms as the subproblem solver. Please refer to [our common response](https://openreview.net/forum?id=h2lkx9SQCD&noteId=ZhStwLJeHt) where we discuss it further. In particular, we can show that the number of gradient and Hessian-vector evaluations is $\log\log(n) \cdot n \cdot \log(n)$. As mentioned, the oracle complexity of first-order methods is $\log n$ while ours is $\log\log(n)$, i.e., ours is an exponential improvement in oracle complexity. This result is tight, including the other terms, as shown by [non-private oracle complexity lower bounds](https://arxiv.org/abs/1705.07260). > **W2**: Local strong convexity at the optimum for the logistic loss and its convergence analysis Unless the data is linearly separable, the Hessian of the logistic loss at the optimum has a minimum eigenvalue that is **bounded away from zero**. Let $f(x) = \log(1+\exp(-x))$; then we have $f’’(x)\geq \frac{1}{4} \exp(-|x|)$. Therefore, by Eq. (5) in the paper, the eigenvalues at the optimum are strictly larger than zero for all eigenvectors that are not orthogonal to the subspace spanned by the data. This assumption has been used before as well; see arXiv:1303.6149. In the submitted paper, we only present the recursion of the error. It can easily be used to obtain the following local convergence result: Assume that the initial point is sufficiently close to the optimum. 
Assume that $\lambda_0 > \lambda^\star_{min}$; then after $T=\tilde{O}\left( \frac{\lambda_0}{\lambda^\star_{min}} \log(n) \right)$ iterations, Algorithm 3 achieves an excess loss of $\tilde{O}\left( \kappa^\star \frac{\text{rank}}{\rho n^2 \lambda^\star_{min}} \right)$, where $\kappa^\star$ is the condition number at the optimum. We will include this result in the paper. Note that the dependence on the condition number and the assumption of local strong convexity at the optimum appear in much prior work on second-order optimization (see arXiv:1508.02810, arXiv:1607.00559). > **W3**: On the significance of the speed-up of our algorithm and the choice of datasets For larger datasets (i.e., larger $n$), the gap between our algorithm and DP-(S)GD increases. For instance, for the Adult dataset the completion time of DP-GD is 5 minutes, while ours is 8 seconds. We conducted an additional larger-scale experiment on the synthetic dataset with five times as many samples. In the attached PDF (Item 2) in our common response, we plot the excess loss versus runtime; as can be seen, there is again a significant gap between the completion time of our algorithm and that of DP-GD. It is a well-known limitation of second-order methods that they require matrix computations, rather than just vector computations like first-order methods, which is a problem when the dimension $d$ is large. There is an extensive line of work on addressing this challenge by working with low-rank approximations to the second-order information. It would be interesting to combine our methods with these approaches to scale to high-dimensional settings. However, this is beyond the scope of our work. Our goal is simply to demonstrate the feasibility of using second-order information to accelerate DP convex optimization. We use standard classification datasets for which the linear classifier learned via logistic regression is successful. The dimensions of the datasets are in the range of 55 to 784. 
We would appreciate the reviewer’s suggestions for further datasets to consider. > **W4**: What does $T^\star$ mean? And what is the stopping criterion? $T^*$ is the runtime of the algorithm in seconds. The star refers to the fact that we perform hyperparameter tuning to optimize the accuracy and report the runtime for this setting of hyperparameters. Note that we tune $T^*$ for **all** iterative algorithms. Note also that the number of steps $T$ is specified a priori as a hyperparameter; it is not a dynamic stopping criterion. This is necessary, as we must know $T$ in order to divide the privacy budget between iterations. > **W4**: Why do Figures 3 & 4 only show the first 10 steps or 0.6 s? In Figure 3, our goal is not to plot the excess error versus runtime; our goal was to compare the impact of the optimal step size for DP-GD and of second-order information on the excess error. The choice of 15 steps is arbitrary. Figure 4 provides a comparison between the minibatch variants of our algorithm and DP-SGD. The x-axis is capped at our algorithms' maximum $T^\star$ value. Notice that for the synthetic data, the $T^\star$ of our algorithm is around 0.05. Please see Item (3) in the attached PDF in [our common response](https://openreview.net/forum?id=h2lkx9SQCD&noteId=ZhStwLJeHt) for a plot without a truncated x-axis. > **W4**: Privacy accounting for the experiments We use the same privacy accountant for all studied algorithms. The full-batch variants of our algorithms and DP-GD satisfy zCDP. zCDP provides a simple composition theorem: the privacy parameters add up under composition, and this is tight. For the translation from DP to zCDP, and for the minibatch variants, we use the Opacus package. > **Minor Comments**: Thanks! We will include a comprehensive literature review of non-private second-order methods. For the adaptive scheme we divide the privacy budget into three parts. We refer to the privacy budget for the estimation of $\Phi(H)^{-1} (\tilde{g})$ as the privacy budget for the direction. 
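The zCDP accounting described in the rebuttal above is simple to sketch: a Gaussian mechanism with L2 sensitivity $\Delta$ and noise scale $\sigma$ satisfies $\rho$-zCDP with $\rho = \Delta^2/(2\sigma^2)$, and $\rho$ adds up under composition, so an overall budget can be split evenly across a pre-specified number of iterations $T$. This is a minimal illustration with our own function names, not the paper's exact accountant.

```python
import numpy as np

def gaussian_sigma_for_zcdp(sensitivity, rho):
    """Noise scale for one Gaussian-mechanism query of the given L2
    sensitivity to satisfy rho-zCDP: sigma = sensitivity / sqrt(2*rho)."""
    return sensitivity / np.sqrt(2.0 * rho)

def per_step_budget(rho_total, T):
    """zCDP composes additively, so T adaptive queries can each spend
    rho_total / T; this is why T must be fixed before running."""
    return rho_total / T
```

This additive composition is also why the number of steps must be chosen a priori, as noted in the answer to W4.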
--- Rebuttal Comment 1.1: Comment: Thanks for your reply. I still have some concerns that the authors have not clearly addressed. > For the oracle complexity, the proposed algorithm achieves $(\sqrt{L_2}/\mu^{3/4})(\ell_0 - \ell^*)^{1/4} + \log\log(n)$, which is hard to compare with the $\sqrt{L_1/\mu}\log(n)$ complexity achieved by first-order methods. 1. I agree that the proposed second-order method achieves an exponential speed-up in terms of the dependence on $n$. However, the constants $L_2$ and $\ell_0-\ell^*$ can be pretty large in practice. Also, it is hard to compare because of different assumptions and constants. > I am also wondering if an improved lower-bound is possible with second-order information. 2. Could the authors also comment on the lower bound? > Reply to W1: The subproblem we solve is a cubic function, which is indeed smooth over the constraint set. 3. Then the smoothness parameter depends on the diameter, which could be very large and will enter the number of gradient and Hessian-vector evaluations. In comparison, first-order methods do not necessarily have a dependence on the diameter in their gradient complexity (considering output perturbation for smooth strongly convex functions). 4. My last concern is regarding the extension to other settings. Since the understanding of the convergence guarantees of second-order methods is restricted to smooth convex functions even for non-DP optimization, the extension to more practical nonconvex problems could be hard. It only looks promising to first run a first-order method to a small neighborhood of a local minimum that potentially satisfies the smoothness and strong convexity assumptions, and then use a second-order method. However, the DP noise might prevent the outputs of first-order methods from being in a small neighborhood of a local minimum, unless other structural assumptions are made. Except for these, my other concerns were successfully addressed by the authors' rebuttal. 
--- Reply to Comment 1.1.1: Title: Reply to Reviewer FNjC Comment: Thanks again for your detailed comments. We respond to the remaining points. 1. Indeed, it is hard to compare Theorem 4.2 directly with first-order methods. As mentioned in Remark 4.4, the analysis suggests a phase transition. When we are far from the optimum, second order information does not help much, but once we are close we achieve an exponential speed up. It is possible that in practice the first phase is more important, but in general the optimization theory literature focuses on the asymptotic convergence rate which is determined by the latter phase. We have tried to address questions about practical performance with our experimental results. 2. We haven't thought about lower bounds for second order methods with DP. There are lower bounds for second-order methods without DP and there are information theoretic lower bounds for DP. Other than taking the max of these lower bounds, we don't know what to do here. 3. Note that assuming the loss function is Lipschitz and strongly convex implies a bound on the diameter. So a diameter bound is often implicit in the analysis of first order methods even if not explicitly stated. 4. Applying second order methods to practical non-convex optimization is a challenge even without privacy, although there is [work on this](https://arxiv.org/abs/2002.09018). The main message of our work is that second order methods can work with DP to accelerate optimization. This goes against the commonly held belief that second order methods are too brittle to work with noise. Of course, second order methods have other limitations unrelated to privacy and we do not escape these. We hope that the reviewer is convinced of our main message and is willing to support our submission.
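The second-derivative lower bound for the logistic loss used earlier in this thread ($f(x)=\log(1+e^{-x})$, $f''(x) \ge \tfrac14 e^{-|x|}$) is easy to check numerically. The script below is just such a sanity check on our part, not code from the paper.

```python
import numpy as np

def logistic_f2(x):
    """Second derivative of f(x) = log(1 + exp(-x)), which equals
    sigmoid(x) * (1 - sigmoid(x))."""
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

xs = np.linspace(-20.0, 20.0, 2001)
# The bound holds with equality at x = 0, where both sides are 1/4.
assert np.all(logistic_f2(xs) >= 0.25 * np.exp(-np.abs(xs)) - 1e-12)
```

The analytic reason is that for $x \ge 0$, $f''(x) = e^{-x}/(1+e^{-x})^2 \ge e^{-x}/4$ since $(1+e^{-x})^2 \le 4$, and the expression is symmetric in $x$.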
Summary: This paper considers the problem of differentially private convex optimization and discusses how second-order information can be utilized to accelerate the optimization process while also achieving the optimal excess error under DP. The main contributions of this paper are the two proposed algorithms and the corresponding analyses: 1) a second-order DP algorithm based on the cubic-regularized Newton's method, for a specific class of convex functions; 2) a second-order DP algorithm for the logistic regression problem. Strengths: The paper considers an important problem, which studies how the performance of a differentially private convex optimization process can be enhanced with the use of second-order information. The paper is well organized, and rigorous proofs have been provided for the claims made. The first algorithm presented in this paper (a DP variant of the cubic-regularized Newton's method) achieves the optimal excess loss and quadratic convergence. The proposed second-order DP algorithm for logistic regression achieves equal or better excess losses and lower computational times compared to existing DP algorithms, based on the experimental results. Weaknesses: The authors could clearly state their contributions and remark on their significance, as in some cases (for example, Algorithms 1 and 2) it seems like the authors have simply incorporated DP into existing non-private results. The main paper lacks justification of the privacy guarantees. The authors could comment on how the stated DP guarantees are achieved by the selected noise parameters. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) For a given set of inputs in Algorithm 3, how can one determine whether the SOI modification is "add" or "clip"? The reader could benefit from a clear description of this part (lines 3-6) of Algorithm 3. 
2) Do the authors assume any characteristics (i.e., any specific distribution or any statistical characteristics) on the dataset $S_n$ in either of the algorithms? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have clearly stated the limited types of functions which the proposed algorithms can be applied on, and have also commented on possible extensions to other classes of functions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review. We respond to the weaknesses and questions below. > **W1**: The authors could clearly state their contributions and remark on the significance: The existing DP optimization literature is almost exclusively restricted to first-order (and zero-order) methods. Our goal in this paper is to answer the following question: “Can second-order information yield faster convergence in the differentially private setting?” Combining second-order methods with differential privacy requires a great deal of care, as they are fairly sensitive to noise. We believe that our results provide a convincing affirmative answer to this question both in terms of worst-case convergence guarantees and practical algorithm design. > **W2**: it seems like the authors have simply incorporated DP into existing non-private results: We respectfully disagree. The most natural way to privatize the non-private Newton’s method is to add noise *directly* to the gradient and Hessian, as was proposed by [ABL21]. However, to achieve optimal excess error with this method we need an additional assumption that the Hessian of the loss function is a low-rank matrix; our algorithm does not suffer from such a limitation. Please refer to Remark 4.5 for further discussion. On the practical side, we tried many noise-adding schemes before settling on the double noise method (Algorithm 3), and we found that this significantly outperformed the simpler approach of adding noise directly to the Hessian matrix. > **W3**: Proof of Privacy: The detailed privacy guarantee of Algorithm 3 is stated and proved in Appendix C.3 and C.4. Based on your comment, we will move the formal statement of the privacy guarantee to the main body. > **Q1**: On “add” or “clip” for the second-order modification: We apologize if the presentation of Algorithm 3 caused confusion. The type of second-order information modification is a *hyperparameter* of our proposed algorithm.
Also, notice that the modifications based on “add” and “clip” have different privacy proofs, as can be seen from the scale of the noise. Empirically, we observe that for the full-batch setting “clip” is better than “add”, while for the minibatch version “add” performs better. Based on your suggestion, we have included a more precise discussion of it in the revised version. > **Q2**: Dataset assumptions. No. We assume a **worst-case** dataset for both our privacy proof and our convergence guarantees. It is an interesting direction to analyze the generalization performance of our algorithms with distributional assumptions. We will add your suggestion to the list of future work. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed explanations. Regarding W2: I understand that the double noise method introduced in Algorithm 3, which privatizes the gradient and the direction (first- and second-order information), is different from the typical DP-Newton's method that the authors have described. However, it is not very clear how Algorithm 2 stands out from the typical case. While Remark 4.5 explains the fact that Algorithm 2 does not impose any restrictions on the Hessian matrix unlike in the typical case, it would be clearer to the reader if the authors could explain the reason/intuition behind this statement. --- Reply to Comment 1.1.1: Title: Reply to Reviewer QyKz regarding the intuition behind Algorithm 1 Comment: We thank the reviewer for their reply. The main idea of our result for strongly convex functions revolves around using the cubic upper bound and showing that obtaining an **approximate** solution of the cubic subproblems suffices to achieve an optimal excess loss. Notice that the accuracy of the cubic subproblem solver is influenced by the sensitivity of the Hessian, which characterizes the noise scale. In Algorithm 2, we only need the gradient of the cubic function (see Line 5 in Alg. 2).
The gradient of the cubic function depends on the Hessian through $H(\theta_i - \theta_0)$, and the scale of the noise depends on the $\ell_2$ norm $\|H(\theta_i - \theta_0)\| \leq M \cdot D$, where $M$ is the smoothness parameter and $D$ is the diameter of the space. This approach is different from the straightforward approach where two independent noises are added directly to the Hessian and the gradient. The noise for privatizing the Hessian matrix scales with its Frobenius norm. Therefore, unless the rank is bounded, the scale of the noise for the Hessian scales with the dimension, which results in a suboptimal utility bound (see [ABL21, Appendix E]). In summary, as opposed to the straightforward approach where the gradient and Hessian are privatized independently, we only add noise once, since the performance of our algorithm depends on the accuracy of the cubic subproblem solver.
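The dimension-scaling contrast above can be illustrated with a tiny numerical sketch (illustrative only: `M`, `D`, and the worst-case diagonal Hessian below are stand-ins chosen for the example, not quantities from the paper):

```python
import math

def noise_scales(d, M=1.0, D=1.0):
    """Compare two noise calibrations for a worst-case diagonal Hessian
    with all eigenvalues equal to the smoothness bound M (illustrative).

    Returns (frobenius, hvp_norm): the Frobenius norm of H, which would set
    the noise scale if H were privatized directly, and the norm of the
    Hessian-vector product H v for a direction v of length D, which is
    bounded by M * D independently of d.
    """
    h_diag = [M] * d
    frobenius = math.sqrt(sum(h * h for h in h_diag))          # = M * sqrt(d)
    v = [D / math.sqrt(d)] * d                                 # ||v||_2 = D
    hvp_norm = math.sqrt(sum((h * x) ** 2 for h, x in zip(h_diag, v)))
    return frobenius, hvp_norm

for d in (10, 100, 1000):
    frob, hvp = noise_scales(d)
    assert hvp <= 1.0 + 1e-9                 # M * D, dimension-free
    assert abs(frob - math.sqrt(d)) < 1e-9   # grows like sqrt(d)
```

The Frobenius-based noise scale grows like $M\sqrt{d}$, while the Hessian-vector-product bound stays at $M D$ for every $d$, which is the dimension-free behavior this rebuttal relies on.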
Rebuttal 1: Rebuttal: We are extremely grateful to the reviewers & AC for their time & comments. Their comments have been very constructive. We respond to each review individually, but we respond to some common points here: **On the slow convergence of DP-SGD:** Several reviewers questioned the claim that DP-(S)GD converges slowly, which we made in the last paragraph on the first page of the submission and supported with Figure 1. We wish to elaborate on this point, since it is integral to the motivation for our work. Informally, this is because the addition of noise can push us in directions of increasing loss, so we need more conservative step sizes to avoid moving too far in the wrong direction. This fact is reflected both theoretically and empirically in the literature: - Theoretically, optimal instantiations of DP-SGD use a smaller step size than in the non-private case, e.g. [Bassily et al. (2019)](https://arxiv.org/pdf/1908.09970.pdf) use a step size $\eta$ that decays as $\max\{1/\sqrt{n}, \sqrt{d}/(\varepsilon n)\}$, much smaller than the non-private setting of $\eta = 1 / \beta$ when the loss is $\beta$-smooth. A smaller step size requires more steps (i.e., more iterations) to converge. Also, refer to the attached PDF (Item 4) for an example of noiseless and noisy gradient steps on a 1-smooth quadratic loss. This example intuitively explains why we need small $\eta$ for DP-(S)GD. - Empirically, e.g., Figure 1 of [Kurakin et al. (2022)](https://arxiv.org/abs/2201.12328) shows that the accuracy of DP training significantly improves if we run DP-SGD for more iterations while keeping the overall privacy budget fixed. **On computing second-order information and iteration/runtime complexity:** Several reviewers had concerns relating to the complexity of Algorithm 1. We focused on oracle complexity, which is common in the literature. We can also show that in terms of computational complexity, Algorithm 1 is competitive with the best first-order methods.
We will add a discussion of this to the paper. First, we do not need to compute a full Hessian to run DPSolver, but only to compute Hessian-vector products (HVPs). HVPs in practice can often be computed in the same order of time as it takes to do a gradient computation (see e.g. https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html). So for any first-order method we use as DPSolver, each iteration of Algorithm 1 can be made to take time comparable to the runtime of DPSolver on an arbitrary loss. While we use DP-GD as DPSolver for simplicity of presentation, any first-order method with the same privacy and utility guarantee as DP-GD gives the same privacy and utility guarantee for Algorithm 1, so we can choose, e.g., a more efficient method for strongly convex losses as the DPSolver in Algorithm 1. As a concrete example, we can use Alg.1 in [Wang et al. (2017)](https://proceedings.neurips.cc/paper/2017/file/f337d999d9ad116a7b4f3d409fcc6480-Paper.pdf), with which the subproblem solver has HVP and gradient complexities of order $n\log(n)$ and iteration complexity of order $\log(n)$. Putting it all together, the runtime of Algorithm 1 in practice will be no worse than $O(\log\log n)$ times the runtime of the first-order method we choose as DPSolver. Furthermore, we hypothesize that since we are using DPSolver to optimize a well-behaved cubic function, the runtime of DPSolver can be made even faster, making our Algorithm 1 even faster as well. Pdf: /pdf/e0f967216bae236210672a781ae91951df96ebfd.pdf
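The noiseless-vs-noisy gradient-step example on a 1-smooth quadratic loss mentioned in the rebuttal above can be sketched in a few lines (a hedged sketch: the noise scale, step sizes, and loss are illustrative choices, not the paper's DP calibration):

```python
import random

def dp_gd_quadratic(eta, sigma, steps=500, seed=0):
    """Gradient descent on the 1-smooth quadratic loss f(w) = w^2 / 2 with
    Gaussian gradient noise of scale `sigma` (a stand-in for the noise DP
    adds). Returns the average loss over the second half of the run."""
    rng = random.Random(seed)
    w, losses = 10.0, []
    for _ in range(steps):
        g = w + rng.gauss(0.0, sigma)   # exact gradient of f is w
        w -= eta * g
        losses.append(0.5 * w * w)
    tail = losses[len(losses) // 2:]
    return sum(tail) / len(tail)

# Noiseless case: the non-private step size eta = 1/beta = 1 solves the
# 1-smooth problem in a single step.
assert dp_gd_quadratic(eta=1.0, sigma=0.0, steps=1) == 0.0

# With noise, eta = 1 stalls at a noise-floor loss of about sigma^2 / 2,
# while a conservative step size averages the noise down substantially.
assert dp_gd_quadratic(eta=1.0, sigma=1.0) > 0.1
assert dp_gd_quadratic(eta=0.05, sigma=1.0) < 0.1
```

The smaller step size needs many more iterations to cross the initial distance, which is exactly the step-size/iteration trade-off described above.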
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces an algorithm that utilizes second-order information to enhance the speed of private convex optimization, concurrently ensuring the optimization excess error is minimal. The outcomes of the experiments indicate that employing second-order information can expedite Differential Privacy (DP) optimization. In addition, it achieves an excess loss that either matches or improves upon that of first-order methods like DP Gradient Descent (DP-GD). Strengths: The strengths of this paper are: 1) The application of second-order methods to convex optimization is a challenging research area. While some progress has been made, it's still uncertain whether second-order methods can be as practical as first-order methods. This paper unveils a novel second-order approach for Differential Privacy (DP) optimization, which demonstrates optimal efficiency for strongly convex functions. 2) The authors offer an analysis of both the local convergence guarantees for Hess-clip and Hess-add and the global convergence guarantees for QU-clip and QU-add. 3) The numerical findings indicate that the proposed method outperforms DP Gradient Descent (DP-GD) substantially, demonstrating a speed that is 10 to 40 times quicker for the datasets tested. Weaknesses: The weaknesses of the paper include: 1) The potential impact of this work remains uncertain to me. While it presents strong results in the field of Differential Privacy (DP) machine learning, it raises the question: can these results or concepts be applied more broadly? 2) The mini-batch version offers interesting numerical results; however, the loss it produces is notably higher compared to the full-batch version. Technical Quality: 3 good Clarity: 3 good Questions for Authors: On page 1, "One of the major drawbacks of DP-(S)GD is slow convergence. We argue that the main reason for this is the difficulty of choosing the hyperparameters $(\eta, T)$." Please explain why that is. How about the gradients?
For the minibatch version, how does one make sure the second-order information is still relevant/meaningful from one batch to another? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, I think so. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review. We respond to the weaknesses and questions below. > **W1**: potential impact. DP optimization is an important area with numerous practical applications. Our results are a substantial deviation from the existing DP optimization literature, which is almost exclusively restricted to first-order (and zero-order) methods. We believe our work can open up many directions for future investigation within the area of DP optimization. Beyond DP, we speculate that our results may be helpful for designing second-order convex optimization algorithms under **data corruptions**, i.e., robust second-order optimization. Recently, there has been a flurry of interest in understanding the connection between robust algorithms and private algorithms (see [Asi+23]). [Asi+23] Asi, Hilal, Jonathan Ullman, and Lydia Zakynthinou. ["From robustness to privacy and back."](https://arxiv.org/abs/2302.01855) (2023). > **W2**: weaker minibatch results: This is an important observation and one that has been made repeatedly in the differential privacy literature. In general, it has been observed that for DP optimization the best results are attained by larger batch sizes. E.g., see Figure 1 of [P+23], which shows that for training a neural network with DP, the batch size needs to be large in order to reduce the amount of noise. It is an interesting phenomenon that we have also observed in our experiments. It is an open question to show that such a limitation is inherent. [P+23] Ponomareva, Natalia, et al. ["How to dp-fy ml: A practical guide to machine learning with differential privacy."](https://arxiv.org/abs/2303.00654) Journal of Artificial Intelligence Research 77 (2023). > **Q1a**: Why is it difficult to set hyperparameters for DP-(S)GD? Please see [our common rebuttal](https://openreview.net/forum?id=h2lkx9SQCD&noteId=ZhStwLJeHt) elaborating on this point. We will try to clarify this important point in the revision.
> **Q1b**: How to make sure second-order information is still relevant/meaningful from one batch to another batch? Our subsampling procedure for second-order information (SOI) based on Poisson sampling implies that the expected value of the subsampled SOI is equal to the full-batch SOI at each iteration. Also, using the classical matrix concentration results, we can show that the subsampled SOI is close to the full-batch SOI with high probability as well. These two observations show that second-order information is still relevant in the minibatch version. --- Rebuttal Comment 1.1: Comment: The rebuttal addressed my questions. Thank you! --- Reply to Comment 1.1.1: Comment: Thank you very much for your reply. We will revise our paper according to the constructive comments in the reviews.
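The unbiasedness argument in the reply above can be sketched in a few lines (a toy illustration: the scalar stand-ins for per-example second-order terms, the sampling rate `q`, and the data are made up for the example):

```python
import random

def poisson_subsample_mean(values, q, rng):
    """One Poisson-subsampled estimate of a full-batch average: each example
    is kept independently with probability q and reweighted by 1/q, so the
    estimate is unbiased for the full-batch quantity."""
    kept = sum(v for v in values if rng.random() < q)
    return kept / (q * len(values))

rng = random.Random(1)
data = [float(i) for i in range(1, 21)]   # toy per-example SOI terms
full_batch = sum(data) / len(data)        # = 10.5

# Averaging many independent subsampled estimates recovers the full-batch
# value, reflecting E[subsampled SOI] = full-batch SOI.
estimates = [poisson_subsample_mean(data, 0.5, rng) for _ in range(20000)]
mean_estimate = sum(estimates) / len(estimates)
assert abs(mean_estimate - full_batch) < 0.1
```

Concentration (here over repeated draws; in the rebuttal's setting, matrix concentration over the examples in one batch) then says a single subsampled estimate is already close to the full-batch quantity with high probability when the batch is large.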
Summary: This paper focuses on the problem of private convex optimisation using second-order information, where the main motivation is the acceleration of private convex optimisation, as DP-SGD shows slow convergence. This is done in two parts. Firstly, they introduce the privatised version of the cubic method of Nesterov and Polyak when the loss function is strongly convex. In this step, the privatisation is done by solving the optimisation problem given by the global cubic upper bound privately (DP-Solver). They analyse the algorithm and provide convergence guarantees. Since the cubic method is computationally expensive, for the rest of the paper the authors focus on a method inspired by Newton's method with its specific application to logistic regression. Specifically, they provide a global upper bound for the logistic loss, for which the minimisation of the upper bound has a closed form. They then privatise this method by privatising the steps of the optimisation algorithm. They do this in two steps: first by adding the proper amount of noise to the gradient information, and then by adding noise at the update step. Finally, they empirically compare the performance of their proposed method for logistic regression to DP-(S)GD and objective perturbation. Strengths: 1. Convex optimisation appears in many settings in Machine Learning and given the increasing attention to privacy, this is a timely problem to study. 2. The combination of ideas to use a method similar to Newton's method while making sure the upper bound is global, like in the cubic method, is nice. 3. The convergence analysis for the proposed algorithms. 4. The details of the experiments are explained thoroughly and for the most part, they seem fair. Weaknesses: 1. While private convex optimisation is an important problem and the techniques are interesting, I feel the scope of the paper is a bit limited as it only allows us to use the results for logistic regression in a private setting.
For any other method that uses convex optimisation, like kernel methods, the user would need to calculate the sensitivity of the queries in algorithm 3, at which point the method being faster than DP-SGD might not be justification enough for this method. 2. One of the main motivations for this paper is the slow convergence of DP-SGD. While the authors provide references to other bottlenecks for DP-SGD like the batch size and hyper-parameter tuning, there are no references for the slow convergence of DP-SGD, and the main thing supporting this claim in the paper is Figure 1. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The upper bound in the convergence analysis for $l(w) - l(w^*)$ in both Theorem 1 and Theorem 2 is stated w.r.t. $w^*$, the optimal $w$, which is not attainable in the case where $\rho>0$, as given by the lower bound in [BST14]. Does it not make more sense to provide the upper bound w.r.t. the set of parameters $w$ which are achievable under privacy constraints? 2. Throughout the paper you mention that your proposed method is 10-40 times faster, but looking at Table 1 this is the case for $\epsilon = 10$; otherwise the range is something like 3-40. Is there a reason for mentioning 10-40? 3. [KSJ18] show that Newton's method globally converges under some conditions which hold for logistic regression. Given this result, why do we need to build the global upper bound of lemma 5.1? 4. Is the idea of building global upper bounds for Newton's method something done for the first time in your paper? If not, it might be a good idea to add some references. Minor comments: 1. You can add other examples where convex optimisation appears in ML. Some examples are kernel regression and extreme learning machines. 2. I think adding some explanation and details to Theorem 4.2 might help the reader to digest the theorem a bit better, i.e., what is the value of $w$ for which the transition between the convergence rates happens?
My intuition is that the privacy constraint makes a set of parameters centred around $w^*$ admissible, and while $w$ is outside of this range the convergence rate is slower; once we are within those parameters the convergence is much faster. 3. For theorem 5.6, it might help to mention the relationship between the semi-norm and the $\ell^2$ norm for ease of understanding. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It is similar to the comment I make in the weaknesses. The authors have not clearly mentioned that while the method is faster than DP-SGD, it requires an expert to get the algorithm up and running for other convex optimisation problems. Additionally, it is known that the computational complexity of Newton's method scales badly with the dimension $d$, which I think should be mentioned in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review. We respond to the main points below. > **W1:** The scope of the paper is limited to logistic regression. For simplicity and clarity, many of our results focus on logistic regression, but our techniques are more widely applicable. Logistic regression is a widely-used method and there are many DP baselines for this task, so it is an ideal case study for our methods. Indeed, our algorithm based on double noise can be applied more broadly. In Appendix C.6 (Algorithm 6), we have provided an extension of Algorithm 3 whose privacy proof holds for **every** convex, Lipschitz, and smooth loss function. Please refer to Remark 5.5 in the main body for further discussion. Also, Algorithm 3 and its convergence analysis in Theorem 5.6 hold for every convex, Lipschitz, smooth, and doubly-differentiable GLM loss function. Based on your comment, we will highlight the extensions beyond logistic loss in the paper. > **W2:** Supporting claim for the slow convergence of DP-(S)GD. Most of the theoretical literature on DP convex optimization either ignores the problem of slow convergence or makes strong assumptions (e.g., strong convexity, small-diameter constrained set) to ensure rapid convergence. But it is a major issue. See [our common rebuttal](https://openreview.net/forum?id=h2lkx9SQCD&noteId=ZhStwLJeHt) for further discussion. We will emphasize these points in the next revision. > **Q1:** On the definition of excess loss with privacy constraints. Our results follow the standard formulation in terms of excess loss used in the literature. It would be interesting to compare our upper bound with the optimal excess loss attainable under DP, but this would be difficult as the latter quantity is not known exactly. E.g., the lower bounds of [BST14] are asymptotic and do not give tight leading constants.
However, the excess error of our cubic Newton’s method is (near) optimal: the lower bound for the class of strongly convex functions [BST14, Thm. 5.5] implies that the achievable excess error has the optimal dependence on the dimension, privacy budget, and number of samples up to a log factor. > **Q2:** 10-40 times faster / 3-40 times faster. Thanks for noticing this oversight. We neglected to update this number after adding further experiments. We have updated the abstract and mentioned that our method is 3-40 times faster in general. Our method shows the greatest improvement for datasets such as Adult or Covertype where the logistic loss has an ill-conditioned Hessian. For well-conditioned synthetic data, there is less room for improvement. > **Q3:** On the comparison with the results of [KSJ18]. The results of [KSJ18] show global convergence for the “damped” Newton method, i.e., Newton's method with **non-unit** step size. However, it is not clear how the algorithm of [KSJ18] can be used for logistic regression in an unconstrained setting. By the results in Section 2.3 Part (a) in [KSJ18], the step size is proportional to $\exp(-\|\text{optimal solution}\|)$. For unconstrained logistic regression, there is no a priori knowledge of the norm of the optimal solution; therefore, it is not clear how this result can be used. We have included a detailed comparison with [KSJ18] in the revised version of the paper. > **Q4:** Novelty of the quadratic upper bound. To the best of our knowledge, our quadratic upper bound for the logistic loss is a novel optimization technique. We will emphasize this point. > **minor comments** Thanks! We have included more examples in the introduction. The transition point in Theorem 4.2 happens when $\|w_t - w^\star\|$ is less than $\frac{3\mu}{4L_2}$, where $\mu$ and $L_2$ are the strong convexity parameter and Hessian Lipschitzness constant, respectively.
Your intuition is correct and we will provide more discussion on this point after the statement of Theorem 4.2. Relationship between the semi-norm and the $\ell_2$ norm: $V$ is the projection matrix onto the subspace spanned by the training set. Therefore, for every point $x \in \mathbb{R}^d$, its semi-norm satisfies $\|x\|_V \leq \|x\|_2$. The main reason behind proving the convergence results in $\|\cdot\|_V$ is that the components of the output vector, i.e., $w_T$, outside the subspace spanned by the data do not affect the excess error. --- Rebuttal Comment 1.1: Comment: The rebuttal has addressed my questions and comments. Thanks. --- Reply to Comment 1.1.1: Comment: Thanks a lot for your response! We'll revise our paper based on the constructive feedback from the reviews.
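As a generic side illustration of why curvature information pays off in discussions like the one above (a 1-D sketch on a regularized logistic-type loss; the loss, step counts, and smoothness constant are illustrative choices, not the paper's algorithm or the [KSJ18] method):

```python
import math

def grad(w):
    # gradient of f(w) = log(1 + e^{-w}) + 0.005 * w^2
    return -1.0 / (1.0 + math.exp(w)) + 0.01 * w

def hess(w):
    s = 1.0 / (1.0 + math.exp(-w))       # sigmoid
    return s * (1.0 - s) + 0.01

def newton(w=0.0, steps=10):
    """Pure (unit-step) Newton's method on the 1-D loss."""
    for _ in range(steps):
        w -= grad(w) / hess(w)
    return w

def gradient_descent(w=0.0, steps=10, eta=1.0 / 0.26):
    """Gradient descent with the conservative step 1/beta (beta = f''(0))."""
    for _ in range(steps):
        w -= eta * grad(w)
    return w

w_newton, w_gd = newton(), gradient_descent()
assert abs(grad(w_newton)) < 1e-8   # quadratic convergence: essentially exact
assert abs(grad(w_gd)) > 1e-4       # first-order method is still far away
```

Ten Newton steps drive the gradient to numerical zero, while gradient descent with the $1/\beta$ step is still orders of magnitude away; the gap widens further when the curvature is ill-conditioned, as for the ill-conditioned datasets discussed in this thread.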
null
null
null
null
Improved Convergence in High Probability of Clipped Gradient Methods with Heavy Tailed Noise
Accept (spotlight)
Summary: The author(s) proposed an improved convergence analysis of clipped gradient descent with heavy-tailed noise. With the improved analysis, the author(s) can either improve the logarithmic dependency on $T$ or relax the known-time-horizon assumption. Strengths: - The author(s) proposed a new analysis framework for the high-probability error bound of clipped SGD with heavy-tailed noise. The analysis is different from the classic Freedman concentration analysis, and can be used in more flexible settings compared to previous analyses (e.g., unknown time horizon); the new analysis also results in a slightly better dependency on $T$. - The writing is clear and easy to follow. - Related works are well-addressed to my knowledge. - Overall, I think this paper makes a decent contribution to the field if all other reviewers believe the proof is correct. Weaknesses: - It is better to include a table to compare with prior works in terms of the convergence rate and the assumptions being used, so readers can better understand the position of this work in the literature. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Line 190, there is no $ \langle \nabla f(x_t), \theta_t \rangle $ in equation (5); is this a typo? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: I do not find any negative social impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We will add a table in the revision of the paper. In Line 190: The term $\left\langle \nabla f(x_{t}),\theta_{t}\right\rangle$ does not appear in Eq. (5) because we have decomposed it into $\left\langle \nabla f(x_{t}),\theta_{t}^{u}\right\rangle +\left\langle \nabla f(x_{t}),\theta_{t}^{b}\right\rangle$. We next explain how treating the term $\left\langle \nabla f(x_{t}),\theta_{t}^{b}\right\rangle$ more carefully can give us the optimal rate. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thanks for your response, I am keeping my score unchanged.
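The split into an unbiased part $\theta_t^u$ and a bias part $\theta_t^b$ mentioned above can be illustrated numerically (a hedged sketch: the centered Pareto noise model and the clipping levels are illustrative choices, not the paper's setting):

```python
import random

rng = random.Random(0)

def heavy_tailed():
    """Centered Pareto noise: zero mean but infinite variance, so only
    moments of order p < 1.5 are finite."""
    return rng.paretovariate(1.5) - 3.0   # E[Pareto(1.5)] = 1.5 / 0.5 = 3

def clip_bias(lam, n=200_000):
    """Monte-Carlo estimate of E[clip(theta, lam)]. Since E[theta] = 0,
    this is exactly the bias the clipping operation introduces."""
    total = 0.0
    for _ in range(n):
        x = heavy_tailed()
        total += max(-lam, min(lam, x))   # clip to [-lam, lam]
    return total / n

# A larger clipping level trades variance for bias: for tail exponent p
# the bias decays on the order of lam^(1 - p), while the clipped variance
# stays finite even though the raw noise has infinite variance.
b_small, b_large = clip_bias(5.0), clip_bias(50.0)
assert abs(b_large) < abs(b_small)
```

For this particular noise model the two biases are analytically about $-0.71$ and $-0.27$; balancing such a shrinking bias term against the variance that clipping keeps under control is the kind of trade-off a careful treatment of $\left\langle \nabla f(x_{t}),\theta_{t}^{b}\right\rangle$ has to manage.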
Summary: The paper addresses the problem of deriving high-probability complexity bounds for convex and non-convex optimization problems on closed convex sets under the assumption that the noise in the stochastic gradient has a bounded $p$th moment for some $1<p\leq 2$. The key feature of the results is that they have improved dependence on the failure probability $\delta$ in comparison to the prior works. In the convex case, the authors also consider non-Euclidean norms, Clipped Stochastic Mirror Descent, and its accelerated version. Overall, the results are good and very relevant for the community. Strengths: 1. **Improved dependence on $\delta$ in the convex case.** Previous works addressing similar setups provide complexity results that depend on $\delta$ as $\log(1/(\varepsilon\delta))$, while the rates derived in this work (in the convex case, for known horizon) for Clipped-SMD and Clipped-ASMD imply complexities that are proportional to $\log(1/\delta)$. Although this affects only the logarithmic term, I believe it is an important contribution to the stochastic optimization literature. In particular, this requires applying a proof technique (without the induction argument) that differs from the existing approaches. 2. **Constrained case for Clipped-SGD and non-Euclidean prox-structure.** This work provides a high-probability convergence analysis for constrained problems for the first time without the bounded-variance assumption. Moreover, it provides an extension to the non-Euclidean case. This is an important step forward. 3. **Horizon-independent results.** The authors also provide results that are independent of the time horizon $T$, while the previous works explicitly use $T$ to choose the parameters of the methods. 4. The paper is well-written. I did not find any serious issues in the proofs. Weaknesses: 1.
**Deterministic term and logarithms in the results for the non-convex case.** Although the dominating terms (that contain $\sigma$) have optimal dependence on $T$, the deterministic term (the one that does not depend on $\sigma$) is $O(T^{\frac{1-2p}{3p-2}})$, which is not optimal for the deterministic case: it becomes worse when $p \to 1$, while it should be $O(T^{-1})$, as for Gradient Descent. Such an $O(T^{-1})$ deterministic term can be achieved (e.g., see Sadiev et al. (2023)). Next, the improvement of the logarithmic factor is not that evident: while there is no $T$ under the logarithm anymore, the power of the logarithm increases. In particular, the first term in the rate from Theorem 3.1 is proportional to $\gamma^{\frac{p}{p-1}} \geq \gamma^2$. If one wants to achieve $\frac{1}{T}\sum_{t=1}^T \|\nabla f(x_t)\|^2 \leq \varepsilon$, then according to Theorem 3.1, the complexity bound will have a term proportional to $O(\frac{\log^{\frac{p(3p-2)}{2p-2}}\frac{1}{\delta}}{\varepsilon^{\frac{3p-2}{2p-2}}})$. In the best case (when $p=2$), this equals $O(\frac{\log^{4}\frac{1}{\delta}}{\varepsilon^2})$. The corresponding term from (Sadiev et al., 2023) is $O(\frac{\log\frac{1}{\delta\varepsilon^2}}{\varepsilon^2})$. When both $\delta$ and $\varepsilon$ are small, $\log\frac{1}{\delta\varepsilon^2}$ can be much smaller than $\log^{4}\frac{1}{\delta}$. I believe this should be discussed in the paper, and a fair comparison should be provided. 2. **Logarithms in the horizon-independent results.** When $T$ is unknown, the results become worse by a polynomial factor of $\log T$, which also spoils the complexity even in the convex case (and is worse than for the case of known $T$ in the prior works). 3. **Some parts of the proofs are not finished.** The proof of Lemma 3.4 is not finished. Next, although the result is believable, the proof of Theorem B.2 is not complete either.
The authors should at least provide a more detailed sketch explicitly pointing to the places that will be changed and how. 4. **Results for the accelerated method require $\nabla f(x^\ast) = 0$.** When the gradient at the solution equals zero, the problem becomes almost an unconstrained one (in terms of the analysis). The authors should indicate in the main text that the accelerated algorithm is analyzed under this additional assumption. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Line 39, "where the convergence rate depends on...": the work [32] does not have high-probability results. Do you mean that one can get such results from the results of [32] using Markov's inequality? 2. Line 165, "The proof can be found in, for example, Lemma 1 of [16]": It seems that the reference is not accurate (I did not find this result in the mentioned paper). 3. Lines 447-448: what is $A'$? 4. Line 480: what is the first case? 5. I believe the authors should provide the missing proofs (I mentioned them in the Weaknesses section). ### Minor 1. Line 204: step sizes are $\eta_t$ are fixed $\to$ step sizes $\eta_t$ are fixed 2. Lemma 3.4: the summation is forgotten in the third term of the main inequality. 3. Line 430: should be Lemma 3.2 instead of Lemma 5 4. Line 439: the second inequality holds with probability strictly smaller than $1$ 5. Lines 484-485, second and third rows: should be $\lambda$ instead of $\lambda_t$. 6. Line 487: Lemma 3.6 $\to$ Proposition 3.6 7. After line 488, definitions of $C_2$ and $C_3$: should be "$=$" instead of "$\leq$". 8. After line 493: after the first inequality the square is missing. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors address the limitations of their work in detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for pointing out the significance of our contributions and for the detailed review. **Regarding the deterministic term**: The power of $T$ in the term is $O\left(T^{\frac{1-2p}{3p-2}}\right)$. We can see that as $p\to1$ the rate becomes precisely $O\left(T^{-1}\right)$, which is the rate of gradient descent. **Regarding the comparison with Sadiev et al. (2023) in the nonconvex case**: In fact, Sadiev et al. (2023) achieve a rate with a significantly suboptimal dependency on $T$, namely $O\left(T^{\frac{1-p}{p}}\right)$, while we achieve the optimal bound $O\left(T^{\frac{2-2p}{3p-2}}\right)$. This is a $\mathrm{poly}(T)$ improvement for $p<2$, which vastly dominates the $\log T$ terms arising from the higher dependence on $\log(1/\delta)$. The reviewer is right that our dependence on $\log(1/\delta)$ is slightly higher in the very small $\delta$ regime, and we will discuss this in the next revision. **For the unknown $T$**: Thank you for pointing out the gap between known $T$ and unknown $T$. We agree that removing the extra $\log T$ for unknown $T$ is an interesting question. Indeed, this remains open even under the more restrictive sub-Gaussian noise assumption instead of heavy-tailed noise. **Regarding the incomplete proofs**: We thought some of these steps could be derived similarly to other proofs in our paper or cited ones, so we left some details out for the conciseness of the paper. We will add more details in the revision of the paper. **Regarding the accelerated algorithm**: We will add this additional assumption in the main text. **Questions**: 1. Line 39: Yes, we try to convey that one can obtain a high-probability bound from an in-expectation bound using Markov's inequality. 2. Line 165: Indeed, the reference is not correct. The correct reference should be Beygelzimer et al. (2011), in the proof of Theorem 1, Eq. (1)-(3). 3. Lines 447-448: We define $A'$ in Line 441 as a proxy for $A$, using the variables $Z_{t}$. 4. 
Line 480: By "the first case" we meant the case of known $T$. We forgot to rewrite this part when moving some of the theorems and proofs to the appendix. The proof of Lemma B1 is complete. 5. We will add the missing details for the incomplete proofs in the next revision. Finally, thank you for the attention to detail and for catching some typos. We will fix these in our revision. References: Beygelzimer, A., Langford, J., Li, L., Reyzin, L., & Schapire, R. (2011, June). Contextual bandit algorithms with supervised learning guarantees. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (pp. 19-26). JMLR Workshop and Conference Proceedings. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications Comment: I thank the authors for the clarifications: my comments are adequately addressed. **Deterministic term.** I am sorry for the confusion: when $p \to 1$ this term becomes $O(T^{-1})$. I wanted to say that when $p \in (1,2]$ this term is worse than $O(T^{-1})$, e.g., for $p=2$, it is $O(T^{-3/4})$. Therefore, when $\sigma$ is small, the result is not optimal. **Discussion of the logarithms.** In view of the above comment, when $\sigma$ is small the term $O(\gamma T^{\frac{1-2p}{3p-2}})$ can be the main one. In this case, as I explained in the review, the result from Sadiev et al. (2023) can be better for small $\delta$. In any case, the authors should discuss the dependence on the logarithmic factors more in the final version (as the authors promised). --- I want to keep my score unchanged, assuming that the authors will make the promised modifications in the final version of the paper. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the insightful feedback. We will make sure to include these discussions in the revision.
Summary: This paper considers high-probability guarantees for clipped stochastic gradient descent with heavy-tailed noise, in the setting of Zhang et al. (2020) where the unbiased gradient noise has finite $p$-th moments for $p\in(1,2]$. The bound obtained in the paper is time-optimal and, in the non-convex case, matches the lower bound in the literature. The analysis is based on a novel whitebox approach that analyzes the generating function of a well-chosen martingale difference sequence to obtain tighter rates for stochastic gradient methods. Strengths: The bound obtained in the paper is time-optimal and, in the non-convex case, matches the lower bound in the literature. The analysis has technical novelty that involves analyzing the generating function of a well-chosen martingale difference sequence to obtain tighter rates for stochastic gradient methods. Weaknesses: In terms of weaknesses, I think the first weakness is that some of the model assumptions seem to be very strong. For example, it is assumed that $\mathbb{E}[\Vert\hat{\nabla}f(x)-\nabla f(x)\Vert_{\ast}^{p}|x]\leq\sigma^{p}$. This assumption does not seem to hold in the simple mini-batch setting. Is it possible to extend your analysis to allow the relaxation of this assumption to something like $\mathbb{E}[\Vert\hat{\nabla}f(x)-\nabla f(x)\Vert_{\ast}^{p}|x]\leq\sigma^{p}(1+\Vert x\Vert^{q})$, where $q$ can be related to $p$? In your main result, i.e., Theorem 3.1, you mainly discuss the dependence on time $T$. What you obtain is $O(T^{\frac{2-2p}{3p-2}})$. However, there are other terms with explicit constants in your bound, and how does your bound depend on $p$? Does it have a monotonic dependence on $p\in(1,2]$ or not? It seems to me that as $p\rightarrow 1$, the upper bound you have depends on whether $8\gamma/\sqrt{L\Delta_1}$ and $\sigma$ are bigger than $1$ or smaller than $1$, and you will get very different results in these two cases. 
Is this something you expected, and is there any intuition behind it? Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) On page 3, you cited some works about high-probability convergence for noise with bounded variance and heavy tails. Are there some works on unbounded variance that you can cite? (2) On page 6, line 234, can you provide a reference to Ville's inequality? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I did not see such discussions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. **Regarding the assumption** $E[\|\widehat{\nabla}f(x)-\nabla f(x)\|_*^p \mid x]\le\sigma^{p}$: The reviewer is correct that in the small mini-batch setting, the variance may not be bounded. The assumption of bounded $p$-th moments is a step towards relaxing the bounded variance assumption, and it is motivated by observations in practice (see Zhang et al. (2020) and our literature review). All prior works in the literature on heavy-tailed noise make this assumption. For example, this assumption is used in Gorbunov et al. (2020) for $p=2$ (the strongest case), and in Zhang et al. (2020) and Sadiev et al. (2023) (the same assumption). Cutkosky & Mehta (2021) use this assumption in addition to another assumption that $E[\|\widehat{\nabla}f(x)\|_*^p \mid x]\le G^p$. We also think that relaxing this assumption, such as to $E[\|\widehat{\nabla}f(x)-\nabla f(x)\|_*^p \mid x]\le \sigma^{p}(1+\|x\|^q)$, is an important and interesting question to study in the future. **Regarding the dependency on $p$**: Generally $p$ is treated as a fixed problem parameter, and, as the reviewer suggests, the dependency on $p$ of the leading constants is not necessarily monotonic but depends on the regime. What we care most about in our paper is the exponent of $T$, which is monotonic in $p$. **Questions**: 1. Among the works we cited, Sadiev et al. [27] and Liu et al. [18] consider the unbounded variance case (i.e., the assumption $E[\|\widehat{\nabla}f(x)-\nabla f(x)\|_*^p \mid x]\le\sigma^{p}$ for $1<p\le2$). 2. Ville's inequality is a classic inequality (Ville, 1939). See, for example, the Wikipedia page for Ville's inequality and references therein. Reference: Ville, J. (1939). Étude critique de la notion de collectif. Gauthier-Villars, Paris. --- Rebuttal Comment 1.1: Title: Did we address the reviewer's concerns? Comment: We hope that we addressed all of the reviewer's concerns. 
We are happy to answer any questions but the reviewer-author discussion period ends soon. As discussed in our response, our assumptions are the least restrictive compared to all previous work in the same line of research. Could the reviewer please consider adjusting the score in light of this comparison?
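For readers following the discussion of the bounded $p$-th moment assumption, a minimal, self-contained sketch may help; this is not the authors' code, and the quadratic objective, step size, clipping level, and Lomax noise shape are illustrative choices only. It shows Clipped-SGD under noise with a finite $p$-th moment (for $p < 1.5$) but infinite variance:

```python
import numpy as np

def clip(g, lam):
    # Clipping operator: rescale g so its Euclidean norm is at most lam.
    norm = np.linalg.norm(g)
    return g if norm <= lam else (lam / norm) * g

def clipped_sgd(grad, x0, eta, lam, T, rng):
    # Plain SGD where each stochastic gradient is clipped before the step.
    x = np.array(x0, dtype=float)
    for _ in range(T):
        # Lomax(1.5) noise: finite p-th moments only for p < 1.5, so the
        # variance is infinite; subtracting the mean (= 2) roughly centers it.
        noise = rng.pareto(1.5, size=x.shape) - 2.0
        g = grad(x) + noise
        x = x - eta * clip(g, lam)
    return x

rng = np.random.default_rng(0)
# Toy objective f(x) = 0.5 * ||x||^2, so grad f(x) = x and the minimizer is 0.
x_final = clipped_sgd(lambda x: x, [10.0, -10.0],
                      eta=0.05, lam=5.0, T=2000, rng=rng)
```

Without clipping, a single heavy-tailed noise sample can throw the iterate arbitrarily far; the clipping level caps each step at `eta * lam`, which is what makes high-probability bounds of the kind discussed above possible.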
Summary: This paper provides theoretical guarantees for clipped gradient methods in the presence of heavy-tailed gradient noise distributions, where the noise has a bounded $p$-th moment for some $1 < p \leq 2$. The convergence guarantees are established with high probability. The authors' approach distinguishes itself from previous work by deviating from the classical techniques that rely on concentration inequalities coupled with an inductive union-bound argument to control the iterates across all iterations. Instead, they bound the moment generating function of a well-chosen supermartingale sequence, which allows them to enhance the convergence guarantees for a wide range of clipped gradient-based algorithms. The authors analyze the Clipped-SGD algorithm in the smooth non-convex setting and Clipped-SMD and Clipped-ASMD in the smooth convex setting. Specifically, the rates they obtain are time-optimal and align with the latest lower bounds, i.e., $O(T^{\frac{2-2p}{3p-2}})$, $O(T^{\frac{1-p}{p}})$ and $O(T^{\frac{1-p}{p}}\sigma + T^{-2})$, respectively. Strengths: The paper has a well-written and well-structured presentation. The authors stress the importance of their work by giving many intuitions about their new proof techniques and comparisons with related work. The authors provide guarantees and parameter values for the three algorithms studied in the paper, both for a known time horizon $T$ and an unknown time horizon. The theoretical analysis is clear and easy to follow, backed by insightful intuitions. The "whitebox" technique, based on a well-chosen supermartingale, exhibits potential reusability for addressing other problems. Weaknesses: The paper appears to be quite incremental and technical, lacking substantial novel contributions. 
The three algorithms studied in the paper are already well-established methods, and aside from the application of the "whitebox" analysis technique, the convergence analysis for the algorithms follows conventional approaches commonly employed for clipped-gradient-based methods. Furthermore, the analysis of Clipped-SMD heavily depends on the hypothesis that there exists a bound $\nabla_1$ on the gradient at the initial point, i.e., $\lVert \nabla f(x_1) \rVert_* \leq \nabla_1$. If the optimal point $x_*$ does not lie in the domain, $\lVert\nabla f(x_1) \rVert_*$ can be estimated w.h.p., but this requires the knowledge of $\sigma$, which is quite restrictive. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1) Is it possible to extend the "whitebox" technique to the analysis of Clipped-SGD with momentum in the non-convex setting? Q2) How do the studied algorithms, with the parameters in the paper, compare practically against existing state-of-the-art clipped gradient methods? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I think that the guarantees on the accelerated stochastic mirror descent algorithm should be added to the main body of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and feedback. As pointed out by reviewer 2mhk, we believe that our paper makes “an important contribution to the stochastic optimization literature”. This includes: 1. Achieving the optimal rates in $T$ in all considered cases, closing the gap between the upper and lower bounds. Especially in the nonconvex case, we improve the existing bound by a $\mathrm{poly}(T)$ factor for $p<2$ (from $O\left(T^{\frac{1-p}{p}}\right)$ in Sadiev et al. to $O\left(T^{\frac{2-2p}{3p-2}}\right)$ in our case). Even if we are willing to suffer $\log T$ factors, previous techniques only allow us to obtain $O\left(T^{\frac{1-p}{p}}\right)$. Obtaining optimal bounds requires new insights and techniques. 2. Providing an analysis that is applicable generally for known and unknown time horizons. An analysis for the unknown time horizon case has not been achieved before to the best of our knowledge. 3. Showing an approach that works for general domains (for SMD). Existing analyses only apply to compact or unconstrained problems. We would like to stress that the existing approach does not seem to be able to obtain these results. **Regarding the knowledge of $\sigma$**: We highlight that existing Clipped-SGD and Clipped-SMD algorithms generally require the knowledge of $\sigma$ even when the domain is unconstrained and the optimal point lies in the domain. We strictly improve this and only require $\sigma$ when the optimal point does not lie in the domain. **Questions**: Q1) We believe that as long as a high-probability analysis relies on a Freedman-type concentration inequality, it would be possible to apply the whitebox approach to obtain tighter bounds. Clipped-SGD with momentum, such as in Cutkosky & Mehta (2021), can be analyzed using a Freedman-type concentration inequality, so we think it is possible to extend our techniques to this algorithm. 
Q2) Our new analysis of clipped gradient methods is mainly theoretically motivated, so we did not run any numerical experiments. However, we think the performance of the new parameter choices wouldn't differ much from the state-of-the-art clipped algorithms, because the step sizes and clipping parameters are primarily obtained via hyperparameter tuning in practice. --- Rebuttal Comment 1.1: Title: Please respond to the rebuttal, thanks! Comment: Dear Reviewer QDgN, It would be nice if you could respond to the rebuttal. Thanks! AC --- Rebuttal Comment 1.2: Comment: I would like to thank you for your response. I have no further concerns. --- Reply to Comment 1.2.1: Comment: We thank the reviewer for the insightful feedback. Given that the reviewer has no further concerns, would it be possible for the reviewer to adjust the score accordingly?
null
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors consider clipped stochastic gradient methods: clipped-SMD in the convex case (in the non-convex case they consider clipped-SGD) and clipped-ASMD in the convex case. They provide an improved high-probability convergence analysis, which shares some steps with the analyses from (Gorbunov et al., 2020; Sadiev et al., 2023). Compared to these works, the authors provide the analysis in a non-Euclidean setup for clipped-SMD and clipped-ASMD. Also, they explain why the proposed approach works better than the previous one. Strengths: 1. A good, detailed explanation of the difference between the blackbox and whitebox approaches. 2. The authors provide the convergence analysis of the proposed methods in a non-Euclidean setup. Generally, the paper is well-written and has good results. Weaknesses: 1. There are some issues with clipped-ASMD. At the beginning of the analysis, the authors consider a more general assumption on the smoothness of $f$. But in the end, they use only the standard smoothness assumption. Also, compared to clipped-SMD, they assume that $f(x^*) = 0$. 2. From my perspective, it would be better if the proofs of all facts were provided in a more detailed way. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Could you explain why $z_k$ is a decreasing sequence? It is not clear to me. 2. About the extension to the non-smooth setting: how exactly could it be done? According to the proof and the final statement of the main fact about complexity, there is no constant $G$ in the final complexity. Could you comment on it? 3. The proof via the blackbox approach lacks explanation for some details; could you provide a full proof or a citation to the same proof technique? It is similar to the proof from the paper of Sadiev et al., 2023. 4. Please, could you provide a small explanation for the third inequality in the last chain of inequalities on page 19? 5. For inequality (d) on page 20, you did not write any explanation. 
Did you forget to do it? Typos: 1. On line 204, the first 'are' is not needed. 2. In eq. (6), it would look better if '(6)' were on the same row as the expression. 3. On line 430, maybe it would be better to write 'Lemma 3.2' or 'eq. (5)' instead of 'Lemma 5'. 4. On lines 545 and 612, 'Proposition' is missing before '4.8'. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: There are no limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the compliments and feedback. **Regarding the weakness**: We use the general definition to address smoothness and non-smoothness in a unified way. The proofs for the non-smooth case follow very similarly to the smooth case. We give a sketch below in response to the reviewer's second question and will add this case to the revision of the paper. **We address the reviewer's questions below**: **Regarding the sequence $z_{t}$**: In Section 3, for example, we should have mentioned that we select the parameters $P_{t},Q_{t},\eta_{t},\lambda_{t}$ and ensure that $P_{t}\eta_{t}\lambda_{t}$ and $Q_{t}\eta_{t}^{2}\lambda_{t}^{2}$ are constants (Line 466). This guarantees that $z_{t}$ is a decreasing sequence. We will make this clear in the revision of the paper. **Regarding the extension to the non-smooth setting**: We can proceed similarly to the smooth case. Let $G$ be the Lipschitz constant. Starting with the basic inequality $\eta_{t}\Delta_{t} \le D_\psi(x^*,x_t) - D_\psi(x^*,x_{t+1})$ $\quad+\eta_t \langle x^*-x_t,\theta_{t}^{u} \rangle +\eta_t \langle x^*-x_t,\theta_{t}^{b} \rangle +\eta_{t}^{2}G^{2}$ $\quad+2\eta_t^2 \big( \|\theta_{t}^{u}\|_{*}^{2} - E[\|\theta_{t}^{u}\|_*^2 \mid F_{t-1}] \big)$ $\quad+2\eta_t^2 E[\|\theta_{t}^{u}\|_{*}^{2} \mid F_{t-1}]+2\eta_{t}^{2} \|\theta_{t}^{b}\|_*^2,$ we define $Z_{t} = z_{t}\big(\eta_t \Delta_t - D_\psi(x^*,x_t) + D_\psi(x^*,x_{t+1})$ $\quad- \eta_t \langle x^*-x_t,\theta_{t}^{b} \rangle -\eta_{t}^{2}G^{2}$ $\quad- 2\eta_t^2 E[\|\theta_{t}^{u}\|_{*}^{2} \mid F_{t-1}] -2\eta_{t}^{2} \|\theta_{t}^{b}\|_*^2 \big)$ $\quad- \big(\frac{3}{8\lambda_{t}^{2}}+24z_{t}^{2}\eta_{t}^{4}\lambda_{t}^{2}\big) E[\|\theta_{t}^{u}\|_*^{2} \mid F_{t-1}],$ where $z_{t}=\frac{1}{2P_{t}\eta_{t}\lambda_{t}\max_{i\le t}\sqrt{2D_\psi(x^{*},x_{i})}+16Q_{t}\eta_{t}^{2}\lambda_{t}^{2}}$. 
For constants $c_{1}$ and $c_{2}$, in the case of known $T$, we choose $\lambda_{t}=\lambda=\max\big\{ 2G, \big(\tfrac{26T}{\gamma}\big)^{1/p}\sigma \big\}$ and $\eta_{t}=\eta=\min\big\{ \tfrac{c_{2}}{G\sqrt{T}};\tfrac{c_{1}}{24\gamma\lambda} \big\} =\min\big\{ \tfrac{c_{2}}{G\sqrt{T}};\tfrac{c_{1}}{48G\gamma};\tfrac{c_{1}}{24\gamma}\big(\tfrac{26T}{\gamma}\big)^{-1/p}\sigma^{-1}\big\}$, and by a similar analysis, we have $\frac{1}{T}\sum_{i=1}^{T}\Delta_{i} \le\frac{1}{2}(R_{1}+c_{1}+2c_{2})^{2}\max\big\{ \tfrac{G}{c_{2}\sqrt{T}};\tfrac{48G\gamma}{c_{1}T};\tfrac{24\gamma^{\frac{p-1}{p}}\cdot26^{1/p}\sigma}{c_{1}}T^{\frac{1-p}{p}}\big\}$. **Regarding the blackbox proof**: The proof technique is generally similar to the approaches in Sadiev et al. (2023) and Gorbunov et al. (2020; for noise with bounded variance), so we only provide a sketch in our paper. One important note is that the existing bound in Sadiev et al. (2023) is suboptimal in $T$. In Remark 3.3, we explain the reason for this and how to make the analysis optimal. We will add the complete analysis for the blackbox case in our revision. **Regarding the inequality on page 19**: We use the inequality $ax-x^{2}\le\frac{1}{4}a^{2}$. The third inequality of the last chain on page 19 follows from combining the two inequalities $\eta_{t} \langle \theta_{t},x_{t}-x_{t+1} \rangle -\frac{1}{4}\|x_{t+1}-x_{t}\|^{2} \le\eta_{t}\|\theta_{t}\|_{*}\|x_{t}-x_{t+1}\|-\frac{1}{4}\|x_{t+1}-x_{t}\|^{2} \le\eta_t^2\|\theta_{t}\|_*^2$ and $G\eta_{t}\|x_{t}-x_{t+1}\| -\frac{1}{2}\|x_{t+1}-x_{t}\|^{2} \le 2G^{2}\eta_{t}^{2}$. **On page 20**, we indeed forgot to explain (d), which follows from $\|\theta_{t}^{u}\|_{*}^{2}\le 4\lambda_{t}^{2}$ (Lemma 2.1, Eq. (2)). We apply this to the term $E[\|\theta_t^u\|_*^4] \le 4\lambda_t^2 E[\|\theta_t^u\|_*^2]$ and obtain (d). We will add this explanation in our next revision. --- Rebuttal Comment 1.1: Comment: Thank you for your explanation, which allows me to understand your work better!
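The closed-form parameter choices in the rebuttal above can be computed directly. Below is a small sketch (not from the paper; the numeric values in the final line are placeholders chosen only for illustration) of the clipping level and step size for the known-$T$, non-smooth case:

```python
import math

def clipped_smd_params(T, G, sigma, gamma, p, c1, c2):
    # Clipping level: lambda = max{ 2G, (26T/gamma)^(1/p) * sigma }.
    lam = max(2.0 * G, (26.0 * T / gamma) ** (1.0 / p) * sigma)
    # Step size: eta = min{ c2 / (G sqrt(T)), c1 / (24 gamma lambda) }.
    eta = min(c2 / (G * math.sqrt(T)), c1 / (24.0 * gamma * lam))
    return lam, eta

# Placeholder values: T = 10_000 iterations, G = sigma = 1, gamma = 0.1, p = 2.
lam, eta = clipped_smd_params(T=10_000, G=1.0, sigma=1.0,
                              gamma=0.1, p=2.0, c1=1.0, c2=1.0)
```

Note how a larger horizon $T$ or heavier tails (smaller $p$) force a larger clipping level and hence, through the second term of the min, a smaller step size.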
null
null
null
null
null
null
Resilient Constrained Learning
Accept (poster)
Summary: Reasonable requirement specification in constrained learning has long been hindered by the presence of compromises and limited prior knowledge about the data. As a treatment, this paper proposes resilient constrained learning, which adapts the requirements while simultaneously solving the learning task. Specifically, it balances the performance gains obtained from the constraint relaxation against a user-defined relaxation cost function. The paper provides theoretical guarantees for this balance (e.g., approximation and generalization guarantees) along with a practical algorithm. The algorithm is validated in invariant learning and federated learning experiments. Strengths: 1. The paper is well motivated by the constraint-performance trade-off problem, with the algorithm derived from detailed assumptions and theorems. 2. Overall, I think the paper is well-written and easy to follow. 3. The final algorithm is simple yet effective, allowing for a straightforward and flexible adjustment of the trade-off through a relaxation cost parameter $\alpha$. Weaknesses: 1. My primary concern regarding the acceptance of this paper lies in its simplistic experimentation. From my view, the federated learning problem is more like a toy example. 2. The paper lacks an in-depth study of various aspects of the algorithm's performance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Have you tried cost functions other than $\alpha\|u\|^2$, and how do different cost functions impact the algorithm's performance and efficiency? 2. How do the learning rates $\eta, \eta_u, \eta_\lambda$ influence the algorithm's performance? Do they require meticulous joint tuning, or are they effective within certain ranges of individual values? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. First, the federated and invariance learning problems are manually synthesized for validating the effectiveness of the proposed method, which I regard as not convincing enough without results on large-scale real-world datasets. 2. The federated learning experiment lacks quantitative metrics and diverse baselines other than single constrained learning. 3. The empirical efficiency has not been studied in the paper. 4. The paper lacks a detailed limitation discussion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. In what follows, we address the points raised by the reviewer. We respectfully disagree with the claim that the experimentation conducted is *(Weakness 1) simplistic*. Both the federated and invariant learning settings are benchmark setups used in recent literature [R4, R5]. We also disagree with the claim that we did not conduct several experiments assessing *(Weakness 2) various aspects of the algorithm's performance*. In both setups, we analyze several properties of our method that empirically validate the motivating theory, for example: - *Figures 2 (left) and 8*: more stringent requirements are relaxed more - *Figures 2 (right), 7, 8, Tables 1 and 4*: these relaxations lead to an improvement in performance. In what follows, we provide a detailed discussion of new experimental results addressing the reviewer's concerns. In the federated setup, we adopt as benchmarks Ratio-Loss [R2] and CLIMB [R3]. The latter is a constrained learning approach, which we adopt with the modification that constraints can be relaxed as per the definition of resilient equilibrium. In this setup, average accuracy is not really the pertinent metric. Observe that the *average* accuracy (Response Table 1) is overall similar to the constrained approach, and consistently higher than the baseline [R3]. However, the distribution of performance among clients varies. We actually chose this experiment as one in which the differences between resilient learning and standard constrained learning are apparent. Resilient learning has (generally) less spread in the interquartile range and higher maximum spread (Response Table 2) than the constrained approach. This is precisely what the method is designed to do: sacrifice the performance of outliers to benefit the performance of the majority of the agents. In order to showcase this, we order clients by their accuracy. 
We then compute the fraction of clients in the resilient method that outperform equally ranked clients for baseline methods (Response Table 3). ## New Ablations ### Choice of cost function We run $\|u\|_\beta$ with $\beta = 1, 2, 4$, and $\infty$, for fashion-MNIST in both the federated and invariant settings. As discussed in Section 3.3, $\beta = 1$ recovers penalty-based methods. **Federated Learning**

| $\beta$ | Mean Acc | IQR | Max Range |
|------------|----------|-------|-----------|
| 1.0 | $93.3$ | $2.5$ | $49.5$ |
| 2.0 | $93.4$ | $2.4$ | $50.6$ |
| 4.0 | $93.4$ | $2.6$ | $45.6$ |
| $\infty$ | $92.7$ | $3.8$ | $28.1$ |

**Response Table 4**: *Cost function ablation for heterogeneous federated learning on fashion-MNIST using 100 clients, 3 minority classes, an imbalance ratio of $\rho=10$ and Dirichlet allocation with parameter $d=0.3$. We report the mean, interquartile range, and range (maximum value minus minimum value) of test accuracy across clients.* **Invariance**

| $\beta$ | Partially Rotated | Translated | Scaled |
|----------|-------------------|-----------------|-----------------|
| 1.0 | $86.08\pm 0.38$ | $86.85\pm 0.20$ | $85.02\pm 0.46$ |
| 2.0 | $85.37\pm 0.17$ | $86.65\pm 0.25$ | $84.92\pm 0.39$ |
| 4.0 | $85.23\pm 0.20$ | $86.64\pm 0.11$ | $84.65\pm 0.68$ |
| $\infty$ | $82.94\pm 0.17$ | $85.47\pm 0.19$ | $83.35\pm 0.49$ |

**Response Table 5**: *Cost function ablation for invariant fashion-MNIST datasets. We compute the mean and standard deviation of test accuracy across three independent runs.* ### Learning Rates We run an ablation on $\eta_u, \eta_\lambda$, the perturbation and dual learning rates, over a small grid of 12 values, and find that, in this range, the performance of the algorithm is not overly sensitive to this choice. 
We also observe that the rates that were used in the paper ($\eta_u = 0.1, \eta_\lambda = 2$) are not optimal in this setup, and thus further improvements in performance could be obtained through more extensive hyperparameter tuning, but this was not the focus of our experiments.

| $\eta_u$ \ $\eta_\lambda$ | 0.1 | 0.5 | 1 | 2 |
|-----------------------|------|------|------|------|
| 0.1 | 81.4 | 81.6 | 81.8 | 81.8 |
| 0.5 | 81.4 | 80.9 | 81.4 | 81.1 |
| 1 | 81.6 | 81.6 | 81.6 | 81.7 |

**Response Table 6**: *Dual and resilient learning rate ablation in the heterogeneous federated learning setting. We report mean test accuracy for CIFAR100 using 100 clients, 3 minority classes, an imbalance ratio of $\rho=10$ and Dirichlet allocation with parameter $d=0.3$.* *References* [R1] Zhu, Hangyu, et al. "Federated learning on non-IID data: A survey." Neurocomputing 465 (2021): 371-390. [R2] Wang, Lixu, et al. "Addressing class imbalance in federated learning." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 11. 2021. [R3] Shen, Zebang, et al. "An agnostic approach to federated learning with class imbalance." International Conference on Learning Representations. 2021. [R4] Durmus, Alp Emre, et al. "Federated Learning Based on Dynamic Regularization." International Conference on Learning Representations. 2021. [R5] Immer, Alexander, et al. "Invariance learning in deep neural networks with differentiable Laplace approximations." Advances in Neural Information Processing Systems 35 (2022): 12449-12463. --- Rebuttal Comment 1.1: Comment: Thank you for the feedback. I'm pleased with the new experimental results and your explanations regarding my confusion. I will raise my score to 6.
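The resilient updates discussed in this review and rebuttal (primal descent, relaxation descent on a cost $\alpha\|u\|^2$, and dual ascent, with rates $\eta$, $\eta_u$, $\eta_\lambda$) can be illustrated on a one-dimensional toy problem. This is a hedged sketch, not the authors' implementation; the objective, constraint, and rate values are invented for illustration:

```python
def resilient_primal_dual(alpha=1.0, eta=0.05, eta_u=0.05,
                          eta_lam=0.05, steps=5000):
    # Toy problem: minimize f(theta) = theta^2
    # subject to the relaxed constraint g(theta) = 1 - theta <= u,
    # with relaxation cost h(u) = alpha * u^2.
    # Lagrangian: theta^2 + alpha*u^2 + lam * (1 - theta - u).
    theta, u, lam = 0.0, 0.0, 0.0
    for _ in range(steps):
        theta -= eta * (2.0 * theta - lam)            # primal descent
        u -= eta_u * (2.0 * alpha * u - lam)          # relaxation descent
        lam = max(0.0, lam + eta_lam * (1.0 - theta - u))  # projected dual ascent
    return theta, u, lam

theta, u, lam = resilient_primal_dual()
```

For $\alpha = 1$ the equilibrium of this toy problem is $\theta = u = 0.5$, $\lambda = 1$: the constraint is only half enforced because satisfying it fully would cost more in objective than the relaxation cost saves, which mirrors the trade-off the relaxation cost parameter $\alpha$ controls in the paper.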
Summary: This paper proposes an approach to solve the constrained learning problem (where the considered function is convex), named resilient constrained learning. The main idea is to relax the learning constraints according to how much they affect the considered task. It can be viewed as a generalization of standard constrained learning and studies the tradeoff between the gain from constraint relaxations and the cost of the relaxations. The main theoretical result shows that the gap between the relaxed solution and the optimal solution of the original problem is bounded by a function of the relaxation amount. Numerical experiments verified the basic properties of the proposed approach. Strengths: The idea of relaxing the constraints and balancing between relaxation gain and cost is interesting. When $u>0$, it can be seen as an interpolation between unconstrained learning and constrained learning. The main theorem states a bound for the gap between the relaxed solution and the original optimal solution, and the derivation process seems solid (although I did not check every step carefully). It can have meaningful real-life applications since, in many real cases, the constraints are considered compromisable depending on the gains of relaxation. Weaknesses: The writing is a little hard to follow and some important parts are put in the appendix, like Algorithm 2 mentioned below Theorem 1. One major concern is that the experimental results are quite weak, even considering that this is a theoretical paper. The experiments are only sanity checks to make sure the proposed method can indeed solve some constrained learning problems, but do not demonstrate how good the solution is. There is no comparison with existing algorithms for solving constrained learning problems, which hinders the contribution of the proposed method to a large extent. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: It would be much better if there were some comparison with existing constrained learning algorithms, and a discussion of the cases where $u=0$ and $u>0$. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Did not see discussions of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. In what follows, we address the main points raised by the reviewer.

*some important parts are put in the appendix, like the algorithm 2*

The algorithm referenced below theorem 1 should be Algorithm 1, which is indeed included in the main text. Algorithm 2 in the appendix pertains to the federated setup only (i.e., it is more specific).

*There is no comparison with existing algorithms for solving constrained learning problems*.

We do compare with existing primal-dual algorithms for solving constrained learning problems. These algorithms have been used in recent works in both the imbalanced federated learning [R3] and invariant learning [R5] setups. All of our experimental comparisons include constrained learning. In addition, we have added new experiments in order to give a more detailed quantitative comparison between methods.

The main concern is that we *did not demonstrate how well the solution is*. In that sense, we reported performance in Tables 1 (Imbalanced-FL) and 4 (Invariant Learning), showing that our method performs well compared to baselines. Below, we add a more in-depth discussion of existing and new results.

### New results in rebuttal

We point out that federated learning with class imbalance, also known as heterogeneous federated learning, is a problem motivated by practical considerations which has received substantial attention; see, e.g., [R1] and references therein. We adopt it as an example in this paper as a prototypical situation in which accommodating some clients -- i.e., constraints -- can be much more difficult than accommodating most clients. We adopt as benchmarks Ratio-Loss [R2] and CLIMB [R3]. The latter is a constrained learning approach which we adopt with the modification that constraints can be relaxed as per the definition of resilient equilibrium.
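To make the mechanism concrete, the kind of relaxed primal-dual dynamics discussed in this thread can be illustrated on a toy problem. This is only a minimal sketch under our own assumptions (a scalar decision variable, quadratic objective, relaxation cost $h(u) = \alpha u^2/2$, and plain gradient-descent-ascent updates); it is not the paper's Algorithm 1.

```python
# Toy resilient constrained problem (all names and step sizes are illustrative):
#   minimize f(x) = x^2   subject to   g(x) = 1 - x <= u,
# where relaxing the constraint by u costs h(u) = alpha * u^2 / 2.
alpha = 2.0                     # curvature of the relaxation cost h
eta_x, eta_u, eta_l = 0.1, 0.1, 0.1

x, u, lam = 0.0, 0.0, 0.0
for _ in range(3000):
    x = x - eta_x * (2 * x - lam)                 # primal descent on f + lam * g
    u = max(0.0, u - eta_u * (alpha * u - lam))   # relax until h'(u) matches lam
    lam = max(0.0, lam + eta_l * ((1 - x) - u))   # dual ascent on g(x) - u

# At the resilient equilibrium the marginal relaxation cost equals the dual
# variable, h'(u) = alpha * u = lam, and the relaxed constraint is tight.
print(round(x, 2), round(u, 2), round(lam, 2))  # roughly: 0.5 0.5 1.0
```

Taking $\alpha \to \infty$ drives $u \to 0$ (strict constrained learning), while $\alpha \to 0$ lets the constraint relax freely (unconstrained learning), matching the limiting cases described in the reviews.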
Also, we did not tune any hyperparameters and used the same hyperparameters as in [R3], since the objective was to provide a fair comparison. In order to see how the hyperparameters that are exclusive to our method, namely $\eta_u$ and $h(u)$, affect performance, we conducted ablations detailed below.

We also thank the reviewer for their suggestion to include more quantitative metrics. In this setup, average accuracy is not really the pertinent metric. Observe that the *average* accuracy (Response Table 1) is overall similar to the constrained approach, and consistently higher than the baseline [R3]. However, the distribution of performance among clients varies. We actually chose this experiment as one in which the differences between resilient learning and standard constrained learning are apparent. Resilient learning has (generally) less spread in the interquartile range and higher maximum spread (Response Table 2) than the constrained approach. This is precisely what the method is designed to do: sacrifice the performance of outliers to benefit the performance of the majority of the agents. In order to showcase this, we order clients by their accuracy. We then compute the fraction of clients in the resilient method that outperform equally ranked clients for baseline methods (Response Table 3).

*References*

[R1] Zhu, Hangyu, et al. "Federated learning on non-IID data: A survey." Neurocomputing 465 (2021): 371-390.
[R2] Wang, Lixu, et al. "Addressing class imbalance in federated learning." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 11. 2021.
[R3] Shen, Zebang, et al. "An agnostic approach to federated learning with class imbalance." International Conference on Learning Representations. 2021.
[R4] Durmus, Alp Emre, et al. "Federated Learning Based on Dynamic Regularization." International Conference on Learning Representations. 2021.

## Comparisons to constrained learning and discussion.
In addition to the quantitative metrics presented above, the results included both in the experimental section and appendix mainly aim to highlight the differences between our approach and existing constrained learning approaches. In doing so, we discuss the inherent trade-offs and limitations of our approach. Here we provide a brief summary and expand on how these address the fundamental aspects of our method.

**Constraint Relaxation and Relative difficulty**: This illustrates how our method adapts $u$ depending on how difficult it is to satisfy the constraint, whereas the constrained approach enforces the constraint ($u=0$) regardless of the price to pay in performance. Since the impact on performance depends on the data distribution, the most imbalanced clients (Figure 2 in main text) are relaxed more. This shows empirically that our method behaves as intended *in a learning setup*. The same happens for invariances that are not present in the dataset (Figure 8 in Appendix).

**Controlling the performance vs. relaxation trade-off**: These experiments (Figure 2 in main text, Figure 4 in the Appendix) illustrate the inherent trade-off between imposing constraints and achieving better performance in terms of the statistical risk in the objective.

**Sensitivity to Problem Specification**: These experiments (Figure 3 left in main text, Figure 6 in the appendix) highlight that our method effectively eases the challenge of specifying constraint levels in both practical setups.

**Constraint violation and Generalization**: Based on our theoretical approximation bounds, resilient learning should have better generalization. These experiments (Figure 3 right in main text, Figure 6 in the Appendix) show that this holds in practice.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed response. The comparison with existing constrained learning methods now looks reasonable and I have increased my score.
Summary: This paper introduces the concept of resilient constrained learning, which aims to find a compromise between reducing the objective loss and staying close to the original problem. This paper presents conditions for achieving a resilient equilibrium and provides equivalent formulations of resilient relaxation. This paper derives approximation and statistical guarantees for the algorithm used to find this equilibrium. It showcases its advantages in image classification tasks involving multiple potential invariances and federated learning under distribution shift. The paper's contributions include introducing a practical algorithm to compute the resilient equilibrium, determining conditions under which this equilibrium exists, and showcasing the advantages of resilient constrained learning in real-world applications. Strengths: + In terms of originality, resilient constrained learning is introduced, a novel approach to balancing multiple requirements in machine learning tasks. The idea of adapting the requirements while simultaneously solving the learning task is innovative. It addresses the challenge of specifying requirements in the presence of compromises and limited prior knowledge about the data. The paper also presents conditions for achieving a resilient equilibrium and provides equivalent formulations of resilient relaxation. + The paper's significance lies in its potential to enable machine learning solutions that better satisfy real-world requirements beyond accuracy, such as fairness, robustness, or safety. Weaknesses: - One weakness is that the paper does not comprehensively compare existing approaches to constrained learning, such as penalty-based methods or Lagrangian duality-based methods. While the article mentions these approaches, it does not compare their strengths and weaknesses with the Resilient Constrained Learning approach. 
Such a comparison could help clarify the advantages and limitations of the Resilient Constrained Learning approach and provide insights into when it is most appropriate. - Additionally, the paper could benefit from a more detailed discussion of the limitations and assumptions of the proposed method. - From my humble understanding, the inequality in eq (1) should be $P(v) \geq P(u) + p^\top (v - u_0)$. - Grammar issues: L 39. be they penalty coefficients or constraint levels Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. In what follows, we address the points raised by the reviewer.

*... the inequality in eq (1) should be...*

Thanks for the observation, it was a typo.

*...the paper does not comprehensively compare existing approaches to constrained learning,...*

We do compare with existing primal-dual algorithms for solving constrained learning problems. These algorithms have been used in recent works in both the imbalanced federated learning [R3] and invariant learning [R5] setups. All of our experimental comparisons include constrained learning:
- Resilient solutions sacrifice stringent constraints to improve overall performance. For instance, the results shown in Figures 2 (left) and 8 (in the supplementary materials) demonstrate that more stringent requirements are relaxed more, and Figures 2 (right), 7 and 8 along with Tables 1 and 4 demonstrate that these relaxations lead to performance improvements as measured by the objective loss.
- We have also included several numerical experiments to highlight properties of resilient constrained learning. For instance, Figures 3 (left), 5 and 6 show decreased sensitivity to problem specification and Figures 3 (right) and 7 show better empirical approximation of the underlying statistical problem. Both of these are properties that resilient constrained learning has by definition.

In addition, we have included new results and comparisons to further highlight the differences between our approach and constrained learning, as detailed below. We adopt as benchmarks Ratio-Loss [R2] and CLIMB [R3]. The latter is a constrained learning approach which we adopt with the modification that constraints can be relaxed as per the definition of resilient equilibrium. Also, we did not tune any hyperparameters and used the same hyperparameters as in [R3], since the objective was to provide a fair comparison.
In order to see how the hyperparameters that are exclusive to our method, namely $\eta_u$ and $h(u)$, affect performance, we conducted ablations detailed below. In this setup, average accuracy is not really the pertinent metric. Observe that the *average* accuracy (Response Table 1) is overall similar to the constrained approach, and consistently higher than the baseline [R3].

| Dataset | Imb. Ratio | Num Minority | Ratio-Loss[R2] | CLIMB[R3] | OURS |
|---------|------------|--------------|----------------|-----------|--------|
| F-MNIST | 10 | 3 | $92.5$ | $92.6$ | $93.4$ |
| F-MNIST | 20 | 3 | $94.0$ | $93.8$ | $94.4$ |
| CIFAR10 | 10 | 3 | $81.3$ | $81.5$ | $81.5$ |
| CIFAR10 | 20 | 3 | $83.4$ | $82.4$ | $82.6$ |

**Response Table 1**: Average accuracy for different setups. The imbalance ratio denotes the fraction of samples kept in minority classes. As in [R4] we use a Dirichlet distribution to allocate samples among clients, with parameter $d=0.3$.

However, the distribution of performance among clients varies. We actually chose this experiment as one in which the differences between resilient learning and standard constrained learning are apparent. Resilient learning has (generally) less spread in the interquartile range and higher maximum spread (Response Table 2) than the constrained approach.

| Dataset | Imb. Ratio | Ratio Loss[R2] | CLIMB[R3] | OURS |
|---------|------------|----------------|----------------|----------------|
| F-MNIST | 0.1 | $2.2$ $(64.1)$ | $3.5$ $(28.6)$ | $2.4$ $(50.6)$ |
| F-MNIST | 0.05 | $2.2$ $(50.2)$ | $2.7$ $(28.6)$ | $1.7$ $(45.8)$ |
| CIFAR10 | 0.1 | $8.1$ $(49.4)$ | $8.7$ $(35.8)$ | $8.7$ $(46.4)$ |
| CIFAR10 | 0.05 | $8.6$ $(44.5)$ | $8.4$ $(32.0)$ | $7.9$ $(41.1)$ |

**Response Table 2**: Client accuracy spread metrics for different setups. The first number denotes the interquartile range and the number in parentheses denotes the maximum minus the minimum accuracy, both computed across 100 clients.

This is precisely what the method is designed to do.
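For clarity on how these client-level statistics can be computed, here is a short sketch of the interquartile range and max-minus-min range (as in Response Table 2) and the rank-matched improvement fraction (as in Response Table 3). The accuracy values below are made-up placeholders, not our experimental data, and the linear-interpolation quantile rule is just one common convention.

```python
# Sketch of client-level spread and ranking metrics.
# The per-client accuracies here are illustrative placeholders.

def spread_metrics(accs):
    """Return (interquartile range, max - min) of a list of accuracies."""
    accs = sorted(accs)
    n = len(accs)
    def quantile(p):  # linear interpolation between order statistics
        h = p * (n - 1)
        lo = int(h)
        return accs[lo] + (h - lo) * (accs[min(lo + 1, n - 1)] - accs[lo])
    return quantile(0.75) - quantile(0.25), accs[-1] - accs[0]

def improved_fraction(resilient, baseline):
    """Fraction of rank-matched clients where the resilient run is better."""
    pairs = zip(sorted(resilient), sorted(baseline))
    better = sum(r > b for r, b in pairs)
    return better / len(resilient)

resilient = [70.0, 78.5, 80.0, 81.0, 81.5, 82.0, 82.5, 83.0, 84.0, 90.0]
baseline  = [75.0, 77.0, 78.0, 79.5, 80.0, 81.0, 82.0, 82.5, 83.5, 85.0]
iqr, rng = spread_metrics(resilient)
print(iqr, rng, improved_fraction(resilient, baseline))
```

In this toy example most rank-matched clients improve while the worst-off client degrades, which mirrors the trade-off described in the text.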
It sacrifices the performance of outliers to benefit the performance of the majority of the agents. In order to showcase this, we order clients by their accuracy. We then compute the fraction of clients in the resilient method that outperform equally ranked clients for baseline methods (Response Table 3).

| Dataset | Imb. Ratio | Improved (%) | Mean Improvement | Max Improvement | Mean Decrease | Max Decrease |
|---------|------------|--------------|------------------|-----------------|---------------|--------------|
| CIFAR10 | 10 | 77 | 0.4 | 1.5 | 0.5 | 10.0 |
| CIFAR10 | 20 | 79 | 0.5 | 2.1 | 0.3 | 9.1 |
| F-MNIST | 10 | 92 | 1.6 | 4.8 | 0.9 | 23.3 |
| F-MNIST | 20 | 94 | 1.3 | 2.6 | 0.7 | 19.4 |

**Response Table 3**: Changes in accuracy for equally ranked clients for the resilient method. As intended, performance improves for most clients, though at the cost of a decrease in performance for a few *outliers*.

*References*

[R1] "Federated learning on non-IID data: A survey." Neurocomputing (2021).
[R2] "Addressing class imbalance in federated learning." AAAI 2021.
[R3] "An agnostic approach to federated learning with class imbalance." ICLR 2021.
[R4] "Federated Learning Based on Dynamic Regularization." ICLR 2021.
[R5] "Automatic data augmentation via invariance-constrained learning." ICML 2023.

---

Rebuttal Comment 1.1: Title: To Reviewer iyNP: Please respond to the author rebuttal Comment: Dear Reviewer iyNP, The deadline for the author discussion period is approaching soon. Please respond to the author's rebuttal and indicate whether your concerns have been addressed. Thank you! -AC
Summary: This paper proposes a novel resilient learning approach for the constrained learning problem. In the presented approach, constraints are interpreted as nominal specifications that can be relaxed to find a better compromise between objective and requirements. The first main contribution of this paper is to relax constraints according to the sensitivity of the objective to perturbations of the constraint. The next contribution is to design a practical resilient learning algorithm based on duality and perturbation theory. Strengths: 1. The assumptions and properties of the convex relaxation cost function and the resilient equilibrium are discussed in detail in this paper, which provides theoretical support for the effectiveness of the presented approach. 2. The authors also explain why both traditional unconstrained and constrained learning can be seen as limiting cases of resilient learning. 3. The paper is well written. Weaknesses: 1. Some basic concepts should be explained more clearly, such as the definition of nominal specification. 2. The experiment of invariant learning is missing in Section 5 although in the abstract it is claimed to have been conducted. 3. The authors do not explain which constrained approaches are involved in the contrast experiment. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I cannot find the evaluation of the resilient formulation of invariant learning. Where is it? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: No limitations or potential negative societal impact are mentioned.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. In what follows, we address the main points raised by the reviewer.

*(Weakness 1) Some basic concepts should be explained more clearly, such as the definition of nominal specification.*

Thanks, we will add the clarification that nominal specification means $u = 0$.

*(Weakness 2) The experiment of invariant learning is missing in Section 5 although in the abstract it is claimed to have been conducted.*

As stated at the start of Section 5 (line 272), experimental results for invariance learning are included in Appendix G. We reproduce some of the results below for convenience.

*The authors do not explain which constrained approaches are involved in the contrast experiment*

We compare with an existing primal-dual alternating algorithm as previously used in both the imbalanced federated learning [1] and invariant learning [2] setups. Note that this algorithm is a particular case of ours (i.e., when $h$ is the indicator function of the non-negative orthant) as discussed in section 3.3.

*References*

[1] Shen, Zebang, et al. "An agnostic approach to federated learning with class imbalance." International Conference on Learning Representations. 2021.
[2] Hounie, Ignacio, Luiz FO Chamon, and Alejandro Ribeiro. "Automatic data augmentation via invariance-constrained learning." International Conference on Machine Learning. PMLR, 2023.

## Invariant Learning Results

In Table 4 (Appendix G.3.2) we compared to Augerino [4], which is a popular invariant learning method. On an easy dataset like MNIST our approach shows similar performance, whereas on a more challenging dataset like FMNIST -- except for the fully rotated version -- our method performs statistically significantly better.
| Dataset | Method | Fully Rotated | Partially Rotated | Translated | Scaled | Original |
|---------|---------------|---------------------------|---------------------------|---------------------------|---------------------------|---------------------------|
| MNIST | Augerino | $\mathbf{97.78 \pm 0.03}$ | $96.38 \pm 0.00$ | $94.65 \pm 0.01$ | $97.53 \pm 0.00$ | $98.44 \pm 0.00$ |
| MNIST | Unconstrained | $94.49 \pm 0.12$ | $96.25 \pm 0.13$ | $94.64 \pm 0.20$ | $97.47 \pm 0.03$ | $98.45 \pm 0.06$ |
| MNIST | Constrained | $94.55 \pm 0.18$ | $96.90 \pm 0.07$ | $93.74 \pm 0.07$ | $97.92 \pm 0.15$ | $98.74 \pm 0.08$ |
| MNIST | Resilient | $95.38 \pm 0.18$ | $\mathbf{97.19 \pm 0.09}$ | $\mathbf{95.21 \pm 0.15}$ | $\mathbf{98.20 \pm 0.04}$ | $\mathbf{98.86 \pm 0.02}$ |
| F-MNIST | Augerino | $\mathbf{85.28 \pm 0.54}$ | $81.48 \pm 0.49$ | $81.13 \pm 0.77$ | $83.17 \pm 0.46$ | $90.09 \pm 0.20$ |
| F-MNIST | Unconstrained | $77.94 \pm 0.06$ | $81.57 \pm 0.36$ | $79.23 \pm 0.17$ | $82.99 \pm 0.18$ | $90.20 \pm 0.23$ |
| F-MNIST | Constrained | $84.96 \pm 0.12$ | $85.66 \pm 0.32$ | $83.61 \pm 0.10$ | $86.49 \pm 0.09$ | $91.02 \pm 0.02$ |
| F-MNIST | Resilient | $\mathbf{85.57 \pm 0.26}$ | $\mathbf{86.48 \pm 0.15}$ | $\mathbf{85.06 \pm 0.23}$ | $\mathbf{87.26 \pm 0.14}$ | $\mathbf{91.55 \pm 0.31}$ |

*Table 4: Classification accuracy for synthetically invariant datasets. We use the same invariance constraint level $\epsilon_i=0.1$ for all datasets and transformations. We report the mean and standard deviation computed across three independent runs.*

[4] Benton, Gregory, et al. "Learning invariances in neural networks from training data." Advances in Neural Information Processing Systems 33 (2020): 17605-17616.

---

Rebuttal Comment 1.1: Title: Thank authors for their response. Comment: I think the authors' responses have addressed all my concerns. I keep my original score.
Rebuttal 1: Rebuttal: # Response to all Reviewers

We sincerely thank all reviewers for their efforts in reviewing our paper. We are glad to see that all reviewers have expressed that this is a novel and well-motivated method with high potential impact in practical applications. We are also thankful for the reviewers' insights and feedback, which we find pertinent and valuable. We have carefully addressed your comments and suggestions and hope that the forthcoming exchanges and discussions can lead to further improvements.

The major concern about our work is whether there is sufficient empirical evaluation of resilient constrained learning. We believe that our experiments provide sufficient empirical evaluation of resilient constrained learning. We developed this idea as a way of striking compromises in situations where multiple conflicting requirements result in poor performance. Our experiments show that this does happen in practice. Resilient solutions sacrifice stringent constraints to improve overall performance. For instance, the results shown in Figures 2 (left) and 8 (in the supplementary materials) demonstrate that more stringent requirements are relaxed more, and Figures 2 (right), 7 and 8 along with Tables 1 and 4 demonstrate that these relaxations lead to performance improvements as measured by the objective loss.

That said, the reviewers' comment that more evidence is required is, as we said above, pertinent and valuable. We have therefore added new results based on the reviewers' feedback. In particular, we have performed the following additional experimental analyses:

- *Response Table 1*: We compare the average performance of resilient constrained learning with benchmarks for federated learning with class imbalance. It is notable that average performance is comparable with benchmarks.
- *Response Table 2*: We show spread metrics of accuracy -- maximum range and interquartile range -- across clients.
Constrained resilient learning reduces the interquartile range at the cost of increasing the maximum range. I.e., it improves the accuracy of most clients at the cost of reducing the accuracy of a few.
- *Response Table 3*: We rank clients according to their realized losses in standard constrained learning and resilient constrained learning. We then compare the relative performance of equally ranked clients. We see that a large fraction of clients have 1% to 5% better accuracy at the cost of possibly substantial decreases in the accuracy of a few outlier clients.
- *Response Tables 4 and 5*: We consider cost functions $h(u) = \| u\|_\beta$ with varying values of $\beta$ in the federated and invariant learning setups.
- *Response Table 6*: Ablation on dual and resilient learning rates for the federated learning setup, showing that our method is not overly sensitive to these hyperparameters.

We also highlight in the individual responses several numerical experiments that were already included in the supplementary materials of our submission. The reviewers' feedback suggests that having some of these results in the main body of the paper would make for a stronger contribution. We will streamline the presentation of the results in future versions of the manuscript in order to include some of these results in the main body.

We have also included several numerical experiments to highlight properties of resilient constrained learning. For instance, Figures 3 (left), 5 and 6 show decreased sensitivity to problem specification and Figures 3 (right) and 7 show better empirical approximation of the underlying statistical problem. Both of these are properties that resilient constrained learning has by definition.

Pdf: /pdf/a02f743a720527c2a9357c7e5f7684f7b69ec95f.pdf
NeurIPS_2023_submissions_huggingface
2023
Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale
Accept (poster)
Summary: This work presents a text-conditional speech synthesis model with an optimal transport path and conditional flow matching. The model is trained with a large-scale cross-lingual dataset for zero-shot style transfer and content editing.

Strengths: The work adopts a conditional normalizing flow with an optimal transport path for speech synthesis. The proposed model can synthesize and edit speech with different styles.

Weaknesses: Although this work successfully executes zero-shot TTS and speech editing, I have doubts about the originality of this work, and I think that the authors should have conducted more comparisons with recently proposed TTS models (not YourTTS, not Vall-E) and with speech editing or speech correction papers. I have some comments on this work.

1. The concept of training the model by infilling speech given audio and text has already been presented in many works. First, SpecAugment [1] presented time masking for context learning and Conformer [2] successfully adopted it for the ASR task. Self-supervised speech representation models such as wav2vec 2.0 also successfully learned context with masking. Moreover, there are many similar models which can edit speech conditioned on text information in the speech editing [3] [4] and correction domains [5] [6], and also in the TTS domain [7] [8]. The authors only adopt the conditional normalizing flow with an optimal transport path for text-to-speech.

[1] Park, Daniel S., et al. "Specaugment: A simple data augmentation method for automatic speech recognition." Interspeech, 2019.
[2] Gulati, Anmol, et al. "Conformer: Convolution-augmented transformer for speech recognition." Interspeech, 2020.
[3] Tae, Jaesung, Hyeongju Kim, and Taesu Kim. "EdiTTS: Score-based editing for controllable text-to-speech." Interspeech, 2022.
[4] Wang, Tao, et al. "Campnet: Context-aware mask prediction for end-to-end text-based speech editing."
IEEE/ACM Transactions on Audio, Speech, and Language Processing 30 (2022): 2241-2254.
[5] Tan, Daxin, et al. "CorrectSpeech: A Fully Automated System for Speech Correction and Accent Reduction." ISCSLP, 2022.
[6] Fong, Jason, et al. "Speech Audio Corrector: using speech from non-target speakers for one-off correction of mispronunciations in grapheme-input text-to-speech." Proc. Interspeech 2022 (2022): 1213-1217.
[7] Ao, Junyi, et al. "Speecht5: Unified-modal encoder-decoder pre-training for spoken language processing." ACL, 2022.
[8] Wang, Tao, et al. "Non-Autoregressive End-to-End TTS with Coarse-to-Fine Decoding." INTERSPEECH. 2020.

2. Specifically, diffusion-based models (EdiTTS [3] and Guided-TTS [9]) can edit speech even without training. The authors should have compared the conditional normalizing flow with a diffusion-based model. However, the authors only state that the OT path leads to faster training, faster generation, and better performance compared to diffusion paths. I wonder if the conditional normalizing flow with the OT path is better than diffusion-based models.

[9] Kim, Heeseung, Sungwon Kim, and Sungroh Yoon. "Guided-tts: A diffusion model for text-to-speech via classifier guidance." ICML, 2022.

3. The audio quality is not good on the demo pages.

4. For a fair comparison, all models should be trained with the same dataset.

5. YourTTS is not a good model for zero-shot text-to-speech. The audio quality of YourTTS is bad. I recommend training VITS with the reference encoder you used and the same configuration. In addition, transferring the voice style from a reference audio, not a speaker ID, is utilized in many works. For a fair comparison, the same transfer method should be used for each model.

Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. The authors utilized a 100 Hz frame rate to extract the high-resolution Mel-spectrogram.
For a fair comparison, each model should also be trained with the same time resolution for the Mel-spectrogram. For example, VITS and YourTTS should be trained with the linear spectrogram extracted at a 100 Hz frame rate for high quality. However, these details are not described for the baseline models.

2. Are there any scenarios for automatic noise removal or editing? In this work, the location of audio for noise removal or editing should be segmented by the user. [5] presented a fully automated speech correction scenario with detection, correction, and generation. It would be nice to incorporate ASR into this work for an automatic speech editing system.

3. I think replacing the HiFi-GAN with BigVGAN could improve the audio quality.

4. For the AR model, the dropout in the pre-net of the decoder may improve the diversity of speech. Also, VITS and YourTTS can increase the sample diversity by controlling the temperature T. The details you used should be described.

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: They stated the limitations and potential negative societal impact in the Conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
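As background for point 2 of the review above: the OT-path conditional flow matching objective that the review contrasts with diffusion can be sketched in a few lines. This is a generic scalar toy following the flow matching formulation in the literature, not Voicebox's actual implementation; `sigma_min` and all variable names are our own choices.

```python
# Optimal-transport conditional path (generic flow matching sketch, not
# Voicebox's code). Given a noise sample x0 and a data sample x1:
#   x_t = (1 - (1 - sigma_min) * t) * x0 + t * x1
# and the regression target for the learned vector field is
#   u_t = x1 - (1 - sigma_min) * x0,  which is constant in t for this path.
sigma_min = 1e-4  # assumed small floor on the path's noise scale

def ot_point_and_target(x0, x1, t):
    xt = (1 - (1 - sigma_min) * t) * x0 + t * x1
    target = x1 - (1 - sigma_min) * x0
    return xt, target

x0, x1, t = 0.7, 2.0, 0.3  # toy scalars standing in for spectrogram frames
xt, u = ot_point_and_target(x0, x1, t)

# Sanity check: the target equals the path velocity d(x_t)/dt.
eps = 1e-6
xt_eps, _ = ot_point_and_target(x0, x1, t + eps)
print(abs((xt_eps - xt) / eps - u) < 1e-4)  # True
```

Because the OT path is linear in $t$, the regression target is constant along the path, which is one intuition for why the authors report faster training and generation than with diffusion-style paths.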
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address common questions in the Author Rebuttal above and other questions here. We will incorporate all feedback in the final version.

**1. The concept of training the model by infilling speech given audio and text has already been presented in many works. SpecAugment [1] presented time masking for context learning and Conformer [2] successfully adopted it for the ASR task. Self-supervised speech representation models such as wav2vec 2.0 also successfully learned context with masking.**

We would like to emphasize that this work focuses on building a **generalist speech generation model**. In contrast, **SpecAugment, Conformer, and wav2vec 2.0 are ASR/representation learning models that do not infill speech and cannot generate speech. They should not be considered related work.** [1] and [2] use masking for regularization. wav2vec 2.0 is similar to CPC, which masks to create different views for contrastive learning.

**2. There are many similar models which can edit speech conditioned on text information in the speech editing [3] [4] and correction domains [5] [6], and also in the TTS domain [7] [8]. The diffusion-based models (EdiTTS [3] and Guided-TTS [9]) can edit speech even without training.**

**CampNet [4] is very similar to A3T, which we have already compared with.** Both assume a deterministic input/output mapping, preventing speech infilling from generalizing to longer spans. The key differences are that [4] does not require alignment during training, and uses a two-stage NAR decoder to refine the spectrogram. **[3-9] are all very different from Voicebox and cannot be compared in the same setup.** [3] and [6] require an audio sample containing the new text to be swapped in for editing. To edit "I'm happy" into "I'm heavy", [3] requires an audio sample of the same speaker saying "heavy", while [6] requires a sample of "heavy" that can be from a different speaker.
[5] presents an automatic way to align source text with target text and reuses [4] for editing, which is complementary but orthogonal to Voicebox. SpeechT5 [7] is an unsupervised pre-training framework, which requires separate fine-tuning to perform each task and cannot edit speech or perform in-context style transfer. [8] is a regression-based NAR TTS model evaluated on a 20-hour single-speaker dataset and does not edit speech. It would suffer from similar issues as A3T when trained on diverse data. [9] did not show that it is capable of editing. **3. Each model should be trained with the same time resolution for the Mel-spectrogram. For example, VITS and YourTTS should be trained with the linear spectrogram extracted at a 100 Hz frame rate for high quality.** **What the reviewer suggests is similar to Glow-TTS + HiFi-GAN. We believe VITS serves as a stronger baseline for flow-based TTS,** given that VITS has been shown to be better (Tables 1 and 3 in the VITS paper). The frame rate of Glow-TTS is 22,050 / 256 ≈ 86 Hz, which is close to the 100 Hz we used. Moreover, higher time resolution does not necessarily mean better quality; it depends on whether a model has enough capacity. **4. For the AR model, dropout in the pre-net of the decoder may improve the diversity of speech. Also, VITS and YourTTS can increase sample diversity by controlling the temperature T.** **Pre-net dropout, which serves as regularization, can lead to stochastic behavior, but the output diversity is limited.** Moreover, AR models like Tacotron still make an overly strong conditional independence assumption, where each feature dimension is conditionally independent for a given time step. For VITS trained on diverse speech without a speaker encoder, one can draw samples from the low-dimensional Gaussian prior to generate diverse samples. We have presented results of this in common comment 3 above. We follow the recommended temperature setup for YourTTS for sampling.
It can be seen in Tables 5 and 6 that the gap to Voicebox is huge (FSD: 277.9 vs. 159.8; test-o WER: 54.6% vs. 8.3%). YourTTS also conditions generation on a speaker embedding inferred from a reference audio, and hence it cannot perform diverse sampling without conditioning on any audio, unlike Voicebox. **5. The audio quality is not good on the demo pages.** All the samples presented in the supplementary material use audio prompts from recruited volunteers, who recorded samples on their own devices and provided consent for sharing. Hence, some prompts also contain noise and are not of high quality. The quality of the produced audio samples should be compared against the quality of the audio prompt (“Voicebox Input”, “Original Speech”, and “Prompt”). This is because Voicebox transfers audio style from the prompt, including not only voice but also audio quality (noise, reverberation, etc.). We would be more than happy to discuss if the reviewer could point out any specific samples with noticeably worse quality compared to their input audio prompt. **6. Are there any scenarios for automatic noise removal or editing?** This paper did not explore automatic methods for determining noisy segments. One could consider using reference-free quality estimation methods like torchaudio-squim [1] or WADA-SNR [2] to determine which segments are noisy and should be re-generated. For editing, CorrectSpeech is applicable not only to CampNet but also to A3T and Voicebox. [1] Kumar et al. "Torchaudio-Squim: Reference-Less Speech Quality and Intelligibility Measures in Torchaudio." ICASSP’23 [2] Kim and Stern. "Robust signal-to-noise ratio estimation based on waveform amplitude distribution analysis." Interspeech’08 **7. Replacing HiFi-GAN with BigVGAN could improve the audio quality.** We thank the reviewer for their suggestion and agree that BigVGAN would likely lead to better audio quality.
We will explore this in future work, and we also note that our main contribution is orthogonal to the choice of vocoder. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thank you for your helpful response. I acknowledge that this work is the first to propose a generalized model for multiple speech tasks. Moreover, I think your model will have a good impact on speech research. That is why the authors should conduct experiments fairly for future researchers who might follow this research. I don't want anyone in the future to claim "VoiceBox experimented like this and we're just following VoiceBox". I still have concerns about the comparisons for each task. I hope that you do not cut corners by just arguing that your framework is novel. >**Zero-shot TTS experiment** I still disagree with your experiments because YourTTS is not a good zero-shot TTS model. The audio quality is not good and the models are trained on different datasets. It is not fair to compare it with your model. The authors also utilize an additional duration model. >**Diverse speech generation experiment** You just compared the model with VITS-LJ and VITS-VCTK. I cannot agree that your model is better than others based on the results of this experiment. You should train the model on the same dataset for a fair comparison. >**To generalize speech infilling, any powerful non-autoregressive generative models, including diffusion models, should work. We chose flow-matching (FM) with optimal transport (OT) path because [1] showed that FM w/ OT > FM w/ diffusion > score-matching (SM) w/ diffusion (the typical diffusion model) on training speed and inference compute-quality trade-off. See the comparison in Table 1 and Fig 4-7 in [1].** If you're saying this, adopting the flow-matching (FM) with optimal transport (OT) path is not your contribution.
The authors should have compared all scenarios (FM w/ OT, FM w/ diffusion, and SM w/ diffusion) to verify that FM w/ OT is also the better method for speech generation tasks. > **Duration Modeling** I also have additional doubts about the comparison of different models. In the Appendix, the WER results of the flow-matching and regression models are almost the same in Table B3. These results show that the model trained with your dataset simply has a lower WER, so you should train VITS or other models with the same dataset you used. I think duration modeling with a large-scale dataset improves pronunciation. For a fair comparison, VITS with duration modeling should be compared. Specifically, VITS just utilizes MAS for efficient training without external duration modeling. In addition, there are many works which utilize VITS with external duration modeling to improve performance. [1] Ju, Y., Kim, I., Yang, H., Kim, J.-H., Kim, B., Maiti, S., Watanabe, S. "TriniTTS: Pitch-controllable End-to-end TTS without External Aligner." Proc. Interspeech 2022, 16-20, doi: 10.21437/Interspeech.2022-925 [2] Lim, D., Jung, S., Kim, E. "JETS: Jointly Training FastSpeech2 and HiFi-GAN for End to End Text to Speech." Proc. Interspeech 2022, 21-25, doi: 10.21437/Interspeech.2022-10294 [3] Zhang, Yongmao, et al. "VISinger: Variational Inference with Adversarial Learning for End-to-End Singing Voice Synthesis." ICASSP 2022, IEEE. [4] Shirahata, Y., Yamamoto, R., Song, E., Terashima, R., Kim, J.-M., Tachibana, K. "Period VITS: Variational Inference with Explicit Pitch Modeling for End-To-End Emotional Speech Synthesis." ICASSP 2023, Rhodes Island, Greece, pp. 1-5, doi: 10.1109/ICASSP49357.2023.10096480. Basically, I like the concept of this paper.
However, the current manuscript does not conduct a fair comparison. I encourage the authors to add additional ablation studies to the paper. Unfortunately, they did not conduct any experiments I suggested, so I cannot take any action at this stage. --- Rebuttal 2: Title: Thank you for your follow-up comments (1/2) Comment: We thank the reviewer for carefully reading our response and sharing additional comments. We are glad that the reviewer acknowledged the novelty of Voicebox and its positive impact on future speech research. We wholeheartedly agree with the reviewer that proper comparisons are required and conclusions should be drawn carefully. **Respectfully, we disagree with the reviewer’s comments that the “current manuscript does not conduct a fair comparison” and that “[the authors] did not conduct any experiments [the reviewer] suggested.”** In our initial author response, we pointed out where some of the suggested experiments can be found in the original manuscript and added additional ablation studies. We also kindly asked for clarification on one experiment the reviewer suggested (training VITS with the same reference encoder), because the initial suggestion was not feasible. Unfortunately, we have not received clarification and are not able to conduct that experiment. We address each comment with our itemized responses below. --- > **Comment 1:** [Diverse speech generation experiment] You just compared the model with VITS-LJ and VITS-VCTK. I cannot agree that your model is better than others based on the results of this experiment. You should train the model on the same dataset for a fair comparison.
- **This is not true.** We also compared with (a) YourTTS trained on LibriTTS + 2 other datasets, (b) A3T trained on VCTK, (c) Voicebox trained on 1K hours of audiobooks, (d) Voicebox with the A3T objective trained on 1K hours of audiobooks, and (e) VITS trained on 1K hours of audiobooks. - (a) and (b) are presented in Tables 5 and 6 of the paper, (c) and (d) are presented in Table B3 of the appendix, and (e) is presented in the rebuttal (“Global Author Response”, Comment 3). - **(c), (d), and (e) are all trained with the same dataset**, and we can draw the conclusion that Voicebox is the best in that controlled setup. --- > **Comment 2:** [Zero-shot TTS experiment] I still disagree with your experiments because YourTTS is not a good zero-shot TTS model. The audio quality is not good and the models are trained on different datasets. It is not fair to compare it with your model. The authors also utilize an additional duration model. - **YourTTS and VITS also have a duration model** that is jointly trained (Sec 2.2.2 in VITS). Hence, Voicebox does not utilize an additional duration model. - As pointed out in the Global Author Response (Comment 3), we believe **comparing Voicebox and VALL-E is fair**, as it meets the reviewer's criteria: both are trained on the same dataset and adopt the same style transfer method. VALL-E also represents the most recent and strongest baseline on ZS-TTS. - **The reviewer initially suggested that we should compare with other baselines instead of VALL-E, despite VALL-E meeting all the criteria listed.** We kindly asked the reviewer to provide details on this comment so that we could better address the concern. Unfortunately, we did not receive clarification. - **It is unclear to us what the reviewer would consider a fair comparison between Voicebox and YourTTS/VITS.** - **YourTTS uses a pre-trained speaker embedder.** It still would not be a fair comparison even if we trained YourTTS on the same dataset.
- **We have trained a vanilla VITS on the same dataset**, but it is not capable of zero-shot TTS. We compared it on the diverse speech generation task. - As explained in the “Global Author Response”, Comment 3, **Voicebox does not have an explicit encoder, so the suggestion of “training VITS with [our] reference encoder” is not feasible. We kindly requested clarification but did not receive a response.** Moreover, **such a model, even if feasible, would already deviate from the original VITS/YourTTS.** It changes not only the training data, but also the task (from TTS to text-guided infilling) and the model architecture. At that point, it should not be considered a comparison with an existing baseline. - **We present a new experiment here to address the concern about the comparison with YourTTS in terms of data.** We trained Voicebox on LibriTTS with the reduced setup described in Appendix Section B3, which is strictly a subset of what YourTTS is trained on (LibriTTS, VCTK, PT, FR SCL). Results suggest that Voicebox is still significantly better: | Model | ZS-TTS WER (lower is better) | ZS-TTS SIM-o (higher is better) | | -------- | ------- | ------- | | Voicebox (trained on LibriTTS, reduced setup) | 2.1% | 0.579 | | YourTTS | 7.7% | 0.337 | --- Rebuttal 3: Title: Thank you for your follow-up comments (2/2) Comment: > **Comment 3:** In the Appendix, the WER results of the flow-matching and regression models are almost the same in Table B3. These results show that the model trained with your dataset simply has a lower WER, so you should train VITS or other models with the same dataset you used. - **We highlight that the speaker similarity has a big gap between the flow-matching model and the regression model** (0.597 vs. 0.520 SIM-r, higher is better), and similarly for sample diversity (242.5 vs. 278.8 FSD, lower is better).
- **We have trained a vanilla VITS on the same dataset and presented the diverse speech generation results in our initial response** (“Global Author Response”, Comment 3). The WER is 16.6% and the FSD is 311.75. --- > **Comment 4:** If you're saying this, adopting the flow-matching (FM) with optimal transport (OT) path is not your contribution. The authors should have compared all scenarios (FM w/ OT, FM w/ diffusion, SM w/ diffusion) to verify that FM w/ OT is also the better method for speech generation tasks. - Precisely because this is not our main contribution, we focus on comparing our method (a gradient-based NAR generative model) with token-based autoregressive models (VALL-E) and regression-based non-autoregressive models (A3T, CampNet), highlighting the contrast in task generalization, performance, and inference efficiency among these model families. - **We present new experiments here comparing the three gradient-based methods, and confirm that FM w/ OT indeed has the best training and inference efficiency.** We adopt the same ablation setup as Appendix Section B.3 (with the loss computed on all frames and a smaller learning rate, 1e-4, to ensure convergence for all three methods). We vary the number of training and inference steps. Exp 1: Training for 50K / 100K / 150K updates; inference with 64 NFEs. **We see that FM w/ OT achieves the best performance with 100K training steps, and even outperforms SM w/ diffusion using only 50K updates.** | Model | ZS-TTS WER (upd=50K/100K/150K) | ZS-TTS SIM-o (upd=50K/100K/150K) | | -------- | ------- | ------- | | FM w/ OT (proposed) | 2.5% / 2.2% / 2.1% | 0.424/0.487/0.508 | | FM w/ diffusion | 76.0% / 3.1% / 2.6% | 0.066/0.344/0.478 | | SM w/ diffusion | 73.3% / 17.4% / 5.1% | 0.062/0.176/0.349 | Exp 2: Training for 150K updates; inference with 8/16/32/64 NFEs.
**We see that FM w/ OT can produce good results with just 8 NFEs, while FM w/ diffusion requires at least 16 NFEs and SM w/ diffusion requires over 64 NFEs.** | Model | ZS-TTS WER (NFE=8/16/32/64) | ZS-TTS SIM-o (NFE=8/16/32/64) | | -------- | ------- | ------- | | FM w/ OT (proposed) | 2.4% / 2.2% / 2.2% / 2.1% | 0.410/0.481/0.503/0.508 | | FM w/ diffusion | 11.5% / 3.0% / 2.7% / 2.6% | 0.171/0.359/0.447/0.478 | | SM w/ diffusion | 94.5% / 42.3% / 11.5% / 5.1% | 0.054/0.076/0.218/0.349 | Results on other setups (all inference NFE and training step combinations on ZS-TTS and diverse speech generation) show the same trend. We will add these results to the appendix in the final revision. --- Given the positive feedback from the reviewers on novelty, potential impact, and our efforts in addressing the concerns raised, we feel that the scores do not accurately reflect our work. We kindly request that the reviewer consider revising the scores to better align with the feedback provided. If there are specific areas where we can further improve to justify a higher score, we would greatly appreciate additional insights or guidance. --- Rebuttal Comment 3.1: Title: Thanks for your response Comment: Thanks for your effort. The authors' response addressed my concerns about fair experiments, and they conducted the ablation studies I suggested. Specifically, the ablation study for FM w/ OT shows the effectiveness of the proposed method. Lastly, I request that the inference speed for each model be added in the final revision. Thanks. I will increase my score from 2 to 5.
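For readers unfamiliar with the OT path compared in the experiments above, here is a minimal NumPy sketch of the conditional flow-matching construction with an optimal-transport path (following Lipman et al.; the function name, `sigma_min` value, and feature dimensionality are our own illustrative choices, not the paper's exact implementation):

```python
import numpy as np

def cfm_ot_target(x0, x1, t, sigma_min=1e-5):
    """OT-path conditional flow matching: linearly interpolate a noise
    sample x0 toward a data sample x1, returning (x_t, target velocity).
    A model v_theta(x_t, t) would be regressed onto u_t with an MSE loss."""
    x_t = (1.0 - (1.0 - sigma_min) * t) * x0 + t * x1
    u_t = x1 - (1.0 - sigma_min) * x0
    return x_t, u_t

# At t=0 the path starts at the noise sample; at t=1 it (almost) reaches data.
x0 = np.random.randn(80)   # e.g., one mel-spectrogram frame of noise
x1 = np.random.randn(80)   # the corresponding data frame
x_start, _ = cfm_ot_target(x0, x1, 0.0)
x_end, _ = cfm_ot_target(x0, x1, 1.0)
```

One intuition for the NFE results above: along the OT path the target velocity `u_t` is constant in `t`, so the learned vector field tends to be nearly straight and an ODE solver needs fewer evaluation steps.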
Summary: The paper proposes Voicebox, a text-guided generative model for speech at scale. By abstracting many speech tasks into speech infilling tasks, Voicebox is able to conduct zero-shot text-to-speech synthesis, noise removal, content editing, style conversion, and diverse sample generation in monolingual or cross-lingual scenarios. Experiments show that, equipped with advanced techniques such as CNFs and classifier-free guidance, Voicebox shows better performance than baseline methods on many tasks. Strengths: 1. The paper abstracts many different speech tasks into a speech infilling task. 2. The paper scales up the model to 50K hours of speech and multilingual scenarios, where the model is able to perform in-context learning of speech style. 3. The proposed method is well-studied on many tasks, including zero-shot TTS, speech enhancement, speech editing, and style generation. The experiments on ASR also show the potential value of the proposed method. Weaknesses: About the experiments in Section 5.2, it is unfair to compare VB with speech enhancement models since VB has access to the text of the noised speech, which significantly helps the model generate speech with a lower WER. Also, the text of the noised speech is usually not accessible in the conventional formulation of speech enhancement. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. As Table 5 shows, VB is able to generate speech with different styles. Can this feature be used in Table 6? To be more specific, if VB synthesizes more than one sample per text from the Librispeech training set, can the corresponding WER be further reduced? 2. Is there any insight into using Continuous Normalizing Flows to model speech infilling rather than other generative models such as diffusion models? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have discussed the limitations in conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address common questions in the Author Rebuttal above and other questions here. We will incorporate all feedback in the final version. **1. As Table 5 shows, VB is able to generate speech with different styles. Can this feature be used in Table 6? To be more specific, if VB synthesizes more than one sample per text from the Librispeech training set, can the corresponding WER be further reduced?** **Yes.** We conducted such experiments before using an earlier version of the model (trained on 1K hours, with a different forced aligner, word-position-independent phones, and a regression duration model). We generated three copies of the Librispeech training set, trained one ASR model on each copy, and trained another model on the combination of the three copies. The WERs of the first three ASR models are 5.34%/5.38%/5.54% on test-clean and 15.40%/15.79%/15.83% on test-other. The WERs of the ASR model trained on the combination of the three copies are 4.43% and 12.37%, respectively. These results demonstrate the potential of using Voicebox for data augmentation, and we plan to explore this further in future work. **2. About the experiments in Section 5.2, it is unfair to compare VB with speech enhancement models since VB has access to the text of the noised speech, which significantly helps the model generate speech with a lower WER. Also, the text of the noised speech is usually not accessible in the conventional formulation of speech enhancement.** We have noted in the text that “It should be noted that A3T and Voicebox utilize transcript and location of the noise while Demucs does not.” We will revise this text to emphasize this difference more. We agree that this is an unconventional setup for speech enhancement, where the transcript is usually unavailable. That said, we believe it is still a worthy application, as transcripts can be available in some scenarios.
For example, it can be used when one records a scripted speech or when a robust audio-visual speech recognition model is available, which can transcribe accurately even at a very low SNR.
Summary: This paper proposes Voicebox, a non-autoregressive flow-matching model designed to infill speech by leveraging given audio context and text. Notably, Voicebox capitalizes on a substantial amount of data, consisting of 50,000 hours of speech, which contributes to its impressive performance across various speech generation tasks. The model demonstrates notable capabilities in generating coherent and high-quality speech outputs. Strengths: 1. The paper exhibits a comprehensive and well-executed evaluation, yielding impressive experimental results. 2. The authors have effectively presented their work with clear and accessible writing, ensuring ease of understanding for readers. 3. The proposed framework showcases a high level of flexibility, enabling its application to various speech generation tasks and settings. Weaknesses: 1. The demos provided in the supplementary materials are not as satisfying as described in the paper. For example, issues can be found in the perceived speaker similarity in the zero-shot TTS task and the quality of the articulation position of the mask. It would be beneficial if the authors engaged in further discussion of these phenomena. 2. Given that speech encompasses various components (such as prosody, content, timbre, and noise), it would be better to have a more comprehensive discussion of how Voicebox specifically handles these different aspects. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It is unclear from the paper what motivated the authors to choose the flow-matching model as the audio model. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors are encouraged to add some discussion of the potential negative societal impact.
Flag For Ethics Review: ['Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address common questions in the Author Rebuttal above and other questions here. We will incorporate all feedback in the final version. **1. The demos provided in the supplementary materials are not as satisfying as described in the paper. For example, issues can be found in the perceived speaker similarity in the zero-shot TTS task and the quality of the articulation position of the mask. It would be beneficial if the authors engaged in further discussion of these phenomena.** All the samples presented in the supplementary material use audio prompts from internally recruited volunteers, who recorded samples on their own devices and provided consent for processing and sharing the samples online. Moreover, many of these speakers are non-native and have accents. These audios are more out-of-domain relative to the training data, which may contribute to the perceptually lower similarity. VALL-E observed similar results when tested on VCTK, which is out-of-domain relative to their training data (LibriVox). We will include audio samples for comparison with VALL-E if we can obtain consent for the prompts they used. In terms of the quality of the articulation position of the mask, we are unsure which samples have the issue the reviewer mentions. We would be more than happy to discuss if the reviewer could provide more details! **2. Given that speech encompasses various components (such as prosody, content, timbre, and noise), it would be better to have a more comprehensive discussion of how Voicebox specifically handles these different aspects.** Voicebox decouples speech into textual content and audio style, where audio style encompasses everything other than textual content. To generate an audio sample, the textual content is specified by the phone transcript, and the audio style is specified through the surrounding audio (audio context).
Voicebox does not differentiate between different aspects of audio style (voice, noise, emotion, etc.) and does not need such labels either. Through learning to infill from large quantities of data, Voicebox learns that these attributes tend to be consistent within an utterance, and it can infer the audio style of the target given the context. We can take the first “Transient Noise Removal” sample on the demo page as an example (the transcript starts with “in zero weather in mid-winter…”). The “Voicebox Input” audio shows the audio context input to the model. From it, we hear a feminine voice speaking calmly, and there is static noise that is particularly noticeable when the speaker speaks. We can find the same audio style (low static noise, voice, slow pace, calm emotion) in the generated segments (“to a great depth below the surface when in driving over the”) presented in the “Voicebox Output” audio. **3. Authors are encouraged to add some discussion of the potential negative societal impact.** We have included some discussion of potential negative societal impact and mitigation in Section 6. We expanded the discussion in the “Ethics Review” thread above and will incorporate it in the final version.
Summary: This paper proposes VoiceBox, a speech infilling model based on a flow-matching generative model. VoiceBox is trained to fill in masked speech based on unmasked speech and given text, and it can perform various tasks depending on how the mask is applied to the speech during inference. VoiceBox allows for speech editing by masking the speech corresponding to the text portion that needs to be edited, and it can also perform zero-shot TTS by infilling the speech for the desired text. By masking the entire speech during generation, the model can generate voices from various speakers in a speaker-unconditional manner for a given sentence. This paper demonstrates significantly improved performance compared to existing models in various tasks and even shows that a generative model for speech can aid in performance improvement in speech recognition tasks through diverse speech generation. Strengths: * This paper demonstrates the versatility of VoiceBox across various tasks while achieving impressive performance. * This paper demonstrates that utilizing synthesized speech generated by VoiceBox can improve the generalizability of ASR models, thereby showing its ability to effectively model the distribution of general speech. * In the zero-shot TTS task, VoiceBox particularly outperforms the previous state-of-the-art model, VALL-E, by a significant margin. * The use of a flow-matching generative model enables fast sampling. * The extensive experimental results provide strong support for the advantages of the model. Weaknesses: * For zero-shot TTS and speech editing, both the duration model and audio model require prompting with the transcript of the reference speech as well as the duration per phoneme. This necessitates the use of MFA (Montreal Forced Aligner) during inference. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * Despite VALL-E already demonstrating impressive zero-shot TTS performance, VoiceBox appears to have considerably higher SIM-o and SIM-r scores. It is difficult to determine that VoiceBox's zero-shot TTS samples are significantly better than VALL-E's when listening to only the VoiceBox samples in the demo. For a one-to-one comparison with VALL-E, would it be possible to provide VoiceBox samples generated with the same prompts as VALL-E's demo samples? * In order to calculate SIM-r as mentioned in the paper, were both the reference speech prompt and all generated samples encoded and decoded using Encodec? If calculated differently, please provide an explanation. Additionally, when measuring SIM-o or SIM-r with the reference speech, was the similarity measured with the speech prompt only or with the entire reference speech? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: This paper provides additional experiments on a classifier designed to detect potential misuse in order to mitigate such risks. Flag For Ethics Review: ['Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address common questions in the Author Rebuttal above and other questions here. We will incorporate all feedback in the final version. **1. In order to calculate SIM-r as mentioned in the paper, were both the reference speech prompt and all generated samples encoded and decoded using Encodec? If calculated differently, please provide an explanation.** **No.** SIM-r aims to measure the similarity with respect to the audio feature target a generative model predicts. For Voicebox, we compute the mel spectrogram of the ground-truth audio and decode it into a waveform using HiFi-GAN to create the reference speech. We decode the Voicebox output mel spectrogram into a waveform using HiFi-GAN to create the generated speech. SIM-r serves as the upper bound for the audio model given the selected audio feature (Encodec codes for VALL-E / mel spectrogram for Voicebox). We noted in Section 4 that this number is not comparable when two models use different audio features or vocoders. Hence, we focus on SIM-o and argue that SIM-o should be used for comparison across papers. **2. When measuring SIM-o or SIM-r with the reference speech, was the similarity measured with the speech prompt only or with the entire reference speech?** When measuring SIM-{r,o}, the reference is the speech prompt. We confirmed this is the same protocol VALL-E used through personal communication with the authors. **3. Would it be possible to provide VoiceBox samples generated with the same prompts as VALL-E's demo samples?** The rebuttal period does not allow uploading audio samples, and we would also need explicit consent from a speaker to put samples resembling that speaker online. We will include them in the final demo if we can obtain such consent.
We also note that all the samples presented in the supplementary material use audio prompts from internally recruited volunteers, who recorded audio on their own devices. These audios would likely be more out-of-domain. **4. For zero-shot TTS and speech editing, both the duration model and the audio model require prompting with the transcript of the reference speech as well as the duration per phoneme. This necessitates the use of MFA (Montreal Forced Aligner) during inference.** We will add a discussion of this in the main paper if space permits, or in the appendix. We also note that for speech editing, a forced aligner is always needed during inference to identify the location of the source word(s) to be deleted/replaced, regardless of whether a forced aligner is used during training. In that regard, the inference requirement of Voicebox is the same as in prior work. To further resolve this limitation, there are two potential solutions for future work. **First, we could use a mix of phonetic and self-supervised learning (SSL) units (e.g., HuBERT units) as the content representation, similar to [1].** Specifically, the frame-level phone units corresponding to the audio context can be replaced with SSL units, such that the transcript of the context is not needed and the duration of the SSL units can be easily derived, since SSL units are originally at the frame level. **Second, we could explore using a separate encoder for the audio context**, which the duration model and audio model would attend to through cross-attention. [1] Fong, Jason, et al. "Speech Audio Corrector: using speech from non-target speakers for one-off correction of mispronunciations in grapheme-input text-to-speech." Interspeech’22 --- Rebuttal Comment 1.1: Comment: Thank you for addressing the concerns through the rebuttal. The two additional experiments effectively demonstrate the importance of data size for Voicebox and the motivation for using flow matching over diffusion models.
I believe this will be helpful for the readers, and it would be beneficial if you could incorporate this into the paper. Overall, I will keep my original score of 7.
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback. We are glad that they found Voicebox is highly versatile and can perform many different tasks (**F9C6, K4NY, pReF, Peh7**) by abstracting them into speech infilling (**Peh7**). We are encouraged that they found Voicebox is well-evaluated (**K4NY, pReF, Peh7**), scales effectively to 50K hours and multilingual setups (**Peh7, F9C6, MQeA**), shows SoTA performance (**F9C6, K4NY, pReF, Peh7**), enables faster inference through flow-matching with OT (**K4NY**), and has potential as a data generator for training other models (**K4NY, Peh7**). We are also pleased that reviewer **pReF** found the paper easy to follow. We address common questions here and individual ones in separate threads. We will incorporate all feedback in the final version. **Comment 1: Novelty (MQeA, F9C6)** > **MQeA**: “The concept to train the model by infilling speech given audio and text has been already presented in many works.” **F9C6**: “It sounds like the combination of masked-based A3T and flow-based speech synthesis model” **We highlight that no prior speech infilling model has attempted to, or is capable of, generalizing to as many tasks in the context of generative modeling as Voicebox does.** The goal of this paper is to build a generalist speech generative model that can solve many tasks without fine-tuning, just like LLMs solving many NLP tasks. Below we discuss the key novelties: 1. We show that speech infilling subsumes many tasks, which has not been presented before. While speech infilling models exist, they struggle to generate long or diverse samples, because they assume a deterministic input/output mapping and formulate infilling as a regression task. 2. We show that NAR flow-matching overcomes the deterministic mapping and speeds up training and inference. See Table B3 (our reduced setup with 150K steps achieves better WER/speaker similarity than VALL-E). See Fig. 2 for an inference time comparison. 
3. Voicebox enables unprecedented scaling and a single multilingual generative model. We added experiments in the rebuttal PDF Table 1 to show the benefit. **Comment 2: Why flow-matching w/ OT path (F9C6, pReF, Peh7, MQeA)** > **F9C6**: “use diffusion models since the masking strategy is similar to adding noise.” **pReF**: “what motivated the authors to choose the flow-matching model.” **Peh7**: “insight for using CNF to model speech infilling rather than other generative models such as diffusion models?” **MQeA**: “compare the conditional normalizing flow with a diffusion-based model.” **To generalize speech infilling, any powerful non-autoregressive generative model, including diffusion models, should work. We chose flow-matching (FM) with the optimal transport (OT) path because [1] showed that FM w/ OT > FM w/ diffusion > score-matching (SM) w/ diffusion (the typical diffusion model) in training speed and the inference compute-quality trade-off.** See the comparison in Table 1 and Figs. 4-7 in [1]. FM and SM models are in fact similar. Both transform a noise distribution to the data distribution, and learn to predict the gradient (flow and score, respectively) given a time step and a noisy sample at that time step. Diffusion and OT are simply two different paths (with the same initial and final marginal distributions). A path dictates how a sample transforms between the initial and final distribution. As shown in Figure 3 of Lipman et al. (2023), OT corresponds to a “simpler path” with constant speed and direction compared to diffusion, which the authors argue is easier to learn (faster training) and whose integration can be estimated more accurately with fewer steps (faster inference), leading to better empirical results. [1] Lipman et al. "Flow matching for generative modeling." ICLR’23 **Comment 3: Baselines (F9C6, MQeA)** > **F9C6**: “why not train [VITS] in the LibriTTS dataset?” **MQeA**: “YourTTS is not a good TTS model for zero-shot TTS models. 
The audio quality of YourTTS is bad. I recommend training the VITS with the reference encoder you used and the same configuration.” **YourTTS is exactly a VITS model trained on LibriTTS (and a few other datasets) with a reference encoder.** We have included such comparisons in Table 5, which reviewer F9C6 asked for. In addition, Voicebox with a reduced setup (12 layers, trained on 1K hours) in Table B3 in the appendix still outperforms VITS/YourTTS. Moreover, we trained a VITS on LibriSpeech without a reference encoder, which leads to much worse results (16.64% WER and 311.75 FSD for Table 5). **Unfortunately, we cannot train VITS with the same reference encoder because Voicebox doesn't have an explicit encoder.** Voicebox is a decoder-only model taking (text, masked audio, noisy audio) as input to predict the flow. Also, given that YourTTS is exactly VITS with a reference encoder trained on data similar to ours, we are not sure why the suggested experiment would lead to better results than YourTTS. > **MQeA**: “all models should be trained with the same dataset.” **MQeA**: “transferring the voice style from a reference audio, not speaker ID, is utilized in many works. [...] the same transferring method is used for each model.” **MQeA**: “more comparisons with the recently proposed TTS models (not YourTTS, not Vall-E)” Given that this work targets a generalist speech generative model with unprecedented scaling, we believe **YourTTS is an appropriate baseline** because it is recent (ICML’22), open-sourced, achieved SoTA prior to VALL-E on zero-shot TTS, and is trained on in-the-wild data which enables better generalization than models trained on VCTK/LJ. Furthermore, **we compared with the most related models trained with the same dataset and using the same transferring method**, namely VALL-E and A3T-style models (Section B.3 in the appendix). 
We could better address the comments if the reviewer could kindly clarify **why these are not appropriate baselines** and **what zero-shot TTS models we should compare with and why.** Pdf: /pdf/916955236cb6b7518b8ba8cd4ff60ee7d28e2162.pdf
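For readers unfamiliar with the OT conditional path referenced in Comment 2, the flow-matching training target can be written out as a short sketch (a minimal illustration following Lipman et al.'s ICLR'23 formulation; the function name and sampling details are our own assumptions, not the paper's actual code):

```python
import numpy as np

def fm_ot_training_pair(x1, sigma_min=1e-5, rng=np.random):
    """One flow-matching training example with the OT conditional path
    (sketch following Lipman et al. '23; not the paper's actual code).

    x1: data sample (e.g. a mel-spectrogram frame stack), any shape.
    Returns (t, x_t, u_t): the time step, the sample on the OT path, and
    the regression target for the vector field v_theta(x_t, t).
    """
    x0 = rng.standard_normal(x1.shape)                 # noise sample
    t = rng.uniform()                                  # time in [0, 1)
    x_t = (1.0 - (1.0 - sigma_min) * t) * x0 + t * x1  # constant-speed path
    u_t = x1 - (1.0 - sigma_min) * x0                  # target vector field
    return t, x_t, u_t
```

Because the path has constant speed and direction (note `u_t` does not depend on `t`), the ODE integration at inference can be estimated accurately with fewer steps, which is the "simpler path" argument above.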
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors present Voicebox, a text-guided generative model for speech at scale. Voicebox is a non-autoregressive flow-matching model trained to infill speech, given audio context and text. Voicebox can be used for mono- or cross-lingual zero-shot text-to-speech synthesis, noise removal, content editing, style conversion, and diverse sample generation. Strengths: It is interesting that Voicebox performs in-context learning via the masking strategy. Besides, the authors train the model on extensive data and present SOTA results on several downstream tasks. Weaknesses: 1. One of the weaknesses is novelty. It sounds like the combination of the masked-based A3T and a flow-based speech synthesis model. Besides, duration prediction models and classifier-free guidance are not new. 2. Evaluation. Firstly, why present only a portion of MOS in Table 2? VITS-LJ and VITS-VCTK are compared as baselines on the LibriTTS test-other set, but why not train on the LibriTTS dataset instead? The differences in data amount could lead to an unfair comparison. 3. Unclear presentation. Voicebox still needs a clearer illustration. Do you need tags for different task inferences? A3T finds that it is challenging to generate speech given fully-masked samples; what do you find is the most suitable proportion of masked and unmasked regions? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It could be more natural to use diffusion models since the masking strategy is similar to adding noise, so what is your consideration in using flow-matching models? 2. Citations [42] and [43] are the same Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: / Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address common questions in the Author Rebuttal above and other questions here. We will incorporate all feedback in the final version. **1. Do you need tags for different task inferences?** **No.** Voicebox performs different tasks by preparing the input differently, as we illustrated in Figure 1. We have added refined illustrations in the supplementary PDF for the rebuttal (Figures 1-3). **2. Why present a portion of MOS in Table 2?** **VALL-E is not publicly available** and there are only 8 samples from their demo page, which are not sufficient for MOS studies. We did not present A3T MOS because **the performance of A3T on WER and SIM-o is very poor**, and after listening to a number of samples we expect its MOS scores to be very poor as well. **3. A3T finds that it is challenging to generate speech given full-mask samples, and what do you find is the most suitable proportion of masked and unmasked regions?** A3T finds it challenging because it assumes the mapping between the input (text and audio context) and the output (target audio) is deterministic, as discussed in paragraph 3 of Section 2 and the response to common comment 1 above. In particular, it struggles more as the duration to infill becomes longer (the distribution of possible speech becomes broader and less well described by the mean targeted by regression). In contrast, Voicebox addresses the issue by modeling with a CNF model, which models the full distribution over possible speech rather than just the mean, as a regression model does. Hence, Voicebox is capable of infilling audio of any length. During training, we mask the entire audio with 30% probability, and with 70% probability we mask a contiguous chunk that is [70%, 100%] the length of the entire audio. During inference, it depends on the task and the input. For zero-shot TTS, the masked length is the predicted duration of the target transcript. 
For denoising, we explore infilling 50% (Table 3) and 30%/70% (Table B4 in Appendix). For diverse speech sampling (Section 5.3), it is 100% masked. **4. Citations [42] and [43] are the same** We will fix that. Thank you. --- Rebuttal Comment 1.1: Comment: Thank you for answering the questions and clarifying the details.
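The training-time masking scheme described in point 3 above (mask everything with 30% probability, otherwise mask a contiguous chunk covering 70-100% of the frames) can be sketched as follows (an illustrative sketch; the function name and exact sampling details are our assumptions, not the authors' code):

```python
import random

def sample_mask(num_frames, p_full=0.3, chunk_range=(0.7, 1.0)):
    """Sample a binary infill mask per the training scheme described above.

    With probability p_full the entire utterance is masked; otherwise a
    contiguous chunk covering 70-100% of the frames is masked.
    """
    if random.random() < p_full:
        return [True] * num_frames           # mask the whole utterance
    frac = random.uniform(*chunk_range)      # chunk fraction in [0.7, 1.0]
    length = max(1, round(frac * num_frames))
    start = random.randint(0, num_frames - length)
    mask = [False] * num_frames
    for i in range(start, start + length):
        mask[i] = True
    return mask
```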
Query-based Temporal Fusion with Explicit Motion for 3D Object Detection
Accept (poster)
Summary: This paper proposes a small component that can be added on top of single-frame 3D object detectors to perform (late) temporal fusion. This comes at a tiny computational cost and provides some performance improvements. Strengths: S1) The background, as described in this paper in the introduction and related work sections, is apt yet concise. S2) The authors identify and tackle an important problem. S3) The idea -- to aggregate queries over time -- should be appreciated for its simplicity and is different from the approach taken by many prior works, e.g., BEVFormer. S4) The proposed approach can be added on top of prior methods to obtain performance improvements (probably, see W4) at an insignificant computational cost. This is demonstrated on NuScenes with TransFusion and DeepInteraction. Weaknesses: W1) There are a few missing references in related work, for instance - SpatialDETR from ECCV2022. - The pioneering work DETR3D from the Conference on Robot Learning. - TrackFormer, a simple yet elegant approach to multi-object tracking that is completely query-based. W2) One of the core motivations of the proposed approach is that BEV-based methods propagate information about the /background/ through time, which is unnecessary (Section 1 second paragraph and Section 2 last paragraph). This hypothesis is not directly tested. I would have expected an experiment with BEV-based fusion in which the background was removed, to show that this is indeed unnecessary. W3) Section 3.2, which constitutes the core of the proposed approach, is very difficult to understand (or possibly incorrect!). See questions Q1 to Q5 below. W4) The performance improvements are rather small (0.5 to 0.9 NDS). There is no analysis of the standard deviation between different trainings, so it is difficult to appreciate whether these improvements are statistically significant. W5) There is no analysis of in what scenarios the proposed approach provides performance improvements. 
While perhaps not strictly necessary, the performance improvements of the proposed approach are rather small and it is impossible to know whether this is because scenarios where temporal fusion is necessary are scarce in NuScenes or whether the proposed approach does not help so much in such scenarios. A more thorough analysis would be helpful to shed some light on the actual performance of the proposed approach. Some nitpicks) - The equations have poor formatting, always starting with a colon and not ending with a comma/period. Moreover, operators are written as variables, e.g., $and$ instead of $\text{and}$. - $R$ is perhaps not the best choice for the transformation from world-coordinates to ego-coordinates, because $R$ is often used for rotations. In contrast, we also have a translation. - Equation 2 seems incorrect to me in that it behaves as if $R_{t-1}^t$ was in homogeneous coordinates but I suspect that $C_{t-1}$ is not. Moreover, I think it is more common to let $C_{t-1}'\in\mathbb{R}^{3}$ than to put $C_{t-1}'\in\mathbb{R}^{1\times 3}$ in order to have it left of the transformation matrix. - I think "mathcal" is usually used for sets or operators. It is a bit confusing to see it for the mask and attention matrices. - It is both clearer and more concise to write $A\in\mathbb{R}^{N_t\times N_{t-1}}$ than "matrix with shape of $N_t\times N_{t-1}$". Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Q1) How is $C_t$ obtained? Is it a fixed anchor (like in AnchorDETR), a learnable anchor, or computed from $Q_t$ (like in DETR)? Q2) What are the details of the transformer architecture? Is it DETR, AnchorDETR, ConditionalDETR, DeformableDETR, or something else? Q3) Is it correct that at each time-step that one would run the model, the MTM is run N+1 times (as shown in Figure 2)? That would not be as efficient as a fully recurrent model. Is this correct, and was a recurrent model considered? 
Q4) I do not understand what the cost matrix is, what it is used for, or how it is computed. How is this achieved? Q5) What is $C_t$? In l130 it seems as if it is a matrix of shape $\mathbb{R}^{K\times 3}$ (or possibly the transpose of that). In equation 3, however, the l2 vector norm is computed, which makes it seem as if $C_t\in\mathbb{R}^3$. Q6) What is the motivation for going as low as 200 queries? Image-based detectors seem to often adopt 300 queries (e.g., ConditionalDETR) whereas 3D detectors adopt 600 to 1200 (SpatialDETR). BEVFormer adopts 40000 queries, though it uses a sparse attention mechanism. Q7) What is the standard deviation between experiments? Please remember to not seed dataloading or weight initialization, and to retrain also the TransFusion/DeepInteraction backbone. Q8) Regarding cross-attention versus MTM qualitative results, could the same be achieved by temperature scaling the cross-attention? Or perhaps by performing cross-attention with positional encodings based on $C_{t-1}'$ instead of $C_{t-1}$ (i.e., correct the position with the predicted velocity of that object)? Q9) Why is relying on a DETR-like design a limitation ("the main limitation")? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 4 excellent Limitations: The authors provide some discussion of limitations. Though, it is unclear to me why the main limitation is the reliance of a DETR-like design. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your patient and detailed review. We try to address your comments below. **Missing references:** Sorry for missing these references. We will cite them in our paper. **I would have expected an experiment with BEV-based fusion in which the background was removed:** Thanks. We conduct this experiment on the BEV-based fusion method MGTANet by removing all background using the provided ground truth. As shown in the table, we surprisingly find that there is a significant performance improvement. However, it is very difficult to thoroughly distinguish foreground and background in practical situations. | Method | mAP | NDS | | ---- | :----: | :----: | | MGTANet | 64.0 | 68.1 | | +Remove Background | 94.6 | 82.5 | **There is no analysis of standard deviation between different trainings:** Thanks. We re-train our method three times as shown in the table. The corresponding standard deviation of NDS is $3.3\times10^{-5}$, which illustrates the stability of our method. | # | mAP | NDS | | ---- | :----: | :----: | | 1 | 66.47 | 70.86 | | 2 | 66.49 | 70.86 | | 3 | 66.46 | 70.86 | **There is no analysis of in what scenarios the proposed approach provides performance improvements:** Valuable suggestion! To illustrate in what scenarios the proposed approach provides performance improvements, we provide the detailed results (mAP) for the categories Motorcycle and Bicycle (fast-moving small objects) and Traffic Cone and Barrier (static objects). We find that our method mainly improves the performance on fast-moving small objects. For static objects, our method maintains good performance. For the results on other moving objects such as Car and Truck (please refer to the details in our supplemental materials), our method also brings a promising performance improvement. We will add this analysis in the final version. 
| Method | Motorcycle | Bicycle | Traffic Cone | Barrier | | ---- | :----: | :----: | :----: | :----: | | TransFusion-L | 71.8 | 56.5 | 74.4 | 71.8 | | +QTNet | 75.5 | 61.5 | 75.4 | 71.4 | **Some errors in equations:** Thanks for pointing out these errors. We will revise them in the final version. ### Some Questions **Q1:** $C_t$ is computed from $Q_t$ by a DETR-like detection head. **Q2:** Sorry for the confusion. Actually, the DETR mentioned in the submitted paper is a DETR-like 3D detector. In particular, we select TransFusion-L as our default DETR-like detector. **Q3:** In fact, MTM runs $N$ times rather than $N+1$ times. Theoretically, although our method is not more efficient than a recurrent model, our MTM is lightweight and only brings a small amount of time overhead compared with the whole network. As shown in the table, we can clearly observe that when adopting 5 frames as input, it only brings 7.7 ms of additional time overhead. | Frames | mAP | NDS | FLOPS (G) | Latency (ms) | | ---- | :----: | :----: | :----: | :----: | | 1 | 65.0 | 70.0 | 90.65 | 138.2 | | 2 | 66.1 | 70.6 | +0.07 | +3.1 | | 3 | 66.4 | 70.8 | +0.10 | +4.5 | | 4 | 66.5 | 70.9 | +0.14 | +6.5 | | 5 | 66.4 | 70.9 | +0.24 | +7.7 | **Q4:** The cost matrix is computed from the L2 distances between object centers in the previous and current frames. It is used to generate the attention map among objects for establishing relationships. **Q5:** (i) $C_{t}\in \mathbb{R}^{N\times 3}$ denotes the centers of $N$ objects. (ii) Given the current $C_{t}\in \mathbb{R}^{N\times 3}$ and previous $C_{t-1}\in \mathbb{R}^{M\times 3}$, the L2 norm actually denotes the pairwise L2 distance between $C_{t}$ and $C_{t-1}$, so as to obtain the cost matrix $L\in \mathbb{R}^{N\times M}$. We will revise them in the final version. **Q6:** In fact, for a fair comparison, we keep the same value of 200 queries as the LiDAR-based (TransFusion-L) or multi-modality DETR detectors (e.g. TransFusion, DeepInteraction, BEVFusion). 
Besides, for a highly sparse 3D point cloud, 200 queries are usually sufficient to detect most objects. **Q7:** As shown in the table, we re-train our method three times based on TransFusion, and the corresponding standard deviation of NDS is $3.3\times10^{-5}$. We will make it clear in the final version. | # | mAP | NDS | | ---- | :----: | :----: | | 1 | 66.47 | 70.86 | | 2 | 66.49 | 70.86 | | 3 | 66.46 | 70.86 | **Q8:** Sorry for the confusion. In fact, we have implemented the cross-attention with the positional encoding based on the corrected position (${C^{'}}_{t-1}$) and provided the corresponding results in Table 5 of the original submitted paper. Here, we further simplify the table and show the results in the following table. | # | Cross Attention | MTM | mAP | NDS | | ---- | :----: | :----: | :----: | :----: | | 1 | | | 65.0 | 70.0 | | 2 | &#10003; | | 65.2 | 70.0 | | 3 | | &#10003; | 66.2 | 70.5 | **Q9:** To the best of our knowledge, DETR-like approaches are still developing in the 3D domain. In this paper, we mainly verify the superiority of our method for DETR-like 3D detectors. Its effectiveness for 3D detectors of other paradigms remains to be explored in future research. We will make further clarification on the limitations. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough rebuttal. I have two follow-up questions - Regarding W2: It is not sound to remove the ground truth from the propagated feature maps, because one then essentially feeds the ground truth (or something highly correlated with it) into the model. I would instead suggest using the model's predictions to remove parts of the BEV feature map based on the predictions that the model makes. - Regarding W4: How is the standard deviation computed? Do the neural networks use different weight initializations? What about the backbone? How about the dataset shuffling? Is anything seeded? I find the reported number to be substantially lower than what I usually encounter on NuScenes. 
--- Reply to Comment 1.1.1: Comment: Thanks for your comments. **W2:** Thanks for your suggestion. We use the prediction to remove the background of the BEV features. As shown in the table, removing the background does not downgrade the performance. | Method | mAP | NDS | | --- | :---: | :---:| | MGTANet | 64.0 | 68.1 | | +Remove Background | 64.1 | 68.2 | **W4:** Sorry for the confusion. The main reason for our stable results (low standard deviation) is that we adopt different training strategies from some methods on NuScenes. Actually, we have described our training procedure in the implementation details of the main paper, which includes a two-stage training manner. In more detail, we load the weights of the backbones from the trained TransFusion/DeepInteraction and freeze their parameters in the first stage, which ensures consistent detection performance for a fair comparison. In the second stage, we focus on applying our proposed temporal fusion to refine the detection results from the first stage. In short, the uncertainty of our detection performance comes only from the second stage. Thus, we run our model three times, as shown in the table, and find the final performance is very stable after fixing the first stage. We will make it clearer in the final version. | # | mAP | NDS | | --- | :---: | :---:| | 1 | 66.47 | 70.86 | | 2 | 66.49 | 70.86 | | 3 | 66.46 | 70.86 |
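As a reading aid, the cost matrix described in the Q4/Q5 answers above (pairwise L2 distances between current centers and motion-compensated previous centers) can be sketched as follows. This is purely illustrative: the names, the velocity-compensation step, and the omission of the ego-motion transform $R_{t-1}^t$ are our assumptions, not the paper's code.

```python
import numpy as np

def cost_matrix(centers_t, centers_prev, vel_prev=None, dt=0.0):
    """L2 cost between current and (motion-compensated) previous centers.

    centers_t:    (N, 3) object centers at frame t
    centers_prev: (M, 3) object centers at frame t-1
    vel_prev:     (M, 3) predicted velocities at t-1 (optional)
    Returns L with L[i, j] = ||C_t[i] - C'_{t-1}[j]||_2, shape (N, M).
    """
    prev = np.asarray(centers_prev, dtype=float)
    if vel_prev is not None:
        # move previous centers forward by velocity * time (C'_{t-1})
        prev = prev + dt * np.asarray(vel_prev, dtype=float)
    diff = np.asarray(centers_t, dtype=float)[:, None, :] - prev[None, :, :]
    return np.linalg.norm(diff, axis=-1)
```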
Summary: The paper introduced a Query-based Temporal Fusion Network. It uses the object queries from the previous frame to enhance the current object queries via a proposed Motion-guided Temporal Modeling module, which utilizes both spatial and motion information to construct a cost matrix for efficient temporal fusion. The proposed method significantly improves upon the query-based baseline while incurring a negligible runtime cost. Strengths: 1. The paper introduces a novel temporal fusion paradigm that directly utilizes query-based features for achieving temporal fusion. 2. The proposed framework significantly improves upon the query-based baseline method while incurring a negligible runtime cost. 3. The paper is well-written, and the clarity of the figures enhances the understanding of the proposed concepts. Weaknesses: 1. The novelty of this work is limited. This work is built on top of an existing DETR-based detector and performs temporal fusion on query-based features. The idea is quite similar to CenterFormer. The difference is that the query-based features are generated from a DETR detector instead of sampled from a heat map as in CenterFormer. The memory bank is also not new. 2. As the method uses TransFusion as the backbone model, it would be worth exploring the use of intermediate BEV features as the key and value for temporal fusion. These features are expected to offer richer context and can also be cached in the memory bank. 3. Using velocity $\times$ time to align the features across frames cannot generalize to long sequences, since object motion is not always constant and turning is not considered. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: It is better to publish the results on the NuScenes leaderboard and validate the method on the Waymo Open Dataset. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The author addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your patient and detailed review. We try to address your comments below. **Difference from CenterFormer:** Thanks. In fact, our method is different from CenterFormer. CenterFormer conducts temporal fusion between the current queries and historical BEV features, which is a sparse-to-dense strategy. In contrast, our method conducts temporal fusion between current and historical queries, which is a sparse-to-sparse strategy. As for the memory bank, we do not regard it as our contribution since it is only used to store historical information to avoid redundant computation. **It would be worth exploring the use of intermediate BEV features as the key and value for temporal fusion:** Thanks. As shown in the table, we conduct temporal fusion via interaction between sparse object queries (as Q) and dense BEV features (as K or V). We find that our method brings more improvement than directly conducting temporal fusion between queries and BEV features. We think that establishing the relationship between current queries and historical BEV features is difficult since there is a large amount of background in the BEV features, which hinders instance-level temporal fusion. Besides, object queries have already aggregated context information from the BEV features. Therefore, we think our proposed sparse-to-sparse temporal fusion is reasonable. | Method | Q | K/V | mAP | NDS | | ---- | :----: | :----: | :----: | :----: | | TransFusion-L | - | - | 65.0 | 70.0 | | +QTNet | Query Features | BEV Features | 65.3 | 70.1 | | +QTNet | Query Features | Query Features | 66.2 | 70.6 | **Using motion to align the features across frames is not able to generalize to long sequences as the object motion is not always constant and turning is not considered:** Actually, our method achieves temporal fusion between two adjacent frames in a progressive temporal fusion mechanism (please refer to the structure figure in the uploaded PDF of our rebuttal). 
Besides, we compute the mean velocity error (mAVE) between adjacent frames as $0.24\,m/s$, which is relatively small. Thus, we argue that our method is applicable to long-sequence cases. **Publish the results on the NuScenes leaderboard and validate on Waymo:** Thanks for your valuable suggestion. We publish the results on the NuScenes leaderboard, which are included in the uploaded PDF. Besides, as shown in the table, we integrate our QTNet into ConQueR on the Waymo dataset. Due to limited time and computation resources, we only train all models on 20% of the sequences of the training set (keeping temporal consistency) and validate on the full validation set. We find that there is an obvious performance improvement, illustrating the effectiveness of our method. | Method | Veh | Ped | Cyc | L2 mAPH | | ---- | :----: | :----: | :----: | :----: | | ConQueR | 58.7 | 62.7 | 49.3 | 56.9 | | +QTNet | 59.5 | 63.3 | 54.1 | 59.0 | --- Rebuttal Comment 1.1: Comment: After carefully reading other comments, I believe the solution proposed by the author is effective, but it lacks references and discussion of related work, and the innovation is limited. I will maintain my rating.
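To make the sparse-to-sparse idea concrete, one plausible reading of how a center-distance cost matrix could drive query fusion is sketched below. This is purely illustrative: the actual MTM architecture is not specified in this thread, so the softmax-over-negative-cost weighting, the residual update, and all names here are assumptions.

```python
import numpy as np

def motion_guided_fusion(q_t, q_prev, cost, tau=1.0):
    """Fuse previous-frame queries into current ones via a cost-derived
    attention map (a rough sketch of the idea described in the rebuttal,
    not the actual MTM implementation).

    q_t:    (N, D) current query features
    q_prev: (M, D) previous query features
    cost:   (N, M) center-distance cost matrix
    """
    logits = -np.asarray(cost, dtype=float) / tau   # closer -> higher weight
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)         # A in R^{N x M}
    return q_t + attn @ q_prev                      # residual fusion
```

Note how, unlike plain cross-attention over learned projections, the association here is anchored on explicit geometry (the cost matrix), which is the distinction the rebuttal draws against the StreamPETR-style baseline.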
Summary: This paper proposes a new strategy for fusing temporal information for camera-LiDAR based 3D object detectors. The main method is a plug-and-play module, which uses predicted velocity and vehicle ego information to explicitly compute the correspondence matrix among queries. This module can be integrated into advanced LiDAR-only or multi-modality 3D detectors. Strengths: 1. The paper provides a clear and well-structured overview of the approach, making it easy for readers to understand and follow. 2. The proposed method brings competitive performance with negligible computation cost and latency on the nuScenes dataset. Weaknesses: 1. The experiments are not very sufficient. For detectors of the DETR architecture, I think using an explicit geometric constraint to match queries may not be necessary. Perhaps directly conducting an attention operation between current queries and historical queries is enough to bring promising performance, just like StreamPETR. However, the authors do not analyze this issue. 2. The method proposed in this paper does not bring enough benefit. It works on the nuScenes dataset, but it may be that nuScenes has a large proportion of static data; I hope to see it verified on other datasets. That will affect whether I am willing to improve the score. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Could you please provide your baseline (DeepInteraction or TransFusion-L) multi-frame results, just aligning the previous frame to the current frame? I would like to know how much gain is brought by MTM, rather than by multi-frame alignment enhancing the feature representation. 2. Could you please provide some visual samples? MTM should alleviate the problem of direction misdetection. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: As mentioned before, using explicit geometric constraints violates the simplicity of DETR. Thus, I do not think the proposed strategy will be widely adopted by future work, although the query-based alignment strategy shows benefits compared with the BEV-based and proposal-based methods. Anyway, I hope the authors can provide a discussion comparing the proposed query-based strategy and directly conducting attention among current and historical queries. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your patient and detailed review. We address your comments below.

**Analyzing attention vs. MTM:** Good suggestion. As shown in the table below, we replace the MTM with an attention operation and find that there is no noticeable performance gain. The main reason is that objects with similar geometric structures between adjacent frames are difficult to distinguish in the LiDAR domain, which makes the association between two frames less reliable. StreamPETR, however, can work well by building the relationship between the queries of two frames in the camera domain, thanks to the rich appearance information in 2D images.

| Method | mAP | NDS |
| ---- | :----: | :----: |
| TransFusion-L | 65.0 | 70.0 |
| +StreamPETR | 65.2 | 70.0 |
| +MTM | 66.2 | 70.5 |

**The method proposed in this paper does not bring enough benefits:** Thanks. We provide the results of our MTM on the Waymo dataset in the table below. Here, we select the representative DETR-like method ConQueR (CVPR 2023) as our baseline. Due to limited time and computation resources, we train all models on the training set with only 20% of the sequences, to keep temporal consistency. As shown, our method brings an even more obvious performance improvement on the Waymo dataset, which illustrates its effectiveness.

| Method | Veh | Ped | Cyc | L2 mAPH |
| ---- | :----: | :----: | :----: | :----: |
| ConQueR | 58.7 | 62.7 | 49.3 | 56.9 |
| +QTNet | 59.5 | 63.3 | 54.1 | 59.0 |

**Provide your baseline multi-frame results, just aligning the previous frame to the current frame:** Thanks. We align the previous queries to the current frame and send them to the decoder for comparison. As shown in the table below, directly transferring the past queries to the current frame brings only a 0.1% mAP improvement. In contrast, our MTM brings a more promising performance improvement of 1.2% mAP.
| Method | mAP | NDS |
| ---- | :----: | :----: |
| TransFusion-L | 65.0 | 70.0 |
| +Propagate | 65.1 | 70.0 |
| +MTM | 66.2 | 70.5 |

**Provide some visual samples:** Thanks. In fact, we have provided visualizations of detection results in the supplementary materials. Besides, following your valuable suggestion, we highlight some visual samples of direction misdetection in the uploaded PDF of our rebuttal.
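To make the discussion of the explicit geometric constraint concrete, the following is a minimal sketch of what an MTM-style mask could look like (this is our reading of the paper's description, not the authors' code; the function name, the constant-velocity warp, and the distance threshold `tau` are all our assumptions):

```python
import numpy as np

def motion_guided_mask(cur_centers, prev_centers, prev_vel, ego_rot, ego_trans,
                       dt, tau=2.0):
    """Hypothetical MTM-style mask: warp previous BEV query centers to the
    current frame with their predicted velocities and the ego-pose transform,
    then let each current query attend only to warped previous queries that
    land within tau meters."""
    # Constant-velocity motion over dt, then the ego transform between frames.
    warped = (prev_centers + prev_vel * dt) @ ego_rot.T + ego_trans   # (P, 2)
    # Pairwise distances between current and warped previous centers.
    dist = np.linalg.norm(cur_centers[:, None, :] - warped[None, :, :], axis=-1)
    return dist < tau  # (C, P) boolean mask restricting cross-frame attention
```

In a transformer decoder, such a mask would typically be applied to the query-to-query attention logits (e.g., as negative infinity outside the allowed pairs) before the softmax.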
Summary: In this paper, the authors propose a simple and effective Query-based Temporal Fusion Network (QTNet). The main idea is to exploit the object queries from previous frames to enhance the current object queries via the proposed Motion-guided Temporal Modeling (MTM) module. Experimental results show the proposed QTNet outperforms BEV-based and proposal-based manners on the nuScenes dataset.
Strengths: 1. The proposed QTNet can be plugged into LiDAR-only or multi-modality 3D detectors. 2. QTNet can boost a 3D detector's performance with negligible computation cost and latency.
Weaknesses: 1. The innovation of the method is limited. Fusing temporal features using motion information has similar ideas in both proposal-based methods (MPPNet, MSF [1]) and the query-based method MOTRv2 [2]. For example, MOTRv2 greatly improves the localization and tracking performance of a 2D transformer-based tracker by fusing temporal query features and passing on the locations of temporal queries. 2. The comparison with proposal-based methods is not comprehensive. a) The fusion strategy of MPPNet only uses the historical proposal information that can be matched to the current moment, i.e., that forms a trajectory, and discards other frames. This means that MPPNet cannot use historical information to recover objects missed at the current moment. The authors could use motion information to transfer the unmatched past proposals to the current frame, like the query-based strategy, and then verify the performance of MPPNet in this way, which would more fairly compare the query-based and proposal-based strategies. b) The comparison with SOTA methods, such as MSF, is missing. MSF also uses a motion-guided feature fusion strategy and achieves higher performance and efficiency than MPPNet. 3. The ablation experiments on MTM are not sufficient to verify the superiority of the MTM design.
The authors should provide a simple baseline that just transfers all motion-aligned past queries to the current frame with an NMS to remove redundant temporal queries, and then sends them to the decoder together with the current queries. Compared with this baseline, we can know more clearly whether the gain is brought by the MTM attention design or just by the temporal queries. [1] MSF: Motion-guided Sequential Fusion for Efficient 3D Object Detection from Point Cloud Sequences [2] MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained Object Detectors
Technical Quality: 2 fair Clarity: 2 fair
Questions for Authors: See Weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your patient and detailed review. We address your comments below.

**Difference in using motion information from MPPNet and MSF:** Thanks. The main idea of our QTNet is different from MPPNet and MSF. MPPNet utilizes the velocity prediction for motion compensation in tracking and then generates trajectories for temporal fusion. MSF utilizes motion to propagate the current detection boxes to history frames for sampling point clouds. In contrast, our method utilizes motion to establish the attention map among queries and fuses these queries with a transformer, which is a more efficient way to perform temporal fusion.

**Difference from MOTRv2:** MOTRv2 propagates the historical queries as the track queries of the current frame. It then concatenates these queries as the $Q$ of the transformer decoder, with the dense image features as the $K$ and $V$. In other words, MOTRv2 is a sparse-to-dense fusion strategy, while our proposed MTM conducts temporal fusion among sparse queries, which can be regarded as a sparse-to-sparse fusion strategy. Besides, MOTRv2 can work well by building relationships in the camera domain due to the rich appearance information in 2D images. However, objects with similar geometric structures between adjacent frames are difficult to distinguish in the LiDAR domain, which makes the association between two frames less reliable. Therefore, we design the MTM operation to solve this problem.

**Comparison with proposal-based methods:** Thanks. We propagate the historical detection results to the current frame to verify the performance of MPPNet. As shown in the table below, there is a large performance degradation. The underlying reason is that the propagation operation may lead to many false-positive predictions and harm the final prediction performance.
| Method | Propagate | mAP | NDS |
| ---- | :----: | :----: | :----: |
| baseline | | 63.1 | 67.8 |
| +MPPNet | | 63.4 | 68.2 |
| +MPPNet | &#10003; | 61.5 | 67.3 |

Besides, as shown in the table below, we compare MSF with MPPNet on the same baseline on the nuScenes dataset. Although MSF has lower latency than MPPNet, MSF produces worse performance. The main reason is that MSF only utilizes the velocity to move current boxes to previous frames and does not take the turning angle into account, which limits MSF's generalization to long sequences.

| Method | mAP | NDS | FLOPs (G) | Latency (ms) |
| ---- | :----: | :----: | :----: | :----: |
| baseline | 63.1 | 67.8 | 90.7 | 138.2 |
| +MPPNet | 63.4 | 68.2 | +131.3 | +127.9 |
| +MSF | 62.9 | 67.9 | +198.3 | +81.5 |
| +QTNet | 64.7 | 69.0 | +0.1 | +4.5 |

**Comparison with a simple baseline that just transfers all motion-aligned past queries to the current frame:** Thanks. We transfer the past queries to the current frame and send them to the decoder ('+Propagate' in the table below). We find that this temporal fusion manner does not bring an obvious performance improvement. The main reason is that features in the LiDAR domain lack obvious distinction, which makes it difficult to learn attention among object queries. MTM makes this learning process easier through our proposed explicit geometric constraint.

| Method | mAP | NDS |
| ---- | :----: | :----: |
| TransFusion-L | 65.0 | 70.0 |
| +Propagate | 65.1 | 70.0 |
| +MTM | 66.2 | 70.5 |

--- Rebuttal Comment 1.1: Comment: Thank the authors for the response and additional experiments. After the rebuttal my concerns are resolved and I keep my original rating.
Rebuttal 1: Rebuttal: We have uploaded a PDF containing the visualization of orientation, results on the nuScenes leaderboard, and an illustration of our progressive temporal fusion mechanism. Pdf: /pdf/9bd27c627ae0accf5bfeb19d70fc25a29d64f02c.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces an approach that leverages object queries as a form of temporal memory. By establishing an attention map between queries from current and previous frames, the method effectively captures and encodes explicit motion information using velocity prediction and pose transformation. The proposed approach, referred to as QTNet, achieves superior performance compared to BEV-based or proposal-based techniques when evaluated on the nuScenes dataset. Furthermore, the Motion-guided Temporal Modeling (MTM) module can be seamlessly integrated as a plug-in component into other existing methods.
Strengths: 1. This article is well written and can be easily reproduced based on its content. 2. The visualization of the attention map reveals the reasons behind the superior performance of the proposed method. 3. Despite its simplicity, this approach achieves significant performance improvements. 4. The decouple strategy is interesting.
Weaknesses: 1. In line 126, the memory bank stores $Q_{t - 1}$; however, in line 154, you actually use the previous fused queries $Q_{t - 1}'$. 2. This article lacks innovation, as it employs a manually defined attention map based on simple projections of velocity and pose, and the memory bank consists of straightforward query storage. Additionally, the performance of this approach relies on velocity prediction and tends to degrade in congested scenes. 3. There should be more exploration and comparison of different paradigms. Currently, each paradigm is represented by only one method, lacking universality. The authors could validate their approach using a simpler paradigm, such as the straightforward BEV feature query method used in BEVFusion. This kind of comparison would be more reliable and credible.
Technical Quality: 3 good Clarity: 3 good
Questions for Authors: Please see the Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your patient and detailed review. We address your comments below.

### W1: In line 126, the memory bank stores $Q_{t-1}$; however, in line 154, you actually use the previous fused queries ${Q^{'}}_{t-1}$.

Sorry for the misunderstanding. In fact, we only store the historical queries $[Q_{t-1}, Q_{t-2}, ...Q_{t-N}]$ in the memory bank and then fuse these historical queries to generate ${Q^{'}}_{t-1}$ for enhancing the current queries $Q_t$. We will make this clear in the revised paper.

### W2: Manually defined attention map based on simple projections of velocity and pose; the performance of this approach relies on velocity prediction and tends to degrade in congested scenes.

Thanks. We summarize your concerns as follows:

**About the manually defined attention map:** The defined attention map is simple, effective, and well designed. 3D objects obey the physical laws of motion in real-world 3D space (the LiDAR domain), which means that the same 3D object does not shift too much between adjacent frames in a short time. Utilizing this prior is therefore important for defining the attention map among object queries: it tells our temporal model where to pay attention, resulting in better performance.

**About the reliance on velocity prediction:** To achieve promising detection performance, velocity prediction of objects is critical information for almost all temporal 3D detectors (e.g., MPPNet, MSF, MGTANet), not only our method.

**About the performance in congested scenes:** Since the nuScenes dataset does not provide settings for congested scenes, we simply divide it into two kinds of scenes: crowded scenes (more than 80 objects) and uncrowded scenes (fewer than 80 objects). The corresponding results are shown in the table below.
We can observe that our approach obtains a more obvious performance improvement in crowded scenes than in uncrowded scenes, which illustrates the effectiveness of our proposed method.

| Method | Scenes | mAP | NDS |
| ---- | :----: | :----: | :----: |
| TransFusion-L | uncrowded | 67.0 | 69.9 |
| QTNet | uncrowded | 67.9 | 70.7 |
| TransFusion-L | crowded | 64.4 | 68.7 |
| QTNet | crowded | 65.8 | 69.7 |

### W3: There should be more exploration and comparison of different paradigms.

**More exploration and comparison of different paradigms:** Thanks. The BEV-based paradigm conducts temporal fusion on dense BEV features, which may bring a lot of unnecessary computation on the background. The proposal-based paradigm needs time-consuming operations to generate sparse 3D proposal features, and its performance highly depends on the quality of the 3D proposals. In contrast, our query-based paradigm is based on a sparse feature representation and can effectively aggregate foreground object information, which is both effective and efficient. Besides, our method gets rid of complex 3D RoI operations and is less sensitive to 3D object size and orientation than proposal-based representations.

**More models for comparison:** We selected one advanced and representative temporal fusion method for each paradigm for comparison. To further illustrate the universality of our method, we add a new proposal-based method, MSF, an improved version of MPPNet, for comparison. We observe that QTNet consistently outperforms MSF in terms of both performance and efficiency.

| Method | mAP | NDS | FLOPs (G) | Latency (ms) |
| ---- | :----: | :----: | :----: | :----: |
| baseline | 63.1 | 67.8 | 90.7 | 138.2 |
| +MPPNet | 63.4 | 68.2 | +131.3 | +127.9 |
| +MSF | 62.9 | 67.9 | +198.3 | +81.5 |
| +QTNet | 64.7 | 69.0 | +0.1 | +4.5 |

**Validation on BEVFusion:** We integrate our temporal fusion into BEVFusion by applying our method to the queries of BEVFusion.
The experimental results show that QTNet brings a further performance improvement to BEVFusion with only a small additional latency.

| Method | Modality | mAP | NDS | Latency (ms) |
| ---- | :----: | :----: | :----: | :----: |
| BEVFusion | LC | 69.6 | 72.1 | 965.5 |
| +QTNet | LC | 70.1 | 72.5 | +6.5 |
Emergent Communication for Rules Reasoning
Accept (poster)
Summary: This work investigates the emergent communication framework for reasoning about rules. Unlike prior studies that focus on communication about perceived low-level contexts, this paper proposes a cognition-oriented environment that encourages agents to reason and communicate about high-level rules. To this end, it introduces an interesting new and unbiased benchmark, rule-RAVEN. This benchmark, as opposed to the original one (I-RAVEN), avoids overfitting and pushes the agents to develop an actual communication protocol. The authors show, with different experiments, that agents are able to succeed in the reasoning tasks and develop a compositional and semantically stable language.
Strengths: The authors introduce a well-thought-out benchmark that could be beneficial for future work analyzing the content of emergent languages. This benchmark, a modification of I-RAVEN, forces agents to develop an actual communication protocol, and the authors perform the ablations needed to show its benefit compared to I-RAVEN. Furthermore, the paper is well written, and a detailed description of the setting and hyper-parameters is provided (on top of the code).
Weaknesses: The main weakness of this work is its motivation. As stated in the paper, the goal of the emergent communication framework is to: - either study the origin of human languages and/or - develop intelligent communicating artificial agents. It is unclear what this work's position is. If the former, is there a theory that our language emerged to communicate about high-level reasoning tasks? If so, can this line of work be clarified in the paper? If the goal is to develop communicating agents, communicating about visual inputs is more practical for human-agent interactions.
Technical Quality: 3 good Clarity: 4 excellent
Questions for Authors: Can you explain further the experiments of the paragraph "Rule-RAVEN dataset" (line 260)?
In particular, I don't understand why we have an "unsuccessful" communication game with the rule-RAVEN dataset if the speaker was already trained (in a two-stage setting). That is, in this training regime, the speaker's language is not random, and the listener should be able to succeed in the communication game (or at least reach a good enough accuracy) without modifying the speaker's language. Maybe you can elaborate more on the stage-1 training? Also, you state that "for the listener’s train accuracy still achieves ~0.9 even if the speaker’s message is completely ignored". How do you check whether the speaker's message is completely ignored?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful comments. We would like to clarify the concerns as follows:

1. > Clarify this work's position: study the origin of human languages or develop intelligent communicating artificial agents.

Our work's position is closer to the latter, i.e., developing intelligent communicating artificial agents, because we mainly focus on verifying the generalizable and transferable abilities of agents in this paper. In addition, we have now added experiments demonstrating the emerged language's transfer performance on image-based downstream real-world tasks. Specifically, we first generate symbolic data with $Number \in \\{1, \dots 9\\}$, $Color \in \\{1, \dots 9\\}$, $Shape \in \\{3, \dots 9\\}$ (triangle, square, ..., nonagon), and $Size \in \\{1, \dots, 9\\}$ using the rule-RAVEN dataset. We then implemented a renderer to draw each panel's symbol as a $320\times320$ grayscale image. Finally, we train new listeners using the messages from the symbolic environment and the question-candidate panel images (we replace $f^L$ with a 5-layer ConvNet to process the image input). After 20 epochs of training, the training accuracy of the listener is shown in Table 1.

**Table 1**: Transfer accuracy of a language emerged on the symbolic reasoning task to image-based downstream reasoning tasks ('agent': language from a well-trained speaker; 'random': a random language).

| Attribute values | agent | random |
| :--------------: | :-----------------: | :-----------------: |
| 20 | $0.9193 \pm 0.0029$ | $0.3519 \pm 0.0245$ |
| 30 | $0.9168 \pm 0.0084$ | $0.3030 \pm 0.0182$ |
| 40 | $0.8996 \pm 0.0109$ | $0.3406 \pm 0.0618$ |

The results show that languages emerging in symbolic environments can transfer (with ~0.9 accuracy) to downstream visual reasoning tasks.
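A renderer of the kind mentioned above can be approximated in a few lines of array code. The following is an illustrative stand-in, not the authors' implementation; the function name, the regular-polygon rasterization, and all parameter names are our assumptions:

```python
import numpy as np

def render_panel(sides=4, size=0.5, shade=128, res=320):
    """Rasterize one panel symbol (a regular polygon with `sides` vertices,
    relative size `size`, gray level `shade`) into a res x res image."""
    angles = np.linspace(0, 2 * np.pi, sides, endpoint=False) + np.pi / 2
    cx = cy = res / 2
    r = size * res / 2
    verts = np.stack([cx + r * np.cos(angles), cy + r * np.sin(angles)], axis=1)
    ys, xs = np.mgrid[0:res, 0:res]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    inside = np.ones(len(pts), dtype=bool)
    for i in range(sides):
        a, b = verts[i], verts[(i + 1) % sides]
        edge = b - a
        cross = edge[0] * (pts[:, 1] - a[1]) - edge[1] * (pts[:, 0] - a[0])
        # A convex polygon's interior lies on the same side of every edge as
        # its center, so compare signs against the center point.
        center_side = edge[0] * (cy - a[1]) - edge[1] * (cx - a[0])
        inside &= cross * center_side >= 0
    img = np.zeros((res, res), dtype=np.uint8)
    img[inside.reshape(res, res)] = shade
    return img
```

Mapping the symbolic attributes (Shape, Size, Color) to `sides`, `size`, and `shade` in this fashion yields the kind of image input the new listeners are trained on.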
Moreover, for the former goal (i.e., studying the origin of human languages), the cognitive literature [1] provides the theory:

> *The ability to use linguistic signs to express freely-formed thoughts marks "the true distinction between man and animal" or machine.*

We will clarify this in the next version of the paper.

Refs: [1] Chomsky, N. (2004). "Chapter 15 Language and Mind: Current Thoughts on Ancient Problems".

2. > Clarify the paragraph 'Rule-RAVEN dataset' at line 260.

This paragraph compares the impact of the different panel generation methods of the two datasets (rule-RAVEN and I-RAVEN) on the agents' overfitting. An undeniable fact is that, in a message-blocked scene (where the speaker is bypassed and the message is always a constant, making it unusable by the listener), a higher accuracy (i.e., the likelihood of the listener correctly selecting the target) implies a greater degree of inductive bias, which in turn suggests a larger overfitting risk, since the message is completely ineffective. Based on experiments under such message-blocked scenes, this paragraph shows that the candidate panels generated by the I-RAVEN-style method lead to more severe overfitting: the listener achieves ~0.9 accuracy by simply analyzing the question-candidate answer panels, without any information from the speaker (lines with the legend 'I-RAVEN_20/30/40' in Figure 4, significantly higher than rule-RAVEN). We will revise this paragraph to make it clearer and less confusing.

3. > How do you check whether the speaker's message is completely ignored?

The message-blocked scenes (where the speaker is bypassed and the message is always a constant, making it unusable by the listener) are equivalent to the listener completely ignoring the message from the speaker, which gives the lower bound on the accuracy of the communication task.
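The message-blocked control described above amounts to a few lines of evaluation code. The following sketch is our illustration of the idea; the helper name, message length, and padding token are hypothetical:

```python
import numpy as np

def message_blocked_accuracy(listener, problems, targets, msg_len=4, token=0):
    """Bypass the speaker: feed every problem the same constant message, so it
    carries zero information.  The accuracy obtained this way is the lower
    bound (the dataset's inductive-bias floor) that a real protocol must beat."""
    const_msg = np.full(msg_len, token)
    preds = [listener(const_msg, panels) for panels in problems]
    return float(np.mean(np.asarray(preds) == np.asarray(targets)))
```

Any gap between this floor and the accuracy obtained with real messages is then attributable to the communication channel rather than to biases in the question-candidate panels.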
Summary: This paper takes the ever-popular Lewis signalling game for emergent communication and studies experiments on rule-focused communication, as opposed to the perception-focused communication of prior work. In particular, it uses a modified version of Raven's progressive matrices to formulate a signalling game directly on attribute-value vectors, which requires pattern recognition/reasoning to complete. The authors find that agents can learn to communicate when using a two-stage curriculum, which essentially pretrains the speaker for more stable communication at the beginning of the communication stage. The experiments demonstrate that the resulting emergent language correlates better with the underlying rules of observations than with the individual observations themselves.
Strengths:
## Originality
- `[major]` Addresses the signalling game from a new perspective, i.e., reasoning instead of perception.
- `[minor]` Introduces a new dataset.
## Quality
- `[major]` Presents a good variety of empirical evaluation with clear results.
- `[minor]` Presents different levels and senses of "generalization".
## Clarity
- `[minor]` Implementation details are presented without being overwhelming.
## Significance
*See Originality.*
Weaknesses: The paper is relatively complete, but what keeps my rating from being higher is the relatively sparse comparison with prior work. Such comparison would better contextualize the results and increase their significance. For further details, see the *Questions* section of the review.
Technical Quality: 3 good Clarity: 4 excellent
Questions for Authors:
- How does this work compare with prior art that also uses categorical variables for a signalling game, even if there is no reasoning element to the game?
- What are the particular effects on an emergent language of a reasoning-focused game versus a perception-focused one? What sorts of inductive biases are present in the task that have a downstream effect on the language?
## Minor Comments/Questions
- `Line 46` "inner": typo?
- `Line 46, 86` "inter": typo?
- `Line 122` Do not use curly braces for ordered sequences; use parentheses.
- `Paragraph @ 119` Why is it necessary for there to be no ambiguity? This sort of ambiguity shows up frequently in human communication.
- `Line 251` "cause" -> "because"
- `Paragraph @ 246` I would use "shows" or "demonstrates" instead of "proves" since it is not a formal, mathematical proof.
- Use the default LaTeX placement of tables/figures instead of `[h]`; the former is less distracting.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful comments. We would like to clarify the concerns as follows:

1. > Comparison with previous work using symbolic datasets.

We compare with previous emergent-language work using symbolic datasets (cited in our paper) from multiple angles: research goals, task orientation, input complexity, and agent training.

**Table 1**: Comparison with previous emergent-language work using symbolic datasets.

| | Ours | Others |
| ---------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| Research Goals | Emergent cognition-based communication for rules reasoning. | Efficiency of language coding [1], ease-of-teaching [2], population heterogeneity [3], and promoting structured, generalizable languages [4]. |
| Task Orientation | Extracting and using **inter-context** rules to handle reasoning problems. | Identifying **inner-context** object attributes for a discrimination or reconstruction task [1-4]. |
| Input Complexity | Multiple symbolic vectors with attributes. | One symbolic vector with attributes [1-4]. |
| Agent Training | Two-stage training due to the contexts-and-semantics bilaterally drifting task. | End-to-end training due to the simplistic training task [1-3]; iterated training [4]. |

We will add more related work and revise our paper accordingly.

Refs:
[1] Chaabouni, R., Kharitonov, E., Dupoux, E., & Baroni, M. (2019). Anti-efficient encoding in emergent communication.
[2] Li, F., & Bowling, M. (2019). Ease-of-teaching and language structure from emergent communication.
[3] Rita, M., Strub, F., Grill, J. B., Pietquin, O., & Dupoux, E. (2022). On the role of population heterogeneity in emergent communication.
[4] Ren, Y., Guo, S., Labeau, M., Cohen, S. B., & Kirby, S. (2020). Compositional languages emerge in a neural iterated learning model.

2.
> Semantic effects and inductive biases of a reasoning-focused game versus a perception-focused one.

**Table 2**: Comparison of reasoning-focused and perception-focused games in terms of semantic effects and inductive biases.

| | Reasoning-focused game (ours) | Perception-focused game |
| -------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| **Semantic effects** | The emerged language describes changing patterns (i.e., rules) between *multiple* input contexts. | The emerged language describes perceptual features of objects (or attributes) within a *single* input context. |
| **Inductive biases** | The game settings force agents to reason cooperatively by conveying abstract rules *implicit* in the contexts. | The game settings force agents to discriminate or reconstruct targets by conveying perceptual features of the contexts. |

3. > `Paragraph @ 119` Why is it necessary for there to be no ambiguity?

From the listener's perspective, the structural requirement (unambiguity) is necessary. Specifically, when considering a dataset comprising multiple RPM problems, the occurrence of ambiguity (i.e., attribute values and rules not being in one-to-one correspondence) would result in an imbalanced *prior probability distribution* of rule frequencies in the dataset. This imbalance implies unequal significance between rules, inducing inductive bias in the dataset and thus posing a higher risk of listener overfitting.

4. > About the other minor questions.

We will fix the text formatting and typos, and revise the inappropriate expressions in line 122 and the paragraph at line 246.

--- Rebuttal Comment 1.1: Comment: I have read the author's rebuttal.
I think the proposed table would be a great addition to the paper, although I do not think it would go quite far enough to get me to increase my score above a 7/10 (there would need to be empirical ablation studies), but I think the paper is largely adequate with the proposed changes. Minor edit: "inner-context" -> "intra-context"
Summary: This paper proposes a new environment along with a training framework for the emergent communication of abstract rules. The authors designed a context-generation pipeline, rule-RAVEN, to avoid overfitting, and a two-stage curriculum training method for more stable convergence. They evaluate the emerged language from the perspectives of generalization and transfer learning.
Strengths: 1. This paper proposes a new research angle of abstract rule reasoning for emergent communication. The context requires the agent to go beyond low-level perceptual features and communicate more abstract rules. 2. The candidate pool is smartly designed to motivate agents to extract rules from the context. 3. A suite of comprehensive evaluations is designed to measure the generalization of the emerged languages.
Weaknesses: 1. The structural requirements of the new benchmark may need further explanation: I am not clear about why the rules must be unambiguous. From my understanding, even though multiple rules can be applied to the current context, as long as the agents can communicate either of the rules, the receiver should capture the correct candidate? Though Figure 4 shows how well the receiver can select the candidate without the sender's message when trained with rule-RAVEN, further experiments/explanations are still needed to show that: a. The communication training benefits from the structural requirement, the functional requirement, or both. b. The language that emerged using the I-RAVEN dataset is not/less generalizable/compositional/transferable. 2. The sender's rule reasoning and perception encoding are entangled. In the first stage, both $g^S$ and $f^S$ are trained. Though the training data is not shown in the communication stage, it will still introduce structural information because of the term $\mathbb{1}(r_i, m_i)$. a. Does that require the length of the messages to equal the size of the rules?
Technical Quality: 3 good Clarity: 3 good
Questions for Authors: In Sec 5.1, how are distance(rule) and distance(panel) computed? I just want to clarify whether these two distances are comparable.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good
Limitations: 1. Though the goal of the task is to emerge a language for abstract rules, it would also be interesting to know whether the receiver can learn to induce rules after the communication (instead of applying rules during the communication; no experiments required). Similar to ETL, you could test the accuracy of the communicated receiver on the reasoning problem without further training. 2. As the authors mention, the current context input is a structured symbol, which will strongly encourage compositional language. It would be interesting to know how agents can emerge languages from raw pixel input. 3. Can the emerged languages generalize to contexts with different attributes?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful comments. We would like to clarify the concerns as follows: 1. > Experiments/explanations about 1a and 1b. (1a) The communication training process benefits from both the structural and functional requirements. From the listener's perspective, the structural requirement (unambiguity) is necessary. Specifically, when considering a dataset comprising multiple RPM problems, the occurrence of ambiguity (i.e., attribute values and rules are not in one-to-one correspondence) would result in an imbalanced *prior probability distribution* of the rules' frequency in the dataset. This imbalance implies unequal significance among rules, inducing an inductive bias in the dataset and thus posing a higher risk of the listener overfitting. From the speaker's perspective, the functional requirement is also necessary. Specifically, our rule-based candidate-panel generation algorithm deliberately confuses target rules between question-candidate panel pairs. Such deliberate confusion forces the speaker to initiate meaningful communication to help the listener. (1b) The language that emerged using the I-RAVEN dataset is neither generalizable, transferable, nor compositional. For generalizability: We first emerge a language using the I-RAVEN dataset, then test the language's effectiveness on the rule-RAVEN dataset (these two datasets share the same rules but differ in how they generate candidate panels). Experimental results (Table 1) show low generalization performance of the language (i.e., ID/Inpo-ood accuracy only ~0.6, close to the message-blocked scene). **Table 1**: Generalization accuracy with different attribute values N. 
| N | ID | Inpo-ood | Message-blocked scene |
| :--: | :-----------------: | :-----------------: | :-------------------: |
| 20 | $0.6312 \pm 0.0017$ | $0.6270 \pm 0.0024$ | $0.6220 \pm 0.0019$ |
| 30 | $0.6271 \pm 0.0034$ | $0.6247 \pm 0.0025$ | $0.6215 \pm 0.0033$ |

For transferability: The experimental results (Table 2) also show low language transfer performance (accuracy only ~0.6, close to random). **Table 2**: Transfer accuracy of (source S, target T) attribute values ('agent': language from a well-trained speaker; 'random': a random language).

| (S, T) | agent | random |
| :------: | :-----------------: | :-----------------: |
| (20, 30) | $0.6262 \pm 0.0057$ | $0.6255 \pm 0.0036$ |
| (20, 40) | $0.6246 \pm 0.0002$ | $0.6248 \pm 0.0025$ |
| (30, 40) | $0.6236 \pm 0.0007$ | $0.6209 \pm 0.0034$ |

For compositionality: Further analysis of the language shows that, when using I-RAVEN, the speakers describe the rules of different reasoning problems with constant messages (the listener completely ignores the speaker's message) on all random seeds, which indicates that the language is not compositional. 2. > Does the proposed training method have a limit on the message size? The training method does not introduce constraints on the length of messages. Specifically, the speaker updates $f^S$, $g^S$, and $h^S$ in the first stage of training, while it only loads the parameters of $f^S$ and $g^S$ (without $h^S$) in the second stage. Therefore, even if the $\mathbb{1}(r_i, m_i)$ operator on the message encoder $h^S$ constrains the message $m_i$ and the rule $r_i$ to have equal size in the first stage, we can train a new $h^S$ with a reconfigurable message size in the second-stage (joint) training. To demonstrate this claim, we tried diverse message sizes (Table 3) in our reasoning game and obtained similarly high generalization performance (accuracy ~0.95). **Table 3**: Generalization accuracy with different message sizes (message length M, vocabulary size V) and attribute values N. 
| (M, V, N) | ID | Inpo-ood |
| :---------: | :-----------------: | :-----------------: |
| (6, 15, 20) | $0.9539 \pm 0.0018$ | $0.9528 \pm 0.0015$ |
| (6, 30, 20) | $0.9522 \pm 0.0015$ | $0.9513 \pm 0.0031$ |
| (6, 15, 30) | $0.9429 \pm 0.0053$ | $0.9385 \pm 0.0045$ |
| (6, 30, 30) | $0.9411 \pm 0.0022$ | $0.9362 \pm 0.0014$ |

3. > How are the distances of rules/panels computed? - For rules, different rules belong to different categories, so we first compute the one-hot encoding of the rule vectors. Then, we use the cosine distance (which, in this case, coincides with the normalized Hamming distance) to calculate the distance between rules. - For panels, according to its 4 attribute values (number, shape, color, size), each context panel can be represented as a 4-dim integer vector. Within one RPM problem, we directly concatenate all 6 context panels into a 24-dim integer vector. Then, we use the cosine distance to calculate the distance between the context panels of RPM problems. 4. > Whether the listener can learn to induce rules after the communication. We infer that the listener can learn to induce rules after the communication, because the key to cooperative reasoning is that the speaker and listener have a similar ability to reason and agree on the message's linguistics (rule mapping). We checked this by testing the accuracy on the reasoning problem after exchanging the parameters of the reasoning modules ($g^S$ and $g^L$) of the well-trained speaker and listener. 5. > Emerge language from raw pixel input, and show compositionality. We supplement the transfer performance of the emerged language on image-based downstream reasoning tasks in our response to reviewer LuwD (Q1), and show the compositionality of the emerged language in our response to reviewer 3wvB (Q2). 6. > Languages generalize to contexts with different attributes. 
The emerged language can generalize to contexts with different attributes as long as the quality of the representations produced by the perception module is ensured (there is no effect on the cognition module).
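The rule/panel distance computation described in point 3 of the rebuttal above can be sketched as follows. This is a minimal NumPy illustration based on the rebuttal's description (one-hot rule encodings; 6 concatenated 4-attribute context panels), not the authors' code, and the function names are ours.

```python
import numpy as np

def cosine_distance(u, v):
    """1 minus the cosine similarity of two vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def rule_distance(rules_a, rules_b, num_rules):
    """Rules are categorical, so each per-attribute rule index is one-hot
    encoded and the encodings are concatenated. For one-hot vectors, the
    cosine distance equals the normalized Hamming distance."""
    a = np.eye(num_rules)[np.asarray(rules_a)].ravel()
    b = np.eye(num_rules)[np.asarray(rules_b)].ravel()
    return cosine_distance(a, b)

def panel_distance(context_a, context_b):
    """Each context is 6 panels x 4 integer attribute values (number,
    shape, color, size); concatenate into a 24-dim vector and compare."""
    return cosine_distance(np.ravel(context_a), np.ravel(context_b))
```

For example, two rule vectors that differ in one of four attributes get distance $1 - 3/4 = 0.25$, i.e., the normalized Hamming distance, which makes the two metrics comparable in scale.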
Summary: This paper introduces a novel setting for abstract reasoning (i.e., RAVEN problems) by proposing a speaker-listener framework for communicating higher-level abstract rules. The authors propose an unbiased dataset (rule-RAVEN) to overcome the overfitting observed on the original RAVEN-family datasets (I-RAVEN), and propose a two-stage curriculum agent training method for successful communication. Experiments have shown the efficacy of the curriculum training for solving rule-RAVEN and for out-of-distribution generalization. Strengths: - I like the idea of both: i) introducing communicative game settings to abstract reasoning tasks, and ii) seeing how higher-level relational abstractions (instead of low-level perceptual features) emerge in communicative games. The limited-capacity communication channel formulation can lead to emergent abstractions for problem-solving, including more powerful representations for abstract reasoning and concept learning. Previous attempts in drawing are good cases but not complex enough to depict the importance of abstraction and emergent language. This preliminary trial on RAVEN tests sets up a suitable problem formulation for emergent communication in abstract reasoning. - The rule-RAVEN dataset effectively mitigates the existing bias in the I-RAVEN dataset, making the speaker-listener communication valid. - The paper is well-written and easy to read. The flow of writing in section 5 is also appropriate for addressing potential reader concerns. Weaknesses: Although I like the task settings in this paper, the experiments and proposed methods appear to have some weaknesses. I list them as follows: - The communicative formulation is very similar to Mu & Goodman, 2021. It seems this work (communicative RAVEN) is a special case of their generalization setting, shifting from learning object-centric, attribute-level concepts (e.g., shape red or blue) to learning relational concepts (number-increasing). 
The authors should provide more comparisons to these existing formulations. - The evaluation of the emergent language is still quite limited. For example, can you probe the learned language to see if it can be linearly projected onto some algebraic representations of relational concepts (e.g., the "number increasing" concept can be described as a multiplication matrix in Zhang et al., 2022), or just explicitly manipulate the messages and see if some language-like syntax or compositionality emerges. - The use of symbolic RAVEN and two-stage curriculum training (with the first stage trained via supervised learning) makes me doubt the applicability of this communicative method to more complex or real-world tasks. For example, Mu & Goodman, 2021 used a real-world dataset, pixel input, and end-to-end training. refs: 1. Mu, J., & Goodman, N. (2021). Emergent communication of generalizations. Advances in Neural Information Processing Systems, 34, 17994-18007. 2. Zhang, C., Xie, S., Jia, B., Wu, Y. N., Zhu, S. C., & Zhu, Y. (2022, October). Learning algebraic representation for systematic generalization in abstract reasoning. In European Conference on Computer Vision (pp. 692-709). Cham: Springer Nature Switzerland. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weakness section. I am open to changing my score, so I hope the authors can address these concerns. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful comments. We would like to clarify the concerns as follows: 1. > More comparisons to existing formulations [1]. Similar to many previous works, the agent communicative formulation (i.e., the definition of 'abstract concepts') in [1] identifies **inner-context** object attributes (e.g., shape, color). In our RPM-based reasoning game, by contrast, the agent communicative formulation is extracting and using **inter-context** rules to handle reasoning problems. We will classify previous work based on the different communicative formulations and revise the related work in our paper. 2. > Explore if the emergent language has some language-like syntax or compositionality. Inspired by [2], we compute the probability distribution $P(token\_sequence|attribute, rule)$ under randomly selected seeds and give the most probable tokens for a given attribute and rule, as shown in Table 1. **Table 1**: Given attributes and rules, the most probable tokens at each position.

| Rules | color | number | size | shape |
| :---------------: | :----------------: | :------------: | :----------------: | :------------: |
| add | **K**, C, K, H | B, C, B, C | **K**, L, K, K | B, K, B, C |
| minus | **B**, L, B, B | K, O, G, O | **G**, G, G, H | J, J, J, J |
| min | **K**, C, K, C | J, L, J, C | **K**, O, K, K | B, B, B, C |
| max | **B**, B, B, K | K, O, K, K | **K**, G, J, B | F, B, K, C |
| constant | **K**, B, K, K | B, K, C, K | **K**, C, K, C | K, B, K, C |
| progression_2 | **B**, L, B, B | K, O, K, O | **G**, G, H, H | J, O, J, J |
| varprogression_-1 | **K**, C, K, **C** | B, J, J, **C** | **K**, O, K, **C** | B, K, B, **C** |

The results indicate that the tokens exhibit regular patterns (i.e., language-like syntax and compositionality) for different attributes and rules. For example, almost all rules related to the attribute 'color' start with tokens 'K' and 'B', and those related to 'size' start with tokens 'K' and 'G'. 
On another dimension, the rule 'varprogression_-1' ends with token 'C' across all attributes. 3. > Apply the two-stage curriculum training method to real-world tasks [1] with pixel input, and verify its effectiveness. First, to verify that our two-stage training method can be applied to real-world datasets, we demonstrate the transferability of the language produced by two-stage-trained agents on image-based (i.e., pixel-input) downstream tasks. Specifically, we first generate symbolic data with $Number \in \{1, \dots, 9\}$, $Color \in \{1, \dots, 9\}$, $Shape \in \{3, \dots, 9\}$ (triangle, square, ..., nonagon), and $Size \in \{1, \dots, 9\}$ using the rule-RAVEN dataset. We then implemented a renderer to draw each panel's symbols as a $320\times320$ grayscale image. Finally, we train new listeners using the messages from the symbolic environment and question-candidate panel images (we replace $f^L$ with a 5-layer ConvNet to process the image input). After 20 epochs of training, the training accuracy of the listener is shown in Table 2. **Table 2**: Transfer accuracy of the language that emerged on the symbolic reasoning task to image-based downstream reasoning tasks ('agent' represents using the language emerged by a well-trained speaker, and 'random' represents using a random language).

| Attribute values | agent | random |
| :--------------: | :-----------------: | :-----------------: |
| 20 | $0.9193 \pm 0.0029$ | $0.3519 \pm 0.0245$ |
| 30 | $0.9168 \pm 0.0084$ | $0.3030 \pm 0.0182$ |
| 40 | $0.8996 \pm 0.0109$ | $0.3406 \pm 0.0618$ |

The results show that languages emerging in symbolic environments can transfer (with ~0.9 accuracy) to downstream visual reasoning tasks. Second, we qualitatively analyze why the two-stage training method (especially the first stage) is still effective for real-world datasets. The first stage (i.e., supervised training) aims to warm up the speaker, enabling it to generate higher-quality messages during early epochs. 
This stage is crucial in preventing communication from getting trapped in a local optimum. For instance, in our work, we aim to avoid situations where the listener completely disregards the speaker's message or where the speaker only conveys partial information. 4. > About limitations. We summarize the limitations of our work as follows: - Our work only focuses on language emergence on a clean symbolic reasoning dataset, lacking the exploration of more realistic stimulus-based (e.g., synthetic or natural images) reasoning datasets. We supplemented the transfer performance on image-based downstream reasoning tasks during the rebuttal (see the related results in our response to reviewer LuwD, Q1), and further investigation of language emergence from image stimuli is still needed. - The reasoning task (RAVEN) adopted in our work only requires the agent to complete the reasoning via a single round of interaction, simplifying the natural reasoning process. - Our work only analyzes the semantics of the emerged languages at the message level, lacking fine-grained structural (grammar) and semantic analysis at the token level. We supplemented this with coarse-grained token-level semantic analysis experiments during the rebuttal (see the related results in Q2), and we plan to conduct more systematic analyses (e.g., the similarity of languages given attributes and rules, and the degree of polysemy and ambiguity between tokens). Refs: [1] Mu, J., & Goodman, N. (2021). Emergent communication of generalizations. [2] Zhang, C., Xie, S., Jia, B., Wu, Y. N., Zhu, S. C., & Zhu, Y. (2022, October). Learning algebraic representation for systematic generalization in abstract reasoning. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank the authors for the detailed response. I have decided to increase my rating by one. I recommend acceptance of this paper.
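The token-frequency analysis behind Table 1 in point 2 of the rebuttal above can be sketched as follows. This is a hedged reimplementation of the described procedure (estimating the most probable token at each message position, given an attribute and rule); the function name and the toy data are ours, not the authors'.

```python
from collections import Counter

def most_probable_tokens(messages):
    """messages: dict mapping (attribute, rule) -> list of equal-length
    token sequences (strings or tuples) emitted by the speaker.
    For each key, return the most frequent token at every position,
    i.e., argmax over tokens of P(token at position i | attribute, rule)."""
    out = {}
    for key, seqs in messages.items():
        length = len(seqs[0])
        out[key] = tuple(
            Counter(seq[i] for seq in seqs).most_common(1)[0][0]
            for i in range(length)
        )
    return out
```

On hypothetical speaker outputs such as `{("color", "add"): ["KCKH", "KCKH", "BCKH"]}`, this returns `("K", "C", "K", "H")` for that attribute-rule pair, matching the per-position format of the rebuttal's table.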
null
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes an emergent communication game over abstract visual concepts, inspired by Raven's progressive matrices tests. The basic idea is to evaluate neural speakers and listeners on a communication game, where the speaker sees a collection of images encoding some abstract rule (e.g., "the number of objects in the image is increasing"); the speaker must then generate a message that allows a listener to complete an unseen sequence. The authors show that agents trained to play this game indeed seem to learn to communicate the abstract rules they are trained on, as measured by intrinsic measures of language compositionality and ease of transfer to harder tasks. Strengths: - This is an interesting dataset and an interesting problem in emergent communication which may be useful to the community. It indeed explores more abstract visual concepts than existing work (though note that novelty over the existing EC literature is overclaimed; see Weaknesses). - Careful controls for dataset difficulty (ensuring one distinct feature that can be used to solve each task; ensuring "hard negative" rules) show the authors' care in making sure this is a well-constructed dataset, including an analysis of the extent to which existing datasets fall short. - Interesting experimental analysis shows that models seem to be (to some extent) communicating abstract rules, rather than superficial input features. Weaknesses: - Only a synthetic dataset consisting of clean symbolic inputs is evaluated. One could imagine more realistic settings requiring communication of rules at least over synthetic visual inputs, if not more realistic visual concepts. Similarly, there is no exploration of downstream transfer to other tasks that perhaps don't involve emergent communication, e.g., instruction following or visual reasoning. While this does not preclude publication, there's not a lot one can gain from this paper as it relates to actual realistic ML tasks. 
If the paper were to be rejected, IMO it would likely be because the experiments are just a little too synthetic/marginal to be useful to the broader NeurIPS community. - The claim that existing work in EC does not at all care about expressing abstract generalizations or rules is a bit overblown. Separating the inputs given to the student and teacher, so as to facilitate communication of abstract concepts, was introduced as early as Lazaridou (2017), and recurs in Choi et al., Kiela et al., etc. Mu and Goodman (2021) also propose generalizations over abstract visual concepts involving multiple visual inputs, which is very similar to the task presented here. I do think the present work makes some interesting contributions over the existing literature, in that it is even more abstract, but the relation to existing work needs to be made clearer. Many of these papers are not discussed in detail and are simply bucketed as "forcing agents to describe low-level features of images" (L31-32), which I believe is false. Section 2 (Emergent Communication) also completely neglects to discuss such efforts in the EC community. - I think it's important for footnote 1 to be made clearer in the text, i.e., that this is not a grounded communication game over real images, despite many of the introductory figures seemingly suggesting this. ## Minor - The title of the paper and the title on OpenReview do not match. - Spaces between text and citations would be ideal. - It'd be interesting to see how pretraining agents on such visual reasoning communication tasks might improve performance on downstream visual reasoning tasks such as ARC (Chollet et al., ?). - The description of the paragraphs in L119 and L128 as "structural" and "functional" requirements is a little confusing and nonstandard to me: it's not clear what structural and functional mean here. 
It might be appropriate, for example, to refer to the "functional requirement" as sampling "hard negative distractors", following the terminology used in contrastive learning. In other words, distractors should be sampled carefully so as to represent close-but-not-quite-correct rules that force the speaker and listener to communicate precisely the right rules. - L171 "directly from the sketch" -> "directly from scratch"? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Did the authors try varying the number of context panels given to the speaker? Or even showing the speaker the partial sequence given to the listener? I wonder how this affects the language's propensity to communicate abstractions; e.g., if the speaker sees the listener's partial sequence, does the language still communicate the abstract rule, or does the speaker internally learn the abstract rule but nevertheless convey the perceptual features (e.g., "single triangle")? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful comments. We would like to clarify the concerns as follows: 1. > Downstream transfer to visual reasoning tasks, and set more realistic environments requiring communication of rules over visual (image) inputs. We demonstrate the emerged language's transfer performance on image-based downstream tasks. Specifically, we first generate symbolic data with $Number \in \{1, \dots, 9\}$, $Color \in \{1, \dots, 9\}$, $Shape \in \{3, \dots, 9\}$ (triangle, square, ..., nonagon), and $Size \in \{1, \dots, 9\}$ using the rule-RAVEN dataset. We then implemented a renderer to draw each panel's symbols as a $320\times320$ grayscale image. Finally, we train new listeners using the messages from the symbolic environment and question-candidate panel images (we replace $f^L$ with a 5-layer ConvNet to process the image input). After 20 epochs of training, the training accuracy of the listener is shown in Table 1. **Table 1**: Transfer accuracy of the language that emerged on the symbolic reasoning task to image-based downstream reasoning tasks ('agent': language from a well-trained speaker; 'random': a random language).

| Attribute values | agent | random |
| :--------------: | :-----------------: | :-----------------: |
| 20 | $0.9193 \pm 0.0029$ | $0.3519 \pm 0.0245$ |
| 30 | $0.9168 \pm 0.0084$ | $0.3030 \pm 0.0182$ |
| 40 | $0.8996 \pm 0.0109$ | $0.3406 \pm 0.0618$ |

The results show that languages emerging in symbolic environments can transfer (with ~0.9 accuracy) to downstream visual reasoning tasks. In fact, our symbolic rule-RAVEN dataset is sufficient to encourage agents to reason about and communicate high-level rules. The reason is that, without changing the semantic information, the format of the input data (e.g., visual or symbolic) only affects the agents' perception, not their cognitive ability for rule reasoning. 
Furthermore, we intend to investigate the emergence of language on more realistic (e.g., V-PROM [1]) and complex (e.g., ARC [2]) image-based reasoning datasets in future work. Refs: [1] Teney, D., & van den Hengel, A. (2020). V-PROM: A benchmark for visual reasoning using visual progressive matrices. [2] Chollet, F. (2019). On the measure of intelligence. 2. > The relation to existing work, which also 'communicates abstract concepts' (e.g., [3-6]), needs to be made clearer. The original claim in the related work does indeed invite misunderstanding, and we do not deny that previous work also 'communicates abstract concepts'. Based on the different definitions of 'abstract concepts', we will make a more fine-grained comparison with previous work and revise the paper. We take the papers you mentioned ([3-6]) as examples: - The 'abstract concepts' refer to inner-context object attributes (e.g., color, shape) in [3-5], or to combinations of them (e.g., blue OR/AND triangle) in [6]. - In our RPM-based reasoning game, by contrast, the 'abstract concepts' refer to extracting and using inter-context rules to handle reasoning problems, rather than selecting which object has specified attributes. Refs: [3] Lazaridou, A. (2016). Multi-agent cooperation and the emergence of (natural) language. [4] Choi, E. (2018). Multi-agent compositional communication learning from raw visual input. [5] Graesser. (2019). Emergent linguistic phenomena in multi-agent communication games. [6] Mu, J., & Goodman, N. (2021). Emergent communication of generalizations. 3. > Footnote 1 (i.e., that this is not a grounded communication game over real images) needs to be made clearer in the text. We point out that rule-RAVEN is a symbolic dataset in line 136 of the paper (last paragraph of section 3.2) and only give the reason in footnote 1. Thanks for pointing out the misunderstanding that may arise here. We will state more clearly that rule-RAVEN is a symbolic dataset and revise the paper. 4. 
> Try varying the number of context panels given to the speaker or listener, and research how the setting affects the language's propensity to communicate abstractions. We tried a new game setting: 1) the speaker receives the 6 context panels and the 2 question panels, reasons out the answer, and sends messages to the listener, and 2) the listener selects the target panel from the 8 candidate panels, referring only to the speaker's message. Experimental results (Table 2) show that such a setting leads to a slight decrease in generalization accuracy, because this setting compels the speaker not only to reason but also to describe the target panel accurately to the listener. **Table 2**: Generalization accuracy on the ID and Inpo-ood data splits.

| Attribute values | ID | Inpo-ood |
| :--------------: | :-----------------: | :-----------------: |
| 20 | $0.7273 \pm 0.0728$ | $0.7299 \pm 0.0725$ |
| 30 | $0.7996 \pm 0.0185$ | $0.7952 \pm 0.0172$ |

Moreover, there is evidence (Table 3) that such a setting leads the speaker to learn the abstract rule internally but convey the perceptual features of the target panel (Topsim(panel, message) > Topsim(rule, message)). **Table 3**: Topsim between rules/target panels and messages.

| Attribute values | Rule-Message | Panel-Message |
| :--------------: | :-----------------: | :-----------------: |
| 20 | $0.1776 \pm 0.0137$ | $0.2249 \pm 0.0683$ |
| 30 | $0.1571 \pm 0.0272$ | $0.2280 \pm 0.0306$ |

5. > About the minor questions. (1) We will revise the paper's title to align with the title on the OpenReview site. (2) For real visual inputs, see the answer to Question 1. (3) We will fix the text formatting and typos, and improve the writing in the specified paragraphs (L119 and L128). --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks to the authors for their detailed response to my review, and for the follow-up experiments, which are quite interesting. 
Although I still think the task and domain are synthetic, I appreciate the inclusion of a more interesting downstream transfer task, and the authors' rebuttal has solidified the difference between this work and related work. I'll increase my score to a 6.
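The topographic similarity (Topsim) metric reported in Table 3 of the rebuttal above can be sketched as follows: the Spearman correlation between pairwise distances in meaning space (rules or panels) and pairwise distances in message space. This is a generic reimplementation of the standard metric, not the authors' code, and tie handling in the rank transform is simplified.

```python
import itertools
import numpy as np

def _ranks(x):
    """Rank transform (ties broken by position rather than averaged --
    a simplification relative to a full Spearman implementation)."""
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(len(x))
    return ranks

def topsim(meanings, messages, meaning_dist, message_dist):
    """Topographic similarity: Spearman correlation between all pairwise
    meaning distances and the corresponding message distances."""
    pairs = list(itertools.combinations(range(len(meanings)), 2))
    d_meaning = [meaning_dist(meanings[i], meanings[j]) for i, j in pairs]
    d_message = [message_dist(messages[i], messages[j]) for i, j in pairs]
    return float(np.corrcoef(_ranks(d_meaning), _ranks(d_message))[0, 1])

def hamming(a, b):
    """Elementwise mismatch count between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))
```

A perfectly compositional toy language, where each meaning attribute maps to exactly one token, yields Topsim = 1; degenerate constant languages yield no correlation.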
null
null
null
null
null
null
Composable Coresets for Determinant Maximization: Greedy is Almost Optimal
Accept (poster)
Summary: The paper investigates composable coresets for the determinant maximization problem, which aims to pick $k$ vectors with the maximum volume. The authors prove that the widely used greedy algorithm also provides composable coresets, with an approximation factor of $O(k)^{3k}$, and then show a local optimality property for greedy. Empirical results show that the local optimality factor of the greedy algorithm is even lower in practice. Strengths: - The topic of DPPs is relevant. - The technical result is solid. - The observation of local optimality for greedy is of independent interest. Weaknesses: - The paper is a follow-up to [IMGR20]. The improved analysis of the greedy algorithm is interesting but is not the first composable-coreset result. - The studied problem is limited; e.g., the paper only considers unconstrained DPPs, in which every set $P_i$ selects exactly $k$ vectors. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can the theoretical result extend to other settings of DPPs, e.g., where different $P_i$ select different numbers of vectors, or certain constrained DPPs? - The observation of local optimality for greedy is interesting; does it have other applications? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No. I suggest discussing some social impact, e.g., whether greedy is better or worse than local search with respect to fairness issues. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your positive feedback. Below, we have addressed your comments and questions. > The paper is a follow-up work of [IMGR20]. The improved analysis for the greedy algorithm is interesting but not the first result of a composable coreset. This is correct. However, our result closes this line of work ([MIGR19], [IMGR20], and [MKSK13]) by showing that the practical greedy algorithm provides an almost optimal approximation guarantee. We note that the motivation of [MIGR19] for studying local search for composable coresets is that simplicity is desirable to ensure the algorithm is practical. Greedy avoids a lot of the computation required by local search while achieving a comparable guarantee. Furthermore, this paper provides a very different way of analyzing the relationship between local search and the greedy algorithm in comparison to [IMGR20], which could be interesting in its own right. ----- > The studied problem is limited, e.g., only consider unconstrained DPP in which every set $P_i$ selects exactly $k$ vectors. > Can the theoretical result extend to other settings of DPP, e.g., different $P_i$ selects a different number of vectors, or certain constrained DPP? In this paper, similar to [MIGR19] and [IMGR20], we consider the basic setting of unconstrained determinant maximization. The goal is to get as small a coreset as possible for each point set $P_i$, regardless of its initial size $|P_i|$. We want to emphasize that for this problem, a coreset of **size $k$ is always necessary**, as otherwise we cannot get any approximation guarantee. (To see this, consider the setting where $P_i$ contains $k$ points with very high volume but all points in the remaining point sets $P_j$ are equal to 0. In this case, the coreset size must be at least $k$.) It is an interesting direction for future work to see whether the greedy algorithm can be used to get a coreset for this problem under fairness constraints. 
--------- > The observation of local optimality for greedy is interesting, does it have other applications? The locality of greedy does have a consequence: the locality property directly recovers the $k!$ guarantee for volume maximization in the offline case [CMI09], and marginally improves the guarantee to $(k!)^{0.5 + o(1)}$. This is a different proof from the original paper and may be of interest independently of our composable coresets result. The statement and proof of this result can be found in the global response. ------------ > I suggest discussing some social impact, e.g., whether the greedy is better or worse for fairness issues than local search. While our experiments focus on computing the empirical *local optimality* of the greedy algorithm, we expect that neither greedy nor local search will provide a fair result (e.g., consider the scenario where there are two populations in the dataset, one of which contains points with much higher volume. In this setting, both greedy and local search prefer that population and are not fair in that respect). In order to get fair results, one needs to change the formulation of the problem, which is not the focus of this work.
Summary: This work studies the determinant maximization problem, in which the input is a set of $n$ vectors in $d$ dimensions, and the goal is to select a subset of $k \leq d$ of these vectors that maximizes the determinant of the Gram matrix of the $k$ vectors, which is also the squared volume of the parallelepiped spanned by the vectors. This is an important classical problem with connections to diversity. A popular heuristic for this problem is the greedy algorithm, which iteratively builds a set of vectors one at a time by selecting the vector that maximizes the improvement in the current volume. In prior work, an analysis of the greedy algorithm was given which showed that it gives a $C^{k^2}$-factor approximation. However, the best known lower bound is $\Omega(k^{k-o(k)})$ (for composable coresets? I am not sure what the exact setting is here). This work shows that the analysis of the greedy algorithm can in fact be tightened to approximately match this lower bound, giving an approximation factor of $O(k)^{3k}$. This result is shown by proving that the greedy algorithm is approximately locally optimal, which in turn implies the approximation guarantee via results shown in prior work. Strengths: The analysis of greedy is an extremely important and old question (the prior bound is over 10 years old), and the authors provide a very nice improvement to this result, achieving a tight approximation over some class of algorithms. The analysis is nontrivial to carry out. Weaknesses: While the result is very important, the techniques might be viewed as incremental over the analysis of local search given in prior work, and the paper doesn't give much intuition or discussion on why this result might have been missed in prior work. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: * I could not find the referenced lower bound of $\Omega(k^{k-o(k)})$ in prior work (I checked https://arxiv.org/abs/1907.03197 and https://arxiv.org/abs/1807.11648), I would appreciate a pointer to a specific theorem stating this. In particular, I would like to understand the exact setting in which this lower bound applies. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
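For concreteness, the greedy heuristic described in the summary (repeatedly pick the vector with the largest perpendicular distance to the span of the current selection, i.e. the vector that maximizes the volume gain) can be sketched as follows. This is an illustrative numpy sketch with function names of our own choosing, not code from the submission:

```python
import numpy as np

def greedy_maxvol(P, k):
    """Greedy heuristic for determinant (volume) maximization.

    At each step, pick the vector with the largest perpendicular distance
    to the span of the vectors chosen so far; this is exactly the vector
    that maximizes the volume gain.  P is an (n, d) array of n vectors in
    d dimensions; returns the indices of the (up to) k chosen vectors.
    """
    P = np.asarray(P, dtype=float)
    residual = P.copy()                # components orthogonal to the current span
    chosen = []
    for _ in range(k):
        norms = np.linalg.norm(residual, axis=1)
        norms[chosen] = -1.0           # never re-pick an already chosen vector
        i = int(np.argmax(norms))
        if norms[i] <= 1e-12:          # rank exhausted: any further vector adds zero volume
            break
        chosen.append(i)
        u = residual[i] / norms[i]
        residual -= np.outer(residual @ u, u)   # Gram-Schmidt style projection step
    return chosen

def sq_volume(P, idx):
    """Squared volume = determinant of the Gram matrix of the selected vectors."""
    V = np.asarray(P, dtype=float)[idx]
    return float(np.linalg.det(V @ V.T))
```

On mutually orthogonal inputs greedy is exactly optimal; the paper's result bounds how far it can be from optimal in general.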
Rebuttal 1: Rebuttal: We thank you for reviewing our paper and your positive feedback. In what follows we address your concerns and questions. ---------- > While the result is very important, the techniques might be viewed as incremental over the analysis of local search given in prior work [IMGR20] discusses and analyzes the relationship between greedy and local search by utilizing a very lossy reduction to the k-perpendicular heights problem. In contrast, this paper provides a very different way of performing this reduction, which is interesting in its own right. ------- > I could not find the referenced lower bound of $\Omega(k^{k-o(k)})$ in prior work ... I would appreciate a pointer to a specific theorem stating this. In particular, I would like to understand the exact setting in which this lower bound applies. The lower bound is **Theorem 1.4** in the arxiv version of reference **[IMGR20]** “Composable Core-sets for Determinant Maximization Problems via Spectral Spanners.” The setting for our results is the **composable coreset** setting, where the data is partitioned across multiple machines: machine $i$ holds the dataset $P_i$. The goal is to use greedy to summarize $P_i$, obtaining a subset $S_i=\mathrm{Greedy}(P_i)$ such that $\mathrm{MAXVOL}^2_k(S) \geq (1/\alpha)\, \mathrm{MAXVOL}^2_k(P)$, where $S=\bigcup_i S_i$ and $P = \bigcup_i P_i$, and where for a set $A$, $\mathrm{MAXVOL}_k(A)$ is defined as the maximum volume achievable by picking $k$ points in $A$. The lower bound shows that one cannot get an $\alpha$ smaller than $k^{k(1-o(1))}$. Finally, in the offline setting we show that using the locality property one can marginally improve the $k!$ guarantee of volume maximization of [CMI09] to $(k!)^{0.5 + o(1)}$. Please refer to the global response for the proof. -------- Please let us know if we can provide further clarification. --- Rebuttal Comment 1.1: Title: Thank you for the clarification on the lower bound. 
Comment: I encourage the authors to include this reference in the main text, since it is crucial for understanding the claim of optimality. If you could give at least a partial answer to the question "why this result might have been missed in prior work", this would be very helpful. In particular, the main trick seems to be the use of the matrix determinant lemma, but this seems to be a common tool in the area of determinant maximization, so I find it somewhat surprising that this kind of observation was not made before. Would it be possible to give some more insight into what might have been the "key" that prior work didn't have, or maybe a particular difficulty in the calculations? It is of course possible that it was just missed, I'm just wondering if the authors had any take on this. --- Reply to Comment 1.1.1: Comment: We will make sure to include the reference in the final version. Regarding why this was missed previously: in short, those papers were pursuing different approaches, as described below. Our goal, however, was to understand the local optimality property of the greedy algorithm and to answer "does the greedy algorithm have properties similar to the local search algorithm?" While simple in retrospect, it is not clear at first glance why this should be helpful. Please see below for more details. In the composable coreset setting, [MIGR19] used a more geometric approach. In particular, the key idea was that a coreset for the directional height problem implies a composable coreset for determinant maximization. So it remained to show that a simple determinant maximization algorithm yields a coreset for directional height. The authors observed that local search was very helpful in proving the k-directional heights result. When the authors returned to the vanilla greedy algorithm, they tried to perform the same reduction analogously, but without the local optimality assumption. Without local optimality, the guarantee was much weaker. 
Our key contribution is the "local optimality property" of the greedy algorithm. It lets us directly use the result of [MIGR19] on local search to get our result for greedy, thus bypassing the lossy reduction that tries to get a coreset for directional height directly from greedy. In the composable coreset setting of [IMGR20], they follow an approach based on spectral spanners. Their goal was to get the best constant in the exponent (e.g. $k^{k/2}$), so their algorithm is based on solving an LP and is not optimized for simplicity or practical use. In the offline setting, the reason it was probably missed is that swapping a vector into the greedy solution can break the "structure" of the greedy solution after even the first swap, so the vectors have to be carefully introduced in a sequence. Thinking of the structural property of local optimality would therefore not be the first approach: indeed, [CMI09] iteratively compares the greedy and optimal solutions in a very different manner.
Summary: The paper gives a tighter analysis of the greedy algorithm for constructing composable coresets for the determinant maximization problem. The greedy algorithm performs well in practice; however, the previous analysis gave an approximation factor much farther from the lower bound. The authors bring the approximation factor of the greedy algorithm much closer to the lower bound. This is achieved by showing that swapping a single point of the greedy solution with another vector, as in the local search approach, does not increase the volume (determinant) by much. The authors also perform experiments showing that in practice the local optimality of the greedy algorithm is actually much better than the theoretical bound. Strengths: The paper is well written and clear. I tried to go through the proofs and, unless I have missed something, the proofs are sound. In fact, the proofs look quite elegant, using techniques from simple linear algebra. Code is provided for the empirical results. By closing the gap between the lower bound and the approximation factor of the greedy algorithm, the paper is able to give a better explanation for the good performance of the greedy algorithm in practice. Weaknesses: The only weakness I could think of is that the contribution is only theoretical and does not really have much practical implication, as the greedy algorithm was already known to perform nicely in practice. The simplicity of the proof techniques, while not really a weakness, may not excite the community much from the novelty perspective. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please refer to weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for reviewing our paper and your positive feedback. We argue that while this is primarily a theoretical contribution, it is valuable for the following reasons: - While it was already known that greedy does well in practice, our locality theorem, along with our experiments showing a lower value of local optimality on real datasets, provides a sound explanation for the strong practical performance of greedy in the context of composable coresets. - We further believe that the locality result on the greedy algorithm is a structural result, and may find uses in other contexts; we point the reviewer to the global response for an example of how this property recovers (and in fact slightly improves) the best analysis of greedy in the offline setting. --- Rebuttal Comment 1.1: Title: Replying to Rebuttal Comment: Thanks for the response. I will keep my score
Summary: The paper considers the problem of picking $k$ among $n$ vectors that maximize the volume of the parallelepiped spanned by the selected vectors, whose square equals the determinant of the corresponding Gram matrix. In particular, the authors focus on the composable coreset setting of the problem: given a large dataset split into multiple subsets, the aim is to find a small summary (coreset) of each subset such that the union of the summaries is a good summary of the full dataset. Existing work showed that the best approximation possible for this problem is $\Omega(k)^{k - o(k)}$ and that an LP-based algorithm achieves an almost optimal $\tilde{O}(k)^k$-approximation. The greedy algorithm followed by local search, which works better in practice than the LP-based algorithm, was also shown to achieve an ${O}(k)^{2k}$-approximation, while the greedy algorithm alone was only shown to achieve $C^{k^2}$ for some constant $C$. This paper provides an improved approximation guarantee of ${O}(k)^{3k}$ for the greedy algorithm, by providing an almost tight bound on the local optimality of the solution output by greedy. The presented numerical results also show that the local optimality of the greedy solution is significantly lower than the theoretical upper bound on real and random datasets. Strengths: - The paper provides a new analysis of the greedy algorithm which significantly improves upon its known approximation guarantee. Even though this approximation is still worse than the guarantees achieved by the LP-based and the greedy + local search algorithms, the result is interesting because in practice the greedy algorithm performs better than the LP-based one, and skipping the local search stage saves time. - The theoretical results are correct, and the experimental results illustrate nicely that the local optimality of the greedy solution in practice is significantly better than the theoretical bound. 
Weaknesses: - The paper is not well self-contained; several details are omitted in the discussion and proofs, which affects clarity (see questions for details). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Suggestions to improve clarity: - The Greedy algorithm description in the preliminaries Section 1.1.1 does not match the pseudocode in Algorithm 1, with no mention of the relation between them. It would be good to explicitly discuss the relation between the two, even if it is possibly well known. - In the proof of Theorem 5, the vectors in V are implicitly assumed to be linearly independent. This should be stated explicitly, explaining why it can be assumed without loss of generality. - Provide details for the expression of $\mathrm{vol}(V)$ given in Theorem 5 and for why $|a^j_l| \leq 1$, or at least a clear reference; similarly for other steps in this proof and in other ones. - Provide a reference for Lemma 6. - In the experiments, do you use the data points directly as feature vectors, or do you apply a kernel as in prior work? Minor comments/suggestions: - Using $\mathrm{vol}(S)$ to denote the squared volume is a bit confusing. I propose using $\mathrm{vol}^2(S)$. - typo line 87: $(2 k ( 1 + \epsilon))^k \rightarrow (2 k ( 1 + \epsilon))^{2k}$ - typo line 154: $a^j_i \rightarrow |a^j_i|$ Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: To Reviewer rYtN, We thank you for your positive feedback. Below, we have addressed your comments and questions. ----- > The Greedy algorithm description in the preliminaries Section 1.1.1 does not match the pseudocode in Algorithm 1 with no mention of the relation between them. It is good to explicitly discuss the relation between the two, even if this is possibly well known. Thanks, we will add a clarification about why picking the vector with the largest perpendicular distance to the current solution is the same as greedily picking the vector that maximizes the volume. ----- >In the proof of Theorem 5, the vectors in V are implicitly assumed to be linearly independent. This should be stated explicitly, explaining why this can be assumed without loss of generality. If the vectors in the greedy solution are not linearly independent, then the rank of the $n$ input vectors is less than $k$, so both the greedy and the optimal volume are 0. In this case, Theorem 5 directly holds without proof. We will add this explicitly as suggested. ---- >Provide details for the expression of $\mathrm{vol}(V)$ given in Theorem 5 and for why $|a^j_l| \leq 1$, or at least a clear reference. Similarly for other steps in this proof and other ones. * The main idea behind the volume expansion is that subtracting a vector's perpendicular components from other vectors does not change the volume, similar to the Gram-Schmidt volume computation. We will elaborate on this further in the final version. * $|a^j_l| \leq 1$ directly holds by the property of the greedy algorithm: if $|a^j_l| > 1$, then $v_j$ would have been chosen before $v_i$. We will add details about this. * We will ensure that appropriate explanations are added for any ambiguous steps. ---- > Provide a reference for Lemma 6. We apologize for missing this reference. A proof of this lemma can be found in the paper titled _Eigenvalues of rank-one updated matrices with some applications_, by Jiu Ding and Aihui Zhou. 
We will include this reference in the final version of our paper. ----- >In the experiments, do you use the data points directly as feature vectors, or you apply a kernel as in prior work? We use the data points directly as feature vectors. However, we don't expect the results to vary much. As an example, we applied the RBF kernel with sigma = 6 as used in prior work and repeated Experiment 1 on the GENES dataset for k = 2,4,...,20 and the random dataset. The local optimality was still very close to 1 (figure attached in the global response). ----- > using $\mathrm{vol}(S)$ to denote the square volume is a bit confusing. I propose to use $\mathrm{vol}^2(S)$. Our analysis focuses on $\mathrm{vol}(S)$, while $\mathrm{vol}^2(S)$ is the determinant. For $\mathrm{vol}^2(S)=$ determinant, we obtain a guarantee of $k^{3k}$, while local search and the optimal algorithm are known to have guarantees of $k^{2k}$ and $k^k$ respectively. If we instead want to write the guarantees in terms of volume, they become $k^{1.5k}$, $k^k$ and $k^{k/2}$ for greedy, local search and optimal respectively. We will make this distinction more clear in the final version. ------ > typo line 87: $(2 k ( 1 + \epsilon))^k \rightarrow (2 k ( 1 + \epsilon))^{2k}$ > typo line 154: $a^j_i \rightarrow |a^j_i|$ We apologize for the mistakes in the submission. We will correct all typos in the final version. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for your response and additional experiment. The additional derivation of the $k!$ guarantee of offline greedy is also interesting. I highly recommend including more detailed explanations of the proof steps, beyond what you stated in the rebuttal. This would increase the readability of the paper to a wider audience. --- Reply to Comment 1.1.1: Comment: Thank you for finding this application interesting and for your comment. We will certainly describe the proof steps in further detail for the final version.
Rebuttal 1: Rebuttal: We thank all the reviewers for their positive and constructive feedback. Below, we give a proof that the locality property can recover, and in fact marginally improve, the $k!$ guarantee for the offline version of the greedy algorithm [CMI09]. We will include it in the final version. While the focus of our paper is on composable coresets, we hope that this application to the offline case demonstrates why our structural result on local optimality could be useful in analyzing greedy algorithms in other contexts as well. ------ ### Theorem Let $P$ be a point set, $\text{Greedy}(P) = \\{v_1,\ldots,v_k\\}$ the output of the greedy algorithm, and $\text{maxvol}_k(P)$ the maximum volume of any subset of $k$ vectors from $P$. Then $\text{vol}(\text{Greedy}(P)) \geq \frac{\text{maxvol}_{k}(P)}{\prod_{i=2}^k (1+\sqrt{i})}$. ### Proof Let $S \subseteq P$ be the set of $k$ vectors with maximum volume. Without loss of generality and for simplicity of exposition, we assume $S\cap \text{Greedy}(P) = \varnothing$ (the proof still goes through if this is not the case). Consider the set $W_1 = \\{v_1\\} \cup S$ with $k+1$ elements. Perform the greedy algorithm on $W_1$ with $k$ steps. Clearly, greedy will choose $v_1$ first and then some $k-1$ of the remaining vectors. Label the left-out vector $w_1$. Inductively define $W_{i+1} = \\{v_1,\ldots,v_i,v_{i+1}\\} \cup (S - \\{w_1,\ldots,w_i\\})$, which has size $k+1$. Perform greedy on $W_{i+1}$ with $k$ steps. The first $i+1$ vectors chosen will be $v_1,\ldots,v_i,v_{i+1}$ by definition. Call the left-out vector $w_{i+1}$. We now have an ordering for $S = \\{w_1,\ldots,w_k\\}$. Starting with the greedy solution, we will now perform $k$ swaps to obtain the optimal solution. Each swap will increase the volume by a factor of at most $1+\sqrt{k}$. Initially, our solution starts with $\text{Greedy}(P) = \\{v_1,\ldots,v_k\\}$. Note that this is also the output of greedy when applied to the set $\text{Greedy}(P) \cup \\{w_k\\} = W_k$. 
Swapping in $w_k$ in place of $v_k$ increases our volume by a factor of at most $1+\sqrt{k}$. Our current set of vectors is now $\\{v_1,\ldots,v_{k-1},w_k\\}$. By the ordering on $S$, this is also the greedy output on the set $W_{k-1} = \\{v_1,\ldots,v_{k-1},w_{k-1},w_k\\}$. Therefore, we may swap in $w_{k-1}$ in place of $v_{k-1}$ in our current set of vectors, increasing the volume by at most a factor of $(1+\sqrt{k})$. Proceeding in this manner, we can perform $k$ swaps to obtain the optimal solution from the greedy solution, increasing our volume by a factor of at most $(1+\sqrt{k})^k$. To obtain the slightly better approximation factor in the theorem statement, we observe that, in the proof of Theorem 5 in the paper, swapping out the $i^{\text{th}}$ vector from the greedy solution for a vector that was not chosen increases the volume only by a factor of $(1+\sqrt{k+1-i}) \leq 1 + \sqrt{k}$, and that swapping out the $k^{\text{th}}$ vector does not increase the volume at all. Therefore, the approximation factor of greedy is at most $\prod_{i=1}^{k-1} (1+\sqrt{k+1-i}) = \prod_{i=2}^k (1+\sqrt{i})$. ### Remark Note that $\prod_{i=2}^k (1+\sqrt{i}) < 2^k \sqrt{k!}$ for $k \geq 7$, which is $(k!)^{\frac{1}{2} + o(1)}$. While the improvement in the approximation factor is quite small, we emphasize that the proof idea is very different from the $k!$ guarantee obtained in [CMI09]. ---------- Description of attached figure: We repeated Experiment 1 for the GENES dataset with the RBF kernel applied, as requested in Review 1. Pdf: /pdf/141d1608e8023bd32386cab635aa484a81751a34.pdf
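The remark's inequality $\prod_{i=2}^k (1+\sqrt{i}) < 2^k \sqrt{k!}$ for $k \geq 7$ is easy to verify numerically (indeed each factor satisfies $1+\sqrt{i} \leq 2\sqrt{i}$). A small Python check, with helper names of our own choosing, purely for illustration:

```python
import math

def greedy_factor(k):
    """Approximation factor from the locality-based analysis: prod_{i=2}^{k} (1 + sqrt(i))."""
    return math.prod(1 + math.sqrt(i) for i in range(2, k + 1))

def comparison_bound(k):
    """The quantity 2^k * sqrt(k!) from the remark, a (k!)^{1/2 + o(1)} rate."""
    return 2 ** k * math.sqrt(math.factorial(k))

# Since 1 + sqrt(i) <= 2 * sqrt(i) for i >= 1, the product over i = 2..k
# is dominated by 2^k * sqrt(k!), confirming the remark's claim.
```

Running the two helpers over a range of $k$ confirms that the locality-based factor sits strictly below $2^k\sqrt{k!}$.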
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Use perturbations when learning from explanations
Accept (poster)
Summary: This paper proposes a novel approach to Machine Learning from Explanations (MLX), the setting in which data points for learning are paired with human-annotated “explanations”, usually in the form of image masks marking regions that are known to be irrelevant with respect to the downstream prediction task. Previous methods for MLX made use of regularization-based techniques, forcing the learnt function to be locally independent of the irrelevant features for the training data points. The authors show that these methods require strong model smoothing to be effective, to the extent that downstream task performance is heavily penalized. They then propose “robustness-based” methods, which force the model to be robust to perturbation across the irrelevant features, and show theoretically and practically that these techniques work even without heavy regularization smoothing. The authors then show, using three benchmarks, that their technique works in practice, but best results are obtained by combining robustness and regularization-based methods. Strengths: The paper improves on methods from an existing problem setting, applies robustness techniques where they had not been tried before, and convincingly shows both with theory and experiments that such techniques work. The theorems, despite their simplified setting, intuitively demonstrate why the approach works better than the alternatives. The experimental results convincingly show that the robustness-based techniques work better than regularization-based techniques. Weaknesses: While the contribution itself is nicely self-contained and perfectly valid when taken at face value (that in the MLX setting, training GPs or simple CNNs, robustness-based methods are better than regularization-based methods), one cannot help but wonder whether the entire setup is relevant or interesting, considering current state of the art architectures and how they attend to their input. 
Attention-based architectures like ViTs, due to intermediate-layer attention maps, lend themselves to very natural investigations commensurable to the kind of pixel-wise explanations used in this work. One can imagine that, as these transformer models also easily incorporate masking, the whole MLX problem could be reframed to be about either manipulating attention maps or dynamically learning attention masks. An example is [Yang et al. (2023)](https://openaccess.thecvf.com/content/CVPR2023/html/Yang_Improving_Visual_Grounding_by_Encouraging_Consistent_Gradient-Based_Explanations_CVPR_2023_paper.html). Addressing these concerns either in the introduction or related work could help contextualize the paper’s contributions. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: It would be interesting, to place the present work in perspective, to at least spend some time in related work investigating how “explanations” are currently treated in the literature on transformer-based image models like ViTs. Is there any existing literature showing techniques for MLX (maybe not labeled as such) that are dependent on a particular architecture choice? If so, what makes the present work still interesting? Edit: Increased the score after author rebuttal addressing the points above - see comment below. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors briefly mention limitations related to the possibility, in future work, of learning from incomplete explanation data. It would be interesting to add a segment related to the points raised in Weaknesses and Questions. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
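The robustness-based idea the review summarizes (train the model to be robust to perturbations confined to the features an explanation marks as irrelevant) can be sketched roughly as below. This is a minimal numpy sketch under our own naming (`masked_pgd_perturb` and its hyperparameters are illustrative); the paper's actual implementation may differ:

```python
import numpy as np

def masked_pgd_perturb(x, mask, grad_fn, eps=0.3, step=0.1, n_steps=5):
    """Find a worst-case perturbation of x restricted to the irrelevant
    features (mask == 1), PGD-style; training on the perturbed input then
    discourages the model from relying on those features.

    grad_fn(x) should return the gradient of the training loss w.r.t. x.
    """
    delta = np.zeros_like(x, dtype=float)
    for _ in range(n_steps):
        g = grad_fn(x + delta)
        delta += step * np.sign(g) * mask         # ascend the loss, only on masked features
        delta = np.clip(delta, -eps, eps) * mask  # project to the L_inf ball and to the mask
    return x + delta
```

A training loop would then minimize the loss on the perturbed inputs instead of (or in addition to) the clean inputs, optionally combined with a gradient regularizer as the paper recommends.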
Rebuttal 1: Rebuttal: We thank the reviewer for their passionate assessment of our work. We are very glad that the reviewer appreciates the novelty of our contribution, the intuition from our analysis, and the completeness of our experiments. We enjoyed addressing their concerns on the future relevance of our contributions. We will make sure to include the details of our response in the final version. > It would be interesting, to place the present work in perspective, to at least spend some time in related work to investigate how “explanations” are currently treated in the literature on transformer-based image models like ViTs. Is there any existing literature showing techniques for MLX (maybe not labeled as such) that are dependent on a particular architecture choice? If so, what makes the present work still interesting? …….. Attention-based architectures like ViTs, due to intermediate-layer attention maps, lend themselves to very natural investigations commensurable to the kind of pixel-wise explanations used in this work. We thank the reviewer for drawing our attention to architecture-specific regularization of explanation masks. We agree that ViTs can much more readily provide a local explanation through attention maps, which have been used to encode prior knowledge in the past [1, 2] (thanks for pointing us to [2]). However, we observe that attention maps are just another local explanation method, like gradients or CDEP, which we studied in our paper. In that spirit, we expect attention-map-based explanations to follow the same trend of results as the other explanation methods we studied, i.e. attention-map regularization $\leq$ perturbation-based methods $\leq$ their combination. We confirm this trend through empirical validation on the Decoy-MNIST dataset. In the shared response, we provided results when using the attention-map-based regularization method proposed in [1], called SPAN, and when using a Vision Transformer architecture of depth 3 and width 128. 
We observed that the ViT architecture latches onto the spurious correlation much more strongly (when compared with the feed-forward network architecture we were originally using), perhaps because of the phenomenon observed in [3], making it harder to remove dependence on the decoy part. Nevertheless, we note the following trend in the results: SPAN < PGD $\leq$ PGD-Ex + SPAN. The results from our experiment align with our expectations, and because attention maps are just another local explanation method, much of our intuition and takeaways continue to hold even for Vision Transformers and attention maps. More generally, we expect our method and takeaways to become obsolete only once we have architectures that can faithfully attribute the importance of different input regions (by design or otherwise) while also not compromising performance. To the best of our knowledge, no existing architecture can faithfully attribute per-region saliency. Perhaps it is possible to come up with a carefully designed transformer architecture that can roll out per-region contributions more faithfully, as the reviewer is alluding to, but such an architecture does not exist yet to our knowledge. The attention maps of a ViT aggregate information from other patches at each layer, which is why the attention of a patch in the last layer (which is used for regularization as proposed in [1]) cannot faithfully attribute the importance of that patch. Our work therefore remains relevant even with recently popular transformer-based networks. References. [1] Miao, Kevin, et al. "Prior Knowledge-Guided Attention in Self-Supervised Vision Transformers." arXiv preprint arXiv:2209.03745 (2022). [2] Yang, Ziyan, et al. "Improving Visual Grounding by Encouraging Consistent Gradient-based Explanations." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [3] Sagawa, Shiori, et al. "An investigation of why overparameterization exacerbates spurious correlations." 
International Conference on Machine Learning. PMLR, 2020. --- Rebuttal Comment 1.1: Title: Reply to the Authors Comment: We thank the authors very much for their very detailed and on-point reply. Specifically, I found the authors to have gone above and beyond in addressing the specific point I raised on architecture-dependent MLX technique. Their new experiment compares one such technique (SPAN) with theirs, and furthermore shows that combining the two improves performance, as predicted by their previous results. I would thus like to raise my score to a 7 (I do not however seem to find a way to edit my own review at the moment). Edit: I was now able to edit the review. --- Reply to Comment 1.1.1: Comment: We are pleased to know that Reviewer WA6d found our response convincing. Thank you once again for your support and thoughtful review.
Summary: This paper reinterprets Machine learning from explanations (MLX) as a robustness problem, where human explanations define a lower-dimensional manifold for perturbations. The paper points out that the previous regularization-based MLX approaches require strong model smoothing in order to be globally effective at reducing shortcut learning. The authors propose a novel approach that combines robust training methods with an earlier MLX technique, achieving state-of-the-art results on both synthetic and real-world benchmarks. The theoretical and empirical analyses explaining how the combination of robustness and regularization can reduce the need for strong model smoothing are provided. Strengths: 1. The paper is well-structured and clearly organized, making it easy to follow; 2. The authors offer theoretical analyses to explain how the combination of robustness and regularization can minimize the need for strong model smoothing, which adds to the rigor and solidity of the paper. Weaknesses: 1. The contribution of the proposed method is restricted to the combination of two established robust training techniques with an existing MLX approach; 2. Although the authors thoroughly review various regularization-based MLX approaches in the introduction, the paper only showcases the effectiveness of combining robust training methods with a single MLX approach, Grad-Reg. It remains unclear whether the proposed approach can be applied to other MLX methods, and further investigation is necessary to determine its generalizability. In addition, Table 1 does not include the results of the more recent MLX approaches that are well discussed in the introduction section. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Is it possible to apply the proposed methods to other MLX approaches? To better understand the generalizability of the approach, it would be helpful to see the performance of robust training methods combined with various MLX methods. 2. 
The authors mention in lines 277-279 that they evaluate in-domain test images with background pixels replaced by a constant pixel value. Have you tested the methods' performance under other out-of-distribution (OOD) scenarios, such as replacing background pixels with the background pixels from randomly selected images? 3. A sensitivity analysis on the hyper-parameters should be provided. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See weaknesses. At this moment, the main limitations of the paper revolve around the limited novelty and generalizability of the proposed method. I tend to reject the paper primarily for this reason. However, I may reconsider my evaluation if the authors can provide solid evidence demonstrating the generalizability of their approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review. We tried our best to address their concerns on the contribution and generalizability of our method; we are more than happy to answer any further concerns. > The contribution of the proposed method is restricted to the combination of two established robust training techniques with an existing MLX approach; We wish to highlight that our contribution also lies in (a) a systematic study of robustness-based methods for learning from explanations, which is missing in the existing literature, and (b) theoretical analysis and experimental validation of the relative merits of regularization- and robustness-based methods. Although our final recommendation of combining regularization with robustness-based methods is simple, we found consistent gains with it across multiple datasets and architectures. In that regard, we only view the simplicity of our recommendation as an appealing aspect. > It remains unclear … can be applied to other MLX methods … necessary to determine generalizability. In addition, Table 1 does not include the results of the more recent MLX approaches that are well discussed in the introduction section. We only evaluated Grad-Reg when combined with robustness methods because Grad-Reg far exceeded the performance of CDEP as reported in our results table (Table 1). “We omit comparison with Shao et al. (2021) [5] because their code is not publicly available and is non-trivial to implement the influence-function based regularization.” (L224-225). Schramowski et al. (2020) [7] simply studied regularizing using gradient explanations (just like Grad-Reg) for supervising explanations. Nevertheless, we appreciate the concern and added to the shared response the results of combining three other MLX methods with a robustness-based method: PGD-Ex. We evaluated CDEP + PGD-Ex, Integrated-Gradient [8], and attention maps using Vision Transformers (ViT) [2]. 
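For intuition, here is a minimal sketch of what such a robustness-plus-regularization combination could look like for a simple logistic model: a PGD-style perturbation confined to the human-marked irrelevant region, plus a Grad-Reg-style penalty on input gradients in that region. The function names, hyperparameters, and loss weighting below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def combined_loss(w, x, y, mask, eps=0.3, steps=3, alpha=0.1, lam=1.0):
    """Sketch of a combined objective: a PGD-style perturbation confined to the
    human-marked irrelevant region (robustness term) plus a Grad-Reg-style
    penalty on input gradients in that region. `mask` is 1 on irrelevant
    features, 0 elsewhere. All hyperparameters here are illustrative."""
    # Robustness term: sign-gradient ascent restricted to the masked region.
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = sigmoid(w @ (x + delta))
        grad_x = (p - y) * w        # d(cross-entropy)/dx for a logistic model
        delta = np.clip(delta + alpha * np.sign(grad_x) * mask, -eps, eps)
    p_adv = sigmoid(w @ (x + delta))
    robust = -(y * np.log(p_adv) + (1 - y) * np.log(1 - p_adv))

    # Regularization term: penalize saliency on the irrelevant region.
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    reg = np.sum((grad_x * mask) ** 2)
    return robust + lam * reg
```

With the mask set to all zeros, the objective reduces to the plain cross-entropy loss, which serves as a quick sanity check.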
We observe from the results in the shared response that, irrespective of the explanation method used, our claims remained valid: (a) using perturbations (i.e., robustness-based methods) for supervising explanations is better than using regularization-based methods, and (b) combining robustness with regularization-based methods is at least as good as or better than using robustness-based methods alone. We hope that these results with new explanation methods, together with the other results presented in the shared response, are convincing of the generalizability of our proposal. > Have you tested under other out-of-distribution (OOD) scenarios…plant dataset We were following the dataset construction from Schramowski et al. [7], which replaces the background with the average pixel value (obtained using the train split). We did not originally evaluate other shifts to tease out dependence on the background. Acting on your suggestion, we evaluated on a test set obtained by adding varying magnitudes of noise to the background. We observe that robustness and regularization methods, when combined, led to a model that is far more robust to noise in the background, aligning with our original results on the plant dataset.

| | Noise (N(0, 1)) | | Noise (N(0, 10)) | | Noise (N(0, 30)) | |
|------------------------------|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|
| | Avg Acc | Wg Acc | Avg Acc | Wg Acc | Avg Acc | Wg Acc |
| ERM | 59.8 ± 11.9 | 43.5 ± 2.0 | 57.4 ± 7.6 | 38.1 ± 4.8 | 55.8 ± 1.7 | 22.0 ± 3.7 |
| Grad-Reg | 71.6 ± 2.0 | 66.1 ± 1.8 | **68.7 ± 6.2** | 53.4 ± 4.3 | 56.1 ± 3.3 | 34.8 ± 1.8 |
| PGD-Ex+Grad-Reg | 69.8 ± 1.8 | **67.2 ± 2.1** | **69.5 ± 3.7** | **60.6 ± 4.8** | **67.5 ± 4.5** | **50.8 ± 2.4** |

> A sensitivity analysis on the hyper-parameters should be provided. 
In Appendix F, Figure 4 and L668-674, we presented a sensitivity analysis of PGD-Ex hyperparameters (the number of optimization steps and epsilon) on performance for the plant and ISIC datasets. Below, we also show sensitivity to hyperparameters for PGD-Ex + Grad-Reg on the Decoy-MNIST dataset. In summary, results are broadly stable across the choice of hyperparameters, and we did not extensively search for the best hyperparameters.

| Decoy-MNIST | Lambda (Grad-Reg) | Eps (PGD-Ex) | Avg Acc | Wg Acc |
|-------------------|-------------------|--------------|-------------------|-------------------|
| PGD-Ex + Grad-Reg | | | | |
| | **1** | **3** | **96.9 ± 0.3** | **95.8 ± 0.4** |
| | 0.1 | 3 | 96.8 ± 0.8 | 94.2 ± 0.2 |
| | 5 | 3 | 91.6 ± 0.9 | 87.6 ± 2.3 |
| | 0.0001 | 3 | 75.5 ± 0.9 | 57.2 ± 3.6 |
| | 1 | 5 | 93.8 ± 1.6 | 86.3 ± 3.1 |
| | 1 | 0.1 | 58.5 ± 7.9 | 30.0 ± 2.7 |
| | 1 | 0.0001 | 59.5 ± 1.1 | 40.1 ± 2.0 |

We could not share all sensitivity analysis results here due to space constraints. We will, however, make sure to include them in the final version. > … reconsider my evaluation if … provide solid evidence … generalizability We hope the reviewer finds our response to be convincing of the generalizability of our approach. We are happy to engage further to clarify any remaining concerns. We thank the reviewer once again for careful consideration of our paper. For references, please see our shared response. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I now well understand the two claims of the paper: (a) using perturbations (i.e. robustness-based methods) for supervising explanations is better than using regularization-based methods, (b) combining robustness with regularization-based methods is at least as good or better than using robustness-based methods alone. I still have some concerns and questions: The robustness-based methods themselves are not the contribution of this work. 
They are proposed in previous work, and the contribution of the paper is only to use and evaluate them in the MLX setting. Therefore, the novelty and contribution of this point appear limited from my perspective. Secondly, the empirical evidence presented does not consistently support the superiority of robustness-based methods. For instance, in Table 1, Grad-Reg outperforms all robustness-based methods on the Decoy-MNIST dataset. Similarly, the performance difference between robustness-based methods and Grad-Reg on the ISIC dataset is not substantial. Furthermore, I have reservations about the authors' implementation of CDEP. The original CDEP paper demonstrated a significant advantage of CDEP over Grad-Reg (RRR) on the ISIC dataset, which is not mirrored in Table 1 of this paper. Also, according to the CDEP paper, the gap between CDEP and Grad-Reg (RRR) on the Decoy-MNIST dataset is not as large as reported in Table 1. Could the authors elucidate these points? I will reassess my overall rating of the paper when more information is provided. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response. We are glad that our earlier response helped clear some of the concerns. > The robustness-based methods themselves are not the contribution of this work. They are proposed in previous work and the contribution of the paper is only to use and evaluate them in the MLX setting. Therefore, the novelty and contribution of this point appear limited in my perspective. Yes, IBP and PGD are established, popular methods for robust training. One of our contributions lies in recognizing their relevance to learning from explanations. Moreover, we highlight further key contributions, including an analysis of regularization-based MLX methods as well as proposing the novel combination of regularization-based and robustness-based methods, which shows consistently state-of-the-art performance. > Secondly, the empirical evidence ... 
not consistently support the superiority of robustness-based methods. For instance, in Table 1, Grad-Reg outperforms all robustness-based methods on the Decoy-MNIST dataset. Similarly, the performance difference between robustness-based methods and Grad-Reg on the ISIC dataset is not substantial. We agree that improvements of robustness over regularization methods are somewhat muddled by high standard deviations on the Decoy-MNIST and ISIC datasets, but they were well pronounced on the plant dataset. Newly added results for Decoy-MNIST and Salient-Imagenet also substantiate the relative strength of robustness-based methods over regularization-based ones. We would also like to again highlight the consistent and considerable improvement offered by the novel combination of robustness-based and regularization-based methods. Thanks for raising this point; we will carefully rephrase the claim of robustness being better than regularization to "robustness-based methods are at least as good or better than regularization-based methods when learning from explanations". > Furthermore, I have reservations about the authors' implementation of CDEP. The original CDEP paper demonstrated a significant advantage of CDEP over Grad-Reg (RRR) on ISIC dataset, which is not mirrored in Table 1 of this paper. Also, according to the CDEP paper, the gap between CDEP and Grad-Reg (RRR) on the Decoy-MNIST dataset is not as large as reported in Table 1. *Regarding the ISIC dataset discrepancy:* We thank the reviewer for this careful observation. We readily understand the concern. We had spent significant time debugging the cause of CDEP's poor performance; our efforts and observations are documented in Appendix G ("Discussion on poor CDEP performance") of the supplementary material. 
To summarise, we stated that the discrepancy in relative performance between Grad-Reg and CDEP (as reported by us and [1]) on the ISIC dataset may have been because they (a) use a different metric: F1, and (b) use a different architecture: a VGG model pretrained on Imagenet. Furthermore, Table 4 of the supplementary material shows per-group accuracy for different methods on the ISIC dataset. We observe that CDEP performs well (only) on majority groups (examples without patches), which may have influenced the metrics reported in Rieger et al. [1]. *Regarding the Decoy-MNIST dataset discrepancy:* The Decoy-MNIST setting presented in our paper is inspired by the decoymnist dataset of [1], but is not the same. Sorry for any confusion. All the methods were found to be equally good on the original decoy-mnist dataset [1], which is why we had to alter the dataset to be more challenging. A key difference is that the volume of spurious/simple features in our version of the decoy-mnist dataset is much higher, making it harder to remove a model's dependence on decoy/spurious features. Therefore, the performance gap reported in our paper on this dataset is not related to the one reported in the CDEP paper [1], although both datasets share the same name. *Further clarification on our implementation.* Our implementation of CDEP is borrowed from their official code repository [2]. We also conducted an extensive search for optimal CDEP hyperparameters, picked the best checkpoint and hyperparameters using a validation set, reported the average over three runs, and visualized its results (in Appendix G). The results reported for CDEP in the paper are our best effort to reproduce it. Thanks again. [1] Rieger, Laura, et al. "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge." International conference on machine learning. PMLR, 2020. 
[2] deep-explanation-penalization, (2020), GitHub repository, https://github.com/laura-rieger/deep-explanation-penalization/tree/master --- Reply to Comment 1.1.2: Comment: We are delighted to hear that the reviewer's concerns are all resolved. We thank the reviewer for their time, passion, and patience. _Regarding novelty_. We wish to highlight that robustness methods have been around since 2014 [1] and regularization of gradient explanations to learn from explanation masks was first proposed in 2017 [2]. And yet, the relevance and utility of robustness methods for the learning-from-explanations problem have not been studied so far. Our work studied this novel combination, which filled a crucial gap and established a strong baseline for future research. We thank the reviewer once again for careful consideration of our paper. References: [1] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014). [2] Ross, Andrew Slavin, Michael C. Hughes, and Finale Doshi-Velez. "Right for the right reasons: Training differentiable models by constraining their explanations." arXiv preprint arXiv:1703.03717 (2017).
Summary: This paper proposes a new approach to machine learning from explanations (MLX). This new approach is still based on human-provided explanations of (ir)relevant features for each input in training, but recasts the MLX problem itself essentially into a robustness problem. The authors achieve SOTA performance when combining their method with previous ones on several benchmarks. Strengths: The authors show that the need for strong parameter smoothing of earlier approaches can be overcome, and they achieve SOTA performance on several benchmarks. Their method is intuitive and easy to understand. Weaknesses: Obtaining human-specified masks is at best a lot of effort and in many cases simply not available, which limits the scope of problems for which their method can be applied. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Very minor: What is the bolding criteria for Table 1? Usually, the best model is in bold, but here there are 1, 2, or 3 models bolded depending on the dataset, and it's not clear what determined the number of bolded models for each. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It would be good to expand the "Limitations" section and provide more detail on, e.g., when the scaling breaks to large NNs, i.e. provide some guidance here about when one should expect the robustness methods to no longer be feasible to use. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and time. We are glad that the reviewer found our method intuitive and our experiments convincing. > Obtaining human-specified masks is at best a lot of effort … We agree that manually specifying explanation masks can be impractical. However, the procedure can be automated if the nuisance/irrelevant feature occurs systematically or if it is easy to recognize, which may then be obtained automatically using a procedure similar to [1,2]. A recent effort called Salient-Imagenet used neuron activation maps to scale curation of such human-specified masks to Imagenet-scale [3, 4]. These efforts may be seen as a proof-of-concept for obtaining richer annotations beyond content labels, and towards better defined tasks. > Very minor: What is the bolding criteria for Table 1? Bold numbers in Table 1 are the ones within statistical significance bounds of the best number. > provide more detail … when the scaling breaks to large NNs… We discussed this limitation pertaining to IBP-Ex, which works by propagating axis aligned input intervals through the model. Despite its computational efficiency, IBP is known to suffer from scaling issues when the model is too big. Consequently, it is better to use IBP-Ex only when the model is small (<4 layers of CNN/feed-forward) and if computational efficiency is desired. Thanks for pointing it out, we will add this detail in the improved version of the paper. We also wish to emphasise that we do not anticipate any scaling issues when using PGD-Ex. Irrespective of the scale of the network or the size of the explanation region, we expect PGD-Ex+Grad-Reg to be at least as effective as Grad-Reg or ERM. References. [1] Liu, Evan Z., et al. "Just train twice: Improving group robustness without training group information." International Conference on Machine Learning. PMLR, 2021. [2] Rieger, Laura, et al. 
"Interpretations are useful: penalizing explanations to align neural networks with prior knowledge." International conference on machine learning. PMLR, 2020. [3] Singla, Sahil, and Soheil Feizi. "Salient ImageNet: How to discover spurious features in Deep Learning?." arXiv preprint arXiv:2110.04301 (2021). [4] Singla, Sahil, Mazda Moayeri, and Soheil Feizi. "Core risk minimization using salient imagenet." arXiv preprint arXiv:2203.15566 (2022). --- Rebuttal Comment 1.1: Comment: Thanks. Please consider adding most of this discussion to the paper. --- Reply to Comment 1.1.1: Comment: We are pleased to know that Reviewer k7a7 found our response satisfactory. We are grateful for your time and support.
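The interval bound propagation (IBP) that IBP-Ex relies on, as described in the rebuttal above, propagates axis-aligned input intervals through the network layer by layer. A generic sketch for a single affine layer (not the authors' code) is:

```python
import numpy as np

def ibp_linear(W, b, l, u):
    """Interval bound propagation through one affine layer: given elementwise
    input bounds l <= x <= u, return sound elementwise bounds on W @ x + b."""
    c = (l + u) / 2.0        # interval center
    r = (u - l) / 2.0        # interval radius (non-negative)
    out_c = W @ c + b
    out_r = np.abs(W) @ r    # worst-case growth of the radius
    return out_c - out_r, out_c + out_r
```

Because each layer's radius is multiplied by |W|, the bounds loosen quickly with depth, which is consistent with the scaling limitation for deeper networks discussed above.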
Summary: The paper is in the domain of MLX (machine learning from explanations). In this approach, human-annotated data for each input example is available, denoting features that are _relevant_ and which are _irrelevant_. It is desired that the model doesn't learn from irrelevant features. In this paper, the authors utilize robustness for this domain. To elaborate, they expect the model to be robust to perturbations along the features which are considered irrelevant. According to the authors, this is the first use of this technique in the domain of MLX. They also combine it with existing regularization-based approaches. A theoretical framework is provided and the approach is evaluated on three datasets -- Decoy-MNIST, Plant, ISIC -- which are designed to capture the extent to which the models are learning from the irrelevant features. The combination of robustness-based approaches with regularization-based approaches is shown to outperform prior approaches. Strengths: The use of robustness for MLX is well-motivated, and as per the paper, novel. The empirical results appear to back the utility of combining robustness with existing techniques. The paper is well-written. Weaknesses: The prior works (such as Sagawa et al., 2019; Piratla et al., 2021) have several more evaluation datasets, which the approach is not evaluated on. The generality and robustness (pardon the pun) of the approach is not completely clear. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Questions - Please address the choice of limiting the evaluation to the 3 datasets used. Minor suggestion (no response needed) - L63: State the domain of m^(n) - Consider placing figures closer to their first reference. For example, Figure 2 is referenced on Pg 2 but appears on Pg 6. - L70: "while not exploiting" -- please provide the definition for this at this stage (addendum: if there is one at all) Confidence: 1: Your assessment is an educated guess. 
The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and comments. > The prior works (such as Sagawa et al., 2019; Piratla et al., 2021) have several more evaluation datasets, which the approach is not evaluated on. … Please address the choice of limiting the evaluation to the 3 datasets used. The generality and robustness … not completely clear. Our problem setting is such that we require an input mask per training instance highlighting irrelevant features. Standard sub-population shift datasets such as the ones used in Sagawa et al., 2019 and Piratla et al., 2021 do not contain any input mask, which is why we cannot evaluate on them. We elaborated on the differences between our setting and the sub-population shift problem in L335-346 of Section 6. Standard datasets with input masks highlighting irrelevant regions are somewhat hard to find; to the best of our knowledge, the three included datasets were the only standard datasets that have been used in the past. Nevertheless, we evaluated using a recent dataset that includes relevance masks, called Salient-Imagenet. Results on Salient-Imagenet can be found in the shared response, and are in agreement with the other results in the paper. We hope that the results on Salient-Imagenet and the other results presented in the shared response are convincing of the generality of our proposal. Thanks a lot for your suggestions on presentation; we will make sure to incorporate them in the final version. --- Rebuttal Comment 1.1: Comment: Thanks a lot for your response, and the additional evaluation and insight. > Standard datasets with input masks highlighting irrelevant regions are somewhat hard to find If you have some thoughts on why this might be the case, please do share. Might it be the case that collecting such data is not easy / costly? --- Reply to Comment 1.1.1: Comment: Thanks for your response. > If you have some thoughts on why this _(standard datasets are hard to find)_ might be the case, please do share. 
Might it be the case that collecting such data is not easy / costly? Yes, standard datasets are hard to find partly because their curation is difficult with a conventional annotation pipeline. However, with increasing interest in learning from explanations as a method for training reliable models [3, 4, 6, 7, 8], we are witnessing a growing number of relevant datasets and techniques for efficient data curation. We answered a question about advances in the curation of relevant datasets more elaborately in our response to Reviewer k7a7. Their question and our response are relevant here, so we paste them below for easy reference. ---- Reviewer k7a7: > Obtaining human-specified masks is at best a lot of effort … Our response: We agree that manually specifying explanation masks can be impractical. However, the procedure can be automated if the nuisance/irrelevant feature occurs systematically or if it is easy to recognize, in which case masks may be obtained automatically using a procedure similar to [1, 2]. A recent effort called Salient-Imagenet used neuron activation maps to scale the curation of such human-specified masks to Imagenet scale [3, 4]. These efforts may be seen as a proof of concept for obtaining richer annotations beyond content labels, and towards better-defined tasks. ----- We hope this answers your question. We will make sure to include these details in the main paper. We wish to also highlight that we evaluated a new dataset (Salient-Imagenet, shared in the general response), which was inspired by your concern about standard datasets. Thanks again for your time and comments. References. [1] Liu, Evan Z., et al. "Just train twice: Improving group robustness without training group information." International Conference on Machine Learning. PMLR, 2021. [2] Rieger, Laura, et al. "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge." International conference on machine learning. PMLR, 2020. 
[3] Singla, Sahil, and Soheil Feizi. "Salient ImageNet: How to discover spurious features in Deep Learning?." arXiv preprint arXiv:2110.04301 (2021). [4] Singla, Sahil, Mazda Moayeri, and Soheil Feizi. "Core risk minimization using salient imagenet." arXiv preprint arXiv:2203.15566 (2022). [5] Ross, Andrew Slavin, Michael C. Hughes, and Finale Doshi-Velez. "Right for the right reasons: Training differentiable models by constraining their explanations." arXiv preprint arXiv:1703.03717 (2017). [6] Pukdee, Rattana, et al. "Learning with Explanation Constraints." arXiv preprint arXiv:2303.14496 (2023). [7] Miao, Kevin, et al. "Prior Knowledge-Guided Attention in Self-Supervised Vision Transformers." arXiv preprint arXiv:2209.03745 (2022). [8] Yang, Ziyan, et al. "Improving Visual Grounding by Encouraging Consistent Gradient-based Explanations." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. --- Rebuttal 2: Comment: Dear Reviewer haST, thanks a lot for your valuable time and comments. Since the discussion period is soon coming to an end, I wanted to ask if your concerns regarding limited evaluation have been adequately addressed by the authors or if there are any remaining weaknesses? All the best, Your AC
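Since worst-group (Wg) accuracy is the headline metric throughout the results tables in this discussion, here is a minimal sketch of how that metric is conventionally computed (generic, not tied to the authors' code):

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Worst-group accuracy: the minimum per-group accuracy. A model that
    exploits spurious features can score well on average accuracy while
    failing on the minority group, which this metric exposes."""
    return min(
        float(np.mean(preds[groups == g] == labels[groups == g]))
        for g in np.unique(groups)
    )
```

This also explains why the metric cannot be reported for Salient-Imagenet below: it requires an explicit group assignment per example.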
Rebuttal 1: Rebuttal: We thank the reviewers for their detailed assessment and queries. We are glad that all the reviewers found our paper well presented and well motivated. We are also happy about the overall positive assessment of our work. Reviewers Hnkb, WA6d, and haST raised some concerns regarding the overall generality of our work, and raised issues regarding the applicability of our proposal to alternate explanation methods (Hnkb, WA6d) or datasets (haST). We address these concerns by (a) evaluating two new explanation methods (and their combination with a robustness method: PGD-Ex) on the Decoy-MNIST dataset, and (b) presenting results on a new dataset called Salient-Imagenet. We will evaluate more extensively and include these results in the final version of the paper. In all, our response included results from two new explanation methods (Integrated-Gradient and attention-map based regularization), two new architectures (Vision Transformers, ResNet-18), and one new dataset (Salient-Imagenet). As our results below demonstrate, irrespective of the explanation method, architecture, or dataset, our claims remained valid: (a) using perturbations (i.e., robustness-based methods) for supervising explanations is better than using regularization-based methods, and (b) combining robustness with regularization-based methods is at least as good as or better than using robustness-based methods alone. We tried our best to address all the concerns, and are more than happy to engage further to resolve any remaining concerns. ## Generality to new explanation methods ### Integrated-Gradient and CDEP We introduced evaluation using Integrated-Gradient [1] based regularization and also added an evaluation with PGD-Ex+CDEP that we did not originally include on the Decoy-MNIST dataset.

| Alg. | Avg Acc | Wg Acc |
|------------------------|------------------|------------------|
| Integrated-Grad | 26.7 $\pm$ 1.3 | 17.6 $\pm$ 1.2 |
| CDEP | 14.5 $\pm$ 1.8 | 10.0 $\pm$ 0.7 |
| PGD-Ex | 67.6 $\pm$ 1.6 | 51.4 $\pm$ 0.3 |
| PGD-Ex+Integrated-Grad | 80.5 $\pm$ 2.1 | **62.1 $\pm$ 6.8** |
| PGD-Ex+CDEP | **84.8 $\pm$ 0.8** | **64.2 $\pm$ 1.6** |

### Attention-map based local explanations
Using a Vision Transformer architecture (of depth 3 and width 128), we evaluated regularization using local explanations obtained from an attention map; regularization based on attention maps was used to supervise prior knowledge in [2] and is called SPAN. We obtained saliency explanations on inputs using the procedure proposed in [2].

| Decoy-MNIST | Avg Acc | Wg Acc |
|-------------|------------------|------------------|
| ERM | 10.0 $\pm$ 0.3 | 8.1 $\pm$ 0.3 |
| SPAN | 19.0 $\pm$ 0.3 | 8.1 $\pm$ 0.3 |
| PGD-Ex | **64.6 $\pm$ 4.7** | **37.4 $\pm$ 3.5** |
| PGD-Ex+SPAN | **63.1 $\pm$ 2.6** | **39.4 $\pm$ 2.9** |

## Generality to a new dataset/architecture
As further evidence of the generalizability of our proposal, we also evaluated on a subset of Salient-Imagenet [3, 4] using a ResNet-18 pretrained on ImageNet. Our training dataset included six classes with around 600 examples. For each example, the dataset also included a human-approved input mask highlighting spurious features. The results are as follows.

| | Original accuracy | Accuracy under noise | RCS |
|-------------|------------------|------------------|------|
| ERM | 96.43 | 87.50 | 47.88 |
| Grad-Reg | 89.29 | 82.14 | 52.54 |
| PGD-Ex | 93.75 | 90.18 | 58.69 |
| PGD-Ex+Grad-Reg | 94.64 | 93.75 | **65.02** |

Since the dataset did not include any natural example grouping, we could not use the worst-group accuracy metric. Instead we report the relative core spurious (RCS) metric proposed in [4]. RCS measures the relative stability of the model to noise in the core vs. spurious regions. 
High RCS, therefore, means low dependence on spurious features. Also shown in the table are the original test accuracy and the accuracy when normal noise is added to spurious regions. References. [1] Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. "Axiomatic attribution for deep networks." International conference on machine learning. PMLR, 2017. [2] Miao, Kevin, et al. "Prior Knowledge-Guided Attention in Self-Supervised Vision Transformers." arXiv preprint arXiv:2209.03745 (2022). [3] Singla, Sahil, and Soheil Feizi. "Salient ImageNet: How to discover spurious features in Deep Learning?." arXiv preprint arXiv:2110.04301 (2021). [4] Singla, Sahil, Mazda Moayeri, and Soheil Feizi. "Core risk minimization using salient imagenet." arXiv preprint arXiv:2203.15566 (2022). [5] Shao, X., Skryagin, A., Stammer, W., Schramowski, P., and Kersting, K. Right for better reasons: Training differentiable models by constraining their influence functions. [6] Stammer, W., Schramowski, P., and Kersting, K. Right for the right concept: Revising neuro-symbolic concepts by interacting with their explanations. [7] Schramowski, P., Stammer, W., Teso, S., Brugger, A., Herbert, F., Shao, X., Luigs, H.-G., Mahlein, A.-K., and Kersting, K. Making deep neural networks right for the right scientific reasons by interacting with their explanations. [8] Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. "Axiomatic attribution for deep networks." International conference on machine learning. PMLR, 2017.
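For reference, the Integrated-Gradient attribution [1] evaluated above can be approximated with a simple Riemann sum along the straight-line path from a baseline to the input; `f_grad` here is an assumed gradient oracle for the model output, not part of the authors' code:

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=50):
    """Midpoint Riemann approximation of integrated gradients:
    (x - baseline) * average gradient along the straight-line path."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([f_grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)
```

For a linear model the attribution is exact and satisfies completeness: the attributions sum to the difference in model output between the input and the baseline.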
NeurIPS_2023_submissions_huggingface
2023
NVFi: Neural Velocity Fields for 3D Physics Learning from Dynamic Videos
Accept (poster)
Summary: This paper presents an algorithm for the realization of novel view synthesis in dynamic scenarios, leveraging multi-view video data. In order to tackle this task, the paper introduces two key components: a keyframe dynamic field and an interframe velocity field. These fields serve the purpose of accurately representing the motion, geometry, and color information inherent in the recorded scenario. By incorporating these fields into the algorithm, the proposed approach aims to achieve effective synthesis of new views in dynamic scenarios. Strengths: This paper introduces two novel 3D datasets for dynamic object scenarios and dynamic indoor scenarios. Given the existing limitations in available datasets for view synthesis in dynamic scenarios, the introduction of these new datasets is expected to significantly enhance the progress and development of algorithms in this field. Additionally, while this paper presents innovative contributions, it also incorporates comprehensive experiments to support its claims. Furthermore, the paper is well-written and maintains a coherent structure, making it easily understandable and accessible to readers. Weaknesses: The primary concept explored in this paper involves the acquisition of a key-frame representation and the establishment of connections between key frames and intra-frames through the utilization of intraframe velocity. However, it is crucial to acknowledge that the notion of learning a canonical field and a deformation field has been previously introduced and extensively discussed in several notable publications. Notably, papers such as "D-NeRF: Neural Radiance Fields for Dynamic Scenes" (with over 500 citations) and "Nerfies: Deformable Neural Radiance Fields" (also with over 500 citations) have extensively addressed and explored this concept. 
The concept of learning a deformation field has also been extensively studied and advanced in previous research, as evidenced by the paper "Dynamic View Synthesis from Dynamic Monocular Video", which has accumulated over 100 citations. Moreover, while this paper introduces two novel datasets, it is important to note that there are minimal distinctions between the proposed datasets and preexisting ones such as the Nvidia dataset, the Nerfies dataset, and the iPhone dataset. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: ### Performance-related questions: 1. When compared to previous algorithms, such as D-NeRF, which rely on a canonical field, this paper employs HexPlane as a backbone. Is the improvement observed in this algorithm attributed to the utilization of HexPlane? 2. In comparison to HexPlane, which employs a single representation to learn features for the entire scenario, this paper learns multiple key-frame representations. Does the enhancement in performance stem from the usage of multiple representations instead of just one? ### Innovation-related questions: 1. Instead of directly learning the interframe velocity field, the proposed algorithm initially learns an acceleration field. Why was this approach chosen, and what evidence supports this decision? 2. Instead of learning the precise positions of key-frames, this paper employs uniform sampling for key-frame selection. Does this decision align with the inherent characteristics of dynamic scenarios, considering that motion within such scenarios may not exhibit uniform speeds? ### Dataset-related question: 1. What are the advantages of the proposed datasets in comparison to previous datasets used in similar studies? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The primary limitation of this paper revolves around the level of innovation exhibited by the proposed algorithm and the similarity of the proposed datasets to their predecessors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: ... D-NeRF ... Nerfies ... explored this concept.** **A:** We agree with the reviewer that both prior works are pioneering in this field. Nevertheless, we observe that learning only the deformation does not truly capture the physical motions. That means such methods are good at interpolation but fail at extrapolation. Therefore, we focus on learning a velocity field. Compared to the deformation field, the velocity field has several advantages. The first is cycle consistency. A deformation field is always unidirectional, either backward or forward. If we want both deformation fields to be cycle consistent, we need to add another loss as a regularizer. However, we can simply integrate forward or backward on one single velocity field to get a consistent position. Secondly, the motion modeled by the velocity field can be accumulated and is continuous. This means we can easily model the difference between any two timestamps. With these motivations, we focus on designing an effective framework to estimate the underlying velocity field instead of the deformation field. To show the effectiveness, we introduce D-NeRF and TiNeuVox (another SOTA canonical-space and deformation model, which outperforms Nerfies in many cases) as our baselines. Extensive experiments demonstrate the superiority of our learned velocity fields over deformation fields. We will include these explanations in the next version. **Q2: ... "Dynamic View Synthesis from Dynamic Monocular Video" ...** **A:** Thanks for suggesting this paper. It is a successful method that combines a dense-frame model and scene flows. However, the scene flow it uses can only learn motions for one step/frame. It is not a continuous motion, and there is no way to regularize future (unseen) flows. The reason we do not include this method as our baseline is that it requires depth as input. It relies on overly strong priors and is not general enough. **Q3: Moreover ... 
iPhone dataset.** **Q4: What are the advantages ... in similar studies?** **A:** The primary goal of our method is to model dynamic 3D scenes for future frame extrapolation, by means of learning underlying physical velocities (*i.e.*, meaningful motions). Because most dynamic scenes in existing datasets are usually chaotic and lack predictable physical movements, we propose two synthetic datasets with diverse and predictable motion patterns in the main paper. Thanks for suggesting the new real-world NVIDIA Dynamic Scene dataset [1]. Table 1 in the Author Rebuttal shows the superiority of our method for future frame extrapolation. **Q5: When compared to ... utilization of Hexplane?** **A:** This is a great point, and our extensive experiments respond to this question in the following two aspects. Firstly, we include a HexPlane-based model (HexPlane\_PINN) as our baseline, whose performance is worse than ours, so our effectiveness is not just due to a new backbone. Secondly, for those scenes successfully reconstructed by D-NeRF, we can notice a huge gap between interpolation and extrapolation in Appendix Table 6. As shown in Appendix Figures 6-10, D-NeRF also predicts wrong motions in the extrapolation task. Based on this, the main reason why D-NeRF fails is its misunderstanding of motions. **Q6: In comparison to HexPlane ... instead of just one?** **A:** A HexPlane model includes feature tensors in dimensions XY, XZ, YZ, XT, YT, ZT, and all of these feature tensors form the single representation. In our method, we take the same representation as HexPlane. The main difference is that the T dimension in HexPlane is always half of the total frames, and for interframes, they use bilinear interpolation to get the values, while the T dimension in VeRF is the number of keyframes, which is much fewer than in HexPlane, and we use the velocity field to transport interframes to keyframes. In general, both our model and HexPlane employ a single representation. 
**Q7: ... an acceleration field ... decision?** **A:** For clarity, we do not initially learn an acceleration field. We learn it only through our physics losses, simultaneously with learning the velocity field. Here we briefly discuss why this is needed. Usually, the hidden velocity is not constant. For example, in a falling ball scene, there is gravitational acceleration, and in scenes with rotation, there is centripetal acceleration. In order to learn a reasonable velocity field that follows physics rules, we must estimate an accompanying acceleration field to guide our extrapolation. Note that we do not add any priors or supervision to these accelerations. The only gradients passing through these accelerations are from the momentum conservation PINN loss. More details about the PINN loss are in Appendix A.4. We will clarify this point in the next version. **Q8: Instead of ... uniform speeds?** **A:** There are two reasons why we use uniformly distributed keyframes. Firstly, uniform keyframes can also handle non-uniform motions. For example, in our datasets, the falling ball scene has gravitational acceleration, and scenes such as the telescope require centripetal acceleration for their rotations. Neither of them is uniform motion, and either the direction or the magnitude of the velocity keeps changing over time. As shown in the experiments, our model works well in these cases. Secondly, uniform keyframes have consistent time intervals between two keyframes. Thus we can easily control the integration step (which influences GPU memory and model performance) and the integration stability (which influences model performance). However, if the time intervals between keyframes are different, in order to control the GPU memory for a longer integration, we need to sacrifice the integration stability. We evaluate our model with random keyframe placements on our Dynamic Scene dataset. Not surprisingly, Table 6 in the Author Rebuttal shows a decrease in performance. 
We will add these results in the next version. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: Thank you very much for your detailed responses and valuable contributions, which undoubtedly contribute to the advancement of our community's understanding. I am truly grateful for your clarification regarding the representation employed in your research. However, I do have a couple of concerns that I would like to address. Firstly, I was wondering if your method has been evaluated on a more extensive and realistic dataset, such as the ones utilized in Nerfies and the original HexPlane paper, like the Plenoptic Video dataset. As the NVIDIA dataset you proposed appears to comprise only a limited number of frames for each scenario, I believe that testing on a broader range of real-world data could provide more comprehensive insights. Secondly, I would like to kindly suggest that you consider reviewing the paper titled "Temporal-MPI: Enabling Multi-Plane Images for Dynamic Scene Modelling via Temporal Basis Learning," which also explores the concept of interpolation. It might be interesting to examine and potentially compare your approach with the ideas presented in that work. Once again, thank you for your time and efforts, and I look forward to any further insights you might provide on these matters. --- Reply to Comment 1.1.1: Comment: **Q1: Firstly, I was wondering if your method has been evaluated on a more extensive and realistic dataset, such as the ones utilized in Nerfies and the original HexPlane paper, like the Plenoptic Video dataset. As the NVIDIA dataset you proposed appears to comprise only a limited number of frames for each scenario, I believe that testing on a broader range of real-world data could provide more comprehensive insights.** **A:** Thank you for suggesting the Nerfies dataset and the HexPlane dataset (i.e., the DyNeRF dataset). The Nerfies dataset mainly focuses on human face reconstruction from selfies. 
The primary challenge in this dataset is to address the slight movements between selfies, where the corresponding pixels in different images do not intersect at the same spatial locations. Since the dataset lacks a 'time' dimension, it does not provide the necessary temporal information for our model to estimate a velocity field and make extrapolations. For the HexPlane Plenoptic Video dataset, we have already evaluated our method on it, which we refer to as the DyNeRF dataset in our previous rebuttal. Please see Figure 3 in the attached PDF for our evaluation results. Unfortunately, this dataset appears unsuitable for our model to perform future extrapolation due to the presence of scenes involving random-like motions, such as a man pouring coffee or flaming a steak. In these scenes, the hand and the coffeemaker/flame exhibit chaotic dynamics, making it nearly impossible to learn meaningful physical velocities. As a result, our method merely predicts static frames for future extrapolation. With regard to your inquiry about why only a subset of frames from the NVIDIA dataset is evaluated, we would like to clarify that we aim to present a fair comparison of our model's extrapolation capability. The omitted frames in the NVIDIA dataset largely contain random-like and unpredictable motions, which are not suitable for our evaluation. Examples of these random-like motions include the "yeah pose" and the randomly waving hands of the skater, as well as the sudden disappearance of other cars and the appearance of humans in the truck scenes. Since these motions are hardly predictable or extrapolatable and fall outside the scope of this paper, they are naturally excluded for fair and meaningful comparisons. Overall, we appreciate the reviewer's interest in dynamic videos with random-like motions. 
Nevertheless, the primary goal of our method is to model 3D scenes with physically meaningful dynamics and motions, which are particularly common in many robotic applications, such as catching a flying ball and avoiding a moving obstacle. **Q2: Secondly, I would like to kindly suggest that you consider reviewing the paper titled "Temporal-MPI: Enabling Multi-Plane Images for Dynamic Scene Modelling via Temporal Basis Learning," which also explores the concept of interpolation. It might be interesting to examine and potentially compare your approach with the ideas presented in that work.** **A:** Thank you for recommending the related paper Temporal-MPI. Upon carefully examining it, we find that it introduces a method for learning temporal codes for all observed timestamps. Similar to other baselines, it falls short in its ability to extrapolate to unseen timestamps, as it lacks a mechanism to predict novel temporal codes. In this regard, it is not suitable for the extrapolation task we tackle in this paper. To respect the prior art, we will include and discuss Temporal-MPI in the related work section in the next version, specifically under the subheading of dynamic 3D representations.
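As a side note on the cycle-consistency advantage argued earlier in this thread (integrating one velocity field forward and then backward should return to the starting position, without the extra regularizer that paired deformation fields need): below is a minimal, hypothetical sketch of that idea with a hand-written rotational velocity field and explicit Euler integration. It is illustrative only and unrelated to the authors' actual implementation.

```python
import math

def velocity(p):
    # Hypothetical rigid-rotation velocity field: v(x, y) = (-y, x)
    x, y = p
    return (-y, x)

def integrate(p, dt, steps, sign=1.0):
    # Explicit Euler integration of dp/dt = sign * v(p);
    # sign=+1 transports forward in time, sign=-1 backward.
    x, y = p
    for _ in range(steps):
        vx, vy = velocity((x, y))
        x += sign * dt * vx
        y += sign * dt * vy
    return (x, y)

start = (1.0, 0.0)
fwd = integrate(start, dt=1e-3, steps=1000)            # forward transport over 1 time unit
back = integrate(fwd, dt=1e-3, steps=1000, sign=-1.0)  # backward transport on the SAME field
err = math.hypot(back[0] - start[0], back[1] - start[1])
```

Up to the Euler integration error, `back` returns to `start` with no cycle-consistency loss, whereas two independently learned forward/backward deformation fields would need such a regularizer to agree.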
Summary: This paper proposes a new representation of dynamic scenes, using keyframe NeRF + Velocity Field. This representation disentangles appearance & geometry from velocity, which allows many exciting applications like future frame extrapolation, unsupervised semantic 3D scene decomposition, and dynamic motion transfer. Obtaining such a representation from videos requires a carefully designed system and physics-constrained losses. To validate this model, this paper collects two new synthetic datasets and shows impressive results on them. Strengths: 1. Learning a velocity field from videos is an exciting direction and could be impactful for future research, especially for physics-aware 3D/4D tasks. This paper makes an impressive attempt in this direction. 2. The method is well-motivated and intuitively reasonable. The whole idea is easy to follow and sound. 3. The results are impressive, and the applications of the velocity fields are exciting; the 3D semantic field segmentation results in particular clearly show different parts of objects. Weaknesses: 1. Lacking limitation discussions. The current paper clearly has several limitations, and they are expected to be discussed in detail in the paper. (1) As briefly mentioned in the broader impact, this paper doesn't have real-world data to validate whether it could work on real scenes. (2) This method seems to require videos with many views at the same timestamp as inputs. I am curious how this method works with monocular videos/sparse views. (3) How does the proposed method deal with abrupt motions and topology changes? In summary, the paper should discuss the potential limitations in detail. 2. Some presentations could be improved. It would be nice to give a brief explanation of the losses and the PINN used in this paper. And the Algorithm 1 table could be more precise. The current paper seems to omit too many technical details, and the meaning of some losses is confusing. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: I am generally positive about this paper, but I think the limitations of this paper (and the questions in that bullet) should be discussed in detail. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The current version doesn't give a convincing discussion about limitations and potential broader impact. See the weaknesses for detailed comments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Lacking limitation discussions. The current paper clearly has several limitations and they are expected to be discussed in detail in the paper. (1) As briefly mentioned in the broad impact, this paper doesn't have real-world data to validate whether it could work on real scenes.** **A:** As also suggested by other reviewers, we conduct additional experiments on the real-world NVIDIA Dynamic Scene dataset. It captures real-world dynamic scenes by a static camera rig with 12 cameras. For each scene, we clip 60 frames with reasonable and predictable motions. We reserve the first 46 frames at randomly picked 11 cameras as the training split, *i.e.*, 506 frames, while leaving the 46 frames at the remaining 1 camera for testing interpolation ability, *i.e.*, 46 frames for novel view synthesis within the training time period, and keeping the last 14 frames at all 12 cameras for evaluating future frame extrapolation, *i.e.*, 168 frames. As shown in the Table 1 in Author Rebuttal, our method achieves significantly better results in the challenging task of future frame extrapolation. Figure 1 shows the qualitative results in the appended PDF. Due to the time limit, we can only provide scores on two scenes. More experiments are still running and we will add complete results in the next version. **Q2: (2) This method seems to require videos with many views at the same timestamp as inputs. I am curious how this method works with monocular videos/sparse views.** **A:** As requested, we additionally evaluate our method on the truck scene from NVIDIA Dynamic Scenes in a monocular way. In particular, for every timestamp, only one camera from the 12 cameras is used. Due to the depth ambiguity of the monocular video, it is very hard to disentangle the foreground and background of the scene. So the velocity field will influence the background scene as illustrated in Figure 2 in the appended PDF. 
We leave this challenging monocular setting for future exploration. As suggested by reviewer c2wX, we also conduct an ablation study on the number of cameras on our own Dynamic Objects dataset. Table 3 in the Author Rebuttal shows the quantitative results. As expected, given fewer training camera views, the performance of our method drops sharply, primarily because the extremely sparse views are unlikely to capture sufficient visual information for physical motion learning. **Q3: (3) How does the proposed method deal with abrupt motions and topology changes?** **A:** This is a good question. We evaluate our model on two scenes from the DyNeRF dataset. Since the abrupt motions of the arm and the flame/coffeemaker in hands are actually not predictable, we clearly fail on the extrapolation task, even though we can get promising results in interpolation. Figure 3 of the appended PDF shows the qualitative results. Above all, our model aims to learn predictable physical dynamics and is not able to predict abrupt motions. We will discuss this limitation in the next version. **Q4: Some presentations could be improved. It would be nice to give a brief explanation about losses and PINN used in this paper.** **A:** We thank the reviewer for this suggestion. To explain the losses better, we first need to clarify that the whole dynamics is regarded as a transport problem, as shown in Appendix A.4. In particular, the density and appearance features of objects are transported by a velocity field, and the velocity field is transported by itself according to some unobserved hidden forces. So the losses can be divided into two types. The first type is the RGB rendering loss for both keyframes and interframes, which is the MSE between the rendered pixel colors and the ground-truth pixel colors. 
The second type is the PINN PDE losses, where the divergence-free loss is used to constrain the mass conservation of objects in the scene, and the momentum conservation loss is used to learn the hidden forces and ensure that the extrapolation follows a reasonable pattern. The PINN losses are implemented as follows: 1) we uniformly sample points in the time and space dimensions; 2) we use torch.autograd to evaluate the Jacobian of the velocity w.r.t. the input position and time; 3) we put these terms together to calculate the LHS of Equation 7 and make it an L2 loss. Moreover, since the acceleration (hidden forces) is not observable, it is entirely estimated by the PINN losses. We will add these explanations in the next version. [1] J. S. Yoon et al., Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera. CVPR, 2020. --- Rebuttal Comment 1.1: Title: Reply to Rebuttal Comment: Thanks for the explanation. After reading all reviews and rebuttals, I think the rebuttal addresses my concerns and I am raising my final rating to acceptance. --- Reply to Comment 1.1.1: Comment: We highly appreciate the reviewer's time in reviewing our rebuttal materials and providing very positive feedback. We also thank you for your initial comments, which clearly improved our manuscript.
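To make the divergence-free constraint described in this rebuttal concrete: a minimal, self-contained sketch of a mass-conservation (zero-divergence) penalty on a 2D velocity field, using central finite differences in place of torch.autograd so it runs without any dependencies. The function names and sample points are hypothetical; the authors' actual loss is built from the exact Jacobian via autograd and the PDE residual of Equation 7.

```python
def div_free_loss(v, points, eps=1e-4):
    # Mean squared divergence of a 2D velocity field v(x, y) -> (vx, vy),
    # estimated with central finite differences at the sampled points.
    total = 0.0
    for x, y in points:
        dvx_dx = (v(x + eps, y)[0] - v(x - eps, y)[0]) / (2 * eps)
        dvy_dy = (v(x, y + eps)[1] - v(x, y - eps)[1]) / (2 * eps)
        total += (dvx_dx + dvy_dy) ** 2
    return total / len(points)

rotation = lambda x, y: (-y, x)  # divergence-free (rigid rotation)
source = lambda x, y: (x, y)     # pure source, divergence = 2 everywhere
pts = [(0.3, 0.1), (1.0, -0.5), (-0.2, 0.7)]
loss_rot = div_free_loss(rotation, pts)  # ~0: rotation conserves mass
loss_src = div_free_loss(source, pts)    # ~4: penalized as mass creation
```

In the autograd version, the same residual would be computed from the Jacobian of the velocity network at uniformly sampled space-time points and minimized as an L2 loss.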
Summary: This paper proposes a model for dynamic 3D scenes from multi-view videos. Unlike previous methods, this paper proposes a physical velocity field instead of a deformation field. The method uses a dynamic radiance field to model key frames, and uses a velocity field to warp the key frames to intermediate frames. For optimization, the photometric losses of rendered keyframes and interframes are optimized together, along with several PINN terms. Also, the authors show some applications of their model: future frame extrapolation, semantic 3D scene decomposition, and dynamic motion transfer. Two new synthetic datasets are introduced. Strengths: 1. Although the velocity field has been introduced to this area before (Neural Radiance Flow for 4D View Synthesis and Video Processing), this paper introduces a more complete solution, including the PINN terms and the warping mechanism. 2. The implementations of T-NeRF_PINN and HexPlane_PINN are not clear to me, but the future frame extrapolation of the proposed method is impressive. 3. Two new synthetic datasets are proposed. Weaknesses: 1. The method shows inconsistent performance on interpolation. 2. For a dense-frame optimization method, T-NeRF is not an appropriate method to compare with, since it is a very naïve solution which simply adds a time dimension to the MLP input. To prove the effectiveness of the proposed method, I think NSFF (Neural Scene Flow Fields) is a reasonable and meaningful method to compare with. NSFF also has warping constraints. 3. The proposed method is basically a canonical-based method which separates the whole video into multiple sections and assigns a canonical model to each section. From this aspect, it is not so fair to compare with TiNeuVox in this way. How would TiNeuVox perform if we also assigned the same number of canonical models to it? 4. The potential application of motion transfer is not clear to me. 5. No interpolation performance for the ablation study. 6. No experiment on a real dataset. 
The NHR dataset (Multi-view Neural Human Rendering) may be an appropriate dataset for this paper. 7. From my perspective, the authors spent too much space on applications. The performance of the interpolation is more important for me to identify the ability of the proposed method to model dynamic scenes. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: 1. For motion transfer, how would the learnt velocity field align with new objects? Putting original images of the Gnome in Figure 3 (c) may help understanding. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: The authors addressed the limitations, including no experiment on a real dataset. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The method shows inconsistent performance on interpolation.** **A:** Comprehensive and consistent results for the interpolation of our method are provided in the Appendix. We are not clear about the request of this comment, but we are always open to discussion with the reviewer. **Q2: ... T-NeRF is not an appropriate method to compare with, ... I think NSFF (Neural Scene Flow Fields) is a reasonable and meaningful method to compare.** **A:** Thanks for the valuable suggestion. We add NSFF as a new baseline on our Dynamic Indoor Scene dataset, as NSFF is not suitable for the white backgrounds in our Dynamic Object dataset. As shown in Table 4 in the Author Rebuttal, NSFF indeed obtains much better interpolation results than the naive T-NeRF, but it still fails to extrapolate future frames. Figure 4 in the appended PDF shows the qualitative results. We will add this new baseline to the main paper in the next version. **Q3: ... How would TiNeuVox perform if also assign the same number of canonical models to it?** **A:** This is a very good point to discuss. TiNeuVox is a deformation-field-based model. If we use several canonical spaces, each canonical space requires a distinct deformation field to deform the latter frames back to it. This means that using multiple canonical spaces is equivalent to slicing the dataset into several pieces, with a separate TiNeuVox trained on each piece. Then the extrapolation is only related to the final canonical space and deformation field. This strategy actually cannot significantly increase the ability of TiNeuVox. To verify this, we conduct additional experiments for TiNeuVox with multiple canonical spaces. From Table 5 in the Author Rebuttal, we can see that TiNeuVox with the same 16 canonical spaces still fails to obtain extrapolation results as satisfactory as ours, showing the superiority of our learning of physical velocity fields. 
Due to the time limit, more experiments are still running and we will include complete results in the next version. **Q4: The potential application of motion transfer is not clear to me.** **A:** One interesting application is character animation using motion transfer. There are some demos in Appendix Figure 12 and in the final part of the demo video. Another potential application is to edit the reconstructed scene. For example, we can replace the objects with other ones which are not in the original dynamic scene. Undoubtedly, there could be many other exciting use cases, and we hope that our method could unlock new opportunities. **Q5: No interpolation performance for ablation study.** **A:** The interpolation performance is in Appendix Table 5. Thanks for your reminder, and we will include this in the main paper in the next version. **Q6: No experiment on real dataset. NHR dataset (Multi-view Neural Human Rendering) may be an appropriate dataset for this paper.** **A:** Thanks for the suggestion. We find that the NHR dataset is a point cloud rendering dataset, which is not suitable for our method. Alternatively, we evaluate our method on the real-world NVIDIA Dynamic Scene dataset [1]. Table 1 in the Author Rebuttal shows the superiority of our method for future frame extrapolation. **Q7: From my aspect, the authors spent too much space on applications. The performance of the interpolation is more important for me to identify the ability of the proposed method to model the dynamic scenes.** **A:** We agree that interpolation is indeed an important task, which can be seen from the large number of existing research works in the past two years. In the meantime, we also strongly argue that the future frame extrapolation ability is essential for many intelligent machines, and related research is still in its infancy. Unfortunately, existing interpolation techniques fail to predict future frames, as shown in our experiments. 
In this regard, we hope that our proposed method could inspire more advanced works in the future. **Q8: For motion transfer, how would the learnt velocity field align with new objects? Putting original images of the Gnome in Figure 3 (c) may help understanding.** **A:** A naive motion transfer requires objects to have similar sizes. This is illustrated by Figure 6 in the attached PDF and the Motion Transfer section of the video demo. For a general motion transfer, more advanced techniques such as shape registration and alignment may be applied to deal with variable sizes of objects. **Q9: The implementations of T-NeRF$_{PINN}$ and HexPlane$_{PINN}$ are not clear to me, but the future frame extrapolation of the proposed method is impressive.** **A:** T-NeRF$_{PINN}$ is implemented as an original T-NeRF along with a velocity field (MLPs), which is the same as in our VeRF. HexPlane$_{PINN}$ is implemented as a HexPlane along with the same velocity field (MLPs). Different from VeRF, the two models are trained as PINNs. First of all, the RGB loss from the given supervision is used to train the density and color. In the PINN framework, it can also be regarded as boundary conditions. Then we regard the whole dynamics as a transport problem; more details are in Appendix A.4. We add the transport loss for density and colors, and the physics loss for the velocity field, as PINN PDE constraints. The main difference between these two models and VeRF is that, in VeRF, gradients can flow through the velocity field from the RGB loss, while in the $_{PINN}$ models the velocity field is only estimated by the PINN losses. We will add these explanations in the next version. [1] J. S. Yoon et al., Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera. CVPR, 2020. --- Rebuttal Comment 1.1: Title: Impressive results for extrapolation, but less competitive performance for interpolation Comment: Thanks for the hard work addressing my concerns. 
Now, the only concern that remains to prevent me from recommending the paper for acceptance is the relatively uncompetitive performance on interpolation. As the paper title suggests, this is a method for representing dynamic 3D scenes. I would like to see at least competitive performance in modeling the dynamic 3D scenes. Also, I notice that the LPIPS of the extrapolation is not always the best in Table 1 and Table 5, which does not match the huge difference in PSNR. What may be the reason? --- Reply to Comment 1.1.1: Comment: **Q: Thanks for the hard work addressing my concerns. Now, the only concern that remains to prevent me from recommending the paper for acceptance is the relatively uncompetitive performance on interpolation. As the paper title suggests, this is a method for representing dynamic 3D scenes. I would like to see at least competitive performance in modeling the dynamic 3D scenes.** **A:** We agree with the reviewer that interpolation is also important in modeling dynamic 3D scenes. As requested, we retrain our models from scratch with new settings on the Dynamic Indoor Scene dataset and the NVIDIA Dynamic Scene dataset. In particular, on our Dynamic Indoor Scene dataset, we just use 4 keyframes instead of 16, while keeping all other settings untouched. As shown in the following Table 7, our method with the new number of keyframes outperforms the strong baseline TiNeuVox in both interpolation and extrapolation. Basically, using fewer keyframes results in more interframes supervising each keyframe, enhancing scene reconstruction in cluttered environments; for instance, with 4 keyframes instead of 16, each keyframe is supervised by 4 times as many images, providing more comprehensive coverage of occluded areas and better interpolation from novel views (interframes). 
Additionally, fewer keyframes require longer integration of motion, which, in indoor scenes characterized by rigid body motions, may result in more stable motion and consequently improved geometry reconstruction. The ablation study in Author Rebuttal Table 2 also shows a similar trend. Nevertheless, searching for an optimal setting of keyframes needs more experiments. On the real-world NVIDIA Dynamic Scene dataset, which is much more challenging, we use 12M grids instead of 8M grids for the spatial resolution of the backbone HexPlane, 120K instead of 60K iterations for longer training, and 1024 instead of 2048 sampling rays for better CUDA memory management. As shown in the following Table 8, our method with the new settings achieves comparable performance with TiNeuVox in interpolation, along with significantly better results in extrapolation. Due to the time limit, only the Skating scene has been evaluated. Above all, we primarily focused on extrapolation and had not carefully searched training settings for the task of interpolation in our main paper. From the new experiments above, we can see that our method actually does not sacrifice the accuracy of interpolation. Instead, it can achieve superior performance in both interpolation and extrapolation given better choices of training hyper-parameters. In addition to adding the above new experimental results in our next version, we also consider changing the paper title to "Learning Velocity Fields for Dynamic 3D Scenes from Multiview Videos" if the reviewer agrees to it. We look forward to the reviewer's new feedback. **Table 7: Quantitative results of our method and TiNeuVox on the Dynamic Indoor Scene dataset. 
Ours$^\*$ represents the new setting $K=4$.**

| | | Interpolation | | | Extrapolation | |
|----------------------|------------|---------------|-----------|------------|---------------|-----------|
| **Models** | **PSNR** | **SSIM** | **LPIPS** | **PSNR** | **SSIM** | **LPIPS** |
| TiNeuVox | 29.981 | 0.864 | 0.213 | 21.029 | 0.770 | 0.281 |
| VeRF (Ours, K=16) | 28.000 | 0.862 | 0.226 | 26.235 | 0.839 | 0.237 |
| VeRF (Ours, K=4)$^\*$ | **30.675** | **0.877** | **0.211** | **29.745** | **0.876** | **0.204** |

**Table 8: Quantitative results of our method and TiNeuVox on the Skating scene from the NVIDIA Dynamic Scene dataset. Ours$^\*$ represents the new settings.**

| | | Interpolation | | | Extrapolation | |
|-----------------------------|------------|---------------|-----------|------------|---------------|-----------|
| **Models** | **PSNR** | **SSIM** | **LPIPS** | **PSNR** | **SSIM** | **LPIPS** |
| TiNeuVox | **29.377** | **0.889** | 0.202 | 24.224 | 0.878 | 0.220 |
| VeRF (Ours, 8M grids) | 26.999 | 0.848 | 0.227 | 28.654 | 0.896 | 0.208 |
| VeRF (Ours, 12M grids)$^\*$ | $\underline{29.064}$ | $\underline{0.888}$ | **0.193** | **29.026** | **0.898** | **0.193** |

--- Reply to Comment 1.1.2: Comment: **Q: Also, I notice that the LPIPS of the extrapolation is not always the best in Table 1 and Table 5, which does not match the huge difference of the PSNR. What may be the reason?** **A:** Since these numbers compare the extrapolation performance instead of interpolation, multiple factors may cause this issue. For example, in training, there are some areas that are not observable, which leads to some blank areas after extrapolation. The impact of these blank areas is amplified in the deep feature space of the perceptual models used by LPIPS. That is potentially why the LPIPS gap can be small even when the PSNR gap is large.
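A brief numeric aside on the PSNR/LPIPS mismatch discussed in this thread: PSNR is a pixel-wise log-MSE score, while LPIPS compares deep features, so the two need not rank methods identically. The sketch below (illustrative only, not the paper's evaluation code) shows how large-looking PSNR gaps map to multiplicative MSE gaps.

```python
import math

def psnr(mse, peak=1.0):
    # PSNR in dB for images with pixel values in [0, peak]
    return 10.0 * math.log10(peak ** 2 / mse)

# Halving the MSE gains only ~3 dB of PSNR, so a "huge" PSNR gap means a
# large multiplicative MSE gap, which a feature-space metric like LPIPS
# may weight very differently (e.g. for blank extrapolated regions).
gap = psnr(0.001) - psnr(0.002)  # about 3.01 dB
```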
Summary: The paper aims to model dynamic 3D scenes from multiview videos. The method simultaneously learns the geometry, appearance, and physical velocity of the 3D scenes from video frames. The authors show different applications such as future frame extrapolation, unsupervised semantic 3D scene decomposition, and dynamic motion transfer using the current framework. They further propose two dynamic 3D datasets with extensive experiments. Strengths: The paper uses three major components to tackle the dynamic scene. The first is a keyframe dynamic radiance field that computes the radiance field at different time instances, which serves as a good initial 3D estimator. The second is a velocity field used to interpolate the intermediate frames instead of recomputing a NeRF per frame, which is very expensive. Finally, a joint keyframe and velocity-based interframe optimization produces a smooth trajectory and reconstruction of the dynamic scene. Instead of just showcasing the advantage of the method, demonstrating the robustness of the algorithm on multiple downstream tasks like future frame extrapolation underlines the advantage of this method. The ablation study is good, and the introduced datasets are helpful for future research in dynamic scene understanding. Weaknesses: Many datasets have been proposed to tackle the problem of dynamic scene understanding, but the advantages or differences with respect to these datasets have not been well studied. Specifically, Line 187 states that all of the currently available datasets are not useful for this task. It would be interesting to see the effect of the proposed method on one of these datasets to understand the advantages and disadvantages of the proposed algorithm. The effect of choosing keyframes is specific to the dataset. Since the current dataset has only one moving object, maybe using 16 keyframes is sufficient. The ablation should show more experiments to justify the keyframe selection process. 
The effect of the number of cameras is not explored in the analysis. Since you are using dynamic scenes, the effect of fewer frames should be analyzed. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How does the method perform on other dynamic scene datasets? Even simple multi-view datasets like Panoptic Studio should be explored to see the effect of the algorithm. Are the camera poses already provided for the dataset? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: limitations have been discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Many datasets have been proposed to tackle the problem of dynamic scene understanding, but the advantages or differences with respect to these datasets have not been well studied.** **A:** The primary goal of our method is to model dynamic 3D scenes for future frame extrapolation, by means of learning underlying physical velocities (*i.e.*, meaningful motions) in the continuous 3D space. In contrast, existing dynamic 3D scene modeling techniques and the commonly used datasets in the literature are mainly designed for novel view rendering/interpolation within the training time period, rather than for extrapolation beyond the training time period. For example, the dynamic motion patterns in existing datasets such as the DyNeRF dataset [2] are usually chaotic and lack predictable physical movements. This research gap motivates us to propose two synthetic datasets with diverse and predictable motion patterns in the main paper. Nevertheless, as also suggested by other reviewers, we further select a number of meaningful 3D scenes from the real-world NVIDIA Dynamic Scene dataset [1]. Table 1 in the Author Rebuttal shows the superiority of our method for future frame extrapolation. **Q2: Specifically, Line 187 states that all of the datasets currently available are not useful for this task. It would be interesting to see the effect of the proposed method on one of these datasets to understand the advantages and the disadvantages of the proposed algorithm.** **A:** Thanks for the suggestion. As requested, we train our model on two typical scenes of the DyNeRF dataset [2]. In these scenes, a man is pouring coffee / flaming a steak, and the hand and coffeemaker / flame are undergoing random motions. As shown in Figure 3 in the appended PDF, not surprisingly, our method is not able to learn meaningful physical velocities, and it is impossible to produce correct extrapolations of these chaotic dynamics. 
Actually, our method simply predicts static frames for future extrapolation. **Q3: The effect of choosing keyframes is specific to the dataset. Since the current dataset has only one moving object, maybe using 16 keyframes is sufficient. The ablation should show more experiments to justify the keyframe selection process.** **A:** We highly appreciate this suggestion and conduct an ablation study on the number of keyframes on our Dynamic Indoor Scene dataset. As shown in Table 2 in the Author Rebuttal, we find that fewer keyframes tend to yield better performance, demonstrating that our keyframe-based optimization strategy is actually very flexible and effective. More experiments are still running and we will add these results in the next version. **Q4: The effect of the number of cameras is not explored in the analysis. Since you are using dynamic scenes, the effect of fewer frames should be analyzed.** **A:** Thank you for this advice; we conduct an additional ablation study on the number of training cameras on our Dynamic Objects dataset. Table 3 in the Author Rebuttal shows the quantitative results. As expected, given fewer training camera views, the performance of our method drops sharply, primarily because the extremely sparse views are unlikely to capture sufficient visual information for physical motion learning. **Q5: How does the method perform on other dynamic scene datasets? Even simple multi-view datasets like Panoptic Studio should be explored to see the effect of the algorithm.** **A:** As also suggested by other reviewers, we evaluate our method on another real-world dataset: the NVIDIA Dynamic Scene dataset [1]. Results are supplied in Q1 above. **Q6: Are the camera poses already provided for the dataset?** **A:** Yes, all camera poses are given in the datasets. Nevertheless, it would be very interesting to explore simultaneous camera pose estimation and scene modeling in the future. [1] J. S. Yoon et al., Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera. CVPR, 2020. [2] T. Li et al., Neural 3D video synthesis from multiview video. CVPR, 2022.
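As context for the keyframe ablation above: with fewer keyframes, the learned velocity field must be integrated over longer time spans between keyframes. Below is a minimal forward-Euler integration sketch, with a hypothetical constant velocity field standing in for the learned one (the function names and the field itself are our illustrative assumptions, not the paper's code):

```python
import numpy as np

def velocity(points: np.ndarray, t: float) -> np.ndarray:
    """Hypothetical stand-in for a learned field: rigid translation along x."""
    v = np.zeros_like(points)
    v[:, 0] = 1.0  # 1 unit per second along the x axis
    return v

def advect(points: np.ndarray, t0: float, t1: float, steps: int = 10) -> np.ndarray:
    """Move 3D points from time t0 to t1 by forward Euler integration."""
    dt = (t1 - t0) / steps
    p, t = points.copy(), t0
    for _ in range(steps):
        p = p + dt * velocity(p, t)
        t += dt
    return p

pts = np.zeros((4, 3))         # four points at the origin
moved = advect(pts, 0.0, 2.0)  # integrate across a 2-second keyframe gap
print(moved[:, 0])             # each point ends ~2.0 along x
```

A longer keyframe gap simply means more (or larger) integration steps, which is why the choice of keyframe count interacts with motion stability.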
Rebuttal 1: Rebuttal: Firstly, we would like to thank all five reviewers for their valuable comments. We genuinely appreciate the time and effort invested in reviewing our paper. Based on the comments provided, we have made significant improvements to our paper: - Additional experiments on public real-world datasets, including: a) two distinct scenes derived from NVIDIA Dynamic Scene dataset, b) one scene from the monocular version of NVIDIA Dynamic Scene dataset, c) two scenes from DyNeRF dataset. - Additional evaluation of a new baseline, Neural Scene Flow Fields (NSFF), on our Dynamic Indoor Scene dataset. - Additional ablation studies about: a) the number of keyframes used on our Dynamic Indoor Scene dataset, b) the number of cameras used in our Dynamic Object datasets. Due to the character limitations for responses in review rebuttal, we have cataloged all the relevant tables and details in this author rebuttal. We kindly request reviewers to refer to these tables and sections for a more detailed understanding. **Table 1:** Quantitative results of our method and baselines on the NVIDIA Dynamic Scene dataset. 
| | | | Truck | | | |
|:-----------------:|:----------:|:-------------:|:---------:|:----------:|:-------------:|:---------:|
| | | **Interpolation** | | | **Extrapolation** | |
| **Model** | **PSNR** | **SSIM** | **LPIPS** | **PSNR** | **SSIM** | **LPIPS** |
| TiNeuVox | 27.230 | **0.846** | **0.229** | 24.887 | 0.848 | **0.209** |
| HexPlane$_{PINN}$ | 25.494 | 0.768 | 0.337 | 24.991 | 0.768 | 0.325 |
| VeRF (Ours) | **27.276** | 0.840 | 0.235 | **28.269** | **0.855** | 0.220 |
| | | | **Skating** | | | |
| | | **Interpolation** | | | **Extrapolation** | |
| **Model** | **PSNR** | **SSIM** | **LPIPS** | **PSNR** | **SSIM** | **LPIPS** |
| TiNeuVox | **29.377** | **0.889** | **0.202** | 24.224 | 0.878 | 0.220 |
| HexPlane$_{PINN}$ | 24.447 | 0.867 | 0.225 | 23.955 | 0.868 | 0.232 |
| VeRF (Ours) | 26.999 | 0.848 | 0.227 | **28.654** | **0.896** | **0.208** |

**Table 2:** Ablation study of the keyframe number on our Dynamic Indoor Scene dataset.

| | | Interpolation | | | Extrapolation | |
|-----------|------------|---------------|-----------|------------|---------------|-----------|
| **Keyframes** | **PSNR** | **SSIM** | **LPIPS** | **PSNR** | **SSIM** | **LPIPS** |
| 8 | **30.321** | **0.871** | **0.220** | **29.093** | **0.873** | **0.225** |
| 16 | 28.000 | 0.862 | 0.226 | 26.235 | 0.839 | 0.237 |
| 32 | 29.764 | 0.851 | 0.255 | 26.634 | 0.828 | 0.247 |

**Table 3:** Ablation study of the camera number on our Dynamic Objects dataset.

| | | Interpolation | | | Extrapolation | |
|---------|------------|---------------|-----------|------------|---------------|-----------|
| **Cameras** | **PSNR** | **SSIM** | **LPIPS** | **PSNR** | **SSIM** | **LPIPS** |
| 12 | **29.027** | **0.970** | **0.039** | **27.594** | **0.972** | **0.036** |
| 6 | 25.689 | 0.954 | 0.051 | 25.114 | 0.959 | 0.122 |
| 3 | 21.460 | 0.912 | 0.088 | 21.370 | 0.917 | 0.084 |

**Table 4:** Quantitative results of NSFF on the Dynamic Indoor Scene dataset. 
| | | Interpolation | | | Extrapolation | |
|-------|------------|---------------|-----------|------------|---------------|-----------|
| **Model** | **PSNR** | **SSIM** | **LPIPS** | **PSNR** | **SSIM** | **LPIPS** |
| NSFF | 29.365 | 0.829 | 0.278 | 24.163 | 0.795 | 0.289 |
| VeRF | **30.321** | **0.871** | **0.220** | **29.093** | **0.873** | **0.225** |

**Table 5:** Quantitative results of TiNeuVox on the chessboard of our Dynamic Indoor Scene dataset.

| | | Extrapolation | |
|-------------------------|------------|---------------|-----------|
| **Model** | **PSNR** | **SSIM** | **LPIPS** |
| TiNeuVox (1 Canonical) | 19.718 | 0.765 | 0.310 |
| TiNeuVox (16 Canonical) | 21.394 | 0.812 | **0.233** |
| VeRF (Ours, K=18) | **24.160** | **0.837** | 0.259 |

**Table 6:** Non-uniform sampling for key-frame selection.

| | | Interpolation | | | Extrapolation | |
|------------|--------|---------------|-------|--------|---------------|-------|
| **Keyframes** | **PSNR** | **SSIM** | **LPIPS** | **PSNR** | **SSIM** | **LPIPS** |
| Uniform 16 | **29.027** | **0.970** | **0.039** | **27.594** | **0.972** | **0.036** |
| Random 16 | 28.463 | 0.966 | 0.043 | 24.489 | 0.959 | 0.045 |

Pdf: /pdf/18851cfed1977eb3e64fbf4631373851800b8e56.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents a framework to simultaneously learn the geometry, appearance, and velocity of 3D scenes from only video frames. The paper's contribution lies in three directions: 1) keyframe dynamic radiance fields to learn the time-dependent volume density and appearance, 2) an interframe velocity field to learn the time-dependent 3D velocity, and 3) joint keyframe and interframe optimization to train the keyframe and interframe fields together with physics-informed constraints. The main contribution lies in the joint keyframe and interframe optimization, which introduces three loss functions that help precisely learn to disentangle object masks, types, and materials. Strengths: S1) The framework is simple to understand yet elegant in achieving many tasks. S2) The paper is well written, with detailed derivations of the major contributions. Easy to understand. S3) The work presents the Dynamic Object and Dynamic Indoor Scene datasets. S4) The evaluation of the proposed approach is well presented against multiple baselines and on downstream tasks (frame extrapolation, scene decomposition, motion transfer). The proposed approach outperforms the existing approaches. Weaknesses: W1) All the evaluation results are presented on a synthetic dataset where the motion information is ideally constrained. It is unclear from the experiments how well the results translate to real-world scenarios. W2) All the results presented in this work are evaluated only on the authors' dataset. It is vital for the paper to present results on publicly available datasets to establish benchmarks (at least for a few downstream tasks). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1) From the qualitative results, the dataset used in this work has limited spatial resolution. So, how would the approach scale to real-looking scenes with larger resolution? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: All the evaluation results are presented on a synthetic dataset where the motion information is ideally constrained. It is unclear from the experiments how well the results translate to real-world scenarios.** **Q2: All the results presented in this work are evaluated only on the authors' dataset. It is vital for the paper to present results on publicly available datasets to establish benchmarks (at least for a few downstream tasks).** **A:** Thanks for the suggestions, and we agree that establishing a benchmark on a public, real-world dataset is crucial for this field of study. We evaluate our method on the real-world NVIDIA Dynamic Scene dataset [1]. It captures real-world dynamic scenes using a static camera rig with 12 cameras. For each scene, we clip 60 frames with reasonable and predictable motions. We reserve the first 46 frames at 11 randomly picked cameras as the training split, *i.e.*, 506 frames, while leaving the 46 frames at the remaining camera for testing interpolation ability, *i.e.*, 46 frames for novel view synthesis within the training time period, and keeping the last 14 frames at all 12 cameras for evaluating future frame extrapolation, *i.e.*, 168 frames. As shown in Table 1 in the Author Rebuttal, our method achieves significantly better results in the challenging task of future frame extrapolation. Figure 1 in the appended PDF shows the qualitative results. Due to the time limit, we can only provide scores on two scenes. More experiments are still running and we will add complete results in the next version. **Q3: From qualitative results, the dataset used in this work has limited spatial resolution. So, how would the approach scale to real-looking scenes with larger resolution?** **A:** This is a very good point. 
In the newly added NVIDIA Dynamic Scene dataset [1], the spatial scale of the real-world scene ``Truck" is about 20×10×10 meters, which is clearly larger than that of the two synthetic datasets in the main paper. From our new results, we can see that our method can easily scale up. [1] J. S. Yoon et al., Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera. CVPR, 2020.
null
null
null
null
null
null
Flow-Based Feature Fusion for Vehicle-Infrastructure Cooperative 3D Object Detection
Accept (poster)
Summary: The authors explore and analyze the existing vehicle-infrastructure cooperative 3D object detection framework, and propose to adopt flow-based features rather than still images to extract temporal features from multiple LiDAR input frames. The experimental results on the DAIR-V2X dataset are better than those of existing methods on the car class. Strengths: 1. The task of vehicle-infrastructure cooperative 3D object detection is very important in the 3D community and has real-world practical applications. Interestingly, the authors propose to extract temporal features from different input frames. Also, they propose a self-supervised method to extract the flow-based features. Weaknesses: 1. The evaluation of a single class on a single dataset is not strong enough to prove the effectiveness of the proposed method. It would be convincing if the authors could conduct the experiments on other labeled classes, e.g., bus, truck, or van. Also, experiments on another existing dataset would be better as well. 2. Memory footprint comparison. The authors did not compare the memory footprint of their work with existing vehicle-infrastructure cooperative 3D detection methods. As far as I can tell, additional memory consumption/effort is needed since compression and transmission are included. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to the questions that I describe in the Weaknesses part. I would also consider the rebuttal and other reviews. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Wmh8, Thank you for providing valuable feedback on our work. We will address each of the limitations you have pointed out in your comments. **W1. We have conducted experiments on more datasets like OPV2V[1],** which also focuses on cooperative 3D detection. Our forthcoming version will also include experiments conducted on the V2X-Sim[2] dataset. Notably, the comment at the top offers a concise summary of our experiment results. These results, spanning the DAIR-V2X[3] and OPV2V[1] datasets, show the effectiveness of our proposed method. **W2. We have augmented our analysis to include memory comparisons with existing cooperative 3D detection methods.** To ensure a focused evaluation, we specifically compare our FFNet with DiscoNet[5], as the inference network structure of DiscoNet aligns closely with that of V2VNet[4]. Detailed implementation specifics for both methods are outlined in the supplementary materials (refer to Section E "Implementation Details of V2VNet and DiscoNet for VIC3D" in "S1-Appendix.pdf"). This comparison is summarized in Table 5 below. Notably, memory calculations were executed using an NVIDIA A100 GPU. In contrast to DiscoNet[5], **FFNet exhibits a slightly elevated memory footprint due to the processing of multiple frames on the infrastructure side, while the vehicle-side memory remains the same.** However, this increased memory usage remains well within acceptable limits for infrastructure computing servers.

| | Infrastructure Side | | | Vehicle Side |
|---------------|---------------------|------------|-----------|--------------|
| | Feature Flow Generation | Compression | Total | |
| DiscoNet[5] | / | 122.1M | 243.9M | 144.3M |
| FFNet | 486.0M | 122.1M | 610.4M | 144.3M |
| FFNet-C1 | 486.0M | 286.1M | 610.4M | 144.3M |

Table 5. Comparison of Memory Footprints. 
FFNet-C1 refers to FFNet with the additional proposed compression modules. Furthermore, autonomous vehicles, with limited resources, are more sensitive to the memory footprint and computing consumption than infrastructure servers. **Our FFNet is memory- and computing-friendly for autonomous driving vehicles.** It extracts feature flow on the infrastructure side to predict future features, eliminating reliance on past data and conserving memory. Latency compensation involves light linear computations, saving vehicle-side computing resources. Best regards, 7569 Authors [1] Xu et al. OPV2V: An open benchmark dataset and fusion pipeline for perception with vehicle-to-vehicle communication. ICRA2022 [2] Li et al. V2X-Sim: Multi-agent collaborative perception dataset and benchmark for autonomous driving. RA-L 2022 [3] Yu et al. DAIR-V2X: A large-scale dataset for vehicle-infrastructure cooperative 3D object detection. CVPR2022 [4] Wang et al. V2VNet: Vehicle-to-vehicle communication for joint perception and prediction. ECCV2020 [5] Li et al. Learning distilled collaboration graph for multi-agent perception. NeurIPS 2021 --- Rebuttal Comment 1.1: Title: Authors have addressed most of my concerns. Comment: Thanks for the answers and clarification in the rebuttal, which covered most of my concerns. --- Reply to Comment 1.1.1: Title: Appreciation for Your Acknowledgment Comment: Dear Reviewer Wmh8, We are pleased that we could effectively address your concerns and we extend our gratitude for recognizing our efforts. Best Regards, 7569 Authors --- Rebuttal 2: Comment: Dear Reviewer Wmh8, Please read the author's rebuttal and other reviews and indicate whether your comments have been addressed. Thank you. Best, AC
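The "light linear computations" for latency compensation mentioned in this rebuttal can be pictured as a first-order feature prediction. The function name and the exact linear form below are our illustrative assumptions, not FFNet's verbatim implementation:

```python
import numpy as np

def compensate(feature: np.ndarray, flow: np.ndarray, latency_s: float) -> np.ndarray:
    """First-order prediction of the infrastructure feature after a given latency."""
    return feature + latency_s * flow

f_t = np.ones((2, 4, 4))        # toy feature map at transmission time t
flow = 0.5 * np.ones_like(f_t)  # per-cell temporal rate of change of the feature
f_pred = compensate(f_t, flow, latency_s=0.2)
print(float(f_pred[0, 0, 0]))   # 1.0 + 0.2 * 0.5 = 1.1
```

Because the prediction is one multiply-add per feature cell, the vehicle-side cost stays negligible compared to re-running a feature extractor on historical frames.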
Summary: This work proposes a flow-based feature fusion framework called Feature Flow Net (FFNet) for vehicle-infrastructure cooperative 3D object detection (VIC3D). FFNet generates aligned features for data fusion, transmitting information for fusion while addressing uncertain temporal asynchrony and transmission costs. The proposed self-supervised training of the feature flow generator leads FFNet to mitigate temporal fusion errors across various latencies. The results on the DAIR-V2X dataset show superior performance compared to other cooperative methods while consuming only about 1/100 of the transmission cost of raw data and using a single model. Strengths: - The writing is clear and easy to follow. - Using predicted features to solve the latency issue is neat and effective. - The whole pipeline is simple but well-designed. It addresses transmission speed and bandwidth while maintaining model performance. - The results are solid and impressive. For different latency scenarios, FFNet achieves the best performance with less transmitted data. Weaknesses: - The abbreviations of the models are not easy to distinguish, for instance, FFNet versus FFNet-V2 versus FFNet-O, etc. - The ablation of compression is missing. Without the proposed compression and decompression, will the tolerance to transmission latency be affected? - The last paragraph of the ablation study is not clear. The definition of the test part of the dataset is unclear. If the models are exposed to the dataset's test set, then they will be better than the one that is not (FFNet-O) for sure. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Although latency and bandwidth are indeed real-world challenges, the vehicle-infrastructure cooperative setting looks a little unrealistic. In that case, an autonomous driving vehicle will only be able to use such extra information when there is an infrastructure sensor, won't it? 
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer enaU, Thank you for your valuable feedback on our work. We have carefully considered your suggestions and would like to respond to each of your main comments regarding our weaknesses and questions. **W1.** We sincerely acknowledge your suggestion, and in our upcoming version, we will enhance the clarity of abbreviations by incorporating a dedicated lookup table in the appendix. **W2.** Your concern about the impact of compression on latency tolerance is valid. The real-time dynamics of the road scene underscore the importance of minimizing latency for accurate predictions. As exemplified in Table 5 in our paper, increasing latency results in a discernible accuracy loss, even with our latency compensation. Moreover, uncompressed transmission can escalate both bandwidth consumption and latency challenges. **To illustrate, transmitting and downloading the original uncompressed feature flow at 10 Hz would demand a substantial 200 Mb/s communication bandwidth.** This could severely impede communication responsiveness and stability, particularly in intricate traffic and communication environments. Conversely, our compression module substantially mitigates bandwidth consumption while preserving overall accuracy. The compression and decompression modules within FFNet are indeed pivotal components of our approach.

| Model | Latency (ms) | mAP@3D | | mAP@BEV | | Transmission cost (Bytes) |
|-------------------------------|--------------|---------|---------|---------|---------|----|
| | | IoU=0.5 | IoU=0.7 | IoU=0.5 | IoU=0.7 | |
| FFNet | 200 | 55.37 | 31.66 | 63.20 | 54.69 | 1.2×10^5 |
| FFNet-C1 | 200 | 55.17 | 31.20 | 62.87 | 54.28 | **1.7×10^4 (~1/10^4)** |
| FFNet (without any compression) | 200 | 55.44 | 31.89 | 63.97 | 55.90 | 2.5×10^8 |

Table 4. Evaluation results of FFNet with different compression settings on the DAIR-V2X[1] dataset. **W3.** We wish to clarify that there are no data breach concerns. 
FFNet's training, including the feature flow prediction modules, is exclusively based on the training part of the DAIR-V2X[1] dataset. **Q1. Vehicle-infrastructure cooperative autonomous driving is a highly promising and dynamic field of research.** On the one hand, the strategic deployment of sensors to forge intelligent transportation systems has gained attention from a diverse array of nations and research institutions, including Germany[4], China[5], and the United States[6]. Additionally, infrastructure sensors, including cameras, are already a common sight on city roads, highways, and various traffic networks. On the other hand, despite achieving great progress recently, autonomous driving still faces great safety challenges due to the lack of a global perspective and limited long-range perception capability. A promising solution to address these challenges is to leverage infrastructure information via Vehicle-to-Everything (V2X) communication, which has been shown to significantly expand the perception range and enhance autonomous driving safety[1, 2]. Overcoming the issues in vehicle-infrastructure cooperation, thereby enabling the seamless integration of roadside information into autonomous driving vehicles, constitutes a captivating realm of research. **Furthermore, our work can be extended to more diverse V2X scenarios.** Our experiment results on the DAIR-V2X[1] and OPV2V[3] datasets confirm FFNet's efficacy across different datasets and its remarkable effectiveness in a wide range of V2X applications, including vehicle-infrastructure interactions and complex multiple-vehicle scenarios. In essence, our approach enables autonomous driving vehicles to use extra information from infrastructure or other vehicles' sensors through V2X communication. Best regards, 7569 Authors [1] Yu et al. DAIR-V2X: A large-scale dataset for vehicle-infrastructure cooperative 3d object detection. CVPR2022 [2] Eduardo et al. 
Cooperative perception for 3d object detection in driving scenarios using infrastructure sensors. TITS2020 [3] Xu et al. OPV2V: An open benchmark dataset and fusion pipeline for perception with vehicle-to-vehicle communication. ICRA2022 [4] https://innovation-mobility.com/en/project-providentia/ [5] https://thudair.baai.ac.cn/index [6] https://www.transportation.gov/tags/v2i --- Rebuttal 2: Comment: Dear Reviewer enaU, Please read the author's rebuttal and other reviews and indicate whether your comments have been addressed. Thank you. Best, AC
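The 200 Mb/s figure quoted in W2 above follows from simple per-frame arithmetic. In the sketch below, the 2.5 MB uncompressed per-frame payload is our assumption, chosen so the arithmetic reproduces the quoted bandwidth at 10 Hz:

```python
def bandwidth_mbps(bytes_per_frame: float, frames_per_s: float) -> float:
    """Required link bandwidth in megabits per second."""
    return bytes_per_frame * 8 * frames_per_s / 1e6

# Assumed 2.5 MB uncompressed feature-flow payload per frame, sent at 10 Hz:
print(bandwidth_mbps(2.5e6, 10))        # -> 200.0 (Mb/s)
# The same payload after a hypothetical 10^4x compression ratio:
print(bandwidth_mbps(2.5e6 / 1e4, 10))  # -> 0.02 (Mb/s)
```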
Summary: This paper introduces FFNet, a flow-based feature fusion framework that incorporates a feature flow prediction module to predict future features and addresses asynchrony issues. Instead of transmitting feature maps extracted from static images, FFNet transmits feature flow by leveraging the temporal coherence of sequential infrastructure frames. To evaluate the effectiveness of FFNet, experiments are conducted on the DAIR-V2X dataset. Strengths: 1. The motivation is clear and the idea is simple yet effective. 2. The overall results look good. Weaknesses: 1. The major limitation of this work pertains to its narrow scope, focusing exclusively on vehicle-infrastructure cooperative 3D object detection and mainly addressing the latency issue. While the proposed idea demonstrates effectiveness within this specific application, its applicability to broader contexts (such as multiple vehicle scenarios) remains uncertain. 2. Another weakness lies in the experimental section. The comparison made with V2VNet and DiscoNet seems unfair since these models were not originally designed to address communication latency. To ensure a more comprehensive evaluation, it would be beneficial for the authors to incorporate additional modules into these methods or compare their approach to more advanced models, such as SyncNet (ECCV 2022). This would provide a more accurate assessment of the proposed method's performance in comparison to state-of-the-art techniques. 3. A minor weakness is that the literature review is not sufficient. The authors should include more closely-related works such as [1-5] [1] Xu, R., Xia, X., Li, J., Li, H., Zhang, S., Tu, Z., Meng, Z., Xiang, H., Dong, X., Song, R. and Yu, H., 2023. V2v4real: A real-world large-scale dataset for vehicle-to-vehicle cooperative perception. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 13712-13722). [2] Li, Y., Zhang, J., Ma, D., Wang, Y. and Feng, C., 2023, March. 
Multi-robot scene completion: Towards task-agnostic collaborative perception. In Conference on Robot Learning (pp. 2062-2072). PMLR. [3] Xu, R., Tu, Z., Xiang, H., Shao, W., Zhou, B. and Ma, J., 2023, March. CoBEVT: Cooperative Bird’s Eye View Semantic Segmentation with Sparse Transformers. In Conference on Robot Learning (pp. 989-1000). PMLR. [4] Li, J., Xu, R., Liu, X., Ma, J., Chi, Z., Ma, J. and Yu, H., 2023. Learning for vehicle-to-vehicle cooperative perception under lossy communication. IEEE Transactions on Intelligent Vehicles. [5] Su, S., Li, Y., He, S., Han, S., Feng, C., Ding, C. and Miao, F., 2023. Uncertainty quantification of collaborative detection for self-driving. ICRA. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Considering the major concern about the limited scope and application, the authors may consider submitting this work to a more appropriate venue. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer NKqy, Thank you for providing valuable feedback on our work. We will address each of the limitations you have pointed out in your comments. **W1.** Regarding the potential limitations of our work concerning application scenarios, **we have extended our experiments to encompass the OPV2V dataset[1], which exclusively focuses on cooperative 3D detection in multiple-vehicle scenarios.** Our forthcoming version will also include experiments conducted on the V2X-Sim[2] dataset. Notably, the comment at the top offers a concise summary of our experiment results. These results, spanning the DAIR-V2X[3] and OPV2V[1] datasets, show the impressive efficacy of our proposed approach across diverse V2X scenarios. In addition, we address not only the latency challenge but also the transmission cost challenge. Specifically, we propose the Feature Flow Net (FFNet), a unified framework to overcome the hurdles posed by uncertain temporal asynchrony and communication bandwidth limitations in cooperative 3D object detection. Overall, we kindly request that you re-evaluate the scope of our work in light of the applications we have focused on and the issues we have successfully addressed. **W2.** Addressing your concern in Weakness 2, we recognize the significance of comparing FFNet with existing latency-aware techniques like SyncNet[4]. We wish to clarify that **we have indeed analyzed the distinctions between FFNet and SyncNet[4] in our paper (see lines 109-112). Moreover, we have conducted comparative experiments and provide comprehensive analysis in the supplementary material** (specifically, sections F "Comparison of Feature Flow Extraction on Different Sides" and G "Relationship to Other Existing Possible Solutions" in "S1-Appendix.pdf"). It is worth noting that FFNet and SyncNet[4] approach the cooperative problem from fundamentally different angles.
While SyncNet[4] focuses on leveraging historical features for latency compensation, FFNet takes a more comprehensive approach by integrating transmission and reception considerations. **In our experiments (as outlined in section F of the supplementary material), we incorporate SyncNet's compensation module for VIC3D detection**, that is, utilizing historical features for feature prediction on the vehicle side. The results, as showcased in Table 2 of the appendix, illustrate FFNet's superiority in overcoming latency through feature flow extraction on the infrastructure side. We also list partial results in Table 3 below. This outcome aligns with the inherent challenges of extracting temporal information from compressed features, which lack the richness found in raw sequential point clouds.

| Model | Latency (ms) | mAP@3D (IoU=0.5) | mAP@3D (IoU=0.7) | mAP@BEV (IoU=0.5) | mAP@BEV (IoU=0.7) |
| --- | --- | --- | --- | --- | --- |
| SyncNet[4] | 100 | 50.5 | 28.25 | 58.02 | 50.03 |
| FFNet | 100 | 53.46 | 30.42 | **61.20 (+3.18)** | 52.44 |

Table 3. Comparison with SyncNet[4]. Refer to the appendix for more detailed experimental results.

Furthermore, FFNet optimizes vehicle computing resources, as feature flow generation transpires on infrastructure devices rather than within vehicles. This presents FFNet as a more computing-efficient solution for resource-constrained vehicle devices. In contrast, SyncNet[4] demands increased computational resources to leverage historical per-frame features. Additionally, SyncNet[4] necessitates heightened storage due to its dependence on past received frames. SyncNet's vulnerability to frame drops impacts its execution and performance, a concern mitigated by FFNet's storage-friendliness and robustness. **W3.** Thanks to your suggestion, we will consider adding more related papers in the next version. Best regards, 7569 Authors [1] Xu et al. OPV2V: An open benchmark dataset and fusion pipeline for perception with vehicle-to-vehicle communication.
ICRA2022 [2] Li et al. V2X-Sim: Multi-agent collaborative perception dataset and benchmark for autonomous driving. RA-L 2022 [3] Yu et al. DAIR-V2X: A large-scale dataset for vehicle-infrastructure cooperative 3d object detection. CVPR2022 [4] Lei et al. Latency-aware collaborative perception. ECCV2022 --- Rebuttal Comment 1.1: Comment: Thank the authors for the response. My concerns were partially resolved and I upgrade my rating to borderline accept considering the authors will add the v2v setup and missing references. --- Reply to Comment 1.1.1: Title: Appreciation for Your Acknowledgment Comment: Dear Reviewer NKqy, We are delighted to have successfully addressed your concerns and greatly appreciate your recognition of our dedicated efforts. Best Regards, 7569 Authors --- Rebuttal 2: Comment: Dear Reviewer NKqy, Please read the author's rebuttal and other reviews and indicate whether your comments have been addressed. Thank you. Best, AC
Summary: In this paper, a cooperative detection framework, named Feature Flow Net (FFNet), is presented to address the challenges of temporal asynchrony and limited communication conditions in vehicle-infrastructure cooperation. Specifically, FFNet transmits feature flow to generate aligned features for data fusion, providing a unified manner to transmit valuable information for fusion while addressing the challenges of uncertain temporal asynchrony and transmission cost. Incorporating self-supervised learning, the proposed FFNet framework presents a performance improvement for VIC3D object detection with less communication cost. Strengths: 1. FFNet transmits feature flow to generate aligned features for data fusion. This idea is simple yet can transmit temporally asynchronous information for fusion with less transmission cost. 2. The proposed self-supervised training approach equips FFNet with feature prediction ability to mitigate temporal fusion errors across various latencies without cooperative views and labeling. 3. FFNet demonstrates superior performance compared to other cooperative methods and demonstrates robustness in terms of latency. Weaknesses: 1. For 3D object detection, mAP@3D reflects detector accuracy better than mAP@BEV. However, it can be seen from Table 1 that FFNet's performance advantage in mAP@3D is much smaller than that in mAP@BEV, and the advantage at IoU=0.7 is smaller than at IoU=0.5. This may indicate that a) the proposed method does not learn changes in the height of the environment, resulting in poor accuracy of FFNet in the height dimension, or b) the flow-based prediction strategy is not accurate enough, which limits the upper bound of FFNet's detection quality. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Experiments and visualizations are expected to analyze and verify why FFNet is deficient in high-quality detection boxes. 2.
Can a finer supervision method, not limited to global similarity, improve the performance of FFNet on mAP@3D? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Resolving Anomalies in Evaluation Results using the mAP@3D Metric** Dear Reviewer CL6E: Thank you for your insightful question and for bringing to our attention the anomalous results in the mAP@3D metric. **Phenomenon.** We acknowledge the existence of abnormal performance in mAP@3D. As evident in Table 1, "FFNet's performance advantage in mAP@3D is much smaller than that of mAP@BEV." Moreover, both FFNet and other middle fusion methods like V2VNet[1] and DiscoNet[2] achieve significantly lower mAP@3D (IoU=0.7) compared to early fusion and late fusion methods. **Explanation.** The aforementioned issues can be attributed to the assumption of a strictly parallel ground in the implementation of feature fusion methods. Specifically, in the conversion of infrastructure Bird's Eye View (BEV) feature/feature flow into consistent local coordinate systems, we assumed that the x-y planes would remain parallel to the ground. Unfortunately, as the DAIR-V2X dataset[4] originates from real-world capture, the driving area does not adhere to strict parallelism. Consequently, an unintended rotation component surfaces in the height dimension when transitioning from the infrastructure's local coordinate system to that of the vehicle. Regrettably, we overlooked this height component, resulting in reduced accuracy in the height dimension. On the contrary, we used real transformation matrices in the implementation of early fusion and late fusion, mitigating the impact on the height dimension and achieving much better mAP@3D (IoU=0.7) results than middle fusion methods. **Solution.** To rectify this issue, we conducted additional experiments by unifying the bottom of all detection boxes and ground truth boxes to the same height. This effectively eliminates the influence of the height component, and we re-evaluated the detection results of FFNet. The table below shows a significant improvement in mAP@3D performance for FFNet.
| Height Standardization | Latency (ms) | mAP@3D (IoU=0.5) | mAP@3D (IoU=0.7) | mAP@BEV (IoU=0.5) | mAP@BEV (IoU=0.7) |
| --- | --- | --- | --- | --- | --- |
| No | 200 | 55.37 | 31.66 | 63.20 | 54.69 |
| Yes | 200 | **62.48 (+7.11)** | **47.92 (+16.26)** | 63.20 | 54.69 |
| No | 300 | 53.46 | 30.42 | 61.20 | 52.44 |
| Yes | 300 | **60.39 (+6.93)** | **45.82 (+15.40)** | 61.20 | 52.44 |

Table 2. Evaluation results with height standardization on the DAIR-V2X dataset[4].

**More Discussion:** mAP@BEV is widely recognized as a more relevant metric in the autonomous driving context. Since the driving scene typically involves no objects above traffic participants, focusing on the BEV (Bird's Eye View) dimension becomes critical for evaluation. Notably, related approaches like V2VNet [1], DiscoNet [2], and V2X-ViT [3] exclusively report evaluation results that prioritize the BEV dimension. This alignment in evaluation underscores the industry's consensus on the significance of BEV-based metrics for assessing autonomous driving systems. [1] Wang et al. V2vnet: Vehicle-to-vehicle communication for joint perception and prediction. ECCV2022 [2] Li et al. Learning distilled collaboration graph for multi-agent perception. NeurIPS 2021 [3] Xu et al. V2x-vit: Vehicle-to-everything cooperative perception with vision transformer. ECCV2022 [4] Yu et al. DAIR-V2X: A large-scale dataset for vehicle-infrastructure cooperative 3d object detection. CVPR2022 --- Rebuttal Comment 1.1: Comment: Thank the authors for the response and additional experiments. My concerns were resolved and I keep my original rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer CL6E, We sincerely appreciate your recognition and support for our work. Best Regards, 7569 Authors --- Rebuttal 2: Comment: Dear Reviewer CL6E, Please read the author's rebuttal and other reviews and indicate whether your comments have been addressed. Thank you. Best, AC
Rebuttal 1: Rebuttal: **Conducting FFNet on More Datasets and Driving Contexts** In response to some reviewers' concerns regarding the sufficiency of our experiments (specifically, Reviewer Wmh8's concern about limited dataset usage and Reviewer NKqy's concern about limited application scenarios), we have taken their feedback into consideration and conducted further experiments to address these issues. **Datasets.** For our extended experiments, we chose to utilize two widely-used simulated datasets: OPV2V[1] and V2X-Sim[2]. These datasets are renowned for their focus on cooperative 3D detection tasks in multiple vehicle scenarios, making them ideal candidates to validate the capabilities of FFNet. Due to the constraints of the rebuttal period, we were able to complete only a portion of the experiments. Consequently, in this submission, we only report experiment results for the OPV2V dataset[1]. We will provide additional experimental results on the V2X-Sim dataset[2] in the next version. **Experiment Setting and Results.** Each scene in OPV2V[1] involves 2-7 autonomous vehicles. To align with FFNet's existing framework, we adopted a two-vehicle configuration. We train FFNet, as well as the feature flow modules, on the training split of OPV2V[1]. Subsequently, we evaluate FFNet with and without feature flow prediction on the test split, respectively, under different latencies (100 ms and 200 ms). Experiment results are reported in the following table. From Table 1 below, it can be seen that: - FFNet can leverage data from other vehicles to enhance detection performance effectively, compared to PointPillars[5]. - FFNet can mitigate performance drops caused by latency, outperforming methods like V2VNet[4] across both 100 ms and 200 ms latency scenarios.
| Model | Fusion Type | Latency (ms) | mAP@BEV (IoU=0.5) | mAP@BEV (IoU=0.7) | mAP@3D (IoU=0.5) | mAP@3D (IoU=0.7) |
| --- | --- | --- | --- | --- | --- | --- |
| PointPillars[5] | non-fusion | / | 71.7 | 56.7 | 70.0 | 44.6 |
| V2VNet[4] | middle | 100 | 79.6 | 59.9 | 76.6 | 49.8 |
| FFNet | middle | 100 | **82.1 (+10.4)** | 63.3 | 79.6 | 54.6 |
| V2VNet[4] | middle | 200 | 71.3 | 50.2 | 65.8 | 39.7 |
| FFNet | middle | 200 | **80.0 (+8.3)** | 60.4 | 77.5 | 51.8 |

Table 1. Experiment results on the OPV2V[1] dataset.

**Conclusion.** Our experiment results on the DAIR-V2X[3] and OPV2V[1] datasets confirm FFNet's efficacy across different datasets and its remarkable effectiveness in diverse V2X scenarios, including vehicle-infrastructure interactions and complex multiple vehicle scenarios. This showcases FFNet's adaptability and potential for excelling in a wide range of V2X applications, making it a powerful and reliable solution for cooperative driving tasks, reinforcing its value and significance in V2X research and application. [1] Xu et al. OPV2V: An open benchmark dataset and fusion pipeline for perception with vehicle-to-vehicle communication. ICRA2022 [2] Li et al. V2X-Sim: Multi-agent collaborative perception dataset and benchmark for autonomous driving. RA-L 2022 [3] Yu et al. DAIR-V2X: A large-scale dataset for vehicle-infrastructure cooperative 3d object detection. CVPR2022 [4] Wang et al. V2vnet: Vehicle-to-vehicle communication for joint perception and prediction. ECCV2020 [5] Lang et al. Pointpillars: Fast encoders for object detection from point clouds. CVPR2019
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision
Accept (spotlight)
Summary: The authors present a new method for training diffusion models based on partial observations of underlying signals. Based on a dataset containing groups of multiple partial observations of the same signal, as well as a differentiable model of the forward operator, this method can train a diffusion model for synthesizing underlying signals conditioned on a given partial observation. The underlying signals can then be projected into "novel views". They apply their method to inverse graphics, GAN inversion, and motion prediction. The method outperforms the tested baselines in several metrics. Strengths: - Results seem strong, both visually and quantitatively. - Paper is well-motivated, and addresses an important problem. - The metrics used to evaluate performance are sound and relevant. Weaknesses: - Proposition 1 is very vaguely written. It needs to be presented with more rigorous definitions (what are the assumptions in math form? what is the result? "agrees with" is not formal enough and the "if" statement is not clear to me). - More generally, the method section is presented in simplified terms. There needs to be more math there. Do you prove that your proposed loss function is indeed related to the maximum likelihood objective? KL divergence? What are the conditions (in math form) for Proposition 1? How does the sampling algorithm (after training) work? The overall presentation of the experiments section is also overly convoluted and unclear. - The requirement on the dataset to have multiple views of the same scene is still a significant assumption over the dataset. While this might make sense in 2D-to-3D, it makes less sense in other stochastic inverse problems such as accelerated MRI reconstruction (we almost never have two scans of the same object/person). Minor issues: - In lines 166-170, diffusion-based inverse problem solvers are mentioned. The main weakness of those works that is mentioned is their inability to learn from partial observations. 
It would be beneficial to discuss concurrent works [W1, W2] that model signals based on partial observations as well. What are the similarities and differences with your work? [W1] https://arxiv.org/abs/2305.13128 [W2] https://arxiv.org/abs/2305.19256 Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - How does your work compare to the many NeRF+Diffusion papers? Examples include [Q1, Q2] but there are many more. I am positive that some of them apply to the same use case the authors present, and can be added as entries in Table 1. [Q1] https://arxiv.org/abs/2304.06714 [Q2] https://arxiv.org/abs/2304.14473 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: No, the authors have not adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments. The reviewer notes that our “results seem strong, both visually and quantitatively”, we “outperform the tested baselines”, and that our paper “addresses an important problem”. We now address the remaining concerns: ### Proposition We **intentionally** only provided a simplified version of the proposition in the main paper to preserve readability. **Note that we provide a rigorous statement and in-depth proof in the supplemental material** (Paper line 147: “...as we formally prove in the supplement:”). Following your comments, we will make the statement in the paper more rigorous and provide an abridged version of the proof already in the main paper. We will also make the references to the supplemental sections more prominent. ### Loss Function In the supplemental material, we **formally relate our loss function to the maximum likelihood objective**; see our supplemental Sec. 2 (“Our loss maximizes the likelihood over total observations”). Following your comment, we will make this explicitly clear in the main paper and reference the supplemental when discussing the loss. ### Sampling at test time Given a context observation, we start from pure noise and denoise the underlying signal and target observation, see L138-139 (“At test time, a signal is sampled by iterating Eq. 4, 5, and 6, ….”). Also, see Figure 1 (bottom left “Single Denoising Step”). We iterate the single denoising step depicted here, starting from O^{trgt}_T. We will improve the exposition of the sampling stage and make it more prominent. For the inverse graphics application, refer to the updated overview figure; see general comment (G5). ### Presentation of the experiments We provide details for each application in the supplemental document; see Section 3. We will work on making the exposition clearer and release our code to aid in understanding and reproduction. 
We have already updated the overview figure for the inverse graphics application; see general comment (G5). ### Multiple views required for training We require multiple observations during training; however, they do not necessarily need to cover the signal completely. Consider the 3D reconstruction application: During training, we only observe sparse camera views, observing only a small subset of each scene. Yet, our model can reconstruct complete scenes at test time, such as a complete 360° view of an object, by denoising the test camera trajectory. To further demonstrate this, we conducted an experiment where we learn to reconstruct 64x64 resolution images conditioned on partial observations that have a 32x32 patch missing. Referencing our formulation, the signal is the complete 2D image, and the forward model is a sampling function that can sample a specific region from the complete image. Importantly, we only use **incomplete** images during training, i.e., one patch from the image is always missing in our training dataset. From such training data, we first compute our context and target observations by applying further degradation to our training images: we remove pixels from the training image to compute the context, and the removed pixels become the target. Our method now learns to sample complete signals conditioned on the context observations. We use a similar architecture to our 3D reconstruction pipeline (see updated figure (G5)), where we condition the diffusion model on a deterministic estimate computed from the context. This network takes the context as input and computes a deterministic estimate of the signal, which we supervise only at the visible regions of the training images. The diffusion model is conditioned on this deterministic estimate and learns to sample the target patch by first reconstructing the signal, i.e., the complete image, and applying the forward model. At test time, we can complete the missing information in any image; see the response PDF.
This shows that our method could likely apply to a wide range of stochastic inverse problems, including medical imaging. We also compare to the deterministic baseline in the response PDF. ### Relation to concurrent works that learn from partial data As correctly noted, they are concurrent to our submission and appeared on arXiv after the submission deadline. The main difference is that both these papers seem to assume a linear forward model, while our approach supports a much wider range of forward models, as we demonstrate in our experiments. Our differentiable renderer is non-linear, and GAN inversion also involves a non-linear forward pass of the GAN model. ### Relation to NeRF+diffusion papers We discuss many papers, including concurrent works, that use NeRF and diffusion, in Sec. 3 and Sec. 4.1. The most closely related paper, in terms of the nature of the model, is RenderDiffusion; see general discussion comment (G1). The most closely related paper in terms of the task and the complexity of datasets is SparseFusion, which we extensively compare to and significantly outperform. We briefly discuss the two papers: Both [Q1] and [Q2] train 3D diffusion models using purely 3D architectures, adding noise to some 3D latents that represent the scene during training. As these 3D latents are not available when training from 2D observations, they either have to be pre-computed [Q2] or discovered jointly [Q1]. Our approach is radically different, where we show that learning to denoise observations can lead to sampling in 3D. We do not add noise to any unknown quantity at training time. Instead, we add noise to the known observation space. Further, we show that our model scales to significantly more complex scenes, such as the compositional real indoor scenes from RealEstate10k, compared to the limited complexity of scenes considered in these two papers. Please also note that both these papers are concurrent to our submission. Thanks for pointing out all the references.
We will cite them. ### Limitations See general discussion point (G4). --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in the rebuttal. I trust the authors to include more rigorous definitions of their propositions (not necessarily proofs, just a complete set of assumptions and results) in the main paper, as well as a limitations section in the main paper. The limitations section should also mention the requirement on multiple views of the same scene. I understand this does not cover the scene completely, but it is nevertheless different than having one view per scene. Based on the rebuttal, I raise my score to 5. --- Reply to Comment 1.1.1: Comment: Thank you very much for considering our rebuttal! We are happy that we could answer your core questions. We will make the suggested changes in our paper. Borderline accept seems to imply that there might be ways to make the paper stronger. Could you kindly mention in what ways we could make the paper stronger yet?
Summary: This work proposes a framework that allows learning a conditional diffusion model for a signal without direct observations. The proposed method integrates differentiable forward models with conditional diffusion models. It is evaluated for the task of learning image-conditioned 3D scenes from 2D images with poses and a few more downstream tasks. Strengths: 1. The proposed method addresses an important problem. A main limitation of diffusion models is that they typically require direct observations of the signal of interest. The proposed method showed promising results on learning with indirect observations only. 2. The proposed method is novel (by considering RenderDiffusion as concurrent), which extends the classical diffusion process. The network architecture is well-designed and intuitive. 3. The effectiveness of the proposed method is demonstrated on multiple tasks and datasets, making the results comprehensive and solid. Weaknesses: I did not find any major weakness in this paper. A minor weakness: on the CO3D dataset, only the hydrants category is evaluated. It could make the results stronger to show the performance on more categories. Some other points are put into the Questions section, which may make the submission stronger if they are clarified. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. If the context is empty, i.e. making the model unconditional, is the proposed method mathematically equivalent to [1] RenderDiffusion? If not, what are the major differences in formulations? 2. In line 137 equations (4) (5) (6), to my understanding, \hat{O} is the "predicted x0" in DDPM; it is an estimate of a clean observation but in theory not an instance in the space of clean observations. \hat{O} is rendered from S_{t-1}, while a true clean observation has a corresponding neural field. Is it a practical fact or something provable that an "estimated observation" \hat{O} has a corresponding neural field S_{t-1}?
[1] RenderDiffusion: Image Diffusion for 3D Reconstruction, Inpainting and Generation Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful read and insightful comments. The reviewer notes that our method “addresses an important problem” and shows “promising results”, our architecture is “well-designed and intuitive”, and our results are “comprehensive and solid”. We now address remaining concerns: ### Evaluation on more Co3D categories Thank you for the suggestion. We evaluate the suggested setting, training a single model on all 10 Co3D categories. See general discussion point (G3) for details, and the rebuttal PDF for results. This evaluation goes beyond the category-specific training used in other existing papers, such as SparseFusion, which require training a single model per class. While our model generates plausible results, the diffusion model in SparseFusion often fails to generate reasonable images, even generating output images from a different object category. Score distillation with such a model often fails to reconstruct any reasonable object. We further conducted an experiment where we train on a subset of the large-scale object-centric Objaverse dataset. Both these experiments show that our models are scalable and effective. ### Differences to RenderDiffusion Even if the context is empty, our model is different from RenderDiffusion, as it does not rely on global pose information, i.e. assuming that all objects are aligned with one canonical coordinate frame, and does not rely on monocular supervision. We rely on the multi-view losses for our proof in the supplemental. See general comment (G1) for a more detailed discussion. ### Does every “estimated observation” (\hat{O}) have a corresponding signal (S_{t-1})? If we understand this right, the reviewer raises the point that while the “true” clean observation has a corresponding signal, is it necessarily true that the estimate of this clean observation \hat{O} also has a corresponding signal?
In our models, \hat{O} is generated using a forward model operation on the estimate of the signal, and thus, the diffusion model is strictly limited to estimating \hat{O} that have a corresponding signal. If the estimated observation does not have any corresponding signal, we would not be able to properly minimize our loss functions. Thus, it is a practical fact that, for the forward models we describe in the paper, the estimate of the observation can also be described as a signal, letting us successfully optimize our networks. Thanks for the insightful question! --- Rebuttal Comment 1.1: Comment: Thanks for the response! My questions are well addressed and I tend to keep my rating.
Summary: This paper incorporates a differentiable forward model (e.g., rendering function) into a denoising probabilistic process in order to sample from distributions of underlying signals consistent with partial observations. For example, in inverse graphics the proposed approach would enable sampling from the distribution of 3D scenes consistent with a single 2D image. The efficacy of the approach is showcased in three applications: sampling from the distribution of 3D scenes using only 2D images at train and test times; single image motion prediction (where the forward model is a warping operation); and projecting partial images onto the latent space of a GAN (GAN inversion). Strengths: **S1.** Diffusion models for sampling 3D shapes or scenes have only been explored recently. A key challenge in this direction is the lack of 3D training data. The method proposed in this paper is able to train 3D models without the need for 3D training data. **S2.** Tackling Stochastic Inverse Problems more generally is a great way to broaden interest in advances in diffusion models. Weaknesses: **W1.** There are some (very) recent works that tackle the same problem tackled in this paper, e.g., (Karnewar et al. CVPR 2023), (Kim et al. CVPR 2023). It would be important for the authors to discuss their work in relation to such recent works. **W2.** The evaluation is limited: few tasks, few baselines (e.g., compare with the more detailed evaluation in baseline SparseFusion [50]). In fact, for single-image motion prediction and GAN inversion the main result is in the form of predictions for two inputs. **W3.** While the approach was framed as a solution to Stochastic Inverse Problems in general, all applications considered are in computer vision. In my view, the generality of the problem statement is a strength of the submission and a demonstration of this generality would have greatly strengthened the submission. ### References Karnewar et al. 
HOLODIFFUSION: Training a 3D Diffusion Model using 2D Images, CVPR 2023. Kim et al. NeuralField-LDM: Scene Generation with Hierarchical Latent Diffusion Models. CVPR 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Q1.** How does your approach relate / differ w.r.t. the recent methods in W1 above? **Q2.** What are the main limitations of the approach, e.g., complexity of training and inference? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No limitations or negative societal impact were discussed in the main submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and questions. The reviewer notes that our paper tackles a “key challenge” in training 3D diffusion models, and that our general formulation is a “great way to broaden interest in advances in diffusion models”. We now address the concerns: ### HoloDiffusion [Karnewar et al.] and NeuralField-LDM [Kim et al.]: We address this remark in the following points: - Please note that we already discuss these papers in the original submission (HoloDiffusion in L162-163, NeuralField-LDM in L190) and talk about how they relate to our work. **However, we hear the reviewer in that this discussion may not cover all essential points - we will thus significantly expand on this discussion in the camera-ready submission as below.** - Note that both of these papers are concurrent to our submission and do not constitute prior work. - While certainly related and worth discussing, their contributions do not significantly overlap with our contributions and they do not take away from the significance of our results. We discuss this in-depth in the following, **and will extend the discussion of these papers in the revised version of our paper**. - HoloDiffusion is an unconditional 3D generative model limited to simple object-centric 3D scenes. Specifically, it can only be trained on pre-segmented (background-free) Co3D objects, one class at a time. HoloDiffusion adds noise to deterministic estimates of 3D scenes. This input differs at training and at test time, inducing a domain shift, which the authors address by introducing a 2-stage approach. In contrast, our model does not suffer from a domain shift at training and at test time, meaning that we can train our model end-to-end and in a single stage, without any multi-stage pipeline.
Our model can further be trained on the much more complex class of indoor rooms from the RealEstate10k dataset, and further can model Co3D objects without pre-processing via segmentation and background removal. We present theoretical insights in our paper that clearly show how our formulation lets us optimize for 3D scenes using our 2D loss functions. As shown in our paper, we can learn distributions over complex compositional scenes that go far beyond the results of HoloDiffusion. - NeuralField-LDM is a 2-stage approach, where 3D latents are first computed from the 2D data, and then a 3D diffusion model is trained on the 3D latents. Our approach is radically different, demonstrating that we can directly learn to sample in 3D by learning from 2D data without requiring computation of an intermediate stage of 3D latents. ### Limited Evaluation We believe that evaluations presented in our paper are detailed and demonstrate the effectiveness of the approach. This is acknowledged by 5usH (“experiments… demonstrate the effectiveness of the proposed method”), LcVC (“results (are) comprehensive and solid”), and zgt6 (“Results seem strong, both visually and quantitatively”). We now expand on the points raised by the reviewer. We evaluate against the state of the art in both deterministic and probabilistic reconstruction, pixelNeRF and SparseFusion, respectively. The reviewer points out the evaluation in SparseFusion, which features more deterministic baselines. Please note that pixelNeRF outperforms all other deterministic baselines in the SparseFusion evaluations (see their Table 2 PSNR). Thus, we believe that comparing with SparseFusion and pixelNeRF sufficiently demonstrates the quality of our method. While SparseFusion only uses the Co3D dataset for evaluation, we also use the very challenging scene-level RealEstate10k dataset. 
Nevertheless, we provide even more evaluations of our 3D reconstruction in this rebuttal where we train a general model on 10 categories of the Co3D dataset, see general discussion point (G3) and the rebuttal PDF. For the single-image motion and GAN inversion experiments, we do provide more results than the two examples that the reviewer mentions: We provide additional results in the supplemental pdf and the webpage. For the GAN inversion experiment, we provide quantitative results in Table 1 demonstrating improvements over a deterministic baseline. We further mention that the deterministic baseline for the single-image motion problem collapses (L290 - 291), thus making any quantitative evaluation meaningless. We are not aware of any other probabilistic baselines that we could compare to. For example, all encoder-based GAN inversion approaches we are aware of use a deterministic model. However, we are happy to add any additional evaluations that the reviewer suggests. ### Limitations See general discussion point (G4) ### Applications beyond Computer Vision We believe our framework is very generally applicable, and would be useful for a wide range of applications. However, we agree with the reviewer and will clearly mention the scope of our empirical result in the abstract and introduction of the paper. We address the generality of our approach in the general discussion point (G2), and discuss new experiments and potential applications. --- Rebuttal Comment 1.1: Comment: First of all, I'd like to offer an apology to the authors. It looks like my original rating does not correspond to the rest of my review. The rating should be higher. I thank the authors for the extensive rebuttal and additional experiments. They are well received and strengthen the experimental validation. 
Question re NeuralField-LDM: while the authors have stated the approach is "radically different," I find the inputs and outputs are compatible -- in the sense that a comparison would be possible (the method proposed here could ignore any depth input data). Am I right, or what would make the comparison not worthwhile? --- Reply to Comment 1.1.1: Comment: We greatly appreciate you considering our rebuttal! NeuralField-LDM primarily learns an **unconditional** diffusion model and does not show any results of inferring a distribution of 3D scenes conditioned on one image. Their paper shows some results of conditioned synthesis, where the conditioning inputs are bird's eye view (BEV) segmentation maps that include significant information about the complete scenes. It is unclear whether this method can be extended to achieve the same input-output behavior as our paper. Unfortunately, their code is not publicly available, making any comparison practically very difficult.
Summary: This paper focuses on denoising diffusion models, a type of generative model used to capture complex signal distributions. However, current approaches can only model distributions when training samples are available, which is not always the case in real-world tasks. In fields like inverse graphics, where the goal is to sample from a distribution of 3D scenes based on a given image (without having access to ground-truth 3D scenes), this limitation poses a challenge. To address this, the authors introduce a new class of denoising diffusion probabilistic models. These models learn to sample from distributions of signals that are never directly observed but instead measured through a known differentiable forward model. This forward model generates partial observations of the unknown signal. The authors integrate this forward model directly into the denoising process. During testing, this approach enables sampling from the distribution of underlying signals consistent with a given partial observation. The authors demonstrate the effectiveness of their method on three challenging computer vision tasks. For instance, in inverse graphics, they show that their model allows direct sampling from the distribution of 3D scenes consistent with a single 2D input image. Strengths: This paper proposes a new type of conditional diffusion model for 3D scene generation without referring to the underlying 3D signal. This is an essential problem in the present era. The writing is super clear and I can easily follow. The experiments are sound and demonstrate the effectiveness of the proposed method. Moreover, I do appreciate the in-depth analysis and formulation of the "3D generation w/o 3D data" challenge. Weaknesses: Though this method gives a theoretical formulation of this problem, I believe more elaboration on some details and comparisons with some baselines are required to address my concerns. I have left my questions below.
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Though this method gives a theoretical formulation of this problem, to some extent I find it quite similar to the underlying idea of RenderDiffusion (CVPR 23'), which also trains a diffusion model in which each denoising step outputs the denoised X0. Though RenderDiffusion does not leverage the trgt-ctxt pair for training, the "learning 3D from 2D projection" spirit looks similar. I would like to hear the authors' comments on this. 2. I wonder if your method shares the same setting as "Generative novel view synthesis with 3d-aware diffusion models," which trains an independent diffusion model on a particular scene (like Hydrant or a single scene from RealEstate10k)? This is not a limitation, but since EG3D/RenderDiffusion-like methods can perform well on a single category, e.g., FFHQ/AFHQ/ShapeNet, is your method also capable of achieving that performance, though they do unconditional generation and your method is a conditional one? 3. Since GT 3D poses are currently required, I wonder whether this method is robust to the noisy-pose problem? Besides, since you incorporate volume rendering as the "forward" process, is the synthesized scene perfectly view-consistent? 4. I do appreciate your experiment on RealEstate10K, and I wonder whether your method could be generalized to "scene-level" 3D generation such as an urban / BlockNeRF-like setting? If not, what is the challenge? 5. What are the limitations / failure cases of this method, and could you share some insight behind them? 6. Regarding the GAN inversion experiment, I wonder whether your method has any intrinsic advantages over 3D GAN inversion, since you used StyleGAN2 in the experiment while this method is designed for 3D. Comparison with "E3DGE: Self-Supervised Geometry-Aware Encoder for Style-based 3D GAN Inversion, CVPR 23" is welcome since you both adopt encoder-based pipelines. 7.
I see that your method achieves much better performance than SparseFusion. I wonder whether the popular SDS loss is still helpful in your framework, and whether you see it as a necessary component in future 3D generation tasks? Thanks for your elaboration. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Though I have some concerns, overall I find this paper a sound submission and I hold my current rating towards accept. I am looking forward to the authors' elaboration in the rebuttal. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. The reviewer nicely summarizes our paper and notes that the “writing is super clear”, we solve “an essential problem in the present era”, and that our experiments “demonstrate the effectiveness...”. We now address the points raised in the review. ### Related Work **RenderDiffusion**: See general comment (G1) **Generative novel view synthesis with 3d-aware diffusion models (GeNVS)**: Please note that GeNVS is concurrent to our submission. Both our method and GeNVS train on a dataset of scenes, and not a single scene. However, there are fundamental differences. While we integrate the forward model with the denoising diffusion architecture to enable 3D scene generation, GeNVS is limited to only generating 2D images from the diffusion model. This is a crucial difference: GeNVS is *not* a 3D generative model, i.e. it is not capable of generating 3D scenes. Rather, it can only generate *novel views* of 3D scenes, and thus would require score distillation to obtain a 3D scene, a limitation that GeNVS shares with SparseFusion, which we benchmark with. **EG3D**: EG3D is a 3D GAN that is trained from 2D images. It belongs to a class of 3D GANs that are primarily limited to unconditional modeling of simple objects (as the reviewer correctly notes). Our approach, for the first time, enables learning 3D diffusion models of complicated scenes, such as RealEstate10k - note that 3D GANs have never been able to demonstrate generative modeling of scenes of comparable complexity. We have not explored training on monocular image collections in our paper. Monocular images make the learning problem under-constrained (because of the lack of ctxt-trgt pairs), and exploring how our theoretical formulation can be extended to such cases is a very interesting problem that we leave for future exploration. ### Robustness to noisy poses We agree with the reviewer that our method requires GT 3D poses.
We performed an experiment where we added noise to the pose parameters. The results gracefully degrade - training with 5% noise added to the camera translation parameters makes the results blurry, and leads to worse performance quantitatively, see Fig. 1 and Tab. 1 in the rebuttal pdf. Relying on GT poses is a very common requirement in contemporary 3D reconstruction methods, and we will mention this in our limitations. Removing this reliance on known accurate poses is an important problem for future research. ### View consistency Volume rendering can indeed lead to imperfect view consistency, depending on the quality of training supervision. As we train on videos where we have reasonably dense supervision, we found our results to be highly 3D consistent. To demonstrate this, we visualize point clouds extracted from the reconstructed 3D volume from extreme out-of-distribution viewpoints, see Fig. 5 in the rebuttal pdf. These point clouds demonstrate that our method extracts reasonable surfaces in 3D. ### Scene-level generation We thank the reviewer for their suggestion. We show results on trajectories that enable moving from one room to another in the case of RealEstate10k, and moving all around an object in the case of Co3D. Extending to city-scale trajectories would require tackling similar challenges as BlockNeRF, where the method would need to forget context images that are further in the past, and be able to merge several independent NeRFs. The main limitation at this point is the compute resources required to train and sample longer trajectories. We believe that these limitations are transitory and we would see large scale scene synthesis in the near future, with more optimized architectures and better hardware. ### Limitations See general comment (G4) ### 3D GAN inversion Our 2D GAN experiment showed that our framework is capable of using non-linear GAN-based forward models, and can help with inference from partial data. 
3D GAN inversion is interesting as, similar to our inverse graphics task, it also involves tackling the uncertainties that arise from projection, while constraining the reconstructions to the GAN manifold. Our framework should provide the same advantages it currently does over the deterministic baselines on inverse graphics. Nevertheless, we aim to conduct this experiment for the revised version of our paper. ### SparseFusion and SDS Thank you for the insightful question! Our paper shows that SDS is not **necessary** for 3D generation, as we demonstrate that it is possible to learn a 3D generative model without SDS. This resolves some of the limitations that come with SDS: it has mode-seeking behavior, it requires costly test-time optimization, and it usually relies on monocular image priors that lead to 3D artifacts (like the infamous Janus artifacts). While some recent approaches make progress on some of these limitations, we show that we can completely side-step these issues using our novel approach that natively learns 3D models. At the same time, we also believe that SDS can be **complementary** to our approach: we could use SDS as an additional form of supervision at training time to supervise text conditioning, use it at test time to constrain the sampled scenes conditioned on text, or to extract high-fidelity mesh models. We will add a discussion of these exciting topics to our future work section. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. Though this method has some limitations, the rebuttal has addressed most of my concerns. I think it is worth acceptance and presentation at the main NeurIPS conference.
Rebuttal 1: Rebuttal: We thank all reviewers for their efforts. The reviewers note that we solve “an essential problem in the present era” (5usH), we tackle a “key challenge” in training 3D diffusion models (bLEB), our architecture is “well-designed and intuitive” (LcVC), and “results seem strong, both visually and quantitatively” (zgt6). The reviewers have proposed additional experiments that will better highlight our work's strengths and limitations. **We are happy to report that we have been able to execute the majority of the experiments that the reviewers proposed, with favorable results, which we are excited to include in the paper.** We now offer clarifications to some points shared across reviewers. ### (G1) Differences to RenderDiffusion RenderDiffusion indeed shares some of the motivation of our approach, which is training 3D diffusion models from 2D data, which we discuss in our paper (L162-163). However, our approach differs from RenderDiffusion in several critical points, which uniquely enables our method to go beyond simple, single-object scenes: 1. RenderDiffusion learns an unconditional model. 2. RenderDiffusion requires canonical camera poses, i.e., all of the objects have to be oriented according to a canonical reference frame. This is a major limitation, as canonical poses do not exist for compositional real-world scenes. Our approach does not require canonical poses. 3. By not requiring canonical camera poses, and with the proposed conditional diffusion that is not limited to monocular supervision, our model for the first time departs from simple, single-object scenes and succeeds at 3D generative modeling of room-scale, RealEstate10k scenes, a previously unachieved feat for 2D-to-3D generative models. In addition, we provide a general theoretical formulation of conditional diffusion through forward models and define the conditions under which we can solve this problem optimally.
We also present empirical results for different applications. ### (G2) Generality bLEB appreciates the generality of our formulation but notes the lack of applications outside of vision. Our theoretical formulation is general and not limited to forward models in computer vision. We demonstrate results on distinct, diverse, and important vision problems. We already mention the scope of our experimental results in the paper (lines 13-14). However, we agree that further clarification is beneficial. We will update the introduction and abstract to clarify that we only demonstrate applications in computer vision and will mention other domains only in the future work section. zgt6 mentions the requirement on multiple observations. While this is true, and we demonstrate several practical applications, we further explored the nature of observations necessary for training. We add an inpainting experiment using multiple observations created from partially observed signals during training. This task has parallels in many domains, such as medical imaging or audio processing, where the goal is to complete missing information. We provide more details for this experiment in the response to zgt6 (“Multiple views required for training”), and Tab. 1, Fig. 4 of the response PDF. More possible applications of our framework: - Physics simulators can serve as forward models (see PAC-NeRF: Physics Augmented Continuum Neural Radiance Fields for Geometry-Agnostic System Identification [Li et al., 2023]) to learn the physical properties of objects by training on videos. - Physically-based sound synthesis as forward models (see Singing Voice Synthesis Using Differentiable LPC and Glottal-Flow-Inspired Wavetables [Yu and Fazekas, 2023]) can be used to predict parameters of human vocal cords by training on singing voice datasets. We believe that our framework would enable uncertainty-aware inference of many more physical systems.
### (G3) Evaluations We evaluate against the state of the art in 3D reconstruction. While SparseFusion only uses the Co3D dataset for evaluation, we also use the very challenging scene-level RealEstate10k dataset. We show results on more complex settings compared to the SparseFusion paper. We only use a single input image (unlike >=2 for SparseFusion), and do not assume GT object masks, instead modeling the whole scene including background. This makes our task significantly more challenging. Thus, in our paper, we only trained on hydrants (a concurrent work, GeNVS, also only trained on hydrants). We agree with LcVC that training a general model on 10 Co3D categories will strengthen the paper. In the response PDF (Tab. 1, Fig. 2), we show the results of such a general model, and compare to pixelNeRF and SparseFusion, significantly outperforming them. SparseFusion fails to generate any reasonable results in this multi-class setting. Note that the SparseFusion paper only trained category-specific models. **To the best of our knowledge, these are the first-ever results of category-agnostic diffusion models on Co3D.** We further show preliminary results on training on renderings of 15k objects from the large-scale object-centric Objaverse dataset, see Fig. 3 of the response PDF. These results demonstrate that our method is capable of modeling complex distributions, and that it strongly outperforms all existing baselines. ### (G4) Limitations Reviewers asked about the limitations of our approach. They are mentioned in our supplemental pdf, see Section 4 (L191-202). Among other points, we note the complexity of training and inference. We will move the limitations to the main pdf in the revised version. ### (G5) Updated Figure While most reviewers appreciate our exposition, zgt6 mentions that some application descriptions are unclear. We have updated the overview figure of our main application of inverse graphics that we hope better explains the main application, see Fig.
6 of the response PDF. Pdf: /pdf/5ba6500723fb1206eb4c8cd5abc005458860abee.pdf
NeurIPS_2023_submissions_huggingface
2023
Neural Relation Graph: A Unified Framework for Identifying Label Noise and Outlier Data
Accept (poster)
Summary: This paper proposes a novel approach called the Neural Relation Graph framework for identifying label noise and outlier data in large-scale datasets with real-world distributions. The approach utilizes a relational structure of data in the feature-embedded space to detect label errors and outlier data, and introduces a visualization tool for interactive data diagnosis. The authors conduct extensive experiments on various tasks and demonstrate that their approach achieves state-of-the-art detection performance and is effective in debugging real-world datasets. The contributions of this paper include a unified approach for diagnosing and cleaning large-scale datasets, a data relation function and graph algorithms for detecting label errors and outlier data, and a visualization tool for interactive data diagnosis. Strengths: Originality: The Neural Relation Graph framework proposed in this paper is a novel approach for identifying label noise and outlier data in large-scale datasets. The authors utilize a relational structure of data in the feature-embedded space to detect label errors and outlier data, which is a unique and innovative approach. The paper also introduces a visualization tool for interactive data diagnosis, which is a novel contribution to the field. Overall, the paper is highly original and presents a new perspective on diagnosing and cleaning large-scale datasets. Quality: The paper is of high quality, with a well-designed methodology and extensive experiments conducted on various tasks. The authors provide detailed descriptions of the proposed approach and the experiments conducted, which makes it easy to understand and replicate the results. The paper also includes a thorough evaluation of the proposed approach, comparing it to existing methods and demonstrating its effectiveness in detecting label errors and outlier data. 
The quality of the paper is further enhanced by the use of clear and concise language, making it easy to follow and understand. Clarity: The paper is well-written and easy to understand, with clear descriptions of the proposed approach and the experiments conducted. The authors provide detailed explanations of the technical terms used, making it accessible to a wide range of readers. The paper also includes visual aids, such as figures and tables, which help to illustrate the concepts presented. Overall, the clarity of the paper is excellent, making it easy to follow and understand. Significance: The paper is highly significant, as it presents a novel approach for diagnosing and cleaning large-scale datasets with real-world distributions. The proposed approach utilizes a relational structure of data in the feature-embedded space to detect label errors and outlier data, which is a unique and innovative approach. The paper also introduces a visualization tool for interactive data diagnosis, which is a valuable contribution to the field. The results of the experiments conducted demonstrate the effectiveness of the proposed approach, making it a significant contribution to the field of machine learning. Weaknesses: One potential weakness of the paper is that the authors do not provide a detailed analysis of the limitations of their approach. While the proposed approach achieves state-of-the-art detection performance on various tasks, it is unclear how it would perform on datasets with different characteristics or in different domains. The authors could address this weakness by conducting experiments on a wider range of datasets and providing a more detailed analysis of the limitations of their approach. Another weakness of the paper is that the authors do not provide a detailed discussion of the computational complexity of their approach. 
While the paper mentions that the proposed algorithms are scalable, it is unclear how they would perform on very large datasets or in real-time applications. The authors could address this weakness by providing a more detailed analysis of the computational complexity of their approach and discussing potential strategies for improving its scalability. Finally, the paper could benefit from a more detailed discussion of the practical implications of the proposed approach. While the paper demonstrates the effectiveness of the approach in detecting label errors and outlier data, it is unclear how it could be applied in real-world scenarios. The authors could address this weakness by discussing potential use cases for the proposed approach and providing guidance on how it could be integrated into existing machine learning pipelines. Overall, the paper presents a novel and innovative approach for diagnosing and cleaning large-scale datasets, but could benefit from a more detailed analysis of its limitations, computational complexity, and practical implications. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you provide a more detailed analysis of the limitations of your approach? While the proposed approach achieves state-of-the-art detection performance on various tasks, it is unclear how it would perform on datasets with different characteristics or in different domains. Can you provide a more detailed discussion of the computational complexity of your approach? While the paper mentions that the proposed algorithms are scalable, it is unclear how they would perform on very large datasets or in real-time applications. Can you discuss potential use cases for the proposed approach and provide guidance on how it could be integrated into existing machine learning pipelines? While the paper demonstrates the effectiveness of the approach in detecting label errors and outlier data, it is unclear how it could be applied in real-world scenarios. 
Can you provide more details on the visualization tool introduced in the paper? While the tool is mentioned briefly, it would be helpful to have a more detailed description of its functionality and how it can be used to diagnose data. Can you provide more details on the datasets used in the experiments? While the paper mentions that experiments were conducted on various tasks, it would be helpful to have more information on the characteristics of the datasets and how they were selected. Can you provide more details on the hyperparameters used in the experiments? While the paper mentions that hyperparameters were tuned using cross-validation, it would be helpful to have more information on the specific values used and how they were selected. Can you provide more details on the implementation of the proposed algorithms? While the paper mentions that the algorithms were implemented using PyTorch, it would be helpful to have more information on the specific implementation details and any potential optimizations that were made. Can you discuss potential future directions for this research? While the paper presents a novel and innovative approach, it would be helpful to have a discussion on potential future directions for this research and how it could be extended or improved upon. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper does not explicitly address the potential negative societal impact of the proposed approach. While the focus of the paper is on diagnosing and cleaning large-scale datasets, it is possible that the approach could be used for other purposes, such as identifying individuals or groups based on their data. 
This could potentially lead to privacy concerns or other negative societal impacts. However, it should be noted that the paper does not provide any evidence that the proposed approach has been used for such purposes, and the authors do not make any claims about the potential negative societal impact of their work. Additionally, the paper does not explicitly address the limitations of the proposed approach, which could potentially lead to unintended consequences if the approach is used in real-world scenarios. Overall, while the paper does not explicitly address the potential negative societal impact of the proposed approach, it should be noted that the authors do not make any claims about the potential negative impact of their work, and the focus of the paper is on diagnosing and cleaning large-scale datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable efforts and time in providing insightful feedback. We would like to address the questions below. **Limitations of the proposed approach (with different domains)** - Thank you for your feedback. In Table 1-b from Sec 4.1.2, we demonstrate that our method consistently outperforms baselines across **various data types, including image, speech, and text** domains. Meanwhile, we acknowledge some limitations of our work. - Firstly, our current approach is confined to classification tasks. Expanding the application of our method to a more diverse set of tasks, such as generative modeling or segmentation, is an important future direction. Furthermore, we observed from Figure 5 (b) that the performance gap between our approach and the baselines narrows as the number of data points in the relation graph decreases. While our method demonstrates superior performance even with 12k data from ImageNet, it is essential to address this limitation of decreased performance when dealing with a small number of data points. We will include these discussions in the main text. **Discussion on computational complexity** - We would like to highlight our theoretical complexity analysis in L159-166. We demonstrate that when utilizing parallel computing, the complexity of our algorithm is $O(k^2)$ for $k\ll n$, with $n$ representing the total number of data points. In Appendix A.3, we examine the time taken by the algorithm to process a large-scale dataset (1.2M ImageNet) in a real-world computing environment. The results show that our algorithm takes only a few minutes and comprises only 5~20% of the execution time for feature extraction, demonstrating its efficiency in handling large-scale datasets. **Potential use cases** - Thank you for your suggestion. Our method can be applied to various application scenarios, including data annotation, model evaluation, and robust inference. 
For instance, our problematic data detection algorithms can aid human annotators during the data annotation process. In addition, as shown in Appendix Figure 13 and Table 19, our algorithm allows for the effective detection and removal of outlier data from the evaluation set, leading to a more accurate model evaluation. Furthermore, our algorithm can determine whether a data point is out-of-distribution during inference, enabling a more reliable inference system. We will incorporate these discussions in the revised version. **Details on the visualization tool** - The visualization tool described in Section 3.5 helps us to comprehend the distribution of complementary and conflicting relations associated with a data point, aiding dataset analysis and debugging. For example, in Appendix Figure 11, the relation map of the Ram sample exhibits a combination of positive and negative relations. Specifically, it exhibits positive relations with other Ram class samples while showing negative relations with samples from the Big Horn class, which are visually challenging to distinguish. This observation suggests the need for multi-labeling or refinement of the label space definition. In this way, we can intuitively identify problems in the dataset by using the visualization tool. We will elaborate on these explanations in the revised version. **Details on the datasets** - Thank you for your feedback. The brief description and references for the datasets are provided in lines 239-243. We will update the dataset descriptions in a table format, including the number of data points and classes. These datasets were selected from the baseline methods, focusing on those with large scales. For example, ImageNet was used in the TracIn paper, and MNLI was used in Dataset Cartography. We will update this detailed information in the revision. **Hyperparameter details** - Thank you for your comment. In Appendix C, Table 5, we provide a summary of the hyperparameters used in our method. 
As noted, we used a **fixed set of hyperparameters** for each task (label error detection, OOD/outlier detection) across various datasets and models during the evaluation. We performed hyperparameter tuning on ImageNet, guided by the hyperparameter sensitivity analysis presented in Figure 8 and Table 6. The results indicate that our method does not require specific tuning for each model or dataset, enabling efficient adaptation in real-world applications. **Implementation details** - Our implementation is based on PyTorch, and we conducted each experiment on a single RTX3090-Ti GPU. To optimize computation efficiency, we utilized a caching technique to store the initial label noisiness scores, as described in lines 140-143. Moreover, in Appendix C.1, lines 599-601, we provided an implementation detail for handling small noisy values. We will release the detailed source code for reproducibility. **Potential future direction** - Thank you for your feedback. We briefly described the future direction of our approach in Appendix B, lines 568-574. We believe expanding our approach beyond classification to encompass tasks like generative modeling or segmentation is an important future direction. Furthermore, integrating our method into a real-world data annotation process is one of the future directions. We will elaborate on these points in the main text. **Potential negative social impact** - Our research primarily focuses on tackling technical challenges related to label errors and outliers, and does not directly relate to sensitive social issues. Nevertheless, we acknowledge the possibility that the algorithm could be utilized to detect specific group data, potentially leading to social implications. During the revision process, we will emphasize in the main text that the primary objective of our paper is strictly confined to addressing technical issues and does not advocate any discriminatory usage of the algorithm. Thank you once again for the valuable feedback. 
If you have any remaining questions, please let us know. --- Rebuttal Comment 1.1: Title: Awaiting Your Feedback on Authors' Rebuttal Comment: Dear Reviewer Yk3p, Thank you for your hard work. The Author-Reviewer discussion ends on August 21. The authors and I are eager to learn whether their responses have adequately addressed your concerns. You are encouraged to directly reply to the authors' rebuttal. Please note that this is a public thread. If you prefer to reply to me individually, please use the internal discussion thread. Kind Regards, AC --- Rebuttal 2: Title: Discussion Comment: Dear Reviewer Yk3p, The authors have provided their response. Can you please get in touch with them to assess if their response meets your criteria? If not, could you highlight any remaining concerns? Thank you very much for your help. Best Regards, AC
Summary: The authors identified the issue of how existing label errors in the training data and the OOD in the test set can affect the model training and evaluation and further proposed a novel approach utilizing the learned feature embeddings and label information to compute the relations between data instances. The relations represent how similar the two data instances are in terms of their feature embeddings and also assigned labels. This is further used to construct a relational graph structure. Based on the graph structure, they introduce a min-cut algorithm based on the label noisiness score to identify a subset of label errors. They further visualized the derived graph structure for the purpose of interactive data error diagnosis. Extensive experiments are conducted on multiple data types (images, audio, texts), and all show superior performance over the baseline methods. Through comparison, authors showed their relational data structure provides complementary information not captured by the unary scoring methods. Generally, this paper is well-written, the ideas are pretty clear, the methods are novel, and results are solid. Strengths: 1. This paper is well-written, and the authors explain the ideas clearly. The method is novel, and experiments and results are solid. 2. The example provided in Figure 1 explained well the limitations of using the unary scoring method when identifying the label error and the outlier data. 3. 
The design of the data relation function in Section 3.1 looks simple but still effective. 4. In section 3.2, the authors pointed out that simply aggregating all edges of a node can yield suboptimal results, which makes much sense, and they further proposed using the min-cut method as a workaround. This looks interesting. 5. The experiments are conducted using various data types, including images, speech, and text. This also covers multi-class tasks and binary tasks. All experiments show the effectiveness of their methods, which is solid. Weaknesses: 1. In Figure 1, the difference between the Coil (label error) and the Envelope (outlier) is not very clear. More elaboration would be helpful. 2. The relational graph is fully connected, so the time complexity would be O(N^2), and the scalability to a large dataset is a concern. 3. In section 3.2, pp3 line 115, ‘a lower (label noisiness) score indicates a higher likelihood of label error’. A lower label noisiness indicating a higher error possibility is a bit confusing and not very intuitive. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. In Figure 1, what makes the Coil example a label error, while the Envelope example an outlier? Some elaboration on this would be helpful. 2. In Section 3.2, pp3 line114, authors set $r(i,i)=0$; this is not very straightforward and the reason is not clear to me. More elaboration on this would be helpful. 3. In section 3.2, pp4 line 127, authors minimize the sum of the edges between two groups. One question comes naturally - what about the sums of edges within each group? 4. In terms of different datasets used in this work, several data types were involved (images, speech, text); it would be helpful to see what the embedding dimensions are for each dataset. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Minor edit suggestion: pp8 line309, ‘3.6%p’ should be ‘3.6%’. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable efforts and time in providing insightful feedback on our work. We would like to address questions from the reviewer below. **Difference between the Coil (label error) and the Envelope (outlier)** - In the case where a data point is associated with a different ground truth label, it is considered a label error, and when it is not possible to assign a suitable label, the data point is referred to as an outlier. For instance, in the case of the "Coil" example, it corresponds to the "park bench" label in ImageNet (incorrectly labeled as "coil" in the actual ImageNet dataset), and for the "envelope" example, a suitable ImageNet label cannot be found for the given data. We will clarify these explanations in the revision. **The complexity of constructing the relational graph** - Our method employs a simple cosine feature similarity for relation values, which involves a simple matrix multiplication of $n$ pre-computed features. This computation can be efficiently carried out on GPUs. In practice, extracting features from 1.2M ImageNet data points using an MAE-Large model takes 15x longer than the execution of Algorithm 1 (Appendix A.3, Table 3). Specifically, the computation of Algorithm 1 with full ImageNet can be accomplished within a few minutes by using a single GPU. Furthermore, as demonstrated in the complexity analysis in lines 159-166, we can improve the complexity to $O(k^2)$, for $k \ll n$, leveraging partitioning and parallel computing techniques. **A lower label noisiness indicates a higher error possibility, this is a bit confusing and not very intuitive** - Thank you for the pointer. We will modify the expression to be more intuitive. **More elaboration on the assumption r(i,i)=0** - Our approach only considers relations between different data points, which means the value $r(i, i)$ is not taken into account in the algorithm. 
However, if the formula is expressed considering this value, the notation becomes complicated, and to enhance the clarity of the expressions, we have chosen to set this value to 0. We will make this clearer in the revised version. **About the sums of edges within each group** - That's a great point. The sum of all edges in the graph is equal to the sum of edges between groups plus the sum of edges within each group. Since the total sum of edges for a specific dataset is fixed, minimizing the sum of edges between groups is equivalent to maximizing the sum of edges within each group. This new perspective is highly intriguing, and to aid the reader's understanding, we will incorporate this information into the main text. **Embedding dimensions of each dataset** - Thank you for the point. The embedding dimensions for each case are as follows: | Image (MAE-Large) | speech (AST) | text (RoBERTa-Base) | |:-:|:-:|:-:| | 1024 | 768 | 768 | - We will include the information above in the revision. **Minor edit suggestion: pp8 line309, ‘3.6%p’ should be ‘3.6%’** - Thank you for the pointer. We will fix it in the revision. Thank you once again for the valuable feedback. If you have any remaining questions, please let us know. --- Rebuttal Comment 1.1: Comment: Thanks for the reply! I have read the responses. --- Reply to Comment 1.1.1: Comment: Dear reviewer, thank you for confirmation!
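The between/within equivalence the rebuttal describes is easy to sanity-check on a toy graph. A minimal Python sketch, with made-up relation values `r(i, j)` (these weights are hypothetical, not from the paper):

```python
# Toy check of the identity: for any 2-way partition of a fixed
# weighted graph, (sum of between-group edges) + (sum of within-group
# edges) = total edge weight, so minimizing the cut is equivalent to
# maximizing the within-group sum.
r = {(0, 1): 0.9, (0, 2): -0.2, (0, 3): 0.1,
     (1, 2): 0.3, (1, 3): -0.5, (2, 3): 0.7}
n, total = 4, sum(r.values())

def cut_value(group):
    # sum of edges crossing the partition (group, complement)
    return sum(w for (i, j), w in r.items() if (i in group) != (j in group))

def within_value(group):
    # sum of edges with both endpoints on the same side
    return sum(w for (i, j), w in r.items() if (i in group) == (j in group))

# the identity holds for every non-trivial partition
for bits in range(1, 2 ** n - 1):
    group = {i for i in range(n) if bits >> i & 1}
    assert abs(cut_value(group) + within_value(group) - total) < 1e-12
```

Because `total` is fixed by the dataset, a min-cut over `cut_value` and a max over `within_value` pick out the same partition.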
Summary: This paper proposes a new graph-structure-based method for detecting label errors and outlier data. Briefly speaking, the algorithm utilizes data feature embeddings to generate a relation graph, and using the newly defined data relation function, it can capture mislabeled and outlier data from a global perspective. A large number of experimental results show that the performance of this method is greatly improved compared with existing detection methods. Strengths: (1) This paper presents a novel relation-graph-based approach that achieves better utility and considers label error and outlier detection in a global way. (2) Extensive experiments on datasets from different domains show that the improvement of the new method is promising. (3) A comprehensive ablation study is performed, which helps to understand the proposed model better. (4) The overall writing of the article is clear and easy to understand, and related works are adequately cited. Weaknesses: (1) Certain statements in this paper lack justification. For example, the method is based on building a graph on the feature space, but there is little discussion of feature spaces and how to generate them. (2) Lack of sufficient theoretical analysis (e.g., convergence guarantees rather than only empirical analysis); the proof of the proposition and the analysis of the relation function are relatively simple. (3) Lack of explanation of parameter choices in the baseline methods, such as whether the choice of K in the KNN method was fine-tuned. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) In the complexity analysis (line 159), the paper demonstrates that the complexity can be reduced to O(nk). But in the fourth-to-last row of Algorithm 1, each point in a partition seems to need to compute n relation scores instead of k. Can you explain the complexity improvement in more depth? Also, there are no empirical results for the acceleration. (2) The paper mentions it's the first to use a data relation graph on the feature space. 
Does “first” refer to the “relation graph” or to the “feature space”? Are there any other graph-based methods for detecting erroneous data that should be compared against? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: (1) Although extensive experiments on real-world datasets corroborate the effectiveness of the proposed method, there is a lack of theoretical analysis of the effectiveness of the method. (2) Some content lacks detailed explanations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable efforts and time in providing insightful feedback on our work. We would like to address questions from the reviewer below. **Discussion about feature spaces** - Thank you for the pointer. We would like to mention that specific information regarding the feature space can be found in Appendix C.1, lines 589-590. We used neural networks trained on a noisy training set as feature extractors. Specifically, we followed the references of TracIn, MAE, and RoBERTa, using the input vector to the classification layer of neural networks as features. We will incorporate this information into the main text in the revision. **About theoretical analysis** - Thank you for your response. In Proposition 1, we theoretically prove the convergence of our proposed algorithm, which is also empirically demonstrated in Appendix A.2. We believe that the simplicity and intuitiveness of our proof offer an advantage, as they make the concepts more accessible to a broad readership, ensuring a clear understanding of our work. **Hyperparameter of baselines** - Thank you for your question. We tuned the hyperparameters of the baseline methods by adhering to the instructions provided in the respective papers, as mentioned in Section 4.3, lines 314-315. Specifically, for the KNN method, we followed the formula presented in the paper, which specifies that $k=n_{\text{class}} \times \alpha$ (where alpha represents the ratio of training data used) for the tuning. We will elaborate on these details in the revision. Additionally, we will make the source code, including the baseline methods, publicly available to further support reproducibility. **Questions on complexity (+empirical comparison)** - Figure 5 (b) demonstrates that our method maintains superior detection performance even with a small number of data in the relation graph, e.g., 1% of the ImageNet training set. 
Based on the observation, the complexity analysis suggests that when given a dataset with $n$ samples, dividing it into $n/k$ partitions of size $k$ and running the algorithm on each partition improves the complexity. In this case, the complexity for each partition is $O(k^2)$, and when applied sequentially to $n/k$ partitions, the complexity becomes $O(k^2 * n / k) = O(nk)$. Following the suggestion, we empirically validate the complexity of our Algorithm on ImageNet with 1 RTX-3090Ti GPU: |# data | 0.1M | 1M (accelerated) | &nbsp;&nbsp;1M | |:--|:--:|:--:|:--:| | Time (s) | 3 | 31 | 424 | | AP | 0.518 | 0.522 | 0.526 | - In the table above, we examine the computation time and label error detection performance of our algorithm on ImageNet with MAE-Large. Specifically, the column labeled 1M (accelerated) represents the results of our algorithm's accelerated version, achieved through partitioning of size 0.1M. Considering that the performance of the best baseline is 0.484, our algorithm demonstrates efficient acceleration while maintaining the best performance. We will incorporate this information in the revised version. **The “first” points to “relation graph” or “feature space”?** - We appreciate your feedback and would like to clarify the point mentioned in lines 83-85. The term "first" was used to indicate the order of the contents in the following sections rather than claiming to be the initial proposer of the concept. Our main contribution lies in presenting a unified framework, novel relation graph structure, and effective algorithms, which differ from the baseline methods, primarily relying on a single score of each feature. While there are other methods that also utilize relationships between data, such as the KNN method, it's important to note that this approach is limited to OOD issues and does not leverage a global relational structure. As a result, our proposed approach demonstrates higher performance, as presented in Figure 7, page 9. 
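The partition-and-run acceleration described above can be sketched as follows. Note `score_partition` is a hypothetical stand-in for the paper's Algorithm 1 (a placeholder quadratic-cost scoring step), not the actual method:

```python
import numpy as np

def score_partition(feats):
    # Placeholder quadratic-cost step on k points: pairwise cosine
    # relations aggregated into one score per point (illustrative only,
    # not the paper's Algorithm 1).
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    rel = f @ f.T               # O(k^2) pairwise relation values
    np.fill_diagonal(rel, 0.0)  # r(i, i) = 0, as in the paper's setup
    return rel.sum(axis=1)

def score_all(feats, k):
    # Split n points into n/k partitions of size <= k and score each
    # independently: total cost O((n/k) * k^2) = O(n k).
    n = len(feats)
    scores = np.empty(n)
    for start in range(0, n, k):
        block = slice(start, start + k)
        scores[block] = score_partition(feats[block])
    return scores
```

With `k = n` this reduces to a single full-graph run; the table above suggests the trade-off is a small AP drop (0.522 vs. 0.526) for a roughly 14x speedup.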
Thank you once again for the valuable feedback. If you have any remaining questions, please let us know. --- Rebuttal 2: Title: Discussion Comment: Dear Reviewer 6KY3, The authors have provided their response. It would be greatly appreciated if you could communicate with the authors to confirm whether their response addresses your concerns, or to specify any remaining issues. Many Thanks, AC --- Rebuttal Comment 2.1: Title: My concerns have been addressed Comment: Thanks for your response. I'll keep my score.
Summary: The paper under review outlines a novel method utilizing graph structure to identify label errors and outlier data. It proposes an algorithm that makes use of data feature embeddings to produce relation graphs. By incorporating a newly defined data relation function, the algorithm can globally capture mislabeled and outlier data. The experiments conducted exhibit a marked improvement in performance compared to extant detection methods. Strengths: + Innovation: The paper introduces a novel method, which is an important contribution to the field. This new approach potentially provides a fresh perspective and further insights into the problem at hand. + Clarity and Comprehensiveness: The paper is well-structured and clearly written, making it easy for readers to understand the content. The authors have adequately cited related works, showing a thorough understanding of the existing literature and situating their work appropriately within that context. + Important Problem: The paper tackles an important problem, making its potential impact highly relevant and timely. This problem is pertinent to many real-world scenarios, amplifying the value of the proposed solution. Weaknesses: + Unclear Advantage: Despite the introduction of a new method, the paper lacks a clear explanation of the motivation and advantages of the chosen approach compared to existing methods. The authors proposed to use feature similarity ('semantic similarity between data points') to help learn a noise-robust classifier, which exploits $P(X)$ to help learn $P(Y|X)$. It shares the same underlying philosophy as semi-supervised methods (e.g., DivideMix, ICLR20) or self-supervised methods (e.g., UNICON, CVPR22). + Lack of Explicit Assumptions: The authors do not clarify the assumptions under which the proposed method is expected to perform well. This lack of clarity could impede understanding and application of the method in practical scenarios. 
+ Unknown Motivation for Kernel Usage: The motivation for using a kernel in the proposed method is not explained. Without this, it's hard to understand the reason behind the choice of a kernel and how it contributes to the method's effectiveness. + Absence of Baseline Comparisons: The paper does not compare the proposed method with significant baselines in the field of learning with noisy labels (e.g., DivideMix, ICLR20; ELR, NeurIPS20; CausalNL, NeurIPS21; C2D, WACV23; UNICON, CVPR22). Such comparisons are crucial for evaluating the performance of the new method and understanding its standing relative to existing techniques. Technical Quality: 3 good Clarity: 3 good Questions for Authors: + Could you elaborate on the unique advantages of your proposed method over existing semi-supervised and self-supervised techniques, especially considering the shared approach of using feature similarity to help learn a noise-robust classifier? What distinguishes your method from others that also exploit $P(X)$ to help learn $P(Y|X)$? + Could you specify the assumptions under which your method is expected to perform well? + Could you provide the reasoning behind the choice of using a kernel in your method? How does the kernel contribute to the effectiveness of the method and why was this specific kernel chosen over potential alternatives? + To convince others about the method's practical significance, could you include a comparison of your method with DivideMix (ICLR20) and UNICON (CVPR22) in the field of learning with noisy labels? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I did not come across a sufficient discussion of the limitations of the proposed method. 
I would highly recommend the authors include a section on potential limitations in their revision. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable efforts and time in providing insightful feedback. We would like to address questions below. **Advantage over the semi-/self-supervised techniques** - Thank you for the pointer. We would like to clarify that the goal of our paper is to identify the problematic data not only during training but also in more general situations such as evaluation and inference. Our method has some advantages over the mentioned semi-/self-supervised techniques: 1. Our approach has **various other applications**, including data annotation, evaluation set debugging, and robust inference (lines 17-20), whereas the semi-/self-supervised methods focus on training with noisy labels. For instance, as shown in Table 1-c, our method can effectively detect label issues in the validation set, enabling a more accurate evaluation system. Furthermore, as shown in Appendix Figure 13, our method can identify whether a data point is an outlier, which helps build a reliable inference system. 2. Our method does **not rely on specific training techniques**, making it applicable to more general data types and models. For instance, DivideMix and UNICON require mixup training, which is not commonly used in text domains, e.g., GPT-3 [1]. Our method assumes that the model is given, allowing it to be effortlessly applied to multiple domains using different training techniques, as demonstrated in Table 1-b. [1] Brown et al., "Language Models are Few-Shot Learners", 2020 3. Our method does **not demand additional training costs**, making it easier to scale to large-scale models and datasets. The semi-/self-supervised methods like DivideMix and UNICON require training two networks and numerous augmented training samples for SSL. Our approach, on the other hand, only needs a single trained network and identifies problematic data without additional training. - In the revision, we will incorporate the aforementioned points into the related work section. 
**Comparison with DivideMix and UNICON** - Thank you for the valuable suggestion. While the mentioned methods aim to train image classifiers with noisy labels, our approach is focused on the task of identifying and debugging problematic data in different scenarios and domains. Technically, we focus on determining whether a data point $x_i$ or its label $y_i$ is anomalous, whereas the mentioned methods aim to directly infer a clean label $y_i$, making a direct comparison less straightforward. For instance, due to the difference, our approach is applicable to outlier detection as well, whereas those baselines cannot be applied. - We recognize that DivideMix and UNICON proposed partitioning algorithms for clean/noisy labels using GMM and Uniform Clean Sampling. However, it's worth noting that their approaches rely on sample-level cross-entropy or JSD loss, while our method explicitly models the relation between data. We reproduced the baseline algorithms using publicly available GitHub code and compared them with our relation graph approach: |dataset \ method|DivideMix|UNICON|Relation (ours)| |:-|:-:|:-:|:-:| |ImageNet|0.424|0.447|0.526| |ESC-50|0.737|0.739|0.779| |MNLI|0.754|0.762|0.766| - The table above compares the label error detection AP over various datasets in different domains (Table 1-a,b of our paper). Our approach demonstrates better performance compared to DivideMix and UNICON, confirming the advantages of our method across various datasets in label error detection. We acknowledge the importance of these related works and we will incorporate the results into the revision. **Assumptions under which the method is expected to perform well** - Thank you for the pointer. In our approach, we assume the availability of a model trained on the noisy training dataset (line 89). 
As illustrated in Figure 6-left, when the model is insufficiently trained (e.g., trained for 10 epochs), the label noise detection performance tends to decrease, as the model does not capture meaningful semantic relationships between the data. It is worth noting that the model training requirement is a common assumption for other baseline techniques as well. - Furthermore, as evident from Figure 5, the performance of our algorithm improves as the number of data points in the relation graph increases. This analysis suggests that our method's performance can be effectively enhanced when handling massive data in real-world applications. We will ensure to present these discussions more clearly in our revision. **Motivation and reasoning behind the kernel used** - To evaluate our relation graph framework, we adopted the most commonly used and computationally efficient kernel, i.e., feature cosine similarity. Notably, our framework demonstrates strong performance with other kernels, e.g., RBF kernel (Sec 4.3. Table 2), indicating that the effectiveness of our approach primarily stems from the graph structure and algorithm rather than the specific kernel design. - Another notable advantage of the kernel described in Section 3.4 is interpretability. In lines 200-211, we establish a relationship between the proposed kernel and the influence function. Additionally, in Appendix A.4, we provide a formal explanation demonstrating that the proposed kernel can exhibit better robustness to outliers compared to the influence function. **Potential limitations** - Thank you for your feedback. Due to space limitations, we included the limitations and future work in Appendix B, line 568. Currently, we have evaluated our method within the scope of classification. Applying it to a broader range of tasks, such as generative modeling or segmentation, is an important future endeavor. 
Additionally, establishing a more rigorous theoretical foundation for our approach is also an important future direction. We will supplement these points and move them to the main text in the revision. Thank you once again for the valuable feedback. If you have any remaining questions, please let us know. --- Rebuttal Comment 1.1: Title: My concerns have been addressed Comment: Dear authors, Thank you very much for your clear response. I have no further concerns. I have updated my rating. Kind regards, Reviewer nM5G --- Rebuttal 2: Title: Discussion Comment: Dear Reviewer nM5G, The authors have provided their response. Can you please get in touch with them to assess if their response meets your criteria? If not, could you highlight any remaining concerns? Thank you very much for your help. Best Regards, AC
Rebuttal 1: Rebuttal: Dear reviewers, Thank you for your valuable effort and time in providing helpful feedback on our work. We sincerely appreciate your encouraging comments, including: - The paper introduces a novel method, provides a fresh perspective into the problem, and is highly original. (Reviewer nM5G, 6KY3, Yk3p) - The paper tackles an important problem, making its potential impact highly relevant and timely (Reviewer nM5G) - Extensive and solid experiments show the improvement of the new method is promising. (Reviewer 6KY3, Cygf) - The clarity of the paper is excellent (Reviewer Yk3p) We have carefully considered all the points raised and provided rebuttals accordingly. If you have any further questions, please let us know. We will reflect all discussions in the revision and release the code for reproducibility. Best regards, The authors
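The feature cosine-similarity kernel the rebuttal above adopts for the relation graph is standard; as a minimal illustrative sketch (the function name and toy data are ours, not the authors' implementation), it amounts to a Gram matrix of row-normalized feature vectors:

```python
import numpy as np

def cosine_kernel(features: np.ndarray) -> np.ndarray:
    """Pairwise cosine-similarity kernel over row-wise feature vectors."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)  # guard against zero rows
    return unit @ unit.T

# Toy usage: three feature vectors; K[i, j] is the cosine of the angle between rows i and j.
X = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0]])
K = cosine_kernel(X)
```

Any other positive-semidefinite similarity (e.g., an RBF kernel, as in their Table 2) could be swapped in without changing the surrounding graph construction.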
NeurIPS_2023_submissions_huggingface
2023
Spectral Evolution and Invariance in Linear-width Neural Networks
Accept (poster)
Summary: This paper studies gradient descent training in single-hidden-layer neural networks in the linear-width regime (i.e., that in which the input dimension, hidden layer width, and number of training datapoints together tend proportionally to infinity). Its primary result is that the bulk spectra of the conjugate and tangent kernels do not change over training in this regime, but at large learning rates an outlier eigenvalue can emerge. Strengths: The topic of how kernels evolve outside of the lazy regime is timely, and the RMT-inspired setup and results presented should be of interest to the community. The theoretical results appear correct, though I've not checked the proofs line-by-line. Weaknesses: The manuscript presents a series of interesting observations, but to my taste it sacrifices depth for the sake of breadth. The results on heavy-tailed spectra are thought-provoking, but I think focusing on the emergence of spikes in the spectra would make for a stronger manuscript. If the authors could prove the existence of a BBP-like transition, they would have a strong paper. The current comment on Lines 247-248 comparing to [6] is not very satisfying. At the very least, a more detailed empirical investigation of this phenomenon would result in a more convincing story. In this vein, the experiments are not very systematic, and as a result cannot clearly disentangle which changes in the setup are required to produce a given phenomenon. For instance, Table 1 contains various settings of batch size and learning rate, but nowhere are these parameters swept jointly over some reasonable range. I have in mind something like the parameter sweeps for learning single-index models performed in recent work by Atanasov et al., "The Onset of Variance-Limited Behavior for Networks in the Lazy and Rich Regimes" (ICLR 2023; note also that this relevant recent work is not cited). The goal here would be to establish under precisely what conditions a spike emerges. 
As it stands, the authors can make only vague statements. The transition to using Adam for the heavy-tailed weight experiments is even more drastic: is it possible to observe this phenomenon using a non-adaptive optimizer? I think dissecting this phenomenon is more properly the subject of a separate paper, as it is currently not well-integrated with the other results. The discussion in Lines 286-291 is also rather heuristic, and I don't think the authors provided sufficient evidence for their claim that heavy-tailed spectra naturally emerge in feature-learning networks trained on complex tasks. Moreover, how does "[t]his example explain[] why we can use the heavy tails to discriminate well-trained and poorly-trained large models?" This requires elaboration, and more empirical evidence. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - In the Introduction, it would be nice to mention recent work on Gaussian equivalence for deep random feature models in the linear-width regime by Schröder et al., "Deterministic equivalent and error universality of deep random features learning" (ICML 2023) and Bosch et al., "Precise Asymptotic Analysis of Deep Random Feature Models" (2023). - The linear-width regime for deep Bayesian neural networks has recently been the subject of study in the community of researchers using tools from statistical physics to study deep learning, see e.g., Li and Sompolinsky, "Statistical mechanics of deep linear neural networks: The backpropagating kernel renormalization" (PRX 2021), Zavatone-Veth et al., "Contrasting random and learned features in deep Bayesian linear regression" (PRE 2022), and Cui et al., "Optimal learning of deep random networks of extensive-width" (ICML 2023). I think it would be useful to at least comment upon the Gaussian equivalence results of Cui et al. as a point of reference for how different inference procedures behave in the extensive-width regime. 
- In Figure 2 (b-d) and elsewhere, it would be better to label the abscissa with $\eta$ or "learning rate" rather than "lr," for the sake of consistency with the text. - In Figure 1 and elsewhere, the histograms should be plotted using a log-scaled ordinate. As it stands, the outlier in 1b is nearly invisible. - Why use the single-index + quadratic model in eq. 8 rather than a (simpler) single-index model? - The BERT experiments in Figure 5 strike me as mostly decorative, as the setting is not commensurate with the earlier portions of the paper. What insights would the authors argue can be transferred from their MLP experiments to this setting? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper does not provide adequate discussion either of the relevance of the linear-width setting for practice or of the limitations of their experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and additional references on random feature models and deep Bayesian networks under LWR. We will add these references and provide additional comments and comparisons. In the following, we address the comments and questions. 1. The goal of this paper is to demonstrate empirically that this BBP-like transition we observed appears in neural networks under LWR, both for the weight and kernel matrices. While proving this theorem in particular setups is our ultimate goal, this paper is an empirical study that can help inform our and others' approaches to rigorously proving such a transition. We are familiar with reference [6], where the authors considered a two-stage training process and proved the BBP-like transition for the weight matrix in a much simpler setting than the one considered here. We will provide a detailed comparison with this result in Section 4.2. 2. Thanks for your suggestions on improving the readability of our captions and figures. We will change things accordingly. We'd like to point out that our experiments **are** systematic, as follows: we fixed the synthetic dataset, the teacher model, and the two-layer neural networks under the linear-width regime; then, we used different optimizers (GD/SGD/Adam) to train the same network. For each optimizer, we tuned the hyperparameters, e.g., learning rate, to observe different spectral behaviors and accuracy after training. For example, Figure 2(b-d) presents how the spikes and eigenvector alignments emerge when we use different learning rates. Although we do not have a theoretical threshold for the learning rate when the spike emerges, we demonstrated these transitions and thresholds empirically. In the attachment to the general rebuttal, we show how we tune the hyperparameter (a grid search for learning rate) for GD when presenting Table 1. 
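The spike-versus-bulk distinction behind this BBP-like transition can be illustrated on a generic spiked covariance model; this is a textbook RMT sketch under our own assumptions (Gaussian data, a planted rank-one signal), not a reproduction of the paper's network experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 1000                 # samples x features; aspect ratio gamma = d/n
gamma = d / n
theta = 3.0                       # planted signal strength, above the BBP threshold sqrt(gamma)

# Spiked model: rows have population covariance I + theta * u u^T for a unit vector u.
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
X = rng.standard_normal((n, d)) + np.sqrt(theta) * rng.standard_normal((n, 1)) * u

eigs = np.linalg.eigvalsh(X.T @ X / n)[::-1]         # sample-covariance spectrum, descending
bulk_edge = (1 + np.sqrt(gamma)) ** 2                # Marchenko-Pastur right edge
predicted_spike = (1 + theta) * (1 + gamma / theta)  # BBP outlier location when theta > sqrt(gamma)
```

Sweeping `theta` across `sqrt(gamma)` (the analogue of sweeping the learning rate in the paper's experiments) makes the top eigenvalue detach from the bulk exactly when the threshold is crossed.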
In the revision, we will add more explanations on how to choose the parameters in the experiments in Table 1 of our paper. 3. In the heavy-tailed section, our main message is two-fold. In references [55-57], the authors explored a correlation between the heavy tails in the spectrum and the good performance of the NN. They left causality as an open question. In our paper, we rule out general causality: Figure 3 (c-d) shows that there are cases when both the weight and CK matrices appear to exhibit a power law in their spectrum, but the neural network does not perform well. Second, we argue that even if not causal, the relationship between heavy-tailed spectra and good performance can be subtle: we demonstrate an example where a neural network with heavy tails generalizes well on the test dataset in Figure 3 (b) and Figure 18. Additionally, we exhibit a case when neural networks benefit from heavy tails: when the teacher model is a multiple-index model with a high intrinsic dimension and the heavy-tailed part is well aligned with the multiple-index directions, the neural network with heavy tails will generalize well (see Figure 18). We will move the experiment of Figure 18 to Section 4.3 and clarify our explanations. 4. The reason we did not choose the log scale is that we want to show that the initial spectra are Marchenko–Pastur-type distributions, which one could not see on a log scale. We will adopt the reviewer's suggestion, though, and use log-scaled histograms to highlight outliers. 5. The reason we chose the teacher model as single-index + quadratic in Eq. (8) is to make it a little more complicated to learn. However, the performances and the spectral properties are similar to the case when the teacher model is just a single-index model. To make the paper more consistent and clearer, we will only present the cases when the teachers are single-index and multiple-index models in our revision. 6. 
The BERT example is important to the point we are trying to make, as other reviewers have noticed. The application of a transferred BERT model to Twitter140 relies on a mathematical trick allowing us to pretend that this model is a 2-layer feed-forward (FF) network. We can think of any FF network as a composition of two functions, $f(g(x))$, where $f$ is a classification head and $g$ is the composition of all the hidden layers; in this instance, studying the CK matrix for the MLP is equivalent to studying the output of $g$. We can, in principle, do this for any neural network architecture. From these experiments, we want to highlight that understanding the spectral evolution of the conjugate kernel matrices for large language models is crucial for analyzing the pre-trained model and the fine-tuning process. First, in our experiments, spikes and heavy tails emerge in the spectrum of the conjugate kernel---which is analogous to our toy models with synthetic datasets. Second, Figure 5 indicates that the evolution of the spikes in the conjugate kernel is closely related to the feature learning in the fine-tuning process. In the general rebuttal, we provide additional figures showing how these spikes forget the pre-trained features and learn new features efficiently, quantitatively corroborating what Chen et al. have qualitatively shown in their preprint [CZZ23]. 7. We will provide an adequate discussion of the advantages and limitations of the linear-width setting. We will especially mention how the linear-width regime has been applied to study Gaussian equivalence for deep random feature models and deep Bayesian networks. Our linear-width regime is more realistic than infinite-width neural networks, and it has the theoretical benefit that random matrix theory can be applied in this setting. The linear-width regime is one way to approximate finite but very large neural networks trained on very large datasets. [CZZ23] Chen, L., Zaharia, M. and Zou, J., 2023. 
How is ChatGPT's behavior changing over time? arXiv:2307.09009. We are happy to clarify any concerns or answer any questions that may come up during the discussion period. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response to my concerns and those of the other reviewers. I maintain that the clarity of the paper could be improved by providing more detailed experiments investigating a smaller set of phenomena. This is related to the comment made by Reviewer C57a regarding the need to `zoom in' to the transition region; if the authors can provide a detailed investigation of this phenomenon, it would greatly enhance the paper. Respectfully, I do not think the BERT plots convincingly demonstrate the claim the authors are trying to make. I do appreciate the authors' efforts---I think the changes proposed will improve the paper---so I will raise my score. --- Reply to Comment 1.1.1: Title: Thanks for the reply Comment: Thanks for the reviewer's response and valuable suggestions. Here are some additional responses. 1. In this paper, we showed three different spectral evolutions (invariance, emergence of spikes, and heavy tails). We proved the spectral invariance in certain regimes and presented additional experiments on the other two phenomena. From a spectral analysis perspective, we believe the paper becomes more complete this way, and all three phenomena are indispensable. Because of space limitations, we did not provide enough analysis for all three phenomena. But in our revised appendix, we will definitely provide more experiments on the transition phenomenon for the spikes in the spectra. Following the figures in the attachment of the general rebuttal, we will show a detailed investigation of this transition phenomenon, especially how the learning rate affects the spike's alignment and the test error in the transition region. 2. 
For the BERT model, we do not aim to draw definitive conclusions, but rather to draw the attention of the RMT and statistics communities to the importance of spectral analysis even for large language models. Spectral analysis may also help us understand how the model learns features. Also, as in Figure 4, we want to show that our LWR analysis can be applied to more practical models. Applying RMT to analyze these large models is a possible direction for future work.
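The $f(g(x))$ trick described in the rebuttal above — treating the CK matrix as the Gram matrix of the hidden representation $g$ — can be sketched in a few lines; here a single ReLU layer and all dimensions are illustrative stand-ins for whatever hidden stack $g$ actually is:

```python
import numpy as np

def conjugate_kernel(X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """CK matrix: width-normalized Gram matrix of the post-activation features g(X)."""
    H = np.maximum(X @ W.T, 0.0)   # g(x): hidden representation (ReLU stands in for the hidden stack)
    return H @ H.T / W.shape[0]    # normalize by the width h

rng = np.random.default_rng(1)
n, d, h = 50, 20, 100              # samples, input dimension, hidden width (toy sizes)
X = rng.standard_normal((n, d)) / np.sqrt(d)
W = rng.standard_normal((h, d))
K = conjugate_kernel(X, W)
```

For a transformer such as BERT, `H` would instead be the matrix of final-hidden-layer embeddings of the $n$ inputs; the CK construction is otherwise identical.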
Summary: This paper examines the spectrum of the conjugate kernel and neural tangent kernel of real networks before and after training. Based on analyzing the spectrum in some experiments, several observations about how the spectrum evolves during training are made. In particular, there seem to be different phases depending on the optimizer used and the learning rate, which affect how the spectrum changes from initialization. Understanding this spectrum-evolution phenomenon may lead to a better understanding of how neural networks train, and shed light on important questions like understanding differences between ADAM vs GD vs SGD. Strengths: * The review of existing literature is really well done (there is even a more detailed review in the appendices). This puts the current work in context quite nicely and makes this paper a very accessible place to start learning about these spectral questions. * The experimental plots are nicely done and clearly showcase the behaviour being discussed (the Q-Q plots also help a lot with the understanding). There are also numerous other experiments in the appendix. This again makes this paper a great resource for these kinds of experiments. * I am not an expert on this area, but to me the way the different phases of spectral evolution are laid out (i.e. what they call "Invariant Bulk" vs "Bulk+Spike" vs "Heavy tail") is new to me. (I have heard of this outlier thing happening before for the Hessian...I am not sure how related that is). To me this paper looks like the first step towards building a phase diagram of how choices about optimizer/training affect the spectrum, which would be a really useful thing to have. * I found the conclusion about learning rate (e.g. Figure 2) the most impressive conclusion here, and to me this may be the most actionable of all the various phenomena observed here. This is a nice conclusion to draw and could pave the way for more results in this direction. 
Weaknesses: * One might complain that the theory developed here is a bit weak, in that the theoretical results are all inequalities which are (probably) not sharp. However, since the paper is mainly empirical I actually think these theoretical results are reasonable here. * I am not an expert on this area, so I was not able to tell how much of the results here were expected/known already. Hopefully at least one other reviewer is able to give that context. * It is not clear how robust the experimental results are: in the given plots the phenomena seem to be happening as described, but maybe there could be some extra discussion about "edge cases" between the various phases and discussion about when things break down (e.g. when N is too small, or there are not enough examples, etc.) Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Is there any possibility to describe how the outlier eigenvalues are evolving in time during training? Particularly when there is only one outlier eigenvalue, this would be an interesting bit of theory to try and understand. * Is there any relationship between the spectrum of the CK/NTK and the spectrum of the Hessian? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: * One limitation that always comes up in these limiting type things is the question of how large real neural networks have to be for the theory to actually work. Some discussion or experiments specifically addressing this (e.g. showing the error in the predictions as a function of network size) could help explicitly address this. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation and thoughtful feedback. In the following, we address the technical comments and questions. 1. Our experiments are robust to dimensions as long as the width $h$, sample size $n$, and feature dimension $d$ are large. Anecdotally, dimensions around several hundred are enough. As in random matrix theory, when the aspect ratios are fixed and the dimensions are large enough, the eigenvalue distribution is close to the limiting law, and it is very stable. We will present additional experiments for different aspect ratios to show that our phenomena are stable for different aspect ratios when the dimensions are large enough. 2. We thank the reviewer for asking about the evolution of the outlier through training. We have not checked this evolution in the paper since we only present the emergence of the outliers after training. In the attachment of the global rebuttal, we present the evolution of the outlier during the training process when there is only one outlier eigenvalue. There are some interesting phenomena in this evolution. We observe that the training error first increases and then decreases. We believe this regime corresponds to the catapult phenomenon in reference [47]. The outlier of the kernel matrix first increases progressively and then oscillates around some value before convergence. We agree this will be an interesting theoretical direction to further explore. 3. We thank the reviewer for asking the question related to the Hessian. In fact, the NTK has a close relationship with the Hessian under the square loss. The Hessian has two parts, one of which is the Fisher information matrix, which has the same non-zero eigenvalues as the NTK matrix. This is another motivation for analyzing the spectrum of the NTK matrix through training. 
We can have a better understanding of the spectrum of the Hessian matrix and the loss landscape, e.g., the relationship between sharpness and generalization, during different training processes. 4. We agree that our theory requires asymptotic limits. However, as classical results in random matrix theory show, the eigenvalue distributions of random matrices will be very close to the limiting law even when the dimension is several hundred. We believe that there will be some **non-asymptotic** results for our empirical observations, i.e., our empirical results should hold for practical neural networks whose widths and sample sizes are large but finite. Additional discussion and experiments (as dimensions gradually grow) will be added to address this point. We would be happy to clarify any concerns or answer any questions that may come up during the discussion period.
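The identity behind point 3 above — that the NTK Gram matrix $JJ^\top$ and the Fisher/Gauss-Newton block $J^\top J$ share the same non-zero eigenvalues — can be checked numerically; in this sketch a random matrix stands in for the Jacobian of the network outputs with respect to the parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 8, 30                        # n outputs/samples, p parameters (p > n)
J = rng.standard_normal((n, p))     # stand-in Jacobian of network outputs w.r.t. parameters

ntk = J @ J.T                       # empirical NTK Gram matrix (n x n)
fisher = J.T @ J                    # Gauss-Newton / Fisher block under square loss (p x p)

ntk_eigs = np.sort(np.linalg.eigvalsh(ntk))[::-1]
fisher_eigs = np.sort(np.linalg.eigvalsh(fisher))[::-1]
# The n non-zero eigenvalues coincide; the remaining p - n Fisher eigenvalues are zero.
```

This is why studying the NTK spectrum also says something about one additive part of the Hessian, as the rebuttal notes.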
Summary: The paper carries out an empirical study of the spectral properties of finite-width neural network kernels after training, and compares them to their infinite-width counterparts to assess if there is an improvement. They study these changes with respect to various hyperparameters such as learning rate and optimizer. The results suggest that for large learning rates, where the linear dynamics is not a valid approximation, feature learning emerges, as seen by inspecting the distribution of the final spectra. They identify two different spectral properties that distinguish feature learning from the lazy regime: emergence of low-rank spikes in the spectrum and heavy-tailedness. They explore when these properties emerge and provide theoretical arguments for these observations. Strengths: The paper is well-written and easy to follow. The experiments are complemented well by theoretical arguments, and the discussion is sound. The Supplementary Material also includes a nicely organized discussion of previous works and additional experiments and discussion. In summary, they start from the well-known fact that small learning rates make an NN with NTK parameterization stuck around its initialization, and derive theoretical results to show that the bulk of the spectrum remains invariant under lazy training. In my opinion, the main contribution is that they find two different ways feature learning can happen: emergence of low-rank outliers in the spectrum with large learning rates with GD optimizers, and a heavy-tailed spectrum with adaptive optimizers. Furthermore, for GD/SGD, they make a theoretical connection between how low-rank outliers emerge and a phase transition in RMT (ref. [9] in the main paper). Finally, the relation of the heavy-tailed spectrum to generalization has been discussed. Weaknesses: 1. While the paper brings valuable insights, it is highly restricted to 2-layer neural networks and a synthetic dataset with a particular target. 
It would be useful to perform experiments on simple real datasets such as MNIST to show that spikes related to the number of classes also emerge in the spectrum. For example, in the neural collapse phenomenon, one can see that the weight matrices of the last layer acquire a low-rank structure that can be traced to the number of classes. 2. Such experiments can also shed light on how full-batch vs. stochastic GD differ in terms of the after-training spectrum. Currently, I do not think there are any insights related to how SGD helps feature learning since the GD experiment does not have a large-LR analogue. 3. Is there an intuitive explanation of how adaptive optimizers cause a heavy-tailed spectrum in the CK? More discussion around this phenomenon could be very helpful. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It is hard to parse Figure 1: the first panel shows the weight spectrum for GD with LR 5, the second panel shows the CK spectrum for SGD with LR 22, and the last one shows the CK spectrum for ADAM. Something like "CK spectrum for Cases 2, 3, 4" would be more helpful. 2. In Figure 4, is there a reason why GD does not show the same properties as SGD? With a large enough LR, could GD also show the same heavy-tailedness? Also, it is not clear how adaptive optimizers qualitatively differ from SGD. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: A limitations section is missing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation and thoughtful feedback. In the following, we address the technical comments and questions. 1. **Q: Real datasets such as MNIST?** We will present experiments on the MNIST dataset and show that the spikes in the kernel matrix are related to the different classes of the dataset. In the attachment of our global rebuttal, we present the top principal components of pre-trained transformers before and after fine-tuning. Similarly to the neural collapse phenomenon, we can see that these principal components correlate with the different classes after fine-tuning. This should be consistent with the MNIST experiments the reviewer describes in the question. 2. We thank the reviewer for the suggestion of additional experiments to compare full-batch GD and SGD. In the attachment of the global rebuttal, we present additional simulations for GD training with different learning rates and the maximal learning rate we can use in GD training. Especially for GD training with very large learning rates, we can observe the emergence of an outlier in the spectra after training, but the performance is not as good as SGD training with large learning rates, and in this case the training dynamics with GD are unstable. Besides, we also ran a grid search for the learning rate when training with GD/SGD, and we did not find the same heavy-tailed spectra that the adaptive methods showed. We can observe the emergence of heavy tails if we apply sufficiently large learning rates for GD at the early stage of training, but we must adjust the learning rate later to ensure the convergence of the training loss. This is the heuristic reason we need adaptive optimizers to obtain heavy-tailed spectra in the weight and CK matrices after training. 3. So far, we do not have a sufficient understanding of what causes a heavy-tailed spectrum after training. 
But we believe that the heavy-tailed spectrum in the CK is due to the heavy-tailed spectrum of the weight matrix. Its heavy-tailed spectrum indicates that the weight matrix is moving far away from the initialization, which possibly leads to good feature learning. Moreover, a large learning rate at initialization is important for feature learning, and adaptive optimizers will then guarantee convergence to a global minimum. Our experiments in Figure 18 provide more insights into the adaptive optimizers, the heavy-tailed distribution, and how this heavy-tailed distribution relates to the multiple-index teacher model. We will move this part to the main text for more discussion. 4. In Figure 1, we only presented part of the spectral results for different cases. In Appendix B.2, we additionally present all the spectra of the weight, CK, and NTK matrices before and after training in Cases 1-4 separately. We will mention this in the caption of Figure 5. We will present a limitations section in the conclusion of our paper for completeness. We would be happy to clarify any concerns or answer any questions that may come up during the discussion period. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal and clarifications. I will raise my score accordingly.
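A common way to quantify the heavy-tailedness discussed in the rebuttal above is a tail-index estimate on the largest eigenvalues; this is a generic sketch using the Hill estimator on synthetic Pareto data of our own choosing, not the paper's spectra:

```python
import numpy as np

def hill_alpha(values: np.ndarray, k: int) -> float:
    """Hill estimator of the power-law tail index from the k largest values."""
    top = np.sort(values)[::-1][: k + 1]
    return k / np.sum(np.log(top[:k] / top[k]))

rng = np.random.default_rng(3)
# Classical Pareto sample (numpy's pareto draws Lomax; adding 1 shifts it) with tail index 2.
heavy = rng.pareto(2.0, size=20000) + 1.0
alpha_hat = hill_alpha(heavy, k=2000)
```

Applied to the eigenvalues of a trained weight or CK matrix, a small, stable `alpha_hat` over a range of `k` would indicate a genuine power-law tail rather than a few isolated spikes.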
Summary: This paper analyzes the spectral properties of feedforward neural networks under a student-teacher setting. A linear-width regime is considered, where the sample size and network width grow comparably with the input feature dimension. The paper shows that the spectra of weight and kernel matrices are invariant under training, and that outliers occur in large step size regimes. Heavy-tailed spectra are also discussed with their relation to generalization. Evidence from both synthetic and real-world datasets is shown. Strengths: - Proposed LWR as a novel perspective for analyzing feedforward networks, and presented an interesting analysis of NN spectral invariance, potentially providing insights into NN feature learning. - Evidence from both synthetic and real-world datasets enhances the argument. - The presentation of the paper is clear. Weaknesses: - The proposed LWR feels under-explained. Please see "Questions". Otherwise I think this is a solid paper. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - In the proposed LWR, how should one understand that the input feature dimension $d$ goes to infinity? This feels somewhat counterintuitive, compared with e.g. the infinite-width regime. - To what extent does the observed spectral invariance depend on the LWR? I would assume this does not occur under the infinite-width regime. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: n/a, theoretical work Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation and thoughtful feedback. In the following, we address some of the technical comments and questions. 1. **Q: Why does the feature dimension go to infinity?** The LWR, where the input feature dimension $d$ goes to infinity proportionally to the sample size $n$, is a classical setting in high-dimensional statistics and provides important insights for real-world datasets. This is in contrast to the infinite-width regime, in which the width is taken to its asymptotic limit first. We believe the LWR is a better approximation of real-world datasets and practical neural networks than the infinite-width regime, since the dimension $d$ in real datasets is very large and we should allow the dimension of the feature space to grow with the sample size. In real networks, $d$ is not infinite but can depend on $n$: with a larger $n$ we may be able to use higher-dimensional features. For instance, images in the ImageNet dataset are usually cropped to $224\times 224$ for learning, giving a very large feature dimension $d$. 2. **Q: To what extent does this depend on the LWR?** Our theory of spectral invariance depends on the LWR, but it can easily be extended to the infinite-width regime. In fact, in the infinite-width regime, we can obtain sharper estimates of the changes of the weight matrix and of each neuron as long as the width $h$ is much larger than the sample size $n$. The main difficulty of our theory is to extend the result of spectral invariance to the LWR, which is a more realistic regime for neural networks. Besides, although we can show that each neuron stays close to its initialization in the infinite-width regime, we cannot obtain a limiting eigenvalue distribution of the weight matrix when $d$ is fixed and $h\to\infty$. Only when we consider the LWR can we claim that the limiting eigenvalue distribution of the weight matrix is an invariant Marchenko–Pastur distribution through GD training with a small learning rate. 
We would be happy to clarify any concerns or answer any questions that may come up during the discussion period.
Rebuttal 1: Rebuttal: **Overall Summary:** We would like to thank all reviewers for their evaluation of our work and their helpful comments. The key contribution of our work is to describe how the spectral evolution of both weight and kernel matrices changes during different training procedures. We focus on a simple two-layer neural network under a linear-width regime (LWR). These empirical results open a promising way to understand the training dynamics and feature learning of neural networks via random matrix theory. We believe our results will be impactful for both the random matrix theory and optimization communities by providing experimental evidence that can inform future theoretical studies. We briefly summarize our main results: 1. We observe invariant spectra of weight and CK matrices through training with GD/SGD and small learning rates, which indicates that the performance of this kind of training is still close to the kernel regime. We theoretically justify this observation of invariant weight and kernel matrices under certain assumptions by using the global convergence of GD under the LWR. 2. We observe a strong alignment between the eigenvector corresponding to the outlying largest eigenvalue and the teacher model when training with a large learning rate. We demonstrate how to find the threshold for a large enough learning rate by showing a BBP-like transition in the spectra of the weight and kernel matrices as the learning rate increases. This BBP-like transition is critical, as it is known to be indicative of feature learning, as explained in Response 1 to **Reviewer jctx**. This will also help us understand how to choose the learning rate to attain feature learning. 3. Our experiments rule out a causal relationship between the occurrence of a heavy-tailed spectrum for the weight matrices and good generalization. This complements the work of Mahoney et al. 
[55-57], where the authors observed a strong correlation between the two; our work can be considered a limitation of the phenomena underlying that trend, while at the same time, through the example in Figure 18, we confirm the existence of a relationship and heuristically explain why this neural network benefits from heavy tails. 4. We investigate the properties of weight and kernel matrices of larger models to demonstrate the phenomena observed in our toy examples. Therefore, our spectral analysis has the potential for high impact, as we can investigate feature learning and training dynamics for different optimization algorithms using models used in applications. 5. Given **Reviewer zoyL**'s comments, we want to emphasize the fact that our BERT experiment is motivated by a connection to the spectral analysis we do on the feed-forward neural networks. Please see item 6. in our response to **Reviewer zoyL**. To reinforce our point here, we are including two PCA plots in Figure 3 of the attachment which are part of a related project we are working on. Due to authorship restrictions, these plots cannot appear in our revision, but we hope they can help inform our responses regarding spectral theory and feature learning. **Additional Experiments in the Attachment:** 1. Following the experiments in Figure 2 in our paper, we present the training dynamics of SGD training with a learning rate larger than the BBP-like threshold in Figure 2. We present the evolution of the largest eigenvalues (outliers) of the CK and NTK matrices respectively during this training process. There are two phases of this evolution: at the early stage, the largest eigenvalues progressively grow, and then these outliers start oscillating before convergence. 2. Following the experiments in Table 1 of our paper, we further present GD training with a grid search over all possible learning rates which ensure global convergence after training. 
We did not observe heavy tails for any of these learning rates. However, analogously to the SGD case in Figure 2, the emergence of an outlier also appears when gradually increasing the learning rates. We then separately present the spectral behaviors of weight matrices when using a small learning rate and the maximal learning rate for GD training. 3. Following the BERT experiment in our paper, we further present the alignment between the top two principal components of the CK matrix of a large language model before and after fine-tuning. We would be happy to clarify any concerns or answer any questions that may come up during the discussion period. Pdf: /pdf/e435ca4ffb8326e99317a892d8c67f9646c878fc.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: Considerable attention in deep learning theory has been given to the analysis of neural networks in the kernel regime, which neural networks approach as they approach infinite width. While this analysis has led to insights, there is an important gap between the theorized generalization performance of neural networks in this infinite-width regime and the practical performance of finite-width neural networks, which typically perform better. This paper studies simple, two-layer neural networks in the more realistic *linear-width regime* (LWR), in which sample size, input feature dimension, and the width of the layer approach infinity at comparable rates. In several analyses, the authors target the following question: *How do the spectra of the NN's weight and kernel matrices evolve during the training process?* The authors compare to the kernel regime as a benchmark and analyze changes in spectra that occur through training with different gradient descent algorithms. For their first simple experiment (Section 3), they find that the spectra of the weight, conjugate kernel (CK) and neural tangent kernel (NTK) matrices remain invariant during training for gradient descent (GD) and stochastic gradient descent (SGD) with a low learning rate. This is consistent with lazy training (LT), and indicates that the network is not outperforming a kernel machine. However, for SGD with a large learning rate, a spike in the spectral distribution emerges after training, which indicates improvement over lazy training. They note that this spike is consistent with previously published results demonstrating that learned features may be indicated by an outlier in the spectra with a large singular value. They classify the spectral distributions into three categories: invariant bulk, spikes outside the bulk, and heavy-tailed distributions. They prove that the invariant bulk phenomenon is expected under certain assumptions. 
They then empirically analyze the spike phenomenon, and demonstrate that as the learning rate increases, there is a threshold at which the spike emerges, which corresponds with greater alignment and suggests feature learning. They then provide an explanation for the relationship between heavy-tailed spectra and better generalization performance. Lastly, they perform some analyses on CNNs trained on a more natural dataset, CIFAR-2, as well as BERT with fine-tuning on Sentiment140. Strengths: The authors tackle an important question, which is to understand how the structure of neural networks changes through the training process. To get a handle on it, they analyze the spectra of the weight and kernel matrices of simple neural networks on controlled datasets. They find some interesting phenomena that relate the spectral properties to learning rate and generalization performance. Weaknesses: The paper would benefit from more effort put towards making clear the meaning and significance of the experiments and results. Why look at the spectra? What is the significance of the three classes of spectral distributions that are observed? (The answer to the latter question is in the paper, but it is somewhat buried. This should be put front and center and made very clear.) The figures and captions in the paper could be clearer. Oftentimes, variables are used as labels, and the reader forgets which variables correspond to what. It would be better to use English names, or more descriptive properties. The specific primary findings should be stated more clearly. The statements provided in the introduction are general, and don't provide much insight into the conclusions. It takes work for the reader to tease out these conclusions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What can we understand about neural networks, generally speaking, by looking at the spectra of their weight matrices and kernels? 
Why use this approach, rather than analyzing other properties of the weights and network structure? - How do the results found here advance our theoretical understanding of how deep networks learn? - How can these results be used in practice? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors note the limitation of the LWR. They do not state limitations of the spectral approach to analyzing networks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation and thoughtful feedback. Per the reviewer's suggestion, we will provide more detailed captions for the figures to help readers. We will also clearly state our primary findings and conclusions in the introduction. We have summarized this work's main contributions and insights in the global rebuttal for clarification. In the following, we address the technical comments and questions. 1. **Q: Why look at the spectra? Theoretical understanding?** The spectral properties of neural networks are a key component of understanding the explainability and dynamics of feature learning in deep neural networks. We believe that a spectral analysis of weight and kernel matrices can be seen as one of the first, fundamental steps in understanding the performance of neural networks. Many previous works use the spectral properties of either the weight or kernel matrices of neural networks (see, e.g., references [28, 55-57, 60, 70, 71, 82]). Intuitively, we can think of the eigendecomposition of the CK and NTK as describing the variance of the underlying feature space, akin to PCA. The emergence of an outlying eigenvalue tells us that the total explained variance of that feature space is becoming concentrated along one direction. In our alignment experiments (see Figures 1(b) and 2(a)), we show that this direction corresponds to the target space of the neural network. This alignment could be thought of as an operational definition of feature learning itself. While there are alternative approaches to the spectra, this analysis is simple and theoretically motivated. Furthermore, in Figure 2(b-d), the emergence of the outlier and strong alignment are highly related to the choice of learning rate, which indicates that the spectral properties we study here help us understand how to choose the correct optimizer and hyperparameters for efficient training. 
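The alignment diagnostic described here can be illustrated with a toy spiked-matrix experiment (our own sketch, not the paper's code): add a rank-one "teacher" spike to a Wishart bulk and measure the overlap between the top eigenvector and the teacher direction. Only a spike strong enough to escape the bulk produces strong alignment, mimicking the BBP-like transition.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 800, 800
y = rng.choice([-1.0, 1.0], size=n)   # "teacher" direction (e.g. labels)
y /= np.linalg.norm(y)

# Bulk: a Wishart matrix X X^T / d; signal: a rank-one term along y.
X = rng.normal(size=(n, d))
bulk = X @ X.T / d

def top_alignment(theta):
    """Overlap of the top eigenvector of bulk + theta * y y^T with y."""
    K = bulk + theta * np.outer(y, y)
    w, V = np.linalg.eigh(K)          # ascending eigenvalues
    return abs(V[:, -1] @ y)

weak, strong = top_alignment(0.5), top_alignment(20.0)
print(f"alignment with weak spike: {weak:.2f}, with strong spike: {strong:.2f}")
```

With a weak spike the top eigenvector is essentially a random direction of the bulk (overlap near zero); with a strong spike an outlying eigenvalue appears and the overlap is close to one, which is the "alignment as feature learning" diagnostic the rebuttal describes.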
In conclusion, our empirical results on spectral evolution provide a promising way toward a future theoretical understanding of how deep networks learn features during training. 2. **Q: Applications in practice?** Based on our empirical results, we emphasize that understanding the spectral properties of the weight and kernel matrices is essential to understanding feature learning. For example, in Figure 5, we computed the top two principal components of the CK matrix in the BERT model before and after fine-tuning, and we can see how the model forgets the pre-trained features and learns new features in the fine-tuning process. In addition, our results indicate that the choice of optimizers and hyperparameters affects the spectral behavior and generalization of neural networks. Finally, [55-57] found a correlation between the occurrence of heavy-tailed spectra and generalization. Our experiments rule out that this relationship is causal. Our example in Figure 18 empirically shows that we need to analyze additional spectral properties, e.g., how well features align with the eigenspace in the heavy-tailed part, to test whether the neural network generalizes well or not. We are happy to clarify any concerns or answer any questions that may come up during the discussion period. --- Rebuttal Comment 1.1: Comment: I thank the authors for the thorough rebuttal. My primary concerns with the paper regard clarity. Other reviewers have noted similar challenges with parsing the plots and remarked that the paper "sacrifices depth for the sake of breadth." I believe the results are important and that the work is sound. I agree with reviewer zoyL that, for my taste, I would prefer to see a more focused set of experiments on a smaller set of phenomena. Nonetheless, given the scope that the authors have chosen to present in this paper, I believe the changes proposed in the rebuttal will improve the paper, and I have increased my score accordingly. 
--- Reply to Comment 1.1.1: Title: Thanks for your feedback Comment: Thanks for your thoughtful response and valuable suggestions. We will balance the depth and breadth in our final version of the paper. In our revision, we will provide more experiments on the transition phenomenon for the spikes in the spectra of kernel and weight matrices. Following the figures in the attachment of the general rebuttal, we will show a detailed investigation of the phase transition phenomenon for spikes, especially how the learning rate affects the spike's alignment and test error in the transition region. This will provide a more focused case study of experiments on our second phenomenon.
Contrastive Retrospection: honing in on critical steps for rapid learning and generalization in RL
Accept (poster)
Summary: This paper proposes contrastive introspection (ConSpec), an algorithm for learning a set of prototypes for critical states via a contrastive loss. ConSpec works by delivering intrinsic rewards when the current states match one of the prototypes. This paper also conducted experiments in various environments. Strengths: The intuition of learning the critical states is natural and easy to follow. The experimental results in this paper look solid and promising. Weaknesses: Despite the empirical performance, the reviewer finds the ConSpec algorithm itself hard to follow. The largest weakness is the insufficient discussion of how the prototypes $h_i$ are learned. Hence, the reviewer cannot understand the details of how $h_i$ are used (see details in Questions). Besides the insufficient discussion of the prototypes, some minor issues are: (1) the title in the pdf (Contrastive Introspection: ...) seems to mismatch the one appearing on OpenReview (ConSpec: …). (2) The font of citations appears to be confusing. E.g., in lines 19-20 of the introduction, the manuscript uses (number) to address some key points, and the citations also appear as (number) – it would be nice if the citations could be changed to something that is not (number; number). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Per the major weaknesses: 1. How are the prototypes $h_i$ actually learned? If the reviewer understands correctly, in line 7 of the abstract, the manuscript says “ConSpec learns a set of prototypes…”. While in Algorithm 1, it seems that the prototypes $h_i$ are given to the algorithm as inputs. Maybe the authors can clarify why this inconsistency in learning the prototypes happens? 2. How are the $h_i$ learned/chosen in each experiment? The reviewer has looked into the details of the experiments in the appendix, but cannot clearly understand how the presented experiments actually utilize the $h_i$. 
It would be nice if the authors could provide more details of all the $h_i$ in all the presented experiments (Sec. 4.1-4.5). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See questions and weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their feedback. > *some minor issues are: (1) the title in the pdf (Contrastive Introspection:. ..) seems to mismatch with the one appear in openreview (ConSpec: …). (2) The font of citations appears to be confusing. E.g., from line 19-20 in the introduction, the manuscript uses (number)* Thank you for bringing our attention to these issues. We will address them in turn during revision; we will correct the title, and change the numbering of bullet points to be (i-iv) rather than (1-4) to minimize conflict with the numbered citations (Unfortunately, space constraints prevent us from altering the citations to an author list). **Main questions:** To answer the Reviewer's main question, the prototypes are initialized as random vectors, and these are provided as initial parameters, per Algorithm 1. However, these initial random vectors are updated based on the gradient of the contrastive loss - they are identical to any other parameter in this way. More specifically, each $h_i$ is a vector of parameters that are updated by the algorithm at step 8 of Algorithm 1. This is stated in section 3.1, 3.2 and 3.3. We believe the Reviewer's misunderstanding was due to our listing the prototypes as inputs in Algorithm 1. We see this ambiguity now, so thank you for raising this. But, we note that the prototypes are included in a set of parameters, and this same list includes the other parameters of the model (e.g. the synaptic weights $W$, $\theta$, and $\phi$). Thus, these prototypes are only given as inputs to the algorithm at the start in the same way that all the other randomly initialized parameters are. To clarify this point and avoid any other confusion we will separate out the parameters from the other inputs in Algorithm 1 in revision. Furthermore, we hope this resolves the Reviewer’s second question about how we choose the prototypes: we do *not* choose them. 
They should only be understood as parameters that are updated via gradient descent. Conceptually, **ConSpec uses these prototypes in a very novel way, with a new contrastive loss, providing a fresh solution to both difficult problems of long-term credit assignment and generalization in RL, as we demonstrate through a variety of different task situations.** We hope that with this clarification on how the ConSpec procedure works, the reviewer will enjoy the merits of this novel algorithm and their score will be updated appropriately. --- Rebuttal Comment 1.1: Comment: Dear Reviewer viiH, We wanted to say in advance our heartfelt thank you's for the time and effort you've put into both the reviews that have passed as well as the upcoming current discussion period. We know that you all have your own papers that you have to deal with during this busy time, and sincerely appreciate the time you've taken to spend on ours. We are so excited about this paper and its findings, so we are very much looking forward to the upcoming discussions with you! Please don't hesitate to ask us any questions big or small, and we are happy to provide any further clarifications. --- Rebuttal Comment 1.2: Title: Response to the Rebuttal Comment: Dear Authors, Thank you for your clarifications. Since all of my concerns/questions are properly addressed, the rating has been adjusted accordingly. Good luck! Reviewer viiH --- Reply to Comment 1.2.1: Comment: Dear Reviewer viiH, Thank you! We hope you enjoyed reading our work as much as we enjoyed conducting it. And we wish you good fortune in your own papers this year.
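To make the mechanism discussed in this thread concrete, here is a minimal numpy sketch (our own; the function name `intrinsic_reward` and the threshold value are illustrative, not the paper's): prototypes are plain parameter vectors (here left at random initialization rather than trained by the contrastive loss), and a step earns an intrinsic reward when its encoding's cosine similarity to some prototype clears a threshold.

```python
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def intrinsic_reward(encodings, prototypes, threshold=0.6, bonus=1.0):
    """Per-step bonus: pay `bonus` whenever the encoded state's best cosine
    similarity to any prototype clears `threshold` (names are ours)."""
    rewards = []
    for z in encodings:                      # one encoding per time step
        best = max(cosine_sim(z, h) for h in prototypes)
        rewards.append(bonus if best > threshold else 0.0)
    return np.array(rewards)

rng = np.random.default_rng(2)
# Prototypes start as random vectors; in ConSpec they would then be
# updated by gradient descent on the contrastive loss like any parameter.
prototypes = [rng.normal(size=8) for _ in range(3)]
traj = [rng.normal(size=8) for _ in range(4)]
traj.append(prototypes[0] + 0.05 * rng.normal(size=8))  # near-"critical" step
r_in = intrinsic_reward(traj, prototypes)
print(r_in)
```

The last step, being close to a prototype, reliably receives the bonus; which earlier steps do depends on chance similarity to the untrained prototypes.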
Summary: The paper notes that in real-world MDPs, success is often contingent upon a small set of steps. While the Bellman equation can theoretically do credit assignment over long horizons, reward is hard to propagate under Bellman-based methods in practice. The authors therefore propose a novel algorithm that uses contrastive learning to identify critical states that final success relies on. The method uses a memory-like system that can give agents intrinsic reward during training. The paper then evaluates the proposed method on a wide variety of domains and shows performance gains when the proposed method is added to RL algorithms. Strengths: The paper is based on an interesting and important insight about long-horizon credit assignment and reward learning. The proposed method is designed to explicitly improve long-term credit assignment and has shown empirical success in the evaluation. The writing and figures are clear. The paper is easy to follow. The evaluation covers a wide variety of RL tasks, benchmarked to back the claims of the paper. Weaknesses: 1. The method assumes additional access to a "success" indicator at the end of the episode. While this is commonly obtainable in gym environments, it doesn't fit into the general MDP setting and thus might limit when the algorithm can be applied. 2. The assumption about access to "success" seems privileged compared to baselines. I am wondering whether they will catch up with the performance of the proposed method when a success bonus is added. 3. The evaluation has #mini batches / # gradient steps as the x-axis, unlike the environment steps in common RL benchmarks. I am wondering why this is the case. If this is necessary, I'd like to see convincing justifications. 4. The proposed method relies on a memory system, which may hurt generalization and might have problems when scaling up. 5. CURL+PPO doesn't seem to be a strong baseline to ablate in figure 3. 
I hope the authors could benchmark against RAD [https://arxiv.org/abs/2004.14990], a much stronger baseline in pixel space. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I am wondering whether adding the intrinsic reward can degrade the performance of RL algorithms on common environments (aka, those environments where final success does not depend on just a few critical steps). This should be justified by the experiments. When the observation is partial, is the proposed method still reasonable? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: 1. The method requires privileged information about the success of an episode. 2. I cannot see how the method can be applied to RL that has partial observations that require recurrent policies. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their questions and feedback. **Success/failure:** As discussed in section A.3, we used only 2 simple definitions of success across tasks, and in each case, tied “success” to reward in a natural way. In the first definition, success is based on whether a reward is achieved at all, which works well for sparse reward environments. But this does not work for dense rewards. Thus, the second definition defines success as simply being among the top-k highest rewarded trajectories encountered so far. In other words, there is no fixed definition of success necessary beyond simply being the top level of reward achieved so far. **In this way, success is not privileged information; it is a direct function of the reward observable to the agent, and works with any MDP.** Moreover, we believe that exploiting the reward signal in this “indicator” way is a special strength of ConSpec. To illustrate, consider dense reward tasks like Atari, where we used the second definition for successes. In this case, ConSpec tries to pinpoint the differences in critical steps between the highest rewarded trajectories and the trajectories with average rewards. **As such, ConSpec uses the reward signal in a comparative and relativistic way, giving it the ability to hone in on what it can improve in each mini-batch, which is a very efficient approach.** **Intrinsic rewards' potential degradation of RL performance:** This is a good question and one that we wondered about too. In all of the tasks we have tested to date, whether they are tasks that require long-term credit assignment or not, performance is either drastically improved or unchanged with ConSpec, never degraded. We are confident that this pattern will hold across tasks. This is because degradation of RL performance is in general mitigated by adopting the “potential” form of intrinsic reward (Ng et al., 1999), which provably does not alter the optimal policy of the original task. 
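The potential-based shaping result invoked here (Ng et al., 1999) rests on a telescoping identity, which the sketch below (our own, with an arbitrary potential function) checks numerically: shaping each reward by $F(s,s') = \gamma\Phi(s') - \Phi(s)$ changes the discounted return of any trajectory only by $\gamma^T\Phi(s_T) - \Phi(s_0)$.

```python
import numpy as np

def shaped_rewards(rewards, states, potential, gamma=0.99):
    """Potential-based shaping: r_t -> r_t + gamma * phi(s_{t+1}) - phi(s_t)."""
    phi = [potential(s) for s in states]          # states has len(rewards)+1 entries
    return [r + gamma * phi[t + 1] - phi[t] for t, r in enumerate(rewards)]

def discounted_return(rewards, gamma=0.99):
    return sum(g * r for g, r in zip(gamma ** np.arange(len(rewards)), rewards))

rng = np.random.default_rng(3)
states = rng.normal(size=(6, 4))              # toy 5-step trajectory, 4-dim states
rewards = rng.normal(size=5)
potential = lambda s: float(np.sum(s ** 2))   # arbitrary potential function

gamma = 0.99
R = discounted_return(rewards, gamma)
R_shaped = discounted_return(shaped_rewards(rewards, states, potential, gamma), gamma)
# Telescoping: shaped return = original + gamma^T * phi(s_T) - phi(s_0).
correction = gamma ** 5 * potential(states[-1]) - potential(states[0])
print(R_shaped - R, correction)
```

Because the correction depends only on the endpoint potentials and not on the actions taken in between, the shaped task has the same optimal policy as the original one, which is the guarantee the rebuttal relies on.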
We have implemented this for ConSpec already, and tested it on key-to-door tasks as well as Atari (Appendix Fig. A.8, A.12). But interestingly, even without the “potential” form of intrinsic reward, ConSpec still works well and has never led to degradation on any of the tasks we tested (Fig. 2,4,5,6,7). **We speculate that the reason why performance degradations did not occur with ConSpec is because ConSpec's intrinsic reward is designed to be small and sparse, and so its unintended negative effects are mitigated.** Nonetheless, as noted, the use of a potential function for calculating the intrinsic reward provably deals with this problem. **Partially observable observations:** Although partially observable environments were not a main focus of the paper, we do have several examples of partially observable environments here. The 3D OrangeTree task was one, since the pixel inputs are by no means a fully observable state. Amongst the Atari tasks, Montezuma's Revenge and delayed AirRaid were both partially observable, and with these tasks the PPO policy was recurrent, and ConSpec was able to give drastic improvements over baselines, so ConSpec was not hindered in any way by partial observability nor recurrent policies. Moreover, ConSpec is a plug-and-play module, and is separate from the policy itself. And ConSpec can be considered a representation learning algorithm in some ways (with a novel contrastive loss applied to RL). It discovers promising features separating successes from failures, and it is reasonable to expect it to be agnostic to architectural choices or environment characteristics. **Plotting:** The Reviewer is correct to bring up the plotting conventions. The reason we plotted with # of gradient updates/minibatches is that the ConSpec module is updated based on complete trajectories, so we wanted a metric that would allow us to not only compare baselines, but also compare across tasks. **Memory system:** This is a great point. 
Scaling to pixel-level tasks like Atari games and 3D navigation was not a problem, but testing other larger-scale tasks was considered out of scope for the current study. Nevertheless, inspired by the Reviewer, we do have several experiments that begin trying to make ConSpec more compact. To start, the Montezuma experiments don't fill up the success memory buffers before training starts (since successes are rare) and still work, showing that saving on memory does not hurt. Altogether, the total memory should scale as $O(k)$ where $k$ is the number of prototypes, which should scale with the number of sparse critical steps, a manageable number. But now, we have a new experiment testing learning in ConSpec when the number of prototypes is less than required (e.g. 3 prototypes in the 4-key task) --- in order to study further memory reductions (Rebuttal Fig. 4). Even having fewer than necessary prototypes can often be enough to still solve the task (i.e. catching any critical step helps the agent)! **RAD baseline:** We thank the Reviewer for this suggestion and have done this baseline now (Rebuttal Fig. 5). RAD was still unable to solve the 3D OrangeTree task, unlike ConSpec. To us, this elegantly demonstrates the special benefits of ConSpec's version of contrastive learning over other representation learning algorithms applied to RL. ConSpec is not merely representation learning; it is a special application of contrastive learning tailored for long-term credit assignment and generalization in RL, and our experiments show that it does this extremely well. Altogether, this Reviewer brought up a number of interesting issues that we found very helpful. Through careful examination of these issues, as well as interesting new experiments (e.g. the 3-prototypes-in-4-keys experiment, and RAD), we demonstrate that these issues are very mitigable. 
We hope the Reviewer may consider raising their score to reflect the novelty and utility of the ConSpec approach, and the strength of the varied experiments. --- Rebuttal Comment 1.1: Comment: Dear Reviewer fj2y, We wanted to say in advance our heartfelt thank you's for the time and effort you've put into both the reviews that have passed as well as the upcoming current discussion period. We know that you all have your own papers that you have to deal with during this busy time, and sincerely appreciate the time you've taken to spend on ours. We are so excited about this paper and its findings, so we are very much looking forward to the upcoming discussions with you! Please don't hesitate to ask us any questions big or small, and we are happy to provide any further clarifications. --- Reply to Comment 1.1.1: Comment: Dear Reviewer fj2y, thank you for all your wonderful suggestions thus far! We wanted to ask, when convenient, if you have any further questions? We believe we have addressed and mitigated your concerns, and we are happy to address any remaining ones you may have. We know that you all have your own papers that you have to deal with during this busy time, and sincerely appreciate the time you've taken to spend on ours!
Summary: Proposes an auxiliary reward module to be used in RL algorithms, that learns features ('prototypes') of critical states in successful trajectories. For new observations, the method then uses cosine similarity to the learned features as a reward bonus. The method is evaluated on a unity-based env, grid-worlds, versions of gym/atari envs with delayed rewards, and Montezuma's revenge. Strengths: 1. Effective exploration bonus The idea of learning invariant features across successes, and using these as a source of reward, does seem to give better exploration performance, from the experiments. The Montezuma's revenge experiments (Fig. 6) are particularly compelling - the baselines PPO, RIMS (which also uses a set of discrete slot-based learned features) and Decision Transformer all fail to obtain any reward. By creating an explicit division between success and failure episodes, ConSpec can then learn features that match states present in the successes, but not the failures, even from a very small number of successful trajectories (there might be other, simpler ways to get this effect however, see weakness #1). The ability of ConSpec to find important states critical for the task is also investigated by the authors in the simpler unity-based env, where they also visualize states closest to the learned prototypes. 
 2. More expressive set for bottleneck states Instead of learning an explicit set of states which are important (like sub-goals) as has been previously studied, this paper instead captures the notion of ‘critical states’ using learned prototypes. The advantage of this is that it can flexibly capture a large set of very different states, all of which are critical. This is also beneficial because it enables zero-shot generalization in new environments (section 4.2). 3. Clarity, presentation The paper is well motivated, written clearly, and the main idea for the algorithm is presented clearly. Weaknesses: 1. Are the prototypes actually required? Learning from data in successes that aren’t present in failures should lead to better performance, but the importance of doing this through learning prototype features is unclear. As a simple baseline, consider training a policy on only the successful set (using behavior cloning). Does this provide similar performance to ConSpec on Montezuma’s Revenge? Is trying to capture a notion of ‘critical states’ required to learn better policies? Can you run Decision Transformer where for each successful trajectory, every transition is labelled with a reward of 1, and for every failure trajectory, every transition is labelled with a reward of 0? 
2. Success/failure definition The method relies crucially on the quality of the learned prototypes, which in turn depends on the success and failure datasets. It might not always be possible to divide up trajectories into 2 classes in this manner; in many tasks, performance keeps improving over time and a ‘successful’ trajectory at the beginning of training is very different from one from a converged policy. The authors do discuss this (appendix A.3), but the definition used in this paper for a successful trajectory is - ‘an episode that received one of the few rewards available, and a failure is defined as anything else’. For agents to keep learning and improving from data, the notion of a success should necessarily change with time (e.g., maximize the reward instead of just getting some reward). 3. Delayed reward envs A good portion of the experiments are conducted on familiar gym, Atari envs but with a modification where the rewards are delayed. The significance of these experiments is unclear, since the delayed reward setting for these envs is not standard and widespread. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the questions in weakness #1. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Sufficiently addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
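The reward-bonus mechanism this review summarizes (cosine similarity between current features and learned prototypes) could be sketched roughly as follows. The function names and the `scale` factor are illustrative assumptions for this sketch, not taken from the paper:

```python
import numpy as np

def cosine_scores(hidden_states, prototype):
    # Cosine similarity between each timestep's hidden state and one prototype.
    h_norm = hidden_states / np.linalg.norm(hidden_states, axis=1, keepdims=True)
    p_norm = prototype / np.linalg.norm(prototype)
    return h_norm @ p_norm

def intrinsic_bonus(hidden_states, prototypes, scale=0.2):
    # Per-timestep reward bonus: scaled best match over all prototypes.
    scores = np.stack([cosine_scores(hidden_states, p) for p in prototypes])
    return scale * scores.max(axis=0)
```

A state whose features align with any prototype then contributes a positive shaped reward, while unrelated states contribute roughly nothing.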
Rebuttal 1: Rebuttal: We thank the Reviewer for their feedback. **Are prototypes required?:** The reviewer raises an interesting question: could a system learn critical states implicitly, without resorting to the use of prototypes? As an example, they note that in theory one could use Decision Transformers (DT) and learn from the difference between successful and failed trajectories. Indeed, in Montezuma's Revenge, we have already fed the DT successful trajectories (i.e. trajectories that obtained reward) from an a priori set, as well as failures. In line with the Reviewer’s recommendation, we labeled the successes as 1’s and failures as 0’s. This is the result displayed in Fig. 6d and we found that performance, despite having access to success trajectories, was still zero. The reason is that these successful trajectories used for training did not come from a curated expert policy; rather, they were "spurious successes", random-policy trajectories that happened to get reward. This meant that most of the actions taken in these trajectories were actually not the correct actions, but a few were, enough to get reward. As such, behavioural cloning led the DT to learn a lot of terrible policies. Of course, behavioural cloning of curated expert policies with DTs works well. This brings up a critical but perhaps subtle point: unlike DTs without prototypes, **ConSpec is able to learn from spurious successes because the prototypes ignore all the states other than those that were critical for distinguishing success from failure. This is why ConSpec’s strategy of honing in on invariant intermediate states and ignoring all the rest of the noise is an important advance, and it is the reason why prototypes are necessary**. 
Put another way, the secret to the power of contrastively learned prototypes is that their limited capacity (each prototype is a single vector) is essentially a way to force the system to ignore other information that is not relevant to distinguishing successes and failures. As our experiments demonstrate, this makes ConSpec, courtesy of its prototypes, much better at learning as a beginner from scratch than DTs, which are designed to learn from experts. **Definition of success/failure:** Happily, the Reviewer’s recommendation is exactly what we had already done. Altogether, we used only two simple definitions of success/failure in all of our tasks, as outlined in Appendix A.3. One of those definitions was “an episode that received one of the few rewards available”, as the Reviewer commented. But this pertained only to the binary reward setting -- e.g. where there is only 1 reward in the trajectory. For dense reward tasks like Atari and Mujoco, successes were defined (Section A.3 line 551-552 in the Appendix) as the top-k highest rewarded trajectories encountered so far. Failures were the median rewarded trajectories in the mini-batches. In this case, success and failure are defined relative to trajectories encountered so far, and as such, the definition of success changes over time as the agent improves - exactly covering the situation the Reviewer describes. **Hence, we agree with the Reviewer and have been doing exactly as the Reviewer recommends all along.** Given this point, we could have been clearer about the fact that the definition of success can change, so we will make this clear in a revised manuscript. Most interestingly, ConSpec can be a perfect inductive bias to learn from this changing reward signal! ConSpec is designed to pinpoint the relative differences between success and failure trajectories. 
And **we consider this relativistic design to reflect an important and subtle learning signal that ConSpec exploits but which other RL algorithms do not.** **Delayed Atari:** Delayed Atari has been used in classic papers like RUDDER, and in more recent papers like InferNet (Ausin et al, 2021). More generally, delayed reward modifications of various regular environments are quite common, including in Synthetic Returns (2021) and in the episodic RL literature (e.g. Off Policy RL with delayed rewards, Han et al 2022), so we do not consider our modified Atari environments out of place. Moreover, we also apply ConSpec to the standard (unmodified) versions of Atari. Besides Montezuma’s Revenge, we also did so for 8 other Atari games (shown in appendix Fig. A.12). **The significance of our experiments on the delayed Atari is that they take a classic RL benchmark and make it more challenging vis-a-vis long-term credit assignment** (which is why standard RL algorithms such as PPO cannot solve the delayed versions but could solve most of the easier, unmodified Atari games). Ultimately, our use of delayed Atari was but one of many tasks to which we applied ConSpec, including grid world tasks, 3D pixel navigation tasks, regular Atari like Montezuma’s Revenge, and delayed continuous control tasks. These tasks differ from each other in the nature of their observations, their task features, exploration, and goal requirements. The sum total of all these tasks is to showcase the strong capability of ConSpec to robustly handle a wide variety of situations requiring long term credit assignment and/or generalization. Altogether, this Reviewer brought up a number of issues that have helped us to highlight why ConSpec is an important contribution and to clarify our approach. We feel that we have even demonstrated that in some cases ConSpec is doing exactly as the Reviewer had called for. 
In light of these matters, we hope the Reviewer will raise their score to reflect the novelty and utility of the ConSpec approach, the strength of the varied experiments we provide, as well as the good ideas that we and they jointly thought of. --- Rebuttal Comment 1.1: Comment: Dear Reviewer LNh7, We wanted to say in advance our heartfelt thank you's for the time and effort you've put into both the reviews that have passed as well as the current discussion period. We know that you all have your own papers that you have to deal with during this busy time, and sincerely appreciate the time you've taken to spend on ours. We are so excited about this paper and its findings, so we are very much looking forward to the upcoming discussions with you! Please don't hesitate to ask us any questions big or small, and we are happy to provide any further clarifications. --- Rebuttal Comment 1.2: Title: DT comparison clarification Comment: The standard manner of running Decision Transformer (DT) is to use the reward obtained in the environment, which is how I assumed the method was run in the paper. The comparison I had asked for was a variant in which every transition of a successful trajectory is labelled with +1, and every transition of an unsuccessful trajectory is labelled with 0. The reason for this is that it equally weights all the data from the successful trajectory set. This would be different from running DT as originally presented, which uses rewards provided by the environment, and does not change them in this manner. Can the authors please clarify what version of DT they did run in their paper? If they haven't run the original version of DT (which uses environment reward), then this experiment must be run too. I think this analysis is important to see if prototypes are actually important, which is central to the argument in the paper. 
Thank you for the clarification on the relative changing definition of task success, and the use of delayed reward Atari envs in other work. --- Reply to Comment 1.2.1: Comment: Thank you Reviewer LNh7 for the feedback. To clarify, the experiments conducted with the success trajectories followed the protocol of the original Decision Transformers paper in Chen, Lu et al, 2021. We note that in that paper, what each transition is labelled with is not its reward at the current timestep, but rather with its return-to-go $\hat{R}$ (Equation 2 of that paper here https://arxiv.org/pdf/2106.01345.pdf). In the case of sparse reward tasks like Montezuma's Revenge, this has the effect of labelling all state transitions in a success trajectory with return-to-go value, until the agent actually achieves the reward and then the rest of the transitions in that trajectory get zero. As such, our original experiments were quite close to that desired by the Reviewer. But the Reviewer is right that it does not end up labelling all the state transitions uniformly with the same label, since transitions after the agent achieves reward are labelled zero. Therefore, to conclusively address this, we are now conducting two new experiments that we hope are closer to the Reviewer's specifications (and please correct us if it is not). In the first, every transition in success trajectories has been labelled with $\hat{R}$ = the total return of that trajectory, so they are now all uniform within each success trajectory. In the second, we label every success trajectory with +1, per the Reviewer’s specification. We already have the results of the first experiment, which show that despite receiving successful demonstration trajectories, the Decision Transformer still could not learn to get nonzero reward in Montezuma's Revenge. This experiment therefore strongly supports our main hypothesis that ConSpec, courtesy of its prototypes, is much better at learning from scratch than Decision Transformers. 
We are waiting on the results of the second experiment and they will be ready within 72 hours. We hope this experiment is what the Reviewer was looking for. If not, please let us know and we will work hard to conduct more experiments. Thank you!
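For concreteness, the two labeling schemes discussed in this exchange could be sketched as below (a hedged illustration, not the authors' code): per-step return-to-go as in the original Decision Transformer paper, versus the uniform per-trajectory label the reviewer requested.

```python
def returns_to_go(rewards):
    # Return-to-go at step t: sum of rewards from t to the end of the trajectory.
    rtg, running = [], 0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return list(reversed(rtg))

def uniform_labels(rewards):
    # Label every transition with the trajectory's total return instead.
    total = sum(rewards)
    return [total] * len(rewards)
```

For a sparse success trajectory with rewards `[0, 0, 1, 0, 0]`, `returns_to_go` gives `[1, 1, 1, 0, 0]` (transitions after the reward get zero, as noted above), while `uniform_labels` gives `[1, 1, 1, 1, 1]`.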
Summary: This paper introduces ConSpec, a reinforcement learning (RL) algorithm designed to identify critical steps and improve performance in continuous control tasks. ConSpec utilizes contrastive learning to learn prototypes of critical steps and employs a contrastive loss to differentiate successful and failed experiences. It addresses the challenges of long-term credit assignment and generalization in RL tasks. Strengths: This article presents an interesting idea of learning to match key states in a task through contrastive learning. The writing of this paper is clear and Figure 1 is well-drawn, making it easy to quickly grasp the details of ConSpec. And the experimental results effectively demonstrate that the learned prototypes indeed match the key states in 3D Orange-Tree task and gridworld. Weaknesses: The cosine similarity measures the similarity between the prototype and the hidden state, both of which are learnable vectors. However, optimizing the contrastive learning loss with updates to both vectors may lead to extremely unstable training. In related literature on representation learning, it is common to use the stop gradient approach and optimize only one of the learnable vectors. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - ConSpec achieved a return of only 400 in the Montezuma's Revenge task compared to RND, which reached a return of 7500 in its original paper. It appears that ConSpec is not as effective as RND in this regard. Both the multi-key room environment and Montezuma's Revenge involve similar logic, but the former is much simpler. So why does ConSpec outperform RND in Fig.4? - It appears that ConSpec has significantly higher variance than the baseline algorithm in all tasks. What could be the reason for this? Further, prototype learning is crucial and can every seed match the key states? - What does "softmaxed over t" mean in line 152? Why is it necessary to introduce softmax operation? 
- How are success and failure defined in Atari and MuJoCo tasks? I think this is important, but the article lacks details in this aspect. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: As presented in Section 5, the number of prototypes is task-specific and definitions of success and failures need human design. Furthermore, the proposed method introduces hyperparameters, such as the Diversity measure and the hyperparameter $\lambda$, that require careful tuning. Finally, the algorithm exhibits a significant variance in its actual performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the feedback. **Contrastive instability with no stop gradient (SG):** To our knowledge, in self-supervised learning the SG is used (e.g. BYOL Grill et al 2020, or SimSiam Chen et al 2021) to prevent representational collapse (rather than learning instability), and only necessary when there are no negative samples (e.g. SimCLR has no SG). Since ConSpec uses positive and negative samples, there shouldn’t reasonably be SG-related learning issues here. Nevertheless, **we did as the Reviewer suggested, stopping gradients to the prototype vectors, and found that it could not solve the 4-key task (Rebuttal Fig. 3), unlike the original ConSpec (Fig. 4c)**. This was likely because SGs cause the prototype vectors to remain in their initial random state, and so, do a poor job of anchoring the critical step recognition process. We believe that learning both vectors is the correct strategy, and don’t find that it introduces any instability. **ConSpec v. RND:** The Reviewer is asking why ConSpec vastly outperforms RND, a classic exploration algorithm, on the multikey tasks and not on Montezuma’s Revenge. **This is a good question and reflects a deeper issue**. To us, our results indicate that there is not one but two axes of “difficulty”, and that it is incorrect to consider Montezuma’s Revenge to be more difficult in all respects. Montezuma’s Revenge is more challenging from an exploration stand-point and RND provides sophisticated exploration. On the other hand, RND performance collapses in the multikey tasks. In the multikey tasks, the pixel difference between “success” and “failure” frames is often very small, and RND struggles to identify these differences. 
**On the contrary, our contrastive loss is designed to hone in on even very minute differences that distinguish success from failure (figure 4b)**, which is why ConSpec is superior to RND in the multikey tasks, **elegantly illustrating the benefits of ConSpec.** Given these two distinct strengths, we wish to highlight that in principle ConSpec could be combined with more powerful exploration algorithms, and so, one should be able to obtain the best of both worlds. **Variance:** We point out that ConSpec's performance across most tasks displays **very little variance at** ***convergence***. This was true in 3D OrangeTree (Fig. 2b), the multi-key task (Fig. 4c), and for delayed Atari (Fig. 5). There does appear to be variance during training, but it seems to be the amount of time it takes for ConSpec to converge to the max reward that varies. In Montezuma's Revenge, variance likely comes from 1) stochasticity in the amount of time for the prototypes to independently hone in on different critical states, and 2) the time it takes to get spurious successes in the first place, just as the Reviewer suspected. But **to us this is benign variance as the seeds do eventually learn, and their performances vastly exceed those of other baseline algorithms**. Variance is still a major issue in the RL field at large (e.g. Bjorck et al ICLR 2022). **Softmaxing:** By “softmaxed over t” we mean that we are applying the softmax operation to the scores across time-steps in the trajectory. This was a design choice to sparsify the sequence of cosine scores for each trajectory. Our goal was to compare pairs of prototypes and force their peak cosine scores to be different in time. The softmax was one way to do this, but any equivalent method would do equally well. We will clarify this in a revised manuscript. 
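The "softmaxed over t" operation described in this answer could be illustrated as follows; this is a minimal sketch, and the temperature parameter is an assumption added for illustration:

```python
import numpy as np

def softmax_over_time(cos_scores, temperature=1.0):
    # Normalize one trajectory's cosine scores across timesteps,
    # sharpening the peak so each prototype "votes" for few steps.
    z = np.asarray(cos_scores) / temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()
```

The output sums to 1 over the trajectory, and lowering the temperature concentrates the mass on the single best-matching timestep.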
**Definition of success/failure:** In dense reward tasks like Atari and continuous control (Appendix A.3) successes are the top-k highest rewarded trajectories seen so far, while failures are the average trajectories in mini-batches. In RL tasks generally, since there usually is a reward signal, **adaptive thresholding can be used to turn any reward signal into an indicator function of success vs failure**. In this case, ConSpec tries to pinpoint the differences in critical steps between successes and failures. To clarify, in all the experiments in the paper, we made use of only two simple definitions as described in Appendix A.3 (see our general response). We can move this information to the main manuscript if necessary. **Hyperparameters:** The Reviewer pointed out there are several hyperparameters ConSpec introduces. Luckily, for each hyperparameter, we either found a fixed value that worked across all tasks or a degree of robustness across a range of values: 1. Number of prototypes: we tested ConSpec with various numbers of prototypes and find that performance is robust across values (Rebuttal Fig. 1). 2. Diversity measure: Since this hyperparameter acts on cosine similarity scores, we know that it is necessarily bounded between 0 and 1. Moreover, since the point is simply to encourage some non-trivial diversity, we used common sense to select this value, and so, had a fixed value of 0.2 across all experiments. ConSpec learned well across all experiments, speaking to the basic robustness of this parameter choice. 3. Lambda: there is a typo in equations (2) and (3) of the manuscript which we apologize for: $\tilde{r}_{kt}$ should be equal to $\lambda \cdot R_{task} \cdot \sum...$ where $R_{task}$ is the nonzero reward per step in the task, under which $\lambda$’s range is bounded between 0 and 1. We found that performance was relatively insensitive to lambda between 0.2 and 0.5, as shown by new experiments across tasks (Rebuttal Fig. 
2) with robust performance. Altogether, ConSpec performance was quite stable for these hyperparameters and did not require careful tuning per task. In sum, this Reviewer brought up a number of issues that we believe actually highlight the robust strengths of ConSpec rather than detract from it. We hope that our responses here, including the new experiments (Rebuttal Fig. 1-5), address the original concerns enough to raise their score, and we are happy to address any insufficiencies. --- Rebuttal Comment 1.1: Comment: Dear Reviewer t9VM, We wanted to say in advance our heartfelt thank you's for the time and effort you've put into both the reviews that have passed as well as the current discussion period. We know that you all have your own papers that you have to deal with during this busy time, and sincerely appreciate the time you've taken to spend on ours. We are so excited about this paper and its findings, so we are very much looking forward to the upcoming discussions with you! Please don't hesitate to ask us any questions big or small, and we are happy to provide any further clarifications.
Rebuttal 1: Rebuttal: **Message to all reviewers:** Thank you for the comments, questions, and suggestions. In this global response we will address those issues highlighted by multiple reviewers. We will respond to individual reviewer comments in the individual responses. First, we want to summarize and clarify the central, novel contribution of our work. Whereas most contemporary RL algorithms try to model or fit values, returns, or transitions for all the states in the environment, the present manuscript investigates an alternative learning strategy: that of retrospectively honing in on a few critical steps, courtesy of a novel contrastive loss. ConSpec tests the hypothesis that **the ability to recognize critical steps (and ignore all other states) allows rapid long-term credit assignment and robust generalization.** In support of this hypothesis, we have demonstrated strong performances by ConSpec in a wide variety of different task environments, discrete and continuous tasks, gridworlds, 2D Atari including Montezuma’s Revenge, and 3D navigation. We managed to isolate particular weaknesses of other algorithms for long-term credit assignment including RUDDER, Synthetic Returns, and Temporal Value Transport (TVT) in situations with multiple contingencies, which are common in real life, and which ConSpec overcomes easily thanks to its retrospective credit assignment. Moreover, we even managed to **isolate particular weaknesses** of other RL strategies like RND exploration, and Decision Transformers that ConSpec can overcome, demonstrating the uniqueness of ConSpec as well as its potential complementarity to these other algorithms. Thus, our results show that retrospectively honing in on critical steps is a potentially powerful thing to do especially for agents learning from scratch (to learn the world a bit at a time). 
**New experiments**: In the individual Reviewer questions below, the Reviewers brought up a variety of interesting issues that have helped us to think more deeply about this approach. Motivated by these points, we conducted a number of small experiments. We found that all the worries that the Reviewers brought up are easily mitigated by ConSpec. **Our new experiment figures are in the attached PDF, and include:** - Demonstrating robustness to hyperparameter choices - A version of ConSpec with stop-gradient on prototypes - More compact ConSpec with fewer than necessary prototypes - A new baseline RAD (stronger than CURL) **Success v. Failure**: Several Reviewers brought up issues related to the definition of successes (Reviewers t9VM, LNh7, fj2y), so we will give a summary here (and we have also answered their individualized questions below). As discussed in Appendix section A.3, we used only 2 simple definitions of success across all of our tasks, and in each case, we tied “success” to reward in a natural way. In the first definition, success is based on whether a reward is achieved at all, which works well for sparse reward environments, but this does not work for dense rewards. Thus, the second definition defines success as the top-k highest rewarded trajectories encountered so far, which means that the trajectories defined as successful are updated as an agent improves. As such, ConSpec uses the reward signal in a comparative and relativistic way, giving it the ability to hone in on key steps associated with the best performance achieved so far, a very efficient approach. This is a special strength of ConSpec. In light of our findings, we believe that ConSpec is a **robust and versatile plug-and-play** module that can improve credit assignment and generalization in RL. Moreover, it is a **novel solution to traditional RL problems**. 
Most notably, it is a novel application of **contrastive** learning in RL, distinct from other contemporary attempts, and it takes a **retrospective** strategy to credit assignment, discovering prototypes of a few critical steps that have the capacity to both help assign credit over long time horizons and generalize, as we show with strong performance in widely varied experimental settings. All in all, we believe that ConSpec provides a novel, well-validated approach to improving difficult credit assignment in RL, and would benefit the NeurIPS community in this venue. We hope that the reviewers agree, particularly given our new experiments, and see fit to raise their scores. Pdf: /pdf/5e2532c5446402e672b6197991781987e2841062.pdf
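The adaptive success/failure split for dense-reward tasks summarized in this global response (top-k returns so far count as successes, trajectories around or below the median as failures) could be sketched like this; the exact bookkeeping is an assumption for illustration, not the authors' implementation:

```python
def split_success_failure(trajectories, returns, k=8):
    # Successes: the k highest-return trajectories seen so far.
    # Failures: trajectories at or below the median return.
    order = sorted(range(len(returns)), key=lambda i: returns[i], reverse=True)
    successes = [trajectories[i] for i in order[:k]]
    median = sorted(returns)[len(returns) // 2]
    failures = [trajectories[i] for i in range(len(returns)) if returns[i] <= median]
    return successes, failures
```

Because the top-k set is recomputed as training proceeds, the bar for "success" rises automatically as the agent improves, which is the relativistic behavior described above.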
NeurIPS_2023_submissions_huggingface
2023
Coop: Memory is not a Commodity
Accept (spotlight)
Summary: The authors propose to consider memory fragmentation when using tensor rematerialization on dynamic computation graphs. With three new memory management methods, sliding window algorithm, cheap tensor partitioning, and recomputable in-place, the authors enhance the checkpointing method with lower computation overhead and memory consumption. Strengths: * The paper is well-organized and easy to follow. * The perspective is new. Considering memory fragmentation can enhance the gradient checkpointing. * The results are great across the eight neural networks. * The implementation is available in the supplementary material. Weaknesses: **Major issues** 1. Table 1. The cost density depends on the hardware accelerators. It is better to introduce the hardware specifications. The authors can also mention the arithmetic intensity, which is independent of the hardware. Also, in a single neural network, different operators belonging to the same category may have different cost densities (e.g., different kernel sizes in convolution layers). 2. In cheap tensor partitioning, the authors allocate tensors from the leftmost and rightmost of the memory pool. Is it possible to have more ports for the memory pool? What if there are 3 levels of cost densities in the computation graph? In general computation graphs, the cost density or the arithmetic intensity of operators has a continuous distribution. Is it a good idea to classify them into two categories, “cheap” and “expensive” tensors? 3. The authors miss one of the most important evaluation metrics, the end-to-end training time under different memory budgets, which is more critical than the computation overhead. 4. The authors mention that “static graph methods are beyond the scope of this paper.” I am aware that symbolic and eager executions have distinct features. However, I have several concerns about that. 
* Among the eight computation graphs in the experiments, six have static structures, while only two have dynamic ones. It is better to discuss more on the dynamic structures. * The proposed method leverages the global information that several tensors are unevictable. If the computation graph is fully dynamic, we have no idea which tensors will be used in the future. **Minor issues** 1. Line 180. Please explain the N. I assume that it is the number of tensors. 2. A period after “Compute Overhead and Memory Budget” in Line 287, “Search Latency” in Line 303, and “Memory Fragmentation” in Line 315. 3. The caption of Figure 5. y-axes -> y-axis. 4. Results on Search Latency. The authors may list other statistics, e.g., min/max value, standard deviation, to demonstrate the small variations of the proposed method. 5. The authors mainly use the term “memory allocation” in the paper. It is better to replace it with “memory management” since there are other memory operations, such as eviction. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the Weaknesses, especially the major issues. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do not discuss limitations. I think the proposed COOP has the inherent limitations of the checkpoint method on the dynamic computation graphs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
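As a rough illustration of the two-port allocation the review questions in major issue 2, a bump allocator that fills "cheap" tensors from the left end of the pool and "expensive" tensors from the right might look like this (a sketch under assumed semantics, not Coop's actual allocator):

```python
class TwoPortPool:
    # Bump allocator with two ports: cheap tensors grow from the left edge,
    # expensive tensors from the right, keeping each class contiguous.
    def __init__(self, capacity):
        self.capacity = capacity
        self.left = 0          # next free offset from the left
        self.right = capacity  # next free offset from the right

    def alloc(self, size, cheap):
        if self.right - self.left < size:
            return None        # the two ends would collide: pool exhausted
        if cheap:
            offset = self.left
            self.left += size
        else:
            self.right -= size
            offset = self.right
        return offset
```

Evicting all cheap tensors then frees one contiguous region at the left end rather than scattered holes, which is the fragmentation benefit the paper targets.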
Rebuttal 1: Rebuttal: We thank the reviewer’s appreciation for our work, and we address the questions below. We use W and L to denote major weakness and limitation correspondingly. > **W1:** It is better to introduce the hardware specifications about cost density. The authors can also mention the arithmetic intensity. Different operators belonging to the same category may have different cost densities. We agree that the cost density depends on the hardware accelerators. The cost density in Table 1 is measured on NVIDIA GeForce RTX 2080 Ti. We will mention it in the revised manuscript. Arithmetic intensity is the ratio of FLOPs to the sum of input and output sizes. In the revised manuscript, we will define a similar concept, FLOPs density (the ratio of FLOPs to output size), to match the heuristic of recomputation, and will show FLOPs densities and cost densities of various operators (e.g., convolutions with different kernel sizes). Some preliminary results are shown in Table 1 of the attached pdf. > **W2.1:** In general computation graphs, the cost density or the arithmetic intensity of operators has a continuous distribution. Is it a good idea to classify them into two categories, “cheap” and “expensive” tensors? We agree with the reviewer that cost density continuously changes. We divided the operations into two categories since we observed an obvious jump in the cost densities of the commonly used operations, as shown in Figure 1 of the attached pdf. Similar ideas of dividing the operations into two categories have been evidenced in Selective Activation Recomputation (SAR). SAR found that tensors generated by some operations (e.g., softmax and dropout) occupy 70% of the memory in GPT-3 while these operations contribute only 2.7% of FLOPs. > **W2.2:** Is it possible to have more ports for the memory pool? It is possible to have more than two ports for the memory pool, where the operations are divided into two categories given the reasons in W2.1. 
Figure 2(a)(1) shows an example of a 4-port memory pool with two 2-port sub-pools. The tensors are allocated to the two sub-pools alternately. If there is no space left in one sub-pool, the tensors will be allocated to the other one. Figure 2(a)(2) shows an 8-port memory pool. Thus, we can construct memory pools with 16 ports, 32 ports, etc., in the same way. If the number of ports is odd (2k+1), the memory pool most likely consists of k 2-port sub-pools and one 1-port sub-pool. This leads to insufficient memory usage, as shown in Figure 2(a)(3). The N-port memory pools can reduce the evictions of tensors that are contiguous in the computational graph (thus reducing the heuristic cost) but will increase memory fragmentation. We investigated this approach at the beginning of this work but did not apply it, since it brought no significant improvement in overall performance. Some experimental results of using 3-port and 4-port memory pools are shown in Figure 2(b). We will add these discussions to the revised manuscript. > **W3:** the end-to-end training time and the compute overhead. We followed Checkmate and DTR in using the metric of compute overhead, which can be regarded as the normalized training time (the end-to-end training time with rematerialization divided by the end-to-end training time without rematerialization). > **W4.1:** It is better to discuss more on the dynamic structures. We agree that online methods such as DTR and Coop are useful for optimizing NNs with dynamic structures, since offline optimization methods are not applicable to these dynamic NNs. We also studied multiple static NNs since (1) there are more static NNs than dynamic NNs, and (2) online methods also show advantages over offline methods when used to optimize static NNs. As noted in Reviewer BV8L’s comment, one popular offline method, Checkmate, is limited to running on small-scale networks. 
Our experiments for Question 2 of Reviewer BV8L also show that Checkmate (with the most advanced commercial solver, Gurobi) fails to find the best solution for ResNet-50 within a 14-hour time limit. This explains why PyTorch and TensorFlow only provide manual recomputation instead of a theoretically optimal method such as Checkmate. Given the small search latency of online methods such as DTR and Coop, we expect that applying online methods to optimize the training of both dynamic and static NNs will be beneficial. We will add these discussions to the revised manuscript. > **W4.2:** Coop leverages the global information that several tensors are unevictable. The unevictable tensors in Coop are the parameters and buffers in the DNN. These tensors are available from the initialization of the network (e.g., the constructor of nn.Module in PyTorch and tf.keras.Model in TensorFlow 2.0). Therefore, Coop only uses information that is already available for optimization. > **L1:** Coop has the inherent limitations of the checkpoint method on the dynamic computation graphs. We agree with the reviewer that Coop has some inherent limitations of online methods. We will add the discussions (as in the response to Reviewer 2’s L1) to the revised manuscript. **Minor issues** > **1.** The meaning of N at Line 180. The meaning of N is explained at Line 157. We will also explain it at Line 180 in the revised manuscript to make it clearer. > **2.** List min/max value, standard deviation of search latency. Thanks for your valuable advice. We calculated these statistics (shown in the following table) and found that they are very useful for demonstrating the small variations of Coop. | | Coop | DTE | DTR | | ------------- | ------------- | ---- | ---- | | Min | 0.24 | 0.20 | 0.24 | | Max | 18.6 | 13975.0 | 28019.0 | | Std | 2.81 | 2349.1 | 4863.5 | > **3.** Missing periods, "y-axes" -> "y-axis" and "memory allocation" -> "memory management" We will update the manuscript accordingly. 
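For concreteness, the two ratios defined in the W1 response above (arithmetic intensity and FLOPs density) can be sketched in a few lines. The convolution FLOPs model and the layer shape below are illustrative assumptions for a stride-1, same-padded convolution, not measurements from the paper or the attached pdf:

```python
# Sketch of the two ratios from the W1 response, for a 2D convolution.
# FLOPs model: 2 * K * K * C_in * C_out * H * W (one multiply-accumulate
# counted as 2 FLOPs). Shapes are illustrative, not measured values.

def conv2d_stats(h, w, c_in, c_out, k):
    """Return (flops, input_size, output_size) for a stride-1, 'same' conv."""
    flops = 2 * k * k * c_in * c_out * h * w
    input_size = h * w * c_in      # element counts, ignoring weights for simplicity
    output_size = h * w * c_out
    return flops, input_size, output_size

def arithmetic_intensity(flops, input_size, output_size):
    # FLOPs divided by the summation of input and output sizes
    return flops / (input_size + output_size)

def flops_density(flops, output_size):
    # FLOPs divided by output size (the ratio matching the recomputation heuristic)
    return flops / output_size

flops, i, o = conv2d_stats(h=56, w=56, c_in=64, c_out=64, k=3)
print(flops_density(flops, o))        # 2 * 3 * 3 * 64 = 1152.0 FLOPs per output element
print(arithmetic_intensity(flops, i, o))  # 576.0
```

For a k×k convolution the FLOPs density reduces to 2·k²·C_in per output element, whereas an element-wise operation does on the order of one FLOP per output element, which illustrates the jump between the "expensive" and "cheap" categories.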
--- Rebuttal Comment 1.1: Title: Thank you for the response Comment: Thanks to the authors for their response, especially the attached tables and figures. Most of my concerns have been addressed. I have only one question regarding W2.1. > We divided the operations into two categories since we observed an obvious jump in the cost densities of the commonly used operations, as shown in Figure 1 of the attached pdf. > Coop does not make any assumptions about the structures of the neural network and can be universally implemented in any deep learning framework. The first statement is from the authors' response, and the second one is the last sentence of Section 5. I am aware that the field of MLSys focuses on both (1) general optimization without assumptions on model and hardware, and (2) optimizations oriented toward a specific workload. The authors may define their contributions with a consistent claim. Most operators in NNs can be simply classified into two categories by their complexity: linear or less (e.g., element-wise ops) and larger than linear (e.g., matmul). I think this assumption makes sense and is widely adopted. Thank you for your great work. --- Reply to Comment 1.1.1: Title: Thanks for the constructive feedback Comment: Thanks for the constructive feedback. We fully agree with the reviewer that most operators in neural networks can be classified into two categories by their complexities, i.e., linear/sub-linear and super-linear. This provides a theoretical basis for our experimental results (Figure 1 in the attached pdf). Therefore, Coop works under an implicit assumption that super-linear complexity translates to high cost density. Even though this assumption applies to most known neural networks, it is not theoretically guaranteed because of the constant terms. We will add the discussion of this assumption to Section 3.4 and update the statement that "Coop does not make any assumptions about the structures of the neural network" in Section 5 accordingly. 
Summary: Tensor rematerialization trades memory for recomputation. Prior tensor rematerialization methods do not consider the memory fragmentation problem of the memory system used in deep learning frameworks, which makes them evict unnecessary tensors. The authors of this paper propose a memory-system-aware rematerialization method called Coop to reduce memory fragmentation. Experiments show that Coop can achieve up to 2x memory saving compared with prior works. Strengths: 1. The authors propose a new method based on a sliding window algorithm (sec 3.3) to alleviate the fragmentation problem of rematerialization. 2. The experiments show that this method is effective and can greatly reduce the compute overhead for a given memory ratio compared with prior works. Weaknesses: 1. The requirement on the underlying memory allocator might limit the applicability of the proposed method. This paper makes an assumption about the memory allocator used by deep learning systems (discussed in section 2.1): the underlying memory allocator must be able to **merge** freed chunks if they are contiguous. There are other kinds of memory allocators (e.g., ones that record a mapping from chunk size to a list of free chunks of that size) that do not have this feature, so the proposed method cannot be (directly) used for deep learning systems with these memory allocators. 2. More discussion on the effectiveness of the page-table-based memory system is needed. Similar to the CPU memory system, the GPU memory system also employs a page table to manage its memory. Thus, we can free discontiguous memory chunks and allocate a new one with the sum of the sizes of the freed chunks. From the virtual memory's view, the allocated memory is contiguous. Thus, there is no fragmentation problem as discussed in this paper, and the prior works can be directly used. 
The good news is that, since CUDA 11.2, we can directly use the memory pool [1] implemented in the CUDA runtime/driver to enjoy this feature. Thus, I am interested in whether the prior works still have the fragmentation problem if they use the memory pool implemented in CUDA. We can use the following program to validate the page-table-based GPU memory system. In the program, we allocate three 7 GB chunks called p1, p2, and p3. Then, we free p1 and p3 (they are not contiguous). At last, we successfully allocate a 14 GB chunk called p4.

```cpp
#include <stdio.h>
#include <cuda.h>
#include <unistd.h>

#define CHECK(e) {auto s = e; if (s != cudaSuccess) printf("CUDA error: %s", cudaGetErrorString(s));}
#define GB(x) ((x) * 1024ull * 1024 * 1024)

int main() {
    void *p1, *p2, *p3, *p4;
    CHECK(cudaMalloc(&p1, GB(7)));
    CHECK(cudaMalloc(&p2, GB(7)));
    CHECK(cudaMalloc(&p3, GB(7)));
    printf("first allocation done\n");
    printf("p1=%p\n", p1);
    printf("p2=%p\n", p2);
    printf("p3=%p\n", p3);
    // sleep 5 seconds; we can check the memory usage in `nvidia-smi` during this time
    sleep(5);
    printf("free p1 and p3, allocate p4\n");
    CHECK(cudaFree(p1));
    CHECK(cudaFree(p3));
    CHECK(cudaMalloc(&p4, GB(14)));
    printf("p4=%p\n", p4);
    printf("successfully allocated p4\n");
    CHECK(cudaFree(p2));
    CHECK(cudaFree(p4));
}
```

Running the above program on an RTX 3090 (24 GB memory) gives:

```
first allocation done
p1=0x7f4a68000000
p2=0x7f48a8000000
p3=0x7f46e8000000
free p1 and p3, allocate p4
p4=0x7f4528000000
successfully allocated p4
```

[1] https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__MEMORY__POOLS.html Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Do the prior works have the fragmentation problem if they use the memory pool implemented in CUDA? (See the weaknesses part.) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The proposed method relies on the underlying memory allocator being able to merge contiguous chunks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer’s appreciation for our work. Please see below our responses to your comments. We use Q, W, L to denote questions, weaknesses, and limitations, respectively. > **W1&L1:** The underlying memory allocator must be able to merge the freed chunks if they are contiguous. The reviewer's understanding is correct that Coop requires the memory allocator to be able to merge freed chunks if they are contiguous. All DL frameworks have implemented 'merge', e.g., the `try_merge_blocks` method in PyTorch's caching allocator and the `Merge` method in TensorFlow's BFC allocator. Although 'merge' can only be used to combine sub-blocks within the same block, it is common to request a single block of the size of the memory budget when rematerialization is enabled (so every sub-block is from the same block). Therefore, we expect this limitation will not hinder the wide application of Coop in most DL frameworks. We will add these discussions to the revised manuscript. > **W2&Q1:** More discussion on the page-table-based memory system is needed. We are deeply appreciative of the reviewer's insightful comments. 
We modified the attached code to use the memory pool implemented in CUDA:

```cpp
#include <stdio.h>
#include <cuda_runtime_api.h>
#include <cuda.h>
#include <unistd.h>
#include <cassert>

#define CHECK(e) {auto s = e; if (s != cudaSuccess) printf("CUDA error: %s", cudaGetErrorString(s));}
#define GB(x) ((x) * 1024ull * 1024 * 1024)

int main() {
    {
        // assert that the current device supports memory pools
        int value = 0;
        CHECK(cudaDeviceGetAttribute(&value, cudaDevAttrMemoryPoolsSupported, 0));
        assert(value == 1);
    }
    void *p1, *p2, *p3, *p4;
    cudaStream_t stream;
    CHECK(cudaStreamCreate(&stream));
    CHECK(cudaMallocAsync(&p1, GB(2), stream));
    CHECK(cudaMallocAsync(&p2, GB(2), stream));
    CHECK(cudaMallocAsync(&p3, GB(2), stream));
    printf("first allocation done\n");
    printf("p1=%p\n", p1);
    printf("p2=%p\n", p2);
    printf("p3=%p\n", p3);
    printf("free p1 and p3\n");
    CHECK(cudaFreeAsync(p1, stream));
    CHECK(cudaFreeAsync(p3, stream));
    // Sleep 1 second to imitate the time between free and alloc.
    // We can sleep an arbitrary time here and the result is the same.
    sleep(1);
    printf("allocate p4\n");
    CHECK(cudaMallocAsync(&p4, GB(2.1), stream));
    printf("p4=%p\n", p4);
    if (p4 != nullptr) {
        printf("successfully allocated p4\n");
        CHECK(cudaFreeAsync(p4, stream));
    } else {
        printf("failed to allocate p4\n");
    }
    CHECK(cudaFreeAsync(p2, stream));
    CHECK(cudaStreamDestroy(stream));
}
```

We ran the program on an NVIDIA GeForce RTX 2080 with 8 GB of memory, and the chunk p4 **could not** be successfully allocated. This indicates that the memory pool provided by CUDA does not fully utilize the page table to optimize memory usage. We further conducted three more experiments: * **Experiment 1:** Replace `sleep(1)` with `cudaStreamSynchronize(stream)`. According to the CUDA documentation, `cudaStreamSynchronize` returns all free memory blocks in the CUDA memory pool to the OS. **Result:** The allocation of `p4` succeeds. 
* **Experiment 2:** Free `p1` and `p2` (or `p2` and `p3`) instead of `p1` and `p3`, so that the freed memory is contiguous in the virtual memory space. **Result:** The allocation of `p4` succeeds. * **Experiment 3:** Based on Experiment 2, change the size of `p4` from 2.1GB to 4GB and 4.1GB, so that the desired memory size is exactly equal to or slightly larger than the freed memory size. **Result:** The allocation of `p4` succeeds and fails, respectively. These three experiments suggest that the memory pool provided by CUDA also caches freed memory blocks, just as the native allocators of deep learning frameworks do. Based on these experimental results, we believe the prior works have the fragmentation problem even if they use the memory pool implemented in CUDA. However, we fully agree that optimizing memory allocation by leveraging the page table to reduce memory fragmentation is a promising direction. Studies have proven that memory fragmentation on CPUs can be reduced by using similar ideas [1,2]. We believe we can apply the same idea to optimizing the general memory allocation (not only recomputation) of deep learning systems if we can manipulate the underlying operations of the GPU driver (rather than treating it as a proprietary NVIDIA-controlled black box). We will add these discussions to the revised manuscript. [1] Maas M, Andersen D G, Isard M, et al. Learning-based memory allocation for C++ server workloads. Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems. 2020: 541-556. [2] Park C H, Cha S, Kim B, et al. Perforated page: Supporting fragmented memory allocation for large pages. 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2020: 913-925. --- Rebuttal Comment 1.1: Comment: Thanks for the informative experiments on the memory pool provided by the NVIDIA runtime/driver. I have no other questions. Good job!
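To make the 'merge' behaviour discussed in this thread concrete, here is a toy free-list allocator that coalesces adjacent freed chunks, in the spirit of PyTorch's `try_merge_blocks` and TensorFlow's BFC `Merge`. This is a simplified sketch for illustration only, not code from Coop or either framework, and the class and method names are made up:

```python
# Toy first-fit free-list allocator that coalesces adjacent freed chunks.
# Simplified sketch, not framework code. Reproduces the 7+7+7 / free-two /
# allocate-14 scenario from the reviewer's CUDA experiment.

class ToyAllocator:
    def __init__(self, capacity):
        self.free = [(0, capacity)]  # sorted list of (offset, size) free chunks

    def alloc(self, size):
        for i, (off, sz) in enumerate(self.free):
            if sz >= size:
                # carve the request out of the first fitting free chunk
                if sz == size:
                    self.free.pop(i)
                else:
                    self.free[i] = (off + size, sz - size)
                return off
        return None  # fragmentation: no single chunk is large enough

    def free_chunk(self, off, size):
        self.free.append((off, size))
        self.free.sort()
        # coalesce chunks that are adjacent in the address space
        merged = [self.free[0]]
        for o, s in self.free[1:]:
            po, ps = merged[-1]
            if po + ps == o:
                merged[-1] = (po, ps + s)
            else:
                merged.append((o, s))
        self.free = merged

pool = ToyAllocator(24)
p1, p2, p3 = pool.alloc(7), pool.alloc(7), pool.alloc(7)
pool.free_chunk(p1, 7); pool.free_chunk(p3, 7)  # non-contiguous: p2 sits in between
print(pool.alloc(14))  # None: 14 free units exist, but not in one chunk
pool.free_chunk(p2, 7)                           # now all three chunks merge into one
print(pool.alloc(14))  # 0: the merged chunk satisfies the request
```

Without the coalescing step in `free_chunk`, the second `alloc(14)` would also fail, which is exactly the allocator capability Coop depends on.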
Summary: This paper proposes an optimization framework called Coop to address the severe memory fragmentation that is overlooked by prior tensor rematerialization works. Coop designs a sliding window algorithm to determine which tensors to evict, guaranteeing that the freed memory is contiguous and available for a new tensor. Further, Coop adopts a cheap tensor partitioning method to rearrange tensors in the memory layout based on cost density, and a memory reuse mechanism, namely recomputable in-place, for in-place operations. These ideas are combined to reduce additional tensor rematerialization cost. Strengths: 1. This paper provides a framework to solve the bottleneck of the high memory fragmentation rate in tensor rematerialization schemes. The proposed sliding window algorithm, cheap tensor partitioning mechanism, and recomputable in-place method reduce the rematerialization cost and improve memory utilization. 2. The problem formulation and writing make the paper easy to understand. Weaknesses: 1. It would be better to give an algorithm to describe the framework comprehensively. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. From the figures in the evaluation part, the proposed Coop is not always optimal in search latency. Please analyze the underlying reasons. 2. Is the additional sliding window search algorithm needed after recomputable in-place and cheap tensor partitioning in Figure 1? If not, how can the evicted tensors be determined with minimum cost? It would be better to provide an algorithm description. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: 1. Is there any trade-off for implementing the framework Coop? 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer’s appreciation for our work. Please see below our responses to your comments. We use Q, W, L to denote questions, weaknesses, and limitations, respectively. > **W1:** It would be better to give an algorithm to describe the framework comprehensively. We greatly appreciate your valuable comments. The pseudo-code algorithm for Coop is displayed below, with the core logic within the `allocate` function. The implementations of methods such as `free_and_merge_block` are omitted since they are straightforward and not directly related to the core logic. The pseudo-code will be added to Section 3.2 of the revised manuscript.

```python
def evict(tensor):
    # Release memory block and merge if possible
    free_and_merge_block(tensor.addr)
    tensor.addr = None

def rematerialize(tensor):
    if tensor.addr is not None:
        return
    for x in tensor.producer_op.inputs:
        rematerialize(x)
    run(tensor.producer_op)

def run(op):
    output_size = infer_size(op)
    output = allocate(output_size, op)
    # ...
    if is_inplace_mutation(op):
        op.inputs[0].addr = None

def allocate(size, producer_op):
    if is_inplace_mutation(producer_op):
        # Apply recomputable in-place
        # For simplicity, we assume in-place mutation operations have only one input tensor.
        # The addr of the input tensor will be set to None at the end of the `run` method.
        addr = producer_op.inputs[0].addr
    else:
        block = find_free_block_larger_than(size)
        if block is None:
            # Apply sliding window algorithm
            evict(sliding_window_search(size))
            block = find_free_block_larger_than(size)
        # Apply cheap tensor partitioning
        if is_expensive(producer_op):
            addr = block.left_addr
        else:
            addr = block.right_addr - size
    return Tensor(addr, size, producer_op)
```

> **Q1:** Why is Coop not always optimal in search latency? We provided an explanation in Line 310. Given the structure of ResNet-50, evicting a single resident tensor might be sufficient to allocate a new one. 
In this case, both DTR and DTE could find the best tensors to evict after one iteration. Therefore, the search latencies depend on the engineering implementation rather than the strategies themselves. > **Q2:** Is the additional sliding window search algorithm needed after recomputable in-place and cheap tensor partitioning in Figure 1? If not, how can the evicted tensors be determined with minimum cost? The reviewer's understanding is correct that the memory layout is first optimized during tensor allocation by using recomputable in-place and cheap tensor partitioning. The sliding window algorithm is then used to find the best tensors to evict given the current memory layout. The three modules are used flexibly throughout the whole training process; the order of the three modules in Figure 1 does not reflect the temporal order in which they are used. We will modify the caption of Figure 1 and add the pseudo-code in the answer to W1 to the revised manuscript to make the whole pipeline clearer. > **L1:** Is there any trade-off for implementing the framework Coop? Coop comes with its own memory pool, so it cannot be used simultaneously with CUDA's built-in memory pool (the stream-ordered memory allocator). The advantage of using the stream-ordered memory allocator is that multiple programs using it can share the same memory pool. However, all existing deep learning frameworks use their own memory pools instead of CUDA's built-in memory pool by default to achieve better efficiency and flexibility. Additionally, Coop is an online method, so it inherits the pros and cons of online methods. It can provide an efficient solution for finding the best tensors to evict within negligible time. However, the solutions may not match the optimal solutions found by offline methods such as Checkmate, even though these offline methods usually require additional solvers and take several hours or multiple days. 
We will add these discussions about limitations to the revised manuscript. --- Rebuttal Comment 1.1: Comment: The comments addressed my concerns. Thank you very much!
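As an illustration of the sliding-window idea referenced by `sliding_window_search` in the pseudo-code above (tensors ordered by memory address are scanned for a contiguous run to evict), here is a simplified sketch. The explicit function signature and the non-negative per-tensor cost model are our assumptions for illustration, not the authors' implementation:

```python
# Simplified sketch of a sliding-window search over tensors laid out
# contiguously in memory: find the cheapest contiguous run of evictable
# tensors whose total size covers the requested allocation.
# Costs are assumed non-negative, so the optimum is always a minimal
# covering window. Illustration only, not Coop's actual implementation.

def sliding_window_search(tensors, request):
    """tensors: list of (size, heuristic_cost) ordered by memory address."""
    best, best_cost = None, float("inf")
    left, win_size, win_cost = 0, 0, 0.0
    for right in range(len(tensors)):
        win_size += tensors[right][0]
        win_cost += tensors[right][1]
        # shrink from the left while the window still covers the request
        while win_size - tensors[left][0] >= request:
            win_size -= tensors[left][0]
            win_cost -= tensors[left][1]
            left += 1
        if win_size >= request and win_cost < best_cost:
            best, best_cost = (left, right), win_cost
    return best  # index range of the tensors to evict, or None

# Request 4 units; the cheapest covering window is the two middle tensors.
tensors = [(2, 9.0), (2, 1.0), (2, 1.0), (2, 9.0)]
print(sliding_window_search(tensors, 4))  # (1, 2)
```

Because the window only ever moves forward, the search is linear in the number of resident tensors, which matches the negligible search latencies reported for Coop.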
Summary: This paper considers the rematerialization problem for DNN training and studies it from the perspective of memory fragmentation. Strengths: Please see the "Questions" section. Weaknesses: Please see the "Questions" section. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - I think this paper is interesting in the sense that it raises and studies a problem that could affect the performance of other rematerialization algorithms in the literature. The problem of memory fragmentation is not taken into account in most of the DNN memory optimization papers in the literature. - The comparisons presented in the paper seem to be limited to heuristic-based methods only. I think a comparison against the Checkmate method ([11]) would make the results more interesting. The Checkmate method is known to return the optimal solution since it is an exact method. However, it is also known that it does not scale to large-scale graphs. Perhaps numerical experiments for some small-scale graphs could still be valuable, since Checkmate as a baseline would represent the optimum under the assumption that memory fragmentation is not an issue. This, in my opinion, would make the contributions of this paper clearer. - I haven't read the work of [21]. Given what the authors discuss about the DTE method, is this statement in line 64 true: "We argued for the first time that existing tensor rematerialization methods overlook the memory system during optimization and wrongly assume that the memory in DL systems is a fungible commodity"? Another statement similar to this is "To the best of our knowledge, Coop is the only tensor rematerialization scheme that fully bypasses the incorrect assumption of DL memory system." Minor - It is not immediately clear what is meant by "search latency" the first time it is mentioned in the text. It is explained later in the text; perhaps that explanation could be moved up to where it is first mentioned. 
- This sentence in line 70 is hard to follow: "The properties of memory allocators in deep learning frameworks are considered to reduce the heuristic ..." Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Please see the "Questions" section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer’s appreciation for our work. Please see below our responses to your comments. > **Q2:** Comparison between Coop and Checkmate We agree that a comparison with Checkmate could further demonstrate Coop's contributions. Checkmate's publicly available code includes two git branches, namely mlsys20_artifact and master. The mlsys20_artifact branch is designated for replicating experiments from their paper. As the reviewer mentioned, these experiments do not account for memory fragmentation. What's more, Checkmate in this branch cannot generate executable networks (as evident in the tests/test_execution.py file within the mlsys20_artifact branch). Consequently, it cannot be used to investigate the impact of memory fragmentation. The master branch is capable of generating executable TensorFlow 2 Keras computational graphs. Running these actual Keras graphs should help us understand the effects of memory fragmentation on Checkmate. However, in the case of ResNet-50, we ran Checkmate for 14 hours without obtaining any results for a single designated memory budget, despite leveraging the advanced MILP solver, Gurobi. Furthermore, attempts to optimize U-Net using Checkmate yielded nearly identical networks regardless of the budget value specified, rendering the collected data nonsensical. We will continue these efforts, aiming to include a Coop and Checkmate comparison in the revised manuscript's appendix. Additionally, reviewers can refer to the comparison experiments between DTR and Checkmate in [1]. These experiments also do not account for memory fragmentation and do not generate executable networks. They reveal that DTR and Checkmate exhibit comparable performance across three networks (VGG-16, MobileNet, U-Net). In our experiments considering memory fragmentation, Coop outperforms DTR significantly (Figure 4), and DTR displays pronounced memory fragmentation issues (Figure 5). 
We believe this can serve as a supplementary rough reference. > **Q3:** Is Coop the only tensor rematerialization scheme that fully bypasses the incorrect assumption of the DL memory system? The heuristic in DTE encourages evictions of tensors that are adjacent to free memory blocks. However, DTE is still a greedy algorithm that runs in a loop to find the best tensor to evict until the next tensor can be successfully allocated. This brings in redundant and discontinuous evictions, as a limitation of assuming that the memory in DL systems is a fungible commodity. In comparison, Coop optimizes 'tensor allocation' and uses the sliding window algorithm to produce a contiguous block. Therefore, we claimed that Coop is the first to fully bypass this incorrect assumption. **Minor points:** > **1.** It is not immediately clear what is meant by "search latency" the first time it is mentioned in the text. It is explained later in the text, perhaps that explanation could be moved up to where it's mentioned first in the text. We will define search latency at the position of its first occurrence. > **2.** This sentence in line 70 is hard to follow: "The properties of memory allocators in deep learning frameworks are considered to reduce the heuristic ..." We will rephrase this sentence as: The heuristic cost of tensor rematerialization is reduced by taking into account the properties of memory allocators, and the memory allocators are improved by considering the efficiency of different operations in tensor rematerialization. [1] Kirisame M, Lyubomirsky S, Haan A, et al. Dynamic tensor rematerialization. arXiv preprint arXiv:2006.09616, 2020. --- Rebuttal Comment 1.1: Comment: Thanks for the responses to my questions. I still believe that the insights of this paper on the memory fragmentation aspect of rematerialization are important and therefore I maintain a positive opinion about the work.
Rebuttal 1: Rebuttal: We thank the reviewers' appreciation and valuable advice for our work. Some new figures and tables are in the attached pdf file. Pdf: /pdf/b9ce83047ddc794d0eab3308d79c200efa121cd6.pdf
NeurIPS_2023_submissions_huggingface
2023
Structure of universal formulas
Accept (poster)
Summary: This paper studies function approximation in the spirit of the Kolmogorov-Arnold representation theorem. A definition of expressivity classes is made which distinguishes pointwise and uniform approximation. Function families with the form of neural networks are proposed and approximation results proven. Strengths: Definitions are systematic and arguments are clear. Weaknesses: As discussed at the bottom of p. 2, these constructions and arguments rely on infinite precision arithmetic and are badly behaved when restricted to finite precision. This is as expected for finite dimensional families which live in the classes G_n of section 2. The results are of mathematical interest but I do not see the interest for the machine learning community. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How could this line of work bear on or influence approximation theory which (like ref. [4]) is relevant for practical computation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *How could this line of work bear on or influence approximation theory which (like ref. [4]) is relevant for practical computation?* Thank you for this reasonable question. 1. You mention Kolmogorov-Arnold, but the big difference between KA and our results is that the KA-type models include very complicated, non-elementary functions that do not naturally occur in real computations. In contrast, we consider only models that are explicitly defined by a finite number of elementary functions and arithmetic operations. Such models can and do occur in practice. 2. The effects we discuss are formulated irrespective of the limit of infinite magnitude/precision, and may potentially occur in realistic models. For example, we identify some functional families as belonging to the class $\mathcal {G}_0\setminus\mathcal {G}_1$ of constrained models that can shatter arbitrarily large finite sets, or to the class $\mathcal G_1\setminus\mathcal G_2$ of models that can approximate functions on any finite set but not on the whole domain. It is conceivable that such deficiencies may occur in practice. You may call this conjecture a stretch, but practical interpretation of classical approximation theorems like Cybenko's [4] also involves various stretches. For example, Cybenko's theorem does not say anything about how many neurons, $10$ or $10^{10^{10}}$, are needed for a reasonable accuracy in a realistic problem. Rather, the value of this theorem is in showing that there are no fundamental constraints preventing single-hidden-layer networks from fitting generic functions; after that one just hopes that a moderate number of neurons is enough. Likewise, we see the main value of our results in showing that there are, or aren't, fundamental constraints present in a few natural functional families. 3. Moreover, our work can be viewed as clarifying the conditions and conclusions of classical theorems like Cybenko's. 
This classical theorem guarantees approximability in the limit of infinitely many neurons, but does not rule out the possibility that the same approximability can be achieved with finitely many neurons. Imagine that such a strengthening of Cybenko's theorem were true; wouldn't this have a profound effect on its interpretation in the context of ML? It turns out that this strengthening is not true in general for classical single-hidden-layer networks, but it does hold for relatively simple modified models. Our work sheds light on the conditions necessary for such models. 4. Our class $\mathcal G_1\setminus\mathcal G_2$ is particularly appealing for practical interpretation. Its elements represent models that can fit any function on any finite set but cannot fit general continuous target functions on the full domain $[0,1]$. This implies that such models are trainable but generalize poorly: for a generic continuous target, the deviation of the model from the target outside the training set does not decrease even if the size of the training set grows. In section 7 we give examples of sin-networks in this class, and our central conjecture is that general sin-networks with more than one hidden layer lie in this class. Now, our work was in fact partly motivated by an experimental observation that sin-networks tend to generalize poorly. Admittedly, we have not done any comprehensive experimental studies of this effect, but we typically observe that sin-networks tend to overfit more than, say, ReLU networks whenever they train well on a training set (see examples in our [general author rebuttal](https://openreview.net/forum?id=gmVoaAxB1R&noteId=1hE4zF1UCW)). We think that our theoretical results contribute to a better understanding of this effect. 
--- Rebuttal Comment 1.1: Title: Rebuttals 2 and 3 are inaccurate - Exact Rates are Known For Many Neural Network Classes Comment: Though it is true that Hornik and Cybenko proved qualitative universal approximation theorems, i.e. density theorems, modern formulations are quantitative (so one knows explicit upper bounds on the number of neurons required to approximate a function in a given class). For example: [1] provides optimal rates for (uniformly) continuous function approximation on $[0,1]^d$ by ReLU feedforward neural networks, with optimal extensions to feedforward networks with general activation functions in [2]. For general quantitative statements, the key search word is "constructive approximation", e.g. see the book of DeVore and Lorentz on the topic. Concerning the pointwise topology, exact rates for real-valued ReLU neural network *interpolation*, not approximation, are known (see [3]); for multi-variate ReLU neural networks see Lemma 20 in [4]; and for approximation in the pointwise topology, again for real-valued networks with sigmoidal activation function, see the main results of [5]; etc... [1] Shen, Zuowei, Haizhao Yang, and Shijun Zhang. "Optimal approximation rate of ReLU networks in terms of width and depth." Journal de Mathématiques Pures et Appliquées 157 (2022): 101-135. [2] Zhang, Shijun, Jianfeng Lu, and Hongkai Zhao. "Deep Network Approximation: Beyond ReLU to Diverse Activation Functions." arXiv preprint arXiv:2307.06555 (2023). [3] Vardi, Gal, Gilad Yehudai, and Ohad Shamir. "On the optimal memorization power of relu neural networks." arXiv preprint arXiv:2110.03187 (2021). [4] Debarnot, Valentin, and Ivan Dokmanic. "Small transformers compute universal metric embeddings." Journal of Machine Learning Research 24 (2023): 1-48. [5] Park, Sejun, et al. "Provable memorization via deep neural networks using sub-linear parameters." Conference on Learning Theory. PMLR, 2021. 
--- Reply to Comment 1.1.1: Title: Quantitative results and Unbounded precision Comment: **Quantitative results.** To clarify: we never claimed that all network approximation theorems are qualitative. What we wrote is that Cybenko's theorem (and modern quantitative results we know of) do not allow one to estimate, with reasonable accuracy, the network size required for specific real-world problems. None of the papers cited by Reviewer FCbX theoretically predicts the network size necessary to, say, reach accuracy 90% on ImageNet [1,2]. A typical modern quantitative result says something like "if the target function belongs to a Hölder/Besov/Korobov etc. space, then a network of size/width/depth $N$ can generally approximate it with error $O(N^{-a})$", with some (possibly optimal) exponent $a$. There is a big gap between such theorems and real-world applications, for multiple reasons. For example, the $O(..)$'s typically contain implicit large constants, making the bounds hardly useful for specific practical tasks. Four out of the five papers cited by Reviewer FCbX have these $O(..)$'s. Also, there is no simple way to assign a class like a specific Hölder space to a problem like ImageNet. There are, of course, other types of theoretical results (including some in the cited papers), but a significant application gap is present for all of them. This does not mean that these results are useless: their value is rather in revealing fundamental limitations and fundamental optimal designs of neural networks. We believe that our work shares this value. **Unbounded precision.** One of the papers cited by Reviewer FCbX - Park et al. (2021) - states that $O(N^{2/3})$ parameters are sufficient to memorize $N$ input-label pairs. 
Note that this result also, like our results, requires the parameters to have an unbounded precision: the $N$ input-label pairs carry at least $N$ bits of information, so the average amount of information per parameter is at least $N/O(N^{2/3})=\Omega(N^{1/3})\to \infty$. Moreover, the well-known standard and widely used bounds on the VC-dimension of neural networks with standard activations like ReLU [3,4] state that networks with $W$ parameters can shatter sets of size $\Omega(W^2)$ (see e.g. Theorem 3 in [4]). This again requires the precision of the weights to be unbounded since $W$ weights are used to store $\Omega(W^2)$ bits of information. So, unbounded parameter precision is a common and essential assumption in theoretical studies of neural networks. [1] https://www.image-net.org/ [2] https://paperswithcode.com/sota/image-classification-on-imagenet [3] M. Anthony and P. Bartlett. Neural network learning: theoretical foundations. Cambridge University Press, 1999 [4] Bartlett, P. L., Harvey, N., Liaw, C., & Mehrabian, A. (2019). Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. The Journal of Machine Learning Research, 20(1), 2285-2301. --- Rebuttal Comment 1.2: Title: reply Comment: Thanks to the authors for responding to my question. Taking it into account along with the interest shown by the other reviewers, I increased my score.
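The unbounded-precision point in the rebuttal above can be made concrete with the classical one-parameter family $x \mapsto \operatorname{sign}(\sin(\omega x))$, which shatters the points $x_m = 2^{-m}$ by encoding the $n$ labels in the binary digits of $\omega$, so that $\omega$ carries roughly $n$ bits of precision. This is a standard textbook construction, not code from the paper under discussion; a minimal sketch:

```python
import math

def shatter_omega(labels):
    # Encode binary labels y_1..y_n in the parameter omega = pi * (1 + sum 2^i y_i).
    # Note omega grows like 2^n, i.e. it carries ~n bits of precision.
    return math.pi * (1 + sum(2 ** i * y for i, y in enumerate(labels, start=1)))

def predicted_label(omega, x):
    # The one-parameter classifier x -> sign(sin(omega * x)), mapped to {0, 1}.
    return 0 if math.sin(omega * x) > 0 else 1

labels = [1, 0, 1, 1, 0, 0, 1, 0]                      # any labeling of 8 points
omega = shatter_omega(labels)
points = [2.0 ** -m for m in range(1, len(labels) + 1)]  # x_m = 2^-m
assert [predicted_label(omega, x) for x in points] == labels
```

Why it works: at $x_m = 2^{-m}$, the terms of $\omega x_m / \pi$ with index $i > m$ are even integers, the $i = m$ term equals $y_m$, and the remaining terms sum to a phase strictly between $0$ and $1$, so the sign of $\sin(\omega x_m)$ is determined by $y_m$.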
Summary: The paper advances the basic understanding of the expressive power of fixed-complexity function families on [0, 1]. It defines three classes of function families: $\mathcal{G}_0$, whose members are infinite VC-dimension families, $\mathcal{G}_1$, whose members can approximately fit any finite number of points, and $\mathcal{G}_2$, whose members can achieve uniform approximation over the domain [0, 1]. Previous work has identified certain function families that belong to $\mathcal{G}_2$. This paper focuses on the difference sets, namely $\mathcal{G}_0 \setminus \mathcal{G}_1$ and $\mathcal{G}_1 \setminus \mathcal{G}_2$. Strengths: [originality] - The paper explores the expressiveness of function classes that have fixed complexity and infinite VC dimension. To the best of my understanding, this research area remains largely unexplored beyond the prior results on universal formulas. [quality] - The mathematical framework is clearly defined, and the analysis is conducted rigorously. [clarity] - The paper is well-written. [significance] - The paper identifies certain specific function classes (such as $H^{(5)}$) that are seemingly complex and expressive but do not belong to $\mathcal{G}_2$ and belong to $\mathcal{G}_1$. Weaknesses: [originality] - None in particular [quality] - None in particular [clarity] - None in particular [significance] - This paper only focuses on one-dimensional functions, which inevitably restricts the immediate applicability of the presented results to the field of machine learning. However, I consider this limitation to be a reflection of the fact that this area of analysis is still in its early stages. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Should Line 327 say $H_N^{(5)}$ instead of $H_N^{(2)}$? 
[other minor suggestions] - Line 353 “suggests” → “suggest” - Line 183 “over the field Q” → “over the field Q of rational numbers” (as the field appears less frequently than R or Z in machine learning papers). - Line 165 “is a subset” → “is a non-empty subset” Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The paper acknowledges a limitation in its methodology, which is explained in the paragraph starting from Line 363. Specifically, the three methods used in the paper cannot effectively distinguish between $\mathcal{G}_1 \setminus \mathcal{G}_2$ and $\mathcal{G}_2$. As a result, the investigation in the paper is not completed and requires further research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the careful reading and positive evaluation of our work! We agree with all your fixes, thanks! --- Rebuttal Comment 1.1: Title: I have read the authors' response Comment: Thank you for the response. I leave this comment to note that I have read it.
Summary: The authors isolate and study universal hypothesis subclasses of $C([0,1])$ for the compact-open and point-open topologies respectively, and they compare universality here to infinite VC-dimension. The results are interesting, though perhaps not that surprising nor novel, but are still publishable (in my opinion). Some of the authors' derivations could use more mathematical rigour and the terminology "formulas" is misleading as it alludes to logic/model theory, which does not appear in the paper. The authors could also add some references to approximation theory in ML (some of which I provide below) to better tie in their results. Details below in the Weaknesses section. Strengths: As stated above, the results are interesting, though perhaps not so surprising. Weaknesses: 0. Add details to all proofs. My main issue with this paper is that many of its proofs contain handwavy and imprecise statements such as: 1) "Specifically, isolate the variables gn(x) in the polynomials" (lines 739-740): this is vague and not rigorous; 2) in equation (94), in what space is this limit taken? To me it seems to be $\mathbb{C}$ with its modulus as a norm; or 3) "by a classical theorem of Kronecker's" on line 188: which one? (citation missing)... There are many such instances in the derivations. **Requested Change 0:** Make all proofs rigorous and remove handwaved details (clear limits, clear steps, no missing references) 1. This sentence "especially dangerous from the generalization point of view" is purely speculative in the context of the paper, and one can doubt its accuracy given [4] and [5] on benign overfitting for high-capacity models. **Requested Change 1:** Since the authors only deal with approximation, they should remove this claim in the final version. 2. 
The hierarchy $G_1\subset G_2$ in (3) is misleading - The class $G_1$ is simply the set of subsets $A$ of $\mathbb{R}^{[0,1]}$ that are dense in $\mathbb{R}^{[0,1]}$ for the point-open topology, which I denote $\tau_{PO}$; i.e. the topology of pointwise convergence (see [1] - Section 46, page 281). So the inclusion of $G_1$ into $G_2$ is immediate and has a simple topological meaning; since $G_2$ is the same but for the compact-open topology (see [1] - Section 46, page 285). These remarks, together with, say, Theorem 46.7 and Theorem 46.8 - both on page 281 of [1] - imply $G_1\subset G_2$ and give a topological interpretation of $G_1\subset G_2$. I point this out because, from this perspective, one has a more general phenomenon. Let us focus only on continuous functions. Consider the standard poset $(\mathcal{T},\lesssim)$ of topologies on $C([0,1])$ with the partial order relation $\tau_1\lesssim \tau_2$ if and only if $\tau_2$ is at least as fine as $\tau_1$ (i.e. $\tau_1\subseteq \tau_2$). For any $\tau\in \mathcal{T}$ define the class $G_{\tau}:=\{A\subset C([0,1]): \bar{A}^{\tau} = C([0,1])\}$ where $\bar{A}^{\tau}$ denotes the closure of a subset $A$ of $C([0,1])$ with respect to the $\tau$ topology. One immediately then deduces the contravariant-functorial relation: for all $\tau,\tilde{\tau}\in \mathcal{T}$, $\tau \lesssim \tilde{\tau} \Rightarrow G_{\tilde{\tau}}\subseteq G_{\tau}$. So this type of construction holds in greater generality. In fact, strictly finer topologies than the compact-open topology (i.e. the topology of uniform convergence, since $[0,1]$ is compact and $\mathbb{R}$ is equipped with a metric - see Munkres Theorem 46.8) have been studied in the context of kernel methods [3] (see the strict topology), and non-uniform topologies on $L^1([0,1])$ were considered in [2] (see Theorems 1 and 2) in the context of deep learning. 
**Requested Change 2:** Please add such comments circa (3), and emphasize that there are many such "classes" between $G_1$ and $G_2$, as well as below or above them in the chain of inclusions (3). Clearly the top of the chain consists of $G_{\tau_{disc}}$ where $\tau_{disc}$ is the discrete topology on $C([0,1])$, and the *correct* bottom of the chain consists of $G_{\{C([0,1]),\emptyset\}}$ (i.e. $\{C([0,1]),\emptyset\}$ is the trivial topology on $C([0,1])$). 3. Line 183 - You are considering $\mathbb{R}$ as an (infinite-dimensional) $\mathbb{Q}$-module (vector space over the rationals). This is not clear for readers who did not have a class or two on algebra during their studies. **Requested Change 3:** Add an explanation of how $\mathbb{R}$ is an infinite-dimensional $\mathbb{Q}$-module. 4. Theorem 2 is not rigorously stated. What does "can approximate any R-valued function on X" mean? In reading the theorem's proof you must mean the point-open topology (which in this case coincides with the compact-open topology, trivially). **Requested Change 4:** Replace "can approximate any R-valued function on X" with a precise statement in the point-open (in this case also compact-open/uniform) topology. Please write what is meant clearly, with "the usual for all $\epsilon$ there exists an f satisfying ...." type statement. Without this modification I cannot bump my score up to a pass, since the paper must be precise. 5. Missing Reference **Requested Change 5:** Please add a precise reference to this hand-waving: "by classical Kronecker's theorem" in the proof of Theorem 2 (it is classical if one has seen such results, but not in general). 6. Branching expressions terminology **Requested Change 6:** Please call these piecewise functions. --- I believe the manuscript will be in publishable condition when these changes are made. **References** [1] Munkres, James. "Topology James Munkres Second Edition." [2] Kratsios, Anastasis, and Behnoosh Zamanlooy. 
"Do ReLU Networks Have An Edge When Approximating Compactly-Supported Functions?." Transactions on Machine Learning Research (2022). [3] Chevyrev, Ilya, and Harald Oberhauser. "Signature moments to characterize laws of stochastic processes." The Journal of Machine Learning Research 23.1 (2022): 7928-7969. [4] Bartlett, Peter L., et al. "Benign overfitting in linear regression." Proceedings of the National Academy of Sciences 117.48 (2020): 30063-30070. [5] Tsigler, Alexander, and Peter L. Bartlett. "Benign overfitting in ridge regression." J. Mach. Learn. Res. 24 (2023): 123-1. **Note** I did not use mathcal since open-review had compilation issues with it... Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Related to requested change 2: is there a topology $\tau$ on $C([0,1])$ such that $\mathcal{G}_{\tau}=\mathcal{G}_0$? - In the proof of Theorem 2, does one really need to index over $\mathbb{R}$; isn't $\mathbb{R}\setminus \mathbb{Q}$ enough? - Why do you call these formulas? Is there any connection to logic/model theory? To me the correct/standard term is universal hypothesis/function classes, in the context of approximation theory and universal approximation theorems. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Not really, but this section doesn't really affect this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comprehensive reading of our paper including the proofs, and a positive evaluation of our work. We also appreciate your many comments, though we respectfully disagree with some of them. **Requested changes.** 0. We don't actually see what is imprecise in the examples you give, except for the reference for Kronecker's theorem which is indeed missing. 1) We don't see what is not rigorous and vague here. You cite the beginning of a sentence; the end of the same sentence specifies the exact sense in which the variable is isolated - namely, by expanding the polynomial as a univariate polynomial in this variable with coefficients in the ring of polynomials in the remaining variables (Eq. (115)). 2) Equation (94) is a statement on convergence of a sequence of complex numbers. There is a unique standard meaning of this convergence known from the basic calculus course; we don't see what is ambiguous here. 3) Indeed, a reference to Kronecker's theorem is missing. We will add it to the paper (the original Kronecker's paper is [1], and a recent survey can be found in [2]). Thank you for this suggestion. 1. We don't see why this sentence is speculative. By definition, the models in the class $\mathcal G_1\setminus\mathcal G_2$ can approximate any target function on any finite set, but, in general, cannot approximate a continuous target function on the whole domain $[0,1]$. This means that a model from this class can be fitted on any finite training set, but, in general, outside the training set the deviation from the target will not decrease to 0 as the size of the training set grows. This implies that the model can be fitted, but cannot properly generalize. We don't see why the papers on benign overfitting that you cite are relevant here - these papers deal with a completely different setting of high-dimensional linear models under special assumptions. 2. Thank you for this comment. 
We only consider the classes $\mathcal G_0, \mathcal G_1, \mathcal G_2$ in our paper because they have a simple meaning and because we prove something nontrivial about them. We don't claim that this hierarchy is exhaustive; of course, there are many other classes, in particular those induced by different topologies as you describe. However, we don't see a point in discussing them in the paper since we don't prove anything about them. Discussing topological and other issues would only distract the reader from the essence of our work and make our paper less accessible. As you can see from the neighboring review of Reviewer 2cHj, our work is already criticized for being too focused on abstract math. 3. Respectfully, adding terminology like "$\mathbb Q$-module" would not add any substance to our paper, but would certainly make it less accessible. 4. We don't see why Theorem 2 is not rigorously stated. After restriction to the finite set $X$, approximation becomes finite-dimensional and has a unique standard meaning. 5. Done, thank you. 6. Thank you for this suggestion, but we want to keep the term "branching expressions" because it reflects the actual branching involved in the computation of these expressions. We call our functional families "formulas" because they are defined by finite expressions composed of numbers, arithmetic operations and elementary functions. One can think of these formulas as words in a finite alphabet; this connects them to formulas in logic/model theory. It is important in our context that a model consists of a finite number of explicit standard operations. In contrast, general hypothesis/function classes are not that restrictive; even the class of single-hidden-layer neural networks $f(\mathbf x)=\sum_{n=1}^N c_n\sigma(\mathbf a_n\cdot \mathbf x+h_n)$ contains an operationally indefinite activation $\sigma$, not to mention classes like Sobolev spaces, etc. [1] L. Kronecker, Näherungsweise ganzzahlige Auflösung linearer Gleichungen, Monats. 
Königl. Preuss. Akad. Wiss. Berlin, 1179–1193 (1884), pp. 1271-1299 [2] S. M. Gonek, H. L. Montgomery, Kronecker’s approximation theorem, Indagationes Mathematicae, Volume 27, Issue 2, 2016, Pages 506-523, (https://www.sciencedirect.com/science/article/pii/S0019357716000148) **Our question.** You write: *The results are interesting, though perhaps not that surprising nor novel*. If our results are not novel, can you please cite specific publications containing our theorems 4, 5, 6, 10 or their close analogs? --- Rebuttal Comment 1.1: Title: Response Comment: **Retort** 1. Generalization is a probabilistic notion, which holds in expectation. Let's consider a simple case where there is a "true" target function $f:X\mapsto \mathbb{R}$ (to be learnt), a probability measure $\mu$ on $X$ from which we draw samples, and suppose that the data-generating probability measure $\mathbb{P}$ on $X\times \mathbb{R}$ is noiseless, meaning that $\mathbb{P}$ is the pushforward of $\mu$ by the map from $X$ to $X\times \mathbb{R}$ given for every $x\in X$ by $x\mapsto (x,f(x))$. In this case, the true risk of any $g:X\mapsto \mathbb{R}$ is $R(g) := \mathbb{E}^{(X,Y)\sim \mathbb{P}} [|g(X)-Y|] = \mathbb{E}^{X\sim \mu}[|g(X)-f(X)|].$ Suppose that $f^{(0)},f^{(1)},\dots$ is a sequence of functions from $X$ to $\mathbb{R}$, e.g. in $G_1\setminus G_2$, which converge (only) pointwise to a limit $f$, also mapping $X$ to $\mathbb{R}$. Suppose further that there is some function $F\in L^1_{\mu}(X)$ such that, for every $n\in\mathbb{N}$, $|f^{(n)}|$ and $|f|$ are bounded above by $F$ $\mu$-almost everywhere. Then, by the dominated convergence theorem, the sequence $f^{(0)},f^{(1)},\dots$ converges to $f$ in $L^1_{\mu}(X)$. This shows that a sequence of functions in $G_1\setminus G_2$, which only approximates the true/target function pointwise, can make the true risk vanish; i.e. 
we have shown that $\lim_{n\to \infty} R(f^{(n)})=0$ even if the sequence $f^{(0)},f^{(1)},\dots$ need not approximate $f$ uniformly. 2. I agree with your rebuttal, that is a fair point. 3. Perhaps $\mathbb{Q}$-module could sound off-putting, but a $\mathbb{Q}$-vector space is exactly the same thing and all readers know what that is (since all modules over a division ring are free modules, i.e. vector spaces). So I respectfully disagree, I think this clarification should be included, but perhaps with the $\mathbb{Q}$-vector space terminology. 4. Here are three possible meanings. For example, let $X$ be a finite set with at least two elements. Consider the (trivial) topology $\tau_1:=\{\emptyset, \mathbb{R}^X\}$ (note brackets did not compile), the topology of pointwise convergence $\tau_2$, i.e. the product topology on $\mathbb{R}^X$, and the (discrete) topology $\tau_3$ on $\mathbb{R}^X$, consisting of all its subsets. Approximation means density in $\mathbb{R}^X$ with respect to some topology; the authors mean $\tau_2$, but since they do not make any explicit statements one can't help but wonder if it's $\tau_1$, $\tau_3$, or something even more exotic. In short, I have provided at least 3 non-unique meanings. Please formulate it precisely. 5. Thank you for adding the reference. 6. That makes sense. All other points are minor. --- **Comment about the terminology formulas:** With respect to the terminology "formulas", I see what angle you are getting at, but as far as I know, model-theoretic descriptions of functions representable in a model/language are usually binary: a function either can or cannot be exactly expressed in a language with finitely many operations. For example, O-minimal functions; see e.g. Wilkie's theorem. Rather, the authors consider approximation problems, which seem unorthodox for model theory (but on this last comment, I am not an expert, so take it as a point of curiosity). 
--- **Concerning the Typesetting of My Retort:** P.s.: Some notation, such as sequences and expectation, looks odd since I'm having trouble compiling LaTeX with subscripts on open review; apologies for the typesetting. --- Reply to Comment 1.1.1: Comment: 1\. "Model generalization" is a general concept reflecting our expectation that if a model is fitted to a target function on a subset of the input domain, then the model should agree with the target on the whole domain. The specific meaning of "fitted", "agree", etc. is a matter of convention. In our remark we point out a particular sense (different from yours) in which the class $\mathcal G_1\setminus\mathcal G_2$ conflicts with generalizability. Our remark is not a theorem-like assertion; it is our interpretation of a mathematical object. The reader may like or dislike this interpretation, but we don't claim anything going beyond the basic definition of the class $\mathcal G_1\setminus\mathcal G_2$. The remark is useful for the paper, and we will not remove it. However, we agree to add the clause "in the uniform norm" at the end of the first sentence to avoid confusion with other kinds of approximation. 3\. We don't see why any clarification is needed here. Theorem 2 is a very simple and known fact that we already explain in detail for the reader's convenience. For comparison, in the machine learning paper [S] the same fact is described in three lines, even without mentioning Kronecker's theorem. We already agreed to supplement the reference to Kronecker's theorem with a specific citation as you asked. 4\. First, your topology $\tau_1$ is not a topology on $\mathbb R^X$, so it is unclear what it has to do with convergence of functions. Second, we don't introduce any topologies in our paper (we don't even use the word "topology"). Why would the reader expect that we use some exotic topology when there is a unique standard notion of approximation in $\mathbb R^X$ for a finite $X$? 
We don't see a point in making the paper more complicated than it needs to be. [S] Sontag, E. D. (1997). Shattering all sets of k points in “general position” requires (k-1)/2 parameters. Neural Computation, 9(2), 337-348.
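The pointwise-versus-uniform gap debated in this thread (the reviewer's $L^1_\mu$ argument versus the authors' uniform-norm reading) is already visible in the textbook sequence $f_n(x) = x^n$ on $[0,1)$: it converges to $0$ pointwise and in $L^1$, yet the sup-norm error stays at $1$ for every $n$. A small numeric illustration, not tied to the paper's models:

```python
n = 200
grid = [i / 10_000 for i in range(10_000)]        # sample points in [0, 1)

pointwise_err = 0.5 ** n                          # error at the fixed point x = 0.5
l1_err = sum(x ** n for x in grid) / len(grid)    # Riemann sum for the integral of x^n
sup_err = max(x ** n for x in grid)               # attained near x = 1

assert pointwise_err < 1e-12   # pointwise convergence to 0
assert l1_err < 0.01           # L^1 error ~ 1/(n+1) also vanishes
assert sup_err > 0.9           # but the uniform error does not shrink
```

This matches both sides: the reviewer's dominated-convergence point (the $L^1$ risk vanishes) and the authors' point (uniform approximation still fails).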
Summary: The authors study "universal formulas" -- specifically, families of parametric mappings $(f_{\omega})_{\omega \in \Omega} \subset C([0, 1])$ where $\Omega$ is a domain in a finite-dimensional Euclidean space, such that $(f_{\omega})$ is actually dense in $C([0, 1])$ with the sup-norm -- motivated in part by applications to neural networks. In this connection, universal formulas should be contrasted with typical "universal approximation" results for neural networks that state that various fixed-depth neural networks can approximate any continuous function on $[0,1]$ by *increasing the number of neurons* sufficiently; universal formulas achieve this approximation with a fixed number of neurons by instead exploiting (at least indirectly) unbounded-magnitude parameters and infinite-precision arithmetic. The authors present a hierarchy of nested expressiveness classes -- in decreasing order, families with infinite VC dimension, families that achieve arbitrary approximation on arbitrary finite sets, and families that achieve arbitrary approximation of arbitrary continuous functions (i.e., universal formulas) -- and study obstructions to a family being in a class higher in the hierarchy, but not a class lower in the hierarchy. Starting from the classical example of the infinite VC-dim class $\{c \sin \omega x\}_{c,\omega \in \mathbb{R}}$, the authors prove using algebraic methods that a fairly general class of "polynomially-exponentially-algebraic" expressions, which includes neural networks with topology given by a (fixed size) directed acyclic graph with only one "algebraic-exponential" activation on any path to the output (e.g., $\sin$ activation, $\tanh$ activation) and arbitrary piecewise-polynomial activations otherwise (e.g., ReLU), cannot achieve the arbitrary finite approximation property. 
Results for networks achieving finite approximation but not being universal formulas are slightly less general, due to technical challenges: the authors show, for example, that classes $f(x) = c \sin (\omega \sigma(bx) + h)$ achieve arbitrary finite approximation if and only if $\sigma$ is non-polynomial, and demonstrate that for a certain sub-class of $\sigma$ maps studied earlier involving $\sin$ activations, the resulting class does not have the universal formula property. This kind of mapping can be viewed as a one-hidden-layer neural network with one neuron on the hidden layer and $\sin$ activations; they conjecture that the same separation holds for similar architectures with multiple neurons in the hidden layer. With these results established, they discuss known results on architectures that do constitute universal formulas, and possible paths to characterize necessary and sufficient conditions to be a universal formula further. Strengths: The writing, both technical and expository, is clear and precise. The authors go to great lengths to present a broad view of the problem and prior work in the introduction, which makes the subsequent technical analysis accessible to a non-expert reader. The authors defer more technical proofs to the appendices, but always give a clear indication in the main submission what the types of arguments developed are (e.g., algebraic methods vs. Pfaffian function theory). This will make the work useful for follow-up work. The discussion section (section 8) seems uncommonly interesting for a submission in this venue, in that it lists several open problems that the authors' work suggests, and which should lead to interesting follow-up work. Weaknesses: It is not a weakness per se, but as the authors themselves note, the investigation is mostly of mathematical interest as "... the universality of universal formulas is... an idealization that requires their parameters to have unbounded precision or magnitude". 
Nonetheless, there are neural networks commonly used that have structures similar to some of the near-universal formulas studied (the authors mention SIREN, which uses periodic activations in neural radiance field learning), and these kinds of periodic constructions have been shown to be useful in studying general deep/shallow neural network approximability (e.g., work of Telgarsky). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: ### Questions and Comments Does the "GeLU" activation function ($x \mapsto x \Phi(x)$, where $\Phi$ is the standard normal cumulative distribution function) satisfy the algebraic-exponential property? I am not sure if it would, because it is defined with an integral, but if so, it might be interesting to mention here (this activation is used in some modern transformer architectures). ### Minor Fixes Caption of Figure 1: typo "uiversality" Line 186: the $2\pi$ is on the wrong factor in the quotient? (the reals, "modulo" $2\pi$). Line 187: is this supposed to be the "fractional part" operator (based on its specified domain) rather than the "floor" operator (which as I understand gives the next-lowest integer)? Line 345: "Then the existence of a ..." -- should this be "Then there exists a..."? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the careful reading of our paper and giving a very detailed and careful summary of our work. Indeed, the GeLU function unfortunately does not belong to our polynomial-exponential-algebraic class. All your minor fixes are spot-on (except for the last one: we did actually mean "existence" because our statement was about the decidability of an existence predicate; we will rephrase this sentence to make it clearer). Thanks again for many useful comments and positive evaluation of our work!
Rebuttal 1: Rebuttal: We thank all the reviewers for the careful reading of our work and useful feedback, both positive and critical. As we mention in the response to Reviewer 2cHj, our work is partly motivated by an experimental observation that neural networks with the activation function $\sin$ apparently can easily overfit. We attach a .pdf figure illustrating this with a simple example of a two-hidden-layer network fitted by gradient descent to a one-dimensional target function. The network fits the training set reasonably well, but wildly oscillates outside. This is the sort of behavior we expect from models in our class $\mathcal G_1\setminus\mathcal G_2$ (trainable, but poorly generalizing). A ReLU-network of the same architecture is trained to a comparable accuracy on the training set, but shows much better generalization. In a weaker form, we also observe this effect in MNIST classification: $\sin$-networks again seem to be somewhat more prone to overfitting than ReLU-networks of the same architecture. Our theoretical setting of universal formulas can be viewed as an extreme case of these experimental setups, in which the network size is small and fixed, the weights are trained by some ideal optimization algorithm, etc. The above experimental observations seem to agree with our theoretical results and conjectures regarding the multi-layer $\sin$-networks being in the class $\mathcal G_1\setminus\mathcal G_2$, in contrast to ReLU-networks. Pdf: /pdf/f9a3603ed7a6e45c44e11d93319c934074a7e3a3.pdf
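The experiment described in this rebuttal can be reproduced in outline as follows. This is an illustrative reimplementation, not the authors' code: it uses a single hidden layer rather than two, an assumed width of 16, a made-up target $\sin(3x)$ on 10 training points, and plain full-batch gradient descent with manually derived gradients.

```python
import numpy as np

def train(act, dact, seed=0, width=16, steps=3000, lr=0.01):
    """Fit a tiny one-hidden-layer net to a 1-D target by gradient descent;
    return (initial, final) mean-squared training error."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-1.0, 1.0, 10)        # training inputs
    y = np.sin(3.0 * x)                   # smooth 1-D target (an assumption)
    W1 = rng.normal(size=width)
    b1 = np.zeros(width)
    W2 = rng.normal(size=width) / np.sqrt(width)
    b2 = 0.0

    def mse():
        h = act(np.outer(x, W1) + b1)     # (10, width) hidden activations
        return float(np.mean((h @ W2 + b2 - y) ** 2))

    initial = mse()
    n = len(x)
    for _ in range(steps):
        z = np.outer(x, W1) + b1
        h = act(z)
        r = h @ W2 + b2 - y               # residuals on the training set
        gW2 = 2.0 * h.T @ r / n           # gradient of MSE w.r.t. W2
        gb2 = 2.0 * r.mean()
        gz = (2.0 / n) * np.outer(r, W2) * dact(z)  # backprop through activation
        W1 -= lr * (x @ gz)
        b1 -= lr * gz.sum(axis=0)
        W2 -= lr * gW2
        b2 -= lr * gb2
    return initial, mse()

sin_before, sin_after = train(np.sin, np.cos)
relu_before, relu_after = train(lambda z: np.maximum(z, 0.0),
                                lambda z: (z > 0).astype(float))
```

Comparing the two fitted networks on a dense grid between the training points is then what would expose the oscillation-versus-smoothness contrast the attached figure shows.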
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Creating Multi-Level Skill Hierarchies in Reinforcement Learning
Accept (poster)
Summary: The paper proposes a method to discover skill hierarchies by applying hierarchical graph clustering methods to expose the structure. The proposed method introduces the Louvain method for revealing skill hierarchies and produces hierarchies with multiple granularities of action abstraction. The proposed method is tested on six different environments and compared with other methods. Strengths: This paper introduces the concept of graph clustering to discover action hierarchies, which is an interesting idea. The references are described in detail, which makes the intuition and technique of the proposed method more understandable. The experiments in six different problems, the evaluation against other methods, and the ablation study are also appreciated. Weaknesses: There are several weaknesses. First, this paper is poorly written. There are many grammatical errors and instances of unprofessional writing style, which make it difficult to understand. The technical writing in the Proposed Approach part of this paper is also limited, causing a great deal of confusion about the proposed method. This paper attempts to solve the problem of discovering action hierarchies for agents. This problem is extremely difficult and cannot be well solved by introducing Louvain-based graph clustering methods as described in the paper. More details are described in the limitations. Moreover, the proposed approach section of the paper spends plenty of words describing the process of the Louvain algorithm. However, the Louvain algorithm was proposed in 2008 and there have been many improved works during 2010-2015, so it should not be the focus of the paper. In addition, although there are many tasks in the experiment section, these problems are all simple and some of them can be solved well even using simple planning methods. It is unnecessary to discover action hierarchies in these scenarios. 
Agents’ skills should be learned in more complex scenarios with more constraints, such as StarCraft II and Minecraft (the recently popular environment), but these complex scenarios are not considered in this paper. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: How about the learning efficiency of the proposed algorithm? Does the size of the graph limit the learning efficiency of the proposed method compared to the original reinforcement learning algorithm? I am confused that there is no diversity in the skills formed by combining only 4 actions of the same type in these simple scenarios, including Rooms. Poor diversity limits the performance of policies learned within the reinforcement learning framework. There are too many confusing descriptions, including: • "an autonomous agent" in Line1 • "graphical structure" in Line2 • "its environment" in Line2 • "multiple levels of abstraction" in Line4 • "regions of the state" in Line5 • "connected within themselves" in Line6 • "various levels of granularity" in Line16 • "outcome" in Line17 • "characterisation" in Line19 • "existing approaches to skill discovery" in Line55 • "are not without their limitations" in Line72 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 1 poor Limitations: Existing reinforcement learning algorithms based on the state-action space have achieved great results, especially in simple scenarios, where they can achieve efficient exploration of the policy space. The reason for introducing skills, splitting the primitive action space into multiple parts for separate exploration, is that the state-action search space is so large when facing more complex scenarios that primitive reinforcement learning algorithms explore inefficiently. 
However, it is difficult for the skill-based approach to converge to the globally optimal solution. The possibility of converging to a locally optimal solution in each subspace is high, which is more likely to occur in simple scenarios. This paper does not propose a theoretical guarantee on the learning efficiency of the algorithm and lacks an optimization analysis of the algorithm. In addition, it does not conduct an experimental evaluation on complex scenarios to answer the aforementioned questions. Louvain is commonly used in community discovery to mine graph clusters. However, the learning process of the Louvain algorithm and the learning process of the reinforcement learning framework are independent. How can the clustering information of the state-action space mined by the Louvain algorithm actively guide the reinforcement learning? How can the policy search process in the state-action space be fed back to the mining process of Louvain? I think it is difficult. In addition, the modularity seems to be independent of the reward in reinforcement learning, which may lead to the inability to combine the Louvain algorithm with the reinforcement learning algorithm, no matter what approach is taken. I wonder if graph partitioning can really guide skill learning. First, consider model-based reinforcement learning, which generates a complete state-action transition graph. However, once the transition graph is completely known, traditional reinforcement learning methods can achieve great performance. There is no need to develop a reinforcement learning method based on graph partitioning. In the case of model-free reinforcement learning, where policy learning with traditional reinforcement learning methods is difficult, graph partitioning would play a positive role. However, if the transition graph is unknown, it is impossible to reveal the expected clusters of the transition graph. Therefore, the proposed method in this paper would be ineffective. 
But I think that combining graph partitioning methods with model-free reinforcement learning on an incomplete transition graph is a promising direction. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
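The reviewer's point that modularity is independent of reward can be made concrete: modularity depends only on the graph and a candidate partition. Below is a minimal, stdlib-only sketch of the standard formula Q = Σ_c [L_c/m − (D_c/2m)²]; the edge list and partition are hypothetical illustrations, not data from the paper.

```python
from collections import defaultdict

def modularity(edges, community):
    """Modularity Q of a partition of an undirected graph.

    edges: list of (u, v) pairs; community: dict mapping node -> cluster id.
    Q = sum over clusters c of [ L_c / m - (D_c / 2m)^2 ],
    where L_c = edges inside c, D_c = total degree of c, m = |edges|.
    """
    m = len(edges)
    degree = defaultdict(int)
    intra = 0                            # edges whose endpoints share a cluster
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if community[u] == community[v]:
            intra += 1
    deg_sum = defaultdict(int)           # total degree per cluster
    for node, k in degree.items():
        deg_sum[community[node]] += k
    return intra / m - sum(d * d for d in deg_sum.values()) / (2 * m) ** 2

# Two triangles joined by a single bridge edge; the "natural" two-cluster
# partition scores well. Note that no reward signal appears anywhere.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}))  # ≈ 0.357
```

Whether this reward-independence is a weakness is exactly what the rebuttal below disputes: the partition informs exploration and skill structure, while value learning still happens through ordinary reinforcement learning.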
Rebuttal 1: Rebuttal: Thank you for the time and attention you have given to our paper. LA: Louvain Algorithm **Q1: How about the learning efficiency of the proposed algorithm? Does the size of the graph limit the learning efficiency of the proposed method compared to the original reinforcement learning algorithm?** We see consistently, across domains, that the learning efficiency increases when Louvain skills are available to the agent (Figure 3). We also see that the learning efficiency gained from Louvain skills (compared to primitive actions) increases with the size of the domain: the largest performance gap between the Louvain agent and the primitive agent is in the largest domain we tested (Figure 5c). **Q2: I am confused that there is no diversity in the skills formed by combining only 4 actions of the same type in these simple scenarios, including Rooms.** We may not fully understand what you mean by diversity here. There is in fact significant diversity in the Louvain skills. For example, consider the *Level 3* skills in Rooms (see Figure 2, top row). There are 8 skills at this level, two from each room, and each of these skills takes the agent in a different direction. For instance, in the upper-left room (depicted in green), two skills can be initiated: one (efficiently) takes the agent to the upper-right room (red) and the other to the lower-left room (yellow). And this is only the diversity at level 3 of the skill hierarchy. Skills at other levels of the hierarchy are similarly diverse but, *in addition*, they introduce diversity in the skills’ reach (trajectory length; the degree of temporal abstraction). For instance, Level 2 skills are shorter (in trajectory length), and Level 1 skills are even shorter, than Level 3 skills. In summary, we see diversity in skill initiation sets, skill policies, and trajectory lengths. Furthermore, collectively, the skills cover the state space very well. 
**Q3: There are too many confusing descriptions (...)** We are sorry to hear this and would very much like to remedy it. At the same time, we are unsure what is confusing about the words and phrases you have highlighted. Many, such as *autonomous agent* and *environment*, are fundamental concepts in reinforcement learning. Others, such as *graphical structure* are clearly illustrated in the paper. We would welcome any concrete comments that would help us improve the writing. **[Skill discovery] is extremely difficult and could not be well solved by introducing Louvain-based graph clustering methods as described in the paper.** We agree, skill discovery is very difficult. This is exactly why it is important to explore a wide variety of approaches and to not prematurely discard any of them, especially ones that show promise in small-scale problems, such as the one we propose here. We strongly oppose the statement that skill discovery cannot be solved by introducing Louvain-based graph clustering methods. There is no basis for this statement. Skill discovery is an open problem and Louvain-based graph clustering methods may well be part of the solution. **The Louvain algorithm has been proposed in 2008 and there have been a lot of improved works during 2010-2015 (...) [it] should not be the focus of the paper.** LA remains one of the most popular and highly-performing modularity optimisation algorithms. Although several improvements have been proposed to its original formulation, most of these modifications improve the algorithm’s runtime but do not impact its final output. Those modifications that do modify its output often result in the hierarchical structure being harder to extract, and this makes them unsuitable for our purposes. We explain LA in the paper because otherwise the skill hierarchy it generates would not be as clear to the reader. 
**Agents’ skills should be learned in more complex scenarios with more constraints, such as StarCraft II and the Minecraft (...)** This is an unrealistically high bar. Adopting it would be detrimental to research progress on this very important subject, and this is recognised by the research community: Papers on skill discovery are continuously being published at NeurIPS and other high-profile venues, such as ICML; we are not aware of a single paper that meets this bar. In fact, many new ideas are presented and tested in small, discrete domains. e.g., Bar, Talmon, & Meir (ICML 2020), Jinnai et al. (ICML 2019). **Finally, we would like to respond to the 3 limitations raised by the reviewer.** 1a. Factual correction: skill-based approaches do converge to the global optimal solution. If primitive actions are included in an agent’s action set along with skills (as we have done), then existing convergence guarantees from the primitive-only case continue to hold. Some HRL frameworks might lose global convergence (e.g., MAX-Q) but not the options framework (see Precup, Sutton & Singh, Theoretical results on RL with temporally abstract options, ECML, 1998). 1b. Theoretically analysing the learning efficiency of agents using a specific skill hierarchy is difficult: such analysis is not present even in the most well-known skill discovery papers. We argue that this is not a reasonable requirement from the paper. 2a. The relationship between LA and RL is simple: LA provides input to RL. 2b. Clustering by LA does indeed guide RL: it impacts exploration; it determines what the skills are and skills are part of the policy learned by RL. 2c. RL does not inform LA, but why is this a weakness? 2d. You may find it useful to see the discussion of reward at the end of our rebuttal to Reviewer cG6b. 3. The paper should not be evaluated as a skill discovery algorithm. We are not presenting a skill discovery algorithm but a hypothesis on what makes a useful skill hierarchy. 
To evaluate this hypothesis, we necessarily use the transition graph. We took great care in writing the paper to make this point clearly and repeatedly. --- Rebuttal Comment 1.1: Comment: Thank you for the comments. But I am keeping my original score given the limitations of your work: 1. The paper claims that the main contribution is a characterisation of a useful action hierarchy, which is an action structure. The action structure is intuitive and easily constructed. Many works have made efforts in this aspect [1-7]. The action structure can even be represented as a graph structure, and many works have done so [8-10]. Therefore, it should not be considered as a contribution of the paper. The paper also claims that it uses a Louvain-based graph representation method for the characterisation of the action hierarchy. However, the Louvain algorithm has long been proposed, with many improvement works [11-15]. Besides, as mentioned in the related work, many works have used graph partitioning methods for skill discovery [16-19]. Therefore, the real contribution of this paper is only constructing the state-action space for option learning based on the graph partitioning results. However, the option learning method is not described either in this paper or in the supplementary material. From the description of the Analysis section, it seems that this method is not different from the classical reinforcement learning methods. **In summary, what this paper does is merely using the Louvain method to generate clustering results of the transition graph for training classical RL algorithms, without substantial innovation.** 2. In the Introduction section of this paper, the author claims that the proposed approach differs from existing methods in two ways: partitioning the interaction graph by maximising modularity and producing a multi-level hierarchy. However, the first difference has already been clearly proposed in the fast unfolding method [11]. 
Part of the second, which generates a hierarchical clustering structure, has also been described in the fast unfolding method [11]. Therefore, the only real difference between this method and existing work is a very small part mentioned in the paper, which is constructing the state-action space for option learning based on the clustering results. In fact, the proposed method does not have much novelty compared to classical reinforcement learning methods. **Therefore, there is so little original content in this paper that it would not be accepted at a top-tier conference**. 3. The author notes that there is no basis for the statement that skill discovery cannot be solved by introducing Louvain-based graph clustering methods, and that skill discovery is an open problem for which Louvain-based graph clustering methods may well be part of the solution. **I wonder whether there is any basis for the author's claim?** The open problem cannot be solved by mere textual description. We consider two commonly used methods. - The first is theoretical analysis. Is there any theoretical analysis in this paper? **Neither the main content nor the supplementary material contains a theoretical analysis of the proposed approach.** - The second is experiments. Although the paper describes several experimental scenarios, these scenarios are all too simple. These scenarios were usually used in papers published 20 years ago, but recent work has rarely used such simple scenarios for experiments, except for papers about the theoretical analysis of reinforcement learning. Recent work has typically used scenarios such as Atari and MuJoCo for experiments. As stated in the previous part, there is no theoretical analysis in this paper, so it should be evaluated on Atari or MuJoCo or even more complex scenarios such as Minecraft or SMAC to be convincing. **Even the author noted in Line 57 of the paper that multi-level skill hierarchies are essential for solving complex tasks. 
Why are there no experiments on such complex scenarios? In my opinion this paper won't be accepted at a top-tier conference while you don't have a non-toy problem evaluation.** 4. Some issues. - If options are used to solve complex problems, then they should be diverse. In this case, the proposed approach, in which options control agents to move between adjacent state spaces, does not make sense, because such options cannot adapt to scenarios that require options to extensively explore different action spaces. - The proposed approach is built on the assumption that the Louvain method successfully generates good clustering results. If the Louvain method fails or generates clustering results with low quality, the proposed approach will fail. Such situations are prevalent in the experimental scenarios commonly used in current works. The state-action features of these scenarios do not have characteristics conducive to unsupervised clustering, and appropriate clustering results cannot be generated by simply applying the Louvain method. - If the collected transition graph is only a small part of that in the entire scenario, then the results do not provide meaningful guidance for solving the entire problem. --- Reply to Comment 1.1.1: Title: Response to Points 1 and 2 Comment: We thank the reviewer for their continuing engagement with the paper. Below we provide a point-by-point response to the concerns raised. We believe that we have addressed all of the main concerns. (1) The innovation in the paper is the use of an existing method (the Louvain Algorithm) in a new context (temporal abstraction in reinforcement learning). **This is an established and valuable form of contribution to science.** Our use of the Louvain Algorithm brings something new and useful to temporal abstraction in reinforcement learning, achieving something that is not yet possible with ANY existing approach: autonomous specification of a multi-level action hierarchy with no human input. 
Organising the state-action space of an agent into a skill hierarchy is no easy task. Louvain skills do it elegantly and successfully. This is an important result for the field. Please note the following: (a) Yes, some existing characterisations of skill hierarchies use the state-transition graph, and some of these methods use graph partitioning; we note this clearly in the paper. There is a fundamental limitation shared by all existing graph-based methods: they lead to skill hierarchies with only **a single level** above primitive actions. The necessity of **multi-level** abstraction is clear; it is noted frequently in the literature (e.g.; Barto, Singh, Chentanez, ICDL 2004; Singh, Barto, & Chentanez, NeurIPS 2004). (b) The reviewer wrote: *“However, the Louvain algorithm has long been proposed with many improvement works[11-15]”*. This is a point that the reviewer raised earlier; we have already responded to it: These later works are not relevant in the context of temporal abstraction in reinforcement learning. If the reviewer knows differently, we would welcome the new information, but the reviewer needs to explain which particular later development is useful in our context and why. It is also worth noting that any existing/future improvements to the Louvain algorithm, where relevant and useful, can be **directly** incorporated into our approach as long as the output of the algorithm is a cluster hierarchy. In short, we argue that this objection is not valid. We note that it is not possible to further respond to it without further details from the reviewer. (c) The method used to train option policies is detailed in the Supplementary Material (Appendix F), and also in our discussion with Reviewer 4HvS. This is just one example of how Louvain option policies can be trained; other approaches exist. 
Our analysis focused on how useful the Louvain options are when they are available to an agent; therefore, precisely how the option policies were trained did not matter as long as the learned policies were accurate. (d) Factual correction: Cited works [16] and [17] are not based on graph partitioning, as claimed by the reviewer. They use graph centrality measures to find bottleneck states and produce skills that efficiently take the agent to these states. --- (2) We do not claim to contribute a novel hierarchical graph partitioning algorithm. On the contrary, we clearly state throughout the paper that we use the Louvain algorithm (Blondel et al., 2008) to perform hierarchical graph partitioning. Please see our response to point 1 above for the innovation in the paper. --- Reply to Comment 1.1.2: Title: Response to Point 3 Comment: (3) We argue that the entire paper supports our claim, including the following results: (i) In a wide variety of domains, Louvain skills form a multi-level hierarchy that matches human intuition well (Figure 2). (ii) Agents using Louvain skills consistently learn faster than agents using only primitive actions and agents using skills produced by existing approaches (Figure 3). (iii) Agents learn more quickly when Louvain skills are arranged into a multi-level hierarchy compared to when each skill calls primitive actions directly (Figure 4). (iv) Agents learn more quickly with the full Louvain hierarchy compared to when each level of the skill hierarchy is used in isolation (Figure 4). (v) The proposed approach continues to produce good skill hierarchies in environments with over 1 million states (Figure 5b). The Louvain agent continues to have a clear and substantial advantage over other approaches in the largest domain we tested (Figure 5c). (vi) We explore two possible paths towards discovering Louvain skills incrementally, while an agent is interacting with its environment, with positive results (Figure 6). 
(vii) We illustrate how a Louvain cluster hierarchy can be produced in a problem with a continuous state space (Figure 7). ***On theoretical results:*** (1) Louvain skills have a solid basis in graph theory. (2) We have already noted in our earlier response that any existing convergence results in reinforcement learning continue to hold because learning continues in primitive-action space. This is well known so we did not see the need to include it in the paper. We can do so if the reviewer believes it to be useful. (3) The reviewer is asking for theoretical results but without specifying what they should/could be. Can the reviewer be more specific? Precisely what type of theoretical analysis would be applicable, feasible, and useful? ***Scenarios used in the experiments:*** First, we note that it is entirely irrelevant how old the scenarios are; what matters is how useful they are in answering scientific questions posed in the paper. Therefore, “age” is not a valid scientific objection to the scenarios used. Secondly, the reviewer’s comment misses the following: **In these scenarios, our approach achieves something that is not achieved by ANY existing approach: autonomous specification of a multi-level action hierarchy with no human input.** Furthermore, empirical results are presented on a diverse set of scenarios, and they consistently show positive results for the proposed approach. Thirdly, something that is a key difficulty for the field as a whole — scaling up graph-based skill discovery algorithms to very large domains — cannot be the basis of rejecting a new paper that brings forward a new idea whose utility, as well as comparative strengths to existing methods, is clearly demonstrated in smaller domains. We expand on this third point below. Constructing graphical representations for very large environments – such as those with continuous (e.g., MuJoCo) or high-dimensional (e.g., Atari and SMAC) state spaces – is a key open problem. 
Addressing this problem is not our focus. We are asking a more fundamental question: What is a useful skill hierarchy? Scaling up is a different question; and, importantly, it is an **orthogonal** question: Any promising future graph construction method for such domains can be **immediately** incorporated into the proposed approach to produce Louvain skills. It is also noteworthy that any solution to the scaling-up question will benefit not only our approach but basically all other graph-based approaches to skill discovery (as well as other areas of reinforcement learning). Scientific progress is achieved by divide-and-conquer; not all questions can or should be addressed in one conference paper. --- Reply to Comment 1.1.3: Title: Response to Points 4.1, 4.2, and 4.3 Comment: *(4) Some issues.* *(4.1) If options are used to solve complex problems, then they should be diverse. In this case, the proposed approach that options control agents move between adjacent state spaces does not make sense, because such options cannot adapt to scenarios that require options to extensively explore different action spaces.* It is not clear what the reviewer means by *“diverse”*. As noted in our earlier response, Louvain skills are diverse in their initiation sets, policies, termination conditions, and temporal reach. They provide good coverage of the entire state space. It is unclear what other source of diversity one could want from a given class of skills. We also do not understand what the reviewer means by *“scenarios that require options to extensively explore different action spaces”*. In particular, what does the reviewer mean by *“different action spaces”*? Can they please express it in the mathematical notation used in the paper? e.g., $A(s)$ is the set of actions available from state $s$ in a given MDP. What exactly are the *“different action spaces”* the reviewer refers to? In short, it is not possible to respond to a concern that is not clearly stated. 
Can the reviewer please further explain their concern? It would help us understand the concern if they are able to point to any existing approach to skill discovery that addresses this concern. --- *(4.2) [...] If the Louvain method fails or generates clustering results with low quality, the proposed approach will fail. Such situations are prevalent in the experimental scenarios commonly used in current works. The state-action features of these scenarios do not have characteristics conducive to unsupervised clustering, and cannot generate appropriate clustering results by simply applying the Louvain method.* (i) The reviewer wrote *“Such situations are prevalent in the experimental scenarios commonly used in current works.”* How prevalent? What are these works? Can the reviewer please provide references so that we can look at the evidence? On the contrary, existing work suggests that, if there exists modular structure in a given network, the Louvain algorithm usually identifies it (e.g., see Lancichinetti & Fortunato, 2009). This is also what we found in our experiments, as reported in the paper. (ii) It is possible, even likely, that no single skill characterisation will work perfectly in all possible scenarios. This is not a problem. An agent is not limited to using a single approach to skill discovery; it can (and probably should) use multiple different approaches. It is useful to know what type of environments each discovery algorithm is well suited to. Some problems will have strong modular structure, and others may not; Louvain skills will be better-suited to the former cases than the latter. (iii) Even in the absence of modular structure, we still observed that the proposed approach produced useful and intuitively-appealing skills. For instance, consider the Level 2 skills produced in Rooms (Figure 2, top row, third column). 
The skills for moving between the clusters within each room clearly enable efficient low-level navigation of the state space, despite the fact that the internal structure of each room is uniform. --- *(4.3) If the collected transition graph is only a small part of that in the entire scenario, then the results do not provide meaningful guidance for solving the entire problem.* (i) As soon as any state is visited for the very first time, it can be added to the state-transition graph. If unknown parts of the state-transition graph are relevant for solving the problem, the agent will visit those states sooner or later. (ii) Future work can explore how to generalise graph structure from known parts of the state space to unknown parts of the state space. This is an orthogonal research question that can be explored in isolation from the current paper and would benefit many areas of reinforcement learning beyond skill discovery. --- Reply to Comment 1.1.4: Title: Response to Point 4.4 Comment: *(4.4) The proposed approach combining lower actions into higher levels may not be efficient. It may be less efficient than directly learning actions.* We find this claim surprising and counter-intuitive. We would welcome any reasoning and evidence that the reviewer can provide to support their claim. We argue the opposite, with clear reasoning and evidence: Building multi-level skill hierarchies generally leads to increased learning efficiency. When an agent executes a skill, it learns (through reinforcement learning algorithms) about the consequences of executing not only that skill but also ALL lower-level skills (and, ultimately, primitive actions) that the skill calls upon when executing. So multi-level skills introduce a clear learning efficiency in this way. 
This is supported by the results in our paper: agents with Louvain skills arranged as a multi-level hierarchy – where lower-level skills are composed into higher-level skills – consistently learned more efficiently than agents using “flat” Louvain skills which call primitive actions directly (Figure 4). Furthermore, when learning option policies, it is more economical to form new higher-level skills by composing already-trained lower-level skills than it is to start training from scratch using only primitive actions. As a simple example, if an agent already knows how to leave the room, it can use this knowledge (i.e., skill) when learning how to leave the building. --- Once again, we thank the reviewer for their continued attention to the paper and hope that our comments are useful in evaluating the paper.
Summary: The paper proposes a graph-partitioning-based method to automatically learn multi-level hierarchies of skills at varying timescales in reinforcement learning. The method assumes the existence of a complete state-transition graph and applies the Louvain algorithm to create a hierarchy of state clusters. By merging states, the algorithm creates state partitions that maximize modularity, which measures the quality of partitions based on strong connections within clusters and weak connections between them. The work considers an important unsolved problem of automatically creating multi-level skill hierarchies. The paper is well written; however, it needs more clarity. Some details regarding the assumptions made and the scope of problems need to be clarified right from the beginning to improve quality. The empirical evaluation could be improved by diversifying the choice of domains, because four out of six domains are extremely similar navigation domains, and by clearly justifying the choice of baselines and their input requirements. Strengths: The number of levels in a hierarchy does not need to be predefined, unlike in existing methods. The proposed method automatically finds the levels in the hierarchy, using no gain in modularity as a stopping condition. Weaknesses: (I) The proposed method relies on a concrete state-transition graph and then uses a hierarchical-clustering-based algorithm to iteratively merge states and produce a hierarchy of clusters. Hence, the method might not scale well to problems with large state spaces, as the memory requirement would increase exponentially and learning a complete state-transition graph would require exploration of the complete state space/a high number of samples. (II) The assumption regarding the availability of a complete state-space graph and the scope of the problems (discrete state space, low-dimensional, etc.) the method can handle need to be stated clearly. 
(III) The empirical evaluation could be improved by diversifying the selection of domains. It is unclear how the approach would perform in more complex decision-making tasks. (IV) A comparison of the number of samples required by each baseline to learn a hierarchy and option policies is needed. Minor: (I) line 20: has -> have (II) line 112: the partition -> the partition c_i is Technical Quality: 3 good Clarity: 3 good Questions for Authors: (I) Can you explain "we use each of the h partitions to define a single layer of skills, resulting in a hierarchy of h levels.."? It is not clear how the total number of partitions directly determines the number of layers. (II) How can state abstraction be employed using the proposed method? (III) Can the current method handle continuous state space problems? (IV) How does the proposed work relate to [1]? (V) In empirical evaluation, how is the office world (without any objects) different from the maze world domain? (VI) Can you comment about scalability of the approach to handle domains with large state spaces? (VII) Do the baselines assume a state transition graph or start learning from scratch? Does the learning performance in Fig 3 include the samples required to learn the hierarchies? If not, how many samples were required to learn the hierarchies by different methods along with the samples required to learn the option policies? References: [1] Fox, R., Krishnan, S., Stoica, I. and Goldberg, K., 2017. Multi-level discovery of deep options. arXiv preprint arXiv:1703.08294. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: (Included in summary, weaknesses) Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the time and attention you have given to our paper. **Q1: Can you explain “we use each of the h partitions to define a single layer of skills, resulting in a hierarchy of h levels”?** We will give an example using Figure 2 in the paper. In Rooms, the algorithm produces 4 partitions of the state space, as shown in the top row of Figure 2. Consider the "Level 4" partition. This partition defines level 4 of the skill hierarchy, which has 6 skills: 1) from purple to green, 2) from purple to yellow, 3) from green to yellow, and so on. Similarly, the "Level 3" partition defines level 3 of the skill hierarchy, which has 8 skills: 1) from yellow to blue, 2) from yellow to green, 3) from blue to red, and so on. Similarly, the Level 2 and Level 1 partitions, respectively, define level 2 and level 1 of the skill hierarchy. (At the very bottom of the skill hierarchy, which could be called level 0, we have the primitive actions.) **Q2: How can state abstraction be employed using the proposed method?** The most natural place to introduce state abstraction would be when constructing the state-transition graph. Instead of a concrete state, each node could represent an *abstract* state based on some learned representation of the environment, and the Louvain algorithm could be applied to this abstract state-transition graph. We also note that how to employ state abstraction is an important open problem for graph-based skill discovery generally. Any promising methodology could be directly incorporated into the discovery of Louvain skills. **Q3: Can the current method handle continuous state space problems?** Preliminary results in the paper suggest that it can (see Figure 7). We explored one possible approach for building a state-transition graph to represent a continuous domain, and we partitioned the graph by using the Louvain algorithm to form the basis of a concrete set of Louvain skills. **Q4: How does the proposed work relate to Fox et al. 
(2017)?** Fox et al. (2017) propose DDO, an imitation learning method that extracts option hierarchies from demonstration trajectories. It uses policy-gradient methods to train option policies and termination conditions that are most likely to generate given demonstration trajectories. This is fundamentally very different to our approach. DDO requires demonstration trajectories; our approach does not. With DDO, both the number of hierarchy levels and the number of options per level need to be specified by the system designer ahead of training. In contrast, our proposed approach automatically finds a suitable number of hierarchy levels and automatically produces a suitable number of skills at each level. Furthermore, using an incremental version of our method, the number of hierarchy levels and skills per level can evolve during training. **Q5: How is the office world (without any objects) different from the maze world domain?** The dynamics of Office are the same as the other gridworlds (except for the elevator tile). We designed Office to be able to explore the scaling properties of our approach as the number of states increases (see figures 5b and 5c). **Q6: Can you comment about scalability of the approach (…)?** Our main contribution is a novel characterisation of a useful skill hierarchy based on the concept of modularity. This contribution, we believe, scales to large state spaces conceptually. As the size of the state space increases, we will need to consider different ways of representing the graphical structure of the environment (direct use of the state-transition graph will not be useful). We expect that future work will identify and explore different approaches with different strengths and weaknesses under different conditions. The Louvain algorithm itself scales well to large graphs. Blondel et al. 
(2008) successfully applied it to graphs with millions of nodes and billions of edges and observed its time complexity to be linear in the number of graph edges. We also successfully applied it to versions of Office with over 1 million states. **Q7: Do the baselines assume a state transition graph or start learning from scratch? How many samples were required to learn the hierarchies by different methods along with the samples required to learn the option policies?** All baselines assumed access to the complete state-transition graph. Our analysis focused on the comparative benefits of the learned skills – because the fundamental question we are asking is *what is a good skill hierarchy?* – so we did not measure the number of samples required by the different methods. All that mattered was that the skills were accurately learned in each case. Please note that there is no single number of samples required by each algorithm: more samples generally lead to better learning of the underlying concept; so, for each algorithm, there is a range (of number of samples) for which the learned skills are useful to varying degrees. **The assumption regarding the availability of a complete state space graph and the scope of the problems (discrete state space, low-dimensional, etc.) the method can handle need to be stated clearly.** Thank you, we will make sure that the assumptions in the different parts of the paper are clearly stated. When testing the utility of the general concept (Louvain skills), we assume that the graph is available. When exploring incremental learning algorithms, we do not make that assumption. The most direct use of the Louvain algorithm assumes discrete state and action spaces. But our characterisation of a useful skill hierarchy (based on modularity) is a general concept that can be used with abstract or approximate versions of a state-transition graph. 
Fundamentally, all that is required is a graphical representation of an agent's interaction with its environment – given or learned, exact or approximate, complete or partial, etc. – that encodes what is known about the connective structure of this interaction.
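The modularity measure discussed in this thread (dense connections within clusters, sparse connections between them) is easy to illustrate. Below is a minimal, hypothetical sketch — not the authors' code — computing Newman modularity for a given partition of an unweighted graph; the graph and partition are made-up toy data.

```python
# Hypothetical sketch of the modularity measure the reviews discuss:
# Q is high when edges fall mostly within clusters and only sparsely
# between them.

def modularity(edges, partition):
    """Newman modularity of an undirected, unweighted graph.

    edges     -- list of (u, v) pairs
    partition -- dict mapping each node to its cluster id
    """
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1

    # Per-cluster totals: intra-cluster edge count and summed degrees.
    intra, deg_sum = {}, {}
    for u, v in edges:
        if partition[u] == partition[v]:
            intra[partition[u]] = intra.get(partition[u], 0) + 1
    for node, d in degree.items():
        c = partition[node]
        deg_sum[c] = deg_sum.get(c, 0) + d

    return sum(intra.get(c, 0) / m - (deg_sum[c] / (2 * m)) ** 2
               for c in deg_sum)

# Two triangles joined by a single bridge edge: the natural two-cluster
# partition scores well above zero.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, partition), 4))  # → 0.3571
```

The Louvain algorithm greedily merges clusters to increase exactly this quantity, level by level, which is what produces the multi-level partitions shown in the paper's Figure 2.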
Summary: Given a graph of connectivity of the state space of an environment, this paper proposes to use a hierarchical clustering technique, the Louvain algorithm, to provide reinforcement learning with a multi-level skill representation. This allows policies to take actions at variable time scales, and experimental results show that the identified clusters allow faster learning in the domains studied. Strengths: - Clarity of presentation. The graph clustering algorithm is motivated and described very clearly. The experiments are also well described. The visualizations of the hierarchical results are informative and easy to interpret. - Comparison with baselines. The paper describes five alternative graph-based approaches, and shows superior performance of the Louvain-based method. - Initial explorations into some limitations. The paper points out that the method requires full knowledge of the state space and its graph representation. Initial results explore what happens when the clustering is learned online with interaction in the environment. The paper also explores extending the method to continuous state spaces. - Thorough discussion section. Weaknesses: - Although the description of the Louvain algorithm and most of the rest of the paper is quite thorough and well presented, the method for training a policy using the hierarchical clustering output is left comparatively sparse, especially since the cited method may not be readily familiar to a modern audience. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The paper is very clear. Other than the weakness mentioned above, I do not feel I have pressing questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are more than adequately addressed, as mentioned in the strengths section above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the time and attention you have given to our paper. **Although the description of the Louvain algorithm and most of the rest of the paper is quite thorough and well presented, the method for training a policy using the hierarchical clustering output is left comparatively sparse. Especially since the cited method may not be readily familiar to a modern audience.** Thank you for pointing out the need for additional clarification. We provided full experimental details in the supplementary material (Appendix F) and will expand on this specific point in the main paper. Consider a Louvain option for moving from cluster $c_i$ to a neighbouring cluster $c_j$. To train the policy of this option, macro-Q learning was applied to the following task: the agent starts in a random state in cluster $c_i$; it receives a reward of $-0.01$ at each decision stage until it reaches a state in cluster $c_j$, where it receives a reward of $+1.0$ and the episode terminates. After many episodes of training, this produces an option policy that efficiently takes the agent from any state in $c_i$ to the cluster $c_j$. The policies of options at level $i$ of the hierarchy, $i > 1$, call only the options from level $i-1$; the policies of options at level 1 call only the primitive actions. This is just one example of how Louvain option policies can be trained. Other approaches exist. Our analysis focused on exploring how useful Louvain options are when available to an agent; therefore, precisely how the option policies were trained did not matter as long as the learned policies were correct. --- Rebuttal Comment 1.1: Comment: Thank you for your response, I agree that the exact method of training the options is not a main point of the paper. Thanks for pointing out where the set-up was described nonetheless.
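The option-training setup described in the rebuttal (reward $-0.01$ per step, $+1.0$ on entering the target cluster) can be sketched with ordinary tabular Q-learning on a toy corridor. Everything here — the environment, the six-state corridor, the hyperparameters — is an illustrative assumption, not the authors' setup or code.

```python
import random

# Hypothetical sketch of training one Louvain option's policy: a corridor of
# six states, source cluster c_i = {0, 1, 2}, target cluster c_j = {3, 4, 5}.
# Option task: start anywhere in c_i, -0.01 per step, +1.0 on entering c_j.

N_STATES, ACTIONS = 6, (-1, +1)          # actions: step left / step right
C_I, C_J = {0, 1, 2}, {3, 4, 5}

def train_option_policy(episodes=2000, alpha=0.1, gamma=0.99, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(sorted(C_I))                      # random start in c_i
        while s not in C_J:
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda a: Q[(s, a)]))
            s2 = min(max(s + a, 0), N_STATES - 1)        # walls at both ends
            if s2 in C_J:                                # option terminates
                Q[(s, a)] += alpha * (1.0 - Q[(s, a)])
            else:
                target = -0.01 + gamma * max(Q[(s2, b)] for b in ACTIONS)
                Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = train_option_policy()
greedy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in C_I}
print(greedy)   # the learned option policy heads right, towards c_j
```

In the paper's setting, the same training loop would run at every level of the hierarchy, with the "actions" at level $i > 1$ being the options from level $i-1$ rather than primitive actions.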
Summary: This paper addresses the problem of discovering a hierarchy of skills based on the state-transition graph. The proposed approach builds on an existing method called the Louvain algorithm, which creates a hierarchy of state clusters; i.e., the lowest level will place nearby states in the same (small) cluster, the second level will place those smaller clusters in bigger clusters, and so on. The clusters are defined using a measure of modularity (nodes within a cluster have dense connections, but there are sparse connections between different clusters). The skills are then defined using the options framework, and their behavior is to take the agent from any state within the cluster to the adjacent clusters. The empirical evaluation presented indicates that the method is capable of finding such hierarchies of skills and that, in some cases, using these skills yields a better performance when learning to solve a task, when compared with existing methods. Additionally, the empirical evaluation demonstrates the method's efficacy under scenarios with continuous state spaces (assuming a discretization method) and under scenarios with many states. Finally, the authors present preliminary results for a possible extension of the method to an interactive case where the transition graph is also learned dynamically instead of being assumed available originally. Strengths: - The paper addresses a very interesting, novel, and important problem: to autonomously identify a hierarchy of skills with multiple levels (not only two as in most previous work), without specifically setting the number of levels it should have. - The proposed method is intuitive and has a different/creative look at a problem that has not been solved yet. The paper is very clearly written, with a very good display of the empirical results, analyzing both qualitative and quantitative aspects of the learned skills. 
- The experiments conducted indicate that the hierarchies learned exhibit the desired behavior (i.e., clusters well connected within themselves but not well connected between each other). - The domains used in the empirical evaluation cover a wide range of scenarios: smaller, simpler MDPs, but also MDPs with many states and even an MDP with discretized continuous states. This helps demonstrate the effectiveness of the method at different levels of complexity. Weaknesses: - The method requires complete knowledge of the state-transition graph, which is often not available in real-life applications. However, I believe the paper still presents an important contribution and a first step towards solving the problem of finding multi-level hierarchies of skills. Additionally, the authors present preliminary results for an incremental version of the algorithm, where the skills are adjusted as the estimated model is learned. - Without limiting the number of clusters/levels in some way, the algorithm could end up returning a lot of skills, which could make the learning process later on more difficult instead of facilitating it (given the increased action space). The authors mention this in the paper and propose one possible improvement (to ignore lower-level clusters that contain a small number of states). - The method finds skills using only the transition graph, not the reward function. Thus, it would not be as useful in tasks with a large state space but where only a small subset of it is used for solving the task. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Why did the authors select Q-learning as the baseline for primitive actions instead of a more recent, and better-performing, algorithm? - It seems like the first two plots of Figure 4 do not present the results for Level 4. Could the authors please clarify this? - Currently, as per my understanding, all edges in the graph have the same weight. 
This would imply that when finding the skills in non-deterministic MDPs, a transition that happens with a 10% probability would have the same importance as one that always happens. Do the authors have an insight into how the algorithm could be changed to consider the transition probabilities? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors discuss the limitations of the work. No major concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the time and attention you have given to our paper. **Q1: Why did the authors select Q-learning as the baseline for primitive actions instead of a more recent, and better-performing, algorithm?** The main point of the baseline algorithm is to allow a comparison of the agent's performance with and without Louvain skills. Louvain skills themselves are agnostic to any particular hierarchical reinforcement learning algorithm, so we could have performed our analysis with any algorithm as long as it had a primitive counterpart we could use for our comparison. Louvain skills are based on the graphical structure of an agent's interaction with its environment; so the most natural test of their fundamental utility is in environments with discrete states and discrete actions. We used Q-learning as the baseline because (1) it is a suitable algorithm for such environments, and (2) it is the direct primitive counterpart to macro-Q learning and intra-option learning, which we used to train the hierarchical agents. **Q2: It seems like the first two plots of Figure 4 do not present the results for Level 4. Could the authors please clarify this?** We used a single legend for all three plots in Figure 4. We now see that this created confusion and will remedy it. In the domains shown in the first two plots (Rooms and Taxi), the Louvain skill hierarchy contained only three levels, so there was no "Level 4" agent. **Q3: Currently, all edges in the graph have the same weight. This would imply that when finding the skills in non-deterministic MDPs, a transition that happens with a 10% probability would have the same importance as one that always happens. Do the authors have an insight into how the algorithm could be changed to consider the transition probabilities?** For stochastic MDPs, it would be appropriate to weight edges according to the probability of the underlying transition. 
Many edge-weighting schemes have been proposed in the graph-based skill discovery literature (e.g., see Section 3.1, Metzen (2013)). One sensible approach is to assign an edge from state $u$ to state $v$ a weight of $\sum_{a \in A(u)}P(u,a,v)$. These probabilities could be based on the true transition probabilities, if known, or could otherwise be estimated from experience. The definition of modularity naturally handles weighted graphs, so the Louvain algorithm (and, therefore, the proposed method) will be able to use this information. The higher the probability of transitioning between two states, the higher the weight of the edge between them, and the higher the likelihood that the Louvain algorithm will place these two states in the same cluster. **Without limiting the number of clusters/levels in some way, the algorithm could end up returning a lot of skills, which could make the learning process later on more difficult instead of facilitating it (given the increased action space).** Making a large set of options available to the agent can indeed harm learning. But Louvain skills have a number of useful properties that limit the impact of this problem. First, we observe empirically that the depth of the Louvain skill hierarchy grows very slowly with the size of the state space. Figure 4b shows that the hierarchy depth increases from 5 levels in a version of Office with 1000 states to 8 levels in a version of Office with over 1 million states. This aligns with existing results reported in the literature, such as from Blondel et al. (2008), who find that the Louvain algorithm produced 6 levels when applied to a graph of a social network with over 2 million nodes. Secondly, two parameters influence the depth of the Louvain skill hierarchy: the resolution parameter, $\rho$, and the mean cluster size threshold, $c$. 
Higher values of $\rho$ punish overall cluster size and inter-cluster edges more harshly, leading the Louvain algorithm to run for fewer iterations and produce fewer partitions. The result is a skill hierarchy with fewer levels. On the other hand, $c$ impacts the lowest level of the hierarchy — the algorithm discards partitions containing clusters that are (on average) smaller than $c$. Higher values of $c$ lead to more partitions being discarded, resulting in a skill hierarchy with fewer levels. Thirdly, although many skills may be defined at each level of the hierarchy, a relatively small number of them will be available in any given state. This is because Louvain skills have restricted initiation sets: a Louvain skill navigating from some source cluster to some target cluster is available only in the states of the source cluster. For example, in Rooms, at level 2 of the hierarchy, at most 3 skills are available from each state (See Figure 2, top row, third column). Finally, arranging Louvain skills into a multi-level hierarchy allows an agent to learn about multiple skills at the same time. When an agent is executing a skill, it learns about the consequences of executing not only that specific skill but also of any of the lower-level skills (and, ultimately, primitive actions) that the skill calls upon while executing. **The method finds skills using only the transition graph, not the reward function. Thus, it would not be as useful in tasks with a large state space but where only a small subset of it is used for solving the task.** Incorporating the task reward when producing Louvain skills would be an interesting avenue for future work. But Louvain skills as defined in the paper, based purely on the connectivity of the state-transition graph, are still useful. In large state spaces, in the absence of further information on where the rewards may be, they will allow efficient exploration of the state space. 
In our experiments, Louvain skills helped learning efficiency the most in the largest domain we tested (Figure 5c). --- Rebuttal Comment 1.1: Comment: Thank you for your thorough response. After reading it, and the other reviews and discussion, I tend to maintain my original score. I agree that there are limitations and that the idea might be seen as simple, but the limitations are discussed in the paper and I believe this is a valuable contribution to the field.
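The transition-probability edge weighting suggested in this rebuttal — assigning the edge from $u$ to $v$ the weight $\sum_{a \in A(u)} P(u,a,v)$ — is straightforward to compute. The sketch below is a hypothetical illustration (toy MDP, made-up state and action names), not code from the paper.

```python
# Hypothetical sketch: build a weighted state-transition graph from known
# transition probabilities P(u, a, v), using the weighting scheme suggested
# in the rebuttal: w(u, v) = sum over actions a of P(u, a, v).

def weighted_transition_graph(P):
    """P maps (state, action) -> {next_state: probability}."""
    weights = {}
    for (u, _a), dist in P.items():
        for v, p in dist.items():
            weights[(u, v)] = weights.get((u, v), 0.0) + p
    return weights

# Toy stochastic MDP: from "s0", action "go" usually reaches "s1".
P = {
    ("s0", "go"):   {"s1": 0.9, "s0": 0.1},
    ("s0", "stay"): {"s0": 1.0},
    ("s1", "go"):   {"s0": 1.0},
}
print(weighted_transition_graph(P))
```

Since modularity is defined for weighted graphs, these weights feed directly into the Louvain algorithm: the heavier the edge between two states, the more likely they land in the same cluster, exactly as the rebuttal describes.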
NeurIPS_2023_submissions_huggingface
2023
SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling
Accept (spotlight)
Summary: This paper presents SimMTM, a simple pre-training framework for masked time-series modeling. The core idea is to train the model to reconstruct the time series by aggregating information from multiple masked series rather than one single masked series. While the temporal pattern is destroyed in a single masked series, multiple masked series contain complementary information, making the reconstruction process easier. The aggregation of point-wise representations is weighted by their series-wise similarities. The proposed loss combines the reconstruction loss with a contrastive loss on the series representations. The authors provide extensive empirical studies on both time series classification and time series forecasting tasks to evaluate the learned representations. Strengths: The proposed method is sound and well-motivated. Reconstructing the original time series from multiple randomly masked series is novel, in my opinion. The write-up is easy to follow. The overview figure is also illustrative and helps the understanding. The empirical evaluation is comprehensive and convincing. Weaknesses: 1. In Eq. 4, the sum over the variable $s'$ is confusing without carefully reading the text below it. In my opinion, it would be clearer to have the sum over $z'$ and have $s' = Projector(z')$ on the right. 2. Eq. 7 is not precise. I understand that the authors mean the intra-similarity of samples should be maximized. However, the sets before and after $~$ are the same, and therefore they are already "close". Please consider revising Eq. 7. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The paper says, "For each time series, the reconstruction is not only based on its own masked series." I wonder how much the method benefits from aggregating representations from other time series? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 5Qee for providing a meaningful review and insightful suggestions. #### **About the Weaknesses** **Q1**: The confusing of $Eq. (4)$. Thank you for your suggestion. Your understanding is correct. We will provide a clearer description and modify $Eq. (4)$ as follows: - Rephrase $\underline{\text{lines 153-154 in the main text}}$ as: where $\mathbf{s}^{\prime}$ represents series-wise representation, while $\mathbf{z}^\prime$ represents the corresponding point-wise representation, where $\mathbf{s}^{\prime} = \operatorname{Projector}(\mathbf{z}^{\prime})$ and $\widehat{\mathbf{z}}\_{i}\in\mathbb{R}^{L\times d_{\text{model}}}$ is the extracted point-wise representation. - Modify $\underline{\text{Eq. (4) in the main text}}$ to: $$ \begin{equation} \begin{split} & \mathbf{s}^{\prime} = \text{Projector}(\mathbf{z}^{\prime}), \newline & \widehat{\mathbf{z}}\_{i} = \sum\_{\mathbf{s}^{\prime} \in \mathcal{S}\backslash\\{\mathbf{s}\_{i}\\}}\frac{\text{exp}({\mathbf{R}\_{\mathbf{s}\_{i},{\mathbf{s}^\prime}}/\tau})}{\sum\_{{\mathbf{s}^\prime}^{\prime} \in \mathcal{S}\backslash\\{\mathbf{s}\_{i}\\}} \text{exp}({\mathbf{R}\_{\mathbf{s}\_{i},{\mathbf{s}^\prime}^\prime}/\tau})}\mathbf{z}^\prime \end{split} \end{equation} $$ **Q2**: $Eq. (7)$ is not precise. Thanks for your suggestion, and we will rewrite $Eq. (7)$ to better represent positive pairs and negative pairs as follows: $$ \begin{equation} \begin{split} & \text{Positive pairs:}~~\left(\mathbf{s}\_{i}, \mathbf{s}\_{i}^{+} \right), \mathbf{s}\_{i}^{+} \in \{\overline{\mathbf{s}}\_{i}^{j}\}\_{j=1}^{M}, \newline & \text{Negative pairs:}~\left(\mathbf{s}\_{i}, \mathbf{s}\_{i}^{-} \right), \mathbf{s}\_{i}^{-} \in \{\mathbf{s}\_{k}\} \cup \{\overline{\mathbf{s}}\_{k}^{j}\}\_{j=1}^{M}, i\neq k \end{split} \end{equation} $$ #### **About the Questions** **Q1**: What is the benefit of aggregating representations from other time series? 
As we stated in $\underline{\text{lines 155-159 in the main text}}$, the aggregation of representations from other time series requires the model to suppress the interference of less-related noise series and precisely learn similar representations for both the masked and the original series, namely guiding the model to learn the manifold structure better. To further verify the benefit, we also conducted a comparison experiment as follows, where "Own Masked Series" represents aggregating representations from their own masked series, and "All Masked Series" means aggregating representations from their own and other masked series. We present the averaged MSE/MAE from 4 different forecasting horizons {96, 192, 336, 720} based on the past 336 time points for the in-domain forecasting tasks. We also record the Accuracy (%) score for classification tasks as follows:

- Forecasting Tasks

| Average MSE/MAE | Own Masked Series | All Masked Series (Ours) |
| :-------------: | :---------------: | :----------------------: |
| ETTh1 | 0.407/0.430 | 0.404/0.428 |
| ETTh2 | 0.348/0.393 | 0.348/0.391 |
| ETTm1 | 0.348/0.380 | 0.340/0.379 |
| ETTm2 | 0.263/0.318 | 0.260/0.318 |
| Avg | 0.342/0.380 | **0.338**/**0.379** |

- Classification Tasks

| Accuracy (%) | Own Masked Series | All Masked Series (Ours) |
| :------------------: | :---------------: | :----------------------: |
| Epilepsy → Epilepsy | 94.18 | 94.75 |
| SleepEEG → Epilepsy | 95.49 | 95.49 |
| SleepEEG → FD-B | 67.01 | 69.40 |
| SleepEEG → Gesture | 77.08 | 80.00 |
| SleepEEG → EMG | 92.24 | 97.56 |
| Avg | 85.20 | **87.44** |

The experimental results show that "All Masked Series" performs consistently better than "Own Masked Series" in forecasting and classification tasks, which demonstrates that reconstructing from both a sample's own masked series and other, less-related series is beneficial.
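The aggregation in the revised Eq. (4) discussed in this rebuttal can be illustrated numerically. The sketch below is a minimal numpy toy, not the authors' implementation: the cosine similarity used for $\mathbf{R}$, the random data, and all dimensions are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the aggregation in the revised Eq. (4): each sample's
# reconstruction representation is a softmax-weighted sum of the *other*
# samples' point-wise representations, weighted by series-wise similarity.
# (Assumed details: cosine similarity for R, random toy data.)

rng = np.random.default_rng(0)
n, L, d_model, d_series, tau = 4, 8, 16, 32, 0.1

Z = rng.normal(size=(n, L, d_model))      # point-wise representations z'
S = rng.normal(size=(n, d_series))        # series-wise reps s' = Projector(z')

S_norm = S / np.linalg.norm(S, axis=1, keepdims=True)
R = S_norm @ S_norm.T                     # similarity matrix R[s_i, s']

def aggregate(i):
    others = [j for j in range(n) if j != i]           # the set S \ {s_i}
    logits = np.array([R[i, j] / tau for j in others])
    w = np.exp(logits - logits.max())
    w /= w.sum()                                       # softmax weights
    return sum(wj * Z[j] for wj, j in zip(w, others))  # weighted sum of z'

z_hat = aggregate(0)
print(z_hat.shape)   # (L, d_model), matching z_hat_i in R^{L x d_model}
```

Note that, as in the rebuttal, sample $i$'s own representation is excluded from the sum, which is what forces the model to suppress less-related noise series and learn the manifold structure.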
Summary: Pre-training models have been using self-supervised learning in various fields (NLP, Vision). However, masked modeling, a representative method of self-supervised learning, was difficult to apply to time-series tasks. Randomly masked data struggles to achieve good performance because the semantic information of the time series, including temporal changes, is broken. Thus, SimMTM recovers masked time points through weighted aggregation of multiple masked neighbors outside the manifold. This not only simplifies reconstruction but also reconstructs the series complementarily, leading to good performance. SimMTM performs forecasting and classification (in-domain, cross-domain) on various datasets and shows good performance. Strengths: 1. They explained the problem of masked modeling in time-series data well from the time-series point of view. 2. Also, to apply masked modeling to time-series data, the characteristics of time-series were well utilized. In other words, masked modeling was not simply applied to time-series, but the characteristics of time-series data were well understood and applied accordingly. 3. Experiments were conducted on various tasks of the pre-training model, and remarkable performance improvements were achieved. Weaknesses: It seems that there is a part that does not match the notation in the formula and Figure 2. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Regarding the weakness, I don't know what $z'$ means in Eq. 4 and line 154. There is no explanation of $s'$ in "$z'$ represents the corresponding point-wise representation of $s'$". Does $s'$ mean the series-wise representations $S$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: In fact, there are no notable limitations to this work Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank Reviewer L1Yx for providing an insightful review.

#### **About the Weaknesses**

**Q1**: A part that does not match the notation in the formula and Fig. 2.

Thank you for your careful reading. To facilitate summation operations and to distinguish different sample representations, we have introduced new symbols, $\mathbf{z}^{\prime}$ and $\mathbf{s}^{\prime}$.
- $\mathbf{z}^{\prime}$ represents the point-wise representation of a time series.
- $\mathbf{s}^{\prime}$ represents the series-wise representation of a time series.

#### **About the Questions**

**Q1**: The explanation of $\mathbf{z}^{\prime}$ and $\mathbf{s}^{\prime}$.

Thank you for the constructive review; your understanding is correct. We will modify the description and $Eq. (4)$ in the original text as follows:
- Rephrase $\underline{\text{lines 153-154 in the main text}}$ as: where $\mathbf{s}^{\prime}$ represents a series-wise representation and $\mathbf{z}^\prime$ the corresponding point-wise representation, with $\mathbf{s}^{\prime} = \operatorname{Projector}(\mathbf{z}^{\prime})$, and where $\widehat{\mathbf{z}}\_{i}\in\mathbb{R}^{L\times d_{\text{model}}}$ is the extracted point-wise representation.
- Modify $\underline{\text{Eq. (4) in the main text}}$ to:

$$
\begin{equation}
\begin{split}
& \mathbf{s}^{\prime} = \text{Projector}(\mathbf{z}^{\prime}), \newline
& \widehat{\mathbf{z}}\_{i} = \sum\_{\mathbf{s}^{\prime} \in \mathcal{S}\backslash\\{\mathbf{s}\_{i}\\}}\frac{\text{exp}({\mathbf{R}\_{\mathbf{s}\_{i},{\mathbf{s}^\prime}}/\tau})}{\sum\_{{\mathbf{s}^\prime}^{\prime} \in \mathcal{S}\backslash\\{\mathbf{s}\_{i}\\}} \text{exp}({\mathbf{R}\_{\mathbf{s}\_{i},{\mathbf{s}^\prime}^\prime}/\tau})}\mathbf{z}^\prime
\end{split}
\end{equation}
$$

---

Rebuttal Comment 1.1: Comment: Thank you for solving my question.
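As a concrete illustration of the aggregation in Eq. (4) above, here is a small NumPy sketch written for this transcript. It is a toy stand-in, not the authors' implementation: the mean-pooling `Projector`, the cosine similarity for $\mathbf{R}$, and the temperature value are all assumptions for illustration only.

```python
import numpy as np

def aggregate(z_all, i, tau=0.1):
    """Toy version of Eq. (4): reconstruct series i from its neighbors.

    z_all : (D, L, d) array of point-wise representations z' for D series
            (the original series plus its masked variants).
    i     : index of the original series s_i, which is excluded from the sum.
    tau   : softmax temperature over the similarities R.
    """
    # Series-wise representations s': a mean-pool stands in for the Projector.
    s_all = z_all.mean(axis=1)                       # (D, d)
    s_all = s_all / np.linalg.norm(s_all, axis=1, keepdims=True)
    sims = s_all @ s_all[i]                          # R_{s_i, s'} for every s'
    mask = np.arange(len(s_all)) != i                # the set S \ {s_i}
    w = np.exp(sims[mask] / tau)
    w = w / w.sum()                                  # softmax weights
    # Weighted aggregation of the neighbors' point-wise representations z'.
    return np.tensordot(w, z_all[mask], axes=1)      # (L, d)

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 16, 8))   # D=4 series, length L=16, width d=8
z_hat = aggregate(z, i=0)
print(z_hat.shape)                # (16, 8)
```

Note how the boolean mask excludes $\mathbf{s}_i$ itself, matching the point made later in this discussion that the reconstruction never uses the original series' own point-wise values.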
Summary: The authors propose SimMTM, a novel representation learning method for time-series data based on random masking. They experimentally verify the effectiveness of the proposed method from both in-domain and cross-domain perspectives. Comprehensive comparative experiments with existing methods are conducted, demonstrating the effectiveness of the proposed approach.

Strengths: The authors have conducted extensive comparative experiments with existing methods and quantitatively demonstrated the effectiveness of the proposed method. In addition to in-domain generalization, they also focus on cross-domain generalization. This verification is indispensable for validating the effectiveness of deep representation learning assuming large-scale open data.

Weaknesses: **Presentation quality** The current presentation style of this paper is confusing. In particular, there is a lack of consistency in the representation of formulas and symbols, making it very difficult to grasp the overall logic of the paper. Some specific examples include:
1. In Equation 3, the symbol $\times$ is used to denote both the Cartesian product and scalar multiplication when defining the domain of $R$.
2. The transposition notation in Equation 3 is counterintuitive, as $\mathbf{s}$ is defined as a row vector.
3. Equation 7 is overly verbose and exacerbates the difficulty of reading, as it does not contribute to the subsequent discussion. It would be sufficient to simply declare that only the situation where $i = k$ is treated as positive pairs.
4. While $\mathbf{s}$ is defined as a single vector, $\mathbf{s}^+$ is defined as a set of vectors.

Some other points that I could not make out from the paper are discussed below in the Questions section. **Comparison with prior literature** The comparison with prior literature is insufficient. For example, references [1, 2] can be cited as representative examples of representation learning methods for deep neural networks under time-series data.
In particular, [3] later generalized the method in [2] into a contrastive learning framework not limited to the time-series domain. Both [2] and [3] have superior theoretical guarantees of identifiability from the perspective of nonlinear ICA. The authors should clearly indicate the proposed model's advantages over these approaches.

**References** 1. Hyvarinen, A., & Morioka, H. (2016). Unsupervised feature extraction by time-contrastive learning and nonlinear ICA. *Advances in neural information processing systems*, *29*. 2. Oord, A. V. D., Li, Y., & Vinyals, O. (2018). Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*. 3. Hyvarinen, A., Sasaki, H., & Turner, R. (2019, April). Nonlinear ICA using auxiliary variables and generalized contrastive learning. In *The 22nd International Conference on Artificial Intelligence and Statistics* (pp. 859-868). PMLR.

Technical Quality: 3 good Clarity: 2 fair

Questions for Authors: - What is the definition of $\operatorname{Encoder}(\mathcal{X})$? Typically, a function $f \colon X \rightarrow Y$ can use all the information in $X$ to calculate $Y$, but it seems that the Encoder function defined here applies the same function in parallel along some dimension. However, it is not clear from the text which dimension this parallel processing applies to. For example, there would be at least two possibilities: $\operatorname{Encoder}(\mathcal{X})$ is a kind of "syntax sugar" for either $\bigcup\_i \operatorname{Encoder}( \\{x\_i\\} \cup \\{\bar{x}\_i^j \\}\_{j} )$ or $\bigcup\_i ( \\{ \operatorname{Encoder}(x\_i) \\} \cup \\{ \operatorname{Encoder}(\bar{x}\_i^j) \\}\_{j} )$. The same can be said about the definitions of the Projector and Decoder. - Although I understood the training pipeline of the proposed method, I had questions regarding its behavior at inference time.
Does the proposed method compute internal representations from the true sequence and $M$ randomly masked sequences at inference time, similar to training time? Or does it only use the true sequence without random masking at inference time? Additionally, how is training conducted during the fine-tuning stage of the pipeline? - The authors mention the analysis of a quantity called the CKA value in line 262 of Section 4.4, but it is unclear what this quantity actually is. While there is an explanation of Pre-training/Fine-tuning CKA in the caption of Table 5, if the CKA value mentioned in the main text refers to these, its meaning should be clearly explained in the text as well. Moreover, I did not understand why a small $| \Delta_{\mathrm{CKA}} |$ leads to "acquiring adaptive representations for different tasks" (l.264). In principle, it should be possible to drastically change the internal representation by fine-tuning while maintaining the degree of change in the internal representation along the evolution of the layers. Let $x_\mathrm{first}$ and $x_\mathrm{last}$ be the representations in the first and the last layer at the pre-training phase, respectively. We can consider one idealized situation in which the representations after fine-tuning are given as $y_{\mathrm{first}} = x_{\mathrm{first}} + \delta_{\mathrm{task}}$ and $y_{\mathrm{last}} = x_{\mathrm{last}} + \delta_{\mathrm{task}}$ with a task-dependent fixed vector $\delta_{\mathrm{task}}$. In such a situation, the Pre-training and Fine-tuning CKA values would be nearly the same, and the difference in CKA values would go to zero by construction. However, the internal representations would change drastically depending on the task choice. In this context, I believe there is a logical gap in the authors' argument. - In masked modeling-based representation learning, the proposed method uses the true sequence without masking, along with the input.
This setup is generally considered to make it more challenging to avoid trivial local solutions that merely output the input as is. Can it be said that such a phenomenon does not occur? Moreover, if trivial local solutions can be avoided, what factors of the proposed method enable this? - As a naive control condition for the idea of using multiple different random masks, it is possible to consider the variant of using only a single random mask with a reduced masked portion $r$, or of constraining the mask positions to follow specific rules rather than being completely i.i.d. Can the proposed method be claimed to be effective against such variants? In particular, the model structure using multiple random masks is expected to have at least some overhead in terms of training/inference computation time compared to such simple control conditions.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations of this paper are adequately discussed in the appendix. Furthermore, the effectiveness of the proposed method can be confirmed from various perspectives through exhaustive comparative experiments. However, as mentioned earlier, there are major concerns primarily about the presentation of the paper. These are points that affect the reproducibility of the results and the authors' claims, so it would be necessary to resolve all problems of unclear logical progression for this paper to be accepted. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer ZNHw for the detailed and insightful suggestions.

#### **About Presentation Quality**

(1) We will rephrase $Eq. (3)$ as $\mathbf{R}=\text{Sim}(\mathcal{S})\in\mathbb{R}^{D\times D},D=N(M+1).$ (2) $\mathbf{R}_{\text{u},\text{v}}$ computes the cosine similarity via transposition in $Eq. (3)$, yielding a value that represents the similarity between vectors $\text{u}$ and $\text{v}$. (3) We will remove $Eq. (7)$ in the main text. (4) We will modify $\text{s}^+$ to $\mathcal{S}^+$, a positive series set.

#### **About Comparison with Prior Literature**

**Q1**: The comparison with prior literature is insufficient.

We have compared SimMTM with six competitive state-of-the-art baselines in the main text, which are representative and recently published in the time series domain, including Ti-MAE (2023), TST (2021), LaST (2022), TF-C (2022), etc. As per the reviewer's request, we have reimplemented and compared the generalized methods you suggested in $\underline{\text{Table~1, 2 in the global response PDF}}$. SimMTM performs best, achieving 15.5%, 11.5%, and 10.6% average MSE reduction compared to TCL (2016), CPC (2018), and PCL (2019) in forecasting tasks, and improving average accuracy by 3.33% compared to PCL (2019) in classification tasks.

#### **About the Questions**

**Q1**: What is the definition of $\text{Encoder}(\mathcal{X})$ and how is it applied?

In the previous formulation, we organized each time series and its masked series along the batch dimension, which is a conventional usage in pre-training. Thus, as you pointed out, $\text{Encoder}(\mathcal{X})=\bigcup_i(\{\text{Encoder}(\text{x}\_i)\}\cup\{\text{Encoder}(\overline{\text{x}}\_i^j)\}\_j)$, which means the $\text{Encoder}$ processes each input series separately.

**Q2**: Understanding the training pipeline of SimMTM.

SimMTM follows the standard pre-training and fine-tuning paradigm, including three stages: pre-training, fine-tuning, and inference.
SimMTM mainly focuses on modeling time series representations in pre-training. Notably, masking is only applied in pre-training, not in fine-tuning or inference.
- In fine-tuning, the pre-trained Encoder is retained while the Projector and Decoder are removed. Different task heads are appended to the pre-trained Encoder, and all parameters are fine-tuned for ten epochs to validate the performance. Note that we organize data into batches for fine-tuning without masking.
- In inference, the model's parameters remain unchanged, and the fine-tuned model is directly used for inference. A single series without masking is input to the Encoder with the specific head.

**Q3**: What is $\text{CKA}$, and why does a small pre-training/fine-tuning $|\Delta_\text{CKA}|$ indicate adaptive representations for different tasks?

(1) $\text{CKA}$ can be used to measure the representation-learning property of deep models. Centered Kernel Alignment ($\text{CKA}$) is a statistical measure between representations that identifies correspondences in models with different initializations. We can use it to measure the representation-learning property of deep models by calculating the $\text{CKA}$ between a model's first- and last-layer representations. A larger $\text{CKA}$ indicates the model tends to learn high-level representations; this usage is common in previous works [1,2,3].

(2) $|\Delta_\text{CKA}|$ can measure the difference in representation-learning property between the pre-training and fine-tuning models. If both the pre-training and fine-tuning models hold the same high-level or low-level representation-learning property, the two models will present close $\text{CKA}$ values, corresponding to a smaller $|\Delta_\text{CKA}|$. Thus, we use $|\Delta_\text{CKA}|$ to measure the gap between the pre-training and fine-tuning models.
In other words, **we do not attempt to use $|\Delta_\text{CKA}|$ to measure the change of representations but use it to quantify the change of the model's representation-learning property**.

[1] Xie, et al., Revealing the Dark Secrets of Masked Image Modeling. CVPR, 2022. [2] Kornblith, et al., Similarity of Neural Network Representations Revisited. ICML, 2019. [3] Wu, et al., TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. ICLR, 2023.

**Q4**: Why does the usage of the true series in SimMTM not lead to trivial local solutions?

Note that SimMTM reconstructs point-wise series representations based on series-wise representation similarity. Although the true series-wise representation $\text{s}\_i$ undergoes the similarity calculation in $Eq. (2)$, we only use the point-wise representations $\text{z}^\prime$ for aggregation (corresponding to $\text{s}^\prime \in \mathcal{S}\backslash\{\text{s}\_i\}$). This implies that the reconstruction in SimMTM relies solely on the series-wise similarity values between the true series and the other series, disregarding the specific point-wise values of the true series. The model is not simply reconstructing the original series from the true series itself, so trivial local solutions will not occur.

**Q5**: Trying variants of masking rules.

The variant you mentioned, using only one masked series with a lower masked ratio, can perform well but cannot beat the multiple-masked-series setting (as shown in $\underline{\text{Fig. 5 in the main text}}$). Further, we have analyzed the relationship between the masked ratio and the number of masked series in Fig. 5, where a reasonable balance between the two is critical. As you requested, we also compared different masking rules on ETTh1, including:
- Fixed position: masking the tail or the head
- Random position: masking large segments, small segments, or points

The experiments are in $\underline{\text{Table~3 in the global response PDF}}$.
Results show that random masking is superior to fixed masking (Small Segment > Masked Tail > Masked Head). The differences among the random masking variants are insignificant. As mentioned in **Q2**, masked modeling involves only pre-training, so there is no computational overhead problem in fine-tuning and inference.

---

Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you for responding to my questions and concerns. Regarding presentation quality, I think the paper will be easier to read if revisions are made based on the comments. Thanks for adding the experimental comparison to the previous studies I pointed out. After confirming the experimental results, I acknowledge the experimental advantage of this paper over these papers.

About the questions

**Q1 & Q2** I understood the procedure of this work. Thank you for your response.

**Q3** I am still not fully convinced. Regarding CKA, the authors stated:
> Centered Kernel Alignment (CKA) is a statistical measure between representations that identify correspondences in models with different initializations. We can use it to measure the representation-learning property of deep models by calculating between a model's first and last layer representations. A larger indicates the model tends to learn high-level representations, which is widely used in previous works [1,2,3].

First, CKA itself is a general metric that evaluates the similarity between two representations, not limited to models with different initializations or layers. I acknowledge that CKA itself is widely used to evaluate the similarity of model representations. However, I was still not convinced by the authors' statement that evaluating the CKA of the first and last layers of a model can tell whether the model has acquired "high-level representations." It seems to me that a high CKA simply means high representation similarity between layers, but how could this lead to the acquisition of a "high-level representation" as an entire model?
**Q4** I understood the authors' point.

**Q5**
> As mentioned in Q2, masked modeling involves only pre-training, so there is no computational overhead problem in fine-tuning and inference.

Thank you for making this point clear. However, I asked about *some overhead in terms of both training/inference computation time.* The computational overhead of training time caused by using multiple maskings should be clearly documented in the manuscript. Overall, the additional experiments and notational revisions made the paper's contribution clear. But I still have some concern about the interpretation of the CKA analysis. Therefore, let me raise my score to 5.

---

Reply to Comment 1.1.1: Title: Thanks for Your Response and Raising the Score Comment: We thank Reviewer ZNHw for providing a detailed and valuable rebuttal review and feedback, which enables us to understand your concerns clearly. We are delighted to further clarify the remaining two questions:

**(1)** About the CKA: we adopt this concept following the usage of the previous works [1, 2]:
- Xie et al. [1] use the CKA to measure the attention similarity among different layers in a vision Transformer, specifically to "measure the representation diversity in the model."
- TimesNet [2], which also focuses on time series (**closer to our paper**), adopts the CKA similarity between the bottom and top layers to define "high-level" and "low-level" representations (or tasks) quantitatively ($\underline{\text{Section 4.6 of their official paper}}$).

Intuitively, since the bottom-layer representations are usually viewed as containing "low-level" or detailed information, a smaller CKA similarity means the top layer contains information different from the bottom layer, i.e., so-called "high-level" or abstract information. Many thanks for your valuable question. We will carefully rephrase this part, adding new explanations and proper citations to make the concept clearer.
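For readers of this thread, the quantity under discussion can be made concrete with a short sketch of linear CKA (the standard formulation of Kornblith et al., ICML 2019, written here for illustration; it is not code from the paper, and the feature sizes are made up):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices X (n, p) and Y (n, q).

    Returns a value in [0, 1]; higher means the two sets of features are
    more similar (invariant to orthogonal transforms and isotropic scaling).
    """
    X = X - X.mean(axis=0)   # center each feature dimension
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
first = rng.normal(size=(100, 32))              # e.g. first-layer features
q, _ = np.linalg.qr(rng.normal(size=(32, 32)))  # a random rotation
print(linear_cka(first, first))                 # 1.0
print(linear_cka(first, first @ q))             # ~1.0: rotation-invariant
```

In the usage discussed above, one would compute `linear_cka` between a model's first- and last-layer features, once for the pre-trained model and once for the fine-tuned model, and take the absolute difference of the two values to obtain $|\Delta_\text{CKA}|$.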
**(2)** About the training overhead: Thanks for pointing out this item. We will discuss the training overhead in the $\underline{\text{Conclusion}}$ section as a limitation and place it in future work. Many thanks for your dedication! We guarantee to resolve all the writing issues and include all the mentioned updates in the final version.

=====

[1] Xie, et al., Revealing the Dark Secrets of Masked Image Modeling. CVPR, 2022. [2] Wu, et al., TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. ICLR, 2023.

---

Rebuttal 2: Title: Request for the reviewer's attention and feedback Comment: Dear Reviewer, We kindly remind you that only 2 days remain until the reviewer-author discussion ends. Please let us know if our response has addressed your concerns. Due to the word limit of the rebuttal, we will be happy to further address your concerns or any additional issues/questions in the discussion period. Following your suggestions, we have answered your concerns and improved the paper in the following aspects:
- We **have rewritten or modified some formulas and corrected some writing errors** to improve presentation quality.
- We **have compared SimMTM with the three generalized representation learning methods you mentioned, TCL (2016), CPC (2018), and PCL (2019)**, further demonstrating the state-of-the-art performance of SimMTM.
- We **have explained the training pipeline of pre-training and fine-tuning**, **the definition of the Encoder**, and **what $\text{CKA}$ is and why we use $|\Delta_{\text{CKA}}|$ to measure the difference in representation-learning property between the pre-training and fine-tuning models**.
- We have further **analyzed the relationship between masked ratios and numbers** and **compared the effect of different masking rules**.

Since we cannot submit the revised version during the reviewing phase, we guarantee to resolve all the writing issues and include all the above updates in the revised paper. Thanks again for your valuable review.
We are looking forward to your reply.

---

Rebuttal 3: Title: We are anticipating your feedback. Comment: Dear Reviewer ZNHw, Thanks again for your valuable and constructive review, which has inspired us to substantially improve our paper. Following your suggestions, we have modified some formulas and fixed writing errors to improve presentation quality, compared against the three generalized representation learning methods you mentioned, analyzed the relationship between masked ratios and numbers, compared the effect of different masking rules, and addressed all the weaknesses you mentioned in detail. We have done our best to address your concerns within the limited time and character budget. We hope that this new version has addressed your concerns to your satisfaction. We eagerly await your reply and are happy to answer any further questions. **We kindly remind you that the reviewer-author discussion phase will end by Aug 21st at 1 pm EDT, just 4 hours from now. After that, we may not have a chance to respond to your comments**. Sincere thanks for your dedication! Authors
Summary: This paper proposes a simple pre-training framework for masked time-series modeling. Instead of reconstructing the original data directly, which is unsuitable for time series, the method recovers masked time points through the weighted aggregation of multiple neighbors outside the manifold. The reviewer appreciates the novel method proposed but is still concerned about some results.

Strengths: 1) The method is novel; the multi-level aggregation and representation scheme is appealing. 2) Experimentally, both high- and low-level tasks benefit from this method.

Weaknesses: 1. The experimental results need more justification (see Questions). 2. The relationship with manifold learning needs more description. What is the progress in this respect? 3. The masked parts in Figure 1 and in Figure 1 of the supplementary material should be annotated, and more results should be listed. A well-trained network usually predicts low-frequency results. Why does the result of TST in the upper-right part of Figure 1 in the supplementary material contain so many high-frequency components?

Technical Quality: 2 fair Clarity: 4 excellent

Questions for Authors: In terms of experimental results, why does Random init. outperform TF-C in Table 2? Does SimMTM use a stronger baseline? Is it an unfair comparison? Maybe TF-C or other methods would outperform SimMTM with the same baseline. It seems that the model directly learns to aggregate information from different masked series, which captures more temporal information than other semantic information.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: The authors should discuss further the implications of using large-scale pre-training for forecasting in terms of fairness.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank Reviewer xbHA for providing an insightful review.

#### **About the Weaknesses**

**Q1**: The relationship with manifold learning needs more description. What is the progress in this respect?

Masked modeling is a mainstream paradigm in self-supervised pre-training. However, the temporal information of a time series is significantly distorted as the masked ratio increases, making the reconstruction task too difficult to guide representation learning. SimMTM first introduces the core idea of neighborhood aggregation from manifold learning into self-supervised time series representation learning by reconstructing the original time series from multiple neighboring masked series. Numerous experimental results further demonstrate the importance of introducing this idea into self-supervised representation learning for time series.
- SimMTM achieves 8.9% average MSE reduction (0.335→0.305) and 4.6% average MAE reduction (0.445→0.329) compared to the advanced masked modeling baseline Ti-MAE (2023), which reconstructs the original time series via standard masked modeling, on in-domain forecasting benchmarks ($\underline{\text{Table 3 in the main text}}$).
- The average accuracy of SimMTM remarkably surpasses the previous state-of-the-art TF-C (2022) (87.44% vs. 83.29%), as shown in $\underline{\text{Table 4 in the main text}}$.

**Q2**: Why do the results of TST contain so many high-frequency components?

Masking makes the input series temporally irregular, bringing noise into the reconstruction and thereby leading to high-frequency results. Following your suggestion, we have included more showcases in $\underline{\text{Figure 1 in the global response PDF}}$. SimMTM reconstructs the original time series by aggregating multiple neighboring masked series, whose information is aggregated reasonably and is mutually complementary.
Aggregating multiple neighbors brings a more stable reconstruction, which is more beneficial for time series representation learning.

#### **About the Questions**

**Q1**: Why does Random init. outperform TF-C?

We have mentioned the particularity of TF-C and LaST in $\underline{\text{lines 207-209 in the main text}}$. The TF-C method is closely tied to its model structure; thus, we did not unify the Encoder of TF-C in the original submission. In addition, TF-C primarily focuses on time series classification tasks, not forecasting tasks. Specifically, TF-C, which is based on contrastive learning of series-wise representations in time and frequency, is more suitable for series-wise classification tasks than for point-wise forecasting tasks. To validate this further, we attempted to replace the dual-tower TF-C (CNN) with TF-C (Transformer). We show the averaged MSE/MAE over 4 forecasting lengths {96, 192, 336, 720} based on the past 336 time points. Results show that TF-C (Transformer) outperforms TF-C (CNN) in forecasting tasks but still significantly lags behind SimMTM. Note that even if we make each part of TF-C the same as in SimMTM (Transformer), its unique dual-tower design cannot be strictly consistent with SimMTM.

- Forecasting tasks in the in-domain setting

|Average MSE/MAE|TF-C (CNN)|TF-C (Transformer)|SimMTM|
|:-:|:-:|:-:|:-:|
|ETTh1|0.637/0.638|0.492/0.481|0.404/0.428|
|ETTh2|0.398/0.398|0.401/0.437|0.348/0.391|
|ETTm1|0.744/0.652|0.466/0.445|0.340/0.379|
|ETTm2|1.755/0.947|0.295/0.341|0.260/0.318|

**Q2**: Does SimMTM use a stronger baseline? Is it an unfair comparison?

Ensuring complete consistency in the backbone used by all baselines is often unrealizable due to the high dependence between model architecture and the pre-training method in this domain. Existing work also commonly faces this problem [1,2,3]. We have made a great effort to ensure a fair comparison.
For all baselines and all tasks, we have conducted both experiments: using a unified backbone and using the specific backbone proposed in each original paper. The comprehensive experiments and descriptions are presented in $\underline{\text{lines 203-211 in the main text}}$ and $\underline{\text{Table 7, 8 in Supplementary Materials}}$.

[1] Yue, et al., TS2Vec: towards universal representation of time series. AAAI, 2022. [2] Zhang, et al., Self-supervised contrastive pre-training for time series via time-frequency consistency. NeurIPS, 2022. [3] Nie, et al., A time series is worth 64 words: long-term forecasting with transformers. ICLR, 2023.

**Q3**: The model learns more temporal information than other semantic information.

As stated in $\underline{\text{lines 5-6 in the abstract}}$, the semantic information of a time series is mainly concentrated in its temporal variations. Thus, detailed semantic information can be learned by aggregating multiple time series for reconstruction. In addition, the aggregation process also relies on the similarity among the series-wise representations of multiple time series, which makes the model also learn global semantic information.

#### **About the Limitations**

**Q1**: The authors should discuss more implications of using large-scale pre-training in forecasting in terms of fairness.

We pay great attention to the issue of fair comparison.
- We conduct two types of experiments for all baselines and all tasks: using a unified backbone and using the specific backbone proposed in each original paper.
- We strictly separate training, validation, and testing for all data to prevent data leakage.

---

Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response. At this stage, I will keep my score unchanged.
Rebuttal 1: Rebuttal: ## Summary of Revisions and Global Response

We sincerely thank all the reviewers for their insightful reviews and valuable comments, which are instructive for improving our paper further. Since standard masked modeling seriously ruins the vital temporal variations of time series, this paper presents SimMTM, a simple pre-training framework for masked time series modeling. By relating masked modeling to manifold learning, SimMTM proposes to recover masked time points by the weighted aggregation of multiple neighbors outside the manifold, which eases the reconstruction task by assembling ruined but complementary temporal variations from multiple masked series. **SimMTM achieves state-of-the-art fine-tuning performance compared to 6 advanced baselines on 12 well-established benchmarks in two canonical time series analysis tasks, forecasting and classification, covering both in- and cross-domain settings.**

The reviewers generally held positive opinions of our paper, noting that the proposed method is "**novel**", "**a novel representation learning method**", and "**sound and well-motivated**", that the paper has "**comprehensive comparative experiments**" and achieves "**remarkable performance improvements**", and that the empirical evaluation is "**comprehensive and convincing**". The reviewers also raised insightful and constructive concerns. We have made every effort to address all the concerns by providing sufficient evidence and the requested results. Here is a summary of the major revisions:

- **Clarify the unified backbone setting (Reviewer xbHA)**: For all baselines and all tasks, we have conducted two types of experiments in the original submission, using both a unified backbone and the specific backbone proposed in each original paper. Furthermore, we perform experiments keeping each part of the dual-tower TF-C consistent with SimMTM. All results show SimMTM produces the best performance.
- **Resolve the writing issues (Reviewer ZNHw, L1Yx, 5Qee)**: We have rewritten or modified some formulas and corrected some writing errors to improve presentation quality. - **Analyze the relationship of masked ratios and numbers (Reviewer ZNHw)**: We've proven that one masked series with a lower masked ratio can perform well, and we have further analyzed the relationship between masked ratio and masked numbers. Results demonstrate that the design of multiple masked series aggregation in SimMTM is critical, and a reasonable balance between masked ratio and masked numbers shows better performance. - **Add comparison in different masking rules (Reviewer ZNHw)**: We further explored the effect of different masking rules, including fixed position (tail or head) and random position (large, small segment or points) masking. Results show random masking performs better than fixed masking, and the difference is insignificant among different random masking rules. - **Analyze the effects of different reconstruction candidates (Reviewer 5Qee)**: We have conducted a comparison experiment by aggregating from a sample's own masked series or from all masked series in the reconstruction process. Results show that using both a sample's own and other less-related noise series for reconstruction performs better. - **Add baselines (Reviewer ZNHw)**: By comparing SimMTM and three generalized representation learning methods TCL(2016), CPC(2018), and PCL(2019) in forecasting and classification tasks, we further demonstrate that SimMTM achieves state-of-the-art performance. The valuable suggestions from reviewers are very helpful for us to improve our paper. We'd be very happy to answer any further questions. Looking forward to the reviewer's feedback. #### **The mentioned Tables and Figures are included in the following PDF file.** - **Figure 1**: More showcases for Reviewer xbHA. - **Table 1**: New baselines in forecasting for Reviewer ZNHw. - **Table 2**: New baselines in classification for Reviewer ZNHw. 
- **Table 3**: Different masking rules for Reviewer ZNHw. Pdf: /pdf/73a0d02108d6b7a1d9abc02dfb9f726444e93652.pdf
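As a rough illustration of the masked-aggregation idea described in the rebuttal above, the following sketch shows how several complementarily masked views of a series can recover more of it than any single masked view. Everything here is an assumption for illustration: a toy sine series, zero-filled random masking at a 50% ratio, and plain point-wise averaging in place of SimMTM's learned, similarity-weighted aggregation over series-wise representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one ground-truth series and M independently masked copies.
T, M = 64, 3
x = np.sin(np.linspace(0, 4 * np.pi, T))
masks = rng.random((M, T)) < 0.5          # True = time point is masked out
views = np.where(masks, 0.0, x)           # masked points zeroed

# Point-wise aggregation across the M masked views: average the values
# that survived masking, so complementary fragments fill each other's gaps.
kept = (~masks).astype(float)
recon = views.sum(axis=0) / np.maximum(kept.sum(axis=0), 1.0)

err_single = np.abs(views[0] - x).mean()  # error of one masked view alone
err_agg = np.abs(recon - x).mean()        # error after aggregation
print(f"single view error: {err_single:.3f}, aggregated error: {err_agg:.3f}")
```

With a 50% mask ratio and three views, a point is lost in all views only with probability 1/8, so the aggregated reconstruction error is markedly lower than that of a single masked view.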
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Scaling Laws for Hyperparameter Optimization
Accept (poster)
Summary: The authors present a hyperparameter tuning scheme based on optimizing surrogate powerlaws. They claim substantial improvements over baselines on hyperparameter tuning benchmarks. Strengths: The writing is generally clear, and the method is well described. The authors report substantial improvements over baselines on what appear to be standard benchmarks. Real improvement here would be significant for the field. I should note here that I am unfamiliar with these HPO baselines and benchmarks. While the reported results seem potentially interesting, I place low confidence in my own assessment here. Input from other reviewers with more familiarity with the methods of assessment and the difficulties one encounters when using HPO in practice will be important here. Weaknesses: It is very hard to assess Hypothesis 1 from Figure 1. This figure seems like it ought to be a final summary figure after some more illustrative figures showing, for example, parameter vs. performance, with five proposed fits overlaid. More generally, the lack of figures explicitly showing powerlaws in this paper about powerlaws is quite odd and makes it hard to gauge the claims made. For example, I'm quite skeptical that *all* metrics one might want to track follow nice powerlaws as the proposed fitting function assumes -- which seems like it'd largely invalidate the proposed HP tuner -- but I could be convinced by lots and lots of plots showing that metric after metric is indeed forecasted nicely by a powerlaw. This is what I'd expect to see here. For a paper using powerlaw scaling, there's a surprising lack of discussion of [Kaplan et al. (2020)](https://arxiv.org/abs/2001.08361) or the Chinchilla scaling laws, which are foundational results dealing with powerlaws + large models used in practice. Technical Quality: 3 good Clarity: 3 good Questions for Authors: On "Unfortunately, HPO is not yet feasible for Deep Learning (DL) methods" -- what does this mean? 
Hyperparameters are optimized all the time. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the thoughtful review. We provide the following clarifications on the questions raised by the reviewer: - **Regarding “It is very hard to assess Hypothesis 1 from Figure 1. This figure seems like it ought to be a final summary figure after some more illustrative figures showing, for example, parameter vs. performance, with five proposed fits overlaid. More generally, the lack of figures explicitly showing powerlaws in this paper about powerlaws is quite odd and makes it hard to gauge the claims made.”:** We would like to initially clarify to the reviewer that our power law surrogate is a function of the number of epochs/iterations of optimization (as explained in the experimental protocol in Section 5), and not of the number of parameters. Hypothesis 1 in Figure 1 directly addresses the reviewer's question on “How well do power laws model learning curves”. Our experiment is based on the rationale that “method A models learning curves better than method B if method A is able to forecast future unobserved values of the learning curve (given a partially observed curve) more accurately than method B”. We opted for Hypothesis 1 because we believe that reporting the aggregated forecasting accuracy across thousands of learning curves is a more principled assessment than visually interpreting a few learning curves. Regarding the metric of evaluating the forecasting performance, we present both results in mean absolute relative error in Appendix J, Figure 18, as well as the rank correlation in Figure 1. Nevertheless, we understand the reviewer’s concern that simpler visualizations will help readers understand the method more quickly. As a result, we provide additional figures (extra page, in the global response) that visually show how DPL fits learning curves from diverse datasets included in our experiments. 
DPL is given partial observations from **only the learning curve that is investigated** and infers the rest of the learning curve based on the observed points. We will incorporate these visualizations into the camera-ready version. - **Regarding “For example, I'm quite skeptical that all metrics one might want to track follow nice powerlaws as the proposed fitting function assumes -- which seems like it'd largely invalidate the proposed HP tuner”:** We would refer the reviewer to [1], and additionally to [2] (which the reviewer suggests), which show that well-behaved learning curves “generally” follow a power law pattern. However, the reviewer is right in stating that not “all” learning curves follow a power law assumption. We fully agree with that statement, and also **stress this point in the paper in lines 247-249 “The findings indicate that even though not all learning curves are power laws, most of them are, therefore a power law surrogate is efficient in forecasting the final performance of a partially-observed learning curve.”** At the end of the day, the quality of the HPO results is the ultimate metric of success, not the quality of learning curve modeling. HPO is a different process than learning curve forecasting, because it focuses on exploring and exploiting regions of performant hyperparameter configurations, in order to recommend the “best configuration to evaluate next”. Through ample experiments, we show that Bayesian optimization equipped with our novel power law surrogate and acquisition achieves better HPO results. It is worth emphasizing that learning curves that deviate significantly from the power law assumption are usually diverging configurations (e.g. loss and error rate increasing suddenly, as in the case of very high learning rates). Modeling such learning curves suboptimally appears to not hurt the quality of HPO, because divergent learning curves usually represent bad hyperparameter configurations with high error rates. 
Since in HPO with Bayesian optimization the acquisition recommends only the best-estimated configuration (the one with the lowest estimated error rate), it is not essential whether we optimally or suboptimally estimate the learning curve of the configurations with a high error rate, because the bad configurations are not recommended for evaluation by the HPO algorithm. **For transparency, we would point the reviewer to Lines 239-245 where we provide a few ways on how to further tackle learning curves that have a divergent behavior. As pointed out by the aforementioned lines, the analysis is extended in Appendix C in more detail.** - **Regarding: For a paper using powerlaw scaling, there's a surprising lack of discussion of Kaplan et al. (2020) or the Chinchilla scaling laws, which are foundational results dealing with powerlaws + large models used in practice.** We agree with the reviewer. We will add the suggested papers to the list of already-cited prior works on scaling laws in Section 3 for the camera-ready version. - **Regarding: On "Unfortunately, HPO is not yet feasible for Deep Learning (DL) methods" -- what does this mean? Hyperparameters are optimized all the time.** By HPO we refer to principled and automated searching techniques for tuning hyperparameters, such as Bayesian optimization, rather than manual and ad-hoc parameter tuning. For modern Deep Learning, most researchers and practitioners follow suboptimal trial-and-error searching of hyperparameters based on a local search around an initial guess of hyperparameter values. If the reviewer is pleased with the clarifications and proposed changes we would appreciate a reflection of the discussion in the score. In case there are more questions, we are happy to answer them. [1] Mohr, F., & van Rijn, J. (2022). Learning Curves for Decision Making in Supervised Machine Learning--A Survey. arXiv preprint arXiv:2201.12150. [2] Kaplan et al. Scaling laws for neural language models (2020)
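The power-law extrapolation discussed in this rebuttal can be sketched in a few lines. This is a hypothetical illustration, not DPL's actual ensemble surrogate: the saturating form $f(b) = a - c \cdot b^{-\alpha}$, the constants, and the 20-epoch observation window are all assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(b, a, c, alpha):
    # Saturating power law: performance approaches the asymptote `a`
    # as the training budget b (epochs/iterations) grows.
    return a - c * np.power(b, -alpha)

# Synthetic, noiseless learning curve over 100 epochs (assumed constants).
budgets = np.arange(1, 101, dtype=float)
curve = power_law(budgets, a=0.85, c=0.5, alpha=0.6)

# A gray-box method observes only a short prefix of the curve ...
n_obs = 20
params, _ = curve_fit(power_law, budgets[:n_obs], curve[:n_obs],
                      p0=[1.0, 1.0, 0.5], maxfev=10_000)

# ... and extrapolates it to forecast the final performance at full budget.
forecast = power_law(budgets[-1], *params)
print(f"forecast at epoch 100: {forecast:.4f} (true: {curve[-1]:.4f})")
```

A Bayesian optimization loop would fit many such curves (one per configuration, here via an ensemble of neural networks in DPL) and use the forecasts, with their uncertainty, to decide which configuration to continue training.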
Summary: AFTER REBUTTAL: I acknowledge reading the rebuttal. After the reviewer discussion, I will keep my score the same. ---- Gray-box hyperparameter optimization involves optimizing neural network hyperparameters by evaluating performance at low budgets and terminating configurations if they seem unpromising. This paper proposes a gray-box scheme using Bayesian optimization over neural scaling laws to estimate future model performance. The key insight is to fit an ensemble of neural networks that can estimate the power law parameters and then apply BO to estimate new hyperparameter settings. Numerical results demonstrate the power law’s ability to forecast as well as state-of-the-art relative regret in HPO. Strengths: Overall the paper is well-written, clearly motivated, suggests an intuitive strategy, and rigorously experimented. Weaknesses: - The function $f(\lambda)$ gets overloaded with $f(\lambda, b)$ or as $f(b)$ in various parts. - The definition of the cost function is not present. It would be nice to have a slower explanation of the experiment setup including the cost function and budget, since the plots tend to just look at normalized budget metrics. - The ideas of learning a meta model of power laws, of using Bayesian optimization over power laws, and using an ensemble of power laws have all been explored in recent works. Although these works do not focus on hyperparameter optimization, it would be nice to differentiate the current work from the previous in terms of methods. - Jain, Achin, et al. "A meta-learning approach to predicting performance and data requirements." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. - Tejero, Javier Gamazo, et al. "Full or Weak annotations? An adaptive strategy for budget-constrained annotation campaigns." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. - Mahmood, Rafid, et al. "Optimizing data collection for machine learning." 
Advances in Neural Information Processing Systems 35 (2022): 29915-29928. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see the weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are not meaningfully discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the thoughtful review. We provide the following clarifications on the questions raised by the reviewer: - **Regarding “The function $f\left(\lambda\right)$ gets overloaded with $f\left(\lambda, b\right)$ or as $f\left(b\right)$ in various parts.”:** We thank the reviewer for spotting the inconsistency in our formalism. To improve clarity, we will rephrase $f^{best}\left(b\right)$ and $f^{oracle}\left(b\right)$ as $f\left(\lambda^{best}, b\right)$ and $f\left(\lambda^{oracle}, b\right)$. This affects Equation 7 (bottom) and Equation 9 (as well as the text that refers to both terms in Line 126 and Lines 146-151). We will incorporate the aforementioned changes for the camera-ready version of our work. Regarding the preliminaries of Section 3, there is no notion of $b$ for the first 2 paragraphs as multi-fidelity has not been introduced yet. Additionally, in Section 6, when $f\left(b\right)$ is used, it refers to models that do not take the hyperparameter configuration as an input. - **Regarding "The definition of the cost function is not present. It would be nice to have a slower explanation of the experiment setup including the cost function and budget, since the plots tend to just look at normalized budget metrics.":** We would like to point the reviewer to the **experimental protocol, Section 5, lines 137-144** where we describe the cost function and budget. **“One unit step of the HPO budget signifies training one hyperparameter configuration for one more step (1 epoch in LCBench, or 200 iterations in TaskSet)”**. We then describe in detail what a step represents for all benchmarks and the benchmark-specific default evaluation metric. We summarize the information below:

|Benchmark | Learning curve step | Evaluation metric|
|:---------------|:---------------------------|:----------------------|
|LCBench | 1 epoch | Bal. Accuracy |
|TaskSet | 200 batch iterations | Loss |
|PD1 | 1 epoch | Accuracy |

- **Regarding "The ideas of learning a meta-model of power laws, of using Bayesian optimization over power laws, and using an ensemble of power laws have all been explored in recent works. Although these works do not focus on hyperparameter optimization, it would be nice to differentiate the current work from the previous in terms of methods.":** We thank the reviewer for suggesting the parallel works at CVPR 2023, and the NeurIPS 2022 paper, which, although they do not address HPO, still elaborate on scaling laws for performance predictions. We will integrate the suggested works into our work for the camera-ready version. We agree with the comments on novelty, **except for the point “... using Bayesian optimization over power laws … has been explored in recent works”**. To the best of our awareness, we are the first to explore Bayesian optimization for HPO with power laws. We believe to have adequately addressed the questions from the reviewer. In case there are more questions, we are happy to answer them.
Summary: The paper proposes a novel surrogate for multi-fidelity HPO that uses an ensemble of power-law models to estimate the future validation loss of hyper-parameter configurations at intermediate stages of training. The principal novelty of the work lies in exploiting the observation (made in other work) that learning curves tend to follow power laws. Thorough experiments are presented across diverse datasets and tasks, comparing against publicly available strong HPO baselines; SOTA performance is demonstrated. Analysis demonstrates an improvement in learning-curve forecasting for the surrogate over non-power law techniques, as well as an increase in efficiency for DPL over HPO baselines. Strengths: The paper presents a novel contribution to the HPO methodology literature. The claims of the paper are well supported by experiment, and generally well analyzed. The specification of the separate hypotheses of the work in the analysis gives a pleasantly clear structure to the paper. The results demonstrably advance the SOTA in HPO; this work seems very likely to be used and built upon in future HPO work. Weaknesses: The final analysis section, evaluating the use of DPL for LLMs, is less convincing than the previous sections. Significant work is done by line 319 “We follow the common practice of conducting HPO with small transformers and then deploying the discovered optimal configuration on the full-scale transformers”. Citations of prior work which discuss this convention or apply the ‘common practice’ should be included (perhaps the paper on Tensor Programs V (Yang et al.2022)). Following the convention of that paper, it would be valuable to include the total computational cost of tuning the small models as a proportion of the computational cost of training the large model once (with the small-model-identified parameters). 
This would give a much better sense of the practicality of applying such a technique in a LLM setting, where the computational burden of training is paramount. Nits: In Section 5.1, the architectures of LCBench and PD1 are mentioned, but for TaskSet no mention of architecture is made. In the LLM section, total parameter count of the various models should be reported. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The results on PD1 seem weaker than those for LCBench and TaskSet in Figures 2,3,4, and PD1 is left out of Figure 5 and pushed to the appendix. Do you have any analysis or commentary on the relative performance of the method as a function of the architectures or tasks involved in each benchmark? From the description in Sec 5.1, it seems PD1 is a quite diverse benchmark. While the analysis presented quite reasonably focuses on the aggregated performance of each benchmark, was any analysis performed as to the relative performance on specific architectures, tasks, or hyper-parameter spaces? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The practical limitations of the applicability of this work towards LLMs may be under-explored. While the extension of the method towards giant models is not necessary for the impact of this generally strong contribution, it remains unclear if the efficiency gains over existing HPO methods are sufficient to overcome the general concerns of computational inefficiency which have typically precluded Bayesian HPO methods from general use in larger models. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the thoughtful review. We provide the following clarifications on the questions raised by the reviewer: - **Regarding “Significant work is done by line 319. We follow the common practice of conducting HPO with small transformers and then deploying the discovered optimal configuration on the full-scale transformers. Citations of prior work which discuss this convention or apply the ‘common practice’ should be included (perhaps the paper on Tensor Programs V (Yang et al.2022))“:** We agree with the reviewer. To improve clarity, we will refer again to the Tensor Programs V work from Line 74 (in the related work). - **Regarding "Following the convention of that paper, it would be valuable to include the total computational cost of tuning the small models as a proportion of the computational cost of training the large model once (with the small-model-identified parameters).":** We agree with the reviewer. In our LLM experiment (Section 6, Hypothesis 4), it takes 3.66 hours to find the Oracle configuration for the largest model via HPO for the smallest model. In turn, it takes 21.52 hours to train the largest model only once. As such, the proportion is 0.17. We will update our camera-ready version as suggested by the reviewer. - **Regarding: "In Section 5.1, the architectures of LCBench and PD1 are mentioned, but for TaskSet no architecture is mentioned.":** We thank the reviewer for pointing out the missing information. The architectures for TaskSet are a Variational RNN, Identity RNN, GRU RNN, and LSTM RNN. We will make sure to update Section 5.1 for the camera-ready version with the details of the architectures included in TaskSet. - **Regarding: "In the LLM section, the total parameter count of the various models should be reported."** We agree with the reviewer. 
The parameter counts for the various models are as follows:

| Embedding Size | Total Parameters |
| :---------------------- | :---------------------: |
| 6 | 0.3 M |
| 12 | 0.6 M |
| 24 | 1.2 M |
| 48 | 2.6 M |
| 96 | 5.5 M |
| 192 | 12.4 M |
| 384 | 30.1 M |

We will update the camera-ready version to include the parameter counts of the models. - **Regarding "The results on PD1 seem weaker than those for LCBench and TaskSet in Figures 2,3,4, and PD1 is left out of Figure 5 and pushed to the appendix. Do you have any analysis or commentary on the relative performance of the method as a function of the architectures or tasks involved in each benchmark? From the description in Sec 5.1, it seems PD1 is a quite diverse benchmark. While the analysis presented quite reasonably focuses on the aggregated performance of each benchmark, was any analysis performed as to the relative performance on specific architectures, tasks, or hyper-parameter spaces?"** We thank the reviewer for the interesting question. We would like to point the reviewer to Section 6, Hypothesis 2 (Line 274) where we provide our comments on the lack of statistical significance for PD1. As a summary, we did investigate the specific tasks/search spaces where DPL does not perform as well as the other baselines, as the reviewer suggests. We notice that the tasks have a skewed distribution of hyperparameter configuration performances, where a majority of the configurations achieve top performance. The detailed analysis can be found in Appendix F. We believe to have adequately addressed the questions from the reviewer. In case there are more questions, we are happy to answer them. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the thorough response. All of the questions I had have been adequately addressed. On reading the other reviews and responses, I maintain my vote for acceptance.
Summary: This paper proposes to combine power law patterns in learning curves, which have been recently popularized by scaling laws, to improve efficiency and performance of Bayesian optimization (BO) based hyperparameter optimization (HPO). Authors provide detailed discussions on how to model the surrogate function and accuracy of their power law based surrogate function in predicting final performance. Tested on three HPO benchmarks, authors empirically demonstrate that DPL achieves better performance given a limited budget compared to other BO-based HPO baseline algorithms. Strengths: 1. I believe the strategy of incorporating power law patterns from scaling laws into the surrogate function of Bayesian optimization is neat and smart in that it enables an improved prediction of learning curves. 2. In addition, authors have empirically justified the soundness of their method through diverse analyses and experiments. 3. Given its simplicity, I expect this method would generalize across different tasks not explored in this paper. 4. Authors described their experiment settings in detail, which was very helpful in evaluating the significance of their method. Weaknesses: 1. I believe the clarity of the paper could be improved. For example, the abstract of this paper is too simple, and I wasn't able to grasp the general direction or the concept of the paper by reading it. Also, I believe moving the algorithm box from Appendix to the main text would help readers to better understand their method. I currently see quite a lot of empty space in the paper, so I expect that, with minor formatting effort, it would be easy to move the algorithm box to the main text. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. I am curious about the applicability of DPL to different ML techniques such as random forest or XGBoost, which is arguably the most popular option for tabular data. 
I am not an expert in this domain, but I doubt the concept of learning curves also plays an important role in those types of models. If not, I believe stating it clearly and focusing the writing on the area where their method shines most would actually improve the clarity of the paper. 2. I want to know the authors' opinion on the applicability of their method to training recent large models. The training cost of recent large models can be astronomical, so multiple training runs assumed by Bayesian optimization may not be realistic despite DPL's improved efficiency compared to the baseline. Even though the authors included some transformer experiments, I still think their transformers are considerably smaller than recent models. This question is not specifically about DPL, but more about BO-based HPO. Any insights would be much appreciated. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Authors haven't clearly discussed the limitations of their work. Having one or two sentences on the limitations would be helpful for people who are interested in this direction. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the thoughtful review. We provide the following clarifications on the questions raised by the reviewer. - **Regarding “the abstract of this paper is too simple, and I wasn't able to grasp the general direction or the concept of the paper by reading them.”:** We will improve the abstract in two ways for the camera-ready version by better describing: i) the problem (describing the multi-fidelity HPO approach with learning curves) we are addressing and ii) the proposed method (introducing a probabilistic scaling law performance predictor with a dynamic acquisition function for Bayesian optimization). If the reviewer has other suggestions we would be happy to incorporate them. - **Regarding “I believe moving the algorithm box from Appendix to the main text would help readers to better understand the method”:** We agree with the reviewer’s suggestion and will move the algorithm to the main paper for the camera-ready version. - **Regarding the “applicability of DPL to different ML techniques such as random forest or XGBoost”:** The reviewer is correct. It would be possible to apply DPL for ML techniques such as random forest or XGBoost, by considering the number of ensemble models as a budget/fidelity. This is a very interesting future work that we will add to a new section “Limitations and Future Work”. - **Regarding the "the applicability of the DPL method in training recent large models":** Searching directly on a model with billions of parameters might be indeed challenging. However, in practice, we can tune the hyperparameters on a smaller version and transfer it to the larger version. We would like to refer the reviewer to [1] who motivated a possible transfer theoretically and demonstrated empirically that models with **a few million parameters transfer to LLMs with billions of parameters**. Other AutoML works employ the same idea. 
Most cell-based NAS methods (for example the following early work [2]) train small networks and later scale them by increasing width and depth. Similarly, work on transformer search makes use of the same technique [3]. [1] Yang, Ge, et al. "Tuning large neural networks via zero-shot hyperparameter transfer." Advances in Neural Information Processing Systems 34 (2021): 17084-17097. [2] Pham, Hieu, et al. "Efficient Neural Architecture Search via Parameter Sharing." ICML 2018: 4092-4101. [3] So, David, et al. "The Evolved Transformer." ICML 2019: 5877-5886. If the reviewer is pleased with the clarifications and proposed changes we would appreciate a reflection of the discussion in the score. In case there are more questions, we are happy to answer them. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the rebuttal with clarifications. I maintain my score, voting for the acceptance. Good luck!
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for the thorough reviews and for helping us improve the quality of our work. Below, we summarize the main clarifications: - **Time performance and ecological insights (Reviewer 7buU):** Hypothesis 2 of Section 6 provides a comparison between all methods for the total time it took random search (a model-free method) to evaluate 20 hyperparameter configurations, where our method achieves better or same results compared to our competitors with less computational resources. - **Weaker results in the PD1 benchmark (Reviewer rbxf):** Hypothesis 2 of Section 6 provides insights on the lack of statistical significance in the PD1 benchmark. A detailed analysis is provided in Appendix F. - **Missing definition of the cost function and budget (Reviewer wp1o):** The experimental protocol of Section 5.3 provides information in detail on the cost function of experiments (learning curve step or time). It additionally provides information on the default evaluation metric in every benchmark. - **Explicit mention of the limitations:** We will add a new section to our work labeled “Limitations and Future Work”. We believe to have answered all of the questions raised by the reviewers and we welcome any new questions during the discussion period. Pdf: /pdf/4c33faf97254d888e90944e3975a49927906658b.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes deep power laws ensembles for hyperparameter optimization. More precisely, it is used as a surrogate for Bayesian Optimization (BO) to estimate the performance of a hyperparameter configuration leveraging ensembles of deep power law functions. Furthermore, it is combined with multi-fidelity optimization to estimate the performance for an upcoming budget, enabling incremental training that can be paused for individual configurations based on the performance estimate of the surrogate. Strengths: ### Originality and Quality The presented approach is interesting, as it combines promising directions of HPO: Multi-Fidelity and Bayesian Optimization based on power laws. It is original in the way that it is the first work (to the best of my knowledge) that leverages Deep Ensembles for power laws as surrogates for BO. Moreover, a simple but original multi-fidelity strategy is presented to dynamically adjust the budget a configuration is evaluated on, allowing incremental training of configurations by pausing and continuing the process dependent on the performance estimate of the surrogate. Overall, none of the parts of the presented approach is innovative on its own, but the combination thereof is. ### Clarity The paper is well structured and written in an understandable manner. The different components and their interaction could be clearly separated and emphasized more. ### Significance The presented method is evaluated on 3 state-of-the-art benchmarks, as well as compared to 7 state-of-the-art HPO tools. The evaluation of 4 well-selected hypotheses leads to the conclusion that the assumptions are accurate, the approach is working correctly, and the presented method outperforms competitors in most cases. Weaknesses: ### Originality The originality of this paper is partly hard to define, as related work or citations are missing at key aspects of this paper. 
Especially regarding the usage of power laws in the context of HPO, existing work is not investigated / cited [1, 2]. Furthermore, motivation, explanation, and limitation of multi-fidelity and power laws are mixed up (see e.g. l. 57-59). Moreover, the presented multi-fidelity strategy is not put into the context of existing strategies. Furthermore, the authors mix up gray-box HPO and multi-fidelity HPO (e.g. l. 95: “Gray-box (multi-fidelity)”). Multi-fidelity can be classified as gray-box, but not the other way around. In addition, the assumption that every learning curve can be described by a power law function should be supported by a reference or two. Overall, the work could be better embedded into existing work on learning curves, see [2].
### Clarity and Quality
The clarity and quality of the paper can be improved. On one hand, the paper should be self-contained, which mainly refers to a missing formal introduction and short background of power law functions. Related work and approach description are partially mixed or not suitably placed, e.g. exploiting a power law assumption with the presented method would have been expected to be explained within the description of the method and not as part of the related work on multi-fidelity HPO. On the other hand, the mathematical formulation can be improved. Details below:
1. l. 52-53: “[...] the budget is multiplied by the fraction of discarded hyperparameter configurations and the process continues until the maximum budget is reached.” This would reduce the budget with every step, but it should be increased to reach the maximum budget. Suggestion: “[...] budget is divided by the fraction [...]”.
2. l. 57-59: “However, the only assumption these methods make about the learning curve is that it will improve over time. In contrast, we fit surrogates that exploit a power law assumption on the curves.” There is already work leveraging power laws for performance prediction of learning curves [1, 2].
3. l. 79f / Eq. 1: Too much space between $\theta^*$ and the next bracket.
4. l. 82: The basic definition of $H$ should contain $N$, as it is defined later in Eq. 2 (l. 87f); otherwise multiple usages of $H$ are ambiguous.
5. l. 83: $\mathcal{A}$ is defined, and directly afterwards the normal $A$ is used. Furthermore, the set definition of all possible $\mathcal{A}$ is missing, which should be used for $\arg\min$ (in l. 87f).
6. l. 87f / Eq. 2: “$\lambda_i = \mathcal{A}$” is an unusual notation and more common in programming than math; “$\lambda_i =$” could be removed here to avoid confusion.
7. l. 87f / Eq. 2: $\Omega$ should be $\geq$ (greater than or equal to) the sum of the cost, as the budget should not be exceeded, but can be matched.
8. l. 91: $p$ is not defined, as well as the set definition of all possible $\phi$.
9. l. 138: What is “the loss”?
10. l. 201: SMAC is not necessarily an extension of HB, but only the multi-fidelity part of SMAC uses HB. This should be properly described.

Lastly, there is a grammar issue in l. 55: “elaborated” → “elaborated on”, and Figure 5 is misplaced as it belongs to the earlier section.

[1] Buratti, B., & Upfal, E. (2019). Ordalia: Deep Learning Hyperparameter Search via Generalization Error Bounds Extrapolation. In 2019 IEEE International Conference on Big Data (Big Data) (pp. 180-187). IEEE.
[2] Mohr, F., & van Rijn, J. (2022). Learning Curves for Decision Making in Supervised Machine Learning--A Survey. arXiv preprint arXiv:2201.12150.

Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: For easier reference during the rebuttal, the questions are enumerated below:
1. Do you agree or disagree with any of the remarks from section Weaknesses?
2. Do you agree or disagree with the suggested corrections of the mathematical formulation listed in the Weaknesses section?
3. l. 171-172: “HPO budget is defined as the maximum number of steps needed to fully evaluate 20 hyperparameter configurations.” How about the overall CPU hours needed to execute the experiments with the different approaches? Do they differ? Are there any additional ecological insights, e.g. from a Green AutoML perspective [3], such as whether one might save wall-clock time compared to others? Does this relate to performance improvement?
4. l. 205-206: The description of the used hardware is incomplete as memory is missing.
5. I am wondering why there is no bold conclusion for Hypothesis 4?

Final remark regarding the overall rating: I am strongly convinced that the weaknesses can be easily addressed during the rebuttal phase. If the authors do so, I am more than happy to increase my score to an accept, as I think that the paper is really good besides the weaknesses above.

[3] Tornede, T., Tornede, A., Hanselle, J., Mohr, F., Wever, M., & Hüllermeier, E. (2023). Towards Green Automated Machine Learning: Status Quo and Future Directions. Journal of Artificial Intelligence Research, 77, 427-457.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are not explicitly given in the paper, if I have not overlooked something.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the thoughtful and detailed review of our work. We provide the following clarifications on the questions raised by the reviewer.
- **Regarding the remarks from the section Weaknesses**:
  - **In terms of related work:** We will cite both [1] and [2] in our related work. Additionally, in the case of [1], we will highlight the difference in terms of proposing a conditioned probabilistic power law surrogate that needs no observations per LC in order to estimate the performance. The ability to estimate the performance of unobserved configurations through probabilistic surrogates is essential for Bayesian optimization, therefore making our method the first to propose power law surrogates for uncertainty-driven HPO.
  - **Regarding “motivation, explanation, and limitation of multi-fidelity and power laws are mixed up (see e.g. l. 57-59)”:** We understand the reviewer’s concern; however, we believe the related work is structured in a consistent manner. At the end of every related work paragraph, we have a sentence that delineates the novelty of our paper from the prior work of that paragraph. In terms of multi-fidelity HPO, our novelty is in proposing a novel surrogate with the power law assumptions. This is the reason why the sentence refers to both multi-fidelity HPO (problem definition) and power laws (novelty).
  - **Regarding “the presented multi-fidelity strategy is not put into context of existing strategies” and “Overall, the work could be better embedded into existing work on learning curves, see [2].”:** Our work is positioned in the context of well-established multi-fidelity HPO (problem definition) and Bayesian optimization (BO, the strategy for solving the problem). Given that our novel power-law surrogate and acquisition are very standard BO components, we assess the method to be positioned within a well-established BO strategy.
Additionally, the surrogate function with power laws is connected to existing work on **learning curves (Line 60)**. Furthermore, we cite multiple papers in the learning curves prediction paragraph, the majority of which are cited by [2]. However, we agree with the reviewer in further extending/strengthening the related work with the aforementioned works [1]-[2]. - **Regarding “mix up gray-box HPO and multi-fidelity HPO (e.g. l. 95: “Gray-box (multi-fidelity)”)”:** We agree with the reviewer. We will make it clear in our work that we address multi-fidelity HPO, and specify that it is a sub-problem of gray-box HPO as the reviewer suggests. - **Regarding the "suggested corrections of the mathematical formulation":** We agree with the suggested changes on the formalism from the reviewer and we will update the camera-ready accordingly. - **Regarding the "time performance and ecological insights":** The reviewer is correct in his/her understanding that the budget for the wallclock experiments is the runtime equivalent of fully evaluating 20 randomly selected configurations. This budget limit is set as a practical compromise considering our available computational resources. All methods, therefore, need to search for an incumbent (best per method) configuration within this budget. For instance, if 20 randomly selected hp configurations are evaluated for a total of say 100 hours, then we set the HPO budget for all methods to 100 hours. Within this budget, methods are free to partially or fully evaluate as many configurations as they decide within their search mechanisms. The best model's performance discovered by each method within the budget is used to compute the comparative metrics; namely regret and ranks. Regarding the wallclock HPO time, we point the reviewer to **Hypothesis 2 L283-289 (Figure 4)** where we fairly compare all methods over time [surrogate fit (if it is a model-based method) + hyperparameter evaluation]. 
**The setup for the experiment is described in Section 5 L159-166**. Summarized, the time of every algorithm is normalized by the random search time and the total time shown in Figure 4 is the time it took random search to perform 20 HPO trials. Regarding ecological insights, our method uses fewer computational resources to achieve the same/better performance compared to other methods. In the case of LCBench, our method matches the final performance of random search in approximately 10% of the total time and the final performance of the closest competitor (BOHB) in 20% of the total time. While for PD1, our method matches the final performance of random search in 30% of the total time and matches the performance of DragonFly the closest competitor in 80% of the total time. - **Regarding "the memory description of the used hardware":** We would like to thank the reviewer for pointing out the missing information. The total memory of every node is 120GB, and every experiment is limited to 2 cores which offer 12GB. We are going to modify Section 5 accordingly for the camera-ready version of our paper since the rebuttal phase does not allow modifications to the submitted manuscript. - **Regarding "the missing bold conclusion for Hypothesis 4":** We agree with the reviewer, and we will add a bold conclusion for Hyp. 4 along the lines of "The results validate Hypothesis 4 and confirm that DPL is an efficient HPO technique for tuning the hyperparameters of large language models when the HPO is conducted using smaller transformer model sizes." If the reviewer is satisfied with the clarifications and proposed changes we would appreciate a reflection of the discussion to the score. In case there are more questions, we are happy to answer them. --- Rebuttal Comment 1.1: Title: Score Increased Comment: Thank you very much for the detailed response! Since the authors answered all of my questions and will adjust for the remaining points, I am more than happy to increase my score.
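As an illustrative aside on the saturating power law assumption discussed in this thread, here is a toy sketch. Everything in it is an assumption for illustration: the functional form f(b) = a - c * b^(-alpha), the grid-search fitting routine, and all names are hypothetical, and this is not the paper's DPL ensemble surrogate.

```python
# Toy sketch (illustrative, not the paper's method): fit a saturating
# power law f(b) = a - c * b**(-alpha) to a learning curve by
# grid-searching alpha and solving (a, c) via simple linear regression.

def fit_power_law(budgets, scores, alphas=None):
    if alphas is None:
        alphas = [0.1 + 0.05 * i for i in range(39)]  # grid over 0.1 .. 2.0
    n = len(budgets)
    best = None  # (sse, a, c, alpha)
    for alpha in alphas:
        xs = [b ** (-alpha) for b in budgets]
        mx = sum(xs) / n
        my = sum(scores) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, scores))
        slope = sxy / sxx          # model: score ~ a + slope * b**(-alpha)
        a = my - slope * mx
        sse = sum((a + slope * x - y) ** 2 for x, y in zip(xs, scores))
        if best is None or sse < best[0]:
            best = (sse, a, -slope, alpha)  # c = -slope
    return best[1], best[2], best[3]

# Synthetic learning curve: validation accuracy saturating towards 0.9.
budgets = [float(b) for b in range(1, 31)]
scores = [0.9 - 0.5 * b ** (-0.7) for b in budgets]
a, c, alpha = fit_power_law(budgets, scores)
# Extrapolate to a larger budget, as a multi-fidelity surrogate would.
pred_100 = a - c * 100.0 ** (-alpha)
```

On this noise-free curve the fit recovers the generating parameters, and the extrapolated value at budget 100 illustrates how a power-law surrogate can estimate performance at a fidelity it has never observed.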
Hierarchical Semi-Implicit Variational Inference with Application to Diffusion Model Acceleration
Accept (poster)
Summary: The paper introduces hierarchical semi-implicit variational inference (HSIVI) as an extension of semi-implicit variational inference (SIVI). HSIVI incorporates interpolating distributions between the prior and target data to train conditional layers progressively, resulting in accelerated sampling in diffusion models.
Strengths: The paper is well-structured and easy to follow.
Weaknesses:
- The novelty of HSIVI compared to related work is not clearly demonstrated. It is recommended to include more related works and comparisons: cascaded diffusion models [1], diffusion Schrödinger bridge [2], flow matching [3], or rectified flow [4], etc.
- The experiment section is relatively weak. Comparisons could strengthen the evaluation (the aforementioned references and more [5], [6], etc.).

[1] Ho, J., Saharia, C., Chan, W., Fleet, D.J., Norouzi, M. and Salimans, T., 2022. Cascaded diffusion models for high fidelity image generation. The Journal of Machine Learning Research, 23(1), pp.2249-2281.
[2] De Bortoli, V., Thornton, J., Heng, J. and Doucet, A., 2021. Diffusion Schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems, 34, pp.17695-17709.
[3] Lipman, Y., Chen, R.T., Ben-Hamu, H., Nickel, M. and Le, M., 2022. Flow matching for generative modeling. arXiv preprint arXiv:2210.02747.
[4] Liu, X., Gong, C. and Liu, Q., 2022. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003.
[5] Xiao, Z., Kreis, K. and Vahdat, A., 2021. Tackling the generative learning trilemma with denoising diffusion gans. arXiv preprint arXiv:2112.07804.
[6] Zheng, H., Nie, W., Vahdat, A., Azizzadenesheli, K. and Anandkumar, A., 2022. Fast sampling of diffusion models via operator learning. arXiv preprint arXiv:2211.13449.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: How does the proposed optimization for the score function in HSIVI outperform or differ from conventional implicit score matching or denoising score matching [7]?
[7] Vincent, P., 2011. A connection between score matching and denoising autoencoders. Neural computation, 23(7), pp.1661-1674.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 1 poor
Limitations: This work only deals with the geometric interpolation of distributions without providing justification. Exploring the impact of the choice of interpolation on generation would enhance the quality of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Here are our responses to your concerns.

Q1: The novelty of HSIVI compared to related work is not clearly demonstrated. It is recommended to include more related works and comparison: cascaded diffusion models [1], diffusion schrodinger bridge [2], flow matching [3], or rectified flow [4], etc.

A1: First, we'd like to clarify the novelty of HSIVI compared to related works on diffusion model acceleration. (i) HSIVI can be trained without data. As a variational approach, HSIVI uses flexible hierarchical semi-implicit distributions and introduces a sequence of auxiliary distributions for progressive training. For diffusion models, the pre-trained score networks serve as a natural sequence of auxiliary distributions (i.e., diffusion bridge, see Example 2) for HSIVI. (ii) Note that most of the current approaches for accelerating diffusion models are ODE-based, which could limit the diversity of generated samples. The HSIVI approach differs from these acceleration techniques by directly accelerating the SDE. We show in Figure 6 that HSIVI can generate diverse samples even from the same starting state. Thanks for your recommendation! See below for a comparison of HSIVI with these related works.
- Cascaded diffusion model [1] comprises a cascaded pipeline of diffusion models which generate high-resolution samples by successive upsampling from low-resolution samples. Our HSIVI is clearly distinct from [1] in that we aim at accelerating diffusion sampling by assuming a diffusion bridge between the Gaussian prior and the data distribution.
- Diffusion Schrödinger bridge (DSB) [2] reformulates generative modeling as a Schrödinger bridge problem in finite time and proposes an approximate IPF algorithm that requires recursively solving the half-bridge problem that matches the joint path distributions, as opposed to HSIVI that fits the marginal distributions.
- Flow matching [3] and rectified flow [4] learn ODE-based generative models by regressing the vector fields of fixed conditional probability paths (which could be made straight by connecting the samples from the prior and target distributions and constructing a deterministic coupling). HSIVI differs from them in that HSIVI uses pre-trained score networks (and therefore is data-free) and directly accelerates the SDE.
- Denoising Diffusion GANs [5] proposes using GANs to model each sampling step and optimizing them through an adversarial loss with data. In contrast, we assume an explicit form (e.g., Gaussian) of the conditional layers which is optimized via a VI approach without data.
- [6] proposes a neural operator, which learns the trajectories of the probability flow ODE, and allows for high-accuracy simulation in a few steps. This idea can be thought of as a data-driven distillation for the ODE. In contrast, HSIVI originates in a variational inference perspective which is trained without data and directly accelerates the SDE.

Q2: The experiment section is relatively weak. Comparisons could strengthen the evaluation (aforementioned references and more [5], [6], etc.)

A2: We compare HSIVI-SM with more baselines on CIFAR10 (32x32). We want to clarify that as a data-free method, HSIVI-SM inherently faces the difficulty of lacking true samples. Also, due to the extra stochasticity, compressing an SDE is generally more challenging than compressing an ODE.

Table: Sample quality measured by FID on CIFAR10 (32x32).

|Model|5|10|15|
|----|----|----|----|
|DDPM|320.16|278.65|198.0|
|FastDPM|67.64|9.85|6.1|
|Analytic-DDPM|93.16|34.54|20.0|
|Analytic-DDIM|51.86|14.08|8.6|
|DDIM|41.53|13.73|8.7|
|DPM-solver-fast|329.13|10.89|4.67|
|DiffFlow [7]|28.31|22.46|N/A|
|HSIVI-SM|**6.27**|**4.31**|**4.17**|

Table: Sample quality measured by FID on CIFAR10 (32x32) for the other results with different architectures.

|Model|FID|
|----|----|
|FM [3]|6.35 (142 NFE)|
|DDGAN [5]|3.75 (4 NFE)|
|2-Rectified Flow [4]|3.36 (110 NFE)|
|2-Rectified Flow [4]|12.21 (1 NFE)|
|2-Rectified Flow (+Distill) [4]|4.85 (1 NFE)|

[7] Diffusion normalizing flow. NeurIPS 2021.

Q3: How does the proposed optimization for score function ... denoising score matching?

A3: As a variational approach, HSIVI, or more specifically, HSIVI-SM that uses a score matching objective (i.e., Fisher divergence), assumes a target score function instead of estimating it from the data. Therefore, in HSIVI-SM, the score function is fixed, not optimized. However, the semi-implicit variational posterior does not have a tractable score function. To deal with this, HSIVI-SM rewrites the Fisher divergence as the maximum of an inner optimization problem. This allows us to take advantage of the hierarchical structure of semi-implicit distributions and use a trick similar to denoising score matching to obtain a tractable training objective. This technique is borrowed from the work of SIVI-SM (https://openreview.net/forum?id=sd90a2ytrt). We want to emphasize that the improvement in sampling efficiency of HSIVI over denoising diffusion models is not due to the way score matching is done, but to the expressiveness of semi-implicit distributions and a training procedure that directly matches the marginal distributions.

Q4: This work only deals with the geometric ...

A4: In addition to the geometric interpolation (Example 1), we also introduced an alternative interpolation method called the diffusion bridge (Example 2) which is useful for diffusion models. We also give a comparison of these interpolations in Figure 2 (geometric interpolation results) and Figure 7 in Appendix D.1 (diffusion bridge results) on the Gaussian mixture model. Exploring the impact of the choice of interpolation is interesting.
However, for diffusion models, this would also be a bit challenging as it relates to how one constructs the forward process that allows efficient score estimation. We thank the reviewer for your suggestion, and we will leave a more thorough investigation for future work. --- Rebuttal Comment 1.1: Comment: Thank you for addressing all the concerns raised by reviewers. However, due to my limited knowledge, I’m finding it challenging to fully comprehend how HSIVI algorithm works, given the details provided in the paper. Given my current understanding, I find it difficult to distinguish the contributions of this work from those of other generative models. Considering these factors, I have made the decision to lower my confidence score to ensure a fair decision by the ACs.
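To make the geometric interpolation discussed in A4 concrete, here is a minimal 1-D sketch. It is illustrative only: the two Gaussian endpoint distributions, the grid, and all names are assumptions for this toy, not the paper's setup or implementation.

```python
import math

def log_gauss(x, mu, sigma):
    # Log-density of N(mu, sigma^2).
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def geo_logpdf(x, beta):
    # Unnormalized log-density of the geometric interpolation
    # log p_beta(x) = (1 - beta) * log p0(x) + beta * log p1(x),
    # here between a prior N(0, 1) and a target N(3, 0.5**2).
    return (1 - beta) * log_gauss(x, 0.0, 1.0) + beta * log_gauss(x, 3.0, 0.5)

# Track how the mode of p_beta sweeps from the prior to the target.
grid = [-5.0 + 0.005 * i for i in range(2201)]  # covers [-5, 6]
modes = []
for beta in (0.0, 0.5, 1.0):
    vals = [geo_logpdf(x, beta) for x in grid]
    modes.append(grid[vals.index(max(vals))])
# For two Gaussians the geometric interpolation is again Gaussian with
# precision (1 - beta) / 1 + beta / 0.25 and a precision-weighted mean,
# so the modes move monotonically from 0.0 through 2.4 to 3.0.
```

The point of the sketch is that the intermediate distributions form a smooth bridge between prior and target, which is what allows conditional layers to be trained progressively against a sequence of auxiliary targets.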
Summary: Semi-implicit variational inference increases the expressiveness of variational posteriors but introduces intractabilities to their inference. This work adds to the semi-implicit variational inference (SIVI) work of Yu and Zhang (https://openreview.net/forum?id=sd90a2ytrt) which uses an alternative objective (the Fisher divergence) that, combined with the hierarchical nature of semi-implicit variational distributions, can be approximated as the solution of a mini-max optimization problem. This work takes inspiration from simulated annealing and denoising-diffusion models by proposing to expand the single-layer semi-implicit variational family into multiple-layer variants by specifying hierarchical auxiliary distributions to guide the semi-implicit variational distribution towards the target distribution. The authors denote this method Hierarchical Semi-Implicit Variational Inference (HSIVI). The auxiliary distributions are specified either by assuming the marginal densities of a geometric interpolation between a target and base distribution are available analytically or by using pre-trained scores from a diffusion bridge. The main contribution to this paper is stated in the author's claim that "when used for diffusion model acceleration, we show that HSIVI can produce high quality samples comparable to or better than the existing fast diffusion model based samplers with small number of function evaluations on various datasets." This claim is tested empirically on the CIFAR-10 (32x32) and CelebA (64x64) datasets with results showing comparable or better FID scores to existing fast diffusion model samplers with few (between 5 and 15) function evaluations. As well as the main contribution, an algorithm for training is provided by noting that the conditional layers can be trained sequentially until convergence by exploiting the hierarchical structure of the semi-implicit distribution. 
Also, an efficient parameterization of the neural networks is provided to make the algorithm computationally feasible. Strengths: Quality The work is a novel combination of well-known, peer-reviewed techniques including simulated annealing and SIVI. It is clear how this work differs from the work of Yu and Zhang (https://openreview.net/forum?id=sd90a2ytrt) using the hierarchical semi-implicit variational posteriors, and the value of this contribution is demonstrated on a multi-modal problem in Figure 2. The methods used seem to work well judging by the experimental results. Clarity The submission is well-organized and clearly written. Weaknesses: Originality There is limited technical novelty in this paper as the technical content of the work is almost entirely covered by the work of Yu and Zhang (https://openreview.net/forum?id=sd90a2ytrt) that rewrites the Fisher divergence leading to a form that does not require computing the score of the hierarchical variational posterior. The paper extends this by sequentially score matching to noised versions of the target distribution, which have been obtained analytically or approximated from pretrained denoising diffusion models. The main novelty in this work is instead an experimental one, showing that HSIVI can produce high quality samples comparable to or better than the existing fast diffusion model based samplers with a small number of function evaluations on two datasets by comparing FID scores (CIFAR-10 (32x32) and CelebA (64x64)) and one dataset by comparing samples visually (MNIST). The experiments were made computationally feasible by a trick to parameterize the neural networks that is detailed in Proposition 1. Quality of results Relatively simple experiments are provided that on one hand demonstrate the contribution well, but on the other hand do not cover a very diverse range of cases.
No empirical failure case is provided for a complicated distribution that requires many function evaluations to sample from, which may provide an interesting avenue for future work. On the reproducibility of the results, whilst the experimental setups are well detailed, no code is provided with the paper, nor indication it will become available upon publication. The reviewer would like to see that the code is made available since this is crucial for reproducibility. Furthermore, there may not be enough information to replicate the experiments. For example, as far as I can see, there is no discussion on the parameterization/values of the positive weighting function, beta(t) for joint HSIVI-SM training on CIFAR-10 and MNIST, which makes it impossible to reproduce without any code provided. Significance The work's significance would be improved if it addressed why we should expect improvements in sampling efficiency over denoising-diffusion sampling and where these improvements can be derived from (the particular form of variational family, or something else?). Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: 1. It would be interesting if the authors could demonstrate a failure case for their hierarchical variational posterior approximating the denoising-diffusion model, what properties of the score in the backwards process (equation 9) would cause the proposed variational posterior to fail with a small number of function evaluations? 2. No interpretation of the neural network f_t(x_t) is given, apart from that it provides "guidance". Could the authors please clarify what is being guided, and to what? 3. Comparing the proposed sampling algorithm to denoising-diffusion: the sampling algorithm for denoising-diffusion uses a reverse-time Markov chain that has specified isotropic variance that appears due to the assumed forward-time transition densities. 
Whereas the proposed HSIVI sampling algorithm in this work is a Markov chain with transition densities that are the conditional variational posterior distributions (called the conditional layer in this work). The difference between this Markov chain and the denoising-diffusion one is the trainable non-isotropic variance of the conditional layer q_t(\cdot|x_{t+1}; \phi). If the conditional layer is instead modelled as a Gaussian distribution with mean and an isotropic scale-Identity matrix, then do you expect that there is no improvement over denoising-diffusion with the same number of steps? It would be good to see an experiment that showed the trade-off between sample quality and number of function evaluations using this simpler variational family, which is on par with the denoising-diffusion sampling. Typos: Fisher divergence is not symmetric in its arguments and has been incorrectly written with arguments transposed (the expectation is taken over the distribution of the first argument). See for example how the Fisher divergence is written in the papers that this work has cited https://arxiv.org/pdf/1602.03253.pdf or https://arxiv.org/pdf/1810.03545.pdf Lines 510, 511, 512: there are typos in the norm regularizing the f network (written as f_{t}(x; \psi_t) and should be f_{t}(x_{t}; \psi_t)) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: The limitation discussion picks up on the important points, but it would be good to know if higher dimensional problems were attempted, or problems with more parameters (such as the "huge VP deep continuous-time model (Song et al., 2020b) that has more channels and layers"), and did the authors run into the memory issues that were discussed in Appendix F?
An ablation of the variational family for the conditional layers (as described in question 3) comparing directly to the denoising-diffusion sampling remains unexplored, and would be useful to see. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and valuable feedback. We will address your concerns in the following aspects:
### Weaknesses
#### Originality
Answer: First, HSIVI not only extends SIVI by allowing multiple conditional layers, which greatly enhances the flexibility, but also introduces a sequence of auxiliary distributions for progressive training that further improves stability. Note that this is only possible in the context of semi-implicit variational inference, where the mixing layer is allowed to be implicit, and this sets HSIVI apart from previous methods like hierarchical variational models [1]. Moreover, the contribution of using HSIVI for diffusion models is more than just an experimental one; there are also significant methodological contributions. Note that most of the current approaches for accelerating diffusion models are ODE-based, which could limit the diversity of generated samples. The HSIVI approach differs from these acceleration techniques by directly accelerating the SDE. We show in Figure 6 that HSIVI can generate diverse samples even if the starting state is the same.
[1] Hierarchical variational models. in ICML 2016.
#### Quality of results
Answer: Thank you for your suggestions. We provide a failure case in Figure 2 of the rebuttal PDF, demonstrating that the HSIVI-SM algorithm fails when the layer number $T$ is small (the distances of successive auxiliary distributions are large) on a checkerboard distribution. In fact, the score function on the checkerboard target is sharp on the boundaries but vanishes elsewhere. Therefore, fitting this target distribution is somewhat challenging. Regarding reproducibility, the parameterization of beta(t) and other details can be found in Appendix E.2.2. We have also sent an anonymous link to the code to the AC.
#### Significance
Answer: The improvement in sampling efficiency over denoising-diffusion sampling comes from the expressiveness of semi-implicit variational families.
More specifically, we have shown (at least experimentally) that using the same neural network architecture as the score nets, the semi-implicit variational distribution can be used to compress many steps of SDE simulations in DDPM by fitting toward the score functions at the corresponding times (i.e. the score functions in the diffusion bridge in Example 2). Another point of view is from the training objective. Note that DDPM matches the joint distributions by minimizing $KL(p(x_{0:T−1})|q(x_{0:T−1}))$. When $T$ is small, the variational distribution $q(x_{0:T−1})$ may be insufficient to approximate $p(x_{0:T−1})$ well enough, leading to degraded marginal distribution approximations of $q_0(x_0)$ to $p_0(x_0)$. In contrast, HSIVI matches the marginal distributions $q_t(x_t)$ and $p_t(x_t)$ and hence would ensure a better fit for $p\_0(x\_0)$ especially when $T$ is small. ### Questions Q1: It would be interesting if the authors could demonstrate a failure case ... A1: Please refer to the answer of Weakness-Quality above. Q2: No interpretation of the neural network f_t(x_t) is given ... A2: The role of $f_t(x_t)$ is to approximate $\nabla \log p_t(x_t) - \nabla \log q_t(x_t)$, thereby identifying regions where the current variational distribution $q_t(x_t)$ fits the target marginal distribution $p_t(x_t)$ insufficiently. We visualized the training dynamics of $f_0(x_0)$ on the checkerboard target in Figure 3 of the global rebuttal PDF. Q3: It would be good to see ... using this simpler variational family, which is on par with the denoising-diffusion sampling. A3: We train HSIVI-SM with isotropic conditional layers on par with DDPM on CIFAR-10 (see the following table). Table: Sampled quality measured by FID on CIFAR10 (32x32). 
|NFE\Model|DDPM|DDIM|HSIVI-SM (isotropic)|HSIVI-SM (non-isotropic)| |----|----|----|----|----| |5|320.16|41.53|7.33|**6.27**| |10|278.65|13.73|4.78|**4.31**| |15|198.00|8.78|4.46|**4.17**| The above results also support the findings discussed in Significance. The improvement of HSIVI-SM over DDPM stems not only from a more expressive variational distribution but also from the direct matching of the marginal distributions. Q4: Typos A4: Thanks for catching the typos. We will correct them in our revision. ### Limitations Q5: It would be good to know if higher dimensional ... and did the authors run into memory issues that were discussed in Appendix F? A5: We have implemented HSIVI-SM using the score function in [1] on ImageNet64, which demonstrates that HSIVI can also handle high-dimensional problems with large models. Table: Sample quality measured by FID on ImageNet (64x64). All these methods employ the same UNet in [1] with 115.47M parameters. |Model\NFE|5|10|15| |----|----|----|----| |DDPM|402.68|358.80|284.00| |Analytic-DDPM|N/A|60.65|45.98| |Analytic-DDIM|N/A|70.62|41.56| |DDIM|147.03|42.31|24.85| |DPM-Solver-fast|402.43|28.96|20.03| |**HSIVI-SM (ours)**|**40.43**|**17.67**|**15.49**| In fact, we also find that HSIVI can benefit from more accurate score networks (larger models). The score network on CelebA (64x64) that we used in the main text is relatively small (see the footnote on page 8). We have used a larger UNet in [2] and obtained better results (see the table below). Table: Sample quality measured by FID on CelebA (64x64). |Model\NFE|5|10|15| |----|----|----|----| |HSIVI-SM (UNet with 38.72M parameters)|8.29|4.95|4.66| |HSIVI-SM (UNet with 78.66M parameters)|**6.22**|**3.09**|**2.23**| In fact, the memory usage of HSIVI-SM during training is about three times that of the baseline methods. We did not run into memory issues, as the memory usage is still in an acceptable range. More details are presented in our released code. 
[1] Improved denoising diffusion probabilistic models. in ICML 2021.\ [2] Denoising diffusion probabilistic models. in NeurIPS 2020. Q6: An ablation of the variational family for the conditional layers ... to see. A6: We have implemented HSIVI with isotropic conditional layers (see A3 above).
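To illustrate the role of $f_t$ described in A2, here is a 1-D Gaussian toy check (ours, not the paper's neural estimator): the optimal $f$ is the score difference, which flags where $q$ misfits $p$ and vanishes exactly when the two coincide.

```python
import numpy as np

def gaussian_score(x, mu, var):
    # Score of N(mu, var): d/dx log N(x; mu, var) = -(x - mu) / var
    return -(x - mu) / var

def f_optimal(x, mu_p, var_p, mu_q, var_q):
    # Optimal f(x) = grad log p(x) - grad log q(x): it highlights regions
    # where q under-fits p and is identically zero once q = p.
    return gaussian_score(x, mu_p, var_p) - gaussian_score(x, mu_q, var_q)

x = np.linspace(-3.0, 3.0, 7)
# A mismatched variational fit leaves a nonzero signal ...
assert np.abs(f_optimal(x, 1.0, 1.0, 0.0, 2.0)).max() > 0
# ... which disappears exactly when q matches p.
assert np.allclose(f_optimal(x, 1.0, 1.0, 1.0, 1.0), 0.0)
```

In the paper's setting $f_t$ is a learned network approximating this score difference at each time $t$; the toy above only verifies the closed-form behavior it is trained to mimic.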
Summary: Authors propose a hierarchical semi-implicit variational inference framework by extending the existing SIVI. Authors showed that the proposed HSIVI, given pre-trained score networks, can be used to accelerate the sampling process of diffusion models with the score matching objective. The numerical results show enhanced performance in faithful modeling of complex distributions and diffusion model acceleration. Strengths: The paper is fairly well-written, clearly motivated and generally addresses an important problem. The extension of SIVI to a hierarchical model, although not very novel, is intuitive and makes sense. I haven't checked the proofs but the overall approach seems to hold up. Weaknesses: - Extending SIVI to a hierarchical model, by itself, is not very novel. However, the application to diffusion acceleration is interesting. - The experiments and presented numerical results could be improved. (check next section for more) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It is mentioned in the manuscript that HSIVI requires more memory. How much more memory (with respect to data dimension) does it need compared to baselines? - ImageNet (64x64) is a common benchmark dataset for diffusion models. It would be nice to have the results for that as well. - How does the proposed model perform compared to Diffusion Normalizing Flows [1], both in terms of target distribution approximation and sample generation? - [A suggestion] It would be interesting to extend HSIVI to graphical models (in a setup like [2]). [1] Diffusion Normalizing Flow, Qinsheng Zhang and Yongxin Chen, NeurIPS 2021. [2] Efficient Inference Amortization in Graphical Models using Structured Continuous Conditional Normalizing Flows, Christian Weilbach, 2019. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations to some extent have been discussed but more discussion (specially around memory usage) is needed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful feedback and suggestions! Here are our responses. Q1: Extending SIVI to a hierarchical model, by itself, is not very novel. However, the application to diffusion acceleration is interesting. A1: Thank you for acknowledging the novelty of the application to diffusion acceleration! We admit that HSIVI is a natural extension of the SIVI framework. However, we also believe that HSIVI's contributions go beyond SIVI in the following aspects: 1. HSIVI constructs more expressive mixing layers using multi-layer architectures. Standard SIVI has only one conditional layer. We show that this may not be enough for distributions with complicated structures (see Figures 2 and 3 in Section 5.1). By allowing multiple conditional layers, HSIVI further improves the flexibility of semi-implicit distributions. 2. HSIVI introduces an auxiliary bridge to alleviate the difficulty of fitting the target distribution. Training HSIVI, therefore, can be done in a sequential manner, where the intermediate semi-implicit distributions are pushed towards the target distribution layer after layer. Note that this is only possible in the context of semi-implicit variational inference, where the mixing layer is allowed to be implicit, and this sets HSIVI apart from previous methods like hierarchical variational models [4]. These auxiliary distributions also anchor the intermediate semi-implicit distributions $q_t(x_t), t=T-1, \ldots, 1$, making the training process more stable. Q2: The experiments and presented numerical results could be improved. (check next section for more) A2: We have new results on ImageNet (64x64) that compare favorably to other baselines (see A4 below). We will add these new results and other relevant baseline methods (e.g., Diffusion Normalizing Flows [1]) to the experiments in our revision. Q3: It is mentioned in the manuscript that HSIVI requires more memory. 
How much more memory (with respect to data dimension) does it need compared to baselines? A3: In addition to the score nets as in standard diffusion models, HSIVI requires the conditional layers $q_t(x_t|x_{t+1};\phi), t=T-1,\ldots, 0$ and $f_t(x_t;\psi), t=T-1,\ldots, 0$ for training, when parameter sharing is applied. These two additional parts each take the same amount of memory as the score nets. Therefore, the memory consumption of HSIVI during training is about three times that of the baseline methods. After training, both the score network and $f_t(x_t;\psi), t=T-1,\ldots, 0$ can be removed, and only the conditional layers $q_t(x_t|x_{t+1};\phi), t=T-1,\ldots, 0$ are needed for sampling from HSIVI. So during sampling, the memory consumption of HSIVI is about the same as that of the baseline methods. Q4: ImageNet (64x64) is a common benchmark dataset for diffusion models. It would be nice to have the results for that as well. A4: We conducted additional experiments on the ImageNet (64x64) dataset, which demonstrate that HSIVI also achieves significant acceleration of diffusion model sampling at 5, 10, and 15 steps. Table: Sample quality measured by FID(&#8595;) on ImageNet (64x64). All methods employ the same UNet in [3]. | Model\NFE | 5 | 10 | 15 | | ---- | ---- | ---- | ---- | | DDPM | 402.68 | 358.80 | 284.00 | | DDIM | 147.03 | 42.31 | 24.85 | | Analytic-DDPM | N/A | 60.65 | 45.98 | | Analytic-DDIM | N/A | 70.62 | 41.56 | | DPM-Solver-fast | 402.43 | 28.96 | 20.03 | | **HSIVI-SM (ours)** | **40.43** | **17.67** | **15.49** | Q5: How does the proposed model perform compared to Diffusion Normalizing Flows [1], both in terms of target distribution approximation and sample generation? A5: Thank you for mentioning this relevant work. Unlike standard diffusion models, Diffusion Normalizing Flows (DiffFlow) allow a learnable forward SDE as well and are trained by matching the forward and backward SDEs [1]. 
Compared to standard diffusion models, DiffFlow requires fewer discretization steps and thus has better sampling efficiency. In contrast, HSIVI is more like standard diffusion models that use a fixed forward process, while achieving better sampling efficiency via variational approaches using hierarchical semi-implicit distributions. Therefore, the training of HSIVI only requires a pre-trained score function and can be data-free. DiffFlow can be used for density estimation and sample generation. As HSIVI is not designed for density estimation, we compare HSIVI to DiffFlow in terms of sample generation on CIFAR-10 as follows: Table: Sample quality measured by FID(&#8595;) on CIFAR-10 (32x32). | NFE | DDPM| DDIM| DiffFlow | HSIVI-SM | |--------|------|------|------|------------| | 5 | 320.16 | 41.53 | 28.31 | **6.27** | | 10 | 278.65 |13.73 | 22.56 | **4.31** | We will add these results in our revision. Q6: [A suggestion] It would be interesting to extend HSIVI to graphical models (in a setup like [2]). A6: Thank you for the suggestion! Extending HSIVI to graphical models is indeed very interesting. We will read [2] carefully and think about possible extensions. Q7: The limitations to some extent have been discussed but more discussion (specially around memory usage) is needed. A7: We will add more discussion on memory usage in our revision; please see A3 for more details. [1] Zhang, Qinsheng, and Yongxin Chen. "Diffusion normalizing flow." in NeurIPS 2021.\ [2] Weilbach, Christian, et al. "Efficient inference amortization in graphical models using structured continuous conditional normalizing flows." in AABI 2019.\ [3] Nichol, Alexander Quinn, and Prafulla Dhariwal. "Improved denoising diffusion probabilistic models." in ICML 2021.\ [4] Ranganath, Rajesh, Dustin Tran, and David Blei. "Hierarchical variational models." in ICML 2016. 
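To make the multi-layer construction in A1 concrete, here is a minimal NumPy sketch (ours, not from the paper; `cond_layers` is a toy stand-in for the learned conditional networks) of sampling through a hierarchy of Gaussian conditional layers: only samples are needed, so each intermediate marginal may be implicit.

```python
import numpy as np

def sample_hsivi(cond_layers, dim, rng):
    # Hierarchical semi-implicit sampling: draw x_T from the simple base
    # N(0, I), then push it through T stochastic (Gaussian) conditional
    # layers q_t(x_t | x_{t+1}); no tractable density of x_t is required.
    x = rng.standard_normal(dim)              # x_T ~ N(0, I)
    for mu_t, sigma_t in cond_layers:         # t = T-1, ..., 0
        x = mu_t(x) + sigma_t * rng.standard_normal(dim)
    return x

rng = np.random.default_rng(0)
# Three toy layers that progressively shift the mass toward mean 3
# (hypothetical stand-ins for the trained networks).
layers = [(lambda x: x + 1.0, 0.1)] * 3
samples = np.array([sample_hsivi(layers, 2, rng) for _ in range(2000)])
```

With these toy layers the final marginal concentrates around mean 3; in HSIVI the affine maps are replaced by learned networks and each layer is fit to an auxiliary distribution.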
--- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response addressing my concerns/questions. Having read (all) reviews/responses, I'm keeping my score as is. Looking forward to seeing the new results/discussions in the revised version.
Summary: The authors extend semi-implicit variational inference to have multiple layers of latent variables, vastly increasing the expressiveness of the model. A nice formulation of the asymptotic lower bound plus a training scheme is introduced as well. Strengths: This paper is an extension that on the surface seems straightforward. But the authors demonstrate that this is far from the case and introduce a novel formulation of the semi-implicit VI ELBO. The paper was also well written and easy to follow. The experiments section was great too. Weaknesses: My only qualm is the flow between the first half of the paper and the second half, where the focus is on diffusion models. Previously, the authors argued that training using a sequence of distributions would lead to better marginal distributions. But for speeding up sampling of diffusion models, it seems like training the joint distribution leads to impressive results. This opens up the question of whether the sequence of distributions is even needed at all. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I don't have any questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It seems like the main text is missing a limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and valuable feedback! Below are our answers to your comments: Q1: My only qualm is ... But for speeding up sampling of diffusion models, it seems like training the joint distribution leads to impressive results. This opens up the question of whether the sequence of distributions is even needed at all. A1: Thank you for the discussion on this issue! We want to clarify a potential confusion here regarding "joint training" and "training the joint distribution" in the diffusion model acceleration part. In our proposed HSIVI approach, only the marginal distributions are fitted, not the joint distributions (we have a related discussion in Section 3.2). The phrase "joint training" is introduced when we use parameter sharing to reduce memory usage, which is a common practice in diffusion models (e.g., score nets). Since the conditional layers are now parameterized with the same $\phi$, they are trained jointly, instead of being independently trained as in the sequential training case where each layer has its own parameters. Therefore, "joint training" indicates a training behavior, and it is the marginal distributions $p_t(x_t), t=0,\ldots, T$, given by a sequence of auxiliary distributions, that are approximated progressively with the hierarchical semi-implicit distributions. We see this marginal approximation property as one of the keys that allow HSIVI to generate high-quality samples with a small number of function evaluations (see the discussion paragraph in Section 3.2, i.e., lines 150-157, for more details). Q2: It seems like the main text is missing a limitations section. A2: Due to the space constraints, we have placed the limitations section in the appendix. We will move the limitations section to the main text in our revision if there is room.
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback. First, we'd like to address some common issues raised by the reviewers. ### Novelty/Originality As a semi-implicit variational inference method, HSIVI further enhances the expressiveness of semi-implicit variational posteriors by allowing multiple conditional layers. This makes HSIVI able to provide an accurate approximation of complicated distributions that would not be approximated well via SIVI, which uses a single conditional layer. Moreover, we also introduce a sequence of auxiliary distributions for progressive training that further improves the stability of the algorithm. For diffusion model acceleration, HSIVI-SM can produce high-quality samples that compare favorably to other baseline methods with small numbers of function evaluations. We want to emphasize that it is the expressiveness of semi-implicit distributions and the variational inference approach that progressively matches a sequence of auxiliary distributions (i.e., the diffusion bridge given by the learned score nets) that contribute to the improved sampling efficiency of HSIVI-SM. Although DDPM is originally derived via a variational approach, it matches the joint distributions instead of the marginal distributions. In HSIVI-SM, the marginal distributions of the backward models (i.e., the semi-implicit distributions $q_t(x_t), t=T-1,\ldots,0$) and the forward models $p_t(x_t), t=T-1,\ldots,0$ are directly matched via score matching, and we see this as one of the keys that allow HSIVI-SM to generate high-quality samples with a small number of function evaluations (see the discussion paragraph in Section 3.2, i.e., lines 150-157, for more details). Note that this also allows a direct compression of stochastic diffusion models, which is different from most of the current ODE-based acceleration methods. HSIVI-SM can also be trained without data, which would be useful when there are privacy concerns. 
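The joint-versus-marginal point above can be sanity-checked with a toy discrete example (ours, not from the paper): by the data-processing inequality, the KL divergence between marginals never exceeds the KL divergence between joints, so matching the marginals directly is the weaker but more targeted requirement, and two joints can be far apart while a marginal of interest already matches.

```python
import numpy as np

def kl(p, q):
    # KL divergence between two discrete distributions (flattened).
    p, q = np.asarray(p, float).ravel(), np.asarray(q, float).ravel()
    return float(np.sum(p * np.log(p / q)))

# Toy joints over a pair (x_0, x_1) of binary variables; rows index x_0.
p_joint = np.array([[0.30, 0.20], [0.25, 0.25]])
q_joint = np.array([[0.20, 0.30], [0.30, 0.20]])

kl_joint = kl(p_joint, q_joint)               # a DDPM-style joint criterion
kl_marg = kl(p_joint.sum(1), q_joint.sum(1))  # a marginal criterion for x_0
assert kl_marg <= kl_joint                    # data-processing inequality
```

Here the joints differ (positive joint KL) while the $x_0$ marginals already coincide (zero marginal KL), illustrating why a capacity-limited model can better serve $p_0(x_0)$ by optimizing the marginal objective directly.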
Overall, we think there are significant methodological innovations in HSIVI that make it a novel/original contribution to the community. ### More experiments We have provided additional results on CelebA (64x64) and ImageNet (64x64) with more powerful pre-trained score nets (bigger models with more parameters). These results show that our methods work well on common benchmark datasets for diffusion models and can benefit from more accurate score function estimation. Table: Sample quality measured by FID on CelebA (64x64). All methods without \* employ the same UNet in [1]. | Model\NFE | 5 | 10 | 15 | | ---- | ---- | ---- | ---- | | DDPM | 366.10 | 309.95 | 206.92 | | DDIM | 27.38 | 10.89 | 7.78 | | FastDPM | 27.63 | 15.44 | 12.05 | | Analytic-DDPM | 50.92 | 28.93 | 21.84 | | Analytic-DDIM | 29.40 | 15.74 | 12.25 | | DPM-Solver-fast | 355.96 | 6.76 | 2.98 | | **HSIVI-SM (ours), in the main text (smaller UNet)\*** | 8.29 | 4.95 | 4.66 | | **HSIVI-SM (ours)** | **6.22** | **3.09** | **2.23** | Table: Sample quality measured by FID on ImageNet (64x64). All methods employ the same UNet in [2]. | Model\NFE | 5 | 10 | 15 | | ---- | ---- | ---- | ---- | | DDPM | 402.68 | 358.80 | 284.00 | | DDIM | 147.03 | 42.31 | 24.85 | | Analytic-DDPM | N/A | 60.65 | 45.98 | | Analytic-DDIM | N/A | 70.62 | 41.56 | | DPM-Solver-fast | 402.43 | 28.96 | 20.03 | | **HSIVI-SM (ours)** | **40.43** | **17.67** | **15.49** | We will revise our manuscript in the following aspects: - We will add these additional results on CelebA (64x64) and ImageNet (64x64) to the experiments. - We will enhance the discussion of HSIVI's computational complexity and memory usage in comparison to the baseline methods. - We will add the other relevant baseline methods (e.g., Diffusion Normalizing Flows, Rectified Flow) to the experiments and the discussion. - We will add ablation studies to discuss: - The failure case of HSIVI when the number of layers is small. - The effect of the accuracy of the score net. 
- We will add more discussion on the detaching operation and its effect on the optimality of $\phi$ in Appendix C.3. [1] Ho, Jonathan, Ajay Jain, and Pieter Abbeel. "Denoising diffusion probabilistic models." in NeurIPS 2020.\ [2] Nichol, Alexander Quinn, and Prafulla Dhariwal. "Improved denoising diffusion probabilistic models." in ICML 2021. We hope our revision has adequately addressed the reviewers' questions and concerns, and look forward to reading any further comments. Pdf: /pdf/886ff13fbb44c81c1fdf95a2c866423bc5bb2319.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces a hierarchical semi-implicit variational inference that stacks multiple semi-implicit layers to construct a flexible generative distribution. The paper introduces a sequence of distributions that interpolate between the target and a base distribution. Each pair of intermediate inference and target distributions is matched by optimizing a SIVI bound. The method is applied to diffusion models for sampling acceleration. ## Post-rebuttal Thanks for the authors' response. Most of my questions have been addressed and I will maintain my current score. Strengths: - The idea of using semi-implicit distributions for the progressive approximation is interesting and intuitive. - The empirical results show HSIVI improves the distribution approximation and sample quality significantly on several tasks. - Though the modeling and training of HSIVI consist of many components, the paper explains the ideas concisely and clearly. Weaknesses: - The validity of the training procedure needs more concrete explanations or theoretical results - The empirical study needs more evidence for the improvement of acceleration. - Some closely related works need more discussions of similarity and improvement. Please see Questions for more details. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - To train HSIVI sequentially, the paper mentions, "Let the parameters ϕ_t in the t-th conditional layer be independent across different ts." But since q_t(x_t; ϕ_t) depends on x_{t-1}, which depends on ϕ_{t-1}, it is understandable that the graph detaching in the algorithm benefits computation efficiency. Does it have any downside for not training ϕs jointly? For example, will there be convergence issues, or will some ϕs overfit while others underfit? - In Appendix C.3., the author shows that the detaching operation keeps the optimality of \phi unchanged in the joint training. Does a similar conclusion apply to the sequential training in the main paper? 
- Table 1 shows that, with the same layers/steps, HSIVI achieves better approximation than DDPM and DDIM. Would the computation with one layer of HSIVI be more than one step of DDPM? If so, it might not be a fair comparison. Can the author discuss the computational complexity and provide wall clock time? - [1] and [2] design a hierarchical model using semi-implicit distributions, which seem to be closely related to HSIVI. The paper might need more detailed discussions of these papers. [1] Importance weighted hierarchical variational inference (NeurIPS 2019) [2] Structured Semi-Implicit Variational Inference (AABI 2019) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and helpful feedback! We address your concerns as follows. Q1: The validity of the training procedure needs more concrete explanations or theoretical results. A1: The training procedure is similar to SIVI-SM (https://openreview.net/forum?id=sd90a2ytrt), with the only difference being the use of multiple auxiliary distributions instead of the target distribution directly. Q2: The empirical study needs more evidence for the improvement of acceleration. A2: Thank you for your suggestion! We have provided additional results on CelebA (64x64) and ImageNet (64x64) with more powerful pre-trained score nets. These results show that our methods work well for common benchmark datasets for diffusion models and can benefit from a more accurate score function. For the CelebA (64x64) experiments here, we adopt **the identical UNet in [3] with 78.66M parameters, which is bigger than the one we used in the main text** with one more downsampling block and one more upsampling block. For ImageNet (64x64), we follow DPM-Solver, taking the same improved UNet in [4], which is larger than the one in [3] and has 115.47M parameters. Table: Sample quality measured by FID on CelebA (64x64). All methods without \* employ the same UNet in [3]. |Model\NFE|5|10|15| |----|----|----|----| |DDPM|366.10|309.95|206.92| |DDIM|27.38|10.89|7.78| |FastDPM|27.63|15.44|12.05| |Analytic-DDPM|50.92|28.93|21.84| |Analytic-DDIM|29.40|15.74|12.25| |DPM-Solver-fast|355.96|6.76|2.98| |**HSIVI-SM (ours), in the main text (smaller UNet)\***|8.29|4.95|4.66| |**HSIVI-SM (ours)**|**6.22**|**3.09**|**2.23**| Table: Sample quality measured by FID on ImageNet (64x64). All methods employ the same UNet in [4]. 
|Model\NFE|5|10|15| |----|----|----|----| |DDPM|402.68|358.80|284.00| |DDIM|147.03|42.31|24.85| |Analytic-DDPM|N/A|60.65|45.98| |Analytic-DDIM|N/A|70.62|41.56| |DPM-Solver-fast|402.43|28.96|20.03| |**HSIVI-SM (ours)**|**40.43**|**17.67**|**15.49**| [3] Denoising diffusion probabilistic models. NeurIPS 2020.\ [4] Improved denoising diffusion probabilistic models. ICML 2021. Q3: Some closely related works need more discussions of similarity and improvement. A3: We will add discussions on related works in our revision. Please see more details in A7. Q4: To train HSIVI sequentially ... Does it have any downside for not training $\phi$s jointly? For example, will there be convergence issues, or will some $\phi$s overfit while others underfit? A4: Thanks for this interesting question! In the current setting, each conditional layer is trained to push the fitted distribution from the previous layers toward the auxiliary distribution at the next time step, and then it is fixed afterward. The reason is that the trained $q_t(x_t; \phi_{\ge t})$ would provide a good approximation to the corresponding auxiliary distribution $p_t(x_t)$ and hence reduce the difficulty of training the next conditional layers (since the differences between successive auxiliary distributions are assumed to be small). Note that this approach does not require each $q_t(x_t; \phi_{\ge t})$ to be a perfect match of $p_t(x_t)$, as the conditional layer at time $t-1$ would automatically compensate for the error from previous time steps when fitting $p_{t-1}(x_{t-1})$ (see an example in Figure 7 in the appendix). Therefore, we do not expect convergence issues given enough capacity of the conditional layers and an appropriate $T$. 
On the other hand, when $\phi$s are trained jointly, there is a potential drawback that the previously fitted $q_t(x_t; \phi_{\ge t})$ would deviate from the corresponding auxiliary distribution, and this would make the training less stable as well (see Figure 1 in the rebuttal PDF). We will add a discussion to our revision. Q5: In Appendix C.3., the author shows that the detaching operation keeps the optimality ... Does a similar conclusion apply to the sequential training in the main paper? A5: Yes, this result holds true as well. In sequential training, we assume that the marginal distribution of the variational distribution is fitted to the target distribution $p_{t+1}$. This is used as a mixing layer during the training of $q_t(x_t|x_{t+1};\phi_t)$, where we apply a detaching operation to keep the parameters $\phi^\star_{\ge t+1}$ fixed. As a result, the optimality of the model is maintained until the final $t=0$. We will include these details in Appendix C.3. Q6: Would the computation with one layer of HSIVI be more than one step of DDPM? If so, it might not be a fair comparison. Can the author discuss the computational complexity and provide wall clock time? A6: The computation with one layer of HSIVI is about the same as one step of DDPM/DDIM as we used the same neural network architecture for the conditional layers in HSIVI and the score nets in diffusion models. A wall clock time comparison is provided in Figure 12 in the appendix. Q7: [1] and [2] design a hierarchical model using semi-implicit distributions, which seem to be closely related to HSIVI. The paper might need more detailed discussions of these papers. A7: Thank you for sharing these relevant papers. We will add more detailed discussions of these papers in our revised manuscript. (i) IWHVI [1] employs a reverse model as an importance distribution for the variational distribution and requires an explicit variational prior. 
Our proposed HSIVI inherits the advantage of SIVI that allows $q_t(x_t;\phi_{\geq t})$ to be implicit and does not require a reverse model. These properties of HSIVI improve its expressiveness and enable multi-layer extension. (ii) Structured SIVI [2] assumes an autoregressive form of variational distributions to factorize high-dimensional joint semi-implicit distribution into the product of low-dimensional conditional semi-implicit distributions. Our approach is distinct from structured SIVI because HSIVI does not factorize the data space but augments it with auxiliary bridges to accommodate the multimodal targets.
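As an illustration of the sequential scheme discussed in A4/A5 (a toy sketch with one shift parameter per layer, ours rather than the paper's algorithm), each conditional layer can be trained against its auxiliary target while samples from the already-trained upper layers are treated as fixed inputs, which plays the role of the detaching operation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma, n = 4, 0.1, 20_000
# Auxiliary bridge of means from the base (m[T]=0) to the target (m[0]=3).
m = [3.0 * (T - t) / T for t in range(T + 1)]

b = np.zeros(T)                      # shift parameter of each conditional layer
for t in reversed(range(T)):         # train layers one at a time, t = T-1..0
    # Samples from the already-trained upper layers are held fixed (the
    # "detach"): no gradient ever flows back into b[t+1:].
    x = rng.standard_normal(n)       # x_T ~ N(0, 1)
    for s in range(T - 1, t, -1):
        x = x + b[s] + sigma * rng.standard_normal(n)
    for _ in range(50):              # SGD on (E[x_t] - m_t)^2 w.r.t. b[t] only
        xt = x + b[t] + sigma * rng.standard_normal(n)
        b[t] -= xt.mean() - m[t]     # lr 0.5 on the gradient 2 * (mean - m_t)

# After sequential training, q_0 has mean close to the target m[0] = 3.
x = rng.standard_normal(n)
for t in reversed(range(T)):
    x = x + b[t] + sigma * rng.standard_normal(n)
assert abs(x.mean() - 3.0) < 0.05
```

Each layer only needs to compensate for the residual error of the layers above it, mirroring the argument in A4 that a perfect fit at every intermediate step is not required.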
Summary: This paper presents a new method called Hierarchical Semi-Implicit Variational Inference (HSIVI) that enhances the expressiveness of Semi-Implicit Variational Inference (SIVI) on complex target distributions. HSIVI works by applying SIVI to a hierarchy of latent variables, and using the variational distribution of the previous step as the implicit prior for the later step. The authors apply HSIVI for sampling from diffusion models, learned on a variety of synthetic and real-world datasets. Strengths: 1. Novelty: The paper presents a new method called Hierarchical Semi-Implicit Variational Inference (HSIVI) that enhances the expressiveness of Semi-Implicit Variational Inference (SIVI) on complicated target distributions. This is a novel approach that has not been explored before, to the best of my knowledge. 2. Effectiveness: The authors demonstrate the effectiveness of HSIVI on a variety of synthetic and real-world datasets, including accelerating the sampling process of diffusion models. The results show that HSIVI outperforms existing methods in terms of accuracy and efficiency. 3. Clarity: The paper provides a detailed explanation of the HSIVI method, including its mathematical formulation and implementation. The authors also provide clear and concise descriptions of the experiments and results. Weaknesses: The main weakness of this work is that the method seems fairly complicated to implement in a practical setting. The method is presented in a general way, with an algorithm box. However, later another objective function is presented with some parameter sharing. Then, in the larger-scale experiments, HSIVI-SM is trained by fine-tuning a larger model. Either the method is not extremely stable / usable, or there are too many details but those are necessary for the experiment to be set up properly? (in the latter case, perhaps one experiment should go to the supplements?). 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) Why isn't the performance of HSIVI-LB reported on the other experiments? (2) In terms of computational efficiency, it wasn't clear to me whether, if two compared methods had the same "T", their computational burdens were comparable. Are the computational complexities exactly the same for sampling from HSIVI-SM and SIVI, for equal "T"? (3) In principle, if HSIVI was used to train the model, could we reach a better data likelihood? Why are the authors restricting the application of HSIVI to sampling, and not learning the score function as well? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. We address your specific questions and comments below. Q1: The main weakness of this work is that the method seems fairly complicated to implement in a practical setting. The method is presented in a general way, with an algorithm box. However, later another objective function is presented with some parameter sharing. Then, in the larger-scale experiments, HSIVI-SM is trained by fine-tuning a larger model. Either the method is not extremely stable / usable, or there are too many details but those are necessary for the experiment to be set up properly? (in the latter case, perhaps one experiment should go to the supplements?). A1: Thanks for your question! First of all, we propose HSIVI as a general VI method. When used for diffusion model acceleration, we employed parameter sharing to reduce memory consumption so that HSIVI-SM will have the same memory usage as the other baseline methods (e.g., DDPM and DDIM). Moreover, parameter sharing also allows joint training of the conditional layers, which facilitates convergence, similar to parameter sharing of score nets in diffusion models. Note that the target data distributions in generative models are usually complicated. When $T$ is small, the distances between the auxiliary distributions $\{p_i(x)\}_{i=0}^{T-1}$ would be large, making it challenging to train the conditional layers starting from random initialization. As DDIM can produce reasonably good samples when $T$ is small, it serves as a natural initialization strategy for HSIVI. For the same reason, we used the trained $T=15$-layer HSIVI model to initialize the $T=5$-layer HSIVI model. These strategies work well in practice, and we see significant improvement of HSIVI over DDIM (Table 2 in the main text). More details of the experimental setup can be found in Appendix E.2.3. 
Note that these strategies are more like initialization strategies that take advantage of current fast diffusion-model-based samplers and are data-free, rather than fine-tuning strategies that often require data samples during training. We apologize for the confusion and will clarify this in our revision. Q2: Why isn't the performance of HSIVI-LB reported on the other experiments? A2: The reason is that only the score function is available in diffusion models, not the probability density function. This makes HSIVI-SM a natural choice, as it uses the score matching objective instead of the ELBO-related objectives. Q3: In terms of computational efficiency, it wasn't clear to me whether, if two compared methods had the same "T", their computational burdens were comparable. Are the computational complexities exactly the same for sampling from HSIVI-SM and SIVI, for equal "T"? A3: First, we'd like to clarify that SIVI uses a single conditional layer (i.e., $T=1$) and hence is much cheaper to sample from than HSIVI-SM, which uses multiple layers ($T>1$). For the methods that we discussed for diffusion models, the computational complexity would be similar if they had the same $T$. That is because we used the same neural network architecture for the conditional layers in HSIVI and the score nets in diffusion models. We compare the sampling times of different methods in Figure 12 in Appendix D.5. We will clarify this in our revision. Q4: In principle, if HSIVI was used to train the model, could we reach a better data likelihood? Why are the authors restricting the application of HSIVI to sampling, and not learning the score function as well? A4: Sorry for the confusion! Generally speaking, HSIVI is a variational inference method that assumes the target density is accessible (e.g., the density function up to a constant or the score function is available). When used for diffusion model acceleration, HSIVI-SM does not directly target the generative model.
Instead, it requires a sequence of auxiliary distributions that bridges between a simple distribution and the target distribution, which is available given the learned score functions of diffusion models (Example 2). Note that this also allows data-free training of HSIVI-SM to accelerate diffusion model sampling. Learning the score functions together with HSIVI is an interesting idea and would allow for a better forward process. However, it would also be more challenging, as the score functions required by HSIVI would then need to be learned together with the conditional layers of the hierarchical semi-implicit distributions. --- Rebuttal Comment 1.1: Title: Thank you for the answers Comment: I thank the authors for answering my questions!
Reducing Shape-Radiance Ambiguity in Radiance Fields with a Closed-Form Color Estimation Method
Accept (poster)
Summary: This paper uses a closed-form spherical harmonics (SH) color field to regularize the training of NeRF-based models. The SH coefficients of each 3D point have a closed-form solution given a set of observed viewing angles. This work further uses the transmittance from the density field to weight each observed angle. A residual estimation is proposed to reduce the bias due to non-uniform view-angle sampling from the training views. The closed-form color field is only used to regularize the optimization of the density field. Experiments show that the proposed regularization can improve quality and alleviate shape-radiance ambiguity. Strengths: Introducing the closed-form SH coefficient solver into NeRF's optimization is an interesting way to improve the quality of NeRF-based models. I'm interested to see if it is possible to have a closed-form optimal-PSNR color field in the future given a fixed geometry. Weaknesses: 1. The manuscript incorrectly uses the term "volumetric-based" to differentiate voxel-based methods from MLP-based methods (L76). Both of them are volumetric representations mapping spatial coordinates to the modality of interest. Please fix it using "implicit" and "explicit" representations instead. 2. Some baseline results are much worse than their official reports. The reported PSNR of Plenoxel on the NeRF synthetic dataset is 29.83, which is much lower than the officially reported 31.71 in Plenoxel's Table 2. The baseline DVGO on the Mic scene in Fig. 8 is much worse than the official results, where DVGO doesn't produce the floater artifact around the mic wire. I guess the reason is the background color (from solid white to solid black), which makes the shape-radiance ambiguity more severe on some scenes. It is necessary to have a discussion about the discrepancy between the implementations and results here and the original baselines. 3. The proposed closed-form color-field solution is point-wise, without considering the colors from the other points on a ray.
As a result, the solved color field may not be optimal with respect to the photometric loss. This may be one of the reasons why Plenoxel (which uses SH as well) has a worse PSNR when using the proposed closed-form solution. 4. Lack of theoretical backing for the bias reduction using the residual. The eliminated component (supp's Eq. 6) is itself estimated (supp's Eq. 10) and is biased due to the non-uniform sampling. It is not clear to me how the overall bias is reduced by using the residual. 5. The ablation study only uses a single scene (Fig. 4). More examples would make it more convincing. 6. The quantitative improvement is incremental, while the method needs much more training time (1 hour vs. 10 minutes) and a specialized CUDA implementation. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. I am still not quite convinced why Plenoxel's color field gets a much worse PSNR when using the proposed closed-form solver compared to the SGD solver (Fig. 5, L235-240). As the original Plenoxel uses SH as well, I expect the closed-form solver to have an even better or similar MSE (PSNR) under the same fixed density field. Are there any other factors like the degree of SH or the grid resolution affecting the results? More discussion is necessary. 2. Missing experiments on the SH degree. The only discussion is Fig. 9 showing failure cases on high-frequency view-dependent effects. What is the quality improvement and computational cost of using more or fewer SH degrees? 3. This work assumes the view sampling is uniform, so the Monte Carlo integrator (Eq. 5) has a large error when the training viewing angles are not uniform (Fig. 3). However, we actually know the camera poses. Is it possible to derive a pdf over viewing angles using the camera poses so that we can have a correct Monte Carlo integrator like: $$\frac{1}{K} \sum_{k=1}^K \frac{f(x_k)}{\mathrm{pdf}(x_k)} ~.$$ Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The weaknesses section doesn't discuss the discrepancy between the point-wise assumption and the volume rendering (see weakness 3). If the authors decide to forgo a theoretical proof for the proposed bias reduction strategy, it should also be listed as one of the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
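The corrected Monte Carlo integrator the reviewer proposes in Q3 is standard importance sampling. The following minimal sketch (a toy 1D integrand of our own choosing, not the paper's color integral) illustrates how dividing by the sampling pdf removes the bias introduced by nonuniform sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy integrand over [0, 1]; the true integral of f(x) = 3x^2 is 1.
f = lambda x: 3.0 * x**2

# Draw samples from a *nonuniform* proposal with pdf(x) = 2x,
# via inverse-CDF sampling: the CDF is x^2, so x = sqrt(u).
u = rng.random(100_000)
x = np.sqrt(u)
pdf = 2.0 * x

naive = f(x).mean()              # biased: ignores the sampling density
corrected = (f(x) / pdf).mean()  # unbiased importance-sampling estimate

print(naive, corrected)  # naive drifts toward 1.5; corrected is close to 1.0
```

Here the naive average converges to $\int_0^1 f(x)\,2x\,dx = 1.5$ rather than the true integral, while the pdf-corrected estimator recovers 1.0, which is exactly the failure mode (and fix) the reviewer describes for nonuniformly distributed viewing angles.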
Rebuttal 1: Rebuttal: # Weaknesses __W1__: Thank you very much for pointing that out. We will carefully examine and fix these incorrect terms. __W2__: Thanks for your comments. The goal of this work is to study the shape-radiance ambiguity problem of the NeRF model. The black background is a more challenging setting, with which more floaters tend to be generated in the free space, and in some scenes the shape-radiance ambiguity problem becomes more severe. That is why the PSNR values drop as you mentioned. We would like to emphasize that all experiments are done with the same black-background images. Thus, the comparison of our method with the baselines is fair under the same challenging setting. We will explain this in the final version of the paper. __W3__: Thanks a lot for your thoughtful comments. It is true that the closed-form color-field solution is point-wise. Our assumption is that, if a point is not occluded, or it is on the surface of an object, its color along a direction is directly determined by the corresponding observation color in the image. This assumption is true for the ideal case. It is mentioned in the NeRF++ paper that, ideally, the density should peak at the ground-truth surface location, in which case the color reduces to the surface light field. Under this assumption, the proposed closed-form color-field solution becomes sensitive to false geometry. If the geometry or the density field is not correct or sharp enough, which breaks this assumption, the color estimation results will be worse. This leads to worse rendered colors and thus a higher CF loss. We think that this is an advantage rather than a weakness, because the worse PSNR or high CF loss during training will correct the false geometry and lead it toward a more ideal one through backpropagation. __W4__: We apologize that we cannot provide a rigorous proof this time. We will try to derive the proof in the future and add this to the limitations for now. __W5__: Thanks for your suggestion.
We conduct the ablation study on all scenes of DTU given the density volumes from the trained Plenoxels. Below, we can see that by using the residual color estimation, the PSNR improves considerably. By only adding the occlusion handling, the PSNR decreases, but when it is combined with the residual color estimation, the PSNR is further improved. This demonstrates the effectiveness of both occlusion handling and residual color estimation. __Ablation study results__ |Method|PSNR| |:-:|:-:| |w/o occlusion handling and residual color estimation|14.61| |w/ occlusion and w/o residual color estimation|13.57| |w/o occlusion and w/ residual color estimation|25.99| |w/ occlusion and residual color estimation|26.49| __W6__: Thanks for your comments. We additionally calculate the PSNR on depth below for the NeRF synthetic data, as it provides ground-truth depth. It shows that we achieve an appreciable improvement. __Comparison of the PSNR on depth on the NeRF synthetic dataset__ |Method|Plenoxels|Plenoxels + CF loss|DVGO|DVGO + Distortion loss|DVGO + CF loss|DVGO + Distortion loss + CF loss| |:-:|:-:|:-:|:-:|:-:|:-:|:-:| |PSNR on depth|22.70|23.08|25.46|25.68|26.36|26.68| We report the detailed computational cost below. __Mean computational cost using different numbers of CF rays__ |Number of CF rays|DTU|NeRF synthetic|LLFF| |:-:|:-:|:-:|:-:| |0|17 min 43 sec|15 min 59 sec|24 min 11 sec| |10|32 min 39 sec|20 min 29 sec|29 min 22 sec| |25|43 min 1 sec|25 min 32 sec|34 min 33 sec| # Questions __Q1__: Thanks for your questions. As we explained in weakness 3 (__W3__), the closed-form solver performs worse because it is more sensitive to false geometry due to the point-wise estimation. __Q2__: We conduct both color estimation and training experiments on SH degrees and report the results below. Increasing the SH degree does produce better estimation and training results. The computational cost does not vary much, because the SH bands are efficiently handled by CUDA warps.
__Training results with different degrees of SH coefficients on the DTU dataset__ |SH degree / SH basis|0 / 1|1 / 4|2 / 9|3 / 16| |:-:|:-:|:-:|:-:|:-:| |PSNR|31.79|31.95|32.08|32.12| |IMRC|16.43|16.56|16.66|16.73| |Training time|46 min 31 sec|44 min 42 sec|43 min 1 sec|44 min 36 sec| __Color estimation results with different degrees of SH coefficients on the DTU dataset__ |SH degree|0|1|2|3| |:-:|:-:|:-:|:-:|:-:| |PSNR|27.22|28.29|29.44|29.98| __Q3__: Thanks a lot for your constructive question. A common choice for modeling the distribution of 3D directions is the von Mises-Fisher (vMF) distribution, defined as follows, \begin{equation} v(\mathbf{d}; \mu, c) = \frac{c}{2\pi(e^c - e^{-c})} e^{c \mu^\top \mathbf{d}}, \end{equation} where $\mu$ is the normalized mean direction and $c$ is the concentration parameter. Because the vMF distribution only deals with the unimodal case, we resort to the mixture of von Mises-Fisher distributions defined as \begin{equation} p(\mathbf{d};\mathbf{d}_1, ..., \mathbf{d}_K, c) = \frac{1}{K} \sum_k v(\mathbf{d}; \mathbf{d}_k, c), \end{equation} where $\mathbf{d}_1, ..., \mathbf{d}_K$ are the known viewing directions, used as the modes of the distribution. It is noteworthy that when $c$ approaches zero, the PDF becomes a uniform distribution, so the method used in the paper is actually a special case of the mixture of vMF distributions. We vary $c$ and use this PDF to estimate the color field given the density field from trained DVGOs. The results are reported below. The PSNR does not increase much, possibly because the viewing directions are few in our study. Despite this, we believe that it is an important improvement and will further study other PDFs and datasets in the future.
__Color estimation results with a modified Monte Carlo integrator on the DTU dataset__ |Concentration parameter|0|0.01|0.1|1|2| |:-:|:-:|:-:|:-:|:-:|:-:| |PSNR|29.44|29.45|29.43|28.93|27.97| --- Rebuttal Comment 1.1: Comment: **W2:** As I mentioned in the discussion in Reviewer qn5t's feedback, I think the claim that "black background is a more challenging setting" is premature. But the response has addressed my main concern about the fairness of the comparison. **W6:** Thanks for the new results. The improvements are still incremental to me given the extra training time and implementation complexity. However, I find the idea interesting with future potential, so it doesn't affect my rating. **Q3:** Thanks for the insightful discussion. I believe that including this discussion in the main paper would be beneficial. I appreciate the authors' responses, which address my main concerns, so I increase my rating. I don't have further questions.
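The mixture-of-vMF density proposed in the Q3 answer above is easy to check numerically. The sketch below is our own minimal implementation (with arbitrary mode directions standing in for the known viewing directions), verifying the claim that the density approaches the uniform value $1/(4\pi)$ on the sphere as $c \to 0$:

```python
import numpy as np

def vmf(d, mu, c):
    """von Mises-Fisher density on the unit sphere S^2."""
    return c / (2 * np.pi * (np.exp(c) - np.exp(-c))) * np.exp(c * mu @ d)

def vmf_mixture(d, modes, c):
    """Equal-weight mixture of vMF densities centered at the given modes."""
    return np.mean([vmf(d, mu, c) for mu in modes])

# Arbitrary unit mode directions (stand-ins for known viewing directions).
modes = [np.array(v) / np.linalg.norm(v)
         for v in ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0])]
d = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)

uniform = 1.0 / (4 * np.pi)
print(vmf_mixture(d, modes, c=1e-6), uniform)  # nearly equal as c -> 0
```

For larger $c$ the mixture concentrates around the observed directions, which is what makes it usable as the pdf in the importance-sampled integrator the reviewer suggested.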
Summary: While NeRFs achieve photorealistic results, they suffer from the shape and radiance ambiguity that is inherent in joint reconstruction. To alleviate this issue, the authors aim to separate the estimations using a closed-form radiance estimation. They leverage Spherical Harmonics to represent the radiance given a density field. With explicit occlusion handling based on the density field, all cameras with visibility for the point can be detected. The colors are then used to fit the SH coefficients for the specific surface point. The authors propose to use this closed-form color to regularize the NeRF training by enforcing similar behavior between the NeRF output and the closed-form color, in addition to the regular photometric loss. The effectiveness of this formulation is shown in various experiments and is agnostic to the underlying NeRF method used. Strengths: - One of the most significant advantages of this method is that the regularization is universally applicable to any NeRF method. - The method introduces a nice way to add a soft global view-dependent color regularization. - Besides issues with highly reflective surfaces, it improves considerably. Weaknesses: - Currently, I have the feeling the paper would benefit from an overview figure. This can also be beneficial as a reference in the experiments, where some evaluations only rely on the closed-form color, but in others, it's used as a regularization. Here, one can create a direct link to the overview figure and the symbols for each color output. A simple overview of how the method slots into an existing NeRF framework would be a great addition and can be easily incorporated next to Fig. 1. - The authors compared against other ray distribution priors but did not compare with model priors such as RegNeRF's depth/normal input from omnidata. - Similarly, RefNeRF also aims to improve surface reconstruction. How does this method fare against the proposed one? - The authors should briefly discuss Riegler et al.
- Stable View Synthesis, as it also handles aggregation of features from multiple views. The approach is vastly different, but the rough conceptual idea remains. The authors mentioned that the closed-form color field has issues approximating highly reflective objects, but the limitations are also quite severe. The performance of anything which drastically deviates from Lambertian reflection will suffer. Biasing everything towards Lambertian also solves the shape/radiance ambiguity. So I would like to see experiments where only lower-frequency view encodings are used, or a RefNeRF with an l2 penalty on the estimated roughness. Minor: - In 81/82: NeRF’s representation _of_ a scene - In 82: remove ‘makes it’: The NeRF’s representation of a scene inherently suffers from shape-radiance ambiguity - In 83: error _in_ geometry. - In 86: needs to be solved - In 174, the authors used minus as a verb. The correct verb is: subtract Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the additional overhead of the closed-form color calculation? - Would the decrease in performance on highly reflective surfaces be mitigated with more SH bands, or would the method become unstable during training? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the last point in the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Weaknesses __W1__: Thanks a lot for your useful suggestion. We will carefully prepare an overview figure that clarifies which part of the experiments is about evaluation and which is about regularization. We will also add links to the symbols, and show how the method slots into existing NeRFs. __W2__ & __W3__: Thanks a lot for your comments. To better compare with RegNeRF and RefNeRF, we additionally train the vanilla NeRF model and report the results below. The hyperparameters are the same as the default values in the original implementations, and all the input data are the same as the ones we use, which makes it a fair comparison. Among the implicit models, RegNeRF and RefNeRF outperform NeRF due to their regularization and special formulation. Their PSNRs also outperform those of Plenoxels and DVGO. But they need much longer training time. Specifically, more than one day is needed for training a scene using RegNeRF and RefNeRF, while the explicit models require less than an hour. Furthermore, the explicit models also enable faster rendering in the testing phase. __More experiments on other models on the DTU dataset__ |Model|PSNR|IMRC|Training time| |:-:|:-:|:-:|:-:| |NeRF|31.95|17.95|>10 hours| |RegNeRF|32.41|18.90|>1 day| |RefNeRF|32.34|18.61|>1 day| __W4__: Thank you very much for providing the omitted related work. Stable View Synthesis encodes features of images from source views by using a convolutional network and aggregates the features from these views for predicting the target view. The conceptual idea is roughly the same. We will add further discussion about it to the related work section. For the lower-frequency view encodings and the issues with highly reflective objects, please refer to the answer to __Q2__ below for a detailed discussion. # Questions __Q1__: Thanks for your comments. We would like to briefly introduce our implementation first. At each batch of training, a batch of $B$ rays is used.
Then, only the SH coefficients of the voxels intersected by these rays need to be estimated. Suppose that each ray involves $K$ voxels ($K$ actually varies between rays), so there are $M=BK$ voxels. To estimate the SH coefficients of a voxel, the density values along the ray that connects the voxel to each source camera need to be calculated. Suppose that there are $N$ cameras; then the total number of voxels involved is $NMK=NBK^2$. In our implementation, each CUDA block handles one of the $M$ voxels, and each thread in the block handles one of the $N$ cameras. So each thread handles a ray from a voxel toward one source camera, which involves the calculation of on the order of $K$ density values. As the threads are executed in parallel, in theory, this reduces the computational overhead from the order of $NBK^2$ to $K$. In practice, we observe that sometimes the computational overhead is affected by the batch size $B$. This is due to the hardware limit on the maximum number of CUDA threads. When too many threads are used, they are not guaranteed to be executed in parallel. We report and compare the computational cost of our method below. __Mean computational cost using different numbers of CF rays__ |Number of CF rays|DTU|NeRF synthetic|LLFF| |:-:|:-:|:-:|:-:| |0|17 min 43 sec|15 min 59 sec|24 min 11 sec| |10|32 min 39 sec|20 min 29 sec|29 min 22 sec| |25|43 min 1 sec|25 min 32 sec|34 min 33 sec| The number of source cameras used is the number of training images: 44 or 58 for DTU, 100 for NeRF synthetic, and from 17 to 54 for LLFF. DTU requires longer training time because it is trained for 12 epochs, while NeRF synthetic is trained for 9 epochs. For 9 epochs, DTU costs about 24 minutes and 32 minutes for 10 and 25 rays, respectively. Comparing these with those of NeRF synthetic, we can observe that the large number of 100 cameras does not obviously increase the computational burden.
__Q2__: First of all, we would like to clarify that there are two kinds of results listed in the paper. One is the color estimation results based on the density volume from pre-trained models. The other is the training results of the models armed with the CF loss. We apologize for the confusion and will describe this more clearly in the final version. The results in Figure 9 are color estimation results given the density volume from a pre-trained model; they do not involve a training process. In the case of the training results, we would like to emphasize that the PSNR for the scene in Figure 9 is improved by using the CF loss. In the case of the color estimation process, the PSNR of a closed-form color field does increase with more SH bands, as you have mentioned. Below we report the average PSNR over all DTU scenes from SH degree 0 to 3 (we use degree 2 in the paper). __Color estimation results with different degrees of SH coefficients on the DTU dataset__ |SH degree|0|1|2|3| |:-:|:-:|:-:|:-:|:-:| |PSNR|27.22|28.29|29.44|29.98| We also train with lower- and higher-frequency view encodings, which aims to answer weakness 4 (__W4__) that you pointed out. The lower view encodings handle non-Lambertian effects worse, so the PSNR is worse. By training with more SH bands, the results become better. __Training results with different degrees of SH coefficients on the DTU dataset__ |SH degree / SH basis|0 / 1|1 / 4|2 / 9|3 / 16| |:-:|:-:|:-:|:-:|:-:| |PSNR|31.79|31.95|32.08|32.12| |IMRC|16.43|16.56|16.66|16.73| |Training time|46 min 31 sec|44 min 42 sec|43 min 1 sec|44 min 36 sec| --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. I have no further questions and will increase my rating.
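The closed-form weighted least-squares SH fit at the heart of this discussion can be sketched as follows. This is our own minimal illustration (degree-1 real SH, synthetic noiseless colors, random weights standing in for transmittance), not the authors' CUDA implementation; `sh_basis_deg1` and `fit_sh` are names of our choosing:

```python
import numpy as np

def sh_basis_deg1(dirs):
    """Real SH basis up to degree 1 (4 functions) for unit directions (N, 3)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([0.28209479177 * np.ones_like(x),  # Y_0^0
                     0.48860251190 * y,                # Y_1^{-1}
                     0.48860251190 * z,                # Y_1^0
                     0.48860251190 * x], axis=1)       # Y_1^1

def fit_sh(dirs, colors, weights):
    """Weighted least-squares SH fit: solve (B^T W B) k = B^T W c in closed form."""
    B = sh_basis_deg1(dirs)
    W = np.diag(weights)
    return np.linalg.solve(B.T @ W @ B, B.T @ W @ colors)

rng = np.random.default_rng(0)
dirs = rng.normal(size=(50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

k_true = np.array([1.0, 0.2, -0.3, 0.1])      # ground-truth SH coefficients
colors = sh_basis_deg1(dirs) @ k_true         # noiseless observed colors
weights = rng.uniform(0.5, 1.0, size=50)      # stand-ins for transmittances

k_est = fit_sh(dirs, colors, weights)
print(np.allclose(k_est, k_true))             # exact recovery in the noiseless case
```

With occlusion handling, the weights of occluded cameras would drop toward zero so they barely influence the normal equations, which matches the transmittance-weighting idea described in the paper.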
Summary: While NeRFs have shown impressive properties in their ability to reconstruct the geometry of three-dimensional scenes, there are still some cases where the performance is not as good as expected. This paper focuses on one of these problems, the untangling of the geometry from the color. For that, the authors propose to learn a second color model, using a closed-form formula based on a spherical harmonics model. On top of the classic ray tracing of NeRF, this method requires tracing additional rays that go from the query point to the cameras. The authors also show the existence of a bias depending on the distribution of view angles and propose a method to correct it. Strengths: The paper focuses on an important problem. Accurately estimating the geometry in complex scenes, especially with high specularity, is still very relevant today. Despite the heavy computational load, the authors were able to successfully train different models. They also provide results on three different datasets, with two of them being real images. Weaknesses: The premise of the method is that density and colors are estimated independently in a NeRF model. This is actually false. In particular, in the vanilla NeRF model the prediction of the color directly depends on the density. The color MLP takes as input not the coordinates of the point but the features from the density-predicting network. This means that NeRF should actually be modeled by (c) in Figure 1 and not (a) as claimed in the paper. Many variables are not introduced properly, making the reading more difficult than it should be. The method is highly inefficient. For a given ray, for a given sampled point, the method requires integrating the density along the ray toward each source camera. This means that if the model is trained using $N$ cameras and $M$ points are sampled each time to approximate the integrals, the proposed method requires estimating the density of $NM^2$ points instead of the usual $M$ points.
The parameters of the method vary between datasets. No explanation or intuition is provided for that. Regarding the experiments, the results are quite incremental, with an increase on the order of 0.1 dB compared to the two previous methods. It's also disappointing that the analysis was not performed with more models. We can especially mention TensoRF, which also has a model based on SH. It would have also been interesting to compare with methods like RegNeRF that add additional regularizers to NeRF to reconstruct the geometry better even in difficult conditions. Moreover, while the paper focuses on improving the geometry of the trained model, there is no quantitative experiment showing that the proposed method indeed has an impact on the learned geometry. Only a few qualitative results are presented, with a color map for the depth that is very difficult to interpret. One can also regret the lack of computational analysis. No study is done on the unbiasing process proposed in the paper. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: What strategy of point sampling is performed for estimating the color and transmittance of a given point (i.e., when the ray from this point to the camera is drawn)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: I don't foresee any potential negative societal impact of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Weaknesses __W1__: Thanks for your comments. NeRF models can be categorized into implicit and explicit models, and we focus on the explicit ones in this work. They model the density and color as volumes. The color volume does not receive features from the density volume, so they can be estimated independently. We believe that the premise of our method is correct. __W2__: Thanks for your feedback. We will carefully revise the variables. __W3__: Thanks for your comments. To relieve the computational burden, we implement the method with CUDA kernels. At each batch of training, a batch of $B$ rays is used. Then, only the colors of the voxels intersected by these rays need to be estimated. Suppose that each ray involves $K$ voxels ($K$ actually varies between rays), so there are $M=BK$ voxels. To estimate the color of a voxel, the density values along the ray from the voxel to each of the $N$ source cameras need to be calculated. Then the total number of voxels involved is of order $NMK=NBK^2$. In our implementation, each CUDA block handles one of the $M$ voxels, and each thread handles one of the $N$ cameras. As the threads are executed in parallel, in theory, this reduces the computational overhead from the order of $NBK^2$ to $K$. In practice, we observe that sometimes the computational overhead is affected by the batch size $B$. This is due to the hardware limit on the maximum number of CUDA threads. When excessive threads are used, they are not guaranteed to be executed in parallel. We report and compare the computational cost below. The computational overhead is acceptable. __Mean computational cost using different numbers of CF rays__ |Number of CF rays|DTU|NeRF synthetic|LLFF| |:-:|:-:|:-:|:-:| |0|17 min 43 sec|15 min 59 sec|24 min 11 sec| |10|32 min 39 sec|20 min 29 sec|29 min 22 sec| |25|43 min 1 sec|25 min 32 sec|34 min 33 sec| The number of source cameras used is 44 or 58 for DTU, 100 for NeRF synthetic, and from 17 to 54 for LLFF.
DTU requires longer training time because it is trained for 12 epochs, while NeRF synthetic is trained for 9 epochs. For 9 epochs, DTU costs about 24 minutes and 32 minutes for 10 and 25 rays, respectively. Comparing these with those of NeRF synthetic, we can observe that the large number of 100 cameras does not obviously increase the computational burden. __W4__: Thanks for your comments. To determine the hyperparameters, we set aside a portion of the training data as validation data and use grid search to find the best hyperparameters. Then, we use all training data for training. Specifically, we try weight factors for the CF loss of 1, 5, 10, and 20, and numbers of rays of 10 and 25. __W5__: Thanks a lot for your detailed comments. We separate our answer into four sections in the following. __W5.1 More experiments__ We follow the default hyperparameter settings in the original works, and train TensoRF, TensoRF with CF loss, NeRF, RegNeRF, and RefNeRF on the DTU dataset. The results are reported below. Among the implicit models, RegNeRF and RefNeRF outperform NeRF due to their regularization and special formulation. Their PSNRs also outperform those of Plenoxels and DVGO, but they need much longer training time. The explicit model TensoRF performs better, because it allows a high grid resolution due to its decomposition of the density and color volumes. By adding the CF loss to TensoRF, the PSNR further improves. This demonstrates the effectiveness of our method. __More experiments on other models on the DTU dataset__ |Model|PSNR|IMRC|Training time| |:-:|:-:|:-:|:-:| |NeRF|31.95|17.95|>10 hours| |RegNeRF|32.41|18.90|>1 day| |RefNeRF|32.34|18.61|>1 day| |TensoRF|32.49|18.86|22 min 56 sec| |TensoRF + CF loss|32.66|19.04|57 min 23 sec| __W5.2 Quantitative results of geometry evaluation__ We evaluate the geometry of the models trained on the NeRF synthetic dataset by calculating the PSNR on depth maps. The results are reported below.
Our CF loss increases the PSNR by 0.38 for Plenoxels and 0.9 for DVGO. For the datasets that do not have ground-truth depth maps, we use the IMRC metric defined in equation (14) in the manuscript as an alternative. As Reviewer qn5t points out, this metric is an approximation of PSNR in 3D. Wrong geometry, such as floaters in the free space, will be penalized due to the high residual color. Our method increases this metric on both the DTU and LLFF datasets. Overall, these quantitative results demonstrate that our method improves the geometry. __Comparison of the PSNR on depth on the NeRF synthetic dataset__ |Method|Plenoxels|Plenoxels + CF loss|DVGO|DVGO + Distortion loss|DVGO + CF loss|DVGO + Distortion loss + CF loss| |:-:|:-:|:-:|:-:|:-:|:-:|:-:| |PSNR on depth|22.70|23.08|25.46|25.68|26.36|26.68| __W5.3 Computational analysis__ Please see the answer to weakness __W3__ that you pointed out. __W5.4 Study of the unbiasing process__ We conduct ablation studies for the color estimation process given the density from trained Plenoxels, which involves two components: occlusion handling and residual color estimation (unbiasing). The results are reported below. Please refer to the answer to weakness 5 (W5) pointed out by Reviewer LJiP for a detailed discussion. This study demonstrates the effectiveness of the unbiasing process. __Ablation study of the closed-form color estimation on the DTU dataset__ |Method|PSNR| |:-:|:-:| |w/o occlusion handling and residual color estimation|14.61| |w/ occlusion and w/o residual color estimation|13.57| |w/o occlusion and w/ residual color estimation|25.99| |w/ occlusion and residual color estimation|26.49| # Questions __Q1__: Thanks for your question. The point sampling strategy for the rays that connect a point to a camera center is the same as the one used for the original training rays in Plenoxels.
Specifically, it uses uniform sampling along the ray that intersects with the density volume with a step size of 0.5 voxel, and no random noise is applied in the sampling.
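The sampling described above (uniform steps of 0.5 voxel along the ray, no random jitter) can be sketched as follows; the function name and the near/far bounds are illustrative stand-ins, not the authors' actual implementation:

```python
import math

def sample_points_along_ray(origin, direction, t_near, t_far, step=0.5):
    """Uniformly sample points between t_near and t_far along a ray.

    `step` mirrors the 0.5-voxel step size described in the rebuttal;
    no random noise is applied, so sampling is deterministic.
    """
    # Normalize the direction so `step` is measured in voxel/world units.
    norm = math.sqrt(sum(d * d for d in direction))
    d = [c / norm for c in direction]
    points, t = [], t_near
    while t < t_far:
        points.append(tuple(o + t * c for o, c in zip(origin, d)))
        t += step
    return points

# 8 deterministic samples at z = 0.0, 0.5, ..., 3.5 along the +z axis.
pts = sample_points_along_ray((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.0, 4.0, step=0.5)
```

In a real implementation, `t_near` and `t_far` would come from intersecting the ray with the density volume's bounding box.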
Summary: The paper proposes a novel regularization technique for optimizing radiance fields to achieve more accurate geometry alongside realistic novel views. The regularizer is based on a closed-form estimation of spherical harmonic coefficients for view-dependent color, as a function of the training pixel colors and the predicted density field. Using this closed-form procedure to estimate color from density enables regularizing density directly through photometric loss. This regularizer is therefore adaptive to each scene, unlike more general, standard scene-agnostic regularizers like sparsity and total variation. The paper shows empirical improvement in reconstructed geometry via improved rendered depth maps, for two voxel-based scene representations (DVGO and Plenoxels). Strengths: I really like the observation the paper makes that it is possible to predict color in closed form as a function of density field and training ray colors. It makes a lot of sense that using this closed-form solution for color and then optimizing density directly through photometric loss, should yield more accurate reconstructed density fields. And indeed the results do show improved depth renderings. Weaknesses: presentation weaknesses - Figure 2 should include o_k and P_k, and ideally also F_c. In its current form there is a bit of a struggle to match variables in the text to components in the figure. - I don’t see much value added by Figure 3. It is used to illustrate the idea that nonuniform or systematically biased sampling can result in a systematic error in the estimated integral. In my view, this idea is clear from textual description, and the space currently dedicated to the figure could be better spent on detail (either textual or ideally graphical) about how the proposed bias correction works. - As an added point of confusion around Figure 3, why include a term that is cos(0x) (in both the figure and the accompanying text)? 
It appears that this is literally referring to cos(0)=1, which has no effect in the example shown. - Equation 14 (the definition of the IMRC metric) could benefit from another sentence of explanation, describing the intuition that this metric is an approximation of PSNR in 3D. - Figures 4 and 5 would benefit from more substantial captions that explain the experiment being presented and how the results should be interpreted. Currently this information is only available in the text, but it’s best if figures are also self-contained. - It’s not clear from the paper whether the results (or which subset of the results) are fine-tuned using pre-trained models vs trained from scratch. I’m not sure if the argument is that fine-tuning with the closed-form loss improves existing pre-trained geometry, or that training from scratch with the closed-form loss as a regularizer produces better geometry, or both. - There are scattered typos and minor grammatical issues; please copy-edit the final paper. evaluation weaknesses - The evaluation uses a black background, whereas in the baseline papers a white background was used. Is there a reason for changing the background color? - PSNR values reported for the two baselines (Plenoxels and DVGO) are substantially lower than the values reported in the original papers (29.83 vs 31.71 for Plenoxels, and 31.58 vs 31.95 and 32.8 for DVGO). As far as I’m aware the only difference is the use of the black background; I’m concerned that either this or some other implementation difference makes the baselines used unnecessarily weak. - I understand the IMRC metric and see why it’s useful for datasets where ground truth geometry is unavailable, but in my view an even stronger evaluation would be to compare PSNR on depth maps for a dataset where ground truth geometry is available (e.g. this could be rendered for the NeRF-synthetic dataset). 
- Figure 9 shows that there is a gap in quality between renderings produced by the trained color field vs the closed-form color field—can the authors comment on the origin of this gap, and how it might be closed? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: My biggest question about this paper is why the proposed method is used as a regularization scheme on top of the usual least squares objective, rather than optimizing a density-only model and directly using the colors from the closed-form predictions. I wouldn’t be surprised if doing so yields even better geometry, and reduces model size as an added bonus. Or if this simpler strategy works worse than what is proposed, do the authors have an explanation why (perhaps related to the limitation illustrated in Figure 9)? I would also appreciate responses to my comments in the weaknesses section, primarily regarding evaluation weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes, limitations are discussed to some extent, but the paper would benefit from more discussion of the origin (and potential remedies, if they exist) of the limited quality of the estimated color field. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Presentation weaknesses Thank you very much for your detailed comments and suggestions about the presentation. We will try our best to fix these in the final version of the paper. # Evaluation weaknesses __W1__: Thanks for your comments. The goal of this work is to study the shape-radiance ambiguity problem of the NeRF model. The black background is a more challenging setting, in which more floaters tend to appear in the free space. It provides a better way to reveal and study the shape-radiance ambiguity problem. To make this clearer, we will add this reason to the experimental settings section in the final version of the paper. __W2__: Thanks for your comments. First, we would like to clarify that there are no other implementation differences except for the use of the black background. The black background is a more challenging setting, which is why the PSNR values drop as you mentioned. We would like to emphasize that all the experiments in the paper are done with the same black background images. Thus, the comparison of our method with the baselines is fair under the same challenging setting. Moreover, in practice, photos may be taken at night, which makes their background mainly black, so it is also meaningful to study model performance under this setting. To make this clearer for readers, we will explain this in the experimental results section in the final version of the paper. __W3__: Thank you very much for providing us with a stronger evaluation method. We evaluate the depth maps for the NeRF-synthetic dataset, as the ground truth depth is provided in the test datasets. Because the pixel values of an image for PSNR calculation fall within [0, 1], we first normalize the ground truth and predicted depth to be within this range. Then, we can calculate the PSNR on depth maps. The results are reported in the table below. We can see that for both Plenoxels and DVGO, the CF loss improves the quality of the depth maps.
__Comparison of the PSNR on depth of the NeRF synthetic dataset__

|Method|Plenoxels|Plenoxels + CF loss|DVGO|DVGO + Distortion loss|DVGO + CF loss|DVGO + Distortion loss + CF loss|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|PSNR on depth|22.70|23.08|25.46|25.68|26.36|26.68|

__W4__: It is difficult to model highly reflective objects with a low SH degree, especially given few views. The drop in PSNR can be greatly mitigated by using a higher SH degree. Below we report the average PSNR over all DTU scenes for SH degrees 0 to 3 (we use degree 2 in the paper). On average, the PSNR increases by 0.54 dB from SH degree 2 to 3.

__Color estimation results with different degrees of SH coefficients on the DTU dataset__

|SH degree|0|1|2|3|
|:-:|:-:|:-:|:-:|:-:|
|PSNR|27.22|28.29|29.44|29.98|

# Questions

__Q1__: Thanks a lot for your constructive question. We are also aware that it is possible to optimize a density-only model directly using our proposed CF loss. The main difficulty lies in the computational overhead. We give a computational analysis in the answer to Weakness 3 of Reviewer hBrk, and report the computational cost in the table below. A main conclusion is that the computational cost is affected by the batch size, or number of rays, in practice. In our experiments, we use 25 rays in each batch for regularization in addition to the 5000 rays used in the original optimization process, which takes about 43 minutes to train a DTU model on average.

__Mean computational cost using different numbers of CF rays__

|Number of CF rays|DTU|NeRF synthetic|LLFF|
|:-:|:-:|:-:|:-:|
|0|17 min 43 sec|15 min 59 sec|24 min 11 sec|
|10|32 min 39 sec|20 min 29 sec|29 min 22 sec|
|25|43 min 1 sec|25 min 32 sec|34 min 33 sec|

On one hand, if we increase the number of rays to 5000 for either regularization or optimization of a density-only model, the computational cost increases from under an hour to half a day.
As we focus on the explicit NeRF models in this work, which feature a fast training process, we aim to limit the training time to within an hour. On the other hand, if we only use 25 rays in each batch to optimize a density-only model, the results are worse than those of the original model that uses 5000 rays in each batch. That is why we apply the CF loss as a regularization. It is still interesting to compare the density-only model with the original model under a fair setting. Specifically, we train the original Plenoxels also using 25 rays on the DTU dataset, and compare it with the density-only model trained by the CF loss using 25 rays. This is a more challenging setting, as few training rays easily make the model overfit to a local region, and thus exacerbate the shape-radiance ambiguity problem. We report the mean metrics in the table below. The density-only models not only converge, but even yield better PSNR and IMRC.

__Comparison of the Plenoxels and density only model with CF loss by using 25 rays__

|Method|PSNR|IMRC|
|:-:|:-:|:-:|
|Plenoxels|26.12|14.21|
|Density only model with CF loss|27.82|15.33|

In summary, we believe that there is great potential to train density-only models using the CF loss, but in this work, we use it as a regularization term to meet our goal of acceptable training time. A unique advantage of our regularization method is that even with only the regularization term, the model can converge, which is not achieved by other regularizers in NeRF to the best of our knowledge. We will further explore how more CF rays work in the future. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response and additional experiments. I greatly appreciate the comparison of training with and without closed-form color only, rather than using it as a regularizer. Even though this comparison is done using very small batches for computational reasons, it still shows promising results.
I would very much like to see a strategy that optimizes density only and retains computational efficiency, but it’s ok if this is deferred to future work. I still don’t really "buy" the explanation that black background is a more challenging setting than white background; my suspicion is that it is only more challenging because the prior methods use parameters that were tuned to work well with a white background, and re-tuning for a black background may well recover similar performance. Or at least this has been my experience in some of my own experiments. --- Reply to Comment 1.1.1: Comment: Thanks very much for your reply and thoughtful comments! We fully understand your concern about the parameters. Therefore, we carefully search for the weight factors of the total variation loss for both the density ($tv_d$) and color ($tv_c$) volumes on the NeRF synthetic dataset. They are the most important hyper-parameters. Other hyper-parameters such as the learning rate, grid size, and number of training epochs remain the same. Specifically, $tv_d$ ranges from 1e-5 to 1e-2, and $tv_c$ ranges from 1e-4 to 1e-1. Below, we report the results centered around the best one (highlighted in bold face).

__Grid searched PSNRs for different combinations of $tv_d$ and $tv_c$__

|$tv_d$ \ $tv_c$| 1e-4 | 1e-3 | 1e-2 | 1e-1|
|-----------|-----|-----|-----|-----|
|__1e-5__|29.70|29.83|29.71|29.14|
|__1e-4__|29.80|29.92|29.88|29.33|
|__1e-3__|29.70|29.88|__29.97__|29.63|
|__1e-2__|29.31|29.63|29.90|29.63|

The original result is 29.83 for PSNR and 22.70 for the PSNR on depth. After the grid search, the best PSNR becomes 29.97 and the corresponding PSNR on depth is 22.99. The PSNR does improve, but compared with the PSNR trained on the white background, i.e., 31.71, there is still a large gap. Therefore, we are afraid that re-tuning the hyper-parameters, at least the weight factors of the total variation loss, does not recover similar performance.
Furthermore, we add the proposed CF loss to the best case. The resulting PSNR is 29.99 and the PSNR on depth is 23.62. We can see that under the same setting, the CF loss still helps to improve the geometry of the scenes by 0.63 dB on average. We would like to emphasize again that our comparison is fair: the only difference between our method and the original model is that we add the CF loss. All other settings are the same. Thanks again for your reply.
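The grid search described in the comment above can be sketched as follows; `train_and_eval` is a hypothetical stand-in for training a Plenoxels model with the given TV-loss weights and returning its test PSNR:

```python
from itertools import product

def grid_search_tv_weights(train_and_eval):
    """Search TV-loss weight factors for density (tv_d) and color (tv_c) volumes.

    `train_and_eval(tv_d, tv_c)` is a hypothetical callback that trains a model
    with the given weights and returns its test PSNR; all other hyper-parameters
    (learning rate, grid size, number of epochs) are held fixed, as in the reply.
    """
    tv_d_grid = [1e-5, 1e-4, 1e-3, 1e-2]  # density TV weight range from the reply
    tv_c_grid = [1e-4, 1e-3, 1e-2, 1e-1]  # color TV weight range from the reply
    # Evaluate the full Cartesian product and keep the best-scoring pair.
    return max(product(tv_d_grid, tv_c_grid),
               key=lambda cfg: train_and_eval(*cfg))

# Toy stand-in objective whose peak matches the reply's best cell (tv_d=1e-3, tv_c=1e-2).
fake_psnr = {(1e-3, 1e-2): 29.97}
best = grid_search_tv_weights(lambda d, c: fake_psnr.get((d, c), 29.0))
```

In practice each `train_and_eval` call is a full training run, so the 16-cell grid is the dominant cost.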
Rebuttal 1: Rebuttal: Dear all, We deeply appreciate the comments from all reviewers, which are invaluable for improving our paper. We have carefully considered all comments and responded point by point. To address all concerns and questions raised by the reviewers, and to make our paper better, we summarize all additional evaluation results based on the reviews below. We believe these additional evaluations help demonstrate the effectiveness and efficiency of our method.

__Table 1__: Comparison of the Plenoxels and density only model with CF loss by using 25 rays

|Method|PSNR|IMRC|
|:-:|:-:|:-:|
|Plenoxels|26.12|14.21|
|Density only model with CF loss|27.82|15.33|

__Table 2__: Comparison of the PSNR on depth of the NeRF synthetic dataset

|Method|Plenoxels|Plenoxels + CF loss|DVGO|DVGO + Distortion loss|DVGO + CF loss|DVGO + Distortion loss + CF loss|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|PSNR on depth|22.70|23.08|25.46|25.68|26.36|26.68|

__Table 3__: Mean computational cost using different numbers of CF rays

|Number of CF rays|DTU|NeRF synthetic|LLFF|
|:-:|:-:|:-:|:-:|
|0|17 min 43 sec|15 min 59 sec|24 min 11 sec|
|10|32 min 39 sec|20 min 29 sec|29 min 22 sec|
|25|43 min 1 sec|25 min 32 sec|34 min 33 sec|

__Table 4__: More experiments on other models on the DTU dataset

|Model|PSNR|IMRC|Training time|
|:-:|:-:|:-:|:-:|
|NeRF|31.95|17.95|>10 hours|
|RegNeRF|32.41|18.90|>1 day|
|RefNeRF|32.34|18.61|>1 day|
|TensoRF|32.49|18.86|22 min 56 sec|
|TensoRF + CF loss|32.66|19.04|57 min 23 sec|

__Table 5__: Training results with different degrees of SH coefficients on the DTU dataset

|SH degree / SH basis|0 / 1|1 / 4|2 / 9|3 / 16|
|:-:|:-:|:-:|:-:|:-:|
|PSNR|31.79|31.95|32.08|32.12|
|IMRC|16.43|16.56|16.66|16.73|
|Training time|46 min 31 sec|44 min 42 sec|43 min 1 sec|44 min 36 sec|

__Table 6__: Color estimation results with different degrees of SH coefficients on the DTU dataset

|SH degree|0|1|2|3|
|:-:|:-:|:-:|:-:|:-:|
|PSNR|27.22|28.29|29.44|29.98|

__Table 7__: Color estimation results with a modified Monte Carlo integrator that uses a mixture of von Mises-Fisher distributions on the DTU dataset

|Concentration parameter|0|0.01|0.1|1|2|
|:-:|:-:|:-:|:-:|:-:|:-:|
|PSNR|29.44|29.45|29.43|28.93|27.97|

__Table 8__: Ablation study of the closed-form color estimation on the DTU dataset

|Method|PSNR|
|:-:|:-:|
|w/o occlusion handling and residual color estimation|14.61|
|w/ occlusion and w/o residual color estimation|13.57|
|w/o occlusion and w/ residual color estimation|25.99|
|w/ occlusion and residual color estimation|26.49|
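The depth-map PSNR evaluation used for Table 2 (normalize ground-truth and predicted depth into [0, 1], then apply standard PSNR) can be sketched as follows; normalizing both maps by the ground-truth depth range is an assumption about the exact normalization, which the rebuttal does not fully specify:

```python
import math

def psnr_on_depth(gt_depth, pred_depth):
    """PSNR between flattened depth maps after scaling both into [0, 1].

    Assumption: both maps are normalized by the ground-truth depth range,
    one plausible reading of the normalization described in the rebuttal.
    """
    lo, hi = min(gt_depth), max(gt_depth)
    scale = (hi - lo) or 1.0  # guard against a constant depth map
    gt = [(d - lo) / scale for d in gt_depth]
    pred = [(d - lo) / scale for d in pred_depth]
    mse = sum((g - p) ** 2 for g, p in zip(gt, pred)) / len(gt)
    # With values in [0, 1], the peak signal is 1, so PSNR = 10 log10(1 / MSE).
    return float("inf") if mse == 0 else 10.0 * math.log10(1.0 / mse)

val = psnr_on_depth([0.0, 1.0, 2.0], [0.0, 1.1, 2.0])
```

Real depth maps would be 2D arrays flattened into these lists, with background pixels masked out if no depth is defined there.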
NeurIPS_2023_submissions_huggingface
2023
Learning to Configure Separators in Branch-and-Cut
Accept (poster)
Summary: The authors use machine learning to decide how and when to toggle the on/off switches for different cutting plane families provided by SCIP, the fastest open-source MIP solver. Their learning algorithm outperforms the default setting of SCIP on various benchmark sets. Strengths: The techniques used to deal with the high dimensional combinatorial space of possible configurations are interesting. The results are promising and yield speedups over SCIP default on established benchmark sets. The interactions between different cut families is not an extremely well understood topic. It is nice to know that machine learning can help decide what families to activate, and when. Weaknesses: Ultimately I don’t think the proposed methodology is too novel, since it boils down to learning how to set on-off toggles for a variety of cut families. I acknowledge that this toggling is a critical component of tuning MIP solvers, but since this paper doesn’t mention what cut families ended up being selected and most useful for the different problem instances, I believe it leaves the most interesting question of what separator families worked best for which problems on the table. So methodologically this paper doesn’t seem too different from an array of previous algorithm configuration papers for MIP (e.g. Hydra-MIP by Xu, Hutter, Hoos, Leyton-Brown) that aim to tune parameters of a MIP solver from past data, but without any deeper principled investigation of what the parameters are actually doing. Hypothetically, it could be the case that turning off SCIP’s Gomory cut generation helps out for a class of MIPS, but maybe if the actual Gomory cut generation was tweaked performance would improve. Learning to toggle does not yield any such insights, as far as I can tell. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: The title/phrase “learning to separate” slightly misleading. 
The *separation problem* refers to the specific problem of generating a cut (from some class of cuts) that cuts off the LP optimum. The authors here are not concerned with that problem, rather they are concerned with the on/off toggle for a particular family of cuts that is being generated via SCIP’s separation routines. The authors discuss generalization guarantees but do not cite or compare to any prior work on such theoretical guarantees for cutting planes/integer programming/tree search. It might be worth mentioning how this compares to that work and where it is different (ostensibly “learning to toggle” falls into some of the frameworks studied previously). As mentioned previously, one of the most interesting questions that is completely missing: what cut families ended up being selected and working well for the different problem instances? To me this would be the most interesting set of conclusions, since it is well known that MIP solvers need to be tuned to yield improved performance. The question of what cuts work best for what problems is much more elusive. Overall, I would be much more in favor of acceptance if there was some discussion about this aspect. Presumably this question is already answered by the experiments the authors ran, and including some observations here would be great. Given that the authors are controlling granular on/off parameters (and not fundamentally modifying the underlying algorithms), why not compare to a state-of-the-art solver like Gurobi or CPLEX, which are free for academic use and significantly faster than SCIP? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the actionable feedback in terms of interpreting the results, using other MILP solvers, and providing pointers to relevant theoretical works. We have made great efforts to address the reviewer’s concerns in our response below and have included additional experiments in the general response, covering (1) interpretations of effective separators for each MILP class from our learning results (2) applying our method to a different MILP solver Gurobi, and (3) applying our method to a different metric (relative gap improvement under a fixed time). We hope that the Reviewer will take these new results into account and increase our score if we have adequately addressed the main concerns. > As mentioned previously, one of the most interesting questions that is completely missing: what cut families ended up being selected and working well for the different problem instances? We sincerely appreciate the reviewer’s excellent suggestions on interpreting our learning results. We provide visualizations and interpretations in General Response [GR1]. These visualizations lead to several intriguing observations, including (1) our learned model can automatically select separator families that are known to be effective for certain MILP class, and (2) our learned model can differentiate heterogeneous instance class and select customized separators tailored to different types of MILP instances. We plan to include visualizations for all MILP classes in the Appendix of the updated paper. Given the alignment of effective separator families on standard MILP benchmarks between our learning model and the existing literature, we believe our learned model can serve as a valuable tool for guiding the automatic selection of separator families for different MILP classes, and inspiring the mathematical programming community to further investigate the interconnection between certain separators and MILP instances as suggested by our learning method. 
> Given that the authors are controlling granular on/off parameters (and not fundamentally modifying the underlying algorithms), why not compare to a state-of-the-art solver like Gurobi or CPLEX, which are free for academic use and significantly faster than SCIP? SCIP allows us to update separator configurations multiple times during a solve process (by modifying the source code), while state-of-the-art solvers Gurobi does not (separator configurations have to be fixed before the solve starts). Nonetheless, we can apply our method to Gurobi by configuring separators once before the solve starts. Inspired by the reviewer’s question, we perform this additional experiment in General Response [GR2]. We are delighted to report that our method is also effective in accelerating Gurobi. > The authors discuss generalization guarantees but do not cite or compare to any prior work on such theoretical guarantees for cutting planes/integer programming/tree search. It might be worth mentioning how this compares to that work and where it is different. We thank the reviewer for providing references to relevant papers. We will include them in the updated paper. The most relevant work [1] studies generalization of portfolio-based algorithm selection, where the procedure first selects a subset of algorithm parameter settings, and then, for a given problem instance, uses an algorithm selector to choose a parameter setting from the portfolio. We summarize the key differences below: - Our theoretical generalization analysis **directly informs** our empirical configuration subspace restriction algorithm, where we design a filtering criterion to improve the generalization bound. 
In contrast, the previous theoretical work analyzes the generalization bound on a *given* subspace construction procedure; however, their bound is not informative for designing the construction procedure, as it contains an abstract constant (representing the number of piecewise constant regions in the performance function [2]) which is unknown in practice and hence cannot be applied empirically. - the generalization bound in the previous work does not consider the influence of the quality and diversity of the parameter settings within a subset of a fixed size, which our generalization bound captures. In our experiments, we demonstrate that the quality of each configuration is important for a fixed size configuration subspace (see our ablation “Greedy Restr.” in Table 2 which does not apply our filtering criterion to consider individual configuration’s quality, and leads to suboptimal results). Additional theoretical works investigate the generalization bound of the branch-and-cut framework, mainly focusing on cut selection [3, 4]. However, the analyses either rely on the structure of a specific MILP cut family (e.g. Chvátal-Gomory) or study how a selected cut affects the LP relaxation’s optimal solution, which cannot be generalized to the upstream, higher-level task of separator configuration. *[1] Balcan, Maria-Florina, Tuomas Sandholm, and Ellen Vitercik. "Generalization in portfolio-based algorithm selection." Proceedings of the AAAI Conference on Artificial Intelligence (2021).* *[2] Balcan, Maria-Florina, et al. "How much data is sufficient to learn high-performing algorithms? Generalization guarantees for data-driven algorithm design." Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing. (2021).* *[3] Balcan, Maria-Florina F., et al. "Sample complexity of tree search configuration: Cutting planes and beyond." Advances in Neural Information Processing Systems 34 (2021).* *[4] Balcan, Maria-Florina F., et al. 
"Structural analysis of branch-and-cut and the learnability of gomory mixed integer cuts." Advances in Neural Information Processing Systems 35 (2022).* > The title/phrase “learning to separate” is slightly misleading. We are happy to change the title to “Learning to configure separators in branch-and-cut”. --- Rebuttal Comment 1.1: Comment: I greatly appreciate the authors' amazingly thorough response. The qualitative results are very interesting, and it is especially fascinating to me that the L2Sep method "discovers" classical knowledge. Coupled with this, I find the message that L2Sep meaningfully shows when and where to toggle specific cut families to be significant and interesting, given that the "when to cut" question is equally not-well-understood as the "how to cut" question, and has received significantly less study. My initial score of 4 was probably too harsh, and I would be happy to raise my score to a 6/7. I do think the title change proposed by the authors would be appropriate. --- Reply to Comment 1.1.1: Title: Thank you for your feedback and suggestions! Comment: We are thankful to the reviewer for taking a careful look at our rebuttal and taking the time to revise their assessment. We're happy that our rebuttal alleviated the reviewer's main concerns, and we will make the title change in the updated paper. Thank you again for your detailed feedback and suggestions throughout the whole process!
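The configuration-subspace restriction discussed in this thread (filtering a large space of on/off separator toggles by each configuration's measured quality before learning over the restricted space) can be illustrated with a deliberately simplified sketch; the separator names, the `speedup` callback, and the greedy top-k filtering are all hypothetical stand-ins, not the authors' actual algorithm or SCIP's parameter interface:

```python
from itertools import product

# Hypothetical subset of SCIP separator families, each with an on/off toggle.
SEPARATORS = ["gomory", "clique", "flowcover", "mcf", "zerohalf"]

def restrict_configurations(speedup, keep=4):
    """Keep the `keep` best on/off configurations by measured speedup.

    `speedup(config)` is a hypothetical callback returning the relative
    solve-time improvement of a configuration over the solver default,
    averaged over training instances. Ranking-then-truncating is only a
    simplified illustration of filtering a subspace by configuration quality.
    """
    configs = [dict(zip(SEPARATORS, bits))
               for bits in product([True, False], repeat=len(SEPARATORS))]
    configs.sort(key=speedup, reverse=True)
    return configs[:keep]

# Toy speedup model: pretend only "gomory" and "clique" matter for this class.
best = restrict_configurations(
    lambda c: 0.3 * c["gomory"] + 0.2 * c["clique"])
```

A learned policy would then choose among the retained configurations per instance (and, in SCIP, per reconfiguration step), rather than searching the full 2^k space.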
Summary: The paper proposes a pipeline for learning-enhanced cut separation management in modern MIP solvers (specifically, in the academic solver SCIP). To pick a promising subset of the large number of possible settings, the authors derive a data-driven and theoretically motivated method to focus on specific configurations that are expected to perform well. The actual policy on how cut separators are used during a MIP solving run is then learned over the restricted configuration space. The experimental results show significant solve-time improvements for a variety of benchmark MIP classes and test sets; this holds both when learning within MIP subclasses and when learning within heterogeneous benchmark sets such as MIPLIB. The presented work has the potential to replace current fixed, heuristic solver settings for cutting plane separation by carefully learned policies that often yield large speed-ups, which in turn can increase the size of problem instances that can be solved to optimality, and therefore represents a very notable advancement in data-driven MIP solving techniques. -- update: I have read and acknowledged all other reviews and the authors' rebuttals, see discussion. -- Strengths: Besides the impressive empirical results, the theoretical justification/motivation for the configuration pre-selection is a strength of this work. Moreover, although the work spans many different and subtle aspects of the intricate workings of MIP solvers and employs different machine learning techniques, the authors did an excellent job in condensing their work into an understandable and well-written main paper; the supplementary material, while containing the technical proofs, is largely additional information that well complements the main paper but is mostly not necessary to follow the main paper.
Weaknesses: It appears to have become common practice to submit papers to NeurIPS (and ICML) whose actual, main content is put in a separate "Supplementary Material" document whose length far exceeds that of the supposed main paper. This paper is only a partial exception -- the main paper provides enough information and details to stand on its own except for the most technical bits, which can then be found in the very long Appendix (supplementary document; three times as long as the main paper) along with a host of additional information. Thus, it may be considered a weakness of the paper to have such a long Appendix, because this format bears the danger of the formally most important parts of the work (proofs; algorithm details and specific setups) not being reviewed thoroughly due to the short review period and high review load of reviewers at these conferences. I cannot exclude myself from this -- I simply did not have the time to rigorously check all the details in the long supplementary document, and therefore cannot give a definitive answer regarding the proofs' correctness beyond "believing" everything appears to be well in order. In this regard, I cannot help but wonder if a full journal paper would not be the better way to publish results that simply do not fit into the 9-page limit. But, again, I have seen papers that exploit the main paper/supplementary material split in a much worse way; for the present work, it actually hardly bothered me because the main paper is nearly self-sufficient. Nevertheless, if I have to point to a weakness, this aspect is what stands out to me. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - there are several (minor) typos throughout paper and supplementary document (e.g., no comma after "e.g.", "c.f." or "v.s." instead of "cf." and "vs.", "constraint" <-> "constrained" (caption of Fig.2), "contrary" <-> "contrast" (l.193), "included A" <-> "included in A" (l. 
239), "which we provide comparision" <-> "for which we provide comparison" (l. 311), "U Stuhl" (ref. [47] -- what's the first name?)) - some wordings are a bit inaccurate (at least when not having seen the supplementary document yet, i.e., when reading just the first paper): * l.19: "lower bound" -- this implies that the problem is a minimization problem, which is not specified here (only in the supplement). Maybe clarify, or use "dual bound"? * l. 66 and again later: The product-sign (\prod , large Pi) should be a *Cartesian* product-sign * Prop. 1 is referred to as "lemma" in the paragraph preceding it. * Sect. 3.2: "network" kind of drops out of nowhere -- please clarify a bit here that neural networks are used for the discussed task * in Tab. 1 and 3 at least: "higher the better" only applies to the median; for standard deviation, lower is better! * is the reported standard deviation given in percent deviation from the default or as percent variation from the median values? how often, if at all, does reconfiguration actually slow down the solver? what are the mean/average deviations from default? - it would be good to clarify in the main paper that the separation rounds after which configuration updates occur are specified in advance; this only becomes clear in the supplementary document, so reading the main paper, one wonders how this is decided and/or if it is part of the learning procedure to decide when to update configurations. Also, are there cases in which the MIP solver terminates before the number k of updates has been performed? - have you tried reverting to the default configuration as the last update? - which LP solver was used by SCIP to solve the relaxations? SoPlex? - were all MIPs solved in single-thread mode? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations have been appropriately addressed (at least in the supplementary material). Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are delighted and truly appreciate the reviewer's positive feedback on our work. We made a great effort to integrate empirical methodology with theoretical justification, aiming at bridging an important gap in the existing learning for MILP literature, which has been predominantly empirical. We provide a response to the reviewer's questions below, and we include additional experiments as suggested by other reviewers in the general response, covering (1) interpretations of effective separators for each MILP class from our learning results, (2) applying our method to a different MILP solver, Gurobi, and (3) applying our method to a different metric (relative gap improvement under a fixed time).

> is the reported standard deviation given in percent deviation from the default or as percent variation from the median values? how often, if at all, does reconfiguration actually slow down the solver? what are the mean/average deviations from default?

The following table shows the percentage of test instances in each MILP class for which our learned configuration improves over SCIP default (% win). We observe that, while not always, our learned model does accelerate SCIP default for the majority of instances.

||Bin. Pack.|Max. Cut|Pack.|Comb. Auc.|Indep. Set|Fac. Loc.|NNV|MIPLIB|Load Balancing|
|-|-|-|-|-|-|-|-|-|-|
|% win (ours > default)|91.2%|100%|75%|98.9%|95.7%|74%|88.5%|67.5%|84%|
|standard deviation (from mean), Table 1|(34.2%)|(11.3%)|(39.3%)|(26.2%)|(27.8%)|(39.6%)|(33.9%)|(73.1%)|(20.3%)|
|deviation from median|(39.0%)|(11.6%)|(42.8%)|(29.3%)|(37.0%)|(43.5%)|(35.5%)|(78.8%)|(22.1%)|
|deviation from default|(52.1%)|(68.4%)|(44.2%)|(67.2%)|(66.6%)|(46.3%)|(45.2%)|(75.3%)|(32.2%)|

In our paper, we report the standard deviation from the mean $\sqrt{\frac{1}{N}\sum_{i=1}^{N} (\delta_i - \bar{\delta})^2}$ of each method, where $\bar{\delta} = \frac{1}{N}\sum_{i=1}^{N}\delta_i$, and the $\delta_i$'s are the relative time improvements of each method over SCIP default on $N$ MILP test instances. In the table above, we further include the deviation from the median $\sqrt{\frac{1}{N}\sum_{i=1}^{N} (\delta_i - \delta_{\mathrm{median}})^2}$ and the deviation from SCIP default $\sqrt{\frac{1}{N}\sum_{i=1}^{N} \delta_i^2}$ (since the time improvement of default from default is 0%), both for our complete method. We observe that the deviation from the median is similar to the deviation from the mean reported in Table 1, while the larger deviation from SCIP default demonstrates the ability of our method to achieve time improvement over the default parameters.

> it would be good to clarify in the main paper that the separation rounds after which configuration updates occur are specified in advance; this only becomes clear in the supplementary document, so reading the main paper, one wonders how this is decided and/or if it is part of the learning procedure to decide when to update configurations. Also, are there cases in which the MIP solver terminates before the number k of updates has been performed?

We thank the reviewer for carefully reviewing the supplementary document. We will add clarifications in the main paper to state that the separation rounds are specified in advance.
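The three deviation measures described above (from the mean, from the median, and from the default of 0%) can be sketched in a few lines of Python; this is our own illustrative helper, not the paper's code, where `deltas` holds per-instance relative time improvements:

```python
import math

def deviations(deltas):
    """Spread of per-instance relative time improvements, measured
    from the mean, from the median, and from the default (0%)."""
    n = len(deltas)
    mean = sum(deltas) / n
    s = sorted(deltas)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    from_mean = math.sqrt(sum((d - mean) ** 2 for d in deltas) / n)
    from_median = math.sqrt(sum((d - median) ** 2 for d in deltas) / n)
    from_default = math.sqrt(sum(d ** 2 for d in deltas) / n)
    return from_mean, from_median, from_default
```

When the mean and median coincide, the first two measures agree; a large `from_default` relative to `from_mean` indicates consistent improvement over the default configuration, which is the pattern reported in the table above.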
In our experiments, we set the second separator configuration ($k=2$) to occur at around 15%-25% of the average total separation rounds for each MILP class, and the third separator configuration ($k=3$ in our Ablation Table 2) at around 25%-50%, as the early solving stage (e.g. earlier in the B&B tree) may benefit more from additional configuration updates. It is an interesting suggestion to consider learning to decide when to update the configurations, but it comes with learning challenges, as the search space becomes a larger, joint space of (which separation round to configure $\times$ which configuration to choose). We leave it to future work to explore additional search space reduction techniques to enable joint learning.

While there are instances that terminate before the $k^{th}$ update has been performed, they constitute only a very small portion of all instances in our experience. Moreover, such instances likely have a short solve time (due to fewer separation rounds needed to solve the instance). The model at the $k^{th}$ configuration update can then focus on the harder instances that take longer to solve and further improve their solve time over default.

> have you tried reverting to the default configuration as the last update?

We conduct an experiment where we revert to the default SCIP configuration at two intermediate separation rounds (20 and 40) for Ecole instances (Combinatorial Auction, Independent Set, and Capacitated Facility Location). We compare the time improvement in these two scenarios with our complete method, where we apply learned configuration updates for the entire solve process. Our results (median and standard deviation) are as follows:

||Comb. Auc.|Indep. Set|Fac. Loc.|
|-|-|-|-|
|Revert to default at separation round 20|61.9% (31.3%)|39.3% (53.2%)|21.4% (40.8%)|
|Revert to default at separation round 40|62.8% (31.4%)|61.3% (90.8%)|27.4% (40.9%)|
|Ours: L2Sep|**66.2% (39.3%)**|**72.4% (27.8%)**|**29.4% (39.6%)**|

We observe that the relative time improvement increases as our learned configurations cover more separation rounds, demonstrating that our learned method is able to achieve time improvement throughout the solve process.

> which LP solver was used by SCIP to solve the relaxations? SoPlex?

> were all MIPs solved in single-thread mode?

We use the default LP solver of SCIP, which is SoPlex [1], and solve all MIPs in single-thread mode.

*[1] Gamrath, Gerald, et al. "The SCIP optimization suite 7.0." (2020).*

> typos and wording

We are very thankful that the reviewer points out typos and suggests wording changes. We will correct them in the updated paper.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses to all comments by the other reviewers and myself. Especially the additional insights summarized in the general response will be a valuable addition to the paper. A quick (?) follow-up question regarding the interpretability analysis in GR1: Besides identifying known results/what was known to work well for specific instance classes, did you observe something "unexpected" or "unexplained"? For example, did some family of cuts consistently appear in selected configurations for an instance class for which it is not known that or why these cuts would be useful? If so, this may warrant a closer (theoretical) inspection of the problem-cut pair in future work, so possibly, L2Sep could also serve as a driver of improved polyhedral understanding for some problems. Overall, I am strongly in favor of accepting this paper, and will increase my score. I very much hope that the other reviewers see the merits of this work and raise their scores at least into acceptance territory as well.
--- Reply to Comment 1.1.1: Title: Thank you for your feedback! Comment: We are very thankful to the reviewer for recommending acceptance of our work! Regarding the reviewer's follow-up question, we do observe some intriguing unexpected scenarios in the visualizations. For Independent Set (see Rebuttal PDF Fig. 2), L2Sep deactivates all separators with a frequency of 20% at the 2nd config. update, whereas all selected configurations at the 1st update activate a substantial number of separators. It is an interesting question to investigate why it is better to *deactivate all* separators for a certain subset of Independent Set instances at *later* separation rounds. For Maximum Cut, OddCycle [1, 2] and ZeroHalf [3] are known to be effective in the literature. Interestingly, none of the selected configurations activate ZeroHalf in either of the two config. updates; OddCycle is also completely deactivated at the 1st update, but is activated with a frequency of 14% at the 2nd update. Meanwhile, we observe that the Disjunctive, FlowCover, and Aggregation separators are more frequently selected. We will provide the visualization in the Appendix.

In summary, we agree with the reviewer that L2Sep could also serve as a driver of improved (theoretical) polyhedral understanding of some problems. We further believe that L2Sep can be helpful to seed investigations (empirical and theoretical) for nonstandard, newly-proposed problems (e.g. NN Verification) where few analyses exist. We thank the reviewer again for the time taken throughout the process to thoroughly review our work!

*[1] Boros, Endre, Yves Crama, and Peter L. Hammer. "Chvátal cuts and odd cycle inequalities in quadratic 0–1 optimization." SIAM Journal on Discrete Mathematics 5.2 (1992): 163-177.*

*[2] Jünger, Michael, and Sven Mallach. "Exact facetial odd-cycle separation for maximum cut and binary quadratic optimization." INFORMS Journal on Computing 33.4 (2021): 1419-1430.*

*[3] Caprara, Alberto, and Matteo Fischetti.
"{0, 1/2}-Chvátal-Gomory cuts." Mathematical Programming 74 (1996): 221-235.* --- Rebuttal 2: Title: Raise score to 8 Comment: I don't seem to be able to edit my review directly; I would raise my rating from 7 to 8.
Summary: This paper studies learning to manage separators to improve MILP solvers. Specifically, it learns a policy to determine which separators to use to generate cutting planes. The task is well formulated, and experiments on many datasets demonstrate the effectiveness of the proposed model.

Strengths:
1. The paper is well written. In particular, the problem is well formulated.
2. The paper identifies the opportunity of managing separators to improve MILP solvers.
3. Experiments on many datasets demonstrate the effectiveness.

Weaknesses: In general, I think this paper is an OK work, but it is just borderline to the NeurIPS bar, so I only give borderline accept. There have been many works that study different parts of branch-and-bound, e.g., cut selection and node selection. Replacing one of the solving stages with a learning method, which leads to improvements compared with heuristic rules, is unsurprising. However, in real applications, it would be impossible to replace every part with learned models because of the memory limitation. Whether the studied part is critical enough in branch-and-bound is an important question. Therefore, separator configuration may not be a trend in the research community, and thus the impact of the proposed method may be limited. I would give some suggestions for improving the work. First, the authors can compare the focused task, i.e., separator configuration, with other solving stages to demonstrate its significance. Or the authors can compare the resource usage to show the proposed method is lightweight enough that it can be used jointly with other methods. Or the authors can try to merge the proposed framework with existing work and conduct enough ablation studies. In summary, the authors should demonstrate the real usefulness of the proposed method beyond only its performance.

Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the insightful suggestions from the reviewer and provide our response below. We hope the reviewer may also take a look at the general comments for our additional experiments, covering (1) interpretations of effective separators for each MILP class from our learning results, (2) applying our method to a different MILP solver, Gurobi, and (3) applying our method to a different metric (relative gap improvement under a fixed time). We hope that the reviewer may consider increasing our score accordingly if the new results adequately address the concerns.

> Whether the studied part is critical enough in branch-and-bound is an important question.

Understanding which separator families are useful for each MILP class is important to the mathematical programming community [1, 2] (also, see reviewer wvgK's summary of our paper's strengths). While a few separator families are known to be effective for specific MILP classes (e.g. Clique for Independent Set and FlowCover for problems with network flow substructures [2]), this knowledge remains limited and cannot cover the wide variety of separator families and MILP classes. Despite abundant research on cut selection, there is a surprising lack of work on higher-level decisions such as configuring separators, or deciding whether to cut (B&C) or not (pure B&B) [3]. We hypothesize that this scarcity is because the abstract nature of these tasks poses challenges to acquiring useful heuristic information. Our work provides an automatic approach to detecting effective separators for each MILP class, an important step toward understanding the interactions among separators and identifying when a specific separator is useful. It also allows for instance-level specification, enabling different configurations for different MILP instances. We refer the reviewer to General Response [GR1] for visualizations and interpretations of our learning results.
*[1] Contardo, Claudio, Andrea Lodi, and Andrea Tramontani. "Cutting Planes from the Branch-and-Bound Tree: Challenges and Opportunities." INFORMS Journal on Computing 35.1 (2023).*

*[2] Dey, Santanu S., and Marco Molinaro. "Theoretical challenges towards cutting-plane selection." Mathematical Programming 170 (2018).*

*[3] Berthold, Timo, Matteo Francobaldi, and Gregor Hendel. "Learning to use local cuts." arXiv preprint arXiv:2206.11618 (2022).*

> First, the authors can compare the focused task, i.e., separator configuration, with other solving stages to demonstrate its significance. Or the authors can compare the resource usage to show the proposed method is lightweight enough so that it can be used jointly with other methods.

**The Immediate and Multi-step Effect of Separator Configuration in the B&C Process:**

1. immediate: some separators take a long time to run, but generate mostly low-quality cuts that are ultimately never selected by the downstream cut selector. Deactivating those separators leads to an immediate time improvement by reducing the time to generate the cut pool.
2. multi-step: an improved separator configuration can tighten the dual bound faster through better-selected cuts; it may also accelerate other B&C components such as branching (e.g. strong branching requires solving many child LPs and hence may benefit from tighter dual bounds).

The following table presents the total solve time and the total separator execution time for our complete method L2Sep and SCIP default on several MILP classes. We report the median and standard deviation evaluated on 100 instances for each class. L2Sep significantly reduces the total separator execution time. Upon closer examination, we find L2Sep adeptly deactivates expensive yet ineffective separators while activating effective ones.

||Total Solve Time (L2Sep)|Total Separator Execution Time (L2Sep)|Total Solve Time (SCIP Default)|Total Separator Execution Time (SCIP Default)|
|-|-|-|-|-|
|Comb. Auc.|**0.65s (2.36s)**|**0.02s (0.054s)**|3.01s (4.60s)|1.39s (1.14s)|
|Indep. Set|**3.81s (116.42s)**|**0.35s (9.94s)**|13.16s (120.89s)|7.38s (37.94s)|
|NNV|**20.75s (18.56s)**|**0.16s (0.13s)**|34.76s (25.05s)|6.58s (8.05s)|

In Fig. 3 of the rebuttal PDF, we plot the primal-dual bound curves (median and standard error) of L2Sep and SCIP default on Independent Set and NN Verification. The significantly faster dual bound convergence of L2Sep demonstrates the multi-step effect of improved separator configurations. We further summarize the synergistic interaction effects between separator selection and other B&C components (branching, dual LP) in the next table. Notably, even though our method does not modify branching, the branching solve time is reduced.

||Strong Branching Time|Pseudocost Branching Time|Dual LP Time|
|-|-|-|-|
|Comb. Auc. (L2Sep)|0.31s (1.15s)|0.35s (1.46s)|0.08s (0.58s)|
|Comb. Auc. (SCIP Default)|0.41s (1.79s)|0.45s (2.17s)|0.18s (0.83s)|
|Indep. Set (L2Sep)|2.82s (21.44s)|3.01s (22.29s)|0.3s (73.95s)|
|Indep. Set (SCIP Default)|3.92s (19.68s)|4.6s (21.67s)|1.14s (55.51s)|
|NNV (L2Sep)|6.56s (4.06s)|8.18s (4.81s)|3.67s (6.52s)|
|NNV (SCIP Default)|8.31s (5.55s)|9.69s (6.13s)|5.07s (5.81s)|

**Resource usage of learned separator configuration vs. other B&C parts (cut selection, branching):** At inference time, we require only two model calls (two separator configurations) during each MILP solve. In contrast, learning to branch or cut selection requires model calls at a higher frequency, such as at each node of the B&C tree (branching) or at each separation round (cut selection). Moreover, each model call for cut selection requires evaluating each cut in the cutpool ($\approx 10^2-10^3$), whereas each of our model calls is more efficient due to the reduced number of separator configurations to evaluate ($\approx 20-30$, enabled by our restricted space). As such, our method is lightweight enough to be integrated during the solve process.
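As a rough back-of-envelope illustration of the inference-cost argument above (the per-solve counts below are our own illustrative assumptions drawn from the ranges quoted in this response, not measurements):

```python
def forward_passes(decision_points, candidates_per_decision):
    """Model forward passes per MILP solve: each decision point
    scores every candidate once."""
    return decision_points * candidates_per_decision

# Separator configuration (this work): 2 configuration updates per solve,
# each scoring ~25 candidate configurations from the restricted subset.
config_calls = forward_passes(2, 25)

# Cut selection: one decision per separation round (assume ~50 rounds),
# each scoring ~500 cuts in the cutpool.
cut_sel_calls = forward_passes(50, 500)

print(config_calls, cut_sel_calls)  # 50 vs 25000
```

Under these assumed counts, configuring separators needs orders of magnitude fewer forward passes per solve than per-round cut scoring, which is the sense in which the method is "lightweight".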
--- Rebuttal Comment 1.1: Title: Thanks for the response, and sorry for the late reply. Increasing my score from 5 to 6. Comment: I appreciate the authors' efforts in responding to my concerns, and I'm sorry for the late reply. I have read the authors' response and the other reviewers' comments. I can understand the importance of the task better now, which has addressed my main concern. I have raised my score from 5 to 6. Still, though the authors provide a comparison of efficiency between the proposed method and other B&C parts (e.g., cut selection, branching), I think the paper will be stronger if direct comparisons of end-to-end performance can be conducted.

---

Reply to Comment 1.1.1: Title: Thank you! Please see the authors' further response. Comment: We really appreciate that the reviewer took a careful look at our rebuttal and the other reviewers' comments. Thank you for taking the time to provide valuable feedback and revise the assessment! We provide our further responses regarding "the direct comparison of end-to-end performances" below. If our answers do not align with the reviewer's intention, we would appreciate it if the reviewer could explain their suggestion in more detail. The bottom line is that subtle differences in the emerging related works result in challenges in directly comparing without adapting and extending implementations (it is not as easy as re-running the authors' code), not to mention that some implementations are not available. We thus offer comparisons based on reported improvement metrics for overlapping benchmarks.

**What we are able to say about performance comparison:**

- **Cutting:** In Sec. 4.3 of our main paper, we contextualize our performance for separator configuration with prior cut selection works on comparable datasets, where L2Sep achieves a 37.5% time improvement on NNV and 12.9% on MIPLIB (a larger subset), whereas prior cut selection works report 11.67% on NNV and 3% & 1% on MIPLIB (two smaller subsets).
- **Branching:** A recent work [4] compares a few learned branching rules with SCIP default on the Ecole datasets, and reports best time improvements of 36.4% (Comb. Auc.), 34.1% (Indep. Set), and 57.4% (Fac. Loc.) over SCIP default (see Table 2 in their Appendix). In contrast, our method L2Sep achieves 66.2% (Comb. Auc.), 72.4% (Indep. Set), and 29.4% (Fac. Loc.) on larger and more heterogeneous Ecole instances.

**Challenges in direct comparisons:** It is challenging to directly compare with prior learning works due to differences in experimental setup. For example, [1, 2] focus on comparing different cutting plane selection strategies (including learning) in a synthetic environment that considers pure cutting plane iterations (without branch-and-bound) on Tang et al. instances. That is, they do not consider improvement relative to a full B&C solver (e.g., SCIP). Another recent work [3] is restricted to solving only the root node of B&C. In addition, prior works also consider different performance metrics (e.g. reversed integrality-gap-closed integral), since they do not fully solve the B&C. Our work compares with full B&C solvers (SCIP and Gurobi), solves the full B&C tree, and considers the performance metrics of relative time or optimality gap improvement. The reviewer's comment does, however, highlight an emerging research gap: to unify and rigorously relate the emerging works in this area! We hope our response aligns with the reviewer's intended suggestion. We would like to thank the reviewer again for their suggestions during the rebuttal period!

*[1] Tang, Yunhao, Shipra Agrawal, and Yuri Faenza. "Reinforcement learning for integer programming: Learning to cut." International Conference on Machine Learning. PMLR, 2020.*

*[2] Paulus, Max B., et al. "Learning to cut by looking ahead: Cutting plane selection via imitation learning." International Conference on Machine Learning. PMLR, 2022.*

*[3] Wang, Zhihai, et al.
"Learning Cut Selection for Mixed-Integer Linear Programming via Hierarchical Sequence Model." The Eleventh International Conference on Learning Representations, 2023.* *[4] Scavuzzo, Lara, et al. "Learning to branch with tree MDPs." Advances in Neural Information Processing Systems 35 (2022): 18514-18526.*
Summary: This paper uses machine learning to decide which families of cutting planes should be applied in each of a finite number of rounds when a discrete mathematical optimization problem is solved by an MILP solver. The authors conduct experiments on the SCIP solver, in which they show that their selection of separators (generators of different families of cutting planes) is able to solve problems faster than the default configurations of SCIP.

Strengths: Within the mathematical optimization community, it is known that many people have looked at this problem but few obtained good results, which shows in the limited number of direct references cited by the authors. The results are certainly motivating, and the authors managed to analyze their work across a representative number of datasets.

***** Following the rebuttal, I still have concerns about the significance of the work, which are similar to the ones presented in reviewer wvgK's review. For that reason, I cannot be more enthusiastic than a borderline accept for this paper.

Weaknesses: I tend to consider the selection of separators a special case of what was done in prior work, in which the cuts themselves were selected. By using a setting in which their work is not comparable to prior ML studies on this topic, I am left wondering if what is proposed in this paper indeed leads to a better approach than the ones already known. The paper explains very little about MILP, to the point that it feels like the application is an afterthought. While the appendix generously makes up for that, a reader unfamiliar with MILP might read the entire paper and not understand much about the application considered. In fact, for a span of approximately 3 pages (Line 56 on Page 2 to Line 175 on Page 5) there is very little that is specific to the application to MILP solvers. In great part, that is because the language used is very different.
I would not say that this is an issue of using ML instead of MILP terminology, because I also got lost with the abstractions used, such as when the authors say "single configuration update" to mean that the same separators would be used in every round. This abstractness of the language left me with many questions about what exactly was done (see Questions). For someone with a greater interest in MILP, such as myself, I find it difficult to translate what the authors did to the application. As a consequence, I would have a hard time trying to reproduce their work if I wanted to.

Technical Quality: 3 good Clarity: 1 poor Questions for Authors:
1) Can you compare your work with prior cut selection approaches?
2) In plain terms, what exactly is the subset of configurations A and how is it pre-selected?
3) What is the relevance of Propositions 1 and 2 in the context of MILP?
4) How exactly are the separator nodes S used in the representation of the instances?
5) How exactly do you consider instances that neither SCIP nor SCIP with your separator selection was able to solve?

Figure 1: The letter b is used both as the RHS of the set of constraints and as the RHS of the next cut generated; also, using A and a with different meanings (constraint LHS and cut LHS) does not seem advisable.

Equation 2: Why is the term on the left repeated at the end of the equation?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: Although the authors do not explicitly acknowledge this, their focus on runtime for solving problems to optimality means that their approach is of little help precisely in the case where it would be needed the most: problems in those benchmarks for which the provably optimal solution is not known. It is also unclear what happens in the case of the problems that do not finish running on time: my guess is that a timeout is counted equally as bad, although it would make more sense to look at the best solution found or the remaining optimality gap. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for valuable input on paper presentation and other insightful questions. We provide our responses next, and present our new experiments in the general comments, covering (1) analysis of effective separators for each MILP class from our learning results, and applying our method to (2) a different MILP solver Gurobi, and (3) a different metric (relative gap improvement). If appropriate, we encourage the reviewer to increase our review score. > The paper explains very little about MILP, to the point that it feels like the application is an afterthought As the camera ready permits one more page, we plan to present more details on MILPs (which are currently in the Appendix) and provide additional clarification on other terminologies. > Q1. Can you compare your work with prior cut selection approaches? **Learning Method differences:** prior cut selection learning methods do not consider reducing the dimensionality of the action space. However, our separator configuration space with a size $2^M$ is challenging to learn (see ablation “No Restr” in Table 2). This motivates our proposed data-driven configuration space restriction algorithm. **Result Comparison:** in Main Paper Sec. 4.3, we contextualize our time improvement on comparable datasets (NNV and MIPLIB); our method achieves larger time improvements than prior cut selection works. **Task differences:** Separator configuration is applied for cut generation, which happens earlier than cut selection (See Main Paper Fig. 1 caption). At each separation round, activated separators first generate cuts to a cutpool; then cut selection algorithms choose cuts from the cutpool. The two tasks are mostly orthogonal and can be combined in future work. > Q2. In plain terms, what exactly is the subset of configurations A and how is it pre-selected? 
Each element of the subset A is a combination of separators (e.g., Gomory, Clique) to activate; we call this a configuration, and it can be thought of as a binary vector whose length is the number of separators (17 for SCIP). We select A from the full configuration space by choosing configurations that are effective without being redundant. To do so, we use a small training set for each MILP class (100 instances) and semi-randomly generate and evaluate around 2000 configurations to select the target subset A of size around 15-30. The specific strategy leverages submodularity, i.e., diminishing marginal returns on performance with more configurations, which justifies the use of a greedy strategy to select the configuration subset A.

> Q3. What is the relevance of Propositions 1 and 2 in the context of MILP?

Our learning method first constructs a subset A of high-quality separator configurations for the MILP. We then learn a network $\tilde{f}_A$ to select a configuration from A. Prop. 1 characterizes the test performance of $\tilde{f}_A$ as a function of the subset A, which sheds light on how to construct a good A: an ideal subset A allows $\tilde{f}_A$ to have (1) high training performance, obtained when **some** configuration in A achieves good performance for *any* MILP instance in a training set, and (2) a low generalization gap, achieved when **each** configuration in A has good performance *across* MILP instances in a test set. In practice, we approximate the generalization gap using the training set (see empirical justification in Appendix Fig. 5). Prop. 2 formalizes the diminishing marginal returns (submodularity) of $\tilde{f}_A$'s training performance with respect to A, which enables a greedy algorithm to iteratively construct A. Based on Prop. 1, we further augment the greedy algorithm with a filtering criterion to improve the generalization gap by eliminating ineffective configurations from A.

> Q4. How exactly are the separator nodes S used in the representation of the instances?

Our network takes a separator configuration-MILP instance pair as input and predicts the time improvement of applying the configuration to the instance. We represent each configuration by M separator nodes; each node has (M+1)-dimensional input features, representing whether the separator is activated (the first dimension) and which separator it is (a one-hot M-dimensional vector). Our GNN input graph connects each separator node with all variable and constraint nodes; the input features of the latter nodes contain the MILP instance information (see Appendix A5.2).

> Q5. How exactly do you consider instances that neither SCIP nor SCIP with your separator selection was able to solve?

> In the case of the problems that do not finish running on time, it would make more sense to look for the best solution found or the remaining optimality gap.

For larger MILP classes, we exclude instances that cannot be solved by SCIP default to a specific optimality gap within a predefined time limit, ensuring most instances are retained (optimality at 120s for NNV, and a gap of 10% at 300s for MIPLIB and Load Balancing, see Appendix A.6.5); 26% of NNV instances are excluded, while all instances from Load Balancing are retained. Our experiments then consider the time improvement of all instances to reach the respective gaps. In general, we can increase the gap threshold to allow learning on harder MILP classes.

We conduct a new experiment where we change the objective to the relative gap improvement, which is amenable to instances that cannot be solved to optimality under a given time limit. For example, on Load Balancing, where SCIP default has an average gap of 32% within 16s, L2Sep is able to reduce the average gap to 21% (34% improvement). We refer the reviewer to General Response [GR3] for details.

> Eq. 2: Why is the term on the left repeated at the end of the equation?
The test performance $\Delta(\tilde{f}_A)$ on an unseen test set decomposes as $\Delta(\tilde{f}_A) = \hat{\Delta}(\tilde{f}_A) - (\hat{\Delta}(\tilde{f}_A) - \Delta(\tilde{f}_A))$, i.e., into (1) the training-set performance $\hat{\Delta}(\tilde{f}_A)$ and (2) the generalization gap $\hat{\Delta}(\tilde{f}_A) - \Delta(\tilde{f}_A)$; the term repeats because it is part of the gap's definition. --- Rebuttal Comment 1.1: Comment: I appreciate the response from the authors. I did not see a comment regarding Figure 1, but I hope that this gets corrected. In revising my score, I am counting on the word of the authors that a final version would be more informative about the application (MILP), and I also second reviewer wvgK's recommendation that the paper title should be changed to better reflect what the paper does. --- Reply to Comment 1.1.1: Title: Authors' Further Response to Reviewer LDTP Comment: We're thankful that the reviewer took the time to revise their assessment! We will incorporate more information about MILP and make the title change in the final version. We did not provide a comment on Fig. 1 due to the character limit of our initial response, but we will update a and b to $\nu$ and $\omega$ in the Figure (e.g. $\nu_{11}^\intercal x \leq \omega_{11}$). Thank you for your suggestions! Regarding the reviewer’s concern about the significance of our work (similar to the ones in reviewer wvgK's review), we invite the reviewer to elaborate on their concern more explicitly, so that we can use this discussion period to address it further. As we are delighted that our response addressed reviewer wvgK’s concern, we thought of sending you a summary of the significance of our work. We hope this message will help reviewer LDTP in contextualizing our work’s significance. - **Task**: we agree with reviewer TfAQ and wvgK’s responses that it is important to understand “when to cut”, which is much less explored (though equally crucial) than the “how to cut” question.
Separator configuration and the associated cut generation play a vital role in the B&C process; properly configuring separators can accelerate MILP solvers by 25%-70% (also, see our response to reviewer JiCg *“The Immediate and Multi-step Effect of Separator Configuration in the B&C Process”*). We are excited about our work’s potential to inspire more future studies on this “when to cut” question. - **Interpretability**: from [GR1], our learning method L2Sep automatically discovers known facts from the literature regarding the efficacy of different separators for each MILP class (facts that the existing literature took decades of effort to discover). L2Sep can potentially speed up the knowledge discovery process by suggesting efficient MILP class-separator family pairs for future theoretical inspection. - **Method**: different from prior cut selection works, we propose a data-driven subspace restriction algorithm followed by a learning method to configure separators. Our work integrates empirical methodology with theoretical justification, bridging gaps in the existing learning-for-MILP literature, which has been predominantly empirical; our theory directly informs our empirical subspace restriction algorithm, whereas prior theoretical works on parameter configuration cannot (see our response to reviewer wvgK for details). - **Performance**: our learning method L2Sep for separator configuration is able to accelerate multiple MILP solvers (SCIP, Gurobi [GR2]) under different objectives (time improvement, gap improvement [GR3]) on various datasets (standard, large-scale [our paper]), demonstrating the effectiveness of our method in accelerating MILP solvers. We would like to thank the reviewer again! We really appreciate your detailed feedback on the paper presentation and your other insightful suggestions.
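The greedy, submodularity-based subset construction with filtering described in the Q2/Q3 responses above can be sketched in a few lines. This is only an illustration, not the authors' implementation: the performance-matrix layout, the quantile-based filtering rule, and all names are our assumptions.

```python
import numpy as np

def greedy_configuration_subset(perf, k, filter_quantile=0.25):
    """Greedily pick k configuration indices from a performance matrix.

    perf[i, j] is the (hypothetical) performance improvement of sampled
    configuration j on training instance i. The training performance of a
    subset is the mean over instances of the best configuration in it;
    this quantity is monotone submodular in the subset, which is what
    justifies a greedy construction. A simple filtering step first drops
    configurations whose average cross-instance performance is poor,
    mirroring the generalization-gap criterion described above.
    """
    perf = np.asarray(perf, dtype=float)
    n_inst, _ = perf.shape
    mean_perf = perf.mean(axis=0)
    # Filtering criterion (our assumption: a quantile threshold).
    candidates = set(np.flatnonzero(mean_perf >= np.quantile(mean_perf, filter_quantile)))
    selected = []
    best = np.full(n_inst, -np.inf)  # best improvement seen so far, per instance
    for _ in range(min(k, len(candidates))):
        # Marginal gain of adding each remaining candidate.
        gains = {j: np.maximum(best, perf[:, j]).mean() for j in candidates}
        j_star = max(gains, key=gains.get)
        selected.append(j_star)
        candidates.remove(j_star)
        best = np.maximum(best, perf[:, j_star])
    return selected
```

With a 2000-configuration by 100-instance matrix, running this with k around 15-30 would yield a restricted subspace A in the spirit of the responses above.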
Rebuttal 1: Rebuttal: # General Responses to All Reviewers We thank each of the reviewers for their detailed and constructive comments. We provide the following additional interpretations and new experimental results in response to the reviewers’ suggestions. $\ $ ### [GR1] Interpretation analysis: The learned model recovers known facts from the literature regarding the effectiveness of different separators Reviewer wvgK inquired about interpretations of our learned model to understand the efficacy of different separators for different problem instances. In Rebuttal PDF Fig. 1 and 2 (see figure captions for explanations of the figures), we provide an investigation, summarize the findings below, and will include additional analysis for other MILP classes in the appendix. **Bin Packing:** It is known that instances with few bins approximate the Knapsack problem (for which Clique cuts are known to be effective [1]), and that instances with many bins approximate Bipartite Matching (for which Flowcover cuts can be useful [2]). We analyzed the separators activated by our learned model as we gradually decreased the number of bins, and observed that the prevalence of selected Clique and Flowcover cuts increased and decreased, respectively. This is illustrated in Fig. 1, right column. **Other MILP classes:** We provide visualizations for Independent Set and MIPLIB in Fig. 2. Clique is known as an effective separator for Independent Set [3]; L2Sep automatically recovers this fact by frequently selecting configurations that activate Clique. Meanwhile, we see that L2Sep discovers the instance heterogeneity of MIPLIB, resulting in a more dispersed distribution of selected configurations. *[1] Boland, Natashia, et al. "Clique-based facets for the precedence constrained knapsack problem." Mathematical Programming 133 (2012).* *[2] Van Vyve, Mathieu. "Fixed-charge transportation on a path: Linear programming formulations." International Conference on Integer Programming and Combinatorial Optimization. 
(2011).* *[3] Dey, Santanu S., and Marco Molinaro. "Theoretical challenges towards cutting-plane selection." Mathematical Programming 170 (2018).* $\ $ ### [GR2] Learning-to-separate effectively accelerates state-of-the-art MILP solver Gurobi As inquired by Reviewer wvgK, we replicated our method L2Sep with Gurobi (containing a larger set of 21 separators). As Gurobi is closed source, we cannot change configurations after the solving process starts$^1$, so we only consider one stage of separator configuration. To our delight, L2Sep achieves significant relative time improvements over the Gurobi default, with gains ranging from 12% to 56%. This result confirms the efficacy of L2Sep as an automatic instance-aware separator configuration method. Similar to our results for SCIP, we observe that (1) our two heuristic sub-components (See Section 4.1) achieve impressive speedup from Gurobi default, indicating the high quality of our restricted configuration subspace, and (2) our complete method L2Sep improves the performance further, highlighting the benefit of learning instance-aware configurations. Our results (median and standard deviation) are as follows: ||Method|Max. Cut|Pack.|Comb. Auc.|Fac. Loc.| |-|-|-|-|-|-| ||Default Time (s)|0.087 (0.051)|4.048 (3.216)|1.687 (3.596)|27.872 (14.733)| |Heuristic Baseline|Gurobi Default|0%|0%|0%|0%| ||Random|18.6% (49.0%)|15.5% (28.2%)|-10.7% (69.1%)|13.4% (46.0%)| |Ours Heuristic Variants|Inst. Agnostic Configuration|35.1% (35.8%)|22.9% (39.4%)|3.1% (65.3%)|40.6% (48.1%)| ||Random within Rest. Subspace|37.3% (48.0%)|24.3% (32.2%)|5.1% (84.2%)|40.2% (46.8%)| |Ours Learned|L2Sep|**45.4% (38.4%)**|**30.6% (29.6%)**|**12.6% (63.5%)**|**56.7% (35.7%)**| || *$^1$ Gurobi official documentation states “Parameters control the operation of the Gurobi solvers. 
They must be modified before the optimization begins.”* $\ $ ### [GR3] Learning-to-separate is effective under an alternative objective (relative gap improvement) Inspired by reviewer LDTP’s comments, we analyzed an alternative objective of the relative gap improvement under a fixed time limit. Let $g_0(x)$ and $g_\pi(x)$ be the primal-dual gaps of instance $x$ using the SCIP default and another configuration strategy $\pi(x)$ under a fixed time limit $T$. We define the relative gap improvement as $\delta_g(\pi(x), x) := (g_0(x) - g_\pi(x)) / (\max(g_0(x), g_\pi(x)) + \epsilon)$. We choose this denominator to avoid division by zero when the instance is solved to optimality. In the table below, we find that L2Sep achieves a 15%-68% relative gap *improvement* over SCIP default. Specifically, the table presents the relative gap improvement (mean and standard deviation) of each method over SCIP default, along with the fixed time limit for various MILP classes (mostly around 50% of the median SCIP default solve time), and the absolute gap of SCIP default at the time limit. In Rebuttal PDF Fig. 3 (right two columns), we further plot histograms of the gap distribution on the entire dataset for L2Sep and SCIP default, where we observe that L2Sep effectively shifts the *entire gap distribution* to a lower range. These results demonstrate the effectiveness of our method across different objectives, and its ability to improve primal-dual gaps for instances that cannot be solved to optimality within a given time limit. ||Method|Pack.|Comb. Auc.|Indep. Set|NNV|Load Balancing| |-|-|-|-|-|-|-| ||Time Limit (s)|4.4|1.4|8.2|16|16| ||Default Gap|9.1e-4 (9.3e-4)|0.060 (0.098)|0.057 (0.059)|0.50 (0.80)|0.32 (0.13)| |Heuristic Baseline|SCIP Default|0%|0%|0%|0%|0%| ||Random|-37.1% (41.7%)|-27.3% (69.1%)|-23.2% (44.1%)|-40.3% (72.8%)|-48.0% (35.8%)| |Ours Heuristic Variants|Inst. Agnostic Configuration|11.9% (38.4%)|52.4% (45.3%)|23.5% (34.5%)|33.6% (72.1%)|14.0% (18.9%)| ||Random within Rest. 
Subspace|10.1% (42.5%)|54.1% (45.1%)|21.6% (33.7%)|24.8% (75.9%)|9.5% (17.6%)| |Ours Learned|L2Sep|**15.4% (40.0%)**|**68.8% (38.2%)**|**29.6% (34.7%)**|**36.0% (68.2%)**|**34.2% (27.5%)**| || Pdf: /pdf/c63620a89f59a1c2f203cec7284e07ced057881a.pdf
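The relative gap improvement objective defined in [GR3] is a one-liner; the sketch below restates it directly (the function name is ours):

```python
def relative_gap_improvement(g_default, g_policy, eps=1e-9):
    # delta_g(pi(x), x) = (g0(x) - g_pi(x)) / (max(g0(x), g_pi(x)) + eps);
    # eps keeps the denominator nonzero when an instance is solved to
    # optimality under both strategies (both gaps are zero).
    return (g_default - g_policy) / (max(g_default, g_policy) + eps)
```

For instance, the Load Balancing gaps quoted above (0.32 for SCIP default, 0.21 for L2Sep) give roughly a 34% relative improvement, and an instance both strategies solve to optimality contributes 0.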
NeurIPS_2023_submissions_huggingface
2023
Learning From Biased Soft Labels
Accept (poster)
Summary: The authors analyse the learnability properties of (biased) soft labels, e.g., originating from a teacher model in a student-teacher setup, with respect to classifier consistency and ERM learnability. To this end, two indicator measures of the quality of the proxy labels are suggested, which are used in a theoretical analysis to derive bounds about the aforementioned quality dimensions. Moreover, a heuristic loss for training skillful but bad teachers is developed. Strengths: - Investigates a highly relevant matter - Technically sound and valid theoretical results - Covers multiple special cases of weakly-supervised learning as a practically relevant problem setup Weaknesses: Major: - I find the distinction between incomplete supervision and partial label learning confusing. Typically, incomplete supervision and partial label learning are more or less the same thing. What the authors here refer to in the case of incomplete supervision is semi-supervised learning, which is a special case of partial label learning, namely that you can either observe a precise (i.e., unambiguous) label, or no label at all resp. the complete target space. In order to preserve consistency with related literature on this (e.g., as in your reference [58]), I would recommend sticking to this ontology. - The empirical analysis of SBTs does not contain any baseline. Overall, it is hard to judge the appropriateness of the developed indicators and the SBT loss proposals, perhaps the experimental setup can be overhauled. - It is hard to assess the looseness resp. tightness of the bound in Theorem 2. I would love to see the authors elaborating on this matter, e.g., by putting individual term components in a context, comparing it to “classical” bounds in multi-class learning, potentially with respect to label noise. Minor: - The page limit for submissions was 9 pages, but there is content on page 10. - I think there is a missing $\max$ (or a different aggregator) in Eq. (1), right? 
- Also, “ambiguity degree” is a term that is already being used in the PLL literature, as also indicated by referring to [5]. This is not precisely an equivalent ambiguity degree formulation, so it should be distinguished. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - How did you determine the hyperparameters being used in the experiments? Did you repeat the experiments multiple times with different seeds? How about the standard deviations? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We are truly grateful for your comprehensive review and the positive recognition of our work's relevance, technical soundness, and valid theoretical results. Your feedback provides us with valuable insights that will undoubtedly enhance the quality of our paper. We would like to address the concerns you raised: **Q1. Incomplete supervision and partial label learning.** * A: Incomplete supervision (semi-supervised learning) is training with labeled data and unlabeled data (i.e., the dataset is partially labeled). In partial label learning, each instance is ambiguously labeled with a label set that contains the ground-truth label (i.e., each instance is partially labeled). There is no direct correspondence between incomplete supervision and partial label learning. **Q2. The baseline of the empirical analysis of SBTs.** * A: Our primary contribution is the discovery that large-biased soft labels can also produce competent student models, together with explanations and theoretical proofs for the underlying phenomenon. The algorithm we designed (see Section 5.1) primarily serves to validate our theory. To the best of our knowledge, this phenomenon hasn't been studied previously, which is why we did not design comparative experiments. **Q3. The looseness resp. tightness of the bound in Theorem 2.** * A: Suppose the Natarajan dimension of the hypothesis space $\mathcal{H}$ is $d_\mathcal{H}$. In multi-class learning, a classic sample complexity bound for ERM learners is $\mathcal{O}(\frac{d_{\mathcal{H}} \log \frac{1}{\varepsilon} + \log \frac{1}{\delta} }{\varepsilon})$ [1]. This bound is based on a fully supervised scenario, whereas weakly supervised problems introduce a great deal more complexity. 
In this context, we refer to a theory on partial label learning [2] and describe its bound in terms of the proposed indicators: $\mathcal{O}(\frac{d_{\mathcal{H}} \log d_{\mathcal{H}} + \log \frac{1}{\theta \varepsilon} + \log \frac{1}{\delta} }{\theta \varepsilon})$, where $\theta = \log \frac{2}{1+\gamma}$. The bound in Theorem 2 (ours) is $\mathcal{O}(\frac{d_{\mathcal{H}} \log d_{\mathcal{H}} + \log \frac{1}{\theta \varepsilon} + \log \frac{1}{\delta} }{\theta \varepsilon})$, where $\theta = \log \frac{2(1-\eta)}{1-\eta+\gamma}$. The bound in Theorem 2 is of the same order of magnitude as that in [2]; the difference between the two lies in the value of $\theta$. When the unreliability degree $\eta>0$, more samples are required, increasing almost in proportion to $\frac{1}{\theta}$. >[1] Ben-David, Shai, Nicolo Cesa-Bianchi, and Philip M. Long. "Characterizations of learnability for classes of {0, …, n}-valued functions." Proceedings of the Fifth Annual Workshop on Computational Learning Theory. 1992. >[2] Liu, Liping, and Thomas Dietterich. "Learnability of the superset label learning problem." International Conference on Machine Learning. PMLR, 2014. **Q4. Exceeding the page limit.** * A: We sincerely apologize for this oversight. We will make the necessary adjustments to condense the length of our paper. **Q5. Is there a missing $\max$ (or a different aggregator) in Eq. (1)?** * A: Thank you for your kind reminder. We have revisited and reflected upon the definition of $\eta$. Aggregation based on a max over samples is better, with $\eta = \max_{(\boldsymbol{x}, y) \sim \mathcal{X} \times \mathcal{Y}} \operatorname{Pr}(y \notin \Omega_k(f(x)))$. The form in Eq. (1) implies that all samples have the same unreliability degree, which is correct but more stringent. **Q6. The term “ambiguity degree” in the PLL literature should be distinguished.** * A: As explained in lines 135-136, the ambiguity degree is inspired by PLL and extends it to soft labels. 
In fact, if PLL is transformed into the form of soft labels, the two notions of ambiguity degree are identical. To be more precise, the ambiguity degree in this paper refers to the ambiguity degree of soft labels. **Q7. How to choose hyperparameters?** * A: We have provided a detailed description of our experimental setup in Appendix A.7. Additionally, we've outlined the range and rationale behind our hyperparameter design in Appendix A.9. **Q8. Repeat the experiments multiple times with different seeds.** * A: Due to the limited time for the rebuttal, we replicated our experiments on CIFAR10/CIFAR100 using three different seeds. In each experiment, while using the same teacher, we varied the seed during student training. As a result, we observed a standard deviation of 1.113 in accuracy on CIFAR10 and 1.215 on CIFAR100. Your constructive feedback is instrumental in refining our paper, and we are committed to making the necessary improvements. Once again, thank you for your valuable insights and for considering our paper for acceptance. --- Rebuttal Comment 1.1: Comment: Thanks for your efforts and the thorough response. I am now more certain about going for an accept, which is why I increased my score. One last remark on the terms "incomplete supervision" vs. "partial labels": Pointing again to the reference [58], I find it more consistent with classical weakly-supervised learning (WSL) ontologies when "incomplete supervision" is an abstract term for learning settings where not all labels are unambiguously labeled. E.g., in classification problems with a target space $\mathcal{Y}$, one wouldn't necessarily observe only labels $y \in \mathcal{Y}$, but *at least* one instance with a label $Y \subseteq \mathcal{Y}$ with $|Y|>1$. 
Semi-supervised learning with a labeled and unlabeled split would then refer to instances labeled with a single $y \in \mathcal{Y}$ (labeled split) and $Y=\mathcal{Y}$ (unlabeled), i.e., it considers the extreme case of observing only deterministic and agnostic "partial" labels. Partial label learning in general is more abstract in not specifying how the partial labels $\subseteq \mathcal{Y}$ are observed in the data, but typically refer to "mixed" cases where we observe something in between the two extremes of $\mathcal{Y}$ and $y$. Incomplete supervision would merely refer to this more abstract term, at least from the point of how it is used within the WSL community. But that is more of a minor remark. --- Reply to Comment 1.1.1: Comment: We are truly appreciative that our response has garnered your approval, and we would like to express our gratitude once again for the elevated rating you have provided. Your support serves as a significant source of encouragement for our work. The lack of clarity about "incomplete supervision" and "partial labels" in our paper led to your misunderstanding. We take full responsibility for this oversight, and it is not indicative of any shortcomings on your part. The term "partial" indeed has the potential for ambiguity. To rectify this, we will include further elucidation of both concepts within the main body of the text. Thank you for your insightful feedback and understanding.
Summary: This paper studies the effectiveness of biased soft labels in knowledge distillation and weakly-supervised learning. The paper introduces two indicators to measure the effectiveness of soft labels, and proposes moderate conditions to ensure that biased soft label learning is classifier-consistent and ERM learnable. The paper also presents a heuristic method to train skillful but bad teachers, and shows that they can teach students to achieve high accuracy on CIFAR-10/100. The paper applies the theoretical framework to three weakly-supervised learning paradigms, and validates the indicators with experiments. Strengths: 1. The paper is well-written and a pleasure to read. 2. The paper provides thorough theoretical analysis for three different weak supervision settings. Weaknesses: 1. The paper only conducts experiments on CIFAR-10/100 datasets, and does not provide experiments on larger and mainstream datasets, such as ImageNet. The paper also does not compare the proposed method with other state-of-the-art weakly-supervised methods, especially those based on knowledge distillation. 2. The paper is suggested to provide more visualizations of the prediction results (class probabilities) of large-biased soft labels (generated by SBTs) and good students. 3. The paper does not provide experiments on different backbones, and cannot demonstrate the effectiveness of the proposed indicators and method on different architectures. The paper also does not discuss how the choice of the backbone affects the performance and robustness of the method. 4. The paper exceeds the page limit. The paper is suggested to be shortened to meet the page limit requirement. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
In the introduction, the paper claims that “Empirical Risk Minimization (ERM) learners’ performance can generalize to the entire data distribution.” However, previous studies have pointed out the drawbacks of ERM, such as (1) neural networks trained with ERM change their predictions drastically when evaluated on examples just outside the training distribution [1], also known as adversarial examples; (2) ERM allows large neural networks to memorize (instead of generalize from) the training data even in the presence of strong regularization, or in classification problems where the labels are assigned at random [1]. In general, using ERM for optimization may affect the generalization ability of the model. Does this contradict the conclusion of this paper? Does the conclusion of this paper have any prior studies or experiments to support it? [1] Zhang, Hongyi, et al. "mixup: Beyond empirical risk minimization." *arXiv preprint arXiv:1710.09412* (2017). 2. The paper mentioned “skillful but bad teachers”. How to define the skillfulness of the bad teachers? 3. As you mentioned in Section 5.3, is your main contribution the two methods of evaluating soft labels (unreliability degree and ambiguity degree), rather than the heuristic method of “bad teacher can teach good student”? 4. The conclusion that “the accuracy of the students decrease when unreliability degree and ambiguity degree increase” does not seem to be obvious in Figure 2(b). Also, in Figures 3(a) and 3(b), the ambiguity degree γ does not change much as the student accuracy increases. The same situation also occurs in Figure 5 in the appendix, which seems not to support the effectiveness of the ambiguity degree γ indicator in these weakly-supervised settings. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Some of the paper’s experimental results show that the two indicators proposed by the paper cannot fully characterize the effectiveness of soft labels in all weakly-supervised settings. For example, under some of the weakly-supervised settings, the paper shows that the indicators are not consistent with the performance of the student model. The paper does not provide sufficient explanation or analysis for this phenomenon, and does not discuss how to improve or modify the indicators to better capture the effectiveness of soft labels. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, **Q1. Experiments on Tiny-ImageNet.** - A: We have added experiments on Tiny-ImageNet, which can be found in the PDF attached to the rebuttal. **Q2. Why not compare the proposed method with other state-of-the-art weakly-supervised methods?** - A: Our primary contribution is the discovery that large-biased soft labels can also produce competent student models, together with explanations and theoretical proofs for the underlying phenomenon. The algorithm we designed (see Section 5.1) primarily serves to validate our theory. The aim of the weakly-supervised experimentation was not to train a state-of-the-art student model. To the best of our knowledge, this phenomenon hasn't been studied previously, which is why we did not design comparative experiments. **Q3. Visualizations of the class probabilities of large-biased soft labels.** - A: We have provided visualizations of the class probabilities of large-biased soft labels (generated by SBTs) in the appendix (see Figure 4). We did not include them in the main text due to space constraints. **Q4. Experiments on different backbones.** - A: We have added experiments on different backbones (Figure 2 in the PDF of the Author Rebuttal). We experimented with WideResNet 28x2, 28x4, 40x2, and 40x4. In Figure 2, we omitted the unreliability degree and ambiguity degree (which are the same as those in Figure 2 of the paper) for clarity. The four distinct backbones displayed consistent trends, further suggesting that the proposed indicators are effective across different backbones. For the same teacher model, the better the student model's fitting capacity, the better the student's performance tends to be. **Q5. The paper exceeds the page limit.** - A: We sincerely apologize for this oversight. We will make the necessary adjustments to condense the length of our paper. **Q6. 
ERM learners cannot generalize well.** - A: The expected classification error defined in Section 3.1 indicates the performance (or generalization ability) of the model with respect to the original data distribution. This conclusion is widely accepted in the machine learning community, as evidenced by [1,2]. Peter L. Bartlett also stated, 'The performance of such a model selection scheme critically depends on how well the error bounds match the true error (i.e., expected classification error).' As for the model's performance on adversarial examples, it falls outside the scope of our study. The generalization ability mentioned in mixup refers to situations outside the training distribution, and there is no contradiction between the two. Thank you for the excellent idea regarding the student model's performance on adversarial examples; we will consider the feasibility of this direction in our future work. > [1] Bartlett, Peter L., and Shahar Mendelson. "Rademacher and Gaussian complexities: Risk bounds and structural results." Journal of Machine Learning Research 3.Nov (2002): 463-482. > [2] Daniely, Amit, et al. "Multiclass learnability and the ERM principle." Proceedings of the 24th Annual Conference on Learning Theory. JMLR Workshop and Conference Proceedings, 2011. **Q7. Definition of the skillfulness of bad teachers.** - A: We apologize for any confusion caused by the term "skillfulness of the bad teachers" mentioned in the paper. Here, we provide a rigorous definition of "bad teachers" to clarify the term's meaning. > Definition 1. (Bias of soft labels) Given a dataset $D$ consisting of $n$ samples, the feature vector for the $i$-th sample is denoted as $\boldsymbol{x}_i$ and the corresponding label is denoted as $y_i$. Let $f$ represent a model or a mapping rule (e.g. label smoothing). 
The bias of the soft labels generated by $f$ on dataset $D$ is \begin{equation} Bias(f, D)=\frac{1}{n} \sum_{i=1}^n [1-f_{y_i} (\boldsymbol{x}_i)], \end{equation} where $f_{y_i} (\boldsymbol{x}_i)$ refers to the component of the soft label $f(\boldsymbol{x}_i)$ that corresponds to the true label $y_i$. > Definition 2. (Large-biased soft labels) Soft labels generated by $f$ on dataset $D$ are called biased soft labels when $Bias(f, D) > 0$ and large-biased soft labels when $Bias(f, D) \geq 0.5$. > Definition 3. (Bad teachers) We define $f$ as a bad teacher if the soft labels it generates on dataset $D$ are large-biased. Typically, $D$ is the training set for $f$. - We cannot provide a precise definition of 'skillful teachers', as the performance of the student is contingent upon the architecture of the model and the complexity of the dataset. In this context, 'skillful' merely signifies that the teacher produces students with acceptable outcomes. **Q8. Main contribution.** - A: We discover that 'large-biased soft labels can produce competent student models', and we then introduce two indicators (unreliability degree and ambiguity degree) and provide theoretical guarantees for them. The heuristic approach of 'bad teacher can teach good student' serves to empirically validate the existence of the phenomenon and the soundness of our theory. **Q9. The conclusion that “the accuracy of the students decreases when unreliability degree and ambiguity degree increase” does not seem to be obvious.** - A: We sincerely apologize for any confusion caused by our statement 'the accuracy of the students decreases when unreliability degree and ambiguity degree increase'. What we intended to convey was that when one indicator remains unchanged and the other increases, the accuracy of the students will decrease. Both indicators together are necessary to assess the efficacy of soft labels, and neither should be analyzed based solely on its individual trend. 
This applies to Figure 2(b), 3(a), and 3(b) alike. Thank you for your valuable feedback. We will make revisions to our paper based on your suggestions. --- Rebuttal Comment 1.1: Comment: Dear Reviewer Could we kindly know if the responses have addressed your concerns and if further explanations or clarifications are needed? Your time and efforts in evaluating our work are appreciated greatly. --- Rebuttal Comment 1.2: Comment: Thank you for your reply and rebuttal to my comments. I appreciate the authors' detailed responses, which have addressed most of my concerns effectively. In particular, I appreciate the addition of new experiments and the clarification of previously vague concepts in the paper. Based on these improvements and responses, I am pleased to give the paper a positive evaluation and will increase my score to 5. The authors are supposed to incorporate the insights from our discussion into the revised version of the paper.
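The rebuttal's Definitions 1-3 above (bias of soft labels, large-biased soft labels, bad teachers) translate directly into code. The sketch below is only an illustration; the function names are ours, not the authors':

```python
import numpy as np

def soft_label_bias(soft_labels, true_labels):
    # Definition 1: Bias(f, D) = (1/n) * sum_i (1 - f_{y_i}(x_i)), i.e. one
    # minus the probability mass each soft label places on the true class,
    # averaged over the dataset.
    soft_labels = np.asarray(soft_labels, dtype=float)
    true_labels = np.asarray(true_labels, dtype=int)
    true_mass = soft_labels[np.arange(len(true_labels)), true_labels]
    return float(np.mean(1.0 - true_mass))

def is_bad_teacher(soft_labels, true_labels):
    # Definitions 2-3: f is a "bad teacher" on D when its soft labels are
    # large-biased, i.e. Bias(f, D) >= 0.5.
    return soft_label_bias(soft_labels, true_labels) >= 0.5
```

For example, soft labels [[0.2, 0.8], [0.3, 0.7]] with true labels [0, 0] have bias 0.75, so the generating teacher is "bad" in the sense of Definition 3.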
Summary: The authors propose a clever theoretical framework for studying learnability under supervision with imperfect soft labels. Strengths: - Learning from imperfect soft labels is an important and interesting area that is prevalent not only in knowledge distillation but also in cognitive science, human-AI interaction, and pretty much any modeling that aims to take into account noise and uncertainty. - The authors conduct a promising theoretical analysis of this setting by recasting it as a noisy top-k oracle problem (where teachers provide a set of k labels which with some probability contains the true label). Weaknesses: - Clarity could be improved by providing more intuitions and explanations for terminology throughout. For example, "biased soft label" is never formally defined, and the implied definition (based on the notions of ambiguity and unreliability) does not fully agree with what I would normally think of when thinking of statistical notions of bias. Some additional editorial revision (fixing typos, etc.) would also be helpful, but I am not taking that into account in my score. - I think the assumptions are not quite as moderate as the authors imply and would like some more clarification about these. For example, one of the assumptions seems to be that labels occur in the top-k with equal probability, but this is clearly not the case in practice (e.g. you may believe a picture of a dog could be a wolf, but certainly not an airplane), which the authors acknowledge when designing their experiments as they find that the top-k labels are correlated. The experiments are then designed to satisfy this assumption, which means there is very little evidence that the proposed theory describes a realistic setting. Another assumption (Assumption 2 in 4.1) seems to pre-assume that the metrics considered by the authors are inversely correlated with accuracy, which feels circular since I believe this is one of the things the authors want to claim. 
- As mentioned in the section above, the empirical results are currently unconvincing. In addition to the assumption mentioned above, it seems like in both experiments, the true label has the highest probability in >15% of the cases. For CIFAR10 this is reasonable since random chance is 10%, but for CIFAR100, this is quite high. Is there a disconnect between the theory setting and the experimental setting, in that the theory is focused on learning from the top-k set where all top-k labels are treated equally while in the experimental setting, learning still takes into account the actual probabilities? Perhaps a more compelling experiment would be to isolate various top-k cases and show that the curves are robust to changes in k? - There are a number of existing papers that study the informativeness of soft labels. It would be great to see some discussion of how this study fits into that literature. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: - The authors have listed both limitations and assumptions, though I think the assumptions may be more of a limitation than suggested. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, **Q1. Clarity and definition.** - A: We apologize for any confusion caused by the term "large-biased soft labels" mentioned in the paper. Here, we provide a rigorous definition for "large-biased soft labels" to clarify its meaning. > Definition 1. (Bias of soft labels) Given a dataset $D$ consisting of $n$ samples, the feature vector for the $i$-th sample is denoted as $\boldsymbol{x}_i$ and the corresponding label is denoted as $y_i$. Let $f$ represent a model or a mapping rule (e.g. label smoothing). The bias of the soft labels generated by $f$ on dataset $D$ is \begin{equation} Bias(f, D)=\frac{1}{n} \sum_{i=1}^n [1-f_{y_i} (\boldsymbol{x}_i)], \end{equation} where $f_{y_i} (\boldsymbol{x}_i)$ refers to the component of the soft label $f(\boldsymbol{x}_i)$ that corresponds to the true label $y_i$. > Definition 2. (Large-biased soft labels) Soft labels generated by $f$ on dataset $D$ are called biased soft labels when $Bias(f, D) > 0$ and large-biased soft labels when $Bias(f, D) \geq 0.5$. **Q2. About the assumption "labels occur in the top-k with equal probability".** - A: The assumption "incorrect labels occur in the top-k with equal probability", mentioned in line 212 (Section 4.1), is solely in service of Theorem 3. We have also noted, in lines 213-215, that this assumption can be relaxed, for instance, by considering an upper bound of $\frac{p(i|x)}{p(j|x)}$. This does not impact our understanding of Incomplete Supervision from the perspective of soft labels. **Q3. The designed experiments reflect the assumption is not realistic.** - A: In fact, for models that are trained normally, the conditions we described (i.e. $\gamma_k(f)<1-\frac{\eta_k(f)}{1-\eta_k(f)}$ and $\eta_k + \gamma_k<1$) are easily met. It only requires the existence of a specific $k$ such that the conditions on both $\eta_k$ and $\gamma_k$ are satisfied.
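As an illustration (ours, not part of the rebuttal), the two conditions quoted above can be checked numerically; the values plugged in below are the CIFAR10 numbers reported in this rebuttal:

```python
# Hypothetical helper (ours) checking the two conditions stated above:
#   Theorem 1:  gamma_k < 1 - eta_k / (1 - eta_k)
#   Theorem 2:  eta_k + gamma_k < 1

def satisfies_conditions(eta_k: float, gamma_k: float) -> bool:
    """True when (eta_k, gamma_k) meet both stated conditions."""
    cond_thm1 = gamma_k < 1 - eta_k / (1 - eta_k)
    cond_thm2 = eta_k + gamma_k < 1
    return cond_thm1 and cond_thm2

# CIFAR10 numbers from this rebuttal: eta = 2.63%, gamma = 20.12%
print(satisfies_conditions(0.0263, 0.2012))  # True
```

The same check passes for the CIFAR100 and Tiny-Imagenet numbers reported in the rebuttal.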
To further illustrate this point, when we set $k=4$ and use a normally trained resnet-50 as an example, we obtained the following results: - CIFAR10: Accuracy (ACC): 95.29%; $\eta$: 2.63\%; $\gamma$: 20.12\%. - CIFAR100: Accuracy (ACC): 78.13%; $\eta$: 6.55\%; $\gamma$: 21.01\%. - Tiny-Imagenet: Accuracy (ACC): 60.63%; $\eta$: 13.55\%; $\gamma$: 24.67\%. They all easily satisfy the requirements stipulated in Theorem 1 and Theorem 2. **Q4. (Assumption 2 in 4.1) Assume that the metrics are inversely correlated with accuracy.** - A: In Assumption 2, we indeed ideally posit an inverse correlation between the proposed metrics and accuracy, using this to prove Theorem 3. Operating under this ideal assumption, we can justifiably explain both the dynamic process and the final performance of Incomplete Supervision. This indicates that our theory does not contradict practical observations, further validating the reasonableness and practicability of our theoretical framework. **Q5. The accuracy of the teacher is much higher than 1% on CIFAR100.** - A: Theoretically, the accuracy on CIFAR100 can be even lower. However, in practice, the accuracy being above 15% is constrained by the size of the dataset. As seen in Theorem 2, the smaller the unreliability degree, the larger the required dataset size. If there were more CIFAR100 training samples, we believe the accuracy could indeed be lower. **Q6. Top-k labels and the actual probabilities of the soft labels.** - A: It's important to clarify that our study consistently focuses on learning from soft labels, rather than from top-k sets. The indicators we proposed facilitate theoretical analysis by transforming soft labels into top-k sets. However, the conclusions are universally valid for all soft labels. While in practice experiments are indeed influenced by the “actual probabilities,” our theory merely provides a guarantee in the worst-case scenario. **Q7.
About the choice of k.** - A: It's evident that when k=1 or k=c−1 (where c is the total number of classes), the supervision information suffers significant loss, resulting in poor performance by the student model. In practice, we observed that the student performs well when k=3, 4, 5. Our experimental results are based on a fixed k=4. **Q8. How this study fits into related literature.** - A: To the best of our knowledge, teacher models in the current domains of knowledge distillation and label enhancement aim to mimic true labels while adding some regularization terms. Such teacher models typically produce soft labels that closely resemble the true labels. There are many existing explanations for these types of soft labels, such as acting as a form of regularization, approximating Bayesian prior probabilities, or preventing excessive overconfidence (see lines 80-84, 100-101). However, it seems that these theories cannot account for the observation that "large-biased soft labels can also work". Our research serves as a complement and refinement to the existing theories on soft labels. Thank you for your valuable feedback. We will make revisions to our paper based on your suggestions. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Could we kindly know if the responses have addressed your concerns and if further explanations or clarifications are needed? Your time and effort in evaluating our work are greatly appreciated. --- Rebuttal Comment 1.2: Comment: Thank you for the detailed rebuttal! My concerns have generally been addressed and I have updated my score accordingly.
Summary: This paper aims to study the effectiveness of biased soft labels for knowledge distillation. They propose two indicators to measure the effectiveness of the biased soft labels: unreliability degree and ambiguity degree. They provide a theoretical guarantee that the biased soft labels are effective in training a good student in three weakly-supervised learning paradigms: incomplete supervision, partial label learning, and learning with additive noise. Their experiments reveal that largely biased soft labels can also teach good students and the proposed indicators are effective in measuring the effectiveness of soft labels. Strengths: * The motivation of the paper is original and novel. It is intriguing to see that learning from largely biased soft labels can achieve comparable performance, and the proposed indicators can be a valuable contribution to measuring their effectiveness in learning a good student for downstream tasks. * The paper provides an in-depth theoretical guarantee of the biased soft labels in three weakly supervised learning settings. * The experiment results validate that the proposed indicators (unreliability and ambiguity degree) are indeed effective in measuring the effectiveness of the soft labels (i.e. high accuracy of students) across the three weakly supervised learning paradigms. * Overall, the paper sufficiently informs the readers of technical and implementation details. Weaknesses: * The paper shows limited experiment results on CIFAR-10 and CIFAR-100 (Figure 2). The authors should perform experiments on a wider range of benchmark datasets to validate the generality of the method. Are the proposed indicators effective in more complex datasets as the authors suggested? * The introduction could be better organized. The paper should clearly present the definition of a large-biased soft label and better motivate why utilizing such labels is valuable. * The paper exceeds the nine page limit.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Do the proposed indicators work well across other benchmark datasets? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not address the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your constructive feedback. We appreciate the time and effort you took to review our paper, and we value the insights you've provided. First and foremost, we would like to express our sincere gratitude for your positive remarks on the originality, theoretical depth, and experimental validation of our work. Your recognition of the novelty of our motivation, the in-depth theoretical guarantees provided, and the effectiveness of our proposed indicators is truly encouraging and reinforces our belief in the significance of our contributions. We would like to address the concerns you raised: **Q1. Experiments on more complex datasets.** - A: We have supplemented our research with experiments on the Tiny-ImageNet dataset, and the results are illustrated in the PDF of the Author Rebuttal. **Q2. Definition of large-biased soft labels.** - A: We apologize for any confusion caused by the term "large-biased soft labels" mentioned in the paper. Here, we provide a rigorous definition for "large-biased soft labels" to clarify its meaning. > Definition 1. (Bias of soft labels) Given a dataset $D$ consisting of $n$ samples, the feature vector for the $i$-th sample is denoted as $\boldsymbol{x}_i$ and the corresponding label is denoted as $y_i$. Let $f$ represent a model or a mapping rule (e.g. label smoothing). The bias of the soft labels generated by $f$ on dataset $D$ is \begin{equation} Bias(f, D)=\frac{1}{n} \sum_{i=1}^n [1-f_{y_i} (\boldsymbol{x}_i)], \end{equation} where $f_{y_i} (\boldsymbol{x}_i)$ refers to the component of the soft label $f(\boldsymbol{x}_i)$ that corresponds to the true label $y_i$. > Definition 2. (Large-biased soft labels) Soft labels generated by $f$ on dataset $D$ are called biased soft labels when $Bias(f, D) > 0$ and large-biased soft labels when $Bias(f, D) \geq 0.5$. **Q3. The paper exceeds the nine page limit.** - A: We apologize for the oversight regarding the page limit.
We will carefully revise the paper to ensure it adheres to the limit while retaining the essential content and contributions. Your constructive feedback is instrumental in refining our paper, and we are committed to making the necessary improvements. Once again, thank you for your valuable insights and for considering our paper for acceptance. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your detailed response. After reading the response and comments from other reviewers, I have decided to retain my original score.
Rebuttal 1: Rebuttal: We provided three supplementary experiments in the PDF: * In Figure 1, we show the effectiveness of the proposed indicators on Tiny-Imagenet. As $\eta_k(f)$ and $\gamma_k(f)$ decrease, Acc (i.e., accuracy of the student) increases. The experiments on Tiny-Imagenet demonstrate a trend similar to that on CIFAR-10 and CIFAR-100. * Figure 2 demonstrates whether the effectiveness of soft labels is influenced by different backbones. We experimented with wideresnet 28x2, 28x4, 40x2, and 40x4. In Figure 2, we omitted the Unreliability degree and Ambiguity degree (which is the same as that of Figure 2 in the paper) for clarity. The four distinct backbones displayed consistent trends, further suggesting that the proposed indicators are effective across different backbones. * In Table 1, we tested four different metrics — Chebyshev distance, KL divergence, Manhattan distance, and Euclidean distance — to measure the discrepancies between the large-biased soft labels generated by SBTs and the ground-truth labels. Furthermore, we also tested soft labels generated by a teacher trained under full supervision. What we observed was that, based on these four classical metrics, the soft labels generated by SBTs differ significantly from those of the normally trained teacher. Yet, they still manage to train good students. These classic metrics do not adequately capture the teaching capabilities inherent in soft labels, nor do they reflect the performance of the student models. This finding accentuates our claim that the indicators we introduced are more indicative of the teaching capabilities of soft labels compared to these classic metrics. Pdf: /pdf/55aa9c12d02393f0efdb579fa8f52d7b89b3fa7e.pdf
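For reference, a minimal sketch (ours; the function name and toy values are hypothetical) of the four classical metrics mentioned above, comparing a soft label against a one-hot ground-truth label:

```python
import numpy as np

def label_discrepancies(p, q, eps=1e-12):
    """Four classical metrics between a soft label p and a one-hot label q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return {
        "chebyshev": float(np.max(np.abs(p - q))),
        "manhattan": float(np.sum(np.abs(p - q))),
        "euclidean": float(np.linalg.norm(p - q)),
        # KL(q || p); with a one-hot q this reduces to -log p[true_class]
        "kl": float(np.sum(q * np.log((q + eps) / (p + eps)))),
    }

soft = [0.1, 0.6, 0.3]    # toy soft label from a teacher
onehot = [0.0, 1.0, 0.0]  # ground-truth one-hot label
print(label_discrepancies(soft, onehot))
```

The small `eps` only guards against taking the logarithm of zero; zero entries of `q` contribute nothing to the KL sum.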
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper explores the concept of learning from weak, or "soft," labels that may be biased or diverge from the ground truth labels in a dataset. In contrast to existing theories that focus on the importance of close alignment between soft and ground truth labels, the authors probe the efficacy of learning from significantly biased soft labels. They introduce two indicators to gauge the effectiveness of these soft labels and propose conditions under which learning from such labels can be successful, including large-biased labels. The authors further devise a heuristic method for training what they call Skillful but Bad Teachers (SBTs), referring to models with relatively low accuracy that can nevertheless effectively train high-performing student models. They show that these teachers can achieve up to 90% accuracy on the CIFAR-10 dataset, demonstrating the validity of their approach. Additionally, the authors adapt their theoretical framework to examine the utility of soft labels in three specific weakly-supervised learning paradigms: incomplete supervision, partial label learning, and learning with additive noise. Experimental results are presented to support the proposed indicators and the viability of biased soft labels in these scenarios. Key contributions include: Discovery that learning from largely biased soft labels can achieve comparable performance, and an exploration of the mechanisms behind this phenomenon. The proposal of two indicators to evaluate the effectiveness of soft labels and conditions to ensure their usefulness. A heuristic method to train SBTs, using new concepts of unreliability degree and ambiguity degree. A theoretical framework that illuminates the role of soft labels in three weakly-supervised learning paradigms, accompanied by theoretical guarantees for their learnability and supporting experimental results. 
Strengths: Theoretical Analysis: The paper provides a comprehensive theoretical framework for analyzing the effectiveness of soft labels, which are often employed in the realm of machine learning for teaching student models. Definitions and Indicators: The paper introduces and defines new concepts like unreliability degree and ambiguity degree, and relates them to the effectiveness of soft labels. This could provide valuable insight for the development of future machine learning models. Extension to Weakly-Supervised Learning (WSL): The research effectively applies the theoretical findings to weakly-supervised learning paradigms, thus demonstrating the applicability and extensibility of their findings. Weaknesses: Selective Conditions: While the paper provides conditions for classifier-consistency and ERM learnability, it doesn't clearly outline how to meet these conditions in a real-world context. Furthermore, the condition of balancing unreliability degree and ambiguity degree might be challenging to achieve in practice. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It's mentioned that the soft labels generated by SBTs are large-biased, which could potentially impact the model's ability to generalize to new, unseen data. While the authors note that students still have good accuracy despite the bias, how does the authors propose the current approach tackles this situation? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The process of inhibiting correct predictions, reducing unreliability and ambiguity degrees, and randomly selecting k-1 labels for each training instance could increase the computational complexity and time required to train the models. 
The evaluation is mainly based on the accuracy of the student models, which might not be the most comprehensive measure. There could be other performance metrics that are important in the context of this problem. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your constructive feedback. We appreciate the time and effort you took to review our paper, and we value the insights you've provided. Your acknowledgment of our paper's strengths, especially the theoretical analysis, the introduction of new indicators, and the extension to weakly-supervised learning, is both encouraging and motivating. It reinforces our belief in the significance of our contributions and provides us with valuable insights for further refinement. We would like to address the concerns and questions you raised: **Q1. Conditions in Theorem 1 and 2 might be challenging to achieve in practice.** * In fact, for models that are trained normally, the conditions we described (i.e. $\gamma_k(f)<1-\frac{\eta_k(f)}{1-\eta_k(f)}$ and $\eta_k + \gamma_k<1$) are easily met. It only requires the existence of a specific $k$ such that the conditions on both $\eta_k$ and $\gamma_k$ are satisfied. To further illustrate this point, when we set $k=4$ and use a normally trained resnet-50 as an example, we obtained the following results: - CIFAR10: Accuracy (ACC): 95.29%; $\eta$: 2.63\%; $\gamma$: 20.12\%. - CIFAR100: Accuracy (ACC): 78.13%; $\eta$: 6.55\%; $\gamma$: 21.01\%. - Tiny-Imagenet: Accuracy (ACC): 60.63%; $\eta$: 13.55\%; $\gamma$: 24.67\%. They all easily satisfy the requirements stipulated in Theorem 1 and Theorem 2. **Q2. How does the proposed algorithm make it possible for bad teachers to teach good students?** * The algorithm proposed in Section 5.1 is inspired by our Theorems 1 and 2. While the soft labels generated by SBTs are large-biased, they exhibit a lower unreliability degree and ambiguity degree, as elaborated in Section 5.1. Theorems 1 and 2 offer assurances for the effectiveness of such large-biased soft labels, thereby enabling the effective training of competent students. **Q3.
The proposed algorithm increases the computational complexity and time in training the models.** * Based on the logs from our experiments, our method runs no more than 10% slower than normally trained models. Moreover, the GPU memory consumption is almost identical. Thus, the computational complexity and time required for our approach should be considered acceptable. **Q4. Evaluation is mainly based on the accuracy of the student models.** * As illustrated in Table 1 of the Author Rebuttal, we tested four different metrics — Chebyshev distance, KL divergence, Manhattan distance, and Euclidean distance — to measure the discrepancies between the large-biased soft labels generated by SBTs and the ground-truth labels. Furthermore, we also tested soft labels generated by a teacher trained under full supervision. What we observed was that, based on these four classical metrics, the soft labels generated by SBTs differ significantly from those of the normally trained teacher. Yet, they still manage to train good students. These classic metrics do not adequately capture the teaching capabilities inherent in soft labels, nor do they reflect the performance of the student models. This finding accentuates our claim that the indicators we introduced are more indicative of the teaching capabilities of soft labels compared to these classic metrics. Your constructive feedback is instrumental in refining our paper, and we are committed to making the necessary improvements. Once again, thank you for your valuable insights. --- Rebuttal 2: Comment: Dear Reviewer XfEm, This is another friendly reminder to acknowledge that you have read the rebuttal and the other reviews. Please also share how they change your view on the paper, if at all. Thanks again for your service! Best, AC
null
null
null
null
null
null
Combating Representation Learning Disparity with Geometric Harmonization
Accept (spotlight)
Summary: The authors tackle the problem of contrastive learning in the context of imbalanced datasets where some classes have many more samples than other classes. Since no label information can be used, importance sampling is not an option and without any interventions, contrastive learning tends to overrepresent majority classes in the latent space, thereby hurting the readout accuracy on the minority classes. The authors propose a Geometric Harmonization technique in which they propose to estimate the class labels with clustering and then recalibrate the learned embeddings such that majority classes occupy the same feature space as minority classes. The authors show experimental results on CIFAR100-LT, ImageNet-LT and Places-LT. Strengths: - I appreciate the complexity analysis of the proposed method in Section 3.4. - The authors compare to several other benchmarks and show superior results with their method. - I think in general the method makes sense and it is intuitive that it should work in the proposed case. Weaknesses: - There are many typos and grammatical errors in the text which makes the paper hard to understand. Some mathematical definitions are wrong and inconsistent which makes me doubt the claims of the theoretical guarantees since there are even errors in the Lemmas / definitions: - Line 94: “Let N denote the sample number and R = max_i n_i/min_i n_i denote the imbalanced ratio, where n_i denotes the number of samples in class i.” I am confused, the max and min are calculated per class i, which means that max_i n_i = min_i n_i = n_i, therefore R would be 1 for each class. Since I do not understand how R is defined, I cannot understand / judge the results in Table 2. - Lemma 3.2 is inconsistent with itself. If n_L_H=n_H and n_L_H=n_T, then n_T = n_H and n_H / n_T -> infty is impossible because n_H / n_T = 1 by definition. - Eq. 
5: Using M both as the geometric structure and the size of the collection of negative samples is confusing and should not be done. - Considering the experimental results, I am not convinced that the authors have looked into the most important baselines. CIFAR100-LT, ImageNet-LT and Places-LT are well-established benchmarks and the baseline results that the authors aim to improve seem to be very low to me. I will elaborate on the different datasets in the following: - For ImageNet-LT (https://paperswithcode.com/sota/long-tail-learning-on-imagenet-lt), the currently best number for a ResNet50 architecture is an accuracy of 70.1 (https://arxiv.org/pdf/2111.13579v4.pdf) with extra training data and 67.2 (https://arxiv.org/pdf/2111.14745v1.pdf) without extra training data. The best result the authors report in this paper is about 38% which is far below what is currently state of the art according to the benchmark list. - For Places-LT, https://paperswithcode.com/sota/long-tail-learning-on-places-lt, the best numbers for a ResNet50 are above 45% while the authors here report numbers below 35%. -> I believe that in order for this method to be relevant to the community, the authors need to show results on the superior models with higher baseline accuracy. In the current state, it is not clear whether their results would generalize to the better models. - The motivation for the studied problem does not become clear to me from the introduction. The authors write: “However, the real-world natural sources usually exhibit the long-tailed distribution [31], and directly learning representation on them might lead to the distortion issue of the embedding space, namely, the majority dominates the feature regime [45] and the minority collapses [28].” Using the modal verb “might” here indicates a possibility but no further evidence is presented. Citing fairness research as an application is not enough in my opinion. 
From reading the introduction, I am not convinced that the problem the authors want to study actually exists in SSL. I would advise the authors to provide concrete examples where using SSL actually harms performance on minority classes. Currently, the main question of study posed in line 39 (“Why the conventional contrastive learning underperforms in self-supervised long-tailed context?”) does not seem well supported. - The first part of the first contribution is misleading and I believe wrong: “To our best knowledge, we are the first to investigate the drawback of the contrastive learning loss in self-supervised long-tailed context.” This drawback is investigated in several other papers which the authors discuss in their related work section, e.g. SDCLR specifically tackles this problem with a different method. - Figure 1: The spheres in the middle and right Figure look like ovals which is confusing. These should be spheres/ circles. - Abstract, line 5: „The attribution is that the vanilla SSL methods that pursue the sample-level uniformity easily leads to representation learning disparity, where head classes with the huge sample number dominate the feature regime but tail classes with the small sample number passively collapse.” -> This sentence is hard to parse and understand. - Table 1: What does IR stand for? - Line 119: “and π ∈ RK+ refers to the the marginal distribution constraint.” -> "The" twice and what do you mean under “marginal distribution constraint”? - The used datasets and models must be cited. - Line 162: “This phenomenon indicates all representations of the minority will collapse completely to one point without considering the category discrepancy, which corresponds to our observation regarding the passive collapse of tailed samples in Figure 1.” This statement is not accurate because the representations of the minority classes do not collapse to a single point in Figure 1. Lemma 3.2. 
covers an extreme case where there are infinitely more samples in the head classes compared to the tail classes, so Figure 1 does not represent this scenario and thus, using “corresponds” here is not accurate. - It is confusing that Table 7 comes before Tables 5+6, please fix. - Does Table 5 show the computational cost per epoch? - Line 218: “For hyper-parameters of GH, we provide a default setup across all the experiments: set the geometric dimension K as 100 and the temperature γGH as 0.1. In the surrogate label allocation, we set the regularization coefficient λ as 20 and Sinkhorn iterations Es as 300. Please refer to Appendix E.3 for more experimental details.” There are many hyperparameters that need to be set for this method, and it is not clear whether they were chosen on the test set, or how they were selected. It is also not clear how sensitive the algorithm is to these hyperparameters. - Line 223: “For comprehensive performance comparison, we present the linear probing performance and the standard deviation among three disjoint groups, i.e., [many, medium, few] partitions [25].” Please explain what the partitions into many/medium/few mean as it is not possible to understand the results otherwise. - Line 290: “To justify the intuition in Section 3.2” which intuition? Please be more specific. - I find the results in Fig. 3b unintuitive. The NMI scores show that GH is better aligned with ground truth labels compared to all other methods, but then why is the readout accuracy of GH only 1-2 percent points better compared to the other methods? Please discuss this as I am not sure how to interpret this result and whether it is meaningful to compare NMI scores between GH and the other methods. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - How sensitive is GH to the choice of the hyperparameters? 
- Can you please comment on how the presented results on Places-LT and ImageNet-LT compare to the state-of-the-art results on https://paperswithcode.com/sota/long-tail-learning-on-places-lt and https://paperswithcode.com/sota/long-tail-learning-on-imagenet-lt? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors could think a bit more about the impact of their work on applications related to long-tailed data distributions where minority and majority classes are present. How would applications situated in fairness research impacted by their work? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: While the reviewer raised many questions, we sincerely appreciate the reviewer's time and effort on our submission. Probably owing to differences between research sub-areas, the reviewer may have some misunderstandings about the setting, background and evaluation of this topic. We do hope that the following point-to-point responses can address the concerns raised by the reviewer. Any further comments are welcome. > **Q1:** Typos, grammatical errors and notation usage. **A1:** Sorry for the misunderstanding due to some typos, the definition of $R$ and the notation usage. We will carefully address each of them in the revision to ease understanding. Specifically, we would like to explain that $R = N_{max}/N_{min}$ denotes the imbalance ratio, where $N_{max}$, $N_{min}$ represent the sample numbers in the largest/smallest class; we will revise the mentioned notation as $n_{L_H+1}=n_{L_H+2}=\dots=n_{L}=n_T$, and use $J$ to denote the number of negative samples. > **Q2:** Supervised long-tailed benchmark. **A2:** Thanks for the question. We would like to kindly clarify the differences between self-supervised long-tailed (SSL-LT) learning and the recommended supervised long-tailed learning (S-LT). Firstly, SSL-LT does not rely on any label information in the training stage, while S-LT leverages the fully supervised data. Furthermore, SSL-LT aims to address the issue of representation disparity, pursuing a more balanced embedding space. In contrast, S-LT focuses on the classification disparity, which involves rectifying both the representation and the last-layer classifier. This distinction leads to different evaluation protocols and correspondingly different performance for the two paradigms. Specifically, in SSL-LT, linear probing is employed to evaluate the quality of the representation, while classification accuracy is adopted to evaluate the predictions in S-LT. Consequently, direct performance comparison between SSL-LT and S-LT may not be feasible or appropriate.
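To make the definition of $R$ clarified in A1 above concrete, a one-line sketch (ours; the class counts below are an illustrative long-tailed split, not from the paper):

```python
# Imbalance ratio R = N_max / N_min, taken over per-class sample counts
# (not per single class, as A1 clarifies); counts here are made up.

def imbalance_ratio(class_counts):
    return max(class_counts) / min(class_counts)

print(imbalance_ratio([5000, 2500, 500, 50]))  # 100.0
```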
> **Q3:** Motivation. **A3:** Thanks for the questions. We would like to kindly clarify the evidence regarding the limitation of contrastive learning on long-tailed data as follows: - Linear probing evidence. Table 2 shows that many-shot classes outperform few-shot classes by an average of 5.67\%/9.87\% on CIFAR-LT/ImageNet-LT with SimCLR [11]. This indicates that the contrastive loss is not immune to class imbalance, leading to performance disparities. Our empirical findings are consistent with several recent SSL-LT explorations [14,16,20,21]. - Cross-dataset transfer. We conduct experiments on the large-scale long-tailed CC3M and evaluate on various downstream tasks (see **Tables R3-R4**). The results reveal that representation disparity actually exists in real-world data and that GH promotes better transferability. - Uniformity metric. **Table R5** reveals that contrastive learning struggles to achieve a class-uniform partition, while our method effectively mitigates this issue. - Qualitative analysis. **Figures 1 and R1** show that contrastive learning leads to representation disparity, where head classes dominate the embedding space and tail classes exhibit passive collapse. We will enhance the explanation of the difference between the two sub-areas to avoid misunderstanding. > **Q4:** Clarification on first contribution. **A4:** Thanks for the question. We would like to re-clarify our taxonomy as described in **Lines 28-38**: all previous explorations aim to improve self-supervised long-tailed learning, but from different aspects such as reweighting/optimization techniques, architecture design and data augmentation. However, **none of these methods focus on the intrinsic limitation of contrastive learning**. Concretely, we provide more detailed aspects in **Table G1** in the general response for clarity.
> **Q5:** Minor issues: (1) oval-like embedding space in Figure 1, (2) long sentence in line 5, (3) twice "the" in line 162, (4) cite dataset and model, (5) “correspond” in line 162, (6) Table 7 location, (7) intuitions in line 290. **A5:** Thanks for your detailed suggestions. We will carefully follow the reviewer's advice to revise each of them in the revision. > **Q6:** Questions: (1) IR in Table 1, (2) marginal distribution constraint in line 119, (3) computational cost in Table 5, (4) many/medium/few partitions. **A6:** Thanks for your detailed questions. Below are explanations for each question: - IR indicates the imbalance ratio $R$. - It refers to the distribution prior used to determine the marginal projections of the matrix $\hat{Q}$ onto its rows and columns. - Table 5 shows the time cost per mini-batch (see the caption of Table 5). - Please refer to lines 692-696 in Section E.1 in the appendix for partition details. > **Q7:** Sensitivity to the hyper-parameters. **A7:** Thanks for the question. We would like to kindly clarify that many important hyperparameters have been compared or discussed (see Figure 3(a), Tables 11-13 and Figure 5). Besides, we follow the reviewer's advice to conduct more experiments on the temperature $\gamma_{GH}$, the coefficient $\lambda$ and the number of Sinkhorn iterations $E_s$ on CIFAR-LT. The results in **Figure R3** show that GH consistently achieves satisfying performance with different hyperparameters. > **Q8:** NMI and linear probing. **A8:** Thanks for the question. Below are the detailed reasons: - Linear probing removes the projector and GUS, while NMI uses their predictions. - NMI is evaluated on the training set, while linear probing is evaluated on the test set. - NMI is the normalization of the mutual information, while linear probing reports accuracy. These differences mean that the scale of the relative improvement in NMI cannot be compared with that for accuracy. > **Q9:** Applications in fairness research. **A9:** Thanks for the constructive suggestions.
We will add more discussion about how GH can benefit applications situated in fairness research, especially from the perspective of representation parity, and cite the related works. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Dear authors, thank you for your replies to my concerns. It is true that I am not familiar with the literature on self-supervised long-tailed (SSL-LT) learning, or with the field in general, as is reflected by my low confidence. Given that other, more confident reviewers write that the experimental results are strong, I will lift this concern. I will increase my rating to a '6', but will not increase my confidence. I encourage the authors to improve the clarity of the paper, rectify the definitions, etc. My low rating was also due to my not understanding several important equations because they were wrong / very ambiguous. Best, Reviewer R4di --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Dear Reviewer R4di, We sincerely appreciate you taking the time to review our responses and contributing to improving this paper. We will follow the reviewer's advice to thoroughly proofread the paper again to enhance its clarity and facilitate understanding. We will also carefully incorporate all the addressed points in the updated version. Thank you once again for your dedicated and valuable contribution to reviewing our paper! Best, Authors of Paper 11308
Summary: The paper investigates SSL in a long-tailed distribution setting, where traditional contrastive learning may not work well. The authors attribute that issue to the "sample-level" learning of SSL and propose Geometric Harmonization (GH) to regularize the "category-level" during training. GH can be a plug-and-play loss for existing SSL frameworks without modification. Empirical results of linear probing accuracy show the effectiveness of the proposed method on long-tailed data over the baseline SimCLR. Strengths: + Contributes to SSL capability in real-world scenarios with imbalanced classes, which may be useful for unsupervised representation learning. + Some theoretical analyses are provided to support the problem and method. + The idea is reasonable. + The writing is clear enough. Weaknesses: + SSL has been developed with various methods to facilitate representation learning for unlabelled data; this paper provides experiments for one method, i.e. SimCLR, which is not a very strong baseline and not a state-of-the-art CL framework as claimed. From the presented results, SimCLR still does not perform too badly compared to SimCLR+GH in all settings. I would expect to see the effectiveness of GH on stronger SSL baselines, including contrastive learning frameworks such as BYOL, MoCo-v3, etc., and maybe a more recent branch of SSL, namely the masked autoencoder (MAE). To make the proposed method more complete, those baselines should be considered. + The "passive collapse" of tail classes is mentioned in the paper, but it seems that, except for some quantitative results showing that GH improves the baseline to some extent, there are no results (either visualization or a quantitative metric) to show that such collapse happens and that GH can mitigate or solve that problem. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: + For transfer learning to downstream tasks, which dataset is used for SSL pre-training? + From the concept/hypothesis in Fig.
2 (bottom), when applying GH, the samples of each class are clustered well even in the imbalanced case. Is there a visualization (t-SNE, for example) on real data conducted in this paper to show that it works as in the concept figure in Fig. 2? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: They have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We gratefully thank you for your time and effort devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions. > **Q1:** Empirical studies on BYOL, MoCo-v3 and maybe a more recent branch of SSL, namely the masked autoencoder (MAE). **A1:** Thanks for the constructive suggestions. We would like to kindly clarify that many representative self-supervised learning methods have been empirically compared in the submission, such as SimCLR [11], SeLa [17], SwAV [18], MoCo-v2 [22], SimSiam [23] and Barlow Twins [24] (see Tables 1 and 4). The comprehensive comparison of different SSL baselines includes (1) contrastive learning: SimCLR and MoCo-v2, (2) unsupervised clustering: SeLa and SwAV, and (3) some other non-contrastive methods: SimSiam and Barlow Twins. Besides, we appreciate the reviewer's advice about conducting comparisons with more SSL baselines, and conduct more experiments based on BYOL [25] and MoCo-v3 [26] on CIFAR-LT with different imbalance ratios. From the results in **Table R1** in the complementary PDF file, we can see that GH consistently improves BYOL/MoCo-v3 across different imbalance ratios on CIFAR-LT. Due to the time constraints, we will provide more comprehensive experiments/discussions to strengthen the comparisons with MAE [27] and on ImageNet in the revision. > **Q2&Q4:** Visualization or a quantitative metric demonstrating the passive collapse of tail classes, and t-SNE visualization to support the concept/hypothesis presented in Fig. 2 on real data. **A2&A4:** We appreciate the reviewer's constructive suggestions, and conduct both quantitative and qualitative analyses to provide further intuition about the proposed method.
**Quantitative analysis:** We conduct more thorough experiments with several metrics [28,29] on CIFAR-LT as follows: - Inter-Class Uniformity: $U=\frac{1}{L(L-1)}\sum_{i=1}^L\sum_{j=1, j\neq i}^L||\mu_i-\mu_j||_2$, where we have $L$ classes and $\mu$ denotes the class means. It evaluates the average distance between different class centers. - Neighborhood Uniformity: $U_k=\frac{1}{Lk}\sum_{i=1}^L\min_{j_1,\cdots, j_k}(\sum_{m=1}^k||\mu_i-\mu_{j_m}||_2)$, where $j_1, \cdots, j_k \neq i$ represent different classes. It measures how close one class is to its neighbors. In **Table R5** in the complementary PDF file, we compare these metrics on CIFAR-LT with different imbalance ratios, and have the following observation: our GH exhibits significant improvements in both inter-class uniformity and neighborhood uniformity when compared with the baseline SimCLR. This indicates that vanilla contrastive learning struggles to achieve a uniform partitioning of the embedding space, while our proposed method effectively mitigates this issue. **Qualitative analysis:** We conduct t-SNE visualization of SimCLR and SimCLR with GH on CIFAR-LT-R100. For simplicity, we randomly select four head classes and four tail classes to generate the t-SNE plots. Based on the results in **Figure R1** in the complementary PDF file, the observations are as follows: (1) SimCLR: head classes exhibit a large presence in the embedding space and heavily squeeze the tail classes; (2) GH: head classes reduce their occupancy, allowing the tail classes to have more space. This indicates that the passive collapse of tail classes persists in real-world data and that our method can effectively mitigate this negative effect. Moreover, our method demonstrates the ability to promote clusters of higher quality in the presence of class-imbalanced data, surpassing the baseline SimCLR. We will include these discussions and the empirical quantitative/qualitative comparisons in the revision.
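For concreteness, the two metrics above can be computed from a matrix of class-mean embeddings. The following is a minimal NumPy sketch; the random `mu` is a hypothetical stand-in for class means extracted from a trained encoder, not the authors' code:

```python
import numpy as np

def inter_class_uniformity(mu):
    """U: mean pairwise L2 distance between the L class means
    (diagonal self-distances are zero and excluded by the divisor)."""
    L = len(mu)
    d = np.linalg.norm(mu[:, None, :] - mu[None, :, :], axis=-1)
    return d.sum() / (L * (L - 1))

def neighborhood_uniformity(mu, k):
    """U_k: mean distance from each class mean to its k nearest
    other class means (the min over index subsets in the formula
    is attained by the k nearest neighbors, so sorting suffices)."""
    L = len(mu)
    d = np.linalg.norm(mu[:, None, :] - mu[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distance
    nearest = np.sort(d, axis=1)[:, :k]  # k closest other classes
    return nearest.sum() / (L * k)

# Hypothetical class means on the unit sphere (L=10 classes, dim 128)
rng = np.random.default_rng(0)
mu = rng.normal(size=(10, 128))
mu /= np.linalg.norm(mu, axis=1, keepdims=True)
print(inter_class_uniformity(mu), neighborhood_uniformity(mu, k=3))
```

Since $U_k$ averages only each class's $k$ smallest distances while $U$ averages all of them, $U_k \leq U$ always holds; larger values of both indicate a more uniform spread of class centers.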
> **Q3:** For transfer learning to downstream tasks, which dataset is used for SSL pre-training? **A3:** Thanks for the detailed question. In Section 4.3, the SSL methods are pretrained on the same datasets as the downstream datasets, i.e., ImageNet-LT to ImageNet-LT and Places-LT to Places-LT. In Section F.9 in the appendix, the SSL methods are pretrained on ImageNet and evaluated via linear probing on CUB200 [4] and Aircraft [5]. Besides, we also conduct more comprehensive experiments on various cross-dataset transfer tasks to evaluate representation transferability. Specifically, we pretrain our method on the large-scale long-tailed dataset CC3M [1]. Subsequently, we apply our approach to various downstream classification datasets, including well-curated datasets such as ImageNet [2] and Places [3], and fine-grained datasets such as CUB200 [4], AirCraft [5], Stanford Cars [6], Stanford Dogs [7] and NABirds [8]. Furthermore, we also conduct empirical comparisons on object detection and segmentation tasks using COCO2017. From the results in **Tables R3-R4** in the complementary PDF file, we can see that our proposed GH consistently outperforms the baseline across various tasks and datasets. This indicates the superiority of GH in improving contrastive learning on imbalanced datasets, yielding better model generalization to a range of real-world scenarios. We appreciate the reviewer's question and will add additional clarifications and empirical studies on transfer learning in the revision. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal and the other reviews. It has adequately addressed my concerns, and I would love to raise the score. I recommend the authors add these addressed points to their revision. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Dear Reviewer G4d9, We sincerely appreciate you taking the time to review our responses and contributing to improving this paper.
We will carefully follow the reviewer's advice to incorporate all the addressed points in the updated version. Best, Authors of Paper 11308
Summary: - The paper proposes to tackle the task of self-supervised learning on datasets sampled from long-tailed distributions. - Typical SSL approaches tend to perform a lot worse for the minority classes, thus hurting performance. - The proposed approach, termed Geometric Harmonization, encourages category-level uniformity, leading to better performance on various downstream tasks. - The approach is shown to be complementary to various existing SSL approaches. Strengths: - The paper tackles an important problem in SSL - that of training on data sampled from long-tailed distributions, which is common in the real world. - The proposal of geometric harmonization is interesting. The idea is simple and easy to implement. While the approach is motivated by recent clustering-based works in SSL, the proposed approach seems novel. - The paper is well written for the most part (some concerns are pointed out in other sections). - The authors perform extensive analyses and ablations to understand the workings of the approach. - Experiments on various LT datasets show the benefit of using the approach. Weaknesses: Following are some of my concerns with the paper: - Clarity: Given that geometric harmonization is the key contribution of this work, I think the authors should give the readers more idea of the intuition for the section on "surrogate label allocation". While Figure 2 looks good, the caption is not very helpful to drive home the intuition. I see a similar issue with Definition 3.1: the meaning of K only becomes clear later in the paper. - Effect of batch size: The authors have provided an analysis of the effect of batch size. I think such analysis would be especially interesting for datasets which have a lot more labels, like ImageNet.
- Discussion/comparison missing on some related works: [a], [b] [a] The hidden uniform cluster prior in self-supervised learning [b] Temperature Schedules for Self-Supervised Contrastive Methods on Long-Tail Data Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Minor suggestions: - L42: reword "as proof in [38].." - "sample number": I suggest using "number of samples" or a similar alternative instead to avoid any confusion. - Some more discussion and analysis of the observation in L298-300 would be really interesting. For this experiment, what happens if there is no support for a certain class in that mini-batch? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations and societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We gratefully thank you for your time and effort devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions. > **Q1:** Clarity: Given that geometric harmonization is the key contribution of this work, I think the authors should give the readers more idea of the intuition for the section on "surrogate label allocation". While Figure 2 looks good, the caption is not very helpful to drive home the intuition. I see a similar issue with Definition 3.1: the meaning of K only becomes clear later in the paper. **A1:** We appreciate the reviewer's constructive suggestions. We will add additional clarifications regarding the "surrogate label allocation" section, especially about how and why it works in the self-supervised long-tailed context. Furthermore, we will thoroughly proofread the paper again to enhance its clarity and facilitate understanding, including the caption of Figure 2 and $K$ in Definition 3.1. > **Q2:** Effect of batch size: The authors have provided an analysis of the effect of batch size. I think such analysis would be especially interesting for datasets which have a lot more labels, like ImageNet. **A2:** Thanks for the advice. We summarize the experiments under different batch sizes on ImageNet as follows.

| Batch size | 256 | 384 | 512 | 768 |
|:---------:|:-----:|:-----:|:-----:|:---:|
| SimCLR [11] | 36.65 | 36.97 | 37.85 | 38.04 |
| +GH | 38.28 | 39.22 | 41.06 | 41.34 |

From the results, we can see that under different batch sizes, our GH consistently outperforms the baseline method, and a larger batch size yields relatively better performance. We will include detailed experiments and discussions regarding the training batch size on large-scale datasets in the revision. Specifically, we refer the reviewer to A5, which explains why a smaller batch size may hurt performance under long-tailed data with many categories.
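To make this intuition concrete, a back-of-the-envelope calculation (a hedged sketch with a hypothetical exponential class profile and an i.i.d.-sampling assumption, not drawn from our experiments) shows how the expected number of classes missing from a single mini-batch grows as the batch shrinks:

```python
def prob_class_missing(p_c, batch_size):
    """Probability that a class with sampling probability p_c
    contributes no example to an i.i.d.-sampled mini-batch."""
    return (1.0 - p_c) ** batch_size

def expected_missing_classes(class_probs, batch_size):
    """Expected number of classes absent from one mini-batch."""
    return sum(prob_class_missing(p, batch_size) for p in class_probs)

# Hypothetical exponential long-tailed profile: L = 100 classes,
# imbalance ratio R = 100 between the largest and smallest class.
L, R = 100, 100.0
weights = [R ** (-i / (L - 1)) for i in range(L)]
total = sum(weights)
probs = [w / total for w in weights]

for bs in (64, 256, 1024):
    print(bs, round(expected_missing_classes(probs, bs), 2))
```

Under these assumed numbers, tail classes are absent from small batches with high probability, so any per-batch estimate of class support becomes biased, which is consistent with the batch-size sensitivity discussed in A5.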
> **Q3:** Discussion/comparison missing on some related works: [a], [b]. > [a] The hidden uniform cluster prior in self-supervised learning > [b] Temperature Schedules for Self-Supervised Contrastive Methods on Long-Tail Data **A3:** We greatly appreciate the reviewer recommending the concurrent works PMSN [20] and TS [21], and will add these works to the related work with a proper discussion. Regarding the comparison, we conduct a range of experiments on CIFAR-LT with different imbalance ratios to compare PMSN/TS with GH, as shown in **Table R2** in the complementary PDF file. From the results, we can see that the proposed method consistently outperforms PMSN/TS across different imbalance ratios on CIFAR-LT. Besides, we can observe that combining GH and TS consistently improves the performance of contrastive learning on CIFAR-LT. Due to the time constraints, we will provide more comprehensive experiments/discussions to strengthen these comparisons in the revision. > **Q4:** Minor suggestions > (1) L42: reword "as proof in [38].." > (2) "sample number": I suggest using "number of samples" or a similar alternative instead to avoid any confusion. **A4:** Thanks for your detailed suggestions. We will carefully revise each of them and proofread the submission. > **Q5:** Some more discussion and analysis of the observation in L298-300 would be really interesting. For this experiment, what happens if there is no support for a certain class in that mini-batch? **A5:** This is a very insightful question, and we thank the reviewer for this point. Intuitively, the method might easily generate a biased estimation when there is no support for a certain class in the mini-batch. Then, the cluster quality might be affected by the probability of encountering a missing class, which potentially correlates with one important factor, i.e., the batch size. Empirically, as demonstrated in Table 13 in Appendix F.4, we observe that the performance drops when reducing the batch size by a factor of 4 on CIFAR-LT.
This can potentially be attributed to the higher probability of encountering situations where certain classes are missing under a smaller batch size. We appreciate the valuable question and will include more detailed experiments and discussion in the revision. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the very detailed rebuttal. The explanations and new experiments on CC3M are very helpful. I have no additional concerns at this moment. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Dear Reviewer pT3P, We sincerely appreciate you taking the time to review our responses and contributing to improving this paper. We will incorporate the mentioned points in the updated version. Thank you once again for your dedicated and valuable contribution! Best, Authors of Paper 11308
Summary: This paper concentrates on the long-tailed distribution problem in SSL representation learning. To overcome the challenge in vanilla SSL methods, where head classes with a huge number of samples dominate the feature regime, the authors propose the Geometric Harmonization (GH) method to encourage category-level uniformity. Specifically, GH measures the population statistics of the embedding space, and then infers a fine-grained instance-wise calibration to constrain the space expansion of head classes while avoiding the collapse of tail classes. Extensive results show the effectiveness of GH, with high tolerance to the long-tailed distribution problem. Strengths: -1- This paper investigates the drawback of the contrastive learning loss in the SSL long-tailed context, and shows that the resulting sample-level uniformity is an intrinsic limitation to representation parity, as shown in Figure 1. The motivation of pursuing category-level uniformity is clear. -2- This paper is well-written and easy to follow. The proposed GH method, which incorporates geometric clues from the embedding space to calibrate the training loss, is interesting and solid. -3- The proposed GH loss is versatile and can be easily plugged into existing SSL methods. The extensive experiments and ablation studies demonstrate its effectiveness in learning robust representations. Weaknesses: -1- About the surrogate label allocation. What is the advantage of choosing discriminative clustering [1]? What if we choose another clustering method such as K-means? How does the quality of the geometric label affect the final training results? When verifying the assumption that the constructed surrogate geometric labels are mutually correlated with the oracle labels, can the authors provide more intuitive results? -2- In Eq. 4, do you need to design how to balance these two losses? -3- Table 6 shows the results on class-balanced data.
Can you explain why, in some cases, incorporating GH will lead to a performance drop? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We gratefully thank you for your time and effort devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions. > **Q1:** About the surrogate label allocation. What is the advantage of choosing discriminative clustering [1]? What if we choose another clustering method such as K-means? How does the quality of the geometric label affect the final training results? When verifying the assumption that the constructed surrogate geometric labels are mutually correlated with the oracle labels, can the authors provide more intuitive results? **A1:** We appreciate the reviewer's insightful questions. Below are our replies to each subquestion. **Advantage:** Our discriminative clustering approach offers the following advantages: (1) it enables a flexible class prior that can adapt to various distributions, (2) it refines assignments in an efficient manner, and (3) it is theoretically grounded to achieve the desired category-level uniformity along with the geometric uniform structure, effectively addressing the challenges posed by the long-tailed effect. Note that we would like to re-clarify that straightforwardly applying the discriminative clustering of SeLA [17] or the variant SwAV [18] cannot work; their differences from ours are discussed in Lines 134-152 and empirically verified in Table 1 of the original submission. **Drawback of K-means:** The K-means algorithm tends to generate clusters with relatively uniform sizes, which affects the clustering performance under class-imbalanced scenarios [19]. To gain more insights, we conduct empirical comparisons using K-means as the clustering algorithm and evaluate the NMI score with ground-truth labels and the linear probing accuracy on CIFAR-LT-R100.
| Method | Accuracy | NMI score |
|:--|:--|:--|
| SimCLR [11] | 50.72 | 0.28 |
| +K-means | 51.44 | 0.35 |
| +GH | 53.96 | 0.50 |

From the results, we can see that K-means generates undesired assignments with a lower NMI score and achieves unsatisfying performance compared with our GH. This observation is consistent with previous studies [19]. **More intuitive results:** To enhance the understanding of the proposed surrogate label allocation, we conduct both quantitative and qualitative analyses. (1) Quantitative analyses: We conduct more thorough experiments with several metrics [28,29] on CIFAR-LT, including: - Inter-Class Uniformity: $U=\frac{1}{L(L-1)}\sum_{i=1}^L\sum_{j=1, j\neq i}^L||\mu_i-\mu_j||_2$, where we have $L$ classes and $\mu$ denotes the class means. It evaluates the average distance between different class centers. - Neighborhood Uniformity: $U_k=\frac{1}{Lk}\sum_{i=1}^L\min_{j_1,\cdots, j_k}(\sum_{m=1}^k||\mu_i-\mu_{j_m}||_2)$, where $j_1, \cdots, j_k \neq i$ represent different classes. It measures how close one class is to its neighbors. Based on the results in **Table R5** in the complementary PDF file, the observations are as follows: our method exhibits significant improvements in both inter-class uniformity and neighborhood uniformity when compared with the baseline SimCLR [11]. This indicates that vanilla contrastive learning struggles to achieve a uniform partitioning of the embedding space, while our proposed method effectively mitigates this issue. (2) Qualitative analyses: We conduct t-SNE visualization of SimCLR and SimCLR with GH on CIFAR-LT. For simplicity, we randomly select four head classes and four tail classes to generate the t-SNE plots.
Based on the results in **Figure R1** in the complementary PDF file, the observations are as follows: (1) SimCLR: head classes exhibit a large presence in the embedding space and heavily squeeze the tail classes; (2) GH: head classes reduce their occupancy, allowing the tail classes to have more space. This further indicates that the constructed surrogate labels can serve as high-quality supervision, effectively guiding the harmonization towards the geometric uniform structure. > **Q2:** In Eq. 4, do you need to design how to balance these two losses? **A2:** In the submission, we have not introduced a hyperparameter to balance the two losses; instead, we set a default weight of 1.0 (termed $w_{GH}$) on the GH loss in Eq. 4 across all the experimental results. To address the question regarding balancing the contrastive loss and our GH loss, we conduct experiments with different weights $w_{GH}$ on CIFAR-LT-R100 (please refer to **Figure R2** in the complementary PDF file). From the results, we can see that our method generally achieves comparable performance across different configurations of the weight $w_{GH}$. We appreciate the reviewer's question and will add the discussion and empirical comparisons in the revision. > **Q3:** Table 6 shows the results on class-balanced data. Can you explain why, in some cases, incorporating GH will lead to a performance drop? **A3:** Thanks for your detailed question. We would like to kindly clarify that our GH is generally comparable with the baseline methods, which aligns with our expectations. Specifically, our GH is designed to offer advantages on imbalanced datasets while avoiding unreasonable degradation of contrastive learning on balanced datasets. In Table 6, the proposed GH shows minor improvements over Focal [12], SDCLR [14] and BCL [16], and performs worse than SimCLR and DnC [15] by less than 0.3\%.
The minor decrease in performance could potentially be attributed to some random factors during training (like weight initialization and data augmentation) or the negligible effect of the GH loss, as it might not reach an absolute zero value. --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: Thanks for the response that addressed my concerns, and I will keep my positive score. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Dear Reviewer TBJn, We sincerely appreciate you taking the time to review our responses and contributing to improving this paper. We will carefully follow the reviewer's advice to incorporate all the addressed points in the updated version. Best, Authors of Paper 11308
Rebuttal 1: Rebuttal: We gratefully thank all the reviewers for their devoted efforts and constructive suggestions on this paper. We are glad that the reviewers have some positive impressions of our work, including: - Exploration of an **important and useful** problem (M9kH, pT3P, G4d9). - **The motivation is clear** (krnL, M9kH, TBJn). - The method is **interesting, novel, solid, reasonable**, and can be **seamlessly** integrated into existing SSL methods (krnL, M9kH, TBJn, pT3P, G4d9, R4di). - **Extensive and solid** experiments with **plenty of ablation studies** (krnL, M9kH, TBJn, pT3P). - The paper is **well-written and easy to follow** (krnL, M9kH, TBJn, pT3P, G4d9). We have addressed the reviewers' comments and concerns in **individual responses to each reviewer**. The reviews allowed us to improve our draft, and the changes made in our responses are summarized below: - To back up the importance of studies on self-supervised long-tailed learning, we provide further evidence with both quantitative and qualitative empirical studies; see Tables R3-R5 and Figure R1. - To address the concerns about the cross-dataset transferability of the proposed GH, we add more empirical evidence on various cross-dataset tasks pretrained on the large-scale long-tailed CC3M; see Tables R3-R4. - We incorporate our method into more self-supervised methods, including BYOL and MoCo-v3 (see Table R1). Besides, we add empirical comparisons with several concurrent studies, including PMSN and TS (see Table R2). - We further expand the ablation studies with comprehensive experiments, including sensitivity analysis of hyperparameters (see Figure R3), training configurations (see A2 to Reviewer pT3P), t-SNE visualization (see Figure R1) and uniformity analysis (see Table R5).
- We provide a more in-depth and comprehensive analysis of several points, including the difference between SSL-LT and S-LT, the limitation of vanilla contrastive learning, the contribution of GH to SSL-LT, the behavior of GH on other SSL methods, the assumption that samples from the same class lie in neighboring regions, and the advantages of surrogate label allocation. **We appreciate all reviewers' great effort again!** We have tried our best to address your concerns and improve the paper following the suggestions. **Would you mind checking it and confirming whether any parts remain unclear?** **Tables:**

**Table G1.** Taxonomy of self-supervised long-tailed methods.

| Method | Aspect | Description |
| ------ | ----------- | ------------------- |
| Focal [12] | Sample Reweighting | Hard example mining |
| rwSAM [13] | Optimization Surface | Data-dependent sharpness-aware minimization |
| SDCLR [14] | Model Pruning | Model pruning and self-contrast |
| DnC [15] | Model Capacity | Multi-expert ensemble |
| BCL [16] | Data Augmentation | Memorization-guided augmentation |
| GH | Loss Limitation | Geometric harmonization |

**References:** [1] Piyush Sharma et al. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. ACL 2018. [2] Jia Deng et al. Imagenet: A large-scale hierarchical image database. CVPR 2009. [3] Bolei Zhou et al. Places: A 10 million image database for scene recognition. TPAMI 2017. [4] Catherine Wah et al. The caltech-ucsd birds-200-2011 dataset. 2011. [5] Subhransu Maji et al. Fine-grained visual classification of aircraft. 2013. [6] Jonathan Krause et al. 3d object representations for fine-grained categorization. ICCV workshop 2013. [7] Aditya Khosla et al. Novel dataset for fine-grained image categorization: Stanford dogs. CVPR workshop 2011. [8] Van Horn et al. Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection. CVPR 2015.
[9] Tsung-Yi Lin et al. Microsoft coco: Common objects in context. ECCV 2014. [10] Quentin Garrido et al. On the duality between contrastive and non-contrastive self-supervised learning. ICLR 2023. [11] Ting Chen et al. A simple framework for contrastive learning of visual representations. ICML 2020. [12] Tsung-Yi Lin et al. Focal loss for dense object detection. ICCV 2017. [13] Hong Liu et al. Self-supervised learning is more robust to dataset imbalance. ICLR 2022. [14] Ziyu Jiang et al. Self-damaging contrastive learning. ICML 2021. [15] Yonglong Tian et al. Divide and contrast: Self-supervised learning from uncurated data. ICCV 2021. [16] Zhihan Zhou et al. Contrastive Learning with Boosted Memorization. ICML 2022. [17] Asano Yuki Markus et al. Self-labelling via simultaneous clustering and representation learning. ICLR 2019. [18] Mathilde Caron et al. Unsupervised learning of visual features by contrasting cluster assignments. NeurIPS 2020. [19] Jiye Liang et al. The K-means-type algorithms versus imbalanced data distributions. TFS 2012. [20] Mahmoud Assran et al. The hidden uniform cluster prior in self-supervised learning. ICLR 2023. [21] Anna Kukleva et al. Temperature schedules for self-supervised contrastive methods on long-tail data. ICLR 2023. [22] Kaiming He et al. Momentum contrast for unsupervised visual representation learning. CVPR 2020. [23] Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. CVPR 2021. [24] Jure Zbontar et al. Barlow twins: Self-supervised learning via redundancy reduction. ICML 2021. [25] Jean-Bastien Grill et al. Bootstrap your own latent-a new approach to self-supervised learning. NeurIPS 2020. [26] Xinlei Chen et al. An Empirical Study of Training Self-Supervised Vision Transformers. ICCV 2021. [27] Kaiming He et al. Masked autoencoders are scalable vision learners. CVPR 2022. [28] Tongzhou Wang and Phillip Isola. 
Understanding contrastive representation learning through alignment and uniformity on the hypersphere. ICML 2020. [29] Tianhong Li et al. Targeted supervised contrastive learning for long-tailed recognition. CVPR 2022.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studied self-supervised representation learning with implicitly class-imbalanced data, in contrast to most prior works that assume class balance. To do this, this paper proposed to augment the existing contrastive learning method with a novel geometric harmonization (GH). Intuitively, GH pulls samples toward the closest prototype in the embedding space, which differs from previous cluster-/prototype-based methods in the learnable prototypes and marginal prior. Through extensive experiments on several standard long-tail classification benchmarks, the authors demonstrated the efficacy of the proposed method with different contrastive learning methods as starting points. Strengths: 1. The paper is generally well-written and easy to follow. 2. The paper is well-motivated. Class imbalance occurs in most real-world scenarios, and existing contrastive learning methods fail to take this into account. This paper provides a good exploration of this direction. 3. The proposed geometric harmonization regularization is interesting. Though prototype-based constraints are already studied in contrastive learning [1, 5, 12], GH carefully considers the class imbalance by its design and is thus robust to such imbalance. 4. The experiment results are strong compared with the baselines on imbalanced data. Moreover, the proposed method even performs on par with the baseline with balanced data. Weaknesses: 1. The geometric harmonization relies heavily on the assumption that samples from the same class intrinsically lie in neighboring regions in the embedding space to estimate the assignment and prior. While this condition can be satisfied on curated datasets like CIFAR and ImageNet, the case might be different for uncurated datasets, e.g., YFCC or a subset of it. Some analysis on this front would further strengthen this paper. 2. Lack of transfer learning experiments. In all the experiments, the models were pretrained and then finetuned on the same datasets.
It is unclear whether the learned representations are general enough to transfer to other data distributions and different tasks. Possible options are adding cross-dataset experiments (e.g., YFCC-to-ImageNet-LT, ImageNet-LT-to-CIFAR-LT) and transfer learning on MS-COCO object detection as in [5]. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The major limitation is that geometric harmonization relies heavily on the assumption that samples from the same class intrinsically lie in neighboring regions in the embedding space, which may not hold for real-world data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
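The review's intuition above, that GH pulls each sample toward its closest prototype in the embedding space, can be sketched generically. The snippet below is only an illustrative nearest-prototype objective on the unit hypersphere; the function name, hard assignment, and random data are our assumptions, not the paper's actual GH loss (which additionally involves learnable prototypes and a marginal prior):

```python
import numpy as np

def nearest_prototype_loss(z, protos):
    """Toy stand-in for a prototype-pulling objective: each embedding is
    assigned to its most similar prototype on the unit sphere, and the
    loss rewards similarity to that prototype. Illustrative only."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sim = z @ protos.T                       # cosine similarities
    assign = sim.argmax(axis=1)              # hard nearest-prototype assignment
    loss = -sim[np.arange(len(z)), assign].mean()
    return loss, assign

rng = np.random.default_rng(0)
z = rng.normal(size=(6, 4))        # 6 toy embeddings in R^4
protos = rng.normal(size=(3, 4))   # 3 toy prototypes
loss, assign = nearest_prototype_loss(z, protos)
print(loss, assign)
```

Minimizing such a loss pulls each embedding toward its assigned prototype; the learnable prototypes and marginal prior described in the review would replace the fixed random prototypes and hard argmax here.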
Rebuttal 1: Rebuttal: We gratefully thank you for your time and efforts devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions. > **Q1:** The geometric harmonization relies heavily on the assumption that samples from the same class intrinsically lie in neighboring regions in the embedding space to estimate the assignment and prior. While this condition can be satisfied on curated datasets like CIFAR and ImageNet, it might be a different case for uncurated datasets, e.g., YFCC or a subset of it. Some analysis on this end would further strengthen this paper. **A1:** This is a very insightful question, and we thank the reviewer for this point. We agree with the reviewer about the potential presence of scattered samples from the same classes across different regions (rather than neighboring regions) in the embedding space, particularly in the context of large-scale long-tailed data distributions. In this case, our method will leverage the inherent pattern or semantic cluster information of the data in fine-grained regions to mitigate the disparity. Such potential provides the model with a chance to learn more general-purpose representations correlated with clusters at various levels of granularity, going beyond the observed labels, thus promoting the generality and transferability of the pretrained representations. We do appreciate the reviewer's advice and have conducted experiments on the **large-scale, long-tailed and uncurated dataset CC3M** [1] to address this possible concern, as detailed in **A2**, and we will include the discussion in the submission. > **Q2:** Lack of transfer learning experiments. In all the experiments, the models were pretrained and then finetuned on the same datasets.
Possible options are adding cross-dataset experiments (e.g., YFCC-to-ImageNet-LT, ImageNet-LT-to-CIFAR-LT) and transfer learning on MS-COCO object detection as in [5]. **A2:** We appreciate the reviewer’s constructive advice and have conducted more comprehensive experiments on various cross-dataset transferring tasks with distinct characteristics as follows: - Pretraining datasets - CC3M [1]. CC3M is a **large-scale, long-tailed** dataset with more than 3 million images. We pretrain SimCLR [11] and SimCLR with GH on CC3M for comparison. - Transferring to downstream classification - Curated datasets: ImageNet [2] and Places [3]. We randomly subsample a balanced subset of ImageNet and Places for downstream finetuning, with the number of images per class set to 100. - Fine-grained datasets: CUB200 [4], AirCraft [5], Stanford Cars [6], Stanford Dogs [7], NABirds [8]. These datasets require semantic features to distinguish categories at a fine-grained granularity. - We report the average accuracy of finetuning our CC3M-pretrained weights for classification on these datasets with a ResNet50 backbone. - Transferring to downstream object detection - COCO2017 [9]. We report the bounding box AP of finetuning our CC3M-pretrained weights for object detection on COCO2017 using Faster-RCNN with a ResNet50-FPN backbone. - Transferring to downstream segmentation - COCO2017 [9]. We report the mask AP of finetuning our CC3M-pretrained weights for segmentation on COCO2017 using Mask-RCNN with a ResNet50-FPN backbone. From the results in **Tables R3-R4** in the complementary PDF file, we can see that our proposed GH consistently outperforms the baseline methods across various tasks and datasets. This indicates the superiority of GH in improving contrastive learning on imbalanced datasets, yielding better model generalization to a range of real-world scenarios.
Besides, for more complete comparisons under different baselines, we will update the results in the submission once we finish all experiments. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Thanks for the great effort in the rebuttal. It resolves my concerns about the premise and the transferability of this method. I have no additional concerns now and would like to raise the score. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Dear Reviewer M9kH, We sincerely appreciate you taking the time to review our responses and contributing to improving this paper. We will incorporate all the addressed points in the updated version. Thank you once again for your dedicated and valuable contribution in reviewing our paper! Best, Authors of Paper 11308
Summary: This paper addresses the class-imbalance problem under the SSL setting. It shows that vanilla SSL methods pursuing sample-level uniformity easily lead to representation learning disparity, where head classes with huge sample numbers dominate the feature regime while tail classes with small sample numbers passively collapse. It proposes Geometric Harmonization (GH) to encourage category-level uniformity in representation learning. The GH loss can be easily integrated into existing SSL methods, and it improves performance over baselines on the class-imbalance problem under the SSL setting, based on the experimental results. Strengths: 1. This paper defines category-level uniformity in the embedding space that SSL learns. The motivation for using category-level uniformity for the class-imbalance problem is clear. 2. The proposed Geometric Harmonization (GH) loss improves performance over baselines on the class-imbalance problem under the SSL setting, based on the experimental results. 3. The presentation of this paper is generally clear and easy to follow. Weaknesses: 1. In the experiments of Section 4.3, it would be better to conduct large-scale ImageNet pretraining with the proposed methods and then transfer to other datasets (the so-called “representation transferability”), rather than only SSL pre-training and finetuning on the same datasets. 2. I personally believe the theories and analyses are both specific to contrastive learning, rather than to other SSL methods (e.g., the non-contrastive SSL methods BYOL and SimSiam). Even though this paper conducts an experiment on other SSL methods in Section 4.4, it is not clear how the theory/analyses work for these SSL methods. Do the theoretical analyses still hold for other SSL methods? This paper should at least mention it. 3. Lastly, I have concerns about whether the imbalance problem under SSL is an important research topic.
I definitely agree that the class imbalance problem is very important. However, it seems odd to consider the class imbalance problem under the SSL scenario. As everyone knows, SSL is regarded as pre-training on large-scale unlabeled (class-free) data, followed by transfer to downstream tasks (not specific to the classification problem). I would consider the contributions significant if this were the first paper to raise the imbalance problem under the SSL setting. However, with the several existing works cited in the paper, I do not recognize the significance this paper contributes to the ML community. I think this paper would be stronger if it considered transfer to class-imbalanced object detection/semantic segmentation tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Address Weakness 2 and 3. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
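For readers unfamiliar with the "sample-level uniformity" terminology used in this review, the uniformity of an embedding set is commonly measured with the Gaussian-kernel metric of Wang and Isola [28]. A minimal sketch (toy data and function name are ours, not the paper's):

```python
import numpy as np

def uniformity(z, t=2.0):
    # Uniformity metric of Wang & Isola [28]: log of the average
    # Gaussian-kernel similarity over all embedding pairs; more negative
    # values mean the (normalized) embeddings are spread more uniformly
    # over the hypersphere.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    i, j = np.triu_indices(len(z), k=1)
    return np.log(np.exp(-t * d2[i, j]).mean())

rng = np.random.default_rng(0)
spread = rng.normal(size=(64, 8))                               # roughly uniform directions
collapsed = np.ones((64, 8)) + 1e-3 * rng.normal(size=(64, 8))  # near-collapse

print(uniformity(spread), uniformity(collapsed))
```

A collapsed embedding set scores near zero, while well-spread embeddings score much lower; the review's point is that optimizing this at the sample level ignores category structure under class imbalance.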
Rebuttal 1: Rebuttal: We gratefully thank you for your time and efforts devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions. > **Q1:** Cross-dataset transferring experiments. **A1:** We appreciate the reviewer’s constructive advice and have conducted more comprehensive experiments on various cross-dataset transferring tasks to evaluate representation transferability as follows: - Pretraining datasets - CC3M [1]. CC3M is a **large-scale, long-tailed** dataset with more than 3 million images. We pretrain SimCLR [11] and SimCLR with GH on CC3M for comparison. - Transferring to downstream classification - Curated datasets: ImageNet [2] and Places [3]. We randomly subsample a balanced subset of ImageNet and Places for downstream finetuning, with the number of images per class set to 100. - Fine-grained datasets: CUB200 [4], AirCraft [5], Stanford Cars [6], Stanford Dogs [7], NABirds [8]. These datasets require semantic features to distinguish categories at a fine-grained granularity. - We report the average accuracy of finetuning our CC3M-pretrained weights for classification on these datasets with a ResNet50 backbone. - Transferring to downstream object detection - COCO2017 [9]. We report the bounding box AP of finetuning our CC3M-pretrained weights for object detection on COCO2017 using Faster-RCNN with a ResNet50-FPN backbone. - Transferring to downstream segmentation - COCO2017 [9]. We report the mask AP of finetuning our CC3M-pretrained weights for segmentation on COCO2017 using Mask-RCNN with a ResNet50-FPN backbone. From the results in **Tables R3-R4** in the complementary PDF file, we can see that our proposed GH consistently outperforms the baseline across various tasks and datasets. It further demonstrates the importance of **considering long-tailed data distribution under large-scale unlabeled data in the pretraining stage**.
This can potentially be attributed to the fact that our geometric harmonization promotes a more balanced and general embedding space, improving the generalization ability of the pretrained model to a range of real-world downstream tasks. Besides, for more complete comparisons under other baselines, we will update the results in the submission once we finish all experiments. > **Q2:** Analysis of GH on other SSL methods. **A2:** Thanks for the constructive suggestions. We agree with the reviewer that our theorem and analyses are specific to contrastive learning. In terms of other non-contrastive SSL methods, we empirically show the superiority of our method on long-tailed data distributions in Section 4.4. Although it might not be straightforward to extend the theory to non-contrastive SSL methods, one explanation for the consistent superiority is that some non-contrastive methods still exhibit a representation disparity similar to their contrastive counterparts, and our proposed method can similarly reallocate the geometric distribution to counteract the distorted embedding space. Notably, a recent study [10] theoretically and empirically explores the equivalence between contrastive and non-contrastive criteria, which may shed light on the intrinsic mechanism of how our GH benefits the non-contrastive paradigm. We appreciate the reviewer’s question, and will include these discussions about our theorem in the revision for clarity. > **Q3:** The importance of considering imbalanced learning in SSL scenarios, the contribution of GH, and the transferring experiments on object detection/segmentation tasks. **A3:** Thanks for the comments, although they present a challenging point of view. With respect, we maintain that considering imbalanced learning in SSL scenarios can also be critical to representation generalization.
Following the reviewer's advice in Q1, we conduct representation learning on the large-scale long-tailed CC3M and verify that our GH inherently provides benefits for generalization on different downstream tasks, including detection and segmentation as suggested. As completely transferring all baselines on large-scale datasets is time-consuming and computationally expensive, we will update the comprehensive experiments in the submission as soon as possible. Besides, we would like to reclarify our taxonomy as described in **Lines 28-38**: all previous explorations aim to improve self-supervised long-tailed learning, but from different aspects such as reweighting/optimization techniques, architecture design and data augmentation. However, **none of these methods focuses on the intrinsic limitation of contrastive learning**. Concretely, we provide more detailed aspects in **Table G1** in the general response for clarity. Furthermore, to the best of our knowledge, we are the first to **explicitly point out the concept of representation disparity, which is the key drawback we have investigated**, in self-supervised long-tailed learning. We believe that the undesired disparity is the intrinsic limitation of the contrastive loss that hurts representation quality. --- Rebuttal Comment 1.1: Title: Response to authors' rebuttal Comment: I acknowledge and have read the authors' responses. My concerns on “representation transferability" are addressed by the authors' additional experiments. Besides, the results of the additional experiments pretrained on CC3M also increase my interest in the class-imbalance problem under SSL setups. I raised my score from 5 to 6, and hope the authors can include the additional experiments in the revised version. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Dear Reviewer krnL, We sincerely appreciate you taking the time to review our responses and contributing to improving this paper.
We will carefully follow the reviewer's advice to incorporate all the addressed points with additional experiments in the updated version. Thank you once again for your dedicated and valuable contribution in reviewing our paper! Best, Authors of Paper 11308
How do Minimum-Norm Shallow Denoisers Look in Function Space?
Accept (poster)
Summary: The paper studies the shape and properties of one-hidden-layer networks on a denoising problem. Specifically, the authors compute a closed-form solution for an NN trained offline with regularization and show that its ability to generalize is better than the eMMSE estimator -- which acts as a piece-wise constant function -- in a low-noise regime and in 1d. Then they provide results on multivariate cases where the training data points are contained in a lower-dimensional subspace, showing that the image of the network is also contained in the subspace, and deriving closed forms for specific cases where the data are aligned. Strengths: Regarding the presentation, the paper is clearly written and each theoretical result is well-explained, making the reading easier. Regarding the content, the theorems seem new and provide interesting insights on the behavior of simple networks on the denoising problem. The authors provide a complete study of the univariate case when the noise level is reasonable, and give insights on what happens in higher dimensions. Numerical illustrations support the theoretical claims. Weaknesses: The study is limited to low noise levels (the noisy samples' supports can't intersect), and it would have been interesting to have the authors' opinion on whether removing Assumption 1 changes the behavior shown in Figure 1 (via theoretical development or even empirical illustrations). In particular, Theorem 1 is valid in the limit when $\sigma \to 0$, while we would have preferred a "threshold" (as the authors mention, there exists a threshold for the strict inequality but it is potentially very small). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Could the authors illustrate what happens in practice (in 1d or in higher dimension) when the noise level increases? In particular, do we still have a gain using an NN rather than the eMMSE? * Is it possible to give bounds on $\sigma$ for which Theorem 1 is valid?
Otherwise, could the authors provide MSE curves highlighting the difference between NN and eMMSE depending on $\sigma$ (in the case where the noisy samples' supports are disjoint and in the general case)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations are addressed by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
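The review's remark that the eMMSE estimator acts as a piecewise-constant function in the low-noise regime can be illustrated numerically. The sketch below assumes a discrete empirical prior over a few clean training points and additive Gaussian noise; the training points and noise levels are toy values of our choosing, not the paper's setup:

```python
import numpy as np

def emmse(y, x_train, sigma):
    # Posterior mean of the clean signal given noisy y, assuming an
    # empirical (discrete, uniform) prior over the clean training points
    # and additive Gaussian noise of standard deviation sigma:
    #   E[x | y] = sum_n x_n exp(-(y - x_n)^2 / (2 sigma^2)) / normalizer
    logits = -(y[:, None] - x_train[None, :]) ** 2 / (2.0 * sigma ** 2)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    return w @ x_train

x_train = np.array([-2.0, 0.0, 3.0])
y = np.array([1.8, 2.2, 2.6])                 # all closest to the point 3.0

low = emmse(y, x_train, sigma=0.05)   # low noise: snaps to the nearest point
high = emmse(y, x_train, sigma=1.0)   # higher noise: varies smoothly with y
print(low, high)
```

At low noise the estimate collapses to the nearest training point (hence piecewise constant in y), while at higher noise it blends the training points smoothly, which matches the reviewer's question about what changes as $\sigma$ grows.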
Rebuttal 1: Rebuttal: __Answers to the questions:__ __C:__ What happens in practice when the noise level increases? __R:__ Please see Figure 1 (in the pdf file attached to the 'global' rebuttal) for the MSE of the NN denoiser and eMMSE vs $\sigma$ (the noise level). As can be seen from Figure 1, the NN denoiser performs better at noise levels for which we still have visible information in the noisy image. __C:__ Is it possible to give bounds on $\sigma$ for which Theorem 1 is valid? __R:__ Given the probability density function of x (the clean image), we can calculate the critical noise level for which Theorem 1 holds. The critical noise level can change significantly depending on the probability density function of x. For example, if the probability density function has high “mass” in between the training points, then the critical noise level is large. However, if the density function has low “mass” between the training points, the critical noise level is small. Please see Figure 1 (in the PDF file) for the MSE curves. As can be seen, the critical noise level in this case is large ($\sigma$ ~ 5). --- Rebuttal Comment 1.1: Title: Thanks for the answer Comment: Thanks to the authors for their answer and the clarifications regarding the noise level.
Summary: This paper studies what two-layer ReLU denoising networks look like when minimizing common losses such as the empirical minimum mean square error, an empirical alternative that draws finitely many noisy samples, as well as representation costs that find the data-interpolating function with minimal norm of the weights. It is proven that - for univariate data - representation cost minimizers generalize better than empirical minimum mean square error minimizers in the low-noise regime. Moreover, the solution to the representation costs is explicitly stated in the univariate case as well as in the multivariate case for specific geometric configurations, including one conjecture. Small numerical experiments illustrate that the theory is correct and the conjecture is justified. Strengths: - The paper is very well written and easy to follow despite its theoretical nature - It characterizes several interesting and important properties of shallow denoising networks in the low-noise regime for univariate data. - It gives inspiring insights on the behavior in the multivariate case (under particular assumptions). - It encourages further research in the investigated direction by posing one (partially open) conjecture, as well as by accurately stating several limitations (tied to very interesting directions of future research). - It contributes to our understanding of denoisers in function space Weaknesses: I am not familiar enough with the explicit characterization of networks that minimize representation costs to judge the novelty and impact of the contribution. I was a little surprised to read "Hanin [2021] gave a full characterization of univariate representation cost minimizers subject to data interpolation constraints", and that there are follow-up extensions, which seem very relevant to the paper at hand. Beyond this, except for the limitations that have already been stated by the authors themselves, I do not see any major weaknesses.
In practical terms, I am now curious to what extent larger (deeper and more sophisticated, e.g., convolutional or normalization-including) models inherit some of the shown properties, but can understand that this goes beyond the scope of the paper. Also, is the M<<d problem discussed in line 333 a reason why adversarial examples exist? The strict interpolation on a ball would prevent them. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Please clarify in what way your work extends previous characterizations of minimal representation cost solutions. - And just out of curiosity - if "Savarese et al. [2019] showed that the representation cost of a function realizable as a univariate two-layer ReLU network coincides with the L1-norm of the second derivative of the function" (and assuming that the second derivative of a ReLU network has some meaning like the total variation of the (step-function-like) first derivative), doesn't Proposition 1 follow from this quite directly? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have addressed the limitations well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __Response to the weaknesses points:__ __C:__ Do larger models inherit some of the shown properties? __R:__ There are several properties that we hypothesize to hold for larger models (but it goes beyond the scope of this paper). (1) We generalized Proposition 2 to the case of an obtuse simplex (see the “global” response). An interesting insight we deduce from our results on the obtuse simplex is that the NN denoiser (in this case) is bounded, which is a desired property for a denoiser. We hypothesize it is also true for larger models. (2) We proved that the minimizer is contractive towards the training points (for univariate data). This is akin to phenomena shown empirically in diverse settings (multilayer Auto-Encoders, CNNs, and FCNs [A]). We hypothesize that larger models are locally contractive towards the training points. [A] Radhakrishnan, A., Yang, K., Belkin, M. and Uhler, C., 2018. Memorization in overparameterized autoencoders. __C:__ Is the M<<d problem discussed in line 333 a reason why adversarial examples exist? __R:__ It is an interesting point. You are right: if the denoiser output is constant and equal to $x_n$ on a ball of radius $\rho$ around $x_n$, then adversarial examples would be prevented altogether. It is an interesting future research topic to empirically verify whether NN denoisers are more robust to adversarial attacks when we increase the number of noisy samples per image. __Answers to the questions:__ __C:__ Please clarify in what way your work extends previous characterizations of minimal representation cost solutions. __R:__ For univariate data, the characterization in Hanin [2021] is possible because the representation cost reduces to the L1-norm of the 2nd derivative of the function, as shown in Savarese et al. [2019]. Since the 2nd derivative only acts locally, the minimum representation cost can be found by minimizing this quantity separately over intervals between data points.
In the multivariate setting, the representation cost is more complicated, and involves the Radon transform of the function – a highly non-local operation – that complicates the analysis. Parhi and Nowak [2020] prove a representer theorem showing that there always exists a minimum representation cost interpolant realizable as a shallow ReLU network with finitely many neurons, and Ergen and Pilanci [2021] give an implicit characterization of representation cost minimizers as the solution to a convex optimization problem. However, to the best of our knowledge, there are no results in the literature explicitly characterizing representation cost minimizers in the case of multivariate inputs, even for networks having scalar outputs. Therefore, for this paper we had to develop new tools and approximations. To simplify the problem, we assume norm-ball interpolation constraints (Equation (19) in the paper), in place of finite noisy realizations. This type of approximation is novel, and allows us to give explicit characterizations of representation cost minimizers under specific geometric assumptions on the training points. Thank you for your comment; we will add this discussion to the revised paper. __C:__ Doesn't Proposition 1 follow from Savarese et al. [2019] directly? __R:__ Savarese et al. [2019] considered the case of shallow ReLU networks with an unbounded number of neurons. In Proposition 1 we found an explicit form for the minimizer of the representation cost with a __finite__ number of neurons. In addition, Savarese et al. [2019] consider the case of a one-hidden-layer ReLU network __without__ a skip connection. Lastly, in the case of denoising, Hanin’s [2021] result guarantees a unique minimizer for the representation cost. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks a lot for the detailed response and the additional results which further strengthen the paper! I clearly recommend the acceptance of this work!
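The Savarese et al. [2019] identity discussed in this exchange, that for a univariate two-layer ReLU network the L1-norm of the second derivative (the total variation of f') equals $\sum_i |a_i w_i|$ when the breakpoints $-b_i/w_i$ are distinct, can be sanity-checked numerically. The weights below are random toy values of our own construction, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8
a = rng.normal(size=m)          # output weights
w = rng.normal(size=m)          # input weights
t = rng.uniform(-3, 3, size=m)  # breakpoint locations
b = -w * t                      # biases placing the breakpoints at t

def f(x):
    # univariate two-layer ReLU network: f(x) = sum_i a_i relu(w_i x + b_i)
    return np.maximum(np.outer(x, w) + b, 0.0) @ a

# Numerical total variation of f' on a fine grid covering all breakpoints:
# f' is piecewise constant, jumping by a_i * w_i at each breakpoint, so the
# sum of |slope changes| should equal sum_i |a_i w_i|.
x = np.linspace(-5.0, 5.0, 200001)
slopes = np.diff(f(x)) / np.diff(x)
tv = np.abs(np.diff(slopes)).sum()      # numerical L1-norm of f''
print(tv, np.abs(a * w).sum())          # the two quantities agree
```

This locality of the second derivative is exactly what the rebuttal points to as the reason the univariate characterization works interval by interval, and what fails in the multivariate Radon-transform setting.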
Summary: The elementary properties of NN solutions for the denoising problem have been explored, with a focus on offline training of a one-hidden-layer ReLU network. When the noisy clusters of the data samples are well-separated, there exist multiple networks with zero loss, even in the case of under-parameterization, while having different representation costs. In the univariate case, a closed-form solution to such global minima with minimum representation cost has been derived. It has also been demonstrated that the univariate NN solution generalizes better than the eMMSE denoiser. In the multivariate case, it has been shown that the interpolating solution with minimal representation cost is aligned with the edges and/or faces connecting the clean data points in several basic cases. Strengths: The NN solutions are studied in the setting of interpolation of noisy samples with minimal representation cost, in a practically relevant “low noise regime” where the noisy samples are well clustered. In the univariate case, a closed-form solution for the minimal representation cost NN denoiser is derived and shown to have better generalization behavior than the empirical minimum MSE denoiser. In the multivariate case, a closed-form solution for the minimal representation cost NN denoiser is derived under various assumptions on the geometric configuration of the clean training samples. Moreover, a general alignment phenomenon of minimal representation cost NN denoisers is illustrated in the multivariate setting. Weaknesses: Weakness 1: Empirical effectiveness. This paper proposes a shallow denoiser and analyzes its theoretical performance. W1.1 Despite the interesting theoretical properties of the shallow denoiser, it is not clear whether the proposed model performs well in real image denoising applications due to the lack of sufficient empirical evaluation and analysis. W1.2 In the multivariate setting, the authors consider training data on a subspace.
However, it is not clarified in which typical settings this subspace assumption holds. Thus, the empirical reasonability of the subspace-based analysis is not well addressed. Weakness 2: The theoretical findings may not be sufficiently new. For example, the NN solution is shown to be contractive towards the clean data points, which has already been empirically observed in autoencoders by Radhakrishnan et al. [2018]. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see and address the weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I think the authors have discussed the limitations well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __Response to the weakness points:__ __Weakness 1.1:__ __C:__ Does the proposed model perform well in real image-denoising applications? __R:__ Practically successful image-denoising architectures are deep and not fully connected. On such architectures, it is very challenging to obtain any theoretical guarantee without very strong assumptions (e.g. working in a linearized regime, such as the neural tangent kernel), especially before understanding simpler architectures. Therefore, deep learning theory papers researching a new question usually start with the simplest relevant model, as we do here. Specifically, in this paper, we have focused on a shallow NN denoiser, which is both simple enough to start a theoretical inquiry, yet rich enough to provide insightful conclusions about modern denoiser architectures. One such insight that we proved is that the minimizer is contractive towards the training points (for univariate data). This is akin to phenomena shown empirically in diverse settings (multilayer auto-encoders, CNNs, and FCNs [C]). It is an interesting future research topic to empirically verify that practical image denoisers are also locally contractive toward the training points (please see our answer to Weakness 2 for an additional explanation regarding the novelty of this result). Another interesting insight is that the NN denoiser is bounded, which is a desired property for a denoiser. This property holds in all our results, and also in the generalized Proposition 2 for the case of an obtuse simplex (see the “global” response), which provides an explicit solution to a high-dimensional case. __Weakness 1.2:__ __C:__ In which typical settings does the subspace assumption hold? __R:__ In general, natural images are commonly assumed to lie on a low-dimensional nonlinear manifold [A]. There are several applications where the images are assumed to lie on a subspace (linear manifold).
For example, in face recognition and handwritten digits, classical algorithms use PCA and achieve good results, which indicates that the data lies on a linear subspace. __Weakness 2:__ __C:__ The NN solution is shown to be contractive towards the clean data points, which has already been empirically observed in autoencoders. __R:__ Our results differ from [B] in the following: * [B] focused on __empirically__ showing contractivity, while we __proved__ it. We believe there is value in proving important empirical observations, especially in deep learning, where the lack of established theory is a well-recognized problem. * [B] also proved that 2-layer auto-encoder models are contractive, but under rather __unrealistic__ assumptions: (1) the weights of the input layer are fixed and (2) the number of neurons goes to infinity. In contrast, in our theoretical results in the univariate case, the minimizer of the representation cost is contractive towards the training samples without these assumptions (i.e., the minimizer optimizes over both layers and has a finite number of neurons). * [B] used a weaker definition of contraction. Specifically, [B] only showed __local__ contraction, based on the eigenvalues of the Jacobian (linearization of a dynamical system). In contrast, we show __global__ contraction towards the training samples (the word “locally” in definition 2 in the submitted paper is a typo). * [B] examined auto-encoder models, while we examine denoiser models (i.e., auto-encoders trained with noisy inputs). We will clarify this in the revised paper. In addition, we have several new theoretical findings. For example, * we prove generalization results for the univariate case (better generalization than the eMMSE). * We derive a closed-form solution for the minimizer of the representation cost __in the multivariate case__. As far as we know, there are no such results in the literature, even for networks having scalar outputs. [A] Pope, P., Zhu, C., Abdelkader, A., Goldblum, M.
and Goldstein, T., 2021. The intrinsic dimension of images and its impact on learning. [B] Radhakrishnan, A., Yang, K., Belkin, M. and Uhler, C., 2018. Memorization in overparameterized autoencoders. --- Rebuttal Comment 1.1: Title: Response to the reply Comment: Thanks for the authors' feedback. While addressing my weakness 1.2 on the empirical reasonability of the subspace-based analysis, the authors give examples in face recognition and handwritten digits, where PCA achieves good performance. However, it would be more convincing to analyze the subspace assumption on today's large-scale datasets like ImageNet. --- Reply to Comment 1.1.1: Comment: As shown in [A], it is a general phenomenon that large datasets are (approximately) low rank, i.e., lie on a linear subspace. Following the reviewer's suggestion, we also validated the subspace assumption on the following common image datasets: * CIFAR10 * CINIC10 * Tiny ImageNet (a lower resolution version of ImageNet, enabling us to use SVD) * BSD (a denoising benchmark composed of 128×1600 patches of size 40×40 cropped from 400 images [B]) We applied a Singular Value Decomposition (SVD) to each of the above datasets, and calculated the relative number of Singular Values (SV) needed to achieve a given percentile of the energy (for the average vector). __CIFAR10__: 95%, 99%, and 99.9% of the energy is concentrated in 0.8%, 7.5%, and 30% of the SVs, respectively. __CINIC10__: 95%, 99%, and 99.9% of the energy is concentrated in 1%, 23%, and 41% of the SVs, respectively. __Tiny ImageNet__: 95%, 99%, and 99.9% of the energy is concentrated in 1.6%, 20%, and 36% of the SVs, respectively. __BSD__: 95%, 99%, and 99.9% of the energy is concentrated in 0.1%, 1.6%, and 4.5% of the SVs, respectively. As can be seen from the results, the subspace assumption holds for all the datasets we used. We hope we were now able to completely address all the reviewer's concerns; if there are any remaining concerns, please let us know.
[A] Udell, M. and Townsend, A., 2019. Why are big data matrices approximately low rank? [B] Zhang, K., Zuo, W., Chen, Y., Meng, D. and Zhang, L., 2017. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising.
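The SVD-based energy calculation described in the reply above can be sketched in a few lines of NumPy. This is a minimal sketch on synthetic low-rank data; the function name and the centering step are our own assumptions, not the authors' exact procedure:

```python
import numpy as np

def sv_energy_fraction(X, percentile):
    """Fraction of singular values needed to reach `percentile` of the
    total spectral energy (sum of squared singular values) of X."""
    # Assumption: center the data first (one reading of "for the average vector").
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = np.searchsorted(energy, percentile) + 1  # smallest k reaching the percentile
    return k / len(s)

# Synthetic "approximately low rank" data: rank-5 signal plus small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 100)) + 0.01 * rng.normal(size=(500, 100))
print(sv_energy_fraction(X, 0.95))  # a small fraction, as reported for the real datasets
```

On approximately low-rank data, the 95% energy threshold is reached with only a small fraction of the singular values, mirroring the percentages reported for CIFAR10, CINIC10, Tiny ImageNet, and BSD.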
Summary: This paper looks at the denoising problem and compares the neural network denoiser to what the paper calls the eMMSE denoiser. The eMMSE denoiser is the optimal denoiser (in function space) given finite data and a known noise distribution. In contrast, the neural network denoiser has access to a finite number of noisy copies of each of the finite data points and minimizes the empirical noise. The paper shows that even in simple settings (univariate data), the minimum norm neural network denoiser has better generalization than the eMMSE denoiser, as it interpolates in between the data points whereas the eMMSE denoiser acts as a nearest neighbor denoiser. The paper further explores some other settings such as low rank and data on rays. Strengths: The main strength of the paper is finding the explicit form of the minimum norm two-layer denoiser. This is interesting as it identifies the best point in function space that can be represented as a two-layer network. Hence it gives the global minimizer for the neural network, which is traditionally not easy to obtain, and this is important. Other strengths of the paper include that they show these denoisers are contracting and generalize better than some "optimal" denoisers. Weaknesses: There are a few weaknesses in the paper. 1) I think the presentation of the paper can be improved significantly. In many cases, the paper switches between the functional representation and the parametric representation of the function. While both are interesting, it would be nice to have a clear distinction between the two and a way of translating from one representation to the other. 2) While the results of the paper are interesting, there are limitations in the types of data that they consider. Specifically, univariate data, data on a line, and data on the union of lines are the cases in which the paper can identify the min norm solutions. These cases are interesting, and I understand that more general cases are challenging.
Hence I don't think it is necessary to solve more general cases here. However, if the paper could extract some insights from these theoretical cases that might apply to more general cases, this would help strengthen the paper. 3) I think the related works section is missing various theoretical works on denoising. [A-D] look at denoising via factorizations, which fit in nicely with the low-rank structure that the paper looks at, and [E] looks at the denoising regression case. [A] Raj R. Nadakuditi. OptShrink: An Algorithm for Improved Low-Rank Signal Matrix Denoising by Optimal, Data-Driven Singular Value Shrinkage. IEEE Transactions on Information Theory, 2014. [B] Marc Lelarge and Léo Miolane. Fundamental Limits of Symmetric Low-Rank Matrix Estimation. In Proceedings of the 2017 Conference on Learning Theory, 2017. [C] Antoine Maillard, Florent Krzakala, Marc Mézard, and Lenka Zdeborová. Perturbative Construction of Mean-Field Equations in Extensive-Rank Matrix Factorization and Denoising. Journal of Statistical Mechanics: Theory and Experiment, 2022. [D] Emanuele Troiani, Vittorio Erba, Florent Krzakala, Antoine Maillard, and Lenka Zdeborová. Optimal Denoising of Rotationally Invariant Rectangular Matrices. arXiv, abs/2203.07752, 2022. [E] Rishi Sonthalia and Raj Rao Nadakuditi. Training data size induced double descent for denoising feedforward neural networks and the role of training noise. Transactions on Machine Learning Research, 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The paper presents the network with a skip connection, but as far as I can tell, the skip connection version is not studied in the paper, as $R(f)$ (Definition 1, Equation 19) is without the skip connection. 2. The denoiser in Corollary 1: what would be the neural network representation of that function with ReLU activation? 3. The fact that the online denoiser and the offline one are equivalent is interesting.
Do you have any intuition as to why the offline setting captures the online phenomena despite having far fewer samples? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __Response to the weakness points:__ __C: (1)__ the paper switches between functional representations and the parametric representation __R:__ We are not sure what the source of the confusion is. For the parametric representation we only use $h_{\theta}$, as defined in Equation 6 in Section 2. For the function-space representation we only use $f$, as defined in Definition 1 in Section 3 (and we focus on $f$ from that point onwards). This was also the standard notation in previous works (e.g. Savarese et al.). __C: (2)__ insights from the theoretical results which apply to a general case __R:__ We generalized Proposition 2 to the case of an obtuse simplex (see the “global” response), and this provides an explicit solution to a high-dimensional case. We believe that this result will facilitate deriving generalization guarantees of NN denoisers. One interesting insight we deduce from our results on the obtuse simplex is that the NN denoiser (in this case) is bounded, which is a desired property for a denoiser. __C: (3)__ The related works section is missing various theoretical works on denoising __R:__ The papers [A]-[D] indeed also consider the denoising problem with a low-rank prior on the signal, but mostly focus on optimal MMSE denoising, and do not consider neural networks. Our main focus is the structural properties of neural network denoisers because this sheds light on their generalization properties. As denoising is a very large topic, we have simply referred the reader to general surveys on the denoising problem (e.g., the one by Elad et al. 2023) in order to keep the paper focused. The paper [E] is indeed relevant to neural networks, and we will add a discussion on the results obtained in this paper on the double-descent phenomenon in a rank-1 denoising setup in the revised version. __Answers to the questions:__ __C: (1)__ The skip connection version is not studied in the paper __R:__ There appears to be some misunderstanding.
As stated in Definition 1, the representation cost is defined for $h_{\theta}$ as defined in Equation 6, which is a shallow ReLU network __with a skip connection__. All later results find minimizers of $R(f)$ as defined in Definition 1, and therefore also assume shallow ReLU networks with a skip connection. __C: (2)__ What is the neural network representation of the denoiser in Corollary 1? __R:__ Suppose the collinear points $x_n = c_n u$ are ordered such that $c_1 < c_2 < … < c_N$. Then a neural network representation of the denoiser in Corollary 1 is: $f^*(y) = \sum_{n=1}^{N-1} a_n u ([u^T y - (c_n + \rho)]\_+ - [u^T y - (c_{n+1} - \rho)]\_+)$ where $a_n = (c_{n+1}-c_n)/(c_{n+1}-c_n-2\rho)$ and $\rho$ is the radius of the norm-ball constraints (i.e., we assume the denoiser maps every point in the ball of radius $\rho$ centered at $x_n$ to the point $x_n$). __C: (3)__ An intuition as to why the offline captures the online? __R:__ On an intuitive level, the cause is the exponential tail of the Gaussian distribution. Given that the number of noisy samples per image in the offline setting is large enough, we need exponentially more noisy samples per image in the online setting in order to get a different solution. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications. Comment: I will increase my score. I did have one more question. I think the reason I thought (13) did not have the skip connection ($V$) is that the norm of the skip connection is not regularized. Is there a reason this is the case? Do the authors know what might happen if $V$ is included in $R(f)$? --- Reply to Comment 1.1.1: Comment: Thank you for increasing the score. We followed the same setting as in [A] and [B], which did not regularize the skip connection. Previous works also considered other cases: without skip connection ([A] Theorem 2), and regularization on the bias [C].
Adding regularization over the skip connection is an interesting case to consider, which was not directly covered in previous works: [D] and [E] proved related results but in a different setting (constrained path/optimization path in classification vs. regularization path in regression in our case). From these works, it intuitively appears that a “large” target function is “cheaper” to implement without a skip connection (recovering the case without the skip connection), since we can distribute the function scale over two layers instead of one. However, for general target functions the situation is more complicated. We can prove (see below) that the cost of realizing the linear function $L(x) = Vx$ with the regularized skip unit is greater than the cost of realizing it with regularized ReLUs if and only if $2||V||\_* \leq ||V||\_F^2$, where $||V||\_*$ is the nuclear norm. This will generally hold for large-norm matrices (following the intuition above), but it fails for matrices with sufficiently small norms. We are not sure yet what holds for general (nonlinear) target functions. This is an interesting open question to explore. __Proof:__ Let $C\_{relu}$ be the minimum cost needed to realize $L(x) = Vx$ with ReLU units over some compact domain. We can write every such realization as $L(x) = A[W^Tx+b]\_+ -Ab$, where $[W^Tx+b]\_+ = W^Tx + b$ on the domain, and $V = AW^T$. This gives $C\_{relu} = \min\_{V = AW^T} (||A||\_F^2 + ||W||\_F^2)$. But by the variational characterization of the nuclear norm, we see that $C\_{relu} = 2||V||\_*$. On the other hand, realizing $L(x)$ with a regularized skip connection costs $C\_{skip} = ||V||\_F^2$. So as long as $2||V||\_* \leq ||V||\_F^2$, we have $C\_{relu} \leq C\_{skip}$. [A] Ongie, G., Willett, R., Soudry, D. and Srebro, N., 2019. A function space view of bounded norm infinite width relu nets: The multivariate case. [B] Hanin, B., 2021.
Ridgeless Interpolation with Shallow ReLU Networks in $1D$ is Nearest Neighbor Curvature Extrapolation and Provably Generalizes on Lipschitz Functions. [C] Boursier, E. and Flammarion, N., 2023. Penalising the biases in norm regularisation enforces sparsity. [D] Nacson, M.S., Gunasekar, S., Lee, J., Srebro, N. and Soudry, D., 2019, May. Lexicographic and depth-sensitive margins in homogeneous and non-homogeneous deep models. [E] Kunin, D., Yamamura, A., Ma, C. and Ganguli, S., 2022. The asymmetric maximum margin bias of quasi-homogeneous neural networks.
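The cost comparison in the proof sketch above can be checked numerically. Below is a small NumPy sketch; `relu_cost` and `skip_cost` are hypothetical names for the two quantities $2||V||_*$ and $||V||_F^2$ from the rebuttal:

```python
import numpy as np

def relu_cost(V):
    # min over factorizations V = A W^T of ||A||_F^2 + ||W||_F^2,
    # which equals 2 * (nuclear norm of V)
    return 2 * np.linalg.norm(V, ord='nuc')

def skip_cost(V):
    # cost of realizing L(x) = Vx with a regularized skip connection
    return np.linalg.norm(V, ord='fro') ** 2

V_large = 10.0 * np.eye(3)  # large-norm matrix: ReLU realization is cheaper
V_small = 0.1 * np.eye(3)   # small-norm matrix: the skip connection is cheaper
print(relu_cost(V_large) <= skip_cost(V_large))  # True  (60 <= 300)
print(relu_cost(V_small) <= skip_cost(V_small))  # False (0.6 > 0.03)
```

For a diagonal matrix both norms are immediate (nuclear norm is the sum of the diagonal magnitudes, squared Frobenius norm their sum of squares), so these two examples exhibit both sides of the $2||V||_* \leq ||V||_F^2$ condition.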
Rebuttal 1: Rebuttal: We thank the reviewers for the helpful feedback and remarks, and for your interest in the paper. We have addressed all of them, and we will revise the paper, as detailed below. The rebuttal is in comment-response (C/R) format. Also, after the initial submission we have strengthened some of our results: 1) In Theorem 3, we have removed the restriction $L \le 3$. In other words, we generalized it to an arbitrary number $L$ of rays making obtuse angles with each other. 2) Using the above result, we generalized Proposition 2 to the case of an obtuse simplex (i.e. one point in the training data that forms an obtuse angle with all other points). 3) We proved Conjecture 1 in the special case that the training set forms an equilateral triangle. We believe these results will further strengthen the paper, and therefore plan to add them to the final version (unless there is any objection). Pdf: /pdf/aa23559356b262b8a26ee9ea2bcd5529e20143f6.pdf
NeurIPS_2023_submissions_huggingface
2023
xTrimoGene: An Efficient and Scalable Representation Learner for Single-Cell RNA-Seq Data
Accept (poster)
Summary: This paper proposes a scRNA-seq pretraining method, overcoming the scalability and resolution weaknesses of previous works. Experiments on various downstream tasks prove the efficiency and effectiveness of the proposed method. Strengths: The paper is well structured and easy to follow in general. The challenges it aims to tackle are also clear, namely scalability and resolution. The proposed method is technically sound. Its performance and efficiency are evaluated on several downstream tasks. The experiments are thoroughly conducted to investigate the influence of different mask strategies and self-supervised objectives. Weaknesses: 1. Would it be possible to further improve the efficiency by optimizing the padding operation? It seems that the padded zeros are also not useful in the encoding process. 2. The masking and reconstruction strategy, the auto-discretization module, and the optimization objective seem to be a simple combination of previous works. This could slightly lower the novelty of the proposed method. 3. The embeddings of the proposed method and existing works could be visualized to provide an intuitive understanding of the performance. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please refer to the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: No significant limitations of the method are found. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. Here we show additional analysis and discussions to further strengthen our work. If our response does not fully address your concerns, please post additional questions and we will be happy to have further discussions. **A1**: Introducing padding tokens within a batch keeps the data compatible with parallel matrix operations. However, computations on the [PAD] placeholders are wasted, as the corresponding attention is entirely masked out in the implementation. In this scenario, eliminating computation on [PAD] tokens would further boost training efficiency. One widely used strategy to avoid padding is sequence packing [1], which conceptually concatenates all the variable-length sequences into one sequence and then splits it into individual fragments of a fixed length. The resulting batches have a consistent shape and no further padding is needed. The method has been utilized in natural language modeling and protein sequence pre-training. However, this design is not applicable to xTrimoGene: gene expression patterns are cell specific, so concatenating across cells would introduce noise signals and collapse the intrinsic cell representations. Another strategy, called bucketing [2], fits xTrimoGene more closely. While preparing the training set, one can sort the samples by expression pattern distribution. Specifically, the single-cell samples are sorted by dropout ratio, so that cells with similar non-zero lengths are grouped into the same buckets. This greatly reduces the number of [PAD] tokens needed within a batch and potentially accelerates the overall training process. [1] M Kosec et al, Packing: Towards 2x NLP BERT Acceleration. 2021 [2] Tom Kocmi et al, Curriculum Learning and Minibatch Bucketing in Neural Machine Translation. 2017 **A2**: Though individual components of the architecture have been explored before, the overall design is predominantly motivated by the characteristics of scRNA-seq data.
Regarding the novelty, we would like to highlight the following contributions and advancements over previous methods. 1. xTrimoGene is the first asymmetrical encoder-decoder architecture to guide single-cell RNA-seq data pre-training. scRNA-seq data is highly sparse, so an encoder-only design introduces a huge amount of redundant computation. The encoder in xTrimoGene concentrates on capturing intrinsic features from the most informative non-zero values, while the decoder additionally integrates zero-value genes to further tweak the gene-gene interactions. This strategy empowers the model to learn gene representations both efficiently and comprehensively. 2. The projection of expression values needs to preserve their continuity. Different from language tokens, the gene expression values in scRNA-seq data are continuous scalars, which typically indicate similar gene activity when they have similar values. To transform these scalars into high-dimensional tokens in the data matrix, a representation that preserves the continuous semantics is needed. We verified the effectiveness of the proposed auto-discretization strategy. 3. The training strategy is comprehensively ablated. We have conducted a series of ablation studies to validate the optimal training configurations. Concretely, we showed that the regression objective is superior to the conventional classification one (Figure 2A). Instead of masking all positions randomly, we need to mask zero and non-zero values separately with a trade-off ratio (Appendix Figures 2, 3, 4). 4. We scaled up the pre-trained model and achieved remarkable performance on multiple downstream tasks, including cell type annotation, perturbation prediction and synergistic drug combination prediction. The results demonstrate the generalization ability of xTrimoGene, which is expected to drive further advancements in other downstream tasks.
In summary, xTrimoGene is developed to pre-train on large-scale scRNA-seq data efficiently, with an underlying design that is motivated by and optimized for the characteristics of scRNA-seq data. We envision that the established framework will be meaningful for further algorithmic improvement. **A3**: For all the compared methods related to cell type annotation tasks, we investigated the embedding distributions on the Zheng68K data set and uncovered a potential connection to model performance. Specifically, CellTypist and scVI are included for comparison versus xTrimoGene. We projected the embeddings in a UMAP plot, where the cells are colored by either ground truth (upper row) or model predicted (bottom row) cell type labels (Figure R4). The model performance is correlated with the consistency level between the two plots. In contrast with CellTypist, xTrimoGene achieves better performance in predicting the CD19+ B cell type (Figure R4, 1-1 vs. 1-2). Meanwhile, CellTypist tends to identify some CD8+ Cytotoxic T cells (2-1) as the CD19+ B cell type (2-2). The CD8+ Cytotoxic T cell is the largest sub-population across all 11 cell types, potentially leading to inaccurate assignments. scVI performs much worse than xTrimoGene, as a proportion of CD8+ Cytotoxic T cells (3-1) are detected as the CD56+ NK cell type (3-2). It seems that two batches of Cytotoxic T cells are present and scVI is less able to discriminate the batch effects. Notably, a smaller subgroup of CD34+ cells (4-1) is almost entirely assigned to the Dendritic (4-2) cell type incorrectly. This may suggest scVI has limited resolution to separate rare cells. Visualization analysis of embeddings is useful to interpret model behavior, especially when it remains challenging to establish explainability for deep models. The paradigm provides a convenient and intuitive way to validate performance and will be further utilized to decode the limitations and advantages of xTrimoGene.
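Point 2 of A2 above (a continuity-preserving projection of scalar expression values) can be illustrated with a small sketch. This is our own plausible construction (softmax-weighted mixing of learned anchor embeddings), not necessarily xTrimoGene's exact auto-discretization design; all names, shapes, and the temperature `tau` are assumptions:

```python
import numpy as np

def auto_discretize(values, anchors, anchor_emb, tau=1.0):
    """Map continuous scalars to embeddings as a softmax-weighted mixture of
    anchor embeddings, so that nearby values receive nearby embeddings."""
    d = -((values[:, None] - anchors[None, :]) ** 2) / tau  # similarity to anchors
    w = np.exp(d - d.max(axis=1, keepdims=True))            # stable softmax
    w /= w.sum(axis=1, keepdims=True)
    return w @ anchor_emb

rng = np.random.default_rng(0)
anchors = np.linspace(0.0, 10.0, 20)   # learnable in a real model
anchor_emb = rng.normal(size=(20, 8))  # learnable in a real model
e = auto_discretize(np.array([1.0, 1.1, 9.0]), anchors, anchor_emb)
# Nearby expression values (1.0 vs 1.1) map to nearby embeddings:
print(np.linalg.norm(e[0] - e[1]) < np.linalg.norm(e[0] - e[2]))  # True
```

Unlike hard binning, this soft assignment preserves the continuous semantics of the expression scalars: a small change in value yields a small change in embedding.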
--- Rebuttal Comment 1.1: Title: Thanks for the responses Comment: Thanks for the responses, which resolve my first two concerns. For the last concern, could the authors also supply the UMAP comparison with scBERT? --- Reply to Comment 1.1.1: Title: Thank you for the feedback! Comment: We will add the figure to our revised manuscript, but due to the constraints imposed by the discussion format, we are unable to revise or provide figures now. If you are interested in this part and initiate a video request, we can provide the figure as a video to the committee for access. Here is a summary of the scBERT UMAP: scBERT only identified 10 of the 11 cell types, and all the CD4+ T Helper2 cells are incorrectly assigned. This observation demonstrates that scBERT performs worse than xTrimoGene at capturing cell-type-specific embeddings, especially for rare cell types.
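The bucketing strategy described in A1 can be sketched as follows. This is a minimal sketch; the function name and the use of 0 as the [PAD] id are our own assumptions:

```python
import numpy as np

def bucketed_batches(cells, batch_size):
    """Group variable-length non-zero gene lists into batches of similar
    length, so padding to each batch's max wastes few [PAD] tokens."""
    order = sorted(range(len(cells)), key=lambda i: len(cells[i]))  # sort by non-zero length
    for start in range(0, len(cells), batch_size):
        idx = order[start:start + batch_size]
        max_len = max(len(cells[i]) for i in idx)
        batch = np.zeros((len(idx), max_len), dtype=np.int64)  # 0 acts as [PAD]
        for row, i in enumerate(idx):
            batch[row, :len(cells[i])] = cells[i]
        yield batch

# Toy example: non-zero gene indices per cell, with very different lengths.
rng = np.random.default_rng(0)
cells = [rng.integers(1, 1000, size=rng.integers(5, 200)) for _ in range(64)]
pad = sum(int((b == 0).sum()) for b in bucketed_batches(cells, 8))
naive = sum(max(len(c) for c in cells) - len(c) for c in cells)
print(pad < naive)  # True: bucketing needs fewer [PAD] tokens than padding to the global max
```

Because each batch is only padded to its own maximum length, the total number of [PAD] tokens drops relative to padding every cell to the global maximum, which is the efficiency gain the rebuttal describes.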
Summary: In this study, the authors propose an asymmetric encoder-decoder transformer for scRNA-seq data, called xTrimoGene, for large-scale dataset pre-training. Quantitative comparison with various SOTA approaches shows its advantage in efficiency and scalability. In addition, a series of downstream analyses further validates the performance of xTrimoGene. Strengths: 1. The training process is accelerated by utilizing sparse input. 2. A novel auto-discretization strategy is introduced to map continuous expression values into a latent embedding space. 3. Considering its high accuracy and efficiency, the proposed large model xTrimoGene is a valuable contribution to the single-cell community. 4. The authors conducted validation of xTrimoGene using multiple downstream analysis tasks. Weaknesses: In Table 2, compared with CellTypist, the proposed method only obtains a marginal improvement. In this case, it would be better if the authors could provide more analysis to show the advantage of xTrimoGene on this task. See questions for detailed suggestions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Compared with CellTypist, are there any biological insights that can only be found by xTrimoGene from these datasets? Is there any specific circumstance in which xTrimoGene consistently demonstrates superior performance? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are not presented in the paper. Consider discussing the potential limitations or challenges associated with the proposed method. Acknowledging and addressing these factors would further enhance the study's robustness and reliability.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. Here we show additional analyses and discussions to further strengthen our work, with a focus on the advantage over CellTypist. If our response does not fully address your concerns, please post additional questions and we will be happy to have further discussions. We have further explored the performance of xTrimoGene in comparison to CellTypist and demonstrated the following advantages and potential biological insights: 1. xTrimoGene is more robust than CellTypist at identifying rare cell types. For the Zheng68K annotation task, xTrimoGene achieves a marginal improvement over CellTypist (F1-score: 0.7354 versus 0.7151). However, the improvement across individual cell types varies. Among all 11 profiled cell types, xTrimoGene gains a large margin (F1-score: 0.21 versus 0.00) for the CD4+ T Helper2 cell type, which is the smallest subgroup in the total population. This observation suggests that xTrimoGene is superior to CellTypist in detecting and distinguishing rare cell types. 
|Cell type|xTrimoGene-Precision | xTrimoGene-Recall | xTrimoGene-F1 | CellTypist-Precision | CellTypist-Recall | CellTypist-F1|
| :--- | :---: | :---: | :---: | :---: | :---: | ---: |
| CD14+ Monocyte (195) | 0.86 | 0.81 | 0.84 | 0.86 | 0.85 | 0.85 |
| CD19+ B (558) | 0.96 | 0.81 | 0.88 | 0.90 | 0.84 | 0.87 |
| CD34+ (19) | 0.90 | 0.95 | 0.92 | 1.00 | 0.84 | 0.91 |
| CD4+ T Helper2 (9) | **0.20** | **0.22** | **0.21** | 0.00 | 0.00 | 0.00 |
| CD4+/CD25 T Reg (612) | 0.78 | 0.68 | 0.73 | 0.72 | 0.69 | 0.71 |
| CD4+/CD45RA+/CD25- Naive T (185) | 0.53 | 0.71 | 0.61 | 0.66 | 0.54 | 0.59 |
| CD4+/CD45RO+ Memory (303) | 0.65 | 0.64 | 0.64 | 0.70 | 0.47 | 0.56 |
| CD56+ NK (853) | 0.93 | 0.90 | 0.91 | 0.93 | 0.92 | 0.92 |
| CD8+ Cytotoxic T (2031) | 0.87 | 0.88 | 0.87 | 0.86 | 0.83 | 0.84 |
| CD8+/CD45RA+ Naive Cytotoxic (1636) | 0.84 | 0.91 | 0.88 | 0.80 | 0.94 | 0.87 |
| Dendritic (194) | 0.80 | 0.84 | 0.82 | 0.84 | 0.83 | 0.83 |

2. xTrimoGene embedding reveals potential cell-type-specific gene-gene networks. As a pre-trained model, xTrimoGene can generate unique embeddings that encapsulate intrinsic gene-gene relationships. In the Zheng68K dataset, after cell-type annotation, we conducted differential gene expression analysis between the B and non-B cell groups. We then took the top 10 genes specific to B cells and retrieved their context embeddings from xTrimoGene. These embeddings were used to construct a gene-gene network (Figure R3), where an edge represents gene embedding similarity. In this network, we found that the HLA gene family (HLA-) shows higher similarity within itself, while the gene CD74 is less similar to the others. The analysis illustrates that xTrimoGene embeddings can be utilized to decipher cell-type-specific gene networks, which is not within the scope of CellTypist. 
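For concreteness, an embedding-similarity network like the one described in point 2 can be sketched as follows. This is an illustrative sketch with toy 4-dimensional embeddings; the gene names, similarity threshold, and helper function are ours, not the actual analysis code:

```python
import numpy as np

def gene_similarity_network(embeddings, gene_names, threshold=0.8):
    """Build an edge list from pairwise cosine similarity of gene embeddings."""
    # Normalize each embedding to unit length, so similarity is a dot product.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = unit @ unit.T
    edges = []
    n = len(gene_names)
    for i in range(n):
        for j in range(i + 1, n):  # j > i skips self-edges and duplicates
            if sim[i, j] >= threshold:
                edges.append((gene_names[i], gene_names[j], float(sim[i, j])))
    return edges

# Toy example: two similar hypothetical HLA embeddings and one dissimilar CD74.
emb = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.9, 0.1, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
edges = gene_similarity_network(emb, ["HLA-A", "HLA-B", "CD74"], threshold=0.8)
```

With these toy vectors, only the HLA-A/HLA-B pair passes the threshold, mirroring the observation that the HLA family clusters together while CD74 stays apart.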
In summary, xTrimoGene not only excels at handling rare cell types but also offers the potential to decode gene-gene relationships, setting it apart from existing methods. --- Rebuttal Comment 1.1: Comment: Thanks for the response, which mainly addresses my concerns. However, it would be better if the authors could discuss some limitations of the proposed method. --- Reply to Comment 1.1.1: Title: Thank you for your comments! Comment: Thank you for your feedback. Certain limitations exist for xTrimoGene, and further work is needed to advance the design. 1. The model does not fully make use of cell meta information. At present, xTrimoGene mainly utilizes gene expression values during the pre-training stage, overlooking a variety of other related meta information such as sample condition (health/disease), cell type, tissue type, sequencing platform, etc. These rich annotations are biologically meaningful and highly correlated with the expression pattern within a cell. Incorporating such information should help the model learn intrinsic gene-gene regulation better and boost performance; however, further experiments are needed to validate this hypothesis, and an appropriate strategy to encode and integrate these discrete attributes may require extensive trials. 2. Engineering optimization. The memory consumption for inference with the xTrimoGene-100M model is approximately 50GB, a hardware requirement (an Nvidia A100 80G GPU) beyond the reach of some academic labs. Additionally, computation on the [PAD] placeholders is wasted, as the corresponding attention is entirely masked out in the implementation. Computation- or memory-efficient engineering techniques would therefore further advance the model's pre-training and application. Thank you again for pointing this out and making our work more comprehensive. We will include these discussions in our future version.
Summary: The manuscript proposes an adaptation of the key contributions of [1] to the scRNA-seq data modality, which is dubbed xTrimoGene. Additionally, the authors introduce a new embedding encoder, their "auto-discretization strategy". Several ablation experiments motivate specific choices in the model. Finally, xTrimoGene is tested on three downstream tasks. Strengths: The model is described (for the most part) clearly, so that the paper is easy to follow. Utilizing the sparsity of scRNA-seq data is important in the context of transformers and, therefore, adapting the ideas from [1] to this modality seems like a very good match. I also appreciated the ablation experiments because they justify specific choices for the model and make the development process more transparent. Weaknesses: **Connection to [1]:** The paper is for the most part a (useful) adaptation of the ideas of [1] to the scRNA-seq domain. This is mentioned in one sentence on page 5 (l 156), which, in my opinion, under-emphasizes the connection. I appreciate that additional steps, such as determining the correct size of the mask and reducing the frequency of masking zeros, are necessary to transfer the ideas to scRNA-seq data. Nevertheless, I strongly suggest putting references to [1] more prominently and earlier in the paper, e.g., into the abstract, the introduction and the beginning of Section 3 (at least the beginning of section 3.1). **Exposition of the auto-discretization strategy:** While the paper is overall clearly written, I did not understand several aspects of the auto-discretization strategy. First, why is discretization necessary in the first place; unnormalized count data would already be discrete, so why not use this? Also, why is discreteness necessary? Second, does the proposed method actually discretize? According to ll168 ff it is just a weighted sum of vectors, and Appendix Figure 1 confirms this empirically, at least for low expression values. 
Third, I did not get the description in the first paragraph of 3.3. My main question is what the shapes of the various w's and v's are in that paragraph. My best guess is that $v_1, v_2, v_3, v_4 \in \mathbb{R}^{c\times b}$. But then the product $EXP_{lookup} \cdot v_4$ is the wrong way around and would only produce output in $\mathbb{R}^{c\times d}$, while from Fig 1 I understand that the output should be in $\mathbb{R}^{c \times m \times d}$, which makes sense as m is the sequence length, crucial for the transformer. Also, I wonder why the two products of $v_2$ are not collapsed into a single one in line 167. Should perhaps one of the two $v_2$'s be a $v_1$? When the authors write in ll 168 ff "The final output is a weighted combination of individual embeddings from the look-up table [...], where the weights are learnable parameters.", I wonder whether the entries of the look-up table are themselves learnable, or just the weights $w_1, w_2$. Also, why is $EXP_{lookup}$ called a look-up table, when actually a weighted sum of its rows is returned? I assume that the softmax performs a soft version of discretization, with a winner-takes-(almost-)all effect. But at least for low expression values the output really is not discrete. **Ablation of the relative masking frequency of zeros:** I agree that zeros need to be masked much less frequently than non-zero entries of the gene expression matrix. The authors' choice of masking an "almost equal number of positions for zero and non-zero positions" is plausible (Why "almost" here? What is the precise masking strategy?). I would be curious as to how the downstream results change for other ratios. I.e., I suggest a plot like Fig 3 of the appendix, in which the total masking ratio is kept at 30% but the frequency of masking zeros and non-zeros is changed. Perhaps something similar is depicted in Fig 4 of the appendix, which I did not understand (Could you please explain the three percentages in more detail?). 
**Order of subsections in section 5:** The main experimental results appear in subsection 5.4, which also describes the downstream tasks for the first time and compares to competitors. Sections 5.1-5.3 are valuable in that they analyze the model in more depth and perform ablations. Some ablations are even based on the task described only later in 5.4. Therefore, I would suggest moving the most important subsection, 5.4, to the beginning of section 5. In addition, one might also put the experiments of Fig 2 into section 5 (after the downstream experiments have been described) because 2A also relies on the downstream task. This way, the reader would be familiar with the downstream tasks before they are used to compare models. **Comparison to other models:** For the cell type annotation task, many methods were used to produce a representation which was subsequently used for clustering. I guess that nearly all of these methods (for sure scBERT, scVI) can also be used in conjunction with GEARS and DeepDDS in much the same way as xTrimoGene is used for perturbation prediction and drug prediction, respectively. While the current experiments for these two tasks show that xTrimoGene representations offer benefits compared to raw gene expressions, it would strengthen the paper if xTrimoGene also outperformed, say, a combination of scBERT representations and GEARS. **Minor:** - In line 15 of the abstract the downstream task is described as "cell classification". This sounds as if it were supervised. But according to the appendix unsupervised Leiden clustering is used. Please change this to "cell type annotation" or "cell clustering". - In line 19 the term AI4Science is introduced but never used. Perhaps it can be omitted? - In l 84 the authors say that gene expression values in scRNA-seq data are continuous scalars. But the raw data is counts, i.e., discrete. This confused me when reading the paper the first time. 
From l 103 onwards, they speak of *normalized* gene expressions, which indeed are real numbers and not necessarily natural numbers. Please add "normalized" or "pre-processed" to l 84. The pre-processing is discussed in the appendix, but not linked from the paper. Perhaps it would be useful to include a cross-reference at the beginning of section 3, as a first step of the pipeline. - Line 131 refers to the auto-discretization strategy "discussed previously", but the auto-discretization strategy is only discussed later in section 3.3. - It would be useful to mention the key aspects of the large pre-training dataset (size, number of genes, that it was scraped from GEO) at the beginning of section 5. - It would be great if Table 1 were extended by the runtime and memory consumption of each model. - Line 271 and Table 2 speak of Zheng68K, which has 68K cells, while appendix l 43 claims the PBMC dataset had only ~2k cells. - Line 162: should it be $V\in \mathbb{R}^{c\times m}$ rather than $V\in\mathbb{R}^{c\times n}$? - Fig 3 panel A's caption and main caption "Sparse level" --> "Sparsity level" - First line of Fig 2: Typo in "Performance" and missing hyphen in "auto-discretization". - Line 227: word order "other two" --> "two other" - Line 228 "comparison, three models" --> "comparison, all three models" **Reference:** [1] He, Kaiming, et al. "Masked autoencoders are scalable vision learners." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How is the name "xTrimoGene" motivated? Please include an explanation of this non-obvious name. - What are the numbers in brackets in line 154? - What is the range of the sum in Eq 5? Just the masked entries of the gene expression matrix (similar to [1]) or all entries? The normalization prefactor does not match either. - How is classification in the classification task mode of Fig 2B) performed? 
I understand that the output is continuous and not discrete, so that I do not see what the classes would be. For this reason, I have accepted the use of MSE as the loss immediately. Nevertheless, performing the ablation is appreciated. - Why was the comparison in Table 2 only performed for the 10M parameter model and not the 100M parameter model? The latter has lower validation loss, so it should perform even better, right? - The model was trained on a very large dataset of human gene expressions. Is there any hope for it to transfer to other species? - I understood that the gene embedding was akin to positional embeddings in NLP. Therefore, I would have assumed that it only changes with the gene, not the cell. Is this correct? If so, why are the rows in Fig 1 upper right corner not all the same? - Are $I_{masked}$ and $I_{zero}$ in eq (3) also the sums of the auto-discretization encoder and the gene encoder? Or are the pre-processed gene expression levels directly used here? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are not explicitly discussed. Code is not part of the submission, but is promised to be released on GitHub alongside the pre-trained model, which is particularly interesting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **WA1**: We are grateful for all your comments and will revise our manuscript to add more discussion and emphasize the connection. In our scenario, this asymmetric encoder-decoder architecture is not only efficient but also designed for handling the high sparsity in scRNA-seq data. Specifically, we feed the non-zero expressed genes into the encoder and let the zero expressed genes only be processed by the decoder. **WA2**: 1. Using the count values to query embeddings can cause the model to overly capture noise, since the MSE loss is sensitive to the data scale. This noise makes model optimization difficult. Therefore, normalized data is more applicable for model training, and a "coarser-grained" representation strategy (discretization or quantization) is needed so that similar continuous values are mapped to similar embeddings. 2. Our method is more aptly described as a form of soft discretization. This contrasts with traditional methods of hard discretization, which can lead to problems like Similar value But Dissimilar embedding (SBD), as highlighted in ll53-ll55. The soft discretization allows for flexibility, effectively addressing the SBD challenge. Additionally, our auto-discretization method is differentiable, supporting end-to-end training. Also, embedding methods for numerical or continuous values can be observed in other fields [1,2]. Appendix Figure 1 shows the bin distribution for low-expression values. This is because the majority of gene expressions in our training data fall within the 1-2 range; those exceeding a value of 4 are relatively rare. We are sorry for the confusion, and the shapes of all the matrices have been added in the paragraph. [1] AutoEmb: Automated ... Recommendations [2] AutoDis: An Embedding ... Prediction 3. Both the look-up table entries and the weights, w1 and w2, are learnable. 
The process can be viewed as a soft discretization, and while the outcome is a weighted sum of the embedding table (refer to Q2), we label it a "look-up" because it maps raw data to its respective embedding; we are sorry for this confusion. The low expression values are not strictly discrete since the data distribution is not uniform. **WA3**: In total, 1,140 (5.7\% of all genes) entries are masked, which on average includes 600 non-zero entries (30\% of total non-zero value genes) and 540 zero entries (3\% of total zero value genes). We ablated the ratio of zeros under the same total masking ratio/number (5.7\% of all genes, n=1,140) as in the public comments experiment no. 4. For each setting, we trained a 10M parameter model over 5 million data points for 50,000 steps. Then, we evaluated the model on the PBMC3K downstream cell clustering task. As the zeros masking ratio increases, the performance improves and then reaches a plateau (1.3\% - 3.0\%). The current zeros masking ratio lies in the plateau interval. For Appendix Figure 4, we aimed to investigate whether all the masked values should be replaced with a [MASK] token. The three percentages (80\%, 10\%, 10\%) represent the probabilities of masked values being replaced with a [MASK] token, a random expression token, and the original token, respectively. **WA4**: Thank you for the proposed logic to organize the work; we have revised the main text accordingly. **WA5**: Yes, the comparison can also be applied to other tasks with the produced cell embeddings. Here we investigate whether xTrimoGene is advantageous over scBERT on the perturbation effect prediction task. Similar to xTrimoGene, the generated embedding from scBERT is used in conjunction with GEARS. As shown in the public comments experiment no. 5, xTrimoGene achieves a lower MSE value than scBERT across all metrics, demonstrating its superior efficiency in generating cellular context embeddings under different biological conditions. 
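The soft discretization described in WA2 can be sketched numerically. This is a simplified illustration, not the actual module: the real auto-discretization uses a small learnable network (the v's and w's of Section 3.3) to produce the softmax weights, whereas here a single randomly initialized linear map stands in for it:

```python
import numpy as np

rng = np.random.default_rng(0)
d, b = 8, 100  # embedding dimension and number of bins (b = 100 in the paper)

# "Learnable" parameters, randomly initialized here for illustration only.
w1 = rng.normal(size=b)               # maps a scalar expression value to b bin logits
exp_lookup = rng.normal(size=(b, d))  # the bin embedding table (one d-dim row per bin)

def softmax(x):
    # Numerically stable softmax over the bin logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def value_embedding(expr_value):
    """Soft discretization: scalar -> softmax over b bins -> weighted sum of bin rows."""
    weights = softmax(expr_value * w1)  # soft bin assignment, sums to 1
    return weights @ exp_lookup         # a d-dimensional value embedding

emb = value_embedding(2.1)
```

Because the bin assignment is a softmax rather than a hard argmax, nearby expression values receive nearby embeddings, and the whole mapping stays differentiable for end-to-end training.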
**Minor**: For the minor comments, we revised the manuscript accordingly. As for the datasets, we used two datasets named PBMC and Zheng68K with ~2k and 68k cells, respectively. **Q2**: The numbers in brackets are the absolute depth/head values for the encoder and decoder. We deleted the numbers and added a reference to Appendix Table 2 for clarification. **Q3**: Equation 5 defines the loss at masked positions (including zero and non-zero entries), whose range and prefactor vary across different samples. **Q4**: The ablation studies across the paper are all conducted with the model's context embedding. Concretely, we fed the PBMC3K expression matrix into the evaluated model and dumped the context embedding. Then, we utilized the embedding to cluster all the cells and calculated the clustering metrics. In the classification pre-training mode (Figure 2B), the expression value is rounded to an integer (each integer is a token) and the pre-training objective is to predict the token (which is discrete). After the pre-training stage is finished, the model is evaluated on the ablation cell clustering task as described above, with the exception that the PBMC3K expression matrix is rounded to integers. **Q5**: Sorry for the typo; the result is from the xTrimoGene-100M model. **Q6**: We can fine-tune the pre-trained model on mice, as mouse scRNA-seq data is comparable with human data. Upon fine-tuning over a large amount of data, the model could capture intrinsic species-specific regulation and learn good representations for species-specific genes. **Q7**: Yes, the gene embeddings change with the gene, not the cell. For the Fig 1 upper right corner, we are retrieving the expression and gene embeddings for the unmasked-only matrix. The matrix has been processed with filtering and padding steps. In the former step, we filtered out the zero and masked entries, whose positions/orders are not consistent across different cells. 
Thus, we utilized a color gradient to denote the gene differences. **Q8**: The former; they are also converted to embeddings similar to unmasked entries, instead of the pre-processed values. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response! **WA2:** 1. Thanks for explaining. 2. Unfortunately, I am no expert on these soft or hard discretization methods. I would appreciate a more basic explanation of this step. Why is any form of expression embedding computed at all? Why not just multiply the embeddings of genes expressed in a cell by their (normalized) expression in that cell? Is the number of tokens equal to the number of bins? If not, how do the two concepts relate? Does a token still correspond to a unique gene, or to a mixture of genes? The reason for only using 100 tokens is to make the computations tractable (because the attention matrices will be of shape $(100, 100)$ rather than (20k, 20k)), right? 3. Thanks for clarifying. **WA3:** Thanks for reporting this sensitivity analysis on the masking ratio! I think it justifies your choice and thus strengthens the paper. I understand App. Fig. 4 better now, thanks. Which of the five settings explored in this figure do you use in your main experiments? What is the rationale behind replacing a masked token by the original token? This amounts to just not masking it, right? **WA5:** Excellent, thanks for checking this! **Q3:** I see. Perhaps writing something like >$\sum_{i=1}^c \sum_{j\in\mathcal{M}_i}$, where $\mathcal{M}_i$ is the set of $n-m$ masked entries in cell $i$ would help. Why does the size of the mask differ between samples? I thought you always masked 1140 genes per cell? **Q4:** Thanks for explaining the classification pretraining mode in more detail. Indeed, normal masked language learning is phrased as a classification task over the type of token (i.e., which word is it?). 
My intuition was that the scRNA-seq analogue to a word is a gene and that the frequency of a word is akin to the expression value of a gene. Therefore, I am surprised that you predict a discretized expression value rather than the gene type. I would have rather expected a comparison in which the gene identity is predicted. But I do not think this is an important issue. **Q7:** Ok, but if the gene embeddings only change by gene, not by cell, then I still do not understand why the columns of the right stack of matrices in Fig 1 top right corner are not constant. In Fig 1 a) of the scBERT paper the rows of the gene expression matrix are constant (it is rows instead of columns because they work with the transposed setup). --- Reply to Comment 1.1.1: Title: Thank you for your comments! Comment: Thank you for your valuable feedback. Here we provide more explanations and examples to clarify the auto-discretization and model design. If our response does not fully address your concerns, please post additional questions and we will be happy to have further discussions. **WA2**: Simply multiplying gene embeddings by their normalized expression might seem straightforward, but it presents certain challenges. Due to the large number of zero expression values in the data, the multiplication would result in numerous zero-valued embeddings, rendering such embeddings uninformative. In the xTrimoGene encoder, which exclusively processes non-zero values, expression-value multiplication might be applicable; however, it still risks missing key regulatory relationships. Some genes (such as some transcription factors) can significantly influence the expression dynamics of other genes, even if their own expression is not high. The multiplication would reduce the importance of these low-expressed genes, yielding biased and incomplete regulatory relationships. No, the number of tokens is not equal to the number of bins. 
A token is the input unit of the transformer, and the number of tokens for each sample is 19,264. Each token is the sum of a gene embedding and a value embedding (both of dimension d). The value embedding is obtained from the auto-discretization module and is a weighted sum of 100 embeddings (bins) from a look-up table $EXP_{lookup} \in \mathbb{R}^{d \times b}$, where $b=100$ indicates the number of embeddings in the table, i.e., the bin number. So the attention matrix is of shape (number of non-zero \& non-mask genes, number of non-zero \& non-mask genes) in the encoder and (19,264, 19,264) in the decoder. **WA3**: Based on App. Fig 4, we opted for the first configuration (80\%, 10\%, 10\%). The design minimizes the disparity between the pre-training phase and the fine-tuning downstream tasks. Given that the inputs during downstream fine-tuning are not subjected to masking, it becomes pivotal to ensure that the pre-training objective encompasses positions that do not solely consist of [MASK] tokens. The strategy has been proven effective in natural language pre-training [1], and the results are consistent with our ablation study. [1] Devlin et al. BERT.... 2019 **Q3**: Thank you for the suggestion. We will revise this. 1,140 is the approximate average number of masked entries across samples and is not fixed. Actually, in the implementation, we fixed the mask ratios (non-zero: 30\%, zero: 3\%). So the mask size $m$ can be defined as: $$m=a\times 0.3 + b\times 0.03$$ where $a$ and $b$ are the numbers of non-zero and zero values, respectively. These two values differ across samples, leading to a non-constant absolute mask size $m$. **Q4**: Viewing the gene as akin to a word and the expression value as the word frequency is indeed apt. Following this intuition, one possible approach is to repeat each gene multiple times, i.e., extending it by its expression value. 
However, two factors may limit this extension-based alignment: (1) Though the frequency of the gene is clear, the exact positions of the added genes are not known. For instance, extended genes (e.g., G1, G2, G3 with expression values 2, 1, 2) can be concatenated sequentially (G1G1G2G3G3) or in a cycling manner (G1G2G3G1G3), which represent two distinct gene sentences and impact the modeling. (2) If each gene is extended multiple times (maybe several to tens to hundreds of times), the resulting sentence will become very long, constraining pre-training efficiency. **Q7**: We take the example below to demonstrate the processing flow. Assume an expression value matrix with 2 cells and 10 genes. First, we mask a portion of the values, and the generated matrix is:

||G1|G2|G3|G4|G5|G6|G7|G8|G9|G10|
|-|-|-|-|-|-|-|-|-|-|-|
|C1|M|2.1|0|4.5|M|7.3|8.9|M|3.4|2.5|
|C2|1.1|M|M|3.4|2.3|M|M|0|2.9|0|

Then, we filter out both the M tokens and zero tokens for each sample, concatenate the remaining tokens sequentially, and add PAD tokens to match the max-length sample. The generated matrix (the unmasked-only matrix in Fig 1) is:

||Column1|Column2|Column3|Column4|Column5|Column6|
|-|-|-|-|-|-|-|
|C1|2.1(G2)|4.5(G4)|7.3(G6)|8.9(G7)|3.4(G9)|2.5(G10)|
|C2|1.1(G1)|3.4(G4)|2.3(G5)|2.9(G9)|PAD|PAD|

The generated matrix is utilized to retrieve both expression and gene embeddings. The genes in Column1 are different (G2 versus G1) across the two samples, while the genes in Column2 are the same (G4). Conversely, the gene rows are constant in scBERT. The discrepancies in these schemes primarily arise from differences in architectural design. scBERT operates as an encoder-only framework, where all genes are involved in computation in each Transformer layer, thereby preserving the same gene order.
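The mask-filter-pad flow of the example above can be sketched as follows. This is a hypothetical re-implementation, not the authors' code; the sentinel constants stand in for the [MASK]/[PAD] tokens, and masked expression values are replaced by 0 placeholders since they are filtered out anyway:

```python
import numpy as np

MASK, PAD = -1.0, -2.0  # sentinel values standing in for the [MASK]/[PAD] tokens

def filter_and_pad(expr, mask):
    """Keep only unmasked, non-zero entries per cell; pad rows to equal length.

    Returns, per cell, the surviving (value, gene_index) pairs, padded with PAD.
    """
    rows = []
    for values, masked in zip(expr, mask):
        kept = [(v, g) for g, (v, m) in enumerate(zip(values, masked))
                if not m and v != 0]
        rows.append(kept)
    max_len = max(len(r) for r in rows)
    for r in rows:
        r.extend([(PAD, -1)] * (max_len - len(r)))  # pad to the max-length sample
    return rows

# The 2-cell x 10-gene example from the rebuttal (True marks a masked position;
# masked values are written as 0 here because they never survive the filter).
expr = np.array([[0, 2.1, 0, 4.5, 0, 7.3, 8.9, 0, 3.4, 2.5],
                 [1.1, 0, 0, 3.4, 2.3, 0, 0, 0, 2.9, 0]])
mask = np.array([[1, 0, 0, 0, 1, 0, 0, 1, 0, 0],
                 [0, 1, 1, 0, 0, 1, 1, 0, 0, 0]], dtype=bool)
rows = filter_and_pad(expr, mask)
```

Running this reproduces the second table above: cell C1 keeps six (value, gene) pairs, and cell C2 keeps four pairs followed by two PAD entries.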
Summary: Here the authors propose xTrimoGene, a scalable transformer-based model for learning representations of scRNA-seq data. The authors demonstrate that their proposed method is more computationally efficient than alternatives for training transformers on scRNA-seq data, and they also validate their model on cell type annotation and perturbation response prediction tasks. Strengths: * **Clarity**: I found the manuscript very well organized and the writing easy to follow. Well done! * **Significance**: There has been much recent excitement about the potential utility of applying transformer architectures to scRNA-seq data. Here the authors demonstrate a method for making the training of such models more efficient, and also they demonstrate that the embeddings from such models are useful for multiple downstream tasks. Thus, I believe this work is significant. * **Originality**: The authors' proposed strategy for training xTrimoGene is, to my knowledge, novel (though I note that I am not an expert in transformer training strategies). Weaknesses: The results presented by the authors are promising, and I indeed enjoyed reading the paper. However, I do have some minor issues that I would like to see clarified during the rebuttal period before I can give a recommendation of acceptance. If the authors are able to address my concerns, I would be happy to raise my score: * **Was data leakage avoided?**: The authors briefly describe their data collection pipeline in Section 1 of the Appendix. However, this description is a bit sparse (just mentioning that data was collected from the GEO). Were any precautions taken to avoid data leakage issues in the evaluation of xTrimoGene on downstream tasks? That is, did the authors ensure that any test cells from train/test splits e.g. on the perturbation prediction task were not present in the data used to pretrain xTrimoGene? * **Proper modeling of scRNA-seq count distributions**: Previous works (e.g. 
the Deep Count Autoencoder https://www.nature.com/articles/s41467-018-07931-2) have found significantly better preservation of biological variation in latent representations of scRNA-seq data by directly modeling scRNA-seq counts (e.g. using a negative binomial or zero-inflated negative binomial distribution) compared to minimizing MSE loss for normalized counts. However, xTrimoGene opts for the latter. I think it would be valuable to know how this choice affects the model's latent representation. An ablation study comparing modeling the raw counts versus normalized counts would be very interesting here. However, if the authors are unable to perform such an experiment in the rebuttal period, performing an (easier) experiment similar to that presented in Figure 2 of the Deep Count Autoencoder paper (depicting how increasing levels of noise can potentially result in low-quality latent representations) would also provide valuable results. * **How to handle batch effects?**: A classic problem in scRNA-seq analysis is integrating datasets from multiple batches that may have systematic differences unrelated to any underlying biology. Given that xTrimoGene's pretraining procedure does not account for batch effects, could the authors discuss how one would analyze data from different batches with xTrimoGene? * **Rare cell types**: A potential issue with pretrained models like xTrimoGene is that they may misbehave when applied to new datasets with phenomena not present in the training set (e.g. new cell types or tissues not seen in the training data). Did the authors explore xTrimoGene's behavior in such a scenario? A new experiment investigating this behavior would be useful. * **Memory/hardware requirements?**: Could the authors provide additional details on the hardware/memory requirements for loading and performing inference with a pretrained xTrimoGene model? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See "Weaknesses" section. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I would like to see the authors clarify some potential limitations of their method (see "Weaknesses" for specific questions). I do not foresee any negative societal impacts resulting from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. We have provided additional discussion and experiments to further strengthen our work. The point-by-point responses to the comments are as follows. If our response does not fully address your concerns, please post additional questions and we will be happy to have further discussions. > **Q1: Was data leakage avoided?** **A1**: We did not specifically introduce a precaution strategy during the pre-training data preparation. However, we suppose this has no significant influence on the downstream task evaluation for the following two reasons: 1. Nature of Training: - The pre-training process is self-supervised, focusing solely on gene expression values, with other meta-information (cell type, tissue type, sequencing platform, etc.) disregarded. - Conversely, the downstream tasks (such as perturbation prediction) are supervised, employing distinct cell label information or relationships between pre- and post-perturbation cells. 2. Learning Focus: - During pre-training, the model is primed to understand intrinsic gene relationships without any exposure to label information. For the downstream tasks, in contrast, the model learns the relationship between gene expressions and cellular labels. Hence, even if there were overlaps between pre-trained and downstream evaluation data, the lack of exposure to label information during pre-training should safeguard against any significant data leakage effects. > **Q2: Proper modeling of scRNA-seq count distributions.** **A2**: The modeling of xTrimoGene is different from DCA. DCA uses the observed gene expression to predict the parameters of the ZINB distribution. In contrast, xTrimoGene is a masked autoencoder [1], i.e., recovering the unobserved gene expression values depends on other genes within a cell. MSE loss is generally a default choice. 
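For concreteness, the masked-value MSE objective described in A2 can be sketched as follows (a minimal illustration with made-up shapes and values, not the authors' implementation):

```python
import numpy as np

def masked_mse(pred, target, mask):
    """MSE computed only over masked positions, as in a masked autoencoder:
    the model must recover masked expression values from the unmasked genes."""
    diff = (pred - target) ** 2
    return float(diff[mask].mean())

# toy example: one cell with 6 genes, 2 positions masked
target = np.array([0.0, 1.2, 0.0, 3.4, 0.5, 0.0])   # normalized expression
mask = np.array([False, True, False, True, False, False])
pred = np.array([0.0, 1.0, 0.0, 3.0, 0.5, 0.0])     # model reconstruction

loss = masked_mse(pred, target, mask)  # mean of 0.2**2 and 0.4**2, i.e. ~0.1
```

Unmasked positions contribute nothing to the loss, so the objective only rewards recovering the hidden values.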
However, the pre-training data distribution of raw counts is very skewed, where most values are small but a few are very large, so using MSE may cause the model to focus too much on those large values. Therefore, we use the normalized expression value. Following the suggestion, we performed an ablation study comparing raw counts versus normalized expression values in our modeling. Concretely, we trained a 10M parameter model with raw counts as input, while keeping all other configurations consistent with the xTrimoGene-10M model. Then, the pre-trained model is evaluated on the PBMC3K clustering task, in which the raw count matrix is fed into the model to obtain the context embedding for subsequent analysis. The results show that pre-training with normalized gene expression values achieves better performance than raw counts, which is consistent with our aforementioned assumption. |Input| ARI| NMI| HOMO| CP|SIL| |-|-|-|-|-|-| | Normalized value | 0.7767|0.7810|0.7841|0.7778 | 0.1406 | | Raw count| 0.6575| 0.7318|0.7459| 0.7183|0.1086| [1] He et al. Masked Autoencoders Are Scalable Vision Learners. 2021 > **Q3: How to handle batch effects?** **A3**: For xTrimoGene, two strategies could handle batch effects: 1. Fine-Tuning with Batch Information: xTrimoGene can be fine-tuned with user-specific datasets incorporating batch information. We achieve this by: (1) Converting the batch ID to embeddings via a lookup table. (2) Summing these batch embeddings with the original value and gene embeddings. This aggregated input is then processed by the transformers. During fine-tuning, gene expression embeddings from different datasets are enriched with their corresponding batch embeddings. (3) At the inference stage, to harmonize cells from different batches, a consistent batch embedding can be utilized, ensuring the output remains consistent across batches. 2. 
Integration with Other Batch Correction Techniques: One could dump embeddings from xTrimoGene and feed them into other methods. For this strategy, we used the pancreas dataset (comprising 8 batches; a benchmark for batch integration) as an example. We subsampled this dataset to 3k cells for efficiency. After getting embeddings from xTrimoGene, we employed a lightweight method, BBKNN (Polański et al., Bioinformatics 2020), to correct for batches. As depicted in Figure R1, the xTrimoGene+BBKNN combination mitigated batch effects and maintained cell-type variations. > **Q4: Rare cell types** **A4**: To investigate how xTrimoGene behaves on such unseen data, we conducted the following evaluation analysis. 1. We first collected the scRNA-seq data from a previous study (Ji et al., 2020, Cell), which profiles the human skin squamous cell carcinoma landscape with scRNA-seq and spatial transcriptomics technology simultaneously. This scRNA-seq data was not present in the xTrimoGene training data. Notably, the authors found that clustering the scRNA-seq data (Figure R2, left panel) yields a novel cell subgroup named TSK (Tumor-Specific Keratinocyte), which is clearly present in the tumor sample but not in the normal sample. 2. We employed the tumor scRNA-seq data to explore whether xTrimoGene is robust enough to distinguish this cell subpopulation from others. The expression matrix is fed into xTrimoGene and the resulting context embedding is used for subsequent UMAP visualization. The results show that the TSK subgroup is clearly separated (Figure R2, right panel). More importantly, the two TSK subgroups are merged with xTrimoGene, demonstrating its generalization ability to generate good cell-specific embeddings for unseen data. 
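As an illustration of the batch-embedding fine-tuning strategy outlined under Q3 above, the three steps could look like this (dimensions, names, and the random lookup table are our own assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_batches = 8, 3                       # illustrative sizes

# (1) lookup table converting batch IDs to embeddings
batch_table = rng.normal(size=(n_batches, d))

def input_embedding(value_emb, gene_emb, batch_id):
    # (2) sum the batch embedding with the value and gene embeddings
    # before feeding the aggregate to the transformer
    return value_emb + gene_emb + batch_table[batch_id]

# (3) at inference, use one fixed batch ID for every cell so that
# the output is consistent across batches
value_emb, gene_emb = rng.normal(size=d), rng.normal(size=d)
cell_a = input_embedding(value_emb, gene_emb, batch_id=0)
cell_b = input_embedding(value_emb, gene_emb, batch_id=0)
assert np.allclose(cell_a, cell_b)        # batch effect neutralized
```

During fine-tuning, each dataset's cells would receive their own batch ID; at inference, fixing the ID removes batch-specific offsets from the input.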
> **Q5: Memory/hardware requirements?** **A5**: For the pre-trained xTrimoGene models, the memory consumption for inference with a sample of approximately 2000 non-zero expressed genes is approximately 50GB for the xTrimoGene-100M model and around 18GB for the xTrimoGene-10M model. It's worth noting that, in line with our pre-training settings, we conducted our tests using bf16 mode on an Nvidia A100 80G GPU. We hope this provides clarity, and we're here to offer further details if necessary. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your detailed response. I just have a couple more follow-up clarification questions before making my final decision: **Re A3**: Are there any example results of the procedure from point (1) in the authors' response? I couldn't find anything in the rebuttal PDF or the original submission. **Re A4**: Could the authors provide more details on why the two TSK populations being merged in the xTrimoGene embedding space is desirable? In other words, is there a technical confounder (e.g. batch) here that xTrimoGene is removing? Given the information provided in Figure R2 it's not clear to me whether this merging behavior is good or e.g. removing distinctions between two subtypes of TSKs. --- Reply to Comment 1.1.1: Title: Thank you for your feedback! Comment: **A3**: Both strategies we discussed in the rebuttal period can remove the batch effects. Since the second strategy (xTrimoGene+BBKNN) only needs inferred cell embeddings and does not require any fine-tuning, we conducted an additional experiment and gave the example result (Figure R1). The first strategy, in contrast, requires fine-tuning the pre-trained model and adjusting the hyper-parameter settings. Due to the discussion time limitation, we cannot provide that result now, but we would like to add it in a future version. Based on the current results, the second strategy is a more lightweight approach, and it already corrected the batch effects as shown in Figure R1. 
Thank you again for your interest in batch correction, which prompted us to think more deeply about batch correction strategies and to further extend xTrimoGene to more downstream scenarios. **A4**: We checked the data and found a technical bias (rather than a biological one) between the two TSK sub-populations, which likely explains the distinction. Specifically, we compared the total count and expressed gene number between these two groups. The results showed that the left TSK group achieves a higher sequencing quality, where the total counts are almost 1.8 times those of the right TSK group (median: 24,648 vs 13,574) and the expressed gene percentage is also much higher (median: 25.3% vs 17.2%). This technical factor tends to induce the separation into two subgroups. However, the bias is removed by xTrimoGene, illustrating its effectiveness in preserving biological signals.
Rebuttal 1: Rebuttal: We extend our sincere gratitude to all the esteemed reviewers for dedicating their time and expertise to meticulously evaluate our work. Their valuable feedback has significantly contributed to enhancing the quality and depth of our research. We have conscientiously delved into every facet of the comments. Guided by the insightful suggestions, we have made several noteworthy adjustments to the content. These revisions encompass elucidating intricate concepts for improved clarity, bolstering our textual content, and incorporating additional analyses. To elucidate, we have conducted a comprehensive array of ablation studies and experimental investigations, which in turn have shed light on the architectural intricacies, merits, and limitations of xTrimoGene. Here we briefly summarize the added experiments: 1. **Ablation study on normalized value versus raw count**. The results show that pre-training with normalized gene expression values achieves better performance than raw counts. The details are given in our response to Reviewer 9yec. 2. **Application on handling batch effects**. We discussed the potential usage of xTrimoGene to remove batch effects. Concretely, we provided an analysis (Figure R1) and demonstrated xTrimoGene's capability on this issue. The details are given in our response to Reviewer 9yec. 3. **Rare cell type detection**. We explored the ability of xTrimoGene to identify rare cell types from a large population (Figure R2). The details are given in our response to Reviewer 9yec. 4. **Ablation of the relative masking frequency of zeros**. Apart from the presented ablation study on the masking strategy, we provided an additional investigation of the zero masking frequency. The results show that the current configuration is within the optimal interval. The details are below and in our response to Reviewer MmMn. 
| Value | Masked1 | Masked2 | Masked3 | Masked4 | Masked5 | Total | |----------|-------------|------------|-----------|------------|------------|---------| | $\neq$ 0 | 100(5%) | 300(15%) | 600(30%) | 900(45%) | 1,100(55%) | 2,000 | | = 0 | 1,040(5.8%) | 840(4.7%) | 540(3%) | 240(1.3%) | 40(0.2%) | 18,000 | | Sum | 1,140 | 1,140 | 1,140 | 1,140 | 1,140 | 20,000 | | Sum ratio | (5.7%) | (5.7%) | (5.7%) | (5.7%) | (5.7%) | (100%) | | ARI | 0.6654 | 0.5043 | 0.7767 | 0.7817 | 0.5048 | | | NMI | 0.7170 | 0.6884 | 0.7810 | 0.7833 | 0.6348 | | | HOMO | 0.7285 | 0.7481 | 0.7841 | 0.78226 | 0.6826 | | | CP | 0.7058 | 0.6375 | 0.7778 | 0.7843 | 0.5932 | | | SIL | 0.1418 | 0.1522 | 0.1406 | 0.1675 | 0.1247 | | 5. **Comparison to other models on downstream task**. We also included scBERT for comparison on the perturbation effect prediction task. The results indicate that xTrimoGene is superior to scBERT in capturing the intrinsic context embedding under perturbed conditions. The details are below and in our response to Reviewer MmMn. | Pre-trained model | Total | 1-gene | 2-gene(seen0) | 2-gene(seen1) | 2-gene(seen2) | |-------------------|--------|--------|---------------|---------------|---------------| | xTrimoGene | 0.1983 | 0.1930 | 0.2385 | 0.2100 | 0.1286 | | scBERT | 0.2231 | 0.2116 | 0.2581 | 0.2386 | 0.1522 | 6. **In-depth comparison with CellTypist**. For the cell type annotation task, we performed further analysis and showed advantages in two aspects: (1) xTrimoGene is more robust to identify rare cell types than CellTypist; (2) xTrimoGene embedding reveals potential cell type specific gene-gene networks (Figure R3). The details are given in our response to Reviewer TD4Y. 7. **Embedding visualization analysis**. We also utilized embedding visualization to interpret the behavior of three models on the cell type annotation task (Figure R4). The details are given in our response to Reviewer N2nG. 
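For reference, the fixed-total masking configurations in the table of point 4 above (e.g. Masked3: 600 non-zero plus 540 zero positions out of 1,140) could be produced along these lines (our own sketch of such a sampling scheme, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_masked_positions(values, nonzero_frac, total_masked):
    """Mask a fraction of non-zero entries, then top up with zero
    entries so that `total_masked` positions are selected overall."""
    nonzero = np.flatnonzero(values != 0)
    zero = np.flatnonzero(values == 0)
    n_nonzero = int(round(len(nonzero) * nonzero_frac))
    return np.concatenate([
        rng.choice(nonzero, size=n_nonzero, replace=False),
        rng.choice(zero, size=total_masked - n_nonzero, replace=False),
    ])

# 2,000 non-zero genes out of 20,000, matching the "Masked3" column:
values = np.zeros(20_000)
values[:2_000] = 1.0
mask = choose_masked_positions(values, nonzero_frac=0.30, total_masked=1_140)
# 600 non-zero + 540 zero positions = 1,140 masked in total
```

Varying `nonzero_frac` while holding `total_masked` fixed reproduces the trade-off the table sweeps over.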
We take this opportunity to reiterate our gratitude to each reviewer, as their incisive comments have propelled our work. If our responses have not entirely addressed your concerns, we cordially invite you to share additional queries. We stand ready to engage in further discussions and provide any necessary clarifications. Pdf: /pdf/65f78f03207e0702e10119ba463db7234ba38f5d.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Finding Safe Zones of Markov Decision Processes Policies
Accept (poster)
Summary: The work introduces the SafeZone problem for safe reinforcement learning (RL). Instead of learning the optimal policy under a constrained Markov Decision Process (MDP), as many traditional methods do for safe RL problems, the work attempts to search for a SafeZone, a policy-dependent subset of the state space, balancing between minimizing the number of states contained in the subset and reducing the probability that a random trajectory goes out of the subset. If the probability of escaping the SafeZone with a randomly sampled trajectory is low, it is considered safe. The paper also proposes a new algorithm for detecting the SafeZone. The algorithm uses rejection sampling to decide whether a new random trajectory should be added to the SafeZone and keeps updating the safety estimation once a new trajectory is added. Strengths: 1. The work proposes a new way of dealing with the safe reinforcement learning (RL) problem. In traditional safe RL, the problem is usually formulated as a constrained MDP, while in this work, the agent is trained to find the SafeZone for a given policy. 2. The paper explains the formalization of the new problem setting in detail and compares the new setting with the constrained MDP, which helps with understanding. 3. The paper provides both theoretical analysis and empirical evidence, making the results more sound. Though solving the SafeZone problem is NP-Hard, the paper derives an algorithm and provides empirical results. These results suggest that the idea introduced in the paper is practical. Weaknesses: The main concern comes from the choice of baseline algorithms and the experiment design. The paper empirically tested methods on an $N \times N$ grid, checking the quality of the SafeZone given by the new algorithm and baselines. A SafeZone with a smaller size but a larger coverage percentage of the trajectories (lower escape probability) is considered better. 
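The evaluation criterion mentioned above can be made concrete: given a candidate SafeZone, the escape probability is the chance that a sampled trajectory leaves it, which can be estimated by Monte Carlo (the toy chain and names below are ours, purely illustrative):

```python
import random

random.seed(0)

def sample_trajectory():
    """Toy Markov chain: 0 -> 1, then 2 with prob. 0.9 or 3 with prob. 0.1."""
    return [0, 1, 2 if random.random() < 0.9 else 3]

def escape_probability(safe_zone, n_samples=10_000):
    """Fraction of sampled trajectories visiting a state outside the zone."""
    escapes = sum(
        any(s not in safe_zone for s in sample_trajectory())
        for _ in range(n_samples)
    )
    return escapes / n_samples

rho = escape_probability({0, 1, 2})   # should be close to 0.1
```

A good SafeZone keeps this estimate below the threshold with as few states as possible.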
Although the new method is empirically shown to be better than the baseline algorithms regarding the above standard, I doubt whether the proposed algorithm can be sufficiently tested by the reported experiment, or whether the advantages and shortcomings of the SafeZone problem setting can be seen from this experiment. In the introduction section, the paper points out that a new method is proposed for safe RL. One common idea used by the community to solve the safe RL problem is to formulate it as a constrained MDP, as the paper points out. The proposed SafeZone method serves as an alternative path for solving safe RL, besides constrained MDP. So, one thing that remains unclear is whether SafeZone is a better way of solving the safe RL problem than constrained MDP. The paper focuses on comparing the quality of SafeZones detected by the proposed algorithm and baselines, but does not provide any empirical evidence about whether a good SafeZone output helps with improving the learning performance in safe RL problems, or whether a SafeZone method performs better than constrained MDP methods. As SafeZone is a newly introduced problem setting and the proposed algorithm directly targets searching for a better SafeZone, I am not surprised that the new method outperformed baselines regarding SafeZone quality. But the lack of learning-performance results on safe RL tasks makes it hard to say whether detecting a SafeZone is a better way of solving the safe RL problem. It would be more convincing if the learning performance on safe RL tasks were checked and empirically indicated that the SafeZone idea can solve safe RL tasks more efficiently than the constrained MDP approach. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: My question is related to the concerns listed above. I would appreciate it if the authors could discuss whether the proposed method has the potential to outperform the constrained MDP methods, regarding the learning efficiency. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review! Our focus is theoretical, the experiments are a minor part of the paper and were only made for demonstration. We don't offer an alternative solution to constrained MDP; instead, we approach a different problem and give a different solution. As we explained in the introduction, the purpose of the SafeZone problem is to capture a new sort of safety and aims to capture popular events in the environment. --- Rebuttal Comment 1.1: Title: Reply to the rebuttal Comment: Thank you for your reply. After reading it, I intend to maintain my original score.
Summary: This paper introduces the SafeZone problem: Given an MDP and a policy, find the smallest set of states such that the probability of leaving this set of states (the escape probability) lies below a given threshold, using trajectory samples. The authors provide various examples of applications of the SafeZone problem, such as imitation learning with compact policy representation and post hoc explainability of RL. The authors provide proof that the SafeZone problem is NP-complete. The adjusted problem solved in this paper is instead to find a SafeZone whose escape probability exceeds the threshold by as little as possible and whose cardinality exceeds the true optimal cardinality (for the given threshold) by as little as possible. Three naive approaches are discussed, together with one approximation algorithm for finding SafeZones. The authors provide upper bounds on each approach's escape probability and sample complexity. They give provable guarantees on their approximation algorithm. Finally, the paper compares three of the four approaches based on a grid world problem. Part of this work has been previously presented at the NeurIPS 2022 TSRML Workshop. The work did not appear in any proceedings, journals, or books, so according to NeurIPS' call for papers, this is not considered a dual submission. Strengths: The paper introduces a novel problem and gives extensive examples of situations where a solution to this problem could be useful. It shows the need for approximation by proving that the SafeZone problem is NP-complete, even if the induced MC and the minimal cardinality (of the corresponding escape probability threshold) are known. The paper analyses four algorithms for finding SafeZones, provides upper bounds on the escape probability and sample complexity, and indicates the limitations of the three naive approaches. 
The paper suggests an interesting future work direction, where the problem is moved from finding SafeZones of an induced MC to finding policies of an MDP with small SafeZones given an escape probability threshold. Weaknesses: The paper is challenging to follow for two main reasons: a lot of information is only available in the appendices, and few (intuitive) examples are used after introducing the problem. The first reason, limited information in the paper itself and reliance on the appendices, can, for example, be seen in Section 3, where Appendix B contains the actual algorithms, MDP examples, and proof outlines, and in Section 5, which only contains one figure, with the rest all placed in Appendix E. Furthermore, what information is available in the appendices is also not always clear. For example, it is not mentioned in Appendix A that the proof of Theorem A.2 is given in Appendix D. Nor is it mentioned in Section 5 that more figures regarding the empirical demonstration are available in Appendix E; only four subfigures are referenced. Moreover, the information about the empirical demonstration section is incomplete: neither the escape probability threshold used to compute the results nor the number of repetitions is given. Also, the paper only compares three of the discussed methods without mentioning the exclusion of the Greedy by Threshold algorithm. The second reason, the lack of intuitive examples, concerns the sections after the introduction. The paper gives extensive and intuitive examples in the introduction but does not use these throughout the rest of the paper. Only the autonomous vehicle example is referenced once in Section 2. Although the paper proves why an approximation of the SafeZone problem is necessary, it gives no argumentation as to why an almost-2-approximation is still sufficiently tight to be of practical use. An intuitive example where the size of the SafeZone is compared to an optimal SafeZone could illustrate this usefulness. 
Also, the problem used in the empirical demonstration does not provide any intuition regarding the applications of the SafeZone problem, as given in the introduction. Some discussion on how this relates would be helpful. Typo in Section 3, line 208. theses -> these Incorrect reference in Section 5, line 355. Section 5 -> Figure 1. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Why is Greedy by Threshold excluded from the empirical demonstration? How should an escape probability threshold be chosen? What escape probability threshold value was used for the empirical demonstrations? How often were the experiments of the empirical demonstration repeated? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the helpful and detailed review. In what follows we address your comments: **Proof of Theorem A.2 is given in Appendix D** We apologize for the confusion regarding the proof of Theorem A.2; in the final version of the paper we will mention that it appears in Appendix D and that there are more figures in Appendix E. **How should an escape probability threshold be chosen?** A feature of the algorithm is that the escape probability is a free parameter that can be tuned depending on the use case, or experimented with to select the ideal one for the target (e.g., using binary search). **Escape probability threshold in demonstration** Rather than stopping the algorithm when it reaches some escape probability threshold, the implementation adds trajectories one after another until reaching a specific safezone size (the reason being that the x-axis is k). **Greedy by threshold** We only implemented one type of greedy algorithm, as the MDP in the demonstration is not layered (i.e., there are states that appear in more than a single time step, see Line 220). **Intuitive examples** We appreciate the advice and will use the autonomous vehicle example as a running example throughout the paper. **How often were the experiments of the empirical demonstration repeated?** We ran each experiment 2000 times (see line 353). --- Rebuttal Comment 1.1: Title: Thank you. Comment: I thank the authors for their responses; these clarify my questions. I will keep my score.
Summary: The paper introduces a new problem that involves finding a SafeZone: given the possibility to interact with an MDP using a fixed policy, the learner has to find a subset of states F of minimal size s.t. the probability that a trajectory visits a state outside F is at most a given parameter \rho. They show that the problem is NP-hard even when the transition matrix is known. They propose an approximate algorithm that finds near-optimal SafeZones with polynomial sample complexity (in the relevant variables). The approach is numerically tested on a toy problem. Strengths: 1. The paper introduces an interesting problem (though it may lack a bit of "concrete motivation", see below). The problem is definitely challenging from a theoretical perspective. 2. While I haven't checked the proofs, the results seem sound 3. The authors managed to derive a computationally-efficient algorithm computing near-optimal Safe Zones even though the base problem is NP-hard Weaknesses: 1. Although the introduction presents some examples, I still wasn't fully convinced about the motivations behind studying this peculiar setting. In particular, it is still unclear to me what problems the proposed algorithm (or in general finding safe zones) enables solving. It would be good to provide a more "concrete" example, ideally some numerical results on a real-world problem (or a simulated variant) where it is clear that finding safe zones is the right thing to do 2. Moreover, even in the context of the examples given in the introduction, the assumption that we can only interact with the MDP with a single policy seems a bit limiting. In those settings, it seems reasonable to observe trajectories collected by multiple policies, mostly because multiple agents are interacting with the environment and each displays learning behavior (so changing policies). How to extend the proposed setting and algorithms to such a context is not clear 3. I found the core part of the paper (Sec. 
3,4) a bit hard to read. In particular, if my understanding is correct, Sec. 3 tries to build some intuition which is later useful to explain the main algorithm in Sec. 4. However, that didn't really work for me: Sec. 4 starts with "In this section, we suggest a new algorithm that builds upon and improves the added trajectory selection of the SIMULATION Algorithm", but the SIMULATION algorithm was not explained at all in Sec. 3. The rest of Sec. 4, especially the first paragraphs, is also hard to follow, while Sec. 3 seems mostly to report a bunch of technical results rather than intuitions (so it did not really feel like "a gentle start" to me). 4. It is not clear how good the sample complexity of the main algorithm (Th. 4.2) is, especially in terms of dependences on the main variables (k^\star, \delta). It would be good to add some discussion about this. Do the authors think that the current dependences are optimal or improvable? 5. The experiments are conducted on a very toy and low-scale domain. I would have liked to see larger domains and also a comparison of the computational requirements (time complexity) of the different algorithms. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I did not find any discussion about limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the helpful and detailed review. We will address your comments below. **MDP with a single policy** Our setting does allow for any number of policies (multiple agents) by using a single mixed policy as follows: each agent $i\in\{1,\ldots,n\}$ has some policy $\pi^i$. The mixed policy is the one that selects a number uniformly at random from $1,\ldots,n$, then runs $\pi^i$ (alternatively, we can consider any other distribution over the agents/policies). As a result, it is possible to observe trajectories collected by multiple policies and use the algorithms from the paper as is. Thank you for bringing it up! We will discuss it in the final version of the paper. **Sections 3+4 readability** SIMULATION is an intuitive algorithm that simply samples a certain number of random trajectories and adds their states to a set, and then returns this set. It is described in line 207 and formalized in Appendix B as Algorithm 4. If anything is unclear regarding SIMULATION Algorithm 4 or Sections 3,4 we would be happy to answer further questions. **Sample complexity of the main algorithm (Th. 4.2)** We expect the dependency on $k^*$ to be optimal, for the following reason. Consider an MC with $k^*-1$ trajectories, each starting in an initial state $s^0$ and ending with a unique corresponding state $1,\ldots,k^*-1$. Consider a case where the probability of each such trajectory is $\frac{1-\delta}{k^*-1}$. In this case, it would take at least $k^*$ samples to find a $k^*$-safezone. As for the parameter $\delta$, this could be treated as a constant. For example, selecting $\delta=1/3$ yields that we need to run the algorithm $6\cdot \ln 300$ times to obtain a solution of size $\frac{7}{3}k^*$ w.h.p. We will add a discussion about this in the final version of the paper. 
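The mixed-policy construction described above (pick one agent uniformly at random, then roll out its policy for the whole trajectory) can be sketched as follows (the toy deterministic policies are our own illustration):

```python
import random

random.seed(0)

def sample_mixed_trajectory(policies, start=0, horizon=3):
    """Pick one agent uniformly at random, then roll out its policy, so
    trajectories from multiple agents look like draws from one mixed policy."""
    pi = random.choice(policies)
    state, traj = start, [start]
    for _ in range(horizon):
        state = pi(state)
        traj.append(state)
    return traj

pi_left = lambda s: s - 1    # toy deterministic policies on integer states
pi_right = lambda s: s + 1
traj = sample_mixed_trajectory([pi_left, pi_right])
# traj is either [0, -1, -2, -3] or [0, 1, 2, 3], each with probability 1/2
```

Because each trajectory is drawn from a single well-defined distribution, the paper's algorithms can consume such samples unchanged.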
**Experiments + Time complexity of the naive algorithms** As we were the first to formalize the safezone problem, we focused on theoretical guarantees and provided some experiments just for demonstration. As for the time complexity of the naive algorithms: - In Greedy by Threshold, the running time is bounded by the number of states reachable by the policy with probability $>0$, which is at most $|S|$. - The Simulation Algorithm has a running time of $O(H/\beta \ln k^*)$ as it samples $O(1/\beta \ln k^*)$ trajectories and each has at most $H$ states. - In Greedy at Each Step, since there are MCs with states reachable at every level and the algorithm needs to rank each level, it has a running time of $O(H|S|\log|S|)$ (using, e.g., mergesort as the sorting algorithm). **Motivation** The motivation of our paper, as discussed in the introduction and discussion sections, emphasizes the practical significance of our approach. By identifying SafeZones, we offer solutions to challenges in autonomous vehicles, manufacturing, and potentially compact policy design. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for the detailed response. The fact that the proposed algorithms can straightforwardly deal with multiple policies is quite interesting and should definitely be mentioned in the paper. I have increased my score accordingly. I still think that the paper should be improved in terms of clarity and motivation. Maybe, as Reviewer EfcG suggests, using one of the practical use cases given in the introduction as a running example throughout the whole paper could help in both these aspects. --- Reply to Comment 1.1.1: Comment: Thank you for your quick response and for updating your score! We appreciate your helpful suggestions. We will incorporate the explanation that multiple policies can be dealt with, along with the practical example of autonomous driving, throughout the paper.
Summary: The paper proposes an innovative definition called the SAFEZONE and uses it to describe the escape probability of a sampled trajectory. This paper analyzes several naïve algorithms and proposes an algorithm to overcome the weaknesses in the naïve algorithms, especially when considering the size of the set of safe states. Finally, numerical experiments are conducted to evaluate the algorithm's performance compared to the mentioned naïve algorithms. Strengths: I think the paper is easy to follow. Strengths: 1. The paper introduces the definition called the SAFEZONE to find the subset of ‘safe states’, i.e., the set that has low escape probability $\rho$ and a small size $k$. 2. Three naïve algorithms are clearly discussed with solid theoretical analysis and specific MDP examples in the appendix. 3. A new approach for solving an approximate SAFEZONE is proposed with complete analysis, and numerical comparisons show the algorithm's performance. Weaknesses: No obvious limitations, but this paper is not written well. The motivation for this work should be discussed more, and it would be helpful if the authors discussed more concurrent work. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: No more questions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your evaluation of our paper. We will address your valuable suggestions and enhance the clarity and motivation of our paper. We are grateful for your positive comments on our paper's concepts and algorithms. The motivation of our paper, as discussed in the introduction and discussion sections, emphasizes the practical significance of our approach. By identifying SafeZones, we offer solutions to challenges in autonomous vehicles, manufacturing, and potentially compact policy design. We will discuss more concurrent works (e.g., about other approaches for RL safety) to provide a comprehensive context for our contributions as you suggested.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their valuable and informative reviews! We are glad you appreciate the new SafeZone definition and our theoretical contribution. If we misunderstood any of the questions, we would be happy to clarify further during the discussion period.
NeurIPS_2023_submissions_huggingface
2023
Towards Optimal Caching and Model Selection for Large Model Inference
Accept (poster)
Summary: The paper presents an innovative approach to mitigating the challenges posed by large language models (LLMs), namely high resource consumption and latency, by employing a cache to store previous queries and a model selector to choose the most efficient model for query processing. The authors propose an optimal algorithm that jointly optimizes both these approaches in offline and online tabular settings, and demonstrate its effectiveness through a series of simulations and real-world dataset experiments. The paper concludes by suggesting potential future research directions in caching and model selection optimization. Strengths: 1) This paper introduces a novel approach to dealing with the resource-intensive nature of LLMs by jointly optimizing the usage of a cache and a model selector. It creatively merges established concepts of caching with model selection. 2) The theoretical grounding and empirical validation of the proposed algorithm underline the quality of the research. The authors have effectively leveraged the Greedy Dual Size with Frequency (GDSF) and Least Expected Cost (LEC) caching algorithms alongside a model selector to achieve optimal rates. 3) The paper is well-structured and effectively communicates complex ideas and methodologies in a clear and comprehensive manner. The stepwise progression from basic principles to specific techniques is particularly commendable. 4) Addressing the issues of resource consumption and latency in LLM deployment is of critical significance in modern AI. The proposed solutions have a potentially broad impact, improving efficiency and feasibility of real-world AI applications. Weaknesses: 1) Model Selection Scope: The experiments seem to consider a limited number of model options. The approach may encounter difficulties when scaling up to situations where selection is to be made from thousands or millions of models, which is a realistic scenario in complex AI systems. 
2) Prompt Representation in Caching: While the paper does a good job of exploring how to insert into a cache, it doesn't delve into the challenge of representing each prompt in the cache. This is an important aspect of LLM serving, as a comprehensive strategy for hashing, differentiating knowledge, and specifying unit information is necessary for efficient caching. 3) The idea and each problem are well known. I am not sure about the novelty of the approach. The application and problem are critical, though. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) How does the proposed algorithm scale when selecting from a large number of models, possibly in the order of thousands or millions? Can you provide any theoretical or experimental insight on this scenario? 2) While you discuss inserting prompts into a cache, the paper does not fully address how prompts are represented in the cache. Could you elaborate on this aspect of your approach? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have touched upon potential future work related to this study, highlighting open problems that require further investigation. However, a discussion on the limitations of their approach and its potential negative societal impacts seems to be missing. An exploration of the potential risks associated with the approach, such as the possible amplification of biases if the model selector disproportionately selects certain models, would be beneficial. Further, a discussion on how this approach could contribute to increasing centralization and monopolization in AI due to the computational requirements of maintaining and selecting from large model ensembles could offer a balanced perspective. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. Please find our responses to each comment below. ## Comment 1 **Reviewer:** > Model Selection Scope: The experiments seem to consider a limited number of model options. The approach may encounter difficulties when scaling up to situations where selection is to be made from thousands or millions of models, which is a realistic scenario in complex AI systems. How does the proposed algorithm scale when selecting from a large number of models, possibly in the order of thousands or millions? Can you provide any theoretical or experimental insight on this scenario? **Response:** Thank you for your comments! We briefly discuss how to generalize to multiple models in Appendix C. If there are $K$ models, we can train a neural network with a $K$-dimensional output, each dimension predicting the cost for one of the models. So as the number of models $K$ grows, only the last layer of the neural network scales linearly with $K$. However, this would not be ideal for the case of millions of models, since that would require a large amount of training data for the selector to accurately estimate the cost for all the models. Given the current size of large language models (billions of parameters, i.e., tens of GB per model in size and memory consumption), it is not yet realistic to simultaneously run thousands or millions of models at a time for serving. Also, there may not be that many high-quality language models that need to be served at the same time. So we believe our setting is a realistic one, especially in the field of large language models. ## Comment 2 **Reviewer:** > Prompt Representation in Caching: While the paper does a good job of exploring how to insert into a cache, it doesn't delve into the challenge of representing each prompt in the cache. 
This is an important aspect of LLM serving, as a comprehensive strategy for hashing, differentiating knowledge, and specifying unit information is necessary for efficient caching. While you discuss inserting prompts into a cache, the paper does not fully address how prompts are represented in the cache. Could you elaborate on this aspect of your approach? **Response:** Thank you for the comments! We agree that prompt representation is a very important open problem. There have been some preliminary solutions from vector databases in [1], which represent the query as a vector from the embedding of a pre-trained or fine-tuned large language model specifically designed for retrieval. One can then compute the cosine similarity between the vectors of prompts to determine whether they are similar in semantic meaning or not. However, we have not yet seen a comprehensive research study on which method leads to the best caching performance. For this paper, we focus on the optimal algorithm for caching and model selection. We assume that we either do exact match (with simple hashing on the prompts) or that the existing semantic matching algorithms are good. The exact match case can also be applied to practical scenarios, especially when the large language models are used for serving API calls in enterprise software and the prompts are simple and more repetitive. In our experiments, we focus on the exact match case and directly use hashing to represent the prompts for caching. We believe that appropriate prompt representation deserves a serious and comprehensive study, but this may be out of the scope of our current focus. [1] GPTCache: https://github.com/zilliztech/GPTCache ## Comment 3 **Reviewer:** > The idea and each problem is well known. I am not sure about the novelty of the approach. The application and problem is critical though. **Response:** Thank you for your comments! 
- In terms of formulation, we are the first to formulate the joint optimization of caching and model selection for large language models. Different from most traditional caching problems, where the cost of each query is the same, queries to large language models differ a lot in terms of response lengths, thus leading to varying costs. Furthermore, when we have multiple models, this leads to further variation in the cost of generating responses. Thus the formulation we propose is new. - Theoretically, we prove the minimax-optimality of the LEC + model selector idea in both offline and online settings. The upper bound analysis is very different from traditional bandit problems. In traditional bandits, one has the ability to choose which action to explore in each round. However, in our setting, the actions (queries) come passively as samples from a fixed query distribution, and we can only choose which queries to cache, which hurts the exploration of cached queries. This brings a brand-new theoretical question and requires a completely new analysis. - We show that the optimal model selector requires a lower-confidence-bound adjustment on the estimated performance. Such a correction term is new and critical for the optimality of the algorithms in both offline and online settings. - Empirically, we are the first to systematically benchmark the existing caching and multi-model serving ideas for large language models. We demonstrate the superiority of both LEC and the proposed model selector compared with existing ideas like LFU or cascading. ## Comment 4 **Reviewer:** > A discussion on the limitations of their approach and its potential negative societal impacts seems to be missing. **Response:** Thank you for your comments! 
We will add more discussion on the limitations of the paper in the revised version, including potential biases introduced by the caching and model selector, and how this approach could lead to centralization and monopolization in AI due to the computational requirements of maintaining and selecting from large model ensembles. --- Rebuttal Comment 1.1: Comment: Thank you for the comments. I reviewed the other comments as well. This is a decent paper. I recommend acceptance.
Summary: * this paper proposes a theoretically optimal algorithm for efficient large-scale deployment of LLMs * particularly, they combine caching and model selection to reduce the inference cost * they provide theoretical guarantees for caching both with and without model selection Strengths: * large-scale models are deployed everywhere and the motivation to make inference more efficient is very strong * the paper comes with strong theoretical guarantees * the method's efficiency gains are demonstrated in relevant real-world settings using models up to 13B * the idea to perform model selection on the fly in an online fashion seems clever Weaknesses: I have to admit that this paper is out of my depth. The considered (problem, solution) seems very convincing and relevant to me, but I'm not familiar with its literature and cannot identify any obvious weaknesses. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * how can we assess whether caching is relevant in a particular setup? for example, when using publicly available datasets like OpenAssistant; what if there are no redundant queries? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes, the authors discussed limitations and future work directions Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. Please find our responses to your comment below. ## Comment 1 **Reviewer:** > How can we assess whether caching is relevant in a particular setup? for example, when using publicly available datasets like OpenAssistant; what if there are no redundant queries? **Response:** Thank you for your comment! The cache hit rate depends heavily on the application. Below are three cases: - According to [1], the cache hit rate in web search systems is usually 30%-60%. - According to our own calculation from the public data in [2], the cache hit rate for saving 3k conversations from a real-world chatbot over 33k chat records is 31%. There may not be many queries that are exactly the same. However, one can match queries with the same semantic meaning. Thus we use fuzzy matching by constructing a vector database for all the queries, and match them when their cosine similarity is large enough that they have very close semantic meaning. - When it comes to using LLMs for API calls to enterprise software, we expect more redundant queries, and the cache hit rate might be much higher than for chat even if we do exact match. [1] Jeff Dean, Building Software Systems at Google and Lessons Learned [2] Lianmin Zheng et al., Judging LLM-as-a-judge with MT-Bench and Chatbot Arena --- Rebuttal Comment 1.1: Title: Thank you. Comment: Thanks for your responses. Would be great if you could incorporate this cache hit discussion in the final version. I recommend acceptance.
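The fuzzy-matching idea described in this rebuttal (count a query as a cache hit when its embedding is cosine-close to a stored one) might look like the following minimal sketch; `embed` is a stand-in for a real retrieval-embedding model, and the class and threshold are hypothetical illustrations rather than the paper's system:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

class FuzzyCache:
    """Toy semantic cache: a lookup is a hit when some stored query's
    embedding has cosine similarity >= `threshold` with the new query."""
    def __init__(self, embed, threshold=0.9):
        self.embed = embed          # hypothetical embedding function
        self.threshold = threshold
        self.entries = []           # list of (embedding, cached response)

    def get(self, query):
        e = self.embed(query)
        for emb, resp in self.entries:
            if cosine(e, emb) >= self.threshold:
                return resp         # fuzzy hit: semantically close query
        return None                 # miss: must call a model

    def put(self, query, response):
        self.entries.append((self.embed(query), response))
```

A real system would replace the linear scan with an approximate-nearest-neighbor index (as vector databases like the cited GPTCache do), but the hit criterion is the same.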
Summary: This paper presents an LLM inference framework design that aims to reduce inference cost by caching and model selection. This work first provides an optimal formulation for jointly optimizing both caching and model selection in both offline and online settings. Then, evaluations on simulations and two tasks show that the proposed work achieves cost savings compared to the baseline. Strengths: (1) The high-level idea of caching + model selection sounds reasonable for LLM inference. (2) Detailed formulation of the optimization problem. Weaknesses: (1) The most unsatisfying part of this work for me is that the problem the authors try to solve should not be simplified to “cost saving” only; it should be formulated as “a cost-accuracy trade-off”. I do see that the authors acknowledge that it is a trade-off (e.g., “The need for fuzzy search.” in the introduction, and in section 5.2 “If the small model is chosen but its result is wrong, the large model must be run and it will incur an additional penalty.”), but I really want the authors to investigate this more instead of simply comparing on the cost. For example, for fuzzy search during caching, how much will it actually affect inference accuracy? For model selection, what if there are more than two models, and what if there is no way to know whether the inference result is correct or not (so that you cannot use multiple models)? Ultimately, the results should be presented as a cost-accuracy Pareto curve, and the proposed work has to show how it advances the Pareto frontier compared to the baseline (achieving better accuracy under the same cost, or achieving the same accuracy under less cost). The imbalance between formulation (5 pages) and experiments (1.5 pages) exacerbates this issue. (2) In the introduction it says “For the fuzzy search problem, semantic search or vector-embedding-based ideas provide a systematic solution that includes embedding extraction and matching algorithms. 
To simplify the problem, we assume that there exists some semantic search oracle that can group the prompts with the same semantic meaning…”. To me, this assumption oversimplifies the caching problem. In order to demonstrate the feasibility of caching, one needs to actually build a semantic-search-based cache and evaluate how it affects the accuracy-cost trade-off. (3) In section 5.2 it says “We fine-tune a BERT-base model as the model selector by predicting whether the small model can give the correct result and achieve 80.2% accuracy.”. But I can’t find any further information, such as the fine-tuning hyperparameters, and whether this 80.2% accuracy is training accuracy or validation accuracy. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: (1) In section 5.2, the next-token prediction task uses FLOPs as the cost, and the chat assistant task uses latency as the cost. Is there a reason to use different cost definitions here? Or why not evaluate both cost definitions in both cases? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors acknowledged some limitations, such as “We leave the training of a better selector as future work.”. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. Please find our responses to each comment below. ## Comment 1 **Reviewer:** > The problem that the authors try to solve should not be simplified to “cost saving” only, but it should be formulated as “a cost-accuracy trade-off”. I really want the authors to investigate this more instead of simply comparing on the cost. The results should be presented as a cost-accuracy Pareto curve, and the proposed work has to show how it advances the Pareto frontier compared to the baseline (achieving better accuracy under the same cost, or achieving the same accuracy under less cost). **Response:** It is a misunderstanding that we only focus on "cost saving". In both theory and experiments, we focus on the **cost-accuracy trade-off rather than cost**. Additionally, our framework is not limited to the cost-accuracy trade-off. It optimizes for the objective, which can be defined as any reward depending on the scenario. In small-large model selection, which is the motivating example in our paper, the reward is defined as the cost-accuracy trade-off, while in the model ensemble setting the objective can simply be defined as accuracy. See below for more details. - It might be misleading that we call $c$ a cost function (a term from bandits); it is indeed a trade-off term between real-world cost and accuracy. For example, a common choice is $c(q) =$ cost of $q - \lambda\times$ accuracy of $q$. Our theoretical analysis shows that once we choose the target function $c$ regarding how one combines cost and accuracy (i.e., fix a $\lambda$), the proposed algorithms always give the minimax-optimal trade-off between cost and accuracy. Thus our framework is general enough to incorporate any trade-off between cost and accuracy. - In our experiments, we **always guarantee the output quality at the same level**, and compare the cost. 
Thus the proposed algorithm advances the Pareto frontier compared to the baseline. In all experiments, if we call the small model and find that it does not give a satisfying result (the evaluation score from GPT-4 is smaller than 6 out of 10), we will call the large model again to fix the output. This incurs extra cost due to the error induced and the extra compute needed to call the large model. No matter whether we hit or miss the cache, and whether we select the wrong model or the correct model, the output will always be satisfying, with a score larger than 6. One may also change the reward from binary satisfying vs. unsatisfying to the actual scalar GPT-4 evaluation score. ## Comment 2 **Reviewer:** > For model selection, what if there are more than two models, and what if there is no way to know whether the inference result is correct. **Response:** We briefly discuss how to generalize to multiple models in Appendix C. If there are $K$ models, we can train a neural network with a $K$-dimensional output, each dimension predicting the cost for one model. If there is no way to evaluate the result, the model selection algorithm would not work. However, there are a good number of evaluation methods, like judgment from GPT-4, or a reward model trained from human preference data. ## Comment 3 **Reviewer:** > In introduction “We assume that there exists some semantic search oracle that can group the prompts with the same semantic meaning…”. This assumption oversimplifies the caching problem. In order to demonstrate the feasibility of caching, it's needed to build a semantic-search-based cache, and evaluate how it affects the accuracy-cost trade-off. For fuzzy search during caching, how much will it actually affect inference accuracy? **Response:** - There have been some preliminary studies and systems using vector databases for caching in [1], which represent the query as a vector from the embedding of a pre-trained or fine-tuned large language model specifically designed for retrieval. 
Thus the effectiveness of simple caching has been demonstrated, and building such a semantic search system only for caching is not our main focus given existing efforts. - According to [2], the cache hit rate in web search systems is usually 30%-60%. The semantic matching part for large models should be very similar to the existing ones used in web search systems, and thus there have been mature systems in industry that validate the effectiveness of caching systems. - Even without semantic search and fuzzy matching, exact-match-based caching is useful in practice. When it comes to using LLMs for API calls to software, we expect more redundant and identical queries, and the cache hit rate can be much higher than for chat even if we do exact match. Our experiments with exact match also validate the effectiveness of the proposed algorithms. - In our paper, we focus on identifying the information-theoretically optimal algorithms for jointly optimizing the caching and model selection system. We believe that the most appropriate fuzzy search deserves a serious and comprehensive study, but this may be out of the scope of our current focus. [1] GPTCache: https://github.com/zilliztech/GPTCache [2] Jeff Dean, Building Software Systems at Google and Lessons Learned ## Comment 4 **Reviewer:** > - In section 5.2 it says “We fine-tune a BERT-base model and achieve 80.2% accuracy.”. But I can’t find any further information, such as the fine-tuning hyperparameters, and whether this 80.2% accuracy is training accuracy or validation accuracy. > - In section 5.2, the next-token prediction task uses FLOPs as the cost, and the chat assistant task uses latency as the cost. Is there a reason to use different cost definitions here? Or why not evaluate both cost definitions in both cases? **Response:** - We will include all the details on the fine-tuning hyperparameters and the models. This 80.2% accuracy is test accuracy on the unseen prompts. 
- We have included experiments for both FLOPs and latency for both cases in the revised paper, along with the one-page PDF. --- Rebuttal Comment 1.1: Title: Rebuttal Reply Comment: I appreciate the clarifications and additional experiment results from the authors. They do help make it easier for me to appreciate this work, so I'm raising my score to borderline accept. I would highly recommend that the authors always keep in mind the balance between theory/formulation and engineering/experiments in future paper writing, not only to improve the soundness of the paper itself, but also to improve the chance of it being actually used by the AI community, a community that currently emphasizes engineering efforts and practical usability. --- Reply to Comment 1.1.1: Comment: Thank you for your great suggestions! We are working on building a real product based on the paper that can benefit the AI community more. We will release it soon once finished. Please stay tuned!
Summary: The paper addresses the computation costs of large language models. Specifically, the paper proposes a framework where previous queries are cached and retrieved, and where, given a query to process, an appropriate model is selected to answer based on the query. The framework flow is the following: given a query at test time, we check whether the response can be retrieved from the cache. If so, we return from the cache. If not, we select one of two models based on the query, where a natural configuration is to have two models - a large, more accurate model with a large computation cost and a small, less accurate model with smaller computation costs (the computation cost can be measured in various ways). Cache: The method estimates two oracles - the first oracle, DenEstOracle, estimates the probability that a query q will be observed. This is useful in defining the cache, since we want to cache the most frequent queries. The second oracle, RegressionOracle, estimates the cost of processing a query. This is useful since we want to cache the queries that cost the most to process. Then, in the online setting, at time step t we can have a cache which stores the most frequent queries that also have the highest cost. Remember that the cache is finite, so formally, at time step t the cache, denoted L_t, holds the L queries such that for every query q in L_t the value P_t(q) * Cl_t(q) is larger than for all previously seen queries that are not in the cache. This basically means that the set of queries in the cache is optimal, in the sense that the estimated cost of encountering those queries is the largest given both the probability that we will encounter them and the cost of processing them. 
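The caching rule summarized above (keep the L queries maximizing P_t(q) * C_t(q)) can be sketched as follows; the empirical-frequency and last-observed-cost estimators below are simplistic, hypothetical stand-ins for the paper's DenEstOracle and RegressionOracle:

```python
class LECCache:
    """Sketch of a least-expected-cost cache: retain the `capacity`
    queries with the largest estimated P_t(q) * C_t(q)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}   # per-query observation counts (frequency oracle)
        self.costs = {}    # latest observed processing cost (cost oracle)
        self.total = 0

    def observe(self, query, cost):
        """Record one occurrence of `query` and its processing cost."""
        self.counts[query] = self.counts.get(query, 0) + 1
        self.costs[query] = cost
        self.total += 1

    def cached_set(self):
        """Queries currently worth caching: top-`capacity` by P(q) * C(q)."""
        def score(q):
            return (self.counts[q] / self.total) * self.costs[q]
        ranked = sorted(self.counts, key=score, reverse=True)
        return set(ranked[: self.capacity])
```

Note how this differs from plain LFU: a rare but very expensive query can displace a frequent cheap one, which is exactly the point of weighting frequency by cost.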
Model selector: Now, for the model selector, the method assumes that the cost of a query is the weighted sum C_0(q) + Y(q) * C_1(q), where C_0(q) is the compute cost of the small model assuming the user is satisfied with the results, C_1(q) is the cost if the user is not satisfied with the results, and Y(q) is a random binary variable denoting whether or not the user is satisfied. Appendix A discusses the costs and model selection and considers options such as C_1(q) == cost of user dissatisfaction or C_1(q) == cost of re-running the large model on q, where the latter is used in the online experiment presented in the paper. Now, in the online setting, the model selector simply chooses the model with the smaller cost, and the cache stores the queries with the largest P(q) * min(C*_s(q), C*_l(q)), where C* is the estimated cost from the RegressionOracle. The paper provides experimental results in synthetic, offline, and online settings (the online setting is done using the OpenAssistant dataset). The paper compares against baselines such as a simple Least Frequently Used (LFU) cache as the baseline cache and a simple cascade (calling the large model if the result of the small model is unsatisfactory) as the model selector. The paper compares the baselines against the proposed cache and model selector, separately and jointly. Strengths: * The paper addresses an important real-world problem in a novel manner, with both a detailed theoretical discussion and practical experiments and results. * The presented framework is general and can be extended/adjusted to various use cases and practical applications. * The paper is well written and easy to follow. Weaknesses: IMO, the experiments can benefit from further analysis, e.g.: * Performance/accuracy analysis as a function of the number of steps in the online setting. * What is the cost of running the model selector and how does that affect the performance of the system? * Can you provide some accuracy measure of the oracles? 
for example, in the online setting, how many steps are required so that the oracles are properly estimated? What is the accuracy as a function of the step number? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Can you discuss how to ensure cache diversity? The top most probable queries might be very similar to each other. I understand that the method does not cache queries, but "query groups", where similar queries are cached as one group (it is assumed in the paper that there exists some semantic search oracle that can group the prompts with the same semantic meaning). Still, due to the long-tail distribution of queries, the least frequent queries in the cache might be less similar to each other than the top ones, which might motivate grouping the queries differently, i.e., taking into account P(q) and varying the minimal distance at which the semantic search oracle groups two queries with close embeddings. If the cache is not diversified enough, less common queries will be processed more slowly. This is especially true if we want to ensure the system is not biased towards a specific user population. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors did not specifically address limitations or potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
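The selection rule this review summarizes (compare the small model's expected cost C_0(q) + E[Y(q)] * C_1(q) against the large model's cost, with C_1 taken as the cost of re-running the large model) can be sketched as follows; all three estimator callables are hypothetical stand-ins, e.g. the failure probability could come from a fine-tuned BERT head:

```python
def select_model(query, cost_small, cost_large, p_unsatisfied):
    """Pick the model with the smaller *expected* cost.

    cost_small(q):    estimated compute cost of the small model on q (C_0)
    cost_large(q):    estimated compute cost of the large model on q
    p_unsatisfied(q): estimated P[Y(q) = 1], i.e. that the small model's
                      answer is unsatisfactory and the large model must
                      be re-run (so the penalty C_1(q) = cost_large(q))
    """
    expected_small = cost_small(query) + p_unsatisfied(query) * cost_large(query)
    expected_large = cost_large(query)
    if expected_small <= expected_large:
        return "small", expected_small
    return "large", expected_large
```

For instance, with a small-model cost of 1 and a large-model cost of 10, trying the small model first is worthwhile exactly when the estimated failure probability is at most 0.9.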
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. We have corrected all the typos and added all the suggested details in the revision. Please find our responses to each comment below. ## Comment 1 **Reviewer:** > The experiments can benefit from further analysis, e.g.: - Performance/accuracy analysis as function of number of steps in the online setting. - What is the cost of running the model selector and how does that affect the performance of the system? - Can you provide some accuracy measure of the oracles? for example, in the online setting, how many steps are required so that the oracles would be properly estimated? What is the accuracy as a function of the step number? **Response:** Thank you for your comments! - For the online case, the accuracy w.r.t. step size largely depends on the number of distinct queries we face. In our experiments, we use 100 distinct queries, and thus the oracle quickly converges after seeing the cost of most of the queries once. The accuracy is thus proportional to the fraction of seen queries, and is directly affected by the distribution of the queries. We are happy to add more details in the revised paper. - The cost of running the model selector is equivalent to one forward call of the model used (BERT or causal language models). In contrast, for a response with 1000 tokens, causal language models like LLaMA, Vicuna, and GPT-3.5 require 1000 forward calls. Thus the cost of running the model selector is negligible compared to the cost of generating the responses. - The shaded area in Figure 2 shows the variance, which reflects the convergence rate of the oracle selector. In practice, the quality of the predictor continues to improve as we gather more online data, and we can obtain good estimates for queries that are seen multiple times (and are more likely saved in the cache). ## Comment 2 **Reviewer:** > Can you discuss how to ensure the cache diversity? 
The most probable queries might be very similar to each other. I understand that the method does not cache queries, but "query groups", where similar queries are cached as one group (it is assumed in the paper that there exists some semantic search oracle that can group the prompts with the same semantic meaning). Still, due to the long-tail distribution of queries, the least frequent queries in the cache might be less similar to each other than the most frequent ones, which might motivate grouping the queries differently, i.e., taking into account P(q) and varying the minimal distance for grouping two queries together via the semantic search oracle that groups prompts with close embeddings. If the cache is not diversified enough, less common queries would be processed more slowly. This is especially true if we want to ensure the system is not biased towards a specific user population. **Response:** Thank you for the comment! We are happy to include more discussion on cache diversity. You are absolutely correct that when the least frequent queries in the cache are less similar to each other, we may want to group queries differently. Vector databases are popular for retrieving relevant documents by matching queries with similar embeddings into the same group, thus grouping responses with similar semantic meaning. In the case of caching, we may want to design new embedding methods that take into account the frequency $P(q)$ and use different thresholds for grouping. This will be a very interesting open problem to explore further. ## Comment 3 **Reviewer:** > The authors did not specifically address limitations or potential negative societal impact of their work. **Response:** Thank you for your comments! We will include more discussion of the limitations and potential negative societal impacts of the work. For example, caching may lead to biased processing speeds towards a specific user population when that group sends more queries than other groups.
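The frequency-aware grouping suggested in the response to the cache-diversity comment above could be sketched as follows; the greedy scheme, the threshold rule $\tau(P(q))$, and all constants are hypothetical illustrations of the idea, not the paper's method:

```python
import math

def group_queries(embeddings, freqs, base_tau=0.3, scale=0.2):
    """Greedy grouping sketch: rarer queries get a looser matching
    threshold, so the long tail is not under-grouped in the cache.
    The rule tau = base_tau + scale * (1 - P(q)) is purely illustrative."""
    groups = []  # list of (centroid embedding, member indices)
    order = sorted(range(len(freqs)), key=lambda i: -freqs[i])  # frequent first
    for i in order:
        tau = base_tau + scale * (1.0 - freqs[i])  # looser for rare queries
        for centroid, members in groups:
            if math.dist(embeddings[i], centroid) <= tau:
                members.append(i)  # joins an existing query group
                break
        else:
            groups.append((embeddings[i], [i]))  # starts a new group
    return groups
```

A group here inherits its most frequent member's embedding as the centroid; a real system would delegate the distance search to a vector database.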
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the clarifications. I keep my original score of 7; this is a good paper IMO that would benefit the community, and I would recommend accepting it.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for the valuable comments and suggestions, which have helped us greatly improve the paper. Here we briefly summarize our additional experiments conducted during the rebuttal period. In our original experiments in the main text, we include 2 experiment tables with $100$ distinct queries, a cache size of $40$ and $10k$ total queries, covering both FLOPs for the offline next-token-prediction task and latency for the online chat task. In our revised version, we include **22 extra experiment tables on different parameters and tasks**, listed below: - For the original set of parameters ($100$ distinct queries, cache size $40$ and $10k$ total queries), we include 6 extra experiment tables. Combined with the two original tables, they cover the metrics [FLOPs, latency] on the [offline, online] [next-token-prediction, chat] tasks, for 8 tables in total. - We include another 8 tables with parameters ($1000$ distinct queries, cache size $0$ and $2000$ total queries), with the metrics [FLOPs, latency] on the [offline, online] [next-token-prediction, chat] tasks. - We include another 8 tables with parameters ($1000$ distinct queries, cache size $100$ and $2000$ total queries), with the metrics [FLOPs, latency] on the [offline, online] [next-token-prediction, chat] tasks. Due to the one-page PDF limit for uploads during the rebuttal period, we only include the first 6 extra experiments on the same parameters for different metrics and different tasks / settings. We also include two extra tables on the case without any cache ($1000$ distinct queries, cache size $0$ and $2000$ total queries) below. The rest will be available in the revised paper. The tables list cumulative costs (in units of $10^3$).
> FLOPs on the offline Lambada dataset, opt-1.3b vs opt-13b

| \(\alpha\) | selector accuracy | large | cascade | selector |
|------------|-------------------|-------|---------|----------|
| 0.2 | 0.8 | 4.07 | 5.65 | **3.22** |
| 0.5 | 0.8 | 4.16 | 4.57 | **3.02** |
| 0.8 | 0.8 | 4.12 | 4.42 | **3.09** |
| 0.2 | 1 | 4.14 | 3.82 | **1.87** |
| 0.5 | 1 | 4.13 | 4.35 | **2.28** |
| 0.8 | 1 | 4.13 | 4.46 | **2.24** |

> Latency on the offline Lambada dataset, opt-1.3b vs opt-13b

| \(\alpha\) | selector accuracy | large | cascade | selector |
|------------|-------------------|-------|---------|----------|
| 0.2 | 0.8 | 0.39 | 0.54 | **0.35** |
| 0.5 | 0.8 | 0.39 | 0.48 | **0.32** |
| 0.8 | 0.8 | 0.39 | 0.48 | **0.32** |
| 0.2 | 1 | 0.39 | 0.47 | **0.19** |
| 0.5 | 1 | 0.39 | 0.45 | **0.23** |
| 0.8 | 1 | 0.39 | 0.46 | **0.24** |

Pdf: /pdf/28ecc50eb21027d32a6ebd595883e55eb440843f.pdf
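The three routing strategies compared in these tables can be sketched with a toy cost model; the cost constants and the perfect-selector assumption below are illustrative only and do not reproduce the reported numbers:

```python
def strategy_costs(needs_large, cost_small=1.0, cost_large=10.0):
    """Cumulative cost of three routing strategies over a query stream.

    needs_large[i] is True when the small model's answer to query i is
    unsatisfactory, forcing a call to the large model. Costs are in
    arbitrary illustrative units and the selector is assumed perfect."""
    # "large": always call the large model.
    large = cost_large * len(needs_large)
    # "cascade": try the small model first, fall back to the large one.
    cascade = sum(cost_small + (cost_large if b else 0.0) for b in needs_large)
    # "selector": route each query straight to the model that suffices.
    selector = sum(cost_large if b else cost_small for b in needs_large)
    return large, cascade, selector
```

For `needs_large = [False, True, False, False, True]` this yields `(50.0, 25.0, 23.0)`, mirroring how the selector column is cheapest in the accuracy-1 rows above.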
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper investigates the combination of caching and model selection strategies to reduce the inference costs of large models (LLMs in particular). As a result, the authors propose a theoretically grounded algorithm that demonstrates promising results in practice. Strengths: 1. The problem of cost-effective LLM inference is in high demand. 2. The paper is well written and easy to follow. 3. The theoretical contribution seems novel and complete. 4. Synthetic experiments are promising, especially when the ratio between min/max query costs is large. Weaknesses: In my opinion, Section 5.2 lacks some important details and clarifications. This makes it difficult to estimate the practical contribution of the proposed approach. I highly recommend adding the detailed evaluation protocols for both tasks to the appendix. **Minor** * In Figure 2, please specify which plot corresponds to the offline/online setting. * In L132 and L152, one can introduce $\mathcal{L}^{\star}$ and $\pi^{\star}$ that appear in the corresponding equations. * Consider adding the cost measures to the captions in Tables 1 and 2. * Consider specifying the hardware specs used for the latency evaluation and the corresponding std values in Table 2. * Typos: L168 "last year" -> "last layer" Technical Quality: 3 good Clarity: 3 good Questions for Authors: The following questions will help to address my concerns about the empirical contribution in Section 5.2: * How is the performance measured for each task? * For a fair comparison, all methods within each row (Tables 1,2) should provide the same model quality. * Does this hold true in both cases? If yes, how is it guaranteed for the chat assistant task? * What is the target performance for each task? * What is the criterion to quantify whether the output of the small model is satisfactory or not (especially in the chat assistant task)? * In the offline setting, the predictor is not accurate enough to outperform "cascade", while this is not the case in the online setting.
I am curious why this happens. * Is the chat assistant task easier for the selector? Or is it caused by the difference in the online/offline settings? * What selector is used in the online setting? How accurate is it at convergence? What portion of queries does the predictor observe until convergence? * What if one tries the online and offline settings for the Lambada and chat assistant tasks, respectively? * How do the costs and gains depend on the cache size? What are the cache sizes in both settings? How are they selected? * It might be useful to report the "large", "cascade", and "selector" costs without caching to understand the LFU/LEC gains better. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. We have corrected all the typos and added all the suggested details in the revision. Please find our responses to each comment below. ## Comment 1 **Reviewer:** > In my opinion, Section 5.2 lacks some important details and clarifications. It makes it difficult to estimate the practical contribution of the proposed approach. I highly recommend adding the detailed evaluation protocols for both tasks to the appendix. **Response:** Thank you for your comments and sorry for missing the details! We will include all the detailed evaluation protocols in the Appendix. Please find our responses to the individual questions below. ## Comment 2 **Reviewer:** > How is the performance measured for each task? What is the target performance for each task? What is the criterion to quantify if the output of the small model satisfies or not (especially in the chat assistant task)? **Response:** For the offline next-token prediction task, the target performance metric is the number of correct tokens predicted. We have the ground-truth token from the Lambada dataset, so it is easy to measure success. If the small model is chosen but its result is wrong, the large model must be run, incurring additional FLOPs. For the online chat assistant task, the quality of a response is evaluated by GPT-4 judgement. We say a response is satisfying if the score is larger than 6 out of 10, and unsatisfying otherwise. If the response from the small model is unsatisfying, we will call the large model again and incur an additional cost in latency. ## Comment 3 **Reviewer:** > For a fair comparison, all methods within each row (Tables 1,2) should provide the same model quality. Does this hold true in both cases? If yes, how is it guaranteed for the chat assistant task? **Response:** Thank you for your comments!
Yes, in the next-token prediction task, if the small model fails to predict the ground truth, we will call the large model again to re-generate. Thus it provides the same quality. In the chat assistant task, if the response of the small model is evaluated as unsatisfying (score less than 6 out of 10 in GPT-4 judgement), the large model will be called to ensure the output quality is as good, at a cost of higher latency due to calling both the small and the large model sequentially. Thus all the methods provide the same model quality. One may adjust the GPT-4 judgement score threshold to be more strict on the quality of the output. ## Comment 4 **Reviewer:** > In the offline setting, the predictor is not accurate enough to outperform "cascade" while it is not the case in the online setting. I am curious why it happens. Is the chat assistant task easier for the selector? Or is it caused by the difference in the online/offline settings? What selector is used in the online setting? How accurate is it at convergence? What portion of queries does the predictor observe until convergence? What if one tries the online and offline settings for the Lambada and chat assistant tasks, respectively? **Response:** Thank you for your comments! In both the offline and online settings, we work with 100 distinct prompts. In the offline setting, we train a BERT-based prediction model on 2k prompt-response pairs. In the online setting, we consider a tabular selector that memorizes the cost for each prompt after seeing it once; thus the selector converges once it has seen each query once. In the online setting, the selector is initialized to call the small model first (same as cascade) and switches to the larger model once it learns that the larger model is better. This makes our method better than cascade. In practice, the quality of the predictor continues to improve as we gather more online data.
And we can obtain good estimates for queries that are seen multiple times (and are more likely saved in the cache). The shaded area in Figure 2 shows the variance, which reflects the convergence rate. We added new experiments on the online and offline settings for the Lambada and chat assistant tasks, which are included in the attached PDF. ## Comment 5 **Reviewer:** > How do the costs and gains depend on the cache size? What are the cache sizes in both settings? How are they selected? It might be useful to report the "large", "cascade", and "selector" costs without caching to understand the LFU/LEC gains better. **Response:** Thank you for your comments! The gains from caching largely depend on the number of cached items and the distribution of real-world queries. In our experiments, we work with 100 distinct queries and cache 40 queries. Thus the gain from caching is relatively large compared to the case without caching. If we only cached 5 queries, the gain from caching could be lower. To provide a more comprehensive comparison, we have added new experiments for different cache sizes, including no cache, a cache of 40 out of 100 queries, and a cache of 100 out of 1000 queries, in the revised version (along with the uploaded PDF). --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: I would like to thank the authors for their thoughtful clarifications and additional results. The setting with 1k distinct queries out of 2k and a cache size of 100 sounds interesting and reasonable, so I look forward to seeing these results in the revision. Overall, my questions have been well addressed. So, I'm happy to update my score accordingly.
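The online tabular selector described in this rebuttal can be sketched as follows; the class and function names, cost constants, and the driver loop are illustrative assumptions rather than the authors' code:

```python
class TabularSelector:
    """Memorizes, per prompt, whether the small model sufficed.

    Unseen prompts are routed like a cascade (small model first);
    once a prompt has been observed, it is routed directly."""

    def __init__(self):
        self.needs_large = {}  # prompt -> bool, learned online

    def route(self, prompt):
        return "large" if self.needs_large.get(prompt, False) else "small"

    def observe(self, prompt, small_failed):
        self.needs_large[prompt] = small_failed


def run_stream(stream, unsatisfactory, c_small=0.1, c_large=0.4):
    """Cumulative latency of the selector over a query stream.

    unsatisfactory[q] is True when the small model's response to q is
    judged unsatisfying, forcing a fallback to the large model."""
    selector, total = TabularSelector(), 0.0
    for q in stream:
        if selector.route(q) == "small":
            total += c_small
            if unsatisfactory[q]:  # fall back, paying both latencies
                total += c_large
            selector.observe(q, unsatisfactory[q])
        else:
            total += c_large
    return total
```

On the stream `["a", "b", "a", "b", "a"]` with `{"a": True, "b": False}` this returns 1.5, versus 1.7 for a pure cascade that never learns which prompts need the large model.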
Synthetic Combinations: A Causal Inference Framework for Combinatorial Interventions
Accept (poster)
Summary: The manuscript introduces a synthetic combinations estimator, which computes causal effects for unseen treatment combinations under some reasonable assumptions on the data generating process. Building on theory from potential outcomes and synthetic interventions, the authors show how to exploit information sharing across units and treatments to identify causal effects in this challenging setting. The proposed two-step algorithm is more efficient and flexible than existing alternatives. Strengths: The topic is timely and interesting. The manuscript is exceptionally clear and well-written, which is greatly appreciated when dealing with dense formalisms. The theoretical results are strong and convincing, rooted in established results while simultaneously going beyond the current state of the art. The analysis is sound and well-motivated. Weaknesses: My main critique of this manuscript is one I am sure the authors will have anticipated – there are no empirical results in the main text! I am aware that 9 pages is tight but the authors could have been more judicious in their selection of what to send to the appendix. I also found the material on CART to be super interesting; a shame to banish this material to the wilderness of Appendices J and K. Fortunately, the final manuscript affords one extra page. I strongly encourage the authors to move Fig. 2 to the main text and expand on this empirical evaluation. Would also be great to shoehorn in some of the CART results, but that may prove difficult. If it is impossible to squeeze everything within the limit, then the authors may want to consider repurposing this manuscript for a journal submission. There is more than enough material here for a very solid journal contribution. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback, and address their concerns as follows. >My main critique of this manuscript is one I am sure the authors will have anticipated – there are no empirical results in the main text! I strongly encourage the authors to move Fig. 2 to the main text and expand on this empirical evaluation We thank the reviewer for highlighting this point about our presentation and empirical evaluation. We perform additional experiments in the global response on a real-world dataset on recommendation systems for combinations of movies that highlight the benefit of our approach as compared to other methods (e.g., Lasso and matrix completion techniques). Further, we show that the key assumptions required for Synthetic Combinations to work are satisfied in our real-data experiments. We will revise the paper to include these real-world experiments, as well as those in the appendix. > I also found the material on CART to be super interesting; a shame to banish this material to the wilderness of Appendices J and K. Fortunately, the final manuscript affords one extra page. Would also be great to shoehorn in some of the CART results, but that may prove difficult. We are glad the reviewer found the material on CART to be interesting! We highlight some results regarding CART in Corollary 6.8 which shows that CART can exploit additional regularity conditions placed on the potential outcomes to allow the sparsity $s$ to scale more quickly (i.e., by a factor of the number of interventions $p$) while achieving consistency. As a result, CART is able to achieve an improved sample complexity of $O(\text{poly}(r/\delta) \times (N + s^2))$ as compared to $O(\text{poly}(r/\delta) \times (N + s^2p))$ samples required when the horizontal regression is done via the Lasso. We will revise the paper to make the benefits of sample complexity of using CART clearer, and also attempt to include more formal results related to CART in the main text. 
--- Rebuttal Comment 1.1: Comment: We thank the reviewer again for their thoughtful comments. We hope that they have had a chance to review our response to their specific concerns, and our real-world experiments in the global response where we demonstrate the efficacy of Synthetic Combinations over baselines methods, and that our key modeling assumptions (i.e., low-rank and sparsity) hold. Please let us know if there is anything else we can do to address your concerns, and we hope you improve your score.
Summary: The paper studies the problem of estimating potential outcomes in the presence of a combinatorial number of intervention choices. Under some assumptions, they propose a two-phased algorithm, "Synthetic Combinations": first exploit structure across combinations of interventions (via "horizontal regression") and then exploit structure across units (via "vertical regression"). Experiments are given in the appendix. Strengths: The proposed algorithm is clean and intuitive. It also seems to scale nicely with the number of intervention combinations in experiments. While the theoretical guarantees rely heavily on a bunch of assumptions, Section 7 proposes an experimental design framework which ensures that an important set of assumptions (existence of donor units) will be met with high probability. In fact, I strongly propose that the authors rephrase their paper to highlight this; otherwise it is hard to believe that their work will be usable, as it is highly unlikely that all the required assumptions are met in practice without having control over assigning interventions to the units. Weaknesses: I did not check all the proofs in detail, but I do not see any glaring weaknesses. There are a lot of assumptions and it is highly unlikely that all the required assumptions are met in practice (Section 7 helps to mitigate some of these concerns). I am skeptical about the low-rank assumption on the matrix of Fourier coefficients $A$. While it is true that low-rank assumptions are common in prior matrix completion settings, they usually directly consider the matrix at hand and do not transform it into the Fourier space first. For example, Lines 655-657 in the appendix write "This missingness pattern where outcomes with larger absolute values are observed is common in applications such as recommendation engines, where we are only likely to observe ratings for combinations that users either strongly like or dislike".
The corresponding missingness pattern in the problem studied here is on the $N$-by-$2^p$ matrix. It is unclear to me why it should be believable that the transformed space is low-rank. The authors ought to justify this, ideally with practical examples/settings, or risk diminishing the impact of their contributions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Line 83: By "equivalent", do you mean that they proved equivalence between the two problems via reductions, or do you mean "equivalent" in a colloquial sense of the word? Assumption 3.1: As discussed in the weaknesses, it is unclear to me why this model is interesting or justified. Of course, this work can be appreciated under the restriction of this assumption, but it will greatly weaken the contributions. I am more than happy to increase my "contribution" score if the authors provide sufficient justification for the low-rank assumption. Typo on Line 183: double "exists" Motivating example on Line 190: I don't understand why this motivates the existence of donor units when the paper has thus far repeatedly claimed to allow unobserved confounding. If we allow interventions to be arbitrarily assigned to units, it is unclear why we should believe that donor units exist. The "correct" way to justify this should be to say that there is an experimental design that ensures the existence of donor units, and then refer to Section 7. Determining donor set on Line 246: This feels very ad hoc. As it is unlikely that donor units will exist if we allow arbitrary experiments, I feel that this paragraph could be removed once the authors reorder their paper to place more emphasis on the experimental design proposed in Section 7. Subsection on Additional Assumptions: I feel that "so-and-so also has such an assumption" is not sufficient discussion of assumptions. Firstly, "so-and-so" may have the assumptions under different contexts (e.g.
see my complaint about the low-rank assumption in the Weaknesses section), so it is unclear why such an assumption is justified in the setting studied in this paper. Secondly, the discussion should explain "what goes wrong" if one particular assumption is violated, or why we should expect any particular assumption to hold in practice. As mentioned several times by now, one "partial fix" is to emphasize that the experimental design of Section 7 guarantees some assumptions with high probability. That is, "Synthetic Combinations" should be used in conjunction with the experimental design proposed in Section 7. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: Nil. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback, and address their concerns as follows. > I am skeptical about the low-rank assumption on the matrix of Fourier coefficients $\mathcal{A}$. While it is true low-rank assumptions are common in prior matrix completion settings, they usually directly consider the matrix at hand and not transform it into the Fourier space first. > Assumption 3.1: As discussed in the weaknesses, it is unclear to me why this model is interesting or justified... __I am more than happy to increase my "contribution" score if the authors provide sufficient justification for the low-rank assumption__. **Response:** The $2^p \times N$ matrix of potential outcomes $\mathbb{E}[\mathbf{Y}_N^{(\Pi)}]$ can be written as $\mathbb{E}[\mathbf{Y}_N^{(\Pi)}] = \mathbf{\chi}(\Pi) \mathcal{A}^T$, where $\mathbf{\chi}(\Pi)$ is the matrix of Fourier characteristics. Since $\mathbf{\chi}(\Pi)$ is an invertible matrix, $\text{rank}(\mathbb{E}[\mathbf{Y}_N^{(\Pi)}]) = \text{rank}(\mathcal{A})$. Hence, placing a low-rank assumption on the Fourier coefficients $\mathcal{A}$ is equivalent to placing a low-rank assumption on the matrix of outcomes $\mathbb{E}[\mathbf{Y}_N^{(\Pi)}]$. As the reviewer points out themselves, placing low-rank structure on the outcomes $\mathbb{E}[\mathbf{Y}_N^{(\Pi)}]$ is common when studying matrix completion. We discuss this equivalence in line 140, but will make this point clearer in our revision. > Determining donor set on Line 246: This feels very ad-hoc. As it is unlikely that donor units will exist if we allow arbitrary experiments... **Response:** This paragraph provides a data-driven method using cross-validation (CV) to identify donor units in observational settings. Our additional experiments in the global response on real-world data demonstrate that we outperform other baselines (e.g., Lasso and matrix completion methods), motivating the existence of a donor set in an observational setting.
Moreover, we use the CV method laid out in this paragraph to select the donor set in our real-world experiments, indicating that this approach can be used in practice. However, the reviewer correctly points out that a donor set is not guaranteed to exist under arbitrary/adversarial observation patterns. We will clarify this in our revision. > Motivating example on Line 190: I don't understand why this motivates the existence of donor units when the paper has thus far repeatedly claimed to allow unobserved confounding... **Response:** We provide examples that guarantee the existence of donor units in both an experimental (i.e., our motivating example) and an observational setting. In the motivating example, the reviewer correctly points out that our treatment assignment mechanism does not induce unobserved confounding. We will revise the language to reflect this. In the observational setting, we provide an example (deferred to Appendix C for space constraints) where these assumptions hold under a natural model of unobserved confounding. Specifically, our observational example consists of a treatment assignment where only outcomes with large absolute values are seen. This observation pattern is common in recommendation systems where we only observe ratings from users who strongly like or dislike a product. We will provide a brief description of this example in our revision. > Subsection on Additional Assumptions: I feel that "so-and-so also has such an assumption" is not sufficient discussion of assumptions. Firstly, "so-and-so" may have the assumptions under different contexts (e.g. see my complaint about low-rank assumption in the Weaknesses section) so it is unclear why such assumption is justified in the setting studied in this paper. Secondly, the discussion should explain "what goes wrong" if one particular assumption is violated... **Response:** Thank you for pointing out the role of assumptions in our work.
We discuss assumptions from these previous works in order to build upon them, and provide context for our work. However, we agree with the reviewer that we can revise our language to discuss our assumptions more clearly. We will revise the text to say that in our analysis, if the assumptions that place unit-specific structure (e.g., sparsity or incoherence of Fourier characteristics) are not satisfied (which can be tested via CV), then the outcomes of the donor units cannot be accurately estimated via the Lasso. In this case, alternative horizontal regression algorithms to estimate donor unit potential outcomes may be required instead. Similarly, if the assumptions that place structure across units (e.g., the low-rank condition) do not hold (which again can be tested by examining the spectrum of the matrix), then it is difficult to accurately transfer the outcomes of the donor set to the non-donor units. In the global response we verify that these assumptions do seem to hold in our real-world experiments. Further, we discuss why important applications such as factorial design experiments and recommendation systems naturally induce low-rank and sparse representations in Appendix B. However, we will discuss the limitations of our approach and "what goes wrong" if these assumptions do not hold in our revision. > Line 83: By "equivalent", do you mean that they proved equivalence between the two problems via reductions, or do you mean "equivalent" in a colloquial sense of the word **Response:** We mean colloquially, in the sense that both problems can be cast as missing data problems. That is, the central task of both causal inference and matrix completion is to impute unobserved (i.e., missing) outcomes. Specifically, given a "causal inference estimator" for imputing missing potential outcomes, one can use such an estimator to directly impute missing entries in the appropriately defined matrix.
Similarly, given a matrix completion estimator to impute missing entries in a matrix, one can then use it to impute missing potential outcomes. > Typo on Line 183: double "exists" **Response:** Thanks! We will fix this. --- Rebuttal Comment 1.1: Comment: Thank you for your patience and effort to clear my doubts and misunderstandings. Also, I appreciate the effort that the authors took to perform additional experiments --- it must have been tough to do so in such a short period of time! I am very satisfied with the detailed response and have updated the scores accordingly :) Please kindly incorporate some of the discussions here into your revision. Thanks! --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to comment in such detail on our paper, and for reading our response. Your feedback was very helpful in improving our paper, and we will incorporate it in our revision. Thank you for increasing your score!
Summary: The paper studies the problem of recovering $N\times 2^p$ unit-specific outcomes for N heterogeneous units and any combination of p possible interventions from a small number of experiments and observations. Prior to this work, the problem had been studied under assumptions of latent similarity or regularity in how combinations of interventions interact, as well as some other setups. In the setup where one assumes latent similarity, namely that the matrix of Fourier coefficients across units has rank <= r, the problem is reduced to matrix completion, and hence all causal outcomes can be recovered from $O(poly(r)\times (N+2^p))$ observations. In the case when regularity in intervention interactions is assumed, it is known that $O(Ns^2p)$ measurements are sufficient, where s is the sparsity parameter of the coefficients in the Fourier expansion of the potential outcomes. This paper studies the problem in the case when both latent similarity and intervention regularity are assumed. Under both assumptions the paper proposes an algorithm that recovers all $N\times 2^p$ causal outcomes from $O(N\times (s^2 + p))$ measurements. Strengths: The problem of estimating causal outcomes under a combination of interventions is a notoriously hard problem with various applications. One complication is that it is usually expensive/impossible to run many experiments to measure the effects caused by interventions. Hence, understanding how to set up experiments that require the minimum number of measurements is of great importance. This paper proposes an algorithm, called Synthetic Combinations, that provably recovers all $N\times 2^p$ unit-specific outcomes under $2^p$ interventions, under a combination of two widely accepted assumptions, from $O(N\times (s^2 + p))$ measurements, which is a significant improvement over the prior work. The paper also provides statistical estimates for the number of samples needed for every experiment to achieve the desired accuracy of the recovery.
The paper is well-written and, as far as I can judge, correct, though I did not read the proofs carefully. Weaknesses: It is not completely clear how realistic the scenario is in which latent similarity and intervention regularity hold simultaneously. I can imagine that in some datasets one or the other may hold, while both assumptions at the same time may not. The paper would benefit significantly from experiments on real-world datasets that can confirm that the theoretical assumptions are realistic and that we indeed see an improvement in the number of measurements needed. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Can you provide some intuition for why you believe that the assumptions needed for Synthetic Combinations to work are expected to hold for real-world datasets? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback, and address their concerns as follows. >It is not completely clear how realistic the scenario is in which latent similarity and intervention regularity hold simultaneously. I can imagine that in some datasets one or the other may hold, while both assumptions at the same time may not hold. The paper will benefit significantly from experiments on real-world datasets that can confirm that the theoretical assumptions are realistic and we indeed see improvement in the number of measurements needed. > Can you provide some intuition why you believe that the assumptions needed for Synthetic Combinations to work are expected to hold for real-world datasets? We thank the reviewer for pointing out the role of assumptions in our work. We perform additional experiments in the global response on a real-world dataset on recommendation systems for combinations of movies that highlight the benefit of our approach as compared to other methods (e.g., Lasso and matrix completion techniques). We also show that the key modeling assumptions (e.g., low-rank structure and sparsity) hold in this real-world dataset. Further, the improved performance of our method as compared to other approaches, and in particular the Lasso, implies the existence of a valid donor set. We hope that these experiments motivate the empirical utility of our methods, and show that our theoretical assumptions are grounded. We will revise the paper to include these results. We also note that latent similarity is equivalent to placing a low-rank assumption on the matrix of potential outcomes $\mathbb{E}[\mathbf{Y}_N^{(\Pi)}]$. That is, the $2^p \times N$ matrix of potential outcomes $\mathbb{E}[\mathbf{Y}_N^{(\Pi)}]$ can be written as $\mathbb{E}[\mathbf{Y}_N^{(\Pi)}] = \mathbf{\chi}(\Pi) \mathcal{A}^T$, where $\mathbf{\chi}(\Pi)$ is the matrix of Fourier characteristics. 
Since $\mathbf{\chi}(\Pi)$ is an invertible matrix, $\text{rank}(\mathbb{E}[\mathbf{Y}_N^{(\Pi)}]) = \text{rank}(\mathcal{A})$. Hence, placing a low-rank assumption on the Fourier coefficients $\mathcal{A}$ is equivalent to placing a low-rank assumption on the matrix of outcomes $\mathbb{E}[\mathbf{Y}_N^{(\Pi)}]$, which is a widely made assumption when studying matrix completion. With regards to intervention regularity and sparsity, we provide a discussion in Appendix B of why these assumptions hold in models used to study relevant applications such as factorial design experiments and recommendation systems. --- Rebuttal Comment 1.1: Comment: We thank the reviewer again for their thoughtful comments. We hope that they have had a chance to review our response to their specific concerns, and our real-world experiments in the global response, where we demonstrate the efficacy of Synthetic Combinations over baseline methods and that our key modeling assumptions (i.e., low-rank and sparsity) hold. Please let us know if there is anything else we can do to address your concerns, and we hope you improve your score.
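The rank argument in this rebuttal (an invertible $\mathbf{\chi}(\Pi)$ preserves the rank of $\mathcal{A}$) can be checked numerically; the following is a toy NumPy sketch, where the dimensions $p$, $N$, $r$ are hypothetical and the Walsh-Hadamard construction is an illustrative stand-in for the Fourier characteristic matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: p = 4 interventions, N = 10 units, rank r = 3.
p, N, r = 4, 10, 3

# chi: 2^p x 2^p Fourier characteristic matrix over the Boolean cube
# (a Walsh-Hadamard matrix up to scaling; orthogonal, hence invertible).
chi = np.array([[1.0]])
for _ in range(p):
    chi = np.block([[chi, chi], [chi, -chi]])

# A: N x 2^p matrix of Fourier coefficients with rank r.
A = rng.standard_normal((N, r)) @ rng.standard_normal((r, 2**p))

# Y = chi @ A^T: the 2^p x N matrix of expected potential outcomes.
Y = chi @ A.T

# Invertibility of chi implies rank(Y) = rank(A).
assert np.linalg.matrix_rank(Y) == np.linalg.matrix_rank(A) == r
```

So a low-rank assumption on the coefficient matrix is exactly a low-rank assumption on the outcome matrix, as the rebuttal states.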
Summary: __Disclaimer__: This is my first time reading & reviewing a paper from the field of Combinatorial Interventions. My expertise is in NLP. This work studies latent structure across units and combinations of interventions, assuming similar outcomes across units and regular interaction. An estimation procedure, Synthetic Combinations, is proposed, establishing finite-sample consistency under precise conditions. This work also uses methods to reduce errors in variables and provides the possibility of model-agnostic analysis. Strengths: * All the proofs and other mathematical explanations are clear, but I am not able to assess them properly because I have no expertise in this field. Weaknesses: * Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: NA Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback! We note that we perform additional experiments in the global response on a real-world dataset on recommendation systems for combinations of movies that highlight the benefit of our approach as compared to other methods (e.g., Lasso and matrix completion techniques). Further, we show that the key assumptions required for Synthetic Combinations to work are satisfied in our real-data experiments. We hope that these experiments motivate the empirical utility of Synthetic Combinations, and we will revise the paper to include these results. --- Rebuttal Comment 1.1: Comment: We thank the reviewer again for their thoughtful comments. We hope that they have had a chance to review our response to their specific concerns, and our real-world experiments in the global response, where we demonstrate the efficacy of Synthetic Combinations over baseline methods and that our key modeling assumptions (i.e., low-rank and sparsity) hold. Please let us know if there is anything else we can do to address your concerns, and we hope you improve your score.
Rebuttal 1: Rebuttal: We thank the reviewers for their positive feedback! A primary concern amongst reviewers was a lack of empirical evaluation of Synthetic Combinations. Here, we present a real-world data experiment on recommendation systems for sets of movie ratings to address these concerns, and highlight the empirical effectiveness of our approach. Further, we empirically validate that the key assumptions (i.e., the low-rank condition and sparsity of donor unit Fourier coefficients) required for Synthetic Combinations to work also hold. We address specific reviewer concerns in rebuttals to each of them separately. __Data and Experimental Set-up.__ We use data collected in [2], which consists of user ratings of sets of movies. Specifically, users were asked to provide a rating of 1-5 on a set of 5 movies chosen at random. This resulted in ratings from 854 users over 29,516 sets containing 12,549 movies. More details about the data collection process can be found in [2]. Due to computational constraints, we only perform experiments on N = 100 users and 4000 sets of ratings chosen at random. We use 80% of each user's ratings as the training set, and the other 20% as the test set to evaluate performance. __Comparison Methods__. As in the numerical simulations in the appendix, we compare Synthetic Combinations to matrix completion algorithms: SoftImpute [1] and IterativeSVD [3]. These methods require that the rank of the underlying matrix be provided as a hyper-parameter, which was chosen via 5-fold cross-validation (CV). We also compare Synthetic Combinations to the Lasso, where we tune the regularization parameter $\lambda$ via 5-fold CV. For Synthetic Combinations, we tune all hyper-parameters via 5-fold CV. Additionally, we choose the donor set via the approach outlined in the manuscript (see lines 246-257). __Results.__ We measure the root mean squared error (RMSE) for all methods, and average their results over 3 repetitions. 
The RMSE is displayed in the table below. We observe that Synthetic Combinations outperforms all other methods. Further, the gap between Synthetic Combinations and the Lasso shows the benefit of first estimating the outcomes of the donor set, and then transferring these estimated outcomes to non-donor units. We hope that this experiment reinforces the empirical utility of our approach, and showcases that our theoretical and modeling assumptions are grounded. We will revise the manuscript to include these experiments, and expand on this empirical evaluation as well (e.g., investigating performance as a function of the number of users and sets). | Method | __Synthetic Combinations__ | SoftImpute | IterativeSVD | Lasso | |--------|--------------------------|---------------------|-----------------------|-----------------| | RMSE | __0.55__ $\pm$ 0.06 | 0.67 $\pm$ 0.03 | 0.68 $\pm$ 0.02 | 0.80 $\pm$ 0.12 | __Key Assumptions of Synthetic Combinations hold__. We also verify that two of the key assumptions, the low-rank condition on the matrix of outcomes and sparsity of donor unit Fourier coefficients, hold in this real-world dataset. For the low-rank condition, we choose the set of movies that were rated by all users, and plot its singular value spectrum (using a log-scale for the magnitude of the spectrum) in Figure 1 of the attached PDF. As seen in the plot, it is clear that the matrix of outcomes displays low-rank structure. For the sparsity condition, we investigate the Lasso model that was learnt for the donor units. The RMSE averaged across all the donor units on the test set was 0.51, indicating that the estimated Fourier coefficients are an accurate representation of the true underlying ones. Further, we note that the estimated Fourier coefficients are indeed sparse: on average, only 8.7\% of all possible coefficients are non-zero. __References__ [1] R. Mazumder, T. Hastie, and R. Tibshirani. 
Spectral regularization algorithms for learning large incomplete matrices. The Journal of Machine Learning Research, 11:2287–2322, 2010. [2] M. Sharma, F. M. Harper, and G. Karypis. Learning from sets of items in recommender systems. ACM Trans. Interact. Intell. Syst., 9(4), July 2019. ISSN 2160-6455. doi: 10.1145/3326128. URL https://doi.org/10.1145/3326128. [3] O. Troyanskaya, M. Cantor, G. Sherlock, P. Brown, T. Hastie, R. Tibshirani, D. Botstein, and R. B. Altman. Missing value estimation methods for DNA microarrays. Bioinformatics, 17(6):520–525, 2001. Pdf: /pdf/f0f06931146666a526d478213783ba5a36c67bc4.pdf
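The 5-fold-CV Lasso tuning and the sparsity diagnostic described in this global rebuttal can be sketched on synthetic stand-in data; everything below (sizes, noise level, feature construction) is hypothetical and is not the paper's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)

# Hypothetical stand-in data: n observed ratings, d Fourier-type features,
# with an s-sparse true coefficient vector.
n, d, s = 400, 60, 5
X = rng.standard_normal((n, d))
beta = np.zeros(d)
beta[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
y = X @ beta + 0.1 * rng.standard_normal(n)

# Tune the regularization strength lambda via 5-fold CV, as in the rebuttal.
model = LassoCV(cv=5).fit(X, y)

# Sparsity diagnostic: fraction of non-zero estimated coefficients.
frac_nonzero = np.mean(model.coef_ != 0)
print(f"non-zero coefficient fraction: {frac_nonzero:.2f}")
```

On sparse ground truth like this, the CV-tuned Lasso recovers a coefficient vector with only a small fraction of non-zero entries, mirroring the 8.7% figure reported for the donor units.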
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Object-Centric Slot Diffusion
Accept (spotlight)
Summary: This paper proposes Latent Slot Diffusion (LSD), an object-centric learning framework combining the Slot Attention module and a Latent Diffusion Model (LDM) based slot decoder. The model is trained in an auto-encoding manner, where the loss is the slot-conditioned denoising loss in the LDM-based decoder. Extensive experiments demonstrate the effectiveness of LSD in 1) unsupervised scene decomposition, 2) object property prediction, and 3) image generation/editing, surpassing the previous SOTA SLATE, which uses an auto-regressive Transformer-based slot decoder. Strengths: - The paper is well-written and easy to follow. The figures are of high quality - The use of LDM as the slot decoder is very intuitive and reasonable, considering the recent trends in object-centric models (CNN decoder --> dVAE+Transformer decoder --> KL-VAE+LDM decoder). The results verify this design choice - The experimental results cover lots of different tasks, and show very strong performance. The improvement in generation capacity is very impressive compared to baselines Weaknesses: I don't see any big issue with the paper. One might consider LSD as "simply replacing the slot decoder with an LDM", but I think this modification is reasonable, and works pretty well across datasets and tasks. SLATE/STEVE also just replace the CNN decoder in Slot Attention with Transformer decoders, and they have been widely used due to their strong unsupervised segmentation performance. I believe LSD's improvement in generation quality will also facilitate future research in object-centric generative models. That being said, I think the paper should have more discussion of its limitations. See the `Questions` and `Limitations` sections below. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Regarding the pre-trained VAE model: - I appreciate the authors' efforts in having SLATE+ with the same VAE for a fair comparison. 
Have you tried training a VAE from scratch, and comparing this LSD variant with SLATE (which also trains dVAE from scratch)? 2. Regarding the LSD + Stable Diffusion (SD) experiments in the Appendix (**not affecting paper decision**): - This is an interesting experiment, especially the results on the real-world COCO dataset. Do you have any numerical results to compare with the SOTA object-centric model DINOSAUR [1]? - Can the authors show some generation or reconstruction results? I am curious because I believe even LSD cannot generate realistic unconstrained real-world images (FFHQ images are constrained as they only capture human faces). Will using a pre-trained SD decoder help here? [1] Seitzer, Maximilian, et al. "Bridging the gap to real-world object-centric learning." ICLR. 2023. 3. Minor question (**not affecting paper decision**): a [concurrent work](https://arxiv.org/abs/2305.11281) you might want to cite, but I understand that paper comes out after the submission deadline. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors discuss limitations in Section 6. However, there are more limitations of this work (and the entire object-centric learning field): - LSD is still unable to generate realistic unconstrained real-world images - LSD suffers from the part-whole ambiguity issue, as can be seen from the COCO examples I'd like to see the authors discuss these points in the paper. Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes Flag For Ethics Review: ['No ethics review needed.']
Rebuttal 1: Rebuttal: ### We genuinely appreciate your positive recommendations and insightful feedback! > I appreciate the authors' efforts in having SLATE+ with the same VAE for a fair comparison. Have you tried training a VAE from scratch, and comparing this LSD variant with SLATE (which also trains dVAE from scratch)? Thank you for this insightful suggestion! We agree that this experiment would provide valuable insights, and we plan to incorporate it into our final version. In our study, we chose to use the pretrained VAE instead of training a VAE from scratch, based on the following considerations: - The use of pre-trained VAEs is common practice when training diffusion models, primarily due to their training stability and demonstrated effectiveness. - Our experiments indicate that with a pretrained VAE, SLATE+ exhibits superior performance when compared to SLATE, which uses a VAE trained from scratch. - Furthermore, a single pretrained VAE demonstrated satisfactory performance across all four datasets we utilized. This suggests that the use of a pretrained VAE can contribute to a more efficient training process. Given these reasons, we opt for using a pre-trained VAE in our approach. > Regarding the LSD + Stable Diffusion (SD) experiments in the Appendix (not affecting paper decision). > Do you have any numerical results to compare with the SOTA object-centric model DINOSAUR [1] (on the COCO dataset)? Can the authors show some generation or reconstruction results? I am curious because I believe even LSD cannot generate realistic unconstrained real-world images (FFHQ images are constrained as they only capture human faces). Will using a pre-trained SD decoder help here? Thank you for the insightful suggestions. We agree that the recommended experiments would be significant additions to our work. In our tests, we do observe visual artifacts in decoded images when dealing with real-world scenes under the current (LSD+SD) model configuration. 
Such artifacts have constrained both the image reconstruction and generation capabilities of the model on the COCO dataset. However, we believe this is a very interesting direction worth dedicated effort, and we are diligently investigating this model design. If possible, we will include our findings including the quantitative results in our final revision. > Minor question (not affecting paper decision): a concurrent work (slotdiffusion) you might want to cite, but I understand that paper comes out after the submission deadline. Thank you for bringing this to our attention, we will include the discussion of the concurrent work in the revised paper. > The authors discuss limitations in Section 6. However, there are more limitations of this work (and the entire object-centric learning field): > > 1. LSD is still unable to generate realistic unconstrained real-world images > 2. LSD suffers from the part-whole ambiguity issue, as can be seen from the COCO examples Thank you for your valuable suggestion. We will be adding a dedicated limitation section in the revised version of our paper to provide a comprehensive explanation of these aspects. --- Rebuttal Comment 1.1: Title: Rebuttal Acknowledgment Comment: I thank the authors for the response. I maintain my original rating of Accept after reading reviews from other reviewers.
Summary: Previous work has shown that transformer-based image generative models can be trained for object-centric learning and can handle complex scenes. That is, given unstructured observations, transformer-based models learn to find latent compositional structures to bind relevant features. This work explores the feasibility and potential of integrating diffusion models into object-centric learning. A major contribution of this work is a novel Latent Slot Diffusion (LSD) model, which replaces conventional slot decoders with a conditional latent diffusion model, conditioned on object-centric slots provided by Slot Attention. The authors have shown the effectiveness of the proposed model by evaluating it on several object-centric tasks, including unsupervised object segmentation, downstream property prediction, compositional generation, and image editing. Strengths: - **Scope and relevance**: Considering the growing interest in diffusion models, this paper is exceptionally timely as it broadens their application towards object-centric learning. - **Significance of contributions**: This paper presents a novel diffusion model for object-centric learning without the requirement of supervised annotations. It properly answers the raised question about how diffusion-based generative modeling can benefit object-centric learning. - **Experimental results**: The experiments in this paper are sufficient to prove the effectiveness of the proposed model. - **Clarity**: The main body of the paper is written very well. Weaknesses: - **Limited technical contribution**: The proposed slot-conditioned diffusion is a trivial extension of existing text-to-image generation in Latent Diffusion Models (LDM) [59], which replaces text inputs with latent slots. - **Unclear model details**: To my understanding, the whole model is trained with an image generation objective. How to apply such a generative model to unsupervised object segmentation is not clear. 
- **Unclear training details**: If I am not mistaken, the object-centric encoder is trained independently rather than jointly with the latent slot diffusion decoder. I would suggest the authors clarify this. Thus, control experiments might be needed to study the effect of training strategies. - **Some implementation details are missing**: Without the source code, some important hyperparameters are not presented to justify the reproducibility of this work, e.g., the number of clusters $K$ for $k$-means, the noise schedule $\alpha$, and the number of steps $T$. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In addition to the above weaknesses, here are two more questions: - In line 231, given that the object segmentation masks are derived from the attention masks of Slot Attention, why does using diffusion-based generative modeling help unsupervised object segmentation? - The LSD model has been evaluated across multiple tasks and outperforms existing state-of-the-art methods. However, it raises the question: to what extent does the enhanced performance truly stem from the diffusion-based generative modeling rather than the improved image encoder? It's an intriguing point to ponder. Overall, this paper is a good effort. I will raise my rating if the authors can address my concerns. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not adequately address the limitations of this work. A significant constraint of this study is the requirement for two distinct visual encoders: a pre-trained image auto-encoder and an object-centric encoder. This dual requirement results in a relatively heavy model for inference. 
This leads to the question: Is there any potential to share certain components between these two encoders to enhance efficiency? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### We sincerely appreciate your constructive recommendations and valuable insights! > If I am not mistaken, the object-centric encoder is trained independently rather than jointly with the latent slot diffusion decoder. I would suggest the authors clarify this. Thus, control experiments might be needed to study the effect of training strategies. Thank you for your valuable feedback. We would like to clarify that the object-centric encoder is **trained jointly** with the diffusion decoder using the denoising loss. It is not trained independently. > The proposed slot-conditioned diffusion is a trivial extension of existing text-to-image generation in LDM, which replaces text inputs with latent slots. We appreciate your feedback and the opportunity to clarify our contributions. While the proposed model might seem like a combination of known components, the impact of our work exceeds the sum of its parts. In our model, the object-centric encoder (that provides the slots) and the diffusion decoder are **trained jointly**. This joint training has two important consequences: 1. **From an Object-Centric Learning Perspective**: Before our work, it was not known whether using a diffusion decoder instead of transformer-based autoregressive decoders would lead to object discovery in the encoder. In this sense, it is remarkable that our model actually surpasses transformer-based autoregressive decoders—the current state-of-the-art in unsupervised scene decomposition. The importance of this finding is also echoed by reviewer QNuS. 2. **From a Diffusion Model Perspective**: Prior to our work, compositionality in diffusion models was achieved via text annotations. We show for the first time that compositionality can emerge in diffusion models without requiring text—solely via unsupervised training—allowing us to compositionally generate and edit scenes without text. 
This also points to the potential of harnessing vast amounts of unlabelled image data as a future avenue. > To my understanding, the whole model is trained with an image generation objective. How to apply such a generative model to unsupervised object segmentation is not clear. Thank you for this question. The model is trained with an image reconstruction objective; however, the learning signal is back-propagated through both the slot attention encoder and the decoder. The attention masks of slot attention emerging from this training process serve as the unsupervised object segmentation. Specifically, in the encoder, the slots attend to a grid of image features. The area each slot attends to is considered an object segment. This method for obtaining $\mathbf{A}$ is already detailed in Section 2.1. Please let us know what we might have overlooked; we're happy to provide additional details. > Some implementation details are missing: Without the source code, some important hyperparameters are not presented to justify the reproducibility of this work, e.g., the number of clusters $K$ for $k$-means, the noise schedule $\alpha$, and the number of steps $T$. We appreciate your feedback and attention to detail. While we have included the implementation details in the appendix of the paper, we acknowledge that there was an oversight regarding the number of $k$-means clusters. We will thoroughly review our implementation and documentation to ensure that all information is appropriately included. Additionally, we are committed to releasing our complete implementation upon acceptance. > In line 231, given that the object segmentation masks are derived from the attention masks of Slot Attention, why does using diffusion-based generative modeling help unsupervised object segmentation? Thank you for sharing this concern. As we clarified above, the attention masks are learned through joint training of the object encoder module and the diffusion decoder using only the denoising loss. 
In our study, we have found that by combining these two components, the model naturally develops object segmentation capabilities without needing a supervision signal. > The LSD model has been evaluated across multiple tasks and outperforms existing state-of-the-art methods. However, it raises the question: to what extent does the enhanced performance truly stem from the diffusion-based generative modeling rather than the improved image encoder? It's an intriguing point to ponder. To answer this question, we have provided the comparison between LSD and SLATE+ in our study. This comparison essentially contrasts the performance of a transformer decoder versus a diffusion decoder. We believe the notable performance gap between LSD and SLATE+ does suggest that the diffusion decoder plays a key role in achieving the superior results. > The authors did not adequately address the limitations of this work. Thank you for your insightful suggestion. We will add a dedicated Limitations section in the revised manuscript to explain the limitations of our work. > A significant constraint of this study is the requirement for two distinct visual encoders: a pre-trained image auto-encoder and an object-centric encoder. This dual requirement results in a relatively heavy model for inference. This leads to the question: Is there any potential to share certain components between these two encoders to enhance efficiency? Thank you for the insightful comment! One potential way to enhance efficiency is to share the encoder of the VAE with the object-centric encoder. In fact, as also discussed in our response to reviewer uRVS, we investigated this specific configuration of LSD during our early experiments. However, our initial tests indicated suboptimal object segmentation and, as a result, we opted to keep the two components separate. We will include a discussion on this question in the revised version of the paper. 
Additionally, it is noteworthy that during inference, if the downstream tasks only require object segmentation or object representations, then only the object-centric encoder is required. --- Rebuttal Comment 1.1: Title: Rebuttal Acknowledgment Comment: I thank the reviewers for their detailed clarification, which has released my concerns about their work. Therefore, I raise my rating to accept.
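The mechanism described in this rebuttal thread (each feature location is assigned to the slot that attends to it most strongly, and the resulting map is the unsupervised segmentation) can be sketched as a toy NumPy example; all shapes are hypothetical, and random attention weights stand in for a trained Slot Attention encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: K slots attending over an H x W grid of image features.
K, H, W = 4, 8, 8

# Slot-attention weights: a distribution over slots at each feature location
# (softmax across the slot axis, as in Slot Attention).
logits = rng.standard_normal((K, H * W))
attn = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

# Assign each location to its most-attending slot; the resulting map is the
# unsupervised segmentation read off the encoder, no extra supervision needed.
segmentation = attn.argmax(axis=0).reshape(H, W)

assert segmentation.shape == (H, W)
assert segmentation.max() < K
```

In the actual model these attention maps emerge from joint training with the denoising loss; the sketch only shows how a segment map is read off the weights.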
Summary: The paper studies the use of diffusion models in object-centric learning. The authors introduce the concept of latent slot diffusion (LSD), which can replace slot decoders conditioned on object slots. The model can work in an unsupervised compositional mode without requiring annotations such as text. Their experiments show that LSD performs better in complex scenes compared with other methods that do unsupervised compositional generation. Strengths: The proposed method LSD can be viewed as a model substituting conventional slot decoders with a conditional latent diffusion model, where the conditioning is done via the object slot attentions. It can also be viewed as unsupervised conditional compositional diffusion-based generation. The ablation experiments help explain the crux of the method. Weaknesses: Object segmentation using LSD is yet to be perfected, leading to suboptimal downstream applications. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What are potential ways to improve segmentation using LSD? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As a generative model, the method needs to consider the privacy and impacts of image manipulation. Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### We would like to express our appreciation for your insightful feedback! > Object segmentation using LSD is yet to be perfected, leading to suboptimal downstream applications. Thank you for your observation. While the object segmentation in LSD may have room for improvement, it's crucial to underscore that LSD is functioning under the **unsupervised learning setting**. The problem of unsupervised entity-level segmentation is a well-recognized challenge in both object-centric learning and computer vision, with no perfect solution yet. Nevertheless, our work shows significant progress compared to the previous state-of-the-art, which is demonstrated by the effectiveness of LSD in comparison with baseline models under this demanding condition. > What are potential ways to improve segmentation using LSD? Thank you for your question. We would like to share some potential ways to improve the segmentation performance of LSD as follows: - **Using post-processing techniques to improve the resolution and boundary prediction of the mask.** - As also discussed in our response to reviewer uRVS, improving the quality of segmentation masks might be achieved with post-processing techniques like bilateral solvers and conditional random fields. These methods utilize low-level features, such as RGB colors and pixel positions, to fine-tune the boundaries of the segmentation masks. Integrating this refinement process into our model can potentially lead to improved segmentation outcomes. - **Applying LSD on pre-trained diffusion models.** - As mentioned in our paper's appendix, LSD's segmentation can further benefit from pre-trained diffusion models. Such models may alleviate the slots' burden to capture the perceptual details of the images, allowing them to concentrate more on object discovery and produce more accurate segmentation masks. We will provide further investigation of this direction in our revised appendix. 
- **Adding supervised signals from large-scale segmentation datasets.** - While LSD operates within unsupervised learning contexts in this study, one might consider shifting to a semi-supervised learning approach when object segmentation is a primary concern for the downstream application. Integrating supervised segmentation signals into the object-centric encoder for specific data samples has the potential to greatly improve segmentation accuracy. > Ethics Concerns: As a generative model, the method needs to consider the privacy and impacts of image manipulation. We acknowledge the importance of privacy issues and the impacts of the ability to perform image manipulation. We have discussed these issues and other potential social implications of LSD in the "Broader Impact" section of the appendix, and we will address additional ethical considerations in our revised version. --- Rebuttal Comment 1.1: Title: Post-rebuttal Comment: Thank you for posting your rebuttal. I'll keep my original rating "accept".
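To make the mask-extraction step in this discussion concrete: in slot-based models, per-pixel segmentation masks are typically read off the slot-attention weights by assigning each location to the slot that attends to it most strongly. The following is an illustrative NumPy sketch of that convention, not the authors' implementation; the function name and array shapes are our own assumptions.

```python
import numpy as np

def masks_from_attention(attn, height, width):
    """Turn slot-attention weights into a per-pixel segmentation map.

    attn: (num_slots, height * width) array; each row is one slot's
    (non-negative) attention over the flattened image locations.
    Returns an integer map of shape (height, width) assigning every
    location to the slot that attends to it most strongly.
    """
    assert attn.ndim == 2 and attn.shape[1] == height * width
    winner = np.argmax(attn, axis=0)       # winning slot per location
    return winner.reshape(height, width)

# Toy example: 2 slots over a 2x2 image; slot 0 claims the left
# column, slot 1 the right column.
attn = np.array([[0.9, 0.1, 0.8, 0.2],
                 [0.1, 0.9, 0.2, 0.8]])
seg = masks_from_attention(attn, 2, 2)     # -> [[0, 1], [0, 1]]
```

In practice the attention map is lower-resolution than the image and is upsampled before evaluation, which is one source of the boundary imprecision that the post-processing techniques mentioned above aim to fix.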
Summary: This work introduces a methodology for using diffusion-based models to obtain object-centric representations. This is done using slot representations, obtained from the Slot Attention module applied to the original image, as a conditioning variable for a latent diffusion model. The diffusion model and the Slot Attention module are trained end-to-end. The proposed model is called Latent Slot Diffusion (LSD) and is evaluated on unsupervised object segmentation, downstream property prediction, compositional image generation, and image editing. The datasets used for evaluation are ClevrTex, MOVi-E, MOVi-C and, for the first time in the field, FFHQ. The performance of LSD is evaluated quantitatively on the first 4 tasks and is compared against SLATE and SLATE+. The proposed model outperforms both baselines in the tasks according to most metrics. Strengths: **Originality**. The paper proposes a novel way to combine two existing models: slot attention and latent diffusion. The presentation clearly shows which elements are novel. **Quality**. The method shows a successful way to leverage diffusion in object-centric learning. Especially the results on property prediction show that it has concrete benefits for the representation learning itself, which might be useful for several different tasks, not only the ones shown in the paper. **Clarity**. The submission neatly shows all the experiments that were carried out, and the description of the underlying method is clear. **Significance**. The work is a necessary exploration of leveraging the generative power of the diffusion approach in the object-centric setting, highlighting the complexities related to having too strong a decoder and providing clear examples of its strengths, especially by using a very complex dataset (FFHQ). Weaknesses: **Quality**. The lack of error bars (i.e. standard deviation) in the analysis makes the quantitative analysis weaker.
Additionally, it would be interesting to further explore the problems with FG-ARI, as it is currently the standard metric used in the field; although I agree with the statements, they are based only on experience and intuition rather than a proper scientific analysis, which has not been carried out yet. The lack of comparison with traditional models such as the improved Slot Attention architecture proposed in [1] makes the performance of the model harder to evaluate. Are the good results obtained primarily due to the improved architecture (e.g. larger encoders, better decoder), or is there something fundamentally good about using diffusion (e.g. the iterative improvement) that results in better object representations? **Clarity** The description of the method used for unsupervised object segmentation is lacking. There is only a reference to the use of the attention masks from slot attention; it would be much better to refer directly to the nice mathematical notation used earlier in the text. *References* 1) Biza, Ondrej, et al. "Invariant Slot Attention: Object Discovery with Slot-Centric Reference Frames." arXiv preprint arXiv:2302.04973 (2023). Technical Quality: 3 good Clarity: 3 good Questions for Authors: As object segmentation is performed using the masks from the slot attention module, it is clear that it is not possible to obtain masks at the full resolution of the images in many cases. Could this be an area of improvement for future work, or is it possible to already try using some unsupervised super-resolution technique to get higher-quality masks? Could it be worth considering object segmentation as a visualization of what the model is doing rather than as a separate task? Do you have any insights into how the model would perform if slot attention were applied to the latent representation of the latent diffusion model instead of the original image? This would help clarify the choices made during the development of the method.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have sufficiently addressed the limitations of the proposed model, as well as the broader impact of their approach. However, this can be improved further by being more explicit about the direct effects of ethnic bias, ageism, or other forms of misrepresentation that the model could lead to, which is standard practice for modern diffusion models, considering the extremely high quality of the generated images. Direct manipulation of certain characteristics enables these effects in a much easier way. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### We sincerely thank you for your positive recommendation and thoughtful comments! > ... lack of error bars We agree that incorporating error bars can better evaluate the model’s robustness, and we intend to provide the results here. However, due to constraints in computing resources, we only managed to complete the MOVi-E experiment within the rebuttal window. The table below shows MOVi-E results with 3 seeds per model. The results demonstrate LSD’s consistently high performance, outperforming baseline models across all metrics. We hope the new analysis provides clear evidence of LSD’s strength and properly addresses your concern. We will include the rest of the results in the revised version of the paper. |Segmentation|SLATE|SLATE$^+$|LSD (Ours)| |:-:|:-:|:-:|:-:| |mBO ($\uparrow$)|30.17 $\pm$ 2.09|22.17 $\pm$ 0.47|**38.96 $\pm$ 0.58**| |mIoU ($\uparrow$)|28.59 $\pm$ 2.03|20.63 $\pm$ 0.43|**37.64 $\pm$ 0.55**| |FG-ARI ($\uparrow$)| 46.06 $\pm$ 4.07|45.25 $\pm$ 0.94|**52.17 $\pm$ 1.09**| |Representation|SLATE|SLATE$^+$|LSD (Ours)| |:-:|:-:|:-:|:-:| |Position ($\downarrow$)|2.09 $\pm$ 0.15| 2.15 $\pm$ 0.09|**1.85 $\pm$ 0.06**| |3D B-Box ($\downarrow$)|3.36 $\pm$ 0.14| 3.37 $\pm$ 0.33|**2.94 $\pm$ 0.00**| |Category ($\uparrow$)|38.93 $\pm$ 0.20| 38.00 $\pm$ 0.45|**42.96 $\pm$ 0.26**| > ... would be interesting to further explore the problems with FG-ARI We appreciate that you agree with our observations. Previous studies [1,2] have also noted similar limitations of FG-ARI and recommended using additional metrics, e.g., mIoU. Therefore, in our experiments, we report 3 metrics: mIoU, mBO, and FG-ARI; our model shows significant benefits in all of them. Since these are standard metrics in the line of object-centric learning, exploring their limitations is perhaps beyond the scope of this work. That being said, it is an interesting and important topic for future research. > ...
comparison with traditional models such as Invariant Slot Attention (ISA) Although we acknowledge that including a comparison to ISA would provide additional insight into our results, we do not consider it critical. Firstly, our study and the ISA paper address different aspects: our work focuses on improving representation learning and image generation quality, while ISA aims to obtain object representations that are invariant to position and scale. An interesting direction would be to combine our work with ISA to introduce invariance into the object representation. However, we leave this exploration to future studies. > ... importance of improved architecture vs diffusion process Thank you for your insightful question. We believe that the improvement in performance stems from a combination of both. The diffusion architecture (e.g., the UNet) utilized in LSD has been meticulously explored in the diffusion models literature, and the iterative denoising technique has also been adopted to achieve high image generation quality, as evident from the progression from DALL-E [3] to DALL-E 2 [4]. On the other hand, as highlighted in the SLATE paper and reaffirmed in our study, stronger generation capacity can significantly improve representation learning capability, particularly in scenarios involving complex scenes. Therefore, we hold the view that the combination of both the architecture and the iterative process contributes to the observed improvement. > ... mathematical notation for unsupervised object segmentation We will clarify this in the revised version (e.g., in the Appendix) using the notation in Section 2.1. > ... improvement to get higher-quality masks? We agree that it is a promising direction for future work. One reason for the reduced resolution is to reduce memory and computational costs. There are existing potential solutions to this challenge, such as using bilateral solvers or conditional random fields for post-processing.
These methods incorporate low-level features like RGB colors and pixel positions to refine the boundaries of the scaled segmentation masks, and they have been effectively utilized in other works [5]. We believe they could potentially be integrated into LSD for even higher-quality masks. We will include this discussion in our revised manuscript. > ... considering object segmentation as a visualization of what the model is doing? We appreciate this perspective. The segmentation masks are derived from the cross-attention between the slot representations and image patches, which does serve as a visualization of the model's spatial disentangling process during the inference of slot representations. > ... how the model would perform if slot attention is applied to the latent representation of the latent diffusion model? Yes, this particular configuration of LSD was explored in our preliminary stages. Initial experiments indicated that object segmentation was not optimal when sharing the VAE encoder's latents to learn the slots. Additionally, we note that when the VAE encoder is shared with the object encoder, the segmentation resolution is constrained by the VAE latent's resolution, which in the case of LSD would be ⅛ of the original image size. We acknowledge the importance of clarifying this design choice and will include a discussion of this investigation in the revised version. > ... limitations and broader impact Thank you for your valuable suggestion. We will add a dedicated limitations section in the revised version of our paper to provide a comprehensive explanation of these aspects. [1] Monnier, et al. "Unsupervised layered image decomposition into object prototypes." ICCV. 2021. [2] Zimmermann et al. "Sensitivity of Slot-Based Object-Centric Models to their Number of Slots." arXiv. 2023. [3] Ramesh, et al. "Zero-shot text-to-image generation." ICML. 2021. [4] Ramesh, et al. "Hierarchical text-conditional image generation with clip latents." arXiv. 2022.
[5] Wang, et al. "Cut and learn for unsupervised object detection and instance segmentation." CVPR. 2023.
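For concreteness, the metrics compared in this rebuttal can be written down directly: FG-ARI is the adjusted Rand index computed only over foreground pixels, and mBO-style scores average, over ground-truth objects, the best IoU achieved by any predicted mask. The sketch below is an illustrative NumPy implementation under our own conventions (label 0 marks background; masks are flat label maps), not the evaluation code used in the paper.

```python
import numpy as np

def _comb2(x):
    # number of unordered pairs that can be drawn from x items
    return x * (x - 1) / 2.0

def adjusted_rand_index(true, pred):
    """ARI between two flat label maps via the contingency table."""
    true, pred = np.asarray(true), np.asarray(pred)
    _, t = np.unique(true, return_inverse=True)
    _, p = np.unique(pred, return_inverse=True)
    cont = np.zeros((t.max() + 1, p.max() + 1))
    np.add.at(cont, (t, p), 1.0)
    sum_ij = _comb2(cont).sum()
    a = _comb2(cont.sum(axis=1)).sum()   # pairs within true clusters
    b = _comb2(cont.sum(axis=0)).sum()   # pairs within pred clusters
    expected = a * b / _comb2(true.size)
    max_index = 0.5 * (a + b)
    if max_index == expected:            # degenerate single-cluster case
        return 1.0
    return (sum_ij - expected) / (max_index - expected)

def fg_ari(true, pred, bg_label=0):
    """ARI restricted to foreground pixels of the ground truth."""
    fg = np.asarray(true) != bg_label
    return adjusted_rand_index(np.asarray(true)[fg], np.asarray(pred)[fg])

def mean_best_overlap(true, pred, bg_label=0):
    """Mean over GT objects of the best IoU with any predicted mask."""
    true, pred = np.asarray(true), np.asarray(pred)
    scores = []
    for t in np.unique(true):
        if t == bg_label:
            continue
        tm = true == t
        best = 0.0
        for q in np.unique(pred):
            pm = pred == q
            inter = np.logical_and(tm, pm).sum()
            union = np.logical_or(tm, pm).sum()
            best = max(best, inter / union)
        scores.append(best)
    return float(np.mean(scores))
```

The two metrics penalize different failure modes (e.g. FG-ARI ignores how the background is segmented entirely), which is one reason several metrics are reported together.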
Rebuttal 1: Rebuttal: ## **General Response** We thank all reviewers for their insightful and positive feedback! We are encouraged that they find our work **novel** (uRVS, ZjQw), **timely** (ZjQw), and **both intuitive and nontrivial** (QNuS). They also highlighted its **practical implications** (uRVS) and **potential to facilitate future research** (QNuS). We are pleased that they recognized our empirical evaluation as **thoroughly conducted** (uRVS, QNuS, ZjQw), **demonstrating impressive improvement** (QNuS), and our paper **well-written and easy to follow** (uRVS, ZjQw, QNuS). We would like to extend our gratitude to the ethics reviewers for their insights on the ethical dimensions of our work. We acknowledge the concerns about the image manipulation ability and are committed to addressing them explicitly in our revised manuscript. We will respond to each reviewer’s concerns and questions separately below.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Slot-guided Volumetric Object Radiance Fields
Accept (poster)
Summary: This paper presents slot-guided volumetric object radiance fields (sVORF), a learning method for 3D object-centric representation. They propose three key techniques: scene decomposition based on global pooling and slot-based representations, a hypernetwork to generate per-object radiance fields, and a scene composition module. They have tested the method on various tasks on synthetic datasets, such as CLEVR and Room, showing superior performance to prior art. Finally, they provide simple experimental results on real images (LLFF), highlighting the broader effectiveness of the method. Strengths: 1. The proposed method improves on prior methods by a large margin, especially on the Room-Chair and Room-Diverse datasets (segmentation task). 2. The newly suggested modules seem very efficient, and I think the global-pooling features combined with the slot-guided method are a reasonable and better approach than the previous slot-attention method, at least in these particular tasks. Weaknesses: 1. From my understanding, three modules were newly adopted or introduced to object-centric representation learning tasks: 1. ‘global pooling’ with a slot-guided scene decomposition module, 2. a hypernetwork, and 3. a scene composition module. However, throughout the experiments, I do not see which components are actually effective compared to prior methods. For example, they could have introduced the ‘hypernetwork’ into uORF, or they could have tested ‘slot attention’ in the proposed sVORF to see the effectiveness of the 'hypernetwork' or the ‘global-pooling’ slot-guided method. Hence, it is hard for readers to know whether every component actually contributes to the final performance. 2. It seems connectivity regularization (CR) is very important; I would like to see how much CR can improve the performance of previous methods, such as uORF and others. 3.
Similarly, I believe we can easily incorporate spatial broadcast (SB) or slot mixers (SM) into the proposed sVORF, which could clearly show the effectiveness of the proposed scene composition module. 4. Although the newly introduced modules are well orchestrated into a single model and improve performance, I do not see significant technical novelty. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. In Table 2, why did you have to use different settings and re-evaluate the previous methods? Other questions are embedded in the weaknesses section. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: I do not see any potential negative societal impact from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and helpful suggestions! $\textbf{Q1:}$ Need ablation studies on main components. $\textbf{A1:}$ We conduct some experiments to compare our proposed components with previous methods. $\textbf{Firstly}$, we show the effectiveness of the transformer-based module. We substitute the transformer with slot attention and observe that slot attention fails to achieve the decomposition task in our model, as shown in Fig 5 in the rebuttal PDF. Based on this comparison, we can conclude that our transformer-based module achieves better scene decomposition than slot attention in our training setting. $\textbf{Secondly}$, we validate our proposed slot-guided scene composition module. We replace the hypernetwork with the conditional NeRF used in uORF. As shown in the table in our global response, using a hypernetwork performs better than using conditional NeRF. We speculate that using a hypernetwork can provide a stronger 3D geometric bias than directly using the slots to condition the radiance fields per object. Besides, we demonstrate the efficacy of using slots as guidance to compose individual objects and the background. As shown in the Composing Mechanism section, this scheme largely outperforms the density-weighted mean used in uORF on the FG-ARI metric. This comparison shows that slot-guided composition can make slot features 3D-aware, which is useful for scene decomposition. $\textbf{Q2:}$ How much can CR improve the performance of previous methods, such as uORF and others? $\textbf{A2:}$ The CR module is not the key to sVORF's decomposition. It serves the purpose of mitigating the presence of semi-transparent clouds when the number of slots significantly exceeds the total number of objects in a scene, as discussed in section 3.4 of the main paper. We also give an analysis of how this module affects the overall performance of the model in section 4.4. Based on the analysis, it is not one of the key components.
Considering that semi-transparent clouds occur infrequently in uORF, we do not think it necessary to provide experimental results for the CR module on uORF. $\textbf{Q3:}$ Incorporate Spatial Broadcast (SB) or Slot Mixers (SM) into the proposed method sVORF. $\textbf{A3:}$ For the SB method, we present the ablation results in the Composing Mechanism section of the main paper. The results show that our composing mechanism outperforms the density-weighted mean combination mode (SB), especially for small objects that may otherwise be segmented into attachments of other objects. As for the SM method, we make some modifications to the SM decoder [1]. First, we use a 3D point $x$ instead of the target ray as the query to aggregate the weighted slot feature. Second, we transform the weighted slot feature into the corresponding radiance field. Third, based on the radiance field, we can obtain the density and color of the 3D point $x$. In the experiment, we find that the composition performance of the SM method is lower than that of our proposed composition method. Specifically, the SM method exhibits 3D-inconsistent segmentation results, as shown in Fig 5 in the rebuttal PDF. This shows that the introduction of 3D geometric bias is really important for scene decomposition. $\textbf{Q4:}$ Limited technical novelty. $\textbf{A4:}$ Unlike previous work such as uORF, our method makes two unique contributions. $\textbf{Firstly}$, we use a transformer-based module instead of the GRU module to extract object and background slots. This module is simple and easy to optimize without a GRU block. When we substitute the transformer with slot attention, we observe that slot attention fails to achieve the decomposition task in our model, as shown in Fig 5 in the rebuttal PDF. Based on this comparison, we can conclude that our transformer-based module achieves better scene decomposition than slot attention in our training setting.
$\textbf{Secondly}$, we propose a slot-guided scene composition module to recompose the slots into novel views. Compared to the conditional NeRF in uORF, this module uses a hypernetwork to transform a slot into its radiance field and utilizes explicit geometric bias to obtain density and color. This design performs better than using conditional NeRF, as shown in the table in our global response. We speculate that using a hypernetwork can provide a stronger 3D geometric bias than directly using the slots to condition the radiance fields per object. Besides, this module uses slots as guidance to compose individual objects and the background. This scheme can make slot features 3D-aware, which is useful for scene decomposition. As shown in the Composing Mechanism section, this scheme largely outperforms the density-weighted mean used in uORF on the FG-ARI metric. $\textbf{Q5:}$ In Table 2, why did you have to use different settings and re-evaluate the previous methods? $\textbf{A5:}$ On the CLEVR-3D dataset, ObSuRF and OSRT employ distinct training and test data divisions, as well as different test metrics. To ensure a fair comparison, we use the settings of ObSuRF as the default and re-evaluated OSRT accordingly. [1] Mehdi SM Sajjadi, Daniel Duckworth, Aravindh Mahendran, Sjoerd van Steenkiste, Filip Pavetic, Mario Lucic, Leonidas J Guibas, Klaus Greff, and Thomas Kipf. Object scene representation transformer. Advances in Neural Information Processing Systems, 35:9512–9524, 2022.
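To make the two pieces discussed in this rebuttal concrete (a hypernetwork that maps each slot to its own tiny radiance field, and a rule for composing the per-slot fields), here is a minimal NumPy sketch. All sizes, the fixed random slot-to-weight maps, and the density-weighted composition rule (the uORF-style baseline the rebuttal compares against, not the paper's slot-guided scheme) are our own illustrative assumptions, not the paper's architecture.

```python
import numpy as np

SLOT_DIM, HIDDEN = 8, 16          # slot size and tiny-NeRF width (assumed)
rng = np.random.default_rng(0)

class SlotHyperNeRF:
    """Hypernetwork: one slot vector -> weights of a tiny radiance field.

    The generated 2-layer MLP maps a 3D point to (density, rgb).
    Real hypernetworks are learned; here the slot->weight maps are
    fixed random matrices, just to show the data flow.
    """
    def __init__(self):
        self.to_w1 = 0.1 * rng.normal(size=(SLOT_DIM, 3 * HIDDEN))
        self.to_w2 = 0.1 * rng.normal(size=(SLOT_DIM, HIDDEN * 4))

    def field(self, slot):
        w1 = (slot @ self.to_w1).reshape(3, HIDDEN)
        w2 = (slot @ self.to_w2).reshape(HIDDEN, 4)
        def radiance(points):                        # points: (N, 3)
            h = np.tanh(points @ w1)
            out = h @ w2
            density = np.logaddexp(0.0, out[:, 0])   # softplus, >= 0
            rgb = 1.0 / (1.0 + np.exp(-out[:, 1:]))  # sigmoid, in [0, 1]
            return density, rgb
        return radiance

def compose(slots, points, hyper):
    """Density-weighted mean over per-slot fields (uORF-style baseline)."""
    per_slot = [hyper.field(s)(points) for s in slots]
    dens = np.stack([d for d, _ in per_slot])        # (K, N)
    cols = np.stack([c for _, c in per_slot])        # (K, N, 3)
    total = dens.sum(axis=0)
    weights = dens / np.maximum(total, 1e-8)         # per-slot mixing weights
    rgb = (weights[..., None] * cols).sum(axis=0)    # (N, 3)
    return total, rgb
```

The rebuttal's slot-guided composition replaces this density-weighted mean; per its ablation, that change is what makes the slot features 3D-aware and improves FG-ARI.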
Summary: This paper proposes a method, sVORF, for unsupervised object-centric 3D representation learning. Given a single image as input, it decomposes the scene into individual 3D objects and a 3D background, which thus supports object segmentation, novel view synthesis (named scene generation in the paper), and scene editing (including object moving and background repainting). The key idea of the paper is to combine object slots with object radiance fields in one framework, in which the object slots are responsible for decomposing the scene into objects, while the object radiance fields are used to re-compose the scene for volumetric rendering. The only cue for training the network is the rendering loss between the re-composed, volumetrically rendered image and the input single-view image. The overall performance of sVORF is plausible, and thorough experiments and ablation studies well validate the effectiveness of the proposed method. Strengths: +The task of reconstructing each object and the background in 3D given only a single RGB image is very challenging and of great importance for practical applications in robotics and HCI, etc. +The overall performance of the proposed method is strong when compared with existing approaches. +The idea of using the attention mechanism for scene decomposition and re-composition, as well as the use of a hypernetwork for object-level NeRF generation, is sound and practical. +The demonstration of segmentation on the real captured dataset LLFF is important to help readers qualify the performance of the method. +The paper is well written and easy to follow. Weaknesses: - The datasets used for validating the proposed method are quite simple, with obvious color or shape differences between objects and simple backgrounds, which is very convenient for segmentation and decomposition. In the LLFF dataset, the segmented foreground color is also very different from the background.
So maybe more complex scenes should be used to demonstrate the effectiveness of the proposed method. - The hypernetwork produces the MLP parameters of a NeRF. However, the MLP representation of NeRF is very ambiguous, so learning such a hypernetwork for NeRFs may not be very robust; please explain how to deal with this problem, especially for real-world scenes with complex objects in them. - In Fig. 5, only segmentation results are provided on the LLFF dataset; is it possible to also provide the NVS results? Why not? - The main paper should clarify the training strategies and key implementation details, like how to set K for each dataset. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Line 224, please provide the full name of FFN, since this is the first time FFN is used. - In Table 2(a), it seems ObSuRF works generally better on 3D segmentation; does this mainly benefit from the depth input? - Line 149, a->an Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and helpful suggestions! $\textbf{Q1:}$ Demonstrate the effectiveness of the proposed method on more complex scenarios. $\textbf{A1:}$ To validate our method on more challenging scenarios, we conduct a preliminary experiment on the MSN dataset. The MSN dataset comprises $\textbf{11,733 distinct shapes}$, with each scene populated by 2-4 objects sourced from the ShapeNetV2 3D model dataset. The results are shown below: | Model | Supervision | ARI $\uparrow$ | Fg-ARI $\uparrow$ | PSNR $\uparrow$| | ----------- | ----------- | ------------ | ----------- | ----------- | | ObSuRF | image+depth | $\bf{64.1}$ | 81.4 | 27.41 | | sVORF | image | 63.4 | $\bf{84.1}$ | $\bf{30.51}$ | Compared with ObSuRF, sVORF achieves significantly higher Fg-ARI and comparable ARI without using depth information, which demonstrates the model's ability to decompose more complex scenarios. Please refer to Fig 1 in the rebuttal PDF for visual results. $\textbf{Q2:}$ The robustness of the hypernetwork. $\textbf{A2:}$ To our knowledge, Shap$\cdot$E [1] also learns a hypernetwork to produce MLP parameters. Shap$\cdot$E can handle a large set of complex and diverse 3D assets, although it does not involve complex backgrounds. This is an indicator that a hypernetwork for NeRF may be sufficient for real-world scenes with complex objects in them. For the success of Shap$\cdot$E, we speculate that a large dataset may be the key factor. Thus, training with a large dataset may be an alternative way to improve the robustness of the hypernetwork. Certainly, we acknowledge the robustness problem of the hypernetwork; it is future work to investigate and solve this problem. $\textbf{Q3:}$ Provide the NVS results on the LLFF dataset. $\textbf{A3:}$ Due to limited page space, we provide the novel view synthesis results for LLFF in Appendix A.2. These results showcase the multi-view consistency of sVORF on the LLFF dataset.
If possible, we will include these results in the main paper. $\textbf{Q4:}$ Clarify the training strategies and key implementation details, like how to set K for each dataset. $\textbf{A4:}$ The implementation details are provided in the Appendix. Furthermore, we employ additional training strategies as outlined below. We utilize the Adam optimizer with a learning rate of 0.0001, $\beta_1$ = 0.9, and $\beta_2$ = 0.999. Additionally, we implement learning rate warm-up for the initial 1,000 iterations. The value of K (set based on the number of objects in the scene) was customized separately as follows: K = 8, 7, 5, 5, 2, and 5 for the CLEVR-567, CLEVR-3D, Room-Chair, Room-Diverse, LLFF, and MSN datasets, respectively. For the CLEVR-567 and Room-Chair datasets, sVORF is trained with a batch size of 16 for approximately 7 hours on 8 Nvidia RTX 2080 Ti GPUs. For the larger CLEVR-3D dataset, sVORF takes approximately 2 days using a batch size of 16 on 8 Nvidia V100 GPUs. $\textbf{Q5:}$ Provide the full name of FFN. $\textbf{A5:}$ Thank you for pointing this out. The full name of FFN is feed-forward network. $\textbf{Q6:}$ Influence of depth information on ObSuRF. $\textbf{A6:}$ Regarding image generation, ObSuRF points out that incorporating additional depth supervision can significantly reduce reconstruction errors. As for decomposition, although ObSuRF does not provide ablation experiments on the depth map, it is obvious that depth information plays a crucial role in separating the foreground and background, particularly for shadows and foreground objects. $\textbf{Q7:}$ Typo in L149. $\textbf{A7:}$ Thank you for catching that typo; we will fix it in the final version. [1] Jun, Heewoo, and Alex Nichol. "Shap-e: Generating conditional 3d implicit functions." arXiv preprint arXiv:2305.02463 (2023). --- Rebuttal Comment 1.1: Comment: Thanks for your reminder. I am overall satisfied with the author's response.
The effectiveness was evaluated using a more complex dataset. I would suggest discussing the advantages of using a hypernetwork more insightfully, rather than simply following Shap$\cdot$E. Moreover, the philosophy for setting K should be clarified in the main paper to avoid confusion.
Summary: This paper presents a method called sVORF for learning 3D object-centric representations. A transformer decomposes a single input image into slots, which are then each mapped to volumetric radiance fields by a hypernetwork. These object radiance fields are then composed into a 3D scene. The effectiveness of sVORF is demonstrated on a set of simple synthetic datasets and two real-world scenes. It achieves SOTA or at least competitive results on the CLEVR-567, CLEVR-3D, Room-Chair, and Room-Diverse datasets, compared to Slot Attention, uORF and COLF. Strengths: * The problem of unsupervised 3D object decomposition is quite challenging. The fact that sVORF performs so well with only a single input image and no depth or segmentation supervision is remarkable. * The writing is clear and easy to follow. The figures look nice, are helpful, and are easy to understand. * The individual parts of the method are well motivated, and their individual contributions are established through systematic ablations. Weaknesses: * The paper stresses the low computational cost and memory footprint of sVORF but fails to report any data on them in the main paper. Memory consumption is mentioned in the appendix, alongside training time per epoch, but information about the length of training is lacking, so the real computational cost remains unclear. * The studied scenes are visually simple (yes, even Room-Diverse, which is advertised as complex). The only exceptions are the two scenes from LLFF. It is encouraging that sVORF succeeds on these two scenes, but in this setting it appears to me that the model is effectively overfitting to the given scene, and there is no evidence of generalization to novel scenes in that setting. * The restriction to a single input image is rather limiting. The 3D structure of a general scene is highly underspecified from a single image, which should lead to large uncertainties (e.g. in occluded areas).
For simple scenes like CLEVR and Room-Chair this is not a problem, because none of the elements is ever fully occluded and they come from a small set of known objects. But I expect this to become a severe limitation when scaling to more complex scenes, especially since sVORF and its training objective are not designed to be probabilistic / generative, and its fidelity will thus likely suffer substantially from complex ambiguities. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - How many samples are used per scene during training? Appendix A mentions 64 rays with 64 coordinates in the coarse volume and 128 additional coordinates in the fine volume. What are the coarse and fine volumes? - For how long (number of updates / epochs) was sVORF trained? How does that compare to uORF, COLF, and OSRT? - The LLFF results use only two scenes. Does that mean the shown results are training images? Is sVORF trained on both scenes at once or as two separate models? In the latter setting, it seems to me that sVORF is essentially equivalent to a simple volumetric NeRF, since the model could in principle learn to produce the correct target views during inference without even looking at the input view. Is there any evidence of generalization in this setting? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: There is a brief discussion of limitations in the supplementary that touches on two important limitations of the method. It does not, however, discuss any concerns regarding generalization to more complex real-world scenes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and helpful suggestions! $\textbf{Q1:}$ Information about the length of training. How does that compare to uORF, COLF, OSRT? $\textbf{A1:}$ For the CLEVR-567 and Room-Chair datasets, we train sVORF for approximately 7 hours using 8 Nvidia RTX 2080 Ti GPUs with batch size 16. The uORF and COLF models are trained on an Nvidia V100 GPU for approximately 7 and 2 days, respectively, with a batch size of 1. For the larger CLEVR3D dataset, sVORF is trained for approximately 2 days using 8 Nvidia V100 GPUs with batch size 16, while OSRT is trained for approximately 1 day on 8 A100 GPUs with a batch size of 256. $\textbf{Q2:}$ The studied scenes are visually simple. $\textbf{A2:}$ To validate our method on more challenging scenarios, we conduct a preliminary experiment on the MSN dataset. The MSN dataset comprises $\textbf{11,733 distinct shapes}$, with each scene populated by 2-4 objects sourced from the ShapeNetV2 3D model dataset. The results are shown below: | Model | Supervision | ARI $\uparrow$ | Fg-ARI $\uparrow$ | PSNR $\uparrow$| | ----------- | ----------- | ------------ | ----------- | ----------- | | ObSuRF | image+depth | $\bf{64.1}$ | 81.4 | 27.41 | | sVORF | image | 63.4 | $\bf{84.1}$ | $\bf{30.51}$ | Compared with ObSuRF, sVORF achieves significantly higher Fg-ARI and comparable ARI without using depth information, which demonstrates the model's ability to decompose more complex scenarios. Please refer to Fig 1 in the rebuttal PDF for visual results. $\textbf{Q3:}$ The LLFF results use only two scenes. Does that mean the shown results are training images? On real-world datasets, there is no evidence that sVORF generalizes to new scenarios. $\textbf{A3:}$ We train sVORF on both LLFF scenes as two separate models, but the results presented do not include the training images. 
Following the setup in NeRF-SOS [1], we divide the data of each scene into training and testing sets, ensuring there is no overlap between these two sets. Our experiment on LLFF aims to demonstrate the capability of sVORF to segment objects in real scenarios. A simple volumetric NeRF cannot segment objects in this setting as sVORF does. Due to the limited availability of multi-view data from real scenes, it is challenging to show the generalization of our method in this setting. However, it is an interesting topic and we plan to investigate it in future research. $\textbf{Q4:}$ The restriction to a single input image is rather limiting. $\textbf{A4:}$ We acknowledge that the restriction to a single input image is rather limiting, particularly when dealing with more complex scenes. Incorporating multi-view images at inference may alleviate this problem. We consider this a potential avenue for future research. $\textbf{Q5:}$ For how long (number of updates / epochs) was sVORF trained? How does that compare to uORF, COLF, OSRT? $\textbf{A5:}$ Please refer to the comments in A1. $\textbf{Q6:}$ How many samples are used per scene during training? $\textbf{A6:}$ We follow the hierarchical sampling strategy in NeRF [2]. Specifically, we sample 64 points per ray through the coarse network and 64 + 64 = 128 points per ray through the fine network. The "coarse volume" and "fine volume" refer to the "coarse" and "fine" networks. [1] Zhiwen Fan, Peihao Wang, Yifan Jiang, Xinyu Gong, Dejia Xu, and Zhangyang Wang. NeRF-SOS: Any-view self-supervised object segmentation on complex scenes. arXiv preprint arXiv:2209.08776, 2022. [2] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021. 
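The hierarchical sampling strategy mentioned in A6 (a coarse pass of 64 samples, then extra samples concentrated where the coarse pass found density) can be sketched as follows. This is a minimal pure-Python inverse-CDF sampler for illustration only, not the authors' implementation; the bin edges and coarse weights below are made-up example values.

```python
import random

def hierarchical_sample(bin_edges, coarse_weights, n_fine, rng):
    """Draw n_fine depths along a ray by inverse-CDF sampling of the
    normalized coarse weights (NeRF-style hierarchical sampling)."""
    total = sum(coarse_weights)
    pdf = [w / total for w in coarse_weights]
    cdf, acc = [], 0.0
    for p in pdf:
        acc += p
        cdf.append(acc)
    samples = []
    for _ in range(n_fine):
        u = rng.random()
        # find the first bin whose CDF exceeds u, then place the
        # sample uniformly inside that bin
        for i, c in enumerate(cdf):
            if u <= c:
                lo, hi = bin_edges[i], bin_edges[i + 1]
                samples.append(lo + rng.random() * (hi - lo))
                break
        else:  # guard against floating-point round-off at the tail
            lo, hi = bin_edges[-2], bin_edges[-1]
            samples.append(lo + rng.random() * (hi - lo))
    return samples

# 4 coarse bins along a ray; most of the density mass sits in the second bin,
# so most fine samples should land between depths 1.0 and 2.0
edges = [0.0, 1.0, 2.0, 3.0, 4.0]
weights = [0.05, 0.8, 0.1, 0.05]
fine = hierarchical_sample(edges, weights, n_fine=8, rng=random.Random(0))
```

In the full method, the 64 fine samples drawn this way are evaluated together with the 64 coarse samples by the fine network, giving the 64 + 64 = 128 points per ray described above.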
--- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: First, I would like to thank the authors for writing such an extensive rebuttal with numerous additional experiments. I agree with the other reviewers, in that the paper is more of a recombination and improvement upon previous works than an innovation in and of itself. I am also still skeptical about the ability of sVORF to generalize to data with real-world complexity. However, I would like to point out that the problem of unsupervised scene decomposition is extremely challenging, and that the authors do demonstrate clear improvements above the baselines, especially in terms of compute. It is still not a strong result, but in my opinion the additional results and clarifications significantly improve the paper and push it into the region of a clear accept.
Summary: This paper introduces sVORF, a compositional method for representing scenes as a collection of slots that are parametrized as per-object radiance fields. The proposed model can be used to perform novel-view synthesis, as well as semantic segmentation in 3D. Moreover, the proposed compositional representation also enables performing simple editing operations in the scene. In more detail, given an input image, sVORF first extracts image features from the image and passes them to a transformer encoder-decoder module that extracts slots for the objects in the scene and the background. Subsequently, these slots are mapped to volumetric object radiance fields using a hypernetwork. Using the per-object radiance fields, the scene is rendered by simply compositing their outputs at a 3D location with the guidance of each slot. Unlike [8], which was the first work to explore discovering slots in 3D data, sVORF maps slots to per-object radiance fields using a hypernetwork. The authors evaluate the performance of their model on the CLEVR-567, CLEVR-3D, Room-Chair, Room-Diverse and LLFF datasets and compare with several compositional representations that represent slots as radiance fields [8, 35] or as light fields [9]. From their experimental results, it seems that the proposed model outperforms all baselines both on the novel view synthesis task as well as on the 3D segmentation task. Overall, I think the proposed idea for discovering slots in 3D is interesting and the authors show that their approach works for simple scenes with a few objects. In terms of novelty, the proposed pipeline is quite similar to [8], with the main difference being the use of the hypernetwork for mapping slots to per-object radiance fields. 
Looking at the quantitative results, it seems that the proposed model is better than the baselines on most tasks, thus I am leaning towards accepting the paper, but I am still a bit concerned by the fact that the proposed model and [8] are relatively similar and that the proposed model is only evaluated on extremely simple scenes. Strengths: 1. The authors conduct multiple experiments and compare their model with several baselines on multiple datasets. From the quantitative evaluation, we note that the proposed model outperforms all baselines on most tasks. I particularly liked the scene editing experiment in Sec. 4.3. 2. Although some things are not 100% clear, I think the paper is easy to read and relatively nicely written. I really liked Figure 1, which is a pictorial representation of the overall pipeline. Weaknesses: 1. One of the major weaknesses of this work is that the authors only demonstrate the performance of their model on very simple datasets that consist of few objects that in most cases belong to the same object class, e.g. chairs. Although I am aware that all baselines evaluate their models on similar setups, I think it would have been great if the authors could show that their model works on more challenging scenarios, with different objects and more diverse backgrounds. A rather simple indoor dataset that the authors could consider is the 3D-FRONT dataset. Another less exciting alternative would be to also show results on the City-Block dataset introduced in [9]. However, since this dataset also contains scenes with cars, I think that 3D-FRONT would be a more appropriate benchmark. 2. In Sec. 3.2 and in L116-117 the authors state that they adopt an efficient transformer module to infer the object and the background slots from the image features. Can the authors describe how this efficient transformer module compares to the classical slot attention module used in [8]? Is the efficient transformer module a standard transformer encoder/decoder? 
I think that to make things very clear, it would have been very useful to properly define (e.g. using a math expression) how the cross-attention is defined. In addition, can the authors also clarify why/how this module is efficient? 3. Although the authors provide several ablation studies, I believe that they should have ablated the impact of using a hypernetwork as opposed to directly using the slots for conditioning the per-object radiance fields. This is more similar to [8] and it could better explain why the proposed model outperforms [8], although they are quite similar. 4. I am a bit concerned regarding the limited technical contribution of this work. Unless I am missing something, I think the proposed paper is quite similar to [8]. That being said, the proposed method seems to be significantly better than [8]. This could be either due to the use of the hypernetwork or the use of the attention module for inferring the slots. I believe that it is very important to clarify what the differences are and also try to justify the improvement in performance through ablations, e.g. ablate the use of a hypernetwork for mapping slots to per-object radiance fields. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Unless I am missing something, I think that the proposed model assumes that the number of objects/slots is known, right? In other words, although we might not know what the objects are, we know how many objects/slots exist in the scene, right? This is never clearly mentioned in the text. I think the authors should clarify it. 2. I think that the per-object NeRFs should be locally defined, namely each NeRF is augmented with a local affine transformation defining its pose in 3D. Otherwise, it would not be possible to change the location of a specific object/slot to a new position. Is this really the case? 3. 
I think that from the provided information about the composing mechanism in L146-150 it is not 100% clear how the aggregation $D^c$ and the attention block $D^a$ are really implemented. One thing that is not very clear to me is why we need both. Can the authors please explain? 4. In L264-272, the authors present an ablation study that tries to investigate the impact of the Novel View Synthesis setup. To be honest, I am not 100% sure that I understand what the authors mean by "we modify our reconstructed target view to equal with the input view". Can the authors please clarify? 5. I am wondering what the quality of the 3D geometry of the proposed model is and how it compares to the baselines. Since the authors represent the scene using a compositional NeRF-based representation, I think it would be useful to also show some depth maps, at least in the supplementary. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors discuss the limitations of their work in their supplementary material but I was not able to find any discussion about the potential negative societal impact of their work. That being said, I think that for this paper this discussion is not 100% necessary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and helpful suggestions! $\textbf{Q1:}$ Experiments on more challenging scenarios. $\textbf{A1:}$ To validate our method on more challenging scenarios, we conduct a preliminary experiment on the MSN dataset. The MSN dataset comprises $\textbf{11,733 distinct shapes}$, with each scene populated by 2-4 objects sourced from the ShapeNetV2 3D model dataset. The results are shown below: | Model | Supervision | ARI $\uparrow$ | Fg-ARI $\uparrow$ | PSNR $\uparrow$| | ----------- | ----------- | ------------ | ----------- | ----------- | | ObSuRF | image+depth | $\bf{64.1}$ | 81.4 | 27.41 | | sVORF | image | 63.4 | $\bf{84.1}$ | $\bf{30.51}$ | Compared with ObSuRF, sVORF achieves significantly higher Fg-ARI and comparable ARI without using depth information, which demonstrates the model's ability to decompose more complex scenarios. Please refer to Fig 1 in the rebuttal PDF for visual results. $\textbf{Q2:}$ Details of the efficient transformer module. $\textbf{A2:}$ The efficient transformer module is a standard transformer decoder. Specifically, we first use each slot $\textbf{z}_i$ as a query and interact with the other object slots through a multi-headed self-attention operation. Then we employ a multi-headed cross-attention operation to attend to and aggregate features from the flattened image features $E(\textbf{I})$. Finally, we pass the resulting slot features into a feed-forward network (FFN) to get the final slots. As mentioned in the main paper, this transformer module is simpler and easier to train than the slot attention module, as it does not contain a Gated Recurrent Unit (GRU) block. Additionally, when we replace the transformer with slot attention, we observe that slot attention fails to achieve the decomposition task; see Fig 5 in the rebuttal PDF. Based on this comparison, we can conclude that our transformer-based module achieves better scene decomposition than slot attention in our training setting. 
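The slot-update step described in A2 (self-attention among the slots, cross-attention into the flattened image features, then an FFN) can be sketched with single-head attention. This is an illustrative NumPy sketch, not the authors' architecture: the dimensions, the random FFN weights, and the omission of layer normalization and multi-head splitting are all simplifications.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention, single head
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def slot_update(slots, img_feats, rng):
    d = slots.shape[1]
    # 1) self-attention: each slot interacts with the other slots
    slots = slots + attention(slots, slots, slots)
    # 2) cross-attention: slots query the flattened image features E(I)
    slots = slots + attention(slots, img_feats, img_feats)
    # 3) feed-forward network (one hidden layer; random weights for the sketch)
    w1 = rng.standard_normal((d, 2 * d))
    w2 = rng.standard_normal((2 * d, d))
    return slots + np.maximum(slots @ w1, 0.0) @ w2

rng = np.random.default_rng(0)
slots = rng.standard_normal((5, 8))       # 5 slots of dimension 8
img_feats = rng.standard_normal((64, 8))  # 64 flattened image features
out = slot_update(slots, img_feats, rng)
```

Note there is no GRU anywhere in this update, which is the structural difference from slot attention that the rebuttal highlights.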
$\textbf{Q3:}$ The impact of using a hypernetwork. $\textbf{A3:}$ To validate the effectiveness of the hypernetwork, we conduct an ablation study, which replaces the hypernetwork with directly using the slots for conditioning the radiance fields per object. As shown in the Table in our global response and Fig 5 in the rebuttal PDF, the model's performance significantly decreases. We speculate that using a hypernetwork can provide a stronger 3D geometric bias than directly using the slots for conditioning the radiance fields per object. This 3D-aware slot facilitates guiding the composition of all volumetric neural radiance fields. $\textbf{Q4:}$ The limited technical contribution. $\textbf{A4:}$ Unlike previous work like uORF, our method has two unique contributions. $\textbf{Firstly}$, we use a transformer-based module instead of the GRU module to extract object and background slots. This module is simple and easy to optimize without a GRU block. When we substitute the transformer with slot attention, we observe that slot attention fails to achieve the decomposition task in our model, as shown in Fig 5 in the rebuttal PDF. Based on this comparison, we can conclude that our transformer-based module achieves better scene decomposition than slot attention in our training setting. $\textbf{Secondly}$, we propose a slot-guided scene composition module to recompose the slots into novel views. Compared to the conditional NeRF in uORF, this module uses a hypernetwork to transform a slot into its radiance field and utilizes explicit geometric bias to obtain density and color. This design performs better than using a conditional NeRF, as shown in the Table in our global response. We speculate that using a hypernetwork can provide a stronger 3D geometric bias than directly using the slots for conditioning the radiance fields per object. Besides, this module uses slots as guidance to compose individual objects and the background. This scheme can make slot features 3D-aware, which is useful for scene decomposition. 
As shown in the Composing Mechanism section, this scheme largely outperforms the density-weighted mean used in uORF on the FG-ARI metric. $\textbf{Q5:}$ Is the number of slots known? $\textbf{A5:}$ We know the maximum number of objects/slots in the scene, and ensure that the number of slots set is equal to or exceeds this maximum value. $\textbf{Q6:}$ I think that each NeRF is augmented with a local affine transformation defining its pose in 3D. Otherwise, it would not be possible to change the location of a specific slot to a new position. $\textbf{A6:}$ Yes, as you mentioned, in order to relocate the slot, an affine transformation is applied to the 3D sample points before passing them to the corresponding object NeRF. $\textbf{Q7:}$ For the composition mechanism, why are both the aggregation and attention modules needed? $\textbf{A7:}$ The aggregation block performs a cross-attention operation, which aggregates the object representations $S$ with the 3D location $x$ as the query to obtain the corresponding feature $z$. The attention block computes the similarity between $S$ and $z$ after mapping them into the same space through a linear layer, thus obtaining the probability distribution of $x$ belonging to each slot. In the initial stages of our experiments, we observed that utilizing both modules simultaneously yielded superior results. We speculate that this improvement could be attributed to the increased difficulty of directly computing the similarity between the 3D points and the slot features when they are not spatially aligned using $D^a$. $\textbf{Q8:}$ Clarifying the meaning of "we modify our reconstructed target view to equal with the input view" in L264. $\textbf{A8:}$ It means that we turn sVORF into a 2D image auto-encoder. $\textbf{Q9:}$ The quality of the 3D geometry of the proposed model. $\textbf{A9:}$ As shown in Figure 3 of the rebuttal PDF, we illustrate the depth maps of sVORF on different datasets. 
The results show that our method can learn high-quality 3D geometry. --- Rebuttal Comment 1.1: Title: Rebuttal Acknowledgment Comment: I would like to thank the authors for taking the time to provide additional results on more complex scenes as well as discuss the key differences between the proposed model and [8]. After having read the authors' rebuttal and the other reviews, most of my concerns have been addressed, hence I will raise my score to 6: Weak Accept. That being said, I would like to urge the authors to include the additional experiments, provided in the rebuttal, in the final version of their paper. If possible, please also consider providing additional qualitative comparisons on the various datasets. Moreover, I think that the authors should also add a section discussing the differences of their model compared to [8]. I believe that the ablation study about the impact of the use of the hypernetwork is very important and the authors should definitely include it in the final version of their paper.
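The composing mechanism explained in A7 of the rebuttal above (an aggregation block $D^c$ that cross-attends from a 3D point into the slots to get a feature $z$, and an attention block $D^a$ that maps $z$ and the slots into a shared space and softmaxes their similarity into per-slot probabilities) can be sketched as below. This is an assumed single-head NumPy sketch for intuition only; the shared linear maps `wq` and `wk` and all dimensions are hypothetical, not the paper's actual parametrization.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def compose(point_feat, slots, wq, wk):
    d = wq.shape[1]
    # D^c: cross-attention with the 3D point as the query over the slots,
    # yielding the aggregated feature z for that point
    scores = (point_feat @ wq) @ (slots @ wk).T / np.sqrt(d)
    z = softmax(scores) @ slots
    # D^a: map z and the slots into the same space through linear layers,
    # softmax the similarity to get the probability of the point
    # belonging to each slot
    probs = softmax((z @ wq) @ (slots @ wk).T / np.sqrt(d))
    return z, probs

rng = np.random.default_rng(1)
slots = rng.standard_normal((5, 8))   # 5 slots of dimension 8
point_feat = rng.standard_normal(8)   # feature of one 3D sample point
wq = rng.standard_normal((8, 8))      # hypothetical query projection
wk = rng.standard_normal((8, 8))      # hypothetical key projection
z, probs = compose(point_feat, slots, wq, wk)
```

The per-slot probabilities from $D^a$ are what make the composition slot-guided: each 3D point is softly assigned to the slot (object or background) it most likely belongs to.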
Rebuttal 1: Rebuttal: We thank the reviewers for their helpful comments. According to the feedback and suggestions, we mainly added the following experiments: - [uYMS, JyjT, 3Ksj, thZt] We conduct experiments on the more challenging MultiShapenet (MSN) dataset. The results demonstrate our model's ability to decompose more complex scenarios. - [uYMS, JyjT, 6zB4] We replace the hypernetwork with directly using the slots for conditioning the radiance fields per object, and observe that using a hypernetwork can provide a stronger 3D geometric bias than conditioning the NeRF. - [uYMS] We train sVORF with a ResNet34 backbone on the LLFF dataset, which produces a coarse segmentation and still segments foreground objects from complex scenes. - [JyjT, 6zB4] We replace the transformer with slot attention, and observe that our transformer decomposes scenes better than slot attention. - [6zB4] We incorporate Slot Mixers (SM) in our sVORF. The results prove that the introduction of 3D geometric bias in our slot-guided composition method is really important for scene decomposition. - [uYMS] We discuss testing results on an unseen grayscale version of the CLEVR-567 dataset, indicating that our model really learns to decompose the scene intrinsically. - [uYMS, JyjT] We provide some examples of depth maps for each dataset to show the geometry of our method. - [uYMS] We give the visualization of the learned object-level radiance fields. The results of the ablation experiments are recorded in the table below: | Model | NV-ARI $\uparrow$ | FG-ARI $\uparrow$ | | --- | ----------- | ----------- | | sVORF~(w/o Hypernetwork) | 21.6 | 65.9 | | sVORF~(w Slot-Attention) | 14.1 | 76.8 | | sVORF~(w SM) | 28.4 | 71.2 | | sVORF~(ours) | $\bf{81.5}$ | $\bf{92.0}$ | Pdf: /pdf/4a61d5a7456d658f7daeea761f36690e1bd2c288.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes sVORF to tackle the challenging problem of 'unsupervised' scene decomposition by learning a series of slot-guided neural radiance fields. Inferring object-level NeRFs from a hypernetwork instead of costly GRU modules, the overall pipeline can be efficiently trained on a collection of multi-view images and produces promising segmentation results on several synthetic benchmark datasets. Strengths: - The overall idea is well motivated and easy to follow, and the main pipeline is easy to grasp. - Extensive experiments on adequate synthetic datasets are conducted. The numerical results are promising compared to SOTA baselines, with detailed ablations. Weaknesses: - Limited unique contribution compared to previous work like uORF. I think this paper proposes a good extension on top of uORF, while my main concern is that the unique contributions are somewhat limited. The overall pipeline is built on top of the previous work uORF by replacing the expensive GRU modules with hypernetwork-inferred compositional object-level NeRFs, which I see as the main difference bringing efficiency advantages. - I do appreciate the promising performance on synthetic datasets; however, I think more discussions and results towards the main statements (consistency, efficiency against current baselines) need to be further strengthened. - The method of using radiance fields instead of light fields stands out in terms of strict multi-view consistency. Therefore, discussions and analysis of the multi-view consistency of segmentation masks (on synthetic and real-world cases) are expected to validate this point. - Are the adopted backbone networks (ResNet34 and ViT-Base for real-world image segmentation) pre-trained in any way or trained from scratch (L318)? I am not sure if the used datasets are sufficient to train such large networks. If pre-trained, is the method strictly unsupervised anymore? More clarifications are expected. 
Also, for real-world segmentation, I am wondering how the backbone impacts the performance if ResNet34 is used. - How does appearance affect the performance? Does the model really learn to decompose the scene intrinsically, or is it a very good 'colour/appearance' segmentor? I see some good evidence in Fig. 2 but find that the network struggles to separate close-by objects with similar colour, as well as shadows. Moreover, related to the last point on consistency, is this 'incorrect pattern' also consistent across views? - Related to the last point, visualization of the learned object-level radiance fields is expected. Readers could then know if the method indeed decomposes the scenes into objects and background as we expect. - It would be more exciting to see how the method works on real-world images with more than 1 foreground object. - What is the total training time required for sVORF to converge, as only per-epoch time is provided? - L118: (GEU) --> (GRU)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations have been discussed in the supplement. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and helpful suggestions! $\bf{Q1}$: Limited unique contribution compared to previous work like uORF. $\bf{A1}$: Unlike previous work like uORF, our method has two unique contributions. $\textbf{Firstly}$, we use a transformer-based module instead of the GRU module to extract object and background slots. This module is simple and easy to optimize without a GRU block. When we substitute the transformer with slot attention, we observe that slot attention fails to achieve the decomposition task in our model, as shown in Fig 5 in the rebuttal PDF. Based on this comparison, we can conclude that our transformer-based module achieves better scene decomposition than slot attention in our training setting. $\textbf{Secondly}$, we propose a slot-guided scene composition module to recompose the slots into novel views. Compared to the conditional NeRF in uORF, this module uses a hypernetwork to transform a slot into its radiance field and utilizes explicit geometric bias to obtain density and color. This design performs better than using a conditional NeRF, as shown in the Table in our global response. We speculate that using a hypernetwork can provide a stronger 3D geometric bias than directly using the slots for conditioning the radiance fields per object. Besides, this module uses slots as guidance to compose individual objects and the background. This scheme can make slot features 3D-aware, which is useful for scene decomposition. As shown in the Composing Mechanism section, this scheme largely outperforms the density-weighted mean used in uORF on the FG-ARI metric. $\bf{Q2}$: Limited discussions and results towards the main statements (consistency, efficiency against current baselines). $\bf{A2}$: $\textbf{Multi-view consistency}$: To our knowledge, the generation and segmentation quality of novel views can show the multi-view consistency of the proposed method. 
As shown in Table 1, our method beats all baselines in terms of NV-ARI (ARI on synthesized novel views) in both the Room-Chair and Room-Diverse scenes. Besides, our method outperforms the other baselines on novel view synthesis, as shown in Table 3. $\textbf{Efficiency}$: We provide the details about the training speed and memory consumption in Appendix A.1. In addition, we provide some examples of depth maps for each dataset to show the effectiveness of our method in Fig 3 of the rebuttal PDF. $\bf{Q3}$: Is the method strictly unsupervised? How does the backbone impact the performance if ResNet34 is used in real-world cases? $\bf{A3}$: The adopted backbone networks are trained from scratch, and the method strictly follows unsupervised learning. We use ResNet34 as the backbone and provide the segmentation results in Fig 2 of the rebuttal PDF. Unlike sVORF with ViT-Base, sVORF with ResNet34 produces a coarse segmentation but still segments foreground objects from complex scenes. $\textbf{Q4}$: How does appearance affect the performance? Is the 'incorrect pattern' of shadows also consistent across views? $\textbf{A4}$: To explore whether sVORF mainly relies on RGB color for scene decomposition, we conduct an evaluation on a grayscale version of the CLEVR-567 dataset. The model used in the evaluation is only trained on the RGB CLEVR-567 dataset. The model achieves 87.5 FG-ARI on the grayscale test set, which is on par with 92.0 FG-ARI on the default RGB images. The evaluation results demonstrate that sVORF really learns to decompose the scene intrinsically. For qualitative results, please refer to the rebuttal PDF. The 'incorrect pattern' of the shadow remains consistent across different views, as illustrated in Fig 5 in the rebuttal PDF. $\textbf{Q5}$: Visualization of learned object-level radiance fields. $\textbf{A5}$: Thank you for this suggestion! We provide the visualization of the learned object-level radiance fields on the CLEVR-567 dataset in Fig 4 in the rebuttal PDF. 
It further demonstrates that our method can achieve very clean scene decomposition. $\textbf{Q6}$: Experiments on real-world images with more than 1 foreground object. $\textbf{A6}$: To validate our method on more challenging scenarios, we conduct a preliminary experiment on the MSN dataset. The MSN dataset comprises $\textbf{11,733 distinct shapes}$, with each scene populated by 2-4 objects sourced from the ShapeNetV2 3D model dataset. The results are shown below: | Model | Supervision | ARI $\uparrow$ | Fg-ARI $\uparrow$ | PSNR $\uparrow$| | ----------- | ----------- | ------------ | ----------- | ----------- | | ObSuRF | image+depth | $\bf{64.1}$ | 81.4 | 27.41 | | sVORF | image | 63.4 | $\bf{84.1}$ | $\bf{30.51}$ | Compared with ObSuRF, sVORF achieves significantly higher Fg-ARI and comparable ARI without using depth information, which demonstrates the model's ability to decompose more complex scenarios. Please refer to Fig 1 in the rebuttal PDF for visual results. $\textbf{Q7}$: The total training time. $\textbf{A7}$: For the CLEVR-567 and Room-Chair datasets, sVORF is trained with a batch size of 16 for approximately 7 hours on 8 Nvidia RTX 2080 Ti GPUs. For the larger CLEVR3D dataset, sVORF takes approximately 2 days using a batch size of 16 on 8 Nvidia V100 GPUs. $\textbf{Q8}$: Typo in L118. $\textbf{A8}$: Thank you for catching that typo; we will fix it in the final version. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: Firstly, I thank the authors for providing more clarifications and experiments on the relatively more complex MSN dataset. I have carefully read all the reviews and the author response; overall I think some concerns have been successfully addressed. I think the additional visualisation and analysis would be critical for people to see how the proposed method decomposes the scenes via slot-related neural fields. 
In addition, I expect the detailed configuration as well as the experiments on gray-scale images and the MSN dataset to be included in the main paper, and believe it will make for a stronger submission. However, some key limitations still remain; for example, I guess the proposed method, as well as previous ones, would struggle on real-world scenes with multiple objects with rich textures. I hope such limitations can be clearly discussed in the main paper to make it clear where the method stands in the challenging task of unsupervised scene decomposition. Overall, I raise my score to 5 with borderline acceptance, considering the above modifications and limitations.
Semi-Implicit Denoising Diffusion Models (SIDDMs)
Accept (poster)
Summary: This paper proposes the Semi-Implicit Denoising Diffusion Model (SIDDM) to enable fast sampling while maintaining high generation quality. Specifically, SIDDM applies an implicit model to match the marginal distributions of the reverse diffusion process. Meanwhile, SIDDM models the explicit conditional distribution of the forward diffusion. Furthermore, a regularization method is proposed to enhance the performance. Experiments show that SIDDM has comparable performance to other diffusion models with fewer sampling steps. Strengths: 1. The authors analyse the reasons for the limitations of DDGAN, and decompose the denoising distribution to improve the training objective. The idea is reasonable. 2. The experiments on the simulated Mixture of Gaussians and several popular public datasets demonstrate the effectiveness of the proposed method. 3. The authors also provide the code for reproducing the results, which shows the solidity of the work. Weaknesses: 1. There are dashed lines of different colours in Figure 1, but the authors do not give a corresponding explanation. 2. The implementation details are not clarified, such as the structure of the regression model, the discriminator regularizer, and the denoiser. Meanwhile, the training settings (e.g., training iterations) are not provided. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Fig. 3 and Tab. 5, larger step counts (e.g., 16) perform worse than smaller ones (e.g., 2). This is different from the conventional situation in diffusion models. Please give some analysis. 2. It is recommended to give the training and inference strategy (algorithm) of SIDDM. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the societal impact in the supplementary material. It would be better if the authors also discussed some limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Larger step (e.g., 16) performs worse than smaller step sizes. In the baseline model DDGANs [11], the authors also observe that increasing the number of diffusion steps degrades generative quality. They hypothesize that increasing the number of diffusion steps requires more capacity from the discriminator, since a conditional GAN is needed for each denoising step, and conditional GANs are difficult to train when the number of category labels is large. However, the exact reason remains unknown. According to our simulation results, it is possible that increasing the number of steps leads to worse results. Fortunately, we observe that with a small number of diffusion steps our method can already obtain high-quality generations. While we agree that a thorough theoretical understanding of this issue is essential, conducting this analysis is non-trivial and we will definitely consider it in the near future. > Training and inference strategy (algorithm) of DDGAN. For the training settings, we reimplemented DDGANs with our proposed GAN training structure, including the time schedule and the network design, and found it was not stable during training without the R1 constraint. For the inference strategy, we follow the original DDGANs implementation, with posterior sampling conditioned on the previous $x_t$ and the predicted $x'_0$ from the denoiser. To be more specific, the DDGAN training algorithm is: 1. Sample $x_0\sim q(x_0), \; t-1\sim \text{Uniform}(\{0, \dots, T-1\}), \; \epsilon_{t-1}\sim N(0,1), \; \epsilon_t\sim N(0,1)$. 2. $x_{t-1}=\sqrt{\bar \alpha_{t-1}}x_0 + \sqrt{1 - \bar \alpha_{t-1}}\epsilon_{t-1}, \; x_t=\sqrt{1-\beta_t}x_{t-1} + \sqrt{\beta_t}\epsilon_t$. 3. $x_0'=G_\theta(x_t,t), \; x_{t-1}' \sim q(x_{t-1}|x_0',x_t)$, where $q(x_{t-1}|x_0',x_t)$ is the posterior from DDPM. 4. D step: $\nabla_\phi (-\log(D_\phi(x_{t-1},x_t,t))-\log(1-D_\phi(x_{t-1}',x_t,t)))$ 5. 
G step: $\nabla_\theta (-\log(D_\phi(x_{t-1}',x_t,t)))$ 6. Repeat 1-5 until the model converges. Inference: 1. $x_T\sim N(0,1)$ 2. for t = T,...,1 do $\epsilon \sim N(0,1)$ if $t>1$ else $\epsilon=0$, $x_0'=G_\theta(x_t,t)$, $x_{t-1}' \sim q(x_{t-1}|x_0',x_t)$ end for; return $x_0'$, where $x_{t-1}' \sim q(x_{t-1}|x_0',x_t)$ simply denotes the posterior sampling of DDPM (written this way due to a markdown rendering issue). --- Rebuttal Comment 1.1: Title: After rebuttal Comment: Thanks for the rebuttal. First of all, I need to apologize: due to my mistake, I mistyped the method name in **Summary** and **Question-2**. **I have revised my comment.** For Question-2, I actually hoped the authors would provide the training and inference strategy of this method (SIDDM). I apologize again for the misunderstanding and inconvenience caused to the authors, area chairs, and other reviewers. Back to the rebuttal: the authors provide an explanation for Question-1. It would be better if the authors further provided the method's training and inference strategies (algorithms). --- Reply to Comment 1.1.1: Title: Training and inference strategy of this method (SIDDM) Comment: To reviewer 6duC: Hi, no worries, we are glad to provide the training and inference strategy of our proposed method. Training: 1. Sample $x_0\sim q(x_0), \; t-1\sim \text{Uniform}(\{0, \dots, T-1\}), \; \epsilon_{t-1}\sim N(0,1), \; \epsilon_t\sim N(0,1)$. 2. $x_{t-1}=\sqrt{\bar \alpha_{t-1}}x_0 + \sqrt{1 - \bar \alpha_{t-1}}\epsilon_{t-1}, \; x_t=\sqrt{1-\beta_t}x_{t-1} + \sqrt{\beta_t}\epsilon_t$. 3. $x_0'=G_\theta(x_t,t), \; x_{t-1}' \sim q(x_{t-1}|x_0',x_t)$, where $q(x_{t-1}|x_0',x_t)$ is the posterior from DDPM; $x_t'\sim q(x_t|x_{t-1}')$, where $q(x_t|x_{t-1}')$ is the forward diffusion distribution. 4. D step: $\nabla_{\phi,\psi} (-\log(D_\phi(x_{t-1},t-1))-\log(1-D_\phi(x_{t-1}',t-1)) + ||C_\psi(x_{t-1}', t-1)-x_t'||_2)$ 5. 
G step: $\nabla_\theta (-\log(D_\phi(x_{t-1}',t-1)) + ||x_t-x_t'||_2 - ||C_\psi(x_{t-1}', t-1)-x_t'||_2)$ 6. Repeat 1-5 until the model converges. Our inference follows the same strategy as DDGAN. Inference: 1. $x_T\sim N(0, 1)$ 2. for t = T,...,1 do $\epsilon \sim N(0,1)$ if $t>1$ else $\epsilon=0$, $x_0'=G_\theta(x_t,t)$, $x_{t-1}' \sim q(x_{t-1}|x_0',x_t)$ end for; return $x_0'$. Let us know if you have more concerns about our training or inference strategies; we are more than happy to help you address them.
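Both inference loops above rely on the standard DDPM posterior $q(x_{t-1}|x_0', x_t)$, which is Gaussian with a closed-form mean and variance. As an editorial illustration only, here is a minimal NumPy sketch of that ancestral sampling loop; the linear beta schedule and the zero-predicting stand-in for $G_\theta$ are placeholder assumptions, not the authors' trained model:

```python
import numpy as np

T = 4
betas = np.linspace(1e-4, 0.5, T)  # placeholder linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_posterior_sample(x0_pred, x_t, t, rng):
    """Sample x_{t-1} ~ q(x_{t-1} | x_0', x_t), the Gaussian DDPM posterior."""
    if t == 0:
        return x0_pred
    ab_t, ab_prev = alpha_bars[t], alpha_bars[t - 1]
    beta_t = betas[t]
    # Closed-form posterior mean/variance from DDPM (Ho et al., 2020).
    coef_x0 = np.sqrt(ab_prev) * beta_t / (1.0 - ab_t)
    coef_xt = np.sqrt(alphas[t]) * (1.0 - ab_prev) / (1.0 - ab_t)
    mean = coef_x0 * x0_pred + coef_xt * x_t
    var = beta_t * (1.0 - ab_prev) / (1.0 - ab_t)
    return mean + np.sqrt(var) * rng.standard_normal(x_t.shape)

def sample(generator, shape, rng):
    """Ancestral sampling: x_T ~ N(0, I), then T posterior steps."""
    x = rng.standard_normal(shape)
    for t in range(T - 1, -1, -1):
        x0_pred = generator(x, t)  # stands in for G_theta(x_t, t) -> x_0'
        x = q_posterior_sample(x0_pred, x, t, rng)
    return x

rng = np.random.default_rng(0)
out = sample(lambda x, t: np.zeros_like(x), (2, 3), rng)
```

With a real model, `generator` would be the trained denoiser $G_\theta$; the lambda here exists only to exercise the loop end to end.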
Summary: This paper proposes a way to achieve fast sampling at inference without compromising the sample diversity and quality of diffusion models, and shows its effectiveness on both conditional and unconditional generation, qualitatively and quantitatively. A theoretical framework is also proposed, which looks reasonable to me. Strengths: The method basically proves that a very low FID can be obtained with very few sampling steps, compared to ADM and DDGAN. In addition, it claims that it can do both conditional and unconditional generation. Moreover, the theoretical framework looks solid to me, though I am not from the theory field so I cannot comment more on this. Weaknesses: However, it seems that there are no experiments on conditional generation (either qualitative or quantitative). More relevant experiments would be welcome to consolidate this submission and prove the effectiveness of the new sampling strategy. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The overall story looks good to me. I wonder whether you would like to complete your experiments with more qualitative/quantitative results in the conditional setting. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As mentioned above, more analysis on the conditional setting is welcome, e.g., using Stable Diffusion as the base model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Conditional setting Thanks for the positive assessment of our work and the suggestions. It was our mistake not to highlight our conditional experiments. In fact, our experiments and comparisons on Imagenet1000 use conditional generative models, and we have some additional preliminary results on text-to-image conditional generation on Laion4B with a small UNet, shown in Figure 6 of the rebuttal pdf.
Summary: In this paper, the authors propose to use an implicit model to match the marginal distributions of noisy data and the explicit conditional distribution of the forward diffusion. Specifically, an adversarial loss is applied to the marginal distributions obtained by the forward and backward processes, and a KL loss is used to regularize the difference between the conditional distributions. Experiments show that the method works with few inference steps. Strengths: * Using an adversarial loss is a good idea for learning the conditional distribution of the backward process with a large step size. * The experimental results with few steps seem good, both for toy examples and real datasets. Weaknesses: * Generally, the paper is hard to follow; the formulas should be presented in a better way: * How is equation (6) obtained from (5)? * Lines 129-137 are really difficult to follow: since $q(x_t|x_{t-1})$ is a Gaussian distribution regardless of the step size, why not directly use a Gaussian distribution to parameterize $p_\theta(x_t|x_{t-1})$? * It is problematic to treat equations (11) and (12) as equivalent, since $\psi$ is fixed in (11). * In the experiments, the method works better with fewer steps for both the toy and real datasets, which is counter-intuitive. Any explanation for this phenomenon? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Please see the weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Hi, thanks for the questions; we would love to make the details more readable. Also, due to a rendering issue, we place some material in the rebuttal pdf. > Line 129-137 is really difficult to follow We agree with the reviewer that those lines are not written well. Here is our rewrite, which we hope clarifies that paragraph: There are two terms in Eq 8. The second term, $H(p(x_t|x_{t-1}),q(x_t|x_{t-1}))$, is a cross-entropy, which is simply a reconstruction loss. This loss can be easily computed because $q$ generates data and $p$ does the denoising using $G$. The first term, $H(p(x_t | x_{t-1}))$, is challenging to estimate. The challenges stem from two facts: first, $p(x_t | x_{t-1})$ is unknown in general, and second, estimating entropy entails computing a high-dimensional integral. However, we observe that *at convergence*, $p(x_t | x_{t-1})$ is the same as $q(x_t | x_{t-1})$, which is a Gaussian distribution, so we only need to estimate its parameters $\psi$. Our method can be viewed as a continuous generalization of the method proposed in [29]. > Obtaining equation (6) through (5). Please refer to Equation (A) in the rebuttal pdf, where we rewrite Equation 5 as a typical adversarial training objective with a non-saturating loss in the middle expectation; it then becomes joint-distribution JSD matching under the sampling strategy proposed by DDGANs. Here we have to admit that Eq (6) has a typo: the $E_{q(x_0)q(x_{t-1}|x_0)q(x_t|x_{t-1})}$ in the last joint-matching equation should not appear. Thanks for your careful checking. > It problematic to treat equation (11) and (12) as equivalent We are sorry for the confusion. We rewrote that part as Equation (B) in the rebuttal pdf. Due to a markdown rendering issue, we write $E$ for the expectation symbol. $H(X)=-E_{q(X)}\log q(X)$ denotes the entropy of variable $X$. 
And $H(X,Y) = -E_{q(Y)}\log q(X)$ denotes the cross-entropy between the two variables $X,Y$. In Equation (10), we maximize the negative cross-entropy and optimize $\psi$, which yields $p_\psi(x_t|x_{t-1})= p_\theta(x_t|x_{t-1})$ at the optimum. In practice there may be approximation error, i.e., $p_\psi(x_t|x_{t-1})\approx p_\theta(x_t|x_{t-1})$. Thus, under this condition, if we minimize the cross-entropy and optimize $\theta$ while fixing $\psi$, we can rewrite the minimization of the cross-entropy as Equation (B). > Smaller steps work better. In the baseline model DDGANs [11], the authors also observe that increasing the number of diffusion steps degrades generative quality. They hypothesize that increasing the number of diffusion steps requires more capacity from the discriminator, since a conditional GAN is needed for each denoising step, and conditional GANs are difficult to train when the number of category labels is large. However, the exact reason remains unknown. According to our simulation results, it is possible that increasing the number of steps leads to worse results. Fortunately, we observe that with a small number of diffusion steps our method can already obtain high-quality generations. While we agree that a thorough theoretical understanding of this issue is essential, conducting this analysis is non-trivial and we will definitely consider it in the near future. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal; the new version of the paper looks better and clearer, and I'll change my rating to 5. However, I am still not convinced about why smaller steps get better results and think the authors can do more work on this part. --- Reply to Comment 1.1.1: Comment: Hi, thanks for the thoughtful suggestions. We will improve this explanation, and we are trying to analyze the reason behind the influence of training steps in our method. Thanks! 
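The bookkeeping in the exchange above rests on the identity $\mathrm{KL}(p\|q) = H(p, q) - H(p)$: with $\psi$ fixed, minimizing the cross-entropy in $\theta$ differs from minimizing the KL only by an entropy term. For 1-D Gaussians both sides have closed forms, so the identity can be checked numerically; this is an editorial sketch with arbitrary parameter values, not material from the paper:

```python
import numpy as np

def gauss_entropy(sigma):
    """Differential entropy of N(mu, sigma^2): 0.5*log(2*pi*e*sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma**2)

def gauss_cross_entropy(mu_p, s_p, mu_q, s_q):
    """Cross-entropy H(p, q) = -E_p[log q] for two 1-D Gaussians."""
    return 0.5 * np.log(2 * np.pi * s_q**2) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2)

def gauss_kl(mu_p, s_p, mu_q, s_q):
    """Closed-form KL(p || q) for two 1-D Gaussians."""
    return np.log(s_q / s_p) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2) - 0.5

# KL(p||q) should equal H(p, q) - H(p) term for term.
kl = gauss_kl(0.0, 1.0, 1.0, 2.0)
ce_minus_h = gauss_cross_entropy(0.0, 1.0, 1.0, 2.0) - gauss_entropy(1.0)
```

Since $H(p)$ does not depend on the second distribution, optimizing the cross-entropy over the second argument is equivalent to optimizing the KL, which is the point of the (11)/(12) discussion.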
--- Rebuttal 2: Comment: Hi, We would greatly appreciate knowing if we have successfully addressed your questions. If you have any additional concerns, please don't hesitate to share them with us.
Summary: The paper presents a method called Semi-Implicit Denoising Diffusion Model (SIDDM) aimed at accelerating the sampling process, enhancing scalability to large datasets, and improving model performance. It improves upon the DDGAN model by reformulating the denoising distribution of diffusion models with explicit and implicit training objectives, which leverages an implicit GAN objective for the marginal distribution and an L2 reconstruction loss for the conditional distribution. To improve generative quality, the authors adopt a U-net-like structure for the discriminator and a new regularization technique involving an auxiliary denoising task. Experiments are conducted on CIFAR-10, CelebA-HQ-256, and ImageNet to demonstrate its effectiveness. Strengths: (1) The experiments and comparisons are comprehensive, and the results are good. The ablation is comprehensive. (2) The writing has a clear structure (though I am not sure whether there are any issues with the derivation). (3) The method performs quite well on large datasets like ImageNet, whereas the previous ones did not. Weaknesses: (1) There is no graph showing the FID-sampling-speed tradeoff on ImageNet. (2) In Table 1, the 'SIDDMs w/o AFD (ours)' entry is quite close to DDGAN in terms of setting, but why is there such a big difference in performance? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weakness part. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. FID-sampling speed tradeoff on ImageNet. We have shown the results of FID versus sample steps in Table 7 of the rebuttal pdf. 2. 'SIDDMs w/o AFD (ours)' has a large performance gap compared with DDGANs. This is a good question. In Equation 7, our GAN objective matches the marginal distributions and the AFD matches the conditionals. Together they match the joint distributions $q(x_{t-1}, x_t)$ and $p_{\theta}(x_{t-1}, x_t)$. Matching the joint distributions is sufficient to guarantee matching between the conditionals $q(x_{t-1}|x_t)$ and $p_{\theta}(x_{t-1}|x_t)$. DDGANs also model the matching between the joint distributions. However, if we drop the AFD, we only match the marginals $q(x_{t-1})$ and $p_{\theta}(x_{t-1})$, which does not guarantee matching between the conditionals $q(x_{t-1}|x_t)$ and $p_{\theta}(x_{t-1}|x_t)$. Ultimately, 'SIDDMs w/o AFD (ours)' will output a biased distribution. --- Rebuttal Comment 1.1: Title: Thanks for the reply Comment: NA
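The joint-versus-marginal argument above can be seen concretely with two binary variables: two joint distributions can share both marginals while having different conditionals, so a marginal-matching objective alone cannot distinguish them. A tiny illustrative check (the probability tables are made up for this example, not taken from the paper):

```python
import numpy as np

# Two joint distributions p(a, b) over binary a (rows) and b (cols).
joint1 = np.array([[0.25, 0.25],
                   [0.25, 0.25]])  # a and b independent
joint2 = np.array([[0.40, 0.10],
                   [0.10, 0.40]])  # a and b correlated

# Both marginals agree...
assert np.allclose(joint1.sum(axis=1), joint2.sum(axis=1))  # p(a)
assert np.allclose(joint1.sum(axis=0), joint2.sum(axis=0))  # p(b)

# ...but the conditionals p(b | a) differ.
cond1 = joint1 / joint1.sum(axis=1, keepdims=True)  # rows: [0.5, 0.5]
cond2 = joint2 / joint2.sum(axis=1, keepdims=True)  # rows: [0.8, 0.2] / [0.2, 0.8]
different = not np.allclose(cond1, cond2)
```

This is exactly why dropping the AFD term, which matches the conditionals, can leave the learned conditional distribution biased even when the marginals look right.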
Rebuttal 1: Rebuttal: We want to thank all reviewers for evaluating the paper, and we will fully address all the reviewers' concerns. We have put additional tables and figures in the rebuttal pdf file; please find the corresponding tables and figures in our responses below. If our answers are satisfactory, we would be thankful if you could update your score. Otherwise, we are happy to answer any more questions. Pdf: /pdf/fd6c2b1774af70c7357d90f9b51fc221097efb0c.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper addresses the issue of making diffusion models faster (faster sampling) while still yielding high quality, diverse samples, from a large phenomenological space (scaling to large datasets). This is achieved by a novel semi-implicit denoising diffusion model (SIDDM). The idea extends the approach followed in DDGANs by reformulating the objective, and a novel decomposition into two components: (a) a pair of marginal distributions over the denoised data (at each step) with an implicit form to be aligned with the JSD via a learnt critic, and (b) a pair of conditional, forward diffusion components having explicit forms to be aligned using the KL divergence. This mixed objective allows better scaling than DDGANs. A novel regularizer is also introduced for the discriminator which allows a more granular distribution matching. A proof of concept validation is performed using a synthetic, Mixture of Gaussians dataset. Ablations demonstrate the benefit of both the innovations. SIDDM is benchmarked against the art on the CIFAR10, CelebA-HQ-256, and ImageNet datasets demonstrating performance competitive with DDGANs, better scaling to ImageNet, and much higher sampling speeds than classical DDPMs. Strengths: **Relevance** The paper addresses an important problem – speeding up diffusion based generative image models while not compromising on the image quality, with good scalability to large, complex datasets. This should be of relevance to the community working in these areas but also of interest to a much wider audience. **Originality** The proposed decomposition is interesting and novel, balancing both a non-parametric, population-level statistical alignment of distributions and a direct (simple, parametric) objective which decomposes into a sample level objective over Auxiliary Forward Diffusion (AFD). It stands to reason that this can have a significant impact on the training of the diffusion reversal DNN. 
**Technical Quality** - The technical approach mostly appears sound. It creates a new, direct and stronger learning signal which augments the critic based learning objective. This can lead to better learning outcomes over more complex distributions representing larger datasets like ImageNet. In addition, regularizing the discriminator via the UnetGAN formulation seems reasonable too. - The mathematical formalism used in the paper, including the upper bound in Theorem 1, add to the strength of the contribution, making it principled, and beyond the heuristic choices made. I haven’t verified the derivations in the paper entirely but have followed the structure of the derivations and the proof and find them reasonable. **Experimental Validation** There are several positives in the choices made for evaluation: - The choice of using a synthetic dataset based on Mixture of Gaussians (MoGs) allows for validating the core idea. - While the results on CIFAR10 and CelebA-HQ-256 are comparable with DDGANs, the FID scores on ImageNet demonstrate a performance comparable to SOTA diffusion models while DDGANs performance seems to demonstrate a failure to scale to ImageNet. This validates the core motivation of the paper. - The ablation study, with and without the AFD term, validates the benefit of the decomposition of the training objective that forms the basis of SIDDMs; as well as the benefit due to the UNetGAN regularizer in (14) **Significance** The approach has the potential for significant impact over an important area of research once the approach has been reproduced and ‘hardened’. Weaknesses: **Experimental Validation** (a) The MoG experiment (Section 5.1) demonstrates that SIDDM achieves good results with very small number of time steps, and has better stability than DDGANs. However, - It is not discussed why it is unable to converge to the simple MoG data distribution, when the number of time steps increases (similar to DDGAN). - Table 1 shows the FID score. 
These trends don’t tally well with the behavior shown in Figure 3. Perhaps using other metrics (JSD, LPIPS, etc.) may explain the results better? - The implications of the above on the learning on real datasets, and the scalability thereof on larger, complex datasets like ImageNet is unclear. Such tasks may require more time steps (it is not clear whether this is the case) in which case, convergence to the data distribution becomes an issue. (b) The results on CelebA-HQ-256 are typically used to show performance on high-resolution data. The images (depictions) of the generated samples, both in the paper and in the supplementary, are too small. Similarly, the ImageNet depictions are very small. (c) While NFE (number of function evaluations) and the wall-clock time are shared, no details of the hardware are shared. **Clarity** (l. 230) It is not quite clear why the material on training with real datasets like CIFAR10, Celeb-HQ etc. is in Section 5.1 and not Section 5.2. Kindly fix. **Reproducibility** Implementation details are missing. The authors also don’t mention that the code will be released. Given this, I suspect it may be hard to reproduce results. **Discussion of Limitations** No discussion of limitations in the paper. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Kindly address the following: - Explain the behavior on the MoG dataset using a larger suite of metrics (see Borji, CVIU 2019; and its 2021 update for example). - Explain the behavior (non-convergence to data distribution) when using a larger number of steps and the scalability implications. - Did you investigate the tradeoffs between a smaller step size with larger number of steps, and smaller number of steps with a larger step size? How does one decide this? Does the theoretical formulation lend itself to such a guidance? - Kindly share details of the hardware, implementation, and other details which will aid reproducibility. - Will the code be shared? - Discuss limitations. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Authors don’t address limitations of the paper. I don’t think there are any direct negative societal implications. Other limitations and opportunities for improvement are addressed in my responses to previous questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Explain the behavior on the MoG dataset using a larger suite of metrics We pick the Unbiased FID mentioned in Borji, CVIU 2021 and the MMD metric for our quantitative evaluation of the MoG generations; the results are shown in Table 6. MMD is a kernel-based nonparametric metric that measures the difference between two distributions. It has nice theoretical properties and is suitable for the MoG data because here we do not need deep networks to learn representations. For the additional Unbiased FID and MMD results, we found the scores are quite close, and our model still performs better overall than DDGAN. There are also other advanced metrics in Borji, CVIU 2021, but some are specifically designed for image datasets or conditional generation cases. > Explain the behavior (non-convergence to data distribution) when using a larger number of steps and the scalability implications In the baseline model DDGANs [11], the authors also observe that increasing the number of diffusion steps degrades generative quality. They hypothesize that increasing the number of diffusion steps requires more capacity from the discriminator, since a conditional GAN is needed for each denoising step, and conditional GANs are difficult to train when the number of category labels is large. However, the exact reason remains unknown. According to our simulation results, it is possible that increasing the number of steps leads to worse results. Fortunately, we observe that with a small number of diffusion steps our method can already obtain high-quality generations. In our paper, we train four-step diffusion on Imagenet1000, and here we additionally show preliminary results of our SIDDMs on the text-conditional Laion4B dataset with a small UNet, shown in rebuttal pdf Figure 6. 
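For reference, the kernel-based MMD mentioned above can be estimated in a few lines. This is a generic (biased) RBF-kernel MMD² estimator added for illustration, not the exact evaluation script behind Table 6, and the bandwidth is an arbitrary choice:

```python
import numpy as np

def rbf_mmd2(x, y, bandwidth=1.0):
    """Biased MMD^2 estimate between sample sets x, y with an RBF kernel.

    Equals ||mean-embedding(x) - mean-embedding(y)||^2 in the RKHS,
    so it is non-negative and near zero for samples from one distribution.
    """
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
# Same distribution -> small MMD^2; shifted distribution -> large MMD^2.
same = rbf_mmd2(rng.standard_normal((200, 2)), rng.standard_normal((200, 2)))
shifted = rbf_mmd2(rng.standard_normal((200, 2)),
                   rng.standard_normal((200, 2)) + 3.0)
```

In practice the bandwidth is often set by the median heuristic (median pairwise distance of the pooled samples) rather than a fixed constant.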
> the tradeoffs between a smaller step size with larger number of steps, and smaller number of steps with a larger step size In DDGANs and our proposed method, we make no Gaussian assumption on $p_{\theta}(x_{t-1}|x_t)$. Thus, theoretically, a model trained with any step size and the corresponding number of steps would converge to the same distribution at the end of training. However, this claim rests on the assumption that the GAN converges to the optimum, which is often empirically impossible. Our choice of step size and the corresponding number of steps mainly depends on our empirical observations and those of DDGANs (we show additional results for the sensitivity to the number of steps in rebuttal pdf Table 7). While we agree that a thorough theoretical understanding of this issue is essential, conducting this analysis is non-trivial and we will definitely consider it in the near future. > Details of the hardware, implementation, and other details Our models are trained on TPUv4 clusters; a single TPUv4 is around $1.2\times$ as fast as an A100. The code is implemented with JAX. Our model structure mostly follows the ADM [2] attention UNet, which is also used by Imagen, but instead of predicting the noise, our $G$ model directly reconstructs $x_0'$. We have put our simulation code, containing our paper's main formulation, in the supplementary ".zip" file. We will release the code for public research once our work reaches the final stage. > Discuss limitations Our model incorporates a UNet-like discriminator, and the discriminator needs to see both fake and real data; thus we use at least twice the memory of a diffusion model. Also, GAN training involves two stages, resulting in at most half the training speed per batch iteration compared with DDPM. These facts lead to more CO2 emissions. 
However, distilling a DDPM model also takes the same amount of training time as training the DDPM itself. Thus, compared with distillation+DDPM, our model probably costs the same amount of energy and time but with less performance degradation. --- Rebuttal Comment 1.1: Title: Post rebuttal Comment: Thanks for the detailed response. I have no further questions. I have also gone through the remaining reviews and author responses. I will finalize the rating accordingly.
On Private and Robust Bandits
Accept (poster)
Summary: In this paper, the authors study private and robust multi-armed bandits (MABs), where the heavy-tailed rewards are contaminated. They first present a minimax lower bound characterizing the information-theoretic limit of regret with respect to the privacy budget, contamination level, and heavy-tailedness. Then, they propose a meta-algorithm built on a private and robust mean estimation sub-routine, PRM, which achieves nearly-optimal regret. Finally, the authors run simulations to support their theoretical results. Strengths: 1. The problem of private and robust bandits is well-motivated. 2. The upper bounds nearly match the lower bound; the paper is solid. 3. The writing is very clear; I enjoyed reading the paper. 4. Connecting privacy with robustness using truncation is quite interesting. Weaknesses: My main question is about the last term in both Theorem 6.2 and Theorem 6.12. Intuitively, as the contamination level converges to 0, the regret upper bound should become smaller, while the last term will converge to infinity. Could the authors discuss possible methods to get rid of this term, or the difficulties in doing so? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please refer to the weaknesses above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive evaluation of our paper! Your observation is really sharp and points to a subtle aspect of the Huber model. In fact, your observation is exactly the reason that we choose an upper bound $\alpha_1$ on the actual contamination level $\alpha$ and state the upper bound results in terms of $\alpha_1$ rather than $\alpha$. That is, for a very small but non-zero $\alpha$, one can choose a larger $\alpha_1$ to balance the regret. This subtle issue is also mentioned in a nice related work [1]; see the remark after Theorem 7.4 and Remark 5.4. [1] Chen, Sitan, et al. "Online and distribution-free robustness: Regression and contextual bandits with Huber contamination." 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2022. --- Rebuttal Comment 1.1: Comment: Thanks for your response. My concern is mostly addressed. I maintain my score and vote for acceptance.
Summary: The paper studies the MAB problem where the agent receives rewards that are both heavy-tailed and contaminated according to Huber's model (the environment provides true rewards with probability $1-\alpha$ and samples from an arbitrary, unknown distribution with probability $\alpha$). For heavy-tailed rewards, both the setting with a finite $k$-th raw moment and the setting with a finite $k$-th central moment (bounded expected deviation from the mean) are considered. The paper provides a regret lower bound for any algorithm in this setting and proposes a novel algorithm whose regret is analysed in both the finite $k$-th raw moment setting and the finite central moment setting. Some experiments are provided in the Appendix, though an existing algorithm, RUCB, appears to outperform the proposed algorithm. Strengths: The paper provides both a regret lower bound as well as nearly matching upper bounds for the regret of the proposed algorithm. If contamination is removed from the setting, the regret upper bounds provided here match the lower bounds for the bounded central moment setting, filling a gap in the existing study of private bandits. The algorithms are accompanied by experiments showcasing their performance against other existing algorithms designed for similar (but not identical) settings. Weaknesses: Presentation is lacking in my opinion, and I found the paper hard to follow. I am missing a thorough description of the problem statement describing the interaction between the agent and the environment in a self-contained, concise section. I found the paper quite hard to read, as the setting isn't made explicit in a single place. I suggest this be done instead of the Preliminaries section (the definitions can be made in place as the need arises rather than having to keep track of them until they appear). 
I would have liked to see a more explicit description of how privacy ties into this setting: whose privacy are we aiming to defend here and from whom? A more clear description of the setting should also clarify this, in my opinion. I find it hard to argue for the significance of the results here meeting the conference standards as the setting feels a bit too narrow. I would have liked to see the synergy between the two aspects studied here better articulated: is the combination of privacy and heavy tailed rewards in bandit settings more difficult than the sum of its parts? What does their interplay look like? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See Weaknesses. Also am I reading the experimental results correctly when assessing that the RUCB algorithm outperforms the one proposed here? Can you find a problem instance where your algorithm performs best among the baselines? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I do not see any potential negative societal impact arising from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the suggestions on the presentation. We will address them as follows. - **Description of the problem setting** We have stated the MAB protocol in the first paragraph of the introduction section. As suggested by the reviewer, we will re-emphasize it in the preliminary part (Section 3.1) in the next version. - **Description of privacy protection** We have explained privacy protection in lines 112-118. We will add a more intuitive explanation in the next version as follows: In other words, we protect the privacy of any individual user who interacts with the learning agent in the sense that an adversary observing the output of the learning agent (i.e., a sequence of actions) cannot infer too much about whether any particular individual has participated in this process, or the specific reward feedback of this individual. - **Interplay of different parts** We first clarify that privacy and heavy-tailed rewards are not just the sum of the two parts. Rather, their combination introduces several interesting interplays. In particular, due to heavy-tailed rewards, it becomes difficult to guarantee privacy as the sensitivity is no longer bounded. Thus, one cannot directly apply standard DP mechanisms, which assume bounded sensitivity. Instead, one needs to first carefully control the sensitivity in some way without incurring too large an increase in regret. To this end, we employ the truncation method with a careful choice of truncation threshold to balance privacy and regret. In addition to the interplay of privacy and heavy-tailed rewards, we also considered robustness in terms of contamination. Interestingly, we showed that the truncation method can handle all three of them in a principled way. Given the recent popularity of research on privacy, heavy-tailed feedback, and robustness, we believe that our setting, which studies the interplay of all three, will have a broad audience.
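The truncation-based sensitivity-control step described above can be sketched in a few lines. This is our own illustrative toy (the function name, the symmetric clipping range, and the Laplace calibration are assumptions for exposition), not the paper's exact private and robust mean estimator:

```python
import numpy as np

def private_truncated_mean(rewards, threshold, epsilon, rng=None):
    """Toy sketch: clip heavy-tailed rewards so the empirical mean has
    bounded sensitivity, then add Laplace noise for epsilon-DP."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = np.clip(np.asarray(rewards, dtype=float), -threshold, threshold)
    # After clipping, changing one user's reward moves the mean by at
    # most 2 * threshold / n, so that bounds the sensitivity.
    sensitivity = 2.0 * threshold / len(clipped)
    return clipped.mean() + rng.laplace(scale=sensitivity / epsilon)
```

In the analysis the rebuttal refers to, the threshold would be chosen to balance the truncation bias against the privacy noise rather than being a free parameter as here.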
Practically speaking, our setting naturally covers many real-world scenarios. Theoretically speaking, our paper not only provides some useful tools for the field (e.g., an optimal concentration result), but also reflects the recent trends on the interplay of privacy and robustness [1]-[3]. [1] Samuel B Hopkins, Gautam Kamath, Mahbod Majid, and Shyam Narayanan. Robustness implies privacy in statistical estimation. arXiv preprint arXiv:2212.05015, 2022. [2] Hilal Asi, Jonathan Ullman, and Lydia Zakynthinou. From robustness to privacy and back. arXiv preprint arXiv:2302.01855, 2023. [3] Kristian Georgiev and Samuel B Hopkins. Privacy induces robustness: Information-computation gaps and sparse mean estimation. arXiv preprint arXiv:2211.00724, 2022. **Experimental results** We first clarify that we design the experiments by choosing DPRSE as the private bandit benchmark and RUCB as the robust bandit benchmark. RUCB is robust but not private, and hence it is better than ours in terms of regret. Our algorithm not only handles robustness but also guarantees differential privacy, which naturally leads to a worse regret guarantee compared to RUCB. --- Rebuttal Comment 1.1: Title: Further clarifications on the setting Comment: Thank you for your response. I found your clarifications regarding the interplay between heavy-tailed rewards and the standard DP mechanisms and those surrounding the experimental results to be very informative. I believe the description provided in the sections you mention and the description in the response does not provide sufficient clarity about the setting to someone not intimately familiar with the privacy side of MABs (such as myself). Please allow me to formulate what I am finding unclear more explicitly: - What information does the adversary have access to (entire history of actions of the decision maker? The entire history of realised rewards as well? What does the adversary know about the users whose privacy we aim to protect?)
and what is the information we are aiming to keep private (whether or not a user presented themselves to the system?)? Presenting this aspect of the setting together with the MAB protocol would make the paper a lot more accessible, in my opinion, as it crystallises the problem in one place and allows the reader to more easily recognise the use of concepts introduced later in the paper. - "Practically speaking, our setting naturally covers many real-world scenarios." - Can you provide an example scenario where this setting applies: heavy-tailed rewards and a setting where it is "easy" (possible) for an adversary to infer the information we aim to keep private (as described in the answer to the point above) when no privacy preserving mechanisms are employed? Such an example would make the significance of the problem a lot more obvious to me. --- Reply to Comment 1.1.1: Title: Further rebuttal to Reviewer Ssa1 Comment: We thank the reviewer for the follow-up. We are glad to hear that you find our clarifications on the interplay between heavy-tailed rewards and privacy very informative. We would like to provide further clarifications on the setting by answering your specific questions. **Privacy notion in the paper** We adopt central DP for MABs as our privacy notion, which has been widely used in previous works on private MABs [19,20,24]. We would like to give more details about this notion in the following two steps. 1. We first give the standard interpretation of this privacy notion. In particular, the adversary is an external party that has the information on all the $T$ actions during the MAB learning process. The information we aim to protect is the reward generated by each of the $T$ users. Central DP (cf. Def. 3.4) protects a user's reward in the following sense: the external third party (adversary) cannot determine the reward of any user $t \in [T]$ with high confidence by observing all $T$ generated actions.
This is because, by definition, while changing the reward of any user $t$, the output action sequences are indistinguishable in probability. 2. In fact, in addition to the above standard interpretation, central DP also offers the following stronger protection: the adversary can consist of all the other $T-1$ malicious users, and even if they collude adversarially to induce the learning agent to reveal information about the reward of the remaining user, they cannot infer too much about that reward. Further, if one considers replacing the reward at any $t$ with a special symbol to represent the removal of the corresponding user $t$ from the input sequence, then central DP also protects the information of whether a user has participated in the learning process or not. **A Concrete Example** We will use **dynamic pricing** as a concrete example scenario where the reward can be heavy-tailed and there exists privacy leakage of the reward if no privacy protection is adopted. *Scenario:* Online Retailer Selling Sensitive Products. The MAB learning agent sequentially chooses an action (price for the product) based on previous reward feedback (demand) so as to maximize the total expected revenue. *Heavy-tailed Demand:* The demand for the product may exhibit a heavy-tailed pattern due to factors such as: - Seasonal outbreaks leading to sudden spikes in demand. - Public awareness campaigns or celebrity endorsements causing immediate interest. - Regulatory changes making the product more accessible to a broader population. *Privacy leakage:* Suppose the product is a specific medication used to treat a highly sensitive or stigmatized health condition. Thus, the particular demand of a user (the reward in the MAB formulation) is highly sensitive.
As discussed in [11, 12], an adversary might place orders immediately before and after a person of interest (i.e., target user) and if he sees a slight spike in his received prices, he might be able to infer the purchase decision (demand/reward) of the target user. Please let us know if our clarifications help to resolve your concern and we are happy to engage more if there are any additional questions.
Summary: This work studies the specific setting of heavy-tailed bandits with Huber contamination and differential privacy constraints. They first give a regret lower bound, tightly characterizing the minimax rate in terms of all the parameters involved in this setting. Then, they provide matching (up to log terms) upper bounds. The crucial technical novelty is to use new concentration bounds they developed for private and robust estimation via truncation methods in bandits. Strengths: They derive the first minimax rates in this robust+private bandit setting and give matching upper bounds. The paper is also generally well-written and has many useful remarks comparing the results and techniques with previous works. The algorithmic ideas and intuition behind the new estimation scheme were easy to follow. Weaknesses: * I think some discussion on problem-dependent rates in this setting would be interesting. The paper could at least comment on why their analyses don't extend easily to obtain problem-dependent rates or what the minimax problem-dependent rates might look like in terms of $\epsilon, k,\alpha$. * There are many recent works on "bandits with total corruption budget" (e.g., the cited paper Lykouris et al., 2018). It would be good to include some comparison with this setting, even if just in terms of experiments. For instance, the $T\cdot \alpha^{1-1/k}$ term in this paper's minimax regret rate seems to be similar to the additive corruption term sometimes seen in this other setting (Theorem 1; Gupta et al., COLT 2019). Can the analyses in this paper extend to this setting? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: No foreseeable negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation of our paper! We would like to provide the following clarifications to the reviewer's questions. **Minimax vs. problem-dependent bound** Thanks for your sharp comments. Let us approach your question in the following steps. - Problem-dependent upper bound: Our current analysis can also yield problem-dependent upper bounds. In particular, by lines 586-587, if $\alpha \leq c \Delta_{\min}$ (where $\Delta_{\min}$ is the minimal gap and $c$ is some constant), then one can determine the number of pulls for each sub-optimal arm, hence a standard problem-dependent bound, which mimics the one established for the case of Gaussian inlier rewards [1]. - Problem-dependent lower bound: it is unclear for private and robust bandits, but some work has been done for corrupted heavy-tailed bandits; see Theorem 1 in [2]. One interesting future direction is to study how to leverage both the insights in [2] and the lower bound under privacy to derive a problem-dependent lower bound for private and robust bandits. [1] Sayash Kapoor, Kumar Kshitij Patel, and Purushottam Kar. Corruption-tolerant bandit learning. Machine Learning 108.4 (2019): 687-715. [2] Debabrota Basu, Odalric-Ambrym Maillard, and Timothée Mathieu. Bandits corrupted by nature: Lower bounds on regret and robust optimistic algorithm. arXiv preprint arXiv:2203.03186, 2022. **Comparison with "bandits with total corruption budget"** - First, we clarify that "bandits with total corruption budget" and the "Huber contamination model" are different in nature, as they model different contamination scenarios. This is also reflected in Section 3.4 of [1]. - At a high level, Algorithm 1 in [2] is similar to our meta-algorithm in the spirit of arm elimination, forgetting, and batching, since both algorithms proceed in epochs that increase exponentially in length and only use the most recent epoch to calculate statistics.
The difference is that Algorithm 1 in [2] chooses an arm probabilistically at step 8 and never completely eliminates any arm, whereas our algorithm removes arms based on the confidence radius. - One possible approach to further providing a privacy guarantee for Algorithm 1 in [2] is as follows: As in our paper, one can add noise to the average of rewards at step 9. Then there should be an additional noise-related term in concentration inequality (1) in [2]. This approach is reasonable because Algorithm 1 in [2] also enjoys doubling and forgetting. We believe that one can use this approach to derive a regret guarantee for a private version of Algorithm 1 in [2]. [1] Niss, Laura, and Ambuj Tewari. "What You See May Not Be What You Get: UCB Bandit Algorithms Robust to $\varepsilon$-Contamination." In Conference on Uncertainty in Artificial Intelligence, pp. 450-459. PMLR, 2020. [2] Gupta, Anupam, Tomer Koren, and Kunal Talwar. "Better algorithms for stochastic bandits with adversarial corruptions." In Conference on Learning Theory, pp. 1562-1578. PMLR, 2019. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for the detailed and clarifying response. My concerns were mostly writing/discussion suggestions, and I'm still in support of accepting the paper.
Summary: The paper studies private and robust multi-armed bandits, where the rewards are heavy-tailed or contaminated. It proposes a meta-algorithm based on a private and robust mean estimation sub-routine incorporating reward truncation and the Laplace mechanism. Strengths: The paper establishes the minimax regret lower bound for private and robust MABs. It proposes a meta-algorithm matching the lower bound. It takes into account both reward contamination and heavy-tailed cases. The analyses are relatively complete. Weaknesses: The current setting seems to require the horizon, T, to be known in advance. If T is unknown or infinite, how does this change the differential privacy definition and the technical difficulty? The paper needs to include additional literature review about heavy-tailed cases, especially in the line of Catoni's estimator (with applications in bandits). For example, 1. Olivier Catoni. Challenging the empirical mean and empirical variance: a deviation study 2. Gabor Lugosi and Shahar Mendelson. Mean estimation and regression under heavy-tailed distributions: A survey. 3. Sujay Bhatt, Guanhua Fang, Ping Li, Gennady Samorodnitsky. Nearly Optimal Catoni's M-estimator for Infinite Variance Some detailed explanation of the differences between the proposed method and DPRSE should be given. In the supplementary material, it seems that Lemma B.2 is never used. According to the current algorithm, parallel composition should be sufficient? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Overall, I think the paper is well organized and easy to follow. See questions in the weakness part. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: No.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time and comments. We are glad you find our analyses relatively complete. We will recap your comments and present our detailed response. We hope our answers will resolve your concerns. **Unknown and infinite horizon T** - We first clarify that when $T$ is unknown or infinite, the differential privacy definition is the same as for differentially private online learning under the event-level privacy framework of [1]. That the definition coincides in both cases has also been discussed in Section 2.2 of [2]. - When $T$ is unknown or infinite, once armed with our novel PRM modules, one can also use other exploration strategies such as UCB in [3] for an anytime regret guarantee. One difference is that now, instead of first pulling each arm once, the algorithm needs to pull each arm $\mathcal{T}$ times to ensure that concentration kicks in later. This is not surprising since, at a high level, the analyses of SE and UCB are very similar, i.e., doubling and forgetting. - We also note that instead of UCB, one can also adapt the Thompson sampling strategy, e.g., [4], with our PRM module. Again, the key idea is doubling and forgetting. We will include the above discussion in the next version to highlight the flexibility of our PRM modules. [1] Dwork, C., Naor, M., Pitassi, T., and Rothblum, G. N. Differential privacy under continual observation. In Proceedings of the forty-second ACM symposium on Theory of computing, pp. 715–724, 2010. [2] Hu, Bingshan, Zhiming Huang, and Nishant A. Mehta. "Optimal algorithms for private online learning in a stochastic environment." arXiv preprint arXiv:2102.07929 (2021). [3] Azize, Achraf, and Debabrota Basu. "When privacy meets partial information: A refined analysis of differentially private bandits." *Advances in Neural Information Processing Systems* 35 (2022): 32199-32210. [4] Hu, Bingshan, and Nidhi Hegde.
"Near-optimal Thompson sampling-based algorithms for differentially private stochastic bandits." In Uncertainty in Artificial Intelligence, pp. 844-852. PMLR, 2022. **Additional literature review about heavy-tailed cases** Thanks for pointing out these important and nice works. We will include them in the next version. **Difference between our proposed algorithms and DPRSE** - First, DPRSE in [5] is only designed to handle privacy and heavy-tailed rewards, i.e., it offers no robustness with respect to contamination. In contrast, our proposed algorithm can handle privacy, heavy-tailed rewards, and Huber contamination altogether. - Second, even if one only considers privacy and heavy-tailed rewards, our algorithm can handle the important case where the central moment is bounded, while DPRSE cannot. This is because DPRSE is designed only for the finite raw moment case. [5] Youming Tao, Yulian Wu, Peng Zhao, and Di Wang. Optimal rates of (locally) differentially private heavy-tailed multi-armed bandits. arXiv preprint arXiv:2106.02575, 2021. **Parallel composition** Yes, we didn't use Lemma B.2 and parallel composition is sufficient. Thanks for pointing this out and we will remove it in the next version.
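As a hedged illustration of the "arm elimination, doubling, and forgetting" pattern discussed throughout this thread (the interface and constants below are our own toy choices; the paper's meta-algorithm would plug its private and robust mean estimator in place of the plain mean):

```python
import numpy as np

def successive_elimination(pull, num_arms, num_epochs, conf_radius):
    """Toy epoch-based elimination: epoch lengths double, and statistics
    are recomputed from the current epoch only ("forgetting").
    `pull(arm, n)` returns n reward samples for `arm`;
    `conf_radius(n)` is the confidence radius after n pulls."""
    active = list(range(num_arms))
    for epoch in range(num_epochs):
        n = 2 ** epoch  # doubling epoch lengths
        means = {arm: float(np.mean(pull(arm, n))) for arm in active}
        best = max(means.values())
        radius = conf_radius(n)
        # Keep only arms whose mean is within twice the radius of the best.
        active = [arm for arm in active if means[arm] >= best - 2 * radius]
        if len(active) == 1:
            break
    return active
```

With a private estimator in place of `np.mean`, the confidence radius would pick up extra terms for truncation bias and privacy noise, which is roughly the trade-off the rebuttal alludes to.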
NeurIPS_2023_submissions_huggingface
2023
Towards Skilled Population Curriculum for Multi-Agent Reinforcement Learning
Reject
Summary: This paper presents a new approach to automatic curriculum learning designed specifically for multi-agent coordination problems. Strengths: - The main strength of the paper, in my opinion, is the well-formulated approach to the curriculum learning problem. To the best of my knowledge of the related literature, the non-stationary contextual bandit as the teacher and the population-invariant skills for the students are both original and useful contributions to the literature. Weaknesses: - My general problem with this paper is that I am finding it hard to evaluate the significance of the work in the automatic curriculum learning sphere without an adequate baseline provided for GRF. Whilst the authors do argue that VACL is not used on GRF due to requiring prior knowledge, it seems unreasonable to therefore provide no baselines that are actually designed for these larger settings. For example, if VACL was unusable, then I would have maybe liked to have seen a comparison to population-based approaches in MARL or any of the other automatic curriculum learning approaches mentioned in Sec. 4. Overall, it is hard to properly evaluate the gains from this automatic curriculum learning framework without seeing the performance of baselines in an environment that actually requires automatic curriculum learning (MPE does not need it according to line 297-298). I am happy to update my score if the authors can make a reasonable argument against the lack of other baselines in the work. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Line 37-39, 'However,..., tasks matters' - I am very confused by this sentence, could the authors please clarify? - Upon inspection of the results, the final improved performance does not seem to be massively impacted by the hierarchical RL element of the framework. 
I was wondering if the authors could discuss a little more on this, in terms of its necessity in the framework and when they believe it would provide greater gains in performance? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors briefly make mention to the limitations of the work. I agree with the over-design of the framework for simple tasks, so would definitely like to see its performance in more difficult environments that it is designed for. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer iWBE, Thanks for your review of our paper. Here's our response to the weaknesses, questions, and limitations you have highlighted: ----- **Q:** Make a reasonable argument against the lack of other baselines in the work **A:** Thanks for pointing this out. We compared different teacher/auto-curricula algorithms on GRF Corner-5 in Appendix D. The training task space consists of n agents, where n ∈ {1, 3, 5}. All teachers have the same base architecture without the transformer architecture and HRL. ----- **Q:** Clarification on 'However, these approaches (DyMA-CL, EPC, and VACL) rely on a framework of an off-policy student with a replay buffer that is hard to decide the size of the replay buffer since the proportion of different tasks matters.' **A:** It is hard to decide the size of the replay buffer because the proportion of different tasks matters. To be more specific, in the football environment, when we treat the score as the reward, the same state-action pairs of the team agents in different tasks might lead to different returns. For example, three learned agents could score more in a 3v1 match, while the same three agents could score less in a 4v11 match with an unlearned random teammate. When decomposing at the same state-action pairs, agents will get different credit assignments. We choose IPPO, a simple but efficient on-policy RL algorithm, as SPC's backbone. In this case, each agent gets its own reward, so SPC does not use a replay buffer and does not require the same assumption. ----- **Q:** Upon inspection of the results, the final improved performance does not seem to be massively impacted by the hierarchical RL element of the framework. I was wondering if the authors could discuss a little more on this, in terms of its necessity in the framework and when they believe it would provide greater gains in performance?
**A:** The hierarchical RL is used to extract useful skills for the student to transfer between tasks. Currently, we test on GRF 5v5, and the learned skills are relatively simple, e.g., in Appendix D, shooting, passing, and running. We expect HRL to contribute more when the number of agents increases and more high-level strategies/tactics are emergent. ----- We appreciate your positive comments on the motivation and experimental studies of our work. If you have any further questions or comments, we will be happy to discuss or address them further. We are looking forward to your feedback. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for addressing my issue with the baselines by pointing out some results to me. I have updated my score to reflect this.
Summary: This paper presents Skilled Population Curriculum (SPC) which is a method for learning a curriculum to help a team of agents complete a complex task. SPC models the problem of choosing tasks for agents as a contextual bandit problem, and builds on top of the Exp3 algorithm to solve this bandit problem. SPC also uses an attention-based communication approach, and a hierarchical policy framework. Experiments are performed in the Multi-agent Particle Environment (MPE) and Google Research Football environment (GRF). While MPE does not seem to benefit much from SPC, in the more complex GRF domain the authors show using their SPC approach can accelerate training relative to MARL baselines. Strengths: - The paper is clear: it is well-structured, and provides a good balance of intuition and detail. It is clearly motivated, and addresses an interesting problem. - The presented results show clear benefits to the authors' approach. - The authors use suitable baselines approaches, and suitable environments. Weaknesses: - SPC seems to add significant computational complexity vs. baselines like IPPO. While the authors can justify focusing on sample complexity, for completeness they should also record information about the wall-clock time / computational resources needed to train their different baselines. - In their research question stated on lines 52–53, the authors highlight their desire to consider complex sparse-reward settings. However, the MPE domains are a sparse setting, though fairly simple, and the results here show little benefit to using SPC. On the other hand, the GRF experiments appear to have a somewhat dense reward (the GRF checkpoint reward, which while not necessarily active every timestep, it could be argued is 'somewhat dense'). 
It seems like, absent the checkpoint reward, SPC would struggle because there would be little information in the returns for the teacher agent to use — and I expect this would be the case in most complex (very) sparse reward environments. - Line 277: the authors state MADDPG/MAPPO would not be suitable in these experiments. This might be true in general, but for GRF specifically the critic input size actually would be the same across all tasks (as it pads observations if agents are absent). But since GRF is fully observable, MAPPO is equivalent to IPPO so this is not an issue for this work — though the authors may wish to revise their statement. - Though the GRF environment is complex and difficult to solve, its level of complexity is somewhat deceptive, as evidenced by the video on the project website. The rollouts show that the agents have learned a simple "force an offside and run straight at the goal line" strategy which exploits deficiencies in the GRF bots. This behaviour has been observed before by Song et al. (http://arxiv.org/abs/2305.09458). However, this is not a fault of the authors, and is more broadly an issue in the MARL research community. Because of this, it's not clear what skills the agents learn in the training tasks that are useful in the target task. It would be interesting to see a plot of training task performance throughout training. - It's unclear why the IPPO baselines have a sharp step change in performance around 80 and 90 million timesteps. The authors should investigate this, and perhaps make a comment (at least in the appendix) about why it occurs. In my experience, things like this sometimes occur when training runs stop unexpectedly before the full 100M timesteps, and so the remaining timesteps are aggregated over fewer seeds with lower performance. I would encourage the authors to produce plots reporting the interquartile mean of their results, and produce a plot showing the disaggregated training curves for each seed.
These can go in the appendix. - The authors state (line 301): "In Fig. 5b, we omit the curve of QMix as its mean score is low and affects the presentation of the figure". I don't expect QMix to perform worse than the presumably near-uniform policies at the start of training for the other agents. So it's not clear how including QMix would disrupt the graph. Is it the case that QMix has a worse average goal difference than -2? - Can the authors clarify: the target distribution for GRF is "100% 5vs5"? What is the target distribution for the MPE tasks? (I see now that these are mentioned later in the text: they should be mentioned when introducing the environments.) - It doesn't seem like there's a pattern to the task distribution (Fig. 6a) beyond "academy_pass_and_shoot_with_keeper becomes less common". It would be good to see the same plot for other trials. This is possibly explainable by academy_pass_and_shoot_with_keeper requiring coordinated passing and shooting, whereas the 5vs5 rollouts (see video on project website) show a very simple GRF-bot-exploiting strategy which does not closely resemble the behaviour required in academy_pass_and_shoot_with_keeper. - Where the authors claim "For example, the proportions of 3vs1 and Empty-Goal tasks gradually drop as the student becomes proficient in these scenarios", it is difficult to support this by looking at Fig. 6a. - In my opinion this approach is over-engineered, but the authors do acknowledge this. Stripping some components (e.g. the hierarchical RL) and focusing on deeply investigating the remaining components would improve this work. - Minor writing fixes: - line 42/43 "more scores" → "more goals" - line 43: "4v11" → "4v1" (I assume) - line 305: "tons of" → "many" (more formal tone) Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Why did you decide to add the shooting reward for 5vs5? Does it make a big difference? - What causes the step-change drop in IPPO performance?
- How does including QMix in 5b disrupt the graph? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - The limitations section is quite limited, and limitations and assumptions could be more clearly stated throughout. - The authors recognise that their approach is complex and computationally intensive, so might not be applicable in simple environments. Testing in a complex environment like Google Research Football is a good choice, although due to issues with Google Research Football (such as the exploitability of the built-in AI) it is perhaps not as complex as the authors may hope, even though it has presented a challenge to past MARL research. However, this is a broader issue within the MARL community and the authors of this paper cannot fairly be singled out for this. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer k74L, Thanks for your review of our paper. Here's our response to the weaknesses, questions, and limitations you have highlighted: ----- **Q1:** computational resources **A:** IPPO is directly trained in GRF 5v5, while SPC or other curriculum learning baselines are trained in training tasks. The computational resources are the same for these baselines since the computational overhead mainly depends on the inference/backpropagation of the neural network, which is aligned in this implementation. We implemented SPC and VACL based on RLLib in https://anonymous.4open.science/r/MARL_SPC/. ----- **Q2:** Experimental setting **A:** First, the experiments on sparse-reward MPE indicate the MPE environment is relatively simple so curriculum learning is not necessary. Then the GRF checkpoint reward is activated only once for each agent in one single game of training tasks. This reward can only be received once when an agent reaches one checkpoint. The full court is divided into 10 equal intervals, and the endpoints of these intervals are checkpoints. ----- **Q3:** MADDPG/MAPPO **A:** Thanks for pointing this out. We will revise our statement in terms of the input of the critic with padding observation. ----- **Q4:** It's not clear what skills the agents learn in the training tasks that are useful in the target task. **A:** Thanks for pointing this out. We will add the training task performance throughout the training of SPC in the revised version. ----- **Q5:** IPPO performance drops **A:** Thanks for pointing this out. The performance drops around 80 and 90 million steps might be caused by the collapse of training. We will include more experiments with random seeds and smoothen the graph to fix it. ----- **Q6:** QMix **A:** We use the RLLib version’s QMix, which will collapse w.r.t. the goal performance to a worse average goal difference than -5. ----- **Q7:** What is the target distribution for the MPE tasks? 
**A:** The target MPE task is cooperative navigation with 16 agents. ----- **Q8:** The proportions of 3vs1 and Empty-Goal tasks gradually drop. **A:** The area of 3vs1 and Empty-Goal becomes smaller around 150M steps. ----- **Q9:** Over-engineered. **A:** Thanks for your valuable comments on the current presentation. We compared different teacher/auto-curricula algorithms on GRF Corner-5 in Appendix D. The training task space consists of n agents, where n ∈ {1, 3, 5}. All teachers have the same base architecture without transformer architecture and HRL. Since we aim to solve the challenging cooperative MARL problems, we tested a few methods and found that only one method cannot be the silver bullet. So, we propose SPC, an auto-curricula MARL framework. As we mentioned in Sec. 3, there are three challenges in designing an auto-curricula MARL algorithm with varying numbers of agents: (1) a non-stationarity curriculum selection problem due to the ever-changing student's strategies; (2) a lack of a general student framework to deal with the varying number of agents; (3) the forgetting and relearning problem. The three issues are related to the curriculum setting and are not easy to be decoupled in this paper setting. Towards these challenges, we first describe our curriculum learning algorithm in Sec 3.2 with detailed design and regret analysis. And then describe the hierarchical structure and communication structure in fewer words in Sec 3.3. Note that the student policy architecture introduced in this paper is fixed and not changed when the training tasks (with different numbers of agents) are changing. We admit that the hierarchical/communication structure is less novel. However, proposing new HRL/communication algorithms is not the core idea of the paper. We just show one possible solution to deal with the mentioned challenges. The hierarchical/communication compositions could have different algorithms, which can be used in SPC. 
----- **Q10:** The shooting reward **A:** If without the shooting reward, only checkpoints reward and goal reward will lead to a running behavior.
Summary: The paper introduces a new automatic curriculum learning framework, Skilled Population Curriculum (SPC), for multi-agent reinforcement learning. The algorithm includes three major components: (1) a contextual bandit conditioned by student-policies representation for automatic curriculum learning; (2) An attention-based communication architecture for policies to learn cooperation and behavior skills from distinct tasks with varying numbers of agents; (3) A hierarchical policy architecture to help agents to learn transferable skills between different tasks. The experiments are conducted in Google Research Football environment and Multi-agent Particle environments, which demonstrate the efficiency of the proposed method to IPPO and VACL. Strengths: 1. The proposed method is simple yet efficient in the complex Google Research Football environment. 2. The motivation of the components are also clear and make sence. 3. In exepriment, several ablation studies demonstrate the effectiveness of the proposed components; 4. Also, the paper is overall easy to follow to me. The key idea is easy to understand. Weaknesses: This paper could benefit from further improvements in the following aspects: 1. It seems that the manuscript introduces various components. While each one appears to be intuitive and rational in isolation, I recommend that the authors should provide a unifying theme or framework to better connect these components. Presently, it appears as if these components are addressing three discrete issues: a) efficient curriculum learning, b) policy architecture development, and c) communication in varying agent scenarios. It is noteworthy that a paper does not necessarily need to devote substantial attention to the innovative aspects of each introduced components. In the case of this paper, the hierarchical structure, for instance, appears to be a standard approach with limited novelty. 
The authors can highlight how they design efficient automatic curriculum learning in the context of variable agent scenarios. 2. In the section discussing related work (Line 221), the authors mention various curriculum learning mechanisms without a detailed discussion. Could the authors provide an expanded explanation on how these works conduct curriculum learning and how they relate to or differ from the proposed methodology? 3. There is room for improvement in the experiments section. Specific recommendations are detailed in the questions section. 4. The paper could be further polished, for instance: - There are several instances where a capital letter follows a comma, such as in line 40: "For example, In the football environment, when we…" - The legend of Figure 6(b) lacks clarity. It would be beneficial if the authors could provide a detailed explanation of what the labels 0,1,2,3 represent. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Could the authors clarify why maximizing rewards should be the objective of a teacher in curriculum learning? Intuitively, it seems a teacher policy that aims to maximize performance given student policies would tend to recommend simpler tasks to learn, which is not what we would like to see. 2. Ignoring the previous question, could the authors explain why the Bandit model necessitates using the representation of students' policies as input of the teacher policy? It seems that providing an optimal course distribution based on the current student's behavior would be sufficient. Why is there a need to consider course distributions under the representations of other (mainly comes from the historical) student policies? Regarding the experimental section: 1. At Line 293, the authors mention that the SPC can switch to the largest population rapidly. Could the authors provide further explanation as to why this is possible and why it represents an advantage? 2. 
The second and third paragraphs of Section 5.3 are unclear; the authors could rephrase them for clarity. You could try presenting your point as follows: "From figure X (or the comparison of X and Y), we can observe XX, which indicates XX." 3. In Appendix C, a more challenging 11 vs. 11 experiment was introduced, with the authors claiming superior performance of the SPC, which is great to see. But this claim raises questions as there are no baselines for comparison. Could the authors consider adding some baselines to this task? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: NAN Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer oZVt, Thanks for your review of our paper. Here's our response to the weaknesses, questions, and limitations you have highlighted: ----- **W1:** It seems that the manuscript introduces various components. **A:** Thanks for your valuable comments on the current presentation. Please see joint reponse. ----- **W2:** Could the authors provide an expanded explanation on how these works conduct curriculum learning and how they relate to or differ from the proposed methodology? **A:** ADR [31, 32] is proposed by OpenAI to solve Rubik's cube by realizing a training curriculum that gradually increases the difficulty. The additional training environments are only added when a minimum level of performance is achieved. Different from ADR, SPC automatically selects new tasks instead of using a threshold. ALP-GMM [33] fits a Gaussian Mixture Model (GMM) as the teacher on a dataset of Absolute Learning Progress (ALP) measure, where ALP = | r_new - r_old|, r_new, r_old are mean episode rewards under new or old training task distribution. ALP-GMM aims to maximize average competence over a given parameter space. Different from ALP-GMM, SPC uses mean episode rewards under testing task distribution as Absolute Learning Progress. SPCL [34] is about machine learning instead of reinforcement learning, which embeds curriculum design as a regularization term into the learning objective. SPDL [38] utilized SPCL by introducing a KL divergence regularization term between training probability distributions and testing probability distributions. CURROT [39] utilized SPCL by introducing Wasserstein distances regularization terms between training probability distributions and testing probability distributions. Different from these methods, SPC model curriculum learning as a two-level optimization instead of introducing a regularization term. GoalGAN [35] uses GAN to generate a new curriculum. 
Different from GoalGAN, SPC uses a multi-arm bandit algorithm instead of a GAN for stable training. PLR [36, 37] selectively samples the next training level given the current policy, by prioritizing levels with higher estimated learning potential when replayed. Different from PLR, SPC uses on-policy RL algorithms as the backbone instead of off-policy RL algorithms with replay buffer which might lead to the forgetting and relearning problem. Graph-curriculum [40] introduces a heuristic to guide curriculum learning algorithm. The heuristic assumes the larger the size of the state space is, the harder the training task is. Different from Graph-curriculum, SPC automatically selects new tasks instead of using manual configuration. Different from all these methods, SPC aims to solve the challenging cooperative MARL problems by designing an auto-curricula MARL algorithm with varying numbers of agents. While these methods focus on single-agent RL problems. We will include this discussion in the Appendix. ----- **W3:** The paper could be further polished. **A:** Thanks for your valuable comments on the current presentation. The labels 0,1,2,3 represent a default kill and skill 1,2,3 in Appendix D. We will fix these typos and presentations. ----- **Q1:** Could the authors clarify why maximizing rewards? **A:** Intuitively, the objective of a teacher is to make the student better. It is measured by the mean episode reward of the student in the testing environment in SPC. SPC selects different training environments for students to learn skills based on their performance in testing environments. Even though the students get higher rewards in simpler training tasks, it doesn’t mean that the skills learned in training tasks can lead to a higher score in testing. For example, players learn to run well in the most simpler task but cannot shoot in 5v5 competition. The measurement in SPC is for handling the forgetting and relearning problem. 
**Q2:** Could the authors explain why the Bandit model necessitates using the representation of students' policies as input of the teacher policy? **A:** Since the curriculum learning from the perspective of the teacher is non-stationary. That is, given the same arm selection, different students learning status leads to different rewards for the teacher. For example, a rookie and an expert will perform differently under the same training distribution. It necessitates the multi-armed bandit algorithms to consider the student learning status as context. So, SPC uses RNN to capture the historical student's behavior to approximate the students learning status. ----- **Q3:** Could the authors provide further explanation as to why this is possible and why it represents an advantage? **A:** Since IPPO is trained and evaluated directly on the target task, it can achieve a not-bad performance. It indicates that the MPE environment is relatively simple so curriculum learning is not much necessary. So it is an advantage of SPC that it can switch to the largest population rapidly to train on the most helpful tasks instead of staying simple tasks. ----- **Q4:** The second and third paragraphs of Section 5.3 are unclear; the authors could rephrase them for clarity. **A:** Thanks for your comments on the presentation of Sec 5.3. We will fix these presentations. ----- **Q5:** In Appendix C, a more challenging 11 vs. 11 experiment was introduced, with the authors claiming superior performance of the SPC, which is great to see. But this claim raises questions as there are no baselines for comparison. Could the authors consider adding some baselines to this task? **A:** GRF 11vs11 is super hard for current MARL algorithms and to our best knowledge no algorithm is reported to handle the GRF 11vs11. We tested other algorithms but it requires much computation resources which is out of our capacity. 
However, we opensource our code https://anonymous.4open.science/r/MARL_SPC/ for the community for further research. --- Rebuttal Comment 1.1: Comment: Thanks for the response. Parts of my concerns have been solved. I have the following-up questions. For previous Q1: The author says the student's fitness values are estimated in a test environment, but why can this be the preference of the teacher policy to improve learning efficiency? Also, as mentioned by the authors, players learn to run well in the most simpler task but cannot shoot in 5v5 competition, so how can your student policies complete the tasks in your test environment? For previous Q2: I known that given the same arm selection, different students learning status leads to different rewards for the teacher, but why do we need to consider the students learning status? I think providing an optimal course distribution based on the **newest** student's behavior would be sufficient to guide the student learning. Do I miss something? For previous Q3: so why can SPC switch to the largest population rapidly? --- Reply to Comment 1.1.1: Comment: We appreciate your comments and effort. Below is our answers to your new questions: ----- **Q1:** Student's fitness values as the preference of the teacher policy **A:** This concept is from curriculum learning. As shown in our related work, most curriculum learning methods use "Learning Progress" as a metric to determine whether a curriculum selection is good. Different methods have different formulations of "Learning Progress". For example, ALP-GMM [33] uses ``ALP = |r_new - r_old|``, where r_new, and r_old are mean episode rewards under new or old training task distribution. VACL uses the difference of value function ``V_new(s)-V_old(s)``, where $s$ is sampled from the state distribution in testing task $p_{test}(s)$. In SPC, we use the testing return as "Learning Progress", that is, $return_{5v5}(\pi_{new student})$. 
Players learn to run well in the most simpler task but cannot shoot in 5v5 competition, so the reward in early learning is low. Bandit teacher would try other arms/training tasks, such as a 3v1 shooting task. In this case, once the players learn to shoot, they can get higher rewards in 5v5. So the testing reward can be the measure of how good the curriculum selection is. ----- **Q2:** Why do we need to consider the students learning status? **A:** The student's learning status is exactly indicating the newest student's behavior. The reason why we consider the students learning status is the forgetting and relearning problem. In this case, the students are not always improving themselves. We should always provide a course for students. Our bandit teacher would exploit and explore the course distribution. (Other curriculum learning methods are also based on such perspective of course distribution.) In the teacher's exploration, the students might learn easier or unrelated tasks, leading to forgetting learned skills. The player might go back to the previous learning status. For example, if a coach already taught a player how to make a three-point shot, then this coach teaches the player how to make a slam dunk. Then the player might forget how to make a three-point shot well. So the teacher should take the current student's learning status into consideration. ----- **Q3:** Why can SPC switch to the largest population rapidly? **A:** SPC can switch to the largest population rapidly in the MPE environment. During the exploitation and exploration of the curriculum selection, the teacher notices that providing tasks with a larger population can lead to a larger reward, SPC can switch to the largest population rapidly. ----- We thank you for your endorsement of the motivation and the experimental studies of our work We appreciate your positive comments on the quality and experimental studies of our work. 
If you have any further questions or comments, we will be happy to discuss or fix them further. We are looking forward to your feedback.
Summary: This work introduces the Skilled Population Curriculum (SPC), an automated curriculum learning algorithm designed for Curriculum-enhanced Dec-POMDP. The goal of SPC is to enhance the student's performance on target tasks via a sequence of training tasks provided by the teacher. The SPC functions as a nested-HRL method, where the teacher serves as the upper-level policy and is modeled as a contextual multi-armed bandit. At each teacher timestep, the teacher selects a training task from the distribution of bandit actions, with the context derived from the student policy's hidden state. The teacher's bandit is optimized using the student policy's test reward. The lower-level policy, also known as the "student", is in itself a hierarchical policy. The high-level policy implements population-invariant communication using a self-attention communication channel to manage messages from a number of agents, and all students share the same low-level policy. Strengths: - This paper is well-presented. Figure 1 is well-designed. I can get a good understanding of this paper's method just by reading this figure. - The algorithm is implemented with Ray RLlib, though the code is not currently available. Weaknesses: - **This study seems to be an overcomplicated amalgamation of pre-existing methods.** SPC stacks three layers of hierarchical policies (teacher 1 + student 2), the teacher is modeled as a multi-arm bandit with a fixed output dimension (number of tasks), and the lower-level control policies of the students are shared. The intricacy of this pipeline leads me to question its generalizability and practical applicability. - **More rigorous comparison with current MARL algorithms, and need benchmark results on SMAC**, which is de facto the most standard benchmark for MARL algorithms. Please consider adding [MAPPO](https://github.com/marlbenchmark/on-policy), [HARL](https://github.com/PKU-MARL/HARL), and their multi-agent communication variant as your baselines. 
- Line 236-238, “However, current approaches that extend HRL to multi-agent systems or utilize communication are limited to a fixed number of agents and lack the ability to transfer to different agent counts”, this is an inaccurate claim because it has been done in the ICLR 2022 publication, [*ToM2C*](https://arxiv.org/pdf/2111.09189.pdf), which similarly uses the HRL with a population-invariant multi-agent communication mechanism. AFAIK this cannot be treated as "communication limited to a fixed number of agents". Please consider citing this work and changing your statement regarding the previous work. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - **I need more convincing results for proving "The Necessity of Curriculum Learning".** Why is an in-depth analysis of the teacher-student framework necessary? Despite its significantly increased implementation complexity compared to the original MAPPO algorithm, their performances appear roughly equivalent. Moreover, the MAPPO algorithm, provided sufficient exploration and large batch size, has already demonstrated state-of-the-art performance on both SMAC and GFR benchmarks. I presume an advantage of combining the ACL and the teacher-student framework lies in handling more challenging scenarios through incremental learning. To underscore the superiority of SPC over traditional multi-agent PPO methods, could you present performance data from the SMAC Super-Hard Map difficulty? This could include instances like 3s5z_vs_3s6z, where MAPPO previously underperformed significantly. - I am interested in understanding the implementation of multi-agent communication in Ray RLlib. It appears that the agents are exchanging messages before outputting their current actions. It's somewhat challenging for me to envision how this process is technically executed within the RLlib framework. I wish the code is available. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitations of this paper are only briefly mentioned in the last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer p3Bh, Thanks for your review of our paper. Here's our response to the weaknesses, questions, and limitations you have highlighted: ----- **W1:** Complexity and Generalizability. **A:** While we acknowledge that our study introduces an intricate pipeline, this approach has been adopted to address certain challenges that conventional methods fail to accommodate. Since we aim to solve the challenging cooperative MARL problems, we tested a few methods and found that only one method cannot be the silver bullet. As we mentioned in Sec. 3, there are three challenges in designing an auto-curricula MARL algorithm with varying numbers of agents: (1) a non-stationarity curriculum selection problem due to the ever-changing student's strategies; (2) a lack of a general student framework to deal with the varying number of agents; (3) the forgetting and relearning problem. The three issues are related to the curriculum setting and are not easy to be decoupled in this paper setting. Furthermore, the proposed hierarchical policies are designed to introduce structured learning, which in turn, we believe, enhances the method's generalizability. We also show SPC’s performance in 11vs11 in the appendix to showcase its practical applicability. ----- **W2:** Comparison with Other MARL Algorithms. **A:** The experiments demonstrate the effectiveness of each module of SPC. We choose IPPO for comparison because we also use IPPO as SPC’s backbone. We didn’t include other MARL algorithms since the curriculum is orthogonal to the backbone policy optimization algorithms such as MAPPO and HARL. Such MARL can be easily extended by introducing a teacher module and an RNN module to record historical behavior. ----- **W3:** Please consider citing ToM2C and changing your statement regarding the previous work. **A:** Thank you for pointing out the ToM2C paper. We'll incorporate this reference and modify our statement to give due credit to the prior work. 
----- **Q1:** On the Necessity of Curriculum Learning. Why is an in-depth analysis of the teacher-student framework necessary? **A:** Experimentally, we tested a few methods and found that only one method cannot be the silver bullet. We compared different teacher/auto-curricula algorithms on GRF Corner-5 in Appendix D. The training task space consists of n agents, where n ∈ {1, 3, 5}. All teachers have the same base architecture without transformer architecture and HRL. We can see that without curriculum learning algorithms, the performance of “None” is not competitive. The in-depth analysis of the teacher-student framework is primarily to elucidate the reasons we introduce each module in SPC. Curriculum learning is beneficial when solving complex tasks in many domains. With the analysis, we point out the drawbacks of current curriculum learning methods when applied in MARL settings. ----- **Q:** On the SMAC benchmark. **A:** SMAC environment is a battle-based environment. The agents are encouraged to attack enemies one by one. In this case, the agents are supposed to have same behavior. SPC aims to learn complex cooperation with sparse reward in MARL. SOTA algorithms already can learn the hardest scenarios in SMAC without curriculum, while cannot learn in GRF 5v5. For example, RODE has already achieved median win rate of 96.8 in the 3s5z_vs_3s6z you mentioned [1,2]. Therefore, we did not include SMAC. ----- **Q:** Implementation of multi-agent communication in Ray RLlib. **A:** We implement multi-agent communication by modifying the single-agent pipeline of RLlib. To achieve this, we update the original PPOTorchPolicy into PPOComTorchPolicy by overriding the computation of loss and GAE to separate the loss for different agents. We also customize a MultiActionDistribution to handle the actions of each agent. For more details, the code for reproduction is available at https://anonymous.4open.science/r/MARL_SPC/. 
----- [1] The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games. [2] RODE. ----- We thank you for your endorsement of the motivation and the experimental studies of our work We appreciate your positive comments on the quality and experimental studies of our work. If you have any further questions or comments, we will be happy to discuss or fix them further. We are looking forward to your feedback. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for your rebuttal. I can see the amount of effort spent, and it addresses most of my questions, hereby raising my score. Please consider uploading the updated version of your manuscript asap.
Rebuttal 1: Rebuttal: We thank you for your endorsement of the motivation and the experimental studies of our work We appreciate your positive comments on the quality and experimental studies of our work. If you have any further questions or comments, we will be happy to discuss or fix them further. We are looking forward to your feedback. Here we provide the joint question about this work. ----- ### Q: Complexity and Generalizability A: Since we aim to solve the challenging cooperative MARL problems, we tested a few methods and found that only one method cannot be the silver bullet. So, we propose SPC, an auto-curricula MARL framework. As we mentioned in Sec. 3, there are three challenges in designing an auto-curricula MARL algorithm with varying numbers of agents: (1) a non-stationarity curriculum selection problem due to the ever-changing student's strategies; (2) a lack of a general student framework to deal with the varying number of agents; (3) the forgetting and relearning problem. The three issues are related to the curriculum setting and are not easy to be decoupled in this paper setting. Towards these challenges, we first describe our curriculum learning algorithm in Sec 3.2 with detailed design and regret analysis. And then describe the hierarchical structure and communication structure in fewer words in Sec 3.3. Note that the student policy architecture introduced in this paper is fixed and not changed when the training tasks (with different numbers of agents) are changing. We admit that the hierarchical/communication structure is less novel. However, proposing new HRL/communication algorithms is not the core idea of the paper. We just show one possible solution to deal with the mentioned challenges. The hierarchical/communication compositions could have different algorithms, which can be used in SPC. 
The contributions of SPC are: (1) utilizing a teacher-student framework to train agents with varying numbers; (2) proposing a multi-armed bandit algorithm to handle the non-stationary non-differentiable joint teacher-student optimization; (3) introducing the hierarchical and communication structure for a general student framework. We will highlight our contributions, especially the design of efficient automatic curriculum learning in the revised version.
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper studies the multi-agent RL problem with sparse reward and a varying number of agents. The authors propose a novel automatic curriculum learning strategy to solve complex cooperation tasks in this setting. Their curriculum strategy involves a teacher component and a student component. The teacher component selects the sequence of training tasks for the student component using the contextual bandit algorithm with predictive representation of the student’s current policy as context. The student component is endowed with a hierarchical skill framework and population-invariant communication. They empirically investigate their proposed strategy in two environments (MPE and GRF). Strengths: The paper is overall well-written, and the related work is extensively discussed. The theoretical results in this paper seem correct; I haven’t checked the details of the proofs. The population invariant communication module is an interesting contribution to dealing with the varying number of agents across tasks. It would be interesting to compare its effectiveness (on its own) against the existing methods to deal with varying numbers of agents [23, 24]. Weaknesses: I am unsure about the broader applicability of the contextual representation of the student policy using an online clustering algorithm. How much information will be lost in this process for a high-dimensional policy (e.g., that operates on image inputs)? Presented experimental results are not sufficient to validate the effectiveness of the proposed curriculum strategy (specifically the teacher component) in complex scenarios, given that in the MPE environment, the impact/necessity of curriculum is negligible. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Why EPC [9] is not used as a baseline in the experiments? The authors mention, "To ensure a fair comparison, we modify VACL by removing the centralized critic for MPE tasks.” Please explain why. 
In Figure 5, including the results for SPC w/o “both” HRL and COM would be good. Then, we can see the effectiveness of the teacher component (or curriculum). In Figure 3 (MPE environment), including the ablation study results (similar to Figure 5) would be good. It is also important to discuss/report the proposed strategy's computational cost/overhead (run time) compared to the baselines like VACL. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The paper is of an algorithmic nature and does not have any direct potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer reFd, Thanks for your review of our paper. Here's our response to the weaknesses, questions, and limitations you have highlighted: ----- **W1:** I am unsure about the broader applicability of the contextual representation of the student policy using an online clustering algorithm. How much information will be lost in this process for a high-dimensional policy (e.g., one that operates on image inputs)? **A:** An **online** clustering algorithm is used to classify the **ever-changing** student policy (which is represented by a context vector) in an end-to-end manner. The context vector indicates the learning status of the student. The clustering operation discretizes the context vector to satisfy the regret analysis in Sec 3.2.2. The information loss depends on the number of cluster centers, which is also the number of arms in the multi-armed bandit algorithm. Based on Theorem 3.4, the more cluster centers there are, the larger the regret bound becomes. ----- **W2:** The presented experimental results are not sufficient to validate the effectiveness of the proposed curriculum strategy (specifically the teacher component) in complex scenarios, given that in the MPE environment, the impact/necessity of the curriculum is negligible. **A:** We mainly focus on GRF since it presents several challenges. (1) Large-scale problem: In GRF, the joint action space of the cooperative players is large; therefore, it is difficult to build a single agent that controls all players. Moreover, the opponents are not fixed due to the stochastic environment and the difficult configuration, so the agents must adapt to various opponents. (2) Sparse rewards: The goal of the football game is to maximize the score, which can only be increased after long sequences of interactions. Other environments like the SMAC benchmark are not studied here since SMAC is a battle-based environment: the agents are encouraged to attack enemies one by one. 
In this case, the agents are supposed to have the same behavior. SPC aims to learn complex cooperation with sparse rewards in MARL. State-of-the-art algorithms can already learn the hardest scenarios in SMAC without a curriculum, while they cannot learn GRF 5v5. So we did not include SMAC. ----- **Q1:** Why is EPC [9] not used as a baseline in the experiments? **A:** We didn’t compare with EPC for three reasons. 1. EPC doubles the number of agents at every curriculum update, which doesn’t support an arbitrary number of agents. 2. EPC doubles the number of agents by **cloning** each of the existing agents, which leads to similar behavior across agents. 3. The core idea of EPC is to progressively increase the population of agents throughout the training process for large-scale settings, while SPC is proposed to solve different tasks with varying numbers of agents. ----- **Q2:** The authors mention, "To ensure a fair comparison, we modify VACL by removing the centralized critic for MPE tasks.” Please explain why. **A:** We remove the centralized critic in VACL since SPC uses independent PPO without a centralized critic as the backbone. We aligned the actor-critic structures of SPC and VACL. An argument could be made that the use of a transformer architecture in SPC can effectively be viewed as centralized training, since gradients are passed across agents during training. In this paper, we remove the communication and compare SPC without gradient passing across agents against VACL in Fig. 5 for a fair comparison. ----- **Q3:** In Figure 5, including the results for SPC w/o “both” HRL and COM would be good. Then, we can see the effectiveness of the teacher component (or curriculum). **A:** SPC = bandit teacher + IPPO with communication and HRL. So SPC w/o “both” HRL and COM is only the bandit teacher + IPPO. We added the new experiment SPC w/o “both” HRL and COM; it achieves a 0.53±0.07 win rate in GRF 5v5. 
----- **Q4:** In Figure 3 (MPE environment), including the ablation study results (similar to Figure 5) would be good. **A:** We conducted the ablation study by removing components of SPC and will update Fig.3 in a revised version. ----- **Q5:** It is also important to discuss/report the proposed strategy's computational cost/overhead (run time) compared to the baselines like VACL. **A:** We implemented SPC and VACL based on RLLib in https://anonymous.4open.science/r/MARL_SPC/. The running time of SPC and VACL is similar since the computational overhead mainly depends on the inference/backpropagation of the neural network, which is aligned in this implementation. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and for addressing my concerns.
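The teacher mechanism described in the W1 answer — discretizing the continuous context vector of the student policy via online clustering so that each cluster can serve as a bandit arm — can be sketched as follows. This is a hypothetical illustration (the class name, the UCB1 arm selection, and the center-update rule are our assumptions), not the authors' SPC implementation:

```python
import math
import random


class OnlineClusterBandit:
    """Toy sketch: discretize a continuous context vector with online
    nearest-center clustering, then treat each cluster as an arm of a
    multi-armed bandit (UCB1 here). Hypothetical stand-in for the
    teacher described in the rebuttal, not the authors' code."""

    def __init__(self, n_clusters, dim, seed=0):
        rng = random.Random(seed)
        self.centers = [[rng.gauss(0, 1) for _ in range(dim)]
                        for _ in range(n_clusters)]
        self.counts = [0] * n_clusters
        self.values = [0.0] * n_clusters
        self.t = 0

    def assign(self, context):
        # nearest center = discretized context (the "arm" index)
        dists = [sum((c - x) ** 2 for c, x in zip(center, context))
                 for center in self.centers]
        return min(range(len(dists)), key=dists.__getitem__)

    def update(self, arm, context, reward, lr=0.1):
        # move the chosen center toward the observed context,
        # then update the running mean reward of that arm
        self.centers[arm] = [c + lr * (x - c)
                             for c, x in zip(self.centers[arm], context)]
        self.counts[arm] += 1
        self.t += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

    def ucb_arm(self):
        # UCB1 over cluster-arms; unexplored arms first
        for k, n in enumerate(self.counts):
            if n == 0:
                return k
        return max(range(len(self.counts)),
                   key=lambda k: self.values[k]
                   + math.sqrt(2 * math.log(self.t) / self.counts[k]))
```

The trade-off from Theorem 3.4 is visible here: more cluster centers preserve more information about the student policy, but enlarge the arm set and hence the regret bound.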
Best Arm Identification with Fixed Budget: A Large Deviation Perspective
Accept (spotlight)
Summary: The authors study the problem of best arm identification with fixed budget in the $K$-arm bandit problem. Using tools from the large deviation literature, they propose a new methodology to analyse algorithms for this problem. They apply this method to a state-of-the-art algorithm and obtain sharper asymptotic bounds on the error probability of this algorithm. They propose two new algorithms and establish bounds on their error probability, showing that one of them is asymptotically better than existing algorithms. Finally, they illustrate the good performance of their algorithms with extensive numerical simulations. Strengths: Clarity : This article is very well written and enjoyable to read. The literature review is complete and well presented. Methods and results are clearly introduced and discussed, and the sketches of proofs are insightful. The level of detail in the proofs greatly helps the reader, and the authors' efforts to make this article self-contained are commendable. Quality : This article introduces important new methodological tools and theoretical results that are supported by rigorous proofs. Originality : The authors study a central problem in the bandit literature from a new perspective, leveraging tools from the large deviation literature to obtain sharper results and develop new algorithms. Significance : The error bounds in this paper improve over existing results on a central question, and the methodology used to obtain them could be applied to study other related problems. Weaknesses: While the authors discuss the motivations behind the two algorithms they introduce, they do not compare their theoretical guarantees. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Could you please discuss the theoretical guarantees of the two algorithms in more detail? Are there settings where one would perform better than the other, and vice versa? 
Could you perhaps add an introduction to your Appendix, to present its content and the different aspects of the problem that have been postponed there? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors adequately address the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your careful review and positive feedback. 1. About > Could you please discuss the theoretical guarantees of the two algorithms in more detail? Are there settings where one would perform better than the other, and vice versa? In general, we cannot say that one of our two algorithms, CR-A or CR-C, is better than the other. We illustrate this below for the instances presented in Appendices J.1 and J.4. For the instances in J.1, we can easily prove that the performance guarantees for CR-C are better than those for CR-A. On the contrary, for the instances in J.4, CR-A has better performance guarantees than CR-C. We will add the above discussion to the paper. We introduced these two algorithms, CR-A and CR-C, because we wanted to design algorithms that discard arms with different levels of aggressiveness. 2. About > Could you perhaps add an introduction to your Appendix, to present its content and the different aspects of the problem that have been postponed there? Thanks for this suggestion. We agree that it could be nice to have a summary of the appendices. We will add an introduction before the appendices to state the role and content of each of them. --- Rebuttal Comment 1.1: Comment: Thank you for this reply. I have read the rebuttal and will keep the score as it is.
Summary: This paper studies the problem of best arm identification (BAI) with fixed budget (FB), with bandit feedback on K arms. The asymptotically optimal sample complexity for this problem has been of significant interest recently and has proved surprisingly difficult to understand compared to the fixed confidence problem. This paper gives a new upper bound on the failure probability of a given algorithm, and uses it to study specific algorithms: successive rejects and continuous rejects. Strengths: The paper achieves new large deviation bounds for the failure probability of specific algorithms using a new LDP bound, and shows their superior empirical performance at minimizing failure probability. The analysis of the successive rejects algorithm is nice and improves over previous results. The required analyses for continuous rejects (in the appendix) seem to be quite involved, but show even better performance guarantees. The writing is pretty good too. Weaknesses: The results seem a little incremental as they mostly concern specific algorithms; the paper does not seem like it will help much toward an eventual solution to the general FB-BAI problem. But getting better non-sharp bounds is still nice (and a general solution might be abstract enough not to subsume them). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I didn't see the relevant Polish spaces specified anywhere. I suggest using i_* instead of \hat i, since it is hard to see the latter. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your careful review and positive feedback. 1. About > The results seem a little incremental as they mostly concern specific algorithms; the paper does not seem like it will help much toward an eventual solution to the general FB-BAI problem. But getting better non-sharp bounds is still nice (and a general solution might be abstract enough not to subsume them). We believe that our general LDP result (Theorem 1) is not incremental, for it allows us to analyze various algorithms in a simple way and to design new algorithms with improved performance. It allows us to analyze truly adaptive algorithms, and we plan to search for even more adaptive algorithms than CR. Now, solving the general FB-BAI problem (finding a problem-specific lower bound of the error rate and a matching algorithm) actually seems very hard, if not impossible. This was recently discussed in Degenne [12] (reference is from the supplementary material). 2. About > I didn't see the relevant Polish spaces specified anywhere. We mentioned Polish spaces (separable and complete metric spaces) because this is the traditional (and the most general) framework to define and work with Large Deviation principles (see the book [30]). Here, you are right, we just work with Euclidean spaces; we will clarify this. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. I don't have further questions and will keep my score as is for now.
Summary: This paper considers the problem of best arm identification with fixed budget, where the learner pulls an arm in each round for $T$ rounds, then outputs a candidate for the arm with the largest mean. The objective is to minimize the probability of mis-identification. Characterizing the instance-specific complexity for this problem is an open question. The authors present a more refined analysis based on large deviation principles, resulting in a refined upper bound for the successive rejects (SR) algorithm. Further, the authors present two algorithms based on the previous analysis where arm elimination is done based on conditions rather than at pre-specified rounds as in SR. Strengths: The paper is well-organized, and the discussions are interesting. Using LDP for best arm identification appears to be novel. Weaknesses: The presented bounds are not easy to read (especially in Section 4). It would be nicer to provide a more detailed discussion about the comparisons with bounds in the literature. Some theorem statements (Proposition and Theorem 1) don't clearly state the assumptions made. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Minor comment/question: line 147, about returning the arm with the highest empirical reward: can't we have a scenario in the SR algorithm where, at the end (when the budget is exhausted), the empirical mean of the winning arm is smaller than the empirical mean of the first eliminated arm (which was computed only during the first epoch)? If true, the latter statement is not valid. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your careful review and positive feedback. 1. About > The presented bounds are not easy to read (especially in Section 4). It would be nicer to provide a more detailed discussion about the comparisons with bounds in the literature. Thanks for this remark. We will add a more detailed discussion to compare our bounds to those of the literature (using the extra space allowed in the final version of the paper). From the results of Theorem 3, we can conclude that CR-C has better performance guarantees than SR, the state-of-the-art algorithm. Indeed, one can readily see that the bound derived in Theorem 3 for CR-C is higher than $2\xi_j/j\overline{\log}K$, which is the improved bound for SR presented in Section 3. We will make this clear in the paper. In general, we cannot say that one of our two algorithms, CR-A or CR-C, is better than the other. We illustrate this below for the instances presented in Appendices J.1 and J.4. For the instances in J.1, we can easily prove that the performance guarantees for CR-C are better than those for CR-A. On the contrary, for the instances in J.4, CR-A has better performance guarantees than CR-C. We will add the above discussion to the paper. 2. About > Minor comment/question: line 147, about returning the arm with the highest empirical reward: can't we have a scenario in the SR algorithm where, at the end (when the budget is exhausted), the empirical mean of the winning arm is smaller than the empirical mean of the first eliminated arm (which was computed only during the first epoch)? If true, the latter statement is not valid. Yes, you are right. More precisely, in Line 147, we state that *if* the algorithm returns the arm with the highest empirical reward, then the error probability is $P_{\mu}[\hat{\imath}\neq 1(\mu)]=P_{\mu}[\hat{\mu}(T)\in Alt(\mu)]$. 
As you noticed, algorithms (SR, SH, or CR) may not always return the best empirical arm, but we accounted for this possibility in our analysis. To understand why, please refer to the proof of Theorem 2 for example: the analysis consists in upper bounding the probability of eliminating the best arm in each phase of the algorithm. This probability is first connected to an event related to the empirical rewards of the arms, and then the probability of this event is bounded using Theorem 1 (our Large Deviation result). --- Rebuttal Comment 1.1: Comment: Thank you for the reply. I have read the rebuttal and I will not change the score for now.
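The Successive Rejects procedure at the center of this exchange — phased uniform pulls over the surviving arms, eliminating the empirically worst arm after each phase — can be sketched as follows. The phase lengths follow the standard schedule of Audibert and Bubeck; the reward-sampler interface and seeding are illustrative assumptions, not code from the paper:

```python
import math
import random


def successive_rejects(arms, T, seed=0):
    """Minimal sketch of the Successive Rejects (SR) algorithm.
    `arms` is a list of K reward samplers (callables taking an rng);
    T is the total sampling budget. Illustrative only."""
    rng = random.Random(seed)
    K = len(arms)
    # \bar{log}(K) = 1/2 + sum_{i=2}^{K} 1/i, as in Audibert & Bubeck
    log_bar = 0.5 + sum(1.0 / i for i in range(2, K + 1))

    def n(k):
        # cumulative pulls per surviving arm after phase k
        return math.ceil((T - K) / (log_bar * (K + 1 - k)))

    active = list(range(K))
    counts = [0] * K
    means = [0.0] * K
    n_prev = 0
    for k in range(1, K):            # K-1 elimination phases
        pulls = n(k) - n_prev
        n_prev = n(k)
        for i in active:
            for _ in range(pulls):
                r = arms[i](rng)
                counts[i] += 1
                means[i] += (r - means[i]) / counts[i]
        # reject the arm with the lowest empirical mean
        worst = min(active, key=lambda i: means[i])
        active.remove(worst)
    return active[0]                 # the single surviving arm
```

Note that, consistent with the rebuttal's point about Line 147, the returned arm is the survivor of the elimination phases, which need not coincide with the arm of globally highest empirical mean.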
Summary: This paper contributes an important tool geared towards closing the performance gap for best arm identification problems under the fixed budget setting, a well-known open problem in the area. In particular, the result is a large deviation bound on the sample means of the arm rewards as a function of any large deviation bound on the empirical means of the number of times each of the arms is pulled under a fixed policy (which could very well be adaptive). Since for popular policies such as successive rejects, the latter bound could be easier to specify due to their structure (batched pulls, etc.), the result can be used to derive tighter upper bounds on the probabilities of misidentification under such policies. The paper illustrates this power by deriving a tighter upper bound on the probability of misidentification for the successive rejects (SR) policy and derives a new policy called continuous rejects (CR) that achieves a smaller probability of error. Strengths: - While it doesn't seem surprising that such a result would hold, I think this is a solid result that I anticipate being of importance not only for the best arm-identification problem but as a general tool in the bandit analysis toolkit. - The paper is very well written. Weaknesses: - I don't think there are any major weaknesses except stating that the considered problem is the "last fundamental open problem in MABs" is quite presumptuous. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Hasn't it been shown, as stated in the open problem paper by Chao Qin, that the conjecture in Equation (1) is false? Am I missing something? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The paper addresses a specific problem in bandits that does not seem to have any more nefarious use cases than typical optimization models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your careful review and positive feedback. 1. About > I don't think there are any major weaknesses except stating that the considered problem is the "last fundamental open problem in MABs" is quite presumptuous. We agree and we will rephrase. 2. About > Hasn't it been shown, as stated in the open problem paper by Chao Qin, that the conjecture in Equation (1) is false? Am I missing something? (1) is the lower bound conjectured by Kaufmann and Garivier in [17] (references are from the supplementary material). No one has proven yet whether (1) is a correct lower bound or not (as far as we are aware). Now the conjecture stated by Chao Qin in [28] (Conjecture 2 in [28]) is whether there exists a single algorithm with error rate matching the r.h.s. of (1) for *all* instances. This last conjecture does not hold as mentioned in [28] – this is a consequence of results from [1]. Recently, Degenne [12] (published in COLT 2023) also proved that the conjecture was false for 2-arm bandit problems with Bernoulli rewards. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I am in support of the contribution and will maintain my score.
Differentiable sorting for censored time-to-event data.
Accept (poster)
Summary: This paper concerns a novel way to solve risk stratification for time-to-event data in the presence of right-censoring. Specifically, the authors solve the proportional hazards model via a differentiable sorting algorithm. Experiments were done on simulated and real-world benchmark datasets. Strengths: - The paper is well-written and easy to follow. - The authors extended the differentiable sorting network to account for right-censoring by introducing a possible permutation matrix and applied it to survival analysis. - This paper investigated the role of transitivity in the proportional hazards survival model. Weaknesses: - Neither the experimental nor the theoretical advantage of Diffsurv over CPL is clear enough. I do not find a specific reason to use Diffsurv rather than CPL for training a PH survival model. - Even if the authors showed some advantages of the proposed Diffsurv on the simulated dataset, it hardly shows a difference against CPL on real-world benchmark datasets. - I believe Diffsurv is more than just one interesting way of solving the proportional hazards model. I think differentiable sorting is expected to do better than the partial likelihood approach; however, the experimental results do not strongly support this advantage. The potential problem is the proportional hazards assumption - we observe only marginal improvement because the PH assumption is too strong. Applying differentiable sorting to models with no PH assumptions, such as CoxTime or DeepHit (as discussed in the appendix), may result in clear improvement (moreover, the experimental results also suggest that the method should be extended to non-PH models because, under extremely simple simulated data that follows a Weibull distribution with mild independent censoring, Diffsurv outperforms CPL, but in real-world cases the results hardly differ from those of CPL, suggesting that real-world applicability is questionable). 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I am not so sure about the clinical context of the top-$k$ risk stratification task. Could the authors kindly explain why the top-$k$ risk stratification is important enough to be evaluated separately, even if C-index accounts for the risk stratification? - I do not think Kvamme et al. explicitly assumed pairwise independence, but they presented that it is sufficient to consider a risk set of size 1, which means having a risk set of size 1 is as good as having a larger risk set. Moreover, due to the nature of the random selection of SGD, the transitivity is implicitly considered. Also, in the paper, they investigated the impact of risk set size, which supports their claim “it is often sufficient to choose $n=1$.” - I am curious how emphasizing top-K prediction changes the discrimination and calibration performance of survival models. - Could authors kindly provide a computational cost analysis (either experimental or theoretical) of Diffsurv? It will be informative if the authors compare the computational costs of Diffsurv and CPL. - RSF is not based on PH assumption. If the authors did not include DeepHit or other SOTA deep survival models just because they are not based on PH assumption, why is RSF given as a baseline? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not provide any theoretical analysis of Diffsurv. CPL might seem to be naive as it does not exhaustively consider all possible pairs, however, the estimator that maximizes the partial likelihood has nice statistical properties, including consistency and asymptotic normality. It would be nice if the authors could provide statistical/information theory behind Diffsurv and compare it with the existing ones. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review of our manuscript. We've addressed each point you raised and provided further clarifications where necessary: **Weakness 1 (Theoretical and practical advantages of the method)** *Theoretical Advantages*: Diffsurv integrates differentiable sorting networks, thereby introducing a transitive inductive bias into the survival ranking model. We have shown that this novel survival analysis approach performs competitively compared to CPL. More significantly, the ordering supervision enables straightforward implementation of new tasks, such as top-k risk prediction. In contrast, CPL cannot be readily adapted to predict permutation matrices directly. As we demonstrate in the global response and new experiments, attempts to adapt CPL for this purpose result in poorer calibration of the predicted rankings compared to Diffsurv. *Experimental Advantages*: Our experimental findings, detailed in Table 1 and lines L275-281, demonstrate that Diffsurv outperforms CPL, particularly as transitivity increases within the data. This improvement is pronounced for larger risk set sizes, where our method leverages the inherent transitivity due to the introduction of sorting methods. Further experiments (Table 3) reveal consistent outperformance of CPL by Diffsurv in the top-k setting, which has significant clinical applications, like identifying the top-k highest-risk individuals for specific treatments or preventative measures (see also our reply to Question 1 below). --- **Weakness 2 (Real-world benefits)** We acknowledge that on the real-world risk stratification benchmark datasets, Diffsurv is only marginally better than the CPL baseline. We have introduced a novel approach to survival analysis using differentiable sorting algorithms that is at least competitive with established methods, but importantly also enables the development of novel downstream tasks using ordering supervision, such as top-k risk prediction. 
--- **Weakness 3 (Potential of Diffsurv in non-PH settings)** We agree extending Diffsurv to time-dependent predictions without PH assumptions similar to CoxTime or DeepHit is a promising approach that will likely show a clearer performance benefit compared to the baselines and aim to address this in future work. --- **Question 1 (Significance of top-k risk prediction task)** We believe the top-k risk prediction task has important clinical applications: Whereas C-index accounts for risk stratification and measures the proportion of concordant pairs in all comparable pairs, the top-k risk prediction task as introduced in the manuscript tries to identify the set of top-k highest risk individuals, regardless of the orderings of individuals within the set of top-k highest individuals. The value of the top-k risk prediction task emerges in real-world clinical scenarios where resources like treatments or preventative interventions, such as vaccines, are constrained. In these situations, the primary concern is identifying high-risk individuals most likely to benefit from interventions, rather than predicting relative risk within that top tier. Risk ordering within the top-k individuals is secondary and may not affect immediate clinical decisions. Note this task is straightforward to implement with Diffsurv due to the predicted permutation matrix and thus showcases the advantages of including ordering supervision in a risk stratification model. --- **Question 2 (Pairwise independence and transitivity in Kvamme et al.)** We acknowledge that Kvamme et al. do not explicitly assume pairwise independence. However, it's worth noting optimisation with a risk set size of 1 inherently assumes pairwise independence. While Kvamme et al. demonstrate this is not an issue in their work, our empirical results contradict this. In our experiments, we observe benefits for larger risk sets, both for CPL and Diffsurv, though stronger for Diffsurv. 
The following differences in our study might explain this discrepancy: * We evaluate C-Index, while Kvamme uses MPLL, a lower bound of C-Index [1] * Our dataset and model complexity make optimisation more challenging, which might lead to our observed benefits for bigger risk sets * We have evaluated larger risk sets compared to Kvamme et al.'s study --- **Question 3: (Top-K prediction changes to discrimination and calibration)** While we did not include these results in the manuscript, we have noticed all model variants trained using top-k objectives perform slightly worse in risk stratification when evaluated using C-index. --- **Question 4 (Computational cost analysis)** We have now analysed the time complexity of the methods and run additional experiments to measure runtimes, please see the global response and the attached PDF. --- **Question 5 (Baselines)** We believe there is no stronger baseline than a tuned deep neural network with a CPL objective. While other models like DeepHit tend to outperform CPL in $\text{C-Index}^\text{td}$, for the ranking setting (non time-dependent) C-Index is the appropriate metric. We still include RSF as it is a commonly used baseline in the literature and is meaningfully different from Cox regression and neural networks trained with CPL. **Limitation 1 (Theoretical analysis of Diffsurv)** We agree CPL has some favourable statistical properties and appreciate the concern for a more thorough analysis of the theoretical properties of our proposed method. We would like to point out these align closely with the properties of sorting networks and differentiable sorting networks, which have been studied extensively in prior work [2, 3, 4]. --- [1] Steck et al. 2007."On ranking in survival analysis: Bounds on the concordance index." NeurIPS. [2] Knuth, D. E. 1997. The Art of Computer Programming. Addison–Wesley. [3] Petersen et al. 2021. "Differentiable sorting networks for scalable sorting and ranking supervision." ICML. 
[4] Petersen et al. 2021. "Monotonic Differentiable Sorting Networks." ICML. --- Rebuttal Comment 1.1: Comment: **Update on question 3** We now report additional experiments varying the top K% and the corresponding C-index performance. Please see the replies to Reviewer GX1f: - https://openreview.net/forum?id=gYWjI7wLhc&noteId=9ucr4JN7g0 - https://openreview.net/forum?id=gYWjI7wLhc&noteId=TMw9rdHVPr --- Rebuttal Comment 1.2: Comment: I would like to thank the authors for their thorough feedback and additional experimental results. The authors addressed most of my concerns appropriately. However, some points are still unclear to me. I have carefully gone through the discussion between the authors and reviewer GX1f regarding the top-k risk prediction task. I am still not fully convinced whether top-K risk prediction aligns with the conventional survival analysis scope. It might be better suited as a distinct task somewhat connected to survival analysis. However, with the C-index in each top-K group, it now seems that if there is a strong motivation to differentiate between high-risk and low-risk groups and to stratify the risk within the high-risk group, the concept of top-K risk prediction could offer practical utility in the clinical field. Thus, Diffsurv having better top-K accuracy as well as a better C-index in the top-K group seems to demonstrate that Diffsurv has advantages over CPL. Nevertheless, I am still unsure of what advantages Diffsurv offers compared to CPL in the typical survival analysis context, especially when we limit our scope to proportional hazards models. Even if the authors demonstrated that Diffsurv can show a better C-index than CPL as transitivity increases, such high transitivity may not be observed in real-world cases, as we see only marginal improvement in C-index in the real-world dataset experiments. 
Overall, for me, Diffsurv looks more like a novel and promising ordering algorithm in the presence of censoring but not a complete survival analysis model. Also, the authors did not present a way to derive the survival function of Diffsurv, so it is not straightforward to evaluate the utility of Diffsurv as a survival analysis model. That is to say, additional investigation is still needed if the authors want to claim Diffsurv is for survival analysis. Therefore, I keep my score as is. One additional comment: If Diffsurv arranges the risks better than CPL, then Diffsurv may have better calibration performance than CPL (if the authors can derive survival probability from Diffsurv). --- Reply to Comment 1.2.1: Comment: We greatly appreciate your continued engagement and recognize the significance of the point you've highlighted regarding the positioning of Diffsurv within the domain of survival analysis. As we've delved into this topic, we too have grappled with the precise placement of Diffsurv within the broader landscape of survival analysis. This is reflected in our choice to emphasize "censored time-to-event data" in the title. While it is possible to directly relate Diffsurv to existing survival methods, it is unique in its ability to directly predict permutation matrices, enabling novel tasks such as top-k identification. You are absolutely right in pointing out that Diffsurv currently does not produce traditional survival curves. This limitation is something we've acknowledged in our paper (L305). Merging the functionalities of Diffsurv with more traditional survival models to derive survival curves is an avenue we are actively exploring for future research. Nevertheless, we believe that the introduction of differentiable sorting and the novel capabilities introduced by Diffsurv are valuable in their own right. On reflection, we may term these capabilities "survival ranking", as it aptly captures the essence of what Diffsurv does. 
However, we still believe that "survival analysis" serves as a broader and inclusive term, encompassing the subdomain of "survival ranking". This follows Raykar et al.’s introduction to the connections between the two domains (“we show that classical survival analysis involving censored data can naturally be cast as a ranking problem.”) [1]. We have updated the introduction and conclusion to make this distinction clearer. --- [1] Raykar et al. 2007. ‘On Ranking in Survival Analysis: Bounds on the Concordance Index’. NeurIPS.
Summary: The authors propose a deep learning-based survival model that utilizes a novel differentiable sorting objective capable of handling censored time-to-event data. In contrast to previous pairwise sorting objectives, the proposed listwise sorting objective achieves the transitive property inherent in survival data. Through semi-synthetic and real-world experiments, the authors demonstrate that the proposed method achieves comparable or improved risk ranking performance by leveraging the advantages of the proposed differentiable sorting objective. Strengths: Extending a differentiable sorting method to account for censoring and applying it to train survival models is novel. Weaknesses: - The writing of the paper should be improved: it has many typos and grammatical errors. Some of the examples are listed below (note that there are many others not listed here): Line 61: “extension differentiable” -> “extension of differentiable” / Line 97: “n=2” should be in mathematical form / Line 154: “It possible” -> “It is possible” / Line 161: “loss out we find” -> “loss, we find” / Line 77: “1-dimensional vector of size d” -> “d-dimensional vector” / Make notations for referring to “Equation” consistent throughout the paper. - The motivation for using the “listwise” sorting method that achieves transitive property over the “pairwise” sorting method is not clearly stated throughout the paper (especially in the Introduction). It would be helpful to state the formal definition of transitive property in survival data with mathematical notations, and the limitations when this property is not maintained. - The design of semi-synthetic experiments does not elaborate why listwise sorting should outperform pairwise sorting. - The experiments are performed only with limited benchmarks (mainly the variants of partial likelihood methods and the proposed method itself) and evaluated only in terms of the discriminative power. 
The authors should provide predictive power (such as Brier scores) of survival models and in-depth qualitative analysis on the benefit of using the proposed listwise sorting method (when it has pros and when not). - It would be helpful if mathematical definitions were used to describe the two scenarios of ranking right-censored samples in lines 187-191. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Regarding Weakness 2, what is the limitation of pairwise sorting over listwise sorting? If correct pairwise sorting is achieved for all acceptable pairs, doesn't it necessarily achieve the transitive property? - Regarding Weakness 3, why is listwise sorting more important than pairwise sorting in the semi-synthetic survSVHN? - Acceptable pairs do not consider cases in which the risk of a pair of samples cannot be directly compared. However, the proposed permutation ranking takes into account the possibility that right-censored samples may have higher ranks. The question arises: what is the benefit of including such potential cases for which we cannot guarantee a direct comparison? Furthermore, wouldn't it be harmful if a right-censored sample is highly likely to experience an early time-to-event, which can be inferred from the covariates? Can this be supported qualitatively based on the semi-synthetic (or synthetic) experiments? - In relation to Question 3, the semi-synthetic data generation process does not account for independent censoring, which assumes that the time-to-event and time-to-censoring are conditionally independent given the covariates. This assumption is commonly utilized in various survival analysis literature. Wouldn't the proposed semi-synthetic scenario be more favorable for utilizing the proposed permutation ranking method for sorting, rather than relying on acceptable pairs for sorting? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - It seems that computing the Q matrix is computationally burdensome as it needs to compare all the sample pairs. - Please see details in Weakness 2. - Please see details in Weakness 4. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your feedback and hold your insights in high regard. Our goal is to respond to every concern you have mentioned: --- **Weakness 1 (Writing)** Thank you for the feedback on typographical and grammatical errors. We've addressed these and ensured greater consistency in notation and equation references. --- **Weakness 2 and Question 1 (Pairwise vs Listwise and Formal definition of transitivity)** The motivations for listwise over pairwise methods and their connection to transitivity warrant clarification. Recall that Cox Partial Likelihood (CPL) is a listwise approach, as indicated on L123. Let's highlight the benefits of listwise methods and then differentiate between differentiable sorting and CPL. Pairwise sorting methods handle ranking pairs individually. They establish local orderings (A over B, B over C) but may not ensure global consistency (A over C) throughout the set, as outlined in L113-115 and L266-274. Such limitations arise from focusing solely on local pairings. In some scenarios, these local orders can create inconsistencies. Increasing the minibatch size to include more pairs poses scalability challenges. Listwise methods instead evaluate the relationships among all data points in a set, whose possible orderings scale factorially. Ensuring the correct order for every pair would imply transitivity, but this is not guaranteed when pairs are only considered in isolation. Refer to [1] for more on listwise motivations. Comparing CPL and Diffsurv, while both are listwise, they have distinct principles. CPL follows a top-one approach. Each term in the product ensures that one patient with an earlier observed event must rank higher than a subset of those observed later. Diffsurv instead considers the order of every patient in the set, and any incorrect ordering within each layer of the relaxed sorting network propagates to a higher predicted probability of permuting to an incorrect rank. 
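To make the top-one listwise structure of CPL described above concrete, here is a minimal NumPy sketch of the negative Cox partial log-likelihood over a single risk set. This is our own illustrative implementation (function name ours), not the paper's code:

```python
import numpy as np

def cox_partial_likelihood_loss(log_risk, times, events):
    """Negative Cox partial log-likelihood for one risk set.
    log_risk: predicted log-hazards h_theta(x_i); times: observed times;
    events: 1 if the event was observed, 0 if right-censored."""
    order = np.argsort(times)
    log_risk, events = log_risk[order], events[order]
    loss = 0.0
    for i in range(len(log_risk)):
        if events[i] == 1:
            # Top-one term: patient i must rank above everyone still at
            # risk, i.e. all patients with an observed time >= t_i.
            loss -= log_risk[i] - np.log(np.sum(np.exp(log_risk[i:])))
    return loss
```

With predictions that order the events correctly (earlier events assigned higher log-risk), this loss is strictly lower than with the reverse ordering, which is the sense in which each term enforces the top-one ranking.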
--- **Weakness 3 and Question 2 (Transitivity in the semi-synthetic experiment)** It is important to highlight the role of transitivity in larger risk set sizes, which directly contributes to the higher performance of our method compared to the CPL baseline. The experiments shown in Table 1 and the details in lines L275-281 of the manuscript support this claim by demonstrating that both CPL and Diffsurv improve with larger risk set sizes. However, Diffsurv, due to the introduction of differentiable sorting methods, benefits more from the inherent transitivity in these larger risk sets, resulting in a more substantial performance delta. Furthermore, our results validate the significance of listwise sorting in the semi-synthetic survSVHN dataset. Table 2 provides an in-depth exploration of this, revealing that as the degree of transitivity within the data increases, Diffsurv consistently outperforms CPL. This divergence emphasises the importance of the differentiable sorting network in capturing the transitivity in the ranking task. --- **Weakness 4 (Benchmarking)** Our proposed method focuses on survival ranking, and we don’t learn a hazard function as in CPL, but rather a ranking function. Clinical decisions often hinge on relative comparisons, such as whether an observation affects survival time, rather than precise event timing. Additionally, there are specific applications in survival analysis where ranking is the primary objective, such as resource allocation among patients. In line with recent works in deep learning survival analysis, we chose CPL as the baseline and rigorously tuned the hyperparameters to ensure a fair comparison. Since our method is geared towards survival ranking, predicting event times and thus scores like IBS are not applicable in this situation. However, we recognise the value of assessing model calibration and appreciate your suggestion to include Brier scores for predictive power. 
We have incorporated the Brier scores for the permutation ranking in the attached PDF of our global response. Our findings demonstrate that Diffsurv outperforms the CPL baseline. --- **Weakness 5 (Mathematical notation)** We agree that adding the mathematical notation introduced in L73-77 and relating that to the possible ranks in L187-191 improves clarity and have adapted this section accordingly. --- **Question 3 (Ranking loss for right-censored samples)** In response to the concerns raised about the ranking of samples that cannot be directly compared, it is essential to highlight the design of our loss function, which has been constructed to solve this issue. The loss function (Equations 13 and 14) accounts for the potential challenges of incorporating right-censored samples. For an early right-censored sample $i$, $Q_{pi}$ is 1 almost everywhere, effectively sidestepping the risk comparison issue. The binary cross-entropy remains identical whether the model predicts uniform probability for all possible ranks or concentrates the probability mass on an early rank, inferred from the covariates. This ensures that the predicted rank falls within the possible ranks, considering censoring, without further influencing which rank within those possible ranks is preferable. We have now clarified this point in the methods section. --- **Question 4 (Censoring in semi-synthetic data)** To clarify, our semi-synthetic data generation uses only independent censoring. Is the question whether dependent censoring would favour our method in the benchmarks? --- **Limitation 1 (Runtime of calculating Qp)** We now report both theoretical time complexities of the differentiable sorting networks and empirical runtime results, including calculation of the $Q_p$ matrix in appendix section B.3 and show that the distinction in training times between Diffsurv and CPL is insignificant. Please also see our global response for further details on the runtime analyses. --- [1] Cao et al. 2007. 
‘Learning to Rank: From Pairwise Approach to Listwise Approach’. ICML. --- Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors Comment: I have read the authors' feedback (with additional experiments on the calibration performance) and updated my evaluation accordingly (from 4. Borderline reject to 5. Borderline accept). I appreciate the response to my comments but I still have concerns about the clinical utility of top-k predictions (which can be simply replaced with sorting based on individual risk predictions) and the absence of comparisons to more state-of-the-art deep learning methods. --- Reply to Comment 1.1.1: Title: Re: Re: Rebuttal by Authors Comment: Thank you for taking the time to consider our rebuttal and adjusting your evaluation. We would like to respond briefly to the two remaining points: --- **Clinical Utility and Top-k Prediction**: The top-k prediction task has clinical utility in settings where a scarce resource, such as a new vaccine or a prevention checkup, must be allocated to the set of individuals most likely to benefit; usually, these are the individuals most at risk. In this setting, proper risk stratification is not necessary, as it is not required to correctly order individuals within the top-k most at risk. We only need to identify the individuals most at risk. We agree that, for a perfect risk model, one could identify the top-k most at risk by sorting based on individual risk predictions, as you suggested. However, we note that this is how the non-top-k baselines (e.g., the unadjusted Cox Partial Likelihood (CPL) model) in Table 3 are evaluated. Here, we simply train a standard CPL loss, sort individuals by predicted risk, and evaluate the top-k prediction performance. Table 3 also shows that, for real-world datasets with limited available data, the model variants suggested by us clearly outperform the unadjusted models, and that the Diffsurv-TopK model outperforms all other variants in this task. 
--- **State-of-the-art Deep Learning Baselines**: We want to emphasise that we do compare against extensively tuned deep learning baselines trained using several CPL loss variants, and we are unaware of any stronger time-independent baselines we could compare our method against. Are there specific state-of-the-art deep learning methods you had in mind? It's worth noting that directly comparing our method with time-dependent survival analysis methods, which model non-proportional hazards, presents challenges. Typically, these models need to be evaluated using metrics that can assess their time-dependent predictions, such as the time-dependent C-index [1]. For instance, methods like DeepHit provide a probability mass function (PMF) for survival times, but the standard C-index necessitates ranking irrespective of time. This fundamental difference makes comparisons challenging (please note that the DeepHit paper also only evaluates the time-dependent C-index metric), and it is unlikely that these models improve upon Cox Partial Likelihood in the time-independent setting. We appreciate any insights or recommendations you might have on this matter. --- [1] Antolini, Laura, Patrizia Boracchi, and Elia Biganzoli. 2005. "A time-dependent discrimination index for survival data." Statistics in Medicine.
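To illustrate the possible-rank reasoning from the response to Question 3 above (for an early right-censored sample, the possible-rank row is 1 almost everywhere), here is a minimal NumPy sketch that derives per-sample rank bounds from acceptable pairs. This is our own illustrative construction and may differ in detail from the paper's exact $Q_p$:

```python
import numpy as np

def possible_rank_mask(times, events):
    """Boolean matrix M where M[i, r] is True if sample i could occupy
    rank r (ascending true event time), given observed times and
    censoring indicators (events: 1 = event observed, 0 = censored)."""
    n = len(times)
    # precedes[i, j]: i's true event time provably precedes j's, which
    # holds exactly for acceptable pairs (i has an event and t_i < t_j).
    precedes = (events[:, None] == 1) & (times[:, None] < times[None, :])
    lo = precedes.sum(axis=0)          # samples that must come before i
    hi = n - 1 - precedes.sum(axis=1)  # n-1 minus samples that must follow i
    ranks = np.arange(n)
    return (ranks[None, :] >= lo[:, None]) & (ranks[None, :] <= hi[:, None])
```

For a sample censored before every observed event, no acceptable pair constrains it, so its entire row is True, matching the "1 almost everywhere" behaviour described in the rebuttal.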
Summary: This paper investigates the problem of survival analysis under the proportional hazards (PH) hypothesis with deep learning models. The main issue with previous approaches (Katzman et al. 2018 and Kvamme et al. 2019 as main representatives) is that the Cox loss is only computed in small mini-batches, potentially of size 2 in the case of the last reference. As a consequence, only pairwise ordering is imposed, potentially breaking transitivity. This paper proposes to leverage recent advances in differentiable sorting to take into account transitivity in training deep survival models under the PH hypothesis. The main issue comes from censoring, which only implies a partial ordering. The main contribution of the authors is to propose a method that takes into account this censoring, defining a possible permutation matrix. The resulting method is flexible and can be adapted to plain survival or other tasks, such as top-K risk prediction. Experiments on real data show that: 1. the method improves over previous deep learning methods (Table 1), showing a +1 c-index point 2. the gap between both methods increases when the transitivity of the dataset increases (Table 2) 3. this gap is maintained for other datasets (Table 3, first part) and other tasks (Table 3, last part) Strengths: - Potential large impact: this paper provides a more sound way to deal with survival data and minibatches in deep learning, which could have a large impact - Quality: the experiments are well conducted and yield convincing results - Originality: to the best of my knowledge, this is one of the first papers leveraging differentiable sorting for survival analysis - Clarity: the paper is well written, up to minor issues (see weaknesses) Weaknesses: - Clarity - notations in Equations (2), (3) and (4) are fuzzy. How is $j \in \mathcal{R} \backslash \lbrace i \rbrace$ chosen? It is unclear whether the sum applies to all such $j$ (as should be the case in eq. 
(4), since the c-index is computed over all acceptable pairs as defined L 106) or to only a random sample (as in Eq. (3)). - Top-k task - As pointed out by the authors, due to the partial ordering the set $\mathcal{T}_k$ can grow very large (L217). As a consequence, the top-k-score defined in Eq (20) can be arbitrarily good. How do the authors control for this potential effect in the results? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - cf top-k task: how do the authors control for the effect of the size of $\mathcal{T}_k$ on the top-k-score? - Is there a difference in run-time between previous approaches and the proposed differentiable sorting? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: - This paper is limited to proportional hazards assumptions, but it is clearly acknowledged by the authors Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough evaluation and positive feedback on our manuscript. Your insights into our work's potential impact and originality are highly encouraging. We acknowledge your concerns, especially regarding the clarity of notations and the handling of the top-k task. In this rebuttal, we strive to address each point further to enhance the quality and clarity of our contribution. --- **Weakness 1 (Clarity)** Here, we reproduce the notation from Kvamme et al. [1]. In theory, the sum applies to all such j as in the C-index. In practice, we must sample a limited number of j's according to the minibatch size. This is done randomly from the set of possible j's. --- **Weakness 2 and Question 1 (Top-k task)** We thank the reviewer for highlighting the potential influence of the set size on the top-k-score due to partial ordering. Indeed, a large set size, stemming from partial ordering, can theoretically lead to an arbitrarily high top-k-score. However, a few points to note: * Our primary emphasis is on relative scores between models. While individual scores could be elevated due to partial orderings, the relative differences between models remain and can be used to assess their comparative performances. * Empirically, we observed that the scores manifest meaningful differences across methods and real-world datasets, including those with substantial censoring. These variations support the practical effectiveness of the metric. --- **Question 2 (Runtime analysis)** Thank you for the question on runtime differences between our proposed method and previous approaches, which we have now analysed in more detail in the global response and attached PDF. --- [1] Kvamme, Håvard, Ørnulf Borgan, and Ida Scheel. "Time-to-Event Prediction with Neural Networks and Cox Regression." Journal of Machine Learning Research 20 (2019): 1-30. 
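For reference, the C-index over acceptable pairs invoked in the Weakness 1 response can be sketched as follows. This is a plain implementation of Harrell's C (our own illustration, not the evaluation code used in the paper):

```python
import numpy as np

def concordance_index(risk, times, events):
    """Harrell's C-index over acceptable pairs (i, j): i has an observed
    event and t_i < t_j. Counts pairs where the higher predicted risk is
    assigned to the earlier event; ties in risk count one half."""
    num, den = 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den
```

The double loop makes the "sum over all such j" explicit: every acceptable pair contributes to the denominator, whereas a sampled minibatch loss only touches a random subset of them.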
--- Rebuttal Comment 1.1: Title: Thank you for your rebuttal Comment: Dear authors, Thank you for your answer and the additional experiments. I am satisfied regarding runtime and the top-k task. Regarding clarity, thanks for the pointer to Kvamme et al. Although your notation matches that of Equation (9) in Kvamme et al., I still think this is confusing for the reader: if a single point is sampled, I would recommend using a single index like $J(i)$: $L(\theta) = \prod_{i: \delta_i = 1} \frac{f_{\theta}(x_i)}{f_{\theta}(x_i) + f_{\theta}(x_{J(i)})},\text{ where }J(i) \sim \mathcal{R}\backslash \lbrace i \rbrace$ If a random subset $\tilde{\mathcal{R}}\backslash \lbrace i \rbrace$ of size >1 is used, the notations of Eq (8) in Kvamme et al. are more precise. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback. We agree with your suggestions for clarity in this area, and in response, we have updated the manuscript to align with your recommendations.
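The single-sampled-comparator loss written above can be sketched in NumPy as follows. This is illustrative only: for simplicity the comparator $J(i)$ is drawn from all other indices rather than the proper risk set, and the function name is our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_pair_nll(risk, events):
    """Negative log of prod_{i: delta_i = 1} f(x_i) / (f(x_i) + f(x_J(i))),
    with one comparator J(i) drawn per observed event. `risk` holds
    positive hazard estimates f_theta(x_i)."""
    nll = 0.0
    n = len(risk)
    for i in range(n):
        if events[i] == 1:
            # J(i): one comparator sampled uniformly from the other samples
            j = rng.choice([k for k in range(n) if k != i])
            nll -= np.log(risk[i] / (risk[i] + risk[j]))
    return nll
```

Each event contributes exactly one two-sample term, which is why this variant only imposes pairwise (local) orderings per step.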
Summary: The paper aims to address the survival problem as a ranking problem. The author asserts that by taking into account the transitivity property of ranking, sorting networks can be employed to tackle the current ranking problem effectively. Furthermore, the paper presents a solution to handle numerous potential permutation matrices in the presence of censoring. Additionally, it claims to outperform baseline methods in terms of top-k risk prediction. They tested their method on a semi-synthetic dataset based on the SVHN dataset and 5 real-world datasets. Strengths: By utilizing sorting networks, the paper leverages the inherent transitivity of ranking to its advantage. Additionally, it offers a comprehensive solution to handle all conceivable permutation matrices. Weaknesses: 1. Survival analysis encompasses more than just generating rankings. In many cases, it is crucial to estimate the timing of the occurrence of the event of interest. The paper could address this aspect by comparing the learned hazard with the ground truth hazard in the semi-synthetic experiment. 2. When it comes to top-k risk prediction, the method needs to provide a meaningful ordering, which, unfortunately, is not demonstrated in the paper. 3. It is not immediately apparent why this particular method is necessary. Is its sole purpose to accommodate the limitation of fitting the entire at-risk set into memory? 4. It would be beneficial for the author to provide a more detailed explanation of sorting networks and their functioning. The working principles of these networks are currently unclear to me. Furthermore, it would be helpful to know if sorting networks have any limitations regarding the size of the input list they can handle. 5. Improvements are not significant enough, even though the baseline is a very simple method. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. What is the relationship between batch size and risk set size? 
How is it possible for the risk set size to be larger than the batch size? 2. I believe that as the risk set size in the Cox model increases, the performance of methods should be closer, but this does not seem to be the case (based on lines 111 and 112). Could you explain this to me? 3. Cox proportional hazards regression (CPL) forms the basis of Cox regression, and the hazard function in both methods should be the same. Consequently, the c-index should be the same for both methods. Why is it different? 4. According to the paper, the transitivity ratio should be between 0 and 1. However, it is reported as infinity. Why is that? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: 1. How does this method handle issues related to time-varying hazards? 2. As the method only provides risk scores, it is unclear how it can be assessed using metrics like IBS and Calibration. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
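Regarding Weakness 4's request to explain how sorting networks work: a sorting network applies a fixed, input-independent sequence of compare-and-swap operators, and a differentiable variant replaces each hard comparator with a smooth blend. A minimal odd-even transposition sketch (our own illustration, not necessarily the exact relaxation used by Petersen et al. or in the paper):

```python
import numpy as np

def soft_cswap(a, b, beta=4.0):
    """Relaxed compare-and-swap: blends min/max via a sigmoid of the gap."""
    s = 1.0 / (1.0 + np.exp(-beta * (a - b)))  # ~1 if a > b, ~0 if a < b
    lo = (1 - s) * a + s * b
    hi = s * a + (1 - s) * b
    return lo, hi

def soft_odd_even_sort(x, beta=4.0):
    """Odd-even transposition network with relaxed comparators; as beta
    grows this approaches a hard ascending sort."""
    x = np.array(x, dtype=float)
    n = len(x)
    for layer in range(n):
        # alternate between even-indexed and odd-indexed adjacent pairs
        for i in range(layer % 2, n - 1, 2):
            x[i], x[i + 1] = soft_cswap(x[i], x[i + 1], beta)
    return x
```

On input-size limitations: an n-element odd-even transposition network needs n comparator layers regardless of the input values, so depth grows linearly with the list size, and every operation stays differentiable in the inputs.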
Rebuttal 1: Rebuttal: Thank you for your feedback. We value your insights and aim to address each concern you've raised. Here are our responses and clarifications: --- **Weakness 1 and Limitation 2 (Predicting the time of event vs ranking and calibration scores)** While estimating absolute timings for event occurrences is important in certain applications, many do not require this specific information. Decision-making through survival analysis frequently focuses on relative comparisons rather than absolute values. For instance, medical professionals may be more concerned with whether a particular observation will increase or decrease survival time rather than with the exact absolute value. Additionally, in some applications, the ranking itself is the primary objective of survival analysis. This may be evident in scenarios like assigning a limited discrete resource, such as a vaccine, to a subset of individuals most at risk. Importantly, we do not learn a hazard function as in CPL, but rather a ranking function. While it may be possible to recover a hazard function using adaptations of methods such as Breslow's estimator, we see this as future work. While we can’t assess commonly used calibration scores like IBS in the ranking setting, we now report Brier scores for the permutation ranking (see point model calibration in global response and attached PDF) and show that Diffsurv scores are better than the CPL baseline. --- **Weakness 2 (Meaningful ordering within top-k?)** The top-k task proposed in our paper intentionally does not focus on the order within the top-k, and we believe that this task has significant real-world clinical applications. For example, in scenarios where the objective is to identify the top-k highest-risk individuals for preventive measures (e.g., a vaccine with limited supply), the order within this group is irrelevant, as all selected individuals will be contacted. 
Therefore, we neither optimise for nor evaluate the meaningfulness of the predicted ordering of individuals within the top-k grouping. Instead, our results demonstrate that the models can accurately identify the top-k highest-risk individuals and that our method consistently outperforms the baselines in this context. --- **Weakness 3 (Necessity/motivation for proposed method)** Introducing differentiable sorting networks doesn't target the memory needs of large risk sets. By using these networks, our method integrates transitivity as an inductive bias into survival ranking, a novel approach to survival analysis with results at least competitive with established baselines. More importantly, Diffsurv establishes a framework that is also straightforward to build upon, such as in a top-k setting enabled by predicted permutation matrices from the differentiable sorting method. We note that established methods like CPL can't easily be extended to predict permutation matrices directly, and we now show (see the attached PDF in the global response with new calibration experiments) that even if we extend CPL to do so, it exhibits worse calibration in predicted rankings compared to our method. --- **Weakness 4 (Explanation of sorting networks)** We have now expanded our description of sorting networks in Appendix E and added more references to traditional sorting networks (in addition to the papers introducing differentiable sorting networks). --- **Weakness 5 & Question 3 (Comparison with baselines)** We refer to Cox Partial Likelihood (CPL) as a loss function, without a specific covariate interaction, $h_\theta(x_i)$. Cox Regression refers to the scenario where a linear interaction is maintained, while in our CPL baselines, $h_\theta$ is parameterised with a neural network, explaining the better C-index performance. CPL is the default loss function in many recent deep learning survival analyses [1, 2]. 
We train our CPL baselines with the same architectures as DeepSurv, tuning the hyperparameters extensively. Other baselines, like DeepHit, are not expected to outperform CPL in this setting due to the non-proportional hazards affecting the time-dependent C-index but not the standard C-index. See our response to Weakness 1 regarding time dependency. --- **Question 1 (Risk set size vs Batch size)** We follow Petersen et al., 2021 [3] and train the model in batches of risk sets, i.e. each minibatch has shape (minibatch size, risk set size, feature size). Thus, for minibatch size 8 and risk set size 32, 256 samples are in the batch. This is described in more detail in section D.1 in the appendix. --- **Question 2 (Performance with increasing risk set size)** The performance of both CPL and Diffsurv improves with larger risk set sizes, as shown in Table 1, but Diffsurv benefits more from larger risk set sizes. We believe this is because our method benefits more from the inherent transitivity in these larger risk sets due to the introduction of differentiable sorting methods. Please also see L275-281 in the manuscript for a more detailed explanation of this phenomenon and Table 2 with another experiment investigating this in more detail. --- **Question 4 (Transitivity ratio can't be infinity?)** We don't report a transitivity ratio of infinity. Instead, Table 2 shows that with an infinite number of quantiles (no discretisation as defined on L274), the transitivity ratio is .991. --- **Limitation 1 (Time-varying hazards)** We recognise the significance of models addressing time-varying hazards. While our method can be expanded in future work, our current focus is on fixed-time ranking and introducing differentiable sorting networks to risk analysis. --- [1] Buergel et al. 2022. ‘Metabolomic Profiles Predict Individual Multidisease Outcomes’. Nature Medicine. [2] Carr et al. 2021. 
‘Longitudinal Patient Stratification of Electronic Health Records with Flexible Adjustment for Clinical Outcomes’. PMLR. [3] Petersen et al. 2021. ‘Differentiable Sorting Networks for Scalable Sorting and Ranking Supervision’. ICML. --- Rebuttal Comment 1.1: Comment: Having thoroughly reviewed both the authors' response and the feedback from fellow reviewers, I've taken note of a notable observation regarding the C-index's decline in the context of top-k predictions, as highlighted in the response addressed to another reviewer. Given the acknowledged imperfections of the C-index as a metric, I believe it would be useful to compare the C-index values between the data-generating process in the semi-synthetic experiment and the model. It is important that the C-index associated with the model does not surpass that of the data-generating process. The concept of employing sorting methods is appealing; however, it's important to underscore that the entire concept is rooted in a score that isn't flawless (we recognize that the log-likelihood serves as the definitive score for survival analysis). In the interest of comprehensive analysis, it would be advantageous to conduct a study exploring the impact of varying the 'k' parameter in top-k predictions. Such an investigation could shed light on how diverse methods perform as 'k' increases. The internal mechanics governing the ordering within the top-k framework remain somewhat opaque, and I'm eager to know whether it yields an acceptable C-index within the top-k subset. Taking into account the insights gleaned from the authors' response, I am inclined to adjust my evaluation to lean towards a borderline rejection. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to consider our rebuttals, in particular noting the innovative use of differentiable sorting within survival analysis. 
We will address your remaining points and sincerely hope that our clarifications will further persuade you towards recommending acceptance of our paper. --- **C-index concerns** We agree that the C-index performance of the model should not exceed that of the data generation process. As such, we have calculated the C-index of the data generation process using the ground truth risk values and observed sampled times, and found a value of 0.980. This is significantly higher than the best performing model (Diffsurv) with a score of 0.943. This difference underscores the inherent challenge of the task, and even with a sophisticated model like Diffsurv, there remains room for improvement. The C-index is widely recognized as the de-facto metric in clinical practice for evaluating survival models, and most machine learning papers in survival analysis use variants of the C-index as the primary evaluation metric as well. While we acknowledge that the C-index, like any metric, has its limitations and trade-offs, its predominant use by practitioners as the primary evaluation metric for survival analysis indicates that optimising models for this metric can have a significant impact. More theoretically, Raykar et al. [1] summarises why we think that treating survival analysis as a ranking problem is reasonable: > In this paper, we show that classical survival analysis involving censored data can naturally be cast as a ranking problem. The concordance index (CI), which quantifies the quality of rankings, is the standard performance measure for model assessment in survival analysis. In contrast, the standard approach to learning the popular proportional hazard (PH) model is based on Cox’s partial likelihood. We devise two bounds on CI–one of which emerges directly from the properties of PH models–and optimise them directly. [1] Raykar et al. 2007. ‘On Ranking in Survival Analysis: Bounds on the Concordance Index’. NeurIPS. 
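Since the C-index is the metric at issue throughout this thread, a minimal sketch of how it is computed may be helpful. This is our own illustrative code (not the paper's implementation) for the uncensored case; the censored variant additionally restricts comparable pairs to those where the earlier time corresponds to an observed event:

```python
def c_index(event_times, pred_risk):
    """Concordance index, uncensored case: the fraction of comparable pairs
    in which the subject with the earlier event has the higher predicted risk.
    Ties in predicted risk count as half-concordant."""
    concordant = 0.0
    comparable = 0
    n = len(event_times)
    for i in range(n):
        for j in range(i + 1, n):
            if event_times[i] == event_times[j]:
                continue  # tied event times are not comparable here
            comparable += 1
            early, late = (i, j) if event_times[i] < event_times[j] else (j, i)
            if pred_risk[early] > pred_risk[late]:
                concordant += 1.0
            elif pred_risk[early] == pred_risk[late]:
                concordant += 0.5
    return concordant / comparable
```

A perfect risk ranking gives 1.0 and random scores give about 0.5, which is the scale on which the 0.980 (data-generating process) and 0.943 (Diffsurv) values above live.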
--- **Top-k evaluation** We acknowledge that evaluating the top-k prediction task for $k \neq 10$ will further underscore the robustness of our results, and have now evaluated all model variants for varying k on the tabular real-world data sets. As you can see from the table below, the results are largely consistent across datasets and k. For NWTCO, the top-25 metric is not informative due to the very high rate of censoring in this dataset. Note: Metric is Top K% where K is indicated in the column. **FLCHAIN** | Model | Top 5% | Top 10% | Top 25% | |:-|:-:|:-:|:-:| | Cox Partial Likelihood | .314 (.012) | .462 (.017) | .699 (.029) | | CPL-TopK (Variant I) | .375 (.041) | .468 (.015) | .818 (.031) | | CPL-TopK (Variant II) | .369 (.040) | .465 (.008) | .717 (.026) | | Diffsurv | .326 (.038) | .468 (.015) | .709 (.039) | | Diffsurv-TopK | .388 (.023) | .488 (.016) | .825 (.036) | **SUPPORT** | Model | Top 5% | Top 10% | Top 25% | |:-|:-:|:-:|:-:| | Cox Partial Likelihood | .255 (.027) | .286 (.014) | .403 (.014) | | CPL-TopK (Variant I) | .499 (.050) | .481 (.016) | .520 (.008) | | CPL-TopK (Variant II) | .489 (.043) | .475 (.018) | .523 (.012) | | Diffsurv | .255 (.047) | .304 (.027) | .409 (.019) | | Diffsurv-TopK | .553 (.049) | .521 (.023) | .560 (.009) | **METABRIC** | Model | Top 5% | Top 10% | Top 25% | |:-|:-:|:-:|:-:| | Cox Partial Likelihood | .179 (.063) | .247 (.054) | .534 (.046) | | CPL-TopK (Variant I) | .627 (.161) | .507 (.120) | .587 (.028) | | CPL-TopK (Variant II) | .640 (.137) | .500 (.063) | .589 (.046) | | Diffsurv | .263 (.061) | .325 (.064) | .555 (.028) | | Diffsurv-TopK | .587 (.165) | .547 (.113) | .643 (.020) | **NWTCO** | Model | Top 5% | Top 10% | Top 25% | |:-|:-:|:-:|:-:| | Cox Partial Likelihood | .311 (.100) | .400 (.070) | 1.000 (.000) | | CPL-TopK (Variant I) | .379 (.117) | .418 (.067) | 1.000 (.000) | | CPL-TopK (Variant II) | .358 (.109) | .416 (.055) | 1.000 (.000) | | Diffsurv | .337 (.082) | .390 (.067) | 1.000 (.000) | | 
Diffsurv-TopK | .384 (.124) | .416 (.056) | 1.000 (.000) | *Continued in the next comment...*
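For concreteness, one plausible reading of the Top-K% metric reported in the tables above (our assumption; the authors' exact definition, e.g. tie handling, may differ) is the overlap between the true and predicted top-k% risk sets:

```python
import numpy as np

def top_k_pct(true_risk, pred_risk, k_pct):
    """Fraction of the k% truly highest-risk individuals that also appear
    in the predicted top-k% set."""
    n = len(true_risk)
    k = max(1, round(n * k_pct / 100))
    true_top = set(np.argsort(true_risk)[-k:])
    pred_top = set(np.argsort(pred_risk)[-k:])
    return len(true_top & pred_top) / k
```

Under this reading, a value of 1.000 (as for NWTCO Top 25%) means the predicted top set recovers the true top set exactly.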
Rebuttal 1: Rebuttal: We sincerely appreciate the thorough and constructive feedback provided by all four reviewers. As suggested by multiple reviewers, we have further investigated the runtime (both theoretically and empirically) and calibration of our proposed survival ranking method. --- **Theoretical and empirical analysis of runtime** Diffsurv and the baseline models using CPL use the same neural network architectures. The operations relating to these networks are the main bottleneck for all methods, particularly for the imaging datasets. Only minor differences in runtime are due to the respective Cox Partial Likelihood (CPL) and differentiable sorting operations. *Theoretical Time Complexities*: First, we highlight the number of operations required for the differentiable sorting networks: Odd-Even as $\mathcal{O}(n^2)$ and Bitonic sorting networks as $\mathcal{O}(n \log^2 n)$ as discussed in Appendix C and [1]. The time complexity of our primary baseline, CPL, is $\mathcal{O}(n \log n)$ as it also requires sorting the event times. *Benchmarking Process*: We have conducted experiments to understand these differences (see results in the attached PDF). We measured the time taken for a forward and backward pass on an NVIDIA GeForce GTX 1080 Ti, utilising randomly generated logits. This experiment provides a more isolated measure of the compute times, allowing for a focused comparison between the methods. For Diffsurv, our timing includes the predicted permutation matrix computation via differentiable sorting networks and the subsequent masking (Equation 13) and binary cross-entropy (Equation 14). Importantly, we precomputed the potential permutation matrix ($Q_{p}$) generation off-GPU (same as during minibatch training), so it isn't included in this benchmark. *Results*: As documented in Table 1, our Diffsurv methods, especially Bitonic, have a noticeable edge over CPL methods regarding compute time across various batch and risk set sizes. 
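To make the quoted complexities concrete, here is a small sketch (ours, not the paper's code) of the hard, non-differentiable odd-even transposition network; its comparator count is exactly $n(n-1)/2$, matching the $\mathcal{O}(n^2)$ figure, whereas bitonic networks use $\mathcal{O}(n \log^2 n)$ comparators:

```python
def odd_even_layers(n):
    """Comparator layers of an odd-even transposition sorting network
    on n wires: n rounds, alternating between even and odd pairs."""
    return [[(i, i + 1) for i in range(r % 2, n - 1, 2)] for r in range(n)]

def apply_network(layers, values):
    """Run the (hard) network; a differentiable sorting network replaces
    each min/max comparator with a soft, sigmoid-based relaxation."""
    v = list(values)
    for layer in layers:
        for i, j in layer:
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
    return v
```

For n = 32 wires this gives 32·31/2 = 496 comparators, versus substantially fewer for a bitonic network at larger n.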
It's interesting to observe that as the risk set size goes up, CPL methods show a reduced compute time, while Diffsurv's Odd-Even variant sees a substantial increase, particularly when moving from risk set sizes of 32 to 128. *Optimization Possibilities*: It's crucial to note that current implementations of CPL methods compute over batches using a straightforward 'for' loop since they don't support batch-parallel computation. There's significant room for further optimisation here. *Overall Observations*: In comprehensive model runs, the difference in training times between Diffsurv and CPL is insignificant. The compute time is primarily driven by the model architecture. Specifically, we observed similar convergence times across methods, with the Diffsurv Bitonic variant being slightly quicker. We recognise the importance of computational efficiency in practical deployments and point to [1] for further analysis of differentiable sorting network compute time. --- **Model calibration** It is important to note that we do not learn a hazard function as in CPL, but rather a ranking function, and thus can’t assess predictive model calibration using commonly used scores like IBS. We acknowledge that model calibration in survival analysis models is essential for ensuring that the predicted probabilities of outcomes align closely with the true probabilities. We include a new analysis (see attached PDF), focussing on the calibration of predicted individual rankings. Specifically, we first qualitatively illustrate in Figure 1 in the attached PDF that discrete predicted ranks and ranking probabilities are accurately calibrated for a model with a small risk set size for the Diffsurv approach. To perform a quantitative comparison with baseline methods, we need to derive ranking probabilities for the CPL model. Based on prior work [2], we assume that the probability of correct pairwise ordering for the CPL adheres to the logistic function. 
We thus compute permutation matrices using differentiable sorting networks, employing predicted partial log hazards from a pretrained model as inputs and the logistic sigmoid function as the differentiable sorting operator. By subsequently calculating Brier scores for the rank probabilities in the predicted permutation matrices and ground truth permutations derived from the hazards in the survSVHN dataset, we analyse various combinations of batch size and risk set size. Our findings show that the Diffsurv models consistently exhibit the lowest Brier scores across all settings (Table 2 in the attached PDF). --- [1] Petersen, F., Borgelt, C., Kuehne, H., & Deussen, O. (2021, July). Differentiable sorting networks for scalable sorting and ranking supervision. In International Conference on Machine Learning (pp. 8546-8555). PMLR. [2] Steck, Harald, et al. "On ranking in survival analysis: Bounds on the concordance index." Advances in neural information processing systems 20 (2007). Pdf: /pdf/2238f5060b4d6eb131e2bc9e1bdb23f72c6c96dd.pdf
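As a simplified illustration of the calibration analysis described above, the following sketch scores pairwise ordering probabilities instead of full permutation matrices. The function names and the logistic link for the CPL baseline are our assumptions based on the rebuttal text:

```python
import numpy as np

def pairwise_rank_probs(log_hazards, steepness=1.0):
    """Logistic model of P(i outranks j) from predicted partial log hazards,
    as assumed for the CPL baseline: sigmoid(s_i - s_j)."""
    diff = log_hazards[:, None] - log_hazards[None, :]
    return 1.0 / (1.0 + np.exp(-steepness * diff))

def rank_brier(pred_probs, true_scores):
    """Brier score of pairwise ordering probabilities against the 0/1
    ground-truth ordering, averaged over off-diagonal pairs."""
    y = (true_scores[:, None] > true_scores[None, :]).astype(float)
    mask = ~np.eye(len(true_scores), dtype=bool)
    return float(np.mean((pred_probs[mask] - y[mask]) ** 2))
```

An uninformative predictor (all pairwise probabilities 0.5) scores 0.25, while well-separated, correctly ordered hazards score near 0.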
NeurIPS_2023_submissions_huggingface
2023
Global Convergence Analysis of Local SGD for Two-layer Neural Network without Overparameterization
Accept (poster)
Summary: This paper studies the global convergence of local SGD for one-hidden-layer convolutional neural networks. The authors present a solid theoretical understanding. They show that, without overparameterization and injected noise, local SGD has global convergence, via new proof techniques and new understanding. Strengths: The paper is clear and well-written. Understanding the optimization dynamics of gradient-based methods is a significant theoretical issue. This paper provides a solid theoretical analysis without heavy over-parameterization or injected noise, despite the high non-convexity that results. Theoretically, the authors provide a novel understanding of the training dynamics by dividing them into two phases: self-correction and convergence. Weaknesses: The article's significance is limited due to the restriction of input data to Gaussian distributions. However, this limitation is not a major concern since the problem itself is non-convex, even in this simplified scenario. Additionally, I have significant concerns regarding Assumption 1 about the target function, which I will discuss in detail in the Questions section. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: My main question pertains to Assumption 1 about the teacher target. It appears that this assumption suggests that stronger conditions on $\boldsymbol{a}^*$ are required for larger values of $k$. I am curious to know whether this requirement is essential or merely technical. In other words, if one uses a sufficiently large $k$ that contradicts this assumption, do the training dynamics in the last region (in the proof of Theorem 1) fundamentally change? The authors could attempt to demonstrate this aspect experimentally or theoretically. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitations of this article lie in the requirements for data and target Assumption. However, these limitations should not be grounds for rejection. Given our limited understanding of the training dynamics of nonconvex optimization, studying such simple setups can still provide valuable insights and contribute to the field. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for taking your precious time to review our paper and please see our responses to your questions. **1. My main question pertains to Assumption 1 about the teacher target. It appears that this assumption suggests that stronger conditions on $a^{*}$ are required for larger values of k. I am curious to know whether this requirement is essential or merely technical. In other words, if one uses a sufficiently large k that contradicts this assumption, do the training dynamics in the last region (in the proof of Theorem 1) fundamentally change? The authors could attempt to demonstrate this aspect experimentally or theoretically.** We thank the reviewer for this question. In Assumption 1, the condition on the upper bound of $(1^{\prime}a^*)^2$ is merely technical, and the condition in Eq.(5) is almost essential to show the global convergence with arbitrary initialization. Let $\alpha = \frac{(1^{\prime}a^*)^2}{k |a^*|^2}$ where $\alpha \in (0,1]$ roughly stands for the degree of sparsity of the vector $a^{*}$. Then Eq.(5) will hold if $$\alpha > (1-32(\pi-1)/k)^{-1}\frac{\pi +k-1}{(\pi-1)k}.$$ This right-hand side can be a constant in $(0,1)$ for proper $k$. For example, when $k \geq 320(\pi-1)^2$ (a technical choice as in Theorem 1), this condition will be satisfied as long as $\alpha > 0.48$. To show that Eq.(5) is essential, we have added an additional simulation with different $\alpha$, which indicates that this condition is nearly necessary for arbitrary initialization. In each trial, we randomly select an initial point from the initial region defined in our paper. Then we calculate the probability of convergence over 100 independent trials with $k=64$. From the following table (**Table 1 in the rebuttal PDF file**), we can see the probability of convergence becomes larger when $\alpha$ tends to 1. Specifically, the probability becomes 1 when $\alpha \geq 1/4$. 
| alpha | 1/64 | 1/32 | 1/16 | 1/8 | 1/4 | 1/2 | |-----------|------|------|------|------|-----|-----| | Local SGD | 0.54 | 0.65 | 0.74 | 0.87 | 1 | 1 | | SGD | 0.54 | 0.62 | 0.73 | 0.85 | 1 | 1 | It is challenging to theoretically demonstrate the last region in the proof of Theorem 1 if Eq.(5) does not hold because it would require a lower-bound analysis for this specific model. Experimentally, we plot the trajectories of SGD and local SGD that start from the region with $\phi_0$ and converge to the spurious local minima. It means that the self-correction process could fail if Eq.(5) does not hold for the ground truth. **2. The limitations of this article lie in the requirements for data and target Assumption. However, these limitations should not be grounds for rejection. Given our limited understanding of the training dynamics of nonconvex optimization, studying such simple setups can still provide valuable insights and contribute to the field.** We thank the reviewer for these insightful and encouraging comments. To the best of our knowledge, this is the first global convergence result for local SGD for such a nonconvex function which does not require overparameterization and NTK analysis. --- Rebuttal Comment 1.1: Comment: No further questions from me. I will keep my score as is.
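The sparsity quantity $\alpha$ and the sufficient condition quoted in the rebuttal above can be sketched numerically. This is a direct transcription of the stated formulas, with our own function names:

```python
import math

def sparsity_alpha(a_star):
    """alpha = (1' a*)^2 / (k * ||a*||^2), lying in (0, 1]; close to 1 when
    a* is dense with entries of a consistent sign."""
    k = len(a_star)
    return sum(a_star) ** 2 / (k * sum(x * x for x in a_star))

def alpha_threshold(k):
    """Right-hand side of the rebuttal's sufficient condition for Eq.(5):
    (1 - 32(pi-1)/k)^(-1) * (pi + k - 1) / ((pi - 1) k)."""
    return (math.pi + k - 1) / ((math.pi - 1) * k) / (1 - 32 * (math.pi - 1) / k)
```

As $k$ grows, the threshold approaches $1/(\pi-1) \approx 0.467$, so a constant $\alpha$ (i.e. a sufficiently dense $a^*$) suffices.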
Summary: This paper provides a rigorous theoretical justification of how local SGD (without the injection of noise) can find the global minima for a CNN without relying on NTK-type analysis (i.e., overparametrization) under the federated learning framework. Strengths: A. The contributions of the paper are clearly written. B. Nice simulation results which are consistent with the theoretical predictions. C. Like the part where the authors divide the landscape of the objective function into several regions. This type of analysis is not something that I can find in the current literature. Weaknesses: A. The writing is a little bit technical. It would be great if the authors can draw some cartoons of the dynamics of each layer for the global convergence. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: A. Why is the f (Z, w, a) CNN? It looks to me like a shallow fully connected neural network. What is the exact structure of $w \in \mathbb{R}^{d}$? B. In line 90, the paper says the algorithm starts from the random initialization from the same initial region as in [13, 63]. In line 203, it says arbitrary initialization. I was a bit confused while reading the paper on this point. And it becomes clear while reading lines 312-313. C. Just to clear my understanding, so your result is saying that the weights in the first layer converge (in polynomial time), and then the weights in the second layer converge afterward? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: This work has no negative societal impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for appreciating our work and please see our responses to your questions. **1. Why is the f (Z, w, a) CNN? It looks to me like a shallow fully connected neural network. What is the exact structure of $w$?** We apologize for the confusion. Please note that we follow the assumption and terminology in Du et al. [13] and Zhou et al. [63] to refer to the network as a CNN. We acknowledge that it is indeed a fully-connected network, since there is no overlap between patches in the CNN. In our paper, we elect to refer to the network as a CNN so that we can keep the terminology consistent with the most related works [13, 63]. We will add a sentence to explain this in the revised version of the paper. There is no special structure for $w$. **2. In line 90, the paper says the algorithm starts from the random initialization from the same initial region as in [13, 63]. In line 203, it says arbitrary initialization. I was a bit confused while reading the paper on this point. And it becomes clear while reading lines 312-313.** We thank the reviewer for pointing this out. We will modify the statements in lines 90 and 203 to make them more precise. We will mention explicitly that local SGD converges from arbitrary initialization except for a measure zero set, where the angle of the first layer between initialization and the global minimum is $\pi$. **3. Just to clear my understanding, so your result is saying that the weights in the first layer converge (in polynomial time), and then the weights in the second layer converge afterward?** We thank the reviewer for this insightful question. Lemmas 4 and 5 in Section 4.3 are only used to prove the convergence results; they do not say that the second layer converges after the first layer. The two layers actually converge simultaneously, without a particular order. We will add this remark in our revision. 
Our simulation results in **Figure 1 of rebuttal PDF file** indeed show that two layers almost converge simultaneously after the self-correction process. **References** [13] Simon Du, Jason Lee, Yuandong Tian, Aarti Singh, and Barnabas Poczos. Gradient descent learns one-hidden-layer cnn: Don’t be afraid of spurious local minima. In International Conference on Machine Learning, pages 1339–1348. PMLR, 2018. [63] Mo Zhou, Tianyi Liu, Yan Li, Dachao Lin, Enlu Zhou, and Tuo Zhao. Toward understanding the importance of noise in training neural networks. In International Conference on Machine Learning, pages 7594–7602. PMLR, 2019. --- Rebuttal Comment 1.1: Comment: No further questions from my side. I will keep my score as is
Summary: The paper asserts that local SGD achieves convergence to the global minimum in the presence of Gaussian input. The experimental setup involves a two-layer student model, where the ground truth is generated by a two-layer teacher model. The authors highlight their main contribution as a proof that does not necessitate noise injection. Strengths: The paper proved global convergence of local SGD without noise injection. Weaknesses: 1. The presentation of the results is unclear and the proof is difficult to follow. It would be beneficial for the authors to provide a more intuitive interpretation of the terms, lemmas, and equations. For instance: - Could the authors clarify the purpose and significance of the intermediate step $\check{v}_{t+1}$? - How does Theorem 1 demonstrate the self-correction of the second layer? 2. The realism of the assumptions made by the authors is questionable. For instance: - What is the probability that a randomly selected $a^*$ satisfies equation (5)? It appears to be very low in my estimation. 3. I'm uncertain why the authors chose to consider CNN instead of linear layers. It seems to add unnecessary confusion to the problem. 4. What sets "local" SGD apart from regular SGD? It would be helpful for the authors to clarify the distinctive aspects (in terms of proof) of "local" SGD in comparison to regular SGD. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Please review the suggested revisions for the mentioned weaknesses: Minor comments: In line 167, could you please clarify the distinction between L(w, a; Z) and l(w, a; Z)? In line 169, "wights" should be corrected to "weights". In line 215, "learning" should be changed to "learning rate". Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: I believe it is important for the authors to provide justification for the assumptions made. Without such justification, the scope and applicability of the proposed results may be limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper. **1. Could the authors clarify the purpose and significance of the intermediate step $\check{v}_{t+1}$?** The purpose of the intermediate step $\check{v}_{t+1}$ is to establish an exact recursion to characterize how the global angle $\phi_{t+1}$ changes in the training process. In the traditional analysis of local SGD, the descent inequality is obtained by taking the average of local weights and local gradients. However, this analysis is not applicable in our setting since we have to consider the angle $\phi_t$ instead of the inner product between $v_t$ and the ground truth. Due to the nonlinear relationship between the global angle $\phi_t$ and the local angle $\phi_t^i$, we cannot get the global dynamics of $\phi_t$ by averaging local dynamics of $\phi_t^i$ as provided in [13]. To address this challenge, we introduce the virtual sequence $\check{v}_{t+1}$ and use the corresponding angle $\check{\phi}_{t+1}$ as a proxy between $\phi_{t+1}$ and $\phi_t$. **2. How does Theorem 1 demonstrate the self-correction of the second layer?** The intermediate step $\check{v}_{t+1}$ appears in the term $H_t$ in (3), so its significance in the proof hinges on the recursion (3). First, the sign of $\lambda_t \cos \phi_t$ in (3) determines the direction of the first layer's update, and this observation motivates our analysis of the self-correction of the first layer in different regions. Second, $H_t$ is the discrepancy term, $M_{1,t}$ is a martingale difference sequence, and $M_{2,t}$ is the variance term of noise. Hence the recursion (3) actually resembles the descent inequality of local SGD analysis for general functions. It helps us show the linear speedup and reduce communication rounds in the convergence stage. Du et al. [13] proved that GD can converge to the global minimum if the initialization is in the attraction basin, that is $\{(v_0, a_0): \phi_0 < \pi/2, a_0^{\prime}a^* > 0\}$. 
In our paper, we consider local SGD with almost arbitrary initialization including the bad area with wrong signals such that $\phi_0 > \pi/ 2$ or $a_0^{\prime}a^* \leq 0$. The self-correction process means SGD or local SGD can correct the wrong signals from the initialization and enter the attraction basin in polynomial time. In Theorem 1, $\tau_a$ is the time stamp when the signal of the second layer turns positive (and the initial signal could be negative). Therefore, $\tau_a \leq O(\eta^{-1} \log k)$ means that the signal of the second layer can be corrected in $O(\eta^{-1} \log k)$ steps for any initialization. **3. The realism of the assumptions made by the authors is questionable. For instance: What is the probability that a randomly selected $a^{*}$ satisfies equation (5)? It appears to be very low in my estimation.** Since $a^*$ is a fixed vector in our paper, we respectfully disagree that one can use random selection to evaluate the condition Eq.(5). But it can be satisfied when $a^*$ is dense. For example, we let $\alpha = \frac{(1^{\prime}a^*)^2}{k |a^*|^2}$ where $\alpha \in (0,1]$ roughly stands for the degree of sparsity of the vector $a^{*}$. Then Eq.(5) will hold if $$\alpha > (1-32(\pi-1)/k)^{-1}\frac{\pi +k-1}{(\pi-1)k}.$$ This right-hand side can be a constant in $(0,1)$ for proper $k$. For example, when $k \geq 320(\pi-1)^2$ (a technical choice as in Theorem 1), this condition will be satisfied as long as $\alpha > 0.48$. We have added an additional simulation with different $\alpha$ to show that this condition is almost necessary to show the global convergence with arbitrary initialization. In each trial, we randomly select an initial point from the initial region defined in our paper. Then we calculate the probability of convergence over 100 independent trials with $k=64$. From **Table 1 in the rebuttal PDF file**, we can see the probability of convergence becomes larger when $\alpha$ tends to 1. 
Specifically, the probability becomes 1 when $\alpha > 1/8$. Even though the coefficient in our condition on $\alpha$ may require it to be greater than $1/4$ due to the technical relaxation, the results indicate that a constant $\alpha$ is necessary. **4. I'm uncertain why the authors chose to consider CNN instead of linear layers. It seems to add unnecessary confusion to the problem.** We apologize for the confusion. Please note that we follow the assumption and terminology in Du et al. [13] and Zhou et al. [63] to refer to the network as a CNN. We acknowledge that it is indeed a fully-connected network since there is no overlap between patches in the CNN. In our paper, we elect to refer to the network as CNN to keep the terminology consistent with the most related works [13, 63]. We will add a sentence to explain this in the revised version of the paper. **5. What sets "local" SGD apart from regular SGD? It would be helpful for the authors to clarify the distinctive aspects (in terms of proof) of "local" SGD in comparison to regular SGD.** We thank the reviewer for this suggestion. For regular SGD (that is, a single machine with $N=1$), the intermediate step $\check{v}_{t+1}$ equals $v_{t+1}$, so the discrepancy $H_t$ is zero. The distinctive aspect of local SGD in the proof is how to quantify and control the scale of the discrepancy term $H_t$. The traditional analysis in distributed nonconvex optimization relies on the bounded smoothness condition to bound the discrepancy term and attain linear speedup. Our paper carries out a more careful analysis because the loss function of the neural network model has an unbounded smoothness parameter. **6. In line 167, could you please clarify the distinction between $L(w, a; Z)$ and $\ell(w, a; Z)$?** We apologize for this. We first want to clarify that only $\ell(v,a;Z)$ is used in our paper, and there is no definition for $\ell(w,a;Z)$. 
The loss $\ell$ is defined in terms of $v$, and the loss $L$ is defined in terms of $w = v/\|v\|$. We will take care to address this in the paper. --- Rebuttal Comment 1.1: Title: Looking forward to rebuttal feedback Comment: Dear Reviewer coFq, Thank you for reviewing our paper again! We have posted our responses to your questions and concerns. We are wondering if you have a new evaluation of our paper after reading our responses. If you have other questions, we are very happy to discuss them with you. Best, Authors
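For readers unfamiliar with the algorithmic setup under discussion, here is a toy sketch of vanilla local SGD (our illustration on a noisy 1-D quadratic, not the paper's neural network objective): each of $N$ workers takes several local stochastic gradient steps, and then all workers average their parameters; with $N = 1$ the averaging is a no-op and the scheme reduces to regular SGD:

```python
import random

def local_sgd(stoch_grad, x0, workers=4, rounds=40, local_steps=8, lr=0.05):
    """Vanilla local SGD: independent local updates on each worker,
    followed by parameter averaging at each communication round."""
    xs = [x0] * workers
    for _ in range(rounds):
        for i in range(workers):
            for _ in range(local_steps):
                xs[i] -= lr * stoch_grad(xs[i])
        avg = sum(xs) / workers  # one communication round
        xs = [avg] * workers
    return xs[0]
```

In the paper's setting the loss is nonconvex, which is why the discrepancy term $H_t$ between the averaged iterate and the virtual sequence must be controlled; on this convex toy problem the iterates simply contract to the minimizer.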
Summary: This paper considers a federated learning setting for a one-hidden-layer convolutional neural network without overparameterization. It is proven that vanilla local SGD (where in each iteration each node updates the model by the local gradient and then all nodes synchronize the model parameters) can converge to a global minimum. Strengths: 1. The convergence analysis does not require over-parameterization (i.e., in the NTK region) and injected noise. 2. Based on a careful landscape study, the paper proposes a self-correction mechanism that ensures the algorithm enters a good landscape region in polynomial time. This explains why local SGD can converge in practice even though the landscape is not well-conditioned. Weaknesses: 1. The online learning assumption, i.e., each node samples data i.i.d. from the same distribution in each iteration, is crucial to the analysis of this paper, but it deviates from the practical settings, where each node computes its gradient according to a given set of local data. 2. Why is the proposed model a CNN? It seems a fully-connected network. 3. The weight-normalization technique twists the vanilla SGD. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see "weakness". Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I don't think the authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for taking your precious time to review our paper! Next, we provide detailed responses to your comments and questions. **1. The online learning assumption, i.e., each node samples data i.i.d. from the same distribution in each iteration, is crucial to the analysis of this paper, but it deviates from the practical settings, where each node computes its gradient according to a given set of local data.** Thank you for this important point. We agree that non-i.i.d. data is a relevant problem in federated learning. However, one of the points we want to make in the paper is that the global convergence of SGD for a one-hidden-layer neural network without overparameterization, even in the single-machine setting, was unknown prior to our paper. Specifically, due to the non-convexity, prior works only show the local convergence of GD [13] and the global convergence of perturbed GD by injecting noise [63]. Currently, there is no literature analyzing vanilla SGD for this neural network, let alone local SGD. Our paper proves the global convergence of local SGD under this setting, which is even more challenging than SGD since we have to handle the effects of local steps. We devote significant efforts to establishing the new recursive dynamics and developing new analysis techniques (namely, self-correction) to demonstrate the convergence to global minima with almost arbitrary initialization, despite the loss landscape being nonconvex. Note that the techniques for local SGD on general nonconvex problems can only guarantee finding a stationary point instead of a global minimum [56,62]. **2. Why is the proposed model a CNN? It seems a fully-connected network.** We apologize for the confusion. Please note that we follow the assumption and terminology in Du et al. [13] and Zhou et al. [63] to refer to the network as a CNN. We acknowledge that it is indeed a fully-connected network since there is no overlap between patches in the CNN. 
In our paper, we elect to refer to the network as CNN to keep the terminology consistent with the most related works [13, 63]. We will add a sentence to explain this in the revised version of the paper. **3. The weight-normalization technique twists the vanilla SGD.** The vanilla SGD in our paper refers to optimizing the loss function of the model defined in Eq.(2) after applying weight-normalization, which is a function of $v$ and $a$. Note that the description of Algorithm 1 on page 4 is indeed vanilla SGD because it applies stochastic gradient updates over $(v,a)$ but not $(w,a)$. Applying the weight normalization is consistent with Du et al. [13] and Zhou et al. [63]. The GD algorithm in Du et al. [13] and the perturbed GD in Zhou et al. [63] are also conducted under the same model. The original network $f(Z, w, a)$ has a positive-homogeneity issue: for any $c>0$, it holds $f(Z, cw, a/c) = f(Z, w, a)$. This property allows the network to be rescaled without changing the function value. It also implies that only the direction of the first layer $w$ will essentially affect the loss function. The weight-normalization technique makes the learning algorithm scaling-invariant and can stabilize the training process [46]. **References** [13] Simon Du, Jason Lee, Yuandong Tian, Aarti Singh, and Barnabas Poczos. Gradient descent learns one-hidden-layer cnn: Don’t be afraid of spurious local minima. In International Conference on Machine Learning, pages 1339–1348. PMLR, 2018. [46] Tim Salimans and Durk P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. Advances in neural information processing systems, 29, 2016. [56] Hao Yu, Rong Jin, and Sen Yang. On the linear speedup analysis of communication efficient momentum sgd for distributed non-convex optimization. arXiv preprint arXiv:1905.03817, 2019. [62] Fan Zhou and Guojing Cong. 
On the convergence properties of a k-step averaging stochastic gradient descent algorithm for nonconvex optimization. arXiv preprint arXiv:1708.01012, 2017. [63] Mo Zhou, Tianyi Liu, Yan Li, Dachao Lin, Enlu Zhou, and Tuo Zhao. Toward understanding the importance of noise in training neural networks. In International Conference on Machine Learning, pages 7594–7602. PMLR, 2019. --- Rebuttal Comment 1.1: Comment: Thanks to the author for the comprehensive response. I think this paper has made a considerable contribution even under the i.i.d. data assumption. For the CNN issue, I don't think it's appropriate to use a misleading term in order to "keep terminology consistent with the most related works". I also see two other reviewers that have raised this issue. My prior review "Weight normalization twists the vanilla SGD" is not entirely precise. What I wanted to convey is that the weight normalization re-parameterizes the model. Consequently, the analysis of SGD on the re-parameterized model diverges from that of the original model. The author argued that this re-parameterization was also adopted in existing works [13], [63]. However, [13] and [63] are pioneer works in this field, dating back at least 3 years. Rather than adhering to the unrealistic assumptions in these prior works, it would be more valuable to take efforts to relax these assumptions. (By the way, [46] was published 7 years ago. It was indeed an insightful work. However, the experiments were carried out on MNIST and Cifar-10. To my knowledge, practitioners nowadays do not use such type of weight normalization in deep networks.) I will keep the score as it is. --- Reply to Comment 1.1.1: Comment: Dear Reviewer p3Vw, Thanks for your response. We will fix the terminology of CNN in the revised version. However, we respectfully disagree with the statement that weight normalization is unrealistic. 
This technique was proposed to stabilize the training process of neural networks, which also inspired similar techniques in recent years (e.g., Year 2021~2023). For example, a well-known paper, Miyato et al. [1] generalized the weight normalization to spectral normalization and applied it to stabilize the training of the discriminator in Generative Adversarial Networks. Bjorck et al. [2] also used this technique in the training of deep reinforcement learning, which enables stable training with large modern architectures. Liu et al. [3] proposed a spectral-normalized neural Gaussian process that applies spectral normalization to hidden weights to enforce bi-Lipschitz smoothness in the hidden layer and improves the quality of hidden representations of neural networks. Please note that [2] and [3] were published very recently: [2] was published in NeurIPS 2021, and [3] was published in JMLR 2023. Therefore, we believe that weight normalization is still relevant in the training of deep neural networks. [1] Miyato, Takeru, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. "Spectral normalization for generative adversarial networks." ICLR 2018. [2] Bjorck, Nils, Carla P. Gomes, and Kilian Q. Weinberger. "Towards deeper deep reinforcement learning with spectral normalization." Advances in Neural Information Processing Systems 34 (2021): 8242-8255. [3] Liu, Jeremiah Zhe, Shreyas Padhy, Jie Ren, Zi Lin, Yeming Wen, Ghassen Jerfel, Zachary Nado, Jasper Snoek, Dustin Tran, and Balaji Lakshminarayanan. "A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness." J. Mach. Learn. Res. 24 (2023): 42-1. Best, Authors
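The positive-homogeneity property $f(Z, cw, a/c) = f(Z, w, a)$ discussed in this thread can be checked numerically. The ReLU activation and the toy shapes below are illustrative assumptions about the model in [13], not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(6, 10))                # toy "patches": 6 patches of dimension 10
w = rng.normal(size=10)                     # first-layer weights
a = rng.normal(size=6)                      # second-layer weights

def f(Z, w, a):
    """Toy one-hidden-layer network output; a ReLU activation is assumed."""
    return a @ np.maximum(Z @ w, 0.0)

c = 3.0
# positive homogeneity: rescaling (w, a) -> (c*w, a/c) leaves the output unchanged,
# so only the direction of w affects the loss; weight normalization parameterizes
# this direction explicitly via w = v / ||v||
print(np.isclose(f(Z, c * w, a / c), f(Z, w, a)))   # True
```

This is exactly the rescaling freedom that makes the loss depend only on the direction of $w$, which is what the weight-normalization reparameterization removes.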
Rebuttal 1: Rebuttal: Dear Reviewers: We appreciate your constructive reviews and would like to thank you for your time in helping us to improve our paper. We have responded to the reviews and questions individually and have added several new simulations in the PDF file. Here we provide a general response to address the two frequently asked questions by reviewers about the terminology and assumption in the paper: **Regarding the terminology of CNNs.** We apologize for the confusion. We have followed the assumption and terminology in Du et al. [13] and Zhou et al. [63] to refer to the network as a CNN. We acknowledge that it is indeed a fully-connected network since there is no overlap between patches in the CNN. In our paper, we elect to refer to the network as a CNN so that we can keep the terminology consistent with the most related works [13, 63]. We will add a sentence to explain this in the revised version of the paper. **Regarding the Assumption in Theorem 1.** To illustrate the necessity of condition (6) to guarantee the global convergence with almost arbitrary initialization, we calculate the probabilities of convergence to the global minimum under different values of $(1^{\prime}a^*)^2/\|a^*\|^2$ in the new simulations. **Table 1 in the rebuttal PDF file** shows that the condition where the probability of converging to the global minimum reaches 1 is very close to condition (6). Therefore, we believe (6) is nearly essential to the self-correction mechanism as illustrated in Theorem 1. [13] Simon Du, Jason Lee, Yuandong Tian, Aarti Singh, and Barnabas Poczos. Gradient descent learns one-hidden-layer cnn: Don’t be afraid of spurious local minima. In International Conference on Machine Learning, pages 1339–1348. PMLR, 2018. [63] Mo Zhou, Tianyi Liu, Yan Li, Dachao Lin, Enlu Zhou, and Tuo Zhao. Toward understanding the importance of noise in training neural networks. In International Conference on Machine Learning, pages 7594–7602. PMLR, 2019. 
Pdf: /pdf/bd5f1b3cd3e0bd2af8a943efff9125e2fcdbbd71.pdf
NeurIPS_2023_submissions_huggingface
2023
Learning Provably Robust Estimators for Inverse Problems via Jittering
Accept (poster)
Summary: This work considers introducing jittering to inverse problems for the benefit of robustness to the $\ell_2$ worst case. In this work, some analytical results are provided that prove robust estimation under some assumptions, which I think are reasonable. Then, empirical experiments verify the effectiveness of the robustness improvement at a relatively low computational cost compared to adversarial training. Strengths: 1. The motivation of this work is quite nice, towards a provably robust estimator. 2. The assumptions are reasonable. 3. Existing results show somewhat promising performance. Weaknesses: 1. Though some analytical results are presented, the actual realization and numerical implementation of the algorithm are not articulated well, I think; e.g., the assumed energy levels of signals and noise are normally unknown, so more details and algorithmic implementation discussions could be presented in a more systematic way, rather than just being mentioned in fragments in Section 4. 2. Several experiments are conducted, but the presented results are limited in each experiment; e.g., cherry-picking is possible if only a few images are presented. 3. I read some parts of the paper a few times, but still found it difficult to relate all the analytical results to the experiments in practical applications. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Some statements lack clarity, e.g., ‘we study jittering, a simple regularization technique that adds noise during training as a robustness-enhancing technique for inverse problems’. As a regularization technique, I would expect a concrete implementation or algorithmic description to make the paper more readable to general readers. (Similar to bullet 3 in *Weaknesses*); 2. In lines 90-94 on page 3, [1][2] are mentioned. I quickly checked [1], and it seems that it already applied jittering to inverse problems. 
It would be nice to provide more details on the difference and relation to [1]. Though [1] has been cited, no further explanations or comparisons are provided later on in this work. As far as I have understood, this work is posed as a step forward from [1] with some analytical results? This, I believe, should be clarified. 3. The paper mentions the randomized smoothing method in Section 2 and also in the appendix. I understand that RS is significantly different from this work. In RS, many samples are required for evaluation. I'm wondering whether, given the expectation over ‘w’ as in (2), there should also be procedures requiring sampling, either for training, for evaluation, or for satisfying the assumptions. Could you please elaborate on this part? (Similar to bullet 3 in *Weaknesses*) 4. In the analytical results of this work, $\sigma_c$ and $\sigma_z$ are essential. Could you summarize or provide a general procedure for estimating or verifying these variables in simulation when they are unknown in practical settings? (Similar to bullet 3 in *Weaknesses*), and the same applies to Section 3.3. 5. In Section 4, besides bullet 2 in *Weaknesses*, the authors could provide some synthetic experiments to show and explain more detailed results or analysis, as the variables mentioned in the assumptions can then be known explicitly, which would also help readability. 6. It seems that the hyper-parameter tuning can be time-consuming if too many searches are needed to produce good results. 7. One major claimed benefit is efficiency; however, only a rough Table 1 is provided, and a complexity analysis is lacking. It would also be nice to mention the computation involved in finding the hyper-parameters. I roughly checked the code defining the “param” argument, and it seems that it just uses a pre-given energy value and heuristically sets up the other relevant params to run the training. (Similar to bullet 3 in *Weaknesses*). 
For instance, though the compared adversarial training is more expensive, it doesn't need many prerequisites or much tuning and can be flexibly applied in various practical tasks. If this paper can more clearly connect the theoretical results and the implementation, and the empirical experiments can be made more practically feasible and convincing, this work would deserve an improved score and I would be willing to increase my evaluation. [1] M. Genzel, J. Macdonald, and M. März. Solving Inverse Problems With Deep Neural Networks - Robustness Included. IEEE TPAMI 2022 [2] K. V. Gandikota, P. Chandramouli, and M. Moeller. On adversarial robustness of deep image deblurring. In IEEE International Conference on Image Processing, 2022. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors should verify how conveniently and efficiently the proposed method can be applied in practical settings and explain some pipelines, in line with the claimed provability of the robust estimator. It is unclear to me how promising it could be in general practical applications, especially considering the claimed analytical aspects. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the feedback and for noting that the motivation is quite nice, the assumptions are reasonable, and the results show promising performance. Comments on the weaknesses: - **Relation of theory and experiment:** The theory gives insights into worst-case robustness and justifies using jittering for obtaining optimal robust estimators. Our theory even predicts the optimal choice of jittering noise level for Gaussian denoising well. In addition, in practice, the optimal jittering level can be determined via a single-parameter hyperparameter search. - **Experimental results:** We perform experiments for several inverse problems (denoising, deconvolution and compressive sensing) and evaluate the methods on large datasets. Therefore the results are not cherry-picked, as we evaluate on thousands of examples (see 4.1. Problem setup). Perhaps this impression can arise since we show only a few example images. Those images are chosen randomly to visualize reconstructions and to illustrate that jittering yields smoother reconstructions. During the rebuttal, we also quantified the smoothness using the TV norm (see the attached pdf) to be more precise. Regarding the questions: - **Question 1, implementation details on jittering:** Thanks for your feedback on this. It is indeed important to be specific on the implementation details of training with jittering. To fix this, we added the following lines in the paragraph *Training methods* in section 4.1: “Jittering is practically implemented via performing the SGD update rule $\theta \leftarrow \theta - \frac{\eta}{n} \sum_{i=1}^n \nabla_{\theta} \| f_{\theta} (y_i + w_i) - x_i \|^2$. For jittering, the network output is calculated on the noisy input $y_i + w_i$, instead of $y_i$ (standard training) or $y_i + e_i$ (adversarial training). To approximate the expectation, we draw independent jittering noise samples $w_i$ in each iteration of SGD.” 
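This jittered update rule can be sketched concretely. In the sketch below, a linear estimator $f_\theta(y) = \Theta y$ on a toy denoising problem stands in for the paper's U-net, and the dimensions, noise levels, and full-batch gradient steps are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lr, sigma_w = 8, 4, 0.02, 0.1

# toy denoising batch: clean signals x_i and noisy observations y_i (A = I)
X = rng.normal(size=(n, d))
Y = X + 0.05 * rng.normal(size=(n, d))

Theta = np.zeros((d, d))                    # linear estimator f(y) = Theta @ y
for _ in range(2000):
    W = sigma_w * rng.normal(size=(n, d))   # fresh jittering noise every iteration
    Yj = Y + W                              # the input is y_i + w_i, not y_i
    R = Yj @ Theta.T - X                    # residuals f(y_i + w_i) - x_i
    grad = 2.0 / n * R.T @ Yj               # grad of (1/n) sum ||f(y_i + w_i) - x_i||^2
    Theta -= lr * grad
```

Because fresh noise is drawn each iteration, the stochastic updates approximate the expectation over $w$ without any extra sampling loop; on this toy problem `Theta` converges to a slightly shrunken identity.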
- **Question 2, related work on jittering for inverse problems [1,2]:** The focus of the paper [1] is to obtain high-quality CT reconstruction. While the paper [1] mentions that it trains with jittering and that this improves robustness, there is no systematic study on the effectiveness of jittering, no comparison to robust training, and no theory. In order to clarify the relation to [1,2], we revised the paragraph on jittering for enhancing robustness in inverse problems in our related work section accordingly. - **Question 3, regarding sampling and RS**: Randomized smoothing (RS) and jittering both approximate expectations w.r.t. Gaussian random variables. However, since RS is performed during evaluation (fixed network), many samples are required to approximate this expectation. By contrast, jittering is performed during training. While every sample is revisited multiple times during multiple epochs of training, it is sufficient to sample new noise at each iteration of SGD. We will further discuss this in the paper. - **Question 4 and 5, estimating signal energies in the experiments**: In practical experiments $\sigma_c$ and $\sigma_z$ are not actually required, since the optimal jittering level for robustness can simply be found via hyperparameter search (see also above). If needed (e.g. for testing the theory), they can be calculated as follows: *Signal energy*: Given a dataset of $n$ images $x_i$, the signal energy $\sigma_c^2 = \mathbb{E}[\|x\|^2]$ can be estimated via $\frac{1}{n} \sum_{i=1}^n \| x_i \|^2$. *Noise level*: If the forward operator $A$ for the inverse problem $y = A x + z$ is known, the noise level can be estimated via $\sigma_z^2 = \mathbb{E}[\|y - A x \|^2]$. If not, a linear reconstruction operator can be trained, and $\sigma_z^2 d/m$ can be obtained from its standard risk (see e.g. the formula in 3.2.2. Robustness accuracy trade-off). 
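The two estimation formulas for the signal energy and noise level translate directly into code; the Gaussian toy signals, forward operator, and noise scale below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 1000, 32, 16

X = rng.normal(size=(n, d))                          # dataset of n signals x_i
A = rng.normal(size=(m, d)) / np.sqrt(m)             # known forward operator
Z = 0.3 * rng.normal(size=(n, m))                    # measurement noise
Y = X @ A.T + Z                                      # y_i = A x_i + z_i

sigma_c2 = np.mean(np.sum(X**2, axis=1))             # E[||x||^2] ~ (1/n) sum ||x_i||^2
sigma_z2 = np.mean(np.sum((Y - X @ A.T)**2, axis=1)) # E[||y - A x||^2]
```

Here `sigma_c2` concentrates around $d = 32$ and `sigma_z2` around $m \cdot 0.3^2 = 1.44$, matching the population quantities the formulas target.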
- **Question 6 and 7, cost of hyperparameter search:** The hyperparameter search is computationally inexpensive, since it is only a search over a single scalar variable. Moreover, our experiments show that only relatively few epochs are needed to find the optimal jittering noise levels (e.g. 30 epochs for denoising natural images). The actual training of the networks, however, needs a lot more epochs for convergence (600 for denoising). Hence, the cost of the hyperparameter search is a small fraction of the training cost and relatively inexpensive. Moreover, adversarial training requires $N$ times more GPU hours compared to standard training, where $N$ is the number of iterations for seeking the worst-case examples during training. For the networks depicted in the complexity analysis $N=3$, so adversarial training is three times more expensive (section D.4). Due to this scaling, hyperparameter search for jittering is significantly more efficient. Finally, the pre-given energy level can be easily computed from the dataset, as described above. It is used in the code to scale the relative perturbation levels ($\epsilon^2 / \mathbb{E}[\|x\|^2]$ in the paper). We hope that these clarifications and changes better connect the theoretical results and the implementation, and we hope we have clarified that the experiments are practically feasible. Thanks for being open to increasing your score. Please let us know if you have any additional comments and questions. --- Rebuttal Comment 1.1: Title: Checking in Comment: Thanks a lot again for your review and feedback. We hope we have addressed your concerns. Please let us know if you have any remaining concerns and questions.
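The single-scalar search described above can be sketched for a linear estimator, where training with jittering has a closed ridge-type form and the worst-case risk is approximated by projected gradient ascent. Everything here (data model, grid values, ascent settings) is an illustrative assumption, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, eps = 200, 8, 0.5
X = rng.normal(size=(n, d))                  # toy denoising data
Y = X + 0.2 * rng.normal(size=(n, d))

def fit_jittering(sigma_w):
    """Minimizer of the expected jittered loss for a linear estimator (ridge form)."""
    G = Y.T @ Y + n * sigma_w**2 * np.eye(d)
    return np.linalg.solve(G, Y.T @ X).T     # Theta with f(y) = Theta @ y

def robust_risk(Theta, steps=25, lr=0.5):
    """Empirical worst-case risk via projected gradient ascent over ||e|| <= eps."""
    E = np.zeros_like(Y)
    for _ in range(steps):
        R = (Y + E) @ Theta.T - X            # residuals under perturbation e
        E += lr * (R @ Theta)                # ascent direction (Theta^T r per sample)
        norms = np.maximum(np.linalg.norm(E, axis=1, keepdims=True), 1e-12)
        E *= np.minimum(1.0, eps / norms)    # project each e_i onto the eps-ball
    return np.mean(np.sum(((Y + E) @ Theta.T - X) ** 2, axis=1))

grid = [0.0, 0.1, 0.2, 0.4, 0.8]             # the single scalar to search over
best = min(grid, key=lambda s: robust_risk(fit_jittering(s)))
```

Each grid point costs one (cheap) training run plus a worst-case evaluation, which is the sense in which the one-dimensional search is a small fraction of the cost of full adversarial training.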
Summary: This work studies the design of estimators for the solutions of inverse problems that are adversarially robust, in the sense that they minimize the maximum MSE after being contaminated with an additive and L2 bounded perturbation. The authors show that, for linear estimators and for signals lying on linear subspaces, the optimal solution (i.e. optimally robust) is attained by training the estimator with “Jiterring”, i.e. by minimizing a loss over the randomly perturbed inputs. For more general inverse problems (when the forward operator is not an identity), the authors show that the resulting estimator from Jittering is not optimally robust in general, but their difference can be small. Moreover, they show numerically that this gap is small in practice, thus leading to similar performance (as that obtained by adversarial training) alas with significant increase in computational efficiency. Strengths: - Neat and elegant idea. - Very clear presentation - this was a pleasure to read. - The results are novel and interesting. Weaknesses: - The results are somewhat limited, and hold for linear estimators with potentially overly simple signal models. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. As the authors mention, estimating the robust risk is non-trivial, since this involves the optimization of a non-convex/non-concave problem for non-convex functions $f(\cdot)$ (as in the case of U-net). In light of this, the plots in Fig. 1 are not “exactly” the robust risk, but rather numerical approximations to it (except for the linear case). The authors might wish to clarify this in the caption and/or the description of Fig. 1 in the text. 2. The role of the symmetry of H is not completely clear to me. When describing their results colloquially, the authors mention that, broadly speaking, training linear estimators with Jittering provides an estimator with minimal robust risk if the signal lies on a subspace. 
However, as written, the formal statement of Theorem 3.1 requires the estimator not only to be linear but also symmetric. So, is symmetry required for Thm 3.1 to hold, or does symmetry arise as a property of the optimal estimator? I have a related question for Conjecture 3.3 - is symmetry here required? In particular, $H = V\Sigma W^T$ need not be symmetric, as written. 3. Their results are stated for $d\to \infty$. It is unclear why this is needed, or how the other dimensions and parameters scale with d. 4. In Eq (3), do the authors mean to require $\lambda \geq \max_i \sigma_i^2$? 5. Can the authors comment some more on the choice of norms? More precisely, the authors have focused on the MSE as loss (L2) as well as L2 bounded perturbations. I wonder to what extent their conclusions might extend to other norms. Moreover, it is possible that for different choices, some of the limitations in the analysis (e.g. in proving their conjecture) might be resolved. In the same vein, and because of their signal model, I wonder if there are any connections to the work of [Awasthi, Pranjal, et al. "Adversarially robust low dimensional representations." COLT 2021], that the authors could leverage. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Some limitations are commented on throughout the text. Might be nice to stress or clarify them further. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback and are very pleased to hear the reviewer enjoyed reading our paper, that the results are novel and interesting, and that the idea is neat and elegant. - **Question 1, concerning robust risk vs an approximation of the robust risk**: Thanks for the suggestion, we added "empirical robust risk" to the labels of the plots and included “To approximate the optimal worst-case robust risk, i.e., the minimizer of $R_{\epsilon}$, adversarial training is performed.” in the caption to clarify that we empirically approximate the robust risk, since we can't compute it exactly. - **Question 2, the role of symmetry of $H$**: That is a very good point, it turns out symmetry of $H$ is not required for the main theorem, nor should it be part of the conjecture for general linear inverse problems (section 3.3). In the submitted version, we assumed symmetry of $H$. During the rebuttal, we revisited the proof of our main theorem and were able to waive the symmetry assumption completely. This only required minor changes in the proof. Specifically, in the proof of the lower bound in section A.1, we now employ an SVD $H=V \Sigma W^T$, and show that cross-terms between $V$ and $W$ in the robust risk can be lower-bounded suitably such that one can proceed as before. - **Question 3, assuming $d \to \infty$**: Our results exactly characterize the optimal estimator for the asymptotic case, and in the asymptotic case there is for example a sharp transition where the estimator maps to zero when the noise energy is equal to the signal energy. For very small d, there is no such sharp transition since the associated random variables do not concentrate well. That said, for moderately large values of d (larger than a constant) our results hold approximately with high probability by utilizing results from high-dimensional probability. We decided to state asymptotic results to provide cleaner expressions. 
Indeed, our simulations for the linear model show that the analytical formula is already accurate for moderate dimensions (d > 25). _Regarding the other variables_: In the limit of the subspace dimension $d \to \infty$, the embedding dimension $n > d$ is treated similarly. The energies $\sigma_c^2$ and $\sigma_z^2$, however, do not depend on the dimensions. - **Question 4, question on Eq (3)**: $\lambda \geq \sigma_i^2$ is used as an abbreviation of $\lambda \geq \max_i \sigma_i^2$. We included the maximum in the revision for clarification. - **Question 5, considering other norms:** We consider the $\ell_2$-norm for the loss function and perturbations, since they are most relevant for inverse problems (the $\ell_2$-norm measures signal energy). We would certainly be interested in extending our theory to other norms. However, our results do not generalize in a straightforward manner to other norms. Specifically, it is unclear how to generalize Lemma 1 of the appendix, which reformulates the robust risk optimization problem to a tractable one with respect to a single variable. However, while it's unclear how to obtain a precise characterization, some more general predictions can be made. As an example, the transition to the zero-estimator predicted by Theorem 1 also exists for $\ell_p$-type perturbations ($p \geq 2$). From our understanding, the work on robust PCA the reviewer pointed us to studies efficient approximation algorithms for finding $\ell_p$-adversarially robust subspaces (but not actually characterizing the optimal ones). Moreover, finding these subspaces consists of finding optimal projection matrices onto subspaces, whereas we consider general reconstruction matrices for solving linear inverse problems. The paper is very interesting and we'll investigate further connections. We appreciate the reference and will add a discussion of this work to the related work section of our paper. 
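As a quick numerical illustration of the SVD $H = V \Sigma W^T$ used in the revised proof, which exists for any real matrix and requires no symmetry of $H$, here is a minimal, purely illustrative check on a random non-symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.normal(size=(5, 5))                 # a generic, non-symmetric matrix

# H = V @ diag(s) @ W^T with orthogonal V, W; numpy returns W^T directly
V, s, Wt = np.linalg.svd(H)

# the decomposition holds regardless of symmetry of H
assert np.allclose(H, V @ np.diag(s) @ Wt)
assert np.allclose(V @ V.T, np.eye(5)) and np.allclose(Wt @ Wt.T, np.eye(5))
```

This is the factorization that replaces the eigendecomposition available in the symmetric case, at the cost of tracking the cross-terms between $V$ and $W$ in the bound.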
We really appreciate your feedback, which has improved the paper; in particular, it led us to generalize our results to non-symmetric H. We hope this addresses your concerns. If so, we would appreciate it if you could consider increasing your score. Of course we are happy to address any further questions or comments you may have. --- Rebuttal Comment 1.1: Title: Thank you for your responses Comment: I thank the authors for carefully considering and addressing my comments and questions. I've decided to increase my score to 6 - my main reservation about increasing the score further is the limitation of their results, which address only very simple cases. If accepted, I encourage the authors to make their assumptions clear (remove the unnecessary assumption on symmetry, state clearly that the results hold asymptotically, and provide the extended results on approximations with high probability if possible).
Summary: The authors study the effectiveness of jittering in the setting of inverse problems, specifically considering denoising, compressive sensing, and deconvolution problems. Jittering is a well-known regularization technique for classification problems. The authors prove in the linear setting that the estimator learned with jittering is the optimal robust estimator for Gaussian subspace denoising. In addition, the authors demonstrate experimentally that jittering is effective at improving the robustness of compressive sensing and image deconvolution, even though it yields suboptimal estimates. Strengths: 1). The theoretical contribution is straightforward but makes a strong contribution in the denoising setting. Even though the authors studied a linear model, they were able to provide insights into the non-linear model (U-net) that was studied empirically. Refer to Figure 3 (Middle Plot): Cor. 3.2 can accurately predict the optimal jittering level. 2). The authors extend the theoretical contribution to general inverse problems, Section 3.3. 3). The authors conduct a thorough investigation of the effectiveness of jittering, empirically and theoretically, for denoising inverse problems. Weaknesses: 1). The motivation of the paper seems a bit unclear; it appears to be motivated by results from classification problems rather than image reconstruction results in inverse problems. This approach seems slightly ad hoc because the authors chose a popular regularization technique for classification problems and tested whether it was effective for inverse problems without stating their intuition for it being effective or being of interest to the community. 2). Section 4.2 lacks some metrics; specifically, Figure 4 in Section 4.2 states "Jittering yields robust estimators, but at the same time yields smoother reconstructions.". This claim seems pretty strong considering it is not reported over a large test set with appropriate metrics for image reconstruction. 3). 
Section 3.3 appears disconnected from Section 4; it's unclear whether the linear model studied in Section 3.3 is representative of the behavior of the non-linear model utilized in Section 4. Minor Weakness - Typo in Figure 3: "The jittering estimators estimators are similarly robust as adversarial training (Figure 1), but attain lower standard risks (right panel)" - repeated word "estimators estimators". Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1). Could you provide metrics for a test set confirming that jittering yields smoother reconstructions? 2). Could the authors justify why they studied a linear model instead of a non-linear model, one more similar to a U-net? 3). Could the authors explain the connection of Section 3.3 to the experiments in Section 4? Does the linear model exhibit enough similar behavior to the non-linear model to justify it as an appropriate model for general linear inverse problems? If so, could you please provide some numerical evidence? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I do not see any negative societal impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback and for noting that we make a strong contribution for denoising, and that even though 'they study a linear model they were able to provide insights in the non-linear model (U-net) that was studied empirically'. Comments on the weaknesses and answers to the questions: **Weakness (1), clarifying the motivation of the paper.** The main motivation for the paper is to understand whether neural networks for signal/image reconstruction can be trained efficiently to be worst-case robust. To address this question, we first characterize the worst-case optimal estimator for a linear setup. This is the main technical contribution. While the worst-case optimal estimator is intuitive, showing optimality is non-trivial. We then investigate whether jittering, a simple regularization technique that adds isotropic Gaussian noise during training, is effective for learning worst-case robust estimators for inverse problems, motivated by an ongoing discussion in the community on whether jittering is effective or not specifically for signal/image reconstruction problems. Specifically, Genzel et al. (2022) finds that jittering is effective for MRI and CT, whereas Gandikota et al. (2022) report suboptimality for deconvolution. Our work shows that jittering is provably effective for inverse problems, which we consider to be of great interest for the community given that it is computationally so much cheaper than adversarial training. We'll clarify this motivation in the paper. **Weakness (2) and Question (1), on metrics capturing the smoothing effect:** During the rebuttal, we measured the smoothness using the total variation (TV) norm of reconstructed images for the deconvolution problem (same networks as in Figure 1; 2k test images). 
The results are in the attached pdf and confirm our observation that jittering yields smoother reconstructions: The TV-norm of reconstructions using networks trained via jittering is generally smaller than those of standard or adversarial training. **Question (2), why we develop theory for a linear model and not for a U-net**: We certainly would like to extend our theory to deep neural networks, in particular a U-net or the like, but right now the theoretical tools for characterizing optimal network estimators of the complexity of U-nets do not exist to the best of our knowledge. The vast majority of results for neural networks are actually for linear networks, and for networks in the neural-tangent-regime that behave like an associated linear model. Our setup is already very difficult to treat analytically, since we have a minimization followed by an expectation followed by a maximization. That said, as we demonstrate empirically, the insights carry over to neural networks and serve as an important first step towards analyzing nonlinear networks. Indeed, many theoretical results that are emerging (NTK, power method analysis in first few iterations) rely on an understanding of appropriate linear networks for their nonlinear analysis. We hope to build upon this first step in future work. **Weakness (3) and Question (3), on the connection of 3.3 (theory for a linear model) to section 4 (experimental results):** While our theory is for a linear model, our experiments are for real-world imaging problems. We find a strong agreement between the theoretical results for the linear model and the real-world simulations for the U-net: - Jittering empirically yields optimal worst-case robust U-Net denoisers, as proven for the linear estimator (Fig. 1). - Jittering can be suboptimal for inverse problems beyond denoising, which is explained by our theory for general linear inverse problems in Section 3.3 (Fig. 2 and Fig. 1).
- The optimal jittering noise levels for U-net-denoisers are accurately predicted by theory (Fig. 3). - Adversarial training of U-nets even reproduces the theoretically predicted extreme behavior for large perturbations (obtaining U-nets mapping everything to zero, Fig. 8). Please let us know if that addressed your concerns, if the clarifications change your final score, and if you have any further questions or comments. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I thank the authors for carefully considering and addressing my questions/concerns. After reading the rebuttal, I would like to increase my score to a 7. I believe the paper is a strong contribution to the community of researchers interested in inverse problems.
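As a toy illustration of the linear analysis discussed in this thread (our own sketch, not the paper's code): for a linear least-squares model, training with isotropic Gaussian input jittering is a classical equivalent of ridge (L2) regularization, which is one way to see why jittering shrinks, and thereby smooths, the learned estimator.

```python
import numpy as np

# Known classical result (cf. Bishop, 1995): least-squares training with
# isotropic Gaussian input jittering of level sigma is equivalent to ridge
# regression with penalty n * sigma^2. Minimal scalar demonstration; this
# is an illustration under our own toy setup, not the paper's.
rng = np.random.default_rng(0)
n, sigma = 200_000, 0.5
x = rng.standard_normal(n)
y = 2.0 * x  # clean linear targets, true weight w = 2

# Least-squares fit on jittered inputs
x_jit = x + sigma * rng.standard_normal(n)
w_jitter = (x_jit @ y) / (x_jit @ x_jit)

# Closed-form ridge solution with penalty n * sigma^2
w_ridge = (x @ y) / (x @ x + n * sigma**2)

# Both estimates shrink below the noiseless solution w = 2.
print(w_jitter, w_ridge)
```

Here the jittered fit and the ridge solution agree up to sampling error, and both are strictly smaller than the unregularized weight, mirroring the smoothing effect measured via the TV norm above.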
Rebuttal 1: Rebuttal: We thank the reviewers for their helpful feedback. In this post, we address one of reviewer *koTN*'s questions. **Metrics quantifying the smoothing effect**: During the rebuttal, we measured the total variation (TV) norm of reconstructed images for the deconvolution task over a large test dataset. The results are presented in the attached pdf and confirm our observation that jittering yields smoother reconstructions compared to adversarial and standard training. Pdf: /pdf/145da825d4d4263c2b5fcf27e193fcb591b76337.pdf
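A minimal sketch of the total-variation metric used above (the rebuttal does not specify which TV variant was computed; this assumes the common anisotropic definition over neighboring pixel differences):

```python
import numpy as np

def tv_norm(img):
    """Anisotropic total variation of a 2-D image: the sum of absolute
    horizontal and vertical neighbor differences. Smoother reconstructions
    have a lower TV norm, which is the sense of 'smoother' measured here."""
    img = np.asarray(img, dtype=float)
    return np.abs(np.diff(img, axis=1)).sum() + np.abs(np.diff(img, axis=0)).sum()

# A noisy image has a higher TV norm than its smooth counterpart.
rng = np.random.default_rng(0)
smooth = np.ones((16, 16))
noisy = smooth + 0.1 * rng.standard_normal((16, 16))
```

Comparing `tv_norm` averaged over a test set of reconstructions is one way to reproduce the kind of comparison reported in the attached pdf.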
NeurIPS_2023_submissions_huggingface
2023
Diffusion-TTA: Test-time Adaptation of Discriminative Models via Generative Feedback
Accept (poster)
Summary: The authors propose combining a diffusion model with a discriminative model for test-time adaptation: They propose to adapt a pretrained classifier using a diffusion model with its likelihood loss. They show results on several OOD robustness benchmarks, commonly used in TTA. Strengths: I like the idea of combining a generative model with a discriminative model to try to get the benefits of both. The proposed method is elegant, but unfortunately heavy on the compute side. The paper is well-written and easy to follow. The approach is clear. In general, I think the method is interesting and the idea looks novel to me. But I do think there are several major issues with the paper which need to be resolved. I am happy to engage in a discussion with the authors and will raise my rating if my points are addressed. Weaknesses: My biggest issue is with the experimental results. I find them impossible to judge (see below) and thus, do not understand whether this method works similarly to or better than other methods. Given that the proposed method is very heavy on the compute side, I think it would need to make up for it in much stronger results. If e.g. this method performs similarly to TENT / COTTA, then people would still use TENT / COTTA because those methods are very cheap to use. Given that this is an empirical and not a theory paper, I also think more results / ablations should be presented. ### Issues with the results section Table 2: 1) It is not clear which baseline models were used for TENT / COTTA / TTT here. The final numbers heavily depend on the chosen architecture and the results are only comparable if the same architecture was chosen. Please indicate the architecture for the baseline methods both in the text and in the Figure captions. 2) It is not clear whether accuracy or error is displayed in Table 2. Please indicate this, and also please add \uparrow and \downarrow to indicate whether a higher or a lower number is better.
An accuracy (if accuracy is displayed?) of 11% on ImageNet-C looks quite bad and I am wondering where this number comes from? The TENT authors report an accuracy of 44% for a ResNet50 in their paper. I think it would be necessary for the authors to add results for a ResNet50 such that the numbers are comparable. Right now, the results look strange. If we assume that the architecture is the same, then it seems TENT strongly degrades performance. E.g. on ImageNet-R, the number is 24.3 for TENT, 34.6 for the baseline RN18 and 39.7 for Diffusion-TTA. Does this mean that there is an optimization issue because TENT should not degrade performance or is it a different architecture? In that case, the numbers are not comparable. Please rework Table 2 as follows: Show the baseline performance without adaptation in the first row, then report numbers from the literature for this architecture, then report your results on the same architecture. ### General comments Line 36: “In this paper, we take an alternative perspective. Instead of considering generative and discriminative models as competitive, we argue that they should be coupled in a way that leverages the best of both worlds: discriminative models are good at building powerful conditional density models but overfit to training distribution, and generative models generalize better but struggle to learn discriminative features.” -> minor: to the training distribution. The statement that “generative models generalize better” seems unsupported to me. As far as I see, [18] does not provide evidence for better OOD generalization for generative models. Further, Carlini et al. [A] show that diffusion models actually memorize their training data and emit the memorized data at test time. Could the authors comment on how they think the findings presented in [A] square with the results in this paper?
If diffusion models memorize their training data, I find it unintuitive that they would learn features that would allow them to generalize to OOD data. Put simply, if the task can be solved with memorization, there is no need to learn generalizable features. [A] Carlini et al. “Extracting Training Data from Diffusion Models”, https://arxiv.org/pdf/2301.13188.pdf In my opinion, the paper lacks intuition and needs a better explanation why the method works. As written above, it is not evident to me that generative models should have better generalizable features compared to discriminative models. I am wondering whether the approach works because of an effect similar to model ensembling where we know that combining predictions of multiple models leads to better results than using a single model. I wonder if using a diffusion model has a similar effect, because it regularizes the weights to be compatible with both the discriminative and the generative model. Could the authors comment on this thought / better explain why their method works? Do the authors think one could train a discriminative network from scratch using their approach? If not, why do they think it (only?) helps in the test time adaptation stage? Is it maybe that one finds the “correct” loss basin with regular training and then finds a better loss value within that basin with test time adaptation? I am thinking of the paper by Wortsman et al. [B] where the authors showed that one can interpolate weights of different models because they are in the same low error basin. Could the authors maybe compare the training and test losses (regular cross-entropy) of the baseline and test-adapted models when interpolating the weights from the regular to the adapted model? I am wondering whether the loss goes down monotonically. [B] Wortsman et al. 
“Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time”, https://arxiv.org/abs/2203.05482 The text in Figure 5 is too small below the bars and largely unreadable. Figure 6: It would be helpful to repeat this analysis on some other dataset. The solo peak at K=5 somewhat looks like an artifact and I wonder whether one would observe similar behavior in other datasets. How do the authors interpret this strange “peakiness” of the curve at K=5? More discussion on the ablation analysis would be nice. Why is it that “using a single randomly sampled timestep and noise latent (+diffusion TTA), we find a significant reduction in the classification accuracy (−2.9%)”? I think the authors should perform a complexity analysis since their method involves making a forward and a backward pass through a large diffusion model to make use of its loss. This is a large overhead compared to much simpler TTA methods such as TENT. While there is a complexity analysis in the Appendix, Fig.2, the overhead is not compared to e.g. TENT which should be much faster. In a similar vein, I would argue it will be hard to actually find an application for this method since the computational requirements are so massive. The authors write that “We conduct our experiments on a single NVIDIA-A100 40GB VRAM GPU (…). Since our GPU fits only a batch size of 20, (…)”. TTA methods are usually designed to be light-weight such that they can be applied on the fly, e.g. if an autonomous car needs to adapt to changing weather conditions. This method does not fulfil the requirements usually considered for TTA, and thus, will be of rather limited use for practitioners. Line 173: “For all experiments, we adjust all the parameters of the classifier.” I am curious about this choice. Several papers show that at least for ResNets, it is better to do TTA with affine BN layers [C, D]. 
Does this method not work when adapting affine BN layers or does it work better if all layers are adapted? [C] Wang et al. “Tent: Fully Test-time Adaptation by Entropy Minimization” [D] Rusak et al. “If your data distribution shifts, use self-learning” The appendix should be referenced in the main text. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please respond / fix the points I outlined above. Please fix Table 2 to make baselines comparable to your results. Please add a RN50 baseline since it is common to report TTA results on this architecture. I would advise the authors to do a round of proof-reading of the manuscript, as there are some grammar errors and typos, e.g. line 234, line 123: adaptation Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have not included a Broader Impact section and did not discuss the potential negative societal impact of their work. I think they could discuss how the chosen diffusion model with affect the adapted classifier. Practitioners should note that biases present in the diffusion model will likely also manifest in the adapted classifier. The classifier itself may or may not have biases of its own which in turn may or may not be corrected by the diffusion model. One limitation of the method is that it is very compute-intense and thus will likely be of limited use to practitioners. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q6.1: Backbone architecture unclear for baselines.** Please see Q1.1 and Q1.2 in the global comment. **Q6.2: Add RN50 backbone to baselines.** We have included ResNet50 in our online TTA results in Q1.2. Here, we report TTA results using ResNet50 under single-example settings.

||ImageNet|ImageNet-R|ImageNet-V2|
|:---|:----:|:----:|:----:|
|**ResNet50**|76.0|40.0|63.5|
|$~~~~$+CoTTA|70.4 (-5.6)|39.2 (-0.8)|57.6 (-5.9)|
|$~~~~$+TENT|70.4 (-5.6)|39.3 (-0.7)|57.7 (-5.8)|
|$~~~~$+Dif-TTA (single-example)|**78.6 (+2.6)**|**42.5 (+2.5)**|**66.9 (+3.4)**|

**Q6.3: TENT accuracy very low, Accuracy or Error not mentioned.** We report the accuracy in all our Tables. In our online adaptation results (see Q1.2), we show that TENT and CoTTA only improve on certain distribution shifts (ImageNet-C shifts) and classifiers (ResNet). They both fail to improve classification on other ImageNet variants (see Q1.1). Our conjecture is that TENT and CoTTA are online adaptation methods, and they are not robust when input examples share very little distribution shift. A similar conclusion can be found in a recent paper “On Pitfalls of Test-Time Adaptation”[1]. The authors show that methods that primarily work under online settings (TENT or CoTTA) are not robust to different types of distribution shifts (as reported in Table 4 in their paper). In fact, these methods turn out to be very sensitive to model architectures and optimization hyper-parameters. Our method instead exploits a strong generative prior and thus is more robust. **Q6.4: Unsupported claim of better generalization of generative models. Comment on diffusion models memorizing train data.** Thanks for the suggestion. We will remove the sentence regarding better generalization of generative models and revise the justification of our method as follows: Diffusion Classifier [1] and Clark et al. [2] explored how to use generative models (diffusion models) for the task of classification.
They find that while discriminative models are better at learning unary concepts, generative models offer distinct benefits. In particular, in Table 2 of [2], the authors show that generative models (Imagen) exhibit reduced texture bias compared to state-of-the-art discriminative models (CLIP-ViT/L-14 or ViT-22B). In Table 2 of [1], generative models (Stable Diffusion) outperform state-of-the-art discriminative models (CLIP-ViT/L-14 and OpenCLIP-ViT/H-14) in modeling object relations on the Winoground dataset. Motivated by the desire to leverage the strengths of discriminative models and generative models, we propose to combine them using test-time adaptation. [1] Your diffusion model is secretly a zero-shot classifier. Li et al. [2] Text-to-image diffusion models are zero-shot classifiers. Clark, K et al. **Q6.5: Paper lacks intuition and needs a better explanation of why it works?** We conjecture that Diff-TTA works because both discriminative and generative models capture distinctive aspects of the data, as discussed in the previous question. Our method integrates both these models. Specifically: - It optimizes the classifier to achieve high image likelihood under the loss landscape of a pre-trained diffusion model. - It optimizes the diffusion model to achieve high image likelihood given the output of a pre-trained classifier. We think that in satisfying both these constraints, it optimizes for a form of consensus between the two models. To better understand what’s happening, we compare Diff-TTA against the following baselines. - Logit-Adapt: In this method we do not use any pre-trained classifier; instead, we initialize the logits using zeros and optimize them per example using the diffusion objective. - Logit-Adapt-Ensemble: We average the probabilities of the above Logit-Adapt baseline with the probabilities predicted by the pre-trained classifier. - Classifier-Adapt: In this method we adapt the pre-trained classifier weights while freezing the diffusion model.
- Diffusion-Adapt + Classifier-Adapt (Ours): In this method we adapt both the pre-trained classifier weights and the diffusion weights. This refers to our method Diff-TTA.

||ResNet18|Logit-Adapt|Logit-Adapt-Ensemble|Classifier-Adapt|Diffusion-Adapt + Classifier-Adapt (Ours)|
|:---|:----:|:----:|:----:|:----:|:----:|
|ImageNet|68.4|71.8|72.6|73.0|**78.2**|
|ImageNet-R|37.0|37.0|36.5|38.0|**42.6**|

**Q6.6: Can a discriminative model be trained from scratch?** We find that training a classifier from scratch does not work. Our conjecture is that good initialization of the classifier makes the loss landscape more convex and therefore easier to optimize.

||Adapt classifier from scratch|Adapt pre-trained classifier|
|:---|:----:|:----:|
|ImageNet|0.7|**78.2**|
|ImageNet-R|4|**42.5**|

**Q6.7: Visualize loss curves vs. the weight interpolation constant, akin to Model Soups?** Thanks for this suggestion. Please refer to Figure 2 in the attached pdf, where we make this plot. We indeed find that, in instances where Diff-TTA correctly classifies the image, the loss monotonically reduces as we interpolate between the model weights. **Q6.8: K evaluation on more datasets** Please see our response to Q2.3. **Q6.9: Why single sample timestep doesn't work** Our method performs TTA by minimizing the variational lower bound. Using a single timestep instead of an expectation over multiple timesteps results in a bad approximation of the lower bound, thus hurting performance. In Figure 3 of the attached pdf, we visualize this phenomenon using a test sample. **Q6.10: Heavy compute cost.** Please see our response to Q3.2. **Q6.11: Complexity analysis compared to TENT** On a single GPU, Diff-TTA takes 55 seconds to adapt to one image (1.2 seconds with multiple GPUs), whereas TENT takes 0.03 sec per image. **Q6.12: Adapting just the BN Layers vs all layers** Please see our response to Q4.4. **Q6.13: Discuss Negative Impact** Thanks for bringing this up.
Due to lack of space we will include this in our final paper. --- Rebuttal Comment 1.1: Title: Response to authors' rebuttal Comment: Dear authors, thank you for addressing my concerns in your rebuttal. I think you did a great job providing new results! In particular: - I appreciate reporting standardized results for your method vs TENT / CoTTA. It is nice to see that your method works best. - Thank you for adding semantic segmentation results. - I think your response to Q.6.7 is really interesting and Figure 2 in the attached pdf is insightful. I think it helps understanding the method better and would encourage you to include the results in the main manuscript, if space permits. - It is also interesting that the model cannot be trained from scratch using the method, but that it only works for finetuning. Again, I think this provides us with insight about the method and should be included in the final version. One concern remains and that is of the computational complexity of the method. This method is 2-3 orders of magnitude slower than TENT, but then TENT does not always work and also cannot be applied in the single-sample regime. Given that TTA methods typically need to be light-weight, I am not sure how widely this method will be used in practice. I don't know how big of a concern this is, as this paper can be regarded as a "proof-of-principle" contribution. Could the authors maybe try to elaborate in which settings they think this method could be used successfully? I have raised my score because I think the authors did a great job addressing my concerns. Best, Reviewer 4gPG --- Reply to Comment 1.1.1: Comment: We would like to sincerely thank the reviewer for all their insightful comments. They really help us improve our work! As recommended, we will incorporate the results and ablations in the paper's final version. We believe our current method holds promise for offline test-time adaptation. 
In scenarios where static, unsupervised data from a new domain is available, our method can adeptly adjust to distribution shifts. Unlike online methods, offline TTA approaches do not necessitate being lightweight. However, we also recognize the potential of methods like Consistency models [1]. Such models can strike a balance between computational efficiency and accuracy, potentially optimizing computation speed in a significant way. [1] Consistency Models. Song et al. ICML 2023.
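A self-contained sketch of the adaptation objective as we read the method from this thread (toy linear noise schedule and a stand-in noise predictor; the names and the probability-weighted conditioning are our simplification, not the released code). The scalar this returns is what Diffusion-TTA backpropagates into the classifier (and optionally diffusion) weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_tta_loss(probs, class_embeds, x, eps_model, num_steps=1000, n_samples=8):
    """Diffusion loss conditioned on the classifier output: mix the class
    embeddings by the predicted probabilities, then average the
    epsilon-prediction error over random timesteps and noise draws."""
    cond = probs @ class_embeds                        # probability-weighted conditioning
    losses = []
    for _ in range(n_samples):                         # expectation over timesteps/noises
        t = int(rng.integers(num_steps))
        alpha = 1.0 - t / num_steps                    # toy schedule, not DDPM's
        noise = rng.standard_normal(x.shape)
        x_t = np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * noise
        losses.append(np.mean((eps_model(x_t, t, cond) - noise) ** 2))
    return float(np.mean(losses))

# Stand-in noise predictor that ignores its inputs: the loss then reduces to
# the mean of squared unit Gaussians, i.e. roughly 1.
toy_eps = lambda x_t, t, cond: np.zeros_like(x_t)
probs = np.array([0.7, 0.2, 0.1])
class_embeds = np.eye(3)
x = rng.standard_normal(256)
loss = diffusion_tta_loss(probs, class_embeds, x, toy_eps)
```

Averaging over several sampled timesteps and noise latents, rather than a single draw, is what keeps this lower-bound estimate stable enough to adapt on a single test image, which matches the ablation the authors report.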
Summary: The paper discusses a method that uses generative models as test-time adapters for discriminative models. The authors propose a technique to adapt pre-trained classifiers and CLIP models to individual unlabeled images. They achieve this by modifying the text conditioning of a text-conditional pretrained image diffusion model and maximizing the image likelihood through backpropagation. The proposed approach improves classification accuracy on various datasets, including ImageNet. The authors compare their method with previous test-time adaptation techniques and demonstrate its superior performance. They highlight that this is the first work to adapt large-scale pre-trained discriminative models to individual images without requiring joint discriminative and self-supervised training objectives. Strengths: * The paper addresses a fundamental challenge in deep learning models, which is test-time out-of-distribution generalization. This problem is known to be difficult to solve. * The authors propose an interesting approach that combines generative and discriminative models to tackle this issue. Particularly intriguing is their idea of using a diffusion model as a guide for the classifier, by modulating the text conditioning of the model. * The experimental evaluation conducted in the paper is good, demonstrating the effectiveness of the proposed method. The authors thoroughly evaluate their approach on various datasets, including conducting ablations to analyze the discriminative and generative components. Overall, the paper provides a valuable contribution to the field with its innovative approach and robust experimental evaluation. Weaknesses: * While I agree with your premises regarding the class conditional model, I noticed that the improvements achieved are marginal on certain datasets, in particular for open-set problems. It would be beneficial to have a more in-depth analysis specifically focusing on ImageNet for open-set evaluation. 
* Additionally, it would be valuable to incorporate confidence intervals in the evaluation to provide a better understanding of the statistical significance of the results. Furthermore, I believe it would be advantageous to include additional metrics such as top-k accuracy and f-score (where possible), rather than solely relying on accuracy. These metrics would provide a more comprehensive evaluation of the model's performance in various scenarios. * It is worth mentioning that CLIP, even without any adaptation, seems to perform well, and the extent of improvement achieved by the proposed method is not decisively evident in most cases, especially without the inclusion of confidence intervals. * Finally, the clarity of the introduction can be improved. I think a better discussion of discriminative vs generative models would be a valuable addition to this paper. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * How are the learned class or text embeddings obtained, specifically referring to line 156 or Fig.1? Are the embeddings the same as those employed by the classifier or CLIP models? If so, does this imply that the class/text embeddings change during test time adaptation? * What is the time required to adapt the model to a single image? Can this approach be scaled up for practical adaptation applications? * In Figure 6, have you investigated the effects of increasing the value of K for a CLIP model? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The obtained results are interesting; however, it is worth noting that the observed improvements, particularly in relation to the CLIP models, appear relatively small. 
It is interesting that the base classifiers, in particular the CLIP models, already exhibit good performance even without any fine-tuning or test-time adaptation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q5.1: Marginal Improvements in Open-Set Datasets** You are correct that the improvements on open-set problems are not as high as on closed-set problems. We think this is mainly because we use Stable Diffusion as our diffusion model. Stable Diffusion conditions the image generation on the text embeddings from CLIP. However, the language model in CLIP is much weaker than open-ended large-language models like T5-XXL, which is used in Imagen. Clark et al. [1] validate this claim: they find that Imagen results in a significantly better classifier than Stable Diffusion. We don’t have this issue for DiT, as it learns its own text embedding for each class. Here, we compare our method against a strong baseline, TPT [2]. TPT is a test-time adaptation method for large-scale vision-language models such as CLIP. TPT optimizes the text prompts to encourage consistent predictions across augmented views of the same test image by minimizing the marginal entropy. Our method, on the other hand, optimizes the network parameters using generative feedback. Please see our response to Q2.4 for the comparison of our method against TPT. We find that Diff-TTA consistently outperforms TPT across all CLIP backbone architectures. [1] Text-to-image diffusion models are zero-shot classifiers. Clark, K et al. arXiv:2303.15233. [2] Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models, NeurIPS 2022. **Q5.2: ImageNet Variant on Open vocab setting** Great suggestion! Our Diffusion-TTA improves CLIP-ViT/B-32 on ImageNet variants (ImageNet, ImageNetV2, and ImageNet-R) consistently.
| | ImageNet | ImageNet-V2 | ImageNet-R |
| :--- | :----: | :----: | :----: |
| **CLIP-ViT/B-32** | 56.3 | 51.3 | 55.5 |
| $~~~~$+Diffusion-TTA (single-example) | **58.1 (+1.8)** | **52.5 (+1.2)** | **58.2 (+2.7)** |

**Q5.3: Confidence intervals/ Top-k to get better understanding of results** We report confidence intervals of top-1 and top-3 accuracy over 5 random seeds on the ImageNet-R dataset. We conduct experiments using the ResNet50 classifier and DiT diffusion model. Our performance gains are consistent across different random seeds. We will include more detailed analysis in our final submission.

| | Before-TTA | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Mean |
| :--- | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| Top-1 Acc. | 40.0 | 42.5 | 42.2 | 42.7 | 41.8 | 42.8 | 42.4 ($\pm$ 0.41) |
| Top-3 Acc. | 51.7 | 52.0 | 51.8 | 52.2 | 51.8 | 51.8 | 51.9 ($\pm$ 0.13) |

**Q5.4: Better discussion of the differences between generative/discriminative models in the introduction** Thank you for your suggestion. We will include the following sentences in our intro. So far, discriminative models have mainly been used for discriminative tasks such as image classification. Recently, Diffusion Classifier [1] and Clark et al. [2] explored the use of recent generative models (diffusion models) for the task of classification. They find that while discriminative models are better at learning unary concepts, generative models offer distinct benefits. Notably, Clark et al., in Table 2 of their paper, find that generative models such as Imagen exhibit reduced texture bias compared to state-of-the-art discriminative models such as CLIP-L14 or ViT-22B. Further, Diffusion Classifier [1], in their Table 2, finds that generative models such as Stable Diffusion outperform state-of-the-art discriminative models such as CLIP-L14 and OpenCLIP-H14 in modeling object relations on the Winoground dataset.
Motivated by the desire to leverage the strengths of discriminative models and generative models, we propose to combine them using test-time adaptation. The exact form of their combination is motivated by neuroscience findings that suggest that the brain performs a form of analysis by synthesis by rendering inferred concepts in an iterative feedback loop of encoding and decoding processes [3]. Our model is a form of analysis by synthesis using conditional diffusion models as the renderer (decoder) and classifiers or segmentors as the encoder. [1] Your diffusion model is secretly a zero-shot classifier. Li et al. arXiv 2023. [2] Text-to-image diffusion models are zero-shot classifiers. Clark, K et al. arXiv:2303.15233. [3] Generative Feedback Explains Distinct Brain Activity Codes for Seen and Mental Images. Breedlove et al. Current Biology 2020. **Q5.5: How are text embeddings obtained/updated** The class or text embeddings are obtained from the pre-trained diffusion model. - For Stable Diffusion, the text embeddings are obtained by encoding each class name in the dataset using the CLIP text encoder. - DiT, on the other hand, learns a set of class embeddings for ImageNet classes; we simply use those in our experiments. - In our experiments, since we update the diffusion model, we also update the text embeddings during TTA. **Q5.6: Time to Adapt to a Single image?** On a single GPU, our method takes 55 seconds for adaptation, mainly due to sequential gradient accumulation steps. We can reduce the computation latency to 1.2 seconds per example using multiple GPUs. Also, we can further speed up our method by: 1. Dynamically deciding when to adapt the model based on the entropy in the diffusion loss. 2. Incorporating better timestep weighting mechanisms [2] and recent advances with Consistency Models [1], which allow one-step image generation and can provide significant boosts in speed. [1] Consistency Models. Song et al. ICML 2023.
[2] Text-to-image diffusion models are zero-shot classifiers. Clark, K et al. arXiv:2303.15233. **Q5.7: Increasing the value of K for CLIP** Please see our response to Q2.3 --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. It answers my questions and I appreciate the new results. --- Reply to Comment 1.1.1: Comment: We also want to thank the reviewer for the questions. They help us improve our work and made us focus more on the ablations.
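A minimal sketch of the marginal-entropy objective that TPT minimizes, as described in Q5.1 (our own illustration, assuming the standard definition: the entropy of the softmax distribution averaged over augmented views of one test image):

```python
import numpy as np

def marginal_entropy(logits):
    """Entropy of the mean softmax distribution across augmented views of one
    test image (rows = views, columns = classes). TPT tunes the text prompt
    to minimize this, encouraging confident, consistent predictions."""
    z = logits - logits.max(axis=1, keepdims=True)       # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    p_bar = p.mean(axis=0)                               # marginal over views
    return float(-(p_bar * np.log(p_bar + 1e-12)).sum())
```

Diffusion-TTA instead backpropagates a generative (diffusion) loss into the network weights rather than tuning prompts against this entropy, which is the key contrast drawn in the comparison above.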
Summary: This paper proposes a test-time adaptation (TTA) method that adjusts pretrained image classifiers at test time by leveraging the power of pretrained, text-conditioned generative diffusion models. Given an unseen image, the proposed method Diffusion-TTA minimizes the diffusion loss with respect to the weights of the pretrained classifier and/or diffusion model via gradient descent at test time. In other words, Diffusion-TTA provides guidance to the classifier to adapt its performance particularly under the out-of-distribution setting. From the empirical evaluation, Diffusion-TTA shows its effectiveness in encouraging consistent improvements over the initially employed classifier. Furthermore, the method is applicable to utilize different neural net architectures for the classifiers such as ResNet, ViT, and ConvNext-Tiny. ---- After reading the rebuttal, I've raised my score from 6: weak accept to 7: accept. Strengths: Sufficient novelty: Diffusion-TTA follows a similar approach from Diffusion Classifier (Li et al. 2023) by utilizing the same objective, i.e., minimizing the diffusion loss, but executing a different optimization approach towards the objective. Diffusion-TTA employs pretrained classifiers and updates the weights via back-propagation of the diffusion loss, while Diffusion Classifier performs a discrete optimization over categorical text prompts to the generative model without using any pretrained classifier. This approach undertaken by Diffusion-TTA, which results in a positive empirical outcome in the context of TTA, is a new perspective to me. Rigorous ablation study: The impact of each implementation component defining Diffusion-TTA is empirically investigated and reported so that the readers are aware about the contribution from each component. Weaknesses: Empirical evaluation: I think that the added values of Diffusion-TTA over the existing SOTA methods remain a bit inconclusive. 
Specifically, it’s unclear to me that Diffusion-TTA is generally better than TTT-MAE given two occasions in the ImageNet experiments (in-distribution and ImageNet-V2) where TTT-MAE seems to provide slightly negative or statistically insignificant outcomes. On the other hand, TTT-MAE produces larger improvements than Diffusion-TTA. I’d suggest having comparisons in which the other existing methods use the same backbone networks as Diffusion-TTA for the classifiers (ResNet18, ViT-B/32, or ConvNext-Tiny) whenever applicable. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How’s the performance of TTT-MAE or other existing TTA methods on the Open-Vocabulary CLIP experiment? Have the authors tried partial weight updating on the classifier, e.g., only adjusting weights of a few top layers, and observing the performance gain? In L175, it’s mentioned that adapting the weights of both generator and classifier slightly boosts the performance of Diffusion-TTA for the ImageNet distribution shift case. By how much, in comparison to keeping the weights of the generator fixed? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitation in terms of the generality of the method, i.e., only shown to be effective for image classification thus far, has been addressed. Furthermore, although the inference speed is perhaps not a focus in this study, I’d recommend also addressing it (~55 seconds per example on a single NVIDIA-A100 40GB) as another limitation. I’m hoping that the code implementation is made available soon.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q4.1: Improvements over TTT-MAE unclear** If we consider the **after-TTA absolute scores**, our method significantly outperforms TTT-MAE on all except the ImageNet-V2 dataset. On ImageNet, ImageNet-A, ImageNet-R, ImageNet-C, and ObjectNet, our method gets (+1.1, +4.5, +10.5, +13.9, +4.8) boosts respectively over TTT-MAE. We don’t think relative improvement is the right number to track; rather, one should track **the absolute score after TTA**. The reason for this is the following: one can easily abuse the relative TTA improvement metric by getting very low before-TTA results and then showing high gains in performance on top of them, while still having very low absolute after-TTA numbers! On a separate note, adding a different classifier in TTT-MAE requires re-training the whole network on ImageNet-1K, while we can easily plug in and improve any pre-trained classifier. For online TTA, Diffusion-TTA significantly outperforms TTT-MAE across different types of distribution shifts using a smaller classifier: ConvNext (28.5M parameters) vs. customized ViT-L+ViT-B (392M parameters). Our method achieves 47.4, 65.9, 69, 62.4, and 46.2 on the Gaussian-Noise, Fog, Pixelate, Snow, and Contrast splits in ImageNet-C, compared to 37.9, 51.1, 65.7, 56.5, and 10 obtained by TTT-MAE. See the online adaptation results in the global comment of Q1.2 for more details. **Q4.2: Mention the backbones used for COTTA, TENT and TTT-MAE baselines.** Please see Q1.1 and Q1.2 in the global comment. **Q4.3: Add TTA baselines on the Open-Vocabulary CLIP experiment.** We compare our method against TPT [1] on the task of open-vocabulary classification in the following table. TPT is a test-time adaptation method for large-scale vision-language models such as CLIP. TPT optimizes the text prompts to encourage consistent predictions across augmented views of the same test image by minimizing the marginal entropy.
Our method, on the other hand, optimizes the network parameters using generative feedback. [1] Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models, NeurIPS 2022.

| | Food101 | CIFAR100 | FGVC | Pets | Flower102 | ImageNet | Average Improvement |
| :--- | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| **CLIP-ViT-B/32** | 78.4 | 60.0 | 18.8 | 77.8 | 64.1 | 56.3 | |
| $~~~~$+TPT | 79.8 (+1.4) | **65.2 (+5.2)** | 18.4 (-0.4) | 77.3 (-0.5) | 62.8 (-1.4) | 57.5 (+1.2) | +0.91 |
| $~~~~$+Diffusion-TTA | **80.2 (+1.8)** | 61.8 (+1.8) | **22.2 (+3.4)** | **81.1 (+3.3)** | **64.3 (+0.2)** | **58.1 (+1.8)** | **+2.01** |
| **CLIP-ViT-B/16** | 84.7 | 68.8 | 21.4 | 80.5 | 67.6 | 60.6 | |
| $~~~~$+TPT | 85.2 (+0.5) | **68.4 (-0.4)** | 20.4 (-1.0) | 80.0 (-0.5) | **70.0 (+2.4)** | **61.6 (+1.0)** | +0.31 |
| $~~~~$+Diffusion-TTA | **85.5 (+0.8)** | 67.6 (-0.8) | **22.6 (+1.2)** | **80.8 (+0.3)** | 69.2 (+1.6) | 61.5 (+0.9) | **+0.67** |
| **CLIP-ViT-L/14** | 91.2 | 79.6 | 29.0 | 89.2 | 75.2 | 68.9 | |
| $~~~~$+TPT | 90.6 (-0.6) | 80.6 (+1.0) | 30.2 (+1.2) | 87.6 (-1.6) | **76.9 (+1.6)** | **70.6 (+1.7)** | +0.55 |
| $~~~~$+Diffusion-TTA | **91.2 (0.0)** | 80.6 (+1.0) | **30.6 (+1.6)** | **89.8 (+0.6)** | 76.1 (+0.9) | 69.9 (+1.0) | **+0.85** |

We conclude that: 1. Our Diffusion-TTA consistently outperforms TPT across all CLIP backbone architectures. 2. TPT only adapts prompts and is thus restricted to vision-language classifiers like CLIP. Our method instead can improve any pre-trained classifier, and even dense pixel labellers (e.g., semantic segmentation). 3. Our method works for both single-example and online test-time adaptation, whereas TPT only works in single-example settings. **Q4.4: Adjusting certain parts of the weights of the classifier?** We present the ablation study of adapting different layers of the classifier: 1. Batch Normalization layers only 2. The last FC layer only 3.
The whole classifier We employ ConvNext-Tiny and subsample 1 image per category for the ablation study. We find that adapting the whole classifier results in the best performance. We will include these ablations in our paper.

| Adapt Classifier: | BN | Last FC layer | Whole model |
| :--- | :----: | :----: | :----: |
| ImageNet | 69 | 68.6 | **78.2** |
| ImageNet-R | 37 | 37.5 | **42.5** |

**Q4.5: Adding an ablation for freezing or not the weights of the image diffusion model** We present the ablation study of freezing the diffusion model or not. We employ ConvNext-Tiny and subsample 1 image per category for the ablation study.

| Adapt | Diffusion + Classifier | Classifier only |
| :--- | :----: | :----: |
| ImageNet | **78.2** | 72.6 |
| ImageNet-R | **42.5** | 37 |

We show that adapting the diffusion model results in a significant performance gain. See the same observation in Table 3 in our paper. We will include these ablations in our paper. **Q4.6: Method is limited to classification.** Please see Q1.3 in the global comment. **Q4.7: The proposed method is slow** On a single GPU, our method takes 55 seconds for adaptation, mainly due to sequential gradient accumulation steps. We can reduce the computation latency to 1.2 seconds per example using multiple GPUs. Also, we can further speed up our method by: 1. Dynamically deciding when to adapt the model based on the entropy in the diffusion loss. 2. Incorporating better timestep weighting mechanisms [2] and recent advances with Consistency Models [1], which allow one-step image generation, can provide significant speed boosts. [1] Consistency Models. Song et al. ICML 2023. [2] Text-to-image diffusion models are zero-shot classifiers. Clark, K. et al. arXiv:2303.15233. **Q4.8: Code Availability?** Yes, we will open-source our code upon paper acceptance. --- Rebuttal Comment 1.1: Comment: Thank you for providing rigorous feedback. I have read all the responses addressed to myself as well as to the other reviewers.
I truly appreciate the substantial effort not only in clarifying the doubts but also in providing new insights and empirical results. I'm now convinced of the effectiveness of the proposed method. I'm happy to promote my score up by one level. Please ensure all the new important insights are included in the main manuscript or appendix. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for reading our responses in detail. Their questions about weight ablations and additional baselines greatly improved our work. As the reviewer suggested, we will include the new results and insights in the final manuscript.
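The layer-subset ablation in Q4.4 (batch-norm layers only, last FC layer only, whole model) boils down to choosing which parameter group receives gradient updates at test time. A minimal sketch of that selection step, assuming nothing beyond the rebuttal's three settings (the name patterns `"bn"`, `"norm"`, and `"head."` are hypothetical placeholders, not the authors' actual parameter names):

```python
def select_adapted_params(named_params, mode):
    """Return the subset of parameters to adapt at test time.

    `named_params` maps parameter names to parameter objects. The name
    patterns below are illustrative; real architectures use their own
    naming for normalization layers and the final classifier head.
    """
    if mode == "whole":
        return dict(named_params)
    if mode == "bn":
        return {n: p for n, p in named_params.items() if "bn" in n or "norm" in n}
    if mode == "last_fc":
        return {n: p for n, p in named_params.items() if n.startswith("head.")}
    raise ValueError(f"unknown mode: {mode}")
```

The remaining parameters would simply be excluded from the optimizer, leaving them frozen during adaptation.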
Summary: Authors aim to improve test-time adaptation of CLIP and ImageNet-trained models on different distribution data. They leverage a text-conditioned image diffusion model to adapt the image classifier’s parameters by maximizing the image diffusion likelihood. In particular, the image classifier’s output probabilities are used as weights for class-conditioning the diffusion model, and diffusion likelihood gradients are backpropagated to the classifier through the probabilities. Adaptation of CLIP across different datasets and adaptation of ImageNet-trained models across different ImageNet variants using the diffusion loss show improved top-1 accuracy. Strengths: 1. Paper is well-written and easy to understand. 2. Proposed setup is a simple plug-and-play method that does not require any additional training or warm-up of the models. 3. Results demonstrate that the proposed adaptation improves results, and particularly benefits smaller models. 4. Appreciate the analysis showing that the diffusion loss and cross-entropy loss are correlated, which supports the gains observed in the results. Weaknesses: 1. Proposed setup can be seen as a straightforward extension of Li et al. [22], which introduced a similar guidance strategy to optimize over text prompts. Here, classifier parameters are updated instead of text prompts. 2. Increased time complexity of test-time adaptation with the diffusion model, where each sample takes 55 seconds to adapt. 3. Increased model complexity by including a diffusion model with a large number of parameters in the adaptation process. 4. The adaptation is not as effective on larger models when compared to smaller models. 5. Prior-work comparisons are included in Table 2, but the network architectures are different across these comparisons, so it wouldn't be a fair comparison. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Which diffusion model was used in Sec. 4.1? 2. CLIP adaptation is shown across multiple datasets.
Adapting CLIP with ImageNet variants would be interesting. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
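The conditioning mechanism described in the review summary — using the classifier's output probabilities as weights over learnt class embeddings, so diffusion-loss gradients flow back to the classifier — can be sketched as follows. This is our minimal illustration in plain Python, not the authors' code; the function names are ours and the real system works on learnt embedding tensors inside a diffusion model:

```python
import math

def softmax(logits):
    """Convert classifier logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def conditioning_vector(logits, class_embeddings):
    """Probability-weighted sum of per-class embedding vectors.

    The resulting vector conditions the diffusion model; because the
    weights are the classifier's (soft) probabilities, the diffusion
    loss is differentiable with respect to the classifier's logits.
    """
    probs = softmax(logits)
    dim = len(class_embeddings[0])
    return [sum(p * emb[d] for p, emb in zip(probs, class_embeddings))
            for d in range(dim)]
```

With uniform logits over two classes embedded at `[1, 0]` and `[0, 1]`, the conditioning vector is the midpoint `[0.5, 0.5]`; as the classifier grows confident in one class, the conditioning collapses toward that class's embedding.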
Rebuttal 1: Rebuttal: **Q3.1: Straightforward extension of Diffusion Classifier** We believe this is not the case for the following reasons: Diffusion Classifier searches over **discrete** class labels, while Diffusion-TTA optimizes the parameters of the diffusion model and the classifier through gradient descent. Gradient-based optimization enables us to apply our method to online adaptation settings and extend it beyond classification tasks, such as to semantic segmentation. In contrast, discrete label search does not apply to online adaptation and would be infeasible for dense labeling tasks (searching over pixel labellings is exponential with respect to the number of pixels in the image). **Q3.2: The proposed method is slow.** On a single GPU, our method takes 55 seconds for adaptation, mainly due to sequential gradient accumulation steps. We can reduce the computation latency to 1.2 seconds per example using multiple GPUs. Also, we can further speed up our method by: 1. Dynamically deciding when to adapt the model based on the entropy in the diffusion loss. 2. Incorporating better timestep weighting mechanisms [2] and recent advances with Consistency Models [1], which allow one-step image generation, can provide significant speed boosts. [1] Consistency Models. Song et al. ICML 2023. [2] Text-to-image diffusion models are zero-shot classifiers. Clark, K. et al. arXiv:2303.15233. **Q3.3: Adaptation improvements smaller on larger models** It is indeed true that larger models see smaller improvements in single-example settings. However, we find that in online settings we get significant improvements even with larger models. In the table below, we show results adapting two of the largest ImageNet classifiers, ViT-L/16 (307M parameters) and ConvNext-Large (197M parameters). We test on the Gaussian-Noise split of the ImageNet-C dataset.
| | ConvNext-Large | ViT-L/16 |
| :--- | :----: | :----: |
| No TTA | 37.0 | 34.9 |
| Diffusion-TTA (single-example) | 37.0 | 33.3 |
| Diffusion-TTA (online) | **54.9 (+17.9)** | **50.0 (+15.1)** |

As can be seen in the table above, even though our model doesn’t improve in the single-example setting, we get significant improvements in the online setting. **Q3.4: Which diffusion model was used in Sec. 4.1?** We used Stable Diffusion v1.4 for this section. **Q3.5: Baselines in Table 2 have different neural architectures** Please see Q1.1 and Q1.2 in the global comment. **Q3.6: Show CLIP adaptation on ImageNet dataset variants** Great suggestion! Our Diffusion-TTA improves CLIP-ViT/B-32 on ImageNet variants (ImageNet, ImageNet-V2, and ImageNet-R) consistently.

| | ImageNet | ImageNet-V2 | ImageNet-R |
| :--- | :----: | :----: | :----: |
| **CLIP-ViT/B-32** | 56.3 | 51.3 | 55.5 |
| $~~~~$+Diffusion-TTA (single-example) | **58.1 (+1.8)** | **52.5 (+1.2)** | **58.2 (+2.7)** |

--- Rebuttal Comment 1.1: Comment: **Q3.3: Adaptation improvements smaller on larger models** Dear Reviewer, Following our initial response to your question, we conducted further testing of the ConvNext-Large model under various distribution shifts in an online setting. ConvNext-Large is the largest available ConvNext model, with 197M parameters; the model achieves state-of-the-art performance on ImageNet.

| | Gaussian-Noise | &nbsp; &nbsp; Fog | Pixelate | Snow | Contrast |
| :--- | :----: | :----: | :----: | :----: | :----: |
| ConvNext-Large | 37.0 | 34.4 | 49.3 | 44.5 | 39.8 |
| + Diff-TTA (online) | **54.9 (+17.9)** | **67.7 (+33.3)** | **71.7 (+22.4)** | **64.8 (+20.3)** | **55.7 (+15.9)** |

Based on the results reported above, we find that Diff-TTA consistently achieves significant performance improvements across multiple distribution shifts, reinforcing the adaptability of our method to larger models. Please let us know if you have any additional questions.
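The single-example vs. online distinction discussed in Q3.3 amounts to whether the adapted weights are reset between test inputs. A schematic sketch under our reading of the two protocols (ours, not the authors' implementation; `adapt_step` and `predict` are placeholders for one gradient step on the diffusion loss and the classifier forward pass):

```python
import copy

def run_tta(init_weights, stream, adapt_step, predict, online=False):
    """Run test-time adaptation over a stream of test inputs.

    single-example (online=False): weights are reset to `init_weights`
    before every input, so each example is adapted independently.
    online (online=True): adapted weights carry over across the stream.
    """
    preds = []
    weights = copy.deepcopy(init_weights)
    for x in stream:
        if not online:
            weights = copy.deepcopy(init_weights)  # reset per example
        weights = adapt_step(weights, x)           # e.g. one diffusion-loss step
        preds.append(predict(weights, x))
    return preds
```

The online variant accumulates adaptation across the stream, which is consistent with the larger gains the rebuttal reports for online Diff-TTA on streaming corrupted data.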
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and for spending the time to read our paper in detail. Below we address the common concern of all the reviewers: **Q1.1: Use same backbones when comparing against TENT and CoTTA baselines.** We re-evaluate COTTA and TENT using the same backbone as ours. Note that, TTT-MAE builds atop customized classifiers. Therefore we cannot evaluate TTT-MAE with the same classifiers without re-training it on the ImageNet dataset. We evaluate Diff-TTA / TTT-MAE under single-example settings, and CoTTA / TENT under online settings. COTTA and TENT are not applicable for single-example settings: they explicitly require pseudo-labels (entropy minimization) to improve the classification accuracy. ||ImageNet|ImageNet-A|ImageNet-R|ImageNet-C|ImageNet-V2| |:---|:----:|:----:|:----:|:----:|:----:| |**Customized ViT-L/16 backbone**|82.1|14.4|33.0|17.5|72.5| |$~~~~$+TTT-MAE (single-sample)|82.0 (-0.1)|21.3 (+6.9)|39.2 (+6.2)|27.5 (+10.0)|72.3 (-0.2)| |**ResNet18**|69.5|1.4|34.6|2.6|57.1| |$~~~~$+TENT (online)|63.0 (-6.5)|0.6 (-0.8)|34.7 (+0.1)|**12.1 (+9.5)**|52.0 (-5.1)| |$~~~~$+CoTTA (online)|63.0 (-6.5)|0.7 (-0.7)|34.7 (+0.1)|11.7 (+9.1)|52.1 (-5.0)| |$~~~~$+Diffusion-TTA (single-example)|**77.2 (+1.7)**|**6.1 (+4.7)**|**39.7 (+5.1)**|4.5 (+1.9)|**63.8 (+6.7)**| |**ViT-B/32**|75.7|9.0|45.2|39.5|61.0| |$~~~~$+TENT (online)|75.7 (0.0)|9.0 (0.0)|45.3 (+0.1)|38.9 (-0.6)|61.1 (+0.1)| |$~~~~$+CoTTA (online)|75.8 (+0.1)|8.6 (-0.4)|45.0 (-0.2)|40.0 (+0.5)|60.9 (-0.1)| |$~~~~$+Diffusion-TTA (single-example)|**77.6 (+1.9)**|**11.2 (+2.2)**|**46.5 (+1.3)**|**41.4 (+1.9)**|**64.4 (+3.4)**| |**ConvNext-Tiny**|81.9|22.7|47.8|16.4|70.9| |$~~~~$+TENT (online)|79.3 (-2.6)|10.6 (-12.1)|42.7 (-5.1)|2.7 (-13.7)|69.0 (-1.9)| |$~~~~$+CoTTA (online)|80.5 (-1.4)|13.2 (-9.5)|47.2 (-0.6)|13.7 (-2.7)|68.9 (-2.0)| |$~~~~$+Diffusion-TTA (single-sample)|**83.1 (+1.2)**|**25.8 (+3.1)**|**49.7 (+1.9)**|**21.0 (+4.6)**|**71.5 (+0.6)**| We conclude: - 
Diff-TTA outperforms TENT and CoTTA across various architecture backbones and distribution shifts. - Diff-TTA outperforms TTT-MAE even with a smaller classifier. ConvNext (28.5M parameters) optimized by Diffusion-TTA (online) achieves better performance than a much bigger custom backbone of ViT-L + ViT-B (392M parameters) optimized by TTT-MAE (online) --- Since submission we have extended the proposed method in two ways: **Q1.2: Online Test-Time Adaptation:** We have extended our method to an online TTA setting: We adapt model weights to a set of streaming examples without resetting the model weights at each input example. We compare our method to TTT-MAE, CoTTA, and TENT for online adaptation. |ImageNet Corruption:|Gaussian-Noise|$~~~$Fog|Pixelate|Snow|Contrast| |:--- | :----:| :----:| :----:| :----:| :----:| |**Customized ViT-L/16 classifier**|17.1|38.7|47.1|35.6|6.9| |$~~~~$+TTT-MAE (single-sample)|27.9 (+10.8)|45.1 (+6.4)|61.4 (+14.3)|43.2 (+7.6)|9.3 (+2.4)| |$~~~~$+TTT-MAE (online)|37.9 (+20.8)|51.1 (+12.4)|65.7 (+18.6)|56.5 (+20.9)|10 (+3.1)| |**ResNet50**|6.3|25.2|26.5|16.7|3.6| |$~~~~$+TENT (online)|12.3 (+6.0)|43.2 (+18.0)|41.8 (+15.3)|28.4 (+11.7)|**12 (+8.4)**| |$~~~~$+CoTTA (online)|12.2 (+5.9)|42.4 (+17.2)|41.7 (+15.2)|28.6 (+11.9)|11.9 (+8.3)| |$~~~~$+Diffusion-TTA (single-sample)|12.7 (+6.4)|33.0 (+7.8)|30.4 (+3.9)|28.7 (+12.0)|7.0 (+3.4)| |$~~~~$+Diffusion-TTA (online)|**19 (+12.7)**|**43.2 (+18.0)**|**50.2 (+23.7)**|**33.6 (+16.9)**|2.7 (-0.9)| |**ViT-B/32**|39.5|35.9|55|30|31.5| |$~~~~$+TENT (online)|38.9 (-0.6)|35.8 (-0.1)|55.5 (+0.5)|30.7 (+0.7)|32.1 (+0.6)| |$~~~~$+CoTTA (online)|40.0 (+0.5)|34.6 (-1.3)|54.5 (-0.5)|29.7 (-0.3)|32 (+0.5)| |$~~~~$+Diffusion-TTA (single-sample)|41.4 (+1.9)|47.3 (+11.4)|43.5 (-11.5)|36.1 (+6.1)|20.8 (-10.7)| |$~~~~$+Diffusion-TTA (online)|**46.5 (+7.0)**|**56.2 (+10.3)**|**64.7 (+9.7)**|**50.4 (+20.4)**|**33.6 (+2.1)**| |**ConvNext-Tiny**|16.4|32.3|37.2|38.3|32| |$~~~~$+TENT (online)| 2.7 (-13.7)|5 (-27.3)|43.9 
(+6.7)|15.2 (-23.1)|40.7 (-18.7)| |$~~~~$+CoTTA (online)|13.7 (-2.7)|29.8 (-2.5)|37.3 (+0.1)|26.6 (-11.7)|32.6 (+0.6)| |$~~~~$+Diffusion-TTA (single-sample)|21.0 (+4.6)|46.9 (+14.6)|41 (+3.8)|45.6 (+7.3)|25 (-7.0)| |$~~~~$+Diffusion-TTA (online)|**47.4 (+31.0)**|**65.9 (+33.6)**|**69 (+31.8)**|**62.6 (+24.3)**|**46.2 (+14.2)**| From the table we see that: - Online adaptation of Diff-TTA significantly outperforms the single-example setting of Diff-TTA - Diff-TTA outperforms TENT and CoTTA across various backbones. CoTTA and TENT fail to improve classification on certain corruptions and classifiers. Our results are consistent with the analysis in [1]: the authors find that methods that primarily work under online settings (TENT or CoTTA) are not robust to different types of architectures and distribution shifts (as reported in Table 4 of [1]). - Our method outperforms TTT-MAE even with a smaller-size classifier. ConvNext (28.5M params) optimized by Diffusion-TTA (online) achieves better performance than a much bigger custom backbone of ViT-L + ViT-B (392M params) optimized by TTT-MAE (online) [1] On the Pitfalls of test-time adaptation. Zhao et al **Q1.3: Diff-TTA for Semantic Segmentation:** We have extended our method to semantic segmentation tasks by replacing the image classifier with a pre-trained segmentor. See Figure 1 and 4 in the attached pdf for the architecture diagram and qualitative results. Both the segmentor and the diffusion model are trained on ADE20K dataset. Under single-example settings, we find Diff-TTA consistently improves pre-trained SegFormer. 
| ADE20K Corruption: | Clean | Gaussian-Noise | $~~$Fog | Frost | Snow | Contrast | Shot |
| :--- | :----: | :------: | :----: | :----: | :----: | :----: | :----: |
| **SegFormer** | 66.1 | 65.3 | 63.0 | 58.0 | 55.2 | 65.3 | 63.3 |
| $~~~~$+Diffusion-TTA | 66.1 | **66.4 (+1.1)** | **65.1 (+2.1)** | **58.9 (+0.9)** | **56.6 (+1.4)** | **66.4 (+1.1)** | **63.7 (+0.4)** |

Pdf: /pdf/dd1e6ff350afdaaa424ae5a1a3b6162a60295b4d.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This work proposed Diffusion-TTA, a test-time adaptation method that, given a test image, updates the classifier’s weights to maximize the image likelihood (i.e., minimize the denoising score matching loss) using pre-trained diffusion models. Specifically, the classifier uses the clean input image to predict a distribution over labels, which is then used for a weighted sum of learnt class embeddings into a prompt input of the diffusion model. In experiments, Diffusion-TTA is tested for adapting both open-vocabulary CLIP models and pretrained image classifiers, where it improves the classification accuracy across different network architectures. Strengths: 1. The idea of using pre-trained diffusion models as image priors to update classifiers at test time is novel. In particular, I like the idea of “using classifier’s output logits for a weighted sum of learnt class embeddings”, which is very intuitive to me. 2. It shows the improvement of classification accuracy across different network architectures after using Diffusion-TTA. 3. The ablation studies (in Table 3) are well conducted to show the importance of each component. Weaknesses: 1. One of my major concerns is about the comparison with baselines in experiments. In Section 4.1 (“adapting open-vocabulary CLIP models”), only Diffusion-TTA works with CLIP; the other baselines do not use CLIP. For example, Diffusion Classifier does not use any image classifier, and both Synthetic SD Data and SD Features use the ResNet-50 classifier. I think this setting causes an unfair comparison with baselines. In Section 4.2 (“adapting image classifiers”), I also don’t see which pretrained image classifiers the baselines (COTTA, TENT and TTT-MAE) have used. It would be good to report their accuracies on the same set of network architectures (ResNet-18, ViT-B/32 and ConvNext-Tiny). In the current version, I’m not sure if the proposed method is better than the baselines under the same conditions. 2.
The claim “this is the first work that adapts pre-trained large-scale discriminative models to individual images” is wrong. From what I know, at least TPT [1] is a prior test-time adaptation method using only a single test image. I think TPT is more efficient than the proposed method in adapting CLIP models, since it only updates the input prompts instead of the CLIP weights, and it does not need to backpropagate through large diffusion models with a large batch size. 3. In Figure 6, it looks weird to me that the accuracy curve has a spike at K=5 and does not change after K>=10. Also, since a smaller K (maybe K=1?) achieves very similar accuracy to K=5, it seems that we may not need a distribution of outputs to do a weighted sum. Instead, we can just use the class embedding from the top-1 prediction as the prompt input. 4. Some suggestions regarding the writing presentation: 1) In line 155 and line 156, the variable format is not consistent, such as $z_i$ and $i \in T$. 2) Since the baselines are already discussed in the related work section, there is no need to give a full detailed description of them in the experiment section again (lines 234-250). They can be moved to the Appendix if necessary. 3) The fonts in Figure 5 are too small. [1] Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models, NeurIPS 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the authors provide results of Synthetic SD Data and SD Features with other image classifiers, such as CLIP-ViT models, to make the comparison more sound? Similarly, can the authors provide results of COTTA, TENT and TTT-MAE across different network architectures (ResNet-18, ViT-B/32 and ConvNext-Tiny)? 2. I suggest the authors add a comparison with TPT in both related work and experiments (adapting CLIP models). How does Diffusion-TTA compare with TPT regarding OOD accuracy and inference efficiency? 3.
Can the authors give insight into why the accuracy curve behaves like this (a spike at K=5 and no change after K>=10) and justify the selection of K (K=5 vs K=1)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q2.1: Synthetic SD Data and SD Features baselines use a ResNet-50 classifier and not CLIP.** You are correct; Synthetic SD Data, SD Features, and Diffusion Classifier are not very relevant or fair baselines for Table 1 of the paper, as they don’t use a pre-trained classifier. However, we thought showing their results helps readers understand that TTA with discriminative models is critical for recognition tasks. To avoid confusion, we will remove these baselines from Table 1 and list them in a separate table in the supplementary. We followed Diffusion Classifier’s codebase to run the Synthetic SD-Data baseline using CLIP. Notably, the SD-Feature baseline is not applicable to the CLIP classifier: CLIP takes RGB pixels (HxWx3) as input, whereas SD-Feature takes high-dimensional latent features (HxWx64) from the U-Net as input.

| | Food101 | FGVC | Pets |
| :--- | :----: | :----: | :----: |
| SD-Data | 12.6 | 9.4 | 31.3 |
| **CLIP-ViT/B-32** | 78.4 | 18.8 | 77.8 |
| $~~~~$+SD-Data | 76.5 (-1.9) | 19.4 (+0.6) | 71.2 (-6.6) |
| $~~~~$+Diffusion-TTA | **80.2 (+1.8)** | **22.2 (+3.4)** | **81.1 (+3.3)** |

We find that the SD-Data baseline fails to improve classification on the Food101 and Pets datasets (-1.9 and -6.6), while the improvement on FGVC is marginal (+0.6). Our conjecture is that images generated by Stable Diffusion have a huge distribution shift compared to real images in the test datasets. Fine-tuning CLIP on synthetic images fails to generalize to real images. **Q2.2: Mention the backbones used for COTTA, TENT and TTT-MAE baselines.** Please see Q1.1 and Q1.2 in the global comment. **Q2.3: Weird behavior of K in TopK** Yes, we find that the peak at K=5 was indeed an artifact of the specific dataset and architecture that we were ablating. Since submission, we have conducted a very detailed ablation of K across multiple datasets and architectures; we empirically find that not using top-K gives the best results on average.
Currently, all our reported results use K=5; therefore, we expect our numbers to increase slightly after incorporating this change. We subsample 1 image per category for the ablation study.

| ConvNext-Tiny | ImageNet | ImageNet-R |
| :--- | :----: | :----: |
| K=2 | 82.6 | 47.5 |
| K=5 | 82.9 | 45.5 |
| K=50 | 82.6 | 49 |
| No K | **83.0** | **51** |

| ResNet18 | ImageNet | ImageNet-R |
| :--- | :----: | :----: |
| K=2 | 73.5 | 37 |
| K=5 | 77.3 | 41 |
| K=50 | **78.2** | **43** |
| No K | 78.2 | 42.5 |

| CLIP-ViT/L-14 | FGVC | Pets |
| :--- | :----: | :----: |
| K=1 | 21 | 79.3 |
| K=5 | 21 | 79.3 |
| K=50 | 21 | 79.3 |
| No K | 21 | 79.3 |

**Q2.4: “this is the first work that adapts pre-trained large-scale discriminative models to individual images” is wrong; TPT also adapts large-scale pre-trained classifiers. Add TTA baselines on the Open-Vocabulary CLIP experiment.** Thank you for pointing out the TPT method. We were not aware of it. We will remove the sentence and add TPT to the related work. We present comparisons with TPT on zero-shot classification in the following table. For the TPT baseline, we tune the prompt and evaluate classification on each dataset.
| | Food101 | CIFAR100 | FGVC | Pets | Flower102 | ImageNet | Average Improvement |
| :--- | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| **CLIP-ViT-B/32** | 78.4 | 60.0 | 18.8 | 77.8 | 64.1 | 56.3 | |
| $~~~~$+TPT | 79.8 (+1.4) | **65.2 (+5.2)** | 18.4 (-0.4) | 77.3 (-0.5) | 62.8 (-1.4) | 57.5 (+1.2) | +0.91 |
| $~~~~$+Diffusion-TTA | **80.2 (+1.8)** | 61.8 (+1.8) | **22.2 (+3.4)** | **81.1 (+3.3)** | **64.3 (+0.2)** | **58.1 (+1.8)** | **+2.01** |
| **CLIP-ViT-B/16** | 84.7 | 68.8 | 21.4 | 80.5 | 67.6 | 60.6 | |
| $~~~~$+TPT | 85.2 (+0.5) | **68.4 (-0.4)** | 20.4 (-1.0) | 80.0 (-0.5) | **70.0 (+2.4)** | **61.6 (+1.0)** | +0.31 |
| $~~~~$+Diffusion-TTA | **85.5 (+0.8)** | 67.6 (-0.8) | **22.6 (+1.2)** | **80.8 (+0.3)** | 69.2 (+1.6) | 61.5 (+0.9) | **+0.67** |
| **CLIP-ViT-L/14** | 91.2 | 79.6 | 29.0 | 89.2 | 75.2 | 68.9 | |
| $~~~~$+TPT | 90.6 (-0.6) | 80.6 (+1.0) | 30.2 (+1.2) | 87.6 (-1.6) | **76.9 (+1.6)** | **70.6 (+1.7)** | +0.55 |
| $~~~~$+Diffusion-TTA | **91.2 (0.0)** | 80.6 (+1.0) | **30.6 (+1.6)** | **89.8 (+0.6)** | 76.1 (+0.9) | 69.9 (+1.0) | **+0.85** |

We conclude that: 1. Our Diffusion-TTA consistently outperforms TPT across all CLIP backbone architectures. 2. TPT only adapts prompts and is thus restricted to vision-language classifiers like CLIP. Our method instead can improve any pre-trained classifier, and even dense pixel labellers (e.g., semantic segmentation). 3. Our method works for both single-example and online test-time adaptation, whereas TPT only works in single-example settings. **Q2.5: Writing Suggestions** Thank you for your suggestions. We have revised our paper accordingly. --- Rebuttal Comment 1.1: Title: Response to authors’ rebuttal Comment: Thanks for providing very detailed responses to my concerns. Since most of my concerns have been addressed, I'm happy to raise my rating to weak accept. I hope the authors can incorporate all the changes and new results into the revised version of the paper.
--- Reply to Comment 1.1.1: Comment: Thanks for suggesting the additional TPT baseline; it helped us improve our work. As suggested by the reviewer, we will include the changes and new results in the final paper.
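The top-K truncation ablated in Q2.3 keeps only the K largest classifier probabilities before forming the conditioning weights. A minimal sketch of that step, assuming only what the rebuttal describes (the function name is ours; `k=None` corresponds to the "No K" setting the ablation found best on average):

```python
def topk_renormalize(probs, k=None):
    """Zero out all but the k largest probabilities, then renormalize.

    With k=None (the 'No K' setting), the full distribution is used
    unchanged; k=1 degenerates to a hard top-1 conditioning.
    """
    if k is None or k >= len(probs):
        return list(probs)
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order[:k])
    mass = sum(probs[i] for i in keep)
    return [probs[i] / mass if i in keep else 0.0 for i in range(len(probs))]
```

For example, truncating `[0.5, 0.3, 0.2]` to its top 2 entries yields `[0.625, 0.375, 0.0]` after renormalization.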
BiSLS/SPS: Auto-tune Step Sizes for Stable Bi-level Optimization
Accept (poster)
Summary: This paper studies the bilevel optimization problem and, in particular, focuses on developing effective learning-rate schemes for bilevel optimization. Specifically, the authors propose two adaptive step-size methods, named stochastic line search (SLS) and stochastic Polyak step size (SPS), as variants of methods such as SPSmax and DecSPS. Compared to existing approaches, the proposed methods do not require the step sizes to be monotonic, by replacing the constant $\gamma_{b,0}$ with a non-increasing $\gamma_{b,k}$. The adaptive step sizes are further applied to bilevel optimization on both the upper and lower levels. Convergence analysis is provided for SPS and SLS on single-level problems and also for bilevel problems. Simple experiments are provided. Strengths: 1. Studying adaptive learning rates in bilevel optimization is interesting, and has not been explored well in the literature. 2. The proposed adaptive learning rates are more practical than existing methods like SPSmax and DecSPS, with milder requirements. Weaknesses: 1. The paper is not well written and is hard to follow. For example, there are quite a few assumptions and requirements in the paragraphs describing the proposed methods. There are also many notations and inequalities in Section 2. This makes it a little hard for me to follow the principle behind the designs. 2. The adaptive learning rates seem to introduce more hyperparameters such as $l^*_{i_k}, c_{k},\gamma_{b,k}$. It makes me wonder how meaningful the method can be in practice, given the extra effort in tuning such parameters. In addition, no sensitivity analysis is provided for such parameters in the experiments. 3. How does one find the approximation $l^*_{i_k}$ of the lowest function value? Is the choice of this quantity important in terms of performance? A more comprehensive empirical justification should be provided. 4. In terms of complexity, it seems to me the proposed step sizes cannot provide improved sample or computational complexity in theory.
Maybe I am missing something, but I think the authors can elaborate on this. 5. The experiments are not convincing enough. Since the biggest motivation lies in the design of adaptive learning rates, the benefits should come from the empirical side. Thus, it would be better to provide more comprehensive experiments on practical NN architectures and larger datasets for validation. However, the current experiments are rather toy examples. If the contribution lies on the theoretical side, the paper fails to show improved complexity performance. Overall, I like the topic of adaptive learning rates for bilevel problems. However, the current theory and experiments are not convincing enough, and how important the proposed methods are is also not clear to me. For this reason, I am on the negative side for the current version. I am also open to changing my mind if more convincing experiments (e.g., how to select the extra parameters, more datasets and backbones, sensitivity analysis, more problems) can be provided. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the Weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Please see the Weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reply to weakness 1:** We would appreciate it if the reviewer could elaborate on the specific issues that might be unclear to them. This would enable us to address these concerns and enhance the overall clarity of our work. **Reply to weakness 2:** Let us clarify the roles of these hyperparameters. Starting with $l_{i_k}^*$, note that this is simply a **lower bound** on the minimum function value. In particular, it can be taken as $0$ for positive losses such as cross-entropy, and we set it to $0$ for all experiments. With regard to the parameters $c_k$ and $\gamma_{b,k}$, we emphasize that our algorithm is robust to these choices. To further illustrate this, we provide below additional experiments on the minimum train loss for different values of $\gamma_{b,0} \in \{100, 500, 1000, 2000\}$ with $c_k = 1, \forall k$ (note the decay schedule for $\gamma_{b,k}$ is $\frac{\gamma_{b,0}}{\sqrt{k+1}}$).

| $\gamma_{b,0}$ | 100 | 500 | 1000 | 2000 |
| --- | --- | --- | --- | --- |
| Minimum train loss | 0.04821 $\pm$ 2e-05 | 0.04458 $\pm$ 3e-05 | 0.04433 $\pm$ 5e-05 | 0.04475 $\pm$ 9e-05 |

We further show the results for different values of $c_k \in \{1, 2, 5, 10\}$ with $\gamma_{b,0} = 1000$.

| $c_k$ | 1 | 2 | 5 | 10 |
| --- | --- | --- | --- | --- |
| Minimum train loss | 0.04433 $\pm$ 5e-05 | 0.04439 $\pm$ 6e-05 | 0.04453 $\pm$ 6e-05 | 0.04497 $\pm$ 5e-05 |

In particular, we wish to contrast these results with the sensitivity of decaying-step SGD to its learning rate. For the latter, the table below shows that the minimum train loss of SGD changes much more drastically as the learning rate changes when compared against our algorithm (note the learning rate schedule is chosen to be $\frac{\gamma_{b,0}}{\sqrt{k+1}}$). 
| $\gamma_{b,0}$ | 100 | 500 | 1000 | 2000 |
| --- | --- | --- | --- | --- |
| Minimum train loss | 0.04945 $\pm$ 8e-05 | 0.073 $\pm$ 1e-03 | 0.119 $\pm$ 3e-03 | 0.220 $\pm$ 6e-03 |

**We did a thorough study of the extra parameters associated with our algorithmic design in the bi-level setting, specifically: search initializations for the upper-level ($\alpha_{b,0}$) and lower-level ($\beta_{b,0}$) learning rates (Figures 6c, 6d), sensitivity of the algorithm to $\delta$ in eqn (14) (Figures 12, 13 in the Appendix), performance of the different reset options in Algorithm 2 (Figure 9 in the Appendix), sensitivity to $\eta$ in reset option 3 (Figure 10 in the Appendix), and upper- and lower-level search cost (Figure 11 in the Appendix). We want to emphasize that BiSLS is highly robust to the parameters associated with the algorithm, namely $\alpha_{b,0}$, $\beta_{b,0}$, $\delta$, and $\eta$** (note that reset option 3 has the best performance in terms of convergence speed, generalization, and computation cost). Besides this, we also observe that BiSPS is much more robust to $\alpha_{b,0}$ (the upper-level learning rate bound) than decaying-step SGD (Figure 4). **Reply to weakness 3:** See the reply to weakness 2. **Reply to weakness 4:** The convergence rate matches the best rate of SGD [1] while not requiring exhaustive step-size tuning. [1] Chen et al. Tighter Analysis of Alternating Stochastic Gradient Method for Stochastic Nested Problems **Reply to weakness 5:** **The key contribution of this work is to address the question of how to remove the extensive manual tuning of the two learning rates in bi-level optimization (Figures 5b, 5c)**. The experiments on hyper-representation learning and data distillation are adapted from [2] and [3], respectively, using neural networks and real datasets. These are important and recent applications of bi-level optimization. 
The experiments in the bi-level setting are more challenging than in the single-level setting, as the computation of the hypergradient typically involves second-order gradient information, which can incur a high memory cost. [2] Sow et al. On the Convergence Theory for Hessian-Free Bilevel Algorithms [3] Lorraine et al. Optimizing Millions of Hyperparameters by Implicit Differentiation To concretely demonstrate the computational efficiency of our approach, we performed additional experiments on the run time of the algorithm to reach 85% validation accuracy for different numbers of Conjugate Gradient (CG) steps (results, in seconds, are given in the table below). We observe a consistent improvement of our algorithm over the baseline. Besides this, the baseline requires extensive tuning of the two learning rates, which adds significantly more computation cost in the first place (not included in the table).

| CG steps | 5 | 10 | 15 | 20 | 25 |
| --- | --- | --- | --- | --- | --- |
| Adam | 138.4 $\pm$ 15.5 | 131.8 $\pm$ 14.7 | 144.6 $\pm$ 23.6 | 158.1 $\pm$ 19.6 | 169.93 $\pm$ 22.4 |
| BiSLS-Adam (ours) | 84.7 $\pm$ 14.5 | 84.0 $\pm$ 10.5 | 98.5 $\pm$ 20.9 | 110.3 $\pm$ 19.5 | 117.6 $\pm$ 22.7 |

Furthermore, we added experiments that vary the number of gradient steps (given in the top row of the table below) for approximating $y^*(x)$ when executing the line-search steps according to eqn (14) (note this number is limited to $1$ in eqn (14)). The results suggest that a single step already gives good performance. Further increasing this number does not lead to significant improvement considering the increase in the run time of the algorithm. 
| | 1 | 2 | 5 | 7 | 10 |
| --- | --- | --- | --- | --- | --- |
| Best validation accuracy (%) | 91.42 $\pm$ 0.70 | 91.78 $\pm$ 0.70 | 91.88 $\pm$ 0.83 | 91.86 $\pm$ 0.99 | 92.00 $\pm$ 0.95 |
| Time to reach 85% validation accuracy (seconds) | 110.10 $\pm$ 19.70 | 113.76 $\pm$ 20.49 | 161.91 $\pm$ 26.85 | 181.74 $\pm$ 22.84 | 244.34 $\pm$ 38.02 |

Finally, our experiments demonstrate that the proposed algorithm can work with different types of hypergradient computation, including CG (Figure 5a), the Neumann series (Figure 7a), and the Hessian inverse being treated as the identity (Figure 7b). We believe that our experiments are comprehensive. Thus, we are happy to discuss further if there are any remaining concerns. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I thank the authors for their rebuttal, and I increase my score to 5.
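To make the step-size rule discussed in this thread concrete, here is a minimal sketch of a Polyak-type step capped by the non-increasing envelope $\gamma_{b,k} = \gamma_{b,0}/\sqrt{k+1}$; the function name, signature, and the small denominator guard are our assumptions, not the paper's exact DecSPS-variant rule:

```python
import numpy as np

def sps_step_size(loss_i, grad_i, l_star=0.0, c_k=1.0, gamma_b0=1000.0, k=0):
    """Polyak-type step for sample i, capped by a decaying envelope
    gamma_{b,k} = gamma_{b,0} / sqrt(k + 1) (illustrative sketch)."""
    gamma_bk = gamma_b0 / np.sqrt(k + 1)  # non-increasing upper bound
    polyak = (loss_i - l_star) / (c_k * float(np.dot(grad_i, grad_i)) + 1e-12)
    return min(gamma_bk, polyak)
```

For positive losses such as cross-entropy, the lower bound `l_star` can simply be set to 0, matching the choice described in the rebuttal above.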
Summary: This paper presents an adaptive step-size algorithm for bi-level optimization, which addresses a shortcoming of existing BO methods that require careful tuning of the upper- and lower-level learning rates, and it gives a proof of convergence. Experiments also verify the robustness of the method to the learning rate. Strengths: 1. This paper presents an adaptive and robust bi-level optimization algorithm with adaptive step sizes, which can obtain a good set of step sizes without prior knowledge or careful tuning. 2. The algorithm is compatible with the accelerated solver Adam in addition to SGD, improving computational efficiency. 3. The analysis framework of this algorithm unifies SPS and SLS in a more general way. Weaknesses: 1. Theorem 1 only assumes that f is convex rather than strongly convex, but the paper lacks an explanation for the case of multiple solutions at the lower level. 2. Potential computational burden. Although the authors state that only a small number of matrix-vector multiplications and additions are usually needed to approximate matrix inverses, this still implies additional computational complexity, and there is a trade-off between the performance loss caused by the approximation and the gain in computational efficiency. Specifically, the lack of timing analysis in the paper's charts exacerbates this concern. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The figures are difficult to read. For example, the line for beta=10.0 in Figure 1 is not fully drawn, different colors and line styles are mixed in Figure 2 without carrying additional meaning, and the second half of the right subplot in Figure 3 is too cluttered, leaving some lines unrecognizable. 2. The light-colored parts in the figures do not seem to represent three standard deviations, but rather upper and lower bounds? 
3. I noticed that the sequence of upper and lower bounds for the step size needs to be appropriately controlled, so my question is whether the decay rate has to be $1/\sqrt{k+1}$. Or can the decay rate be changed to change the rate of convergence? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors clearly stated the assumptions of the algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive assessment of our work. Weakness 1: Theorem 1 only assumes that f is convex rather than strongly convex, but the paper lacks an explanation for the case of multiple solutions at the lower level. **Reply to weakness 1:** Let us clarify that Theorem 1 is for the case of single-level optimization; thus, there is no lower-level problem. Weakness 2: Potential computational burden. Although the authors state that only a small number of matrix-vector multiplications and additions are usually needed to approximate matrix inverses, this still implies additional computational complexity, and there is a trade-off between the performance loss caused by the approximation and the gain in computational efficiency. Specifically, the lack of timing analysis in the paper's charts exacerbates this concern. **Reply to weakness 2:** We want to emphasize that there are no additional backpropagation operations when executing the line-search steps until eqn (14) is satisfied. Hence, the matrix-vector multiplications are mainly associated with computing the hypergradient. Here, we have added additional experiments on the run time (in seconds) of the algorithm to reach 85% validation accuracy for different numbers of Conjugate Gradient (CG) steps, given in the table below. We observe a consistent improvement of our algorithm over the baseline in terms of computation time. This is due to the suitable and potentially large learning rates found by our algorithm (Figures 5b, 5c). Besides this, a more significant computation cost for the baseline is tuning the learning rates (not included in the table), which is not required by our algorithm. 
| CG steps | 5 | 10 | 15 | 20 | 25 |
| --- | --- | --- | --- | --- | --- |
| Adam | 138.4 $\pm$ 15.5 | 131.8 $\pm$ 14.7 | 144.6 $\pm$ 23.6 | 158.1 $\pm$ 19.6 | 169.93 $\pm$ 22.4 |
| BiSLS-Adam (ours) | 84.7 $\pm$ 14.5 | 84.0 $\pm$ 10.5 | 98.5 $\pm$ 20.9 | 110.3 $\pm$ 19.5 | 117.6 $\pm$ 22.7 |

To further explore the trade-off between performance loss and computational efficiency, we have added experiments that vary the number of steps for approximating $y^*(x)$ in eqn (14) (note that we limit it to $1$ in eqn (14)). We observe that increasing this number can improve the performance of the algorithm, but the gain may not be significant considering the extra overhead introduced, e.g., comparing $1$ step against $10$. Figure 11 in the Appendix provides additional information on the search cost for different values of $\eta$ in reset option 3.

| | 1 | 2 | 5 | 7 | 10 |
| --- | --- | --- | --- | --- | --- |
| Best validation accuracy (%) | 91.42 $\pm$ 0.70 | 91.78 $\pm$ 0.70 | 91.88 $\pm$ 0.83 | 91.86 $\pm$ 0.99 | 92.00 $\pm$ 0.95 |
| Time to reach 85% validation accuracy (seconds) | 110.10 $\pm$ 19.70 | 113.76 $\pm$ 20.49 | 161.91 $\pm$ 26.85 | 181.74 $\pm$ 22.84 | 244.34 $\pm$ 38.02 |

Questions: 1. The figures are difficult to read. For example, the line for beta=10.0 in Figure 1 is not fully drawn, different colors and line styles are mixed in Figure 2 without carrying additional meaning, and the second half of the right subplot in Figure 3 is too cluttered, leaving some lines unrecognizable. 2. The light-colored parts in the figures do not seem to represent three standard deviations, but rather upper and lower bounds? 3. I noticed that the sequence of upper and lower bounds for the step size needs to be appropriately controlled, so my question is whether the decay rate has to be $1/\sqrt{k+1}$. Or can the decay rate be changed to change the rate of convergence? **Reply to Q1:** Thanks for the suggestions regarding Figures 2 and 3. 
We will improve their readability in the revision. For $\beta=10.0$, the algorithm diverges after the period over which the line is drawn. **Reply to Q2:** The light-colored parts are standard deviations. **Reply to Q3:** We expect that using different envelopes can give faster rates under additional assumptions (or achieve similar rates in more general settings). --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. It resolved my concerns. I don't have any other questions at the moment.
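For concreteness, the line-search step referenced throughout this exchange can be sketched with a plain Armijo backtracking rule; the paper's modified condition in eqn (14) is not reproduced here, and all names and constants are illustrative:

```python
def armijo_backtrack(f, grad, x, alpha0=1.0, c=0.1, beta=0.5, max_halvings=30):
    """Shrink the step until the sufficient-decrease condition
    f(x - a*g) <= f(x) - c * a * ||g||^2 holds (illustrative sketch)."""
    g = grad(x)
    gnorm2 = sum(gi * gi for gi in g)
    fx, alpha = f(x), alpha0
    for _ in range(max_halvings):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        if f(x_new) <= fx - c * alpha * gnorm2:
            break
        alpha *= beta
    return alpha
```

Note that each trial step only re-evaluates the loss, not the gradient, which matches the rebuttal's point that the search itself requires no additional backpropagation.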
Summary: This work studies adaptive step-size methods for both single-level and bilevel optimization. The authors propose two novel variants of stochastic line search (SLS) and stochastic Polyak step size (SPS), and they unify these variants into a general envelope strategy. Importantly, these variants are simpler to implement and demonstrate good empirical performance, particularly in non-interpolating scenarios. Using the unified envelope strategy, the authors also propose a bi-level line-search algorithm, BiSLS-Adam/SGD, with convergence guarantees, which demonstrates empirical robustness and generalizes well. Strengths: 1. Both adaptive step-size methods and bilevel optimization are currently active topics. The investigation of auto-tuned step sizes for bilevel optimization algorithms is under-explored, and the topic studied in this work is interesting and important. 2. The illustrations in Figures 1-5 are helpful in understanding the contributions. 3. The newly proposed variants of stochastic line search (SLS) and stochastic Polyak step size (SPS) in this work are novel and easy to understand. Moreover, these variants can be unified into a general envelope-type step size, and their effectiveness in the context of single-level optimization and bilevel optimization is well supported by theoretical results and extensive experiments. Weaknesses: See the Limitations below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Q1: Can the authors explain in more detail why it is possible to set the number of inner steps to be 1 in Line 213? Minor Comments: 1. For Equation (2), the $+$ should be $-$. 2. In Equation (13), $\nabla_y^2 yg$ should be $\nabla_{yy}^2 g$. 3. In Line 175, $\nabla f_x$ should be $\nabla_x f$. 4. The quadratic functions in Figure 2 and Section B.1 do not satisfy the gradient-boundedness assumption in Assumption 2. 5. The $+$ in Equation (17) should be $-$. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors discussed some of the limitations on single-level optimization in the conclusion section of the paper. On bilevel optimization, a limitation is the lower-level strong convexity in Assumption 3. It would be of interest to investigate whether this condition can be relaxed or removed, taking into account recent advancements in the field, such as those presented in [1, 2,3]. [1] B. Liu et al. “Bome! bilevel optimization made easy: A simple first-order approach.” NeurIPS 2022. [2] R. Liu et al. “Averaged Method of Multipliers for Bi-Level Optimization without Lower-Level Strong Convexity.” ICML 2023. [3] H. Shen and T. Chen, “On penalty-based bilevel gradient descent method.” ICML 2023. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback! We appreciate your careful reading of our paper and thank you for pointing out a few typos, which will be fixed in the revision. Q1: Can the authors explain in more detail why it is possible to set the number of inner steps to be 1 in Line 213? **Reply to Q1:** Although we have not formally analyzed this heuristic, we have found that it works well empirically while significantly decreasing the computation cost. Specifically, while more steps can be used to obtain a better approximation of $y^*(x)$ in the nested loop, they do not seem to give significant improvements in validation accuracy considering the extra overhead introduced. We have added additional experiments to justify this point, varying the number of steps for approximating $y^*(x)$ (given in the top row of the table below).

| | 1 | 2 | 5 | 7 | 10 |
| --- | --- | --- | --- | --- | --- |
| Best validation accuracy (%) | 91.42 $\pm$ 0.70 | 91.78 $\pm$ 0.70 | 91.88 $\pm$ 0.83 | 91.86 $\pm$ 0.99 | 92.00 $\pm$ 0.95 |
| Time to reach 85% validation accuracy (seconds) | 110.10 $\pm$ 19.70 | 113.76 $\pm$ 20.49 | 161.91 $\pm$ 26.85 | 181.74 $\pm$ 22.84 | 244.34 $\pm$ 38.02 |

Limitations: The authors discussed some of the limitations on single-level optimization in the conclusion section of the paper. On bilevel optimization, a limitation is the lower-level strong convexity in Assumption 3. It would be of interest to investigate whether this condition can be relaxed or removed, taking into account recent advancements in the field, such as those presented in [1, 2, 3]. **Reply to Limitations:** Thank you for highlighting these recent works. While lower-level strong convexity is a standard assumption in bi-level optimization, e.g., [4, 5, 6, 7], we are eager to study in detail the references that you mentioned, hoping to extend our results to that context in future research. [4] Ghadimi and Wang Approximation Methods for Bilevel Programming [5] Hong et al. 
A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic [6] Ji et al. Bilevel Optimization: Convergence Analysis and Enhanced Design [7] Chen et al. Tighter Analysis of Alternating Stochastic Gradient Method for Stochastic Nested Problems --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal and I do not have further question for the moment.
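As a rough illustration of the alternating framework (eqn (3)) that these rebuttals build on, one outer iteration might look as follows; `grad_y_g` and `hypergrad_f` are hypothetical callables standing in for the lower-level gradient and the hypergradient, and the step sizes here are fixed rather than found by line search:

```python
def bilevel_alternating_step(x, y, grad_y_g, hypergrad_f,
                             beta=0.1, alpha=0.1, inner_steps=1):
    """One outer iteration: `inner_steps` lower-level gradient updates
    on y, then a single upper-level hypergradient update on x (sketch)."""
    for _ in range(inner_steps):
        y = [yi - beta * gi for yi, gi in zip(y, grad_y_g(x, y))]
    x = [xi - alpha * hi for xi, hi in zip(x, hypergrad_f(x, y))]
    return x, y
```

On a toy problem with lower level $g(x,y) = (y-x)^2$, a single inner step with $\beta = 0.25$ already moves $y$ halfway toward $y^*(x) = x$, echoing the rebuttal's observation that one inner step can suffice in practice.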
Summary: The paper introduces the use of stochastic adaptive step-size methods, namely stochastic Polyak step size (SPS) and stochastic line search (SLS), for bi-level optimization. This approach addresses the challenge of tuning both the lower- and upper-level learning rates. Strengths: 1. SLS and SPS can be seen as special instances of a general family of methods with an envelope-type step size. 2. The unified envelope strategy enables the algorithm development and convergence analysis. Weaknesses: The paper compares the proposed algorithms to vanilla SGD or Adam versions, but it does not provide a comprehensive comparison with other existing algorithms for bi-level optimization. This may limit the understanding of the relative performance of the proposed algorithms in the broader context. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the overhead of using the proposed algorithm to tune the step size? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your overall positive assessment of our work. Weakness: The paper compares the proposed algorithms to vanilla SGD or Adam versions, but it does not provide a comprehensive comparison with other existing algorithms for bi-level optimization. This may limit the understanding of the relative performance of the proposed algorithms in the broader context. **Reply to weakness:** We would greatly appreciate it if the reviewer could provide more specific details regarding particular methods they have in mind that could be missing from the comparisons. The primary focus of our work centers on the intricacies of tuning the two learning rates in the context of bi-level optimization. Thus, we try to make the comparison as direct and fair as possible under the alternating optimization framework given in eqn (3). For instance, some bi-level optimization algorithms that rely on variance reduction or momentum may require additional hyperparameter tuning, which falls outside the scope of our work. Nonetheless, we believe that our approach can be integrated with these methods, since the general motivation behind line search is to find the learning rate given a suitable direction. Moreover, we have already shown the compatibility of our algorithm with various techniques for computing the hypergradient, such as Conjugate Gradient, the Neumann series, or the Hessian inverse being treated as the identity (Figures 5a, 7a, 7b). Question: What is the overhead of using the proposed algorithm to tune the step size? **Reply to question:** The overhead is the execution of line-search steps until the modified Armijo line-search rule given in eqn (14) is satisfied. To avoid always searching from the initial learning rates $\alpha_{b,0}$ and $\beta_{b,0}$, we have devised a reset subroutine (Algorithm 2) that sets the search starting point of the current iteration to a factor $\eta$ times the previous iteration's learning rate (option 3). 
Its full description can be found in Section 2 and Appendix B.2. We have shown that our algorithm is robust to the choice of $\eta$ in Figure 10 in the Appendix. Furthermore, Figure 11 in the Appendix shows that the costs of finding the upper- and lower-level learning rates are $9$ and $1$ respectively when $\eta = 2$ (measured as the average number of search rounds per iteration until eqn (14) is satisfied). To further address your question, we added additional experiments on the run time of the algorithm measured in seconds. Despite the search cost, our algorithm reaches a threshold validation accuracy (85%) faster than the baseline for different numbers of Conjugate Gradient (CG) steps, as shown in the table below. This is because our algorithm is able to find suitable and potentially large learning rates, as demonstrated in Figures 5b and 5c. Moreover, tuning the baseline's learning rates has a significantly higher computation cost (not included in the table) than running the algorithm itself, which is resolved by our proposed approach. We are happy to discuss further if there are any additional questions.

| CG steps | 5 | 10 | 15 | 20 | 25 |
| --- | --- | --- | --- | --- | --- |
| Adam | 138.4 $\pm$ 15.5 | 131.8 $\pm$ 14.7 | 144.6 $\pm$ 23.6 | 158.1 $\pm$ 19.6 | 169.93 $\pm$ 22.4 |
| BiSLS-Adam (ours) | 84.7 $\pm$ 14.5 | 84.0 $\pm$ 10.5 | 98.5 $\pm$ 20.9 | 110.3 $\pm$ 19.5 | 117.6 $\pm$ 22.7 |
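The warm-start reset described above (option 3: start each search at $\eta$ times the previously accepted step instead of the initial value) could look roughly like the sketch below; this is our illustration on plain gradient descent, not Algorithm 2 itself:

```python
def search_with_warm_start(f, grad, x0, iters=5, alpha0=1.0,
                           c=0.1, beta=0.5, eta=2.0):
    """Armijo backtracking where iteration k starts its search at
    eta * (step accepted at iteration k-1), not at alpha0 (sketch)."""
    x, alpha_prev = list(x0), alpha0
    for _ in range(iters):
        g = grad(x)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 == 0.0:  # already stationary
            break
        alpha, fx = eta * alpha_prev, f(x)
        # backtrack until the sufficient-decrease condition holds
        while f([xi - alpha * gi for xi, gi in zip(x, g)]) > fx - c * alpha * gnorm2:
            alpha *= beta
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
        alpha_prev = alpha
    return x, alpha_prev
```

The point of the reset is to cap the per-iteration search cost: when consecutive iterations accept similar steps, the search starting from `eta * alpha_prev` terminates after only a few backtracks rather than descending all the way from `alpha0`.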
Rebuttal 1: Rebuttal: Dear Reviewers, We thank you for taking the time to carefully read and review our work. We are confident that integrating your suggestions will further improve our paper, and we plan to do so in the revision. In the replies below, we address the specific issues brought up by each reviewer. We are open to further discussion should any additional questions arise.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Rethinking Conditional Diffusion Sampling with Progressive Guidance
Accept (poster)
Summary: This paper proposes a generalized classifier guidance method for diffusion models with progressive guidance along both the class and temporal dimensions, to handle the adversarial-effect and diversity-suppression problems of vanilla classifier guidance. In experiments, the proposed method shows advantages over vanilla classifier guidance and achieves a new state of the art when combined with other methods under certain settings. Strengths: 1. This paper has clear and well-established motivations, working on two important problems in classifier guidance encountered by the community: the adversarial effect and diversity suppression. This can be a good contribution to the community. 2. The proposed progressive guidance method is simple, effective, intuitive, and well aligned with the motivations. The entropy perspective is also very interesting. 3. The paper is well written, with good clarity, nice flow, and good intuition. At the same time, detailed explanations and illustrative analysis are provided for the method for better understanding. The reading experience is good. 4. The experiments are relatively comprehensive, with multiple datasets and baselines. The alleviation of the adversarial effect and diversity suppression is validated. The advantage over vanilla classifier guidance is shown. A new SOTA is achieved under certain settings. 5. Code is available. Weaknesses: 1. In Table 1, the results on CIFAR are not as good as those on ImageNet. What is the potential reason, and is there any insight? 2. The information degree is based on the descriptions generated by ChatGPT. What are the prompts? What is the impact of different prompts? 3. Although the paper is well written in general, there are still some clarity issues: - The derivation and introduction of the entropy perspective in the main paper are not very easy to understand. The writing should be improved in a more logical way. - What do the superscripts on methods in Table 3 mean? 
What does “LS (NO SCHE.)” in Table 6 mean? 4. There are also some minor writing issues: - In Figure 1c, the meaning of darkness is not stated. - In Figure 1c, it should be stated that “c1” is the condition. - In the Figure 1 caption, a space is missing before “Dataset: ImageNet64x64”. - In L57, “Propose” > “Proposing”. - In L61, “over” > “of”. - In L61, “SOTA” should be spelled out. - “Markov chain” > “Markovian chain”. - In section 2, the definition of epsilon and sigma is not mentioned. The background should be self-contained. - In L79, “log_phi p” should be “log p_phi”. Phi is not explicitly defined. - In L87-88, p should be p_phi. Make it consistent. - In L91, >= should be a single symbol. - The flow at the start of section 3.1 is too fast. Add some rationale and the definition of information degree first. - In L131, a space is missing after “ChatGPT”. - In Eq.7, s_t,i should be s_i,t. Also check other places. - In Table 2, should “CADM-G+ProG” be “CADM+ProG”? - In L260, remove the spacing after “EDS [32]”. - In the Table 3 caption, “State” > “state”. - The capitalization in the references should be corrected, such as “gans”. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the writing issues; then I would consider this paper a good contribution to the community. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The limitation is briefly discussed at the end of the paper. I think the authors should elaborate on this to provide more insights. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Question 1: Low improvement on CIFAR-10 We have analyzed this problem and discovered that it lies in the semantic labels of CIFAR-10 itself. Compared to ImageNet, the labels in CIFAR-10 share less related information with each other. Most of the information shared between classes is background information. For example, an airplane and a bird only share a blue background; an automobile might only share a green background or a street background with a horse. As a result, most of the supporting information from other classes quickly turns into noise during sampling. In contrast, in ImageNet many classes share similar features, such as a set of dog breeds (Brittany Spaniel, Standard Poodle, Keeshond, and Eskimo Dog) that share many backgrounds, colors, and poses. The shared information from relevant classes is beneficial for constructing the primary class. ### Question 2: Prompts and their effect The prompt we use has the form:

```
Add text description. For example, "Tench" will turn into "Characterized by its distinctive olive-green to golden-brown coloration, the tench has a robust and slightly elongated body with a rounded tail fin. It inhabits slow-moving or still waters such as lakes, ponds, and slow rivers across Europe and parts of Asia. Renowned for its adaptability to varying water conditions, the tench can thrive in environments with low oxygen levels due to its unique respiratory adaptations.". Apply for the following fields: 1. Goldfish, Carassius auratus 2. Great white shark, white shark, man-eater, man-eating shark, Carharodon Zacharias
```

The primary motivation is to hint at the type of description we want. Suppose we use a different type of prompt without such a hint: the output is a lengthy paragraph that includes information unrelated to the visual description, such as the class's place of origin or history, which makes text preprocessing harder and is less relevant for generating image features. 
### Question 3.1: Introduction and derivation of reverse entropy regularization We will rewrite this part in the final manuscript. In detail, we will add an explanation for each constraint as below: 1. Constraints (1), (2), (3) ensure that the information degree is a distribution in which $s_c$ achieves the highest value. This matches the conditions on the information degree mentioned in section 3.1. 2. As mentioned in eq. (6), when $\mathbf{s}_t$ is varied, $$D_{KL}(\mathbf{s}_t \,\|\, p_{\phi}(\mathbf{y}|\mathbf{x}_t)) = \sum_{i=1}^C s_{t,i} \log s_{t,i} - s_{t,i} \log p_{\phi}(y_i | \mathbf{x}_t)$$ is considered in full form instead of only $\sum_{i=1}^C -s_{t,i} \log p_{\phi}(y_i | \mathbf{x}_t)$ as in eq. (4). However, due to the term $\sum_{i=1}^C s_{t,i} \log s_{t,i}$, the objectives of $\min_{\mathbf{s}_t} D_{KL}$ and eq. (7) in the main paper are in conflict. To avoid this, we introduce the constraint $$|s^*_{t,i} - s_{t,i}| \leq l, \quad \forall\, 1 \leq i \leq C \quad (4)$$ Constraint (4) ensures that no value $s_{t,i}$ can change by more than the bound $l$ in one timestep, resulting in a minimal change in the KL divergence while minimizing the entropy objective. ### Question 3.2: Clarification on Table 3 and Table 6 As mentioned in lines 83-85 of the Appendix, $+$ denotes a score evaluated on the samples provided by the paper, and $++$ means the values are taken directly from the original article due to the unavailability of the pretrained model. The $*$ on classifier-free guidance means that no implementation or pretrained model from the paper is available; as a result, we used our own implementation and evaluation. We will state these clearly in the final manuscript. In Table 6, LS (NO SCHE.) means we apply label smoothing to the information degree without a progressive schedule, i.e., the values of the information degree do not change during sampling. 
This helps to verify that our improvement does not come from smoothing the gradient scales. ### Question 4: Minor writing issues Since we cannot upload a revision, we will edit the writing according to the reviewer's suggestions in the final manuscript: 1. The darkness in Figure 1c indicates the weight placed on that gradient: darker means more weight. 2. In Table 2, "CADM-G + ProG" is used consistently throughout the paper, so it is correct. However, in Table 3, all instances of "CADM" should be "CADM-G"; we will correct them in the final version. 3. We will fix all typos, grammar mistakes, and spacing issues throughout the paper, following the reviewer's suggestions. ### Question 5: Clarification on the limitations of the work In this work, we consider two problems: diversity and adversarial effects in generated images. We characterize the adversarial effect as a generated image achieving very high confidence in the conditional class while having minimal features of that class. However, we can also observe many generated images that do not have high confidence in the conditional class; for these cases, our proposed method, ProG, cannot help. We hypothesize that, under some conditions, the signals from the diffusion model dominate the signals from the classifier, negating the classifier gradients that could transform the images toward the expected condition. --- Rebuttal Comment 1.1: Title: Looking forward to the response from the Reviewer mBBP Comment: Dear Reviewer mBBP, We want to express our sincere gratitude for your valuable feedback on our work. Your insights have greatly contributed to enhancing the quality of our manuscript. In response to your comments, we have taken thorough measures to address each concern: 1. Low improvement on CIFAR-10 2. Details to explain the entropy perspective. Sorry for the equations with index {t, i} in the rebuttals.
It should be {i,t}, as the reviewer suggested. We will carefully fix this issue in the writing. 3. Clarification on Tables 3 and 6. 4. Minor writing issues. We acknowledged several minor issues in the writing and will fix them in the final version. Section 2 will also be rewritten to be more informative, and Section 3 will be revised to proceed at a gentler pace for readers. 5. Clarification on the limitations of the work: We provided a hypothesis behind the observations and left the problem for future work. Once again, we extend our heartfelt thanks for your invaluable input. Your dedication to the peer-review process has been instrumental in shaping the quality of our manuscript. Please feel free to reach out with any further comments, and we assure you that each concern will be dealt with with the utmost attention. Best regards, Authors #3926
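The bounded progressive update of the information degree described in Question 3.1 above can be sketched as follows. This is a minimal illustration under our own assumptions (a clipped step toward the one-hot endpoint with per-entry bound $l$, followed by renormalization); it is not the authors' implementation, and all names are hypothetical:

```python
import numpy as np

def progressive_step(s, primary, l=0.02):
    """One progressive update of the information degree `s`.

    Moves `s` toward the one-hot distribution of the primary class
    (the entropy-minimizing endpoint) while enforcing the constraint
    |s*_i - s_i| <= l from constraint (4), then renormalizes so that
    `s` remains a valid distribution.
    """
    target = np.zeros_like(s)
    target[primary] = 1.0
    step = np.clip(target - s, -l, l)   # per-entry change bounded by l
    s_new = s + step
    return s_new / s_new.sum()          # keep a valid distribution
```

Because each entry moves by at most $l$ per timestep, the KL divergence to the classifier posterior changes only gradually while the entropy objective is minimized.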
Summary: In this paper, the authors proposed a new classifier guidance technique for diffusion models named Progressive Guidance (PROG). PROG is an extended version of the classifier guidance method and incorporates the relevant classes' information (beyond the target class alone) to determine the guidance direction, particularly in the presence of noisy images during the early sampling stage. Through extensive experiments on various datasets and diffusion models, the authors demonstrate the effectiveness of PROG in comparison to the standard classifier guidance technique. Strengths: [1] I think the paper is well written and easy to understand. Weaknesses: [1] I think the technical novelty of this paper may not meet the standards required for acceptance at NeurIPS. [2] In my opinion, the proposed PROG technique is only marginally better or comparable to the standard classifier guidance method (according to Tables 1 and 3). [3] Furthermore, I am not sure about whether the proposed classifier guidance (PROG) is superior to classifier-free guidance in terms of performance and applicability. The image generation community has recently witnessed a rapid shift from class conditional image synthesis to text-to-image synthesis. However, it appears that the proposed classifier guidance method may not be applicable to text-to-image synthesis tasks. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: see above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: As mentioned in the weaknesses section, I think that the performance improvement of the proposed method over the standard classifier guidance is marginal, and the limited applicability of classifier guidance is a significant limitation of this paper. 
To strengthen the submission, I recommend that the authors emphasize the strong advantages of their approach compared to the standard method (classifier guidance). By highlighting these unique aspects, the authors can make their submission more compelling and address the aforementioned limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Question 1: The novelty of the work Since the reviewer does not state the reasons why our work is not novel, we summarize our three main novelties below: [**Nov1**]: Identifies and justifies the **diversity suppression** problem caused by the classification gradient. As far as we know, this problem has **NOT been investigated** before. In GANs, the mode collapse problem that causes a lack of diversity has a different essence from the diversity suppression problem. The problem can be observed in Figure 2 (main paper) and Figures 5, 6, and 7 (Appendix). => **novel** [**Nov2**]: Quantifies the problem of the **adversarial effect** caused by the classification gradient, as observed in Figure 3 (main paper) and Figures 8, 11, 12, 13, and 14 (Appendix). As far as we know, this problem has **NOT been investigated** before. Although the authors of [1] mention the adversarial effect of classifier guidance, this problem has never been justified or solved. => **novel** [**Nov3**]: Develops an intuitive approach addressing both [**Nov1**] and [**Nov2**] simultaneously. First, it mitigates adversarial effects by utilizing gradients from diverse classes, minimizing the noise associated with any one class, as exemplified in Figure 3. Second, it incorporates information from other classes to prevent diversity suppression and overemphasis on a single class, as illustrated in Figure 2. Lastly, it introduces progressive guidance to strengthen the primary condition toward the completion of sampling. Due to its unique philosophy and absence in prior literature, our proposal stands as a **novel** technical contribution. ### Question 2: The significance of the proposed method compared to vanilla classifier guidance.
In the main paper and Appendix, we offer **5 significant improvements** compared to vanilla guidance: [**SIG1**] *Quantitative diversity* improvement: Figure 4 (up to **45% improvement in FID** and **35% in Recall**); the right figure in Table 4 (up to **22% in FID** and **22.5% in Recall**); Figure 9 (up to **28.5% in FID** and **40% in Recall**); and Figure 10 (up to **50% in FID** and **37.5% in Recall**) in the Appendix. Since the gap is large, the improvement is **significant**. [**SIG2**] *Quantitative robust-feature* improvement, as detailed in Tables 2 and 4. The improvement is clear (around **6% in Table 2** and **3% in Table 4**), so it is **significant**. [**SIG3**] *Qualitative diversity* improvement, as detailed in Figures 2, 5 (right), 6 (right), and 7 (right). The improvement can be observed by the human eye, so it should be **significant**. [**SIG4**] *Qualitative robust-feature* improvement, as detailed in Figure 3 (right) and Figures 8, 11, 12, 13, and 14. The improvement can be observed by the human eye, so we believe it is **significant**. [**SIG5**] *Quantitative improvement on standard generative metrics* such as FID, sFID, and IS. The improvement is shown as percentages in Tables R8 and R9.
| | FID imp. | sFID imp. | IS imp. |
|------|--|----|-- |
| ImageNet 64x64 | 19.37%/ 20% / 7% | 30.5% / 4% / 6.27% | 0% / 12.54% / 0% |
| ImageNet 128x128 | 7% | 0.1% | 11% |
| ImageNet 256x256 | 1% / 6% | 2.5% /18.5% | 2.5% / 18.5%|
| CIFAR-10| 1.4%| 0.4% | -1.35%|

Table R8: (Table 1 in the main paper) Except for CIFAR-10, we achieved significant improvement.
| | FID imp.| sFID imp.| IS imp.|
|----|--|--|--|
| ImageNet 64x64 | 2.5%| 4.30% | 6.42% |
| ImageNet 128x128 | 4.3% | 0% | 9.40% |
| ImageNet 256x256 | 3% / 0.88%| 0%| 8.77% / 1.46%|

Table R9: (Table 3 in the main paper) The improvement is clear on FID and IS on all datasets, and on sFID on ImageNet 64x64.
In Table R8, Table R9, [**SIG1**], [**SIG2**], [**SIG3**], [**SIG4**], significant improvements are evident, except for CIFAR-10. The limited progress on CIFAR-10 is due to its classes having little shared information, reducing ProG's impact. ### Question 3.1: Extend ProG to Text-to-Image problem: We have successfully extended our proposed ProG to improve guidance for Text-to-Image. The experimental settings and results are in the joint rebuttal to all reviewers. ### Question 3.2: Text-to-image (Text2Img) condition vs. class condition (ClsCon) The reviewer mentions a shift from ClsCon to Text2Img, but recent works in both years (2022 and 2023) indicate continued interest in ClsCon [2-9]. Both ClsCon and Text2Img are crucial in generative models, and it's unjust to prioritize one over the other. ### Question 3.3: Classifier guidance vs. Classifier-free guidance The reviewer mentions the limited application of classifier guidance compared to classifier-free guidance. However, classifier guidance offers more flexibility: * Training: Classifier-free needs complete retraining of diffusion models for new conditions, while classifier guidance only requires classifier updates. * Sampling: Classifier guidance works with unconditional or conditional diffusion, unlike classifier-free, which requires both. * Computational cost: Classifier-free is computationally expensive (Table R6). * Extendability: Both can extend to various conditions, e.g., Text to image. | Model| Sampling cost (GPU hours) | |:--:|:----:| | Diffusion | 236 | | Vanilla guidance | 341 | | ProG guidance | 341| | Classifier-free guidance | 487| Table R6: Computational cost to generate 50000 images with 256x256 resolution. We summarize the features of each guidance method in Table R7. 
| | **Training flex.** | **Sampling flex.** | **Low cost** | **Robustness** | **Diversity** | **Extendibility** | |--|:-----:|:--:|:--:|:---:|:--:|:----:| | Vanilla guidance | yes | yes | yes | no | no | yes | | Classifier-free guidance | no | no | no | yes | no | yes| | ProG | yes | yes | yes | yes | yes | yes| Table R7: As we can see, the main reason for the popularity of classifier-free guidance is its robust features. However, ProG can combine all the advantages of Vanilla and Classifier-free guidance in one unified scheme. --- Rebuttal Comment 1.1: Title: Looking forward to the response from the Reviewer vxij Comment: Dear Reviewer vxij, Thanks for your valuable time in commenting on our work. We have analyzed your comments thoroughly and carefully and answered in detail with experimental backup/evidence for what we have claimed: 1. The novelties of our work. Since the observations on **diversity suppression** and **adversarial effects** are not available in any previous research, we consider them novel. Furthermore, the **proposed ProG** is carefully designed to solve the problem based on the observations, which never appeared in any previous work before, and should also be a novel technique. 2. We provided evidence that our proposed ProG is significantly better than vanilla classifier guidance in 5 ways. We hope the evidence we provide solves your concern about the significance of our proposed method. 3. We extended our ProG to improve Text-to-Image guidance which is the last main concern of the Reviewer. We hope that from the improvement in results, the Reviewer can reconsider the contributions of our works. We hope that from the evidence/experimental results we provide, the Reviewer can reconsider our work's novelty, significance, and application for the research community. If the Reviewer still has any concerns, please do not hesitate to let us know. We are happy to address them. 
Best regards, Author #3926 --- Rebuttal 2: Comment: Dear Reviewer vxij, We have tried our best to address all the concerns and provided as much evidence as possible. May we know if our rebuttals answer all your questions? Best regards, Author #3926 Title: Looking forward to the response from Reviewer vxij --- Rebuttal Comment 2.1: Comment: I appreciate your thorough response. The general feedback provided and the subsequent response from the authors have satisfactorily addressed some of my concerns, particularly regarding the novelty aspects. Nevertheless, I still believe there are certain points that require further clarification from the authors. (1) **Extending ProG to Text-to-Image Problem:** In the general feedback, the authors discuss the potential extension of ProG for guiding text-to-image synthesis. However, I've expressed my concerns into three points. Firstly, the FID values presented in Tables R1 and R2 do not align with those in the GLIDE paper (please see Figure 6 and Table 2 of the GLIDE paper). Notably, the FID score of ProG trails behind that of Classifier-Free Guidance (CFG; 256px FID = 12.89). Moreover, unlike CFG, the implementation of ProG necessitates pre-trained noise-aware classifiers, a constraint that restricts its applicability. This limitation is likely why the authors evaluated their method using GLIDE rather than Stable Diffusion. Lastly, as highlighted by Kynkäänniemi et al. [R1], using a classifier can introduce information leakage to FID, thereby leading to improper comparisons. Consequently, I believe that the authors should conduct a user study to thoroughly validate whether ProG genuinely enhances image quality and diversity more effectively than CFG. 
(2) **Additionally, I cautiously suggest that the authors should consider refining the arguments below:** **Training: Classifier-free needs complete retraining of diffusion models for new conditions, while classifier guidance only requires classifier updates.** => Conversely, the classifier-guided approach necessitates training an expensive noise-aware classifier. Particularly for text-to-image tasks, training such a classifier could potentially complicate the entire diffusion pipeline. **Sampling: Classifier guidance works with unconditional or conditional diffusion, unlike classifier-free, which requires both.** => Could the model trained using classifier-free guidance be employed for both unconditional and conditional image synthesis? **Computational cost: Classifier-free is computationally expensive (Table R6).** => I suspect the authors have overlooked the computational expenses incurred by ChatGPT. If the authors were to factor in the ChatGPT-related costs, would the proposed approach still maintain its computational efficiency in comparison to other methods? **Extendability: Both can extend to various conditions, e.g., Text to image.** => I'm uncertain whether ProG is applicable to text-to-image synthesis. Due to these considerations, I have modestly increased my score by one point. However, I am not yet prepared to advocate for the acceptance of this paper. [R1] Kynkäänniemi, T., Karras, T., Aittala, M., Aila, T., & Lehtinen, J. (2022). The Role of ImageNet Classes in Fr\'echet Inception Distance. arXiv preprint arXiv:2203.06026. --- Reply to Comment 2.1.1: Comment: Dear Reviewer vxij, Thank you for your reply. We address your two main concerns below: ### Concern 1: Extending ProG to Text-to-Image Problems > 1. "The Zero-shot FID values presented in Tables R1 and R2 do not align with those in the GLIDE paper." >> **Answer**: The main reason is that the authors of the GLIDE paper *do not release the full pretrained model* due to their **privacy concerns**. 
As a result, they only release a reduced version of the pretrained model for verification; hence, the quality is limited. This is stated in the GLIDE paper's abstract and Section E (Appendix). > 2. "unlike CFG, the implementation of ProG necessitates pre-trained noise-aware classifiers, a constraint that restricts its applicability. This limitation is likely why the authors evaluated their method using GLIDE rather than Stable Diffusion." >> **Answer**: We respectfully disagree with this point of view. Once the noise-aware classifier is trained, it can be applied to any diffusion model with the same latent-space size. The main reason the noise-aware CLIP cannot be applied to Stable Diffusion is that the CLIP's latent size is larger than Stable Diffusion's latent size; this does not mean a noise-aware classifier/CLIP cannot work with other diffusion models. In our reply to Reviewer cJ7t, we show a case where the noise-aware CLIP from the GLIDE paper [11] is applied to the ADM diffusion model from [10] without any difficulty and achieves some improvement. On the other hand, a diffusion model trained for classifier-free guidance is tied to that model and cannot work with any other diffusion model. For example, Stable Diffusion cannot work with DiT [5] even though they share the exact same latent-diffusion mechanism. > 3. "as highlighted by Kynkäänniemi et al. [R1], using a classifier can introduce information leakage to FID, thereby leading to improper comparisons". >> **Answer**: The diffusion model and noise-aware CLIP (the equivalent of a classifier) in GLIDE do not use any classification or label information, especially classification information from ImageNet. Eq. (r1) shows the sampling equation, in which we utilize the matching between the image embedding and the text embedding without any classification gradient through the input. As a result, the results are not affected by information leakage to FID.
### Concern 2: Clarify the argument: > 1. "The classifier-guided approach necessitates training an expensive noise-aware classifier. Particularly for text-to-image tasks, training such a classifier could potentially complicate the entire diffusion pipeline." >> **Answer**: >>>* Training a noise-aware classifier is similar to training an ordinary classifier; the only difference is an augmentation step in which we add Gaussian noise to the image before feeding it to the classifier. As a result, this training process is much cheaper than training a diffusion model, because training a discriminative task is generally cheaper than training a generative task. On the other hand, training a model for classifier-free guidance must go together with training the generative model, which is extremely expensive and unnecessary whenever a new condition is introduced, e.g., more text or more classes. >>>* Furthermore, the training of the noise-aware classifier is separate from the training of the diffusion model, so it cannot complicate the diffusion pipeline. > 2. "Could the model trained using classifier-free guidance be employed for both unconditional and conditional image synthesis?" >> **Answer**: Classifier-free guidance can only be applied when the diffusion model is trained to incorporate both the conditional information and the null condition. In contrast, classifier guidance imposes no restrictions on the diffusion model: it can be applied to any diffusion model, whether trained conditionally, unconditionally, or with a combination of condition and null condition. The scenario in which classifier-free guidance cannot be applied is not rare in practice: what if the conditions are not available before training the diffusion model? In that case, it is impossible to train the diffusion model to combine conditional information with the null condition for classifier-free guidance later. > 3. "I suspect the authors have overlooked the computational expenses incurred by ChatGPT."
>> **Answer**: The ChatGPT-related cost is a pre-processing cost that we incur only once per set of labels. It is similar to the cost of **data collection**; we do not count data-collection time as a sampling/training cost. On the other hand, the expensive cost of classifier-free guidance comes from forwarding through the diffusion model twice at each timestep. > 4. Extendability: >> **Answer**: We have already answered this in Concern 1, and Tables R1 and R2 show that ProG can be extended to text-to-image guidance. May we know if we have solved your concern? Best regards,
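For concreteness, the two guidance rules being compared in this thread can be written side by side. These are the standard formulations from the literature, not the authors' code; `scale` and `w` denote the guidance strengths:

```python
def classifier_guided_mean(mu, sigma2, grad_log_py, scale):
    """Classifier guidance: shift the denoising mean by the scaled
    classifier gradient (cf. eq. (r1)); one diffusion forward per step."""
    return mu + scale * sigma2 * grad_log_py

def classifier_free_eps(eps_cond, eps_uncond, w):
    """Classifier-free guidance: extrapolate the conditional vs. the
    unconditional noise prediction; this needs two diffusion forwards
    per step, consistent with the roughly doubled sampling cost for
    classifier-free guidance in Table R6."""
    return (1.0 + w) * eps_cond - w * eps_uncond
```

The extra forward pass needed to produce `eps_uncond` is what makes classifier-free guidance the most expensive row of Table R6, while the classifier (or CLIP) gradient in the first rule is comparatively cheap.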
Summary: To tackle the generative issues of low diversity and artifacts in classifier guidance for conditional diffusion generation, this paper proposes an entropy view for calculating the conditional score gradient. The proposed method makes two modifications to classifier guidance: 1) replacing one-hot class labels with soft labels based on class similarity, and 2) progressive score weights for different time steps. The proposed method achieves better results than the baseline method and the classifier-free guidance method. Strengths: 1. The proposed method is simple and straightforward yet effective in achieving better generative results. 2. The entropy view that the proposed method suggests for reviewing classifier guidance is interesting. In a real dataset, we should consider the true conditional distribution of the label given the image, which is neglected by the vanilla method. Weaknesses: 1. Although the proposed method is simple and straightforward, its flexibility would be restricted for different modalities of conditions (such as text and segmentation maps), which can be easily addressed via classifier-free guidance. Is there any solution for classifier guidance with flexible conditional labels (different modalities)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This paper does not discuss the limitations. However, the proposed method might not be general for conditional generation with diverse modalities of conditional labels (e.g., text, segmentation map).
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Question 1: Extend the proposed method into Text-to-image guidance. GLIDE [11] is a straightforward extension of classifier guidance to text-to-image guidance. Its sampling equation is: $$x_{t-1} = \mu_t + \sigma_t \mathbf{z} + s \sigma_t^2 \nabla_{x_t} \big(f(x_t) \cdot g(c)\big) \quad \quad (r1)$$ where $f(x_t)$ is the image embedding vector and $g(c)$ is the text (description) embedding vector. Equation (r1) is very similar to equation (3) in our main paper; the only difference is that the gradient term comes from the similarity between two embedding vectors instead of the classification gradient. We add new experiments applying ProG to GLIDE in equation (r1) under two scenarios: 1. Given one caption, we utilize a random set of 1000, 5000, or 10000 captions to act as relevant information during sampling. We have: $$g(c) = \sum_{i=0}^{N+1} s_i\, g(c_i) \quad \quad (r2)$$ where $i = 0$ is the index of the primary caption and $i \neq 0$ indexes the other captions. The initial values of $s_i$ are set as: $$s_i = \frac{g(c_0) \cdot g(c_i)}{\sum_{j=0}^{N+1} g(c_0) \cdot g(c_j)} \quad \quad (r3)$$ The value of $s_i$ is progressively updated during sampling as in Section 3.2 of the main paper. This scheme is named **GLIDE-ProG**. 2. Given one caption, we use four other captions that have the same meaning as the original caption but different wording. Since the four other captions all have the same meaning, we set the $s_i$ values as: $$s_i = \begin{cases} a, & \text{if } i = 0 \\ \frac{1-a}{4}, & \text{otherwise} \end{cases}$$ where $0 \leq a \leq 1$ is a hyperparameter; we use $a = 0.3$. This method is named **GLIDE-ProGsim**. We set up an evaluation like GLIDE [11] to evaluate zero-shot FID on MS-COCO. Note: the 4 additional equivalent captions for GLIDE-ProGsim are taken from the set of captions available for each image in MS-COCO.
1k, 5k, and 10k captions are randomly sampled from the MS-COCO training set. Table R1 and Table R2 show the evaluation results:
| | zero-shot FID | computational cost (GPU hours)|
|:----:|:------:|--------|
| GLIDE | 24.80 | 34.27 |
| GLIDE-ProG w N=1k | 23.47| 34.66 |
| GLIDE-ProG w N=5k |23.50| 34.83 |
| GLIDE-ProG w N=10k| **23.31**|34.83|
| GLIDE-ProGsim | 23.87 |34.84 |

Table R1: MS-COCO 64x64 zero-shot FID evaluation, where 30000 captions are sampled from the MS-COCO validation set.
| | zero-shot FID | computational cost (GPU hours) |
|:----------------:|:-------------:|--------------------|
| GLIDE | 34.80 | 38.45 |
| GLIDE-ProG w N=1k | 32.55 | 45.50 |
| GLIDE-ProG w N=5k | 32.37 | 45.80 |
| GLIDE-ProG w N=10k| 32.28 | 46.10 |
| GLIDE-ProGsim | **31.91** | 46.23 |

Table R2: MS-COCO 256x256 zero-shot FID evaluation, where 30000 captions are sampled from the MS-COCO validation set. **Conclusion**: Tables R1 and R2 show that the ProG scheme significantly improves text-to-image guidance in both scenarios at low additional computational cost. When many captions are available, the first scenario can be used to improve the generated images; otherwise, the second scenario is also very easy to implement, and the additional captions can be gathered from Large Language Models (LLMs). ### Question 2: The flexibility of classifier guidance. Like classifier-free guidance, classifier guidance can extend to different modalities. Beyond that, classifier guidance also has several advantages: * Training: Classifier-free guidance needs complete retraining of diffusion models for new conditions, while classifier guidance only requires classifier updates. * Sampling: Classifier guidance works with unconditional or conditional diffusion, unlike classifier-free guidance, which requires both. * Computational cost: Classifier-free guidance is computationally expensive (Table R6). * Extendability: Both support or can extend to various conditions, e.g., text-to-image.
| Model| Sampling cost (GPU hours) |
|:--:|:----:|
| Diffusion | 236 |
| Vanilla guidance | 341 |
| ProG guidance | 341|
| Classifier-free guidance | 487|

Table R6: Computational cost to generate 50000 images at 256x256 resolution. We summarize the features of each guidance method in Table R7.
| | **Training flex.** | **Sampling flex.** | **Low cost** | **Robustness** | **Diversity** | **Extendibility** |
|--|:-----:|:--:|:--:|:---:|:--:|:----:|
| Vanilla guidance | yes | yes | yes | no | no | yes |
| Classifier-free guidance | no | no | no | yes | no | yes|
| ProG | yes | yes | yes | yes | yes | yes|

Table R7: As we can see, the main reason for the popularity of classifier-free guidance is its robust features. However, ProG combines all the advantages of vanilla and classifier-free guidance in one unified scheme. --- Rebuttal Comment 1.1: Comment: Hi, Thanks for the reply, my question is addressed. --- Reply to Comment 1.1.1: Comment: Dear Reviewer xoqA, Thank you very much for your comments and strong support for us. Best regards, Author #3926 --- Rebuttal 2: Title: Looking forward to the response from the Reviewer xoqA Comment: Dear Reviewer xoqA, Thanks for your valuable and thoughtful comments about our work. In the rebuttal, we solved the only problem the Reviewer was concerned about: using classifier guidance and our proposed ProG with different conditional labels. We have shown that classifier guidance/ProG can easily be extended to Text-to-Image guidance by replacing the classifier with a CLIP model. Furthermore, we generalized the Reviewer's concern by showing the flexibility of classifier guidance compared to classifier-free guidance. With this, we hope that the concern about the flexibility of our method is resolved thoroughly. We hope that our rebuttals can gain strong support from the Reviewer. If the Reviewer still has any other concerns, we look forward to answering them.
Best regards, Author #3926 --- Rebuttal 3: Title: Looking forward to the response from the Reviewer xoqA Comment: Dear Reviewer xoqA, Thank you for your valuable comments on our work. We sincerely hope that our rebuttals have addressed all your concerns. Kindly let us know if you have any other concerns, and we will do our best to address them. Best regards, --- Rebuttal 4: Title: Please take a look at authors' responses, Thanks! Comment: Dear Reviewer, please take a look at authors' responses and other reviewers' comments, Thanks!
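Returning to the GLIDE-ProG scheme defined in equations (r2) and (r3) of the rebuttal above, the similarity-weighted caption mixing can be sketched as follows. This is illustrative only (names are our own): rows of `emb` are the caption embeddings $g(c_i)$, with row 0 being the primary caption:

```python
import numpy as np

def mix_caption_embeddings(emb):
    """Sketch of GLIDE-ProG's eqs. (r2)-(r3): weight each caption
    embedding g(c_i) by its similarity to the primary caption g(c_0),
    normalize the weights, and return the mixture g(c)."""
    sims = emb @ emb[0]            # g(c_0) . g(c_i) for every caption i
    s = sims / sims.sum()          # eq. (r3): initial information degree
    return s @ emb                 # eq. (r2): similarity-weighted mixture
```

In GLIDE-ProG the weights `s` are then progressively updated during sampling as in Section 3.2 of the main paper; GLIDE-ProGsim instead fixes them to $a$ for the primary caption and $(1-a)/4$ for each paraphrase.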
Summary: This paper proposes to inject the gradient of other classes to improve the diversity of conditional sampling from diffusion models. Strengths: 1. The general idea is simple: the gradient of the classifier tends to use the most discriminative feature and thus hurts performance, so the gradients of other classes are used in early phases to improve diversity. 2. The paper also provides some entropy arguments, which partly justify the method. 3. Extensive experiments are conducted and the performance gain is clear. Weaknesses: 1. Missing details. How many steps are used for eq. 4? "In the later sampling stage, we progressively enhance gradients to refine the details in the image toward the primary condition" Do you just use classifier guidance after some steps? 2. The authors use CLIP embeddings to compute the similarity between different classes. Can we use the CLIP gradient to guide the generation directly (inserting the gradient of the CLIP similarity between the generated image and the target class, say "dog") in DDIM steps? Would that also improve diversity, given that CLIP has seen many different dog images? Some comparisons are needed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: as above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: as above Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Question 1: Elaborating on details 1. For equation 4, the number of timesteps is 1000. However, similar to [10], we respace to 250 timesteps (skipping four steps each time). All other hyperparameters are detailed in Table 10 (Appendix). 2. "*Do you just use classifier guidance after some steps?*" -> No, we use classifier guidance from the start to the end. 3. "*In the later sampling stage, we progressively enhance gradients to refine the details in the image toward the primary condition.*" We mean that at the start, the gradients from the other classes are noisy. Over the iterations, the gradients from the other classes are reduced and the gradient from the primary class is emphasized (given more weight). ### Question 2: Use CLIP embedding for guidance This is an exciting idea, since it allows using off-the-shelf information to improve the images generated by diffusion models. We set up the experiments as below: 1. Use the pretrained CLIP from GLIDE [11] (an extension of classifier guidance to text-to-image guidance). 2. Use the pretrained diffusion model (ADM) from [10] as the model to generate ImageNet images. This method is named CLIP guidance. We evaluate on ImageNet 64x64. The results are shown in Table R5:
| | FID | sFID | Recall |
|:----------------:|:---:|------|--------|
| Vanilla guidance | 6.40 | 9.67 | 0.54 |
| ProG guidance | **5.16** | **6.72** | 0.56 |
| CLIP guidance | 8.18 | 9.4 | **0.59** |
| Diffusion w/o guidance| 9.95| 6.58| 0.65|

Table R5: ImageNet 64x64. Comparison between vanilla guidance, ProG guidance, and CLIP guidance. The results show that ProG achieves the best FID/sFID among the guidance schemes. However, the Recall value of CLIP guidance is the highest. This indicates that ProG provides diversity within the original data distribution, while CLIP guidance may achieve greater variety outside of the data distribution.
It must be noted that the FID obtained from CLIP guidance is better than that of the original diffusion model without guidance. This is concrete evidence that guidance from off-the-shelf information works. However, many more experiments might be needed to uncover the fundamental issues involved. We like the idea and would be interested in developing it further in future work. --- Rebuttal Comment 1.1: Title: Looking forward to the response from Reviewer cJ7t Comment: Dear Reviewer cJ7t, Thank you very much for your insightful comments and the interesting suggested ideas. We have addressed your primary concerns about the missing description details in our manuscript. Besides, we are also very grateful to receive such an exciting idea from the reviewer. We obtained some preliminary results which can help open a new door to solving the problem of diversity in the future. If you have further comments regarding the work, we are happy to address them. We hope to gain your strong support for our work. Best regards, Authors #3926 --- Rebuttal 2: Title: Looking forward to the response from Reviewer cJ7t Comment: Dear Reviewer cJ7t, Thank you for your valuable comments on our work. We sincerely hope that our rebuttals have addressed all your concerns. Kindly let us know if you have any other concerns, and we will do our best to address them. Best regards, --- Rebuttal Comment 2.1: Comment: Thanks for your response. I read the reviews from other reviewers and the responses. I share the same novelty concern with other reviewers. But the method is simple and works well on different datasets. So, I will keep my score. --- Reply to Comment 2.1.1: Title: Thank you for your reply; we have in fact addressed all the novelty concerns raised by others. Comment: Dear Reviewer cJ7t, Thanks for acknowledging the strengths of our paper, namely that it is effective, simple, and easy to implement. This is also well-recognized by Reviewers zai6, zHLG, xoqA, and mBBP.
Regarding the novelty concerns, only two reviewers raised them, and we have addressed their concerns thoroughly:
>* For **Reviewer vxij**:
>> * We don't know the reasons behind Reviewer vxij's novelty concerns since no reasons are provided in the comment. However, we address this comprehensively in Question 1 of our rebuttal to vxij, highlighting evidence for three key novelties: **diversity suppression**, **adversarial effects**, and the **ProG method**.
>> * The **diversity suppression** and **adversarial effect** novelties are recognized by **Reviewer mBBP** through the comment:
>>> "This paper has clear and well-established motivations, working on two important problems in classifier guidance encountered by the community, the adversarial effect and diversity suppression. This can be a good contribution to the community."
>> * **Reviewer zHLG** recognizes the **novelty of the proposed method** through the comment:
>>> "This work proposes a novel way to improve the classifier guidance method."
>> * You and the other reviewers all agree that a strength of the proposed method is that it is simple and effective.
>* For **Reviewer zai6**: The main novelty concern is the lack of novel insights in the main paper. We realized that most of the concerns could be resolved by the Appendix file (we could not put everything in the main paper due to page limits). We have addressed this concern thoroughly:
>> * First, in our Appendix, we have provided at least **7 insights**, which match the Reviewer's suggestion to make observations on **subsets of the dataset** instead of at the dataset level. The details are in the answer to Question 3 of Reviewer zai6's rebuttal.
>> * Second, we investigated the sensitivity of $\gamma$ with respect to the FID score in Table 5 of our submission. This also matches the Reviewer's suggestion about the **sensitivity of the method**.
>> * Lastly, we included the correlation table between $\gamma$ and the classifier's accuracy in Table R3 in Question 1, answering the concern about the **sensitivity of the method with respect to classifier performance**. May we know if we have resolved your novelty concern? If not, could you help point out why you think our work is not novel? Your insights would not only assist us in addressing your concerns more effectively but would also contribute to the enhancement of our future work. Best regards, Author #3926
Rebuttal 1: Rebuttal: We extend our sincere gratitude to the esteemed reviewers for their insightful and constructive feedback, which has significantly contributed to the enhancement of our manuscript. In this joint reply, we address recurring inquiries raised by multiple reviewers and provide references for the other rebuttals, thereby conserving space while giving comprehensive responses. *Note: All references to Tables/Figures/Equations that appear inside the rebuttal carry an "r" or "R" before the number to distinguish them from the numbering in the main paper.* ### Gen. Question 1: Extend our proposed ProG for text-to-image guidance In GLIDE, [11] proposed to extend classifier guidance to text-to-image guidance. The sampling equation for GLIDE is shown below: $$x_{t-1} = \mu_t + \sigma_t \mathbf{z} + s \sigma_t^2 \nabla_{x_t} (f(x_t) \cdot g(c)) \quad\quad_{(r1)}$$ where $f(x_t)$ is the image embedding vector and $g(c)$ is the text (description) embedding vector. Equation (r1) is mostly similar to equation (3) in our main paper; the only difference is that the gradient term results from the similarity between two embedding vectors instead of the classification gradient as in the main paper. Our proposed ProG is applied to GLIDE in equation (r1) following two scenarios: 1. Given one caption, we utilize a random set of 1000, 5000, or 10000 captions to act as relevant information during sampling. We have: $$g(c) = \sum_{i=0}^{N+1} s_i g(c_i)\quad\quad_{(r2)}$$ where $i = 0$ is the index of the primary caption, and $i \neq 0$ represents the other captions. The initial values of $s_i$ are set as: $$s_i = \frac{g(c_0) \cdot g(c_i)}{\sum^{N+1}_{j=0} g(c_0) \cdot g(c_j)} \quad \quad _{(r3)}$$ The value of $s_i$ is progressively updated during sampling as in Section 3.2 of the main paper. This scheme is named **GLIDE-ProG**. 2. Given one caption, we use four other captions that have the same meaning as the original caption but different words.
Since the four other captions all have the same meaning, we use a different strategy to set the $s_i$ values: $$s_i = \begin{cases} a, & \text{if } i = 0 \\ \frac{1-a}{4}, & \text{otherwise} \end{cases}$$ where $0 \leq a \leq 1$ is a hyperparameter; we try $a=0.3$. This method is named **GLIDE-ProGsim**. We set up an evaluation like GLIDE [11] to evaluate zero-shot FID on MS-COCO. Note: the 4 additional equivalent captions for GLIDE-ProGsim are taken from the set of captions available for each image in MS-COCO; the 1k, 5k, and 10k captions are randomly sampled from the MS-COCO training set. Table R1 and Table R2 show the evaluation results:

| | zero-shot FID | computational cost (GPU hours) |
|:----:|:------:|--------|
| GLIDE | 24.80 | 34.27 |
| GLIDE-ProG w N=1k | 23.47 | 34.66 |
| GLIDE-ProG w N=5k | 23.50 | 34.83 |
| GLIDE-ProG w N=10k | **23.31** | 34.83 |
| GLIDE-ProGsim | 23.87 | 34.84 |

Table R1: MS-COCO 64x64 zero-shot FID evaluation, where 30000 captions are sampled from the MS-COCO validation set.

| | zero-shot FID | computational cost (GPU hours) |
|:---:|:--:|---|
| GLIDE | 34.80 | 38.45 |
| GLIDE-ProG w N=1k | 32.55 | 45.50 |
| GLIDE-ProG w N=5k | 32.37 | 45.80 |
| GLIDE-ProG w N=10k | 32.28 | 46.10 |
| GLIDE-ProGsim | **31.91** | 46.23 |

Table R2: MS-COCO 256x256 zero-shot FID evaluation, where 30000 captions are sampled from the MS-COCO validation set.

**Conclusion**: From Table R1 and Table R2, the ProG scheme significantly improves the performance of text-to-image guidance in different scenarios at low additional computational cost. When many captions are available, we can use the first scenario to improve the generated images. Otherwise, the second scenario is also very easy to implement; the additional captions can be gathered from Large Language Models (LLMs). **Reference**: [1] Ho, J., & Salimans, T. (2022). Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598. [2] Gao, S., Zhou, P., Cheng, M. M., & Yan, S. (2023).
A masked diffusion transformer is a strong image synthesizer. arXiv preprint arXiv:2303.14389. [3] Kim, D., Kim, Y., Kwon, S. J., Kang, W., & Moon, I. (2023). Refining generative process with discriminator guidance in score-based diffusion models. Proceedings of the 40th International Conference on Machine Learning. [4] Hang, T., Gu, S., Li, C., Bao, J., Chen, D., Hu, H., ... & Guo, B. (2023). Efficient diffusion training via min-SNR weighting strategy. arXiv preprint arXiv:2303.09556. (ICCV 2023) [5] Peebles, W., & Xie, S. (2022). Scalable diffusion models with transformers. arXiv preprint arXiv:2212.09748. [6] Sauer, A., Schwarz, K., & Geiger, A. (2022, July). StyleGAN-XL: Scaling StyleGAN to large, diverse datasets. In ACM SIGGRAPH 2022 conference proceedings (pp. 1-10). [7] Singh, R., Shukla, A., & Turaga, P. (2023). Polynomial implicit neural representations for large diverse datasets. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2041-2051). [8] Ganz, R., & Elad, M. (2023). BIGRoC: Boosting image generation via a robust classifier. Transactions on Machine Learning Research. [9] Dinh, A., Liu, D., & Xu, C. (2023). PixelAsParam: A gradient view on diffusion sampling with guidance. Proceedings of the 40th International Conference on Machine Learning. [10] Dhariwal, P., & Nichol, A. (2021). Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 34, 8780-8794. [11] Nichol, A. Q., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., & Chen, M. (2022). GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. Proceedings of the 39th International Conference on Machine Learning. Pdf: /pdf/16f78ddc13f291559d879906399e55b1dd5b081c.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This work points out that diffusion models with classifier guidance only focus on the given category and ignore the other relevant category information. Thus, this work proposes the Progressive Guidance (PG) method to address two problems, i.e., lack of diversity and the adversarial effect (samples having high scores but poor visual quality). The proposed method uses progressive gradients along the class dimension and the diffusion temporal dimension to change the gradient of the classifier guidance on a single condition. In terms of the class dimension, PG allows gradients from other class information related to a given class to assist the conditional generation. In terms of the diffusion temporal dimension, the weight of the gradient also changes over time. The experimental results show that PG improves image quality, sample diversity, and robustness compared with the competing methods. Strengths: 1. The paper is well-organized and easy to follow. 2. This work proposes a novel way to improve the classifier guidance method. 3. The proposed analyses of diversity suppression and non-robust feature construction show more desirable robustness than the commonly used baseline method. 4. The proposed method can be combined with powerful backbone networks to achieve favorable performance. Weaknesses: 1. Though the theoretical analyses are convincing, the experimental results show that the proposed method sometimes underperforms classifier-free guidance. The authors should explain this point to verify the effectiveness of their proposed method. 2. In Section 3, the analyses could be more convincing if more evidence were provided. In addition, the presentation of Sections 3.1 and 3.2 should be improved to make them clear. 3. The computational costs should be clarified, considering that diffusion models are usually expensive when producing high-resolution images. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see my comments above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Please see my comments above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Question 1: Clarify the performance of classifier guidance in some cases. Sometimes, the proposed classifier guidance performs worse than classifier-free guidance, such as on FID/sFID and the Robustness score on ImageNet256x256. However, the two methods can be considered comparable in this case for several reasons: * The gap in FID/sFID between the two methods is not significant (3.84 vs. 3.76, ~0.08). * The gap in the Robustness score between the two methods is not significant (86.60 vs. 87.14, ~0.54%). * ProG has a much **better level of diversity** than classifier-free guidance, as shown in Figure 16 (right) in the Appendix. FID and Recall values are much higher than those of classifier-free guidance when $w$ is large. * ProG has a much **lower computational cost** than classifier-free guidance, as shown in Table 8 (Appendix). Classifier guidance needs only 341 GPU hours to generate 50000 images, compared to 487 for classifier-free guidance. Besides the comparable performance of the two methods, classifier guidance has several advantages over classifier-free guidance in terms of application: 1. **Training flexibility**: Classifier-free guidance uses the information from the conditional diffusion model. As a result, when the condition is modified or a new condition becomes available, there is no low-cost solution other than retraining the whole expensive diffusion model. On the other hand, since classifier guidance uses a separate classifier, an update to the condition requires updating only the classifier, without retraining the expensive diffusion model. 2. **Sampling flexibility**: We can apply the classifier-free technique only when we have both unconditional and conditional diffusion models at the same time (separate or joint). However, classifier guidance can be used given solely an unconditional diffusion model, solely a conditional diffusion model, or both.
### Question 2: The analyses in Section 3 should be improved, and more evidence should be provided. Due to the page limit, most of the evidence for Section 3 was put into the Appendix. We will move this evidence back into the main paper for a better reading experience. In detail, we have: For **diversity suppression**, evidence on many dog breeds in ImageNet: + [**EVD1**] The problem of front-face features collapsing under vanilla classifier guidance in Figure 5 (Brittany Spaniel), Figure 6 (English Springer), and Figure 7 (Welsh Springer Spaniel). + [**EVD2**] The problem of all images collapsing into a single pose in Figure 5 (Brittany Spaniel). + [**EVD3**] The problem of the green-grass background in vanilla guidance for some types of dog in Figure 7 (Welsh Springer Spaniel). For **non-robust feature construction**, evidence mainly on several dog breeds and the leopard class of ImageNet: + [**EVD4**] The problem of losing the background in vanilla guidance in Figure 8. + [**EVD5**] The problem of non-robust feature construction in vanilla guidance in Figure 11 (ImageNet64x64) and Figures 12, 13, and 14 (ImageNet256x256). ### Question 3: The computational cost We have reported the computational cost in Table 8 (Appendix). To clarify it, we will revise Table 8 into the table below, which details the diffusion computational cost and the vanilla guidance cost.

| Model | Computational cost (GPU hours) |
|:------------------------:|:------------------:|
| Diffusion | 236 |
| Vanilla guidance | 341 |
| ProG guidance | 341 |
| Classifier-free guidance | 487 |

Table R4: Computational cost to generate 50000 images at 256x256 resolution. --- Rebuttal Comment 1.1: Title: Looking forward to the response from Reviewer zHLG Comment: Dear Reviewer zHLG, Thank you for your thoughtful and insightful comments on our work. We believe that these comments help us to strengthen our submission. In the rebuttal, we have addressed all three of your concerns: 1.
Clarified the performance of our method relative to classifier-free guidance. 2. Provided more information for Section 3. Sections 3.1 and 3.2 will be revised in our final manuscript. 3. Provided the computational cost comparison between the guidance methods. If you still have further requests or concerns, please do not hesitate to let us know. We will address them with the utmost attention. Best regards, Author #3926 --- Reply to Comment 1.1.1: Title: Looking forward to the response from Reviewer zHLG Comment: Dear Reviewer zHLG, Thank you for your valuable comments on our work. We sincerely hope that our rebuttals have addressed all your concerns. Kindly let us know if you have any other concerns, and we will do our best to address them. Best regards,
Summary: In this work the authors address challenges of classifier-guided diffusion models and propose progressive guidance, in which, during the sampling/reverse diffusion process, the initial iterations receive classifier gradients from multiple relevant classes instead of just the target class, so that more relevant features can be retained. The authors also illustrate that this enables generated images to be more diverse, as features in the initial iterations need not 'ONLY' be purely discriminative for the current target class of interest. Strengths: The authors propose a simple and useful method to improve the sample diversity of diffusion models within the classifier guidance setting, and in the appendix they show that sample diversity is on par with classifier-free guidance. Progressive Guidance makes sense more generally from a generation perspective too: within the generative paradigm we first sample higher-level semantics, which are not very fine-grained, and then, conditioned on that, we sample latent variables/features relevant to fine-grained details. So it does make sense that we don't want to hyper-focus on one particular class, though it probably depends on the complexity of the class taxonomy of the particular dataset. Weaknesses: The core idea in the paper is easy to follow but lacks novel insights or significant contributions. A few suggestions in terms of details and writing: Though this paper follows classifier guidance from the previous literature, it might be useful to summarize the noise-aware classifier performance at different noise levels, especially from what iteration the classifier gradient is incorporated in the sampling process. What is the value of $\gamma$ at different sampling steps w.r.t. the number of inference steps and schedulers, to easily interpret the results and setting? What is the actual guidance scale value across sampling steps/reverse diffusion; more specifically, at $\gamma = 0.04$, at what stage of sampling is classifier guidance focusing on a one-hot vector?
If you can state that explicitly, it might help the reader easily interpret the setting, since in later iterations of the sampling process much of the semantics or high-level features are already inferred and guidance might not play such a vital role. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In terms of the suppressed-samples illustration and sample diversity, the empirical metrics demonstrate on-par or worse diversity except on ImageNet 64x64? So it is unclear in what settings the proposed method is effective and what the current limitations of the proposed approach are. Also, it might be worthwhile to consider evaluation on a few interesting subsets rather than at the dataset level, as it might be useful to evaluate settings where categories have many sub-classes, requiring fine-grained details to be captured, vs. not many sub-classes. What is the sensitivity of the proposed progressive guidance method to the accuracy of the noise-aware classifier at different noise levels? Analyzing such sensitivity and how it affects FID and Precision/Recall w.r.t. generation would be informative. Also, how challenging is it to train a noise-aware classifier? Aside from a few initial works, there aren't many follow-up works that use classifier guidance, as the authors point out regarding a few of its challenges. I understand the authors used pre-trained checkpoints, but this would be informative for the community toward better interpretation of the applicability of the proposed method and classifier guidance more generally. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The proposed method is simple and effective, and it's encouraging to see that classifier guidance is on par with classifier-free guidance in terms of sample quality and diversity, but the quality boost is marginal. I am not sure if this is a valuable enough contribution in terms of novelty and insights for NeurIPS, as the proposed method is straightforward and does not provide extensive novel insights either on diffusion model behavior or on empirical properties of classifier guidance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weakness discussion ### Weakness 1: Noise-aware classifier performance at different noise levels and the sensitivity associated with $\gamma$. In the ablation study in Section 6.3, we have already discussed the sensitivity of the generated image quality to the value of $\gamma$. Table 5 in the main paper shows how it affects the IS/FID and sFID. We further follow the reviewer's suggestion to explore the sensitivity of $\gamma$ with respect to the noise-aware classifier performance at different timesteps, together with image quality, in Table R3:

| $\gamma$ | FID | Acc@25 | Acc@75 | Acc@150 | Acc@200 | Acc@250 |
|----------|------|--------|--------|---------|---------|---------|
| 0.04 | 5.16 | 00.00 | 0.31 | 20.00 | 78.42 | 100 |
| 0.06 | 5.4 | 00.00 | 0.31 | 20.31 | 79.06 | 100 |
| 0.1 | 7.28 | 00.00 | 0.31 | 21.87 | 78.75 | 99.68 |
| 0.2 | 8.67 | 00.00 | 0.31 | 20.62 | 79.37 | 100 |

Table R3: Sensitivity of $\gamma$ regarding FID and the noise-aware classifier accuracy. Acc@25 means the classifier's accuracy at the $25^{th}$ timestep. As we can observe, FID is very sensitive to $\gamma$, which means the generated image quality is heavily affected by $\gamma$. However, the classifier performance at different noise levels has little sensitivity to $\gamma$ and little correlation with the image quality. ### Weakness 2: Value of $\gamma$ at different timesteps The value of $\gamma$ does not change with the timestep. This value represents how fast the information degree should converge to a one-hot vector and is kept constant throughout the sampling process. ### Weakness 3: Trend of the guidance scale value for the primary class Given $\gamma = 0.04$, the information degree vector often converges to a one-hot vector at around the $50^{th}$ timestep. We can observe the trend in Figure R1 in the attached pdf file in the joint rebuttal.
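The role of $\gamma$ described above can be illustrated with a small stand-in sketch. The exact update rule for the information degree is the one in Section 3.2 of the paper; the exponential sharpening below is only our simplified assumption for illustration: a weight vector over the primary and related classes is repeatedly pushed toward a one-hot vector at a rate set by $\gamma$.

```python
# Hypothetical sketch: an information-degree vector s over classes is
# progressively sharpened toward a one-hot vector on the primary class.
# The exponential-sharpening rule here is an illustrative assumption,
# not the paper's exact update.

def init_weights(sims):
    """Normalize class similarities into an initial information-degree vector."""
    total = sum(sims)
    return [v / total for v in sims]

def sharpen(s, gamma, primary=0):
    """One progressive step: decay every class, give the removed mass to the primary."""
    out = [v * (1.0 - gamma) for v in s]
    out[primary] += gamma * sum(s)  # total mass is preserved
    return out

# Primary class plus three related classes, weighted by (assumed) similarity.
s = init_weights([1.0, 0.8, 0.6, 0.4])
for _ in range(100):            # sampling iterations
    s = sharpen(s, gamma=0.04)
# s now approaches the one-hot vector [1, 0, 0, 0] on the primary class.
```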
In our experience, guidance plays a significant role in fine-tuning class-specific details at the end of the sampling process. ## Question discussion ### Question 1: Clarification of the diversity performance The diversity improvement is not limited to ImageNet64x64. For each resolution, we have: **ImageNet64x64**: Figures 4(a), (b) in the main paper. **ImageNet128x128**: Figure 9 (left), (right) in the Appendix. **ImageNet256x256**: Table 4 (right figure) in the main paper, and Figures 10 and 16 in the Appendix. All the figures indicate a clear superiority of the proposed ProG over vanilla classifier guidance. *Why do some Recall values in Tables 1 and 3 have similar values for ProG and vanilla guidance?* Because we keep the same $w$ values as in the original papers. Except for ImageNet64x64 ($w=4$) and ImageNet256x256 with the unconditional diffusion model ($w=10$), the other settings use small gradient scales: on ImageNet128, $w = 0.5$; on ImageNet256 (with EDS), $w = 0.5$; on ImageNet256 (conditional diffusion), $w = 0.7$. With a very small $w$, vanilla guidance can achieve good IS/FID/sFID/Recall values but sacrifices the conditional information, meaning that the generated images do not carry the class information. Due to this small amount of gradient in the sampling process, applying ProG in these cases does not bring such significant gains in diversity. In this work, we show that, by increasing $w$ to obtain conditional information, ProG helps avoid sacrificing diversity. ### Question 2: Limitations of the proposed approach As discussed in lines 294 to 299 of the main paper, although our proposed ProG successfully alleviates the adversarial effects where images have high conditional confidence but many suspicious features, it currently cannot solve the case where we obtain low conditional confidence for the image, i.e., the case where classification information is ignored during the sampling process.
### Question 3: Clarification about the extensive novel insights for classifier guidance We INDEED incorporated **7 insights** into the behavior of vanilla classifier guidance for diffusion models (all figures after Figure 4 are in the Appendix due to the page limitation). For **diversity suppression**, we worked on **subsets** of many dog breeds in ImageNet: + [**INS1**] The problem of front-face features collapsing under vanilla classifier guidance in Figure 5 (Brittany Spaniel), Figure 6 (English Springer), and Figure 7 (Welsh Springer Spaniel). + [**INS2**] The problem of all images collapsing into a single pose in Figure 5 (Brittany Spaniel). + [**INS3**] The problem of the green-grass background in vanilla guidance for some types of dog in Figure 7 (Welsh Springer Spaniel). For **non-robust feature construction**, we worked mainly on several dog breeds and the leopard class of ImageNet: + [**INS4**] The problem of losing the background in vanilla guidance in Figure 8. + [**INS5**] The problem of non-robust feature construction in vanilla guidance in Figure 11 (ImageNet64x64) and Figures 12, 13, and 14 (ImageNet256x256). For intuition on solving the **two problems**, namely adversarial effects and diversity suppression: + [**INS6**] Figure 2 provides the intuition of how information from other classes helps to avoid the lack of diversity. + [**INS7**] Figure 3 provides the intuition of how information from the tiger can help construct the leopard's robust features. ### Question 5: The training of the noise-aware classifier The training of a noise-aware classifier is mostly the same as that of a standard classifier. The only difference is the data augmentation step: for the noise-aware classifier, random noise is added to the image before training. As a result, training a noise-aware classifier is much more straightforward than training a diffusion model.
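A hedged sketch of that augmentation step follows. The forward-noising formula is the standard DDPM one; the toy alpha-bar schedule and the flattened stand-in "image" are our own illustrative assumptions, not the training code of [10].

```python
import random

# Sketch of the data-augmentation difference described above: a noise-aware
# classifier is trained on images with diffusion-style noise added at a
# random timestep, so it stays accurate on noisy inputs during sampling.
def add_diffusion_noise(x, alpha_bar_t, rng):
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    return [(alpha_bar_t ** 0.5) * v + ((1 - alpha_bar_t) ** 0.5) * rng.gauss(0, 1)
            for v in x]

rng = random.Random(0)
T = 250
alpha_bar = [1.0 - (t + 1) / (T + 1) for t in range(T)]  # toy decreasing schedule

x0 = [rng.random() for _ in range(16)]   # stand-in flattened "image"
t = rng.randrange(T)                     # random timestep per training sample
x_t = add_diffusion_noise(x0, alpha_bar[t], rng)
# (x_t, class_label, t) would then form one training example for the classifier.
```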
We have trained the models several times and have had no difficulty during training. The training details and hyperparameters are on page 27 of [10]. --- Rebuttal Comment 1.1: Title: Looking forward to Reviewer zai6's response Comment: Dear Reviewer zai6, Thank you very much for your thoughtful comments, which help us to have a more comprehensive view of our work. Based on your comments, we have: 1. Provided more analysis of the sensitivity of the proposed ProG to the noise-aware classifier performance at different noise levels. This would be an interesting point of view for readers, since it has not been investigated before. 2. Clarified the hyperparameter questions and provided the trend of the gradient scale given a specific $\gamma$. We believe this is also one of the important aspects that researchers working in this field should be concerned with. 3. Clarified that the diversity improvement is not limited to ImageNet64x64, and that the larger $w$ is, the more significant the improvement we can get. 4. Discussed the limitations of the work further, as mentioned at the end of the main paper. 5. Provided the insights that we have incorporated into the paper. Some insights were put in the Appendix due to page limitations. 6. Provided the details of training the noise-aware classifier. We believe that we have addressed all of the Reviewer's concerns. Please let us know if you would like us to provide more information. We are very happy to resolve your concerns. Best regards, Author #3926 --- Rebuttal Comment 1.2: Title: Looking forward to Reviewer zai6's response Comment: Dear Reviewer zai6, Thank you for your valuable comments on our work. We sincerely hope that our rebuttals have addressed all your concerns. Kindly let us know if you have any other concerns, and we will do our best to address them. Best regards,
null
null
null
null
On Representation of Natural Image Patches
Reject
Summary: - This study proposes a new framework to combine neural coding concepts of information transmission and probability density modeling. - This framework is based on an even code principle where the output response density strives to be even, given some arbitrary input density. - The authors show that this coding principle produces sensible bases for low-dimensional inputs, and orientation-tuned filters for natural image patches. - While conceptually straightforward, it is unclear to me whether this study provides unique insight into sensory coding in neural populations. UPDATE: Sep 1, 2023. I have read the rebuttal, and maintain my score (see details below). Strengths: - The study is clearly presented. - The concepts of max/min entropy on the output and input densities are conceptually simple to follow. - The numerical experiments are reasonable. Weaknesses: - This paper begins with what seems like a false dichotomy of information transmission vs. sensory probability density modeling. Indeed, from a pragmatic point of view, how can one guarantee optimized information transmission without having a good density model of the signal to be transmitted? There exists literature in this area (see questions section), and the motivation/framing of this present study is concerning. - There is a bit of a conceptual leap from 1 or 2 pixels to full image patches, with additional complexity and machinery introduced. The described rationale seems reasonable enough, but it is unclear whether the two-pixel orthogonal case can provide adequate intuition for the multi-dim case. Would a 2D non-orthogonal example be illuminating at all? - Unclear to me whether these results, which rely on binary coding provides theoretical insight for real neural coding. Spikes are inherently binary, yes, but typically spike counts/rate are what is considered the informative variable in neural coding. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I'm surprised there's no mention of the work by Ganguli and Simoncelli ("Efficient sensory encoding and Bayesian inference with heterogeneous neural populations", Neural Comp. 2014), which directly integrates density modeling with information transmission in neural populations. These authors showed that synaptic weighting functions and neural population responses are arrange so as to implicitly encode the probability of the sensory signal. There have also been attempts at generalizing these concepts to multi-dimensional stimuli by Yerxa et al ("Efficient sensory coding of multidimensional stimuli"; PLOS Comp Biol. 2020). At the very least I believe the present study should discuss how their approach fits in with this existing literature. - Minor point: is "even code" a standard information theory term? If it is then that's fine, but it seems like equalized probability would be a more descriptive term. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There was no discussion of limitations. Unclear to me what the drawbacks are of this approach compared to existing literature. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort you have spent in evaluating our paper. * Regarding the concern on a false dichotomy In this paper, we aim to explore what an optimal early-stage information processing system would look like from first principles with as few assumptions as possible. We have identified two fundamental goals. To start, it is crucial to examine the relationship between them carefully. We have indeed found that these two goals are not the same, and there's no guarantee that optimizing one goal will simultaneously optimize the other. As far as we know, this divergence has not been addressed before. * Regarding the conceptual leap We have tried to apply the method used for two pixels directly to image patches. It works, and the results are reasonable. However, it has a few problems, as we mentioned in the paper. We were stuck here for more than a year until we developed the current method described in the paper. In essence, the methods follow the same principle: the loss functions attempt to make the responses of neurons as different/orthogonal/uncorrelated as possible, either by enforcing output statistics explicitly as in the case for two-pixel systems, or by achieving this implicitly by allowing the response vectors to repel each other as in the image patches cases. The method for image patches is close to the two-pixel orthogonal case, where each base has only two states. It essentially does the following: a) We allow each neuron to react to a sequence of image patches and get a response vector for each neuron. b) We use a loss function to make the response vector for each neuron as unique (less correlated) as possible. * Regarding "2D non-orthogonal example" Perhaps you are referring to something like Fig. 1 in the attached pdf in the global rebuttal? In this figure, we created an artificial 2D normal distribution and used one base to model it. The hexagonal lattice is generated by the model itself.
It resembles some results in Yerxa's paper (our states mapped to their neurons). * Regarding the two papers by Ganguli et al. and Yerxa et al. They are indeed very relevant. Had we not overlooked these papers, we could have formulated the abstract and introduction more precisely and effectively. Here is the comparison: 1. Essentially, the output states in our model correspond to neurons in their model. Using each state to represent an equal portion of $p(x)$ implies that the density of states is proportional to $p(x)$ in the continuous limit. This agrees with their result. 2. Their analysis assumes the density of neurons is continuous (e.g. above Eq. 2.10), i.e. implicitly assumes an infinite number of neurons, while we assume a finite number, which is more realistic. With infinite states, regardless of how these states are arranged, $p(x)$ is always perfectly approximated. This is likely why $p(x)$ modeling is not seen as a problem by them. The problem will arise in a more realistic setting. 3. They use many assumptions and manually select features, such as rate coding, a specific form of neuron tuning curves, or a hexagonal lattice in the 2D case. While these assumptions are reasonable or widely used, they impose limitations. In our ab initio approach, we aim to capture the essence of an information processing system with only three main assumptions: a) An information processing system needs to achieve two fundamental goals. b) For an IPU, the number of output states N is significantly smaller than the number of input states M. c) In the limit M → ∞, the input is continuous, and the transformation is a (piecewise) smooth function. Assumption a) is nearly axiomatic. Assumption c) is also implicitly used by Ganguli's work and many others, as it allows for the calculation of derivatives. Therefore, only assumption b) is new. 4.
For the model used by the two papers and many others, the mutual information between inputs and outputs, to cite them: > is notoriously difficult to compute (or maximize) as it requires summation or integration over the high-dimensional joint probability distribution of all possible stimuli and population responses. They instead chose to optimize a lower bound on mutual information. In our model, the mutual information equals the output entropy, which is not only much easier to deal with analytically and numerically but also potentially more straightforward for the neural system to implement, as it only requires local information at the output. 5. Our theory has been numerically applied above 2D and can also be readily extended to time-varying input. * Regarding whether the results provide insight for real neural coding With spike count conveying information, it still implies that the output has a finite resolution. If each neuron has N output states and receives input from L neurons, then the number of input states is $N^L$, which is much larger than $N$. This is in agreement with assumption b), thus making our theory applicable. For rate coding, the methods used in the two-pixel models may be more relevant. The method developed for image patches may be more related to temporal coding. Our theory also aligns with certain experiments, such as *Decorrelated Neuronal Firing in Cortical Microcircuits* by Ecker et al. * Regarding the limitation of the paper. We acknowledge the following limitations compared with others: 1. Our study is abstract and does not emulate some key aspects of real neurons, such as spiking activity. This makes comparison with existing neuroscience literature not straightforward. 2. This paper only studies the noiseless case. * Regarding the name It is not a standard information theory term.
We wanted a short name to refer to our method and came up with this term, which means evenly distributed probability code or evenly partitioned probability code. We are considering more descriptive terms. --- Rebuttal Comment 1.1: Comment: Thanks and I appreciate your earnest responses to my comments and questions. I do believe that you have a novel concept/idea in this study that may be worth presenting. But, in its current form, there is a lack of proper contextualization with respect to highly relevant work; and it's a bit too abstract to understand how one might try to relate this back to neural coding in biological systems (e.g. testable predictions, fits, etc.). I believe remedying these pitfalls would undoubtedly drastically affect much of the manuscript, including the overall narrative of the study. For these reasons, I'm sorry to say that I am not inclined to change my score.
Summary: This paper presents a method for the representation of elementary natural images, based on the observation that classical studies in computational neuroscience focus mainly on methods to improve code efficiency, but that this could be complemented by a study of probability density modeling between neighboring pixels to improve image representation. This work consists of studying a coding principle based on a probabilistic representation and its formalization as a variational optimization problem. The paper presents the elementary method for a single pixel, then extends it to two pixels, and applies it to small images extracted from natural images. This method is enhanced by a heuristic that allows the formulation of a cost function and thus the derivation of an optimization algorithm. The results numerically validate this principle through the deduced output statistics, as well as the emergence of local contour detectors. Strengths: A major strength of this paper is that it derives the image representation algorithm from fundamental principles of machine learning, particularly probabilistic representations. In this way, it rigorously defines the problem of establishing dependencies between the luminance values of neighboring pixels. Weaknesses: The first limitation of this paper is that it applies to very elementary signals, i.e. a pixel, a pair of pixels, or small images of dots. As the initial aim of the paper is to understand the computational functioning of the biological networks that underlie the efficiency of vision, this approach is extremely oversimplified, and dismisses many fundamental aspects, such as the largely parallel processing of large images, the use of large neural networks, or the ability to process multimodal images, in color or in motion, or more generally hierarchical processing that can be forward, but also modulated by feedback signals.
Finally, the results that have been obtained, for example for the detection of local elementary contours, are difficult to interpret quantitatively and seem very preliminary. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have a number of questions arising from reading the paper. First of all, it seems that the principle of optimization, and in particular its derivation for elementary changes in probability, is widely known in the literature. Can you highlight the originality of your approach compared with other probabilistic optimization methods, in particular all variational optimization problems? Figure 1 shows a dependency between pixel values. What is the relationship between your study and studies that have been carried out for many years, for example by Eero Simoncelli on divisive normalization? Also, the result shown in figure 2 shows an adaptation of the code according to the probability density. Are these results compatible with the homeostasis phenomena highlighted in biological neural systems? Finally, figure 5 shows the emergence of a representation of contours in an image, reminiscent of that obtained in studies of sparse coding, for example via Bruno Olshausen's framework. What is the relationship between your principle and these algorithms? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Finally, these questions about the paper reveal the main limitations of this work. In particular, the introduction to the paper presents at length principles that seem very general, such as Shannon entropy, and the rest of the paper does not sufficiently highlight the novelties that are brought forward. 
This brings to light a main limitation of the paper, which is the fact that the propositions that are put forward are very ambitious, but the results are applied to very limited situations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable review. We will address your concerns point by point. * Regarding the limitation to elementary signals Our aim is to explore what an optimal early-stage information processing system would look like, from first principles with as few assumptions as possible. We assume that biological systems should be close to optimal, and we wish to compare which aspects of our results align with biological systems. We believe that this approach can shed some light on how early-stage biological systems function. We acknowledge that our approach is simplistic, but it's foundational to examining more intricate aspects of the visual system later on. We will enhance the abstract and introduction to clarify this goal. * Regarding the lack of quantitatively interpretable results We acknowledge this weakness. Our work, an abstract study from first principles, doesn't fully emulate real neurons, complicating comparisons with existing neuroscience literature. However, it can align with certain experiments, such as *Decorrelated Neuronal Firing in Cortical Microcircuits* by Ecker et al. On the other hand, the work is a theoretical exploration and not intended to solve a practical problem. This makes it difficult to compete with state-of-the-art methods in computer vision, at least initially (We did have one instance in Fig. 3). * Regarding the relation with divisive normalization Both methods aim to achieve statistical independence and both methods use non-linear local transformations. However, our method does not assume a specific form of transformation and thus is more general. Additionally, we are employing a completely different approach to learning the transformations. * Regarding the relation with homeostasis For biological neural systems to implement the even code principle, it is highly probable that they employ mechanisms such as homeostatic plasticity as well as lateral inhibition. 
This is indeed a very intriguing direction for future research. * Regarding our approach compared with probabilistic optimization methods and the originality of the derivations Variational optimization methods like variational autoencoders and other variational Bayesian techniques also try to approximate probabilities. However, their objectives and methodologies substantially differ from ours. Firstly, they focus on the posterior distribution, while we are only interested in the probability distribution of the input. Secondly, KL-divergence (or ELBO) is part of their loss function, while our loss function does not include this term. The reason is that we aim not only to learn $p(x)$ but also to optimize information transmission. In section 3, we have proven that these two goals are not identical. Consequently, one cannot typically find a solution where both goals are optimally achieved. Therefore, we propose a step-by-step approach to achieve these two goals. Firstly, we optimize information transmission, which also approximates $p(x)$ using a step function. We obtain the optimal solution for transmitting input information, but it is not the optimal solution for approximating $p(x)$ when the resolution is fixed. If the approximation of $p(x)$ is insufficient, N is increased to achieve this goal. The rationale behind this is that for living organisms, optimizing information transfer is a more urgent and crucial task than precisely learning $p(x)$. So, the derivations are necessary to understand the relationship between the two goals, and the method is new. We also have not observed any prior work on learning an image patch representation solely through an unsupervised method using a loss function, which only considers the outputs, not to mention the efficiency exceeding that of deep learning methods by a considerable margin. 
Contrastive learning, such as the one used in *A Simple Framework for Contrastive Learning of Visual Representations*, may seem like a counterexample, but it still requires labels and, therefore, is not a purely unsupervised learning method. (Note: On line 89, we state that "minimizing the KL divergence requires minimizing $H_q$". This should not be confused with the well-known proof that minimizing the KL divergence is equivalent to performing maximum likelihood estimation.) * Regarding how our approach compares with Olshausen's The current work is inspired by Olshausen's and many other related works. Here's what's new in our paper: 1. Many optimization methods, including those by Olshausen, use image reconstruction error minimization as one optimization goal. However, this is merely a reasonable first approximation. As stated in "Synaptic energy efficiency in retinal processing" (Vincent and Baddeley): > If the signal (the images) and the noise are Gaussian, then minimising mean squared reconstruction error maximises the information that the outputs provide about the inputs (Baldi & Hornik, 1995). It is known that natural images are not Gaussian distributed, but we would propose this as a reasonable first approximation. Our method directly maximizes the rate of transmission without any approximation. 2. To calculate image reconstruction error, one needs to know the values of both input and output, and one needs to calculate the input from the outputs. Given these requirements, such a method may lack biological plausibility. Conversely, our method, which solely requires local knowledge of the output, offers a more biologically plausible model for neural implementation. 3. Our methods are guaranteed to generate near-optimal utilization of output channels and thus are very efficient. The representations for image patches learned by our simple model are even comparable to sophisticated deep learning methods while only using less than 3% of the deep learning method's storage space. (Fig. 3) 4. Olshausen's method learns linear filters while our method is non-linear. 5. Our method can be easily extended to study time-varying inputs like videos.
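The "less than 3%" storage claim above can be checked with back-of-the-envelope arithmetic. The sizes assumed below — a 96-bit binary code per patch versus a 128-dimensional float32 vector for the deep-learning features — are the ones reported elsewhere in the rebuttals; this snippet is only an illustration of that ratio:

```python
# Storage cost per image patch, under the sizes stated in the rebuttals:
# the proposed model emits a 96-dimensional binary vector,
# the VGG16-based baseline a 128-dimensional float32 vector.
binary_bits = 96
binary_bytes = binary_bits // 8        # 96 bits -> 12 bytes
vgg_floats = 128
vgg_bytes = vgg_floats * 4             # float32 is 4 bytes -> 512 bytes

ratio = binary_bytes / vgg_bytes
print(f"{binary_bytes} B vs {vgg_bytes} B -> {ratio:.1%}")  # 12 B vs 512 B -> 2.3%
```

The ratio 12/512 ≈ 2.3% is indeed below the 3% figure quoted in the rebuttal.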
Summary: This paper explores the relationship between the information theory approach and the probabilistic generative model approach in the context of understanding neural coding. The author suggests that maximizing the information-carrying capacity of output channels and modeling the input probability distribution can be pursued as independent dual objectives. To investigate this hypothesis, the author begins by examining a one-pixel system, followed by a two-pixel system, gradually progressing to 2D image patches. The resulting codes obtained for the images exhibit similarities to edge detectors and orientation-selective neurons in V1, akin to many efficient coding models developed over the past two decades. Strengths: The presentation is reasonably clear. It is rather interesting that the author begins by examining a one-pixel system, followed by a two-pixel system, gradually progressing to 2D image patches. Weaknesses: While the idea that both information transmission and probabilistic modeling of the images should be taken into consideration simultaneously might be new, and is sufficient to learn edge detectors and orientation-selective neurons, the author has not established that it is a necessary condition. In fact, literature from the last thirty years (from Law and Cooper's to Olshausen and Field's and many others) has shown that such codes can be learned based on either one of the criteria. It is surprising that the V1 neural codes were assumed to be sparse binary codes. What is the evidence? The distribution of output values as shown in Figure 2a has not been observed biologically. This brings the Even Code hypothesis into serious question. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the evidence for sparse binary codes? The distribution of output values as shown in Figure 2a has not been observed biologically. This brings the Even Code hypothesis into serious question.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Societal impact not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and insightful feedback. We appreciate your time and effort in providing us with valuable comments. We will address your concerns and questions in detail. * Regarding the learning of edge detectors and orientation-selective neurons in many previous studies The aim of this paper is not to provide yet another theoretical explanation for the emergence of edge detectors and orientation-selective neurons. Our goal is to explore what an optimal early-stage information processing system would look like, starting from first principles with as few assumptions as possible. We assume that biological systems should be close to optimal, and we want to compare which aspects of our results are in common with biological systems. We hope that with this approach, we can shed some light on how early-stage biological systems operate from a new perspective. We do find that in our solution there exist edge detectors and orientation-selective nodes, but this is more like a beneficial byproduct. What sets our work apart from many studies over the past three decades is the following: 1. Guaranteed by a rigorous first-principle method, our representation is optimal with independent output nodes. The efficiency of the representation surpasses that of the state-of-the-art deep learning models by a significant margin (Fig. 3). 2. The optimal representation can be learned by requiring only local knowledge at the outputs. This offers a more biologically plausible model for neural implementation compared to many works from the past three decades, including Olshausen and Field's, which requires non-local information of both input and output. 3. In our theory, we do not assume rate coding or temporal coding, thus we are coding-agnostic. * Regarding the use of "sparse binary codes" to describe neural coding in the early visual system We acknowledge that "sparse binary codes" is not a scientifically precise term. We intended to refer to "sparse binary signals".
The exact coding scheme of the neural system is still not fully understood. While sparse coding is more generally accepted, neurons are often thought to use firing rates to convey information, which is not a binary code. Additionally, it's worth noting that our model is compatible with rate coding for the following reason: If a real neuron employs rate coding and uses spike count to convey information, it implies that the output possesses a finite resolution. If each neuron has N output states and receives input from L neurons, then the number of input states is $N^L$, which is significantly larger than N. This is in line with our definition of an Information Processing Unit (IPU). For rate coding, the methods used in the two-pixel case are likely more relevant (while we used the two-pixel case as an example, the method can be applied in higher dimensions). The method developed for image patches may be more related to temporal coding. However, careful studies are necessary and might require taking noise and time-varying inputs into account. These are interesting directions for future work. * Regarding Figure 2a The current study is abstract, and the model we propose does not try to model some specific aspects of real neurons, such as spiking activity. Therefore, Fig 2a cannot be directly compared to real neurons. The true meaning of Fig. 2a is as follows: In our numerical experiments, the initial setup is such that outputs can be any real value between 0 and 1. After the model has been trained, we have verified in Fig. 2a that almost all of the output values are either 1 or 0, signifying that our model encoded the images using binary representation. * Regarding biological evidence Our work, an abstract study from first principles, doesn't fully emulate real neurons, making comparisons with existing neuroscience literature more complex. However, it can align with certain experiments, such as *Decorrelated Neuronal Firing in Cortical Microcircuits* by Ecker et al. 
In Ecker's paper, they found: > We found that even nearby neurons with similar orientation tuning show virtually no correlated variability. This finding is in line with the even code principle. In fact, one version of the loss function in our image patches method aims to make the output nodes as decorrelated as possible. --- Rebuttal Comment 1.1: Title: Thank you for your clarifications Comment: I have read the reply by the authors. I agreed that there are some interesting and novel ideas there, but probably a bit too abstract, and need more contextualization, explanation and connections to biology to make the theory more concrete in order to have real impact.
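The even code principle discussed in these rebuttals — a deterministic step function $y=f(x)$ that partitions the input distribution into $N$ equal-probability regions, so that the output distribution is uniform and the output entropy (which equals the mutual information for a deterministic map, since $H(y|x)=0$) is maximal — can be sketched in one dimension with a quantile partition. This is a minimal illustration under simplifying assumptions (a Gaussian input, $N=8$, direct access to samples), not the authors' MLP-based method:

```python
import math
import random
from collections import Counter

random.seed(0)
N = 8                                        # number of output states
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Even code in 1D: place partition boundaries at the k/N-quantiles of p(x),
# so each output state covers an equal probability mass.
ordered = sorted(samples)
bounds = [ordered[len(ordered) * k // N] for k in range(1, N)]

def f(x):
    """Deterministic step function mapping input x to one of N output states."""
    return sum(x >= b for b in bounds)

# The induced output distribution Q(y) is uniform, so H_Q hits log2(N).
counts = Counter(f(x) for x in samples)
Q = [counts[s] / len(samples) for s in range(N)]
H_Q = -sum(q * math.log2(q) for q in Q if q > 0)
print(f"output entropy = {H_Q:.3f} bits (max possible = {math.log2(N):.3f})")
```

Because the quantile boundaries split the empirical distribution into exactly equal parts, the measured output entropy reaches the maximum $\log_2 N$, which is the sense in which information transmission is optimized while $p(x)$ is approximated by a step function.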
Summary: The authors study simple models of neural encoding. The question is whether two distinct goals, accurate transmission of information and learning the distribution of environmental stimuli, can be achieved simultaneously. The authors argue that yes, it can, using the key assumption of a uniformly partitioned input space. The coding principle of the authors is finally applied to image patches, where it yields edge-like features. Strengths: - The author studies an important question, namely simple coding schemes that reproduce filters that resemble those of deep convolutional networks or parts of the visual system. - The author develops intuitions in simple toy models before moving to applications on real images. - The filters shown in Fig. 3 bear a striking resemblance to the filters of a trained VGG model (although I have some questions on the methodology, see below) Weaknesses: I found the article confusing to read in a few places. For example, early on, the author states that "maximising the rate of transmission" is equivalent to maximising the entropy of the output distribution $H_Q$. I would think that transmission of information requires maximising the mutual information $I(X; Q)$ between the distribution over inputs and outputs. (around eqs 1 + 2; note that the notation is rather confusing here, using lower-case $p$ for the distribution over input stimuli $x$, and capital $Q$ for the distribution over output states $y$). Why are you maximising simply the entropy of the distribution over outputs? Similarly, in the section on the even code principle, I'm confused by the question of how the IPU models the input distribution. The way I read Sec. 2, the IPU is considered a function of the stimuli $y=f(x)$ -- in that sense, it doesn't model the input distribution, we cannot sample from it.
It can give a more or less faithful representation of $x$, as measured for example by mutual information if the mapping is probabilistic. As you then move on to learning two-pixel distributions, I'm confused about your use of MLPs. MLPs are powerful neural networks, but you seem to use them to "learn" to partition the input space into equal partitions - is this not possible by just writing down a simpler model? Given my trouble understanding the first few sections, I cannot competently comment on the experimental results - while the filters obtained by the authors do bear a striking resemblance to the filters of a VGG network, I don't really understand how the author obtained them. Some additional clarifications would therefore be more than welcome. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: See above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We will address your concerns and questions point by point. * Regarding maximizing the entropy of the output distribution In Appendix A, we have proven that for our model, maximizing mutual information (rate of transmission), defined as $H_Q - H(y|x)$, is equivalent to maximizing the entropy of the output distribution. * Regarding the use of capital $Q$ The seemingly unconventional notation is purposeful. In this paper, we use a capital letter to denote the distribution over the output states $y$, and a lower-case letter for distributions over the input states $x$. In Section 3, we introduced the quantity $q(x)$, which is the translation of $Q(y)$ into the input space. Using $Q(y)$, instead of the seemingly natural lower-case notation, clarifies that it is a different function from $q(x)$. We will explain the notation in the text to prevent confusion. * Regarding how the IPU models the input distribution The function that the IPU learns, $y=f(x)$, is a deterministic step function that evenly partitions the input probability distribution. Each value of $y$ maps to an equal portion of $p(x)$. Consequently, we can translate the output probability distribution $Q(y)$ (a constant) into a probability distribution learned by the model, $q(x)$, in the input space using Eq. (3) and (4). $q(x)$ is a discretized approximation of the real probability distribution $p(x)$ (see Fig. 2 in the attached rebuttal pdf). * Regarding the use of MLPs to model the two-pixel distribution The MLP is used to approximate the function $y=f(x)$. The partitions were visualized using the `tricontour` function from `matplotlib`. Using MLPs to create a set of equally spaced parallel lines, as in Fig. 1(a), might seem like overkill. However, an information processing system should be general and work with any 2D (or higher-dimensional) distributions (see Fig. 1 of the one-page rebuttal pdf for an example).
The partitioning must be learned by the IPU, which initially has no knowledge about the data it will encounter. For this reason, we use a versatile and powerful function approximator like an MLP. Another reason is that we want to use the same kind of model for all numerical studies, and the MLP serves this goal very well. * Regarding Fig. 3 This figure does not compare the filters learned by the model but studies how well the model embeds image patches in its representation space according to image similarity as viewed by the model. The generation of this figure is described in its caption and the subsection "5.2.2 Image Patch Similarity". To add more detail, we used the following steps: 1. We randomly sampled 1 million 5x5 color image patches from the datasets. 2. For each image patch, we used our model to generate a representation, which is a binary vector of size 96 (12 bytes). 3. For each image patch, we used the first 10 layers of a pretrained VGG16 model (the `torchvision` default VGG16 model) to generate a representation, which is a float vector of size 128 (512 bytes). 4. We selected 16 random image patches out of the 1 million and plotted them as the first column in Fig. 3 (a) and (b). 5. For each of the 16 random images, we calculated the distance to the remaining 999,999 images using the representations we got in steps 2 and 3. Then, we chose the top 9 image patches with the smallest distances and showed them in the remaining 9 columns in Fig. 3 (a) and (b) in order, respectively. Our simple model's representation produces comparable results to those generated with VGG16 while using less than 3% of the storage space. This illustrates the efficiency of our model. We hope the above explanations answer your questions and make our paper more understandable. Should you have any further questions, we would be glad to clarify them. Thank you for the time and effort you have put into evaluating our paper.
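The retrieval procedure in steps 1-5 above can be sketched as follows. This is a toy illustration, not the authors' pipeline: random 16-bit codes and 4-dimensional float vectors stand in for the actual 96-bit model representations and 128-dimensional VGG16 features, and Hamming/Euclidean distances are assumed as the respective metrics (the rebuttal does not name the metric used):

```python
import math
import random

random.seed(1)

def hamming(a, b):
    """Distance between two binary codes stored as ints (XOR popcount)."""
    return bin(a ^ b).count("1")

def euclidean(u, v):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

# Toy stand-ins for the two representations described in steps 2 and 3.
n_patches = 1000
binary_codes = [random.getrandbits(16) for _ in range(n_patches)]
float_codes = [[random.random() for _ in range(4)] for _ in range(n_patches)]

query = 0  # index playing the role of one of the 16 random image patches

# Step 5: rank all remaining patches by distance to the query, keep the top 9.
others = [i for i in range(n_patches) if i != query]
top9_binary = sorted(others, key=lambda i: hamming(binary_codes[query], binary_codes[i]))[:9]
top9_float = sorted(others, key=lambda i: euclidean(float_codes[query], float_codes[i]))[:9]
print(top9_binary)
print(top9_float)
```

In the paper's actual figure, each such top-9 list fills one row of Fig. 3 (a) and (b), so the two representations can be compared visually on the same queries.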
--- Rebuttal Comment 1.1: Title: Thank you for your reply Comment: I have read your reply. I appreciate the clarifications on my question; I think those points should be clarified in an eventual revision. Looking through the other reviews and the respective rebuttals, I think that this process has unearthed a few directions in which the paper could be strengthened. In the meantime, I will keep my score.
Rebuttal 1: Rebuttal: Attached are the two figures referenced in the rebuttal. Pdf: /pdf/ef70b16448c0cf952a313d09e2a15c6f8c813be2.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Optimistic Rates for Multi-Task Representation Learning
Accept (poster)
Summary: This work studies multitask representation learning for some (general) function hypothesis classes. The estimators correspond to the composition of a "complex" function $h$ (corresponding to the shared representation) and a specific task regressor $f_i$ from a simple hypothesis class. This work aims at bounding the excess transfer risk of the ERM estimator. It extends the bound of Tripuraneni et al. (2020) to optimistic rates. More precisely, the square root bounds of Tripuraneni et al. (2020) are improved to linear bounds when zero-risk solutions exist on every task (e.g. no label noise). To do so, the authors do not need to assume the loss to be Lipschitz, but instead need to assume it is smooth (Lipschitz derivative). ------- I acknowledge having read the author's rebuttal. The authors answered some of my concerns as explained in the comments below. Strengths: Getting optimistic rates for multitask learning is a nice improvement wrt previous bounds on transfer excess risk. The obtained bound is quite general. Notably, the setup and results are well put in the context of the literature, easily relating this result to previous ones. Weaknesses: My main concern is about the current writing of the paper. First of all, I think it deserves a few careful proofreadings, as many typos or grammatical errors can be found. Here are a few examples: - l. 108: "we can reuse of" - l. 118-119: "is bound", "is bound by" - l. 161: I think there is a j subindex missing in the last sum in both the definition of $Pf$ and $P_n f$ - l. 192: "In order for there to be any value to learning" -> weird phrasing - l. 198: I guess it should be $f^*$ and $\hat{h}$ instead of $f$ and $h'$ Although these typos are not significant in many sentences, they bring confusion when they appear in definitions (lines 161 and 198). More generally, I find the mathematical content to be quite technical and hard to follow. Some claims are confusing if not inexact.
For example, the setting described on line 171 *does not* imply that $f_j^{\star}\circ h^{\star}$ is the optimal predictor. The same claim indeed also holds for $\frac{1}{2}f_j^{\star}\circ h^{\star}$. I think the authors should here introduce the setting similarly to Tripuraneni et al. (2020), saying that $f_j^*$ and $h^*$ are in the hypothesis class. Also, some notations are heavy/confusing, making the technical content (which is already hard to follow by definition) very hard to get. As an example, the variable $q$ in the equation on line 204 represents both an integer and functions. In Theorem 1, it is not clear what $\psi$ exactly is. I find the whole $r_1^*$ and $r_2^*$ notation quite cumbersome: I would have preferred to directly state Theorem 1 with the bounds on these quantities provided in Theorem 2. More generally, the whole Section 3 is hard to understand. I think that illustrating the obtained results in the particular case of the linear representational model would help the reader. As a last concern, the current work seems like an incremental improvement of Tripuraneni et al. (2020), with tools leading to optimistic rates that are mostly inherited from Srebro et al. (2010). Even though the authors claim in the introduction that Srebro et al.'s tools cannot be directly applied to MTL, the difference seems tenuous and only due to bounding the Rademacher complexity. Also, the authors might provide more motivation regarding optimistic rates. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See weaknesses section above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback and for appreciating our contributions. Regarding the typos pointed out and suggestions, we appreciate the careful reading and have made corrections in the revision. --- ## About the setting with the optimal predictor We thank the reviewer for the suggestion. Indeed, from the assumptions, the optimal predictor is only guaranteed to depend on $x$ via $f_j^*\circ h^*$. We have revised the sentence as suggested by the reviewer. --- ## Regarding $q$ notation on line 204 Note that $q$ is an integer and $q_j(\cdot)$ is a function. However, we will modify the notation to emphasize this difference. --- ## About $\psi$ within Thm. 1 We will clarify that it is *some* subroot function of $r$ which bounds the local Rademacher complexity. --- ## For $r_1^*$ and $r_2^*$ notation Our main result, Theorem 1, actually *only* uses Assumption 1.1, boundedness and non-negativity of the loss function $\ell$. We will change the writing to indicate this. This gives a bound in terms of fixed points of the local Rademacher complexities. Finally, under the additional assumption of smoothness of the loss, Lipschitzness of the predictor class $\mathcal{F}$ and the boundedness of $\mathcal{F}\circ \mathcal{H}$ we can give a more interpretable bound on the fixed point in terms of their individual complexities via the Gaussian chain rule. Hence, stating the result in terms of $r_1^*$ and $r_2^*$ is required for generality. --- ## About including a linear representation example That is a great suggestion; we will indeed include this in the final version. --- ## Regarding our contribution w.r.t. prior works We see our work as foundational, extending our understanding of multi-task representation learning. For a more complete discussion recall our Section 1.1 - Our techniques, dedicated to "provid[ing] an overview of techniques and challenges overcome in the context of prior art." 
However, perhaps this is best emphasized visually: in working towards proving our main MTRL result - see Fig. 1, the proof graph - we have provided many necessary improvements and generalizations to build towards our final MTRL rates. The main technical contributions are (a) extending core concentration inequality tools to the general MTRL setting, and (b) bounding the local Rademacher complexity. * **Concentration inequalities.** Most of the existing tools and techniques in learning theory focus on the single-task setting. In order to show our results we need concentration inequalities which apply to the MTL setting; however, the single-task results do not trivially extend to it. Indeed, as [YLK+18] observed, the difficulties in deriving MTL results are foundational, going back to a Bennett-like inequality for the suprema of empirical processes (our analog is Thm. 7). From here we developed foundational analogs, e.g. Thm. 6, of the single-task local Rademacher complexity results applicable to the MTL setting. We believe these theorems are of independent interest. * **Optimistic rates for non-negative smooth losses.** Part of the proof in [SST10] first bounds the covering number by the fat-shattering dimension and then bounds the fat-shattering dimension by the Rademacher complexity. This works well in the standard single-task setting. Yet, there are no analogs we know of for the fat-shattering dimension of the multi-task function class. Herein lies the weakness of that approach; our approach instead uses the Gaussian complexity, which is a much more general notion of complexity. Besides the generality of our proof technique, which achieves better rates even in the single-task setting, our contribution is a simpler proof. * **Mistakes.** Finally, in the process of developing the tools needed for the MTL setting we have identified various errors within the literature. 
First, while seminal and foundational, the proof within [SST10] has some minor flaws; in an effort to correct the literature, we included these within Appx F.1. Concretely, a fat-shattering inequality is used in the wrong direction, an assumption between parameters is not specified, a term is missing when converting between Rademacher complexity and width, and the process is not centered, which is required for the second moment within the upper limit of integration of Dudley's integral to be bounded. Second, Lemma 17 fails to generalize to bounded and possibly negative functions (see footnote 5 on page 27). Finally, while not a mistake, we clarify the literature w.r.t. a comment made within [YLK+18] about achieving the same constants within a single-task setting; see lines 344-356. --- Rebuttal Comment 1.1: Title: Author rebuttal Comment: I thank the authors for their detailed answer. In light of their answer, I have decided to raise my score, as the technical contribution seems far from incremental. I thus think that this work is strong from a technical point of view. Yet, I still believe it requires a lot of polishing in terms of writing, and I would recommend the authors carefully improve this aspect in the revised version.
Summary: This paper shows novel statistical rates for generalizing to a target task via multi-task representation learning (MTRL) that attain the optimistic $1/nt + 1/m$ rate, where $n$ is the number of samples per source task, $t$ is the number of source tasks, and $m$ is the number of samples per target task, when the optimal source and target risks achievable by the representation and predictor function classes are small. This is an improvement over the previous state-of-the-art rate of $1/\sqrt{nt} + 1/\sqrt{m}$, and matches the analogous optimistic rate in the single-task setting. Key to the results are novel technical contributions extending local Rademacher complexity analysis to the multi-task setting. Strengths: 1. The results are a very significant contribution in my opinion -- the established rates significantly improve over the previous state-of-the-art in the near-realizable setting. Indeed, the near-realizable setting is important to consider. All assumptions are reasonable and consistent with prior work. Broadly, multi-task representation learning is an important research area. 2. The analysis is rigorous; there are no mistakes in the proofs to my knowledge. From my understanding, substantial technical innovation is required to achieve the results by extending the local Rademacher complexity framework to the multi-task setting. 3. The paper is very well-written. Weaknesses: 1. The Related Works section should also compare with [XT21]. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback and appreciating our contributions. Regarding [XT21], thank you for pointing us to this work -- we will add a discussion as suggested, in the related work section. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. I am maintaining my score.
Summary: This paper aims to examine the optimistic rates for multi-task representation learning. The authors illustrate that the rate may be faster than the standard rate, depending on the complexity of the learning tasks. The analysis comprises multiple theoretical contributions. Strengths: 1. The theoretical analysis sounds solid from my perspective. However, I am not an expert in this area and haven't gone through all the supplementary material. I would like to defer to other reviewers in the discussion period. 2. The authors provide detailed comparisons with other theoretical works and expand on the key contributions, helping to understand the critical points of this work better. However, it is still hard to understand the details for readers not in this field. Weaknesses: 1. Assumption 1 regarding the boundedness of the loss function and its gradients is too restrictive, particularly since the feature domain is not bounded. 2. Although the paper presents new findings in this field, it lacks a thorough explanation of the significance of these results. 3. No conclusion section. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Can you explain the distinction between multitask learning and multitask representation learning? Typically, MTL involves acquiring shared representation layers, which serve as the objective for MTRL. 2. What is the significance of optimistic rates in MTRL, and how does it manifest in real-world situations? 3. Could you offer an intuitive definition of task diversity as it pertains to Definition 2? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: 1. Assumptions are too strong. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your questions, suggestions, and for appreciating our contributions. --- ## Regarding the assumptions on the loss function Our main result, Theorem 1, actually only uses Assumption 1.1, boundedness and non-negativity of the loss $\ell$. We will change the writing to indicate this. For the subsequent results, we assume that the gradient is Lipschitz (i.e. the loss is smooth), but this does not preclude the gradient from being unbounded. We note that these assumptions are standard and borrowed from prior works in (single-task) learning theory such as [SST10]. --- ## About a conclusion section We will add a conclusion section in the revision. --- ## Regarding the distinction between MTL and MTRL MTL is more general than MTRL. MTL is about learning several tasks simultaneously. This is not limited to procedures which learn a common representation for all tasks, which is the case for MTRL. For an example of work which is MTL but not MTRL see [MP04]. [MP04] - Micchelli, C., \& Pontil, M. (2004). Kernels for Multi--task Learning. Advances in neural information processing systems, 17. --- ## Significance of optimistic rates in MTRL The pursuit of optimistic rates is motivated by practice. In many settings, while we can only prove a rate of $1/\sqrt{n}$, it is observed in practice that the error converges at a faster rate. This typically happens for tasks that we can learn with good accuracy, which is often the case in practice. This is what we observe in transfer learning and what we hope to understand through optimistic bounds here. The optimistic rates for MTRL that we provide show that with smooth losses, standard two-stage ERM can automatically adapt to the problem instance (both for the source tasks and the target task). As a result, we can interpolate from the standard rate of $\mathcal{O}(\frac{1}{\sqrt{nt}} + \frac{1}{\sqrt{m}})$ to the fast rate $\mathcal{O}(\frac{1}{nt} + \frac{1}{m})$. 
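For intuition, the interpolation described here follows the generic optimistic-rate shape from [SST10], roughly $\sqrt{L^* C / N} + C / N$ where $L^*$ is the optimal risk, $C$ a complexity term, and $N$ the sample count. The sketch below is illustrative only (the function name and constants are ours, not the paper's):

```python
import math

def optimistic_bound(n_total, complexity, l_star):
    """Illustrative optimistic-rate shape sqrt(L* * C / N) + C / N, with
    constants and log factors dropped: it decays like 1/sqrt(N) when the
    optimal risk L* is large, and like 1/N in the realizable case L* = 0."""
    return math.sqrt(l_star * complexity / n_total) + complexity / n_total

# Realizable case (L* = 0): the bound decays at the fast 1/N rate.
fast = optimistic_bound(10_000, 1.0, 0.0)
# Noisy case (L* > 0): the sqrt term dominates, recovering the slow rate.
slow = optimistic_bound(10_000, 1.0, 0.25)
assert fast == 1.0 / 10_000 and slow > fast
```

The same shape, applied separately to the source phase (with $N = nt$) and the target phase (with $N = m$), gives the interpolation between the two displayed rates.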
Also, note that in practice, we typically use a fairly complex class for learning the representations (e.g., multilayered neural networks) and a simpler one for the predictor (e.g., linear functions). Therefore, the price we pay for $\epsilon$-excess risk on the target task with MTRL is $\frac{C(\mathcal{F})}{\epsilon}$ as compared to $\frac{C(\mathcal{F} \circ \mathcal{H})}{\epsilon}$; this can yield significant gains as the former is much smaller. This result further emphasizes the provable benefits of pooling data from multiple tasks for transfer learning. --- ## Intuition for task diversity assumption Intuitively, the task diversity assumption requires that the ratio of the excess risk of the target task to the excess risk of the source tasks is well-behaved, i.e. the ratio is $O(1)$. In other words, for any representation, the excess risk w.r.t. the best predictors for the target task is upper bounded by the excess risk of the source tasks w.r.t. the best predictors for the source tasks. For example, in the linear case this holds when the source tasks span $\mathbb{R}^d$ and are therefore able to "learn" a target task in all directions (e.g. see [DHK+20]). We have a high-level discussion of this assumption within Appendix B - Task diversity digression section. --- ## Regarding our assumptions All our assumptions are standard in learning theory -- see for instance the foundational work of [SST10], in the single-task smooth loss setting, and the work of [TJJ20], in the multi-task Lipschitz loss setting.
Summary: The authors consider the transfer learning and the multi-task learning setting in a Representation Learning context: multiple source tasks are used to learn a good common representation to facilitate the learning process of a target task (transfer learning) or of the same source tasks (multi-task learning). Under regularity assumptions on the loss function and task similarity, the authors provide optimistic statistical rates for the transfer learning and the multi-task learning setting, demonstrating the benefit of representation learning. These optimistic bounds interpolate between the standard $-1/2$ rate and the fast $-1$ rate, depending on the difficulty (i.e. the realizability) of the learning tasks. In order to reach such a result, the authors also provide the following intermediate contributions: they give a local Rademacher complexity theorem in the representation learning setting (for both the multi-task learning and transfer learning scenarios) and a chain rule for local Rademacher complexity for composite function classes which allows the (local) complexities of the representation and predictor classes to be decoupled. Strengths: The authors address an interesting topic in a formal and rigorous way. The paper is written in a quite clear way. Weaknesses: Some bounds given by the authors, especially those related to the Rademacher complexities, and also Sec. 4 and Sec. 5, should in my opinion be simplified further in order to make them more readable. The authors did not provide computational experiments testing the performance of the proposed method in the main body. The authors adapt optimistic rates present in the literature for the single-task setting in order to get their optimistic bounds for representation learning. I wonder if the theoretical contribution is enough for the venue. 
I would like to better understand the main technical difficulties the authors had to face to adapt the optimistic bounds from the single-task to the multi-task setting. Some basic gradient-based representation-learning references are missing, such as [1-2] below. References [1] Denevi et al. "Online-within-online meta-learning." [2] Khodak et al. "Adaptive Gradient-Based Meta-Learning Methods." Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Ass. 1, are the Lipschitz and the boundedness assumptions necessary? Smoothness is for sure necessary to get faster rates. The authors should in my opinion recall that $m$ in the first equation represents the number of samples of the target task. Could you please make a detailed comparison between your optimistic bounds and the non-optimistic bounds in the references [1-2] I mentioned above, by only keeping the leading terms w.r.t. the number of samples/tasks and the complexity measures? What you call 'transfer learning' looks more similar to meta-learning. In transfer learning usually you only have one source task and one target task and you do not investigate the source task training, but only the transfer of knowledge from source to target. The authors developed their analysis under a quite general notion of task similarity introduced in previous literature. However, such a similarity assumption seems not well motivated for the representation learning setting, in which the natural task similarity assumption is that the target estimators of the tasks all lie in the range of the representation. This natural link is well explained for instance in the reference [1] I mentioned above. The paper 'Optimistic Rates for Learning with a Smooth Loss' does not use Rademacher complexity measures in order to give optimistic rates. Did you base the proofs on that? If yes, why are you instead using Rademacher complexities? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I do not see any potential negative societal impact related to this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback and for appreciating our work. --- ## Our theoretical contribution and technical difficulties We see our work as foundational, extending our understanding of multi-task representation learning. For a more complete discussion recall our Section 1.1 - Our techniques, dedicated to "provid[ing] an overview of techniques and challenges overcome in the context of prior art." However, perhaps this is best emphasized visually: in working towards proving our main MTRL result - see Fig. 1, the proof graph - we have provided many necessary improvements and generalizations to build towards our final MTRL rates. The main technical contributions are (a) extending core concentration inequality tools to the general MTRL setting, and (b) bounding the local Rademacher complexity. * **Concentration inequalities.** Most of the existing tools and techniques in learning theory focus on the single-task setting. In order to show our results we need concentration inequalities which apply to the MTL setting; however, the single-task results do not trivially extend to it. Indeed, as [YLK+18] observed, the difficulties in deriving MTL results are foundational, going back to a Bennett-like inequality for the suprema of empirical processes (our analog is Thm. 7). From here we developed foundational analogs, e.g. Thm. 6, of the single-task local Rademacher complexity results applicable to the MTL setting. We believe these theorems are of independent interest. * **Optimistic rates for non-negative smooth losses.** Part of the proof in [SST10] first bounds the covering number by the fat-shattering dimension and then bounds the fat-shattering dimension by the Rademacher complexity. This works well in the standard single-task setting. Yet, there are no analogs we know of for the fat-shattering dimension of the multi-task function class. 
Herein lies the weakness of that approach; our approach instead uses the Gaussian complexity, which is a much more general notion of complexity. Besides the generality of our proof technique, which achieves better rates even in the single-task setting, our contribution is a simpler proof. * **Mistakes.** Finally, in the process of developing the tools needed for the MTL setting we have identified various errors within the literature. First, while seminal and foundational, the proof within [SST10] has some minor flaws; in an effort to correct the literature, we included these within Appx F.1. Concretely, a fat-shattering inequality is used in the wrong direction, an assumption between parameters is not specified, a term is missing when converting between Rademacher complexity and width, and the process is not centered, which is required for the second moment within the upper limit of integration of Dudley's integral to be bounded. Second, Lemma 17 fails to generalize to bounded and possibly negative functions (see footnote 5 on page 27). Finally, while not a mistake, we clarify the literature w.r.t. a comment made within [YLK+18] about achieving the same constants within a single-task setting; see lines 344-356. --- ## Regarding the Lipschitz and boundedness conditions in Assumption 1 Our main result, Thm. 1, actually *only* uses Assumption 1.1, boundedness and non-negativity of the loss. We will change the writing to indicate this. This gives a bound in terms of fixed points of the local Rademacher complexities. Under the additional assumption of smoothness, we can further bound the fixed point. Finally, under the additional assumption of the Lipschitzness of the predictor class $\mathcal{F}$ and the boundedness of $\mathcal{F}\circ\mathcal{H}$ we can give a more interpretable bound in terms of their individual complexities via the Gaussian chain rule. 
Concluding, we would like to emphasize that these are standard assumptions in learning theory, including the non-negativity and boundedness of the loss; for instance, see [BBM02, SST10, TJJ20]. --- ## Regarding references [1-2] Thank you for these references. While these indeed bear similarities to our setting, there are crucial differences, which make them incomparable. These include a generative model of tasks in the mentioned papers, which we don't have, as well as the assumption of convexity therein (which we understand is made for computational reasons). We will add a discussion to this effect in the revised version. --- ## Regarding transfer learning vs. meta-learning We are interested in the problem of transfer learning by learning a common representation from multiple source tasks. The most related work is [TJJ20] so we decided to remain within their discourse and reuse their terminologies. We note that the setting of multiple source tasks has appeared in many earlier works on transfer learning, for instance, [Bax00]. However, there are indeed many similarities between our setting and the ones mentioned by the reviewer. --- ## Regarding the task similarity assumption The task similarity assumption has been studied in prior works, for instance [TJJ20]. Our main focus is to understand whether, under this standard assumption, we can achieve fast rates, such as those in single-task settings. However, in the special case of a linear representation class, the assumption recovers the task diversity assumption within many prior works, such as [DHK+20, TJJ21], and is similar to the one mentioned by the reviewer. --- ## Gaussian vs Rademacher complexity We assume you mean Gaussian complexity? A side-by-side comparison between our proof and the one within [SST10] shows that although they start similarly, with Dudley's theorem, they soon diverge substantially. 
The biggest reason is that there exist results in terms of Gaussian complexity for which there are no known Rademacher complexity analogs. Nevertheless, since Gaussian and Rademacher complexities are related, it is possible to state all the results in terms of Rademacher complexity. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: I thank the authors for the reply. My comments are below. Regarding references [1-2], I would like to see a comparison, at least in the convex setting. This would be useful in my opinion in order to understand the meaning of the theoretical results presented by the authors. Regarding the task similarity assumption, the authors say: '..in the special case of a linear representation class, the assumption recovers the task diversity assumption within many prior works, such as [DHK+20, TJJ21] and is similar to the one mentioned by the reviewer.' I would like to have more technical details explaining the link between this and the standard task similarity assumption I explained before. --- Reply to Comment 1.1.1: Comment: **Comparison to prior work.** As we said in our rebuttal, strictly speaking, our results are incomparable with those prior works, due to different assumptions. However, we present the rates obtained in [1-2], and "compare" them with ours. First, recalling the differences: both works study the problem formulation of meta-learning, which, as they explain, is more general than learning with a (potentially non-linear) feature map, which is what they call "Feature Learning". Our work is limited to this setting of "Feature Learning". Besides, these works assume a generative model for tasks whereas we have a task diversity assumption. Further, the works consider convex losses, which allows for guarantees for gradient-based methods. Our work does not assume convexity. We now present the guarantees in [1-2]. As requested, we only write the bounds as a function of the number of tasks $T$ and the number of samples per task $n$. 
The work [1] is restricted to linear predictors and convex losses. In their Thm. 5, they get a rate of $O(\frac{1}{\sqrt{n}}+\frac{1}{\sqrt{T}})$ on excess transfer risk. However, for the feature learning setting, as they point out, this rate contains terms with hidden dependence on $n$ and $T$. Their guarantee for feature learning, with linear representations (which is more restrictive than ours), Corollary 7, gets a rate of $O(\frac{1}{\sqrt{n}}+\frac{1}{T^{1/4}})$. The work [2] mainly considers two settings of convex and strongly-convex losses, respectively. Thm. 5.1 in that work obtains the following rates: - $O(\frac{1}{\sqrt{n}}+\frac{1}{\sqrt{nT}})$ for convex Lipschitz losses - $O(\frac{1}{n}+\frac{1}{\sqrt{n} T})$ for strongly-convex Lipschitz losses. In comparison, our rate is between $O(\frac{1}{nT}+\frac{1}{n})$ and $O(\frac{1}{\sqrt{nT}}+\frac{1}{\sqrt{n}})$ depending on the level of realizability in the source and target tasks. This adaptivity to realizability is missing in the bounds in works [1-2], which is one of our key contributions. We note that we are ignoring the hidden numerators in the rates, which interestingly contain complexity terms for the representation and predictor classes. Our worst-case rate then is $O(\frac{1}{\sqrt{n}})$, which is asymptotically no worse than those in [1] and [2] in the convex Lipschitz setting. The rate in [2] for the strongly convex setting, with a large number of tasks, $\frac{1}{n}$, is better than our worst-case rate but the same as our optimistic rate. However, this is primarily due to strong convexity, which enables faster rates. **Regarding Task similarity assumption** We elaborate the connection between our task diversity assumption, which is taken from [TJJ20], and those in works limited to linear representations such as [DHK+20] and [1]. The connections are established in the prior works, and towards this, we quote the relevant parts from these works. 
Note that [TJJ20] say the following about the task-diversity assumption they introduce. > Despite the abstraction in this definition of task diversity, it exactly recovers the notion of task diversity in [TJJ20] and Du et al. [2020], where it is restricted to the special case of linear functions and quadratic loss. We now motivate the task diversity assumption in [DHK+20] in the context of [1]. Ass. 4.3 within [DHK+20] states that the matrix $W^*=[w_1^*,\ldots,w_T^*] \in \mathbb{R}^{k\times T}$ of optimal lower-dimensional predictors satisfies $\sigma_k^2(W^*)=\Omega(\frac{T}{k})$. They go on to say: >[This assumption] is equivalent to saying that $\frac{\sigma_1(W^*)}{\sigma_k(W^*)}=O(1)$. Roughly speaking, this means that $\\{w_t^*\\}_{t \in[T]}$ can cover all directions in $\mathbb{R}^k$. In contrast, the assumption in [1] is that with $B\_\rho=\mathbb{E}\_{\mu\sim\rho} v\_\mu v\_\mu^{\top}$, where $v\_\mu \in \mathbb{R}^d$, for any $\theta\in\mathbb{R}^{d\times k}$, we have that $\mathrm{Ran}(B\_\rho)\subseteq\mathrm{Ran}(\theta)$. Note that under a generative model, i.e. assuming that the labels are generated by a conditional distribution which depends on the product of the representation and some lower-dimensional predictor, this assumption is satisfied. So the assumption in [DHK+20] is stronger than that in [1]. The reason for this is stated in [DHK+20]: > Unfortunately, as pointed out by Maurer et al. (2016), there exists an example that satisfies the i.i.d. task assumption for which $\Omega(1/\sqrt{T})$ is unavoidable. This means that the i.i.d. assumption alone is not sufficient if we want to take advantage of a large amount of samples per task. ... We replace the i.i.d. assumption over tasks with natural structural conditions on the input distributions and linear predictors. These conditions depict that the target task can be in some sense “covered” by the source tasks, which will further give rise to the desirable guarantees. 
Our task diversity assumption (taken from [TJJ20]) similarly allows us to "improve" upon the rates in prior works in the i.i.d. tasks setting for general loss functions.
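To give some numeric intuition for the $\frac{\sigma_1(W^*)}{\sigma_k(W^*)}=O(1)$ diversity condition quoted from [DHK+20] above, here is a minimal sketch with synthetic matrices (not taken from any of the papers discussed): random Gaussian task predictors spread over all of $\mathbb{R}^k$ and have a small condition-number-style ratio, while nearly collinear predictors do not.

```python
import numpy as np

rng = np.random.default_rng(0)

def diversity_ratio(W):
    """sigma_1(W) / sigma_k(W) for a k x T matrix whose columns are the
    optimal low-dimensional task predictors; O(1) means 'diverse' tasks."""
    s = np.linalg.svd(W, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]

k, T = 5, 100
# "Diverse" tasks: predictors cover all directions in R^k.
W_diverse = rng.standard_normal((k, T))
# "Non-diverse" tasks: all predictors nearly collinear (rank ~ 1).
direction = rng.standard_normal((k, 1))
W_flat = direction @ np.ones((1, T)) + 1e-3 * rng.standard_normal((k, T))

# The diverse matrix has a far smaller ratio than the collinear one.
assert diversity_ratio(W_diverse) < diversity_ratio(W_flat)
```

The collinear case illustrates why diversity is needed: a target predictor orthogonal to the shared direction cannot be "covered" by the source tasks.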
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Partial Matrix Completion
Accept (poster)
Summary: This paper introduces the learning problem of simultaneously estimating an unknown ground truth matrix based on the observed entries and providing a set of weights/confidence scores for each unknown entry with two properties: (1) the RMSE, weighted by the confidence scores, satisfies generalization bounds analogous to the standard results in the classic setting, (2) the confidence scores have sufficient *coverage*, i.e. sufficiently many entries have sufficiently high confidence scores. Thus, in light of the existing distribution-free results, this can be viewed as a targeted joint estimation of the ground truth matrix and the ground truth sampling distribution over entries. The main theorems in this first direction are Theorems 8 and 10: Theorem 8 shows that Algorithm 1, which relies on the intractable optimization problem "MP 3", achieves a sample complexity similar to existing state-of-the-art distribution-free results for both the max norm and the trace norm constraints, with the coverage of the matrix of confidence scores $C$ being at least as large as that of the ground truth sampling distribution. The key aspect of the algorithm is to solve the dual problem involved in MP3, which utilizes the following powerful observation (cf. lines 534 and 227-230): if a matrix of confidence scores $C$ guarantees that any matrix with small empirical $L^2$ norm (over the i.i.d. training sample) also has small weighted $L^2$ norm w.r.t. $C$, then $C$ also guarantees that any matrix with small empirical $L^2$-deviation from a ground truth matrix has small $C$-weighted $L^2$ distance to the ground truth matrix. This simple observation is enough to ensure that the construction of the confidence scores $C$ only depends on the sampled entries, and not on the observed ratings. The optimization problem solved by the algorithm from Theorem 8 is shown to be NP-hard. 
Theorem 10 shows a slightly worse guarantee ($1/\epsilon^4$ instead of $1/\epsilon^2$, due to going back and forth between $L^2$ and $L^1$ losses via Jensen's inequality, as well as novel results) for an improved algorithm which replaces the $L^2$ loss in MP3 by a linear loss function. The proof of the result relies on a remarkable and original result which is proved in the main text via the probabilistic method (cf. Proposition 11). The algorithm is tractable, but the result only applies to the max norm, not the trace norm. In the next part of the paper (i.e., Appendix B, which is naturally part of the main paper and whose own appendix is Appendix C), the authors study the online learning setting, providing sublinear regret bounds for both algorithms in Theorems 18 and 19 (summarized in Corollary 17). Those results are valid under an adversarial sampling regime and apply to regret as defined by the loss function $h_t$ introduced in line 656. The results are more favorable in terms of dependence on the size of the matrix than the results of [1], but not immediately comparable due to subtle differences in the definition of regret. Note that the function $h_t$ is not uniquely defined; rather, it has two possible definitions based on the choice of the function $H(C)$, which lead to different treatments (cf. points 1.a and 1.b in lines 658 to 660). The proof techniques are a mix of methods modified from [1] and [2] and more fully original ones (e.g., the proof of Theorem 23, which is an analog of Proposition 11 in the online context, and the proof of Lemma 25). Finally, the authors empirically evaluate their method on a small-scale semi-synthetic dataset consisting of the $250$ most popular users and items in MovieLens, by comparing the 'confidence level' with the test error. Although it is not explicitly stated, the confidence level most likely refers to $\sum_{x\in\mathcal{X}} C_x$ (the sum of the confidence scores over all entries). 
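For concreteness, the two quantities at the heart of this setting, the coverage $|C| = \sum_x C_x$ and the confidence-weighted squared error appearing in Theorems 8 and 10, can be illustrated on a toy example. The snippet below is purely an illustration of the definitions (the data and names are hypothetical, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 5
M_star = rng.uniform(-1.0, 1.0, (m, n))             # ground-truth matrix, entries in [-1, 1]
M_hat = M_star + 0.1 * rng.standard_normal((m, n))  # some completion with small noise
C = rng.uniform(0.0, 1.0, (m, n))                   # confidence scores C_x in [0, 1]

coverage = C.sum()                                  # |C|: the coverage of the confidence matrix
# (1/|C|) * sum_x C_x * (M_hat_x - M_star_x)^2: the C-weighted squared error
weighted_err = (C * (M_hat - M_star) ** 2).sum() / coverage
```

A good confidence matrix in the paper's sense is then one whose `coverage` is large while `weighted_err` stays below the target accuracy.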
========Post-rebuttal======== The authors have satisfactorily addressed most of my concerns, and I am still convinced that the paper is of very high quality, so I will keep my score. For the benefit of the community, I hope the authors address the issues raised. ========= **References** [1] Elad Hazan, Satyen Kale, and Shai Shalev-Shwartz. "Near-optimal algorithms for online matrix prediction." COLT 2012. [2] Elad Hazan et al. "Introduction to online convex optimization." Foundations and Trends in Optimization, 2016. [3] Nathan Srebro. "Rank, Trace-Norm and Max-Norm." COLT 2005. [4] Prateek Jain, Soumyabrata Pal. "Online Low Rank Matrix Completion." ICLR 2023. [5] Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma, Yuling Yan. "Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization." SIAM Journal on Optimization, 2020. Strengths: This is an outstanding contribution: 1. To the best of my knowledge, the results presented in this paper are **highly original and impactful**. Many of the proofs require significant innovation and are non-trivial. The setting presented in the first part of the paper is an exciting new paradigm, and the observations implicit in the results are very deep. I find it especially fascinating how the function-class restriction on the set of predicted matrices is enough to allow us to construct the matrix $C$ (which is, after all, an estimate of the sampling distribution) without making any explicit assumptions or restrictions on the class to which the sampling distribution belongs. This is presumably because there is some implicit equivalence relation between distributions in terms of how their corresponding generalization gaps are evaluated on ground-truth matrices inside a restricted class. 2. The online learning results are of great theoretical and practical importance and are **highly nontrivial** to prove. 
Weaknesses: Note: Overall, this is still an **extremely interesting and impactful paper** despite the limitations in terms of clarity described below. **Main weaknesses/Summary:** My main problem with the paper is that it is not very self-contained and could be presented better/more polished: **1** There are several minor errors in the supplementary. In particular, to the best of my knowledge, the **proof of Lemma 3** in Appendix A1 is **wrong**: the equation on line 476.5 is incorrect, as it ignores cross-terms. The result is still correct, for different reasons: in the case of the nuclear norm, this is a classic result that can be shown without recourse to the factorized decomposition of the matrix; in the case of the max norm, it follows from the expression of the ball w.r.t. the max norm as the convex envelope of the set of rank-1 matrices (cf. [3]). (*Please fix this*) **2**. The explanations are not always perfectly self-contained, and assumptions are not always well introduced. **2.a** For instance, Theorem 18 requires a condition on the learning rate $\eta$ which is not stated in the theorem. It is clearly needed from applying what the authors call "standard Online Mirror Descent analysis", and it appears in line 763. It wouldn't hurt to add a complete citation and a restatement of the known result with more detailed calculations. (I found one at https://www.cs.cornell.edu/courses/cs6783/2019fa/lec17.pdf for instance.) Note also that it seems that $\|.\|_t^{*}$ is undefined and $\tilde{C}$ doesn't appear in the formula below line 757, which makes it strange that it is introduced when presenting this equation. The same problem is on line **2.b** It is also not immediately apparent whether Corollary 21 (appendix, page 21) requires the sampling distribution to be uniform (as stated in the preamble to the Corollary, lines 702 to 703) or if it is an arbitrary distribution $\mu$, as stated in the first line of the corollary itself. 
**2.c** I found the proof of the NP-hardness somewhat confusing. See questions. **3** It seems like the matrix relative entropy from line 860 is not actually used in the paper (and certainly not in the definition of the regret/loss function $h_t$ in line 656; even in the entropy case 1.a, this is an elementwise entropy). I am not sure, but I think it is used in [1]. A more detailed comparison of the definitions of regret in both places would be nice. **Minor issues/mathematical typos:** 1. In Theorem 8, $\mu_{\max}$ is used in line 517 but is only defined later in Proposition 16 on page 16. 2. In line 65 in the main paper, it should be $\tilde{O}(|U|)$ rather than $\simeq |U|$ since there is definitely a log term involved. 3. (minor) Algorithm 3 is only understandable when considered in combination with the algorithms that define $\mathcal{A}_C$ (line 751, page 24) and $\mathcal{A}_M$ (line 799, page 27). Algorithm 4 (the definition of $\mathcal{A}_C$) itself depends on the quantity $R$ which is only defined later in line 756. In particular, Algorithm 3 requires some hyperparameters such as $\eta$ from Algorithm 5 and $\theta$ from Algorithm 3, making it difficult to read in the order in which it is presented. 4. It seems like the initialization of $\hat{C}$ in line 4 of Algorithm 4 is incorrect (a normalization step is missing to make the matrix belong to the simplex). 5. The notion of Bregman projection and the associated notation $B_R$ is not defined. It would help the reader a lot to finish line 8 with an additional $= \arg\min_{C\in\mathcal{C}'} R(C) - R(\hat{C}_{t+1}) - \langle \nabla R(\hat{C}_{t+1}), C - \hat{C}_{t+1}\rangle$, which probably can be done without adding a line. 6. It seems like the settings 1.a and 1.b are described in the wrong order (swapped) on lines 753 and 754. 7. In the caption of Figure 1, stochastic block models are mentioned in a somewhat tangential way. 
8. It seems like the "matrix completion and recommendation system" part of the related works is missing relevant works, probably most notably [5] and [4]. 9. I think there is a factor of 2 missing in equation (6) (line 300.5). This error is not present (the factor of 2 is there) in the proof of Theorem 23 in line 732. 10. In lines 684 and 744, there are missing references ("see section ?? for an example"). 11. In line 149, I think the expectation should not run over $x\sim\mathcal{X}$ but instead over $x\sim \nu$. 12. There is an equals sign missing in the equation on line 867.5. 13. There is a missing $\leq \epsilon$ at the end of line 207. 14. In line 179, I think the square loss is actually 4-Lipschitz, not 2-Lipschitz, when both of its arguments are in $[-1,1]$. 15. I think there is a factor of $\alpha$ missing in the right-hand side of the inequality in Lemma 25. **Typos and extremely minor issues:** 1. I found the use of "$\Delta$-inequality" to mean triangle inequality in line 764 quite confusing, given that $\nabla$ is a loaded notation: cf. the set $\nabla_{\chi}^\beta$ and, perhaps more confusingly, the matrix relative entropy from line 860 on page 32. 2. Line 755 is not a complete sentence. 4. It would be nice to define "negations" in Lemma 3 (in addition to fixing the proof). 5. There shouldn't be a capital letter at "Let" at the top of page 15. 6. It would be nice to mention again the stability w.r.t. negations in line 534, as the argument is somewhat key. 7. Missing periods in equations (8) and (9) and in line 782. 8. Extra "and" in line 723. 9. An extra "That" at the beginning of the first sentence of the proof of Theorem 23 on page 23 is required to make a sentence. 10. Missing determinant in line 744. 11. Line 764: "approximately same performance", missing "the". 12. Line 788: "assume" should be "assuming". 13. Line 812: "notation convenience" should be "notational convenience". Technical Quality: 3 good Clarity: 3 good Questions for Authors: In decreasing order of importance: **1.** The proof of NP-hardness contains some confusing elements for me. I apologize in advance if this is due to my lack of familiarity with the related results: **1.a** I don't understand why problem (7) is equivalent to problem MP 3. Indeed, I can't see why the PSD condition appears. I have a feeling this comes from the $(\beta,\tau)$-decomposability, but there is definitely an argument missing. **1.b** I also don't understand the first argument in the "soundness" part, which states that we can assume $X$ is rank one w.l.o.g.: why is it the case that the matrix $\tilde{X}$ achieves value $k^2$ if the matrix $X$ does? This is absolutely not obvious to me. I agree with the rest of the proof assuming this is correct. **2** Can you clarify whether the expectation in the equation on line 603.5 (bottom of page 18) refers only to the randomness in the choice of the loss function $h_t$ (which is assumed to be "stochastically i.i.d.")? In that case, shouldn't there also be an expectation on the right-hand side in front of the regrets? 
**2.2** The lengthy discussion at the beginning of Section B, culminating in the equation from line 603.5, seems to mostly justify the validity of the Game-Regret (see line 679) as a measure of performance of the algorithm by linking it to the minimax regret. Why not apply the result from line 603.5 and make a self-contained theorem which applies to the minimax regret in the paper's setting? **3** On lines 786-788, why do you assume $T\geq \tilde{O}(m+n)$ (first) and then $T=\tilde{O}(m+n)$ (also as in the statement of the lemma) rather than $T\geq O(m+n)$? Firstly, the result is only interesting if the condition is a lower bound on $T$ only (I am confident this holds, but it is not stated as such). Secondly, and more importantly, it doesn't seem like the tilde is required since there are no log terms. In particular, if we interpret the tilde notation as possibly involving dividing factors of log terms (since we want a lower bound on $T$, not an upper bound), the second-to-last inequality in the sequence of inequalities in line 788.5 could be incorrect (though the last inequality with $\tilde{O}(\alpha \sqrt{T})$ remains correct, where the log terms are $\log(mn)$). **4** Could you include a more detailed comparison to the existing results in [1] and [4]? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: 1. The experiments section is very preliminary. No attempt is made to evaluate the online learning setting or to compare with other baselines. This is also an extremely small artificial dataset most likely constructed to be able to run experiments very fast on a laptop. 2. See "weaknesses". Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your very detailed review of our work! Weaknesses: 1. Lemma 3 in Appendix A1: Thank you for pointing out the mistake. We will change this. In fact, the convexity of the constraint set follows directly from the (non-trivial) fact that the max norm is a well-defined norm. 2. The explanations are not always perfectly self-contained and assumptions are not always well introduced. 2.a: Thank you for pointing this out. Yes, we should remove the $\tilde{C}$. 2.b: Thank you for pointing this out. There is a mistake in the preamble. Corollary 21 holds for arbitrary distributions. We will change the writing accordingly. 3. Thank you for your suggestions. We will improve our writing. Matrix entropy is used in Algorithm 5. We will fix all minor issues and typos accordingly. Thank you very much for your very detailed feedback. Questions: 1.a I don't understand why problem (7) is equivalent to problem MP 3. Indeed, I can't see why the PSD condition appears. I have a feeling this comes from the $(\beta, \tau)$-decomposability, but there is definitely an argument missing. **Response**: Indeed it is not immediately equivalent, and there are subtleties here. Thanks for pointing this out. We have made changes to the appendix accordingly. In fact, we could show that with added symmetric PSD condition, the problem is NP-hard. We include this only as an indication of computational hardness in a special case. We will clarify this in the appendix of the final version. 1.b I also don't understand the first argument in the "soundness" part, which states that we can assume $X$ is rank one w.l.o.g.: why is it the case that the matrix $\tilde{X}$ achieves value $k^2$ if the matrix $X$ does? This is absolutely not obvious to me. I agree with the rest of the proof assuming this is correct. **Response**: The proof shows that, given a solution, we can construct a rank-1 solution with the claimed objective value, which is sufficient for our proof. 
The construction is simply taking the rank-1 matrix $ww^T$ for $w_i = \sqrt{X_{ii}}$. The objective value of this rank-1 matrix is at least the objective value of $X$, since $X_{ii}X_{jj}\geq X_{ij}^2$ for symmetric PSD $X$. 2. Can you clarify whether the expectation in the equation on line 603.5 (bottom of page 18) refers only to the randomness in the choice of the loss function $h_t$ (which is assumed to be "stochastically i.i.d.")? In that case, shouldn't there also be an expectation on the right-hand side in front of the regrets? **Response**: Indeed. Here the regret is the expected regret. We will make sure to clarify this. 2.2 The lengthy discussion at the beginning of Section B culminating in the equation from line 603.5 seems to mostly justify the validity of the Game-Regret (see line 679) as a measure of performance of the algorithm by linking it to the minimax regret. Why not apply the result from line 603.5 and make a self-contained theorem which applies to the minimax regret in the paper's setting? **Response**: Line 603.5 holds for bilinear shared objectives. The shared objective function in the online PMC setting is not bilinear. However, it still motivates the use of Game-Regret as a measure of performance (as also indicated by Corollary 21). 3. On lines 786-788, why do you assume $T\geq \tilde{O}(m+n)$ (first) and then $T= \tilde{O}(m+n)$ (also as in the statement of the lemma) rather than $T\geq O(m+n)$? Firstly, the result is only interesting if the condition is a lower bound on $T$ only (I am confident this holds, but it is not stated as such). Secondly, and more importantly, it doesn't seem like the tilde is required since there are no log terms. 
In particular, if we interpret the tilde notation as possibly involving dividing factors of log terms (since we want a lower bound on $T$, not an upper bound), the second-to-last inequality in the sequence of inequalities in line 788.5 could be incorrect (though the last inequality with $\tilde{O}(\alpha\sqrt{T})$ remains correct, where the log terms are $\log(mn)$). **Response**: Thanks for pointing this out. First of all, the tilde is not needed, and we are upper bounding $T$ here. The more accurate statement is that the LHS is bounded by $Te^{-\beta}mn$. The range of parameters we are interested in is $T \sim \mathrm{poly}(mn)$. We can therefore choose the constants in $\beta$ as we need. We will clarify this in the later version. All we are trying to say is that by restricting to the constrained simplex, we do not lose much in the optimal point of the objective. For every $t$, the objective functions of the two $C$'s differ by order of magnitude $1/mn$. When summing over $T$ iterations, the difference between the two optima is bounded by $\alpha T/mn$ (constant omitted). Since to achieve $\epsilon$-error in regret we only need $T$ to be linear (up to logarithmic terms) in the dimension $m+n$, this difference is small. In fact, since our regret is dominated by $\sqrt{T}$, we might only need this sum of differences to be bounded by $\sqrt{T}$, i.e., $T/mn \leq \sqrt{T}$, which is clearly true, as we at most need to see all $mn$ entries to complete the matrix. 4. Could you include a more detailed comparison to the existing results in [1] and [4]? **Response**: We are happy to do it in the full version. The main difference is that we are tackling a partial completion problem, whereas these papers consider full completion. However, we make extensive use of their techniques, especially in the online algorithm and its analysis. Notice, however, that we have a different primal-dual definition of regret. We are happy to elaborate more in the full version. 
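The rank-1 construction discussed in the rebuttal above can be sanity-checked numerically: for a symmetric PSD $X$, every $2\times 2$ principal minor is nonnegative, so $X_{ii}X_{jj} \geq X_{ij}^2$, and hence $ww^T$ with $w_i = \sqrt{X_{ii}}$ dominates $X$ entrywise in absolute value. A minimal check of this inequality (illustrative only, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
X = A @ A.T                       # a random symmetric PSD matrix
w = np.sqrt(np.diag(X))           # w_i = sqrt(X_ii)
W = np.outer(w, w)                # the rank-1 matrix w w^T

# For PSD X, the 2x2 principal minors give X_ii * X_jj - X_ij^2 >= 0,
# i.e., |X_ij| <= sqrt(X_ii) * sqrt(X_jj) = W_ij entrywise.
assert np.all(W + 1e-9 >= np.abs(X))
```

The small tolerance only absorbs floating-point rounding; the inequality itself is exact (it is Cauchy-Schwarz applied to the factorization $X = A A^T$).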
--- Rebuttal Comment 1.1: Title: Thanks + follow-up Comment: Many thanks for the clarifications. Thank you, in particular, for agreeing to write a correct proof of Lemma 3 and to fix the minor issues in Lemma 25. Also, thanks for clarifying my doubts regarding the first argument in the soundness part. That was my bad; it was reasonably understandable in the first draft, actually. On the other hand, there are two points where your answer is satisfactory but I would really like you to include a better explanation in the paper to avoid any confusion. Those are: 1. The fact that MP3 is not, in fact, equivalent to equation (7). In addition, could you also clarify (in this rebuttal and in the paper) exactly what you mean by "we could show that with added symmetric PSD condition, the problem is NP-hard"? Do you mean that simply adding a symmetric PSD condition makes the problems equivalent? If so, why? 2. The fact that, since line 603.5 only holds for bilinear objectives, the section on the definition of the game-regret should be taken as a motivation but cannot be used to derive a bound on the minimax regret (since, in the current form, it may seem that you are claiming a minimax regret bound). Other than the concerns above, I am still convinced of the very high quality of this paper. Congratulations!
Summary: Typical matrix completion methods aim to recover the whole matrix, based on strong conditions on the matrix itself as well as on the sampling distribution. In contrast, this paper proposes a method to complete a subset of the entries with high confidence, which can bypass the need for these conditions. More specifically, the proposed approach builds on top of existing matrix completion methods and identifies which completed entries can be recovered well. A computationally efficient algorithm (as well as an inefficient one) is proposed. Corresponding theoretical guarantees are also provided. Strengths: - This paper studies an important and interesting problem, as certain necessary conditions (on matrix structure and sampling distribution) required for matrix recovery are not always guaranteed to hold. In such ill-posed settings, a partial guarantee is useful. I am not aware of any prior work on partial matrix completion, so it is a novel contribution to this area. - This work provides strong theoretical guarantees for the proposed methods. Weaknesses: The authors define the "Partial Matrix Completion Problem" (Line 223) as finding a specific $C$, but the theoretical guarantee does not seem to align directly. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Corollary 2 (Section 1.1): is this $|C|$ the cardinality or the sum as defined later in Section 2.3? - The explanation after "The Partial Matrix Completion Problem" is unclear. In line 227, what is $C$? Based on the explanation, I think we can guarantee that for $C=\mu$, but I am not sure if it works for all $C$. - The authors specify "the partial matrix completion problem" in line 223 as finding a *specific* $C$. I am curious whether the proposed algorithms have any guarantee about finding such a $C$ as defined in 223. Note that the theoretical results (Theorems 8 and 10) provide guarantees in terms of $\frac{1}{|C|}\sum_{x\in\mathcal{X}}C_x(\hat{M}_x - M_x^\star)^2$. 
This type of guarantee looks good to me. I just want to see the relationship with the *target* as specified in "The Partial Matrix Completion Problem". - How does misspecification of the rank (in the MC step) affect the performance of the proposed method? - How are the tuning parameters chosen in practice? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors do not provide an explicit discussion of the limitations of their work. I think more extensive numerical experiments would improve this work. I am particularly interested in how the proposed methods perform under different sampling distributions (ranging from some easy settings where completion of the whole matrix is possible, to ill-posed settings). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time and effort to review our work! Here are the responses to your concerns: Weaknesses: 1. The authors define the "Partial Matrix Completion Problem" (Line 223) as finding a specific $C$, but the theoretical guarantee does not seem to align directly. **Response**: Our definition does not say that there is a unique $C$ in particular which satisfies the problem. Hence, we find such a $C$ which is sufficiently good. Although our result is not strictly maximizing $|C|$, we provide a meaningful lower-bound guarantee. Later, in the online section, we even give an agnostic guarantee: we compete with the best $C$, even though such an (optimal) $C$ may have high loss. This is common in online and agnostic learning. Questions: 1. Corollary 2 (Section 1.1): is this $|C|$ the cardinality or the sum as defined later in Section 2.3? **Response**: Note that in Section 1.1 these two are the same, as $C\in \{0,1\}^{m\times n}$. We will make sure to clarify this. 2. The explanation after "The Partial Matrix Completion Problem" is unclear. In line 227, what is $C$? Based on the explanation, I think we can guarantee that for $C=\mu$, but I am not sure if it works for all $C$. **Response**: $C$ in this case is the solution to problem (2) in line 223. The guarantees are spelled out in Theorems 8 and 10. This is just a summary of the result. We will make sure to clarify this. 3. The authors specify "the partial matrix completion problem" in line 223 as finding a specific $C$. I am curious whether the proposed algorithms have any guarantee about finding such a $C$ as defined in 223. Note that the theoretical results (Theorems 8 and 10) provide guarantees in terms of $\frac{1}{|C|} \sum_{x \in {\cal X} } C_x (\hat{M}_{x} - M^*_{x})^2$. This type of guarantee looks good to me. I just want to see the relationship with the target as specified in "The Partial Matrix Completion Problem". 
**Response**: Note that $\frac{1}{|C|} \sum_{x \in {\cal X}} C_x (\hat{M}_{x}-M^*_{x})^2$ is exactly the constraint in line 223, defined in line 218. Yet it is interesting to explore other types of guarantees. 4. How does misspecification of the rank (in the MC step) affect the performance of the proposed method? **Response**: While we haven't explored this, our conjecture is that the theoretical guarantee should translate in the natural way (scaling with the rank/complexity measure in the guarantee). In practice, we have not explored this yet. It would be interesting to explore this direction. 5. How are the tuning parameters chosen in practice? **Response**: Possible tuning methods include a hyperparameter sweep, and it is possible that more sophisticated methods can help, such as hypergradient descent and meta-optimization. Limitations: The authors do not provide an explicit discussion of the limitations of their work. I think more extensive numerical experiments would improve this work. I am particularly interested in how the proposed methods perform under different sampling distributions (ranging from some easy settings where completion of the whole matrix is possible, to ill-posed settings). **Response**: We agree and thank the reviewer for this suggestion; we will attempt to do this by the final version of the paper (or in future work, if that is needed). --- Rebuttal Comment 1.1: Comment: Thank you for your response. Thanks for pointing out that the $C$ defined by the "Partial Matrix Completion Problem" (Line 223) might not be unique. My original comment focused on the fact that the target in Line 223 is defined as the set of $C$ that **maximize** $|C|$ under the constraint. So I think that one natural measure of convergence would be based on some distance to this set (of $C$ that maximize $|C|$ under the constraint). But I agree that the results obtained in this work are meaningful.
Summary: This paper considers a twist on the standard matrix completion problem, where one is required to complete only a subset of entries (not the whole matrix) that includes the entries shown. This allows them to consider substantially more observation patterns, unlike the standard missing-at-random setting. In this context the paper makes the following contributions: 1. It proposes a computationally inefficient algorithm that, with high probability, recovers a subset of entries that is at least as large as the revealed set, to within the target accuracy. 2. It develops a computationally efficient relaxation of this algorithm that has worse statistical dependence on the target accuracy than the inefficient algorithm. 3. It provides an online, iterative/gradient-based variant of the algorithm that applies to adversarial online matrix completion. Strengths: In my view the main strengths of the paper are: 1. Posing an interesting partial matrix completion setting. 2. Leveraging existing matrix completion work and separating the problem of matrix completion from that of obtaining good (possibly fractional) coverage. 3. Generalizing to the online and adversarial setting. Weaknesses: Some weaknesses, though these are better understood as interesting avenues for future work: 1. The proposed algorithms provide noisy completion even when the observations are noiseless (i.e., no obvious notion of completing down to the inherent noise level). This does seem inherent to the algorithm/proof techniques used here; it is unclear to me whether this can be overcome. 2. There is an obvious sample complexity gap between the efficient and inefficient algorithms. Is this inherent, or a result of the proof technique here? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. The algorithmic viewpoint here is to separate the coverage computation from the completion method. This is advantageous in some ways (notably the generality and, e.g., obtaining an essentially free proof of the risk), but potentially disallows using structure in the completion, e.g., in the causal inference setting of (say) row $1$ being partially revealed from columns $1, 2, \ldots, K \leq \dim(M)$. 2. What is $\mu_{\max}$? It is defined in the appendix (and easy to guess) but is not in the text (unless I missed it). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: No significant limitations, addressed previously. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time and effort to read and review our work! Here are our responses to your concerns: Weaknesses: 1. The proposed algorithms provide noisy completion even when the observations are noiseless (i.e., no obvious notion of completing down to the inherent noise level). This does seem inherent to the algorithm/proof techniques used here; it is unclear to me whether this can be overcome. **Response**: This is correct. It is currently inherent to our algorithm and proof techniques. It is also interesting to us to explore whether we can have an algorithm that completes missing entries without noise when the observations are noiseless. 2. There is an obvious sample complexity gap between the efficient and inefficient algorithms. Is this inherent, or a result of the proof technique here? **Response**: Great point, yes: we conjecture that the computational hardness of the nonconvex formulation is inherent, and that it may be computationally hard to get the optimal statistical complexity. Such results are known in, for example, sparse recovery (LASSO), and it would be very interesting to research this direction further. In general, yes, there is a lot more investigation to be done here, and these are good points. We believe we have only initiated research in this direction. We'll add it to the future work section. Questions: 1. The algorithmic viewpoint here is to separate the coverage computation from the completion method. This is advantageous in some ways (notably the generality and, e.g., obtaining an essentially free proof of the risk), but potentially disallows using structure in the completion, e.g., in the causal inference setting of (say) row $1$ being partially revealed from columns $1,2,\dots,K\leq\dim(M)$. 
**Response**: Note that the offline algorithms do separate the coverage computation from the completion method, but the online algorithm does not separate them as cleanly, since it computes a version space for the completion online. Thus, in effect, we are showing that both can be done. 2. What is $\mu_{\max}$? It is defined in the appendix (and easy to guess) but is not in the text (unless I missed it). **Response**: Yes, $\mu_{\max}$ is the maximum over all entries of $\mu$. We will add this definition earlier. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I'll first thank the authors for their response to my and the other reviewers' comments. They have addressed my concerns. Hope to see the paper published soon.
Summary: The authors discuss partial matrix completion and present an algorithm for completing a large subset of the entries with high confidence. They also aim to find the number of entries that can be completed (coverage) with small error when the samples come from an unknown sampling distribution. Strengths: - Identifying the entries which have high confidence without the incoherence assumption is an important problem, and the problem is described well by the authors, including the relevant citations. - Two alternative methods are presented: an inefficient (but statistically stronger) algorithm, as well as an efficient one (with worse sample-complexity bounds), for finding the optimal coverage for partial matrix completion. Their limitations and disadvantages are discussed honestly (e.g., the inefficient algorithm requires optimizing a quadratic objective, which is also shown to be NP-hard). Weaknesses: - The online formulation of "partial matrix completion" is said to be one of the main contributions of the paper; however, it is only discussed in the appendix. It would be nice to see the main contribution in the main paper. - Experimental support is significantly lacking. The experiments can be enriched to support the claims of the authors; further baselines are missing. Time analysis is missing. Performance could be highlighted and leveraged more. - The guarantees are discussed before introducing the algorithms. - The definition of "partial" could have been emphasized more, since the paper is directly built on that term. Minor: - The figures are not numbered and not addressed in detail. - The organization of the paper could be further adjusted. Some terms are used before they are defined, and the styling is not appropriate, which makes it difficult to follow the paper, e.g., Algorithm 1, "FullComp", or the styling of the function (3). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How did the authors select the data? 
How did they decide which users and which items to consider? Which exact MovieLens dataset is used? There are multiple versions of the same dataset. Some further clarification would be appreciated. 2. What are some practical domains in which the presented algorithms can be useful? How is the time complexity reflected in the larger experimental settings? How much time does it take for your algorithm to run to find the coverage? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No direct negative societal impact exists. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for taking the time to read and review our paper. We note that despite the score you gave us, your comments are generally positive, and you recognize the contribution and soundness of our work. Out of the four weaknesses you pointed out, three seem to be stylistic issues, and we are happy to try to improve the presentation based on your suggestions. We will address your concerns, and hopefully these responses will give you a chance to reevaluate our work! Weaknesses: Major: 1. The online formulation of "partial matrix completion" is said to be one of the main contributions of the paper; however, it is only discussed in the appendix. It would be nice to see the main contribution in the main paper. **Response**: If the paper is accepted, we will have an additional page for the camera-ready version, which we will use to discuss the online version in the paper body. 2. Experimental support is missing significantly. The experiments can be enriched to support the claims of the authors, further baselines are missing. Time analysis is missing. Performance could be highlighted and leveraged more. **Response**: Our contribution is mainly conceptual and theoretical, and therefore the experimental section was added to elucidate and support the framework and theory, rather than to give an extensive evaluation or comparison as in applied papers. Significantly more experiments would thus be out of scope, but we are very happy to compare to other methods (we don’t know of any, since it’s a new framework; please refer us to anything you have in mind), or to add more experiments that would help explain the framework; could you please suggest some? We want to emphasize that there exists no baseline we can compare to, as partial matrix completion is a completely new framework, which is part of the novelty of our work. 
This is also why we use a standard matrix completion package to compare the squared error of that package with the test values with respect to our obtained confidence matrix. We are not sure what you mean by time analysis. Can you elaborate on an experiment that would help the reader understand the main concepts we introduce? We would be happy to add any other evaluation, time permitting (space limitations would mean this would go in the appendix in any case, since we have so many other contributions in the paper). 3. The guarantees are discussed before introducing the algorithms. **Response**: Thank you for the suggestion. We chose to qualitatively discuss the guarantees first because we felt it was more intuitive. It is generally common to present the guarantees of an algorithm before its implementation details; however, if you feel that the inputs or the setting are not clear early enough, please let us know. 4. The definition of "Partial" could have been emphasized more, since the paper is directly built on that term. **Response**: Thank you for the suggestion. We have added a sentence emphasizing this in the introduction. **Response to minor weaknesses**: Thank you for your suggestions. We will make changes according to your suggestions. Questions: 1. How did the authors select the data? How did they decide which users and which items to consider? Which exact MovieLens dataset is used? There are multiple versions of the same dataset. Some further clarification would be appreciated. **Response**: We use the MovieLens 100K dataset. We use a 250x250 submatrix from the 943x1682 matrix. We chose to select the MovieLens dataset because it is a classic dataset used for matrix completion problems. We use a simplified version of the ODD as outlined in Appendix D. 2. What are some practical domains in which the presented algorithms can be useful? How is the time complexity reflected in the larger experimental settings? 
How much time does it take for your algorithm to run to find the coverage? **Response**: While our results are mainly theoretical, we believe they have meaningful applications as well. Specifically, the online algorithm is designed to be efficient and can be used to evaluate to what extent entries can reliably be completed in a matrix, which is an important problem in any scenario where decisions should only be made if there is high confidence. For example, in datasets which pertain to hospital patients and their features, one would only want to make predictions for patients when those predictions can be made with high confidence. Even in low-stakes settings, such as movie recommendations, companies such as Netflix already seem to abstain from making predictions on some movies today, as discussed in the paper. This is only a first step, and we are hoping later papers will find even more applications. In light of our changes and rebuttal, would you consider adjusting your score? We are more than happy to answer any further questions. --- Rebuttal Comment 1.1: Comment: My main concern was about the experimental support provided in the paper. After carefully reading the other reviewers' comments as well as the authors' rebuttal comments, I changed my evaluation to borderline acceptance. I agree with the points mentioned by the reviewer wLHP as weaknesses. Thanks.
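The submatrix-selection step mentioned in this rebuttal (a 250x250 submatrix of the 943x1682 MovieLens 100K rating matrix) could be sketched roughly as follows. The criterion shown here, keeping the most-rated users and items, is an assumption for illustration (the authors' exact procedure may differ), and the function name is invented:

```python
import numpy as np

def dense_submatrix(ratings, n_users=250, n_items=250):
    """ratings: (num_ratings, 3) array of (user_id, item_id, rating) triples.
    Keeps the n_users / n_items with the most observed ratings and returns
    the corresponding rating submatrix, with 0 marking unobserved entries."""
    users = ratings[:, 0].astype(int)
    items = ratings[:, 1].astype(int)
    # ids sorted by how many ratings they appear in, most-rated first
    top_u = np.argsort(np.bincount(users))[::-1][:n_users]
    top_i = np.argsort(np.bincount(items))[::-1][:n_items]
    u_pos = {u: p for p, u in enumerate(top_u)}
    i_pos = {i: p for p, i in enumerate(top_i)}
    M = np.zeros((n_users, n_items))
    for u, i, r in ratings:
        u, i = int(u), int(i)
        if u in u_pos and i in i_pos:
            M[u_pos[u], i_pos[i]] = r
    return M
```

On the real data one would load the MovieLens triples from file and call `dense_submatrix(triples, 250, 250)`; the zeros of the result mark the unobserved entries whose coverage is at issue.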
NeurIPS_2023_submissions_huggingface
2023
Interpretable factorization of clinical questionnaires to identify latent factors of psychopathology
Reject
Summary: This paper proposes a new technique to extract latent factors from psychological questionnaires. The method improves on previous methods both in terms of its ability to handle more flexible inputs (i.e., missing values and confounding variables) and in the interpretability of its outputs (i.e., the scale of its loadings is in the range of the original questionnaire). The proposed method is formulated as an optimization problem and an algorithm is provided to solve the problem and also to automatically determine an appropriate number of latent factors. Many experiments are provided that answer diverse sets of questions using both synthetic and real-world datasets. Strengths: 1. The paper effectively formalizes psychology models as a solvable optimization problem that possesses better characteristics than competing methods. 2. The experiments are varied and show the usefulness across many scenarios: both when the true answer is known (synthetic data) and when it isn't. 3. Extensive comparison to existing techniques commonly used by researchers in the field of psychology. 4. They provide a Python implementation of their algorithm. Weaknesses: 1. I realize space is tight, but many of the figures are incredibly small. Could some of the figures be moved to an appendix? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Could the intuition behind the ICQF regularization method be explained more? The abstract talks about regularization specifically made for questionnaire data but I didn't see this explained beyond the function $R$ being defined. Were other regularization methods evaluated? How was this one chosen? 2. How does the complexity of the optimization problem scale with respect to $n$ participants or $m$ questions? 3. The paper is high-quality, and addresses a real problem in research, but I'm not entirely convinced NeurIPS is the best venue. 
Could you provide a small one to two sentence argument for why you think NeurIPS is an appropriate venue? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The paper effectively addresses common limitations with this kind of work by including a diverse set of experimental problems. If I were to give any feedback I'd say it might be worth discussing the inherent challenge in latent variable discovery. This would not be a weakness of your work but simply a challenge of the problem in general that you effectively addressed in the structure and design of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weaknesses > many of the figures are incredibly small. Could some of the figures be moved to an appendix? We thank the reviewer for pointing this out, and apologize for the incredibly small figure ticks and text due to the page limit. To address this, we: - removed both the $x$- and $y$-axis tick labels in Figure 1, as this figure only shows the general sample distribution. The full version, together with an example of a reconstructed result, is added in the supplementary section. - removed the $x$-axis tick labels in Figure 2 and included a full version (rotated and enlarged) in the supplementary section. - increased the font size of Figure 3. ### Questions > Could the intuition behind the ICQF regularization method be explained more? Thank you for bringing up this question. We agree that providing a more explicit explanation of the intuition behind the multiple constraints in our model is essential. These constraints were carefully defined in collaboration with our clinical partners to formalize objective and subjective characteristics that would make the factorization interpretable to them. We have attempted to provide a more detailed description of our motivation in this regard in the common response, as multiple reviewers asked about this. We ask the reviewer to please refer to this, and hope that the response will be satisfactory. > How does the complexity of the optimization problem scale with respect to $n$ participants or $m$ questions? Theoretically, ADMM exhibits linear convergence, but in practice it often performs even faster. Typically, most experiments reported require only 50 iterations to reach the desired error tolerance. Within each iteration, we utilize the FISTA algorithm, which attains an $O(1/k^2)$ convergence rate, to solve subproblems 1 and 2. Subproblem 3 has a closed-form solution. 
Given that ADMM doesn't require every subproblem to reach the exact global minimum at each step, we set an upper bound of $K=20$ on the maximum number of iterations in FISTA. As a result, the overall complexity can be expressed in big O notation as $O(mnK/\epsilon)$, where $\epsilon$ represents the pre-defined error tolerance. It's worth noting that the FISTA algorithm can be parallelized row-wise, which can further enhance computational efficiency. Alternatively, we can opt to replace the FISTA algorithm with coordinate descent, especially for machines with few cores, as it would be more efficient in such cases. > Could you provide a small one to two sentence argument for why you think NeurIPS is an appropriate venue? Certainly! The aim of our research group is to develop machine learning methods to enable better scientific practice in psychology, psychiatry, and neuroscience. The technical characteristics of our factorization were developed in response to specific requests of our clinical collaborators; at the same time, we believe they are complex enough -- and novel enough, in combination -- to warrant publication in a machine learning venue. This is why we believe this work fits squarely within the "Machine learning for sciences (e.g. climate, health, life sciences, physics, social sciences)" topic in the call for papers. --- Rebuttal Comment 1.1: Comment: I remain positive about the paper. On its face, the work appears to be very valuable for the author's target domain (i.e., factor analysis of psychology questionnaires). I do admit, however, that I am not incredibly versed in the most recent literature for this domain. When I do require it, my own work uses traditional PCA and Factor Analysis to analyze questionnaires. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their kind words. To our knowledge, the method of choice by default is still factor analysis, with some form of rotation. 
We have, on occasion, encountered uses of nonnegative matrix factorization, but without any of the additional constraints.
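For readers unfamiliar with the inner solver referenced in the complexity discussion above, the following is a textbook FISTA sketch for a single nonnegative least-squares subproblem, using projection onto the nonnegative orthant as the proximal step. It is an illustrative reconstruction, not the authors' implementation, and the function name is invented:

```python
import numpy as np

def fista_nnls(A, b, iters=2000):
    """Textbook FISTA for min_x 0.5 * ||A x - b||^2  s.t.  x >= 0.
    The proximal step is projection onto the nonnegative orthant."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = np.maximum(y - grad / L, 0.0)          # projected gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

Inside an ADMM outer loop, such a solver would typically be run for only a small, fixed number of iterations per subproblem, consistent with the cap of $K=20$ mentioned in the rebuttal.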
Summary: This paper proposes a matrix factorization formulation in order to extract latent features from questionnaires for psychopathology. The proposed formulation is very similar to the well-known Non-Negative Matrix Factorization (NMF), with an $\ell_1$-penalty on the dictionary and activation matrices. The biggest difference is that they include some fixed dictionary vectors in the dictionary matrix. Due to some domain-related constraints, they use the Alternating Direction Method of Multipliers (ADMM) to impose the constraints. The experimental results claim that the resulting factorization is interpretable. They also claim that the resulting factorization better preserves the clinical classification when compared to other factorization schemes such as $\ell_1$-NMF and Factor Analysis. Finally, the authors also claim that the proposed method better preserves the correlation between the activation matrix obtained from the full data and the activation matrix obtained from a subset of the data. Strengths: - The method seems appropriate for obtaining latent factors from questionnaire data. - The method is lightweight. Weaknesses: - I am not really convinced that the proposed method is novel. It is simply an $\ell_1$-constrained Non-Negative Matrix Factorization method. It is true that they add known variables inside the NMF dictionary, and impose additional value constraints, but in my opinion this does not seem like a novel method to me. Also, I do not think that the presented results present any novelty. - The manuscript is hard to follow at times. For instance, I tried to understand how the authors argue that the proposed factorization is interpretable, but the arguments put forth in Section 4.2.3 to explain Figure 2 remain difficult for me to understand. 
For instance, the authors write that `While there were factors that loaded primarily in questions from one subscale, as expected, we were encouraged by finding others that grouped questions from multiple subscales, in ways that were deemed sensible co-occurrences by our clinical collaborators`. This sentence is hard for me to understand. I understand that maybe the authors' clinical collaborators might verbally confirm that these findings are sensible, but it would have really helped if the authors could do a user study to more quantitatively argue that their proposed method is superior compared to the other methods that they have compared against. - In my opinion, Figure 2 is critical to motivate the importance of your proposed method. It seems like there are some patterns with respect to different classes of psychopathologies, but I do not clearly understand what the take-away message is here. The alternative method also seems to retain specific patterns for each class. - You are comparing against two other linear factorization methods. I think it would have been better to show that your method has advantages compared to deep neural networks (e.g., an autoencoder with several layers). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Why did you limit the comparisons to linear factorization models? It is possible to construct a deep autoencoder and compare with the latent factors found that way. Given that this is the age of deep neural networks, one unavoidably thinks about using them. - I am not sure what the motivation is behind the experiment in Section 4.2.5. More specifically, I am not sure why reducing the number of subjects and measuring the correlation with the full data matrix is a good way overall to measure the factorization quality. - I am not sure if I missed this in the manuscript, but did you try to understand what the factors F-1, F-2, ..., F-8 correspond to? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors do not discuss negative societal impacts of their work, but I think this is okay, given that this work does not really pose dangerous implications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weaknesses > The manuscript is hard to follow at times ... I understand that maybe the authors' clinical collaborators might verbally confirm that these findings are sensible, but it would have really helped if the authors could do a user study to more quantitatively argue that their proposed method is superior compared to the other methods that they have compared against. We apologize for the brevity of our writing, a result of page limitations, which might render the context less comprehensible, particularly for readers without a background in psychopathology. In the common response, we have provided a more detailed justification of why the constraints in our factorization promote "interpretability" as defined by our clinical collaborators. We ask the reviewer to please refer to that. The specific quoted sentence was meant to clarify that subscales, which are manual groupings of question items used in clinical practice, are used merely as a qualitative reference for evaluating the factorization. While subscales are undoubtedly useful, an active research question is whether they correspond to different underlying causes for the observed problems; alternatively, it's possible that the same cause leads to correlated answers across questions belonging to multiple subscales. Moreover, subscales cannot provide an indication of relative importance of different questions, which is also of interest to researchers. Factorizing a questionnaire matrix is a data-driven way of approaching this question, by seeing whether a given factor has loadings over questions in multiple subscales. As other reviewers had a similar question, we provide a detailed description of the ways in which the result of our method appears preferable to a clinical collaborator, versus that of factor analysis, in the common response section "Subjective evaluation". 
In the section "Motivation of the method", we also provide a more detailed explanation of how the constraints promote interpretability, as requested by our clinical collaborators. We agree with the reviewer that it would be ideal to have a comprehensive quantitative evaluation rooted in user studies involving clinical researchers. As it happens, this is something that is currently underway. This study encompasses not only the CBCL questionnaire in the HBN dataset, but also 21 selected questionnaires deemed pertinent to psychopathology. Our preliminary findings indicate that, across almost all questionnaires, the proposed methods yield more interpretable factor weights, which accurately convey the relative significance of different questions in terms of how informative they are for diagnosis. > It seems like there are some patterns with respect to different classes of psychopathologies, but I do not clearly understand what is the take away message here. The alternative method also seems to retain specific patterns for each class. Given our meticulous selection of baseline comparisons specific to our application's objectives, we expected to observe some patterns aligning with different classes of psychopathologies with either method. Questionnaires are designed assuming the existence of latent variables that combine linearly to produce observed answers. The primary distinction of our method lies in making interpretation of the resulting solution easier, as described in "Motivation of the Method" in the common response. As above, we refer the reviewer to the "Subjective evaluation" section for an illustration of how a clinician might contrast the solutions provided by both methods. ### Questions > I am not sure why reducing the number of subjects and measuring the correlation with the full data matrix is a good way overall to measure the factorization quality. 
The primary objective of conducting this experiment was to simulate scenarios where the data size is small, a common occurrence in many psychology and psychiatry studies. Through this experiment, we sought to empirically demonstrate the robustness of our model to varying sample sizes. We observed that our model consistently maintains a certain level of latent factor interpretability, which we quantified by measuring its correlation with the latent factors discovered using the full data set. This consistent performance suggests that our model can be extended to scenarios where studies are conducted in different populations with similar sample distributions. Moreover, it also suggests that the regularization induced by our constraints matches the characteristics of the domain. > did you try to understand what do the factors F-1, F-2 ... , F-8 correspond to? Yes, our collaborators, who are experts in psychopathology, carefully examined each factor in detail, and found it easier to ascribe meaning relative to factors extracted by factor analyses. We refer the reviewer to section "Subjective evaluation" of the common response for more details. We apologize for not including the naming of factors assigned by domain experts due to page limitations. We have added the following table in Appendix in the modified manuscript. | Factor | Theme | |---|---| | CBCL Factor-1 | irritability and oppositionality | | CBCL Factor-2 | anxiety | | CBCL Factor-3 | inattention and hyperactivity | | CBCL Factor-4 | cognitive problems, dissociality and callousness | | CBCL Factor-5 | cognitive + fine motor problems | | CBCL Factor-6 | body-focused repetitive behaviors | | CBCL Factor-7 | somatic problems | | CBCL Factor-8 | body-focused repetitive behaviors |
Summary: The paper proposes a non-negative matrix factorization with a customized regularization term to identify interpretable latent factors from psychopathological questionnaires. The input data is represented in a matrix and a non-negative matrix factorization algorithm is applied to the input matrix. The factor matrices are bounded to be between 0 and 1, providing some interpretation of the presence or absence of the corresponding factor. Strengths: The proposed method is overall sound. The optimization problem formulated and the regularization terms incorporated could well serve the desired purposes. The application of ADMM to solve the optimization problem is also reasonable. Good experimental results are shown using multiple datasets. Weaknesses: - The technical contribution is somewhat limited. Non-negative matrix factorization has been extensively studied for decades and widely applied to various applications, including questionnaire data analysis. ADMM is also a classic framework to solve matrix factorization problems with constraints. It seems to me that the optimization procedures described in Eq. (1-5) are standard for the ADMM algorithm and the convergence follows directly from the ADMM properties. - The baselines used in the paper are very classic ones: $\ell_1$-NMF was developed in 2009 and FA-promax was developed in 1964. More recent methods for questionnaire data analysis should be compared. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the Weaknesses sections. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are not discussed in the manuscript. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weaknesses > The technical contribution is somewhat limited. > More recent methods for questionnaire data analysis should be compared. We ask the reviewer to please refer to the common response section for a detailed answer. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: Thank the authors for the response. I have carefully gone through them. Although I believe that the problem is well motivated, I respectfully disagree with the argument that there is no existing work that could incorporate the desired constraints such as probabilistic loading factors and handling missing data. In fact, there have been abundant papers in the past decades that incorporate those constraints/properties into NMF/PCA and apply them to different domains. Therefore, I will keep my rating unchanged. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for taking the time to read through our responses. If we could impose further, we would appreciate pointers to the specific work that you mention, as we were not able to find any method with our combination of constraints, despite thorough searching (as we hope the related work section would attest).
Summary: This paper presents an algorithm for factorizing matrices with multiple constraints desired in the study of psychiatric disorders using clinical questionnaires. These constraints include those that have been studied previously, such as sparseness in both the factor and loading matrices and non-negative values in both matrices. They also include two novel ones: a magnitude requirement on values in the matrix reconstructed from the factor and loading matrices, and a value-range requirement on the factor matrix (values in this matrix should be in [0, 1]). The authors also proposed to directly include confounding factors (e.g., age, gender) in the factorization. The algorithm was developed using the popular ADMM framework. Evaluation was done using both synthetic datasets and two practical datasets. Strengths: The results included in Table 2 are interesting, indicating the proposed method learns factors that are more stable than baseline methods with varying training set size. This is a desired behavior, implying it might learn the intrinsic patterns that are important. Weaknesses: Other than what’s mentioned in strengths, the practical significance of this approach is limited given the large amount of existing work in matrix factorization research. There is no clear statistical significance among the results in Figure 3 to indicate an obvious advantage of the proposed method over the compared ones. The advantage of including age and gender in the factorization is not clearly indicated. How including or excluding them affects the learned factors is not clear. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is D in Eq. (10)? I was not able to find a corresponding description. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weaknesses > the practical significance of this approach is limited given the existence of large amount of existing works in matrix factorization research. We ask the reviewer to please refer to the common response section for a detailed answer. > There is no clear statistical significance among results in Figure 3 to indicate obvious advantage of the proposed method over compared ones. We would like to point out that we observed statistically significant advantages of our method versus others as the sample size decreased, in both datasets, with the sole exception of $\ell_1$-NMF at 20% in CBCL-HBN. However, our primary goal for this and other diagnostic prediction experiments was to show that having all the constraints that promote interpretability comes at *no* cost in terms of prediction performance, i.e., our method preserves that information. This is something that our clinical collaborators deeply care about. These results suggest that the additional regularization proposed in our method matches domain characteristics. > The advantage of including age and gender in the factorization is not clearly indicated. How with/without them affecting the learned factors is not clear. Age and gender are two of the most well-known potential confounding variables present in clinical data. Incorporating this information enables us to model answer patterns that are correlated with them. This prevents the creation of erroneous connections between questions that are only associated due to these confounding variables. This distinction is crucial when drawing conclusions about the relationship between behavioral issues and diagnoses. This generalizes to other auxiliary variables, e.g. environmental exposures, life circumstances, etc. ### Questions > What is D in Eq. (10)? I was not able to find corresponding description. We apologize for not including the definition of $D$ in the manuscript. 
The matrix $D$ is a binary matrix whose columns follow a step-like pattern: each "step" is of length 20, entries on the step have weight 1, and every consecutive pair of steps overlaps by 10 units to synthesize correlation between the latent factors. By multiplying the random variables $a$ and $b$, we obtain the matrix $W$ shown in the left panel of Figure 1.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for responding to my comments. Considering the lack of study on the impact of involving age and gender in the factorization and the limited novelty of the work as indicated by other reviewers, I keep my initial slightly negative rating.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for taking the time to read through our responses. Regarding your comment, we would like to note that neither factor analysis nor sparse NMF explicitly involves age and gender, so we believe comparing against them suffices to indicate that including these variables affects performance.
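The construction of $D$ described in the rebuttal above (binary columns with steps of length 20, consecutive steps overlapping by 10 rows) can be sketched as follows; the function name and the choice of the number of columns `k` are our own assumptions, not the authors' code:

```python
import numpy as np

def make_step_matrix(k, step=20, overlap=10):
    """Binary matrix whose k columns carry step-like blocks of ones.

    Column j has ones on rows [j*(step-overlap), j*(step-overlap)+step),
    so every consecutive pair of columns shares `overlap` rows of ones,
    which synthesizes correlation between the latent factors.
    """
    stride = step - overlap
    n_rows = stride * (k - 1) + step
    D = np.zeros((n_rows, k))
    for j in range(k):
        D[j * stride : j * stride + step, j] = 1.0
    return D

D = make_step_matrix(k=4)
assert D.shape == (50, 4)
# consecutive columns overlap on exactly `overlap` = 10 rows
assert int((D[:, 0] * D[:, 1]).sum()) == 10
```

Multiplying random factor draws through such a matrix then yields the correlated ground-truth $W$ used in the synthetic experiments.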
Rebuttal 1: Rebuttal: We would like to thank all reviewers for taking the time to provide thoughtful feedback on our paper. We were pleased to see that, in general, reviewers agree that the paper is methodologically sound, clearly presented, and has an appropriate experimental evaluation. Some issues have also been raised by multiple reviewers, specifically:

- motivation of the method
- technical contribution versus vanilla NMF methods
- subjective evaluation of interpretability
- the baseline methods compared against

To save reviewers time, we will cover these issues in a common response, prior to addressing individual reviewer comments and questions.

### Motivation of the method

Our method (ICQF) is motivated by psychological and psychiatric applications where questionnaires are the primary data type, and the goal is to make inferences about latent variables that correspond to constructs postulated by researchers. In this situation, interpretability and reproducibility across different datasets/populations are the two characteristics sought by them. While reproducibility is easy to quantify, interpretability is obviously subjective. One of the contributions of the paper is to capture many of the characteristics that researchers told us would make a factorization interpretable to them as constraints in the method. Having factors in the [0,1] range means the factors can be interpreted as the degree to which the factor is present. Having the loadings for a factor be in the same scale as questions means that they can be interpreted as a pattern of answers, present in a participant to the degree the factor is present. Separately modelling confound variables means that their influence can be separated from that of the factors of interest. Constraining the reconstructed matrix not to exceed the possible range of the answers regularizes both factor and loading estimates, and contributes to their sparsity.
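As a toy numerical illustration of these box constraints (a projected-gradient sketch of our own, not the authors' ADMM-based ICQF solver; all names and hyperparameters are assumptions), one can alternate updates of $W$ and $Q$ while projecting each onto its stated range:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(0, 4, size=(30, 12)).astype(float)  # questionnaire answers in [0, 3]
k, lr, r_max = 3, 1e-3, M.max()

W = rng.random((30, k))          # factors: interpretable degrees in [0, 1]
Q = rng.random((12, k)) * r_max  # loadings: same scale as the answers

for _ in range(500):
    R = np.clip(W @ Q.T, 0, r_max) - M         # residual of the bounded reconstruction
    W = np.clip(W - lr * R @ Q, 0.0, 1.0)      # project factors back into [0, 1]
    Q = np.clip(Q - lr * R.T @ W, 0.0, r_max)  # keep loadings in the answer range

# every constraint named above holds after the updates
assert 0.0 <= W.min() and W.max() <= 1.0
assert 0.0 <= Q.min() and Q.max() <= r_max
```

Confound columns such as age and gender would enter as additional, fixed columns of $W$; the projections are what distinguish this setup from unconstrained factor analysis.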
These characteristics may seem technically trivial, but *all* of them are missing from factor analysis (FA), the factorization method that has been the workhorse of psychological and psychiatric research for many decades. There, factors and loadings may be in an arbitrary range. Interpretation of loadings requires taking into account the sign and range of the corresponding factor, as well as tradeoffs between positive and negative loadings. There is no allowance for confound or ancillary variables, and judging their influence requires dividing the sample by values or levels of the confound. Hence, our method should be viewed as a replacement for factor analysis, if one were to start from scratch with researcher desiderata in mind. If there existed a non-negative matrix factorization method with all of these characteristics, we would have used it instead. The other aspects of our method -- automated determination of dimensionality and sparsity, integrated handling of missing data -- are more practical in nature. They correspond, however, to steps that are challenging for researchers, and where practice is often ad-hoc. As we show in synthetic data experiments, our solutions outperform the approaches used in factor analysis at identifying the ground truth, and are completely integrated in the method rather than requiring separate steps and additional decisions by researchers. Given these motivations, we believe this work fits squarely within the "Machine learning for sciences (e.g. climate, health, life sciences, physics, social sciences)" topic in the call for papers.

### Technical novelty versus vanilla NMF methods

Our method was developed because we couldn't find any other non-negative factorization method that would satisfy all the constraints we wanted for factorization of a single questionnaire, as described above.
We would not have embarked on developing a new method otherwise, especially given the need to prove convergence, examine performance with synthetic data, etc. This said, we understand reviewer concerns about technical novelty, and will try to address them here. The components of the procedure, such as ADMM, are well understood; the novelty lies in their composition to achieve our specific goals. The inclusion of bounded constraints for the factor matrix $W$ and the factor loading matrix $Q$ is essential for establishing convergence to a local minimum solution in Proposition 3.2. Furthermore, we demonstrate that the combination of our method with the block cross-validation procedure can lead to a solution that is close to a global minimum. Both of these results are non-trivial, and necessary for the method to be practically applicable. Finally, the bounded constraints on $W$ enable a direct application of ICQF to concatenated factor matrices derived from many different questionnaires obtained from the same participants, as they will all be on the same scale. This is follow-up work we are doing with our collaborators to identify common dimensions of psychopathology manifesting across questionnaires. Empirically, we found that these extra constraints contribute to the stability of the factorization process and result in improved interpretability and robustness, particularly for small sample sizes. [Please proceed to "Author Rebuttal by Authors (Part II)" for further responses. We apologize for splitting it into two posts.]
NeurIPS_2023_submissions_huggingface
2023
Content-based Unrestricted Adversarial Attack
Accept (poster)
Summary: This paper proposes a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack (ACA). The authors argue that current unrestricted attacks have limitations in terms of maintaining human visual imperceptibility, generating natural adversarial examples, and achieving high attack performance. This paper is well-written and easy to follow. Strengths: 1) The paper introduces a novel attack framework that addresses the limitations of current unrestricted attacks. The use of a low-dimensional manifold and optimization along its adversarial direction allows for the generation of diverse and natural adversarial examples. 2) The paper provides a clear motivation and problem definition, outlining the challenges and goals for unrestricted adversarial attacks. 3) The paper includes extensive experimentation and visualization to validate the effectiveness of ACA. The results show significant improvements over state-of-the-art attacks in terms of adversarial transferability. Weaknesses: The paper does not include sufficient evaluation and ablation studies for the proposed method. Since ACA uses a skip gradient, I think the authors should compare against Skip connections matter [1] in the experiments. The paper could benefit from a more thorough review of relevant literature. While the authors mention existing unrestricted attacks, there is limited discussion on related work and the novelty of ACA compared to previous approaches. [1] Skip connections matter: On the transferability of adversarial examples generated with ResNets, ICLR 2020 Technical Quality: 3 good Clarity: 3 good Questions for Authors: /Na Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: /Na Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. What excites us is not the high rating you have given us, but the questions you have proposed. This assures us that you have a deep knowledge of the field and have recognized the significance of our work to the community. **Therefore, we hope you can champion our work in the discussion.** We are convinced that our work is worthy of being presented to a wider audience of researchers and can assist ML systems in mitigating the threat of unrestricted attacks in practice. We address your concerns as follows:

**[Q1: Insufficient evaluation and ablation studies.]**

Based on your valuable suggestions, we add an experimental comparison with SGM in Q2. For more evaluation, we add more attack and defense experiments (please refer to **Reviewer frod #Q1**). In the supplementary material, we provide ablation studies with momentum, differentiable boundary processing, and $\beta$. Further, we also supplement ablation studies of the momentum factor $\mu$ and the perturbation value $\kappa$ (please refer to **Author Rebuttal #Q3**).

**[Q2: Comparison with Skip connections matter (SGM).]**

Thank you for pointing this out; it is actually a naming collision: our skip gradient (SG) is similar in name to the Skip connections matter (SGM) method you mentioned, but they are technically different.

- In terms of purpose, our SG solves the problem of memory overflow caused by gradient backpropagation during the sampling process of the diffusion model, whereas SGM uses more gradient from the skip connections rather than the residual modules to enhance transferability.
- In terms of implementation, SG uses the derivative in the denoising process to approximate the gradient, while SGM uses a single decay factor to enhance the gradient on skip connections.
- In terms of scope of application, SG is suitable for diffusion models, while SGM is suitable for neural networks with skip connections.
Overall, both SG and SGM directly or indirectly improve adversarial transferability. We will add a corresponding discussion of the above issues in the final version. Furthermore, we supplement related experiments following your suggestion. Because SGM is only effective for gradient optimization-based methods and is a plug-and-play module, we choose ADer, ReColorAdv, cAdv, ACE and ACA for the experiments (ResNet50 is the surrogate model). The experiments show that SGM has almost no effect on ADer and cAdv, but can significantly improve transferability for the other methods (ACA improves Avg. ASR by 5.87%).

Attack | MN-v2 | Inc-v3 | RN-50 | Dense-161 | RN-152 | EF-b7 | MobViT-s | ViT-B | Swin-B | PVT-v2 | Avg. ASR (%)
:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:
ADer | 15.5 | 7.7 | 55.7* | 8.4 | 7.8 | 11.4 | 12.3 | 9.2 | 4.6 | 4.9 | 9.09
ADer+SGM | 14.8 | 7.3 | 10.5* | 7.7 | 7.4 | 11.1 | 11.8 | 9.5 | 4.5 | 4.7 | 8.76
ReColorAdv | 40.6 | 17.7 | 96.4* | 28.3 | 33.3 | 19.2 | 29.3 | 18.8 | 12.9 | 13.4 | 23.72
ReColorAdv+SGM | 58.3 | 26.7 | 98.6* | 48.4 | 53.0 | 25.8 | 43.5 | 23.8 | 21.1 | 20.9 | 35.72
cAdv | 44.2 | 25.3 | 97.2* | 36.8 | 37.0 | 34.9 | 40.1 | 30.6 | 19.3 | 20.2 | 32.04
cAdv+SGM | 43.4 | 25.5 | 57.8* | 32.2 | 30.3 | 31.9 | 39.0 | 30.9 | 19.3 | 19.5 | 30.22
ACE | 32.8 | 9.4 | 99.1* | 16.1 | 15.2 | 12.7 | 20.5 | 13.1 | 6.1 | 5.3 | 14.58
ACE+SGM | 61.4 | 20.9 | 99.2* | 33.8 | 37.8 | 20.8 | 36.3 | 26.0 | 11.7 | 10.0 | 28.74
ACA (Ours) | 69.3 | 61.6 | 88.3* | 61.9 | 61.7 | 60.3 | 62.6 | 52.9 | 51.9 | 53.2 | 59.49
ACA (Ours)+SGM | 76.7 | 67.0 | 91.7* | 68.8 | 69.0 | 59.9 | 68.5 | 58.5 | 58.2 | 61.6 | 65.36

**[Q3: Thorough review of relevant literature.]**

Thanks for pointing out the related work and novelty issues in the current version. **For related work**, we briefly summarize the shortcomings of current shape-, texture-, and color-based unrestricted attacks in Lines 35-45.
Then we further analyze the shortcomings of ColorFool and Natural Color Fool in Lines 86-96. **For the novelty of ACA**, we summarize the innovations in the attack form in Lines 97-100: ACA can adaptively combine multiple contents (shape, texture and color), guarantee the photorealism of the image, and exhibit strong attack performance. In Section 3, we demonstrate the technical innovations of ACA, including Image Latent Mapping and Adversarial Latent Optimization. Following your comments, we will add detailed descriptions of each unrestricted attack, systematically summarize their shortcomings, and discuss the differences with ACA in more detail to highlight its novelty. Thank you again for your help in improving the quality of the paper.

---

Rebuttal Comment 1.1: Title: Response to authors Comment: The authors answered my questions and I will improve my score.
Summary: The paper introduces a novel attack framework called Content-based Unrestricted Adversarial Attack, which aims to generate diverse and natural adversarial examples with high transferability. The authors argue that existing methods, such as lp norm-based attacks, have limitations in terms of perceptual similarity, naturalness, and robustness. To address these issues, they propose mapping images onto a low-dimensional manifold represented by a generative model trained on natural images. This manifold ensures both photorealism and content diversity. By optimizing the adversarial objective on this latent space, they generate unrestricted adversarial examples. The proposed method, called Adversarial Content Attack (ACA), utilizes Image Latent Mapping (ILM) and Adversarial Latent Optimization (ALO) techniques to optimize the latent in a diffusion model. The effectiveness of ACA is validated through experiments and visualization, demonstrating significant improvements of 13.3~50.4% in terms of adversarial transferability compared to state-of-the-art attacks. Overall, the main contributions of the paper are the introduction of the Content-based Unrestricted Adversarial Attack framework, the development of the Adversarial Content Attack method, and the experiments demonstrating improvements in generating diverse and transferable adversarial examples. Strengths: The authors effectively communicated the motivation, problem statement, and methodology of the proposed framework. By addressing the limitations (imperceptibility/photorealism/effectiveness) of existing methods, they proposed a novel attack framework that leverages a low-dimensional manifold represented by a generative model. By combining image mapping onto a latent space, optimizing adversarial objectives, and utilizing a diffusion model, the authors introduce a novel approach to generating diverse and natural adversarial examples. 
This paper might be the first to explore unrestricted adversarial examples through such a framework. This paper offers a thorough explanation of the proposed attack framework, detailing the underlying techniques of ILM and ALO. The authors further support their claims through experimentation and visualization, providing evidence of the effectiveness of their approach and demonstrating improvements in adversarial transferability compared to state-of-the-art attacks. The improvements in adversarial transferability also shed light on the potential impact of this method in uncovering vulnerabilities in security-sensitive applications and advancing our understanding of robustness in DNNs. Overall, this paper's strengths encompass originality in proposing a novel attack framework, quality in terms of methodology and experimental evaluation, clarity in explaining the concepts and techniques, and significance in addressing limitations and raising awareness of unrestricted but realistic adversarial examples. Weaknesses: While the paper has several strengths, there are some weaknesses that could be addressed to further improve the work: **Comparison with State-of-the-Art Attacks:** While the paper mentions that the proposed method achieves significant improvements in terms of adversarial transferability compared to state-of-the-art attacks, a more comprehensive comparison would strengthen the evaluation. It would be valuable to include a thorough analysis and comparison with a wider range of existing unrestricted attack methods, such as [1]. In particular, Laidlaw et al. proposed an efficient way to generate imperceptible adversarial examples. The reviewer also suggests evaluating the proposed attack on other adversarially trained models, for example, the defense method that could be generalized to unforeseen perturbations [1], or having used synthetic data during adversarial training [2]. 
**Defense Method:** This paper does not discuss how to defend against the proposed attacks but focuses primarily on the efficacy of the proposed method. It would be great if the authors could provide potential solutions or mitigation strategies for the threats. **Generalization to Different Datasets:** The authors only evaluate their method on a subset of the ImageNet validation set and do not discuss how the results generalize to other datasets. It would be beneficial to investigate the generalization of the proposed approach to different datasets. **Unclear Claim:** The authors mentioned the Dunning-Kruger effect to emphasize that current defense methods against lp norm adversarial examples overestimate their abilities. However, this work does not provide further details and arguments to support this. Although similar arguments have also been proposed in [3], in which the authors argue that lp-based robustness evaluation might be biased, the reviewer thinks that using the Dunning-Kruger effect here is not rigorous. The reviewer suggests that the authors rethink this argument, and even consider removing it if it is not an important contribution of the paper. **Typos:** Although this paper is well-written and easy to follow, the reviewer found some typos and grammar errors. For example, in line 161, *follw*. It would be great if the authors had proofread the paper before submitting it. [1] Laidlaw et al. Perceptual adversarial robustness: Defense against unseen threat models. (ICLR 2021) [2] Croce et al. Robustbench: a standardized adversarial robustness benchmark. (NeurIPS 2021) [3] Hsiung et al. CARBEN: Composite Adversarial Robustness Benchmark. (IJCAI 2022) Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness part. In brief, the reviewer would like to understand more about the following points:
- Could the authors provide more attack and defense baselines as mentioned in the weaknesses?
- Could the authors provide some experiments on other datasets?
- Please address the mentioned unclear claim.
- Why did the authors not provide the code for review but answer Reproducibility as "yes"?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and effort on our work. We supplement the experiments with additional attacks and defenses, and release the code. We hope you can further support our work.

**[Q1: Comparison with state-of-the-art attacks.]**

Thanks for your valuable suggestions; we supplement attack experiments with Perceptual Projected Gradient Descent (PPGD) and Lagrangian Perceptual Attack (LPA) from [1]. PPGD exhibits poor attack performance, while LPA transfers better. However, compared with the state-of-the-art NCF and ACA, there is still a large gap (ACA exceeds LPA by 34.47% in Avg. ASR).

Attack | MN-v2 | Inc-v3 | RN-50 | Dense-161 | RN-152 | EF-b7 | MobViT-s | ViT-B | Swin-B | PVT-v2 | Avg. ASR (%)
:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:
PPGD | 23.1 | 12.3 | 99.7* | 16.6 | 18 | 13.3 | 14.9 | 10.6 | 6.3 | 6.9 | 13.56
LPA | 37.6 | 24 | **100*** | 34.4 | 38 | 22 | 29.2 | 13.5 | 12.2 | 14.3 | 25.02
NCF | **71.2** | 33.6 | 91.4* | 48.5 | 60.5 | 32.4 | 52.6 | 36.8 | 19.8 | 21.7 | 41.90
ACA (Ours) | 69.3 | **61.6** | 88.3* | **61.9** | **61.7** | **60.3** | **62.6** | **52.9** | **51.9** | **53.2** | **59.49**

Furthermore, we complement the defense experiments on adversarially trained models. Since the pre-trained robust model weights are not released in [1], we cannot quickly reproduce adversarial training on ImageNet due to time and computing constraints. Therefore, we choose ViT-B-CvSt and ConvNext-L-CvSt, state-of-the-art entries on the leaderboard in [2] (the models come from [A3] and the input size is 224). Our ACA still outperforms other unrestricted attacks by a significant margin.
Attack | Clean | ILM | SAE | ADer | ReColorAdv | cAdv | tAdv | ACE | ColorFool | NCF | ACA (Ours)
:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:
ViT-B-CvSt | 8.4 | 8.7 | 38.9 | 11.2 | 10.5 | 20.6 | 11.3 | 16.9 | 31.2 | 35.8 | **51.1**
ConvNext-L-CvSt | 7.4 | 7.9 | 34.5 | 11.0 | 9.5 | 17.3 | 10.7 | 15.4 | 26.9 | 33.0 | **49.7**

Thanks again for your suggestion; we will add this part of the experiments to the final version.

[A3] Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models, arXiv preprint arXiv:2303.01870.

**[Q2: Defense method.]**

We believe that improving the adversarial robustness of the model itself is a potential defense strategy. First, recent work on LLMs [A4, A5] shows that larger models are more robust, so building on foundation models may be a promising direction. Second, one can consider designing specific training strategies, such as the one you mentioned [1]; adding unrestricted adversarial examples to adversarial training may be one idea. Finally, a better visual encoding might also be a solution. We will incorporate these ideas into the final version.

[A4] Nicholas Carlini et al., Are aligned neural networks adversarially aligned?
[A5] Jindong Wang et al., On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective

**[Q3: Generalization to different datasets.]**

We choose ImageNet mainly for alignment with other methods, which enables a fair comparison. In terms of models, ImageNet has the most image classification models, which facilitates transfer attack experiments; in terms of difficulty, ImageNet has 1000 categories with rich content and semantics that are closer to real-scene images, so results on this dataset generalize well.
In addition, in the field of transfer attacks, the ImageNet-compatible dataset is widely used. We admit that our method has limitations on small images, such as CIFAR, because it requires larger inputs, but the results on ImageNet already illustrate the effectiveness and generalization of ACA. If time permits, we will add experiments on more datasets in the future.

**[Q4: Unclear claim.]**

A recent paper [A6] describes the Dunning-Kruger effect as a cognitive bias in which humans tend to overestimate their abilities. We argue that the robustness of current adversarial defenses is similarly overestimated. In fact, current adversarial defenses can defend reasonably well against $l_p$ adversarial examples, but cannot effectively defend against unrestricted adversarial examples. We detail the experimental results of attacking defenses in Section 4.3: ACA achieves a transfer attack success rate of more than 50% against all defense methods. We think this clearly illustrates the insufficiency of current adversarial defenses in the face of unrestricted adversarial examples, so we interpret this overestimation as a Dunning-Kruger effect. Regarding this statement, we are happy to continue the discussion with you in the discussion session.

[A6] Reliability in Semantic Segmentation: Are We on the Right Track? CVPR 2023

**[Q5: Typos.]**

Thank you for your detailed review of our work, which is of great help in improving the quality of the paper. We will carefully check and revise all typos in the final version.

**[Q6: Reproducibility as "yes".]**

Following the NeurIPS Paper Checklist Guidelines, our paper provides enough information to reproduce the experiments, so we chose "yes". During the rebuttal, we also submitted an anonymous link containing the code to the AC. To promote research on unrestricted attacks, we promise to release the code once this work is accepted.
---

Rebuttal Comment 1.1: Title: Happy to Discuss with You Comment: Dear Reviewer frod: As the discussion period is closing, we sincerely look forward to your feedback. We deeply appreciate your valuable time and effort spent reviewing this paper and helping us improve it. It would be much appreciated if you could once again review our responses and let us know whether they address or partially address your concerns, and whether our explanations are heading in the right direction. Please also let us know if there are further questions or comments about this paper. We strive to improve the paper consistently, and it is our pleasure to have your feedback! Best regards, Authors

---

Rebuttal Comment 1.2: Comment: I appreciate the authors' efforts in addressing my questions. While a significant portion of my concerns has been satisfactorily addressed, I find that the conclusion drawn regarding the application of the Dunning-Kruger effect in the context of current defense methods remains somewhat unclear to me. It is important to note that the Dunning-Kruger effect is commonly employed to elucidate cognitive biases exhibited by humans. It is also not surprising that the proposed method can be cracked by unseen attacks. Given that the Dunning-Kruger effect has been previously elucidated in [A6], the reviewer acknowledges that this particular concern is minor. I have raised my score. However, the reviewer suggests the authors discuss these previous works in the revision to make the claims more rigorous and substantiated.
Summary: In this paper, the authors propose an unrestricted untargeted attack based on optimising the latent space of a stable diffusion model. The generated adversarial samples are empirically shown to be more transferable than those of existing semantic attacks. Moreover, the authors validate the effectiveness of the adversarial samples when attacking different representative adversarially trained models and show a consistent performance boost.

Strengths:
- The method replaces the GANs popular in semantic attacks with powerful stable diffusion models, empirically pushing the state of the art forward by many steps.
- Experiments are thorough in attacking both normally and adversarially trained models.
- The proposed attack is compared against many baseline unrestricted attacks and outperforms all of them.

Weaknesses:
- The paper lacks technical novelty, as latent space optimisation for generating adversarial attacks has been popular for many years [17].
- Since this is an unrestricted attack, the quality of the generated images is difficult to assess with metrics. Nonetheless, the authors have computed 5 metrics to assess image quality.
- The authors did not discuss code release to reproduce the experiments. For this particular paper, the implementation of the proposed method is not simple, as Sections 3.1 and 3.2 mostly discuss the difficulties in optimising the latent space and how they are overcome through the skip gradient, momentum, and the boundary function.

Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Overall, I do not have major questions, as the proposed method is properly motivated and leverages a powerful generative model to aid attack generation. However, I have a few minor questions, mostly around implementation and design choices:
- In Section 3.1, the authors propose to optimise the null text embedding $∅_t$ at every timestep to offset the error. Is there any difference in your implementation compared to the method of [32]?
Can you please clarify your contribution in Section 3.1, i.e., mapping the image $z_0$ to the latent $z_t$?
- In particular, the authors show the benefit of the momentum factor $\mu$ and the boundary function $ϱ(·)$ for improving the ASR in Appendix D. Did you perform an ablation study to set the value of $\mu$ to 1, and can the authors provide more insight into the design of the boundary function, in particular for inputs outside the valid range [0, 1]?
- How do you ensure the perturbation value of $k = 0.1$ in the latent space does not drastically modify the original image? Can you present an ablation of this key parameter vs. attack performance vs. image quality? In Table 3, there is no discussion of the non-reference metrics mentioned in the paper. Why is the method not evaluated with metrics such as LPIPS, which captures perceptual similarity? Please also add FID to the metrics.
- How does the attack perform in the targeted attack setting? Do you require a larger shift in latent space to craft examples?
- Please also benchmark the baseline attacks in terms of attack speed for completeness.
- Can you constrain the perturbation to a local region of the latent space to generate a kind of patch attack in image space?

Finally, I request the authors to discuss their plans for releasing the codebase to reproduce the experiments, as I believe this will be invaluable to the community.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations:
- The inference time of the attack is much higher, taking 2.5 minutes per image due to the many levels of optimisation, such as the null-text embedding and the latent-space perturbation.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments; we address your concerns as follows.

**[Q1: Technical novelty of latent space optimization.]**

Techniques for latent space optimization using Generative Adversarial Networks (GANs) are common, but latent optimization using diffusion models has not been widely explored.
- First, the latents of GANs and diffusion models are different. The latent of a GAN is usually decoupled and limited to certain attributes, while the latent of a diffusion model is aligned with the text. We choose the null text embedding for optimization, propose a corresponding attack algorithm, and show experimentally that it can generate highly transferable adversarial examples.
- Second, we find that the sampling process of the diffusion model cannot build the computation graph for the gradient chain because GPU memory overflows (Lines 192-198), so we propose the skip gradient to solve this problem, which is unique to diffusion models.

Therefore, our work is designed for the latent of the diffusion model, which is the biggest difference between us and previous work on latent space optimization.

**[Q2: Choice of image quality assessment.]** Please refer to **Author Rebuttal #Q1**.

**[Q3: Comparison with [32] and contributions.]**

The biggest contribution of this paper is to propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack. Under this framework, we first employ Image Latent Mapping (ILM) to map images onto the latent space and then utilize Adversarial Latent Optimization (ALO) to optimize the latent. We emphasize that our contribution is to introduce the ILM module to realize ACA, so that images can be mapped to the latent space for the subsequent attack. The existence of this module is necessary, and [32] is just one implementation of ILM. This implementation can be replaced, or even superseded, by better methods.
Compared to other strategies [9, 39, 23, 49], the current strategy [32] is simple and effective and does not require fine-tuning to obtain high-quality image reconstruction. Our framework is not limited to this implementation, and it can still be applied if better implementations emerge in the future. **[Q4: Ablation study on the momentum factor.]** Please refer to **Author Rebuttal #Q3**. **[Q5: Insight about the boundary function.]** The motivation of Differentiable Boundary Processing (DBP) is to ensure that the values of adversarial examples lie in [0,1], because when the diffusion model generates an image, the value range may fall outside [0,1]. When saving adversarial examples, values are usually clipped directly to [0,1], and the part of the perturbation outside [0,1] is discarded, which may cause the attack to fail. To reduce the drop in ASR caused by this storage error, we use DBP to constrain the value range of the adversarial examples to [0,1] as much as possible. **[Q6: Perturbation value.]** Please refer to **Author Rebuttal #Q3**. **[Q7: Targeted attacks.]** Please refer to **Author Rebuttal #Q2**. **[Q8: Attack speed.]** Here, we report the attack speed of various unrestricted attacks. We choose MN-v2 as the surrogate model and evaluate the inference time on an NVIDIA Tesla A100. The table shows the average time (in seconds) required to generate an adversarial example per image. ACA does have a significant time cost compared to other attacks. Further analysis shows that Image Latent Mapping and Adversarial Latent Optimization each account for roughly 50% of the time cost, and most of the time cost of ILM and ALO lies in the sampling process of the diffusion model. We also proactively discuss this issue in the limitations.
Attack | SAE | ADer | ReColorAdv | cAdv | tAdv | ACE | ColorFool | NCF | ACA (Ours)
:-----------:|:------:|:------:|:----------:|:-------:|:------:|:------:|:---------:|:-------:|:----------:
Time (sec) | 8.80 | 0.41 | 3.86 | 18.67 | 4.88 | 6.64 | 12.18 | 10.45 | 60.0+65.33=125.33

**In this paper, our main contribution is to propose a new unrestricted attack paradigm. Therefore, we focus on improving the attack framework rather than optimizing the time cost.** Since the time cost is mainly concentrated in the sampling of the diffusion model, we note that many recent works have accelerated or distilled diffusion models, which can greatly reduce the time of the sampling process. For example, [A2] can reduce the total number of sampling steps by at least 20 times. If these acceleration techniques were applied to our ACA, it could theoretically achieve an attack speed of close to 6 seconds. Thank you for your valuable suggestion; we think this is a valuable direction for optimization. [A2] On Distillation of Guided Diffusion Models, CVPR 2023 **[Q9: Local edit.]** Local editing is currently not supported in this paper. However, with the rapid development of diffusion models and the emergence of more control and editing techniques, we believe it will be possible to incorporate local editing into our proposed unrestricted attack framework in the future. In addition, this issue is discussed in the main body as a limitation. **[Q10: Release codes.]** Thank you for recognizing the value of our work. During the rebuttal, we submitted an anonymous link with the code to the AC. To promote research on unrestricted attacks, we promise to release the code after this work is accepted. --- Rebuttal Comment 1.1: Comment: Thank you authors for the rebuttal and additional experiments. My concerns regarding the momentum $\mu$ and the perturbation value $k$ are addressed.
The rebuttal experiments suggest that the attack success rate is not overly sensitive to these hyper-parameters. Moreover, I believe the boundary function is an additional contribution that can be extended to other attacks. On the other hand, I request the authors to incorporate the experiments on attack speed into the revised paper and to share user-friendly, reproducible code for the benefit of the community. Overall, I believe this paper will inspire future research on diffusion-based unrestricted attacks and will be of interest to the audience at NeurIPS. I increase my score to WA.
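As a hedged illustration of the idea behind the boundary function discussed in this thread: the paper's exact Differentiable Boundary Processing formulation is not reproduced here, but the core trick of replacing a hard clip to [0,1] with a smooth, everywhere-differentiable surrogate can be sketched as follows (the tanh form and `sharpness` parameter are our assumptions, not the authors' definition).

```python
import numpy as np

def soft_clip(x, sharpness=4.0):
    """Differentiable surrogate for clipping to [0, 1]: a scaled tanh that
    squashes any real input smoothly into (0, 1). Unlike a hard clip, whose
    gradient is zero outside the valid range, this keeps nonzero gradients
    everywhere, so optimization near the boundary does not stall."""
    return 0.5 * (np.tanh(sharpness * (x - 0.5)) + 1.0)
```

With such a surrogate, the optimized image already lies (essentially) in [0,1], so the rounding/clipping applied when saving the adversarial example discards almost no perturbation.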
Summary: This paper proposes the Adversarial Content Attack based on the diffusion model. The proposed attack method first maps the image onto a low-dimensional manifold of natural images and then moves images along the adversarial gradient on the manifold to generate photorealistic adversarial examples. The authors conduct extensive experimentation and demonstrate the efficacy of the Adversarial Content Attack on both normally trained models and defense methods. Strengths: 1. The paper presents a host of visualizations and experiments to support its intuitions and conclusions. 2. The investigated topic is important and useful. Weaknesses: 1. This work seems like an application of the diffusion model to model-debugging research, i.e., finding hard samples for deep models. However, it lacks mentions of or comparisons with related work, including [1], [2], etc. 2. In Table 3, I can hardly understand why the generated adversarial image can achieve better photorealism than the real image. 3. In Table 2, Inc-v3$_{ens4}$ is used as the target model, which is a defense method. But confusingly, the attack success rate (**62.2**) surpasses the case when using the normal Inc-v3 (**58.8**) as the target. I will be pleased to raise my score if these questions can be properly answered. [1] Xiaodan Li, Yuefeng Chen, Yao Zhu, Shuhui Wang, Rong Zhang, Hui Xue. ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. [2] Maximilian Augustin, Valentyn Boreiko, Francesco Croce, Matthias Hein. Diffusion Visual Counterfactual Explanations. In Advances in Neural Information Processing Systems (NeurIPS), 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: This paper mainly focuses on the untargeted attack. I wonder whether the proposed attack method can also be applied to the targeted attack case, that is, to specify the misclassified category.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper has discussed the limitations and negative social impacts of the proposed attack method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable advice. We will follow it to complement the discussion of related work and the explanation of image quality. We would appreciate it very much if you could champion our work. **[Q1: Related works about model debugging.]** Thanks for your help in improving the quality of our paper. ImageNet-E [1] was released after the NeurIPS submission deadline, so we could not discuss it in time. We will add the following discussion to the final version: - The image editing of ImageNet-E [1] focuses on single elements that explicitly control the image. It evaluates robustness to background, size, pose, and direction, and does not emphasize the resulting adversarial robustness. In contrast, our method can adaptively and implicitly generate adversarial examples with varied shapes, colors, and textures, i.e., with a variety of adversarial content, and it emphasizes adversarial robustness. Experiments further show that it also has strong adversarial transferability. The two works thus differ in control methods, edited elements, and the notion of robustness. - DVCE [2] is similar to ours in terms of the generation effect, but differs in technical route, optimization strategy, and parameter updates. Its proposed Cone Projection is only suitable for robust classifiers, while our method does not place any requirements on the model. In addition, model debugging focuses on the biases of the classifier, such as spurious features, whereas ACA pays more attention to the adversarial vulnerability of the model, aiming to find minimally modified misclassified examples.
**[Q2: Explanation of image quality.]** Thanks to your careful review, we have also noticed that the adversarial examples generated by ACA are more photorealistic than real images, and we explained the possible reasons in Lines 296-304: - Our adversarial examples are generated on the low-dimensional manifold of natural images, which can adaptively combine adversarial content while ensuring photorealism; - Stable Diffusion itself is an extremely powerful generative model, which produces images with very high quality. It should be noted that ours is not the first work in which a similar situation has appeared; it also appeared in ColorFool (CVPR 2020), where the unrestricted adversarial examples have better quality than the original images. In addition to the above two points, we further analyze the possible reasons. These no-reference image metrics are often trained on aesthetic datasets, such as AVA or KonIQ-10K. Some of the images in these datasets are post-processed (e.g., with Photoshop), which is more in line with human aesthetics. Because ACA adaptively generates adversarial examples on a low-dimensional manifold, its minor image edits are similar to such post-processing, which is more in line with human aesthetic perception and thus yields better image quality scores. We will update the explanation of this part in the final version. **[Q3: ASR on defense models.]** Inc-v3$\_{ens4}$ is a robust model based on adversarial training, and adversarial training generally reduces the clean accuracy of the model (compared to a normally trained model). Furthermore, Inc-v3$\_{ens4}$ is adversarially trained against $l_p$ perturbations, which does not defend well against unrestricted adversarial examples [55]. This experiment also verifies that existing adversarially trained models can provide a false sense of adversarial robustness. **[Q4: Targeted attacks.]** Please refer to **Author Rebuttal #Q2**.
--- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns and I appreciate the efforts the authors made to refine the paper. I have raised my score. Though I recommend this paper to be accepted, I am also willing to hear about the other reviewers' further opinions and discussion.
Rebuttal 1: Rebuttal: We sincerely appreciate all reviewers' valuable feedback and the efforts of the program chairs and area chairs. We are committed to addressing the issues you raised and improving our manuscript accordingly. Below, we provide point-by-point responses to each reviewer and address all details. **[Q1: Choice of image quality assessment.]** The quality assessment of generated images has always been a concern of the academic community, and numerous endeavors have been undertaken to tackle the issue. We acknowledge that objective metrics alone cannot perfectly reflect the image quality of unrestricted adversarial examples, so we provide both qualitative and quantitative analyses of image quality in Section 4.4. The reason we did not choose common perceptual image quality metrics such as LPIPS or FID is that the backbones they use (AlexNet/VGG/InceptionV3) are pre-trained on ImageNet and vulnerable to adversarial attacks, which may lead to biased image quality estimates. Therefore, we do not report these values. Further, recently published unrestricted attacks at top-tier conferences, such as ColorFool (CVPR 2020) and NCF (NeurIPS 2022), both choose NIMA, a no-reference perceptual image quality measure, as the evaluation standard. As this metric has been rigorously peer-reviewed and verified to be feasible, we supplemented it with four additional metrics to achieve a more comprehensive evaluation of visual quality. The proposed evaluation scheme may not be the optimal solution, but we believe it is a relatively appropriate strategy at the moment. **[Q2: Targeted attack.]** Yes, our ACA can achieve targeted attacks by modifying the loss function, just like PGD. Considering that previous unrestricted attacks pay more attention to untargeted adversarial transferability, our experiments are mainly aligned with previous work. Further, we have implemented this targeted attack, integrated it into the code, and submitted it to the AC.
Therefore, we hope that reviewers will support our work to advance the field's attention to the threat of unrestricted attacks. **[Q3: More ablation studies.]** Following **Reviewer Rhy1**'s suggestion, we supplement ablation studies of the momentum factor $\mu$ and the perturbation value $\kappa$ (MN-v2 as the surrogate model). - **Momentum factor $\mu$**: As $\mu$ increases, the black-box average ASR first rises, peaking at $\mu = 1$; larger $\mu$ leads to a slight drop in performance. It is worth noting that **even without momentum** ($\mu=0$), our ACA still outperforms the next-best attack (NCF) by 12.04%.

$\mu$ | MN-v2 | Inc-v3 | RN-50 | Dense-161 | RN-152 | EF-b7 | MobViT-s | ViT-B | Swin-B | PVT-v2 | Avg. ASR (%)
:---:|:-----:|:------:|:-----:|:---------:|:------:|:-----:|:--------:|:-----:|:------:|:------:|:-------------:
0 | 91.8* | 53.1 | 58.9 | 55.2 | 54.7 | 53.4 | 56.2 | 47.8 | 45.1 | 46.5 | 52.32
0.2 | 93.1* | 55.1 | 60.4 | 54.0 | 53.6 | 50.5 | 55.1 | 45.9 | 43.6 | 46.8 | 51.67
0.4 | 93.5* | 53.7 | 60.3 | 53.6 | 53.9 | 52.2 | 57.5 | 47.9 | 46.1 | 46.4 | 52.40
0.6 | 92.9* | 54.5 | 59 | 56.2 | 56.1 | 52.7 | 58.9 | 48.6 | 47.0 | 47.7 | 53.41
0.8 | 92.7* | 55.6 | 59.4 | 56.5 | 56.3 | 52.7 | 57.6 | 50.5 | 47.9 | 47.6 | 53.79
1 | 93.1* | 56.8 | 62.6 | 55.7 | 56.0 | 51.0 | 59.6 | 48.7 | 48.6 | 50.4 | **54.38**
2 | 87.8* | 58.2 | 60.2 | 56.1 | 55.7 | 52.6 | 59.0 | 49.3 | 47.9 | 49.9 | 54.32

- **Perturbation value $\kappa$**: In the table below, the attack success rate increases with $\kappa$, because a larger $\kappa$ leads to greater changes in image content. In terms of image quality, however, the HyperIQA, MUSIQ-Koniq, and TReS scores degrade as $\kappa$ increases. NIMA-AVA and MUSIQ-AVA are trained on AVA, which contains some post-processed images.
When $\kappa$ is small, the effect of ACA can resemble image post-processing, so these two metrics increase slightly (for more explanation, please refer to **Reviewer oMF6 #Q2**). But as $\kappa$ becomes larger, the change in image content grows, and these two metrics also begin to decrease. In summary, we find that $\kappa=0.1$ achieves better image quality.

$\kappa$ | Avg. ASR (%) | NIMA-AVA | HyperIQA | MUSIQ-AVA | MUSIQ-Koniq | TReS
:----:|:------------:|:--------:|:--------:|:---------:|:-----------:|:-------:
0.01 | 28.09 | 5.46 | **0.718** | 4.31 | **57.85** | **87.82**
0.05 | 30.70 | 5.51 | 0.710 | 4.36 | 57.60 | 86.97
0.1 | 32.10 | **5.54** | 0.695 | **4.37** | 56.18 | 85.11
0.15 | 32.62 | 5.46 | 0.675 | 4.32 | 54.23 | 82.34
0.2 | **33.39** | 5.45 | 0.652 | 4.29 | 52.85 | 79.93
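For context on the $\mu$ ablation above, a momentum-accumulated gradient update in the style of MI-FGSM (a common choice in transfer attacks; we are not claiming this is ACA's exact update rule, and the function names, sign step, and clipping budget are illustrative assumptions) can be sketched as:

```python
import numpy as np

def momentum_step(grad, g_acc, mu):
    """Accumulate the L1-normalized gradient into a running direction g_acc,
    weighted by the momentum factor mu (mu = 0 disables momentum)."""
    return mu * g_acc + grad / (np.abs(grad).sum() + 1e-12)

def perturb(x, g_acc, alpha, kappa, x_orig):
    """Take a signed step along the accumulated direction and project the
    total perturbation back into the kappa-ball around the original input."""
    x = x + alpha * np.sign(g_acc)
    return np.clip(x, x_orig - kappa, x_orig + kappa)
```

With `mu = 0` the accumulated direction reduces to the normalized current gradient, matching the "without momentum" row of the $\mu$ table.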
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
AdaVAE: Bayesian Structural Adaptation for Variational Autoencoders
Accept (poster)
Summary: The paper proposes a structure adaptation algorithm specifically tailored to the well-known Variational Autoencoders (VAEs). The main motivation lies in the fact that the structure of a generative model plays a significant part in the overall performance, something that is very under-explored in the related VAE literature. To this end, the authors turn to the solid Bayesian inference framework and utilize a structural adaptation approach based on the Beta process to infer the optimal network depth and the Bernoulli process to prune neurons in each hidden layer. The experimental evaluation focuses on the adaptation abilities of the proposed framework, how the structure sample size affects convergence, and its overfitting-prevention properties. Finally, the authors discuss how to integrate the proposed approach into VAE backbones and other VAE variants in general. Strengths: This work focuses on a very interesting aspect of modern architectures, that is, their structure. The paper is overall easy to follow. The notation is clear and almost everything is well defined. By now there has been a lot of work in the community that aims to address this challenge through different views, such as pruning approaches, component omission mechanisms, etc. Compared to most existing approaches, the above formulation allows for simultaneously inferring both the network depth and the width of each layer in a principled way via Bayesian arguments. Weaknesses: 1) As the authors note, the idea of using a beta-Bernoulli pair in order to adapt the model capacity is not new. There exist several works in both discriminative and generative models that aim to tackle this issue [1,2,3,4,5]. The authors, however, fail to appropriately introduce and discuss the differences with, and advantages/drawbacks compared to, these methods, leading to a single two-sentence mention of one of the most similar approaches, i.e., BB-VAE.
Instead, the authors seem to focus more on the regularization aspect of other, dissimilar methods. 2) In this context, the work presented in [4] should be the main focus of comparison because it is essentially the same method, with minor adaptations for the VAE structure. Apart from the difference in estimation, i.e., Gumbel-Softmax vs. MIWAE, the differences are minor and come down to some notation changes. The authors do cite the paper at some point without any kind of discussion. 3) Taking into consideration that the method is very similar to [4], a core difference is the different estimator. Did the authors investigate the impact of other estimation methods? How does it compare to Gumbel-Softmax in terms of both complexity and convergence? 4) The authors should briefly expand on the chosen metrics in the respective tables. The negative log-likelihood is a well-known measure, while the MI and KL divergence can have substantially different interpretations. The authors cite the work of [6], which dissects the ELBO into three different terms that probably correspond to Tables 2 and 3. Further clarifications for the reader are essential. For example, why is a larger KL better? The purpose of optimization is the minimization of the KL divergence, and in [6] it is also noted that "whenever it is large it indicates a very strong and potentially unwanted regularization effect from the prior". 5) The stability of the training process might be an issue that isn't addressed in the main text, especially in the stick-breaking construction. Did issues arise during training due to very small or very large values, and how did the authors address them? 6) There is no discussion of the computational and memory complexity of the approach. It is apparent that the introduction of the additional latent parameters and the KL terms will significantly contribute to the overall footprint of the method, especially with multiple samples due to the MIWAE estimation.
Some wall-time measurements are necessary for both training and inference. 7) There is no clear definition of the prediction process of the framework. Do you draw multiple samples from the learned posteriors and sample the active ones? Do the authors use a threshold in "each sample"? (line 182) 8) The performance of the method on different VAE backbones is not clear. The authors mention the masking of convolutional channels in order to infer the number of convolutional layers, without any further expansion on the specific formulation and results. Does adaptation take place in specific feature maps, similar to [2], or are the only options to drop the layer or not? 9) Apart from the t-SNE visualization, some reconstruction comparisons could be useful. Considering these points and the similarity of this work to [4], I believe that the novelty and overall contribution of this work is very limited for publication. [1] Sotirios P. Chatzis. Indian buffet process deep generative models for semi-supervised classification. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). [2] Konstantinos Panousis, Sotirios Chatzis, and Sergios Theodoridis. Nonparametric Bayesian deep networks with local competition. In International Conference on Machine Learning, pages 4980-4988. PMLR, 2019. [3] Rachit Singh, Jeffrey Ling, and Finale Doshi-Velez. Structured variational autoencoders for the beta-Bernoulli process. In NIPS 2017 Workshop on Advances in Approximate Bayesian Inference, 2017. [4] Kishan KC, Rui Li, and Mohammad Mahdi Gilany. Joint inference for neural network depth and dropout regularization. In Advances in Neural Information Processing Systems (NeurIPS), volume 34, 2021. [5] Xu, W., Chen, R., Li, X., & Duvenaud, D. (2022). Infinitely Deep Bayesian Neural Networks with Stochastic Differential Equations. International Conference on Artificial Intelligence and Statistics. [6] Matthew D. Hoffman and Matthew J. Johnson.
ELBO surgery: Yet another way to carve up the variational evidence lower bound. In Workshop in Advances in Approximate Bayesian Inference, NeurIPS, volume 1, 2016. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the Weaknesses section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please see Weaknesses, especially concerning complexity and stability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Novelty of our framework compared with other beta-Bernoulli-based methods**: Our work diverges fundamentally from [1,2,3] by treating the expansion of VAE network structures as a stochastic process within a comprehensive framework, in contrast to their exclusive focus on regularizing the VAE latent variables while overlooking network structures. Moreover, our experiments in Figure 3 distinctly demonstrate that adapting network architectures is a notably more effective strategy for boosting VAEs' performance than one-size-fits-all methods. In comparison to [4], which infers feedforward network depths in supervised learning scenarios, we have introduced a novel generative learning approach tailored to diverse VAE variants. The novelty of our VAE structural adaptation framework is non-trivial because: 1) we developed a new estimator to jointly infer both network structures and latent variables; 2) we also enabled asymmetric encoding/decoding network structures in the inference procedure; 3) our extensive experiments show that our framework and our estimator boost the performance of different VAE backbone networks and various VAE variants, achieving state-of-the-art results. **Gumbel-Softmax vs. MIWAE**: Gumbel-Softmax is a reparameterization trick that relaxes categorical distributions to continuous ones. It is not a VAE estimator that we can employ or compare with, and the technique is not involved in or relevant to our work. **Discussion of chosen metrics**: We have a detailed discussion of the evaluation metrics in Appendix Section 5.1. **Stability of training**: We did not encounter any stability issues with our training algorithm. We provided a theoretical analysis of the algorithm in Theorem 1 and its proof in Appendix Section 2, and we analyzed its convergence during training in Appendix Section 6; it shows no stability issues. We provide the pseudocode in Appendix Section 4, and we have also included the code in the Supplementary Material.
**Computational and memory complexity**: We provide a computational complexity analysis in Lines 179-182. We also provide a detailed comparison of the running times of the VAE variants with/without our framework in the Supplementary Material/Appendix Section 8. **Prediction process**: We described the prediction process in the Supplementary Material/Appendix, Lines 23-24. For prediction, we compute the ELBO, i.e., the IWAE estimator in Eqn. (2), using 5000 importance samples and 1 structure sample. We will move this to the camera-ready version of our paper. **Performance on VAE variants**: We detail the implementation of combining our framework with VAE variants in Appendix Section 5.5. **Reconstruction evaluation**: We demonstrate reconstructions in Appendix Section 10.1 (Figures 7 and 8) and provide a qualitative comparison and analysis. --- Rebuttal Comment 1.1: Title: Additional points on the rebuttal Comment: We add a couple of points to address Reviewer kqwq's concerns. **Comparison with [2]**: The work presented in [2] is primarily centered on reducing network complexity by inferring Local Winner-Takes-All (LWTA) connections with an Indian Buffet Process (IBP) in order to regularize the network size. It is important to note that this study still needs to pre-specify a fixed network depth prior to training. Also, it only demonstrates the proposed approach in a supervised learning setting. Notably, the IBP can be derived as a marginalized version of the beta-Bernoulli process. However, the key difference in our use of beta-Bernoulli processes lies in applying them separately: we employ the beta process to model the growth of the encoding/decoding network depth, and the Bernoulli process to regularize the width. We will include the reference and our discussion in the revised version.
**Comparison with [5]**: The infinite parameters in [5] are introduced by continuous neural networks based on differential equations. The application of their approach is confined to supervised learning scenarios, and it remains unclear how this technique can be extended to VAEs and their variants in an unsupervised learning context. The method also has some limitations compared with our work; e.g., it requires a Lipschitz property to guarantee a unique solution. We concur with the reviewer that it is a good idea to further investigate how this class of methods could be applied to VAEs with different backbone networks; this will become part of our future research. We will cite and discuss this reference in the revised version of our manuscript.
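To make the separate roles of the two processes discussed in this thread concrete, here is a minimal illustrative sketch (our own toy code under stated assumptions, not the authors' implementation): a truncated stick-breaking beta process yields layer activation probabilities that decay with depth, and Bernoulli draws then mask individual neurons in each layer.

```python
import numpy as np

def sample_structure(alpha, max_depth, width, rng):
    """Sample a network structure from a truncated beta-Bernoulli construction:
    pi_l = prod_{j<=l} nu_j with nu_j ~ Beta(alpha, 1) (stick breaking), then
    a Bernoulli mask per neuron; the effective depth is the deepest layer
    with at least one active neuron."""
    nu = rng.beta(alpha, 1.0, size=max_depth)       # stick-breaking fractions
    pi = np.cumprod(nu)                             # non-increasing layer probs
    masks = rng.random((max_depth, width)) < pi[:, None]
    active_layers = np.nonzero(masks.any(axis=1))[0]
    depth = int(active_layers[-1]) + 1 if active_layers.size else 0
    return pi, masks, depth
```

Larger `alpha` pushes the stick fractions toward 1, so deeper structures become more probable; the truncation level `max_depth` plays the role of the fixed truncation acknowledged as a limitation.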
Summary: The paper proposes a method called AdaVAE that adapts the sizes (depth and width) of the inference and generative networks in VAEs. It extends the idea of the beta-Bernoulli process VAE, using a beta process over the expected number of activated neurons across depth and per-layer Bernoulli processes for the actual activations, followed by a dense Gaussian latent layer. The paper develops an estimator based on MIWAE to learn AdaVAE. Results show that AdaVAE can prevent overfitting and leads to better performance than previous regularization methods for VAEs. Strengths: ### originality The proposed AdaVAE for adapting VAE sizes is novel. ### quality The proposed method is technically sound. ### clarity The paper gives a good presentation of the proposed methods with technical details, which is easy to follow. ### significance The proposed method is generally applicable to different VAE architectures and is potentially useful as a standard regularization technique. Weaknesses: ### originality The proposed estimator is a straightforward combination of previous works. ### clarity The paper is very related to [11,36,54] but only limited discussion is given. I encourage the author(s) to discuss the relation between AdaVAE and these works and share intuitions on why those previous works (which only adapt the size of the latent layer) are not enough. Some of the figures with small legends are a bit hard to read. Figure 4 is purely qualitative; it is hard to convince readers that one method yields better representations than others. It looks like the figure is trying to give an idea of how different clusters are separated. If that is the case, some clustering metrics could be used to assess the quality quantitatively. ### significance The proposed method can still be computationally heavy to use, compared to a naive one without any structure adaptation.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In Figure 2, the last few layers of the encoder and the first few layers of the decoder seem to have no neuron activated. Can the author(s) clarify whether this is the case and, if so, why the VAE still works? Do we have any computation-wise comparison between the proposed method and alternative/naive ones? It would be useful for addressing practical concerns about actually using the method. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The paper mentions its limitations on the fixed truncation level and some future work on removing it. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We'd like to thank Reviewer p4pj for the constructive comments; they are valuable for our future research. **Relationship to [11,36,54]**: Our experimental results in Figure 3 show that methods such as [11,36], which only impose an IBP prior to regularize the dimensionality of the latent variables, are not sufficient to mitigate the overfitting caused by deep encoding/decoding networks. An intuition is that large-scale encoding networks tend to learn redundant information or higher levels of feature abstraction from the training data and embed it into the latent representation, which causes overfitting. Therefore, a more effective way to prevent this is to directly regularize the network structures, rather than constraining the latent variables. [54] proposes a way to relax the truncation of the stick-breaking process; part of our future research is to utilize this technique to improve our method. We will add this analysis to the camera-ready version. **Small legends**: We will enlarge all legends and axis labels in the camera-ready version. **Quantitative evaluations of Figure 4**: The quantitative evaluations of the VAE variants' performance in Figure 4 are reported in Table 2; the purpose of Figure 4 is only to give readers a qualitative intuition. Moreover, to further evaluate the quality of the latent representations, we conducted additional experiments and report downstream classification accuracy in Supplementary Material/Appendix Section 10.3, Table 8. This is a more effective way to assess the clustering. **Skipping non-activated layers**: We have skip connections, as in Eqn. 5, so that we can propagate the last activated layer to the output layer by skipping the non-activated layers in between. We will detail this point in the camera-ready version. **Computation-wise comparison**: We provide a computational complexity analysis in Lines 179-182.
We also compared the running times of our method and alternative ones, and reported and analyzed the results in Supplementary Material/Appendix Section 8. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I'm looking forward to the improvements in the revised version.
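The importance-weighted evaluation referenced in these rebuttals (an IWAE-style bound that averages importance weights inside the log) can be illustrated with a toy Gaussian model. This is our own self-contained example, not the paper's code; the specific densities are chosen only so the bound can be checked against a closed form.

```python
import numpy as np

def log_normal(x, mean, var):
    """Log-density of a univariate Gaussian N(mean, var)."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def iwae_bound(x, k, rng):
    """K-sample importance-weighted bound log((1/K) * sum_k w_k), with
    w_k = p(x, z_k) / q(z_k | x), for the toy model p(z)=N(0,1),
    p(x|z)=N(z,1) and proposal q(z|x)=N(x/2, 1/2). Here q is the exact
    posterior, so the bound is tight and equals log p(x) = log N(x; 0, 2)."""
    z = rng.normal(x / 2, np.sqrt(0.5), size=k)
    log_w = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0) - log_normal(z, x / 2, 0.5)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))  # numerically stable log-mean-exp
```

With a weaker proposal, the bound tightens monotonically as `k` grows, which is why evaluation typically uses a large number of importance samples (e.g., the 5000 mentioned above).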
Summary: This paper introduces a Bayesian structural adaptation framework that automatically adapts VAE network structures (both encoder and decoder) to the current data. By modeling the number of hidden layers as a beta process and performing layer-wise dropout regularization with the conjugate Bernoulli process, the proposed model develops a joint ELBO that can optimize all parameters (network structures and latent variables) via SGD. Empirical studies are conducted on three visual datasets and two graph datasets. They also provide extensive ablation studies to show the robustness. Strengths: (1) The paper addresses one of the common issues in the VAE community. The developed AdaVAE attempts to prevent overfitting by jointly learning the network structures and latent variables under a Bayesian framework. This motivation is sound, and the novelty of the method is generally ok. (2) The paper is written clearly and the empirical results show the improvement of the proposed model. Weaknesses: (1) One of the main concerns comes from the experiment section. The proposed model is only tested on several simple network structures (the number of layers is usually less than 25). Additional results on more expressive neural network structures (such as BIVA and NVAE) would improve the quality of this paper. (2) One of the goals of the proposed model is to prevent overfitting by adapting the network structures under a Bayesian framework. The authors compared the proposed model with existing VAE regularization methods, and the results show improvements. Unfortunately, other baselines (such as network structure search methods [1]) are not included in the experiments. [1] Corinna Cortes et al. AdaNet: Adaptive Structural Learning of Artificial Neural Networks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) I suggest that the authors add a training algorithm, which can help the reader understand the algorithm more easily. 
(2) From Fig. 6 (a) and (b), we find that AdaVAE has similar results to cVAE. Can the authors provide a deeper analysis? (3) Given that the proposed AdaVAE aims to find the optimal structure for the dataset at hand, it is advisable to include a comparison of network parameters in Table 2. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We'd like to thank Reviewer jhJ9 for your constructive comments. They are valuable for our future research. **Application to expressive VAE variants**: BIVA is an extension of LVAE, on which we conducted rigorous tests. We have additional results on BIVA/LVAE on the MNIST dataset as follows:

| | | |
|:-:|:-:|:-:|
| **Methods** | **-LL** | **KL** |
| BIVA/LVAE | 116.07$\pm$2.21 | **23.05$\pm$0.15** |
| Ours+BIVA/LVAE | **86.07$\pm$0.07** | 19.78$\pm$0.01 |

We show our framework's performance on NVAE on the CIFAR-10 dataset below, obtained by implementing a simplified NVAE with encoding and decoding networks built from a series of NVAE blocks.

| | | |
|:-:|:-:|:-:|
| **Methods** | **Reconstruction loss** | **KL** |
| NVAE | **13313$\pm$09** | 150$\pm$03 |
| Ours+NVAE | 13600$\pm$30 | **171$\pm$00** |

We also provide reconstructed samples from the two methods in the attached pdf. These additional results suggest our framework's flexibility on more expressive VAE structures. We will add them to the camera-ready version of our paper. **Difference with AdaNet**: There are several major differences setting our method and AdaNet apart: 1). AdaNet employs an iterative procedure to search for optimal network structures. Namely, at each iteration, it selects a set of candidate subnetworks and trains/re-trains the expanded alternative models. This is a time-consuming procedure. In contrast, our framework models the growth of VAE network structures as a stochastic process. Consequently, we can jointly infer the network structures and latent variables in a single training pass. 2). AdaNet incrementally constructs feedforward neural network structures in supervised learning settings. It remains uncertain whether it can be effectively extended to VAEs, considering its iterative searching process. Additionally, scalability could be a concern, as AdaNet solely examines network configurations with a maximum of three layers. 3). 
AdaNet does not demonstrate whether its method can be applied to different backbone networks, such as CNNs and GCNs. We will clarify these differences in the camera-ready version of our paper. **Training algorithm**: The pseudocode of our training algorithm is included in Supplementary Material/Appendix Section 4. We conducted additional experiments to analyze its convergence during training in Appendix Section 6. We have also included our code in the Supplementary Material. **Similar results between AdaVAE and cVAE in Figure 6**: The similar results between cVAE and AdaVAE for a smaller number of layers in Figures 6 (a) and (b) are due to the convolutional layers' preventive effect against overfitting. In comparison, Figure 3 shows that fully connected networks are more sensitive to overfitting. However, for deeper network structures, Figure 6 shows that cVAE still suffers from overfitting; AdaVAE, in contrast, can successfully mitigate it. We will elaborate on these results in the camera-ready version. **Comparison of network parameters**: First, we reported the detailed parameter settings of the baseline methods and ours in Supplementary Material/Appendix Section 5.3. Second, since the baselines activate the whole network structure, the total number of parameters for the encoding and decoding networks is $2\times O\times O \times L$, where $O$ denotes the maximum number of neurons per layer (i.e., the width) and $L$ is the number of layers. For $L=25$ layers and a width of $O=200$, they have $2$M parameters. With the same truncation size, AdaVAE only activates part of the network in general and fits the activated structure to the data. Thus, the activated numbers of parameters (neuron activation percentages) for $T=L=25$ are as follows:

| Methods | MNIST | Omniglot | Caltech101 |
|---------|--------------|--------------|-----------------|
| Ours | 0.32M (16%) | 0.36M (18%) | 0.21M (10.5%) |

As the table shows, AdaVAE uses a smaller number of parameters to achieve state-of-the-art performance. 
We will include the results in the camera-ready version of our paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarifications, which address most of my concerns. I would like to see this paper at the conference. --- Reply to Comment 1.1.1: Comment: We are truly grateful for Reviewer jhJ9's positive feedback and endorsement of our research. We kindly inquire whether the reviewer might consider revising our rating, which could potentially enhance our prospects of showcasing our work at the upcoming conference. Your consideration would be greatly valued.
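The parameter accounting in the rebuttal above can be sanity-checked with a few lines. This is a minimal sketch using only the formula $2\times O\times O\times L$ quoted there (function names are ours, not the authors'):

```python
def total_params(width_O, layers_L):
    """Total encoder+decoder parameters for a fully activated truncation,
    per the rebuttal's formula 2 * O * O * L (O = width, L = layers)."""
    return 2 * width_O * width_O * layers_L

full = total_params(200, 25)
print(full)                     # 2,000,000 parameters, i.e. the "2M" quoted above
print(round(0.32e6 / full, 2))  # MNIST activated fraction: 0.16, i.e. 16%
```

The 0.32M activated parameters reported for MNIST are 16% of the 2M fully activated total, matching the percentage in the rebuttal's table.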
Summary: The paper proposes a novel VAE structural adaptation strategy called AdaVAE based on Bayesian model selection to enhance model performance. It introduces a scalable estimator that facilitates joint inference on both encoding/decoding network structures and latent variables. The paper conducts a comprehensive analysis of AdaVAE's regularization capabilities and demonstrates its ability to effectively mitigate overfitting in both shallow and deep VAE models and achieve state-of-the-art performance. The versatility of AdaVAE is showcased by demonstrating its compatibility with different types of VAE backbone networks. It can also be readily applied to various VAE variants, thereby enhancing their performance. The main contributions are: - Proposes AdaVAE, a novel VAE structural adaptation strategy based on Bayesian model selection to enhance model performance. - Introduces a scalable estimator that facilitates joint inference on both encoding/decoding network structures and latent variables. - Conducts a comprehensive analysis of AdaVAE's regularization capabilities and demonstrates its ability to effectively mitigate overfitting in both shallow and deep VAE models and achieve state-of-the-art performance. - Showcases the versatility of AdaVAE by demonstrating its compatibility with different types of VAE backbone networks. - Can be readily applied to various VAE variants, thereby enhancing their performance. Strengths: 1. The authors introduce a novel Variational Autoencoder (VAE) structural adaptation strategy, dubbed AdaVAE, which employs Bayesian model selection as a mechanism to enhance model performance. This innovative approach pushes the boundaries of current practices in the field and sets a precedent for future explorations. 2. The study further contributes by proposing a scalable estimator. This facilitates joint inference on not only the structures of the encoding and decoding networks but also the latent variables. 
This dual focus enhances the model's applicability and comprehensiveness, potentially opening new avenues in inferential methodologies. 3. A thorough analysis is presented on AdaVAE's regularization capabilities, showcasing its efficacy in mitigating overfitting across both shallow and deep VAE models. This is a critical point, as it demonstrates the proposed method's capability to achieve state-of-the-art performance across varying levels of complexity. 4. The versatility of AdaVAE is effectively demonstrated, as the authors show its compatibility with different types of VAE backbone networks. This level of adaptability reinforces the model's potential for broad application and usability within the field. The authors execute a rigorous evaluation of the proposed method on three benchmark datasets: MNIST, Omniglot, and Caltech101 Silhouettes. This broad evaluation allows a comprehensive understanding of AdaVAE's performance and robustly positions it in relation to other state-of-the-art methods. 5. The paper includes a qualitative evaluation of the latent representations, an element that strengthens the argument for the proposed method's compatibility with VAE regularization methods. This kind of analysis enhances the empirical rigor of the study and provides an additional perspective on the utility of AdaVAE. 6. The manuscript offers a theoretical analysis of the proposed method and derives a tight lower bound with a high signal-to-noise ratio for parameter gradients. This theoretical grounding ensures the model is not only empirically valid but also theoretically sound. 7. The manuscript is particularly well-executed in terms of its structure and style. The clear writing, logical organization, and the authors' ability to elucidate complex concepts make it accessible and easy for readers to grasp the proposed method and its evaluation. 
In essence, the paper is a substantial contribution to the field, demonstrating methodological innovation, theoretical robustness, and a thorough evaluation, all of which underscore the potential and value of AdaVAE within the realm of VAE structural adaptation. Weaknesses: This manuscript is laudable for its comprehensive coverage, including a lucid motivation, rigorous theoretical proofs, and a thorough comparative analysis with benchmark models across various datasets. It demonstrates a high level of academic rigor and makes a compelling case for the proposed model. However, I do have a minor suggestion to enhance the paper further. While the evaluation conducted on Caltech101, among other datasets, certainly contributes to the robustness of the results, I believe there could be value in extending this evaluation to include more real-life, challenging datasets. Such datasets, replete with their inherent complexities, could serve to more thoroughly test and validate the model. This, in turn, would potentially bring the proposed model closer in comparison to other state-of-the-art generative models. It would also ensure that the model's performance is tested in conditions that mirror real-world applications, reinforcing the practical relevance and applicability of the model. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the above sections for detailed discussion. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, there is a section at the end of the paper discussing limitations and future research opportunities. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We'd like to thank Reviewer 9dFM for your time and your constructive comments. They are valuable for our future research. We agree with Reviewer 9dFM that extending the evaluation to include more real-life, challenging datasets will be valuable; it is part of our future work. In addition, we conducted additional experiments to test our framework on CIFAR-10. We implemented a simplified NVAE by designing encoding and decoding networks with a series of NVAE blocks. We report the performance below:

| | | |
|:-:|:-:|:-:|
| **Methods** | **Reconstruction loss** | **KL** |
| NVAE | **13313$\pm$09** | 150$\pm$03 |
| Ours+NVAE | 13600$\pm$30 | **171$\pm$00** |

We also provide reconstructed samples from the two methods in the attached pdf. We will add further results on real-life datasets in the camera-ready version of our paper. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. With everything considered, I'd stay with my current rating.
Rebuttal 1: Rebuttal: Attached here are the reconstructed samples of the CIFAR-10 dataset (related to the additional experimental results). Pdf: /pdf/96803a768fef34c27910e16b2bb1a2424af70912.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
One Fits All: Power General Time Series Analysis by Pretrained LM
Accept (spotlight)
Summary: In this paper, the authors suggest leveraging pre-trained large language models, specifically utilizing a frozen Transformer backbone, for comprehensive time series analysis. The unique method involves fine-tuning only the input embeddings of time series data and the output layers. The authors substantiate their proposal with large-scale experiments across multiple benchmarks, demonstrating that their approach consistently outperforms various established baseline methods in diverse time series analysis tasks. Strengths: 1. The paper's proposed method impresses with its simplicity, efficacy, and clear motivation. Despite not introducing any complex modules or additional hyperparameters to the existing model, it consistently outperforms more intricate techniques. 2. The experiments undertaken within the paper are robust and comprehensive. The authors have tested the proposed method across a broad spectrum of time series analysis aspects, such as long-term and short-term forecasting, and abnormality detection. In each scenario, the method demonstrates superior performance when compared with multiple strong baselines. 3. The insightful correlation drawn between self-attention and Principal Component Analysis (PCA) elucidates why the proposed method performs remarkably well. This strengthens the credibility of the results, making them more convincing. Weaknesses: 1. The paper neglects to detail the computational cost differences between the proposed method and alternative approaches. Given that a pre-trained GPT-2 backbone is potentially substantial in size, the computational expense of the presented method might significantly outweigh that of other baseline methods. 2. Although the proposed method seems compatible with various autoregressive Transformers, the experiments rely exclusively on GPT-2. It would be beneficial to extend the experimental scope to encompass additional models such as DeBERTa, GPT-J, OPT, among others. 3. 
The paper's overall presentation requires improvement. Tables appear congested, and font sizes within figures may be too small for legible printing. Additionally, there is an inconsistent distribution of space on some pages, and many section titles are overly lengthy. This compromises readability and detracts from the overall impact of the findings. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wish to express our sincere gratitude to Reviewer qj2Q for the positive assessment of our work. Your endorsement is both affirming and motivating, and it strengthens our resolve to continue our research in this field. We concur with Reviewer qj2Q's observation that the presentation of our paper could be enhanced. Specifically, we acknowledge that the tables appear cluttered and that the font sizes may be too small. This issue arose as we endeavored to comprehensively address a wide range of topics within the confines of this nine-page paper. Our goal was to deliver an exhaustive numerical analysis and provide a clear exposition of our knowledge transfer approach. We recognize that wading through all the supporting details can be challenging for reviewers, and we lament that space limitations forced us to omit some essential information in our supplementary material. It's noteworthy that our full paper, inclusive of all supporting information, spans over 30 pages. Nevertheless, we value your feedback and will make every effort to incorporate your suggestions in our revisions. ***Q1 for Reviewer qj2Q. The paper neglects to detail the computational cost differences between the proposed method and alternative approaches. Given that a pre-trained GPT-2 backbone is potentially substantial in size, the computational expense of the presented method might significantly outweigh that of other baseline methods.*** Thank you for pointing that out. We concur that evaluating the computational cost is crucial, especially for large models like the one under consideration. The subsequent table presents the results. Each baseline model is offered in two configurations, with model dimensions of 32 and 768 (analogous to GPT-2). Additionally, each baseline model consists of three layers. We assessed the computational expenses using a single batch of ETTh2 (with a batch size of 128) on a solitary V100 GPU. 
The results indicate that GPT-2(3) offers a marked enhancement in both time efficiency and parameter count relative to the baselines with equivalent model dimensions. This substantial uptick in time efficiency can be primarily attributed to the proficient optimization techniques employed by Hugging Face. Furthermore, the trainable parameters constitute only 6.12% for GPT-2(3) and 4.60% for GPT-2(6).

| Model | Training Params | Training Params Percentage (%) | Training Time for 1 Batch (s) | Inference Time for 1 Batch (s) |
| ------------- | --------------- | ----------------------------- | ----------------------------- | ------------------------------ |
| FEDformer-32 | 437,319 | 100 | 0.889 | 0.17 |
| TimesNet-32 | 1,905,015 | 100 | 0.747 | 0.302 |
| PatchTST-32 | 543,232 | 100 | 0.043 | 0.022 |
| FEDformer-768 | 33,105,415 | 100 | 0.208 | 0.056 |
| TimesNet-768 | 42,358,519 | 100 | 5.723 | 2.162 |
| PatchTST-768 | 19,677,024 | 100 | 0.457 | 0.123 |
| GPT-2(3)-768 | 3,906,912 | 6.12 | 0.093 | 0.032 |
| GPT-2(6)-768 | 3,916,128 | 4.60 | 0.104 | 0.054 |

***Q2 for Reviewer qj2Q. Although the proposed method seems compatible with various autoregressive Transformers, the experiments rely exclusively on GPT-2. It would be beneficial to extend the experimental scope to encompass additional models such as DeBERTa, GPT-J, OPT, among others.*** We wholeheartedly concur that broadening our experimental range to include more models would augment the depth of our study. However, given the significant effort and resources required to conduct a wide array of experiments, we primarily relied on GPT-2 for our core investigations. 
Nevertheless, we have also utilized the CV pre-trained model BEiT, and the NLP pre-trained model BERT. Both were trained on 5\% of ETTh2 and 5\% of ETTm2, underscoring that the capability for knowledge transfer isn't exclusive to GPT-2. Detailed results related to these models are available in Appendix H.5. As we move forward, our research will delve into the performance implications of transferring pre-trained models from diverse modalities to time series analysis. ***Q3 for Reviewer qj2Q. The paper's overall presentation requires improvement. Tables appear congested, and font sizes within figures may be too small for legible printing. Additionally, there is an inconsistent distribution of space on some pages, and many section titles are overly lengthy. This compromises readability and detracts from the overall impact of the findings.*** Thank you for your feedback. Owing to page constraints, we had to condense our figures and tables, inadvertently affecting their readability. In subsequent versions, we will refine the paper's layout, ensuring legible font sizes and optimal paragraph spacing. --- Rebuttal Comment 1.1: Comment: Thank you for your new experimental results and clarifications. I increased my review score to 8: Strong Accept --- Reply to Comment 1.1.1: Comment: We are thrilled to hear that our new experimental results and clarifications have made a positive impact on the paper. Your review has been invaluable in improving the quality of our work. Once again, thank you for your time, effort, and support.
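The fine-tuning scheme discussed throughout this review thread (keep the pre-trained attention and feed-forward blocks frozen; train only the input/position embeddings, the layer norms, and the output layer) reduces to a simple name-based selection rule. The sketch below is an assumed illustration, not the paper's code; the parameter names mimic Hugging Face GPT-2 naming conventions:

```python
# Substrings marking trainable parameters: token embedding (wte),
# position embedding (wpe), layer norms (ln_), and the output head.
TRAINABLE_KEYS = ("wte", "wpe", "ln_", "head")

def is_trainable(param_name):
    """Return True for parameters the scheme fine-tunes; attention and
    FFN weights fall through and stay frozen."""
    return any(key in param_name for key in TRAINABLE_KEYS)

example_params = [
    "transformer.wte.weight",              # token embedding   -> train
    "transformer.h.0.ln_1.weight",         # layer norm        -> train
    "transformer.h.0.attn.c_attn.weight",  # self-attention    -> freeze
    "transformer.h.0.mlp.c_fc.weight",     # feed-forward      -> freeze
    "lm_head.weight",                      # output layer      -> train
]
for name in example_params:
    print(f"{name}: {'train' if is_trainable(name) else 'freeze'}")
```

In a real PyTorch setup this rule would be applied as `param.requires_grad = is_trainable(name)` over `model.named_parameters()`; here it only illustrates which small fraction of parameters (roughly 5-6%, per the cost table in the rebuttal) ends up trainable.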
Summary: This paper shows that fine-tuning language models along with some task-specific layers, e.g. input and position embeddings and layer norms, can achieve comparable and state-of-the-art performance on various time series tasks, including forecasting, imputation, anomaly detection and classification. Strengths: (1) The method for time series analysis is simple and can work for different time series analysis tasks. (2) Results are comparable or state-of-the-art on different benchmark datasets and time series analysis tasks. Weaknesses: (1) This work mainly conducts experiments on simple benchmark datasets. Many real-world datasets have complicated dynamics and large noise, and the prior knowledge within pre-trained models does not necessarily align with such dynamics. If the success of using pre-trained models is due to the prior knowledge pre-trained models possess, what are the results if the time series data does not align with such prior knowledge? This paper does not seem to have a deep discussion of this matter. (2) It is unclear why GPT-2 (6) performs better than the full-layer fine-tuning baseline. Deep layers often tend to learn global information within the data. It is not clear why the authors use GPT-2 (6). The authors lack a deep discussion of this matter. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See Weaknesses. Missing references: (1) LogTransformer was proposed in [1]. Results of LogTransformer are in this paper but [1] was not cited. (2) Using pre-trained transformers to model time series is not a new idea. [2] uses pre-trained LMs to conduct time series forecasting and [3] uses Vision Transformer to model irregularly sampled time series. However, this paper does not discuss these two works or add them as baselines in appropriate tasks. [1] Li et al. Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting. NeurIPS 2019. [2] Xue and Salim. 
PromptCast: A New Prompt-based Learning Paradigm for Time Series Forecasting. [3] Li et al. Time Series as Images: Vision Transformer for Irregularly Sampled Time Series. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See Weaknesses and Questions section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer idRZ for the favorable evaluation of our methodology's simplicity and effectiveness, as well as its state-of-the-art performance across diverse time series tasks. We deeply appreciate your detailed and perceptive feedback. Rest assured, we are committed to addressing and resolving your concerns. ***Q1 for Reviewer idRZ. This work mainly conducts experiments on simple benchmark datasets. Many real-world datasets have complicated dynamics and large noise, and the prior knowledge within pre-trained models does not necessarily align with such dynamics. If the success of using pre-trained models is due to the prior knowledge they possess, what are the results if the time series data does not align with such prior knowledge? This paper does not seem to have a deep discussion of this matter.*** We concur wholeheartedly with the reviewer that the absence of a comprehensive, large-scale benchmark dataset, akin to ImageNet in computer vision, is a substantial challenge for the whole community. We eagerly anticipate future advancements that will offer a more extensive and diverse benchmark suite for researchers. However, we argue that we have included the most complicated benchmark datasets for time series forecasting in our study, e.g. the M4 dataset with 100,000 time series, which served as the foundation for the fourth Makridakis forecasting competition, widely recognized as one of the most representative competitions in time series forecasting. The benchmark datasets used in our study have also been used by 400 papers in the last two years, with 20 papers from premier conferences, according to a Semantic Scholar analysis. Although it is impossible for these datasets to showcase all the challenges faced by real applications of time series analysis, they do highlight a wide range of key challenges from real applications. We further argue that both LMs and time series analysis are based on the idea of auto-regression. 
It is this high-level similarity that inspires us to explore frozen LMs for time series analysis. When encountering a mismatch in prior assumptions between model and dataset, appropriate fine-tuning techniques can always be used to close the gap. ***Q2 for Reviewer idRZ. It is unclear why GPT-2 (6) performs better than the full-layer fine-tuning baseline. Deep layers often tend to learn global information within the data. It is not clear why the authors use GPT-2 (6). The authors lack a deep discussion of this matter.*** One reason for GPT-2(6) performing better than the full-layer fine-tuning baseline is the rank collapse property of transformers: as we go into deeper layers, more and more tokens exhibit similar vectors. Since most time series analyses depend on a relatively short history compared to texts, this further amplifies the impact of rank collapse. As a result, important detailed information may be removed from the outputs of the deep layers, limiting prediction accuracy. ***Q3 for Reviewer idRZ. LogTransformer was proposed in [1]. Results of LogTransformer are in this paper but [1] was not cited.*** Thanks for the reminder; we will add the reference for LogTransformer. ***Q4 for Reviewer idRZ. Using pre-trained transformers to model time series is not a new idea. [2] uses pre-trained LMs to conduct time series forecasting and [3] uses Vision Transformer to model irregularly sampled time series. However, this paper does not discuss these two works or add them as baselines in appropriate tasks.*** Although at a very high level using pre-trained transformers for time series analysis is not completely new, our study is clearly novel: we show it is possible to directly use a frozen LM to achieve state-of-the-art performance for time series analysis. 
Compared to text, time series data is noisier and more diverse, and much of it is application dependent, which makes cross-modality transfer learning challenging. We support this claim with both extensive empirical studies and theoretical analysis. Both papers the reviewer mentioned are interesting, and we intend to incorporate them into the related works section. However, we would like to clarify that both works are quite different and do not explore the exact same problem as ours. Specifically, [2] directly uses pre-trained LMs for time series forecasting through prompt engineering; its performance for multi-step forecasting is not close to the state of the art, and its emphasis is prompt engineering. In addition, prompt-based forecasting is limited and cannot be applied in many real-world applications. [3] leverages the ViT structure for time series analysis and does not address the main theme of this work, i.e. cross-modality transfer: directly using a frozen LM for time series analysis. --- Rebuttal 2: Title: A Supplement Note Comment: We would like to express our gratitude to Reviewer idRZ for taking the time to review our paper. Although it appears that Reviewer idRZ has been occupied during this period and we were unable to engage in a discussion to address the concerns directly, we would like to provide additional discussion in response to the review questions raised. **Supplement to Q1: what are the results if the time series data does not align with such prior knowledge in many real-world applications.** This question appears to be closely linked to the fundamental inquiry of why our method is effective in this context, as well as the circumstances under which cross-modality knowledge can be leveraged successfully. From the observation, i.e. 
that we can directly use a trained LM for time series forecasting without having to modify the model, we believe that the underlying model is doing something very **generic and independent from text**, despite being trained on text data. Our analysis aims to show that part of this generic function can be related to PCA, as minimizing the gradient with respect to the self-attention layer seems to do something similar to PCA. If that is the case, it can apply to the unknown real applications the reviewer mentioned. The second direction we explored is n-gram theory. The high-level idea is that **we can sample a less complex distribution from a very complex distribution**. The time series datasets available to us are notably less complex than the NLP dataset used to train GPT-2. Given this, if we assume that GPT-2's induction heads and FFNs are capable of modeling the intricate distribution found within the text data, we can always select a subset of heads and FFNs that aligns with the less complex nature of our time series analysis. Hence, if we can reasonably assume that the distribution complexity of most time series datasets is lower than that of text, achieving such a transfer becomes feasible. **Supplement to Q2: About "It is not clear why the authors use GPT-2 (6)"** GPT-2 is merely an example; in fact, many LLMs are also applicable to time series analysis. Given the significant effort and resources required to conduct a wide array of experiments, we primarily relied on GPT-2 for our core investigations. Nevertheless, we have also utilized the CV pre-trained model BEiT and the NLP pre-trained model BERT. Both were trained on 5% of ETTh2 and 5% of ETTm2, underscoring that the capability for knowledge transfer isn't exclusive to GPT-2. Detailed results related to these models are available in Appendix H.5. 
As we move forward, our research will delve into the performance implications of transferring pre-trained models from diverse modalities to time series analysis.
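As a concrete reference point for the PCA analogy invoked in this rebuttal, below is a minimal numpy sketch of the PCA projection itself, computed via SVD. It only illustrates the general-purpose operation that the analysis relates self-attention training to; the connection to attention is the paper's claim, not something this snippet demonstrates.

```python
import numpy as np

def pca_project(X: np.ndarray, k: int) -> np.ndarray:
    """Rank-k PCA reconstruction: project the centered rows of X
    onto the top-k principal directions (right singular vectors)."""
    Xc = X - X.mean(axis=0, keepdims=True)            # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = PCs
    components = Vt[:k]                                # (k, d) top-k directions
    return Xc @ components.T @ components              # best rank-k approximation

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 16))     # 128 samples, 16 features (toy data)
X_hat = pca_project(X, k=4)
# Residual of the rank-4 projection; shrinks to ~0 as k approaches 16.
print(np.linalg.norm(X - X.mean(axis=0) - X_hat))
```

By the Eckart-Young theorem, this rank-k projection is the best rank-k approximation of the centered data in Frobenius norm, which is why it serves as a natural "general-purpose" reference function in the analysis.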
Summary: The authors discuss a method by which a deep transformer model pre-trained on an NLP or CV task can be adapted to a wide variety of applications involving classification or prediction of scalar time series data. This includes an embedding scheme to represent the time series input in the same embedding space as expected by the transformer layers and a fine-tuning approach whereby the multi-head attention and feedforward components of the transformer blocks are frozen, but the embedding and layer norm components are trained. Using the GPT-2 backbone, they apply this method to classification, anomaly detection, imputation, and various forecasting tasks, showing generally good performance compared to many competitors.

Strengths: The problem of finding a broadly performant time series model is interesting and challenging, particularly in light of the diverse architectures proposed and mixed results obtained by deep neural networks for time series applications. The idea of simply using a pre-trained language model with minimal adaptation is surprising and worth discussion in the scientific community. The empirical analysis attempts to cover a very broad range of tasks and datasets, which is an appropriately high bar for a paper that proposes a "one fits all" approach to time series analysis.

Weaknesses:
* The time series applications I know best here are anomaly detection and long-range forecasting, and for both of these settings there are some issues with the reported results.
- For anomaly detection, there is significant information missing. How are binary anomaly decisions made from the reconstruction error? How are precision and recall computed when anomalies are given by windows rather than points (e.g. SMAP, MSL)? These decisions can have a strong influence on the reported results.
- While comparisons are extensive for anomaly detection, they do not seem to actually capture the state of the art.
For example, the OmniAnomaly method [1], which also naturally generalizes to multivariate time series, significantly outperforms the GPT2(6) model in F1 score on all datasets for which both are evaluated (SMD, MSL, SMAP). - The long-range forecasting results are likewise missing comparisons to methods that significantly outperform all reported scores. For example, the S4 model [2] reports lower MSE and MAE on Weather, ETTh1, ETTh2, and ETTm1 (see Table 13, [2]). - Moreover, the results reported for Informer in [2] are significantly better than what is reported in Table 13 of the present paper, beating GPT2(6) on ETTh1 and ETTh2. It is unclear what the source of this discrepancy is, but it is large enough to be relevant when drawing conclusions across methods. * The idea that a lightly-adapted NLP or CV transformer backbone is a highly competitive, highly general time series model is mildly shocking, at least to a time series practitioner. This paper largely misses the opportunity to explore *why* this might be the case, and *what* are the main drivers of this result. For example, do the pre-trained weights actually matter, or is it the architecture? Is fine-tuning required, and if so, why is fine-tuning only the embeddings and layer norm components sufficient? As it stands, the paper communicates an interesting empirical result without providing much insight as to its explanation. * The specific adaptations of the FPT approach - i.e. the time series encoder and fine-tuning protocol - are both inadequately described. The encoder operations should be described in mathematical detail. The algorithmic details of fine-tuning should at least be provided in the supplement. * The PCA analysis seems largely unrelated to the preceding work in the paper. It seems to be a general observation about self-attention, without any particular connection to the specific time series applications considered. 
Moreover, PCA itself is certainly not a one-size-fits-all solution to time series modeling, so it cannot be the basis for a convincing explanation of the model’s success. [1] Su, Y., Zhao, Y., Niu, C., Liu, R., Sun, W., & Pei, D. (2019). Robust anomaly detection for multivariate time series through stochastic recurrent neural network. KDD. [2] Gu, A., Goel, K., & Re, C. (2021). Efficiently modeling long sequences with structured state spaces. ICLR. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors comment on the discrepancy between Informer performance reported in their Table 13 vs. in [2]? What could explain the substantial difference in metrics on the same task? I certainly don’t mean to claim that [2] must be right, but the size of the difference is enough to affect the interpretation of the results, so it merits some discussion. [Addressed in discussion period] Could the authors share their perspective on *why* their FPT approach seems to work so well, and what are the main drivers of this result? What unique insights for time series representation are provided by the PCA analysis in Section 7? Why not report the results in Figure 1 as a table? They would be much easier to read and compare. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no direct potential for negative social impact. Potential limitations for the method and results are either discussed or covered above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We sincerely thank Reviewer kdgy for the positive evaluation of our work's intriguing and challenging contributions. We are deeply heartened by the sentiment that "The idea of simply using a pre-trained language model with minimal adaptation is surprising and worth discussion in the scientific community." Such feedback not only affirms our efforts but also motivates us to further our research in this domain. Thank you for the detailed and insightful comments that help us improve our work; we hope the following addresses your concerns.

***Q1: Anomaly detection -- how binary anomaly decisions are made, and how precision and recall are computed***

To make a fair comparison, we mainly follow the binary anomaly decision method from TimesNet [2] and use the same window size of 100 throughout the main text. Note that TimesNet [2] has performed an extensive comparison of different SOTA anomaly detection methods under this setting, which makes our empirical studies comparable to a wide range of prior work. More specifically, we focus on unsupervised time series anomaly detection. Experimentally, each dataset includes training, validation, and testing subsets, and anomalies are labeled only in the testing subset. We select the hyper-parameters following the Gap Statistic method (Tibshirani et al., 2001) used in K-Means, as described below:

* After the training phase, we apply the model to the validation subset (without labels) and obtain the anomaly scores (reconstruction loss) of all time points.
* We count the frequency of the anomaly scores in the validation subset and observe that their distribution separates into two clusters. The cluster with the larger anomaly scores contains r time points; for our model, r is close to 0.1%, 0.5%, and 1% for SWaT, SMD, and the other datasets, respectively.

Note that directly setting the threshold $\delta$ is also feasible:
We can fix $\delta$ at 0.1 for the SMD, MSL, and SWaT datasets and at 0.01 for the SMAP and PSM datasets, which yields performance quite close to that of setting r. We understand that more sophisticated strategies for these choices could further improve detection accuracy. We did not explore them, as that departs from the main theme of this work, i.e., pointing out the possibility of leveraging a frozen LM for various time series analysis tasks.

***Q2: Anomaly detection -- the state-of-the-art problem. The OmniAnomaly method [1], which also naturally generalizes to multivariate time series, significantly outperforms the GPT2(6) model in F1 score on all datasets for which both are evaluated (SMD, MSL, SMAP).***

Our approach is based on the Anomaly Transformer [4], substituting its joint criterion with the reconstruction error. In the table below, it is evident that integrating an additional association discrepancy via a learnable Gaussian kernel significantly bolsters the Anomaly Transformer's performance. Yet this introduces a quandary, as the Gaussian kernel does not affect the primary forecasting output or the reconstruction loss. To genuinely assess the inherent capabilities of each backbone algorithm -- primarily forecasting methods suited for broad time series analysis -- it appears more reasonable to focus exclusively on the reconstruction error. Given the results reported in the previous study [4], which indicated that the Anomaly Transformer outperforms OmniAnomaly, we chose to exclude OmniAnomaly from our comparison. Furthermore, as per Table 1 in reference [4], OmniAnomaly exhibits comparatively weaker performance than GPT2(6), primarily on the SMD, SWaT, and PSM datasets.
| Methods | GPT2(6) | TimesNet | Anomaly.* | Anomaly | OmniAnomaly |
| ------- | ------- | -------- | --------- | ------- | ----------- |
| SMD | 86.89 | 84.61 | 85.49 | 92.33 | 85.22 |
| MSL | 82.45 | 81.84 | 83.31 | 93.59 | 87.67 |
| SMAP | 72.88 | 69.39 | 71.18 | 96.69 | 86.92 |
| SWaT | 94.23 | 93.02 | 83.10 | 94.07 | 82.83 |
| PSM | 97.13 | 97.34 | 79.40 | 97.89 | 80.83 |
| Average | 86.72 | 85.24 | 80.50 | 94.91 | 84.69 |

\* We replace the joint criterion in the Anomaly Transformer (2021) with the reconstruction error for a fair comparison.

***Q3: Long-term forecasting -- S4 comparison and the Informer discrepancy***

First, the discrepancy in the reported Informer results between [2] and our work is due to the fact that [2] reports univariate forecasting, whereas our study focuses on multivariate forecasting. It is well recognized in previous studies [3, 5] that multivariate forecasting is considerably more challenging than univariate forecasting, as it aims to make predictions for multiple channels with a single model. Thus, most recent studies of time series analysis, e.g. [3, 5], shift to multivariate forecasting. Our study follows the same trend, and in fact most of the results for baseline methods come from [3]. This also explains the reviewer's observation that S4 reports lower MSE and MAE for several datasets in [2] -- again, those numbers are for univariate, not multivariate, forecasting. Our empirical studies did reveal significantly better results for univariate forecasting, which will be included in the appendix of the final version. Moreover, the performance of S4 in univariate forecasting is markedly inferior to that of FEDformer, Autoformer, and FiLM, as detailed in Table 10 of the FiLM appendix.

***Q4, Q5, Q6: addressed in the global response***

[3] Wu, H., Hu, T., Liu, Y., Zhou, H., Wang, J., and Long, M., TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis, ICLR, 2023.
[4] Xu, J., Wu, H., Wang, J., and Long, M., Anomaly transformer: Time series anomaly detection with association discrepancy, ICLR, 2022.
[5] Zhang, T., Zhang, Y., Cao, W., Bian, J., Yi, X., Zheng, S., and Li, J., Less is more: Fast multivariate time series forecasting with light sampling-oriented MLP structures.

---

Rebuttal Comment 1.1:

Title: Does the author's response address your questions?

Comment: Dear reviewer kdgy, one of the main issues you raised was about the quality of the reported results. Does the authors' rebuttal address these concerns? Thanks.

---

Reply to Comment 1.1.1:

Comment: Dear Chair wAFh, it appears that reviewer kdgy may be occupied and unable to respond at the moment. To enhance transparency in addressing the Q3 discrepancy, i.e., the quality of the reported results, we have incorporated two concise tables summarizing the multivariate and univariate forecasting outcomes with baseline algorithms. These tables report average MSE values over four forecasting horizons (96, 192, 336, 720). It is evident from the tables that the univariate MSE is considerably smaller on most datasets, which explains the discrepancy: the difference arises solely from the distinct experimental settings employed in the two works. Furthermore, it is worth noting that even in the univariate setting, S4 [2] does not achieve state-of-the-art performance; many of the baseline methods compared in our work outperform S4 [2]. Additionally, multivariate forecasting is considerably more challenging than univariate forecasting, as it aims to make predictions for multiple channels with a single model. Thus, recent baseline works [3, 4, 5, 6] primarily focus on comparing multivariate forecasting results without providing a comprehensive univariate table, and we have followed this trend in our work.
**Multivariate**

| Dataset | GPT2(6) | Dlinear | Fedformer | Autoformer |
| ------- | ------- | ------- | --------- | ---------- |
| ETTm2 | **0.266** | 0.267 | 0.305 | 0.327 |
| Electricity | **0.167** | 0.166 | 0.214 | 0.227 |
| Traffic | **0.414** | 0.434 | 0.610 | 0.628 |
| Weather | **0.237** | 0.249 | 0.309 | 0.338 |
| ILI | **1.925** | 2.169 | 2.847 | 3.006 |

**Univariate**

| Dataset | Dlinear | Fedformer | Autoformer | S4 |
| ------- | ------- | --------- | ---------- | -- |
| ETTm2 | **0.112** | 0.118 | 0.130 | 0.256 |
| Electricity | -- | **0.326** | 0.414 | 0.401 |
| Traffic | -- | **0.177** | 0.261 | 0.202 |
| Weather | -- | **0.007** | 0.008 | 0.006 |
| ILI | -- | **0.694** | 0.812 | 0.808 |

[2] Gu, A., Goel, K., & Re, C. (2021). Efficiently modeling long sequences with structured state spaces. ICLR.
[3] Wu, H., Hu, T., Liu, Y., Zhou, H., Wang, J., and Long, M. (2023). TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. ICLR.
[4] Zhou, T., Ma, Z., Wen, Q., Wang, X., Sun, L., & Jin, R. (2022). FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting. ICML.
[5] Zeng, A., Chen, M., Zhang, L., & Xu, Q. (2022). Are Transformers Effective for Time Series Forecasting? AAAI Conference on Artificial Intelligence.
[6] Nie, Y., Nguyen, N.H., Sinthong, P., & Kalagnanam, J. (2022). A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. ICLR.
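The binary-decision procedure described in Q1 of this thread (anomaly score = reconstruction error, threshold $\delta$ chosen so that roughly a fraction r of validation scores exceeds it) can be sketched as follows. This is a hedged illustration: the scores and labels below are synthetic stand-ins, the function names are hypothetical, and the window-based point-adjust evaluation used for datasets like SMAP/MSL is omitted.

```python
import numpy as np

def threshold_from_ratio(val_scores: np.ndarray, r: float) -> float:
    """Pick delta so that roughly a fraction r of validation scores lies above it."""
    return float(np.quantile(val_scores, 1.0 - r))

def f1_from_scores(test_scores: np.ndarray, labels: np.ndarray, delta: float) -> float:
    """Point-wise F1: flag a time point as anomalous when its score exceeds delta."""
    pred = (test_scores > delta).astype(int)
    tp = int(np.sum((pred == 1) & (labels == 1)))
    fp = int(np.sum((pred == 1) & (labels == 0)))
    fn = int(np.sum((pred == 0) & (labels == 1)))
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)

rng = np.random.default_rng(0)
val_scores = rng.exponential(size=10_000)              # stand-in reconstruction errors
delta = threshold_from_ratio(val_scores, r=0.01)       # flag the top 1%
# Synthetic test set: 990 normal points plus 10 injected anomalies above delta.
test_scores = np.concatenate([rng.exponential(size=990),
                              delta + 1.0 + rng.exponential(size=10)])
labels = np.concatenate([np.zeros(990, dtype=int), np.ones(10, dtype=int)])
print(f1_from_scores(test_scores, labels, delta))
```

With a real model, `val_scores` and `test_scores` would be the per-time-point reconstruction losses on the (unlabeled) validation and test subsets, and r would be set per dataset as described in the rebuttal.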
Summary: The paper presents a unified framework for time-series tasks using pre-trained language models. The authors build a unified model through a fine-tuning approach that tunes only specific parts of the pre-trained language model rather than the entire set of parameters. As a result of this adaptation, the proposed fine-tuned model achieves performance comparable to previous methods across various datasets. Experimental results also demonstrate the versatility of the fine-tuned model by successfully transferring knowledge from pre-training datasets such as images and text.

Strengths:
* The proposed framework surpasses existing methods.
* The paper is easily comprehensible and straightforward.
* The evaluation on various time-series tasks highlights the advantages of the proposed approach in numerous cases.

Weaknesses:
* Novelty: The proposed fine-tuning approach appears to be more of an incremental improvement. Fine-tuning pre-trained language models is not entirely new.
* Concerns regarding practicality: Does the architecture based on pre-trained language models require more computational resources compared to other models? It would be helpful to analyze the computation cost for inference in fair comparison settings, such as using the same number of parameters, to demonstrate the effectiveness of the proposed approach.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
* Why did the proposed approach fail to demonstrate good performance in the zero-shot task, unlike in the other tasks?

Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We deeply appreciate Reviewer GLop's positive acknowledgment of our methodology's efficacy and numerical performance. We are especially grateful for the detailed and insightful feedback provided; rest assured, we are dedicated to addressing your concerns and enhancing our work.

***Q1 for Reviewer GLop. Novelty: The proposed fine-tuning approach appears to be more of an incremental improvement. Fine-tuning pre-trained language models is not entirely new.***

Evidently, fine-tuning pre-trained language models for in-modality transfer learning is not a new concept and has been widely adopted in various domains. However, the key novelty of our work is cross-modality knowledge transfer: showing that a pre-trained language model learned from text can be successfully used for time series analysis with most of its parameters frozen. Compared to text, time series data is noisier and more diverse, and much time series data is application dependent, which makes cross-modality transfer learning challenging. In this paper, we support this claim both with empirical studies and with the analysis of self-attention. We believe that such a transfer is "mildly shocking", as Reviewer kdgy highlighted, since no previous work has demonstrated that such a transfer is possible, let alone achieved state-of-the-art performance across all downstream time-series analysis tasks. Furthermore, while the current study is limited to a relatively simple LM (GPT-2), we envision that a more complicated LM, such as LLaMA, can lead to further improvements in time series analysis.

***Q2 for Reviewer GLop. Concerns regarding practicality: Does the architecture based on pre-trained language models require more computational resources compared to other models?
It would be helpful to analyze the computation cost for inference in fair comparison settings, such as using the same number of parameters, to demonstrate the effectiveness of the proposed approach.***

We strongly agree that an analysis of computational cost is helpful for investigating the practicality of the LLM-based model. The results can be found in the table below. Each baseline model comes in two variants, with model hidden dimensions of 32 and 768, the latter matching GPT-2's specification. Furthermore, the majority of the baseline models consist of three layers. We assessed the computational cost using one batch from ETTh2 (batch size 128) on a 32G V100 GPU. The results indicate that GPT-2(3) has substantially better time efficiency and a smaller learnable-parameter count than baselines with the same model dimension. This was a surprise, since we initially anticipated that this large language model might be slower; we surmise that the efficient optimization of Hugging Face's GPT implementation primarily accounts for the significant improvement in time cost. Furthermore, the learnable parameters of GPT-2(3) and GPT-2(6) amount to a mere 6.12% and 4.60% of their overall parameter counts, respectively.
| Model | Training Params | Training Params Percentage (%) | Training Time for 1 Batch (s) | Inference Time for 1 Batch (s) |
| ------------- | --------------- | ------------------------------ | ----------------------------- | ------------------------------ |
| FEDformer-32 | 437,319 | 100 | 0.889 | 0.17 |
| TimesNet-32 | 1,905,015 | 100 | 0.747 | 0.302 |
| PatchTST-32 | 543,232 | 100 | 0.043 | 0.022 |
| FEDformer-768 | 33,105,415 | 100 | 0.208 | 0.056 |
| TimesNet-768 | 42,358,519 | 100 | 5.723 | 2.162 |
| PatchTST-768 | 19,677,024 | 100 | 0.457 | 0.123 |
| GPT-2(3)-768 | 3,906,912 | 6.12 | 0.093 | 0.032 |
| GPT-2(6)-768 | 3,916,128 | 4.60 | 0.104 | 0.054 |

***Q3 for Reviewer GLop. Why did the proposed approach fail to demonstrate good performance in the zero-shot task, unlike in the other tasks?***

For the zero-shot tasks, our goal is to verify the representation power of LLMs for time series analysis, so we focus on comparisons with a few recently proposed algorithms, such as DLinear, PatchTST, and TimesNet. Our empirical studies show that GPT-2(6) yields performance similar to these state-of-the-art methods designed for time series analysis. However, none of these methods is specially designed for zero-shot learning. In contrast, N-BEATS, as noted in [1, 2], has a unique model design (e.g., backcasting and ensemble learning) that enables domain adaptation without modifying its weights, making it particularly suitable for zero-shot learning. That explains why N-BEATS clearly outperforms the other competitors in zero-shot learning.

[1] Wu, H., Hu, T., Liu, Y., Zhou, H., Wang, J., and Long, M., TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis, ICLR, 2023.
[2] Oreshkin, B. N., Carpov, D., Chapados, N., and Bengio, Y, N-beats: Neural basis expansion analysis for interpretable time series forecasting, arXiv:1905.10437, 2019. --- Rebuttal Comment 1.1: Comment: I appreciate the author's additional experiments and clarification of your work. Thus, I increased the score to 6. --- Reply to Comment 1.1.1: Comment: Your review has been immensely valuable in enhancing the caliber of our work. We appreciate your dedication, assistance, and encouragement. It is great to know that our latest experiments and explanations have had a beneficial effect on the paper.
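The per-batch cost comparison in Q2 of this thread can be reproduced with a simple harness along the following lines. The tiny model and random batch below are stand-ins (not the actual GPT-2 or baseline configurations), so only the measurement pattern is meant to carry over.

```python
import time
import torch
from torch import nn

def time_one_batch(model: nn.Module, batch: torch.Tensor, train: bool) -> float:
    """Wall-clock seconds for one training step (forward + backward)
    or one inference pass on a single batch."""
    loss_fn = nn.MSELoss()
    start = time.perf_counter()
    if train:
        model.train()
        out = model(batch)
        loss = loss_fn(out, torch.zeros_like(out))  # dummy target
        loss.backward()
    else:
        model.eval()
        with torch.no_grad():
            model(batch)
    return time.perf_counter() - start

# Stand-in model with a 768-dim hidden layer, echoing the table's dimension.
model = nn.Sequential(nn.Linear(64, 768), nn.GELU(), nn.Linear(768, 64))
batch = torch.randn(128, 64)  # batch size 128, as in the rebuttal's setup
print(f"train: {time_one_batch(model, batch, train=True):.4f}s, "
      f"infer: {time_one_batch(model, batch, train=False):.4f}s")
```

On a GPU one would additionally call `torch.cuda.synchronize()` before each clock reading, since CUDA kernels launch asynchronously; otherwise the reported times can be misleadingly small.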
Rebuttal 1:

Rebuttal: We thank the reviewers for the insightful comments and detailed feedback. We are delighted that the reviewers find our paper has the following strengths:

**Innovative findings**: Using pretrained transformers for time series. (qAFk, kdgy, qj2Q)
**Clear writing**: Easily comprehensible content. (qAFk, GLop, qj2Q)
**Robust analysis**: Wide-ranging experiments across tasks/datasets. (qAFk, GLop, kdgy, qj2Q)
**Simplicity & top-tier results**: Outperforms complex techniques. (GLop, idRZ, qj2Q)
**Attention & PCA insight**: Explains the model's success. (qAFk, qj2Q)

We have addressed certain concerns in the individual rebuttals. However, owing to space limitations and the extent of the questions from Reviewer kdgy, we have moved some responses to this global reply.

***Q4 for Reviewer kdgy. The specific adaptations of the FPT approach - i.e. the time series encoder and fine-tuning protocol - are both inadequately described. The encoder operations should be described in mathematical detail. The algorithmic details of fine-tuning should at least be provided in the supplement.***

We use the GPT-2 model [6] as our time series encoder; its mathematical details and architecture can be found in [6, 7], and given the page limit we cannot go into further detail in the paper. In the fine-tuning stage, we freeze certain parameters of GPT-2 by setting `requires_grad` to false in PyTorch, as shown in our provided code. We will provide pseudocode for the GPT-2-based method in the appendix of the final version.

***Q5 for Reviewer kdgy. Could the authors comment on the discrepancy between Informer performance reported in their Table 13 vs. in [2]? What could explain the substantial difference in metrics on the same task? I certainly don’t mean to claim that [2] must be right, but the size of the difference is enough to affect the interpretation of the results, so it merits some discussion.***

The tasks referenced in Table 13 and [2] are distinct.
[2] exclusively reports results for univariate forecasting, whereas our experiments focus on multivariate long-term forecasting, leading to a noticeable disparity in performance. Our experimental settings and baseline citations are consistent with previous works such as [3]. We will provide clearer details of our experimental procedures in the revised version.

***Q6 for Reviewer kdgy. This paper largely misses the opportunity to explore why this might be the case, and what are the main drivers of this result. For example, do the pre-trained weights actually matter, or is it the architecture? Is fine-tuning required, and if so, why is fine-tuning only the embeddings and layer norm components sufficient? As it stands, the paper communicates an interesting empirical result without providing much insight as to its explanation. Could the authors share their perspective on why their FPT approach seems to work so well, and what are the main drivers of this result? What unique insights for time series representation are provided by the PCA analysis in Section 7?***

We have run extensive experiments with various fine-tuning settings but did not include them in the paper due to space limitations. Our empirical studies show that the weights learned by the LM are essential to the success of cross-modality transfer. In fact, full fine-tuning of all the weights leads to significant degradation in performance -- that is the reason we freeze all the weights of the FFN and attention layers and tune only the embeddings and layer norms. To explain why FPT works so well for time series analysis, we try to understand the function of the self-attention layers and find that it may be closely related to PCA, a general-purpose function independent of any domain. This domain-independent property delivered by the attention layers makes us believe in the possibility of using a frozen LM for time series.

***Q7 for Reviewer kdgy. Why not report the results in Figure 1 as a table?
They would be much easier to read and compare.***

Thank you for your suggestion. Due to page constraints, we consolidated the classification results into a figure, which admittedly affected readability. We plan to reorganize the layout and include a table of the results either in the main body or the appendix. Moreover, our presentation mirrored that of the baselines: since the TimesNet paper presents classification outcomes in a comparable format, we felt it prudent to maintain this consistency to facilitate comparison.

| | XGBoost | Rocket | LSTNet | LSSL | TCN | DLinear | LightTS | TimesNet |
| -------- | ------- | ------ | ------ | ---- | ---- | ------- | ------- | -------- |
| Accuracy | 66.0 | 72.5 | 71.8 | 70.9 | 70.3 | 67.5 | 70.4 | 73.6 |

| | Transformer | Reformer | Informer | Pyraf. | Autof. | Stationf. | FEDf. | ETSf. | Flowf. | GPT2(6) |
| -------- | ----------- | -------- | -------- | ------ | ------ | --------- | ----- | ----- | ------ | ------- |
| Accuracy | 71.9 | 71.5 | 72.1 | 70.8 | 71.1 | 72.7 | 70.7 | 71.0 | 73.0 | 74.0 |

[2] Gu, A., Goel, K., & Re, C. (2021). Efficiently modeling long sequences with structured state spaces. ICLR.
[3] Wu, H., Hu, T., Liu, Y., Zhou, H., Wang, J., and Long, M., TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis, ICLR, 2023.
[6] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I., Language models are unsupervised multitask learners, 2019.
[7] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I., Attention is all you need, arXiv:1706.03762, 2017.
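The fine-tuning protocol sketched in Q4 -- freeze everything, then re-enable gradients only for the layer-norm and embedding parameters via `requires_grad` -- can be written in a few lines of PyTorch. The small stand-in module and the selection of parameters by name substring below are illustrative assumptions, not the authors' released code or the actual GPT-2 backbone.

```python
import torch
from torch import nn

class TinyBackbone(nn.Module):
    """Stand-in for a transformer backbone: embedding, attention, norm, FFN."""
    def __init__(self, d: int = 32, vocab: int = 100):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(d)
        self.ffn = nn.Linear(d, d)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(ids)
        x, _ = self.attn(x, x, x)
        return self.ffn(self.norm(x))

def freeze_except(model: nn.Module, keep=("norm", "embed")) -> None:
    """Freeze all parameters, keeping gradients only where the name
    contains one of the `keep` substrings (an assumption for this sketch)."""
    for name, p in model.named_parameters():
        p.requires_grad = any(k in name for k in keep)

model = TinyBackbone()
freeze_except(model)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable}/{total} ({100 * trainable / total:.2f}%)")
```

On the real GPT-2 backbone, the rebuttal reports that this style of freezing leaves only 6.12% (GPT-2(3)) and 4.60% (GPT-2(6)) of the parameters trainable.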
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes to use a pretrained Transformer for time series analysis. The authors freeze the attention and FFN layers but fine-tune the position embeddings and add-norm modules to adapt the model to a given task. The results demonstrate that this method leads to strong performance on time series analysis tasks. The authors prove a theorem to explain why the text-pretrained model generalizes well to time series analysis, drawing insights from PCA.

Strengths: The paper presents very interesting theoretical and empirical findings. Their extensive experiments show that a pretrained text-based transformer model can be adapted to time series analysis and achieve strong performance. I think the findings are worth presenting to the ML community. Their theoretical analysis explains this by connecting the self-attention mechanism with PCA, for which I have concerns/questions as expressed in Weaknesses. The paper is overall clearly written.

Weaknesses: I didn't give a high soundness score because the paper hasn't well justified the choice of tuning add-norm layers and positional embeddings. Line 117 claims that it "is considered a standard practice" but I am afraid that is not true. Parameter-efficient adaptation has been a hot topic and there have been various kinds of methods, such as adding feed-forward adapters (Houlsby et al. ICML 2019) or attentional adapters (Zhao et al. EMNLP 2022), tuning bias terms (Zaken et al. ACL 2022), LoRA (Hu et al. ICLR 2022), and prefix tuning (Li and Liang ACL 2021, Qin and Eisner NAACL 2021). There hasn't been a "standard". I think the paper can be improved with experimental analysis of different kinds of LM adaptation methods, which would complement the current results. More importantly, the choice of tuning parameters is related to the theorem in the paper: the PCA insights are drawn for the self-attention mechanism, making me wonder: does the choice of tuning parameters lead to the analysis of the self-attention?
Or does the analysis of self-attention lead to the choice of tuning parameters? Or neither? Will results with any other adaptation methods give you different kinds of interpretation? The mentioned references are: https://arxiv.org/abs/1902.00751 (already cited) https://arxiv.org/abs/2211.01979 https://arxiv.org/abs/2106.10199 https://openreview.net/forum?id=nZeVKeeFYf9 https://arxiv.org/abs/2101.00190 https://aclanthology.org/2021.naacl-main.410/ Another line of work that this paper should discuss is the NLP literature that also finds the "per-layer high cosine" phenomenon. This phenomenon is first discussed for skip-gram embeddings: https://aclanthology.org/D17-1308.pdf Then it is also found in Transformers: https://arxiv.org/pdf/1909.00512.pdf There is also argument about why cosine metric may not mean that much: https://aclanthology.org/2022.acl-short.45/ The above is to only name a few and there are other papers that one can find by tracing their citation relations. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We sincerely thank Reviewer qAFk for the favorable evaluation of our work's theoretical and empirical contributions. It is particularly encouraging to note your belief that the findings merit presentation to the ML community; such endorsement validates and inspires our continued dedication to this research. We deeply appreciate your detailed and perceptive feedback, and we are committed to addressing your concerns.

***Q1 for Reviewer qAFk. The choice of tuning add-norm layers and positional embeddings, and the related PEFT studies***

We agree with the reviewer that efforts like tuning normalization layers, tuning position embeddings, and parameter-efficient fine-tuning can further improve performance. In fact, we have conducted experiments with PEFT and observed an improvement. However, we chose not to include these findings in the current work in order to highlight the core contribution of cross-modality knowledge transfer to time series. We focus on demonstrating that pre-trained language models can yield strong performance in general time series analysis with minimal fine-tuning, with cross-modality knowledge transfer as our central theme. To support this, we have conducted extensive experiments with frozen FFN and attention layers, using pre-trained parameters mixed with random values to show that the pre-trained transformer block is vital. While our initial experiment with PEFT shows encouraging results, it introduces new learnable parameters and layers that do not fully align with our main argument, and it is thus left for future examination.

| Weather | 96 (MSE) | 192 (MSE) | 336 (MSE) | 720 (MSE) |
| ---------------------- | -------- | --------- | --------- | --------- |
| GPT-2 (6) | 0.162 | 0.204 | 0.254 | 0.326 |
| PatchTST | 0.149 | 0.194 | 0.245 | 0.314 |
| GPT-2 (6) + Adapter | 0.147 | 0.197 | 0.243 | 0.313 |
| GPT-2 (6) + NewAdapter | 0.143 | 0.188 | 0.239 | 0.310 |

***Q2 for Reviewer qAFk.
The PCA insights are drawn for the self-attention mechanism, making me wonder: does the choice of tuning parameters lead to the analysis of the self-attention? Or does the analysis of self-attention lead to the choice of tuning parameters? Or neither? Will results with any other adaptation methods give you different kinds of interpretation?*** At a high level, we aim to test with minimal tuning whether NLP pre-trained model parameters can be transferred to time series analysis; this was initially inspired by our empirical studies and is further supported by our analysis that self-attention can essentially deliver the general-purpose function of PCA. ***Q3 for Reviewer qAFk. Another line of work that this paper should discuss is the NLP literature that also finds the "per-layer high cosine" phenomenon.*** Thank you for the invaluable suggestion! We will incorporate the literature on "per-layer high cosine" in the revised version. --- Rebuttal Comment 1.1: Title: Thank you but new discussion needed. Comment: Thank you for your clarification and new results. However, I am afraid that my main concerns are not resolved yet. First, I was not suggesting tuning more things; instead, I was suggesting you give a deeper discussion on why you chose to tune certain parameters but not the others, and why you chose to use a certain PEFT method but not the others. You mentioned "minimal tuning", which seems relevant. But what I expect is a more in-depth discussion, hopefully supported by numbers and statistics. Second, I was concerned whether your theoretical analysis would be affected by your choice of PEFT, which doesn't seem to be resolved either. So I would still like to maintain my current rating. But I am open to more discussion. --- Reply to Comment 1.1.1: Comment: We sincerely apologize for any misunderstanding that may have occurred regarding your question. We have conducted several experiments, although they have only been briefly presented in the main draft and SI.
Below, we summarize our findings in the table. Initially, we did not have any preconceived notions about which parameters needed to be tuned for optimal performance. Nevertheless, we did hope that minimal fine-tuning would suffice, since it aligns with our message and also leads to a reduction in training time. However, if other fine-tuning choices had yielded better results, we would have prioritized reporting those as our main settings, as we always prioritize performance above all else. These choices were based on an ablation study conducted during our time series forecasting experiment. Among the settings we tested were: no-freeze pretrained-weight full fine-tuning, no-pretrain Kaiming-initialization full training, frozen Kaiming-initialization training, only fine-tuning the FFN layers, only fine-tuning the attention layers, and the GPT2(6) attention-FFN pretrained-frozen setting. We found that the attention-FFN pretrained-frozen setting yielded the best results, leading us to choose it as our optimal tuning setting.

| | FFN-Att pretrain-Freeze | No Freeze-Full-Finetune | No Pretrain-Full-training | No Pretrain + Freeze | Pretrain-Finetune FFN-only | Pretrain-Finetune Attention-only |
| ---------------- | ------- | --------- | ----------- | -------------------- | -------------------- | ------------------------ |
| ETTh2 96(mse) | **0.376** | 0.440 | 0.465 | 0.540 | 0.469 | 0.443 |
| ETTh2 96(mae) | **0.421** | 0.449 | 0.457 | 0.497 | 0.463 | 0.446 |
| ETTh2 192(mse) | **0.418** | 0.503 | 0.614 | 0.721 | 0.487 | 0.600 |
| ETTh2 192(mae) | **0.441** | 0.478 | 0.536 | 0.580 | 0.470 | 0.524 |

Based on this observation, we began to explore the reasons behind this success and why the pretrained frozen attention module seems to function universally across domains. This led us to propose a theoretical connection between PCA and the attention model, as we attempt to explain the aforementioned findings.
Regarding PEFT, as stated in the preliminary results, it did improve our performance, but we still kept the original FFN and attention layers fixed, the same as in this work, adding a new layer for training while keeping the pretrained weights untouched. We believe this still matters in that case and supports our PCA analysis to some degree. We hope this revised explanation has addressed your concerns. Please do not hesitate to contact us if we have not answered your question completely. In addition, we would like to express our gratitude for your request for clarification. Your feedback has been immensely valuable in enhancing the overall quality of our work.
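The "attention-FFN pretrained frozen" recipe discussed in this rebuttal can be sketched generically in PyTorch; the helper name and the name-matching heuristic for what stays trainable are our assumptions for illustration, not the authors' actual code or the real GPT-2 parameter names.

```python
import torch.nn as nn

def freeze_pretrained_blocks(model, trainable_keys=("norm", "embed", "pos")):
    """Freeze every parameter except those whose name contains one of the
    trainable keys (layer norms, embedding/positional tables); the pretrained
    attention and FFN weights stay fixed."""
    for name, param in model.named_parameters():
        param.requires_grad = any(key in name for key in trainable_keys)
    return model

# Tiny stand-in for the pretrained transformer backbone:
layer = nn.TransformerEncoderLayer(d_model=16, nhead=2, dim_feedforward=32)
encoder = freeze_pretrained_blocks(nn.TransformerEncoder(layer, num_layers=2))
```

Only the norm (and, in a real GPT-2, embedding/positional) parameters then receive gradients, which is what makes the fine-tuning "minimal".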
Triangulation Residual Loss for Data-efficient 3D Pose Estimation
Accept (poster)
Summary: This paper focuses on the task of multi-view 3D human/animal pose estimation, and proposes the Triangulation Residual Loss, termed TR Loss, to constrain multi-view geometric consistency in a self-supervised manner. TR Loss is used to minimize the distances between the predicted 3D point and the view rays, and is simply implemented as minimizing the smallest singular value of the triangulation matrix. Experiments verify the effectiveness of TR Loss on both laboratory mouse pose estimation and human pose estimation. Strengths: - TR Loss is simple yet effective; it self-supervises multi-view geometric consistency by minimizing the smallest singular value of the triangulation matrix without 3D annotations or the heavy computation of reprojection. - A new THmouse dataset is constructed to help build a benchmark for laboratory mouse pose estimation. - SOTA MPJPE results on the Human3.6M dataset are achieved, which verify the effectiveness of TR Loss. Weaknesses: - In Fig. 1, there are three groups of methods discussed; however, the quantitative comparisons with those of the second group (cf. Fig. 1b), especially in terms of accuracy and efficiency, are not presented in the experiments. - For the newly built benchmark of laboratory mouse 3D pose estimation, the experiments in Sec. 4.2 seem to be ablation studies. No SOTA methods are directly applied on this benchmark. It is suggested to provide the results of some recent methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In Line 158, the weights are scaled to (0.4, 0.6). Does this limit the effect of the predicted confidence? Why are the hyperparameters (0.4 and 0.6) set this way? Ablation studies are needed. - In Table 2, the best results are not bolded. - In Table 3, it seems that the result of MTFT-Transformer is not consistent with that in their paper. And there are some methods listed in their paper and not compared in this paper, e.g., [1] and [2], which achieve better results.
For different ratios of training data (5% and 100%), what are the ratios of unlabeled data? - In Fig. 3(b), why TR Loss has stronger generalization ability than 3D supervised loss? Reference: [1] AdaFuse: Adaptive Multiview Fusion for Accurate Human Pose Estimation in the Wild. [2] Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do not discuss the limitations and potential negative societal impacts of their work in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: In Fig. 1, there are three groups of methods discussed; however, the quantitative comparisons with those of the second group (cf. Fig. 1b), especially in terms of accuracy and efficiency, are not presented in the experiments. R1: We chose GeneralTriang[10] as a comparison method here. As the results in the table in R2 show, GeneralTriang performs worse than our method. It is also less efficient than our method. The average per-batch running times of the baseline model, our method, and GeneralTriang are 0.389s, 0.463s, and 2.05s. GeneralTriang takes longer since it needs to select view subsets and update the 3D hypotheses iteratively in each batch. Our method just calculates one more loss function than the baseline, so the computational time complexity is nearly the same. > Q2: For the newly built benchmark of laboratory mouse 3D pose estimation, experiments in Sec. 4.2 seem to be ablation studies. No SOTA methods are directly applied on this benchmark. It is suggested to provide the results of some recent methods. R2: We add DeepLabCut and GeneralTriang[10] as the SotA comparison methods. We chose these methods because DeepLabCut is the most popular animal pose estimation method, and GeneralTriang is the latest SotA pose estimation method, proposed in CVPR 2022. The results are shown in the following table.

| | DeepLabCut | GeneralTriang | Ours |
|:----------:|:----------:|:-------------:|:----:|
| Dannce | 4.20 | 4.15 | 3.54 |
| THM-Dannce | 11.43 | 7.29 | 5.18 |

In addition, we also plug our TR loss into different 2D detectors, including PVT, SCNet, and MobileNetV2 (referring to R1 for reviewer y7FT). The MPJPE errors in the following table show that our modules and loss functions provide consistent and improved results on all 2D detectors. > Q3: In Line 158, the weights are scaled to (0.4, 0.6). Does it limit the effect of the predicted confidence? Why are the hyperparameters (0.4 and 0.6) set this way? Ablation studies are needed.
R3: Learnable confidence mitigates the negative effects of inaccurate 2D estimates on triangulation. But in turn, it also affects how much the inaccurate 2D estimates are updated. As discussed in L153, if the weights of a view are too small, then its 2D estimate is hardly updated. Theoretically, as long as the weights of each view are not all equal and do not differ by orders of magnitude (one with 1e-4, one with 9.999), TR loss can strike a balance between mitigating and updating the inaccurate 2D estimates. We supplemented the sensitivity experiments with confidence thresholds, which showed that setting the parameters anywhere from (0.4, 0.6) to (0.1, 0.9) gives similar results that are significantly better than without confidence and with (0, 1).

| | without confidence | (0.4, 0.6) | (0.3, 0.7) | (0.2, 0.8) | (0.1, 0.9) | (0,1) |
| :------------: | :----------------: | :--------: | :--------: | :--------: | :--------: | :---: |
| Dannce dataset | 4.54 | 3.61 | 3.74 | 3.60 | 3.90 | 5.89 |
| THM-Dannce | 8.11 | 6.21 | 6.52 | 6.72 | 6.62 | 29.42 |

> Q4: In Table 2, the best results are not bolded. R4: We thank the reviewers for the careful review, and we will update Table 2 in the revised paper. > Q5: In Table 3, it seems that the result of MTFT-Transformer is not consistent with that in their paper. And there are some methods listed in their paper and not compared in this paper, e.g., [1] and [2], which achieve better results. For different ratios of training data (5% and 100%), what are the ratios of unlabeled data? R5: Sorry for the mistake; we corrected the performance of the MTFT-Transformer to 27.46mm according to TABLE 9 of the MTFT-Transformer paper. We also checked the results of the other compared methods. References [1] and [2] are both outstanding works in human pose estimation: AdaFuse [1] fuses the features of two corresponding points in two different views, and ORPSM [2] integrates the IMU signal and multi-view images to improve both 2D and 3D pose estimation.
We will discuss the differences between them and our manuscript. However, AdaFuse used the MPII dataset as additional data, while ORPSM used IMU data. For the 100% training data setting, we compute the TR loss on 100% of the training data with 2D labels and 1% of the test data with only images (without 2D/3D supervision). For the 5% training data setting, we compute the TR loss on 5% of the training data with 2D labels and the other 95% of the training data with only images (without 2D/3D supervision). We used such a setting because we aim to demonstrate that TR loss can be used on unlabeled data. We will add the experiments with only training data. > Q6: In Fig. 3(b), why does TR Loss have stronger generalization ability than the 3D supervised loss? R6: The 3D supervised loss fits the 3D ground truth mainly by weighting the different views, so the 3D supervised loss may be small while the 2D estimates remain erroneous. The 3D supervised loss does not have a direct cue to improve the 2D detector. Instead, the TR loss successfully enforces geometric consistency by minimizing the sum of distances between the 3D estimate and the view rays. Therefore, it is a reasonable result that TR loss demonstrates better generalization ability in cross-dataset experiments. > Q7: Limitations and societal impacts. R7: The limitation of our method is that it requires a 2D detector that gives a relatively accurate initial estimate. This is also a common limitation of 3D pose estimation without 3D supervision. We present some failure cases in the appendix. In this paper, we propose a solution for pose estimation without 3D supervision for both humans and animals, which has potential applications in both human and animal behavior analysis. As mentioned by other reviewers, our method and results have no negative impact on animal conservation and welfare, human health and safety, or social ethics.
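The mechanics discussed above, DLT triangulation, confidence rescaling into (0.4, 0.6), and the TR loss as the smallest singular value of the weighted triangulation matrix, can be sketched in numpy as follows. This is our own minimal reconstruction from the descriptions in the review and rebuttal, not the authors' released code.

```python
import numpy as np

def rescale_conf(raw, lo=0.4, hi=0.6):
    """Map raw confidences in [0, 1] into (lo, hi), as in L158 of the paper."""
    return lo + (hi - lo) * np.asarray(raw, dtype=float)

def tr_loss_and_point(proj_mats, pts2d, conf=None):
    """Build the weighted DLT matrix A (rows u*P[2]-P[0] and v*P[2]-P[1] per
    view), take its SVD, and return (smallest singular value, 3D estimate).
    The smallest singular value plays the role of the TR loss."""
    rows = []
    for i, (P, (u, v)) in enumerate(zip(proj_mats, pts2d)):
        w = 1.0 if conf is None else conf[i]
        rows.append(w * (u * P[2] - P[0]))
        rows.append(w * (v * P[2] - P[1]))
    A = np.stack(rows)
    _, S, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return S[-1], X[:3] / X[3]   # dehomogenized 3D point
```

With perfectly consistent 2D observations the smallest singular value is (numerically) zero; with noisy 2D estimates it grows, which is exactly the quantity a gradient step on the TR loss pushes down.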
Summary: The authors present an approach for 3D pose estimation that leverages multi-view stereo cues. A weak 2D keypoint detector is refined using an unsupervised triangulation consistency from posed cameras (TR loss), which iteratively minimizes triangulation discrepancies. The method is experimentally evaluated on an animal dataset of mice and on Human3.6M. Strengths: * S1. Multi-view consistency as refiner. The idea to use multi-view consistency to refine a (potentially weak) 2D keypoint detector with a cross-view loss is interesting. * S2. Simple idea with potential. The use of an unsupervised consistency loss can lift simpler 2D labels into 3D, where annotation is more cumbersome. Weaknesses: * W1. Missing baselines / backup experiments. In detail: * L16 claims a "plug-and-play module that enables data-efficient training of all 2D keypoint detectors". However, the 2D keypoint detectors are not changed in experiments. * The related work talks about [11,12,13,14] as previous MVS setups - and L91 speaks about "local pairwise consistency"; however, this can be globally optimized. Would it be possible to include a comparison against commonly used reconstruction methods / optimizations such as the one from COLMAP? No competitor baselines are used for comparison such as [11,12,13,14, ...], but only self-made baselines. Why? * The experiments on H3.6M are compared against 3D supervision. The idea of the paper, however, is conceptual. How would these methods/pipelines improve with the TR loss? * W2. Questionable robustness. Given the iterative nature of the MVS triangulation consistency, it is unclear to me how an outlier measurement would affect the procedure. Where is the robustness to single outliers in the process? * W3. Missing mathematical rigor. There are a couple of ambiguities / mathematical details that should be specified, such as: * Eqn (5): It is unclear which norm is used - L2? Depending on this, the equivalence of (5) and (6) might not be given.
Would it help to add a discussion to the paper about the minimization of a geometric vs. an algebraic error? * Considerations for non-degenerate / degenerate constraints for eqn (6) are missing, e.g. rank considerations. * Around eqn (8): \sigma_i, u_i, v_i should be defined. * L149f: the sigmas are not necessarily in strict order; >= should be used. * L152 / eqn (9): What happens if the smallest eigenvalue is not unique? * Eqn (11), L168ff: z with hat is missing a definition. * W4. Minor grammatical errors / typos: * L53: colloquial: "doesn't" * L87: capital letters "All" * L95: \sigma_4 not defined * L102/L104/L112/L121: notation arrow is typically used to define the mapping (not the sets) * L114: missing space "(1)and" * Template: NeurIPS22 * Eqn.(1): Consistency in the naming (e.g. not italic) * L117: "indices" * L123: "objective"? * L131: norm not defined (L1 or L2 used?) * L135: "Eq.(6)" missing space * L145: norm definition is missing * Eqn (7): norm not defined (also in eqn (8,9,10)) * Colloquial (L209 / L211): doens't, can't * L261: doen't * L270: can't Most critical points have been addressed during the rebuttal! Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * Q1. Given the iterative nature of the MVS triangulation consistency, it is unclear to me how an outlier measurement would affect the procedure. Where is the robustness to single outliers in the process? * Q2. What is the reason for the choice of solely self-made baselines? Why are no MVS methods compared? How would other methods work if the TR loss is used (e.g. additionally)? * Q3. What would be the influence of camera pose error on the result? * Q4. Are all other baselines (H3.6M) trained with the same data amount (100%)? Are they the most recent SotA? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Failure cases are discussed in appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors thank the reviewer for the constructive and helpful feedback. To address your questions and concerns: > Q1: L16 claims a "plug-and-play module ... However, the 2D keypoint detectors are not changed in experiments. R1: To demonstrate the "plug-and-play" ability, we supplemented the experiments on the Dannce dataset by plugging our triangulation head and confidence head onto different 2D detectors (including PVT [r1], SCNet [r2], and MobileNetV2 [r3]). In the following table, "baseline" triangulates the 3D estimates based on the 2D results of each 2D detector, and "baseline+TR" is finetuned with our TR loss. The MPJPE errors show that our modules and loss functions provide consistent improvements on all 2D detectors.

| Dannce | PVT | SCNet | MobileNet |
| :-----------: | :--: | :---: | :-------: |
| Baseline | 4.11 | 3.85 | 5.78 |
| Baseline + TR | 2.94 | 2.86 | 4.33 |

| THM-Dannce | PVT | SCNet | MobileNet |
| :--------: | :--: | :---: | :-------: |
| Baseline | 9.44 | 9.29 | 11.13 |
| Baseline + TR | 6.25 | 6.32 | 7.22 |

[r1] Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions, ICCV, 2021. [r2] Improving Convolutional Networks With Self-Calibrated Convolutions, CVPR, 2020. [r3] MobileNetV2: Inverted Residuals and Linear Bottlenecks, CVPR, 2018. > Q2: The related work ..., such as the one from COLMAP? No competitor baselines are used for comparison such as [11,12,13,14, ...], but only self-made baselines. Why? R2: We add DeepLabCut [r4] and GeneralTriang [10] as the SotA comparison methods. We chose these methods because DeepLabCut is the most popular animal pose estimation method, and GeneralTriang is the latest SotA pose estimation method, proposed in CVPR 2022. GeneralTriang is also based on triangulation. The results are shown in the tables in R1. [r4] Using DeepLabCut for 3D markerless pose estimation across species and behaviors, Nature Protocols, 2019.
| | DeepLabCut | GeneralTriang | Ours |
|:----------:|:----------:|:-------------:|:----:|
| Dannce | 4.20 | 4.15 | 3.54 |
| THM-Dannce | 11.43 | 7.29 | 5.18 |

We have not directly compared with COLMAP since its typical implementation of triangulation is nearly the same as the "baseline" and "RANSAC" in Table 1 and Table 2. Specifically, i) ``TriangulateMultiViewPoint`` is the core function for COLMAP triangulation and it uses DLT (the same as our baseline). ii) According to its documentation, "a 3D similarity transformation will be estimated with a RANSAC estimator to be robust to potential outliers in the data.", and we have compared with RANSAC in Table 1. We will clarify this in the revision. > Q3: The experiments on H3.6M ..., How would these methods/pipelines improve with the TR loss? R3: Validating the effectiveness of the TR Loss under different setups is important. Thus, we have validated the effectiveness of the TR Loss even with 3D supervision in Table 1 on the mouse dataset. Moreover, the validity of the method is also demonstrated when 3D GT labels are unavailable (referring to R1). We will add additional validations on human datasets in the revision to make it more thorough. > Q4: Questionable robustness. R4: In classical MVS triangulation, 2D outliers are excluded based on the reprojection errors during iterative optimization. However, the 2D estimates in each view are not corrected throughout the process (including the outliers). Differently, our method updates the 2D estimates in an end-to-end training framework, and the TR loss forces the 2D outliers in each view to gradually converge to positions with 3D consistency. (As mentioned by Reviewer ZQc8: "The iterative triangulation residual is a good formulation to realign the predicted heatmap locations to the accurate point using multi-view consistency"). We will release our code for reproducibility. > Q5: Missing mathematical rigor. R5: [1] Eqn(5) uses the L2 norm.
In fact, Eqn(5) and Eqn(6) are not fully equivalent, and we will tone down the claim. Please refer to our answer "R2" to reviewer 4Fs8 for a detailed discussion. [2] To avoid degenerate cases where A is not a full-rank matrix, we simply ignore the TR loss where the condition number of A is too large. [3] The definitions of \sigma_i, u_i, v_i are actually presented in L149-L150. We find that this does not follow Eqn (8) closely, which is confusing; we will clarify it. [4] In L149, we will use >=. [5] We simply use the last singular value of the SVD. In the datasets used, the cameras are arranged reasonably, and an equal smallest singular value is hard to observe. [6] We will clarify z hat. > Q6: Minor grammatical errors etc. / typos R6: We will go through the paper carefully with a native speaker. > Q7: Given the iterative ... Where is the robustness to single outliers in the process? R7: Please refer to R4. > Q8: What is the reason for the choice of solely self-made baselines? ... R8: Please refer to R1 and R2. > Q9: What would be the influence of camera pose error on the result? R9: Currently, we assume the camera poses are correct, as do other related methods. However, the differentiable nature of the TR loss may have the potential to correct erroneous camera poses. > Q10: Are all other baselines (H3.6M) trained with the same data amount (100%)? Are they the most recent SotA? R10: Yes. All the other baselines in Table 3 are trained with 100% data. They are the SotA under the setting of single-frame, multi-view 3D human pose estimation on the Human3.6M dataset without using additional training data. Notice that some papers achieved better results in different settings; for example, STCFormer (CVPR2023) and DiffPose (CVPR2023) used the ground-truth 2D pose as input, while Token-Pruned Pose Transformer (ECCV2022), AdaFuse (IJCV2021), and TesseTrack (CVPR2021) used additional training data. We exclude these methods for a fair comparison.
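The degenerate-case guard mentioned in R5 [2] (ignore the TR loss when A is too ill-conditioned) might look like the numpy sketch below. The specific ratio test sigma_1/sigma_3 is our assumption, motivated by the fact that a well-posed 4-column triangulation matrix should have rank 3 (a one-dimensional null space); the rebuttal does not state which ratio is used.

```python
import numpy as np

def tr_loss_or_skip(A, cond_max=1e6, eps=1e-12):
    """Return the TR loss (smallest singular value of A), or None when A is
    nearly rank-deficient beyond the expected 1-D null space, i.e. when
    sigma_1 / sigma_3 blows up (e.g. duplicated or near-parallel views)."""
    S = np.linalg.svd(A, compute_uv=False)   # singular values, descending
    if S[0] / max(S[-2], eps) > cond_max:
        return None                          # degenerate: skip this sample
    return S[-1]
```

Returning None lets the training loop simply drop such samples from the batch loss instead of back-propagating through an unstable SVD.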
--- Rebuttal Comment 1.1: Comment: Many thanks for the additional experimental evidence and the correction of the mathematical flaws! I would encourage the authors to explicitly include a statement such as the one from ZQc8 or the answer to Q4 to make this point clear. While I was very unsure whether these things are possible within the rebuttal period, I am now quite a bit more positive about the paper, given that the commonly raised points (see your general answers 1, 2, 3, 4) are adopted in the paper. I therefore provisionally raise my rating to a "borderline accept", trusting the authors to make these changes. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer y7FT for the responses Comment: We greatly appreciate the reviewer's positive response to our revision and are glad to see the score change. We will present all changes from our rebuttal in the final paper. Thanks again for the patient review and constructive comments.
Summary: This paper proposes to perform 3D pose estimation from multi-view RGB images. The key contribution is to develop a new loss function to enable effective training with only 2D pose supervision. This loss function is intended to iteratively optimize the geometric consistency between the multi-view rays of each keypoint. The goal is to minimize the distance between the initial 3D estimate and the multi-view rays to converge to a stable 3D position. The performance has been evaluated on one 3D human pose dataset, Human3.6M, and multiple mouse pose datasets. Strengths: 1. Impressive quantitative results are achieved on multiple benchmarks. Extensive ablation studies testify to the effectiveness of the key designs. 2. The idea of developing an iterative loss to alleviate the inconsistency in 2D pose supervision is awesome. 3. The paper is well written and easy to follow. Weaknesses: 1. Reproducibility. The results obtained in this paper are very attractive. The code may not be released. The 5-line implementation details in Sec. 4.1 are clearly not enough for reproducing the results. There is no guarantee of reproducibility. 2. Limited qualitative results are provided to show the performance. More visual results, such as a video, would be very helpful. 3. Limitations are not discussed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is there any way to ensure the reproducibility of the results in the paper? How does the proposed method perform in the wild? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There are some obvious limitations, which have not been discussed in the paper. For instance, the performance relies heavily on the initial 2D pose estimation, while the in-the-wild performance of this part has not been validated.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's careful review and constructive comments. To address your questions and concerns: > Q1: Reproducibility. The results obtained in this paper are very attractive. The code may not be released. The 5-line implementation details in Sec. 4.1 are clearly not enough for reproducing the results. There is no guarantee of reproducibility. R1: We will open-source our code and dataset if our paper is accepted. We will also improve the implementation details. Model details: We apply HRNet as the backbone, build the heatmap head with a convolution layer with a 3x3 kernel size, and build the confidence head with two convolution layers followed by three linear layers [512, 256, num_joint] with a sigmoid activation function. The domain discriminator is an average pooling layer followed by three linear layers [512, 256, num_joint] with a sigmoid activation function. Training details: All models are trained on 1 NVIDIA 3090 GPU and an Intel i7-11700 CPU with the Adam optimizer and an initial learning rate of 1e-5. We will also add more specific implementation details for each dataset in the appendix. > Q2: Limited qualitative results are provided to show the performance. More visual results, such as a video, would be very helpful. R2: We add a video comparing qualitative results w/o TR loss and w/ TR loss on the Dannce dataset. TR loss largely improves the visual quality by removing obvious errors such as floating legs/paws or falsely overlapped front paws. > Q3: Limitations are not discussed. R3: Our method's limitation is that it relies on a pretrained 2D detector, which is a common drawback in 3D pose estimation without 3D supervision, especially for triangulation-based methods. We have shown some failure cases in the appendix. > Q4: Is there any way to ensure the reproducibility of the results in the paper? How does the proposed method perform in the wild?
R4: We will release our code and datasets on GitHub once our paper is accepted. As can be expected, our method can be applied in the wild. The 2D keypoint detectors and 2D datasets are well established, especially for human keypoints. As shown by the 2D results in existing papers and in the MMPose package, the 2D detector trained on the COCO dataset shows great in-the-wild performance, which can provide good enough initial values. > Q5: There are some obvious limitations, which have not been discussed in the paper. For instance, the performance relies heavily on the initial 2D pose estimation, while the in-the-wild performance of this part has not been validated. R5: Similar to common triangulation-based methods, our method also relies on the initial 2D pose estimation. However, most existing triangulation-based methods cannot update inaccurate 2D estimates but only reduce their negative impact. On the contrary, our method can optimize the 2D estimates in an unsupervised manner. We thank the reviewer for the suggestions; we will make this clearer and evaluate our method on in-the-wild datasets. --- Rebuttal Comment 1.1: Title: Performance shown in the video helps, I have upgraded to W.A. Comment: Thanks for providing such an impressive video, which makes the performance more convincing. Providing code would also help reproducibility. Therefore, most of my concerns have been resolved. I have upgraded my rating to W.A. --- Reply to Comment 1.1.1: Comment: Many thanks for your patient review and constructive comments. We will open-source our code upon acceptance.
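The confidence-head description in R1 above (two convolution layers followed by three linear layers [512, 256, num_joint] with a sigmoid) could be realized as in the PyTorch sketch below; the input channel count, intermediate activations, and the pooling step are our assumptions, since the rebuttal does not specify them.

```python
import torch
import torch.nn as nn

class ConfidenceHead(nn.Module):
    """Two conv layers, then three linear layers [512, 256, num_joint] with a
    final sigmoid, per the rebuttal's description. Channel widths, ReLUs, and
    the average-pooling step are illustrative assumptions."""
    def __init__(self, in_ch=32, num_joint=20):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),       # fixed 4x4 spatial grid
        )
        self.mlp = nn.Sequential(
            nn.Linear(in_ch * 4 * 4, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, num_joint), nn.Sigmoid(),
        )

    def forward(self, feat):
        # Per-joint confidences in (0, 1), one vector per image in the batch.
        return self.mlp(self.convs(feat).flatten(1))
```

The sigmoid output would then be rescaled into the (0.4, 0.6) range before weighting the triangulation matrix, as discussed in the other rebuttals.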
Summary: The paper proposes a triangulation residual loss to optimize the 3D locations in pose estimation. The iterative optimization framework addresses the problem of erroneous keypoint predictions from learning-based methods. Results are shown on multiple pose estimation datasets with different subjects, such as humans and mice. Strengths: The paper addresses a fundamental problem in triangulation, i.e., the uncertain estimates of the keypoints in 2D, and how to optimize the 3D location given that the 2D locations have uncertainty. The iterative triangulation residual is a good formulation to realign the predicted heatmap locations to the accurate point using multi-view consistency. The method shows substantial result improvements on multiple datasets. Further, they show reasonable improvements using much fewer images. Such a formulation is easily generalizable, and analysis has been provided for the same. Weaknesses: Compared to the previous methods using a single loss over the 3D triangulation, the current framework's iterative formulation incurs additional compute and time to optimize. Analyzing the tradeoff in time vs. accuracy compared to baselines would be helpful. Experiments on additional datasets and with different subjects should be explored, i.e., using Panoptic Studio or more datasets with different animals, like monkeys, might show more benefits of the current formulation. Although the method works with 5-6 camera views, what happens with fewer views, like 2-4, should be analyzed. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: What is the training time of the current formulation? What happens if the estimated keypoint from one of the views is erroneous? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations and societal impact have not been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful and constructive review. To address your questions and concerns: > Q1: Compared to previous methods using a single loss over the 3D triangulation, the iterative formulation in the current framework incurs additional compute and optimization time. Analyzing the time-versus-accuracy tradeoff compared to baselines would be helpful. R1: During training, our method behaves like a typical deep learning model: it computes the loss functions (including the TR loss) only once per batch to update the weights. The TR loss does not require additional iterative optimization within a batch, so it does not excessively increase the training time. During testing, our method also does not require iterative optimization. Regarding training efficiency, we empirically compared the training time of our model with the baseline model. Both models run under the same conditions (same batch size, same optimizer) on the same device (an NVIDIA 3090 GPU and an Intel i7-11700 CPU). The average per-batch running times of the baseline model and our model on the Dannce dataset are 0.389s and 0.463s, respectively. The per-batch training time of our method is therefore (0.463-0.389)/0.389=19% higher than the baseline, but the performance is significantly improved, by (5.86-3.61)/3.61=62%. We will add a more systematic discussion and analysis of training time in the revised paper. > Q2: Experiments on additional datasets and with different subjects should be explored, e.g., using Panoptic Studio or more datasets with different animals such as monkeys, which might show more benefits of the current formulation. R2: We thank the reviewer for the constructive comments. We apologize for not being able to provide these experimental results within the short rebuttal deadline. We will add more quantitative and qualitative experiments on different subjects in the revision. > Q3: Although the method works with 5-6 camera views,
what happens with fewer views (2-4) should be analyzed. R3: We evaluated the results for different numbers of camera views on the Dannce dataset. As shown in the following table, accuracy drops significantly when the number of viewpoints is less than 4. However, our TR loss achieves a consistent improvement in all cases. | num of cams | 6 | 5 | 4 | 3 | 2 | |:---------------:|:----:|:----:|:----:|:----:|:-----:| | without TR loss | 5.86 | 6.52 | 6.72 | 9.79 | 24.70 | | with TR loss | 3.61 | 4.41 | 4.49 | 6.78 | 18.04 | > Q4: What is the training time of the current formulation? R4: The training time for the laboratory mouse pose estimation experiments is around 30-40 minutes. The training time for the human pose estimation experiments is around 3-4 hours. > Q5: What happens if the estimated keypoint from one of the views is erroneous? R5: If the 2D estimates in one view are erroneous, the TR loss will enforce them to converge to the point that is 3D-consistent with the other views. Their triangulation confidence will also be lower, so their negative effect is reduced. Therefore, the proposed method is relatively robust against erroneous 2D estimates. > Q6: Limitations and societal impact have not been discussed. R6: The limitation of our method is that it requires a 2D detector to give a relatively accurate initial estimate. This is a common limitation of 3D pose estimation without 3D supervision. We present some failure cases in the appendix and will discuss the limitations of our method in the revised paper. Regarding societal impact, our method is a novel solution for pose estimation without 3D supervision for both humans and animals, so it is meaningful for both human applications and animal behavior analysis. Our method and results have no negative impact on animal conservation and welfare, human health and safety, or social ethics.
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful and valuable comments. In summary, the reviewers are positive about the novelty, formulation, impact, performance, and potential of our method, as mentioned: "address an important topic for the greater (3D) vision community, ... the formulation is elegant." (**R-4Fs8-S1&S3**), "achieve substantial improvement on multiple datasets." (**R-ZQc8-S3**), "conduct extensive ablation studies [that] testify the effectiveness of key designs." (**R-3C6c-S1**), "introduce [a] simple idea with potential." (**R-y7FT-S2**) and "contribute [a] simple yet effective idea and a new THmouse dataset for laboratory mouse pose estimation." (**R-iymk-S1&S2**). After carefully analyzing all the reviews, the major concerns can be summarized as follows: 1. More rigorous explanation of the mathematical formulation. (**R4Fs8-W1&W2 and Ry7FT-W3**) 2. More in-depth analysis of the robustness. (**R-4Fs8-Q3, R-ZQc8-Q2, R-y7FT-W2&Q1 and R-iymk-Q4**) 3. More thorough comparisons and demonstrations. (**R-3C6c-W2, R-y7FT-W1, R-iymk-W1&W2 and R-ZQc8-W2**) 4. Missing detailed discussion of the limitations. (**R-4Fs8, R-ZQc8, R-3C6c and R-iymk**)   We briefly summarize the response to each major concern here. For more detailed illustrations and experimental results, please refer to the reviewer-specific response letters. 1. Regarding the description of the mathematical formulation, the algebraic error is indeed an approximation of the geometric error in the standard SVD algorithm. We will clarify the description and add a more detailed explanation in the revision. Moreover, although we use PyTorch autodiff in our implementation of the TR loss, we regard the mentioned "robust differentiable SVD" method as an interesting direction to further improve the implementation, and we will discuss this method in both the related work and the future work sections. 2.
For a more in-depth analysis of the robustness, we analyze in detail the robustness of our method under different situations such as occlusions, outliers, and partial observations. Please refer to the detailed responses to the corresponding questions. Note that the overall improvement of our method mainly comes from its key design: optimizing the 2D keypoint detector with a more "globally 3D-aware" TR loss, which enhances multi-view 3D consistency in an unsupervised manner. We also ablate the performance under different view-number setups (from 2 to 6), which also demonstrates the effectiveness of our method compared with other SOTA methods. 3. For more thorough comparisons and demonstrations, we provide an additional video result (and will provide more in the revision), ablate the TR loss with different 2D detectors, and compare with other SOTA mouse 3D pose estimation methods. Due to the time limit, we are not able to add experiments on additional datasets and with different subjects; we will add more experiments in the revision. 4. Missing detailed discussion of the limitations: we have discussed the limitations of our method in detail in the reviewer-specific response letters, and we will clarify this in the revision according to the reviews, the responses, and the failure cases we provide in the appendix.
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper addresses the problem of predicting human or animal joint positions in 3D from a set of posed images (with calibrated cameras). To do so, the authors propose to train a neural network that leverages the multi-view constraints between images and predicts the 3D positions in an end-to-end fashion. The contribution of the paper is an objective formulation that doesn't rely on explicit 3D ground truth but can be supervised from 2D ground truth only and is regularized via a loss on the triangulation residual (in image space, not in 3D). The authors demonstrate that the latter can be achieved by minimizing the smallest singular value of the triangulation matrix; that is, the 3D points never need to be computed explicitly for the loss computation and no 3D ground truth is required for training. Experimental results demonstrate the effectiveness of the approach for human and mouse pose estimation. Strengths: **S1** 3D ground truth is expensive and hard to obtain compared to 2D annotations. Therefore the proposed framework and losses (requiring 2D ground truth only) address an important and relevant topic for the greater (3D) vision community. **S2** Experimental results demonstrate that the proposed method performs better than the state of the art and also than methods that are trained with 3D ground truth. This is a strong contribution and underlines the importance of the unsupervised multi-view constraint in learning. **S3** The formulation of the residual minimization is elegant, as it does not require actually triangulating and backprojecting points for the residual computation. Weaknesses: **W1** Eq. (9) suggests to minimize the smallest singular value of the projection matrix. In order to do so, its computation via SVD needs to be differentiable. A discussion on this topic is missing but is required for the paper to be self-consistent. References to related work are also missing, e.g.
"Robust Differentiable SVD", Wei Wang, Zheng Dang, Yinlin Hu, Pascal Fua, Mathieu Salzmann, TPAMI 2022. **W2** A discussion of the relation between the minimized algebraic error (the smallest singular value) and the desired geometric error (the residuals in image space) is missing. It would be helpful for the reader to understand the relation between the minimized error of Eq. (5) and Eq. (9). Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Q1** Does the number of 3D points / joints to estimate need to be known upfront? Do you aim to predict all joints in each view, and how do you handle occlusions? **Q2** The residual loss in Eq. (9) is formulated for a single 3D point j. How is the loss over all joints formulated? If the optimization is separate per joint, what prevents two heatmap / confidence heads from converging onto the same joint? **Q3** Fig. 3 lists performance numbers for the ablation study and shows that the confidence predictions have only a small influence on the overall prediction performance. Do you have an estimate of the frequency of occlusions per joint in the dataset? I'd imagine that with more occlusions the prediction confidence would become more important in order to be able to triangulate successfully. *Comment*: In the paragraph starting at line 153 it should read *trivial*, not trial. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper is missing a discussion about the limitations of the approach. It appears that the proposed triangulation loss is of interest not only for the estimation of human / animal joint positions, but also for structure estimation, e.g. jointly reconstructing a point cloud and learning a local interest point detector.
However, it remains unclear if the presented approach is applicable to this use-case and what constraints / limitations need to be considered. Interesting questions to address would be: a) Is there an upper limit on the number of 3D points that can be handled? b) How does the method perform if each 3D point is only observed by a (small) subset of images? Can the confidence estimation handle those cases, and can the triangulation loss still be formulated over all cameras (Eq. (5-6))? Line 153 ff. mentions that the learnable weights are forced to be > 0 in order to avoid a trivial solution. Is this still applicable in the case of only partial observations? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful and constructive review. To address your questions and concerns: > Q1: Eq. (9) suggests to ..., References to related work are also missing, ... R1: We thank the reviewer for recommending "Robust Differentiable SVD". We will add the reference and discuss this topic in our final paper. Robust differentiable SVD addresses the instability problem of eigendecomposition during deep network training. We will try to use it in the future instead of the current PyTorch-autodiff-based SVD. > Q2: A discussion about the relation of the minimized algebraic error and the desired geometric error is missing... R2: **The algebraic error in Eq. (9) is an approximation of the geometric error in Eq. (5).** The derivation from Eq. (6) to Eq. (9) is the standard SVD-based solution of the least-squares optimization problem, which has been strictly proved, so we only need to clarify the relation between Eq. (5) and Eq. (6). Using homogeneous coordinates, for an observed point $x = (u, v, t)$ and an estimated point $x' = (u', v', t')$ projected from the estimated 3D point $X'$ using the projection matrix $P_c$ (i.e., $x' = P_c X'$), the geometric error is the 2-norm of the geometric error vector, $d_{geo}(x, x') = \left\| \left( u/t - u'/t',\ v/t - v'/t' \right) \right\|_2$, where $t'$ is the estimated depth of point $X'$ from the $c$-th view, which generally differs between views. Therefore, directly minimizing $d_{geo}$ usually involves heavy iterative optimization. The algebraic error is the 2-norm of the algebraic error vector, $d_{alg}(x, x') = \left\| \left( ut' - u't,\ vt' - v't \right) \right\|_2$, so $d_{alg} = d_{geo} \cdot t \cdot t'$. We always set $t = 1$. When accumulating the errors of points from different views, $d_{alg}$ is not proportional to $d_{geo}$ because $t'$ differs between views; therefore, minimizing $d_{alg}$ yields slightly different results.
However, $d_{alg}$ can be solved linearly and is more suitable for end-to-end training. > Q3: Does the number of 3D points/joints to estimate need to be known upfront? Do you aim to predict all joints in each view and how do you handle occlusions? R3: The number of joints needs to be known for the 2D detector, since the number of heatmap head layers must equal the number of joints. However, the TR loss itself does not require the number of joints to be known in advance. Our final goal is to predict the 3D position of each joint. We first filter out the obviously occluded 2D keypoints in each view according to the confidence values output by the 2D detector. For those keypoints with high confidence that are still under occlusion, our TR loss can enforce them to converge to a position that is cross-view spatially consistent in 3D. > Q4: The residual loss ... How is the loss over all joints formulated? ..., what prevents that two heatmap / confidence heads converge onto the same joint? R4: The TR loss over all joints is formulated as the sum of the losses for all joints. The predefined one-to-one relationship between each heatmap channel and its target joint prevents two heatmap heads from converging to the same joint. The same 2D heatmap channel estimates the same joint in different views, while two different 2D heatmap channels are defined to predict different joints in the same view. Therefore, there is no need for additional cross-view matching (association). > Q5: Fig. 3 ... shows that the confidence predictions only have a small influence ... R5: We agree with the reviewer that prediction confidence would become more important for cases with more occlusions. Inspired by the comments, we checked the occlusion rate in our data. The average occlusion rate over all joints in the THM mouse dataset is about 27%, but the occlusion rate varies a lot between joints (more than 60% for feet but less than 5% for ears).
The Human3.6M dataset is labeled by 3D ground-truth reprojection, so even occluded joints have 2D labels. We will design a more systematic and detailed ablation study in the mouse experiments in the revision. > Q6: The paper is missing a discussion about limitations of the approach... R6: The limitation of our method is that it requires a 2D detector to give a relatively accurate initial estimate. This is a common limitation of 3D pose estimation without 3D supervision. We agree with the reviewer's comment that the TR loss may be able to improve some structure estimation methods, since it provides better 3D spatial consistency of local points across different views. A potential challenge is that the scenes in structure estimation/SfM are much more complex than in pose detection, so a better detection model and system-level optimization may be needed to achieve this goal. We will discuss this in the future work section. > Q7: Is there an upper limit on the number of 3D points that can be handled? R7: Theoretically, there is no upper limit on the number of 3D points that can be handled, because our method processes each point individually. > Q8: How does the method perform if each 3D point is only observed by a (small) subset of images? ... R8: We set a threshold of 0.2 on the confidence of the 2D detector output to filter out invisible points (following "OpenMMLab Pose Estimation Toolbox and Benchmark"). These points are not involved in the triangulation process. For the other points, we use learnable triangulation weights to adapt their contributions to the triangulation. These weights are not allowed to be 0, because that could lead to a trivial solution, as discussed in L153 of the manuscript.
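The smallest-singular-value triangulation residual discussed throughout this thread can be sketched in a few lines of numpy. This is our illustrative reconstruction of a standard weighted DLT setup, not the authors' released code; the function names, the synthetic cameras, and the per-view weights are assumptions for demonstration only.

```python
import numpy as np

def triangulation_matrix(points_2d, proj_mats, weights=None):
    """Stack the DLT constraints u*(P X)_3 - (P X)_1 = 0 and
    v*(P X)_3 - (P X)_2 = 0 for each view, optionally scaled by
    per-view confidence weights (kept > 0 to avoid a trivial solution)."""
    if weights is None:
        weights = [1.0] * len(proj_mats)
    rows = []
    for (u, v), P, w in zip(points_2d, proj_mats, weights):
        rows.append(w * (u * P[2] - P[0]))
        rows.append(w * (v * P[2] - P[1]))
    return np.stack(rows)  # shape (2 * num_views, 4)

def triangulate(points_2d, proj_mats, weights=None):
    """Return the dehomogenized 3D point (last right singular vector)
    and the smallest singular value, i.e. the algebraic residual."""
    A = triangulation_matrix(points_2d, proj_mats, weights)
    _, s, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3], s[-1]
```

With noise-free 2D observations the smallest singular value is numerically zero; a loss in this style would backpropagate that residual through the 2D detections without ever materializing the 3D point in the loss itself.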
Labeling Neural Representations with Inverse Recognition
Accept (poster)
Summary: The paper proposes a new explainability method, named INVERT, that matches learned representations with human concepts. Specifically, it provides explanations for individual neurons by matching them with the human concepts that each neuron predominantly detects. Compared to previous approaches, INVERT is computationally more efficient, does not depend on the annotation of segmentation masks, and generalizes to different types of neurons. The paper shows INVERT's applicability in identifying neurons affected by spurious correlations and in fine-tuning representations without explicitly training them. Strengths: The method is generalizable in terms of architecture, i.e., not restricted to convolutional neurons as previous ones are. The method is more efficient, requiring less running time and fewer computational resources. The method does not require additional annotations, such as segmentation masks. The method is simple and easy to follow. From Section 4.2, INVERT seems to provide outputs more visually similar to human concepts. Weaknesses: My main concern about this work is the lack of evidence that this method generalizes to other configurations. For instance, in the related work, one of the limitations described for previous work is that they work only on convolutional neurons. However, most of the experiments are limited to ResNet-18 (except Section 5.2, in which a ViT was used), which is mainly based on convolutional layers. Since this is one of the important contributions of the method, it would be important to see how the results generalize to other types of neurons. Another suggestion is to expand the results shown in Figures 4 and 6 (to other neurons and architectures) and place them in the appendix. Minor writing feedback: 1. Line 35, in "(1)" should add the word "Figure", e.g. "(Figure 1)". 2. Missing parenthesis on line 142 to close the min equation. 3. Figure 2 (on page 5) is never cited in the paper. 4.
Would encourage the authors to refer to specific sections of the appendix when referring to it in the main text. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * Do the conclusions from Figure 4(c), that maximizing IoU leads to a relatively sparse distribution of IoU scores while maximizing AUC results in a more densely concentrated accumulation of low IoU scores, generalize to other lengths as well (e.g. L=2 or L=3)? For lack of space, one suggestion would be to add such results to the appendix. * In line 218, the conclusion that "high IoU scores are correlated with high AUC scores" from Figure 4(a) is not clear to me, since several points that have high AUC (x axis) also have low IoU (y axis). Could the authors use some form of correlation measure to produce a single score to better verify this statement? * Section 5.2 focuses on "creating" a model trained on ImageNet to classify Caltech images without any training procedure. However, this seems limited to the setting where classes have some form of overlap (which is the case in this example, as also mentioned in the text), or where you can overlap them by combining different concepts (described in the last paragraph of this section). Instead of relying on this assumption, could INVERT be used to somehow demonstrate how well a set of representations will transfer to another dataset without actually training the model (i.e. as an analysis tool, not as a model construction itself, as was done)? If yes, then this could be a powerful application of the method, since it could be used to analyze whether the costs of fine-tuning a model will pay off or not. This could be shown by comparing two models, one that fine-tunes well and another that doesn't, and showing how this method can be used to predict that beforehand. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: A section or a couple of sentences in the conclusion could be added to broadly discuss some of the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer LJJZ for the time spent reviewing our work and are grateful for the detailed comments. In the following, we respond to the described shortcomings of our work and answer the raised questions. *The lack of evidence that this method generalizes to other configurations.* **Answer:** We thank the reviewer for raising this concern. Indeed, previous methodologies, such as Network Dissection and Compositional Explanations of Neurons, were designed to explain convolutional neurons through the examination of intersections between neuron activation maps and object masks. INVERT overcomes such limitations by analysing and explaining scalar functions. It is important to emphasize that our ResNet-18 experiments, as detailed in the original paper, were conducted on the average-pooling layer, a configuration that produces scalar activations (512 neurons x 1 activation). This already demonstrates a difference from prior IoU-based techniques designed for convolutional neurons, which generate high-dimensional activation maps. It is worth noting that INVERT exhibits versatility, extending its applicability to transformer-based architectures, including vision transformer (ViT) models. This was demonstrated in Section 5.2 of our original paper. Our efforts to bolster this aspect continue with the inclusion of additional qualitative examples in the updated version. Many of these new visualizations can be found in the PDF file accompanying our global rebuttal. Additionally, the updated version features a showcase of "handcrafted" circuits founded on ViT representations, further solidifying our method's compatibility and efficacy. *Question: Do the conclusions of Figure 4(c), that maximizing IoU leads to a relatively sparse distribution of IoU scores while maximizing AUC results in a more densely concentrated accumulation of low IoU scores, generalize to other lengths as well (e.g. L=2 or L=3)?
For lack of space, one suggestion would be to add such results to the appendix.* **Answer** We have indeed performed the quantitative comparison between IoU-based and AUC-based explanations, focusing on the same neurons but with varying formula lengths. Although these specific results are not included in the attached PDF, they have been incorporated into the appendix of our paper. While there does exist a general correlation between IoU and AUC, as highlighted in Table 1 of the PDF, our investigations revealed that the optimal IoU explanations often did not align with high AUC scores, and vice versa. Our global rebuttal, with particular emphasis on Figure 1, elucidates a compelling case where an AUC-based explanation yields 0 IoU, while the best IoU-based explanation demonstrates a low AUC score. We argue that due to the interpretability inherent in the AUC metric, the reduced computational complexity, the capacity to identify random explanations through statistical testing, and the broader applicability, INVERT emerges as a robust and preferable alternative to IoU-based methods. *Question: In line 218, the conclusion that "high IoU scores are correlated with high AUC scores" from Figure 4(a) is not clear to me, since several points that have high AUC (x axis) also have low IoU (y axis). Could the authors use some form of correlation measure to produce a single score to better verify this statement?* **Answer:** We addressed this concern within our global rebuttal. To elaborate, we have conducted a quantitative evaluation that examines the correlations between IoU and AUC scores. As a result, we have relaxed our assertion in the updated version of our paper. Our analysis demonstrates a positive correlation between IoU and AUC scores. However, it is important to highlight that in instances where both IoU and AUC scores are at their highest, the metrics often disagree, as illustrated in Figure 1 of the PDF attached to our global rebuttal.
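To make the AUC criterion discussed in this exchange concrete, here is a minimal, dependency-free sketch (our own illustration, not the paper's implementation; the function name is hypothetical) of the quantity being compared against IoU: the probability that a concept-positive input receives a higher neuron activation than a concept-negative one.

```python
def concept_auc(activations, concept_labels):
    """AUC of a scalar neuron w.r.t. a binary concept: the fraction of
    (positive, negative) sample pairs ranked correctly (ties count 0.5)."""
    pos = [a for a, y in zip(activations, concept_labels) if y == 1]
    neg = [a for a, y in zip(activations, concept_labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both concept-positive and concept-negative samples")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means the neuron perfectly separates the concept, while 0.5 is chance level, which is the natural reference point when testing an explanation against random concept assignments.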
*Question: Employing INVERT to demonstrate how well a set of representations will transfer to another dataset.* **Answer:** We find such an application very interesting, but we believe that it lies outside the scope of the proposed approach. INVERT connects neurons with human-understandable concepts; however, given, for example, the explanation "farm" based on ImageNet data, it is hard to quantify how well such a neuron would detect "farm" classes from another dataset without access to, or evaluation on, the "farm" images from that dataset. Assessing the scope of global explanations, including a neuron's ability to generalize to other datasets based on its explanation, might be a promising avenue for future research. Minor mistakes: We agree with these points and have fixed them in the updated version of our paper. We extend our sincere gratitude to the reviewer for their thoughtful and comprehensive feedback. The insights you have provided have proven invaluable in refining our work. We firmly believe that the combined impact of our global rebuttal, fortified by new quantitative and qualitative findings, will positively influence the re-evaluation of our paper. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed rebuttal. While I acknowledge the potential of such a method, I will keep my original rating since I think there is room for improvement.
Summary: This paper introduces INVERT, a scalable approach called Inverse Recognition that links learned representations to human-interpretable concepts by leveraging the ability to differentiate between concepts. The applicability of INVERT is demonstrated in diverse scenarios, including identifying representations influenced by spurious correlations and interpreting the hierarchical decision-making structure within the models. Strengths: 1. The paper introduces INVERT, a novel approach called Inverse Recognition, which enables the labeling of neural representations with human-interpretable concepts in a scalable and informative manner. 2. The authors make significant contributions by using INVERT to gain insights into the hierarchical decision-making structure within models, enhancing our understanding of their inner workings. They also propose an interpretable metric to assess the alignment between representations and explanations, providing a means to evaluate explanation quality. 3. In addition to the aforementioned contributions, the authors demonstrate the practical applications of INVERT, further highlighting its usefulness and credibility. 4. The paper exhibits strong writing quality with a well-structured and logical flow. The equations are clearly presented, and the figures and tables are easily comprehensible, effectively conveying the authors' ideas. Weaknesses: 1. While the paper showcases the practicality of INVERT in diverse scenarios, the evaluation is confined to a limited number of examples. Conducting a broader evaluation encompassing a wider range of models and datasets would be advantageous. 2. The paper offers a broad overview of the INVERT methodology, but certain aspects lack clarity. Providing a more detailed explanation of the methodology would enhance readers' understanding of its workings. Specifically, Section C in the Appendix could be better placed within Section 3 for improved organization and coherence. 3.
One aspect that should be noted is that in INVERT, the concepts to be linked with the learned representations still need to be pre-defined or manually selected. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the motivation or intuition for using the logical forms, such as AND, OR, and NOT? 2. Do you have some ideas or suggestions for dealing with the tradeoff between simplicity and precision? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our gratitude to Reviewer bbMD for their insightful and constructive feedback. We greatly appreciate the time and effort invested in reviewing our work. We are heartened by the positive acknowledgment of the significance of our contribution, as well as the recognition of the quality of our paper's content and visual aids. Herein, we address the noted weaknesses and questions raised by the reviewer: *While the paper showcases the practicality of INVERT in diverse scenarios, the evaluation is confined to a limited number of examples. Conducting a broader evaluation encompassing a wider range of models and datasets would be advantageous.* **Answer** We acknowledge the reviewer's concern and, in response, have taken substantial steps to address it. The revised version of our manuscript includes an expansion of our quantitative experiments. We have investigated the correlation between the AUC and IoU measures for explanations across different models and layers, an analysis that contributes to a more comprehensive understanding of our approach. Moreover, we have investigated the behavior of random explanations, shedding light on an aspect critical for establishing the efficacy of our method. These updates are described in detail in the global rebuttal. *The paper offers a broad overview of the INVERT methodology, but certain aspects lack clarity. Providing a more detailed explanation of the methodology would enhance readers' understanding of its workings. Specifically, Section C in the Appendix could be better placed within Section 3 for improved organization and coherence.* **Answer** We thank the reviewer for raising this point. In the revised manuscript, we have shortened several paragraphs and allocated space for a more thorough discussion of the algorithm within Section 3, which greatly improves the clarity and coherence of the presentation of our methodology.
*One aspect that should be noted is that in INVERT, the concepts to be linked with the learned representations still need to be pre-defined or manually selected.* **Answer** The issue of data-dependency is a shared characteristic among methods aimed at explaining the concepts learned by neural representations, including Network Dissection and Compositional Explanation of Neurons. While INVERT does indeed necessitate predefined concepts, it distinguishes itself by mitigating the reliance on masked data, relying instead on image labels. This shift enhances the feasibility of accessing a more extensive spectrum of concepts, considering that labeled data is more easily accessible than masked image data. Moreover, this approach accelerates computational processes. Section 5.1 of our paper delves into this matter, illustrating how the set of concepts can be broadened by merging datasets from diverse sources, effectively encompassing a wider array of concepts. We consider research towards data-free explanation methods to be an avenue for future work. *Question: What is the motivation or intuition behind using the logical forms, such as AND, OR, and NOT?* **Answer** The main inspiration for the logical forms approach comes from the paper “Compositional Explanation of Neurons”, where this approach was introduced. This method allows enriching the basic, atomic concepts with interpretable combinations, allowing for more versatile explanations. Although alternate (non-logical) forms of the function $\varphi$ (as defined in Definition 4) are feasible, we opted for logical forms due to their established nature and interpretability. Our framework, however, is versatile enough to accommodate other forms of $\varphi$, which we plan to explore in future work.
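To make the logical-forms discussion concrete, the selection step can be sketched as follows. This is an illustrative sketch with invented toy data (`dog`, `indoor`, `acts`), not the INVERT codebase: binary concept labels are composed with AND/OR/NOT, and each candidate formula is scored against a neuron's scalar activations via AUROC.

```python
# Illustrative sketch (invented data, not the authors' code): compose
# binary concept labels with AND / OR / NOT and pick the formula whose
# labels best separate a neuron's activations, measured by AUROC.

def auc(activations, labels):
    """AUROC via the pairwise (Mann-Whitney) formulation."""
    pos = [a for a, y in zip(activations, labels) if y]
    neg = [a for a, y in zip(activations, labels) if not y]
    if not pos or not neg:
        return 0.5
    hits = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return hits / (len(pos) * len(neg))

AND = lambda a, b: [x and y for x, y in zip(a, b)]
OR  = lambda a, b: [x or y for x, y in zip(a, b)]
NOT = lambda a:    [not x for x in a]

# Toy per-image concept labels and one neuron's scalar activations.
dog    = [1, 1, 0, 0, 1, 0]
indoor = [1, 0, 0, 1, 1, 0]
acts   = [0.9, 0.8, 0.1, 0.2, 0.7, 0.3]

candidates = {
    "dog":                dog,
    "dog AND indoor":     AND(dog, indoor),
    "dog OR indoor":      OR(dog, indoor),
    "dog AND NOT indoor": AND(dog, NOT(indoor)),
}
best = max(candidates, key=lambda k: auc(acts, candidates[k]))  # "dog"
```

On this toy data the atomic concept "dog" already separates the activations perfectly (AUC = 1.0), so the search keeps it; with other activations one of the composed formulas would win instead.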
*Question: Do you have some ideas or suggestions dealing with the tradeoff between simplicity and precision?* **Answer** The trade-off between simplicity and precision is a captivating challenge in the realm of explainable AI. Within the context of INVERT, we believe enhancing precision involves incorporating more detailed and precise concepts into the dataset. By introducing more concepts, the general behaviour of the neurons can be explained more precisely, while still being comprehensible for a human. This, for example, could be achieved by aggregating diverse concepts from various sources, as outlined in Section 5.1. Once again, we express our gratitude for your insightful feedback, which significantly improved the depth and impact of our work. We hope that these improvements will contribute to a more favorable assessment of our work.
Summary: The paper introduces a method to understand what semantic concepts different neurons of a deep network are looking at. They do so by comparing the similarity between different concepts (taken from a pre-defined concept bank) and neuron representations using the AUROC metric, and assign the concept with the highest AUROC to the neuron under study. The authors further illustrate how their uncovered concepts can be used to handcraft circuits for recognizing different classes from another dataset and detect spurious correlations. Strengths: I greatly appreciate the AUROC metric the authors presented. It extends current approaches to neuron interpretation based on segmentation maps and IoU metrics, which are necessarily limited to convolution filters. Weaknesses: Unfortunately, I think the paper is poorly written with key details missing, prior work not appropriately mentioned, and experimental validation of the method lacking. Due to this I recommend rejection of this manuscript in its current form. Below I describe the weaknesses in detail in the questions section. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. **Incorrect Prior Work/Missing References:** While describing MILAN in line 73, "While alleviating the limitation posed by the necessity of having a labeled dataset". This statement isn't correct since MILAN does require a labelled dataset to train their captioning model (the MILANAnnotations dataset). I believe the authors meant to say human-annotated segmentation masks are not required? The authors should also cite CLIP-Dissect, which was published at ICLR 2023 and did away with the need for labelled datasets by appealing to the zero-shot abilities of CLIP. 2. **Missing details in figures:** The captions are unclear and sometimes not referenced in the main text, so their interpretation is not clear. For instance, what is the density being plotted in figure 2a?
Is it the density of neuron activations for images labelled positive for the concept and images labelled as negative for the concept? 3. **More rigorous comparison to prior work:** From figure 4a, it is not clear that higher AUC is correlated with higher IoU; if anything, it seems at IoU 0, AUC can be anything between 0 and 1. I would suggest the authors do a more rigorous study to substantiate their claim, perhaps by computing a correlation coefficient. Moreover, figure 4b is confusing: it plots the distribution of IoU for max IoU? I understand the authors are trying to show that the explanation that maximizes the IoU is not necessarily concentrated and can be more spread out, vs the distribution for max AUC, which is concentrated. However, the motivation for this experiment is not clear, nor is it clear from the discussion why the reported result is good for the AUC metric vs the IoU metric. Finally, while I appreciate the comparison with CompExp in terms of computational time, no reasoning is provided for it; merely the times are reported. Some more discussion as to why CompExp is slow, and what bottleneck this work gets rid of, would help appreciate the results. This is particularly important since the authors use the same algorithm CompExp uses to compose concept literals into boolean formulas to form explanations. 4. **Missing details in fine-tuning experiment:** In 5.2, what are the Caltech concepts? The ImageNet concepts were taken from WordNet; is the same WordNet employed for the Caltech concepts? This should be clearly mentioned in the Appendix for reproducibility. The authors remark "By simply linking the most suitable representations from the latent layer to the output class logit using our approach, we were able to attain a substantial non-random accuracy". Could you please specify what most suitable representations mean here? How were they linked? Was a linear layer trained on top of the most suitable representations?
Without these details it is hard to appreciate the merit of this experiment. 5. **More qualitative examples:** The paper makes several claims, such as that it can detect spurious correlations, that AUROC better reflects visual similarity than IoU, and that neuron explanations can be used to handcraft small circuits for classification on target datasets without any training (which I think is a great use-case!). However, only one or two examples are provided for each case. This raises concerns about cherry picking; the authors should provide many more examples in the supplement for readers to further substantiate the claims. Moreover, if possible the authors should also do quantitative experiments to validate their claims, perhaps a Mechanical Turk study for the statement "AUROC better reflects visual similarity than IoU". 6. **Statistical significance of explanations:** The authors claim an advantage of AUROC over IoU is that their metric provides a measure of statistical significance. I think this is an interesting point, but I am not an expert on hypothesis testing and the paper does not mention how the AUROC is a measure of statistical significance to appreciate this point. A paragraph dedicated to this would strengthen the paper in my opinion. **Minor Points** I think it should be made clear in the main text (as opposed to the supplementary) that for CNN filters, the authors treat the output as a scalar by averaging the feature map, as opposed to thresholding the feature map to get the highest activating regions (as in prior work). This is an important point and should be brought up before doing the comparison with previous work in the main text. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer UsfE, We extend our sincere gratitude for the dedicated time and the insightful comments you have shared with us. Your thorough review has proven invaluable in refining our work, and we deeply appreciate your efforts. In response to the questions and concerns you raised, we would like to provide the following clarifications: *Q1: Incorrect Prior Work/Missing References* **Answer** We acknowledge your observation and have made the necessary amendments. The incorrect reference to the MILAN method has been rectified, and we have included the reference to the CLIP-Dissect paper in the updated version of our manuscript. *Q2: Missing Details in Figures* **Answer** Thank you for pointing out this aspect. We have taken your feedback into account and have augmented the figures with additional clarifications. In Figure 2a, orange signifies activation of data points within the explanation, while blue represents data points that do not belong to the explanation. *Q3: More Rigorous Comparison to Prior Work* **Answer** Your concern is well-received, and we have undertaken additional quantitative experiments, as detailed in our comprehensive global response to all reviewers. We acknowledge the intricacies of comparing our approach to prior methods, especially considering the absence of a standardized baseline. Our rationale for favoring the AUC approach over IoU-based methods is extensively explained in the global response. We have observed that, in certain cases, explanations with 0 IoU scores correspond to classes that systematically maximize neuron activation. For explanations with non-zero IoU, we have demonstrated a significant positive correlation with AUC scores. The AUC approach, offering advantages in terms of speed, dataset independence, and wider applicability, emerges as the more suitable choice, in our opinion. *Q3.2: Reason for Faster Computation* **Answer** Indeed, the optimization algorithm was inspired by the CompExp method.
In comparison, INVERT's efficiency stems from operating on binary vectors (0 for data points outside the explanation, 1 for those within), whereas CompExp conducts logical operations on high-dimensional binarized masks, incurring greater computational and memory costs. *Q4: Missing Details in Fine-tuning Experiment* **Answer** We acknowledge the validity of this comment and have made efforts to provide a more comprehensive explanation of this experiment in the updated version of our paper. Notably, the CalTech classes exhibit differences from those found within the ImageNet dataset. In our examination, we pinpointed 46 classes that share identical names across both datasets. Our approach involved a search for the latent-layer representation within the ImageNet-pre-trained model that exhibited the highest AUC in relation to the ImageNet counterpart of the CalTech class. To illustrate, if we consider the CalTech “barn” class, we identified, within the latent layer of the model, a neuron with the most substantial AUC towards the ImageNet “barn” class. We then used the output of this representation as a logit prediction for the CalTech “barn” class. Our findings underscore the efficacy of this relatively straightforward methodology, yielding commendable performance levels on the CalTech dataset. *Q5: More Qualitative Examples* **Answer** Your suggestion resonated with us, and we have not only provided additional quantitative results and explored emerging circuits but also augmented our findings with local explainability methods. Moreover, we have introduced extra examples showcasing "handcrafted" circuits. All of these enrichments are detailed in the attached PDF. *Q6: Statistical Significance* **Answer** Your observation regarding statistical significance is appreciated. We concur that IoU scores can be misleading, lacking in the ability to distinguish random occurrences from systematic patterns.
The AUC measure offered by INVERT not only delivers an interpretable measure (AUC) but also accompanies it with a p-value for a statistical test (i.e., testing the hypothesis “AUC of a given explanation = 0.5”), determining the dissimilarity between distributions of activations for explanations and non-explanations. This discussion is further elaborated in our global response. *Q7: Minor Points* We have taken heed of your feedback on minor points and have promptly rectified them in the updated version of our manuscript. We would like to express our sincere gratitude for your insightful questions and comments. We are hopeful that the evidence we have provided will positively contribute to the re-evaluation of our work. Your thorough review has been invaluable in shaping the enhancement of our manuscript. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal Comment: Thank you for your detailed response. I have read it carefully, along with all the other reviews and the corresponding rebuttals. While I think the improvements suggested in the rebuttal would certainly improve the presentation of the paper and help validate some of the statements made in the paper, I maintain my score since in its current form I do not believe the paper is ready for publication. The suggested changes by the authors in this rebuttal would require significant changes to the writing (added details about experiments, which is crucial for reproducibility; added details about the statistical significance of their scores along with experimental results to back them up), which will not get reviewed for quality control.
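As an aside on the Q3 discussion in this thread, the "more rigorous study" the reviewer asked for amounts to computing a correlation coefficient between IoU and AUC over the non-zero-IoU explanations. A minimal stdlib-only sketch, with invented per-neuron (IoU, AUC) pairs rather than results from the paper:

```python
# Hypothetical sketch: correlate per-neuron IoU and AUC scores over the
# neurons whose best explanation has non-zero IoU.  All numbers below
# are invented for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

scores = [  # (IoU, AUC) of the best explanation per neuron, hypothetical
    (0.04, 0.62), (0.11, 0.71), (0.20, 0.83),
    (0.07, 0.66), (0.15, 0.78), (0.25, 0.90),
]
nonzero = [(i, a) for i, a in scores if i > 0]
r = pearson([i for i, _ in nonzero], [a for _, a in nonzero])
```

A rank-based coefficient (Spearman) would be an equally reasonable choice if the relationship is monotone but not linear.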
Summary: This paper addresses the limitations of existing global explanation methods for Deep Neural Networks (DNNs) and proposes a new approach called Inverse Recognition (INVERT). Unlike previous methods, INVERT does not rely on segmentation masks, provides scalable interpretability, and enables statistical significance testing. The paper demonstrates the applicability of INVERT in various scenarios, including interpreting representations affected by spurious correlations and revealing the hierarchical decision structure within DNN models. Strengths: First of all - I'm not an expert in this field, so take everything with a grain of salt. That said, I like this approach for its simplicity. Also, I particularly like the experiments on detrimental representations (section 5.1) and on "finetuning without training" (section 5.2), which show that the proposed method has practical implications. Weaknesses: See Questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the reason for selecting very specific neurons? Were they found empirically, is this based on previous work, or is there some other reason? - How applicable is this method to modern ViT-based architectures, such as those used in CLIP? - Is it possible to have an "open vocabulary" for the concepts? One major shortcoming I see is that we have to use a fixed set of concepts to explain representations (although we can combine them to some extent using logical operators, as discussed in the paper) Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: not explicitly discussed in the main paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 6wCX, We extend our sincere appreciation for the dedicated time and thorough evaluation of our manuscript. We deeply value the recognition of the simplicity of our approach and the validation of our experiments' practical applicability. We will proceed to address the questions that have been raised. *Question: What is the reason for selecting very specific neurons? Were they found empirically, is this based on previous work, or is there some other reason?* **Answer:** In our study, we have showcased the versatility of INVERT across various models. While specific neurons such as neuron 154 in Figure 6 were selected due to prior research pointing to its susceptibility to recognizing Chinese watermarks, the neurons chosen for the circuits, both in the original paper and the attached PDF, were not deliberately selected. These examples are meant to illustrate INVERT's performance in authentic scenarios. Our overarching objective is to underline the universal applicability and adaptability of our method across diverse models and neurons. *Question: How applicable is this method to modern ViT-based architectures, such as those used in CLIP?* **Answer:** The presented INVERT methodology is universally applicable to any neural representations (neurons) producing scalar outputs, including ViT-based CLIP and transformer models in general. Our experiments in Section 5.2 successfully demonstrate the application of INVERT to ViT-based architectures. Additionally, the attached PDF showcases an expanded set of examples rooted in the ViT architecture, providing further evidence of our method's application scope. *Question: Is it possible to have an "open vocabulary" for the concepts?* **Answer:** So far, the dependency on data remains a central constraint for methodologies aiming to explain learned abstractions of neural networks and connect neurons to comprehensible human concepts.
Methods like Network Dissection and Compositional Explanation of Neurons necessitate images with object masks. INVERT mitigates this dependency by requiring image labels only. This not only makes data collection easier but also accelerates computational processes. As detailed in Section 5.1 of our paper, the set of concepts can be expanded by merging datasets from diverse sources to encapsulate a wider array of concepts. To entirely liberate the approach from data dependency, a critical, presently undiscovered step akin to the transition from supervised to unsupervised learning is required, a challenge we aim to tackle in forthcoming research. Furthermore, we have augmented our global response with new qualitative illustrations and comprehensive quantitative evaluation results. We thank Reviewer 6wCX very much for their valuable feedback, which greatly improved the quality of our work, and hope that these additions will persuade Reviewer 6wCX to reconsider and possibly improve the evaluation of our work. --- Rebuttal Comment 1.1: Comment: Dear authors, thank you very much for your response, which I appreciate. In general, I would like to see a stronger focus on the analysis of modern transformer-based architectures, but I understand the need to compare to prior methods mainly evaluated on ResNets (which are still widely used in practice). I will keep my original rating.
Rebuttal 1: Rebuttal: We extend our deepest gratitude for your invaluable dedication and expertise in assessing our work. The thoughtful feedback of all the reviewers has been instrumental in refining our research and elevating its quality. The positive reception of our work has been immensely gratifying, and with this global rebuttal, coupled with the attached PDF, we aim to address your questions and provide a comprehensive overview of our responses. Additionally, we address all the reviewers individually. **Evaluation** To address concerns about additional evaluations, we conducted an extensive quantitative assessment. We acknowledge the complexities of comparing our method to existing approaches due to the lack of a standardized baseline for the evaluation of global explanation methods. Our motivation was to demonstrate the limitations of IoU-based explanations. In Figure 1 of the attached PDF, we present a case where an explanation yielding a 0 IoU score is better aligned with the explanation goal. We provide evidence of IoU-based explanations resulting in low neuron activation, while INVERT achieves notable activation even when IoU scores are 0. Furthermore, we highlight a correlation between IoU and AUC scores in non-zero IoU cases across multiple models and layers (Table 1). This, coupled with INVERT's computational efficiency, lack of dependency on masked datasets, interpretable scoring measure (AUROC), and wider applicability, positions INVERT as a robust alternative to IoU-based methods. **Additional Qualitative Experiments** In the attached PDF, we have introduced new qualitative experiments to strengthen our claims and demonstrate the practicality of our proposed method. We enhanced existing figures with GradCam explanations (Figures 2 and 3). Additionally, we introduced a visualization for a new circuit (Figure 4) and included new "handcrafted" circuits for the ViT model (Figure 5), supplementing the example from the original paper (Figure 7).
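The "handcrafted" circuits mentioned above (each target class linked to the latent neuron with the highest AUC toward it, whose activation is then used directly as the class logit) can be sketched roughly as follows. The activations and labels are invented toy data, not the authors' code.

```python
# Minimal sketch (invented data) of a "fine-tuning without training"
# circuit: for each target class, pick the latent neuron whose
# activations best rank that class (highest AUC), then classify by an
# argmax over the selected neurons' raw outputs.

def auc(acts, labels):
    pos = [a for a, y in zip(acts, labels) if y]
    neg = [a for a, y in zip(acts, labels) if not y]
    hits = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return hits / (len(pos) * len(neg))

# Rows: images; columns: 3 latent neurons (hypothetical activations).
feats  = [[0.9, 0.1, 0.2],
          [0.8, 0.2, 0.1],
          [0.1, 0.9, 0.3],
          [0.2, 0.8, 0.2],
          [0.1, 0.2, 0.9]]
labels = ["barn", "barn", "cat", "cat", "kite"]
classes = ["barn", "cat", "kite"]

# Link each class to its best-AUC neuron (no weights are trained).
circuit = {}
for c in classes:
    y = [l == c for l in labels]
    circuit[c] = max(range(3), key=lambda j: auc([f[j] for f in feats], y))

def predict(x):
    """Use the linked neurons' activations directly as class logits."""
    return max(classes, key=lambda c: x[circuit[c]])
```

With this toy data the circuit maps barn to neuron 0, cat to neuron 1, and kite to neuron 2, and classification is a plain argmax over the selected neurons, with no training involved.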
**Statistical Significance** IoU-based explanations often suffer from reporting small IoU scores for highest-IoU explanations, raising concerns about random coincidences (for example, Figure 1 in the attached PDF). Notably, there exists no statistical measure to test the hypothesis $H_0: \mathrm{IoU} = 0$. INVERT's AUC-based method naturally connects to the Mann-Whitney-Wilcoxon test statistic, providing a means to test the hypothesis "AUC of a given explanation = 0.5." Table 2 furnishes a sanity check for INVERT, demonstrating that, with random explanations, INVERT yields AUCs $\approx 0.5$. After diligently addressing the valuable feedback provided by the reviewers, we have implemented a series of updates to our paper. These revisions encompass new figures and experiments, as well as the rectification of minor errors. With these clarifications and evidence, we trust that we have addressed potential queries regarding our work. We remain optimistic that this comprehensive information will positively influence the re-evaluation of our paper. Pdf: /pdf/dc44f874caa3af10338014ab39b5ad3c7b967457.pdf
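The AUC-to-Mann-Whitney connection described in this global rebuttal can be sketched numerically: the U statistic equals AUC times $n_{pos} \cdot n_{neg}$, so $H_0$: "AUC = 0.5" can be tested with the usual normal approximation. A toy, stdlib-only sketch with invented activations and explanations (a real analysis would use an exact or tie-corrected test):

```python
# Hedged sketch: AUC = U / (n_pos * n_neg), so "AUC = 0.5" can be
# tested via the Mann-Whitney normal approximation.  Data are invented.
import math

def auc_and_pvalue(acts, labels):
    pos = [a for a, y in zip(acts, labels) if y]
    neg = [a for a, y in zip(acts, labels) if not y]
    n1, n2 = len(pos), len(neg)
    # U counts positive-over-negative pairs (ties count 0.5).
    u = sum(1.0 if p > n else 0.5 if p == n else 0.0
            for p in pos for n in neg)
    auc = u / (n1 * n2)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    p = math.erfc(abs(u - mu) / sigma / math.sqrt(2))  # two-sided
    return auc, p

acts = [i / 200 for i in range(200)]  # one neuron's activations (toy)

# An activation-independent "explanation": AUC near 0.5, large p-value.
rand_labels = [i % 2 == 0 for i in range(200)]
auc_r, p_r = auc_and_pvalue(acts, rand_labels)   # AUC = 0.495, p ≈ 0.9

# An explanation aligned with the activations: AUC = 1, vanishing p.
good_labels = [a > 0.5 for a in acts]
auc_g, p_g = auc_and_pvalue(acts, good_labels)
```

This mirrors the Table 2 sanity check: an arbitrary labeling gives AUC close to 0.5 and no evidence against $H_0$, while an aligned labeling gives AUC 1 and a negligible p-value.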
NeurIPS_2023_submissions_huggingface
2023
Fed-GraB: Federated Long-tailed Learning with Self-Adjusting Gradient Balancer
Accept (poster)
Summary: This paper investigates a federated long-tailed learning (Fed-LT) task in which each client holds a locally heterogeneous dataset; if the datasets can be globally aggregated, they jointly exhibit a long-tailed distribution. The authors propose Fed-GraB to coordinate the global long-tailed distribution and the local learning strategy. Specifically, it consists of a Self-adjusting Gradient Balancer (SGB) module that re-weights clients’ gradients in a closed-loop manner based on the feedback of the global long-tailed prior derived from a Direct Prior Analyzer (DPA) module. Experiments are conducted on CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, and iNaturalist. Strengths: 1. Investigating the federated long-tailed learning problem from the perspective of combining global and local information sounds interesting. 2. The design of the proposed method is in line with the motivation of the paper, which seems technically feasible. 3. Experiments are conducted on many long-tailed benchmarks with various settings, which demonstrate the effectiveness of the proposed method. Weaknesses: Although the technical contribution of the manuscript sounds qualified, the presentation of the manuscript is not clear enough. There are some confusing issues that I will explain in the questions section. Please resubmit a revised version. I will update my final score according to your revision. Meanwhile, too many hyperparameter settings will affect the generalizability of the proposed method. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What's the correlation between uj in Eq.3 and beta_j in Eq.4? And what's the correlation between beta_j and (beta_j^pos, beta_j^neg)? This is essential for understanding the re-weighting process of SGB. 2. In line 196, the author gives the definition of φ, but I could not find where the φ be applied. 3. In Eq.4, a random number r is adopted as a threshold for Pcj, what's the role of it in the SGB? 4.
What's the definition of the expected target zj(t)? And why define the error feedback ej(t) as (gpos - gneg - zj(t))? 5. The EQLv2 strives to keep the cumulative positive and negative gradients equal for each category, which seems to share similar ideas with the proposed method. Compared with EQLv2-FL, what are the novelty and advantages of this approach? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have declared the limitations in their paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **1. What's the correlation between uj in Eq.3 and beta_j in Eq.4? And what's the correlation between beta_j and (beta_j^pos, beta_j^neg)? Where is the φ in line 196 applied?** > We apologize for any confusion caused by certain missing details in the paper. We will reiterate the details here and update the revised version accordingly. The error feedback $e_j(t)=\Delta_j(t) - z_j(t)=\Delta_j(t)$ for class $j$ represents the distance between the current status $\Delta_j(t)$ and a target $z_j(t)$ during training. Hence, the input of the controller $e_j(t)$ is the cumulative difference of positive and negative gradients. Given $e_j(t)$, the output of the controller in SGB is $u_j(t) = K_{P}e_j(t)+K_{I} \sum_{\tau=0}^{t}e_j(\tau) +K_D(e_j(t)-e_j(t-1))$. The output $u_{j}(t)$ indicates the strength with which to adjust the reweighting coefficients $\beta_j$. We first map $u_j(t)$ through a simple activation function $\phi(\cdot)$ to calculate $\beta_j^{neg}=\phi(-u_j(t)),~\beta_j^{pos}=\phi(u_j(t))$, where $\phi(x)=\frac\gamma{1+\delta e^{-\zeta x}}$. Then, we combine the global information obtained by DPA to calculate the final re-weighting coefficients: $\beta_j = [\beta_j^{\mathbf{pos}}(t), \beta_j^{\mathbf{neg}}(t)] = \mathbb{I}_{r > \mathcal{P}_c[j]} \cdot [\beta^{pos}_j, \beta^{neg}_j] + \mathbb{I}_{r \leq \mathcal{P}_c[j]} \cdot [1, 1]$, where $\mathbb{I}$ is the indicator function and the random value $r$ is sampled from a uniform distribution ranging from 0 to 1, denoted as $r \sim \mathrm{Uniform}(0, 1)$, suggesting that tail classes $j$ with smaller priors $\mathcal{P}_c[j]$ have a higher probability of undergoing re-weighting. Afterwards, $\beta_j$ is employed to reweight the positive and negative gradients. We will add the missing part about $\phi(\cdot)$ in the revised version. > **2. In Eq.4, a random number r is adopted as a threshold for Pcj, what's the role of it in the SGB?** > Thanks for the comments.
The value of $\mathcal{P}_c[j]$ serves as a threshold determining whether the SGB should operate during each local iteration. In essence, SGB is a class-wise balancer that can enhance the performance of designated classifiers. $\mathcal{P}_c[j]$ is seen as the degree of imbalance, approximating the global distribution of class $j$. We've designed three deployment strategies: **S1**: SGB is applied to all classes and operates continually. **S2**: SGB is only applied to estimated tail classes and operates continually. **S3**: SGB is applied to all classes and operates based on priors derived from DPA. We discovered that **S1** can critically undermine the model due to overcorrection of the head classes, resulting in catastrophic representation learning. As for **S2**, we observed that it facilitates model learning. Furthermore, applying SGB to different proportions of tail-end classes, as estimated by DPA, results in varying model performances (indicated by black dots; please refer to Fig. 5 (b) in the main paper). Therefore, **S2** requires the selection of an appropriate tail-end proportion. Thus, with **S3**, there is no longer a need for the manual selection of specific tail classes. We utilize DPA to globally regulate the activation of SGB based on the comparison between the sampled value $r$ and $\mathcal{P}_c[j]$, which readily achieves the optimal effect seen with **S2** (refer to the dashed line in Fig. 5 (b)). > **3. What's the definition of the expected target zj(t)? And why define the error feedback ej(t) as (gpos - gneg - zj(t))?** > The term $z_j(t)$ represents the desired target of $g_j^{pos}(t)-g_j^{neg}(t)$. During the training process, $g_j^{pos}(t)-g_j^{neg}(t)$ is adaptively constrained to stay close to $z_j(t)$.
As for the desired target $z_j(t)$, in a simplified balanced scenario where there are $M$ samples each having an equal probability for the $M$ classes, the expectation of the target $z_j(t)$ for $\Delta_j(t)$ is given as: $E(z_j(t))=\sum_{i=1}^{M-1}(E(\sigma_i(t))-1)-E(\sigma_j(t))=0.$ This signifies that $\Delta_j(t)$ approaches zero when the distributions become identical. Hence, we define $z_j(t)$ as 0, which implies the ideal balance point from the perspective of positive and negative gradients. The term $e_j(t)$ is the input to the PID controller, typically defined as the distance between the set ideal value $z_j(t)$ and the actual $g_j^{pos}(t)-g_j^{neg}(t)$. The controller adapts the positive and negative gradient weighting coefficients based on this error distance, to ensure that $g_j^{pos}(t)-g_j^{neg}(t)$ is better constrained by $z_j(t)$. > **4. Meanwhile, too many hyperparameter settings will affect the generalizability of the proposed method.** > We thank the reviewer for pointing this out. We agree that our algorithm involves several hyperparameters, including the internal parameters of the PID controller ($K_P, K_I, K_D$) and the parameters of the mapping function ($\gamma,\delta,\zeta$). ***We have provided a comprehensive ablation study discussing the impact of these six parameters in Sec. 4.2, 4.3, and 4.4 of the Supplementary Material and Sec. 4.4 of the main paper***. The conclusion drawn is that $K_P$ has a substantial impact on model performance, with a total accuracy increase of $3.4\%$ observed when $K_P$ is adjusted from $3$ to $10$, while the other parameters demonstrate significant robustness. In addition, we carried out experiments on the CIFAR-100/10, ImageNet-LT, and iNaturalist datasets. We welcome further discussions on issues related to generalizability. > **5. The EQLv2 strives to keep the cumulative positive and negative gradients equal for each category, which seems to share similar ideas with the proposed method.
Compared with EQLv2-FL, what are the novelty and advantages of this approach?** > Due to space limitation, this issue has been moved to the global response. Sorry for the inconvenience. --- Rebuttal 2: Comment: Thanks for the author's responses. Most of my concerns are addressed, therefore I improve my score slightly.
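The SGB feedback loop described in this thread (PID controller on the cumulative positive-negative gradient gap, sigmoid-like map $\phi$, and the DPA-prior gate) can be sketched as follows. The controller gains, $\phi$ parameters, and inputs are all invented for illustration, and the formulas are transcribed from this rebuttal, not from the authors' implementation.

```python
# Minimal sketch (invented constants, not the authors' code) of the SGB
# update: error e_j(t) = Delta_j(t) - z_j(t) with target z_j = 0, a PID
# controller produces u_j(t), phi maps it to the coefficient pair
# (beta_pos, beta_neg), and a uniform sample r gated against the DPA
# prior P_c[j] decides whether this class is re-weighted at all (tail
# classes, with small priors, are re-weighted more often).
import math
import random

KP, KI, KD = 10.0, 0.1, 0.01          # controller gains (hypothetical)
GAMMA, DELTA, ZETA = 2.0, 1.0, 1.0    # phi parameters (hypothetical)

def phi(x):
    """Sigmoid-like activation: gamma / (1 + delta * exp(-zeta * x))."""
    return GAMMA / (1.0 + DELTA * math.exp(-ZETA * x))

class SGB:
    def __init__(self):
        self.integral = 0.0   # running sum of errors (I term)
        self.prev_err = 0.0   # previous error (D term)

    def step(self, delta_j, prior_j, r=None):
        e = delta_j - 0.0                 # target z_j(t) = 0
        self.integral += e
        u = KP * e + KI * self.integral + KD * (e - self.prev_err)
        self.prev_err = e
        r = random.random() if r is None else r
        if r > prior_j:                   # gate: re-weight this class
            return phi(u), phi(-u)        # (beta_pos, beta_neg)
        return 1.0, 1.0                   # leave gradients unchanged

sgb = SGB()
# A class with a large cumulative gradient gap and a small (tail) prior.
b_pos, b_neg = sgb.step(delta_j=-0.8, prior_j=0.02, r=0.5)
```

With $\delta = 1$, the identity $\phi(u) + \phi(-u) = \gamma$ holds, so the two coefficients trade off against each other, saturating toward $(0, \gamma)$ or $(\gamma, 0)$ as the gap grows.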
Summary: This paper presents an approach, Fed-GraB, for addressing the challenges of Federated Long-Tailed Learning, an issue characterized by data heterogeneity and privacy concerns. The authors tested their method on several benchmark datasets, where it significantly outperforms state-of-the-art baselines. The paper is mostly experimental work. Strengths: The proposed Fed-GraB model is based on an interesting technique called the Self-adjusting Gradient Balancer to rebalance gradients. The experiments are comprehensive. Weaknesses: The authors propose to tackle the challenges posed by data heterogeneity and privacy concerns in the FL setting. However, the paper does not sufficiently justify the uniqueness of their approach. A clearer and more compelling argument is needed on how the proposed Fed-GraB model uniquely and effectively addresses these challenges. The proposed model needs to be corroborated with theoretical analyses, or at least insights, especially regarding the function and operation of the DPA modules. The paper often assumes that global class priors are available for re-balancing, which may not always be the case in real-world applications due to privacy constraints. More importantly, it's unclear how the proposed method would handle cases where local distributions are not long-tailed or present diverse long-tailed characteristics. A more comprehensive description of the interplay between the SGB and DPA modules, and the technical novelty there, would be helpful. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the above comments Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: more technical analysis or discussions are needed Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **1. How does the proposed Fed-GraB model *uniquely and effectively* address these challenges?** > Fed-GraB comprises two components, the Self-adjusting Gradient Balancer (SGB) and the Direct Prior Analyzer (DPA), each addressing distinct challenges. Challenge 1: In the context of federated long-tail learning, estimating the global long-tailed statistics without infringing on privacy is a significant challenge. ***Please note that global statistics are not available and need to be estimated through DPA***. The Direct Prior Analyzer (DPA) in the Fed-GraB model uniquely and effectively addresses the challenges in federated long-tailed learning by estimating global data statistics using the weight parameters from the global classifier. This allows the model to understand the global data distribution without transmitting additional information beyond the gradients [1]. Challenge 2: In addition, performing local training that synergistically aggregates into a global model that excels on both majority and minority classes under the Federated Learning (FL) setting poses another challenge. SGB establishes an expected target for the cumulative difference of positive and negative gradients across all clients, aiming to achieve synergistic aggregation. Instead of relying on predetermined heuristic algorithms [2-4], SGB incorporates a feedback loop for explicit reweighting: it mitigates the model's long-tail bias and enhances overall performance through the adaptive, feedback-mediated adjustment of the weighting coefficients of positive and negative gradients. > **2. The paper's assumption of available global class priors for rebalancing, which may not always be applicable in real-world scenarios due to privacy constraints.** > Thanks for the comments. We would like to clarify that in Fed-GraB, ***we did not assume that the global prior distribution is available***, neither to the clients nor to the server. 
Instead, we only assume that the local distributions (heterogeneous or iid) from clients would aggregate into a global long-tailed distribution, while the detailed characteristics of the global LT distribution are unknown. As for the local distributions, they were obtained by Dirichlet distribution-based sampling approaches, and they might be balanced or long-tailed with different numbers of data samples. The detailed partition process is described in Supplementary Section 2.3. ***As the global class prior is not available, we try to estimate the characteristics of the global distribution using the DPA module in a privacy-preserving manner.*** For comprehensive results on estimation accuracy under various imbalance factors and degrees of heterogeneity, please refer to lines 288-298 in the main paper. For real-world scenarios, we conducted experiments on datasets including ImageNet-LT and iNaturalist, as demonstrated in Table 2 of the main paper. To further illustrate the accuracy of DPA estimations, we performed additional evaluations of DPA's proficiency in identifying the global $50\%$ tail categories on CIFAR-100-LT. These results are recorded in the table under Question 2. The findings outlined there suggest that DPA offers accurate estimations across a broad spectrum of distributions. > **3. The ambiguity regarding how the proposed method would address situations where local distributions are either not long-tailed or exhibit a variety of long-tailed characteristics.** > Benefiting from the Dirichlet heterogeneous divisions in our work, ***the local distribution naturally encapsulates a variety of scenarios, including those that are not long-tailed and those that exhibit a range of long-tailed characteristics***. We have provided a heatmap (see Fig. 2 in the main paper), which allows for a convenient visual representation of the status of the local distribution. 
In the main paper, we conducted numerous experiments with various imbalance factors and alpha values (non-IID), which generate a vast array of local distribution scenarios. Please refer to Table 1 in the main paper for further details. To further demonstrate the performance of Fed-GraB across various local distributions, we also evaluated its performance under the IID condition. Specifically, we used CIFAR-10 with $IF_G=100$, and the results underscored the exceptional performance of our method under the IID condition, surpassing the current state-of-the-art by more than 1.9%. | Models | Many | Med | Few | All | | --- | --- | --- | --- | --- | | FedAvg | 0.929 | 0.720 | 0.595 | 0.733 | | CReFF | 0.938 | 0.734 | 0.592 | 0.738 | | FedNova | 0.927 | 0.736 | 0.625 | 0.749 | | EQLv2-FL | 0.932 | 0.734 | 0.595 | 0.738 | | Focal-FL | 0.929 | 0.710 | 0.573 | 0.721 | | Fed-GraB | 0.921 | 0.714 | 0.695 | 0.768 | > **4. A more comprehensive description of the interplay between the SGB and DPA modules is needed.** > SGB calculates $\beta_j^{\mathbf{pos}}(t), \beta_j^{\mathbf{neg}}(t)$. We combine the global prior with the SGB outputs: $\beta_j = [\beta_j^{\mathbf{pos}}(t), \beta_j^{\mathbf{neg}}(t)] = \mathbb{I}_{r > \mathcal{P}_c[j]} \cdot [\beta_j^{pos}, \beta_j^{neg}] + \mathbb{I}_{r \leq \mathcal{P}_c[j]} \cdot [1, 1]$, where $\mathbb{I}$ is the indicator function and the random value $r$ is sampled from a uniform distribution on $[0, 1]$, i.e., $r \sim \mathrm{Uniform}(0, 1)$. This means that tail classes $j$ with smaller priors $\mathcal{P}_c[j]$ have a higher probability of undergoing re-weighting. > **5. The proposed model needs to be corroborated with theoretical analyses or at least insights.** > Due to space limitation, this issue has been moved to the global response. Sorry for the inconvenience. [1] Addressing Class Imbalance in Federated Learning. [2] Seesaw Loss for Long-Tailed Instance Segmentation. [3] Fed-focal loss for imbalanced data classification in federated learning. 
[4] Equalization Loss v2: A New Gradient Balance Approach for Long-tailed Object Detection. --- Rebuttal Comment 1.1: Comment: The authors have addressed most of my comments. I will increase the score.
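The prior-gated re-weighting that the rebuttal above describes (apply the SGB weights to class $j$ only when a uniform draw $r$ exceeds the estimated prior $\mathcal{P}_c[j]$) can be sketched in a few lines. This is a toy illustration, not the authors' code; the function name and the toy prior are made up for the example.

```python
import random

def gated_weights(prior_j, beta_pos, beta_neg, rng=random):
    """Re-weight class j's positive/negative gradients with probability
    1 - P_c[j]: tail classes (small prior) are re-weighted often, while
    head classes mostly keep the neutral weights (1, 1)."""
    r = rng.random()  # r ~ Uniform(0, 1)
    if r > prior_j:
        return beta_pos, beta_neg  # apply SGB re-weighting
    return 1.0, 1.0                # leave gradients unchanged

# Toy check: a tail class with prior 0.01 is re-weighted ~99% of the time.
random.seed(0)
hits = sum(gated_weights(0.01, 2.0, 0.5) != (1.0, 1.0) for _ in range(1000))
```

As the rebuttal states, the smaller the prior of a class, the more often its gradients are re-weighted, so the stochastic gate concentrates the re-balancing effort on tail classes.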
Summary: In this paper, a self-adjusting, closed-loop gradient re-balancing framework, Fed-GraB, is proposed, which improves performance on long-tailed learning tasks. Strengths: This paper presents a methodology named Fed-GraB, which incorporates a Self-adjusting Gradient Balancer (SGB) module. This module dynamically adjusts the weightage of gradients from individual clients in a closed-loop manner, guided by the feedback obtained from a Direct Prior Analyzer (DPA) module. By employing Fed-GraB, clients can effectively mitigate the challenges posed by disparate data distributions during the model training process. They achieve a global model that exhibits enhanced performance on underrepresented classes while maintaining the performance levels of the majority classes. Weaknesses: 1. Equation 4 lacks ending punctuation. 2. Add markers to Figure 4(b) to make the curves more legible. 3. The definition of IFG is not clear. Please give a formula defining IFG. 4. There is no specific explanation of how the dataset is divided to obtain CIFAR-10/100-LT. 5. The insight of the algorithm should be explained more clearly. 6. The baselines selected in this paper are not fully appropriate, because only one of them is designed for federated learning. Solutions to unbalanced data in FL need to be compared. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful feedback. For weaknesses 1 and 2, we will ensure the equation contains proper ending punctuation and add markers in the revised manuscript. > **1. The definition of IFG is not clear. Please give a formula that says how to define IFG. And how to divide the dataset to get the Cifar-10/100-LT?** > Thank you for the comment. We apologize for the lack of clarity in the definition of IFG in the main body of the paper. The imbalance factor $IF_G$ represents the ratio of the number of samples in the head (most populous) category to the number in the tail (least populous) category within the dataset. $n_k^{(i)}$ is the number of data samples of class $i$ in client $k$. The $IF_G$ is defined as $IF_G = \frac{\max_i{n^{(i)}}}{\min_i{n^{(i)}}}$, where $n^{(i)}=\sum_{k=1}^{N}n_k^{(i)}$ and $N$ is the number of clients. There are more details regarding $IF_G$ in our supplementary material, Section 2.1. In regards to the dataset partition, we have provided details on how the dataset was divided to obtain CIFAR-10/100-LT in the supplementary material, specifically in Section 2.3. First, we truncated the dataset class-wise following an exponential distribution, controlling the imbalance level by the global imbalance factor. Second, we conducted sampling to divide the dataset using a Dirichlet distribution-based approach for the non-IID data partition. Please refer to the supplementary material for more information. > **2. The insight of the algorithm should be explained more clearly.** > We appreciate your feedback and agree that the insight of the algorithm can be further explained. Fed-GraB consists of two main components: the Direct Prior Analyzer (DPA) and the Self-adjusting Gradient Balancer (SGB). 1. **Direct Prior Analyzer (DPA)** The purpose of DPA is to ***analyze global long-tailed statistics***. 
It infers a prior vector of global data statistics by leveraging the weight parameters of the global classifiers. This prior vector allows us to understand the long-tailed distribution with imbalances between the head and tail. 2. **Self-adjusting Gradient Balancer (SGB)** The SGB primarily functions to rebalance gradients on a per-class basis at each client, informed by the global head-tail properties estimated by the DPA. Unlike heuristic methods that can lead to client divergence, SGB harmonizes all clients towards a common rebalancing direction. This approach not only reduces statistical variance across clients (as illustrated in Fig. 4 (a) of the main paper) but also mitigates divergence. Therefore, SGB effectively ***achieves data rebalancing and divergence alleviation simultaneously***. We hope that this explanation provides a clearer understanding of the Fed-GraB algorithm. We are happy to answer any additional questions on this matter. > **3. The baselines selected in this paper are not fully appropriate, because only one of them is designed for federated learning. Solutions to unbalanced data in FL need to be compared.** > We categorize the baselines into two major classes. The first class encompasses imbalanced federated learning methods, including CReFF [1], Fed-Focal Loss [2], and FedIR [3], as well as traditional heterogeneity methods such as FedProx and FedNova. The second class assesses the federated effects of existing long-tail methods. We have also incorporated a newly proposed method, ETF [4], which addresses federated global imbalance bias. We conducted experiments on the CIFAR-10 dataset, with an imbalance factor of 100 and a heterogeneity parameter alpha set to 0.5. The results are compiled and presented in the table below. 
| method | Many | Med | Few | All | | --- | --- | --- | --- | --- | | CReFF | 0.962 | 0.726 | 0.611 | 0.731 | | Fed-Focal Loss | 0.883 | 0.703 | 0.581 | 0.728 | | FedIR | 0.976 | 0.726 | 0.562 | 0.715 | | ETF | 0.499 | 0.624 | 0.703 | 0.631 | | Fed-GraB | 0.910 | 0.698 | 0.713 | 0.761 | [1] Federated Learning on Heterogeneous and Long-Tailed Data via Classifier Re-Training with Federated Features. [2] Fed-focal loss for imbalanced data classification in federated learning. [3] Federated Visual Classification with Real-World Data Distribution. [4] No Fear of Classifier Biases: Neural Collapse Inspired Federated Learning with Synthetic and Fixed Classifier. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. I have read the author responses as well as comments from other reviewers. The authors have provided more results and discussions regarding my concerns (the method details, insight of algorithms, more baselines, etc.). Overall, I think this paper identified an interesting Fed-LT setting, and the proposed DPA and SGB modules are demonstrated to be effective with comprehensive experiments. Further extensions regarding other issues, such as privacy or theoretical analysis as other reviewers mentioned, could better enhance the quality of this paper, while the authors have provided some preliminary results and discussions in the rebuttal. Based on the overall quality of the paper/response, I'd like to keep my score.
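The $IF_G$ formula given in the rebuttal two rows above ($IF_G = \max_i n^{(i)} / \min_i n^{(i)}$ with $n^{(i)} = \sum_k n_k^{(i)}$) is straightforward to compute from per-client class counts. This is a minimal sketch with made-up counts, not the authors' partition code.

```python
def global_imbalance_factor(client_counts):
    """IF_G = max_i n^(i) / min_i n^(i), with n^(i) summed over clients.

    client_counts: list of per-client lists, where client_counts[k][i] is
    the number of samples of class i held by client k.
    """
    num_classes = len(client_counts[0])
    totals = [sum(client[i] for client in client_counts)
              for i in range(num_classes)]
    return max(totals) / min(totals)

# Two clients, three classes: global counts are [110, 30, 11] -> IF_G = 10.
counts = [[100, 20, 1], [10, 10, 10]]
if_g = global_imbalance_factor(counts)
```

Note that $IF_G$ is a property of the aggregated global distribution: individual clients can be balanced (like the second client above) while the global distribution is still heavily long-tailed.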
Summary: This paper considers the globally long-tailed distribution of data and its impact on federated learning (FL). To address this issue, the authors propose the Fed-GraB algorithm that re-balances gradients. The problem is timely and the proposed algorithm is novel. Overall, the paper is well-written, and simulations are extensive. However, the limitations of the proposed framework are not rigorously addressed via, for instance, mathematical or quantitative analysis. Furthermore, the applicability of the proposed framework to various applications and other state-of-the-art FL algorithms is questionable. Strengths: The potential impact of the proposed framework would be significant, particularly on two emerging areas of research: fairness guarantees and rare-event detection. The proposed idea of re-balancing gradients is not entirely original, yet still sufficiently novel as it has not yet been considered in FL for this purpose. Notably, the proposed idea does not incur significant costs in terms of computation and memory, which is a big plus for practical implementation. The literature of the considered problem and relevant frameworks is well reviewed, making the paper easily readable. Weaknesses: Some potential weaknesses of the proposed framework have been proactively described, yet the claims are not very convincing. In particular, in the Privacy Discussions, it claims that "potential privacy issue exists in the general FL frameworks rather than specific to our proposed DPA method...As the privacy issue of FL framework is beyond the scope of this we briefly include the discussion in this subsection." This justification hardly responds to the issues when applying the proposed framework to other privacy-preserving FL frameworks, for instance, ones that apply quantization and noise injection, under which the proposed framework's performance may be degraded. 
Furthermore, in Computational and Storage Cost of SGB, it claims that "the extra computation which is implemented with several lines of code could be done very quickly. Besides, Fed-GraB needs some extra storage cost to store the weighted gradients which is quite cheap as well." This is not very convincing, as it does not compare, for example, the resultant convergence speed and guarantees, as well as FLOPS and bytes, against the cases without the proposed solution. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. According to (1), the proposed algorithm relies on classification and cross entropy loss, as opposed to the original FL that has no restriction on its task and loss function. Is the proposed algorithm applicable to 1) non-classification tasks as well as 2) non-cross entropy loss or cross-entropy with other regularizers? 2. There are various advanced FL algorithms that intentionally distort gradients via quantization, sampling, and noise injection for compression, privacy protection, and so forth. Is the proposed algorithm still applicable to these algorithms? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing thorough and insightful comments on our paper. > **1. Is the proposed algorithm applicable to non-cross entropy loss or cross-entropy with other regularizers as well as non-classification tasks?** > We thank the reviewer for this comment. Our proposed algorithm can be used in conjunction with different loss functions and regularization methods, as it acts on the gradients of the logits during backpropagation. During the rebuttal period, we conducted experiments on the combination of cross entropy with L2 regularization and the focal loss. The experiments are based on the CIFAR-10 dataset, with results in the table below. As we can see, the Fed-GraB algorithm works seamlessly with the focal loss or the cross entropy loss with an L2 regularizer, where the overall performance is even slightly improved. In contrast, the performance of the original FedAvg degraded a bit when combined with the L2 regularizer. The results demonstrate the effectiveness of Fed-GraB for different losses and regularizers. | Method | Many | Med | Few | All | | --- | --- | --- | --- | --- | | FedAvg | 0.906 | 0.720 | 0.585 | 0.719 | | FedAvg + L2 regularizer | 0.981 | 0.778 | 0.503 | 0.709 | | Focal loss | 0.935 | 0.729 | 0.620 | 0.727 | | Fed-GraB+Focal loss | 0.958 | 0.718 | 0.640 | 0.735 | | Fed-GraB | 0.942 | 0.692 | 0.720 | 0.753 | | Fed-GraB+L2 regularizer | 0.962 | 0.667 | 0.743 | 0.757 | Regarding adaptability to non-classification tasks, theoretically, Fed-GraB can be applied to instance segmentation and detection tasks, such as Cascade Mask R-CNN on the LVIS v1 dataset. This is because Fed-GraB, similar to seesaw loss [1], EQL [2], Droploss [3] (which can be used for instance segmentation), and EQLv2 [4] (further extended to object detection), employs the method of negative gradient over-suppression. 
They function by applying weighted positive and negative gradients to the classification heads in instance segmentation and object detection tasks. However, in the context of federated long-tail learning, for instance segmentation and object detection tasks, given the presence of long-tail class distribution within a single image, our dataset partitioning would require a more complex and delicate design. Correspondingly, the formulation and metrics would also need to be appropriately designed. This goes beyond the scope of our current work. We believe that federated long-tail non-classification learning presents a challenging yet fruitful area with numerous application scenarios, making it a worthwhile focus for future research. > **2. How about the resultant convergence speed and guarantees, as well as FLOPS and bytes, compared with the cases without the proposed solution?** > Thanks for the comments. We conducted tests on CIFAR-10 with a heterogeneity of 0.5 and an imbalance factor of 100, comparing the in-process memory overhead of different baselines and our method. The number of clients was set to 40, with local epochs fixed at 5. We report the convergence speed (how many rounds it takes to achieve a classification accuracy of 70%), the computational cost per round (how much time it takes to train the model for 1 epoch) and the memory cost (expressed as a multiple of the memory consumed by FedAvg, which is 3467 MB) in the table below. Our method shows computation and memory costs nearly identical to EQLv2, along with a much faster convergence rate. In terms of memory cost, the PyTorch gradient capture hook function does occupy some storage space. This overhead can be improved through official code refactoring. | Method | convergence speed (rounds to 70%) | computational cost/round | memory cost | | --- | --- | --- | --- | | FedAvg | 280 | 2m 1s | 1.000x | | EQLv2 | 218 | 2m 34.2s | 2.023x | | Fed-GraB | 133 | 2m 35s | 2.024x | > **3. 
There are various advanced FL algorithms that intentionally distort gradients via quantization, sampling, and noise injection for compression, privacy protection, and so forth. Is the proposed algorithm still applicable to these algorithms?** > We have conducted experiments on CIFAR-10 with a heterogeneity of 0.5 and an imbalance factor of 100 on Fed-GraB with the bucket_quantile quantization algorithm [5] and a federated differential privacy algorithm (noise injection) [7]. The performance is presented in the table below. It can be observed that distorting the gradients or injecting noise tends to decrease model performance in the Fed-LT scenario. However, when compared to the FedAvg baseline, our Fed-GraB can obtain significant performance improvements in both cases, especially in the tail classes. The results are reasonable, as the current Fed-GraB framework is not yet tailored for gradient distortion or noise injection. We thank the reviewer for this valuable comment, and we expect future versions of Fed-GraB to better address these issues. | Method | Many | Med | Few | All | | --- | --- | --- | --- | --- | | FedAvg | 0.906 | 0.720 | 0.585 | 0.719 | | Fed-GraB | 0.910 | 0.698 | 0.713 | 0.761 | | FedAvg + quantization | 0.969 | 0.707 | 0.472 | 0.665 | | Fed-GraB + quantization | 0.949 | 0.560 | 0.686 | 0.689 | | FedAvg + injected noise | 0.977 | 0.751 | 0.538 | 0.711 | | Fed-GraB + injected noise (DP) | 0.956 | 0.705 | 0.696 | 0.752 | [1] Seesaw Loss for Long-Tailed Instance Segmentation. [2] Equalization loss for long-tailed object recognition. [3] Droploss for long-tail instance segmentation. [4] Equalization Loss v2: A New Gradient Balance Approach for Long-tailed Object Detection. [5] Sketchml: Accelerating distributed machine learning with data sketches. [6] Communication-Efficient Learning of Deep Networks from Decentralized Data. [7] LDP-Fed: Federated learning with local differential privacy. 
--- Rebuttal Comment 1.1: Comment: I have read the authors' responses to my concerns and comments. Most of them have been addressed at a satisfactory level via additional simulations. Although the baseline FedAvg is not state-of-the-art, it is still impressive to see significantly faster convergence and lower computational complexity under various loss functions, demonstrating the huge potential of the proposed framework. Based on the overall paper quality and the authors' responses, I'd like to slightly increase my score.
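The "injected noise (DP)" baseline in the rebuttal table above can be sketched as a clip-then-perturb step on a gradient vector. This is a toy illustration of the general DP-style mechanism, not the LDP-Fed implementation; the clip norm and noise scale are illustrative values.

```python
import random

def clip_and_noise(grad, clip_norm=1.0, sigma=0.1, rng=random):
    """DP-style gradient perturbation: clip the gradient to an L2 ball of
    radius clip_norm, then add Gaussian noise to each coordinate.
    Parameters here are made-up defaults, not values from the paper."""
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [g * scale + rng.gauss(0.0, sigma * clip_norm) for g in grad]

random.seed(0)
noisy = clip_and_noise([3.0, 4.0])  # norm 5 -> clipped to norm 1, plus noise
```

As the rebuttal observes, perturbation of this kind degrades accuracy in the Fed-LT setting for any method; the comparison of interest is Fed-GraB versus FedAvg under the same perturbation.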
Rebuttal 1: Rebuttal: # For Reviewer 3 > **5. The proposed model needs to be corroborated with theoretical analyses or at least insights, especially regarding the function and operation of the DPA modules.** > Regarding SGB, we provided the expected target $z_j(t)$ under ideal equilibrium conditions in Equation 2 of the paper. As for the error feedback regulator, our insights derive from PID control theory, which is underpinned by a robust theoretical foundation. This includes various analyses such as Bode plots, Nyquist plots, and evaluations of phase margin and gain margin [1-5]. In the context of DPA, we explain the relationship between the Euclidean norm of the classifier and the real distribution from two perspectives: neural collapse and forward propagation. From the perspective of neural collapse, four deeply interconnected phenomena occur during the training and gradual convergence of the neural network [6]. Among these, the third phenomenon (Neural Collapse 3) reveals that, up to rescaling, the last-layer classifiers, which implicitly represent the classifier decision, collapse to the class means. For balanced datasets, class means are uniformly distributed and form an equiangular tight frame simplex. However, for imbalanced datasets, class means naturally exhibit bias, and the classifier decision converges to these biased class means [7,8]. Therefore, when the model tends to predict the head classes, the class means of these classes become larger, leading to greater decision boundaries and a larger Euclidean norm of the class vector. Numerous theories and experiments detailing this phenomenon are presented in [6,7]. From the forward propagation standpoint, the class with the maximum logit is selected as the predicted class. 
The logit $s_{ij}=f_iw_j^\top$ can be written as $\Vert f_i \Vert_2 \Vert w_j \Vert_2\cos\theta_{ij} \propto \Vert w_j \Vert_2\cos\theta_{ij}$, suggesting that a large Euclidean norm of $w_j$ biases the model towards predicting the corresponding class. To validate the effectiveness of DPA, we conducted various experiments on CIFAR100-LT with different imbalance factors and heterogeneity to demonstrate the estimation accuracy of DPA under various circumstances. The accuracy of identifying the global $50\%$ tail categories was determined as the ratio of correct predictions to the total number of predictions, with imbalance factors of 2, 50, and 100 and alpha values of 15, 0.5, and 0.05. The results are recorded in the following table. As can be seen, DPA demonstrates excellent generalizability and achieves higher accuracy under severe long-tail conditions. Additionally, we conducted experiments to illustrate the relationship between the Euclidean norm of the classifier and the real distribution, as shown in ***Fig. 1 of the rebuttal material.*** | | Alpha=15 | Alpha=0.5 | Alpha=0.05 | | --- | --- | --- | --- | | IF=2 | 0.780 | 0.740 | 0.680 | | IF=50 | 0.960 | 0.980 | 0.860 | | IF=100 | 0.980 | 0.980 | 0.880 | [1] PID control system analysis, design, and technology. [2] Tuning of PID controllers based on Bode's ideal transfer function. [3] The direct Nyquist array design of PID controllers. [4] Tuning of PID controllers based on gain and phase margin specifications. [5] Performance and gain and phase margins of well-known PID tuning formulas. [6] Prevalence of neural collapse during the terminal phase of deep learning training. [7] Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. [8] Decoupling representation and classifier for long-tailed recognition. # For Reviewer 4: > **5. The EQLv2 strives to keep the cumulative positive and negative gradients equal for each category, which seems to share similar ideas with the proposed method. 
Compared with EQLv2-FL, what are the novelty and advantages of this approach?** > We thank the reviewer for pointing this out. As one of our baselines, EQLv2, developed in a centralized learning setting, is an effective method to alleviate the long-tail problem and has provided significant insights for the design of our methods. We will expound on the limitations of EQLv2 in the context of federated learning (FL) via two implementation methods: global EQLv2 and local EQLv2. 1. Applying local EQLv2 in FL, we noted a considerable variance in $g_j^{pos}(t)-g_j^{neg}(t)$ across different clients during model aggregation, indicating a substantial divergence among them, which is not favourable for federated learning (please refer to Fig. 4 (a) in the main paper). SGB, on the other hand, constrains all clients' $g_j^{pos}(t)-g_j^{neg}(t)$ with the feedback loop to satisfy $E(z_j(t))=0$, thereby ensuring coordination among different clients, without individually designing re-weighting parameters ($\alpha, \gamma, \mu$ in EQLv2) suited to each client's local distribution. 2. In the case of global EQLv2, it is necessary to upload each client's accumulated positive and negative gradients. As these accumulations closely mirror the local distributions (***refer to Fig. 2 of the rebuttal material***), this can potentially lead to privacy leakage concerns and associated risks. Conversely, SGB does not require the transmission of any extra information, thus mitigating the risks associated with privacy leakage. In summary, while EQLv2 is an effective solution in a centralized learning environment, Fed-GraB has been designed for federated long-tailed learning, taking into account factors of distributed training such as self-adjusting gradient synchronization among clients and privacy preservation. Pdf: /pdf/f3e885eb90990662d3852f286037d32a1a7584fb.pdf
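The DPA intuition argued in the rebuttal above (head classes tend to have a larger classifier weight norm $\Vert w_j \Vert_2$, so the smallest-norm classes can be flagged as tail) can be sketched as a simple ranking over the classifier's weight rows. The function name, the tail fraction, and the toy weight matrix are illustrative, not the paper's implementation.

```python
def estimate_tail_classes(classifier_weights, tail_fraction=0.5):
    """Rank classes by the L2 norm of their classifier weight vector and
    return the indices of the smallest-norm fraction as estimated tail
    classes (the DPA intuition: head classes get larger ||w_j||)."""
    norms = [(sum(w * w for w in row) ** 0.5, j)
             for j, row in enumerate(classifier_weights)]
    norms.sort()  # ascending: smallest norms (candidate tail classes) first
    k = int(len(norms) * tail_fraction)
    return sorted(j for _, j in norms[:k])

# Four classes; classes 2 and 3 have the smallest weight norms.
W = [[3.0, 0.0], [0.0, 2.5], [0.5, 0.5], [0.1, 0.2]]
tail = estimate_tail_classes(W)  # -> [2, 3]
```

This uses only the global classifier's weights, which is consistent with the rebuttal's point that no information beyond the aggregated model needs to be transmitted.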
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Pengi: An Audio Language Model for Audio Tasks
Accept (poster)
Summary: Pengi is a novel audio language model that leverages transfer learning to frame all audio tasks as text-generation tasks. It takes an audio recording and text as input, and generates free-form text as output. The input audio is represented as a sequence of continuous embeddings by an audio encoder, and the text input is represented similarly by a text encoder. Both sequences are then combined as a prefix to prompt a pre-trained frozen language model. When evaluated on 22 downstream tasks, Pengi achieved state-of-the-art performance in several of them. Strengths: 1. The proposed model is a novel audio language model that can be used for multiple audio tasks, including close-ended and open-ended tasks. It does not require any additional fine-tuning or task-specific extensions. 2. The paper introduces a new learning framework that frames all audio tasks as audio and text input to text output tasks. This framework uses a single training procedure and a captioning objective function. For training, Pengi uses new audio task templates inspired by Instruction Tuning. 3. Pengi has been extensively evaluated on 21 (or 22?) downstream tasks across various audio domains. It achieved state-of-the-art performance in several of these tasks, establishing a baseline for general-purpose ALMs. Weaknesses: . Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: 1. Please add more baselines - a cascade model for each task. 2. Does the model have ASR capabilities? 3. Can you add an ablation analysis of each component? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 4 excellent Limitations: . 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for recognizing our contribution! We address every question and hope that our response resolves your concerns. Any follow-up questions are welcome. *** **Question 1. Please add more baselines - a cascade model for each task.**\ For Table 3, we chose CLAP because it is the only Zero-Shot model with a comprehensive evaluation (16 downstream tasks). The next best evaluation covered only 8 tasks, thus providing no evidence of performance across other domains like speech and music, which tend to be the most difficult. For the rest of the tables, we compared against SoTA results even if they came from different models and learning methods. We compared against SoTA Zero-Shot models in Table 8, a subset of Table 3, for Sound Event Classification, and even against SoTA supervised learning models in Tables 5 and 7 for Audio Q&A and Audio Captioning, respectively. In Table 9, we compared against SSL and supervised models trained on speech audio. Training with ensemble (cascade) models would provide insights into how methods complement each other, but it was beyond our scope. **Question 2. Does the model have ASR capabilities?**\ Pengi was not trained on any speech audio-transcript data, so it does not support ASR. We believe that the key to integrating audio and speech tasks is to develop a universal audio encoder architecture, which is an exciting and important direction for the future, but beyond the scope of this work. **Question 3. Can you add an ablation analysis of each component?**\ We performed different ablation studies and a new experiment for the text encoder: 1) We study the choice of text encoder and its mapper m2 for Pengi. We denote exp A as Pengi without both the text encoder and m2, and exp B as Pengi without the text encoder. In exp A, we found that removing both m2 and the text encoder resulted in a loss of coherence between the input text prompt and the output text. 
For example, an input prompt asking to identify an emotion class, "the emotion is ", resulted in random text output and thus random performance. In Exp B, we found performance comparable to Pengi. We attach this as Table 1 in the PDF in the global response. 2) We analyzed the effect of the choice of text prompt on performance in Appendix Section C, and found that the prompt "generate metadata" is a good default choice. 3) We analyzed the prefix output from the audio encoder-mapper in Appendix Section B and found that it contains relevant keywords present in the output text, but can be noisy. 4) We analyzed freezing and unfreezing the audio encoder in Appendix Section E, and found that unfreezing the encoder yielded better performance.
Summary: This paper explores Large Language Models (LLMs) in the context of audio processing. The core idea is audio-injected instruction tuning for a pre-trained LLM. The method is simple: collect a large amount of audio-text paired data and use it to fine-tune a pre-trained CLAP audio encoder together with a frozen LLM. Audio is injected by concatenating the CLAP feature sequence before the text. Taking advantage of the nature of the LLM training strategy, the pair relation can be provided in any form (e.g., captioning, label, metadata), placed into a pre-defined template, and trained with next-token prediction. Evaluation is done on a wide collection of audio-related tasks using the output sentence of the LLM given the input audio and question. Strengths: - This is the first (together with a few concurrent papers) work to explore LLMs for audio processing. - Performance on audio captioning is solid. - The method is simple in a good way, and this work can serve as a baseline for audio processing with LLMs if the evaluation is made more complete (see weaknesses). Weaknesses: The biggest concern I have for this paper is that **the evaluation seems to be limited/biased**. - Table 3: In Pengi's framework, CLAP serves as a feature extractor that is pre-trained and unfrozen. It is obvious that Pengi can (and should) be making improvements over CLAP considering the scale of the system, the total amount of data used, and the computational power required. Comparing Pengi to state-of-the-art methods on each benchmark would better justify its value. (Imho, this is also why LLMs are a great success - they generalize to and perform well on most tasks, well enough that people can live with the amount of data/computation they cost.) Even if the numbers are not overwhelming, it would still add value to this work as a first step in exploring LLMs' application in audio. 
- Table 6/Section 5.4: It is unclear how the authors obtained the retrieval performance for existing contrastive methods using their text-to-text retrieval pipeline. How do they index the dataset with text given the audio/text encoder from contrastive learning? Moreover, the text-to-text setup could be biased and unfair to other methods, since Pengi builds on top of a pre-trained LLM and focuses on text during training. While a 100% fair comparison might not be possible, it is clearly unfair to use significantly more resources AND compare in a way favoring the large model. Tables 5/7 are good examples where at least the evaluation protocol is fair, and I don't see a reason why Table 6 should be treated differently. Again, even if the numbers are not good enough, it would still help the community by establishing a standard for a general-purpose audio model. Overall, this work is interesting in terms of exploring a general audio processing model, but the evaluation should be improved. I would happily raise my score if the above-mentioned concerns can be addressed. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: - The use of the text encoder: line 112 says it's fundamental, but there seems to be no ablation on this. Having the LLM take text as input seems more intuitive. Any idea why? - For line 270: "... by 32%", isn't this 22-ish%? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: - Generalizability (data): Table 8 shows that in the cases (namely, ESC50, US8K, and DCASE17) where Pengi has not seen the dataset, results are not as competitive. It would be good if more details on the evaluation pipeline could be provided, and it would be interesting to see some error analysis. 
- Generalizability (format): All the tasks this paper considers are covered by the templates, hence the input is not in free form. One of the key strengths of instruction-tuned LLMs is the ability to take any input and answer accordingly. Are there examples where the question (text input) is never seen during training? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for recognizing our contribution and novelty! We take every comment seriously and address every concern point by point. We hope that our response resolves your concerns. Any follow-up questions are welcome. *** **Question 1. Table 3: In Pengi's framework, CLAP serves as a feature ... it will still add value to this work as a first step of exploring LLM's application in audio.**\ As the reviewer suggested, our goal is to add value by exploring LLMs for audio rather than establishing SoTA on every task. For Table 3, we chose CLAP because it is the only zero-shot model with a comprehensive evaluation (16 downstream tasks); the next best evaluation covered only 8 tasks, providing no evidence of performance across other domains like speech and music, which tend to be the most difficult. For the remaining tables, we compared against SoTA results even when they came from different models and learning methods: SoTA zero-shot models in Table 8 (a subset of Table 3) for sound event classification, SoTA supervised models in Tables 5 and 7 for Audio Q&A and Audio Captioning respectively, and SSL and supervised models trained on speech audio in Table 9. To strengthen our baseline from Table 3, we trained a CLAP model on the same amount of paired data as Pengi (3.4M) to rule out variations due to data; Pengi's strong performance still holds. We attach this as Table 2 in the PDF in the global response. **Question 2. Table 6/Section 5.4: It is unclear how the authors obtained the retrieval performance ... it would still help the community by establishing a standard for general purpose audio model.**\ We acknowledge the reviewer's point. Models trained with contrastive learning learn the similarity between audio and text and can do audio-to-text and text-to-audio retrieval in one step. 
Text-generation models, like standard audio captioning models and Pengi, do not learn the multimodal similarity and thus require additional steps to compare text and audio. Therefore, in Table 6, we compared Pengi, using the setup proposed by Kim et al. [1], against the best-performing models using the same setup. Due to the inherent difference between contrastive and generative models, there is no perfectly fair comparison. However, for a purely numerical comparison, contrastive learning methods surpass all text-generation methods (Pengi or others) on the retrieval task. We explain this in Section 6, paragraph 1. We will include these numbers in Table 6 to provide a reference for the readers. [1] Kim, Minkyu, Kim Sung-Bin, and Tae-Hyun Oh. "Prefix tuning for automated audio captioning." ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. **Question 3. The use of text encoder: line 112 said it's fundamental but there seems to be no ablation on this. Having LLMs taking text as input seems to be more intuitive. Any idea why?**\ We appreciate the reviewer's question. We conducted an experiment and discuss the findings and results in global response question 1. **Question 4. For line 270: "... by 32%", isn't this 22-ish%?**\ We are sorry for any misunderstanding. We report relative percentage improvements, not absolute percentage improvements. Since AudioCLIP and Pengi score 0.694 and 0.92 respectively on ESC50, we calculate the relative improvement as (0.92 - 0.694)/0.694 = 32.5%. **Question 5. Generalizability (data): Table 8 shows that in the cases (namely, ESC50, US8K, and DCASE17) where Pengi has not seen the dataset, results are not as competitive. 
It would be good if more details on the evaluation pipeline can be provided and interesting to see some error analysis.**\ We evaluated the generalizability (data) of Pengi by testing it on 22 downstream tasks (Table 3) and comparing its performance with different SoTA models in the literature (Tables 4-9). Pengi's zero-shot performance achieves SoTA on several but not all downstream tasks. Table 8, a subset of Table 3, evaluates zero-shot sound event classification on 4 tasks and includes the best models in the literature. None of the models, including Pengi, have seen the datasets during training. We believe Pengi's results are competitive, at 92% vs a SoTA of 91% for ESC50, and 72% vs a SoTA of 77%. To better understand the errors made by Pengi, we conducted an error analysis. We identified three types of errors that cause a decline in Pengi's performance: audio concept errors, hierarchy errors, and text-matching errors, described in Appendix Section F. **Question 6. Generalizability (format): All the tasks this paper considered are covered by the templates, hence the input is not in free form. One of the key strengths of instruction-tuned LLMs is the ability to take any input and answer accordingly. Are there examples where the question (text input) is never seen during training?**\ We apologize for any confusion, and we will clarify in the manuscript that: - Pengi can handle different ways of phrasing the same prompt, thanks to its text encoder. For instance, it can recognize that “this is a sound of” and “detect sound events” are asking for the same kind of output (e.g., “dog barking”) even if the template only has “this is a sound of”. If the prompt is completely different from the ones it was trained on, Pengi defaults to generating metadata as a general response. - We agree with the reviewer that it is an interesting challenge to make LLMs respond to any input appropriately. 
We have explored this direction in Appendix Section A, where we show that the user can use a default text prompt like “generate metadata” and then provide more information or ask follow-up questions. This enables the user to steer the conversation with additional unseen prompts (Fig. 6), such as “the background is”, “mention forest.”, etc. We acknowledge that this is not Pengi's strength. The literature suggests that scaling up the training data could mitigate this issue. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for answering the questions. Overall, I am satisfied with the extra detail provided that complements this work. I will increase my rating to borderline accept.
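As a quick check of the relative-vs-absolute improvement arithmetic in the reply to Question 4 above (the ESC50 scores are the ones cited in the rebuttal; the snippet itself is only illustrative):

```python
# Relative vs. absolute improvement on ESC50, using the scores cited in the rebuttal.
audioclip_acc = 0.694
pengi_acc = 0.92

absolute = pengi_acc - audioclip_acc   # ~0.226, the "22-ish%" the reviewer computed
relative = absolute / audioclip_acc    # ~0.326, the reported ~32.5% relative gain

print(f"absolute: {absolute:.1%}  relative: {relative:.1%}")
```

Both numbers are correct; they simply answer different questions (percentage-point gain vs gain relative to the baseline score).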
Summary: This work is inspired by the visual language models (VLMs) in the literature and presents an audio language model (ALM) for various audio tasks, including both open-ended and close-ended tasks. It also presents a comprehensive evaluation of the proposed ALM on a range of both open-ended and close-ended tasks, and shows very promising results. Strengths: This paper presents an innovative audio language model which leverages a pretrained audio encoder and a pretrained language model. Though the proposed network architecture is largely borrowed from vision language models like [47], it is still relatively new to the audio domain. This paper is well-written and presents comprehensive evaluations on 22 downstream tasks. Weaknesses: I feel this work is somewhat limited in that it only uses a relatively small language model (GPT-2, 124M params) and there is no study of how the language model can affect the performance on the downstream tasks. It is possible that with a stronger language model, the performance of the proposed model on open-ended tasks could be further improved, while the performance on close-ended tasks may depend more on the audio encoder quality. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - I feel it is a little bit counter-intuitive that in Figure 2, an `m2` mapping network is used after the text encoder, as the text encoder in Figure 2 has already encoded the text prompt into the same space as the text in the response part. What's the purpose of `m2` here, and do you have experimental results to show that using `m2` is helpful? - It is also unclear to me how the network handles variable-length text prompts. It looks to me like both the audio prompt and the text prompt in this work have a fixed length, i.e., 40 tokens for each part. However, it is unclear from the paper how the authors would handle variable-length text prompts. 
It is also unclear what the effect of this fixed length (40 in the experiments) is on audio task performance. - The authors have proposed 2 methods for evaluating the proposed model's performance on close-ended tasks: log-likelihood and text matching. However, all the following experiments use text matching, and there is no comparison of the log-likelihood-based method vs the text-matching method. Do they give relatively close results? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have addressed the limitations of the proposed methods adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for recognizing our contribution and novelty! We hope that our response resolves your concerns. Any follow-up questions are welcome. *** **Question 1. I feel this work is somewhat limited in that it only uses a relatively small language model (GPT-2, 124M params) and there is no study of how the language model can affect the performance on the downstream tasks. It is possible that with a stronger language model, the performance of the proposed model on open-ended tasks could be further improved, while the performance on close-ended tasks may depend more on the audio encoder quality.**\ We appreciate the reviewer's comment. Our default language model is GPT-2 base (124M), but we also tested GPT2-XL (1.5B). A larger LM improved performance on the open-ended task of Audio Q&A but had mixed results on Audio Captioning -- one dataset improved while the other worsened. For close-ended tasks, there was no significant change. We attach the AQA results below. We agree the audio encoder does impact task performance, and it presents an exciting direction for future work.

| LLM       | Parameters | Audio Q&A |
|-----------|------------|-----------|
| GPT2-base | 124M       | 0.645     |
| GPT2-XL   | 1.5B       | 0.701     |

**Question 2. I feel it is a little bit counter-intuitive that in Figure 2, an m2 mapping network is used after the text encoder, as the text encoder in Figure 2 has already encoded the text prompt into the same space as the text in the response part. What's the purpose of m2 here and do you have experimental results to show using m2 is helpful?**\ We appreciate the reviewer's question. We conducted an experiment and discuss the findings and results in global response question 1. **Question 3. It is also unclear to me how the network handles variable-length text prompts. It looks to me like both the audio prompt and the text prompt in this work have a fixed length, i.e., 40 tokens for each part. 
However, it is unclear from the paper how the authors would handle variable-length text prompts. It is also unclear what the effect of this fixed length (40 in the experiments) is on audio task performance.**\ The text encoder can handle input text of variable length. It produces a sentence-level embedding for the input text. The mapping network $m_2$ then converts the sentence-level embedding into a fixed-length sequence of embeddings (the prefix). The prefix length is a hyperparameter that we set to 40. Likewise, the model can deal with audio input of variable duration, and it maps the audio representation to a prefix of length 40. We empirically found that a prefix size of 40 achieved the best performance compared to a prefix size of 20 or 80. **Question 4. The authors have proposed 2 methods for evaluating the proposed model's performance on close-ended tasks: log-likelihood and text matching. However, all the following experiments use text matching and there is no comparison of the log-likelihood-based method vs text-matching methods. Do they give relatively close results?**\ We chose the text-matching method for the experiments because it is computationally less expensive. The log-likelihood method requires more computation and has lower performance on most datasets than the text-matching method. However, the log-likelihood method has an advantage on out-of-domain or rare words that the text encoder cannot recognize. For instance, when identifying specific bird species by their song, the log-likelihood method outperforms the text-matching method. We concur that a large-scale study is necessary to determine which evaluation method is superior. --- Rebuttal Comment 1.1: Comment: Thanks very much for the detailed explanation!
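To make the fixed-length prefix idea above concrete, here is a minimal sketch: one sentence-level embedding (produced from text of any length) is expanded by a mapping network into a fixed number of LM-space embeddings. The dimensions and the single linear map per prefix position are illustrative assumptions, not Pengi's actual $m_2$ implementation:

```python
import random

EMB_DIM = 8       # sentence-embedding size (illustrative; real encoders use e.g. 512)
LM_DIM = 4        # language-model embedding size (illustrative)
PREFIX_LEN = 40   # fixed prefix length, the hyperparameter discussed above

random.seed(0)
# Toy "mapping network": one linear map per prefix position
# (a stand-in for a learned MLP or transformer mapper).
weights = [[[random.gauss(0.0, 0.1) for _ in range(EMB_DIM)]
            for _ in range(LM_DIM)]
           for _ in range(PREFIX_LEN)]

def map_to_prefix(sentence_emb):
    """Expand one sentence-level embedding into PREFIX_LEN LM-space embeddings."""
    assert len(sentence_emb) == EMB_DIM
    return [[sum(row[j] * sentence_emb[j] for j in range(EMB_DIM)) for row in pos]
            for pos in weights]

# The text encoder pools any-length input text into one EMB_DIM vector first,
# so the prefix is always 40 embeddings regardless of the prompt's length.
sentence_emb = [random.gauss(0.0, 1.0) for _ in range(EMB_DIM)]
prefix = map_to_prefix(sentence_emb)
print(len(prefix), len(prefix[0]))  # 40 4
```

The same pattern applies on the audio side: a pooled audio representation is mapped to a prefix of the same fixed length before being concatenated in front of the LM input.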
Summary: This paper proposes a new audio-language learning model by treating existing audio tasks as text-generation tasks. The model architecture allows both open-ended and close-ended tasks. By evaluating on 22 downstream tasks, this paper shows competitive performance on many of them. Strengths: The idea of treating various forms of audio tasks as a text-generation task is reasonable, which allows scaling the size of the data for large models. The evaluation is pretty extensive: the authors compared the proposed model on multiple benchmarks against multiple models. The paper is well-written and easy to follow. Weaknesses: It seems that the strongest baseline is LAION CLAP [53], yet the authors did not compare with it in Table 3 or 9. In Table 8, Pengi did not outperform LAION, which seems to indicate that this model does not outperform the SOTA approach. My other concern is that the model's performance is largely based on the LLM and thus limited by its weaknesses. This has been discussed in the limitation section as well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: My main concern about this work is its performance against SOTA. I'd appreciate the authors' clarification on that. ==== post rebuttal === I'd like to thank the authors for their responses. However, the performance improvement against LAION is not really convincing and comparisons are lacking, and thus I'm keeping the original score. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: It has been discussed in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. 
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our contribution and providing constructive feedback! We address every question and hope that our response resolves your concerns. Any follow-up questions are welcome. *** We present a novel and unified model that can handle both open-ended and close-ended audio tasks without relying on external modules or fine-tuning. Our key contributions are: (1) Pengi, the first Audio Language Model (ALM) in the literature, (2) audio task templates inspired by Instruction Tuning, and (3) a method to perform close-ended tasks with an ALM. We evaluate Pengi on 22 downstream tasks and show that it achieves state-of-the-art results on most of them. Therefore, our main goal is not to beat SoTA all around with a single model, but to demonstrate the versatility and strong performance of our method. **Question 1. It seems that the strongest baseline is LAION CLAP [53], while the authors did not compare with it in Table 3 or 9. In Table 8, Pengi did not outperform LAION, which seems to indicate that this model does not outperform the SOTA approach.**\ We chose CLAP because it is the only zero-shot model with a comprehensive evaluation (16 downstream tasks). It is not conclusive whether LAION would outperform Pengi on every task. First, the LAION model was only tested on 4 datasets related to sound event classification, providing no evidence of performance on multilabel classification (FSD50K) and across other domains like speech and music, which tend to be the most difficult. Second, in Table 8, Pengi underperformed LAION on US8K but outperformed LAION on ESC50. The LAION model cannot perform Audio Captioning or Audio Q&A without additional modules and finetuning. **Question 2. My other concern is that the model's performance is largely based on the LLM and thus limited by its weaknesses. This has been discussed in the limitation section as well.**\ We agree with the reviewer. 
All the limitations inherent to LLMs apply to Pengi as well. We explore this in Section 6. **Question 3. My main concern about this work is its performance against SOTA. I'd appreciate the authors' clarification on that.**\ In addition to our comment in Response 1, we compared against SoTA performance throughout the paper: SoTA zero-shot models in Table 8 (a subset of Table 3) for sound event classification, SoTA supervised models in Tables 5 and 7 for Audio Q&A and Audio Captioning respectively, and SSL and supervised models trained on speech audio in Table 9. To strengthen our baseline from Table 3, we trained a CLAP model (CLAP*) on the same amount of paired data as Pengi (3.4M) to rule out variations due to data. Pengi's strong performance still holds. We attach this as Table 2 in the PDF in the global response.
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for recognizing our contribution and providing constructive feedback, especially for acknowledging that **this paper presents a novel audio language model** (Reviewer vWxh, 4yQP), **performance on audio captioning is solid** (Reviewer sUJx), **one of the first approaches that incorporate LLMs to create a general purpose audio LM** (Reviewer iz2N, sUJx), and **the evaluation is pretty extensive. Authors compared the proposed model on multiple benchmarks against multiple models** (Reviewer o28f, Reviewer vWxh, Reviewer 4yQP). *** We would like to re-emphasize the novelty and technical contributions of this work. We present a novel and unified model that can handle both open-ended and close-ended audio tasks without relying on external modules or fine-tuning. Our key contributions are: (1) Pengi, the first Audio Language Model (ALM) in the literature, (2) audio task templates inspired by Instruction Tuning, and (3) an extensive evaluation on 22 downstream tasks showing that our unified model can achieve competitive and even SoTA performance on several tasks. We summarize the two main questions brought up by the reviewers and address them here: \ **Question 1: Why is an explicit mapping needed for input text? What's the purpose of m2 here and do you have experimental results to show that using m2 is helpful?**\ We need a mapping network $m_2$ to bring the output of the text encoder into the space of the LM. The text input is a sentence that is passed to the text encoder. The encoder outputs a sentence-level embedding that is passed to $m_2$, which outputs a sequence of embeddings that the LM can "understand". We conducted two experiments to evaluate the effect of omitting $m_2$ and/or the text encoder. 
We denote Exp A as Pengi without the text encoder and $m_2$ (input text fed directly to the LM), and Exp B as Pengi without the text encoder but with $m_2$ (input text fed to $m_2$). In Exp A, we found that removing $m_2$ resulted in a loss of coherence between the input text prompt and the output text. For example, an input prompt asking to identify an emotion class, "the emotion is ", resulted in random text output and thus random performance. In Exp B, we removed the text encoder but retained $m_2$, and obtained slightly lower results than our proposed architecture with both components. We attach this as Table 1 in the PDF below. **Question 2: Concern about using CLAP as the baseline. Are the performance gains due to additional data or to Pengi's ALM formulation?**\ We chose CLAP as the baseline because it is the only zero-shot model with a comprehensive evaluation (16 downstream tasks); the next best evaluation covered only 8 tasks, providing no evidence of performance across other domains like speech and music, which tend to be the most difficult. For the remaining tables, we compared against SoTA results even when they came from different models and learning methods: SoTA zero-shot models in Table 8 (a subset of Table 3) for sound event classification, SoTA supervised models in Tables 5 and 7 for Audio Q&A and Audio Captioning respectively, and SSL and supervised models trained on speech audio in Table 9. To strengthen our baseline from Table 3 and answer whether the performance gains are due to additional data or to Pengi's ALM formulation, we performed a new experiment: we trained a CLAP model (CLAP*) on the same 3.4M audio-text pairs as Pengi and observed that Pengi's strong performance still holds. We attach this as Table 2 in the PDF. Pdf: /pdf/10e9d607eb1ccde5f26777e3554aa4c7fb5b4d9b.pdf
Dataset source: NeurIPS_2023_submissions_huggingface (2023)
Summary: The authors present an approach to combine multiple non-ASR audio tasks into a single model. Motivated by audio language models and VLMs, the authors propose using a frozen LLM to create an audio LM that can be used for open-ended (purely generative) and close-ended (classification) tasks. The main idea is to use an audio encoder and a text encoder to create fixed-length prompts that can be used with a frozen LM after training. The authors train various tasks by using task-specific text prompts (“generate audio caption”, “this is the sound of”, etc.). In this sense, the model is similar to a multitask learning model (more on this below). Results show that the model is competitive compared to other approaches in the literature that are task-specific, and to an alternative multi-task model, CLAP. Strengths: - A single model that can handle multiple tasks, both open-ended and close-ended. - One of the first approaches that incorporates VLMs / LLMs to create a general-purpose audio LM. - Results show gains from using an LLM to build the model, compared to a baseline that doesn't. Weaknesses: - While the model can cover a number of tasks with competitive results, it still performs worse on some of the considered tasks than task-specific models. - Furthermore, it is unclear if the method generalizes to new tasks, which is a main strength of non-audio LLMs / VLMs. The authors show zero-shot capabilities for new labels within an existing task. Although impressive, this is not entirely novel given that prior works have approached the problem along similar veins (like CLAP, e.g.), but in limited settings. The results in Tab. 9 also show poor performance in zero-shot settings for a new task. - The results in Tab. 8 also show that the model is not the best among the compared models on the zero-shot classification task. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) The authors tokenize and map text prompts. Why is an explicit mapping needed? 
Can’t they re-use the same tokenizer as the LLM? 2) Why does the audio and text prompt prefix have to be of fixed length? Does the length of the input audio affect quality? 3) Line 180 and Fig. 3: Text matching is not appropriately described and is a little hard to follow. Consider providing a more detailed description. Are the embeddings created for each token or for the entire sentence? If for the sentence, how are they created (summarized)? 4) The authors claim SoTA in a few different close-ended tasks. Are the gains coming from the new formulation (audio LM) or the extra datasets that the authors are now using? For example, what if the authors train a single model with multiple classification heads for the N tasks (close-ended, at least)? Does it work worse than the current model that uses the frozen LLM and the proposed architecture? 5) Line 181: When computing log-likelihoods, do the authors normalize based on the number of tokens in the desired values (dog barking vs. sea, e.g.)? Does the number of tokens affect the overall score? 6) Sec. 5.5: Does the presented result show that perhaps a good portion of the performance comes from the training data? For instance, wav2vec2 works better on emotion recognition, likely because the training data is relevant. Are the training sets the same for the remaining non-speech models in Tab. 9? Minor 1) The authors use close-ended and open-ended tasks a lot, but only define them in Sec. 4.2. Consider moving this to an earlier section to avoid confusion. 2) The authors say the text encoder is frozen (Line 111). Where does this encoder come from? 3) Why are the results in Tab. 4 for the best setting worse than those in Tab. 7? 5) Please provide more descriptive captions for Tab. 4 – 6. 6) Line 271: Why does Pengi outperform human performance by such a large margin? It is interesting, so perhaps an explanation is warranted. 7) It would be useful to include the best results from Mei et al in Tab. 8. 
Also, it’d be useful to include some representative contrastive methods in Tab. 6 for audio retrieval. 8) Why does Pengi work better on some zero-shot tasks and not others, compared to techniques like CLAP? Is it related to the training data? Typos: 1) Line 51: For example, -> Examples include 2) Line 70: Missing citation (?) 3) Figure 2 caption: a text a prompt -> a text prompt 4) Line 151: question: question -> question: {question} 5) Line 205: 320 secs / 1024 secs: Perhaps the authors mean milliseconds? 6) Line 270: Missing Table# (Tab. 8?). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1) Does not support ASR, which is, arguably, one of the most important audio tasks. 2) Unclear if the model can learn new tasks. The existing tasks use a fixed prefix prompt, which is tokenized and mapped. This makes it cumbersome to add new tasks to the model. This is a significant drawback compared to current LLMs that can learn new tasks by prompting. 3) How much does the LLM help? What is the quality if the LM component is trained from scratch with the current dataset? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our contribution and providing detailed comments. We hope that our response resolves your concerns and welcome any follow-up questions. *** **Q1. The authors tokenize and map ... re-use the same tokenizer as the LLM**\ We conduct an experiment and discuss the findings and results in global response question 1. **Q2: Why does the audio and text prompt ... of the input audio affect quality?**\ We choose the audio and text prompt prefix to be of length 40. The length choice is a hyperparameter and does not have to be fixed. On the second question, yes, the length of the input audio affects downstream task performance. The model is trained with 7-second audio clips, so it performs best for audio of similar duration. Shorter audio degrades performance, especially for long sound events like a bell gong. **Q3: Line 180 and Fig. 3: Text matching is not appropriately described ... they created (summarized)?**\ We apologize and will revise the description. For the text matching, the text encoder generates a sentence-level embedding for Pengi's textual output. The sentence-level embedding corresponds to the embedding of the [CLS] token from the text encoder. **Q4: The authors claim sota in a few different ... the proposed architecture?**\ We appreciate the reviewer’s question. More objectives and parameters may improve training, but only if the model can converge with multiple losses. Also, adding heads is complex and hard to scale. For example, we have 22 tasks, so we would need 22 heads, with different losses and outputs (labels, regression, descriptions, etc.). Pengi can scale to N tasks, with a single loss and training procedure, and the same output format (free-form text). To check the benefit of our formulation and remove the effect of additional data, we train a new CLAP (CLAP*) on the same 3.4M pairs as Pengi. We attach this as Table 2 in the PDF in the global response. Overall, Pengi's strong performance still holds. 
**Q5: Line 181: When computing log likelihoods, do the authors normalize ... the overall score?**\ Yes, we normalized based on the number of tokens for the log-likelihood method. We found that the number of tokens does affect the overall score; a larger number of tokens may decrease performance. **Q6: Sec. 5.5: Does the presented result show that ... in Tab. 9?**\ The quality of the embeddings depends in large part on the training data. For example, adding more speech or music-related data usually correlates with improvement on the relevant tasks. However, other components in Pengi are equally important, such as the audio encoder, text encoder, and LLM. They provide information, such as acoustic and text semantics and context. For instance, if our model never saw the word "dog" in the training data, but it saw "bark" and "animal", it can still associate and generate "dog barking" as a description. **Minor question 1, 3, 4 and Typos**\ We thank the reviewer and will fix the issues and typos the reviewer pointed out. **MQ2: The authors say the text encoder is frozen ... encoder come from?**\ This can be any off-the-shelf text encoder. We tested with text encoders from CLAP, BERT, T5, and CLIP and found little to no difference in downstream task performance. **MQ5: Line 271: Why does Pengi outperform human performance ... warranted.**\ Humans have limitations inherent to how much information a participant can handle at once. In the case of ESC50, humans listen to the audio once, and have to remember the audio content and task description, and choose among 50 different classes. Moreover, listeners have different degrees of familiarity with prototypical content from different sound classes, whereas Pengi has been exposed to similar content during training. In a sense, Pengi is an expert listener, whereas the humans in the listening experiment were not. **MQ6: It would be useful to include the best results from Mei ... 
audio retrieval.**\ The evaluation method for retrieval is different for contrastive and generative models, which makes a fair comparison difficult. We comment on this as well as the performance difference in Section 6, paragraph 1. As a follow-up, we will update our manuscript to explicitly mention these numbers in Table 6. **MQ7: Why does Pengi work better on some zero-shot tasks ... to the training data?**\ This is due to multiple factors like training data, the contrastive learning formulation, and having a common audio-text multimodal space. **L1: Does not support ASR, which is, arguably, one of the most important audio tasks.**\ Pengi focuses on non-speech audio, but we include some speech-related downstream tasks like speech emotion recognition to evaluate generalization. We didn't include speech audio and its transcripts; therefore, Pengi does not support ASR. To the best of our knowledge, bridging speech, non-speech audio, and music in a single model is still an open problem. **L2: Unclear if the model can learn new tasks ... can learn new tasks by prompting.**\ We have explored this direction in Appendix Section A, where we show that the user can use a fixed text prompt like “generate metadata” and then provide more information or ask follow-up questions. This enables the user to steer the conversation with additional prompts (Fig. 6), such as “the background is”, “mention forest.”, etc. However, we acknowledge that Pengi still falls short of VLMs like Flamingo in this aspect, and we plan to investigate this further in the future. **L3: How much does the LLM help? What is ... scratch with the current dataset?**\ We appreciate the reviewer’s comment. Our default LM is GPT-2 (124M), but we also tested GPT2-XL (1.5B). A larger LM improved performance on the open-ended task of Audio Q&A, showed mixed results on Audio Captioning, and yielded no significant change on close-ended tasks. We attach the AQA results below. 
| LLM | Parameters | Audio Q&A |
|-----------|------------|-----------|
| GPT2-base | 124M | 0.645 |
| GPT2-XL | 1.5B | 0.701 |

--- Rebuttal Comment 1.1: Comment: Thank you for addressing the comments. The results using CLAP trained on the same data are especially interesting. Clarification regarding tokenizer and mapping: Wouldn't using the same tokenizer as the LLM put the tokens in the same space as the LLM? Why does it need additional mapping to "understand" the tokens? --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for carefully reading our paper and rebuttal. For any additional clarifications, we are more than happy to address them. *** **Q1: Wouldn't using the same tokenizer as the LLM put the tokens in the same space as the LLM?**\ Yes, using the same tokenizer as the LLM will put them in the same space as the LLM. However, our text encoder is not the same as the LLM (GPT2), so it's not producing tokens in the same space. **Q2: Why does it need additional mapping to "understand" the tokens?**\ Additional mapping is needed to understand the tokens for the case where we use a text encoder different from the LLM (GPT2). If we remove the text encoder, additional mapping is not needed to understand tokens. Instead, the additional mapper helps the LLM produce different text output for different text prompts. For example, "this is sound of" should produce "dog barking" and "this emotion is" should produce "happy". This is out-of-domain for the frozen LLM and therefore requires a way to adapt the LLM to our data. We choose prefix tuning as our method of adaptation and introduce mapper m2. There are other ways to achieve this functionality, e.g., instead of using mapper m2, one can use LoRA updates or gated cross-attention on LLMs. We choose to keep the LLM completely frozen and instead tune the prefix using mapper m2.
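The token-count normalization discussed in Q5 can be illustrated with a toy sketch. The per-token log-probabilities below are invented numbers, not outputs from the model; the point is only that without length normalization, shorter candidate answers are systematically favored:

```python
import numpy as np

# Hypothetical per-token log-probabilities for two candidate answers
# (invented values, purely for illustration).
logp_sea = np.array([-1.0, -0.8])                 # "sea"         -> 2 tokens
logp_dog = np.array([-0.9, -0.7, -1.1, -0.6])     # "dog barking" -> 4 tokens

def raw_loglik(logp):
    # Unnormalized log-likelihood: longer answers accumulate more penalty
    return float(logp.sum())

def norm_loglik(logp):
    # Length-normalized score: average log-likelihood per token
    return float(logp.sum() / len(logp))
```

Here the raw sums prefer the short answer, while the per-token averages prefer the longer one, matching the rebuttal's remark that the number of tokens affects the overall score.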
NeRF Revisited: Fixing Quadrature Instability in Volume Rendering
Accept (poster)
Summary: This paper describes a modification to the way NeRF performs approximate integration for volume rendering. Instead of assuming piecewise constant opacity, they propose to use piecewise linear opacity. They argue that this resolves what they dub the "quadrature instability" in NeRF. They re-derive the quadrature integration method using piecewise linear opacity and piecewise constant color, and show that this also enables them to derive a precise inverse of the ray termination CDF for importance sampling. Through experiments on standard NeRF datasets they show a quantitative improvement in rendering quality and also show qualitatively that their method makes the rendering sharper and more stable across changes of viewpoint and camera distance. Strengths: This paper addresses an often overlooked aspect of NeRF, which is the specifics of how the volume rendering is implemented and whether it could be improved. They propose what is, to the best of my knowledge, a novel modification: approximating the opacity along the ray as piecewise linear (instead of piecewise constant) in order to obtain a more accurate quadrature estimate of the integral. This formulation also leads them to derive a more precise method of importance sampling. They analyze conceptually how the piecewise constant assumption in NeRF leads to conflicting ray supervision between perpendicular and grazing-angle rays. They claim that their piecewise linear formulation reduces this problem and leads to a more peaked opacity PDF. They provide extensive derivations for the formulas in the supplemental material. The new formulations are a "drop-in" replacement for the equations used in vanilla NeRF, and thus they can easily modify existing NeRF implementations to use them. In their evaluation they compare against vanilla NeRF on the standard synthetic and real forward-facing datasets and show a quantitative and qualitative improvement in rendering quality across the board. 
They also show that their method improves SCADE, a recent method that incorporates depth estimates into NeRF training, and show that swapping in their formulas leads to a modest improvement in quality. They hypothesize that this is because of their more precise importance sampling. Weaknesses: They seem to have missed a highly relevant previous work: Wu, L., Lee, J. Y., Bhattad, A., Wang, Y. X., & Forsyth, D. (2022). DIVeR: Real-time and accurate neural radiance fields with deterministic integration for volume rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 16200-16209). This method is based on a voxel grid approach rather than an MLP. However, the DIVeR paper also discusses the inaccuracy of integration using piecewise constant opacity. Their solution allows for integration over more complicated functions than piecewise linear ones and so might be more accurate than the piecewise linear approach proposed here. Indeed, DIVeR reports higher average rendering quality metrics on the NeRF synthetic dataset than what is reported here. Because they claim that their method improves sampling and makes the opacity function more peaked at the surface, I would be very curious to see a comparison of surface reconstruction quality against NeuS: Wang, P., Liu, L., Liu, Y., Theobalt, C., Komura, T., & Wang, W. (2021). NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction. Advances in Neural Information Processing Systems, 34, 27171-27183. The explanation of inverse transform sampling in the original NeRF method (Section 3.2) should be expanded, as this is not explained thoroughly in the original NeRF paper. Their discussion (L175-178) is quite brief considering how critical it is to the paper. The paper has many grammatical errors and incomplete sentences, which make it difficult to read. 
I included some examples here: * L135 "Hence the continuous probability density function (PDF) the ray r(s)" * L138 "s is a point on the ray r" -> it doesn't make sense that s would be a point (since it is a scalar) * L152-153 "P_j is the probability of each interval, which is mathematically equivalent to the probability of the interval." This sentence sounds like a meaningless tautology. * L154 missing a period * L117 "Then taking x = g^-1(u)." This is an incomplete sentence. * L117-118 "However, this does not necessarily result in the samples from the actual ray distribution p(s) from the model." This sentence doesn't work grammatically. Some more analysis of why this doesn't result in sampling from the distribution would be helpful (explanations and visualizations). * L203 "such an example of a sample-based loss used for NeRFs is depth" * L232-3 "we pointed the drawback" * L238 "thus the resulting CDF F being continuous" * Table 1 caption: needs a period * L273 capitalize Lego Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I think the most critical issue is the comparison with DIVeR. How does this method compare to DIVeR, which appears to support an even more accurate computation of the integral? I would also be interested to know whether they expect that this method will improve the surface reconstruction capability of NeRF, and how it compares to NeuS in this regard. Finally I would like to know if they have a reference regarding the original NeRF's method of inverse transform sampling the CDF -- is this just based on the NeRF code or is there a reference for that? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: A section on limitations does not appear in the paper. 
A frank discussion of the limitations of the approach would strengthen the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for finding our formulation to be a novel modification. We address each of the concerns below. ## Q1: DIVeR citation Thank you for bringing up the paper. We will cite DIVeR in our revision. ## Q2 Comparison with DIVeR We plug our method into DIVeR by using their voxel-based representation and feature integration and dropping in our piecewise linear opacity formulation for volume rendering (PL-DIVeR). Results are shown in Table 2, demonstrating that our approach is on par with, if not better than, the baseline across the different scenes in the Blender dataset. We use their official implementation and configuration for DIVeR64 at 128 voxels, trained on a single NVIDIA V100 GPU for each scene. We include this additional comparison in our revised version. We highlight that this shows the improvement of using our piecewise linear opacity formulation, which is a drop-in replacement for existing methods. Note that DIVeR is not directly comparable to PL-NeRF, but is instead compared against PL-DIVeR. ## Q3: Proof that DIVeR’s solution does not quite integrate over more complicated functions We note that technically it is not quite accurate to claim that “DIVeR’s solution allows for integration over more complicated functions”. We show that there is a tradeoff between the plausibility of the learned radiance field and their $MLP_w$ being an affine transformation and color being piecewise constant. Here we define a plausible radiance field as one where there is a unique opacity defined at each 3D point. We prove mathematically below that the volume rendering integral of DIVeR does not integrate over more complicated functions if the underlying opacity field is plausible. ### Claim 1: DIVeR’s volume rendering equation holds if and only if color is piecewise constant. From Eq. 12 in DIVeR supp. A.1, Hölder's inequality becomes an equality iff color is constant along the ray in each voxel. The reverse direction is proven by substitution. 
The forward direction is shown by proving the contrapositive: if the radiance field is not constant in an open interval of the line segment, the integral of radiance over that open interval will be strictly smaller than the $L_\infty$ norm of the radiance field over the same domain. ### Claim 2: A plausible density field, i.e. a unique opacity $\sigma(x) \forall x \in \mathbb{R}^3$, only exists if DIVeR’s $MLP_w$ is an affine transformation. We show this by proving that the Hessian matrix of $MLP_w$ has rank zero. Let $S_1, … S_6$ be the six sides of a voxel and $S$ be the union of the sides. For $y \in S_i$, we define a function $$x \mapsto \int_0^{\lVert x-y\rVert}\sigma(r_{x, y}(t))dt=MLP_w(\int_{t^{in}}^{t^{out}}\hat{f}(r_{x, y}(t))dt) = MLP_w(\int_0^{\lVert x-y\rVert}\hat{f}(r_{x, y}(t))dt)$$ for $x \in S - S_i$, where the equality follows from Eq. 5 in DIVeR main. By taking the gradient at $x \in S - S_i$ and rearranging the terms, we derive that $$\sigma(x)=x\cdot[\nabla MLP_w(\int_0^{\lVert x-y\rVert}\hat{f}(r_{x,y}(t))dt)]C(x).$$ Notice that $C(x)$ only depends on $x$. Since the RHS depends on $y \in S_i$ while the LHS does not, we can take the gradient of both sides w.r.t. $y$ and conclude that the Hessian of $f \mapsto MLP_w(f)$ is rank zero on a convex open set. It then follows that DIVeR’s $MLP_w$ is an affine transformation. Hence, there is a tradeoff between the plausibility of the learned radiance field and $MLP_w$ being an affine transformation, and under this scenario, it turns out to simply be a trilinear interpolation of opacity. ## Q4 Our opacity function being more peaked We quantitatively evaluate our precise importance sampling by taking our models trained with depth supervision. Specifically, we compute the average L2 distance between the ground truth depth and 64 random samples drawn from the fine networks using our precise importance sampling for PL-NeRF and the original inverse transform sampling for Vanilla NeRF. 
PL-NeRF and Vanilla NeRF attain an average error of 0.019 and 0.033, respectively, demonstrating that our approach with precise importance sampling draws samples closer to the ground truth surface. ## Q5 Improvement in geometry for NeRF We show quantitative improvement in our extracted geometry in Table 3. We extract the mesh from the learned opacity fields using marching cubes with a threshold of 25 and compute the distance between the ground truth model and the output mesh. Figure 2 also shows qualitative examples. We note that the SDF methods are orthogonal to our contribution, as their goal is to learn an SDF field and convert that into density to enable supervision through volume rendering. How to render density is orthogonal to how density is produced; our focus is the former, while the SDF works explore the latter. In principle, it is possible to use our method as a replacement for the volume rendering integration that these methods use. Exploring this could be an interesting direction for future work. ## Q6 Discussion on original NeRF’s inverse transform sampling Yes, it is based on the official codebase from the original NeRF, on which its successors are based. Under the piecewise constant opacity assumption, the PDF of the ray termination distribution is not continuous, making its corresponding CDF also not continuous and thus non-invertible. Hence, to be able to sample with inverse transform sampling, a continuous surrogate CDF function (which we denoted as $G$) is needed, which in the official codebase is a linear interpolation. The code snippet is included in Figure S5 in our supplementary. Because a surrogate function is used, i.e. $G \neq \tilde{F}$, the cumulative distribution described by $G$ is not equal to the cumulative distribution described by $\tilde{F}$, and hence the resulting samples from inverse transform sampling using $G$ are not equal to samples drawn from the distribution, i.e. the PDF $\tilde{f}$. 
We will elaborate on Sec 3.2 of the main paper in our revision. --- Rebuttal Comment 1.1: Comment: Additionally, for limitations: A brief discussion of the limitations is found in our supplementary. Another additional limitation is that we still assume piecewise constant color, i.e., within a bin we do not handle color integration. Modeling more sophisticated color integration can potentially handle other difficult scenarios such as double-walled colored glass or atmospheric effects such as fog or smoke. Thank you as well for pointing out the grammatical errors, and we will correct them in our revision. --- Rebuttal Comment 1.2: Comment: Thank you for your responses. I think the experimental results in the PDF are enough to show empirically that your method of volume integration is preferable to DIVeR. I am less sure about the proof you introduce here. Fig. 5 of the DIVeR paper shows how DIVeR's MLP integration method can estimate the area under the curve of each segment more accurately than a constant or a piecewise-linear approximation. Are you arguing in your proof that this does not actually happen when DIVeR is applied in practice? What if the "plausible density field" requirement is not met exactly but only approximately? --- Reply to Comment 1.2.1: Comment: Thank you very much for your response, especially for finding our experimental results convincing that our approach is preferable over DIVeR. The proof that we included in the rebuttal is to show that, interestingly, DIVeR's integration method cannot actually integrate over more complicated functions; that is, their $MLP_w$ cannot approximate the integral of anything other than a trilinear interpolation. It seems that Figure 5 in the DIVeR paper is a schematic diagram accompanying their Sec. 4.2 that is used for illustrative purposes, rather than a demonstration of an actual experimental result. 
Yes, our proof does show that the right panel of Figure 5 cannot actually happen if there is a unique opacity defined at each 3D point. It turns out that (as shown in the proof) the only case when the output of their $MLP_w$ can be exact is when the integral is of a trilinear interpolation; this is a consequence of the proof that $MLP_w$ is an affine transformation. So if the function is any more complicated than that, their $MLP_w$ must either have large approximation errors, or their density field must become inconsistent, which would result in incorrect renderings from some other view. Yes, the "plausible density field" condition may not be exactly met, but the cost of that would be incorrect renderings from some other view, which is not desirable for novel view synthesis. We are happy to answer and clarify any further questions and concerns you may have.
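The quadrature point at the heart of this exchange can be checked numerically with a toy density. Assuming a hypothetical opacity profile σ(s) = s² along a ray (invented for illustration, not from either paper), per-interval trapezoid integration of optical depth (the piecewise linear assumption) tracks a dense reference transmittance more closely than the left-endpoint rule implied by piecewise constant opacity:

```python
import numpy as np

# Toy density along a ray: sigma(s) = s^2 on [0, 1] (hypothetical)
def sigma(s):
    return s ** 2

# Coarse quadrature samples (8 intervals)
s = np.linspace(0.0, 1.0, 9)
sig = sigma(s)
delta = np.diff(s)

# Piecewise constant opacity (NeRF-style quadrature): left-endpoint rule
int_const = np.sum(sig[:-1] * delta)
# Piecewise linear opacity: trapezoid rule per interval
int_lin = np.sum(0.5 * (sig[:-1] + sig[1:]) * delta)

# Dense reference for the optical depth
s_ref = np.linspace(0.0, 1.0, 100001)
f_ref = sigma(s_ref)
int_ref = np.sum(0.5 * (f_ref[:-1] + f_ref[1:]) * np.diff(s_ref))

# Transmittance at the far end of the ray under each approximation
T_const, T_lin, T_ref = np.exp(-int_const), np.exp(-int_lin), np.exp(-int_ref)
```

The trapezoid rule's O(δ²) error versus the left-endpoint rule's O(δ) error is what drives the gap here; this is only a sketch of the quadrature idea, not the papers' full rendering pipeline.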
Summary: Neural rendering methods like NeRF rely on the integration of contributions (both color and density) along rays to predict views. Since analytical computation of the integral is not possible (the color and the density both being predictions from neural networks, usually MLPs), an approximation is estimated. Traditionally, the hypothesis made is that the color and the density are locally constant, thus leading to a rectangle-rule approximation. In this work, the authors propose to replace the constant approximation of the density by a linear approximation. This allows for a more accurate estimation and a closed-form formula for sampling points during training. Strengths: The proposed modification is simple and elegant; it is mathematically driven, leading to a closed-form formula. Moreover, it is plug-and-play with most (all?) popular nerf-based methods and has a direct impact on the quality of the density estimation. It offers an important solution to the fuzziness problem of the density learned by nerf methods, thus allowing for better surface reconstruction, and especially smaller details. The paper is clear and well written. It introduces well all the different concepts necessary to understand the contribution. Experimental results are convincing, with a non-negligible improvement of 0.5dB in PSNR but also across other metrics (SSIM and LPIPS) on multiple classic datasets (one synthetic and two real). Weaknesses: While the paper is very clear and the ideas elegant, I find the experimental section lacking: * The proposed modification should impact mostly the density and thus the geometry of the reconstruction. The experimental section only focuses on appearance. While this can be sufficient, I find it disappointing that the authors did not take the opportunity to show the potential improvement of the geometry for the different datasets shown in the paper. * The choice of NeRF as a baseline is surprising. 
Given the type of experiments (with large changes of camera-to-scene distances), I think that mip-nerf would have made for a much better baseline. It already fixes a large number of artifacts pointed out by the authors in Figure S1 and therefore would have been a more appropriate comparison. I nonetheless expect that the conclusions can be transcribed to more recent frameworks. * I find it surprising that there is barely any discussion about the computational complexity of the proposed sampling (nor time measurements). I saw later that it is mentioned in a "Limitations" section in the supplementary, but for me it's not a limitation per se and it should be studied in an ablation inside the paper. Especially since S1.1 shows that a better sampling can provide interesting convergence properties, thus allowing one to change the number of samples used during the training. I would have expected a comparison with fewer samples than the 64 + 128 baseline (64 + 64 seems like an obvious thing to try based on the comments inside the text; I'm surprised that the authors didn't even try that), to see the impact and maybe find a better compromise between computation time and performance. Looking at the convergence plots would have also been interesting. Indeed, the proposed method might offer a faster convergence compared to the previous sampling, thus having a double impact. * While slightly outside of the scope of this paper, it still would have been interesting to show the impact on the more practical methods (instant-ngp or voxel based methods for example) that are often preferred because of their fast training speed. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: RFF are more often referred to as LLFF. I suggest renaming to avoid any confusion (even though LLFF technically refers to the method proposed in the same paper) since a reader would expect that name. Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Given the nature of the contributions, there are no major limitations nor potential negative societal impacts to discuss. Some limitations are mentioned in the supplementary material, but they are more related to the analysis of the method, as I mentioned in "Weaknesses". Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for finding our work simple, very clear and well-written with elegant ideas that are mathematically driven. We appreciated that you highlighted that our approach is a plug-and-play addition to popular nerf-based methods and that our experimental results are convincing, with non-negligible improvement across metrics on multiple classic datasets. We address your questions below. ## Q1 Geometry Extraction Thank you for your suggestion. We extract the geometry from the learned density field from the trained models of PL-NeRF and Vanilla NeRF using marching cubes with a threshold of 25. Table 3 reports the distance between the surface of the ground truth model and the predicted meshes by sampling point clouds via ray casting. We see that our piecewise linear approach achieves a lower error compared to Vanilla NeRF on almost all the scenes in the Blender dataset. Figure 2-a shows qualitative results on the reconstruction of our piecewise linear vs the original piecewise constant formulation. As shown, we are able to better recover the holes on the body and wheels of the Lego scene as well as the interior structure inside the Mic. Moreover, interestingly, the numbers on the Drums scene are due to the surface of the drum being visually transparent, as shown in the figure. We will include these results in our revision. ## Q2 Mip-NeRF baseline We plug in our piecewise linear opacity for volume rendering into Mip-NeRF (PL-MipNeRF), and results are shown in Table 1. We demonstrate consistent improvement across all scenes on the Blender dataset using the standard train and test splits. We use the officially released hyperparameters for MipNeRF and rerun their models using NeRFStudio. Our PL-MipNeRF uses the two-MLP training scheme with a coarse loss weight of 1.0, keeping all other hyperparameters fixed. Our results show that our modification is a drop-in replacement and can be transcribed to other more recent frameworks. 
Figure 1 shows qualitative examples where we see that under difficult scenarios, such as when ray conflicts arise in the fine details of the Chair and in the presence of grazing angle views in the Mic, our PL-MipNeRF shows significant improvement over the baseline. We will include these results in our revised version. ## Q3 Computational complexity under different numbers of samples We measured the total rendering time for a single 800x800 image under different numbers of samples for our PL-NeRF. The total rendering times for (64+64), (64+128) and (128+64) are 19.20, 25.85 and 32.35 seconds, respectively. We will report these timings in our revision. ## Q4 Comparison with fewer samples Thank you for your suggestion. Running both our PL-NeRF and Vanilla NeRF with 64 coarse and 64 fine samples results in an average of (30.09, 0.939, 0.056) and (29.86, 0.937, 0.059) for (PSNR, SSIM, LPIPS), respectively, on the Blender dataset. This shows that with fewer samples our piecewise linear opacity formulation is better than the original piecewise constant opacity assumption. We will include this in our revision. ## Q5 Convergence plots under different numbers of samples Figure 2-b shows the convergence plots under different numbers of samples for our PL-NeRF vs Vanilla NeRF. We see that under different numbers of samples, our linear approach converges to a higher training PSNR. ## Q6 Additional results on a practical voxel-based method We also demonstrate that our approach can be integrated into a recent voxel-based method, DIVeR. We take its voxel-grid representation and feature integration and plug in our piecewise linear opacity rendering formulation (PL-DIVeR). Table 2 shows quantitative results on the Blender dataset, showing that even in voxel-based representations, our approach is on par with, if not better than, the piecewise constant opacity baseline. We will include these additional results in our revised version. 
## Q7 Renaming RFF to LLFF Thank you for clarifying this and we will rename this in our revision to avoid confusion. --- Rebuttal Comment 1.1: Comment: Thank you again for your time and effort in reviewing our paper. As the discussion phase is nearing an end, please let us know if you have any further questions, and we will be more than happy to answer them. --- Rebuttal Comment 1.2: Comment: Thanks for the additional information. The new results (with more baselines, different numbers of samples) reinforce my opinion. The proposal is not only interesting from a theoretical point of view but also has a clear practical impact (by only replacing the sampling part, something that can be done easily for most NeRF frameworks).
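The "sampling part" praised above can be sketched in a few lines: NeRF-style inverse transform sampling pushes uniform draws through a linearly interpolated (surrogate) CDF built from coarse-pass weights. The weights and bin edges below are invented for illustration, not taken from any of the papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coarse-pass weights and bin edges along a ray (invented)
w = np.array([0.05, 0.10, 0.50, 0.30, 0.05])
bins = np.linspace(2.0, 6.0, 6)          # 5 bins between near=2 and far=6

pdf = w / w.sum()
cdf = np.concatenate([[0.0], np.cumsum(pdf)])

# Surrogate G: linearly interpolate the discrete CDF, then invert it
u = rng.uniform(size=10_000)
samples = np.interp(u, cdf, bins)

# Fraction of samples landing in the heaviest bin (weight 0.5)
frac = np.mean((samples >= bins[2]) & (samples < bins[3]))
```

This reproduces the surrogate-CDF scheme the first rebuttal describes for the original NeRF codebase; the paper's contribution is replacing this surrogate with an exactly invertible CDF derived from piecewise linear opacity, which is not shown here.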
Summary: This paper presents a fix for the "quadrature instability" arising from the sampling of points for numerical integration used by NeRF (2020) and its successors. The sampling inconsistency may not be big in classical rendering techniques, but can be significant when using neural architectures. Opacity and color values are assumed to be piecewise constant by NeRF. This paper offers a tractable solution with piecewise linear opacity values and piecewise constant color values. To avoid dense sampling, NeRF and its follow-ups use an importance sampling approach based on a coarse-to-fine strategy. For drawing these importance samples, NeRF utilizes a surrogate CDF, as the original CDF is not invertible. The impact is shown on several standard scenes in difficult situations such as regions close to surfaces, fine-feature areas, etc. Strengths: The proposed method is effective and efficient, drawing on existing volume-rendering literature. The modification suggested is rather simple but effective. The modified method should improve the quality of all Radiance Fields methods and is hence of interest to researchers working in those areas. Weaknesses: The paper proposes a small, clean formulation of ray integration that improves the quality of NeRF reconstruction. The limited nature of the idea is its weakness, if it can be called so. The improvements in quantitative measures like PSNR are marginal, but the qualitative improvements are significant in particularly difficult situations. The usefulness and utility of the formulation could be established more clearly by incorporating it into other NeRF variants whose code is available. I would wholeheartedly support this idea in a vision conference like ICCV/CVPR. It is up to NeurIPS to decide if such a small-but-effective idea is of sufficient interest to its audience. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Here are some questions/points: - Can this method be integrated into all the other radiance field methods that essentially use the same ray-tracing formulation? Has it been done? That will increase its appeal. - What is the computational overhead of the expanded formulation compared to vanilla NeRF? The paper mentions 18 hours for 500K iterations. I am interested in a head-to-head comparison with vanilla NeRF to know if there is a significant performance penalty in using this formulation. I would like to know it for both training and rendering for view generation. - Just curious: What is the adverse impact of using piecewise constant color values? What could the advantage be if we use linear color also? Does the other combination of piecewise constant density and piecewise linear color have similar closed-form solutions? What would be the visual impact of that combination? - Are comparisons with later NeRFs such as ZipNeRF/MipNeRF available? It would be interesting to see if the new proposed strategy scales well to those as well. - Depth-supervised experiments seem valid. Does this hold up well against works like RGBDNeRF (Yuan et al. 2022 TPAMI)? Minor points: - Citations appear in ordinary parentheses as "(25)" instead of the more standard "[25]". This was very confusing in the beginning. Is this OK with NeurIPS style? - Line 164: Isn't it more correctly R^3 x S^2 --> R^3? Why discretize it to [0-255]? Are you doing something different here? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: What are the limitations of this approach in the authors' views? None are mentioned in the paper.
Showing the formulation's effectiveness on other radiance field recovery methods (they all use the same rendering equation) will enhance the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for finding our formulation clean and our method simple but effective. We appreciate that you find our work to be of interest to researchers working in the area of radiance fields and for highlighting that our improvement is significant in particularly difficult situations. We answer the questions raised below. ## Q1 Comparison with Mip-NeRF We integrate our piecewise linear opacity formulation of the volume rendering integral into Mip-NeRF (PL-MipNeRF). Table 1 shows our quantitative results demonstrating consistent improvement across all scenes in the original hemisphere Blender dataset. Our experiments are run on the standard train and test split with the officially released hyperparameters of Mip-NeRF using the NerfStudio codebase. For PL-MipNeRF, we use the two-MLP training scheme with a coarse loss weight of 1.0. Figure 1 shows qualitative examples where we see that under difficult scenarios, such as when ray conflicts arise in the fine details of the Chair and in the presence of grazing angle views in the Mic, our PL-MipNeRF shows significant improvement over the baseline. Our results show that our piecewise linear opacity and piecewise constant color formulation scales well to Mip-NeRF. We will include this in our revision. ## Q2 Formulation's effectiveness on other radiance field methods We also demonstrate our formulation's effectiveness on other radiance field methods, such as Mip-NeRF as presented above, and additionally DIVeR, a recent voxel-grid method. We plug our method into their approach by utilizing their voxel-based representation and feature integration and dropping in our piecewise linear opacity rendering formulation (PL-DIVeR). We similarly run the DIVeR models on a single Nvidia V100 GPU trained using their default configurations and hyperparameters for DIVeR64 at 128 voxels.
Table 2 shows the quantitative results on the standard Blender dataset, showing that our formulation is an effective drop-in replacement in other radiance field methods. We will include these results in our revision. ## Q3 Computational overhead compared to Vanilla NeRF The total training time for 500k iterations on a single Nvidia V100 GPU is 17.78 and 21.43 hours for Vanilla NeRF and PL-NeRF, respectively. Figure 2-c shows the head-to-head comparison of training PSNR (y-axis) with respect to time (x-axis) of Vanilla NeRF vs PL-NeRF on the Lego scene. Rendering a single 800x800 image takes 25.59 and 32.35 seconds for Vanilla NeRF and PL-NeRF, respectively. We will include these findings and specify the training and rendering time in our revision. ## Q4 Possible advantage of piecewise linear color; impact of using piecewise constant color An example scenario where piecewise linear color could have an advantage is a scene with a thin glass (or some non-opaque surface) with one color, e.g. red, on one side and another color, e.g. blue, on the other. Piecewise linear color would smoothly blend the colors of the two sides. ## Q5 Would piecewise linear color and piecewise constant opacity have closed-form solutions? Impact of such combinations. Yes, it is closed form. Under piecewise constant opacity and piecewise linear color, we can write the expected color of the bin as $\int_{s_i}^{s_{i+1}} \tau(s)T(s)c(s)ds = T(s_i)\tau_i \int_{s_i}^{s_{i+1}} c(s) \exp{(-\tau_i(s-s_i))}ds$ (refer to Sec 2.3 from Max and Chen 2010). This expression is of the form $\int A(s)\exp{B(s)}ds$, where $A(s)$ and $B(s)$ are both linear functions; hence its integral is in closed form using integration by parts. This assumption can potentially tackle the situation presented above on the double-colored glass; however, it still has a problem for the scenarios we have presented in the paper, such as grazing angles and cameras at different distances.
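The integration-by-parts claim in the Q5 answer can be checked numerically. Below is a sketch under the stated assumptions (constant opacity $\tau_i$ within the bin, linear color $c(s) = a + bs$, with the bin reparametrized to $[0, \Delta]$); the function names are ours:

```python
import math

def linear_color_bin_integral(a, b, tau, delta):
    """Closed form of the bin integral  int_0^delta (a + b*s) * exp(-tau*s) ds,
    obtained via integration by parts (A(s) linear, B(s) linear)."""
    e = math.exp(-tau * delta)
    term_a = a * (1.0 - e) / tau
    term_b = b * (1.0 - e * (1.0 + tau * delta)) / tau ** 2
    return term_a + term_b

def midpoint_rule(a, b, tau, delta, n=100000):
    """Brute-force numerical quadrature of the same integral, for comparison."""
    h = delta / n
    return h * sum((a + b * (k + 0.5) * h) * math.exp(-tau * (k + 0.5) * h)
                   for k in range(n))
```

The closed form and the brute-force quadrature agree to high precision, which is what makes this combination (constant opacity, linear color) tractable without sampling inside the bin.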
## Q6 Marginal improvements in quantitative measures Our results show significant qualitative improvements in particularly difficult situations, as mentioned in your review. Unfortunately, the standard metrics are not designed to be sensitive to those specific scenarios, but nonetheless, we point out that Reviewer WoBC states that our experimental results are convincing with a "non-negligible improvement" across metrics in multiple classic datasets. We also note that, as with any other rendering method, if the underlying simpler assumption, e.g. piecewise constant opacity, suffices for a particular scene, then the improvement of the more sophisticated model, e.g. piecewise linear opacity, is less apparent. ## Q7 RGBDNeRF RGBDNeRF is orthogonal to our depth-supervised experiments, as it does not use a depth-based loss. As mentioned by HvcE, the results of our depth-supervised experiments demonstrate that our piecewise linear approach improves on the sample-based depth loss of a recent method, SCADE, compared to the original piecewise constant approximation. ## Q8 Citation format Thank you. We will change the citation format to the more standard "[25]" in our revision. ## Q9 Ln 164 Thanks for catching this technicality. We are not doing anything different here – we will clarify and correct this in our revision. ## Q10 Limitations One potential limitation is our piecewise constant color assumption, i.e. within a bin we do not integrate color. This assumption will struggle in difficult scenarios such as two-sided colored glass or atmospheric effects such as fog or smoke. We will add this limitation in our revision. However, we also note a statement from Reviewer WoBC that "given the nature of the contributions, there are no major limitations nor potential negative impacts to discuss." --- Rebuttal Comment 1.1: Comment: Thank you again for your time and effort in reviewing our paper.
As the discussion phase is nearing an end, please let us know if you have any further questions, and we will be more than happy to answer them.
Summary: The paper addresses a fundamental limitation of existing NeRF formulation, namely piecewise constant integration, which results in sensitivity to point sampling, summarized as quadrature instability in this paper. To address this, the paper proposes a new formulation based on piecewise linear quadrature for density and piecewise constant quadrature for color. The paper demonstrates that the proposed method produces sharper and more stable results, and leads to better quantitative metrics. Strengths: The paper clearly analyzes the problem of existing methods. The proposed solution is theoretically sound. The proposed method leads to improvements in NeRF reconstruction, both qualitatively and quantitatively. The paper shows detailed theoretical analysis and visual comparisons in supplementary material and video. Weaknesses: The improvements appear to be quite small from the overall quantitative metrics. It seems that the improvements may be more visible in specific scenarios, but may be subtle for general inputs. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is visual difference obvious for a random input, or if the example shown needs to be carefully chosen? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper does not discuss its limitations. It may happen that the method is robust, but some limitations (including limited improvements) in certain scenarios should still be discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for finding our paper clear and theoretically sound, and for recognizing that it addresses a fundamental problem of the existing NeRF formulation, leading to quantitative and qualitative improvements in NeRF reconstruction. Below we address the questions that were raised. ## Q1 Small improvements from overall quantitative metrics. As emphasized by Reviewer WoBC, our "experiment results are convincing with a non-negligible improvement of 0.5dB in PSNR but also across other metrics (SSIM, LPIPS) on multiple classic datasets (one synthetic and two real)." One advantage of our approach is highlighted in difficult scenarios where sensitivity to samples becomes a problem for the original piecewise constant formulation. The metrics are not designed to be sensitive to those specific scenarios, but as also mentioned by Reviewer am3W, the "qualitative improvements are significant in particularly difficult scenarios". ## Q2 Visual difference of random views In scenes with difficult scenarios as mentioned in the paper, random views will have visual differences. On the other hand, in cases where the piecewise constant opacity assumption suffices, the performance is similar – as with any other rendering method, if the underlying assumption of a simpler rendering model suffices, then the improvement of the more sophisticated model is less apparent. ## Q3 Limitations One potential limitation is that we still assume piecewise constant color. Enabling a more sophisticated color function can potentially handle difficult scenarios such as double-walled colored glass or atmospheric effects such as fog or smoke. We will add this to our revision; however, we also note a statement from Reviewer WoBC that "given the nature of the contributions, there are no major limitations nor potential negative impacts to discuss." --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. --- Reply to Comment 1.1.1: Comment: You’re welcome. Let us know if you have any further questions.
Rebuttal 1: Rebuttal: We thank the reviewers for their feedback and for finding our approach novel (HvcE), simple (WoBC, am3W) and effective (am3W), introducing a clean (am3W), elegant (WoBC) and theoretically sound (tVLL) formulation in a paper that is clear and well-written (WoBC). As summarized by the reviewers, we tackle a commonly overlooked problem for NeRFs on how integration is done for volume rendering. Specifically, we propose a piecewise linear opacity and piecewise constant color formulation for volume rendering that alleviates quadrature instability and leads to a simple and closed-form equation. Our new formulation is a plug-and-play (WoBC) or drop-in replacement (HvcE) for existing NeRF-based methods that assume piecewise constant opacity and color. Our experiments show that this leads to quantitative improvements across different metrics on multiple classic datasets (WoBC) and is qualitatively significant in particularly difficult situations (am3W). Below we include a brief summary of each response. Please see individual reviewer responses for more details. ## [tVLL, am3W] Marginal quantitative improvement As mentioned by WoBC, our experiment results "are convincing with a non-negligible improvement across metrics on multiple datasets," and the improvements of our approach are highlighted in "particularly difficult situations" (am3W). We note that the metrics are not designed to be sensitive to those specific scenarios. ## [tVLL] Visual difference of random inputs Random inputs will have visual differences in scenes with difficult scenarios as mentioned in the paper. However, as with any other rendering method, if the underlying simpler assumption, e.g. piecewise constant, suffices, then the improvement of the more sophisticated model, e.g. piecewise linear, is less apparent. ## [am3W, WoBC] Comparison with Mip-NeRF We plug our piecewise linear opacity formulation into Mip-NeRF (PL-MipNeRF).
Table 1 shows consistent improvement across all scenes in all metrics on the Blender dataset, demonstrating that results can scale to more recent frameworks. ## [am3W, WoBC] Integration into other methods In addition to MipNeRF, we also integrate our approach into DIVeR, a recent voxel-based approach (PL-DIVeR), as shown in Table 2. Results show that our formulation can be utilized as a drop-in replacement in the recent voxel-based approach. ## [am3W] Computation overhead compared to Vanilla NeRF On a single Nvidia V100 GPU, the total training time for 500k iterations of Vanilla NeRF and PL-NeRF is 17.78 and 21.43 hours, respectively, while the total rendering time for one 800x800 image is 25.59 and 32.35 seconds, respectively. Figure 2-c shows the training curves with respect to time for the Lego scene. ## [am3W] Possible advantage of linear color An example scene where this can be advantageous is a thin glass with one color on one side and a different color on the other. ## [am3W] Will there be a similar closed-form expression for piecewise linear color and piecewise constant opacity? Yes, it turns out that it is in closed form too. However, that assumption will still suffer from the problems in the difficult scenarios presented in our paper. ## [am3W] RGBDNeRF RGBDNeRF is orthogonal to our experiments on depth loss with our piecewise linear formulation, as this work does not supervise their NeRF training with depth. ## [WoBC] Geometry Extraction Table 3 shows the average distance between the surface of the ground truth model and the extracted meshes of the trained models for Vanilla NeRF and PL-NeRF. Figure 2-a shows qualitative examples. ## [WoBC] Computational complexity with different numbers of samples The total rendering time for an 800x800 image with PL-NeRF is 19.20, 25.85, and 32.35 seconds for (64+64), (64+128) and (128+64) samples, respectively.
## [WoBC] Fewer samples (64+64) The results for PL-NeRF and Vanilla NeRF with (64+64) samples are (30.09, 0.939, 0.056) and (29.86, 0.937, 0.059), respectively, for (PSNR, SSIM, LPIPS). ## [WoBC] Convergence plots Figure 2-b shows the convergence plots under different numbers of samples for PL-NeRF and Vanilla NeRF. ## [HvcE] DIVeR citation We will cite DIVeR in our revision. ## [HvcE] Comparison with DIVeR We incorporate our approach into DIVeR by using their voxel representation and feature integration and plugging in our piecewise linear opacity volume rendering integration. Table 2 shows quantitative results. ## [HvcE] DIVeR does not quite allow for integration over more complicated functions. Interestingly, we show that there is a tradeoff between the plausibility of the learned radiance field and DIVeR's $MLP_w$ being an affine transformation with color being piecewise constant. As a result, it does not quite allow for integration over more complicated functions. ## [HvcE] Method improves sampling, making it more peaky at the surface We quantitatively evaluate the samples drawn from the learned distributions of PL-NeRF and Vanilla NeRF by computing the average L2 distance between the samples and the ground truth depth. The results are 0.019 and 0.033 for PL-NeRF and Vanilla NeRF, respectively. ## [HvcE] Improvement in geometry for NeRF Table 3 shows the improvement in the extracted geometry of PL-NeRF compared to Vanilla NeRF across the different scenes in the Blender dataset. Figure 2-a shows qualitative examples. We note that the SDF methods are orthogonal to our contribution, as their goal is to learn an SDF field and convert that into density to enable supervision through volume rendering. How to render density is orthogonal to how density is produced; our focus is the former, while the SDF works explore the latter.
## [HvcE] Reference for inverse transform sampling of the original NeRF Yes, it is based on their official codebase, where linear interpolation is used as the surrogate function $G$. Since a surrogate function is used, the samples are not being drawn from the original underlying distribution. Pdf: /pdf/766056796c3ed52400ae2e94fd1f6a3d5067abd7.pdf
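The surrogate-CDF inversion described in this last answer can be sketched as follows (a minimal version of inverse transform sampling with a linearly interpolated CDF, matching the hierarchical-sampling strategy of the official NeRF codebase; the bin edges and CDF values here are hypothetical inputs, whereas in NeRF they come from the coarse network's weights):

```python
import bisect
import random

def sample_from_surrogate_cdf(bin_edges, cdf_vals, n, rng=random.random):
    """Draw u ~ U[0, 1] and invert the tabulated CDF by linear interpolation,
    i.e. use a piecewise-linear surrogate G in place of the true CDF."""
    samples = []
    for _ in range(n):
        u = rng()
        j = bisect.bisect_left(cdf_vals, u)       # first index with cdf >= u
        j = min(max(j, 1), len(cdf_vals) - 1)     # clamp to a valid segment
        c0, c1 = cdf_vals[j - 1], cdf_vals[j]
        x0, x1 = bin_edges[j - 1], bin_edges[j]
        frac = 0.0 if c1 == c0 else (u - c0) / (c1 - c0)
        samples.append(x0 + frac * (x1 - x0))
    return samples
```

Because G only interpolates the CDF at the bin edges, the resulting samples follow the surrogate rather than the original underlying distribution, which is the point raised in the answer above.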
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
How Does Adaptive Optimization Impact Local Neural Network Geometry?
Accept (poster)
Summary: This paper aims to understand why Adam works better than SGD for language model training from the perspective of the local geometry of the training loss around the algorithm iterates. Strengths: The problem that this paper considers is indeed a very important question, and this work sets out to answer the question from first principles. This paper has many good intuitions. This paper also has some partial theory that backs up their empirical observation. Given that their empirical observation is quite hard to analyze theoretically, I think their theory is quite valuable, although it only shows a weak result for a very specific model. Weaknesses: - Many plots have too-small fonts. Please adjust the font sizes for better readability. - In general, the plots have legends and axes that are quite hard to read; please adjust them. - In Figure 8, how can the spectrum in the right plot be approximately low-rank? All singular values are greater than 0.5, and compared to the left plot, I wouldn't consider this approximately low rank. Please explain. - Regarding Section A.6, the plots seem a bit inconclusive. Given how noisy the plots are, I don't see this as alignment. Do you have any intuitions behind why the layer gradients align with the diagonal of the Hessian? - Are the results established in Theorem 1 empirically tight? I think experiments on this very simple setting, to see if your empirical observations hold true, are required. In particular, for this two-layer MLP, do you see a lower R-value for Adam than SGD-M? - In Figure 9, I see that the observation doesn't hold for the right singular vectors. This makes the observation quite brittle in my opinion. Could you explain why it doesn't hold for the right singular vectors? - In general, this paper lacks empirical investigation on smaller/toy models. I think showing to what extent this observation holds for simpler models would help readers comprehend how universal this observed phenomenon is.
- Given that language model training is done mostly with the cross-entropy loss, I'm curious whether the conclusion you made in Theorem 1 similarly holds true for the logistic loss? - In the Gaussian initialization, I see the variances of the different layers are chosen quite carefully. Also, the variance looks quite small. Could you justify why making these assumptions is practical? Also, could you explain the intuition behind why these assumptions are required for the proof? - In the theorem statement, how big are $T_1^{sgd}, T_2^{sgd}$ compared to $T_1^{adam}, T_2^{adam}$? Are they comparable? Or, as you observed in the empirical results, are $T_1^{sgd}, T_2^{sgd}$ provably larger than $T_1^{adam}, T_2^{adam}$? - Could you share some intuitions as to why the training behavior of vision models (CIFAR-10, ImageNet) is so different from that of language models? I don't find the explanation that it has to do with the transformer architecture quite conclusive... - Minor comment: It would be helpful if the proof sketch of Theorem 1 appeared in the main text. I think it has many good intuitions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and answer their questions below. 1. **Fonts of plots**. Thanks for your advice. We will increase the fonts and adjust the legends and axes of the plots. 2. **Low-rank structure.** Figure 8 is meant to depict a low-rank trend of the spectrum rather than a mathematically rigorous low-rank property. For practical deep models, due to various noises, we don't expect a very clean low-rank property. 3. **Alignment in Section A.6.** Though not very significant, if we focus on *large* gradients, we can find that they align with *large* diagonals of the Hessian. Intuitively, when the diagonal of the Hessian is large (sharp region), we should use a small update. However, for SGD, the corresponding updates are usually large due to large gradients. This gives us some intuitive evidence of why a non-uniform diagonal Hessian will harm fast optimization for SGD. However, things are different for adaptive methods such as Adam. As shown in the right columns of Figures 10-12, their *adaptive updates* do not align with the diagonal of the Hessian and are more uniform. Actually, we don't want to demonstrate the alignment between *true gradients* and diagonals of the Hessian. Instead, we want to emphasize the uniformity of Adam's *adaptive updates* and the misalignment between them and the diagonals of the Hessian, as it provides intuition on the optimization advantage of adaptive methods. Finally, the figures in Section A.6 are just intuitive and we don't want to draw a conclusion from them. That's why we put them in an appendix. 4. **Empirical results on the setting of Theorem 1 and other smaller/toy models.** Below are the empirical results on a 2-layer linear network. We trained the models for 20 epochs to convergence and chose the best learning rates of SGD+M and Adam. As we can see, our empirical observations hold true for this very simple setting, i.e. $R^{\text{Adam}}\_{\text{med}}(t)<R^{\text{SGDM}}\_{\text{med}}(t)$.
| | Epoch0|Epoch0|Epoch5|Epoch5|Epoch10|Epoch10| Epoch20|Epoch20| | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | Layer # | $R^{\text{SGDM}}_{\text{med}}(t)$ | $R^{\text{Adam}}_{\text{med}}(t)$ | $R^{\text{SGDM}}_{\text{med}}(t)$ | $R^{\text{Adam}}_{\text{med}}(t)$ |$R^{\text{SGDM}}_{\text{med}}(t)$ | $R^{\text{Adam}}_{\text{med}}(t)$ | $R^{\text{SGDM}}_{\text{med}}(t)$ | $R^{\text{Adam}}_{\text{med}}(t)$| |1|57.18 |57.18|59.85|35.37 |61.20 |38.16|75.10|59.13| |2|1.54|1.54|3.68|1.58|3.33|1.60|3.97|1.86| Moreover, Appendix A.1 presents empirical results on a shallow transformer (8 layers), which is a simpler model than those in the main text. Our observation still holds for this shallow transformer. The above results reveal the universality of our observation on simple models. 5. **Right singular vectors.** We planned to use a paragraph to present the behavior of right singular vectors and discuss possible explanations. However, due to an editing error, we forgot to put that paragraph into the appendix. Roughly speaking, we noticed that the right singular vectors do not have uniformity for Adam. We are not sure of the reason and one possible explanation is that for a weight matrix, its right singular vectors are closer to the input data than left singular vectors and more easily influenced by the data, therefore may not show uniformity. 6. **Logistic loss.** The answer is yes. Below are the empirical results on the setting of Theorem 1 but using the logistic loss. We trained the models for 25 epochs to convergence and chose the best learning rates of SGD+M and Adam. As we can see, the conclusion similarly holds, i.e. $R^{\text{Adam}}\_{\text{med}}(t)<R^{\text{SGDM}}\_{\text{med}}(t)$. 
| |Epoch0|Epoch0 | Epoch5 | Epoch5 |Epoch15| Epoch15|Epoch25|Epoch25| | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | Layer # | $R^{\text{SGDM}}_{\text{med}}(t)$ | $R^{\text{Adam}}_{\text{med}}(t)$ | $R^{\text{SGDM}}_{\text{med}}(t)$ | $R^{\text{Adam}}_{\text{med}}(t)$ | $R^{\text{SGDM}}_{\text{med}}(t)$ | $R^{\text{Adam}}_{\text{med}}(t)$ | $R^{\text{SGDM}}_{\text{med}}(t)$ | $R^{\text{Adam}}_{\text{med}}(t)$| |1|47.75| 47.75| 18.5|15.55|32.08|14.54|56.42|12.58| |2|2.06 |2.06| 3.57|1.74| 2.28|1.81|4.34|1.60| 7. **About initialization.** Empirically, our observation still holds under standard initialization. The empirical results in the fourth point are conducted under standard initialization. That means the small initialization assumption in our theoretical analysis is not essential. Actually it is only for technical reasons to make the proof of low-rank structures easier. Besides, it is a common choice for theoretical work to make the initialization close to 0. 8. **Comparison between $T\_{\text{Adam,2}}, T_{\text{Adam,1}}$ and $T_{\text{SGDM,2}}, T_{\text{SGDM,1}}$.** In the detailed proofs in Appendix D and E, we prove that $T\_{\text{Adam,2}}-T_{\text{Adam,1}}=\Theta(\frac{1}{\eta\sqrt{d}})$ and provide an upper bound for SGD+M: $T_{\text{SGDM,2}}-T_{\text{SGDM,1}}\le\tilde{\mathcal{O}}(\frac{d^\alpha}{\eta})$. We can see that $T_{\text{SGDM,2}}-T_{\text{SGDM,1}}$ is larger than $T\_{\text{Adam,2}}-T_{\text{Adam,1}}$, but actually, they are not comparable because the bound for SGD+M is just an upper bound. 9. **Results on image tasks.** As is discussed in A.8, the behavior on image tasks seems different, but the underlying correlation between $R^{\text{OPT}}\_{\text{med}}$ and optimization speed is actually consistent with that on language tasks. 
As is shown in A.8, on image tasks, Adam does not converge faster than SGD+M and in the meantime, $R^{\text{Adam}}_{\text{med}}$ values are no longer smaller than $R^{\text{SGDM}}\_{\text{med}}$ during training. This reveals the connection between the local diagonal geometry and the convergence speed from another perspective. That is, when the diagonal of Hessian of Adam is not more uniform than SGD+M, its convergence speed is not better, either. --- Rebuttal Comment 1.1: Comment: I read the response and it's satisfactory. I raise my score to 6. --- Reply to Comment 1.1.1: Comment: Thanks a lot for raising the score!
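The uniformity intuition from point 3 of the rebuttal above can be illustrated with a toy computation (our own sketch: a fixed gradient fed to textbook Adam with bias correction, compared with plain gradient scaling; this is not the paper's experiment):

```python
import math

def sgdm_updates(grads, lr=0.1):
    """SGD(+M) update magnitudes scale with the raw gradient magnitudes."""
    return [lr * g for g in grads]

def adam_updates(grads, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=200):
    """Feed Adam the same gradient for `steps` iterations. The preconditioned
    update lr * m_hat / (sqrt(v_hat) + eps) approaches lr * sign(g), so the
    per-coordinate update magnitudes become nearly uniform regardless of |g|."""
    m = [0.0] * len(grads)
    v = [0.0] * len(grads)
    for t in range(1, steps + 1):
        m = [beta1 * mi + (1 - beta1) * g for mi, g in zip(m, grads)]
        v = [beta2 * vi + (1 - beta2) * g * g for vi, g in zip(v, grads)]
        upd = [lr * (mi / (1 - beta1 ** t)) / (math.sqrt(vi / (1 - beta2 ** t)) + eps)
               for mi, vi in zip(m, v)]
    return upd  # update at the last step
```

With gradients spanning three orders of magnitude, the SGD updates inherit that spread, while Adam's adaptive updates are nearly equal across coordinates, which is the uniformity the rebuttal appeals to.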
Summary: This paper aims to study the connections between an optimization algorithm and the geometric properties observed during the training of a neural network with that algorithm. The authors argue that when analyzing these connections, it is important to consider the local iterates. To address this, they propose the utilization of a metric called $R^{OPT}_{med}$, which involves the local condition number of the Hessian calculated at the iterates. The authors conduct experiments to illustrate their findings that fast convergence is often associated with a low statistic, and that adaptive methods such as Adam often have a low statistic. They prove that, in a small theoretical setting, Adam has a better statistic than SGD with some probability. Strengths: Studying the influence of algorithms on the loss geometry is interesting. This paper provides some insights on the uniformity of diagonal geometry via the $R^{OPT}_{med}$ metric, where the values found by Adam are smaller than those found by SGDM in the empirical results. The theoretical setting shows that the diagonal of the loss Hessian for Adam has good uniformity, while the diagonal of the loss Hessian for SGD+M is less uniform. Weaknesses: While there may exist a correlation between the proposed statistic and the optimization algorithm, the reasons behind the favorable behavior of these algorithms remain uncertain. Is it better due to the uniformity, or is the diagonal uniformity a byproduct of Adam's inherent algorithmic properties that are unrelated to its overall effectiveness? As a result, it is more important to demonstrate the contribution of the proposed statistic to achieving successful convergence rather than to focus on the relationship between Adam and diagonal uniformity. Why is the large batch setting needed to prove the correlation between Adam and the effectiveness of diagonal uniformity? It is not clear how the algorithms behave in other settings and how high the probability can be in the theoretical analysis.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please see above. --- I thank the authors for your rebuttal. My recommendation remains the same. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and answer their questions below. 1. **Contribution of our statistic to fast optimization.** We actually discuss and demonstrate the contribution of small $R^{\text{OPT}}_{\text{med}}$ to fast optimization. Please see the first paragraph on Page 7 and Appendix B for more details. Roughly speaking, to rule out the possibility that small $R^{\text{OPT}}\_{\text{med}}$ is just a byproduct of adaptive methods and is unrelated to fast optimization, in Appendix B.1, we add a supplementary experiment similar to that in Figure 1. We select two iterates $x_1$ and $x_2$ from two trajectories that both come from SGD+M (instead of one from Adam and one from SGD+M in Figure 1), such that the loss $f(x_1)=f(x_2)$ but $x_2$ has a smaller $R^{\text{OPT}}\_{\text{med}}$ than $x_1$. We then run SGD+M with the same configuration twice, once from $x_1$ and once from $x_2$. Under this setting, we get a similar observation: running SGD+M from $x_2$ (with smaller $R^{\text{OPT}}\_{\text{med}}$) achieves faster convergence than from $x_1$. This suggests that the uniformity of the diagonal of loss Hessian (measured by $R^{\text{OPT}}\_{\text{med}}$) reveals some intrinsic trajectory property beyond the algorithm choice and is indeed a contributing factor to fast optimization. In Appendix B.2, we theoretically prove the contribution of small $R^{\text{OPT}}_{\text{med}}$ to fast optimization in a simplified setting. 2. **The large batch assumption.** Our proof relies on a very elegant analysis and tracks detailed dynamical properties of weight matrices during training, which is a big technical challenge. To overcome the difficulty and simplify the proof, we require large batches to reduce the stochastic variances. Moreover, we want to emphasize that people in practice use very large batches when training language models, especially when using multiple GPUs. --- Rebuttal 2: Comment: Thanks for your feedback on our rebuttal. 
We would like to know which part of our rebuttal is still unclear and would be happy to elaborate further. Regarding the demonstration of our metric's contribution to fast optimization, please see the first paragraph on Page 7 and Appendix B for more details.
Summary: This study compares the adaptive optimization method (Adam) and the non-adaptive one (stochastic gradient method with momentum) through the lens of the ``uniformity'' of the diagonal components of the Hessian, denoted by $R_{\rm med}^{\rm OPT}(t)$. The comparison is conducted experimentally for various deep learning models such as BERT-small and theoretically for two-layer linear neural networks under several assumptions. Both comparisons conclude that $R_{\rm med}^{\rm OPT}(t)$ is smaller (i.e., the Hessian diagonal is more uniform) when using Adam, which results in faster convergence. Strengths: - The experimental comparison is conducted carefully. - The theoretical results have novelty, which gives new insight into analyzing the implicit bias of various optimization methods. Weaknesses: - I couldn't fully understand the motivation for employing $R_{\rm med}^{\rm OPT}(t)=\frac{{\rm max}\{|H_{ii}^{(t)}|\}}{{\rm median}\{|H_{ii}^{(t)}|\}}$ as the notion of uniformity. I realize this value is more stable than the condition number; as explained in footnote 1, many types of variants could be considered. Indeed, in the experiment, there are some cases where $\frac{R_{\rm med}^{\rm SGD}(t)}{R_{\rm med}^{\rm Adam}(t)}$ is smaller than 1. - In the theoretical comparison (Theorem 1), there are several points that are unclear to me. First, the training dynamics change with the selection of hyperparameters and affect $R_{\rm med}^{\rm OPT}(t)$, but some parameters (such as $\alpha$, $\sigma$) can be taken in a different order. Will it be a fair comparison? Moreover, I think it would be more helpful for readers if the authors clarified the dependence of $T_{\rm OPT,(1,2)}$ on other parameters. - (minor) The reference to Figure 4 seems wrong. The authors write ``see Section 4.1'' in the caption, but it is explained in Section 6. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Although the initialization order required in Theorem 1 seems to be very small, can the authors import the theoretical result or its insights into the practical settings? If not, what is the difficulty? - The setting in Section 5 requires $d_1=d_0=d$, but can the overparameterized settings (i.e., $d_1\ge d_0$ holds) be treated in the same way? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
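For readers unfamiliar with the statistic under discussion, the ratio $R_{\rm med}^{\rm OPT}(t)$ is straightforward to compute once an estimate of the Hessian diagonal is available. The following is a minimal numpy sketch (the function name and example values are ours; estimating the diagonal itself, e.g. via Hutchinson-style probes, is assumed to be done elsewhere):

```python
import numpy as np

def r_med(hess_diag):
    """Uniformity statistic R_med = max{|H_ii|} / median{|H_ii|} of a Hessian diagonal."""
    a = np.abs(np.asarray(hess_diag, dtype=float))
    return a.max() / np.median(a)

# A near-uniform diagonal gives a ratio close to 1; a skewed one does not.
uniform = r_med([1.0, 1.1, 0.9, 1.0])
skewed = r_med([100.0, 1.0, 1.2, 0.8])
```

Because the median is robust to a few extreme entries, this statistic is more stable across iterations than the max/min condition-number-style ratio mentioned in the review.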
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and answer their questions below. 1. **About our metric $R^{\text{OPT}}_{\text{med}}(t)$.** First, we did consider another type of variant, which is a singular value-based metric. More discussions and experiments can be found in the paragraph starting from Line 79 and Appendix A.9. The short conclusion is that when measured by that singular value-based metric, our observation still holds: the local geometry obtained by Adam is more uniform than that obtained by SGD+M. Second, we want to emphasize that although in some cases $\frac{R^{\text{SGDM}}\_{\text{med}}(t)}{R^{\text{Adam}}\_{\text{med}}(t)}$ is smaller than 1, this only happens for a small number of layers. For most layers, $\frac{R^{\text{SGDM}}\_{\text{med}}(t)}{R^{\text{Adam}}\_{\text{med}}(t)}$ is larger than 1. 2. **Selection of hyperparameters.** Theorem 1 holds for hyperparameters (such as $\alpha,\sigma$) in certain ranges instead of just particular values. The ranges for SGD+M and Adam overlap with each other. That means we can choose the same hyperparameters for SGD+M and Adam in the overlapping region to make a fair comparison, for example, the same $\alpha,\sigma$ such that $\alpha\ge 4(p+2)$ and $\sigma\le\min\left(\frac{\eta^{3/2}}{d^{\alpha/2+1}},\frac{\eta^{3/2}\xi^2}{d^{13/4}}\right)$. We provide the dependence of $T_{\text{OPT},(1,2)}$ on other parameters in the detailed proofs in Appendices D and E. For example, we prove that $T_{\text{Adam,2}}-T_{\text{Adam,1}}=\Theta(\frac{1}{\eta\sqrt{d}})$ and provide an upper bound for SGD+M: $T_{\text{SGDM,2}}-T_{\text{SGDM,1}}\le\mathcal{O}(\frac{d^\alpha\log(d/\epsilon)}{\eta})$. However, due to the page limit, we didn't write these details in the main text. 3. **About initialization.** Empirically, our observation still holds under standard initialization. That means the small initialization assumption in our theoretical analysis is not essential. 
Below are the empirical results on a 2-layer linear network under standard initialization. We trained the models for 20 epochs to convergence, tuned the learning rates of SGD+M and Adam, and chose the best for each. As we can see, our empirical observation still holds, i.e., $R^{\text{Adam}}\_{\text{med}}(t)<R^{\text{SGDM}}_{\text{med}}(t)$.

| | Epoch 0 | Epoch 0 | Epoch 5 | Epoch 5 | Epoch 10 | Epoch 10 | Epoch 20 | Epoch 20 |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Layer # | $R^{\text{SGDM}}_{\text{med}}(t)$ | $R^{\text{Adam}}_{\text{med}}(t)$ | $R^{\text{SGDM}}_{\text{med}}(t)$ | $R^{\text{Adam}}_{\text{med}}(t)$ | $R^{\text{SGDM}}_{\text{med}}(t)$ | $R^{\text{Adam}}_{\text{med}}(t)$ | $R^{\text{SGDM}}_{\text{med}}(t)$ | $R^{\text{Adam}}_{\text{med}}(t)$ |
| 1 | 57.18 | 57.18 | 59.85 | 35.37 | 61.20 | 38.16 | 75.10 | 59.13 |
| 2 | 1.54 | 1.54 | 3.68 | 1.58 | 3.33 | 1.60 | 3.97 | 1.86 |

Actually, the small initialization assumption is only for technical reasons, to make the proof of low-rank structures easier. Besides, it is a common choice in theoretical work to set the initialization close to 0. 4. **Overparameterized case.** The answer is yes. The assumption $d_0=d_1=d$ is to make the notation simple and is not essential. For the overparameterized case $d_1\ge d_0$, the bounds in our theorem will have a more complicated dependence on $d_1,d_0$ (for example, $\frac{1}{poly(d)}$ becomes $\frac{1}{poly(d_1,d_0)}$) but the main message will not change, i.e., $R^{\text{Adam}}\_{\text{med}}(t)<R^{\text{SGDM}}_{\text{med}}(t)$. 5. **Reference to Figure 4.** Sorry for the confusion. In this caption, we write "Singular values and $R_u$ of the weight matrix in the 27-th layer on the translation task (see Section 4.1)". Although Figure 4 is explained in Section 6, the note "see Section 4.1" actually refers to the translation task, whose setup is described in Section 4.1. Thanks for pointing out this confusion. We will edit the caption to clarify this. 
--- Rebuttal Comment 1.1: Comment: Thanks for the authors' reply. My concerns are adequately addressed.
Summary: This paper aims to study the interaction between optimizers and the local loss landscape. The authors introduce the notion of $R_{\mathrm{med}}^{\mathrm{OPT}}$ to characterize the uniformity of the Hessian diagonal. Through extensive experiments, they show that, compared with SGD, Adam biases the trajectory towards regions with higher Hessian diagonal uniformity, which, they argue, contributes to faster optimization. As theoretical support, the authors demonstrate that Adam indeed possesses this property in a two-layer linear network setting. Strengths: 1. This paper is well-written, with extensive experiments and solid theoretical analysis. 2. The interaction between optimizers and the local loss landscape is an important and interesting topic. The findings of this paper may inspire future studies on developing a deeper understanding of the success of adaptive methods. Weaknesses: I was wondering whether $R_{\mathrm{med}}^{\mathrm{OPT}}$ can characterize the degree to which the loss landscape is ill-conditioned. While it is true that $R_{\textrm{med}}^{\textrm{OPT}}$ conveys a similar message to the condition number when the Hessian is dominated by its diagonal ($\nabla^2 \mathcal{L}(\theta)\approx\mathrm{diag}(\nabla^2 \mathcal{L}(\theta))$), it remains unclear whether $\nabla^2 \mathcal{L}(\theta)\approx\mathrm{diag}(\nabla^2 \mathcal{L}(\theta))$ holds throughout the training trajectory. Can the authors provide some empirical evidence? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Regarding the comparison of different training runs, specifically run 1 (using Adam throughout), run 2 (using Adam initially and then switching to SGDM halfway), and run 3 (using SGDM throughout), Figure 1 illustrates that run 2 achieves a considerably lower loss level compared to run 3. However, I am interested in the gap in the final loss between run 1 and run 2. 
If this gap is acceptable, it raises a possibility: Could we design an algorithm that employs Adam during the initial phase and then seamlessly switches to SGDM halfway through the training process? Such an approach may offer the potential benefit of memory savings. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
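The Adam-then-SGDM switching scheme raised in this question is easy to prototype on a toy problem. Below is an illustrative numpy sketch on a quadratic with an ill-conditioned diagonal; the hand-rolled Adam omits bias correction, and all function names and hyperparameters here are made up for illustration, not taken from the paper:

```python
import numpy as np

def train_with_switch(curv, x0, t_switch=50, t_total=100,
                      adam_lr=0.1, sgdm_lr=1e-3, b1=0.9, b2=0.999, mu=0.9, eps=1e-8):
    """Minimize 0.5 * x^T diag(curv) x with Adam, then switch to SGD+M at t_switch."""
    x = np.asarray(x0, dtype=float).copy()
    m = v = buf = np.zeros_like(x)
    losses = []
    for t in range(t_total):
        g = curv * x                      # gradient of the diagonal quadratic
        if t < t_switch:                  # Adam phase (bias correction omitted)
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g * g
            x = x - adam_lr * m / (np.sqrt(v) + eps)
        else:                             # SGD with heavy-ball momentum
            buf = mu * buf + g
            x = x - sgdm_lr * buf
        losses.append(0.5 * np.dot(x, curv * x))
    return losses

losses = train_with_switch(curv=np.array([100.0, 1.0]), x0=[1.0, 1.0])
```

Running such a prototype on real models (as the paper's Figure 1 does with Adam-to-SGDM switching) is what would actually answer the reviewer's memory-savings question; the toy above only shows the mechanics of the switch.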
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and answer their questions below. 1. **Diagonal approximation.** We actually add empirical evidence on the diagonal approximation in Appendix F. In Appendix F.1, we conduct experiments on a language transformer model and demonstrate that the loss Hessian tends to become more and more diagonal during training. In Appendix F.2, we give a rigorous theoretical analysis of this trend on a two-layer linear network. 2. **Gap between run 1 and run 2.** We add a figure in the global rebuttal to demonstrate that the gap between run 1 and run 2 is not acceptable. The convergence speed of run 2 (using Adam initially and then switching to SGDM halfway) is slower than that of run 1 (using Adam throughout). A possible reason is that after switching to SGDM halfway, the trajectory geometry becomes worse and worse, which harms fast optimization. --- Rebuttal Comment 1.1: Comment: The authors' rebuttal has well addressed my concerns. I will keep my positive score.
Rebuttal 1: Rebuttal: The attached figure is to address a question raised by Reviewer pmE6. Pdf: /pdf/7a0d3903708a9579158f97d71e860e2a3eff125a.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper provides a new explanation of why adaptive gradient methods perform better than SGD with momentum, through extensive experiments and theoretical analysis. The key insight is that adaptive gradient methods, especially Adam, are biased toward solutions with uniform Hessian diagonal values, and this property may contribute to faster convergence. Theoretical analysis is conducted in a simple setting with a 2-layer neural network, where it is proven that Adam is guaranteed to converge to solutions with more uniform Hessian diagonal values than SGD. Strengths: The insight that Adam is biased toward solutions with uniform Hessian diagonal values is an interesting and novel observation, and this measure is easy to compute compared with Hessian singular value-based metrics. The observation is supplemented with comprehensive experimental results and an analysis in a simplified neural network setting, which makes the work solid on both the theoretical and empirical sides. Weaknesses: To my understanding, the weaknesses are twofold: The experiments in the paper mainly focus on settings where Adam outperforms SGD, in which more uniformity in Hessian diagonals is observed with Adam. However, the paper lacks a comparison in settings where SGD outperforms Adam. It remains a question whether uniformity is still a good measure of performance in this case. In other words, would SGD iterates exhibit more uniformity in Hessian diagonals instead? In the paper's explanation, Adam biases the trajectory towards solutions with more uniformity in Hessian diagonals, and this contributes to faster convergence; thus, Adam converges faster than SGD. The bias in Adam has been well experimented with and studied. However, it is not very clear why uniformity in Hessian diagonals implies faster convergence. The paper does not provide a theory on it, and the experimental support for this claim is not strong. 
Additionally, it is intuitively hard to connect uniformity with faster convergence, since regions with more uniformity are allowed to have either large or small curvature. The folklore intuition that a flat region encourages better optimization and generalization does not seem to hold for this measure. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors provide more intuition on how uniformity in Hessian diagonals can induce faster optimization? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are adequately discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and answer their questions below. 1. **Comparison in the opposite setting.** In Appendix A.8, we have empirical results on image tasks, which provide the comparison in the opposite setting. As is shown in A.8, on image tasks, Adam does not converge faster than SGD+M and in the meantime, $R^{\text{Adam}}\_{\text{med}}$ values are no longer smaller than $R^{\text{SGDM}}\_{\text{med}}$ during training. This reveals the connection between the local diagonal geometry and the convergence speed from another perspective. That is, when the diagonal of Hessian of Adam is not more uniform than SGD+M, its convergence speed is not better, either. 2. **Contribution of uniformity in Hessian diagonals to fast optimization.** We actually discuss and demonstrate the contribution of small $R^{\text{OPT}}\_{\text{med}}$ (more uniform diagonal of Hessian) to fast optimization. Please see the second and third paragraphs of Section 4.3 and Appendix B for more details. Roughly speaking, to rule out the possibility that small $R^{\text{OPT}}\_{\text{med}}$ is just a byproduct of adaptive methods and is unrelated to fast optimization, in Appendix B.1, we add a supplementary experiment similar to that in Figure 1. We select two iterates $x_1$ and $x_2$ from two trajectories that both come from SGD+M (instead of one from Adam and one from SGD+M in Figure 1), such that the loss $f(x_1)=f(x_2)$ but $x_2$ has a smaller $R^{\text{OPT}}\_{\text{med}}$ than $x_1$. We then run SGD+M with the same configuration twice, once from $x_1$ and once from $x_2$. Under this setting, we get a similar observation: running SGD+M from $x_2$ (with smaller $R^{\text{OPT}}\_{\text{med}}$) achieves faster convergence than from $x_1$. 
This suggests that the uniformity of the diagonal of loss Hessian (measured by $R^{\text{OPT}}\_{\text{med}}$) reveals some intrinsic trajectory property beyond the algorithm choice and is indeed a contributing factor to fast optimization. In Appendix B.2, we theoretically prove the contribution of small $R^{\text{OPT}}_{\text{med}}$ to fast optimization in a simplified setting. 3. **More about uniformity and curvature.** The uniformity measured by our statistic $R^{\text{OPT}}_{\text{med}}$ can be viewed as some variant of the diagonal condition number. We want to emphasize that the condition number is also a relative measure instead of an absolute measure of large or small curvature and is well-known to have a direct connection to the speed of optimization. As an analogy of the condition number, our statistic $R^{\text{OPT}}\_{\text{med}}$ also has an intuitive connection with the optimization speed. In the theoretical analysis in Appendix B.2 on a simplified setting, we can see this connection more clearly.
Generalizing Nonlinear ICA Beyond Structural Sparsity
Accept (oral)
Summary: This paper utilizes the structural sparsity assumption on the support of the Jacobian matrix of the mixing function to extend the identifiability of nonlinear ICA in more settings including under-completeness, partial sparsity and source dependence, flexible grouping structures. It is a technically solid paper supported by theorems, proofs and experiments. Strengths: 1. Overall, the manuscript is well written with clear organization, comprehensive literature review, technically solid theorems, detailed proofs and promising experiment results. 2. This work addressed some limitations of theorems about identifiability with Structural Sparsity in Zheng et al. 2022 and extended nonlinear ICA with Structural Sparsity to more general settings. The proposed theorems could be more practically useful in real-world datasets. 3. The notations, theorems and proofs are clear in general. Weaknesses: 1. This work is interesting, and it would be great if code is provided to replicate the results. Please consider making the code publicly available. 2. The meanings of some notations are not clear. See Questions. 3. Ablation study. The author(s) only evaluated MCCs w.r.t. the number of sources. In Figure 3, the MCCs for 8 or 10 sources are a bit low so I wonder if more samples can help to improve MCCs. It would be more informative and convincing if more experiment configurations are considered (e.g., number of samples, various grouping structures) to demonstrate the effectiveness of proposed Theorems. 4. Ablation study. Though the result comparison seems obvious visually, the authors should consider performing statistical tests to compare results between proposed methods and baseline method. 5. Minor: "exits" should be "exists" at line 236. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Theorem 3.1: Is $|\mathcal{F}_{i,:}|$ the $L_0$ or $L_1$ norm of $\mathcal{F}$? Is $\mathcal{C}_k$ a minimal set of sample indices to uniquely identify source $k$? 
How does the assumption ii show Structural Sparsity? The regularization constraint $|\hat{\mathcal{F}}| \leq |\mathcal{F}|$, which induces sparsity, should be included in the theorems. Also, I note that the author(s) tried to explain the assumptions in the following paragraphs, but I would suggest describing the theorems, at least Theorem 3.1, in plain words so that readers can better understand them. Or at least explain the notations (e.g., $\mathcal{C}_k$) which are not explained in Section 2 Preliminaries. 2. Theorem 3.1: Zheng et al. 2022 also proposed a theorem on the undercomplete case. Could you kindly clarify the novelty of your proposed theorem relative to the one proposed in Zheng et al. 2022? 3. Theorem 4.1: The author(s) claimed that we do not need to know the dependence structures or the number of dependent sources, but it is not intuitive to me how Theorems 4.1 and 4.2 uncover the dependence structures and the number of dependent sources. Could you please clarify? 4. Theorem 4.2: What are $u_1$ and $u_2$? Are they two different sets of auxiliary labels? 5. Lines 281 - 283: The claim on multi-modal data is unclear. Could you please clarify how to identify linkage across multiple modalities? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
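The MCC evaluation discussed in this review is standard in the nonlinear ICA literature: compute absolute correlations between true and estimated sources, then take the best one-to-one matching to account for the permutation indeterminacy. A sketch of one common formulation (assuming numpy/scipy are available; the function name is ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mcc(true_sources, est_sources):
    """Mean correlation coefficient between true and estimated sources (rows),
    maximized over a one-to-one matching, since ICA identifies sources only up
    to permutation and component-wise transformation."""
    n = true_sources.shape[0]
    # cross-block of the stacked correlation matrix: true vs. estimated
    corr = np.abs(np.corrcoef(true_sources, est_sources)[:n, n:])
    row, col = linear_sum_assignment(-corr)   # best permutation (maximize corr)
    return corr[row, col].mean()

rng = np.random.default_rng(0)
s = rng.normal(size=(3, 1000))
# A permuted, sign-flipped, rescaled copy is a perfect recovery for ICA purposes.
s_hat = np.array([-2.0 * s[2], 0.5 * s[0], s[1]])
```

Here `mcc(s, s_hat)` is close to 1 despite the permutation and rescaling, which is exactly the invariance the identifiability theorems allow.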
Rebuttal 1: Rebuttal: We sincerely appreciate your detailed reading and insightful questions. All of these constructive suggestions have further improved the quality of the updated manuscript. Please find our point-by-point response below. **Q1:** Implementation availability. **A1:** Thanks for finding our work interesting. We are more than happy to make the scripts publicly available soon. For quick utilization, kindly note that all experiments are conducted using public GitHub repositories (FrEIA and GIN), detailed in Section B.1. Please feel free to let us know if you have any questions about the experiments. **Q2:** Clarification on some notations. **A2:** We are very grateful for your detailed reading, which helps improve our manuscript's clarity. We have emphasized all these points in the updated manuscripts: - **Q2(a):** Is $|\mathcal{F}|$ the $\ell_0$ or $\ell_1$ norm of $\mathcal{F}$? - **A2(a):** It is the $\ell_0$ norm of $\mathcal{F}$. - **Q2(b):** Is $\mathcal{C}_{k}$ a minimal set of sample indices to uniquely identify source $k$? - **A2(b):** We do not necessitate the set $\mathcal{C}\_{k}$ to be minimal, only that it uniquely identifies $k$. Of course, it is equivalent to consider $\mathcal{C}_{k}$ as a minimal set. - **Q2(c):** What are $\mathbf{u}\_1$ and $\mathbf{u}\_2$? - **A2(c):** These are two distinct values of the auxiliary variable $\mathbf{u}$. **Q3:** More experiment configurations. **A3:** Thanks for your suggestion. In light of it, we have conducted additional experiments with different sample sizes, of which the results are shown in the attached PDF in the global response. From the results, it is clear that increasing the sample size could improve MCCs and further stabilize the performance. **Q4:** Statistical tests to compare results between proposed methods and baseline methods. **A4:** Thanks a lot for the suggestion. 
Accordingly, we have conducted statistical tests on all comparisons, and all p-values are less than $0.01$, which are consistent with the visual differences in the violin graphs. We have emphasized this in the updated manuscript. **Q5:** Word "exits" should be "exists" at L236. **A5:** Thanks! It has been corrected. **Q6:** How does assumption ii of Thm. 3.1 show Structural Sparsity? **A6:** We are grateful for that insightful question. If the connective structure between sources and observed variables is extremely dense (e.g., no zero entry in the Jacobian matrix), assumption ii cannot be satisfied, emphasizing its role in promoting structural sparsity. However, it is worth noting that, after extending from the bijective setting in [1], where the assumption of Structural Sparsity is originally proposed, to the undercomplete case in our manuscript, this assumption can be met even with a relatively dense structure, provided there are enough observed variables and the underlying graph is not fully connected. Thus, it leans more toward a "structural diversity" assumption in our generalization, and the name “Structural Sparsity” is chosen primarily to acknowledge its root and maintain continuity with its original name. **Q7:** Better include the regularization constraint during estimation in theorems. **A7:** Thank you for the suggestion. We have now incorporated it into all related theorems to avoid potential misunderstanding. Initially, we had just emphasized it following the theorem since it was not an assumption about the data-generating process, but including it directly in the theorem indeed improves the clarity. **Q8:** Better explanation of theorems and some notations (e.g., using plain words) so that readers can understand them more easily. **A8:** Thanks a lot for the kind suggestion. We have added descriptions in plain words of all theorems in the updated manuscripts. For example, Thm. 
3.1 states that with sufficient sample size and structural sparsity (detailed later) on the connective structure between sources and observed variables, component-wise identifiability can be achieved even in undercomplete cases with sparsity regularization. We have also highlighted some notations used in the theorems. For instance, $\mathcal{C}_{k}$ in the assumption of Structural Sparsity denotes a set of source indices. **Q9:** Could you clarify the novelty of your proposed theorem relative to that proposed in [1] about the undercomplete case? **A9:** We appreciate the great question. As mentioned in L159-160, [1] only removes the rotational indeterminacy while our theorem removes all major indeterminacies and only preserves the component-wise transformation and permutation. In other words, [1] only gets rid of specific spurious solutions due to the rotational indeterminacy (e.g., the ‘rotated-Gaussian’ MPA) while we prove the full identifiability of the undercomplete case. **Q10:** How do Thms. 4.1 and 4.2 uncover the dependence structures and the number of dependent sources? **A10:** Thanks for your insightful question. For the set of dependent sources, i.e., $\mathbf{s}\_D$, Thms. 4.1 and 4.2 only provide subspace-wise identifiability up to an invertible transformation, and thus cannot uncover the structures and the number of dependent sources. The identifiability of $\mathbf{s}\_D$ up to an invertible transformation means that $\mathbf{s}\_D$ will not be mixed with sources in $\mathbf{s}\_I$ after estimation, i.e., block-wise identifiability (e.g., Thm 4.2 in [2]), which does not mean the component-wise identifiability of sources in $\mathbf{s}\_D$. **Q11:** How to identify linkages across multiple modalities. **A11:** Thank you for the great question. Since we still assume the (conditional) independence between different subspaces $\mathbf{s}\_{c\_j}$ (Eq. 
4), we do not allow dependencies across subspaces and thus cannot identify linkages across multiple modalities. --- [1] Zheng et al. "On the identifiability of nonlinear ICA: sparsity and beyond." [2] Kong et al. “Partial identifiability of domain adaptation.”
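The $\ell_0$ support notion clarified in A2(a) and A7 above can be illustrated numerically: the support $\mathcal{F}$ is the zero/nonzero pattern of the Jacobian of the mixing function, and $|\mathcal{F}|$ counts its nonzero entries, which is what the regularization constraint $|\hat{\mathcal{F}}| \leq |\mathcal{F}|$ compares. A hypothetical finite-difference sketch (the mixing function, tolerances, and names are ours, purely for illustration):

```python
import numpy as np

def jacobian_support(f, s, eps=1e-6, tol=1e-8):
    """Numerical support of the Jacobian of a mixing function f at point s:
    a boolean m-by-n matrix with True where df_i/ds_j is (numerically) nonzero."""
    s = np.asarray(s, dtype=float)
    f0 = np.asarray(f(s))
    J = np.empty((f0.size, s.size))
    for j in range(s.size):
        d = np.zeros_like(s)
        d[j] = eps
        J[:, j] = (np.asarray(f(s + d)) - f0) / eps  # forward difference
    return np.abs(J) > tol

# Undercomplete example (m=3 observed variables, n=2 sources) with a
# structurally sparse mixing: x2 depends only on s1, x3 only on s2.
mix = lambda s: np.array([s[0] + s[1] ** 2, np.sin(s[0]), s[1] ** 3])
F = jacobian_support(mix, np.array([0.3, 0.7]))
l0 = F.sum()   # the l0 "norm" |F| used in the sparsity constraint
```

In this toy case the support has four nonzero entries out of six, so a candidate estimate $\hat{f}$ would be admissible under the constraint only if its Jacobian support is at most as dense.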
Summary: The article serves as an extension of the work of Zheng et al. 2022, which posited the identifiability of nonlinear ICA based on specific structural sparsity assumptions related to the mapping between sources and mixtures. The current article expands on that by addressing the undercomplete case, where the number of mixtures exceeds the number of sources, and furthermore relaxes both the sparsity and source-independence assumptions to yield more general identifiability results. In the end, the authors provide some numerical examples to illustrate the applications of their identifiability theorems. Strengths: The article tackles the foundational issue of the nonlinear inverse problem, making significant assertions regarding fundamental identifiability theorems. These are premised on assumptions of partial independence and structural sparsity. The problem under investigation is fundamental, and the article offers some important results for it. Weaknesses: The most significant shortcoming of the article lies in its presentation, particularly in its explanation of the core assumptions underpinning the theorems, as well as the motivation for the conditions applied in these assumptions. Absent a solid grasp of these theorems, it becomes challenging to properly evaluate the paper's contributions and ascertain its potential impact. In terms of specific issues: * Concerning Theorem 3.1: This theorem seems to aim at generalizing Theorem 1 from Zheng et al., 2022 for the complete ($m=n$) case to the undercomplete ($m>n$) case. The identifiability results for ICA setups typically do not depend on a specific estimator choice or estimation algorithm. However, the statement of Theorem 3.1 seems rather unclear in this context. According to the article's notation, $\hat{f}$ refers to a specific estimate of the mixing function. Both the set of support matrices $\mathcal{T}$ and the support $\hat{\mathcal{F}}$ depend on this particular estimate. 
The assumption (i) used in Theorem 3.1 is based on $\mathcal{T}$ and $\hat{\mathcal{F}}$; hence, this condition appears to be linked to a specific choice of the estimator for the mixing. It would be beneficial if the authors could clarify whether the assumption (i) must hold on a particular $\hat{f}$, a specific set of functions, etc. This clarification will likely impact the proof in the supplementary material and the explanation given between lines 129-136. * Line 97: Traditional or linear ICA does not necessarily require $m=n$. * Line 114: Should $\mathcal{S}$ be $\mathcal{A}$? * Line 111 vs Line 536: The symbol $\mathcal{T}$ has two differing definitions: the set of matrices sharing the same support as $\mathbf{T}(\mathbf{s})$, and the support of $\mathbf{T}(\mathbf{s})$ itself. * Theorem 4.1: The vectors $w$ (whose independence implies identifiability) have not been sufficiently motivated or explained. * Similar comments can be made for other identifiability theorems. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * Line 190: Could you clarify what is meant by "changing" sources? * Line 195-196: The phrase "For sources $s_D$, they do not need to be mutually independent as long as they are dependent on the variable $u$" is somewhat confusing. Subsequent equation (3) implies that $s_D$ and $s_I$ are conditionally independent when conditioned on $u$, and that the components of $s_I$ are independent. What is the necessity for this latent variable $u$? Perhaps the authors could shed some light on this. * How does Theorem 4.1's contribution compare with those from Khemakhem et al. (2020a) and Sorenson et al. (2020)? * Lines 215-217: Could you elucidate what condition (i) in Theorem 4.2 represents? What do $s_d$ and $B_{s_I}$ signify? Perhaps the discussions on lines 229-247 should precede Theorem 4.2 to better contextualize its contents and results, potentially with more lucid explanations. 
* Regarding Figure 4 and 5, could you expound on how these examples pertain to the nonlinear ICA setup (what are the sources, which appear to be images, and what are the nonlinear mixings)? How are the interpretations in the captions of these figures derived? Could you also elucidate how these examples relate to the identifiability theorems presented in the article and the conditions stipulated in these theorems? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes, the authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the time dedicated and constructive feedback, which has greatly improved the quality of the updated manuscript. Please find the responses to all your comments below. **Q1:** More discussion on the assumption (i) in Thm. 3.1. **A1:** Thanks so much for the suggestion. The assumption (i) does not necessitate a particular $\hat{\mathcal{F}}$, and it is typically satisfied asymptotically as detailed in L129-136 (it only necessitates the **existence** of one $\mathrm{T}$ in the entire space). Except for the required sparsity regularization (L125-128), there are no specific functional classes to constrain $\hat{\mathcal{F}}$ during estimation. We have further highlighted it in the updated manuscript. **Q2:** L97: Traditional or linear ICA does not necessarily require m=n. **A2:** Thanks a lot. We have modified it to: “Different from settings where m=n …” **Q3:** L114: Should $\mathcal{S}$ be $\mathcal{A}$? **A3:** Yes, you are totally right. We have corrected the typo in the updated manuscript. **Q4:** L111 vs L536: $\mathcal{T}$ has two differing definitions. **A4:** Thanks for raising this point. We have corrected the denotation of $\mathcal{T}$ in L536 to a set of matrices with the same support as $\mathbf{T}(\mathbf{s})$, and correspondingly, $\mathrm{T} \in \mathcal{T}$ in the following sentence. **Q5:** Thm. 4.1: the linearly independent vectors '$w$' have not been sufficiently motivated. **A5:** Thanks for the suggestion. The linear independence of $w$ requires that the conditional distribution varies sufficiently across different values of $\mathbf{u}$ (L229-230). The motivation of it is similar to the common assumption of variability used in [1, 2, 3]. Even though the assumption of variability is almost surely fulfilled as discussed in [1, 2, 3], our assumption is still strictly weaker from various perspectives as detailed in L229-241. For example, our definition of $w$ is only the first half of Eq. 
8 in [1] and we require much fewer distinct values of $\mathbf{u}$. We have further highlighted it in the updated manuscript. **Q6:** L190: The meaning of "changing" sources. **A6:** Thanks for the question. The "changing" sources ($\mathbf{s}\_D$) mean that the distribution of these sources changes across different values of the auxiliary variable $\mathbf{u}$. For instance, styles of images change across domains while their content stays invariant. **Q7:** L195-196: More clarification on "For sources $\mathbf{s}\_D$, they do not need to be mutually independent as long as they are dependent on the variable $\mathbf{u}$". **A7:** Thank you for your suggestion. It means that sources in $\mathbf{s}\_D$ (i.e., those with indices $\\{n\_{I+1},\cdots,n\\}$) do not need to be (mutually) conditionally independent given the auxiliary variable $\mathbf{u}$; they only need to be influenced by (dependent on) $\mathbf{u}$. The variable $\mathbf{u}$ only provides changes on the distributions of these sources, like a domain label or time index. **Q8:** How does Thm. 4.1’s contribution compare with those from [2, 3]? **A8:** Thanks for raising this. We only require sources in $\mathbf{s}\_D$​ to depend on an auxiliary variable $\mathbf{u}$ with $n\_D​+1$ values, intuitively fitting scenarios where fewer changes (smaller $n\_D$) correspond to easier identifiability (fewer required values, i.e., $n\_D+1$). In contrast, prior works like [2, 3] assume all sources to depend on $\mathbf{u}$ with $nk+1$ values ($k$ denotes the order of the distribution), limiting identifiability to ideal situations without any degree of violations on either the number of sources dependent on $\mathbf{u}$ or the number of values of $\mathbf{u}$. This can restrict practical application, as often only subsets of sources are influenced by auxiliary variables, or those variables lack sufficient variability. Of course, as a trade-off, Thm. 4.1 itself does not provide component-wise identifiability. 
More details can be found in L224-247. **Q9:** L215-217: More explanation on the motivation and some notations ($\mathbf{s}\_d$ and $B\_{\mathbf{s}\_I}$) of condition (i) in Thm. 4.2. **A9:** We appreciate these insightful questions and suggestions. Condition (i) in Thm. 4.2 originates from [4], which intuitively means there exist two values of the auxiliary variable such that their influences on the sources are different. $\mathbf{s}_d$ is a typo and should be $\mathbf{s}\_D$, which denotes sources that depend on $\mathbf{u}$ (sorry for the confusion). $B\_{\mathbf{s}\_I}$ is a subspace of $\mathcal{S}\_I$. In light of your suggestions, we have moved the discussion (L229-L247) to the paragraph directly following Thm. 4.2 and further emphasized these notations. **Q10:** More explanation on Figs. 4 and 5. **A10:** For these images (triangles in Fig. 4 and hand-written digits in Fig. 5), we assume that they are generated by hidden sources (e.g., angle, height, etc.), and try to recover these from their observed mixtures (i.e., images). Each row represents a source we identified, and we vary its value with the rightmost column showing a heatmap of the absolute pixel difference to visualize its influence. We interpret the estimated sources’ potential semantics from their influences, as listed in the captions. We only deal with single classes (e.g., zero in EMNIST), without auxiliary variables (e.g., digit labels). Thus, prior identifiability theories relying on auxiliary variables cannot support these seemingly reasonable results, but our generalized theorems could probably underpin them. --- [1] Hyvarinen et al. "Nonlinear ICA using auxiliary variables and generalized contrastive learning." [2] Khemakhem et al. "Variational autoencoders and nonlinear ica: A unifying framework." [3] Sorrenson et al. "Disentanglement by nonlinear ica with general incompressible-flow networks (gin)." [4] Kong et al. “Partial identifiability of domain adaptation." 
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their clarifications. --- Reply to Comment 1.1.1: Comment: Thanks so much for your further feedback.
Summary: The paper extends identifiability theory of nonlinear ICA (NICA), and deep latent variable models in general, by utilizing structural sparsity. In particular, previous works have shown that NICA can be identified if there is some observed auxiliary data or latent dependencies that essentially capture the inductive biases in the data generative process. The structural sparsity line of work (Zheng, '22) instead takes an alternative approach, namely constraining the nonlinear mixing function and its Jacobian. In this paper the authors extend that work as follows, i.e., assuming structural sparsity and some additional assumptions: 1. the authors show identifiability for a situation in which the mixing function is injective rather than bijective, i.e., the undercomplete case with a smaller latent dimension than observed dimension. This is important for, e.g., learning low-dimensional, semantically meaningful, interpretable latent features 2. the authors show identifiability for a situation in which the structural sparsity principle applies only to some of the independent components 3. further identifiability is shown for a situation where not all the latent components are independent; rather, components form independent subspaces. In fact some components may be dependent, conditionally independent or have some grouping structures. Strengths: First, this paper is in general of good quality in that it is well organized and, in general, clearly written. Main strengths: a.) most significantly, the authors remove several strong limitations of previous works and extend identifiability under structural sparsity to the undercomplete case and the case where not all latent components are structurally sparse (in those situations the remaining sources are shown to be identifiable). As a result these ideas are now more applicable to realistic data and scenarios. b.) These results have been reached, mostly, without too strong additional assumptions. 
For instance, it is shown that the necessary assumptions are more likely to hold in this new undercomplete case, which is encouraging! c.) the authors bridge the gap between structural sparsity and the previous works that assume auxiliary variables. In particular, this work allows unconditionally independent components to follow structural sparsity alongside components which are conditionally independent given auxiliary variables. Whilst arguably to be expected, it is important to show this result (but see below for a potential related weakness) Weaknesses: In general the weakness of this paper is that it provides only a few theoretical advances (albeit important, as mentioned above) but little beyond that. In particular, this is the result of: 1. The contribution of the paper is not as significant as the authors describe, or at least there is limited coverage of relevant works 2. Potential problems in some of the identifiability theorems 3. Novelty is limited to identifiability theorems -- no new algorithms 4. Experiments are lacking I will expand on each of these points below: More detail for 1.): In particular the authors state that "Therefore, we establish, to the best of our knowledge, one of the first general frameworks for uncovering latent variables with appropriate identifiability guarantees in a principled manner". I think this is too vague and general and fails to acknowledge the generality of some other works -- your work can be novel whilst admitting the generality of some other works too. First, Kivva '22 show a very general framework for identifiability by making, arguably, less strong assumptions on the mixing functions -- currently the work of Kivva '22 is only mentioned later on in section 3 (and even there in a problematic manner, as I'll point out below). Due to the generality of the results in Kivva, I would expect their result to be discussed in the introduction / early on in the text, telling the reader why yours is better or at least different. 
For example, Kivva '22 makes a different type of assumption on the mixing function (piecewise affine) etc., while you rely on sparsity. Second, the work of Halva '21 (disentangling identifiable features) provides another very general framework and, unlike what you claim, it is not limited to time series but covers dependencies of arbitrary order, and also does not require conditional independence on some auxiliary variables but rather also assumes unconditional independence. More detail for 2.): In Theorems 4.1 and 4.3 $S_d$ is "identified up to an invertible transformation". Surely if something is identified just up to an invertible (vector-valued) function then we are not doing any better than nonlinear ICA, i.e., we are essentially back where we started and thus have not identified anything. To me this is misleading and not a publishable identifiability result (if I have understood correctly -- please correct me if I'm wrong and I'll adjust my score accordingly). The authors do acknowledge this point but rather than talking about it they make a vague remark that "Thm. 4.1 may be helpful for some tasks that do not necessitate the recovery of each individual source, such as domain adaptation." This does not suffice in my opinion. And similarly about Thm 4.3 they say: "there exists an invertible transformation $h_{c_i}$ which is analogous to the previous element-wise indeterminacy. Consequently, even when dealing with mixtures of high and one-dimensional sources, like in the case of multi-modal data, we can still recover the hidden generating process to some extent." Again I think this is a bit generous and hides the fact that $\mathbf{s}_{c_i}$ is fully unidentifiable in the sense of nonlinear ICA. At least this limitation must be admitted more clearly -- preferably its usefulness would be shown empirically. More detail for 3.): There is a simple regularization term on the Jacobian added -- but this is a heuristic (vs. MLE methods) from previous work. 
Undoubtedly there could be work done towards finding the best way to estimate a model that assumes structural sparsity, but such is not done here. More detail for 4.): An important question is whether structural sparsity is a valid assumption. I think this can indeed be the case for many types of generative processes. But the question then is whether the experiments strengthen that intuition. I feel not. It is not clear to me why, e.g., in the EMNIST experiment there would be structural sparsity. EMNIST is also a very simple data set. I would expect the experiments to show that the learned independent components are useful in practical applications (see the brain signal experiments in Halva '21 for instance or in Khemakhem (iVAE) '20). I'm not saying specifically this type of real data experiment needs to be introduced, but something to further highlight the strength of this method would be helpful. Another example is to evaluate the method more thoroughly on some benchmarks from the disentanglement learning literature. There are also some claims in the paper that would be good to justify experimentally; for instance you claim on lines 311-313 that "This is particularly helpful in the context of self-supervised learning (Von Kügelgen et al., 2021) or transfer learning (Kong et al., 2022), where latent representations are modeled as a changing part and an invariant part." If this is indeed the case, why not show that on data? Indeed, lines 311 to 324 give a nice discussion and it's a great shame this has not been shown experimentally, as it would really take this paper to the next level. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I will use this space for further suggestions and questions: - Could you please introduce structural sparsity a bit more clearly and intuitively -- if one has not read the original Zheng'22 paper then it is difficult to follow. 
- In estimation, please clarify: is it required that we know which groups of latent variables are independent, and which are potentially dependent? How is this exactly established in practice? - please explain in more detail how your algorithm, in practice, allows dimension reduction without assuming observation noise, and how the Jacobian can be computed for a non-bijective transformation - What is the level of nonlinearity in the mixing functions? I don't believe this is mentioned anywhere, e.g. number of layers or similar. - "Since the proposed condition is on the connective structure from sources to observed variables, i.e., the support of the Jacobian matrix of the mixing function, it does not require the mixing function to be of any specific algebraic form." Please make this sentence a bit more precise or explain better what 'specific algebraic form' means. Because structural sparsity does still limit the form of the function -- f can no longer be an arbitrary function. - "Most of these methods require auxiliary variables to be observable, such as class labels and domain indices (Hyvärinen and Morioka, 2016, 2017". H&M 2017 really only requires the previous data, so it's not really a big limitation, and it's arguable whether this really constitutes having auxiliary variables... I would consider moving that reference to the next sentence since it's a time-series model: "with the exceptions being those for time series...[move H&M'17 reference here]" - "The most obvious one arises from the fact that it may fail in a number of situations where the generating processes are heavily disentangled." Please explain in more detail why this may be? - "We first present the result on removing one of the major assumptions in ICA, i.e., the number of observed variables m must be equal to that of hidden sources n." 
This makes it sound like this hasn't been done previously in general, which of course it has, many times, in linear ICA (e.g. eriksson and koivunen '03) and nonlinear ICA (e.g. khemakhem '20, halva '21 etc etc). So rather than saying you remove a major assumption in ICA, make it specific to sparsity - as for the title: are you really "generalizing beyond structural sparsity"? I feel structural sparsity is still the fundamental building block here. I would say you are generalizing structural sparsity in nonlinear ICA. - "This is similar to Independent Subspace Analysis (ISA) (Theis, 2006)" Either explain why you cite Theis, or cite an earlier ISA work (e.g. Hyvarinen & Hoyer, 2000)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors should discuss limitations in more detail: - possible limitations of the structural sparsity assumption should be discussed more, e.g., any scenarios where you expect it to be a poor assumption? - the point discussed in the "Weaknesses" section about the limitations in the identifiability theorems of 4.1 and 4.3 must be addressed and justified much more clearly or else those theorems should be removed - the authors should note the heuristic approach of their estimation algorithm -- does GIN even have universal _function_ approximation capability? - related, discuss limitations of the estimation algorithm in general. Is the algorithm guaranteed to find the correct sparsity, for example? If not, then that should be pointed out as a need for future work. - "However, our setting is more flexible in the sense that we do not assume all sources to be influenced by the auxiliary variable. 
Specifically, sources in $s_I$ are mutually independent as in the original ICA setting, while only sources in $s_D$ have access to the side information from the conditional independence given u,". This is true and a nice result of theorem 4.4, but there is a limitation that should be discussed: you are now making restricting assumptions on _both_ the mixing function and the auxiliary variables -- in some sense this is the worst of both worlds (but still a nice theoretical result with possible practical uses). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks so much for your time and insightful comments. We are glad that you find our results important and the paper of good quality. Please find our point-by-point response below. **Q1:** Limited coverage of some works. **A1:** Thanks for the suggestions. We have now detailed the discussion in the introduction. Specifically, in [1], there are assumptions on both source distribution and mixing functions: (1) the sources are assumed to be a Gaussian mixture, with an unobserved state $\mathbf{u}$ (L158-161); (2) the mixing function is assumed to be piece-wise affine. These allow identifiability of sources up to an affine transformation where mixtures remain, with more assumptions needed (e.g., conditional independence given $\mathbf{u}$) for component-wise identifiability. Thus, we differ by (1) having no distributional assumptions, and (2) allowing general nonlinear functions if the assumption on connective structures is met. We do not claim that our assumption on the mixing functions is better than those in [1]. Structural sparsity allows general nonlinearity with sparse connections, while piece-wise affine functions allow dense structures. In addition, we fully agree that [2] should be further emphasized for its highly significant contributions. We have highlighted its generalization to arbitrary dependency order (e.g., spatial dependencies) without assuming conditional independence. Meanwhile, additional information (e.g., time or spatial index) may not always be available, which is one of the motivations of our work. **Q2:** Implication of Thms. 4.1 and 4.3. **A2:** Thanks for the question. We have added more descriptions to clarify this. In Thm. 4.1, the identifiability of $\mathbf{s}\_D$ up to an invertible transformation means that $\mathbf{s}\_D$ will not be mixed with sources in $\mathbf{s}\_I$ after estimation. In practice, it implies that the **subspace** of the changing part (the distribution of $\mathbf{s}\_D$ changes w.r.t. 
$\mathbf{u}$) can be disentangled from the mixture of both changing and invariant parts ($\mathbf{s}$), which aids tasks like domain adaptation, e.g., the changing style (as a whole) can be disentangled from different images with invariant content. Similarly, in Thm. 4.3, the subspace-wise identifiability means sources in $\mathbf{s}\_{c\_i}$ will not be mixed with sources outside $\mathbf{s}\_{c\_i}$, which disentangles $\mathbf{s}\_{c\_i}$ as an individual high-dimensional component. **Q3:** Novelty appears limited to theorems, and whether sparsity penalty and GIN are just heuristics. **A3:** According to our theory, the additional sparsity penalty on the MLE objective during estimation (L339-341), together with assumptions on the data-generating process, is needed to guarantee the correct identification (L125-128). Moreover, according to [3], coupling-based flows (e.g., GIN) are universal diffeomorphism approximators. The volume-preserving nature of GIN does not hinder it from validating our theorems, as rescaling is one of the allowed indeterminacies after identification. Of course, there exists some approximation of the $\ell_0$ norm for gradient-based optimization (the MCP penalty), and more work could be done for further improvement. **Q4:** Experiments on more tasks. **A4:** Indeed, there are various tasks that benefit from our theory, and more applications would be intriguing. Meanwhile, we have emphasized in the introduction (L88-89) and experiments (L329-334) with additional citations (>9) that prior research shows latent variable models are likely identifiable in complex scenarios, possibly involving undercompleteness and violations of sparsity and independence. Our theory may interpret these empirical results, and our ablation studies and experiments on both the synthetic and real-world datasets provide further validations, complementing previous works. **Q5:** Discuss structural sparsity and failure cases more. **A5:** Thanks, we have added more discussion on it. 
For failure, one may consider recording in a very crowded room, where every microphone records the mixture of signals from most sources at the same time. **Q6:** Is it necessary to know which latent variables are independent/dependent? **A6:** No. In practice, each data point can be assigned a class corresponding to the value of the auxiliary variable of dependent variables (L335-337). These labels do not provide extra information for independent variables since they do not need auxiliary variables. **Q7:** How to reduce dimensions and compute the Jacobian for non-bijective transformations? **A7:** As in prior works (e.g., [4]) and noted in L346-347, we concatenate latent sources with independent Gaussian noise to meet the dimensionality requirements of flow-based estimation and to enable Jacobian computation. **Q8:** Level of nonlinearity, e.g., number of layers. **A8:** 10 layers (L881). **Q9:** What does 'specific algebraic form' mean? **A9:** Great question. We meant to refer to "specific" function classes like conformal mappings mentioned above but have clarified this by changing the term to 'the above-mentioned classes' to avoid confusion. **Q10:** Explain: "The most … are heavily disentangled." **A10:** Apologies for the typo. It should be "heavily **entangled**", like a crowded room where sources heavily influence each other, resulting in a dense Jacobian. **Q11:** More suggested updates: - (1) relocate a reference; - (2) specify undercompleteness claim to sparsity; - (3) a new title; - (4) cite an earlier ISA work; - (5) further highlight a limitation of Thm. 4.4. **A11:** We are grateful for all the constructive suggestions. All points have been updated accordingly. --- [1] Kivva et al. "Identifiability …" [2] Hälvä et al. "Disentangling …" [3] Teshima et al. "Coupling-based invertible neural networks are universal diffeomorphism approximators." [4] Sorrenson et al. "Disentanglement by nonlinear ICA with general incompressible-flow networks (gin)." 
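The sparsity penalty discussed in A3, where the $\ell_0$ norm is approximated by the MCP (minimax concave penalty) for gradient-based optimization, can be sketched as follows. This is a minimal illustration rather than the paper's implementation, and the parameter values `lam` and `gamma` are hypothetical:

```python
import numpy as np

def mcp_penalty(J, lam=0.1, gamma=2.0):
    """Minimax concave penalty (MCP): a differentiable elementwise
    surrogate for the l0 norm applied to Jacobian entries J.
    Small entries are penalized roughly like lam * |t|, while the
    penalty saturates at gamma * lam**2 / 2 for large entries,
    mimicking the flat behavior of the l0 norm."""
    a = np.abs(J)
    inner = lam * a - a**2 / (2.0 * gamma)   # region |t| <= gamma * lam
    outer = 0.5 * gamma * lam**2             # constant beyond the knee
    return float(np.where(a <= gamma * lam, inner, outer).sum())
```

In the estimation objective described in the rebuttal, a term of this kind would be added to the negative log-likelihood of the flow model, so that large Jacobian entries incur a bounded cost while small entries are encouraged toward zero.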
--- Rebuttal Comment 1.1: Comment: Thank you once more for your time and suggestions. As the discussion window is closing, might we kindly ask if our clarification has resolved the potential confusion, especially in **Q2 & A2**? Your further feedback is deeply appreciated. --- Reply to Comment 1.1.1: Comment: Sorry for the repeated reminders. As the discussion will end in **48 hours**, would you mind kindly checking if you have any further questions? For instance, we have clarified the **implication of Thms. 4.1 and 4.3**, and you have mentioned in the second point of weakness that > "if I have understood correctly -- please correct me if I'm wrong and I'll adjust my score accordingly" As all reviewers noted, our results are timely and important to the community. Thus, we would like to try our best to address any potential confusion given the opportunity for discussion.
Summary: This paper extends a recent result from Zheng et al 2022, which introduces an assumption they call “structural sparsity” to induce identifiability in nonlinear ICA without relying on a common (but arguably unrealistic) assumption that the sources are conditionally independent given observed auxiliary information. Whereas Zheng et al 2022 gave identifiability results only in the setting where the structural sparsity assumption holds perfectly and there are an equal number of sources and observed variables, this paper relaxes these assumptions in several interesting ways and gives identifiability or partial identifiability results in these more general settings. The first theoretical contribution shows identifiability under structural sparsity in the undercomplete setting, where there are more observed variables than sources. This lets them relax the usual assumption that the mixing function must be bijective, and instead only requires that the mixing function be injective. The second theoretical contribution relaxes the structural sparsity assumption to the setting where you have partial structural sparsity (it holds for a subset of sources) or partial independence of sources and shows partial identifiability under these assumptions. Here the partial dependence of sources does not need to be known. The third theoretical contribution assumes that the dependence between sources is known, and the fourth theoretical contribution assumes the sources with dependencies are conditionally independent given auxiliary variables (which is distinct from existing work because they don’t assume all sources are influenced by the auxiliary variable, just the dependent sources). They use an estimation method using a sparsity regularizer (that encourages a sparse estimated mixing function) with a flow-based generative model. 
They perform experiments on two simple visual datasets (Triangles and EMNIST) and perform ablations where they generate data that satisfy two combinations of assumptions for their theory, compared to a base setting that does not satisfy their assumptions. Following existing work, they use MCC as their metric and their models achieve higher MCC when the assumptions are satisfied. Strengths: - Overall, this is a very interesting paper and makes novel contributions in what I think is an interesting setting: using sparsity to induce identifiability in nonlinear ICA. - They clearly motivate why relaxing each assumption makes the assumptions more realistic. - I agree that the “conditional independence given auxiliary information” assumption that is common in the literature is not a great assumption, and I’m happy to see recent work removing or reducing this assumption. - They don’t require distributional assumptions. - It is well-written and well-structured overall. It is very clear what the prior work accomplishes and what the contributions are. Weaknesses: - There could be more experiments in realistic settings. (However, given the strength of the theoretical contributions in this paper, I think the paper should be accepted as is.) Minor comments on the writing (did not affect score): - Line 84-85: You say “part of the sources can be grouped into irreducible independent subgroups…”, but “irreducible subgroup” is a term in algebra with a specific meaning. You could avoid this “collision” by saying “irreducible independent subgroupings” or something similar. - Line 156: You start a sentence with the word “Differently, …” which sounds strange. You could say “In contrast, …” instead. 
- Line 167: “While this removes the restriction of bijectivity between sources and observed variables, it remains uncertain as to whether Structural Sparsity holds in general, particularly for all sources in a universal way.” - This sentence is confusing - you are saying it is uncertain whether Structural Sparsity holds in general, but Structural Sparsity is one of your assumptions. Are you saying it is uncertain whether Structural Sparsity is a reasonable assumption, based on whether it is likely to be satisfied on real-world data? - Line 177: It’s also weird to start this sentence with “Differently”. - Multiple lines: You start a handful of sentences with “Besides, …” and each time that is not really the word you mean. You should rethink how each of these sentences connects to the previous sentences and find the appropriate word for each case. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Are you aware of Lachapelle et al 2022b? See https://arxiv.org/pdf/2207.07732.pdf. Lachapelle et al 2022a uses mechanism sparsity to induce permutation identifiability, but Lachapelle et al 2022b extends this approach to the partial identifiability setting. It would be (1) worth mentioning in the Introduction section that Lachapelle et al 2022a introduced the idea to use sparsity to induce identifiability, which inspired the approach of Zheng et al. 2022 (as stated in the text of Zheng et al 2022, see Section 3.1 of that paper), and (2) to cite Lachapelle et al 2022b as prior work using sparsity for partial identifiability (though in a distinct setting from your results as it relies on conditional independence given observed auxiliary variables). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: - Their experiments are only on visual disentanglement tasks and there are many other interesting disentanglement or other tasks that would be interesting to see in future work. - No concerns about negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
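As background on the evaluation metric mentioned in this review: MCC (mean correlation coefficient) is the standard nonlinear-ICA metric that measures correlations between true and estimated sources under the best matching permutation. A minimal sketch, assuming a brute-force search over permutations (feasible for small source counts):

```python
import itertools
import numpy as np

def mcc(s_true, s_est):
    """Mean correlation coefficient between true and estimated sources.
    Both arrays have shape (n_samples, n_sources). Absolute Pearson
    correlations are used since sign flips are allowed indeterminacies."""
    n = s_true.shape[1]
    # cross-correlation block between true and estimated sources
    corr = np.abs(np.corrcoef(s_true.T, s_est.T)[:n, n:])
    # brute force over permutations; for larger n one would use the
    # Hungarian algorithm (e.g. scipy.optimize.linear_sum_assignment)
    return max(np.mean([corr[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))
```

A permuted, sign-flipped copy of the sources scores 1.0 under this metric, while unrelated signals score near 0, which is why higher MCC under satisfied assumptions supports the theory.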
Rebuttal 1: Rebuttal: We are very grateful for your detailed reading and insightful suggestions. Your comments on the quality of our paper and the strength of our contributions mean a lot to us. Please kindly find our point-by-point response below. **Q1:** There are many other interesting real-world tasks that would be interesting to see in future work. **A1:** Thanks a lot for your great suggestions. Indeed, there are various tasks that could benefit from our theory, and, as you suggested, more applications would be intriguing. We have shown some positive results on the visual tasks, and perhaps natural language is also a promising field. At the same time, various empirical studies have shown that latent variables are likely identifiable in complex scenarios, possibly involving undercompleteness and violations of sparsity and independence (L88-89, L329-334). Complementing previous works, our theory may interpret these empirical results, and our ablation studies and experiments on both the synthetic and real-world datasets provide further validations. Of course, validating our theory in even more tasks is exciting and will be pursued in future work. In addition, we have also included new experimental results in the PDF attached to the global response. These results demonstrate that the quality of identification can be enhanced by increasing the sample size, further validating our theorems. **Q2:** L84-85: “Irreducible subgroup” is a term in algebra with a specific meaning. You could avoid this “collision” by saying “irreducible independent subgroupings” or something similar. **A2:** Thank you so much for the constructive suggestion. We have replaced “subgroup” with “subgrouping” in the updated manuscript. **Q3:** L156: “Differently, …” could be replaced with “In contrast, …” **A3:** Thanks. We have updated it accordingly. 
**Q4:** L167: “While this removes the restriction of bijectivity between sources and observed variables, it remains uncertain as to whether Structural Sparsity holds in general, particularly for all sources in a universal way.” This sentence is confusing. **A4:** Thanks for the insightful question. In this sentence, we are trying to motivate the importance of dealing with potential partial violations of Structural Sparsity among a subset of sources. In light of your suggestion, we have revised it to “... it remains uncertain as to whether Structural Sparsity always holds for all sources in a universal way”. We hope it could help to avoid potential confusion. **Q5:** L177: It is weird to start this sentence with “Differently”. **A5:** Yes, we fully agree with you. We have removed this in the updated manuscript. **Q6:** A handful of sentences are started with “Besides, ..”, which could be replaced with more appropriate words. **A6:** We are very grateful for the great suggestion. We have carefully modified the related connecting words in the updated manuscript as follows: - **L37:** Replaced “Besides” with “Moreover”. - **L64:** Replaced “Besides” with “In addition to”. - **L100:** Replaced “Besides” with “Furthermore”. - **L110:** Replaced “Besides” with “Additionally”. - **L212:** Replaced “Besides” with “Furthermore”. - **L278:** Replaced “besides” with “in addition to”. We hope these modifications could improve the transition between related sentences. Thanks again! **Q7:** Discuss [1] in the introduction and cite [2] as prior work. **A7:** Thanks so much for sharing these excellent works. In the updated manuscript, we have highlighted in the introduction that [1] introduced the idea of proving identifiability with sparsity, which then inspired [3]. Moreover, we have emphasized in both the introduction and theory that [2] also uses sparsity for partial identifiability. --- [1] Lachapelle et al. 
"Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA." [2] Lachapelle, Sébastien, and Simon Lacoste-Julien. "Partial disentanglement via mechanism sparsity." [3] Zheng et al. "On the identifiability of nonlinear ICA: sparsity and beyond." --- Rebuttal Comment 1.1: Title: Reviewer response to rebuttal Comment: Thanks to the authors for the thorough response to my comments and for including the additional plot in the one-page pdf. You've addressed all the questions and suggestions for improvements in my review. I will keep my score of 8. --- Reply to Comment 1.1.1: Comment: Thanks so much for your effort! We are very grateful for your encouragement.
Rebuttal 1: Rebuttal: We extend our sincere thanks to each of the reviewers for their thoughtful insights and the time devoted to reviewing our manuscript. We are encouraged that all of the reviewers have found our paper of good quality in various ways. With appreciation, we have provided detailed, point-by-point responses to each reviewer's comments in the individual replies. In this global response, we take the opportunity to present additional experimental results, which have been summarized in the attached PDF. Specifically, the quality of identification improves as the sample size increases. Pdf: /pdf/77deb388a81995cfb13cad178502c703e4cf73d3.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces more flexible ways to perform nonlinear independent component analysis (nonlinear ICA). Nonlinear ICA involves identifying the sources s from the observed x when both s and x are related by x = f(s) and f is a nonlinear function. Previous work has developed a method for this problem under a strict structural sparsity assumption that the s's and x's are one-to-one and onto, and all the s's are independent of each other. Current work provides theorems that relax the assumption in various ways including: (1) undercompleteness--there can be more observed variables x than sources s; (2) partial sparsity--only a subset of all the s's may map to x's; (3) source dependence--not all sources s have to be statistically independent of each other; and (4) flexible grouping structures--the possibility that some of the sources can be partitioned into independent subgroups of sources. There are experiments on synthetic and real world datasets that show the effectiveness of their approach. Strengths: This paper is original because it introduces novel approaches, as far as I know, that extend the situations where nonlinear ICA can be applied. The paper exhibits good quality in various ways. First, there are various theoretical results included in the paper that extend the cases where nonlinear ICA can be applied. Second, there are also results in several experimental settings that back up the theory. The paper is mostly clear in its explanations. In terms of significance, extending the situations where nonlinear ICA can be applied is an accomplishment. Weaknesses: While in theory extending the cases where nonlinear ICA can be applied is a strength, because there wasn't any empirical qualitative evaluation of how this approach compares to other approaches in disentangling the sources, it is not clear how significant this work is. 
It does not have to be a comparison of how well it disentangles sources; it could be comparing them on some other application, such as how well they extract features that are useful for classification, for example. It does not even have to be comparing this paper's approach to previous approaches; it could be comparing the different extensions of nonlinear ICA presented in this paper. Also, as pointed out by the authors, another limitation of this work is that the experimental results were only on visual datasets but not on other modalities. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: While the undercompleteness result appears to me to be unique to this paper, (Zheng et al. 2022) also has an undercompleteness result. This paper is written so that it sounds like (Zheng et al. 2022) has no undercompleteness result. It would be nice if this situation could be explained or clarified. It was a bit confusing that on line 113, A is defined as a set of natural number tuples but on line 103 A is defined as a subset of natural numbers. I think on line 114 that A_{:,j} := { i | (i,j) \in S } should really be A_{:,j} := { i | (i,j) \in A }. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for carefully reading our manuscript and providing insightful suggestions. These have undoubtedly further improved the manuscript. Please kindly find our detailed, point-by-point response below. **Q1:** Suggestions on expanding empirical comparisons. **A1:** We are very grateful for these constructive suggestions. It is worth noting that, instead of proposing a better model to disentangle the sources, we focus on providing a theoretical guarantee for uncovering generating processes under certain conditions. Our result could provide one of the missing interpretations for many previous empirical studies showing that latent variable models are likely identifiable in complex scenarios, as mentioned in L88-89 and L329-344. The various extensions proposed for nonlinear ICA focus on the assumptions regarding the ground-truth data-generating process, rather than the estimation methods. Therefore, they can only be rigorously validated through ablation studies conducted on different data-generating processes. Complementing various previous empirical studies, we believe that our ablation studies and experiments on both the synthetic and real-world datasets provide further validations. In addition, we have also included new experimental results in the PDF attached to the global response. These results demonstrate that the quality of identification can be enhanced by increasing the sample size, further validating our theorems. **Q2:** Clarify the theorem proposed in [1] about the undercomplete case. **A2:** We appreciate the great suggestion. As mentioned in L159-160, [1] only removes the rotational indeterminacy while our theorem removes all major indeterminacies and only preserves the component-wise transformation and permutation. 
In other words, [1] only gets rid of specific spurious solutions due to the rotational indeterminacy (e.g., the ‘rotated-Gaussian’ MPA) while we prove the full identifiability of the undercomplete case. We have added additional detailed discussion earlier in the introduction to avoid potential confusion. **Q3:** Notations: - **(1):** It was a bit confusing that on L113, $\mathcal{A}$ is defined as a set of natural number tuples but on L103 $\mathcal{A}$ is defined as a subset of natural numbers; - **(2):** there is a typo on L114, i.e., $\mathcal{S}$ should be $\mathcal{A}$. **A3:** Thank you so much for reminding us, and sorry for any potential confusion. We have modified L113-114 as: “For any set of indices $\mathcal{B} \subset \\{1, \ldots, m\\} \times \\{1, \ldots, n\\}$, analogously, we have $\mathcal{B}\_{i,:}\coloneqq\\{j \mid(i, j) \in \mathcal{B}\\}$ and $\mathcal{B}_{:,j}\coloneqq\\{i \mid(i, j) \in \mathcal{B}\\}$." This modification also corrects the typo. Thanks again! --- [1] Zheng et al. "On the identifiability of nonlinear ICA: sparsity and beyond." --- Rebuttal Comment 1.1: Comment: I have read your response. Thank you for preparing it. It has clarified the meaning of certain passages in the paper. Maybe it would be an even better paper if the theory could tell you whether to use a certain identifiability approach given a particular set of empirical data, rather than having to perform ablation studies, but the current paper as it is does break new ground. --- Reply to Comment 1.1.1: Comment: Thanks a lot for your encouragement. We are very grateful for all of your insightful suggestions.
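As a purely illustrative aside (not the paper's code), the revised index-set notation in A3 can be sketched in a few lines of Python: for $\mathcal{B} \subset \{1,\ldots,m\} \times \{1,\ldots,n\}$, $\mathcal{B}_{i,:}$ collects the column indices paired with row $i$, and $\mathcal{B}_{:,j}$ the row indices paired with column $j$.

```python
# Toy sketch of the index-set notation from A3 (illustrative only):
# for B ⊂ {1,...,m} × {1,...,n},
#   B_{i,:} = {j | (i, j) ∈ B}  and  B_{:,j} = {i | (i, j) ∈ B}.
B = {(1, 2), (1, 3), (2, 3)}

def row_slice(B, i):
    """B_{i,:}: column indices j such that (i, j) is in B."""
    return {j for (a, j) in B if a == i}

def col_slice(B, j):
    """B_{:,j}: row indices i such that (i, j) is in B."""
    return {i for (i, b) in B if b == j}

print(row_slice(B, 1))  # {2, 3}
print(col_slice(B, 3))  # {1, 2}
```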
Theoretical and Practical Perspectives on what Influence Functions Do
Accept (spotlight)
Summary: This paper reexamines the assumptions used in the deduction of influence function (IF) methods in order to explain the failure of IF in predicting leave-some-out-retrain performance. The authors find that all five assumptions used in the previous deduction will be violated to different degrees in practice and propose a combination of HIF and Arnoldi-based methods to overcome these violations. They find that four of the assumptions can actually be bypassed or fixed; however, the remaining challenge named *parameter divergence* seems inherent. They show that the predictive ability of IF will gradually fade over training steps due to this phenomenon and show that, accounting for this effect, it is better to interpret IFs as proxies for the effect of a few fine-tuning steps. Strengths: * **Originality.** This paper shows an in-depth analysis of the assumptions on which IFs are based. Their analysis combines theoretical thinking well with experimental observations. * **Clarity.** The results are presented in a clean way and are a pleasure to read. * **Significance.** Although the reviewer is not an expert in the field of influence functions, the question this paper targets seems important and the paper sheds light on the real obstacles to solving this problem, as well as proposes possible ways to solve it. Weaknesses: The reviewer thinks the paper may improve in the following aspects. * The authors argue that (1) using Arnoldi-based methods helps boost the accuracy of HIF, and (2) accounting for training trajectories can improve the estimation for IFs. The theoretical deduction of these propositions is very reasonable but it would be better to include ablation experiments or cite relevant literature to showcase the phenomena. * The authors observe that IF methods perform poorly on ResNet but do not have any explanation for this phenomenon. 
It would be beneficial if the authors can investigate the reason behind the difference of IF methods performance on NLP and CV tasks. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The reviewer is interested in the following questions. * Why does the correlation between IFs and performances seem to increase before decreasing? * How would the authors suggest modifying the current IF estimation methods based on their observations? * What is the author's explanation of the rapidly increasing phase 1 in figure 1? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations and the reviewer has not noticed any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses bullet point #1**: For (1) we can point the reader to two relevant references. First, Schioppa et al., who compare ABIF with the exact Hessian on retrieving mislabeled examples (Fig. 2). Second, Fisher et al. (“Influence Diagnostics under self-concordance”; Fig. 3 and the discussion in Sec 5.2), where they find that ABIF is the best solver on the QA task and is comparable to the other solvers on the text-completion task. (2) is a natural research direction; at the moment Theorem 2 gives an exact formula but making it practical requires further ideas. We will include an ablation showing the extra error introduced by TracIn vs the exact formula in Theorem 2 in tracking parameter changes, if the paper gets accepted. **Weaknesses bullet point #2**: We focused on the limitations of the methods, in particular the fading of predictive power. We conjecture that the low correlation is due to the architecture. Indeed, using a ViT for the CV task leads to a peak correlation similar to that of the NLP task. We will include the ViT result in the final version, if accepted. We leave an explanation of the phenomenon, possibly because of the different loss landscape geometry between the ResNet and ViT, as a question for future work. **Question regarding correlation increasing and then decreasing**: When changing the loss to the perturbed one there is a phase of adjustment in the network. Using a dynamical analogy, the momentum the system had built on the old loss needs to change for the new loss, and this introduces a lag to reach the peak performance. **Question regarding how authors would suggest modifying IF estimation**: To improve estimation quality: start from formula (6), which traces the dynamics exactly; however (6) is not practical, so techniques need to be developed to make it efficient. 
To manage expectations and evaluate methods: proceed as in the fading-of-influence experiments, identifying the time window in which correlation is good enough. **Question about the rapidly increasing phase in Figure 1**: This is honestly a hard question. Given the network's complexity, we think that the $A$ in the (theoretical) upper bound in 4.4 is quite large; for the $A$ that comes (empirically) in the lower bound we think that there are two regimes: 1) an early regime where the network tries to adapt quickly to the perturbed loss; 2) an asymptotic regime in which a lower value of $A$ is needed to adjust to the perturbed loss. As BERT (a) uses the Adam optimizer, the phase in that case also seems very rapid because of the quickness of the optimizer. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Extended results on ViT will definitely improve the paper. The reviewer has read the response and decided to keep the score.
Summary: Influence functions allow one to measure the effect of up-weighting a training sample on the test loss (or functions of the model parameters) of a test example. This paper studies several of the assumptions needed for Hessian-based influence functions to produce valid leave-one-out estimates of the effect on a test sample's loss. These assumptions include: convexity, stability of the Hessian, and additivity of the training trajectory. The paper shows that some of these assumptions are not as problematic, in practice, as previously assumed. They show that several of these assumptions can indeed be satisfied with changes to the formulation; for example, if the Hessian becomes degenerate, one can approximate the Hessian via Arnoldi iteration. It turns out that the most problematic assumption results from a divergence between the initial set of parameters, and another obtained by re-training or even fine-tuning for longer time steps. The paper then bounds this parameter divergence using a discrete version of Grönwall's lemma, and gives a set of takeaways for how to evaluate influence functions: the takeaway from this work is that influence scores only predict the effect of a training sample over a small number of training steps. In the final portion of the paper, they evaluate how to use influence functions to correct errors. Strengths: Overall, I really enjoyed reading this paper, and found the breakdown of the assumptions really comprehensive. **Quality/Clarity**\ I found the paper to be of very high quality. Each assumption is stated clearly and cleanly discussed. For example, I found the resolution of assumption #1 to be interesting. Here the authors show that for Hessian-based influence functions one does not need to assume strict convexity but we can relax it to assume that the final gradient steps do not change the Hessian by too much. While this might seem like a simple change, it helps provide clarity to this literature. 
The bigger takeaway from this work is that influence scores only predict the effect of a training sample over a small number of training steps. This finding is important since it requires rethinking how these approaches are currently used. Overall, this paper sets out key assumptions of influence functions, discusses how to remedy violations of these assumptions, and presents experiments to corroborate the theory presented. **Significance**\ One important takeaway is that, for deep models, it now makes sense that influence scores obtained via retraining have poor correlation with leave-one-out retraining scores. Secondly, another important takeaway is that one can correct errors simply by reweighting or fine-tuning on opponents or proponents. The findings from the paper should be important for practitioners and others using influence functions as part of their debugging toolbox. **Originality**\ This paper mostly considers a question that was also studied by Bae et al. (NeurIPS 2022), but does so in a different way and using mostly new techniques. I found the use of the discrete version of Grönwall's lemma to be quite nice. I am not sure if this is new, but taken together, this work advances the understanding of influence functions quite substantially. Weaknesses: The main weaknesses I have with this work are minor and listed below: **Relation with Bae et al. from NeurIPS 2022**: The main related work to this one is this paper. However, this work does not do enough to sufficiently contrast its analysis with theirs. I am mostly familiar with this previous work, and understand that they address exactly the same question as well, but arrive at a different conclusion. Their finding is that Hessian-based influence functions approximate what they term the "proximal Bregman response function", and that the ability of this quantity to match a sample LOO estimate is mostly affected by 1) the linearization error (i.e., due to the Taylor expansion), and 2) the Hessian approximation. 
This work seems to not agree with theirs, i.e., you suggest that the Hessian approximation be handled with Arnoldi, and that the key discrepancy comes from parameter divergence. It would be great if the authors could spend more time in the draft to contrast this work against that one. I think this is an important issue to address in the paper. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Some clarification questions: - For error correction, if I have k examples that I want to correct, then am I correcting each example independently or taking gradient steps on all k examples at the same time? If it is independent, then isn't this too onerous? - One use of these influence scores is finding mislabelled training samples; how does assumption 5 affect that? When in training should the self-influence score be a useful metric for mislabeled label correction? - This is a minor question: going from $\theta_\epsilon \approx \theta_0 + \epsilon^\top\nabla \theta_\epsilon$ to equation 1, where did the second $\nabla_\theta$ come from? Is this the chain rule? I am familiar with other ways to derive influence functions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors discuss limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comparison to Bae et al.**: Thank you for the suggestion, we will expand the discussion of Bae et al. Some comparison points: * Bae: only Hessian-based influence functions; Us: we discuss Hessian- and Gradient-based influence functions, covering, e.g., also TracIn. * Bae: the convexity assumption is crucial (they add a regularization term); Us: we can drop convexity. Note that dropping the convexity assumption implies one needs to use another solver (Conjugate Residual instead of Conjugate Gradient). Under the convexity assumption there is agreement with their findings in the case of Hessian-based influence functions. However, advocates of the TracIn method might argue that they trace the training dynamics correctly while Hessian-based IF does not, and so their method would not be affected by the linearization error. However, we show that parameter divergence puts a limitation on TracIn too. **Question about $k$ examples for error correction**: We looked at $k=1$ as it matches the case of leave-one-out evaluation. But the method can also be applied to $k$ examples by retrieving proponents of each test example in order to build $B$. In Figure 8 (Appendix) the median steps for correction are not so big, so for $k$ not too large doing one example at a time might not be that onerous. That being said, we agree that a proper understanding of the utility and limitations of IF for error correction deserves a deeper analysis. Given that the current submission focuses on the theoretical aspects, we leave such an analysis for future work. **Question regarding usage of self-influence scores**: To find outliers with self-influence, assumption 5 might not be so limiting. What we have seen in practice is that outliers tend to have big gradients, so they get high self-influence scores. If the gradients stay big during training because of mislabeling, then it does not really matter at which checkpoint they get measured. 
Assumption 5 affects the interaction between a test point and its proponents and opponents, which is a finer quantity to measure. **Question regarding the second $\nabla$**: One $\nabla$ is with respect to the parameters $\theta$; the second $\nabla$ is with respect to the variation parameter $\varepsilon$; to get influence we need to map the loss to gradient space (hence $\nabla_\theta$) and then take the derivative with respect to $\varepsilon$ ($\nabla_\varepsilon$), which we denoted as $\nabla^2_{\varepsilon,\theta}$. We will clarify the notation in the final version if the paper is accepted. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks to the authors for responding to my questions. The response here w.r.t. Bae et al. is important, and belongs in the paper. To me that is the most relevant related work to this one. I maintain my rating.
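The two derivatives discussed in this reply can be illustrated numerically. The following is a hedged sketch on a toy quadratic loss of our own choosing (not the paper's code): for $L(\theta,\varepsilon)=\|\theta-a\|^2+\varepsilon\|\theta-b\|^2$, differentiating the parameter gradient $\nabla_\theta L$ with respect to $\varepsilon$ recovers the mixed term $\nabla^2_{\varepsilon,\theta} L = 2(\theta-b)$.

```python
import numpy as np

# Toy quadratic loss L(theta, eps) = ||theta - a||^2 + eps * ||theta - b||^2.
# All names (a, b, theta) are illustrative placeholders.
a = np.array([1.0, 0.0])
b = np.array([0.5, 0.5])
theta = np.array([1.0, -2.0])

def grad_theta(theta, eps):
    """Parameter gradient nabla_theta L at (theta, eps)."""
    return 2 * (theta - a) + eps * 2 * (theta - b)

# Finite-difference derivative of the parameter gradient w.r.t. eps:
# this is the mixed derivative nabla^2_{eps,theta} L = 2 * (theta - b).
h = 1e-6
mixed = (grad_theta(theta, h) - grad_theta(theta, 0.0)) / h
```

Since the toy loss is linear in $\varepsilon$, the finite difference here is exact up to floating-point rounding.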
Summary: This paper studies the limitations of Influence Functions (IF) based on assumptions about convexity, numerical stability, training trajectory and parameter divergence. The authors discuss each of those assumptions in detail and propose ideas to address the shortcomings of those limitations. The authors describe solutions for 4 assumptions made in the paper and discuss the limitations of the fifth assumption, parameter divergence, which is harder to address. Apart from that, the authors also illustrate the theory of parameter divergence on the example of NLP and vision classifiers. They confirm the theoretical findings of fading influence over time when retraining the models on vision and text classifiers. The authors also show how to perform error correction based on proponent-correction and opponent-tuning using a few fine-tuning steps. They show that their approach outperforms baseline re-training procedures. Strengths: + The paper is well written; the introduction, problem statement and the assumptions are especially well framed. + The related work is well cited and, based on the studies in the recent papers, this paper proposes to address the limitations highlighted in the literature such as the LSOR problem for influential examples. + The paper studies 5 important assumptions made by IF and proposes solutions that help to alleviate the limitations posed by those assumptions. It also discusses and proposes a solution for the temporal dependency which is not accounted for in the original TracIn paper. Weaknesses: + The main contributions of the paper seem to be around studying 5 different assumptions and proposing solutions for them. The main contribution is for assumptions 3 - 5, where the authors show that IF can predict $\theta_{\epsilon, t}$ only for a limited period of time and propose incorporating a temporal term into TracIn's formulation. 
These are important findings; however, it is not very clear whether this is a large enough contribution and whether it is completely novel, since similar limitations of TracIn were also discussed in Guu et al. + It looks like all the proofs corresponding to theorems mentioned in the main paper are in the appendix. Since it is not required for reviewers to read appendices, it might be good to consider moving some of the important proofs or parts of them into the main paper. + Section 4.2 might be difficult to follow for someone if they do not already know about Arnoldi. There is a brief description that Arnoldi approximates P1 using subspaces spanned by eigenvectors corresponding to the top-k eigenvalues, but I think the overall intuition of Arnoldi is not clear if someone is not familiar with that work. + Overall I think that Section 4.2 might be a bit difficult to follow (e.g. how is Theorem 1 applied to restricted variation). + It feels that it is somewhat difficult to follow how we arrive from Eq 7 to Eq 8. Perhaps adding more explanation in Section 4.4 would be good. **Minor comments** + The definition of a $C^k$ function on line 179 is a bit unclear. C gets used again in line 234 ($C_\epsilon$), which makes it a bit confusing in terms of the readability of the paper. + Figure 3: X-axis is not annotated. Technical Quality: 3 good Clarity: 3 good Questions for Authors: + Section 6: Proponents-correction: when the set B is identified and relabeled, is it done for truly mislabeled examples? How were those mislabeled examples sourced? For experiments with SST and ResNet, how were the sets B curated? + Section 6: Which formulation of IF (TracIn or Influence Functions) was used in the experiments? + Section 5.2: Is $R(t)$ the Pearson correlation between losses before and after re-training averaged for the same batches? Does the `Step` indicate a different checkpoint in Figure 2? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss several limitations of their work and propose ideas to address those limitations in the future. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comparison to Simfluence**: A key difference is that, while Guu et al. (Simfluence) discuss the limitation of additivity in TracIn from a modeling perspective, we point at the limitation from its theoretical derivation. Our *first step* is showing that the original derivation is incomplete: starting from the same modeling assumptions of TracIn we derive the correct result that takes the time order into account. This is in contrast with Guu et al., who do not challenge the original derivation of TracIn (e.g. the role of the time order) but propose to extend TracIn with multiplicative terms. The *second step* is then showing that even if one were to use the corrected formula, one would have to face the issue of parameter divergence – something that Guu et al. do not discuss. We show that this issue is common to all the methods that are formulated with a first-order expansion in $\varepsilon$, e.g. also the Influence Functions of Cook and Weisberg or the more recent ABIF. In sum, while it is true that both Guu et al. and we point out the additivity limitation, this is probably the only similarity: everything else, including the motivation, the theoretical framing and the experiments, is very different. **Question regarding Proponents correction and mislabeled examples**: We use comparatively clean datasets, so the proponents are not mislabeled. However, our primary goal in the correction experiments is to verify that intervening on influential examples results in a faster correction than on other classes of examples. Also, please note that this simplification applies to proponents (whose likely correct label is changed) but not to opponents (which are not changed). Finally, we also did an experiment with the notoriously noisy Wikipedia Toxicity Subtypes dataset, where we indeed identified a mislabeled proponent for every test error. We will include these results in the appendix. 
**Question regarding IF method used in Section 6**: ABIF (Arnoldi-based influence functions) for scalability reasons. So it is basically a scalable and numerically stable version of Influence Functions. **Question regarding Section 5.2 and $R(t)$**: For each pair $(i, j)$ ($i$ in test and $j$ in train) we do re-training to compute the loss-shifts at each time step (i.e. checkpoint, index $t$), obtaining a tensor $A_{i,j,t}$ for each $i$, $j$, and $t$. Influence functions at the starting checkpoint give a score $s_{i,j}$; so we compute the Pearson correlation on the $(i,j)$ pairs between $A_{i,j,t}$ and $s_{i,j}$ to obtain $R_t$. For each experiment run the pairs $(i,j)$ are always fixed at the beginning (as we need to do a re-training run for each $j$); however, across different runs we use different $(i,j)$ to build the confidence estimates for $R_t$. Step thus indicates a different checkpoint, but the checkpoints correspond to each step; since the sequence of models fits in the CPU RAM, one just needs to offload the parameters at each step to the CPU. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: Thank you authors for the clarification and detailed responses to my questions. I've increased the rating by 1 point.
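The $R_t$ computation described in this reply can be sketched as follows. This is a hypothetical illustration with random placeholder data: `A` stands in for the loss-shift tensor $A_{i,j,t}$ and `s` for the influence scores $s_{i,j}$ computed at the starting checkpoint.

```python
import numpy as np

rng = np.random.default_rng(0)
n_test, n_train, n_steps = 4, 6, 10

# A[i, j, t]: loss shift of test example i after re-training w.r.t.
# train example j, measured at checkpoint t (random stand-in data here).
A = rng.normal(size=(n_test, n_train, n_steps))
# s[i, j]: influence score of train example j on test example i,
# computed once at the starting checkpoint.
s = rng.normal(size=(n_test, n_train))

def correlation_over_time(A, s):
    """Pearson correlation over the flattened (i, j) pairs at each checkpoint t."""
    flat_s = s.ravel()
    return np.array([np.corrcoef(A[:, :, t].ravel(), flat_s)[0, 1]
                     for t in range(A.shape[2])])

R = correlation_over_time(A, s)  # one correlation value R_t per checkpoint
```

Repeating this over runs with different $(i,j)$ pairs would give the confidence estimates for $R_t$ mentioned above.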
Summary: This paper verifies the assumptions of Influence Functions that show discrepancies between theory and empirical results: convexity, numeric stability, training trajectory, and parameter divergence. Although the other assumptions are addressable, parameter divergence is not. Hence, the paper proposed a solution to it and also showed empirical results. Strengths: - Based on their observation of the issue, they proposed a simple solution to it. - To the reviewer, this paper reads as an instructional paper, which can contribute to the community. - It verified the applicability of the IF assumptions one by one on convexity, numeric stability, training trajectory, and parameter divergence. Weaknesses: - Aside from its technical contribution, there’s room where its writing can be improved. In particular, the introduction can be better structured, and there are multiple repetitive places that could be made more concise. - In the Fig 2 (1) BERT case, although the authors claimed that the predictive power degrades monotonically over time, the “fading” effect is actually not clear. There is only a single peak at an early step, and no decreasing trend in the later steps. There is no fading effect in the Appendix either (Fig. 5). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Do the authors have other models to show the fading effect? As BERT’s case is unclear, only one empirical case of ResNet shows it. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reply to Weaknesses bullet point 2 & Questions** *Regarding the fading effect for BERT results*: Multiple runs (n=9) were used to build confidence intervals; the goal is to show that 0 eventually falls inside the confidence interval. For the Hessian, TracIn, and TracIn(3) methods we observe that 0 falls in the confidence interval. For Optimizer this was not actually the case, and there was sometimes a small gap of roughly 0.1 to 0.2 to the confidence interval; however, the mean of the time series oscillates quickly up and down around 0, so we conjectured that this effect was due to the high variance of computing influence scores with the Optimizer adjustment. We have done additional runs (for a total of n=25) and find that 0 is then eventually covered by the confidence interval for Optimizer as well. Please see Figure 1 in the rebuttal plots PDF. In Figure 2(a) we plot the confidence interval for Optimizer with those 25 runs, and it covers 0. As our proof is formulated for SGD, we additionally redid the BERT experiments using SGD instead of Adam: see Figure 2(b); the fading effect there is particularly clear. We hope that these plots illustrate the effect convincingly for BERT, and we will include them in the final manuscript. We will add another pretrained NLP model and a ViT for the vision task in the camera-ready version if accepted. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I maintain my rating.
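The coverage check used in this rebuttal (does 0 eventually fall inside the confidence interval built from the $n$ runs of the $R_t$ series?) can be sketched as follows. The normal-approximation interval, the $z = 1.96$ level, and the array layout are our assumptions for illustration, not necessarily the authors' exact construction.

```python
import numpy as np

def ci_covers_zero(R_runs, z=1.96):
    """Per-step check whether the ~95% confidence interval of the mean
    correlation contains 0.

    R_runs: (n_runs, n_steps) R_t time series from independent runs.
    Returns: (n_steps,) boolean array."""
    n_runs = R_runs.shape[0]
    mean = R_runs.mean(axis=0)
    sem = R_runs.std(axis=0, ddof=1) / np.sqrt(n_runs)  # standard error of the mean
    return (mean - z * sem <= 0) & (0 <= mean + z * sem)
```

With this criterion, a "fading" predictive power corresponds to the boolean array turning (and staying) True at later steps.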
Rebuttal 1: Rebuttal: We thank the reviewers for all the comments and suggestions and the positive feedback! Concerning readability (e.g., moving some proofs from the Appendix to the main body, explaining Arnoldi, explaining $C^k$-differentiability, Theorem 1 and Eq. (7-8) in more detail and improving the plot presentation), we will incorporate the suggestions in the final version, if the paper is accepted. Pdf: /pdf/9df73f46bbc8872b1e809210b5235bcfb405ef53.pdf
NeurIPS_2023_submissions_huggingface
2023
SOL: Sampling-based Optimal Linear bounding of arbitrary scalar functions
Accept (poster)
Summary: This paper proposes a method to upper- and lower-bound a scalar function within a convex set using linear functions. The proposed method works by solving a sequence of discrete linear bounding problems with an increasing number of sample points. Strengths: 1. The proposed method works for neural networks with general activation functions. Most robustness verifiers in the existing literature can only handle popular activation functions such as ReLU, sigmoid and softmax. 2. The method achieves similar performance to the current state of the art, LinSyn, while requiring significantly less computational time. Weaknesses: 1. The experimental section is a little weak. The proposed method does not seem to improve the "fraction of properties certified" compared to LinSyn in Table 1. From Table 1, it seems like the proposed method is simply a more efficient implementation of LinSyn. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What kind of training method is used to train the models in Table 1? Are those models robustly trained against $\ell_\infty$ attacks? 2. Table 1 suggests that lowering $\epsilon$ below $10^{-5}$ would only increase the runtime of SOL without any gain in the "fraction of properties certified". Is this empirical finding consistent for larger $\ell_\infty$ perturbation bounds? The $\ell_\infty$ perturbation bounds used in Table 1 are quite small, 8/255 for MNIST and 1/255 for CIFAR-10. Would SOL require a smaller $\epsilon$ in order to achieve a "fraction of properties certified" similar to LinSyn's? 3. It would be helpful to also include an upper bound on the "fraction of properties certified" in Table 1 in order to gauge how robust the models are. Such an upper bound can easily be computed using projected gradient descent or other algorithms for finding attacks on the neural network. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and the feedback. [proposed method does not seem to improve the "fraction of properties certified" ...] We would like to point out that this can easily be explained by how LinSyn was tuned. While their method does not expose any tuning parameters to the user, it has a number of hardcoded parameters determining the trade-off between running time and accuracy. These parameters might have been chosen in such a way as to make the bounds tight enough for the certification rates to be close to saturation. This hypothesis seems quite realistic considering that the experimental setup we use is exactly the same. [training method] We train the models by optimizing a simple cross-entropy loss without any robust training techniques. A comparison on models trained specifically to be efficiently certifiable would definitely be a useful extension. [scale of perturbation] We conducted additional experiments running SOL on CIFAR with perturbation radii 2/255, 4/255 and 8/255. Similarly to the results for 1/255 presented in the paper, the certification rates for every other radius do not depend on whether the optimality target is set to 1e-7, 1e-5 or 1e-3. This means that for bigger radii the saturation still happens somewhere above 1e-3. The certification rates drop significantly with increased radius, as can be seen here
```
        gelu, loglog, swish
1/255   0.31   0.69   0.37
2/255   0.     0.38   0.02
4/255   0.     0.08   0.
8/255   0.     0.03   0.
```
Supposedly, the drop is so steep because the models themselves are not very robust to begin with. The running times also increase with increased perturbation scale. 
The change is more pronounced for the small eps = 1e-7
```
        gelu, loglog, swish
1/255   400s   160s   340s
8/255   700s   312s   630s
```
than for the larger eps = 1e-3
```
        gelu, loglog, swish
1/255   150s   87s    150s
8/255   179s   90s    165s
```
We hope to get the LinSyn results for the increased radii soon, so that we can make the comparison complete. [optimization-based bound on the certification rates] Thank you for the suggestion! It might, indeed, give us new insights. We'll try to implement such bounds in a couple of days. --- Rebuttal Comment 1.1: Title: Increased perturbation radii results Comment: We apologize for the delay. Here are the LinSyn results on CIFAR with increased perturbation radii. LinSyn certification rates:
```
        gelu, loglog, swish
1/255   0.31   0.69   0.35
2/255   0.     0.38   0.02
4/255   0.     0.08   0.
8/255   0.     0.03   0.
```
LinSyn runtimes:
```
        gelu, loglog, swish
1/255   582s   355s   546s
2/255   576s   349s   562s
4/255   586s   361s   563s
8/255   599s   360s   574s
```
The certification rates for SOL are indeed at least the same as those of LinSyn across the whole spectrum of perturbations. Interestingly, the certification runtime does not increase much for LinSyn between perturbations of 1/255 and 8/255. However, it is still much higher than that of SOL with $\varepsilon = 10^{-3}$. To better analyze the saturation of certification rates as $\varepsilon \rightarrow 0$ we also evaluate SOL with $\varepsilon = 10^{-2}$ and achieve the following rates
```
        gelu, loglog, swish
1/255   0.25   0.62   0.32
2/255   0.     0.33   0.
4/255   0.     0.08   0.
8/255   0.     0.03   0.
```
The results suggest that for the perturbations of 1/255 and 2/255 the saturation occurs somewhere between $10^{-3}$ and $10^{-2}$, while for the perturbations of 4/255 and 8/255 the value of $\varepsilon = 10^{-2}$ is already saturated. 
Seemingly, the saturation $\varepsilon$ increases with perturbation scale, which should be beneficial in practice. The LinSyn results also support this saturation dynamics hypothesis: the implicit optimality accuracy LinSyn has is not quite good enough for it to saturate the certification rates at 1/255 (the SOL rate for the swish network is better), but is enough to saturate the rates for 2/255, 4/255 and 8/255. Finally, we would like to emphasize that the main purpose of the evaluation presented in the paper was to compare SOL to the two known function-agnostic bounding approaches. The evaluation on a more diverse set of NNs trained using various techniques would surely enrich the analysis. However, since none of the approaches relies explicitly on any specific model characteristic or training algorithm, we argue that the presented results are decisive enough to be representative on their own, especially considering the following: 1) we follow the experimental setup of LinSyn, which, supposedly, was chosen by its authors to reflect the performance of their approach as well as possible; 2) LinSyn aims to produce bounds tight in the same $L_1$-distance sense as our optimal bounds, so it makes sense to attribute the difference in robustness certification performance to SOL being able to produce tighter bounds using less time (see Fig. 7); this should stay true for any alternative experimental setup.
Summary: This paper describes an approach for finding a linear upper bound of scalar functions that are Lipschitz. To find a bound that approximately optimizes discrepancy with the target function, it samples points to construct LP instances whose solutions bound the corresponding "discrete" bounding problem, and continues until finding a bound that matches an optimality target. This technique can be applied as a primitive in neural network verification routines, and the evaluation shows that when it is used in place of prior work, it leads to faster verification times with no penalty on accuracy for MNIST and CIFAR10 models. Strengths: This work addresses a specific problem with well-known applications, introducing a new technique that improves measurably on prior work. Notably, while the performance of this approach when used in robustness verification is quite a bit better than the recent best-in-class techniques, it does not seem to impose any additional restrictions that would limit its applicability to networks with different architectures or activation functions, and it does not seem to degrade the precision of the verifier. Unlike the most closely related work, this approach provides a parameter that can be tuned (i.e., the approximation threshold) to trade performance for precision. While this isn't explored too much in the paper, this flexibility could potentially be used by verifiers to adaptively scale to larger problems by offering progressively weaker guarantees. The writing is clear and understandable, although the main contribution takes some time to get through. The writing might be improved with a more focused exposition of the algorithm. Currently, some of the intuition is given in section 3, some at the beginning of section 4, and the full algorithm is finally presented in 4.3. Soundness conditions and a brief diversion into LP are discussed in between these sections, which was distracting. 
Weaknesses: While the performance gains are apparent (modulo some experimental concerns discussed below), the significance of this approach may be somewhat limited, as it targets a specific application and offers incremental improvements, not new capabilities. Nonetheless, this will be of interest to those working on neural network verification. The experimental analysis is limited, and focuses more on microbenchmarking LP solvers and SOL's isolated performance than showing gains in its most useful application, robustness verification. 1. Verification results are shown over two models, configured with three different activation functions. It's interesting to see the differences across activations, but this doesn't show consistent improvements as architectures grow deeper or wider. Additionally, MNIST results don't offer too many insights at this point, as prior work on certified training yields results that are difficult to improve on. 2. The experiments don't vary the robustness radius, so we can't determine how this impacts runtime or certification rates. In particular, it would be good to see that this approach's reliance on sampling doesn't break down for stronger guarantees. The 1/255 radius for CIFAR is not really standard anymore; although not a decisive argument, note that the [certified robustness community leaderboard](https://sokcertifiedrobustness.github.io/leaderboard/) doesn't have papers in this category, as most evaluate 2/255 or 8/255. 3. Important details of these models are not given: how many parameters, which layers, and how were they trained? Importantly, were these trained using techniques that would make them efficiently certifiable, or adversarial training, or something else? This will have a significant effect on the results. 
If these models were not trained for certification, then the experiments should include some models that were, as AutoLiRPA's default bound and LinSyn are likely to show improvement, and we would hope to see a comparable improvement in this approach. The limited experiments make it difficult to judge the significance of this work, which is positioned as a drop-in replacement for existing bound approximation methods within verifiers. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Please provide information on the missing details discussed in (3) from above. 2. If you have done experiments with different robustness radii or architectures, but did not report them, please describe the results. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Aside from the questions about experimental details discussed above, limitations were adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
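The discrete bounding step under discussion (summarized earlier in this review as "sampling points to construct LP instances" whose solutions bound the target function at the samples) can be posed as a small linear program. The sketch below uses `scipy.optimize.linprog` with our own variable names and objective convention; it is an illustration under stated assumptions, not the paper's implementation, and it omits the subsequent Lipschitz-based soundness adjustment.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_upper_bound(xs, fs):
    """Tightest linear upper bound of the sampled points: find (a, b)
    minimizing sum_i (a @ x_i + b - f(x_i))  s.t.  a @ x_i + b >= f(x_i).

    xs: (n, d) sample points; fs: (n,) function values.
    Returns: slope vector a of shape (d,) and intercept b."""
    n, d = xs.shape
    # Objective over variables (a, b): sum_i (a @ x_i) + n * b; the constant
    # sum_i f(x_i) is dropped since it does not affect the argmin.
    c = np.concatenate([xs.sum(axis=0), [n]])
    # Upper-bounding constraints rewritten as -(a @ x_i + b) <= -f(x_i).
    A_ub = -np.hstack([xs, np.ones((n, 1))])
    res = linprog(c, A_ub=A_ub, b_ub=-fs, bounds=[(None, None)] * (d + 1))
    return res.x[:d], res.x[d]
```

For an $L$-Lipschitz $f$ and a sample with covering radius $\delta$, one simple (if loose) way to make such a bound sound over the whole region is to shift the intercept up by $(L + \|a\|)\delta$; the paper's smoothness adjustment is presumably tighter, and this correction is our assumption, not the authors'.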
Rebuttal 1: Rebuttal: Thank you for your review and the feedback. [exposition of the algorithm] Thank you for the suggestion. We will try to rearrange some of the paragraphs to make the narrative more coherent. [information on the missing details discussed in (3)] Here are the details. We will specify them more clearly in the paper: 1) All of the models were trained by optimizing a simple cross-entropy loss. No robust training techniques were used. This is the setup the LinSyn paper uses for their experiments, so we chose a similar approach. 2) The networks with "4l" in the name have 2 convolutional layers followed by 2 fully-connected layers, the "5l" networks – 3 conv + 2 fc. 3) The activation layer corresponding to the name of the model is present after each main layer except the last one. [experiments with different robustness radii] We conducted additional experiments running SOL on CIFAR with perturbation radii 2/255, 4/255 and 8/255. Similarly to the results for 1/255 presented in the paper, the certification rates for every other radius do not depend on whether the optimality target is set to 1e-7, 1e-5 or 1e-3. This means that for bigger radii the saturation still happens somewhere above 1e-3. The certification rates drop significantly with increased radius, as can be seen here
```
        gelu, loglog, swish
1/255   0.31   0.69   0.37
2/255   0.     0.38   0.02
4/255   0.     0.08   0.
8/255   0.     0.03   0.
```
Supposedly, the drop is so steep because the models themselves are not very robust to begin with. The running times also increase with increased perturbation scale. The change is more pronounced for the small eps = 1e-7
```
        gelu, loglog, swish
1/255   400s   160s   340s
8/255   700s   312s   630s
```
than for the larger eps = 1e-3
```
        gelu, loglog, swish
1/255   150s   87s    150s
8/255   179s   90s    165s
```
We hope to get the LinSyn results for the increased radii soon, so that we can make the comparison complete. 
However, so far nothing indicates that the comparison might be qualitatively different from the 1/255 comparison presented in the paper. --- Rebuttal Comment 1.1: Title: Increased perturbation radii results Comment: We apologize for the delay. Here are the LinSyn results on CIFAR with increased perturbation radii. LinSyn certification rates:
```
        gelu, loglog, swish
1/255   0.31   0.69   0.35
2/255   0.     0.38   0.02
4/255   0.     0.08   0.
8/255   0.     0.03   0.
```
LinSyn runtimes:
```
        gelu, loglog, swish
1/255   582s   355s   546s
2/255   576s   349s   562s
4/255   586s   361s   563s
8/255   599s   360s   574s
```
The certification rates for SOL are indeed at least the same as those of LinSyn across the whole spectrum of perturbations. Interestingly, the certification runtime does not increase much for LinSyn between perturbations of 1/255 and 8/255. However, it is still much higher than that of SOL with $\varepsilon = 10^{-3}$. To better analyze the saturation of certification rates as $\varepsilon \rightarrow 0$ we also evaluate SOL with $\varepsilon = 10^{-2}$ and achieve the following rates
```
        gelu, loglog, swish
1/255   0.25   0.62   0.32
2/255   0.     0.33   0.
4/255   0.     0.08   0.
8/255   0.     0.03   0.
```
The results suggest that for the perturbations of 1/255 and 2/255 the saturation occurs somewhere between $10^{-3}$ and $10^{-2}$, while for the perturbations of 4/255 and 8/255 the value of $\varepsilon = 10^{-2}$ is already saturated. Seemingly, the saturation $\varepsilon$ increases with perturbation scale, which should be beneficial in practice. The LinSyn results also support this saturation dynamics hypothesis: the implicit optimality accuracy LinSyn has is not quite good enough for it to saturate the certification rates at 1/255 (the SOL rate for the swish network is better), but is enough to saturate the rates for 2/255, 4/255 and 8/255. Finally, we would like to emphasize that the main purpose of the evaluation presented in the paper was to compare SOL to the two known function-agnostic bounding approaches. 
The evaluation on a more diverse set of NNs trained using various techniques would surely enrich the analysis. However, since none of the approaches relies explicitly on any specific model characteristic or training algorithm, we argue that the presented results are decisive enough to be representative on their own, especially considering the following: 1) we follow the experimental setup of LinSyn, which, supposedly, was chosen by its authors to reflect the performance of their approach as well as possible; 2) LinSyn aims to produce bounds tight in the same $L_1$-distance sense as our optimal bounds, so it makes sense to attribute the difference in robustness certification performance to SOL being able to produce tighter bounds using less time (see Fig. 7); this should stay true for any alternative experimental setup.
Summary: [Context] Neural network verification algorithms are based on bounding the activations and outputs of neural networks. This is achieved by propagating linear bounds through the network. For each non-linear operation in the network, it is required to provide linear bounds of the operation: given the range of inputs that the function accepts, give a linear mapping that upper- (or lower-) bounds the function. Using these linear bounds, propagation algorithms like the variants of CROWN / LiRPA are able to bound the whole network. These linear bounds of operations are usually manually derived. [Contribution] The authors define an optimality criterion in the context of convex or concave activation functions [Theorem 1]. The main contribution of the paper is an algorithm to computationally derive linear bounds for functions that are not convex or concave, as long as they are Lipschitz continuous. The algorithm consists of sampling a finite number of points, solving an LP to obtain the optimal linear bound that upper-bounds the sampled points [4.1], and using a smoothness criterion to adjust these bounds and guarantee that they will be valid upper bounds [4.2]. This procedure can be re-iterated until the desired accuracy gap (to the optimal linear bound) is achieved (4.3). [Opinion of the paper] I think that the paper is very interesting and the proposed algorithm can be very useful. The general idea of the algorithm is clearly presented; however, some important details are missing, which makes the current description in the paper insufficient to reproduce / re-implement it. If my main weaknesses comments ([Missing Description], [Missing Experiment]) and questions ([Computational aspects] [Details about the experiments]) are addressed, I would be happy to increase my Soundness to good/excellent and my General Rating to Weak Accept / Accept. Strengths: * The problem solved is important, and the proposed solution is interesting. 
Having had to manually derive convex relaxations of activation functions for some project, I definitely appreciate the value of an automated algorithm to handle this problem. * The validation of the proposed method is broad. It is both performed on small problems where internal design choices (picking the LP solver algorithm) can be validated [5.1] and compared to the most relevant baseline [5.2], but also on larger-scale problems (I assume; see Questions). * The paper is very well structured, framing first the problem and their work in the context of the existing literature, introducing the framework in which they operate, and then building the description of their algorithm part by part in a logical fashion. While some elements are not fully described (see Weaknesses + Questions), I believe that this can be remedied for the final version. Weaknesses: [Definition of the Optimal Linear Bounding Problem] The paper motivates the choice of the "Minimum volume criterion" by citing [36, 25, 13]. I could not find anything relevant to that point in the Popqorn paper [13]. In the DeepPoly paper [25] and CROWN paper [36], the choice of lower bound for ReLU is indeed done by minimizing the area of the relaxation created, but without actually showing that it "correlates well with the performance of robustness certification" (except by showing that it outperforms the old "parallel bounds"), and we know that those choices can be improved (as evidenced by papers like alpha-CROWN that give better robustness results than those choices). In addition, no point is made about this for general functions beyond ReLU. I completely agree with the authors that this is probably a good criterion, but if this is to be a basis for the method, I think the paper would be much stronger if this point was more strongly defended. Some possible way this could be done: - Can we link final robustness results to the volume criterion in some way? 
Maybe in expectation over some random weights, or at least clarifying that on a restricted type of activation function (convex? Monotonically increasing?), the volume-optimal bound can be shown to dominate other bounds. [Missing description] The "1D bisect" algorithm seems critical for the performance of the method (around twice as fast as the best alternative option), but it isn't described anywhere. The LP to solve needs to return a slope vector $\mathbf{a}$ and an intercept $b$, but the only explanation given is in lines 288 and 289, which do not describe how to compute those values. [Missing experiment] One experiment that is missing is the comparison to a network where we *know* what the correct bounds should be. If you take a network trained with sigmoid or tanh activation, the handcrafted convex hull / optimal linear bounds have been derived and should be tested (in the setting of Table 1). It is expected that the proposed SOL method would lose to that baseline, but knowing the gap in performance would be quite valuable, so that the "cost" of not manually deriving optimal bounds is known. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: [Results of Theorem 1] This might be a result of the definition of "optimal", but in the case where conv_R(f) is not differentiable, but only subdifferentiable, is it possible that we have several functions which are all "optimal" at the same time, is that correct? [Claim in l.136 to l.142] Unless I'm mistaken, both Softmax and SELU are neither convex nor concave, so should not be in the list of functions that are directly supported before introducing SOL. [Claim in l.190 to l.192] "It can always be chosen in such a way that $|a| \leq L_1$" This claim doesn't seem justified by either a proof sketch or a reference? [Computational aspects] It's unclear to me on what platform every operation is run. I assume that Gurobi and SciPy are running on CPU, but what about the other algorithms? 
Is the implementation of "1D bisect" amenable to vectorization? [Details about the experiment] The paper is missing some information about what type of networks are used for the "end to end" test including the method in AutoLIRPA. What activations were those networks using? What size are they? How were they trained? Are they from a standard "NN robustness verification" benchmark such as the one from VNN-comp? [Suggestion for future work - Ignore this for the review process] It seems like this method could potentially be improved quite significantly by introducing some sort of smart caching. Most of the time, a network will use the same activation function throughout the network, and it seems non-optimal to restart from a uniform splitting of the domain every time, even if certain points (inflection points, local extrema) are much more likely to be important to include in the sample. [Notes] * Throughout the whole paper, Lipschitz is spelled Lipshitz. I did not find any examples of that spelling anywhere so I'm wondering if it's a typo or a different accepted spelling. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The paper would be improved by adding an experiment showing the cost it introduces vs. deriving hand-crafted optimal bounds. See the [Missing experiment] comment in Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and the detailed feedback. [Missing description] We should definitely describe the 1d algorithm in more detail. The idea of the bisection is as follows: 1) guess the value $g(x_c) = y$ which we want to optimize; 2) check its feasibility by finding $a = \min_{x_i < x_c} \frac{y - f(x_i)}{x_c - x_i}$ and $b = \max_{x_i > x_c} \frac{f(x_i) - y}{x_i - x_c}$ in $O(|S|)$ time; 3) if $a < b$, the guess is infeasible, since no line passing through $(x_c, y)$ can upper-bound both argmax points from the previous step; 4) otherwise, the guess is feasible and a valid solution would be $g(x) = y + (x - x_c) (a + b) / 2$. So, starting with the initial feasible guess of $\max f(x_i)$ and the initial infeasible guess of $\min f(x_i)$, we can find the solution within $\varepsilon$ of the optimum in $O(|S| \log \frac{1}{\varepsilon})$ time. [Missing experiment] Such an experiment would, indeed, be a great addition to the paper and we will gladly include it. Right now similar quantities can be estimated by comparing: 1) certification rates of SOL with higher eps against the rates of SOL with eps low enough for the rates to be saturated (1e-7); the saturated rates should be close to the hypothetical rates of returning the exact optimal bounds; 2) running times of SOL against the running times of the default AutoLIRPA; for simpler functions the decompositional bounding of AutoLIRPA should take time similar to what calculating optimal hand-crafted bounds would take; the higher times for gelu networks are supposedly caused by the massive decompositional structure of the function. [Definition of the Optimal Linear Bounding Problem] Connecting the choice of the tightness measure to the robustness properties would be very appropriate for the paper. We will look into it. Right now the motivation is more practical: 1) other researchers have tried it and did not find it pathological; 2) this exact tightness measure is convenient to optimize. 
Also, we believe that for many practical scenarios the exact choice of a measure (within a certain spectrum) might not affect the optimal bound much. [Results of Theorem 1] Correct, any subderivative passing through $(x_c, conv_R(f)(x_c))$ would be optimal. E.g., the optimal bounds for $-|x|$ in $[-1, 1]$. [Claim in l.136 to l.142] For SELU we meant to refer to the case when the scale parameter of the linear portion of the function is set high enough for the function to be convex. Softmax, indeed, should not be mentioned there. What should have been mentioned instead is max_pool – which is a convex function of several variables. [Claim in l.190 to l.192] Thank you for noticing this one! A few weeks ago we gave this statement a little more thought and realized that it's not as straightforward as it had seemed. 1) Given an arbitrary solution to the discrete problem we can always "perturb" it in such a way that the corresponding hyperplane passes through $d + 1$ points $[(x_i, f(x_i))]$ of the sample and the center of mass $x_c$ lies in the closed simplex $[x_i]$. 2) For the one-dimensional case this is enough to satisfy the statement: $|a| = \frac{|f(x_1) - f(x_2)|}{|x_1 - x_2|} \le L_1$. 3) In higher dimensions it does not necessarily hold as is. Note that for an arbitrary point $x \in S$ the upper-bounding condition requires $$f(x_i) + a(x - x_i) \ge f(x) \ge f(x_i) - L_1 |x - x_i|$$ therefore $$a \frac{x - x_i}{|x - x_i|} \ge - L_1 \Rightarrow (-a) \frac{x - x_i}{|x - x_i|} \le L_1 \Rightarrow |a| \le L_1 / (\frac{-a}{|a|} \cdot \frac{x - x_i}{|x - x_i|}).$$ This gives us the general bound of $$|a| \le L_1 / \inf_{[x_i], v} \sup_{i, x \in S} (v \cdot \frac{x - x_i}{|x - x_i|}),$$ where the infimum is taken over all simplexes $[x_i]$ such that $x_c$ lies in the simplex and $v$ ranges over all possible unit vectors corresponding to the direction of the gradient. 
It's easy to see that this min-max optimization depends on two things: 1) the density of points in the sample: higher density gives a lower bound; 2) the geometry of R, which determines the limit value of the bound when density approaches infinity $$q = \inf_{[x_i]\subset R, v} \sup_{i, x \in R} (v \cdot \frac{x - x_i}{|x - x_i|}).$$ The geometric factor $q$ is quite important in practice. Intuitively, $R$ having "acute angles" on the boundary may make $q$ smaller than 1. One example of such a situation is in 2d: let $R$ be the triangle {(0, 0), ($w$, 1), (-$w$, 1)}, for which $q \rightarrow 0$ as $w \rightarrow \infty$. If we choose the target function as $f(x,y) = |x|$, the Lipschitz constant is always $L_1 = 1$, but the optimal upper bound is $g(x,y) = wy$ with $|a| = w \rightarrow \infty$. And the estimated discrete optimal bounds might have similarly large $|a|$. On the other hand, $R$ with "no angles" – a closed ball – seems to have $q = 1$ in any dimension, so we can guarantee $|a| \le L_1 (1 + \varepsilon)$ for any positive $\varepsilon$ if the initial sample size is big enough. All in all, these factors only introduce an additional constant factor into the complexity. [Computational aspects] All methods only use CPU. We run the experiments from the paper in a VM with 3 CPUs allocated. AutoLIRPA runs several instances of the bounding problem solver concurrently. The calculation of the two $\frac{f(x_i) - y}{x_i - x_c}$ arrays in the 1d bisect algorithm should be vectorizable. We will look into it. [Details about the experiment] These details should definitely be included in the paper: 1) We run AutoLIRPA with the simple "CROWN" method for bounding. 2) The networks with "4l" in the name have 2 convolutional layers followed by 2 fully-connected layers, the "5l" networks – 3 conv + 2 fc. 3) The activation layer corresponding to the name of the model is present after each main layer except the last one. 
4) All of the networks are trained by optimizing traditional cross-entropy, similarly to how it was done in the LinSyn paper. [Lipschitz vs Lipshitz] It seems to be a typo in the spell-checker on our side. --- Rebuttal Comment 1.1: Comment: **Re:[Missing description]** Thanks for the explanation. Does it apply as well to the case where $x$ is a vector, rather than a scalar? Or is the method limited to 1d activation functions? (This is fine if that is the case, but it should be pointed out.) Thank you for the detailed response. I have increased my scores. --- Reply to Comment 1.1.1: Comment: This version of the bisect procedure only applies to the 1d case. We haven't been able to generalize it to higher dimensions yet. We used to have a couple of words mentioning this limitation, but seem to have lost them along the way. The "1D" in "1D bisect" was meant to highlight this fact. Again, thank you for noticing these details!
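The rebuttal's triangle example (where the geometric factor drives the optimal slope to infinity even though $L_1 = 1$) can be checked numerically. The following is our own illustrative sketch, not the authors' code; the sampling scheme and function names are hypothetical:

```python
import random

# Sketch of the triangle example from the rebuttal: R = {(0,0), (w,1), (-w,1)},
# f(x, y) = |x| (Lipschitz constant 1), candidate upper bound g(x, y) = w*y.
# Barycentric sampling keeps every point inside the triangle; the bound
# g >= f holds everywhere on R, even though its slope |a| = w is unbounded in w.

def sample_triangle(w, n, seed=0):
    """Uniformly sample n points inside the triangle (0,0), (w,1), (-w,1)."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        a, b = rng.random(), rng.random()
        if a + b > 1.0:            # fold into the unit simplex
            a, b = 1.0 - a, 1.0 - b
        c = 1.0 - a - b
        # vertices (0,0), (w,1), (-w,1) with barycentric weights a, b, c
        pts.append(((b - c) * w, b + c))
    return pts

def bound_holds(w, n=1000):
    """Check that g(x, y) = w*y upper-bounds f(x, y) = |x| on the samples."""
    return all(w * y >= abs(x) - 1e-9 for x, y in sample_triangle(w, n))
```

The check passes for any `w` (since $|x| = w|b-c| \le w(b+c) = wy$ in barycentric coordinates), illustrating that the slope of the tightest upper bound is governed by the geometry of $R$ rather than by the Lipschitz constant alone.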
Summary: This paper focuses on finding tight linear bounds for the activation functions in neural networks, which are used for certifying the robustness of a neural network. It aims to provide optimality guarantees for the tightness of the bounds of a scalar function represented by the neural net. Two settings are considered: 1. for functions that are convex in some region $R$, optimal bounds are obtained through an optimality criterion for the tightness of the approximation in $R$; 2. for functions that are Lipschitz continuous in some region $R'$, a sampling-based approach called SOL is proposed. Given an instance of the bounding problem and a positive scalar $\epsilon$, SOL efficiently computes the tightest linear bounds within the $\epsilon$ threshold. The empirical simulations show that the proposed method SOL typically takes a quarter of the time other methods take. Strengths: **Novelty to more systematically analyze the linear bound tightness.** Although I am not working in this area, it seems this paper is among the first to study the tightness of the linear bounds for the activation functions used for neural nets. **Elegant/Simple approach for the convex setting.** The results for convex functions are very simple, but to me, that makes the approach very nice: it shows that it suffices to estimate the center of mass, which is a point in the given region from the domain (in practice estimated after sampling), and compute the gradient at that point to obtain a linear bound. Weaknesses: **Incomplete results for $L$-Lipschitz setting.** While I personally enjoyed the overall idea, I think the results for the $L$-Lipschitz setting are incomplete. In particular: - these should be more rigorous (see also comment below), and moreover - I expected to see a theorem that relates the tightness of the bounds estimated with sampling to the $L$-constant, the sample size, the dimensionality of the problem, etc. 
(even if this is done for some specific non-convex function). **Writing.** The writing is, in general, easy to follow. However, parts of the text are relatively ambiguous or less formal. For example, the following should be improved: - *relating to motivation*: since robustness certificates seem to be the main motivation mentioned in the paper for linear bounds, explain how the latter gives the former (and if it is the most consuming step) - *Prop. 1*. The way it is written, the proposition is not a full statement -- write it more formally to include the previous definition of the volume or point to it when making the statement. - *Theorem 1*. Similarly, Theorem 1, as stated, is incomplete -- the setting and the assumptions should be repeated here. - *Missing proof of Thm. 1*. - *the discussions for the $L$-Lipschitz setting should be made much more rigorous* (and stated as lemmas/theorems). - Fig. 3 should be explained better in the caption. - etc. ## Minor - Abstract: *tightness optimality* is hard to understand; elaborate better as it seems central - line 157: *the* sample $S$ Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the average time complexity of SOL, and how does it relate to the observed running time? If there is a gap, could you discuss the limitations / worst-case examples that yield the worst-case time complexity? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please see above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for such detailed feedback. We will definitely address the clumsiness in formulations and unintentional ambiguities. [average time complexity of SOL] Unfortunately, we cannot provide a rigorous analysis of the average time complexity, since the dynamics of the estimated linear bound between iterations seems too complex. However, the proposed intuitive reasoning (lines 252-261) aligns pretty well with what we get in practice (fig. 5a, 5c). The empirical complexity of $O(\log^2 \frac{1}{\varepsilon})$ can be attributed to: 1) 1d bisect taking $O(n \log \frac{1}{\varepsilon})$ time to solve each discrete problem with $n$ points; 2) the final discrete problem typically having $O(\log \frac{1}{\varepsilon})$ samples concentrated near the touching point, as per the reasoning; 3) the number of points growing fast enough for the last iteration's runtime to be dominant. [worst-case examples] Bounding linear functions gives complexities close to our worst-case bound of $O(\varepsilon^{-2d})$ (lines 246-251). Currently, we don't know whether closer complexities are achievable or not. [theorem that relates the tightness of the bounds ...] Such a relation is given implicitly for the uniform (non-adaptive) SOL at line 227 by defining the cell size in terms of the desired optimality and the Lipschitz constant. We should definitely make this relation more explicit for the sake of clarity. For the adaptive version, again, no rigorous connection of this sort is known to us.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces a new method for finding tight (in the l1 error sense) linear bounds for general non-linear activation functions. Under their definition of tightness, the authors propose a method for finding optimal bounds for any convex function. For non-convex functions with Lipschitz continuity, they propose a sampling-based method that efficiently computes the linear bounds within some error threshold. The performance of the proposed methods is benchmarked in robust certification tasks. Strengths: This is an interesting paper tackling an important problem (linear bounding of arbitrary activation functions) in the field. It formulates and solves the problem in a rigorous fashion, and a systematic benchmarking proves the utility of the new method. Weaknesses: I do not see particular weaknesses in this paper, but I do have a question out of theoretical interest. The optimality criterion is based on the L1 error between the bounds g(x) and the activation function f(x). And this L1 error is important in the derivation of Proposition 1. I wonder how the landscape would change if we changed the loss function to, e.g., L2 or other meaningful functions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the Weakness section above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I do not see any limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and for the interesting question. [significance of optimizing $L_1$ error] Right now we don't see any straightforward way to generalize our approaches to alternative tightness measures. Moving from the $L_1$ measure to, for example, $L_2$ poses several immediate problems: 1) As you have already mentioned, Proposition 1 no longer works. What is worse, now the loss function depends on $\int_R x g(x)$ and $\int_R g(x)$ in a non-trivial way. Generally, there is no analytical expression for these two. 2) The analogue of our discrete problem is no longer an LP, but a QP, which is harder. 3) Shifting the bound up to adjust for possible unsoundness of the discrete problem's solution no longer corresponds directly to incrementing the loss function. On the other hand, we expect that, at least in 1d, in many situations (when the optimal $L_1$ bound touches $f(x)$ at two distinct points) the optimal bound stays the same for a wide spectrum of discrepancy measures. Therefore, the exact choice of measure may not influence the performance of the robustness certification too much. Then we might as well choose a measure which facilitates the optimization. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for clarifying my question. And I will keep my score.
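As a side illustration of the $L_1$ criterion discussed in this thread, here is a small numerical sketch (our own, with hypothetical function choices, not the paper's code) checking that, for a 1d convex function under the uniform measure, the tangent at the center of mass minimizes the $L_1$ gap among all tangent lower bounds:

```python
import math

# Our own numerical sketch: for the convex f(x) = exp(x) on R = [-1, 1] with
# the uniform measure (center of mass x_c = 0), the tangent at x_c should
# minimize the L1 gap  integral_R (f - g)  over all tangent lower bounds g.

def l1_gap(a, n=2001):
    """L1 distance between f(x) = exp(x) and its tangent at the point a,
    computed with the trapezoid rule on [-1, 1]."""
    f, df = math.exp(a), math.exp(a)       # f(a) and f'(a)
    lo, hi = -1.0, 1.0
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        weight = 0.5 if i in (0, n - 1) else 1.0   # trapezoid end weights
        total += weight * (math.exp(x) - (f + df * (x - a)))
    return total * h

# Scan candidate tangent points and find the one with the smallest gap.
grid = [i / 100 for i in range(-90, 91)]
best = min(grid, key=l1_gap)
```

The minimizer lands at the center of mass `x_c = 0`, matching the paper's criterion for the convex setting as we understand it; the trapezoid rule is exact on the linear tangent, so the discretization does not shift the argmin.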
null
null
null
null
null
null
Inverse Preference Learning: Preference-based RL without a Reward Function
Accept (poster)
Summary: The paper proposes a way to learn from preferences in the offline setting without learning a reward model. The main insight is that reward and Q functions are interchangeable, and the policy learning using preferences can be formulated directly as a function of Q, which represents the reward function implicitly. This allows for increased learning performance without requiring a separate reward function to be trained when doing RL with preferences. Strengths: 1. The paper presents a new way to do policy learning from preference data without requiring learning an intermediate reward function. 2. The experiments demonstrate that their method IPL outperforms baselines that learn from offline preferences in a number of prior tasks that include simulated MuJoCo locomotion tasks and manipulation tasks. Weaknesses: 1. Missing theoretical underpinnings: A number of questions come up with the proposed method that should be addressed: 1. Is the learned policy optimal for the implicit Q function? Equation 6 is the probability that one trajectory is preferred over the other under the policy $\pi$ with the reward function $T^\pi Q$. Replacing $T^\pi Q$ with $T^* Q = Q(s,a)-\gamma E[V^*(s')]$, it is claimed that the preference learning objective will fit the optimal value function. There seems to be a jump to this claim without a proof. In my opinion, it is important to show how minimizing the equation after line 201 leads to the soft optimal $Q^*$ with the ground truth reward function $r^E$. 2. Large regularization weight: A large regularization weight (1, and sometimes even larger than the main preference loss, e.g. in Fig. 2) is used in the work, which makes it unclear what combined objective is being optimized. While the intuition given by the authors makes sense, it might be great to show how this regularization does not change the stationary point of $Q^*$. 2. 
Clarification on novelty: It is mentioned multiple times in the paper that: "Line 163: Our key insight is that the Q-function learned by an off-policy RL algorithm in fact encodes the same information as the reward function." This is not a new insight. It has been presented in multiple prior works [1,2,3], where a change of variables for the reward is performed to remove the intermediate step of reward learning for imitation learning, and the mapping from reward to Q has been studied theoretically in the work of [4] and, more recently, [5]. It seems the novelty is to add the preference loss to the IQ-Learn loss function. 3. Prior literature on reward learning: Three baselines are used to compare the method IPL against - MR, LSTM and PT. A number of other prior works exist which aim to do reward learning from preferences [6,7,8]; it would be interesting to compare against them experimentally and discuss the relationship in the paper. 4. Gap between proposed method and practical method: The paper discusses learning an optimal value function using the linex loss of XQL [9], but in the experiment section the value update is performed using IQL. The preference loss uses the soft-optimal Q, whereas the policy update step in IQL updates for the optimal Q. There seems to be a disconnect between the proposed method and the practical method which might be important to address. 5. Experiments: a. How is the preference dataset constructed? Elaborating on that in the main paper can increase the understanding considerably. b. Limited experiments: Only 1 baseline is compared against in the meta-world task, and 4 simulated domains in total are considered in the locomotion and robomimic tasks. With limited theoretical understanding of the method, it might be important to establish the method empirically with previous baselines in order to show its merits. [1] Kostrikov, Ilya, Ofir Nachum, and Jonathan Tompson. "Imitation learning via off-policy distribution matching." *arXiv preprint arXiv:1912.05032* (2019). [2] Nachum, Ofir, and Bo Dai. 
"Reinforcement learning via fenchel-rockafellar duality." *arXiv preprint arXiv:2001.01866* (2020). [3] Ma, Yecheng Jason, et al. "Smodice: Versatile offline imitation learning via state occupancy matching." *arXiv e-prints* (2022): arXiv-2202. [4] Garg, Divyansh, et al. "Iq-learn: Inverse soft-q learning for imitation." *Advances in Neural Information Processing Systems* 34 (2021): 4028-4039. [5] Sikchi, Harshit, Amy Zhang, and Scott Niekum. "Imitation from Arbitrary Experience: A Dual Unification of Reinforcement and Imitation Learning Methods." *arXiv preprint arXiv:2302.08560* (2023). [6] Brown, Daniel, et al. "Safe imitation learning via fast bayesian reward inference from preferences." *International Conference on Machine Learning*. PMLR, 2020. [7] Chen, Letian, Rohan Paleja, and Matthew Gombolay. "Learning from suboptimal demonstration via self-supervised reward regression." *Conference on robot learning*. PMLR, 2021. [8] Sikchi, Harshit, et al. "A ranking game for imitation learning." *arXiv preprint arXiv:2202.03481* (2022). [9] Garg, Divyansh, et al. "Extreme Q-Learning: MaxEnt RL without Entropy." *arXiv preprint arXiv:2301.02328* (2023). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Gradient through target Q: When implementing the regularization on $T^* Q = Q(s,a)-\gamma E[V^*(s')]$, it seems that only $Q(s,a)$ is updated, which means that the reward is not being made zero-centered; rather, Q is made zero-centered. 2. XQL vs IQL: The main paper discusses XQL and the experiments use IQL. Is there a reason behind this disconnect? Are there XQL experiments to validate? Update: Thanks to the authors for responding to a number of my questions and providing additional baselines. I have updated my score (4->5) and will revisit my score upon further discussion with reviewers. I hope to see the clarification on novelty and the additional baselines in the updated paper. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
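The objects this review discusses — the inverse Bellman operator $r = Q - \gamma V$ and the Bradley-Terry preference probability over summed rewards — can be sketched in a few lines. This is our own toy illustration; all names and values are hypothetical, not the paper's code:

```python
import math

# Toy sketch of the two ingredients discussed above: an implicit reward
# recovered from Q via the inverse Bellman operator, and a Bradley-Terry
# preference probability over summed segment rewards.

gamma = 0.99

def implicit_reward(q, v_next):
    """Inverse Bellman operator: reward implied by Q(s,a) and V(s')."""
    return q - gamma * v_next

def preference_prob(rewards_a, rewards_b):
    """P(segment A preferred over segment B) under the Bradley-Terry model."""
    diff = sum(rewards_a) - sum(rewards_b)
    return 1.0 / (1.0 + math.exp(-diff))

# Two toy segments, each a list of (Q(s,a), V(s')) pairs.
seg_a = [(1.0, 0.5), (1.2, 0.4)]
seg_b = [(0.8, 0.5), (0.7, 0.4)]

r_a = [implicit_reward(q, v) for q, v in seg_a]
r_b = [implicit_reward(q, v) for q, v in seg_b]

p_ab = preference_prob(r_a, r_b)   # > 0.5 since segment A has higher implicit reward
```

The preference loss can then be backpropagated through `implicit_reward` into Q directly, which is the sense in which no separate reward network is needed.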
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and effort in reviewing our work and have made a number of changes as a result. **Theoretical Underpinnings** The reviewer raised a number of questions regarding the theoretical underpinnings of IPL, namely its optimality and regularization. We provide a proof of IPL’s convergence in our central response due to space constraints. It depends on a general proof of the bijection, which was included in our response to Reviewer ggMH, and shows that IPL returns the optimal policy for the regularized expert reward. We hope the proof addresses the following concerns of the reviewer: 1. IPL is formally characterized, increasing understanding across the board. 2. We have shown that IPL optimizes for the regularized reward $r^*$. 3. We have shown that IPL converges to the optimal policy for this reward. The reviewer also asked if the reward function we optimize is $r_E$ or not. $r^*$ is not equal to $r_E$. However, we would like to note that most preference-based RL works do not optimize for $r_E$. For example, PEBBLE [2] bounds the reward values with a Tanh network in practice, which as a multiplicative clamp does not preserve $r_E$. Preference Transformer [1] normalizes reward by scaling it via the max and min episode, which is another multiplicative transform that does not preserve $r_E$. Like all these methods, the policy recovered by IPL is not exactly that of $r_E$, though they are likely similar. We will update the text to make this more clear, along with our proofs. The reviewer also asked about large regularization coefficients. Empirically we found that the stronger the regularization, the more bounded the $Q$-function becomes. Practically, we found that a larger regularization weight was useful in some domains – and this might have to do with the data. With higher quality data, there might not be too much difference between the $Q$ values for a good and a bad segment. 
However, there is also an intuitive argument – humans have relatively smooth preferences. For example, humans would probably attribute reward smoothly across a video clip instead of attributing it all to a single frame. **Clarification of Novelty** Thank you for bringing this up. We apologize if the contribution statement was not clear. We have now made it clear that our contribution is the application of the inverse Bellman operator to preference-based RL, not the inverse Bellman operator itself. **Prior Literature and Other Baselines** We will make sure to include citations for these other, relevant works. However, the problem each of these works seeks to solve is slightly different from the preference-based RL problem that we deal with. To our knowledge, Preference Transformer is state of the art in all of our benchmark tasks, which is why we selected it as a strong baseline. “Safe Imitation Learning Via Fast Bayesian Reward Inference From Preferences” from Brown et al. is largely designed for image-based experiments (our benchmarks are state-based), uses a number of self-supervised losses, and uses entire demonstrations instead of trajectory snippets. However, most critically, this method requires hundreds of online MCMC rollouts during training, making it inapplicable to our offline benchmarks. “Learning from Suboptimal Demonstration via Self-Supervised Reward Regression” by Chen et al. is not a preference-based RL method. Instead, it is an inverse-RL method that improves upon D-REX to generate preference data for improving the policy. They assume access to a set of demonstrations, and no rankings. Their method addresses a different, complementary challenge than IPL (inverse RL) by generating preference data. In this way, it is complementary to IPL and could be used to generate more data. It also requires online samples (unlike IPL) in order to generate extra data from noised policies. “A Ranking Game For Imitation Learning” by Sikchi et al. 
is also an online inverse RL method, which can leverage some preference data. It is designed to be used online with access to a set of demonstrations, unlike the methods we test against. While these baselines are highly relevant, they aren’t exactly designed for the same setup we are concerned with. Specifically, most either address the inverse-RL problem or require online samples. **Gap Between Proposed and Practical Method** See response to reviewer ggMH under Soft-Q vs Standard. **Experiments** 1. *How is the preference dataset constructed?* Our datasets are taken from PT [1]. They are constructed by sampling segments from existing offline benchmarks (D4RL, robosuite), and asking a human to choose their preferred segment. 2. *Limited Experiments* We compare all methods across eight standard benchmark tasks. The MetaWorld experiments were designed to show that, across even more tasks and dataset scales, IPL matches the performance of the best baseline that uses an explicit reward model (MR+IQL). This addresses the central motivation of our method – we can remove the reward function and still perform as well as the best baseline. If the reviewer feels strongly about the MetaWorld tasks, we are happy to run them with Preference Transformer for the final paper, but we are compute limited and do not have the bandwidth to finish them immediately, as this would require 5 tasks * 4 data scales * 5 seeds = 100 large transformer reward models, and as an academic lab we don’t have the resources available to do that on a short time-scale. **Question 1: Grad in Regularization** We apply a “stop gradient” on the target Q or V. The regularization $r^2 = (TQ)^2 = (Q(s,a) - \gamma V(s’))^2$ thus encourages $Q(s,a) = \gamma V(s’)$, which would imply that the implicit reward is zero. The $Q$-function is encouraged to be centered at the value. **Question 2: XQL vs IQL** See response to reviewer ggMH under Soft-Q vs Standard. [1] Kim et al. Preference transformer. ICLR 2023. 
[2] Lee et al., PEBBLE. ICLR 2021. --- Rebuttal Comment 1.1: Title: Reviewer response Comment: Thanks for the response. I have some follow-up comments. > Theoretical underpinnings Thanks for the response clarifying the convergence. My question was actually meant to ask a related question to what you described, but it still seems to be missing: How is $r^*$ related to the equivalence class of expert reward functions? What is the potential suboptimality incurred due to this regularization? > The preference model learns Q using a soft-Q based preference model, whereas the IQL updates do not use the soft-Q model. In the experiments, Q is learned via soft updates whereas V uses hard updates using IQL. I would be curious to see how XQL fares in the paper/discussion since that seems to be a theoretically principled algorithm. The experiments for XQL shouldn't be memory intensive and should be quite fast to run. > "we have updated the method section with a new expanded derivation of IPL under any off-policy RL algorithm that works via policy evaluation and policy improvement steps" Under a general off-policy RL method, I don't see how a closed-form solution for the optimal value in terms of Q can exist. Specifically, how do you go from the equation after line 172 to the equation after line 194? If it requires learning another policy that serves as a maximizer of the Q function, that seems to defeat the point since we have just replaced the reward function with a policy network. > Proof that IPL Converges to the Optimal Policy corresponding to the regularized expert reward? I am not sure I see why $r^*$ should be unique. Consider a 4-state ($s_1, s_2, s_3, s_4$) MDP with deterministic transitions. 
Let the dataset have uniform probability over these states: $s_1 \to s_2 \to s_3 \to s_4$, and suppose my preference is $s_1 \to s_2 \to s_3 \succ s_1 \to s_2 \to s_4$. One of the learned $r^*$ (under the regularized preference loss) is: $r(s_1) = 1$, $r(s_2) = 2$, $r(s_3) = 3$, $r(s_4) = 0$. Then another possible solution that achieves the same optimum is: $r(s_1) = 1$, $r(s_2) = 3$, $r(s_3) = 2$, $r(s_4) = 0$. Is it possible to prove uniqueness of the reward function? > Significant writing changes to clarify contributions Thanks for acknowledging this, and I believe on reading the paper again that significant writing changes need to be made to make sure the contributions are clarified. > "“Safe Imitation Learning Via Fast Bayesian Reward Inference From Preferences” from Brown et al. is largely designed for image-based experiments (our benchmarks are state-based), uses a number of self-supervised losses, and uses entire demonstrations instead of trajectory snippets. However, most critically this method requires hundreds of online MCMC rollouts during training, making it inapplicable to our offline benchmarks." I believe this work does use trajectory snippets, not entire demonstrations as the authors suggest. The MCMC rollouts are made on a linear reward function, making it efficient and fast. Prior work, "Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations", even used a pointwise estimate for reward without MCMC. Both of these baselines can work in the exact setup described here by replacing online RL with offline RL. --- Reply to Comment 1.1.1: Title: Response & Clarifications Comment: Thank you for engaging, we appreciate your response and are seeking to clarify any misconceptions and improve our paper. > Equivalence class of reward We don’t believe so. We think formally investigating this would take a significant amount of time warranting its own work. 
We would like to point out that many works in preference-based RL leverage other similar techniques for regularizing the reward function without analysis, like PEBBLE and PT. > Experiments for XQL We ran experiments with XQL (which amounts to changing the V loss in IQL) with alpha = 2 for locomotion and alpha = 5 for robomimic, chosen based on the XQL publication for Hopper, Walker2d, and Franka Kitchen. We did no additional tuning. Our results show the avg performance (3 seeds) at the end of training, with the max avg in parentheses. NOTE: Please refer to the updated table below for results with tuning -- we increased the value of alpha for locomotion. | Dataset | IPL+IQL | IPL+XQL | |---------|---------------|-------------| | h-m-r | 73.6 (90) | 1.7 (11.7) | | h-m-e | 74.5 (77.5) | 20.3 (33.3) | | w-m-r | 59.9 (66.3) | 2.4 (15.0) | | w-m-e | 108.5 (109.5) | 1.9 (76.3) | Consistent with results in [1], we see that XQL is unstable due to its objective. Recent works show that choosing different losses on V (like IQL) results in different regularizers that perform better [2]. > Method Section New Derivation: To be more general, we wanted to extend our derivation of IPL to the standard approach for off-policy RL in continuous spaces: policy evaluation using $B^\pi$, where $B$ is the Bellman operator, followed by policy improvement (usually greedy, approximated by max Q, for which there might not be a closed form). We thought the policy evaluation-improvement paradigm would be more general, as it is used to prove convergence for popular off-policy methods like SAC. In the context of IPL, this would mean using the inverse operator $T^\pi$, and repeatedly improving $\pi$. In fact, in our data-limited experiments we used AWAC, an off-policy algorithm that operates in this way. For algorithms that directly approximate $B^*$ (XQL), we could correspondingly use $T^*$ to combine policy evaluation with policy improvement. 
When using AWAC the equation after line 194 would use $V^\pi(s’) = E_{a' \sim \pi(\cdot | s')} [Q(s',a')]$, and SAC would use $V^\pi(s’) = E_{a' \sim \pi(\cdot | s’)}[Q(s’,a') - \log \pi(a' | s’)]$. The objective function (Eq 3 in the paper) would consequently also change to match that of the chosen RL algorithm. For these updates we are not using the closed form. At the end of the day we need a policy, and all offline RL algorithms need a potential function for improvement. IPL with SAC and AWAC both use just an actor and a critic, the same number of networks as SAC and AWAC with a known reward. IPL with any algorithm uses the same number of networks as that algorithm. We do not understand why the policy network would be redundant as the reviewer states, or how we could reduce the number of networks below what is required by the base algorithm. We are open to sticking to the publication's current presentation of the method using $T^*$ if the reviewers think it is more clear. > IPL Convergence Proof We apologize for not making our assumptions clear in the proof, and will update it to make this more clear. We agree that the example provided by the reviewer is under-determined. There isn’t sufficient data to specify $r_E$ up to the constant equivalence $r_E + c$ even with standard BCE. Both solutions provided by the reviewer achieve the same un-regularized preference loss. We do not argue that this problem is fixed with regularization – it remains ambiguous. Our proof assumes that we have sufficient preference data to fit $r_E + c$ with the standard BCE preference loss, and a regularizer that removes the ambiguity over $c$. We have added an addendum to our response to explicitly state this: Assume data and a regularizer $\psi$ such that $r^*$ is unique. For example, consider a simple preference optimization with $r_1$ and $r_2$. Let the sigmoid function be $g$. The BCE loss is $-y \log g(r_1 - r_2) - (1-y) \log (1 - g(r_1 - r_2))$. 
Taking the gradient with respect to $r_1$ and setting it to zero results in the condition $y = g(r_1 - r_2)$. Taking the gradient of the loss with respect to $r_2$ results in the same condition! Thus, the system is under-determined, since we can add any constant to $r_1$ and $r_2$. If we add the regularization $\lambda r_1^2 + \lambda r_2^2$ to the loss and solve for the point where both gradients equal zero, we arrive at the condition $r_1 = -r_2$. For a label $y = 0$ or $1$ we can then determine the system. If we didn’t have regularization, the reward function could shift by $c$, making it hard to argue about policy improvement using $Q$. [1] Sikchi et al. Dual RL: Unification and New Methods. 2023. [2] Offline RL with No OOD Actions. Haoran Xu et al. ICLR 2023.
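The addendum's gradient argument can be verified numerically. The following is our own sketch with hypothetical values (plain gradient descent on the two-reward toy problem), not the authors' code:

```python
import math

# Our own check of the addendum above: with the BCE preference loss alone the
# optimum is under-determined (any constant shift of r1, r2 preserves the
# loss), while adding lam*(r1^2 + r2^2) pins the stationary point to r1 = -r2.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grads(r1, r2, y, lam):
    """Gradients of  BCE(y, sigmoid(r1 - r2)) + lam*(r1^2 + r2^2)."""
    d = sigmoid(r1 - r2) - y
    return d + 2 * lam * r1, -d + 2 * lam * r2

def solve(y, lam, lr=0.5, steps=20000):
    """Gradient descent on the regularized preference loss (convex in r1, r2)."""
    r1, r2 = 0.1, 0.1
    for _ in range(steps):
        g1, g2 = grads(r1, r2, y, lam)
        r1 -= lr * g1
        r2 -= lr * g2
    return r1, r2

r1, r2 = solve(y=1.0, lam=0.1)   # converges to a point with r1 = -r2
```

At the minimum the two gradient conditions sum to $2\lambda(r_1 + r_2) = 0$, so the recovered rewards are anti-symmetric, which is the constant-removing effect of the regularizer claimed in the addendum.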
Summary: This paper addresses the problem of learning a policy given offline pairwise preference data. The proposed algorithm, inverse preference learning, directly learns a value function without explicitly learning the reward function first. This setup simplifies the common two-step approach of first inferring the reward function in a supervised manner and then learning the policy using RL. Across a range of simulated robotics experiments, it is shown that this method performs at least as well as more complex approaches that have significantly more (hyper-)parameters. Strengths: The paper is well-motivated and clearly written. The presented idea is novel, simple and well-executed. The resulting algorithm is easy to implement, has much fewer (hyper-)parameters compared to prior work, and seems to perform on par or better than prior work. Source code is also available. Given the increasing interest in reinforcement learning with human feedback (RLHF) for robotics and natural language problems, this paper should be of high interest for the community. Finally, I really appreciate that the authors invested time to improve the Markovian Reward (MR) baseline. This by itself is a nice little contribution, and I hope that future work will adopt this much stronger baseline. Weaknesses: The presentation of the results in the tables could be improved. While in Table 1 the results are based on the final success rate at the end of training, it seems that the results in Table 2 are based on the best success rate achieved throughout training (particularly apparent for the Drawer open tasks). I found this to be a bit misleading, especially since IPL seems to heavily suffer from the problem that the success rate deteriorates over time. Furthermore, I didn't fully understand why some results in the tables are bold and others are not. For example, in the Assembly task with 500 queries (Table 2), it seems that IPL is not significantly better than MR. Minor comments: 1. 
I initially got a bit confused as to why $L_r$ depends on $D_o$ and $D_p$. Perhaps you could remind the reader that the preferences in $D_p$ are also (sub-)trajectories. 2. Typos: - line 89: "offlien" -> "offline" - line 101: "Our work build on" -> "Our work builds on" - Eq. 3: Subscripts t missing for $a, s$ in $\pi$ and $\mu$ - Eq. 4: $p(\cdot|s|a)$ -> $p(\cdot|s,a)$ - line 158: "to learned" -> "to the learned" - line 199: "KL-constrianed" -> "KL-constrained" - line 223: "thus" -> "is [thus]" - line 283: "that use" -> "that uses" - line 305: "Data" -> "data" - line 315: "form" -> "from" - line 308/319: "Preference-based" -> "preference-based" Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could you please clarify the results in Table 2? I would suggest that both Table 1 and Table 2 either only report the final success rate or report both the final and best success rate. 2. Could you please double-check/clarify why certain numbers in the results are bold and others are not? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: One of the original motivations of inverse RL was that the reward function provides a succinct and transferable definition of the task (Abbeel and Ng, 2004). While recent approaches seem to no longer learn "succinct" reward functions, one could argue that transferability is still a good reason to explicitly learn a reward function. Thus, a potential limitation of the presented approach is that the ability to use the learned reward function to learn different policies gets lost since the value function depends on a particular policy. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their in-depth review. We hope to have addressed all the reviewers' concerns, and would invite them to additionally examine the new experimental and theoretical results produced in response to other reviewers. **Clarification of Results in Table 2** In our uploaded response page we have included evaluations at a fixed stopping point, like Table 1, and find the trend to be similar: IPL and MR+IQL perform very similarly, with IPL performing slightly better in 3 of the 5 tasks. We will make this the main version of the Table in the final version. We hope this improves the consistency of our experiments. **Bolding Scheme** Following IQL [1], we bold values when they are within 95% of the best method. We will state this explicitly in all Table captions. For the specific example brought up by the Reviewer, in Table 2 on Assembly for 500 queries IPL gets 0.9, 95% of which is 0.855, so MR would not be bolded. We understand that this scheme has less significance when performance values are quite low, and will make note of this as well in the Table caption. **Limitations** Thank you for pointing this out! We agree that there are some settings where transferring a reward function may be more useful than transferring a Q-function, largely stemming from the fact that a Q-function is tied to an inherent policy. If one wants to collect sub-optimal data for a task, a reward function may be necessary. Many Meta-RL approaches are also designed to leverage reward functions instead of Q-functions. Reward functions learned from preferences could also be used to smooth sparse reward functions, while a Q-function could not easily be used for smoothing (though the advantage function could! Ng et al., 1999). We will note the additional limitation of the transferability of an implicit reward function in Section 5 and include this discussion. Thank you for also finding typos! We have fixed them.
--- Rebuttal Comment 1.1: Comment: Thank you for your response. I appreciate the improved presentation in the paper as well as improved clarification through the rebuttal. On the other hand, I wasn't aware that some of the ideas used in the paper have already been introduced in prior work (as brought up by other reviewers). Overall, both aspects cancel each other out, and thus I will maintain my original score. --- Reply to Comment 1.1.1: Title: Thank you for responding. Comment: Dear Reviewer, We appreciate your engagement in the discussion period! We are happy to have addressed your concerns by improving the presentation of the paper and clarification through the rebuttal. We will make sure to propagate your comments to the final draft. Though the soft-inverse bellman operator was introduced in prior work (IQ-Learn), it has only been used in the context of online imitation learning, and has not yet been used for learning from preferences. We believe that showing the application of the (general) inverse bellman operator to the preference-based RL framework is a strong and non-trivial contribution. First, preference-based RL has become increasingly popular because of its ability to align policies / models with human intent, which will be important for robots deployed in the real world. Second, IPL shows that we can greatly simplify the PbRL problem under any RL algorithm and make learning more efficient without degrading performance. Doing this required both theoretical and engineering innovations. Thank you for your consideration!
Summary: This paper proposes a new algorithm, inverse preference learning, to learn from (offline) preferences between behavior segments. In particular, they show that human preferences can be modeled using only the Q-function, therefore eliminating the need to learn a separate reward function. The authors show this approach can succeed in a variety of environments. Strengths: This paper has several strengths: - They study the important problem of preference-based RL. - This paper makes the novel insight that human preferences can be modeled using only the Q-function, and introduces an algorithm based on this insight. - They experimentally validate different aspects of their algorithm over a wide variety of tasks. Weaknesses: **W1.** The motivation that reward-modeling has a large cost is a little bit weak (especially with regard to memory). For example, InstructGPT uses a 6B reward model and a 175B policy. The reward model is therefore only a small fraction of the memory cost. In addition, one of the main hyperparameters of reward-modeling that the authors say is difficult to tune is the stopping criterion. However, the stopping criterion for reward model training is straightforward in most papers: simply monitor the validation loss on a held-out fraction of comparison data. **W2.** This paper mainly focuses on offline RL, yet the most impactful RLHF applications are trained online, where behavior data is collected from the policy. It seems like IPL can be trained in an online manner as well, but it is not clear from the presentation. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: **Q1.** Can this method be used in an online setting, where new data is generated according to the policy? **Q2.** How do MR and IPL perform without data augmentation? And why does MR receive such a large boost from this augmentation? **Q3**. Are the reimplemented training setup and the training setup from [25] exactly the same?
If not, PT and LSTM should be reimplemented for a fair comparison. **Q4.** What is the bolding scheme in Table 1? **Q5.** Is there any way to experimentally validate the claim that IPL is more hyperparameter efficient? For example, a random/grid hyperparameter search with an equivalent budget? From Figure 2, it looks like MR is actually more hyperparameter efficient than IPL. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors do discuss limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments. **Weakness 1: Reward modeling cost** While reward models for RLHF in language models can be smaller, as in InstructGPT, in control-based domains the current trend is the opposite: reward models are getting bigger. Kim et al. [1] and Early et al. [2] use reward models that are significantly larger than the policy and/or critic. In control, we often don’t have access to large amounts of applicable pre-training data, and thus the quality of the reward function is paramount to attaining good performance. Moreover, while reward models in RLHF, like those in InstructGPT, provide feedback like a single-step contextual bandit at the end of generation, reward functions for control need to provide feedback at every step, analogous to “at each token”. Using validation loss as a stopping criterion for reward models, as the reviewer suggests, while potentially accurate, can be expensive in practice. Human preferences in many domains (including control/robotics domains) are quite expensive to collect. In addition, these preferences are often high-variance, requiring an even larger validation dataset to account for this variance. Finally, we note that IPL removes the inherent hyper-parameters in reward modeling, such as the network architecture. **Weakness 2: Online RL**: We agree that the online setting is also interesting, and there is nothing stopping IPL from being used online, since it can be combined with any off-policy RL algorithm. To test an online version of IPL, we combine our framework with SAC and compare it to PEBBLE, a standard, state-of-the-art method for off-policy online preference-based RL [3]. We include learning curves in Figure 1 of the attached document. On two of the harder Meta-World tasks from Table 2, plate-slide and drawer-open, we find that IPL trades blows with PEBBLE (outperforming it by a large margin on drawer-open) while exhibiting lower variance.
**Questions** *Q1: Online Settings*. Yes! See our response to Weakness 2. *Q2: Data Augmentation*: We ablate the use of augmentations for both IPL and MR+IQL in Hopper and Robomimic Can in Figure 2 of our uploaded page. We find that data-augmentation makes a substantial difference for MR+IQL on Hopper, but has a smaller effect in the robotics domains. The reviewer asks why this form of data-augmentation makes such a big difference. We would like to point out that this question was previously studied in SURF [5]. The fact that humans likely made similar judgements between subsets of segments is a good inductive bias for the reward function, helping its accuracy when relabeling the offline dataset. *Q3: Implementation*: We use the exact same datasets as [1] and the same evaluation procedure. We use the exact same network architectures and learning rates (and schedules). The primary difference in our implementation is the addition of data-augmentation, which we believe is a fair addition, as it has already been shown to be effective in SURF [5]. Due to the use of data-augmentation, we also adjusted the reward training steps. Everything else remains identical. The data-augmentation techniques are inapplicable to PT [1] and the NMR baseline from [2], since they are sequence modeling approaches which require the whole data sequence. We would like to additionally point out that Reviewer HQUc views our improved MR baseline to be “itself a nice little contribution”. *Q4: Bolding Scheme*: We bold results within 95% of the best performing method, as done in IQL [4]. We will include this clarification in all Table captions. *Q5: Hyper-parameter Efficiency*: The reviewer asked us to further discuss the hyper-parameter efficiency of IPL. The only parameter we searched over when tuning IPL was the regularization coefficient. We include additional ablations on the regularization coefficient $\lambda$ in our attached page, showing that in many cases, IPL is very robust to this value.
We would also like to again highlight that reward-modeling methods also have to choose an architecture for the reward network, its learning rate, batch size, etc., in addition to the stopping-point parameter we ablate for MR+IQL; thus Figure 2 is not the complete picture. Nonetheless, we will reword our points surrounding hyper-parameter efficiency. [1] Kim et al. Preference Transformer. ICLR 2023. [2] Early et al. Non-Markovian Reward Modeling from Trajectory Labels. NeurIPS 2022. [3] Lee et al. PEBBLE. ICLR 2021. [4] Kostrikov et al. Offline RL with Implicit Q-Learning. ICLR 2022. [5] Park et al. SURF: Semi-supervised Reward Learning with Data Augmentation. ICLR 2022. --- Rebuttal Comment 1.1: Title: Response by Reviewer Comment: Thank you for the detailed response! My concerns have largely been addressed. I will adjust my score accordingly.
Summary: RLHF pipelines typically consist of (1) training a reward model over human preference data and (2) using this trained reward model with a well-known RL method. This two-stage training is computationally expensive. The authors of this paper develop an algorithm, "Inverse Preference Learning", to directly learn the $Q, V$ functions, which can be easily used to extract an aligned policy. By directly learning the value functions, this approach bypasses learning a reward model, which is expensive to train and prone to problems like reward hacking. Strengths: Weaknesses: Technical Quality: 3 good Clarity: 3 good Questions for Authors: On Page 2, the authors mention - "This can be problematic as prediction errors cascade from the reward function, to the critic, and ultimately the actor causing high variance in downstream performance." Can we quantify these prediction errors more formally, to establish how serious this issue is? Only having two options in the preference data seems limiting. Can this idea be applied to the case where the preference is amongst more than two options? E.g., maximizing $p(a\succ b\succ c\succ d)$ rather than $p(a\succ b)$. On Page 6, line 214, the authors say "While such a reward function seems unrealistic"; here, why do we want rewards to be necessarily continuous? In many cases, human-defined rewards are discontinuous (e.g., a 0-1 type of reward that is 0 everywhere except when the agent reaches the goal, when it is 1). The authors mention on Lines 53-54 that "the key insight of our work is that, under a fixed policy, the Q-function learned by off-policy RL algorithms captures the same information as the learned reward function". This is not an insight developed in this paper, but rather a result from [1] (see Lemma 3.2 in [1]). Moreover, the result is for soft-Q functions, and not the standard Q function. I would recommend making these points clear in the manuscript.
Finally, there is parallel work [2] that also tries to bypass reward learning similar to this approach. It might be good to acknowledge, but I leave this decision to the authors. Suggestions related to language and typos: 1. Page 2, 2nd last line - "offlien" -> "offline" 2. Page 6, 3rd line - should "another perturbation of size \epsilon" be "another perturbation of size \epsilon prime"? References: 1. IQ-Learn: Inverse soft-Q Learning for Imitation, Garg et al. (2022) 2. Direct Preference Optimization: Your Language Model is Secretly a Reward Model, Rafailov et al. (2023) EDIT (16 Aug 2023): Updated score from 6->7. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review! The reviewer largely had theoretical questions about our work. We believe the answers to these questions will help all readers, and will correspondingly update the paper to include all the information below. **1. Cascading Errors** The reviewer asked if we could better characterize the cascading error problem. First, cascading errors in Preference-based RL can be empirically observed in many prior works. For example, in PEBBLE [1] Figure 4, we see that confidence bands for preference-based methods are far larger than those of SAC with oracle rewards. Second, we can actually theoretically characterize this problem using arguments from [2] designed for behavior cloning. In their work, once an error is made a policy goes out of distribution and will subsequently only make errors. This “cascading” error framing can be applied to the networks used in PbRL. Assume that a network makes a prediction error with probability $\epsilon$, and that all future networks necessarily make a prediction error when a previous one does, i.e. if the reward model makes a mistake, then the critic and actor also make a mistake. By [2] the total error across all networks can be bounded as $O(\epsilon N^2)$, where $N$ is the number of networks. By reducing the number of networks by 1, IPL can *theoretically* lower this bound. **2. Only Two Forms of Data** The reviewer stated that having two options in the preference data is limiting. We presented IPL only for binary preferences as it is the simplest case, but IPL can easily be extended to rankings using a Plackett Luce Model. 
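As a minimal sketch of this ranking extension (toy reward sums; assuming the standard Plackett-Luce factorization, where each rank is a softmax choice among the remaining segments):

```python
import math

def plackett_luce(returns):
    """Probability that segments are ranked returns[0] > returns[1] > ...,
    where returns[k] is the summed reward of the k-th ranked segment."""
    p = 1.0
    for k in range(len(returns)):
        # Softmax choice of the k-th ranked segment among those not yet ranked.
        denom = sum(math.exp(R) for R in returns[k:])
        p *= math.exp(returns[k]) / denom
    return p

# For K = 2 the model reduces to the Bradley-Terry probability sigmoid(R1 - R2).
R1, R2 = 1.3, -0.4
assert abs(plackett_luce([R1, R2]) - 1 / (1 + math.exp(-(R1 - R2)))) < 1e-12
```

In the ranking extension described here, each summed reward would then be replaced by its inverse Bellman expression in Q before maximizing the log-likelihood, exactly as in the binary case.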
Consider permutations $\tau$ over $K$ segments: $P_{r_E}(\tau) = \prod_{k=1}^K \left(\exp \sum_t r_E(s^{\tau_k}_t, a^{\tau_k}_t)\right) / d_k$ where $d_k = \sum_{j=k}^K \exp \sum_{t} r_E(s^{\tau_j}_t, a^{\tau_j}_t)$ Then, we make the same substitution using the inverse bellman operator giving us the permutation model implied by the Q function, and run maximum likelihood estimation over the model. $L_p (r) = \mathbb{E}_{\tau \sim \mathcal{D}_p }\left[ \log P_r (\tau) \right]$ We will include a full derivation of IPL for rankings in the Appendix. **3. Regularizing the Reward Function** The reviewer asked why we might want reward functions to be smooth instead of discontinuous, as in binary. We hypothesize that, though a human may design a binary reward function, human preferences are often smooth. For example, we might make judgments more smoothly across different comparisons, instead of attributing all reward to a single frame. Moreover, smooth reward functions have been shown to generally perform better [3], while sparse ones are harder to optimize. In our case the regularization is also necessary to remove the ambiguity of the Bradley Terry model – which can only recover $r_E$ up to a constant. See proof in the central reviewer response. **4. Key Insight** Thank you for pointing this out – we intended to frame our key insight as extending the inverse-bellman operator from IQ-Learn. We will update the text to make it clear that our insight is applying this to the reward function in preference-based RL. **Soft-Q vs. Standard, Proof of Bijection** While we presented our method (IPL) with XQL, IPL can in fact use any RL update. We thought that presenting our method with XQL would make it easier to understand in the context of IQ-Learn, which originally introduced the inverse bellman operator and used soft Q-learning. 
IQ-Learn requires the soft-Q framework because a) it was developed for soft-inverse RL, and b) it guarantees that the saddle-point between the reward and policy is unique for inverse-RL. We used IQL in experiments to match baselines, and because XQL is often unstable. However, the bijection between Q-functions and reward functions under a fixed policy exists regardless of whether one uses soft Q-learning, allowing IPL to use any RL update. We will make this more clear in the final draft. Lemma 3.2 in IQ-Learn in fact does not depend on the soft-Q framework: Let $P^\pi$ be the transition matrix for the MDP corresponding to a fixed policy $\pi$. In vector form, the Bellman equation becomes $Q = r + \gamma P^\pi Q$, or $r = (I - \gamma P^\pi) Q$. We can establish a bijection by showing that $(I - \gamma P^\pi)$ is invertible. $\|\gamma P^\pi\| < 1$ by construction, as $P^\pi$ is row-stochastic (so its norm is at most 1) and $\gamma < 1$, which guarantees that the Neumann series $\sum_{k} (\gamma P^\pi)^k$ converges. This implies the existence of $(I - \gamma P^\pi)^{-1}$. Thus, $Q = (I - \gamma P^\pi)^{-1} r$ and a bijection exists. **Connections to DPO** We are aware of the recent DPO work, which was released after the NeurIPS submission deadline. While IPL and DPO share some theoretical connections, we’d like to note that DPO is designed for and limited to the contextual bandit setting. This setting is appropriate for RLHF in LLMs; however, DPO would not apply to the more general setting of preferences over sequences of states and actions. IPL is strictly more general: if you take IPL with XQL for bandits, you exactly recover DPO. Within the bandit setting, there is no “next state”, $V^*(s')$ is removed, and the inverse Bellman operator becomes just $Q(s,a) = r(s,a)$. The optimal XQL policy is $\pi^* = \mu(a|s) e^{Q^*(s,a)}/ Z(s)$ where $Z$ is the partition function. By rearranging, $TQ = Q^*(s,a) = \log \frac{\pi(a|s)}{\mu(a|s)} + \log Z(s)$.
We can plug this into the preference model induced by Q in Eq. 6 of IPL. In the RLHF setting, the partition function cancels since we assume the context to be the same between preferences. This exactly results in the DPO algorithm, showing that DPO is in fact just an instantiation of IPL. We will include this in the Appendix. Thanks for finding typos! [1] Lee et al. PEBBLE. ICLR 2021. [2] Ross and Bagnell. Efficient Reductions for Imitation Learning. AISTATS 2010. [3] Ng et al. Policy Invariance Under Reward Transformations. ICML 1999. --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: The authors' response has answered most of my queries. I will increase my score to 7.
Rebuttal 1: Rebuttal: Dear reviewers, Thank you all for your detailed feedback. We have responded to all reviewers individually with more content, but wanted to make a global list of the major changes that we have made to the manuscript and the additional experiments included in our allowed one-page upload. **Theoretical** A number of reviewers had questions about the theory and derivation of IPL. We have made a number of changes to improve the presentation and clarity of the method. 1. Reviewers ggMH and HK3n asked why IPL was derived with XQL but our experiments were run with IQL. Originally, we used XQL for our derivations to be consistent with the Inverse RL literature that has previously used the MaxEnt framework, but used IQL for experiments to exactly match baselines. To remove this confusion, we have updated the method section with a new expanded derivation of IPL under any off-policy RL algorithm that works via policy evaluation and policy improvement steps, not just XQL. 2. We apologize if the contribution statement was not clear (Reviewers ggMH, HK3n). We have now made it clear that our contribution is the application of the inverse Bellman operator to preference-based RL, not the inverse Bellman operator itself. 3. Reviewers ggMH and HK3n asked about the validity of IPL’s practical implementation given that it does not necessarily use a MaxEnt RL algorithm. However, the bijection between $Q$ and $r$ for a fixed policy holds in the general case (not just soft Q-learning). In our response to Reviewer ggMH we include a proof of this bijection, analogous to Lemma 3.2 in IQ-Learn, for the general case. 4. Reviewer HK3n asked for a more theoretical understanding of IPL, which we believe will be of interest to all reviewers. We prove that IPL converges to the optimal policy for the expert reward function subject to regularization and show that regularization is necessary to guarantee this convergence.
For space, this is included below and will be in the final version of the paper. **Experimental** (see uploaded PDF) 1. Reviewer kKfF asked about IPL’s applicability to online settings. Since IPL works for any off-policy RL algorithm, it also works online. We compare it with PEBBLE on two of the harder Meta-World tasks from Table 2, and find that IPL trades blows with PEBBLE while exhibiting lower variance. 2. Reviewer kKfF asked about our implementation of MR+IQL and data-augmentation. We ablate the use of augmentations for both IPL and MR+IQL in Hopper and Robomimic Can. We find that data-augmentation makes a substantial difference for MR+IQL on Hopper, but has a smaller effect in the robotics domains. 3. Reviewers kKfF and HK3n asked about the regularization values used by IPL. We include additional ablations on the regularization coefficient $\lambda$, showing that in many cases, IPL is very robust to this value. 4. Reviewer HQUc asked that we change Table 2 to use the same evaluation criterion as Table 1. We have included evaluations at a fixed stopping point and find the trend to be similar: IPL and MR+IQL perform very similarly, with IPL performing slightly better in 3 of the 5 tasks. **Proof that IPL Converges to the Optimal Policy corresponding to the regularized expert reward** First, note that *the Bradley-Terry model is underspecified* and only recovers $r_E$ up to a constant shift, as constants are canceled by the Boltzmann distribution. While constant shifts don’t change the optimal policy, they do change the Q-function. To prove convergence, we show that the sequence of Q-functions is increasing, as is standard practice for off-policy RL algorithms. This is only possible if the reward function does not shift when optimizing the preference loss. We prove this statement in the tabular setting. Let $Q_t \in \mathbb{R}^{|S \times A|}$ and $\pi_t$ indicate the Q-function and policy after update $t$, respectively. Let $Q_0 = 1/(1 - \gamma) \min_{S \times A} r(s,a)$.
The inverse Bellman operator tells us, in vector form, that $r = (I - \gamma P^\pi)Q$ where $P^\pi$ is the transition matrix. Let $r^* = \arg \min_r \mathbb{E}_{D_p}[-y \log P_r - (1-y) \log (1 - P_r)] + \lambda \|r\|^2$, i.e., the minimizer of the regularized preference loss. At each step of IPL, we substitute the inverse Bellman operator into the preference loss and optimize. Thus at convergence, $(I - \gamma P^{\pi_t})Q_t = r^*$ holds uniquely, because of the bijection between $r$ and $Q$ under a fixed $\pi$ (see the proof under “Soft-Q vs. Standard” in our rebuttal to Reviewer ggMH). Then, we use any off-policy RL algorithm that guarantees policy improvement (i.e., $Q^{\pi_{t+1}} \geq Q^{\pi_{t}}$) to obtain a new policy $\pi_{t+1}$ from $\pi_t$ and $Q_t$. Using $\pi_{t+1}$ we can obtain the transition matrix $P^{\pi_{t+1}}$ in tabular settings. Finally, we optimize the preference loss again using $P^{\pi_{t+1}}$ in the inverse Bellman operator to obtain $Q_{t+1}$. At convergence $(I - \gamma P^{\pi_{t+1}})Q_{t+1} = r^*$ holds. Due to regularization $r^*$ is unique, and thus $Q_t$ and $Q_{t+1}$ are both Q-functions for the reward function $r^*$, just under different policies. If we did not have regularization, this might not be the case. However, we know from the policy improvement step that $Q^{\pi_{t+1}} \geq Q^{\pi_{t}}$ necessarily, and thus $Q_{t+1} \geq Q_{t}$ for any $t$. The base case holds because $\forall \pi, Q_0 \leq Q^\pi$ by construction. This concludes the proof. Thus, by repeatedly optimizing the preference loss and improving the policy, we obtain the optimal policy corresponding to $r^*$. This can also be seen (very) informally by looking at Eq. 6 in our paper: if the policy improvement step works, then $V^{\pi_{t+1}}(s') \geq V^{\pi_{t}}(s')$, and thus $Q(s,a)$ will need to *increase* to fit the same optimum of the regularized preference loss. **Conclusion** We hope that the increased theoretical and experimental rigor addresses the reviewers' concerns.
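The two building blocks used in the proof, invertibility of $(I - \gamma P^\pi)$ and the resulting one-to-one map between $r$ and $Q$, can be sanity-checked numerically; here is a hypothetical sketch with a made-up row-stochastic matrix standing in for $P^\pi$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 6, 0.9  # 6 state-action pairs, discount 0.9

# Row-stochastic transition matrix P^pi under some fixed policy (made up here).
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)

r = rng.standard_normal(n)  # arbitrary reward vector

A = np.eye(n) - gamma * P   # inverse Bellman operator in matrix form
Q = np.linalg.solve(A, r)   # Q = (I - gamma P^pi)^{-1} r exists...

# ...because the spectral radius of gamma * P^pi is at most gamma < 1:
assert np.max(np.abs(np.linalg.eigvals(gamma * P))) < 1.0
# Bellman consistency: Q = r + gamma P^pi Q, so r and Q determine each other.
assert np.allclose(Q, r + gamma * (P @ Q))
```

Since `np.linalg.solve` returns the unique solution of a nonsingular linear system, recovering `Q` from `r` (and vice versa via `A @ Q`) exhibits the bijection for this fixed policy.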
See individual responses for more clarifications. Thank you! Pdf: /pdf/a4b9b0d15e006958a0a02f1fa477c1daa5df49f3.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective
Accept (poster)
Summary: Edit: Updating score from 6 to 7 based on the discussions. Extracting useful representations from unlabelled data for label-efficient training of the medical image segmentation task is a widely studied problem. This work approaches learning useful representations using contrastive learning (CL) within a semi-supervised setting. Strategies for obtaining variance-reducing pixel partitions for CL are presented, along with theoretical analyses that show their variance-reduction properties. The variance reduction estimation is also used to improve training stability and convergence. Comprehensive experiments on multiple medical imaging and computer vision datasets are performed, showing strong performance improvements compared to other SSL methods. The authors show their method is label-efficient across all the datasets. Strengths: * Focusing on variance reduction guarantees to extract contrastive samples for pixel-level training is a strong contribution of this work. * The theoretical analyses showing their unbiasedness and the use of variance reduction techniques to improve training stability can have important applications in related domains like self-supervised learning. * The experimental evaluation is extensive, with strong performance improvements on multiple datasets. * The contrastive loss landscape visualisation in Fig 3 is insightful; that the contrastive learning holds across datasets is quite convincing. Weaknesses: * **Robustness**: This work makes two main claims about the usefulness of their CL framework. Firstly, and convincingly so, about label efficiency. There are several places in the paper where model robustness is alluded to, or strong claims made, without any evidence. This could be because the authors view robustness simply to be good performance across multiple datasets? If so, this should be clarified. Currently, the claims about model robustness are misleading. See [1,2,3] for different robustness analyses of deep neural networks.
* **Assumption in Th. 3.2**: The guarantee $Var[H_{SG}] < Var[H_{NS}]$ only holds if different $P_m$ do not have the same expected value over the aggregation function h(x;p). How is this ensured? In medical images, there are scenarios where the differences between classes are small in both intensity and feature space. How do the variance reduction guarantees hold in such situations? * **Aggregation functions**: Aren't the aggregation functions some type of distance measure? And why is it expensive to compute these on dense pixel grids (L213)? * **Main method in Appendix**: While I appreciate all the details presented in this paper, moving the main method to the Appendix is not a good idea. Several of the important details are in the Appendix, which defeats the purpose of what an Appendix is supposed to be. * **Literature overview**: I was curious as to why the authors refrained from discussing self-supervised representation learning, both when motivating the work and in their general discourse. [1] Bastani, Osbert, et al. "Measuring neural net robustness with constraints." Advances in Neural Information Processing Systems 29 (2016). [2] Carlini, Nicholas, and David Wagner. "Towards evaluating the robustness of neural networks." 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017. [3] Singh, Gagandeep, et al. "Fast and effective robustness certification." Advances in Neural Information Processing Systems 31 (2018). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Authors do not discuss any limitations of their work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for acknowledging our contribution to the self-supervised field, appreciating the strong performance improvement on multiple datasets, and providing constructive suggestions for the presentation of our work! We’ve made a substantial revision to the paper, which addresses all the issues, with emphasis on improving the clarity of our work. If you have further concerns, please feel free to contact us. > **Q1**: clarifying robustness in our paper. **A1**: Thank you for the great suggestion! “Robustness” is a comprehensive concept used to describe the performance of a segmentation model. A model is said to be robust if (1) it achieves high segmentation quality while using only extremely limited labels on long-tailed medical data (please see Line 141); and (2) it converges quickly (please see Line 207). We agree with the reviewer and appreciate them for raising the concern about the wording. We will surely polish the wording in our final revision. > **Q2**: Assumption in Th. 3.2: The guarantees of variance reduction only hold if the $P_m$ do not have the same expected value over the aggregation function h(x;p). How is this ensured? In medical images there are scenarios where the differences between classes are small in both intensity and feature space. How do the variance-reduction guarantees hold in such situations? **A2**: Thm 3.2 simply indicates that, provided all the pixel groups $\{P_m\}_m$ do not have exactly the same distribution, it is guaranteed that $Var[\hat{H}_{SG}] < Var[\hat{H}_{NS}]$. Barring the unlikely condition where all pixel groups share an identical distribution in the feature space, it can be confidently asserted that SG will almost certainly realize a reduction in variance. Expanding on this point, the degree of variance reduction achievable is, as anticipated, contingent on the differentiation among the pixel groups. 
We totally agree with you that there exist scenarios where the differences between classes are small in both intensity and feature space. Under these circumstances, the variance reduction attributable to stratified group sampling would be comparatively insignificant. This aligns with the mathematical principle that variance is inherently linked to the extent of diversity in the data. > **Q3**: Aren't the aggregation functions some type of distance measure? And why is it expensive to compute these on dense pixel grids (L213)? **A3**: An aggregation function refers to a function that is additive in pixels; that is, it can be expressed as a sum of functions of individual pixels, as defined in Eq (3.1). Therefore, an aggregation function is a general concept that is not limited to a distance measure. In our paper, a loss function (e.g., $L_{contrast}$) is an example of an aggregation function. The computation would be overwhelming on dense pixel grids because we would have to aggregate information from all available pixels across all images. Note that a typical 2D image can have 256x256 pixels, and there can be tens of thousands of training medical images (e.g., LiTS: 16684 slices). Thank you for the feedback, and we will further clarify this in the revision. > **Q4**: While I appreciate all the details presented in this paper, moving the main method to the Appendix is not a good idea. Several of the important details are in the Appendix, which does not serve the purpose Appendices are supposed to serve. **A4**: Thank you for the great suggestion! We agree with your point that the relevant method details should be in the main paper. In our final revision, we will follow your constructive advice and move the key details from the appendix to the main paper. 
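The stratified-vs-naive variance argument in A2/A3 above can be checked with a small numerical sketch. Everything below is an illustrative assumption, not the paper's setup (three synthetic pixel groups with different means of a scalar per-pixel quantity stand in for anatomical classes; the paper's aggregation function is a contrastive loss over embeddings): both estimators are unbiased for the overall mean, but the stratified one has smaller variance, matching the claim of Theorem 3.2.

```python
import numpy as np

# Toy check of Var[H_SG] < Var[H_NS] (all numbers illustrative, not from the paper).
# Three "pixel groups" with different means of a scalar per-pixel quantity h(p);
# we estimate the overall mean of h by naive sampling (NS) and by stratified
# group sampling (SG: equal draws from each group) and compare estimator variances.
rng = np.random.default_rng(0)

groups = [rng.normal(mu, 1.0, size=10_000) for mu in (0.0, 2.0, 5.0)]
pixels = np.concatenate(groups)     # equal-sized groups -> SG stays unbiased

n, trials = 30, 5_000
ns_est = np.array([rng.choice(pixels, n).mean() for _ in range(trials)])
sg_est = np.array([
    np.mean([rng.choice(g, n // len(groups)).mean() for g in groups])
    for _ in range(trials)
])

print(f"true mean {pixels.mean():.3f}, NS mean {ns_est.mean():.3f}, SG mean {sg_est.mean():.3f}")
print(f"NS estimator variance: {ns_est.var():.4f}")   # ~ total variance / n
print(f"SG estimator variance: {sg_est.var():.4f}")   # between-group term removed
```

The gap between the two variances is exactly the non-negative between-group term from the decomposition in Theorem 3.2: it vanishes only when all group means coincide.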
> **Q5**: I was curious as to why the authors refrained from discussing self-supervised representation learning, both when motivating the work and in their general discourse. **A5**: Thank you for your insightful suggestion! We agree with your view that an in-depth discussion of self-supervised representation learning would enhance the comprehension and context of our work. We truly appreciate your constructive advice and will incorporate it to improve our manuscript. Given the limited space for this rebuttal, we aim to expand one subsection within the 'Related Work' section and offer a comprehensive analysis of self-supervised representation learning within the 'Introduction' section in our final revision. In line with your valuable advice, we're sharing a few snippets of our intended discussion below.

```
Self-supervised representation learning is a subclass of unsupervised learning, but with the critical distinction that it incorporates “inherent” supervision from the input data itself. The primary aim of self-supervised representation learning is to enable the model to learn the most useful representations from large amounts of unlabelled data for various downstream tasks. Self-supervised learning typically relies on pretext tasks, including predictive, contextual, and generative or reconstructive tasks. Among them, contrastive learning is considered a popular approach for self-supervised representation learning: it pulls the representations of similar instances closer and pushes the representations of dissimilar instances further apart in the learned feature space.
```

Your advice is greatly appreciated. We will make corresponding revisions to clarify and provide details on them. Thank you once more for your valuable feedback! We have performed a substantial revision to the paper, placing particular emphasis on enhancing the clarity of our methodology. These changes will be incorporated into our final submission. 
We trust that these modifications better position our work for publication. Please do not hesitate to reach out if you have further comments or queries. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: I have now read the author rebuttal, which addresses most of the concerns raised in my initial review. I have also seen the discussions between other reviewers, and the effort invested by the authors in these discussions is appreciated. I am willing to raise my score from 6 to 7. --- Reply to Comment 1.1.1: Title: Thank you again for your review and very valuable feedback! Comment: We sincerely thank you for taking the time to review our rebuttal and for recognizing the positive changes we have made to address the concerns raised. We deeply appreciate your consideration in adjusting the score. Your feedback has been invaluable in refining our work, and we are committed to ensuring the highest quality in our final manuscript. Again, we express our gratitude for your thoughtful review and positive reassessment!
Summary: The authors present ARCO, a novel semi-supervised contrastive learning framework that employs a stratified group sampling strategy (i.e., **SG** and **SAG**) to compute gradient estimators with reduced variance, thereby enhancing representation learning in dense contrastive learning. By improving dense contrastive learning, ARCO addresses issues related to class imbalance and enhances the performance of semi-supervised segmentation, particularly in scenarios with long-tail distributed anatomical classes. The authors provide theoretical evidence demonstrating the effectiveness of the proposed sampling techniques in reducing the variance of the aggregation function, specifically the contrastive loss. The efficacy of the proposed sampling techniques is validated on eight 2D/3D benchmark datasets with different label settings, further reinforcing their effectiveness and practical applicability. Strengths: The motivations are clear, and the method is reasonable. This study shows me a new insight/direction for considering medical image segmentation. The following are my detailed comments. --- **Empirical contribution** Pixel/voxel-wise sampling constitutes a critical facet of contrastive learning at the pixel/voxel level. With the aid of variance-reduction estimation, the authors proffer two pragmatic approaches - Stratified Group (SG) and Stratified-Antithetic Group (SAG) - tailored for pixel/voxel-level segmentation tasks with exceedingly scarce labels. * The authors introduce a novel framework termed ARCO (strAtified gRoup COntrastive learning) devised for multi-class segmentation tasks. This framework appears to be both intriguing and efficacious. The authors undertake a rigorous validation of the proposed methodologies across eight benchmark datasets, encompassing three 2D medical image segmentation, two 3D medical image segmentation, and three semantic segmentation benchmarks. 
* The empirical results, both quantitative and qualitative, attest to the efficacy of the proposed model across all label ratios and datasets. For instance, the model demonstrates a marked enhancement in segmentation accuracy (up to 4.1% absolute improvements in Dice coefficient) on the challenging multi-class MMWHS dataset under a 1% label setting. * The authors conduct comprehensive ablation studies to substantiate that the proposed mechanisms merit consideration. These studies encompass eight benchmark datasets, diverse network architectures, and varying label ratios to validate the efficacy, model-agnostic nature, and label efficiency of the proposed methodology. * Lastly, the proposed methodologies are not only facile to implement but also boast of universal applicability. For instance, they can be seamlessly integrated into any scenario necessitating pixel/voxel sampling. The paper presents a robust and versatile framework for pixel/voxel-level contrastive learning, which is empirically validated through extensive experiments and ablation studies. The methodologies are characterized by ease of implementation and broad applicability, making them a valuable contribution to the field of image segmentation. --- **Theoretical contribution** * The proposed methodologies, SG (Stratified Group) and SAG (Stratified-Antithetic Group), have exhibited remarkable efficacy in the experimental study. Consequently, it is intriguing to ascertain whether theoretical insights can elucidate this enhancement in performance. To this end, the paper furnishes a cogent theoretical analysis of the methodologies, revealing that the variance-reduction attribute of the two sampling methods is instrumental to their performance. * First and foremost, in Section 3.3, the paper meticulously delineates the SG and SAG sampling methodologies through lucid mathematical equations. 
SG is executed by segregating pixels into mutually exclusive groups, followed by uniform sampling of a specified number of pixels from each group. SAG, which is predicated on SG, imposes an additional constraint of symmetry among the sampled pixels within each group. * Subsequently, the sampled pixels are amalgamated through an aggregation function, which acts as an estimator for the target quantity. In the realm of image segmentation, this quantity could be, for instance, the contrastive loss function. It is posited that an optimal balance must be struck in the sample size; an overly diminutive sample size may fail to encapsulate the salient information from the underlying image, whereas an excessively large sample size would entail high computational complexity. Thus, the ideal scenario would be for SG to capture the crux of the image information through a relatively modest sample size. The paper demonstrates that SG possesses this attribute by establishing that it achieves reduced variance in comparison to the naïve sampling method (i.e., uniform random sampling from all pixels). Specifically, Theorem 3.2 establishes that SG is an unbiased sampling methodology, with a variance that does not exceed that of naïve sampling. More precisely, the variance of SG can be decomposed into the variance of naïve sampling minus a non-negative term. This non-negative term is conjectured to be almost certainly greater than zero, as it would be zero only if the expectation of the aggregation function within each group is identical to the expectation over the entire image, which is highly improbable. Thus, Theorem 3.2 suggests that SG is likely to consistently outperform naïve sampling. * Figure 5 reveals that SG/SAG exhibits marginally expedited training convergence compared to naïve sampling. This observation is theoretically substantiated in the concluding paragraph of Section 3, which is commendable. 
* In my assessment, the theoretical analysis presented in the paper is inextricably linked to the empirical component and provides a persuasive rationale for the empirical performance augmentation of SG vis-à-vis naïve sampling, thereby bringing the empirical narrative full circle. --- **To sum up** **1. Clarity** The manuscript is eloquently composed, proffering a lucid and cogent progression of information. The authors adeptly elucidate the procedural framework, facilitating the readers' comprehension of the proposed methodologies, namely, the two instance sampling methods - Stratified Group (SG) and Stratified-Antithetic Group (SAG). In Section 3.3, the SG and SAG sampling methods are meticulously delineated through precise mathematical formulations. The discourse within the Methodology section (Section 3) furnishes an exhaustive exposition of the innovations introduced by the study, adeptly accentuating the distinct contributions of the research to the scholarly domain. **2. Novelty** This is the first work, both empirically and theoretically, to validate the variance-reduction approach within the context of pixel/voxel-level contrastive learning for semi-supervised medical image segmentation, particularly in scenarios characterized by a paucity of labels. **3. Experimental Comprehensiveness** The authors undertake a comprehensive suite of experiments encompassing eight datasets, which include both 2D and 3D medical modalities, as well as semantic segmentation benchmarks. Additionally, a diverse array of contrastive learning frameworks and varying label ratios are employed to rigorously assess the efficacy, model-agnostic properties, and label efficiency of the proposed methodology. This extensive experimental evaluation substantiates the robustness and versatility of the technique in the domain of medical image segmentation. **4. 
Theoretical Implication** The authors furnish a cogent and meticulously articulated theoretical analysis of the proposed approach, elucidating the underlying principles with clarity and precision. This analytical exposition contributes to a deeper understanding of the methodology's foundations and its implications. Weaknesses: The proposed method is very interesting. There is no obvious weakness in the proposed ARCO. However, I do have a few questions. See the following section. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * The proposed two sampling techniques are intriguing. I wonder if you could kindly provide insights on the applicability of these sampling techniques to different tasks. Such information would prove beneficial not only to the current domain but also to other related domains and scenarios. Understanding the potential applications of these techniques would undoubtedly contribute to the advancement of various fields. * Lemma 3.1 demonstrates that the variance of SAG is at most twice that of SG, which implies that the variance-reduction magnitudes of SAG and SG are roughly at the same level. Could the authors further elaborate on when SAG would be preferred to SG? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have addressed the potential broader impact in clinical scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for acknowledging our contribution and providing suggestions for the presentation of our work! In particular, we agree that discussing the potential applications and a more detailed classification is necessary to position this work properly. If you have further concerns, please feel free to contact us. > **Q1**: The proposed two sampling techniques are intriguing. I wonder if you could kindly provide insights on the applicability of these sampling techniques to different tasks. Such information would prove beneficial not only to the current domain but also to other related domains and scenarios. Understanding the potential applications of these techniques would undoubtedly contribute to the advancement of various fields. **A1**: Thank you for the great suggestion! Our sampling techniques provide pragmatic solutions for variance reduction, fostering their use in a wide array of real-world applications and sectors. These include but are not limited to 3D rendering, augmented reality (AR), virtual reality (VR), trajectory prediction, and autonomous driving. We agree with the reviewer and appreciate them for the constructive suggestion. We will add the potential applications in our final revision. > **Q2**: Lemma 3.1 demonstrates that the variance of SAG is at most twice that of SG, which implies that the variance-reduction magnitudes of SAG and SG are roughly at the same level. Could the authors further elaborate on when SAG would be preferred to SG? **A2**: Thank you for the great suggestion! SAG, compared with SG, only samples half the number of pixels; the other half are chosen as the opposites with respect to the group center. This is described by the equation $c_m - p = p' - c_m$ following Line 219. In other words, these two pixels, $p$ and $p'$, are opposite each other w.r.t. $c_m$. 
In general, SAG will outperform SG if the values of the component function $h(\cdot)$ are negatively correlated for two pixels $p$ and $p'$ that are opposite each other w.r.t. $c_m$. To demonstrate this more rigorously, we consider the variance of the aggregation function restricted to two pixels, which can be written as $Var[h(p)+h(p')]$. For SG sampling, $p$ and $p'$ are independently sampled in the pixel group $m$, and therefore we have $Var[h(p)+h(p')] = Var[h(p)] + Var[h(p')] = 2\sigma_m^2$. In stark contrast, for SAG, since the choice of $p'$ depends on $p$, we have $Var[h(p)+h(p')] = Var[h(p)] + Var[h(p')] + 2Cov[h(p), h(p')] = 2\sigma_m^2 + 2Cov[h(p), h(p')]$. This indicates that when $Cov[h(p), h(p')] < 0$, i.e., $h(p)$ and $h(p')$ are negatively correlated, it holds that $Var[h(p)+h(p')] < 2\sigma_m^2$. In such cases SAG would have smaller variance than SG. We will make corresponding revisions and provide detailed elaborations of them. Overall, thank you again for your suggestions and review! We believe that discussing the potential applications and a more detailed classification will greatly improve the paper. We will include all the modifications in our final revision. We hope that the revision puts our work in better shape for publication. Please feel free to contact us with further concerns. --- Rebuttal Comment 1.1: Title: Response to Authors' rebuttal Comment: Thank you for your responses to the questions I raised. The response was satisfactory and addressed my concerns. Besides that, I'd also like to discuss/highlight the following point. - The notable contribution of the proposed SG sampling method is in improving the representation quality for pixel/voxel-level contrastive learning. The superiority of SG over naïve sampling has been shown across various datasets in the draft. Nonetheless, I would appreciate further insight into the SAG method. 
From my observation, SAG does not surpass SG in most experiments, except in the specific context of the SUN RGB-D benchmark under the 50-label setting. Yet, the performance of SAG remains comparable to SG. Consequently, I interpret the SAG method as an alternative to SG, designed to achieve comparable results but with a reduced sample size. Would this be a correct interpretation? --- Reply to Comment 1.1.1: Title: Response to Reviewer LvuQ Comment: We sincerely thank the reviewer for acknowledging the positive changes we have made to the paper. If you have further concerns, please feel free to contact us. > **Q1**: The notable contribution of the proposed SG sampling method is in improving the representation quality for pixel/voxel-level contrastive learning. The superiority of SG over naïve sampling has been shown across various datasets in the draft. Nonetheless, I would appreciate further insight into the SAG method. From my observation, SAG does not surpass SG in most experiments, except in the specific context of the SUN RGB-D benchmark under the 50-label setting. Yet, the performance of SAG remains comparable to SG. Consequently, I interpret the SAG method as an alternative to SG, designed to achieve comparable results but with a reduced sample size. Would this be a correct interpretation? **A1**: We thank the reviewer for acknowledging the positive changes we have made to the paper. Yes, your interpretation is correct! SAG can halve the sample size compared to SG while largely preserving SG's variance-reduction property, offering enhanced theoretical efficiency. We thank the reviewer again for the constructive feedback, which helped shape this revision! Please do not hesitate to reach out should you have any additional feedback or questions.
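The covariance identity in the rebuttal above (SAG's antithetic pair versus two independent SG draws) can be illustrated with a one-group numerical sketch. The group distribution, its center, and the monotone component function below are toy assumptions, not the paper's implementation: for a monotone h, the antithetic partner $p' = 2c_m - p$ makes $Cov[h(p), h(p')]$ negative, so the SAG pair has smaller variance.

```python
import numpy as np

# Illustrative check of Var[h(p)+h(p')] for SG (independent p') vs SAG
# (antithetic p' = 2*c_m - p); h and the pixel group are toy assumptions.
rng = np.random.default_rng(1)

c_m = 0.5                                       # group center
h = lambda x: x ** 3                            # monotone component function (toy)

p = rng.uniform(0.0, 1.0, size=200_000)         # pixels in one group
p_indep = rng.uniform(0.0, 1.0, size=200_000)   # SG: independent partner
p_anti = 2 * c_m - p                            # SAG: antithetic partner

var_sg = (h(p) + h(p_indep)).var()              # ~ 2 * sigma_m^2
var_sag = (h(p) + h(p_anti)).var()              # ~ 2*sigma_m^2 + 2*Cov[h(p), h(p')]
cov = np.cov(h(p), h(p_anti))[0, 1]

print(f"SG  pair variance: {var_sg:.4f}")
print(f"SAG pair variance: {var_sag:.4f}")
print(f"Cov[h(p), h(p')] : {cov:.4f}")          # negative for monotone h
```

When h is not monotone over a group, the covariance can be positive and the SG/SAG ordering can flip, which is consistent with SAG not dominating SG in every benchmark.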
Summary: This paper proposes two new sampling strategies, SG and SAG, for contrastive learning in semi-supervised frameworks. Compared with randomly sampling pixels for contrastive learning, pixels are grouped into several subsets, and then pixels are sampled from each subset. The proposed method is proven to reduce the variance of sampled pixels. Solid experiments are conducted to support the above claim. Strengths: 1. Solid experiments and good performance. 2. Simple yet effective method that benefits contrastive learning for semi-supervised frameworks. 3. Sound theoretical analysis. Weaknesses: 1. It would be better to also compare NS with the proposed SG/SAG on natural image datasets. 2. What is the application scenario of SAG? SG seems to have better performance and stability than SAG. 3. Line 222-223, what is the meaning of "p is orthogonal to p'"? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have included a discussion on the potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for acknowledging our contribution to the medical image analysis field, appreciating the good performance improvement on our multi-class medical segmentation task, and providing constructive suggestions for the presentation of our work! We’ve made a substantial revision to the paper, which addresses all the issues, with emphasis on experiments on natural image datasets and the clarity of the explanation of our work. If you have further concerns, please feel free to contact us. > **Q1**: It would be better to also compare NS with the proposed SG/SAG on natural image datasets. **A1**: Thank you for the great suggestion! We agree with your point that it is better to compare NS with the proposed SG/SAG on natural image datasets. Indeed, we have compared NS with the proposed SG/SAG on three natural image datasets in the Appendix (i.e., Cityscapes [84], Pascal VOC 2012 [85], and the indoor scene segmentation dataset SUN RGB-D [86]). All the experiments are conducted under the same experimental setting [87]. For your convenience, the following table shows the comparison results on Cityscapes, Pascal VOC, and SUN RGB-D (please see Appendix K, lines 806 - 824, for qualitative/visual results). 
| | Pascal VOC | | | | CityScapes | | | | SUN RGB-D | | | |
| :--- | ---: | :---: | :---: | :---: | ---: | :---: | :---: | :---: | ---: | :---: | :---: | :---: |
| Method | 60 labels | 120 labels | 600 labels | all labels | 20 labels | 50 labels | 150 labels | all labels | 50 labels | 150 labels | 500 labels | all labels |
| Supervised | 39.4 | 55.5 | 64.6 | 77.8 | 38.2 | 45.9 | 55.4 | 70.9 | 20.0 | 29.2 | 38.9 | 51.8 |
| ReCo [87] + ClassMix | 57.1 | 69.4 | 73.2 | - | 49.9 | 57.9 | 65.0 | - | 30.5 | 40.4 | 44.6 | - |
| ARCO-SAG (9 Grid) + ClassMix | 58.3 | 70.5 | 75.4 | - | 50.2 | 60.2 | 66.5 | - | 31.5 | 40.9 | 45.7 | - |
| ARCO-SAG (16 Grid) + ClassMix | 58.7 | 70.9 | 75.1 | - | 50.1 | 60.6 | 66.3 | - | 37.8 | 40.2 | 45.7 | - |
| ARCO-SAG (25 Grid) + ClassMix | 59.1 | 70.9 | 74.9 | - | 49.8 | 60.6 | 66.7 | - | **38.5** | 40.5 | 45.5 | - |
| ARCO-SG (9 Grid) + ClassMix | 59.2 | **71.8** | 75.3 | - | 52.5 | 60.9 | **66.8** | - | 32.4 | 41.4 | 46.6 | - |
| ARCO-SG (16 Grid) + ClassMix | **59.6** | 71.7 | **75.5** | - | **53.7** | 61.2 | 66.2 | - | 37.7 | 41.0 | 46.4 | - |
| ARCO-SG (25 Grid) + ClassMix | 59.5 | 71.7 | 75.2 | - | 51.5 | **61.8** | 66.4 | - | 38.3 | **41.5** | **47.3** | - |

As the table shows, SG and SAG consistently improve performance over NS in all the semi-supervised settings. The results, quantitative (Appendix Table 5, Page 25) and qualitative (Appendix Pages 26 - 34), clearly demonstrate the effectiveness of our proposed SG/SAG. Due to the limited response space, we will surely provide detailed discussions of them in our revision. > **Q2**: What is the application scenario of SAG? SG seems to have better performance and stability than SAG. **A2**: We greatly appreciate your attention to this matter. 
SAG, compared with SG, only samples half the number of pixels; the other half are chosen as the opposites with respect to the group center, making it theoretically more efficient. This is described by the equation $c_m - p = p' - c_m$ following Line 219. In other words, these two pixels, $p$ and $p'$, are opposite each other w.r.t. $c_m$. Consequently, this theoretical efficiency suggests a wider scope for SAG in various real-world applications, encompassing fields like 3D rendering, augmented reality (AR), and virtual reality (VR). To be more specific, SAG will outperform SG if the values of the component function $h(\cdot)$ are negatively correlated for two pixels $p$ and $p'$ that are opposite each other w.r.t. $c_m$. To demonstrate this more rigorously, we consider the variance of the aggregation function restricted to two pixels, which can be written as $Var[h(p)+h(p')]$. For SG sampling, $p$ and $p'$ are independently sampled in the pixel group $m$, and therefore we have $Var[h(p)+h(p')] = Var[h(p)] + Var[h(p')] = 2\sigma_m^2$. In stark contrast, for SAG, since the choice of $p'$ depends on $p$, we have $Var[h(p)+h(p')] = Var[h(p)] + Var[h(p')] + 2Cov[h(p), h(p')] = 2\sigma_m^2 + 2Cov[h(p), h(p')]$. This indicates that when $Cov[h(p), h(p')] < 0$, i.e., $h(p)$ and $h(p')$ are negatively correlated, it holds that $Var[h(p)+h(p')] < 2\sigma_m^2$. In such cases SAG would have smaller variance than SG. Due to the limited response space, we will ensure a more detailed discussion of these methods and their potential applications in our final revision. > **Q3**: Line 222-223, what is the meaning of "p is orthogonal to p'"? **A3**: The "orthogonal" sign means 'independent'. In line 222, "p is orthogonal to p'" means that pixel p and pixel p' are sampled independently. Thank you for the suggestion, and we will further clarify this in the revision. We deeply appreciate your patience and engagement throughout this review process. 
We have conducted an extensive revision of our paper, which addresses all the issues. This revision focuses particularly on enhancing the clarity of our method's explanation and adds comparisons through experiments on three natural image datasets. We believe that these improvements have significantly shaped our work for the better and brought it closer to being ready for publication. Please do not hesitate to reach out to us should you have any further concerns. Your ongoing interest and insightful feedback on our work are greatly valued. --- Rebuttal Comment 1.1: Title: Rebuttal to QWqe Comment: Dear QWqe, Could you have a look at the rebuttal to see if your questions have been clarified? Thanks, Your AC --- Rebuttal Comment 1.2: Title: Look forward to further feedback Comment: Dear Reviewer QWqe: We are genuinely thankful for your thoughtful feedback, which has been pivotal in refining our manuscript. As the author-reviewer discussion period is nearing its conclusion, we kindly request your review of our rebuttal and any further reflections you might have. Please feel free to indicate any additional clarifications or experiments that could further strengthen our paper. We aim to unequivocally convey the significance of our work. If you feel that our responses have adequately addressed your concerns, we would be grateful if you might consider raising the paper's rating. Once again, we deeply value your comprehensive review and are thankful for the positive evaluation. Your feedback has been pivotal in refining our work, and we are committed to incorporating all of the suggestions into our manuscript. Best, Authors of Paper1499
Summary: The authors propose a sampling strategy to improve contrastive-learning-based medical image segmentation performance with limited labeled data, as well as training stability. The sampling strategy is used in conjunction with a previously published contrastive semi-supervised training strategy. The main contributions include this sampling strategy, some theoretical support that the strategy should improve training stability, and experiments evaluating the method. Strengths: - The baseline comparison experiments are thorough. - The authors address a real applied problem (that most medical image segmentation datasets have limited data) with a new method and theoretical support, resulting in a well-rounded paper. Weaknesses: Major weaknesses - The paper’s writing is a major limitation. The prose is difficult to understand, which limits the entire paper—it is hard to clearly understand the motivation, the proposed method, the contributions, or the benefits. The work would benefit from more rounds of grammatical revision; much of the time it is hard to parse what the authors are trying to communicate. - Partly due to the writing issues, it is unclear how the proposed sampling strategy differs from previous dense contrastive learning-based image segmentation strategies; it seems that the methodological contributions here are minor, if present at all. - The method is complex, consisting of two backbones (connected via EMA), global/local instance discrimination losses, augmentations, supervised losses, a nearest-neighbor loss, a global contrastive loss, an unsupervised loss… with all of these components, a very thorough ablation section is needed to understand how much the novel component (a pixel-level sampling strategy) contributes to performance. The existing ablation section is not so thorough. As a result, the paper does not contribute much understanding about the strengths/weaknesses of the different components of this complex pipeline. 
Minor weaknesses - MONA is not a well-known training strategy; it would be useful to provide a longer overview on what MONA does and how the proposed approach differs. This discussion could go in an appendix. - The same concept is often referred to using different words: model “convergence,” “robustness,” and “stability.” I’m not sure if you’re always talking about the same concept, or if you are using different words to refer to the same idea. If the latter, it helps the reader to always use the same word. - This is a minor stylistic note, but the use of bold and italics is often distracting. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: Questions and suggestions discussed above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: Limitations and impacts adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for acknowledging our contribution, appreciating the strong performance improvement on our medical image segmentation tasks, and providing constructive suggestions for the presentation of our work! We have made a substantial revision to the paper, which addresses all the issues, with emphasis on clarifying the explanation of our work and adding appropriate ablation experiments on its different components. If you have further concerns, please feel free to contact us. > **Q1**: Writing revision? **A1**: Thank you for the great suggestion! We will thoroughly assess the wording in our current manuscript and polish it accordingly. Here we highlight our motivation and contribution. Our motivation lies in the observation that the sampling procedure in contrastive learning introduces an additional source of variance, which could result in model collapse and undermine the overall performance of the model. To this end, we have devised two sampling techniques that (1) are easy to implement, functioning in a plug-and-play manner; (2) have theoretically demonstrated variance-reduction properties; and (3) empirically deliver improved segmentation quality on 8 benchmarks, i.e., 5 2D/3D medical and 3 semantic segmentation datasets, with different label settings. > **Q2**: Difference between our sampling strategy and dense contrastive learning (CL) segmentation. **A2**: Existing dense CL segmentation methods [46,20] use a patch-level strategy, e.g., [46] first partitions the image into fixed-size patches (3x3), and then randomly selects 13 patches per image. For each of these chosen patches, they further select 5 negative samples from patches located in different regions of the same image. Thus, they implement dense CL with these local regions, as in Eqn. 3 of [46].
In contrast, our SG/SAG first partition the image into same-size grids with respect to the different classes, and then sample, within the same grid, pixels that are semantically close to each other with high probability. Thus, we implement CL with these sampled pixels from different classes, as in Eqn. E2 (Appendix Line 766 - 767). This essentially allows us to obtain better segmentation quality and label efficiency by reducing feature variance during training, as shown in Section 4. > **Q3**: Ablation on different components. **A3**: Thank you for your constructive feedback! We agree on the importance of providing a comprehensive ablation study to discern the contribution of each individual component, especially the novel pixel-level sampling strategy (please see the **“global” response PDF file** for the requested ablations). To clarify, our work focuses on semi-supervised medical image segmentation, where we adhere to the widely accepted training standards, such as supervised loss, data augmentation, and EMA training, as referenced in [27,52,5,81,49]. Furthermore, to substantiate the necessity of the various components, we refer to the one-page **“global” response PDF file**. Tables 1,2,3 show the comparative results of various components, including pixel-level sampling variants (i.e., naive sampling (NS), Stratified Group (SG), and Stratified-Antithetic Group (SAG)), contrastive loss, nearest neighbor loss, unsupervised loss, and the global/local instance discrimination losses on the ACDC dataset with a 1% label ratio. These findings demonstrate: 1. The positive contribution of each component to performance gains. 2. The benefits of employing SG/SAG sampling over the NS setting, resulting in marked improvements in Dice. 3. Figure 5 (Line 323 - 347) illustrates how SG/SAG enhance convergence with reduced standard deviations, indicating enhanced robustness. Our results underscore the efficacy of our proposed method, especially in the realm of medical image segmentation.
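To make the stratified-grid idea described above concrete, here is a minimal, hypothetical sketch (the function name, grid size, and per-cell counts are illustrative assumptions, not the paper's implementation): partition the label map into equal-size grid cells and draw a fixed number of pixels from each (cell, class) stratum, rather than sampling pixels uniformly over the whole image.

```python
import numpy as np

def stratified_grid_sample(labels, grid=4, per_cell=8, rng=None):
    """Sample pixel coordinates stratified over grid cells and classes.

    A sketch of the stratified-group (SG) idea: partition the label map
    into equal-size grid cells and draw the same number of pixels from
    each (cell, class) stratum, which reduces sampling variance compared
    to naive uniform pixel sampling.
    """
    rng = np.random.default_rng(rng)
    h, w = labels.shape
    samples = []
    for gy in range(grid):
        for gx in range(grid):
            ys = slice(gy * h // grid, (gy + 1) * h // grid)
            xs = slice(gx * w // grid, (gx + 1) * w // grid)
            cell = labels[ys, xs]
            for c in np.unique(cell):
                yy, xx = np.nonzero(cell == c)
                idx = rng.choice(len(yy), size=min(per_cell, len(yy)),
                                 replace=False)
                for i in idx:
                    samples.append((ys.start + yy[i], xs.start + xx[i], int(c)))
    return samples
```

The returned (row, column, class) triples could then feed a pixel-level contrastive loss; the point of the stratification is that every cell and every class present in it contributes equally to each batch.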
We are grateful for your valuable suggestions and will ensure these findings and clarifications are prominently featured in our final revision. We remain receptive to any further insights to enhance the quality of our paper. > **Q4**: Overview MONA. **A4**: Thank you for pointing this out! Indeed we have conducted a comprehensive review of the MONA framework. Due to limited rebuttal space, please refer to Appendix C (Lines 691 - 716), E (Lines 732 - 777), and F (Lines 778 - 789) for further details. In our final revision, we will follow your constructive advice to transfer the key details from the Appendix to the main paper for better clarity and visibility. > **Q5**: model convergence, robustness, and stability are unclear. **A5**: Many thanks! Here we further clarify the terms “convergence,” “robustness,” and “stability” and the differences between them. Specifically, “convergence” is a common concept which simply refers to training speed, i.e., the number of epochs required to learn an accurate model. “Stability” is a desired property of a sampling method; it refers to the standard deviation of the sampling method. Greater stability means the sampling method has a smaller standard deviation, which is desirable in the pixel sampling procedure of contrastive learning. “Robustness” is a comprehensive concept used to describe the performance of a segmentation model. A model is said to be robust if (1) it achieves high segmentation quality using only extremely limited labels on long-tailed medical data (please see Line 141); and (2) it converges quickly (please see Line 207). > **Q6**: This is a minor stylistic note, but the use of bold and italics is often distracting. **A6**: Thanks for pointing this out. We will modify it accordingly. We have extensively revised our manuscript, focusing on enhancing the clarity of our explanations and conducting detailed ablation studies on various components.
We trust these adjustments improve the paper's readiness for publication. Please do not hesitate to reach out should you have any additional feedback or questions. --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: I have read the other reviews and rebuttal responses and I appreciate the authors’ thorough rebuttal. I have reread the paper as well during this rebuttal period. To summarize my thoughts: - I think the paper shows promising empirical results at training segmentation models with few labeled images. I am encouraged by the authors’ addition of new ablation experiments. - I think reviewer SMeb brings up a good point about the term and associated claims of “robustness” that I did not include in my original review. - I still find the paper very difficult to interpret and the proposed training method to contain so many components that it is difficult to know which components are useful for other pipelines. I recognize that the other reviewers do not seem to have the same perspective. I will change my score from a 3 to a 4 due to the additional experimental support during this rebuttal period, but I would still vote to reject this submission. I would point the AC to Sections 3.2 and 3.3 of the paper, which contain the paper’s contributions, if they wish to investigate further and see if they find the same problems I do with the submission or if they are more aligned with the other reviewers. --- Reply to Comment 1.1.1: Title: Response to Reviewer J4c5 (Part 1) Comment: Thank you for taking the time to re-examine our paper during the rebuttal phase! We'd like to further address your additional queries as follows: > **Q1**: I think the paper shows promising empirical results at training segmentation models with few labeled images. I am encouraged by the authors’ addition of new ablation experiments.
**A1**: We appreciate your recognition of the paper's promising empirical results and our commitment to improving the manuscript with the addition of new ablation experiments. If you have further concerns about the ablation studies, please feel free to contact us. > **Q2**: I think reviewer SMeb brings up a good point about the term and associated claims of “robustness” that I did not include in my original review. **A2**: Many thanks! We appreciate the opportunity to provide clarity on the term "robustness": In this context, “robustness” is a comprehensive concept used to describe the performance of a segmentation model. A model is said to be robust if (1) it achieves high segmentation quality using only extremely limited labels on long-tailed medical data (please see Line 141); and (2) it converges quickly (please see Line 207). Our introduced sampling methods, namely Stratified Group (SG) and Stratified-Antithetic Group (SAG), have shown improvements in segmentation performance across different pipelines. Furthermore, these methods can seamlessly integrate with existing frameworks, further enhancing robustness (as detailed in Appendix Lines 825-840, Section L and Appendix Page 25, Table 6). We greatly appreciate the feedback and will make it a priority to refine and highlight this to improve the paper's clarity.
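As a toy, standalone illustration of why the antithetic pairing in SAG can stabilize a sampled estimate (a classic Monte Carlo fact, not the paper's exact estimator): for a monotone function, pairing each draw u with 1 - u makes the two evaluations negatively correlated, shrinking the variance of the mean estimate at the same evaluation budget.

```python
import numpy as np

def naive_estimate(f, n, rng):
    """Plain Monte Carlo mean of f over [0, 1] with n uniform draws."""
    u = rng.random(n)
    return f(u).mean()

def antithetic_estimate(f, n, rng):
    """Same budget of n evaluations, but as n/2 antithetic pairs (u, 1-u)."""
    u = rng.random(n // 2)
    return (0.5 * (f(u) + f(1.0 - u))).mean()

rng = np.random.default_rng(0)
f = lambda u: u ** 2  # monotone on [0, 1]; true mean is 1/3
naive = [naive_estimate(f, 64, rng) for _ in range(2000)]
anti = [antithetic_estimate(f, 64, rng) for _ in range(2000)]
# Both are unbiased, but the antithetic estimator has a visibly
# smaller spread across repeated runs:
assert np.std(anti) < np.std(naive)
```

The same mechanism is the intuition behind reduced standard deviations in the sampling procedure: negatively correlated samples cancel part of each other's fluctuation.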
Rebuttal 1: Rebuttal: In response to the query from Reviewer J4c5 (Question 3), we appreciate the emphasis on the depth of our analysis. Given the constraints of this rebuttal format, we wish to affirm our commitment to augmenting the ablation section for a more comprehensive understanding. To fully understand the contribution of each component, especially our novel pixel-level sampling strategy, we present comparative outcomes for these components. These include pixel-level sampling variants (i.e., naive sampling (NS), Stratified Group (SG), and Stratified-Antithetic Group (SAG)), global contrastive loss, nearest neighbor loss, unsupervised loss, the global/local instance discrimination losses, and data augmentations, assessed on the ACDC dataset with a 1% label ratio. For a detailed overview, please consult the “global” response PDF file. Pdf: /pdf/126b2d6ceba37761fcf44748602a7b160f837ce8.pdf
NeurIPS_2023_submissions_huggingface
2023
Where Did I Come From? Origin Attribution of AI-Generated Images
Accept (poster)
Summary: This paper addresses the problem of distinguishing between images generated by a generative AI model and images acquired by a camera (real images). The motivation comes from the concerns of the AI community about the potential misuse and intellectual property (IP) infringement associated with image generation models. The authors approach the classification problem (generated image vs real image) by first developing an alteration-free and model-agnostic origin attribution method via reverse-engineering (i.e., inverting the input of a particular model for a specific image), followed by computing the reconstruction loss of reverse-engineering to infer the origin. The authors provide the intuition behind the approach by stating that the reverse-engineering task is easier for images belonging to a particular model than for non-belonging images generated by other models. Strengths: The strengths of the paper lie in (1) Introducing an “alteration-free and model-agnostic origin attribution" algorithm (2) Analyzing the differences in the reconstruction loss for reverse engineering between the generated images of a given model and other images, and then checking whether the reconstruction loss of the examined sample falls within the distribution of reconstruction losses observed in the generated images.
(3) Evaluating the method on eight different image generation models to quantify the accuracy Weaknesses: The weaknesses of the paper lie in (1) Missing description of computational costs (2) Reproducibility of the algorithm: the paper presents Algorithm 1 at a very high level, which makes the algorithm difficult to reproduce (unless the code becomes available) (3) Assuming the availability of images belonging to a specific model (Section 4.4 refers to 100 images labeled as belonging to a specific model), which might not always be the case in practice Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Would it be possible to create a table summarizing the combinatorial setups for Architecture = {the same, different} x Training Data = {the same, different, overlapping} given selected two models M1 and M2? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The section “Discussion” describes the limitations of the approach in its higher computational cost than the two other approaches based on watermarking [19-22] and classifiers [23-26]. This opens up a question about tradeoffs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your precious time, thoughtful comments, and recognition of the significance of our work. We hope the following results and clarifications can adequately address your concerns. **Q1**: Missing description of computational costs. **A1**: Thanks for your valuable comment. The discussion of the efficiency and the computational costs can be found in Appendix E. We will add a more detailed discussion in the revised version. **Q2**: Reproducibility of the algorithm: the paper presents Algorithm 1 at a very high level, which makes it difficult to reproduce (unless the code becomes available). **A2**: Thank you very much for your insightful suggestion. The link to our code repo can be found in line 541 of the appendix (supplementary materials). We will open-source our code upon acceptance. **Q3**: Assuming the availability of images belonging to a specific model (Section 4.4 refers to 100 images labeled as belonging to a specific model), which might not always be the case in practice. **A3**: Thanks for your thoughtful comment. In our problem formulation, the defender has access to the examined model. Thus, its belonging images can be generated directly using the examined model. We will add more discussion to make this clearer. **Q4**: Would it be possible to create a table summarizing the combinatorial setups for Architecture = {the same, different} x Training Data = {the same, different, overlapping} given selected two models M1 and M2? **A4**: Thank you very much for your helpful suggestion. The summary of combinatorial setups can be found in Table 3 of the attached PDF in the global response. We will add the table accordingly in the revised version. --- Rebuttal Comment 1.1: Title: Read all reviews. The entire paper rests on Theorem 4.2 and model calibration. Comment: Several reviewers made a comment about missing computational cost.
I do not think that one sentence in Appendix E addresses the concern of understanding the computational cost: "The average running time for StyleGAN2-ADA and the Consistency Model are 55.16s and 152.83s, respectively." In my opinion, the computational cost should be mapped to Table 3 and include uncertainties. The authors should also include whether all 64 CPUs and six Quadro RTX 6000 GPUs were utilized during each run. I still think that it is a strong paper among the papers I have reviewed. However, the plethora of reviewers' comments suggests that the narrative should be significantly improved. --- Reply to Comment 1.1.1: Title: Thank you very much for your feedback and support Comment: Thank you very much for your valuable feedback and support. We have started to run the experiments for measuring the computational cost more comprehensively. We will make sure the discussion about the computational cost is mapped to Table 3 and includes uncertainties in our revised version. We will revise our narrative based on the comment accordingly. Thanks again for your insightful comments and support.
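The inversion-and-threshold procedure that this review thread discusses can be sketched in a few lines. The following is a minimal, hypothetical illustration on a toy differentiable generator, not the paper's actual algorithm (the paper's calibration step and hypothesis test are omitted; function names and the quantile threshold are assumptions):

```python
import numpy as np

def invert(G, jac, x, z_dim, steps=3000, lr=0.05, seed=0):
    """Reverse-engineer an input: find z minimizing 0.5 * ||G(z) - x||^2
    by gradient descent, and return z with the final reconstruction loss."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(z_dim)
    for _ in range(steps):
        z -= lr * jac(z).T @ (G(z) - x)  # gradient of the squared loss
    return z, float(np.sum((G(z) - x) ** 2))

def is_belonging(G, jac, x, z_dim, belonging_losses, alpha=0.05):
    """Flag x as 'belonging' to G if its reconstruction loss falls within
    the bulk of losses observed on known belonging images."""
    _, loss = invert(G, jac, x, z_dim)
    return loss <= np.quantile(belonging_losses, 1 - alpha)
```

On a linear toy model G(z) = A z, images in the range of A invert to near-zero loss while off-manifold images retain a positive residual — the separability that the reconstruction-loss analysis appeals to.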
Summary: This paper develops an origin attribution method to determine whether a specific image is generated by a particular model. The key idea is based on reverse-engineering generation models, and the decision is made by thresholding the reconstruction loss. The proposed method is evaluated on eight different image generation models. Strengths: 1. This method can be applied to different types of generative models (model-agnostic) without requiring any extra operations in the training phase or the image generation phase (alteration-free). Weaknesses: 1. The novelty is lacking. The authors state that "this paper is the first work focusing on this problem to infer if a specific sample is generated by a particular model in an alteration-free and model-agnostic manner". However, as far as I know, there are several works, listed below, on the same problem, and the proposed methodology is very similar to these works. I doubt whether the authors have conducted sufficient literature research. [1] Source Generator Attribution via Inversion, CVPRW 2019 [2] On Attribution of Deepfakes, arXiv 2020 2. The proposed method could not be a "perfect reverse-engineering algorithm". Model inversion has been a difficult problem due to the nonlinearity of neural networks. Existing works make many efforts to improve the precision of inversion. The method proposed in this work only follows the simplest optimization-based inversion approach, without technical innovation. 3. The experiments are far from sufficient. In the real-world scenario, the number of unknown models is far greater than that of known models. However, in Table 3, the authors only use a single model M2 as the negative model. The high accuracy doesn't reflect the performance in the open-world scenario. Besides, there is no comparison with existing methods. The authors should at least compare with [1] and [2], which solve the same problem as this paper and use a similar method.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. How is the attribution efficiency? As the proposed method is optimization-based, the attribution procedure may take more time than other methods by straightforward prediction. 2. The authors should conduct more sufficient literature research and conduct more comparison experiments. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The limitations are listed in the weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and insightful comments. We have run all the suggested experiments. We hope the following new clarifications and results can address your concerns. We are willing to perform more experiments if you have further suggestions. **Q1**: The authors state that "this paper is the first work focusing on this problem to infer if a specific sample is generated by a particular model in an alteration-free and model-agnostic manner". However, as far as I know, there are several works working on the same problem listed below, and the proposed methodology is very similar to these works. I doubt whether the authors have conducted sufficient literature research. Besides, there is no comparison with existing methods. The author should at least compare with Albright et al. and Zhang et al., which solve the same problem as this paper and use a similar method. **A1**: Thank you for the suggested papers. * First of all, the problems addressed by Albright et al. and Zhang et al. are **different** from our problem. More specifically, their problem is: given an image and a set of provided models, determine which model in the given model set is the source of the given image. However, our problem is determining whether a given image is generated by a single given model or not, which is a fundamentally different problem. Albright et al. and Zhang et al. have several drawbacks compared to our method. For example, they cannot distinguish between real images and generated images. Also, if the given image is not generated by any model in the provided model set, their methods will always give a wrong prediction. Since their problem formulation is different from ours, we did not empirically compare our method to theirs (i.e., their methods are not applicable to our problem).
* Besides the optimization for reverse-engineering, to solve the problem of determining whether a given image is generated by a single given model or not, we propose an important calibration step to make the belonging images and non-belonging images more separable. Our origin attribution framework is also based on the designed statistical hypothesis testing. Neither of these techniques is discussed in Albright et al. or Zhang et al. * While Albright et al. and Zhang et al. are limited to the inversion of noise-to-image GANs, our work also includes reverse-engineering of more models, e.g., text-to-image diffusion models and GANs. We also discuss different reverse-engineering methods for the latest diffusion models (see Reviewer FZhr A3), which are also insightful to the community. We will add more discussion to make this clearer. Albright et al., Source Generator Attribution via Inversion. CVPR Workshop 2019. Zhang et al., On Attribution of Deepfakes. arXiv 2020. **Q2**: The proposed method could not be a "perfect reverse-engineering algorithm". Model inversion has been a difficult problem due to the nonlinearity of neural networks. Existing works make many efforts to improve the precision of inversion. The methods proposed in the work only follow the most simple optimization-based inversion method without innovations in technology. **A2**: Thanks for your thoughtful comment. Although reverse-engineering in the real world is not exactly perfect, Theorem 4.2 accounts for the relaxations and approximations of real-world cases. Thus, Theorem 4.2 is meaningful as the guidance for our method. Our experiments in Section 5 of the main paper and Table 1 of the attached PDF in the global response demonstrate that the reconstruction loss values for belonging and non-belonging images are highly separable (i.e., the average detection accuracy of our method is 96.4%). **Q3**: The experiments are far from sufficient.
In the real-world scenario, the number of unknown models is far greater than that of known models. However, in Table 3, the author only uses a single model M2 as the negative model. The high accuracy doesn't reflect the performance in the open-world scenario. **A3**: Thanks for your helpful comment. We have conducted the experiments on 5 negative models accordingly during the rebuttal period. The results are shown in Table 1 of the PDF file in the global response. The results demonstrate that our method has good performance for distinguishing between the belonging images of a given model and the images generated by other models with different architectures. We will add the above results and more discussion in the revised version of this paper. **Q4**: How is the attribution efficiency? As the proposed method is optimization-based, the attribution procedure may take more time than other methods based on straightforward prediction. **A4**: Thank you very much for your thoughtful comment. The discussion of the efficiency and the runtime can be found in Appendix E. We admit the computational complexity of our method is larger than that of watermarking- and classifier-based methods. However, our method is alteration-free and model-agnostic while existing methods are not. In addition, our method can be accelerated by mixed precision training (Micikevicius et al.). We will add more discussion in the revised version. Micikevicius et al., Mixed Precision Training. ICLR 2018. --- Rebuttal Comment 1.1: Comment: **Supplementary for A2**: Although our inversion approach may appear straightforward, it is underpinned by our theoretical analysis presented in Theorem 4.2. Furthermore, empirical evidence illustrates its remarkable efficacy, as demonstrated in Table 2, Table 3, and Table 4 in our main paper.
We also explored various inversion techniques for the state-of-the-art diffusion models, with outcomes suggesting that our approach proves to be the most effective (please refer to Reviewer FZhr-A3 for more details). Notably, our methodology is general to different types of models, including GANs and diffusion models. We hold the conviction that our simple yet highly effective solution for the novel origin attribution problem formulated in Section 3 of our main paper stands to greatly benefit our research field. **Q5**: The authors should conduct more sufficient literature research and conduct more comparison experiments. **A5**: Thank you very much for your constructive suggestions. We will add more discussion of the related literature (e.g., Albright et al. and Zhang et al.). For the comparisons to Albright et al. and Zhang et al., please refer to A1. Thanks again for your valuable comment. Albright et al., Source Generator Attribution via Inversion. CVPR Workshop 2019. Zhang et al., On Attribution of Deepfakes. arXiv 2020. --- Rebuttal Comment 1.2: Title: Thanks for the response Comment: Thanks for the response. The response partly addresses my questions. However, some concerns remain unresolved: A1: The response states that Albright et al. and Zhang et al. address the issue of determining which is the source model of the given image from a provided model set, while this paper aims to determine if a given image is generated by a single given model or not. So the authors argue that the two problems are different. Although the formulation seems different, the former problem is inherently aligned with the latter. To address the task of distinguishing among N models, the attribution process actually involves N iterations of one-versus-rest comparisons. Moreover, as shown in Figure 7, Albright et al. have also conducted one-versus-rest experiments.
A4: While inversion-based attribution is inherently more complex than watermarking and classifier-based methods, it would be valuable to incorporate efficiency comparisons with other reverse-engineering methods. A5: Despite slight variations in the experimental configurations of Albright et al. and Zhang et al., their proposed methodologies are also grounded in inversion error and could be easily adapted for this paper's experiments. Consequently, it is suggested to compare the two methods in the future. Based on these concerns, I keep my score. --- Reply to Comment 1.2.1: Title: Thanks for your feedback (Part 1) Comment: Thank you very much for your valuable feedback and suggestions. Below are our further responses. We are happy to answer more questions and perform more experiments if you have further concerns. **Further Response-A1**: Thanks for your helpful feedback. * Given an inspected image, Albright et al. and Zhang et al. work by enumerating and conducting inversion on **all models** within a set of suspicious candidate models (referred to as the "candidate set" in this response), and attribute the model with the lowest reconstruction loss as the source of the image. Their methods rely on the assumptions that the inspector has **white-box access to all models** in the candidate set, and that the examined image **must be generated by one of the models in the candidate set**. These assumptions diminish the practicality of their methods, whereas **our method does not have such requirements**. * In Table 3 of the main paper, we carry out the experiments for distinguishing belonging images of the inspected model (i.e., $\mathcal{M}_1$) and images generated by other models (i.e., $\mathcal{M}_2$). It is important to clarify that in our setting the inspector only has access to the inspected model $\mathcal{M}_1$, and does not have any information about, or access to, the model $\mathcal{M}_2$.
The goal behind these experiments is to investigate our method's effectiveness for distinguishing the belonging images of the inspected model from the images generated by other **unknown** models, which remain undisclosed to the inspector. In contrast, both the one-versus-rest experiments and other experiments in Albright et al. assume the inspector has full white-box access to all models involved in the experiments (equivalent to having white-box access to both $\mathcal{M}_1$ and $\mathcal{M}_2$ in Table 3), and they conduct inversion on all models in their candidate set. Thus, our threat model and experiment settings are fundamentally different from theirs. * *"Why will the methods proposed in Albright et al. and Zhang et al. fail in our formulated problem?":* Our paper focuses on the problem of determining whether a given image is generated by a single given model or not. Consider a scenario where a model owner wants to verify if an image is generated by a model owned by him/her. While our method only needs to conduct inversion on this specific model, Albright et al. and Zhang et al. need to compare the reconstruction loss of this particular model with those of a large number of suspicious models. There are several cases where Albright et al. and Zhang et al. prove ineffective in addressing this problem: 1. In cases where the inspector lacks white-box access to some of the suspicious models, computing reconstructions on them and obtaining inference results becomes infeasible. Notably, more and more state-of-the-art image generation models (e.g., Midjourney and DALL-E 2) are closed-source and only provide a black-box API to users. 2. Albright et al. and Zhang et al. are prone to making wrong predictions if the real source model is not included in the candidate set. This is attributed to their underlying assumption that the examined image must originate from one of the models within the candidate set.
Ensuring the real source model is included within the candidate set is a very hard problem in practice. 3. Equally noteworthy, Albright et al. and Zhang et al. do not work when the inspected images are real images due to their strong assumption (i.e., the examined image must be generated by one of the models in the candidate set). Our method does not have the above problems. We will make it clearer in the revised version. --- Reply to Comment 1.2.2: Title: Thanks for your feedback (Part 2) Comment: **Further Response-A4 and A5**: * Let's now consider the empirical comparisons. Although the threat models are different, we have conducted the comparison experiments accordingly. We consider the setting for distinguishing the belonging images of the inspected model $\mathcal{M}_1$ and the generated images of other models $\mathcal{M}_2$, where the inspected model (i.e., $\mathcal{M}_1$) is Stable Diffusion 2.0. For our method, we assume the inspector can only access $\mathcal{M}_1$. For Albright et al. and Zhang et al., we assume the inspector can access both $\mathcal{M}_1$ and $\mathcal{M}_2$. The results when $\mathcal{M}_2$ is StyleGAN2-ADA trained on the CIFAR-10 dataset are shown in the following table:

| Method | TP | FP | FN | TN | Acc |
| ---- | ---- | ---- | ---- | ---- | --- |
| Albright et al. | 100 | 100 | 0 | 0 | 50.0% |
| Zhang et al. | 100 | 100 | 0 | 0 | 50.0% |
| Ours | 96 | 7 | 4 | 93 | 94.5% |

The results when $\mathcal{M}_2$ is the Consistency Model trained on the ImageNet dataset are shown in the following table:

| Method | TP | FP | FN | TN | Acc |
| ---- | ---- | ---- | ---- | ---- | --- |
| Albright et al. | 100 | 100 | 0 | 0 | 50.0% |
| Zhang et al. | 100 | 100 | 0 | 0 | 50.0% |
| Ours | 95 | 10 | 5 | 90 | 92.5% |

As evident from the results, our approach demonstrates significantly superior performance compared to that of Albright et al. and Zhang et al. There are several factors contributing to these outcomes. While Albright et al. and Zhang et al.
attribute the origin of the generated images solely through direct comparisons of reconstruction losses across different models, they overlook the variations in inherent complexity and expressive capability among different models. For example, Stable Diffusion models can easily achieve relatively low reconstruction losses for both belonging and non-belonging images compared to simpler models. Consequently, a direct comparison of reconstruction errors without accounting for the models' capacities introduces bias. Furthermore, they ignore differences in the images' inherent complexity (which we address through our calibration step to mitigate its impact), another factor that significantly influences performance. In conclusion, our method outperforms Albright et al. and Zhang et al. even though they rely on much stronger assumptions. * Our method is also much more efficient than Albright et al. and Zhang et al. when the number of models in their candidate set is large. For example, in a scenario with 10000 models in the candidate set, they need to conduct inversion on all 10000 models to predict the source of a single image, which is far more time-consuming than our method, since we only need to conduct inversion on a single model. * Please refer to Reviewer FZhr-A3 for the comparisons of different inversion methods (i.e., Song et al., Mokady et al., and Parmar et al.) for diffusion models. We will make this clearer in our revised version. Thanks again for your helpful comment and feedback. We sincerely hope for your further feedback. Albright et al., Source Generator Attribution via Inversion. CVPR Workshop 2019. Zhang et al., On Attribution of Deepfakes. arXiv 2020. Song et al., Denoising Diffusion Implicit Models. ICLR 2021. Mokady et al., Null-text Inversion for Editing Real Images using Guided Diffusion Models. CVPR 2023. Parmar et al., Zero-shot Image-to-Image Translation. SIGGRAPH 2023.
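To make the contrast between the two decision rules concrete, here is a minimal sketch (all names are ours and hypothetical; the dictionary lookup stands in for an actual inversion run): a candidate-set rule in the style of Albright et al. / Zhang et al. must always pick some model from the set, so a real image is inevitably misattributed, while a single-model threshold rule can answer "not generated by this model".

```python
def recon_loss(model_id, image):
    """Hypothetical stand-in for an inversion run: returns the
    reconstruction loss of `image` under model `model_id`."""
    return image["losses"][model_id]

def candidate_set_rule(candidates, image):
    # Albright et al. / Zhang et al. style: pick the model with the
    # lowest reconstruction loss -- some model is ALWAYS returned.
    return min(candidates, key=lambda m: recon_loss(m, image))

def single_model_rule(model_id, image, threshold):
    # Our setting: only the inspected model is inverted; the
    # (calibrated) loss is compared against a threshold.
    return recon_loss(model_id, image) < threshold

# A real photograph: moderate loss under every candidate model.
real_image = {"losses": {"M1": 0.40, "M2": 0.45}}
print(candidate_set_rule(["M1", "M2"], real_image))   # "M1" (forced guess)
print(single_model_rule("M1", real_image, 0.20))      # False (correct)
```

This is only an illustration of the decision rules, not of the inversion procedure itself; the threshold would in practice come from the hypothesis-testing step.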
--- Reply to Comment 1.2.3: Title: A Friendly Reminder Comment: Dear Reviewer rv4w, Thanks once again for your valuable comments and precious time. As the discussion period is closing, we genuinely hope you can take a look at the new results and clarifications and kindly let us know whether they have addressed your concerns. We would appreciate the opportunity to engage further if needed. Best, Authors of Paper 5345
Summary: The method proposes a model-agnostic attribution method. Given a synthesized image as input, the goal is to attribute the image to the model that generated it. Different from prior work, the method is not restricted to a fixed set of models. The main idea of the work is to use reconstruction error -- the model that generated the image should also reconstruct it better. On top of this, the work introduces a relative reconstruction measure to calibrate the difficulty of reconstructing each image, along with a thresholding method based on hypothesis testing. The method is tested on multiple generative models and different training datasets, and it shows strong performance in most cases. Strengths: 1. The authors clearly define the inspector's goal and what the inspector can see. This is helpful for understanding the problem setup. 2. The quantitative results look promising, indicating that the relative reconstruction loss is effective for this problem. 3. The method section is well-motivated, where the authors introduce the relative reconstruction loss for calibration, and hypothesis testing to find thresholds. 4. Although the method is simple, it is interesting to have a model-agnostic attribution method. Weaknesses: 1. Using reconstruction error is not new for membership inference [1]. Although membership inference focuses on whether an image is used for training the model or not, finding whether a synthesized image is generated by the model shares a similar spirit. It will be good to have more discussion on this. 2. A type of generative model can have multiple ways to reconstruct an input. Currently, the paper does not provide detail about how reconstruction is done for each model. Please check the questions section for this point. 3. Although the authors have mentioned it in the limitations section already, the reconstruction-based algorithm requires a huge runtime cost, making it a less favorable option for model attribution.
[1] Hilprecht et al. Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How is reconstruction done on StyleGAN2-ADA and StyleGAN-XL? A common practice is to reconstruct the images by optimizing the extended intermediate latent space (W+) [1], instead of the random noise (z). Will these make a difference in the proposed task? 2. There exist multiple reconstruction strategies for diffusion models as well [2, 3, 4]. It will be great to provide detail on this too. [1] Abdal et al. Image2StyleGAN++: How to Edit the Embedded Images? [2] Song et al. Denoising Diffusion Implicit Models. [3] Mokady et al. Null-text Inversion for Editing Real Images using Guided Diffusion Models. [4] Parmar et al. Zero-shot Image-to-Image Translation. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: In my opinion, the authors have addressed the limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your time and insightful comments. We hope the following new clarifications and results can address your concerns. **Q1**: Using reconstruction error is not new for membership inference. Although membership inference focuses on whether an image is used for training the model or not, finding whether a synthesized image is generated by the model shares a similar spirit. It will be good to have more discussion on this. **A1**: Thank you for your valuable suggestion. We focus on the origin attribution problem of inspecting whether a given image is generated by a given model, which is different from the membership inference problem, which concerns the membership of the training samples. Also, our method is guided and supported by our theoretical analysis of the origin attribution problem in Theorem 4.2, while the method proposed by Hilprecht et al. is heuristic and has no theoretical support for the membership inference problem. We will add more discussion of the suggested work (i.e., Hilprecht et al.) in the revised version. **Q2**: A type of generative model can have multiple ways to reconstruct an input. Currently, the paper does not provide detail about how reconstruction is done for each model. How is reconstruction done on StyleGAN2-ADA and StyleGAN-XL? A common practice is to reconstruct the images by optimizing the extended intermediate latent space (W+), instead of the random noise (z). Will these make a difference in the proposed task? **A2**: Thanks for your insightful comment. For unconditional generative models, we use gradient descent to optimize the input in the noise space. For text-to-image models, we optimize the input in the intermediate feature space. More details about how reconstruction is done for each model are provided in Table 2 of the PDF file attached to the global response.
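As a hedged illustration of this inversion-by-optimization step, the sketch below uses a toy linear "generator" (a fixed random matrix standing in for a real network; all names are ours, not from the paper) and gradient descent on the latent input:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 4))        # toy "generator": image = G @ z

def invert(target, steps=2000, lr=0.02):
    """Gradient descent on the latent z to minimize the squared
    reconstruction loss ||G @ z - target||^2 (analytic gradient)."""
    z = np.zeros(4)
    for _ in range(steps):
        z -= lr * 2 * G.T @ (G @ z - target)
    return np.sum((G @ z - target) ** 2)

# An image in the generator's output space inverts to near-zero loss;
# a random image outside that space retains a large residual.
loss_belonging = invert(G @ rng.normal(size=4))
loss_outside = invert(rng.normal(size=8))
print(loss_belonging < loss_outside)
```

In the real method the generator is a deep network and the gradient comes from autodiff, but the principle is the same: belonging images admit a low-loss inversion, non-belonging ones do not.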
For StyleGAN2-ADA and StyleGAN-XL, we use the random noise space (i.e., the z space) to reconstruct the images by default. During the rebuttal period, we also conducted experiments on using the intermediate latent space (i.e., the W+ space) to reconstruct the images for distinguishing belonging images from images generated by other models (see the setting for Table 3 of the main paper). The results on StyleGAN2-ADA with the CIFAR-10 dataset are shown in the following table:

| Space | Acc |
| ---- | --- |
| z space | 97.0% |
| W+ space | 96.0% |

We also have the results on StyleGAN-XL with the ImageNet dataset and the identical setting used in Table 3 of the main paper:

| Space | Acc |
| ---- | --- |
| z space | 93.0% |
| W+ space | 93.5% |

As can be observed, using the W+ space yields accuracy similar to using the z space, meaning that our method is not sensitive to the choice of input space used for optimization. We will add more results and discussion in the revised version. **Q3**: There exist multiple reconstruction strategies for diffusion models as well. It will be great to provide detail on this too. **A3**: Thank you very much for your constructive comment and valuable suggestions. We conducted experiments with the suggested inversion methods and our default inversion approach on the Stable Diffusion 2 model with the setting described in Section 5.3 of the main paper. The results are shown in the table below:

| Inversion Method | Acc |
| ---- | --- |
| Default | 87.5% |
| DDIM | 55.5% |
| Parmar et al. | 62.5% |
| Mokady et al. | 85.0% |

* DDIM inversion is an approach that performs the reverse of DDIM sampling. It is based on the assumption that the ODE process can be reversed in the limit of small steps. It has unsatisfactory inversion performance on conditional diffusion models (e.g., Stable Diffusion) because it magnifies the accumulated error in the inversion process (since it ignores the classifier-free guidance in the diffusion process).
As can be seen in the above table, the accuracy of using DDIM inversion is only 55.5%. * Parmar et al. improve DDIM inversion by using an approximated prompt as the conditional guidance in the inversion process. The approximated prompts are generated by a caption model (i.e., BLIP), so this method's inversion quality depends on the captions used. Given a generated image of the inspected model, since the caption model cannot recover the exact prompt used to generate this image, the inversion using this method is also inaccurate, so the reconstruction losses of belonging and non-belonging images are not highly separable (the accuracy is 62.5%). * Mokady et al. use the diffusion trajectory estimated by DDIM inversion as the pivot and conduct pivotal tuning on the null-text embedding. It achieves accuracy (i.e., 85.0%) comparable to our default inversion method. We will add more results and discussion in our revised version. Thanks again for your valuable suggestions. **Q4**: Although the authors have mentioned it in the limitation section already, the reconstruction-based algorithm requires a huge runtime cost, making it a less favorable option for model attribution. **A4**: Thank you very much for your thoughtful comment. The discussion of efficiency and runtime can be found in Appendix E. We admit the computational complexity of our method is larger than that of watermarking and classifier-based methods. However, our method is alteration-free and model-agnostic, while existing methods are not. In addition, our method can be accelerated by mixed precision training (Micikevicius et al.). We will add more discussion in the revised version. Micikevicius et al., Mixed Precision Training. ICLR 2018. --- Rebuttal Comment 1.1: Comment: I have read the reviews and rebuttal and thank the authors for the clarification. In general, I would still recommend this simple yet effective method for publication at a conference.
I would also encourage the authors, in the revised text, to acknowledge that reconstruction is a commonly used technique in other tasks (e.g., membership inference). In my opinion, adding this reference, along with the clarification on the reconstruction details, will make the paper stronger. --- Reply to Comment 1.1.1: Title: Thank you very much for your feedback and support Comment: Thank you very much for your constructive comment and support. We will acknowledge this accordingly and add the corresponding citations and discussion in our revised version. Thank you again for your valuable feedback and suggestions.
Summary: This paper proposes a new problem: given an image $x$ and a generative model $M$, predict whether image $x$ belongs to $M$ or not. To this end, this paper proposes to do hypothesis testing based on the reconstruction error after conducting latent optimization to reconstruct the given image $x$. They hypothesize that if the image belongs to the model, the reconstruction error will be lower; if it does not, the loss will be higher. Strengths: - This attribution problem is relatively new, despite some ambiguity with a more popular attribution problem: given an image generated by a model, how can one distribute the credit to each image in the training set? - Their algorithm shows high accuracy in their relevant but perhaps limited setup. Weaknesses: While I understand the motivation for considering this problem, I feel its formulation is problematic: - (1) The first issue is that this framework will only work if the real images used to train these generative models do not belong to the output space of the generative models. If these training images belong to the output space, then they will be credited as generated images of the given generative model, which is wrong. If you assume the training images do not belong to the output space, that is also odd, since this is exactly what generative models, such as VAEs, are optimized for. If a generative model, such as a VAE, perfectly fits the training data, your method will wrongly give IP to this generative model. - (2) Relatedly, this framework will determine any reconstructed image to be a generated image. I.e., given any real image $x$, let's feed it through the VAE autoencoder and get the VAE reconstruction $x'$. Apparently $x'$ belongs to the output space of the VAE model, but can we ethically classify it as an image generated by the model, and give IP to the VAE? I do not think so.
- (3) What if I train two models with the same architecture on the same dataset (just with different random seeds); then how can you determine which model generated which images? In this case, your method may wrongly attribute output from one model to the other. - (4) This method seems very easy to attack, despite the authors arguing that it is advantageous over watermarking- and classifier-based ones. One apparent question is: if I do some Photoshop-style editing, or something as simple as running a Gaussian blur over the generated images and real images, is this framework still able to distinguish which model generated which image? - (5) For line 301, you train the same model on different datasets; what if we train different models (architectures) on the same dataset? - (6) For the study in 5.3, one question is that many other papers have shown that diffusion models can output images almost identical to some images used to train them. So how do you guarantee that there are no similar images in the training set for your generated images? The calibration step is not very intuitive. Can you use any other model for calibration? Will they be equally effective? Why do you choose this particular consistency model? It is not well justified. Assuming these generative models are white-box is too strong. The reality is that many real images are generated by private models where you have no access even to the weights, such as Midjourney. A more realistic situation is also this: suppose we find that one image is guilty; then how do we know which model it is from? It is infeasible to iterate over all models to see which model created the given image. Writing-wise, - (1) the abstract is not easy to follow; the term "inverse engineering" makes me think it is something very different from what you actually did, which was just latent optimization for reconstructing the output.
- (2) because of this terminology, paragraph 67 is also not easy to follow in terms of what you actually do. Isn't Theorem 4.2 trivial? I do not understand why the authors state it as a theorem. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: see above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: see above Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and valuable comments. We hope the following new clarifications and results can address your concerns. We are happy to provide further responses and perform more experiments if you have further suggestions. **Q1**: The first issue is that this framework will only work if the real images used to train these generative models do not belong to the output space of the generative models. If these training images belong to the output space, then they will be credited as generated images of the given generative model, which is wrong. If you assume the training images do not belong to the output space, that is also odd, since this is what generative models, such as VAEs, are optimized for. If a generative model, such as a VAE, perfectly fits the training data, your method will wrongly give IP to this generative model. **A1**: Thank you for your thoughtful comment. We respectfully disagree. * Although the goal of generative model training is to fit the training data, to the best of our knowledge the training data **cannot be perfectly fitted**, at least until now. Even state-of-the-art generative models cannot reach exactly zero training loss. Our empirical results in Table 2 of the main paper also demonstrate that the training samples and the generated samples of state-of-the-art generative models are in fact distinguishable by our method. Various existing works (e.g., Rossler et al., Yan et al., and Zhu et al.) demonstrate that real images (even the training data of the generative models) and images generated by real-world generative models are distinguishable. We want to note that the fundamental goal of machine learning is not perfectly memorizing the training data. In fact, various existing works (e.g., Carlini et al.) demonstrate that deeply memorizing the training data is harmful and that excessive memorization should be avoided.
* While generative models can produce highly realistic synthetic data, differences between generated and real data often remain detectable. The growing field of generated-data detection continues to pursue improvements, suggesting this is still an open research problem rather than a meaningless endeavor. Carlini et al., Extracting Training Data from Diffusion Models. USENIX Security 2023. Rossler et al., FaceForensics++: Learning to Detect Manipulated Facial Images. ICCV 2019. (1491 citations) Yan et al., DeepfakeBench: A Comprehensive Benchmark of Deepfake Detection. arXiv 2023. Zhu et al., GenImage: A Million-Scale Benchmark for Detecting AI-Generated Image. arXiv 2023. **Q2**: This framework will determine any reconstructed image to be a generated image. **A2**: Thanks for your insightful comment. In this paper, we focus on the IP infringement and misuse problems related to novel images generated by generative models. Note that generating creative novel images is the fundamental goal of image generative models. Forcing the generated images to be close to some real images means the IP infringement and misuse are mostly associated with the imitated real images, rather than with the generative model itself. The detailed problem formulation, application range, and use cases of our method can be found in Section 3. We admit that reconstructed images will be considered as belongings of the model, but we do not need to consider the attribution of reconstructed images in our use cases. We will make this clearer in the revised version. **Q3**: What if I train two models with the same architecture on the same dataset (just with different random seeds); then how can you determine which model generated which images? In this case, your method may wrongly attribute output from one model to the other model. **A3**: Thank you for your constructive comment.
For our main use case, where the model holder wants to defend the IP of his/her trained models (see Section 3 of our main paper for more details), we focus on protecting the IP of models that are trained on a private dataset or a closed-source model architecture. If two models trained by different parties differ only in their random seeds, then both the dataset and the model architecture used are open-sourced, and we might not need to protect the IP of the models. We will add more discussion to make this clearer. **Q4**: Photoshop-style-editing-based adaptive attack. **A4**: Thank you for your useful comment. The discussion about the image-editing-based adaptive attack can be found in Appendix D of the supplementary materials. Following Ali et al., to preserve image quality while editing, we use the _1977 Instagram filter to conduct image editing. The model used here is the DCGAN trained on CIFAR-10. The results show that the detection accuracy of RONAN is 90.5% even under the image-editing-based adaptive attack, demonstrating the robustness of our method. Ali et al., ColorFool: Semantic Adversarial Colorization. CVPR 2020. **Q5**: For line 301, you train the same model on different datasets; what if we train different models (architectures) on the same dataset? **A5**: Thanks for your valuable question. The results for different models (architectures) on the same dataset can be found in Table 2 (also see line 289) of our paper. Also, see rv4w-A3 and Table 1 of the PDF file in the global response for more results. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I thank the authors for answering my questions. Below are my further comments. > A1 Thanks for the answer. I know that the point of machine learning is not to overfit, and I was talking about a theoretical flaw of your method. What I was suggesting is a thought experiment.
I don't think generative models are completely free of overfitting; it's just about relative scale: suppose you only have 10 MNIST images; very likely you can train a VAE that is large enough to perfectly fit those pixels, right? More broadly speaking, assuming that real images cannot belong to the output space of generative models is too brute-force. > A2 Thank you for your answer, but I don't think it solves my concern at all. > A3 "we focus on protecting the IP of the models that are trained on the private dataset or close-sourced model architecture" This contradicts your assumption that you have white-box access to the provided model in line 146. > A4 Thank you for sharing this experiment, but I think the given results are far from a comprehensive understanding. What you suggest is that your method is robust to editing (you do not specify how you edit it). Consider a thought experiment: if I run a very large box filter that filters out all high-frequency details, do you think it is still detectable? As this aspect is very important to the applicability of this method, I recommend conducting a thorough study to understand it. > A5 Maybe I was not clear enough. What I meant was: you still train the same method, e.g., a VAE or GAN, but you alter the network details, e.g., the number of layers or the number of neurons. > A6 This makes sense, and thank you for clarifying it. I feel most of my concerns remain unaddressed, so I will keep my rating. --- Reply to Comment 1.1.1: Title: Thanks for the feedback (Part 1) Comment: Thank you very much for your feedback. Below are our further responses. We are happy to answer more questions and perform more experiments if you have further concerns. **Further Response-A1**: Thank you very much for your thoughtful comments and suggestions. * As pointed out by existing works (e.g., Frank et al., Wang et al.,
and Corvi et al.), modern generative models (e.g., VAEs, GANs, and diffusion models), including state-of-the-art models such as Stable Diffusion and DALL-E, leave forensic traces in the frequency spectra of the generated images. This is caused by indispensable operations used in modern generative models (e.g., upsampling operations and convolutional layers). * Also, perfectly fitting the training samples means finding the global optimum of the optimization problem. However, the gradient descent optimizers essential to modern generative models (e.g., SGD and Adam) typically cannot reach the global optimum. * Empirically, we conducted the suggested thought experiment accordingly. In detail, we randomly sample 10 images from the MNIST dataset and use a VAE (i.e., Kingma et al.) with different numbers of neurons in the hidden layer to fit these 10 images. We set the number of epochs to 5000 to ensure the training losses, which measure the distance between the generated samples and the training samples of the model, converge. The final training losses for different model sizes are as follows:

| Num. of Neurons in Hidden Layer | Training Loss |
| ---- | --- |
| 1 | 184.24 |
| 5 | 105.27 |
| 10 | 77.20 |
| 50 | 53.52 |
| 100 | 53.23 |
| 500 | 53.04 |
| 1000 | 53.81 |

As model size increases, the final training loss decreases at first. However, once the model is large enough, the final training loss stabilizes in the range 53.00-54.00. These results mean that even sufficiently large VAEs cannot fit all pixels perfectly. We also conduct reverse-engineering of the training samples on the trained VAE with 1000 neurons in the hidden layer. The results show that we cannot reconstruct the exact training samples. Based on the above analysis and empirical results, we can at least conclude that the probability that real images belong to the output space of generative models is very low. Kingma et al., Auto-Encoding Variational Bayes. ICLR 2014.
Frank et al., Leveraging Frequency Analysis for Deep Fake Image Recognition. ICML 2020. Wang et al., CNN-generated Images Are Surprisingly Easy to Spot... for Now. CVPR 2020. Corvi et al., On the Detection of Synthetic Images Generated by Diffusion Models. arXiv 2022. **Further Response-A2**: Thanks for your insightful feedback. * While we admit that reconstructed images will be considered as belongings of the model, it is essential to clarify that inferring an image to be a belonging of a model does not imply that the IP of this image belongs entirely to this model. In fact, determining the ownership of the IP related to generated images remains an unresolved challenge in the field of law. This complexity arises from the involvement of multiple entities (such as contributors of training data, model trainers, input/prompt providers, and the models themselves) throughout the image generation process. The inference results of our origin attribution method can serve as a valuable reference for addressing IP protection concerns, rather than a definitive conclusion. * In this paper, we focus on our formulated origin attribution problem for generative models. Beyond serving as a reference for safeguarding intellectual property, our method has versatile applications, including tracing the source of maliciously generated images and detecting AI-powered plagiarism. For instance, imagine a scenario where an individual generates AI-created images (e.g., using Midjourney) and dishonestly presents them as their own original artwork (e.g., photographs and paintings) to gain recognition and reputation. In such cases, the model owner (e.g., Midjourney's owner) may suspect that the image was generated using their model (e.g., Midjourney). Our proposed method can then be employed to uncover instances of AI-powered plagiarism. Importantly, the concern regarding the impact of reconstructed images is minimized in this context.
This is because the malicious user's goal is garnering acclaim through the dissemination of novel images, and they are unlikely to use reconstructed versions of real images for this purpose. Thanks again for your thoughtful comment; we will revise our paper accordingly to make it clearer. --- Reply to Comment 1.1.2: Title: Thanks for the feedback (Part 2) Comment: **Further Response-A3**: Thanks for your valuable feedback. * We want to clarify that the main users of our method are model owners. For example, a model owner trains a model using his/her private dataset or his/her own closed-source model architecture (and only provides a black-box API to the downstream users of this model, such as Midjourney and DALL-E 2), and can use our method to infer or demonstrate whether a specific image was generated by this model. Since the model belongs to the model owner, it is natural that the model owner has white-box access to the model. Thus, having white-box access and using a private dataset or a closed-source model architecture are not contradictory. * Furthermore, it is important to highlight that as modern models continue to develop, their sizes are progressively expanding, and training these larger models demands substantial time and resources. For example, the training of GPT-4 incurred a cost exceeding $100 million (source: https://en.wikipedia.org/wiki/GPT-4). It is noteworthy that, in practice, industries are unlikely to undertake repeated training of such state-of-the-art models varying only the random seed. We will make this clearer in our revised version. **Further Response-A4**: For the experiments in Appendix F, the details of the _1977 Instagram filter we used can be found in the link provided in line 597 of the Appendix (supplementary material). We also conducted the suggested thought experiment accordingly. The image editing method is the suggested box filter.
The other settings are identical to those in Appendix F. Besides the detection accuracy (Acc) of our method, we also report the Structural Similarity Index (i.e., SSIM, proposed in Wang et al.) between the original images and the edited images, which measures the similarity between them. A higher SSIM value means the edited images are more similar to the originals. The results under different box sizes of the image filter are shown in the following table:

| Box Size | Acc | SSIM |
| ---- | --- | --- |
| 1 | 92.5% | 0.8920 |
| 2 | 83.0% | 0.7446 |
| 3 | 58.0% | 0.5174 |
| 4 | 53.5% | 0.3530 |

As can be observed, our method remains effective for relatively small box sizes. As the box size grows, however, the detection accuracy of our method diminishes. This outcome is understandable and acceptable, as it corresponds to a rapid reduction in the SSIM between the edited images and their unaltered counterparts. When employing larger box sizes, an adaptive attacker might conceivably elude our method's scrutiny, yet this comes at the cost of substantially compromising the quality of the edited images. Consequently, our method maintains its effectiveness in the face of adaptive attacks that seek to preserve the quality of the edited images. Wang et al., Image quality assessment: from error visibility to structural similarity. TIP 2004. **Further Response-A5**: Thanks for your constructive suggestions. We conducted the experiments as suggested. The model and dataset used here are DCGAN and CIFAR-10, respectively. We first provide the empirical results when $\mathcal{M}_1$ (i.e., the inspected generator) and $\mathcal{M}_2$ (i.e., the other generator) have different numbers of layers.
The results are shown in the following table:

| $\mathcal{M}_1$'s Number of Layers | $\mathcal{M}_2$'s Number of Layers | Acc |
| ---- | ---- | --- |
| 4 | 2 | 97.5% |
| 2 | 4 | 98.0% |

We also report the results when $\mathcal{M}_1$ (i.e., the inspected generator) and $\mathcal{M}_2$ (i.e., the other generator) have different numbers of channels in the first convolutional layer:

| $\mathcal{M}_1$'s Number of Channels in the first Conv Layer | $\mathcal{M}_2$'s Number of Channels in the first Conv Layer | Acc |
| ---- | ---- | --- |
| 64 | 48 | 97.5% |
| 48 | 64 | 96.0% |

As can be observed, our method achieves high detection accuracy across these settings. These results indicate our method is effective when $\mathcal{M}_1$ and $\mathcal{M}_2$ are trained using the same method but with different network details, e.g., the number of layers and the number of channels.
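The box-filter plus SSIM comparison described in Further Response-A4 above can be sketched in a few lines. This is an illustrative stand-in, not the authors' pipeline: the "image" is a tiny synthetic grayscale ramp, the filter is a border-clamped mean box filter, and `global_ssim` uses the single-window (global) form of Wang et al.'s SSIM rather than the standard windowed version.

```python
# Toy sketch: mean box filter + global SSIM (assumptions noted in the lead-in).
import statistics

def box_filter(img, size):
    """Mean filter over a (2*size+1)x(2*size+1) box, clamped at image borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - size), min(h, i + size + 1))
                    for jj in range(max(0, j - size), min(w, j + size + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

def global_ssim(x, y, L=1.0):
    """Single-window SSIM over the whole image (a simplification of Wang et al.)."""
    xs = [v for row in x for v in row]
    ys = [v for row in y for v in row]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    vx, vy = statistics.pvariance(xs), statistics.pvariance(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# A smooth 8x8 intensity ramp in [0, 1]; larger boxes blur it more,
# so SSIM against the original drops, mirroring the trend in the table above.
img = [[(i + j) / 14 for j in range(8)] for i in range(8)]
for size in (1, 2, 3):
    print(size, round(global_ssim(img, box_filter(img, size)), 3))
```

As in the rebuttal's table, SSIM decreases monotonically as the box size grows, which is why larger edits both evade detection and visibly degrade the image.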
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their thoughtful comments and precious time. We provide our responses below to address concerns. Please let us know if there is anything still not clear. We are willing to answer more questions and perform more experiments if the reviewers have further concerns. Due to the length limitation, we put the responses for Q6, Q7, Q8, Q9 and Q10 of Review FUt3 here. **Review FUt3-Q6**: For the study in 5.3, one question is that many other papers have shown that diffusion models can almost identically output some images used to train them. So how do you guarantee that there are no similar images in the training set for your generated images? **Review FUt3-A6**: Thanks for your constructive question. We are aware of some methods (e.g., Carlini et al. and Webster et al.) that can extract generated images similar to some training samples of the model. Although the extracted images are much more similar to some training samples than randomly generated images are, these generated images still have a certain $l_2$ distance to the corresponding training images, and there are artifacts obvious to human vision in these generated images (see Carlini et al.). Thus, the distribution of the extracted images is different from that of the corresponding training images. We also conducted experiments on using our method to distinguish the extracted images (obtained using Webster et al.) from their corresponding training images. The model used is Stable Diffusion. Our method still achieves 85.0% accuracy in distinguishing the memorized training samples from the corresponding generated samples. We will add more details and results in our revised version. Carlini et al., Extracting Training Data from Diffusion Models. USENIX Security 2023. Webster et al., A Reproducible Extraction of Training Images from Diffusion Models. arXiv 2023. **Review FUt3-Q7**: Calibration step and reference models.
**Review FUt3-A7**: Thank you very much for your constructive comment. During the rebuttal, we conducted experiments that used different models as reference models. The inspected model here is the StyleGAN2-ADA model trained on the CIFAR-10 dataset, and the setting is identical to that used in Table 2 of the main paper (belonging images vs. training data). The results can be found in the following table.

| Reference Model | Acc |
| ---- | --- |
| Consistency Model | 97.0% |
| StyleGAN XL | 95.0% |
| Stable Diffusion | 96.0% |

As can be observed, using different reference models yields similar results, meaning that our method is not sensitive to the selection of the reference models. We will add more discussion and results in our revised version. **Review FUt3-Q8**: Assuming these generative models are white-box is too strong. **Review FUt3-A8**: Thanks for the useful comment. The detailed application ranges and use cases of our method can be found in Section 3. The inspectors in all use cases can have white-box access to the model, and they know the range of the examined models (i.e., they do not need to iterate over all models). For example, the main use case of our method is protecting the IP of the model owner. In this scenario, a party suspects that a specific image may have been generated by their generative model without authorization, such as if a malicious user has stolen the model and used it to generate images. The party can then request an inspector to use our proposed method to infer whether the doubtful image was indeed generated by their particular model. The situation where the inspector does not have white-box access to the model and needs to iterate over all models is out of the scope of this paper. We will add more discussion to make this clearer in our revised version. **Review FUt3-Q9**: Theorem 4.2.
**Review FUt3-A9**: Although it is straightforward, Theorem 4.2 establishes the theoretical separability of the reconstruction loss values for belonging and non-belonging images. Thus, Theorem 4.2 is meaningful as the guidance of our method. **Review FUt3-Q10**: Other writing issues. **Review FUt3-A10**: Thank you very much for your helpful suggestions. We will revise accordingly in the revised version. Pdf: /pdf/53a1dac6749828e5fc16d83eb6aeb832d1be105f.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Linker-Tuning: Optimizing Continuous Prompts for Heterodimeric Protein Prediction
Reject
Summary: The paper proposes a strategy of predicting heterodimers with ESMFold. Chains of heterodimers are linked by glycine linkers and input to ESM. The output representations are then added with a learnable embedding layer, and then folded with the folding module in ESMFold. The finetuning is only done for the learnable embedding, with a weighted distogram loss. Evaluations are done on 3 datasets, while one of them should be considered pointless. Some elevation is gained by the method, compared with directly using linkers. Strengths: 1. The writing of the paper is clear. 2. The proposed method is reasonable and intuitive. 3. The method achieves comparable performances with other linker-based hacking strategies. Slight elevation of performances is gained (TM-score 0.62->0.65, ~0.03) from finetuning compared with directly using linkers. Weaknesses: Method: 3. The proposed idea, i.e. exploiting ESM models to predict protein complexes by architecture adjusting and finetuning, has already been explored previously [1]. It seems that their implementation is more "neat": no external linkers are involved; permutation invariance is respected; they solve complexes with arbitrary numbers of chains, both homomers and heteromers; and the performance elevation is more significant (TM-score 0.27->0.66, ~0.39 in their benchmark). However, no discussion, let alone comparison, is shown in this paper. 4. In fact, I don't know why linkers are needed to fold multimers at all: simply modifying the relative position indices suffices to tell the model that the residues are from separate chains. Involving linkers implicitly poses in the model a geometrical constraint on the C-terminal of chain A and the N-terminal of chain B (as they'll have a relative position of +-L). Also, the running time is (slightly) increased. In addition, I don't personally like the saying that connects the linker idea to "prompts": they are totally different things. Doing so is more of an attempt to ride the wave of LLMs.
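The reviewer's linker-free alternative (point 4 above) can be made concrete with a small sketch. Assumptions are noted in the comments: the offset of 512 follows the "Gap" / `residue_index_offset` idea discussed elsewhere in this thread, and the relative-position clip window of 32 is illustrative of AF2-style encodings, not a value taken from the paper.

```python
# Sketch: tell the model two chains are separate by offsetting residue indices,
# so cross-chain relative positions saturate at the clip boundary -- no linker,
# hence no implicit geometric constraint between chain termini.
def residue_indices(len_a, len_b, offset=512):
    """Residue indices for chain A followed by chain B with a large index gap."""
    return list(range(len_a)) + list(range(len_a + offset, len_a + offset + len_b))

def clipped_relpos(indices, clip=32):
    """Relative position matrix, clipped to [-clip, clip] (AF2-style)."""
    return [[max(-clip, min(clip, j - i)) for j in indices] for i in indices]

idx = residue_indices(3, 2)   # chain A: 0,1,2   chain B: 515,516
rel = clipped_relpos(idx)
print(idx)                    # [0, 1, 2, 515, 516]
print(rel[0][3], rel[3][0])   # cross-chain pairs saturate at the clip: 32 -32
print(rel[0][2], rel[3][4])   # within-chain pairs keep their true offsets: 2 1
```

Every cross-chain pair lands on the same saturated bucket, which is exactly the "different chain" signal the reviewer argues makes a physical poly-G linker unnecessary.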
Evaluation: 5. The elevation of performances is in every sense too marginal. The results simply tell me that both ESMFold-Linker and the proposed method are not reliable (avg DockQ of 0.11/0.17), instead of telling me that the proposed method is useful. In this case, maybe the percentage of success is a better metric to show. 6. The VH-VL docking benchmark is totally pointless and lacks common sense. Anyone with a basic sense of the domain knows that all protein folding efforts on antibodies should focus on Ab-Ag instead of VH-VL, because all interaction modes between VH and VL are the same, i.e. they fold almost identically (in Table 2, as one can expect, all TM-scores are above 0.92). Therefore, they shouldn't be used as heterodimer folding or protein docking benchmarks. Even if one focuses on the structures of VH-VL, the metrics on the CDR loops should be independently reported, rather than global RMSD. [1] Zhu et al., Uni-Fold MuSSe: De Novo Protein Complex Prediction with Protein Language Models. https://www.biorxiv.org/content/10.1101/2023.02.14.528571v1.full.pdf Technical Quality: 1 poor Clarity: 3 good Questions for Authors: 8. I don't know how the standard deviation of RMSD reaches 8.39 when the mean is 8.59 (0.35 +- 0.36 for DockQ). This is counter-intuitive. The authors may need a histogram to explain this. 9. Why are the AF-Multimer performances in Table 1 (left) missing? 10. Boxplots are shown only for VH-VL. Why not show them for the Table 1 benchmarks? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 1 poor Presentation: 3 good Contribution: 2 fair Limitations: limitations are addressed in section 6. The suggestions would be 11. More solid benchmarks. 12. Discussions and comparisons with [1].
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
null
Summary: In this paper, the authors apply prompt tuning to the heterodimeric protein prediction task. Instead of using the poly-glycine linker, this method automatically finds the best linker in continuous space. The authors compare this method with several existing methods, including the current state-of-the-art algorithm AF-Multimer, the best PLM-based algorithm ESMFold-Linker, and the rigid docking algorithm HDOCK. The results show that this PLM-based method is better than ESMFold-Linker but worse than AF-Multimer. Strengths: 1. The novelty and contributions mentioned in the paper are clear and correct. 2. The motivation behind the method and its design are well-founded and logical. 3. Overall, the writing is great. Weaknesses: 1. Based on my understanding, the methodology seems limited in its applicability to pre-trained language model-based protein structure prediction methods, which are not considered the most accurate algorithms for protein structure prediction. Furthermore, the performance of this methodology appears to be influenced by the linker's position, and the location suggested in the paper is not in the PLM part, raising concerns about its compatibility with other PLM-based methods like OmegaFold. Consequently, the paper's value may diminish if more robust folding algorithms become available. 2. Given that all other methods are unsupervised, there is a possibility of the method benefiting from overfitting. Regarding the antibody dataset, from my understanding, antibodies generally exhibit a rigid overall structure except for the six CDR loops. Consequently, I suspect that the flexible CDR loops will not lead to significant variations in the docking of the light chain and heavy chain. Unless supported by evidence, I prefer to believe that the results obtained from the antibody dataset are not reliable, and it may not be an appropriate dataset for this task.
As for the Heterodimer test, while a 40% threshold seems acceptable, I believe setting a lower threshold, preferably below 30%, would be better. 3. In comparison to the state-of-the-art method AF-Multimer, this method exhibits a significant decrease in performance. When people have the opportunity to utilize a substantial number of CPUs for preparing MSA information, the speed advantage offered by this method does not compensate for its accuracy limitations. Moreover, as far as I understand, there aren't a lot of downstream tasks that necessitate high-throughput calculations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The definition of the Gap model is not clear. It seems to be the version where you set the residue_index_offset for the second chain to 512. If my understanding is correct, is there any explanation for the improvement? It seems this trivial strategy can achieve 50% of your improvement (according to the DockQ score for the Heterodimer test). Is it possible to also apply this trick to AF-Multimer and AlphaFold-Linker? 2. Why did you choose a 21-residue linker for AlphaFold-Linker, which differs from the number you used for ESMFold? 3. Could you show the distribution of the metric scores for the Heterodimer test and HeteroTest2? The standard deviation of the scores seems extremely high. In this case, it seems necessary to prove that the difference is statistically significant for some key comparisons. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
null
Summary: Inspired by the prompt tuning technique used in the field of NLP, the authors leverage prompt tuning to adapt the single-chain pre-trained ESMFold for heterodimer protein structure prediction. To be specific, a learnable soft prompt is placed between protein chains. With such a link, the pre-trained ESMFold treats complex structure prediction as monomer structure prediction, and the prompt is tuned on the heterodimer dataset. They show the model with such a trick can outperform the ESMFold-Linker baseline by large margins on both contact and structure prediction tasks on the heterodimer test set. Strengths: 1. Leveraging the prompt tuning idea in multimer structure prediction using a single-chain pre-trained model is quite novel and interesting. 2. The proposed trick does improve the performance over the ESMFold-Linker baseline. The trick can potentially be applied to different single-chain pre-trained protein structure prediction models and can be considered a general trick. Weaknesses: 1. The linker-tuning idea is only validated on ESMFold. I'm curious about its effectiveness when applied to other protein structure prediction (PSP) models, e.g., AlphaFold and OmegaFold. Prompt tuning is a general trick in NLP. Justifying the generalizability of the linker-tuning trick on different PSP models would definitely make the manuscript stronger. 2. The idea of linker-tuning is simple and effective, but I feel the current manuscript is not informative enough to be published at the NeurIPS conference. If the authors can justify the generalizability of the linker-tuning trick, I'm willing to adjust my score. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: NA Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
null
Summary: This work predicts the structure of heterodimeric protein chains by optimizing poly-G linkers that connect two chains of a heterodimer. Strengths: - This research, compared to other existing deep learning methods, finds an alternative to protein complex prediction methods, which is to make use of poly-G linkers. The idea connects closely to many biological applications and thus has more potential impacts on wider communities. - The evaluation metrics cover various aspects to comprehensively assess the model's performance. Weaknesses: - ProteinMPNN as another famous multimeric structure prediction tool should have been compared. - It seems very often the proposed method does not achieve the optimal performance (in Table 1 and Table 2). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It is not very clear to non-LM experts how adding a linker is an analogue to the prompt in LM. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative societal impacts were discussed in the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your suggestions. We would like to clear up some misunderstandings and answer your questions. 1. About ProteinMPNN [1]: it is a protein design model that takes structure as input and predicts the amino acid sequence. It is not a structure prediction model, so we do not compare our model with ProteinMPNN. 2. About performance: our method is a very lightweight adaptation method built on ESMFold (with ESM2-3B). We cannot expect it to perform as well as AF-Multimer for three reasons: (1) The base model for AF-Multimer is AF2, which is a stronger model than ESMFold (with ESM2-15B) in general, especially for those proteins with high-quality MSAs. (2) AF-Multimer is a fully fine-tuned version of AF2, with all the parameters in AF2 retrained, while our model is a prompt tuning method that contains only a tiny fraction of trainable parameters (0.0256M). (3) AF-Multimer ensembles five models, while we only use a single model. Although the performance of our method is not as good as AF-Multimer's, it is much faster and simpler both in training and inference. For a fair comparison, we think the baselines should be linker-based methods, including ESMFold-Linker and AF-Linker. As shown in Table 1 and Table 2, our method achieves better results than ESMFold-Linker. Furthermore, compared with AF-Linker, our method achieves comparable results on general heterodimers and better results on antibodies. 3. About the connection between linkers and prompts: let us take the Natural Language Inference (NLI) task as an example. NLI aims to predict the relationship (entailed, contradicted, or neutral) between two given sentences. In the "Pretrain, Prompt, Predict" paradigm [2], the input can be reformulated as <Premise sentence> ? [MASK] <Hypothesis sentence>, where the prompt is represented as "? [MASK]".
The pretrained LM, such as BERT, will take the whole sentence as input and predict the masked token, which will then be converted into one of the three answer choices. The key idea behind using prompts is to convert the downstream task into the pretrained task so it can be directly solved. This approach helps alleviate inconsistencies between the pretrained and downstream tasks and often leads to better performance than the traditional fine-tuning method. In our task, we use the linker to convert the two chains into a single sequence <chain 1> linker <chain 2>. Then predicting a complex structure (downstream task) becomes the same as predicting a single-chain structure (pretrained task), under the assumption that the model will recognize the linker as an unstructured region. To sum up, linkers and prompts are similar in three aspects: (1) they both serve as a connector, (2) they both convert the downstream task into the pretrained task, (3) they both steer the pretrained model to generate the desired output by providing a specific context. Reference: [1] Dauparas, Justas, et al. "Robust deep learning–based protein sequence design using ProteinMPNN." *Science* 378.6615 (2022): 49-56. [2] Liu, Pengfei, et al. "Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing." *ACM Computing Surveys* 55.9 (2023): 1-35. --- Rebuttal Comment 1.1: Comment: Thanks for the prompt response and for addressing the questions I raised. I understand the challenges inherent in expecting a lightweight model to match the performance of larger models. Nevertheless, the authors might consider incorporating a specific task or evaluating the model within a particular application context. By doing so, they can empirically demonstrate the indispensability of crafting a compact model, even if its performance may fall slightly short of optimal. This would greatly enhance the paper's impact and its recognition within the field.
For the current version, I would like to maintain my original score.
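The `<chain 1> linker <chain 2>` construction from the rebuttal is easy to sketch. This is a hedged toy version: the poly-G linker residue and default length are taken from the linker-based baselines discussed in the reviews, but the exact length and the mask convention (used, e.g., to drop linker positions from a predicted structure) are illustrative assumptions.

```python
# Sketch: join two chains into one sequence so a single-chain folding model
# can treat complex prediction as monomer prediction (the rebuttal's point).
def link_chains(chain_a, chain_b, linker_len=25, linker_res="G"):
    linker = linker_res * linker_len
    seq = chain_a + linker + chain_b
    # Boolean mask marking linker positions, e.g. for post-hoc removal.
    mask = [False] * len(chain_a) + [True] * linker_len + [False] * len(chain_b)
    return seq, mask

seq, mask = link_chains("MKTAYIAK", "QRQISFVK")  # toy chain sequences
print(seq)        # MKTAYIAKGGG...GGGQRQISFVK
print(sum(mask))  # 25 linker positions
```

In linker-tuning, the fixed poly-G characters would be replaced by learnable soft embeddings at those masked positions; the string version above only illustrates the input reformulation.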
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Prioritizing Samples in Reinforcement Learning with Reducible Loss
Accept (poster)
Summary: This paper proposes an experience replay priority scheme based on the notion of reducible loss (ReLo), estimated as the difference in Q-loss between using the online Q-network and the target Q-network, and shows in experiments in the DM Control Suite, Mujoco tasks, and Atari environments that it improves upon prioritized experience replay (PER) (TD-error based) and uniform replay. They also demonstrate through a toy example that their method is able to avoid the common pitfall of stochasticity that prioritized experience replay often struggles with. Strengths: The strength of this paper is in its simplicity and clarity. The idea and implementation are very simple and straightforward, requiring no new changes to existing algorithms and networks, and take advantage of the target network that is already present. The priority is very simple to compute, as it just requires computing the Q-loss one more time. The experiments are quite clear, and demonstrate how ReLo can outperform uniform sampling and PER across control/robotic tasks and MinAtar tasks. This all makes the idea very easy to adopt and further experiment with in follow-ups. Weaknesses: The main weakness is in the experiments. One weakness is in the ALE experiments, which do not run Rainbow for very long (only 2M steps). While these preliminary results are promising, they are ultimately too early in training to be relied upon. Nevertheless, they are more useful to see than not to have. Another weakness is the lack of more stochastic domains, as control/Mujoco and Atari are all very close to deterministic. Were sticky actions enabled for the Atari or MinAtar tasks? If not, then enabling them would be something very good to include that would not require running on a new domain. ---- After Author Rebuttals ---- After reading other reviews and author rebuttals, I do agree that the experiments are a little lacking.
There should be a clear (non-gridworld) stochastic domain that can highlight the difference between uniform, PER, and ReLo. The addition of noisy DMC results would fill this gap, and the authors will include them in the final paper. I maintain my score. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: As mentioned in the weaknesses section, for the MinAtar and Atari experiments, did you enable sticky actions? If yes, what was the sticky probability? Because ReLo uses the target network to compute priorities, it is currently tied to the hyperparameters of the target network. Have you considered using a second target network to disentangle this? Could there be cases where, for the RL, you would want a very fast updating target network, but for ReLo you would want a much slower updating target network? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: This is related to the question about having a second target network. ReLo currently cannot easily change the hyperparameters of the target network, as doing so may have a great impact on the RL optimization. It may end up being the case that ReLo would prefer a much slower (or much faster) updating target network. This seems like a possible future direction to explore. Perhaps there should be a brief mention of this in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
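The ReLo priority described in this review reduces to a one-line computation per transition: the TD loss under the online Q-network minus the TD loss under the target Q-network. A minimal sketch, assuming a squared TD loss and stubbed Q-functions (the paper's actual loss function and network code may differ):

```python
# Sketch of a ReLo-style priority: reducible loss = online loss - target loss.
def relo_priority(q_online, q_target, s, a, td_target):
    loss_online = (q_online(s, a) - td_target) ** 2
    loss_target = (q_target(s, a) - td_target) ** 2
    # Large positive: the slow-moving target net already fits this sample
    # better than the online net, so the remaining loss is reducible.
    # Near zero on a sample with high absolute loss: both nets fail equally,
    # suggesting irreducible noise -- exactly what plain TD-error PER over-samples.
    return loss_online - loss_target

q_online = lambda s, a: 1.0   # stub: current online estimate
q_target = lambda s, a: 2.5   # stub: target-network estimate
print(relo_priority(q_online, q_target, s=None, a=None, td_target=3.0))  # 3.75
```

Since the target network already exists in DQN-style agents, this is the "one extra Q-loss computation" the review credits for the method's simplicity.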
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive criticism and recommendations for making this paper better. We discuss the points below. ### Sticky Actions Yes, we used sticky actions in Atari and MinAtar, utilizing the default values of 0.25 and 0.1 respectively. ### Decoupling Target Network and Held-Out Model We thank the reviewer for pointing this out. We agree it would be an interesting question to see if ReLo could be implemented with a separate target network. For our experiments we wanted to highlight the robustness and generality of ReLo by studying it with different target update mechanisms and target update frequencies, and hence we did not use an additional target network. But we agree this would indeed be an interesting proposition. We performed some preliminary experiments with separate target networks and the performance was similar. However, we are running some ablation experiments with faster and slower update frequencies. ### Additional Experiments To highlight the ability of ReLo to mitigate forgetting, we created an experiment where an agent is given access to a sequence of tasks. This is explained in detail in the consolidated response. The experiment shows how the ReLo agent suffers the least degradation in performance on prior tasks while still learning the new task. This is in contrast to the PER agent, which forgets how to solve the previous task and also takes longer to learn the new task. --- Rebuttal Comment 1.1: Comment: After reading the other reviews and author rebuttals, I think some of the points made in the other reviews make sense, such as that MinAtar and DMC/Mujoco are perhaps not the best environments for a main experimental result. They are great as supporting results, but a (non-gridworld) environment with stochastic dynamics where PER clearly struggles is needed. In light of this, I'm inclined to slightly lower my score, but I do still think this paper presents a clear, simple and interesting idea.
--- Reply to Comment 1.1.1: Comment: Thank you for pointing this out. While we appreciate the concerns raised by you and the other reviewers, we would like to point out that MinAtar and Atari already have stochastic dynamics due to the presence of sticky actions. We also highlight that PER suffers significantly in MinAtar and is worse than the uniform sampling baseline in this benchmark. ReLo, on the other hand, does better than the baseline and PER, as evident from the higher IQM scores. In Atari too, there is an increase in performance when utilizing ReLo as the prioritization scheme compared to PER. However, to provide further evidence, we also conduct an additional study on stochastic versions of DMC environments. Specifically, we added noise sampled from $\mathcal{N}(0, \sigma^2)$ to the environment rewards during training. During evaluation episodes, no noise is added to the reward. This is similar to the stochastic environments used by Kumar et al. 2020 [1]. We chose a random subset of the DMC suite given the time constraint, choosing Quadruped Run, Quadruped Walk, Walker Run and Walker Walk. We are running the entire suite and can add the results to the revised version of the paper. The results of this experiment after 500K steps and 1M steps are presented below. For Quadruped Run, Quadruped Walk and Walker Run we used $\sigma = 0.1$, and for Walker Walk we used a higher level of noise ($\sigma = 1$) since there wasn't much change in performance when using $\sigma = 0.1$ compared to the deterministic version. The tables clearly show the sample efficiency of ReLo, with it having higher performance than the baselines even early in training. Results are calculated over 5 seeds.
### 500K Steps

| | Quadruped Run $\sigma = 0.1$ | Quadruped Walk $\sigma = 0.1$ | Walker Run $\sigma = 0.1$ | Walker Walk $\sigma = 1$ |
|:--------|:------------------------|:------------------------|:------------------------|:------------------------|
| PER | 311.35 (262.91, 359.79) | 615.69 (569.33, 662.06) | 639.24 (621.78, 656.71) | 245.41 (223.57, 267.24) |
| ReLo | **428.76 (389.24, 468.28)** | **867.15 (850.01, 884.29)** | **670.22 (663.87, 676.56)** | **287.18 (284.04, 290.33)** |
| Uniform | 128.04 (118.93, 137.15) | 262.71 (228.01, 297.42) | 568.65 (536.16, 601.15) | 153.48 (120.94, 186.02) |

### 1M Steps

| | Quadruped Run $\sigma = 0.1$ | Quadruped Walk $\sigma = 0.1$ | Walker Run $\sigma = 0.1$ | Walker Walk $\sigma = 1$ |
|:--------|:------------------------|:------------------------|:------------------------|:------------------------|
| PER | 523.66 (433.24, 614.07) | 919.14 (914.74, 923.54) | 716.75 (700.55, 732.94) | 711.5 (676.09, 746.92) |
| ReLo | **821.16 (800.2, 842.13)** | **936.26 (932.04, 940.47)** | **759.28 (754.95, 763.62)** | **911.19 (907.23, 915.14)** |
| Uniform | 553.91 (514.67, 593.16) | 616.06 (524.01, 708.1) | 625.14 (592.51, 657.78) | 495.18 (441.55, 548.82) |

We presented the stochastic gridworld setting in Section 4.6 to
- showcase a drawback caused by PER in stochastic environments, and
- show how ReLo is able to handle stochasticity by prioritizing points relevant to the main task over 'unlearnable' points (the point with stochastic reward).

We would also like to draw your attention to the gridworld setting that was suggested by Reviewer DXpG to study task switches. We show how ReLo can mitigate the effect of forgetting, while PER and uniform sampling show higher levels of degradation in performance. Based on feedback from Reviewer DXpG, we ran the experiment for longer, training for 1M environment steps and more seeds. There is now minimal overlap between the confidence intervals of the different methods.
| Algorithm | Task A | Task B |
|:------------|:-----------------:|:-----------------:|
| PER | 0.29 (0.19, 0.39) | 0.26 (0.16, 0.36) |
| Uniform | 0.43 (0.32, 0.54) | 0.40 (0.29, 0.51) |
| ReLo | 0.63 (0.52, 0.74) | 0.74 (0.65, 0.84) |

In our paper, we motivate ReLo by highlighting the pitfalls of TD error prioritization through gridworld environments. Our main experimental results on the DMC, MinAtar, Mujoco and Atari benchmarks are to highlight the versatility of ReLo prioritization in 1) varied domains (pixel and proprioceptive), 2) control schemes (continuous and discrete) and 3) TD update mechanisms (EMA and hard copy updates). We would also like to emphasize that our goal is to have an alternative stable prioritization scheme which addresses issues in PER. Thank you again for your insightful feedback, which improves the quality of our work. We believe these additional experiments address your concerns about the validity of our results and would kindly request you to increase the score. We are also happy to address any other concerns. [1] DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction
Summary: The authors propose a novel form of prioritized experience replay based on the "learn-ability" of a sample, or how much the training loss on a sample can be reduced. They point out that previous methods that prioritize data with high TD error are not optimal if the high TD error is irreducible, which can be caused by a noisy environment. Instead, the authors prioritize high reducible TD error using a new metric that is the difference in TD error between the current Q function and a delayed, target Q function. This is a simple and computationally cheap method that can be applied to many off-policy RL algorithms. They demonstrate across a range of continuous and discrete control benchmarks that this sampling metric outperforms uniform and PER sampling, and demonstrate that their method achieves lower TD error. Strengths: - Authors identify a valid limitation of prior methods: that they might over-emphasize samples with high irreducible error, leading to wasted training effort. - Reasonable application of supervised learning techniques to RL to address this limitation in a relatively simple and straightforward algorithm. The method also inherits the generality of other prioritized experience replay algorithms. - Authors demonstrate strong experimental results across a wide range of tasks and using multiple base RL algorithms. Weaknesses: - Clarity in writing. The background and related work sections could be more concise. In Section 3.1, the introduction of $f_{map}$ with PER priority is not very intuitive. Some terms like "significance" and "importance" of data are not well defined, and "learn-ability" and reducible loss seem to be the same. - In general, the experimental results are missing learning curves, especially since sample efficiency would be one of the main benefits of prioritized replay. It's also not clear how the final performance figures are determined (Are all methods trained until convergence? How are the policies evaluated?).
- Section 4.5 TD Error Analysis. The interpretation of the TD errors seems a bit stretched. Many factors could affect TD error that do not necessarily relate to policy performance; for example, that PER actually prioritizes high TD error samples, as the authors mention. Were the validation data gathered by the specific policy trained in each algorithm? Were these policies trained until convergence?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
- Section 3 makes a nice connection between the hold-out model in supervised learning and the target Q network in RL. Why isn’t the same reasoning applied to the hold-out data? If ReLo is computed on data that has already been used to train the target network, isn’t this no longer the reducible loss?
- Section 3.1: How are the priority values updated across the replay buffer as training progresses? If the priority values are only updated per training mini-batch, is it possible that there is a large proportion of out-of-date priority values, especially if ReLo was low to begin with but increases later, as in the forgetting case that was mentioned?
- Why is PER, in most cases, worse than the baseline method?
- Section 4.6: This is a nice toy analysis and may also be useful to mention earlier in the paper to support the hypothesis that PER is limited due to unlearnable samples. The issue is that the learning curves for this task aren’t very convincing because of the high variance and because average performance never really goes above 0.5 for any method.
- Why did you use LaBER as another comparison, and why were the other prioritized replay methods not relevant to compare against?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: No limitations addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable insights and suggestions to improve the clarity of the paper. First, we would like to apologize for the lack of explanation and coherence in the paper. We have fixed all the points raised in the review. We will address them sequentially.

### Notation Clarification
$f_{map}$ is used to denote the mapping from the raw ReLo values to a priority that can be used for sampling in the priority-queue implementation of the PER buffer. We shall add clarifications in the revised version of the paper to highlight this.

### Significance, Importance and Learnability
These are qualitative terms that we use to describe the phenomenon that not all data samples are equally important for the learning process. We use learnability as a qualitative measure of how useful a sample is for the learning process. The reducible loss is a quantitative metric that can be used to measure the learnability of a sample.

### How final performance is determined
We follow the recommendations of Agarwal et al. (2021) for aggregating results across a benchmark. They propose treating the performance of a given training run as a random variable and suggest that authors report statistical measures on these random variables. The interquartile mean (IQM) computes the mean of the middle 50% of runs, while the optimality gap measures how far an algorithm is from optimal performance, aggregated across environments. To compute these measures, each environment score first needs to be normalized. In the DMC benchmark, the optimal score for each environment is 1000, while we use the highest reported scores for each environment from the MinAtar paper and the OpenAI Gym environments for calculating the optimality gap for those benchmarks. The environment scores are normalized with respect to this max score. For the ALE benchmark, we normalize the scores of each game with respect to reported random and human-level scores.
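The aggregation described above (score normalization, IQM, optimality gap) can be sketched as follows. This is a minimal pure-Python illustration of the statistics named in the response, not the rliable implementation accompanying Agarwal et al. (2021); all function names are ours.

```python
def normalized_score(score, floor, ceiling):
    """Map a raw environment score to [0, 1] relative to a floor
    (e.g. a random policy) and a ceiling (e.g. the max or human score)."""
    return (score - floor) / (ceiling - floor)

def iqm(scores):
    """Interquartile mean: the mean of the middle 50% of sorted run
    scores, discarding the bottom and top quartiles."""
    v = sorted(scores)
    n = len(v)
    middle = v[n // 4 : n - n // 4]
    return sum(middle) / len(middle)

def optimality_gap(norm_scores, gamma=1.0):
    """Average shortfall from the optimal normalized score gamma;
    scores above gamma do not reduce the gap further."""
    return sum(gamma - min(s, gamma) for s in norm_scores) / len(norm_scores)
```

For example, a DMC run scoring 750 against the optimal 1000 normalizes to 0.75, and the IQM over runs is robust to a few outlier seeds in a way the plain mean is not.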
### Learning Curves
The learning curves for the experiments are given in the supplementary material.

### Relationship between validation TD Error and Policy Performance
We have addressed this in the Common Author Response. To summarize, there is indeed a correlation between the validation TD error and the policy performance in the trained off-policy agents.

### Calculation of Validation TD Error
Yes, the validation data was collected by the same policy after training was completed. The policies are trained for 1M steps (for DMC and Mujoco, which is standard for these benchmarks) or 2M steps (Atari).

### Separate Hold-out Data
One approach would be to collect a subset of trajectories from the environment during the training process to create an evolving held-out dataset. These trajectories would be only in the held-out buffer and not the training buffer. The main RL agent would learn from the training buffer, and in parallel we could learn a new Q network on only the held-out buffer to mimic the held-out model. However, this process would consume additional computational resources. In general, since the samples collected are non-i.i.d., target networks can be a good approximation for this held-out model.

### Updating Priority Values
The priorities are updated for the batch of samples that are trained on, so the priorities in the buffer are the last encountered priorities of each sample. This follows the implementation used by PER, since it can be very expensive to recalculate the priorities of all samples in the buffer after every update. The addition of $\epsilon$ to the probabilities ensures that all samples are replayed, and if there is an erroneous priority it would be corrected the next time the data point is sampled.

### Why does PER do worse than uniform in certain environments?
In a function approximation setting, it is generally quite difficult to obtain the correct values of every state accurately.
When an agent enters a region of the state space that it has not thoroughly explored before, it is bound to get a spike in the TD error for all these samples, and PER will repeatedly sample these new states. This might make the value estimates of slightly older states inconsistent, because we are in the function approximation setting, and thus can lead to forgetting. ReLo prevents this by sampling states based on the reducible loss and thus prevents forgetting.

### Learning curves in Section 4.6
Section 4.6 motivates why PER is not a good prioritization scheme and how ReLo would be able to handle these cases. We did not train those agents for longer because we are more interested in the sampling strategies of the baselines and how they handle stochasticity in environments.

### Why was LaBER used as a baseline?
LaBER did not require training any additional networks to obtain better prioritization schemes and required computational resources similar to Prioritized Experience Replay. That is why we think LaBER and PER are fair baselines, since ReLo also does not involve training any additional networks, which can be expensive.

---
Rebuttal Comment 1.1:
Comment: Thank you for your response. This has addressed my main concerns about clarity and about how experimental results are reported, and I have updated my score accordingly.
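The reducible-loss priority and the $\epsilon$-smoothed proportional sampling discussed in this thread can be sketched as below. This is our reading of the description in the review and rebuttal, not the authors' code; the clamping at zero, the `alpha` exponent and all names are illustrative assumptions.

```python
import random

def relo_priority(online_td_loss, target_td_loss, eps=1e-2):
    # Reducible loss: the part of the online network's TD loss that the
    # (hold-out-like) target network does not share. Irreducible noise
    # inflates both losses and so largely cancels in the difference.
    # Clamping at zero and the epsilon floor (so every transition keeps a
    # nonzero replay probability) are assumptions on our part.
    return max(online_td_loss - target_td_loss, 0.0) + eps

def sample_batch(priorities, batch_size, alpha=0.6):
    # PER-style proportional sampling: P(i) is proportional to p_i ** alpha.
    weights = [p ** alpha for p in priorities]
    return random.choices(range(len(priorities)), weights=weights, k=batch_size)
```

A transition whose online and target losses are both high (pure noise) keeps only the epsilon floor, whereas one the target network already fits well but the online network does not receives a high priority.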
Summary: This work introduces a novel experience replay sampling technique (ReLo-based sampling) that prioritizes sampling experience that has the greatest potential for reducing the agent’s loss. Empirically, ReLo-based sampling yields better performance than uniform random sampling and prioritized experience replay (PER).
Strengths:
1. The work tackles a known issue with PER.
2. The writing is generally very clear, and the authors consider a diverse set of benchmark tasks (discrete/continuous control, visual vs state-based observations).
Weaknesses:
1. The authors discuss how ReLo-based sampling can help in tasks with stochastic dynamics and/or rewards and help prevent forgetting, though it seems like the chosen benchmark tasks don't illustrate all of these benefits.
* DMC and OpenAI Gym environments have deterministic dynamics and rewards.
* MinAtar has stochastic dynamics (sticky actions with probability 0.1), but this stochasticity is completely independent of the agent’s state and chosen action. Thus, this stochasticity makes every state “equally unlearnable” in a sense.
* In contrast, I do see how ReLo is helpful if a subset of states has stochastic dynamics/rewards, as in the toy example in Section 4.6. Does this mean the observed benefits of ReLo can then be attributed to how it prevents forgetting?
2. I would like to see further discussion on how ReLo helps prevent forgetting, especially if the reasoning in my first comment is valid. Currently, there is one sentence mentioning this benefit. If the authors can provide a toy example similar to the gridworld example in Section 4.6, that would more clearly illustrate why ReLo is important. Something that comes to mind: consider an agent that must simultaneously learn 2 different tasks (e.g. navigate to point A and point B in a gridworld task). The first half of training, the agent sees task 1. The second half, it sees task 2.
Presumably, ReLo would prevent the agent from forgetting task 1 while learning task 2?
3. I believe Section 4.6 should be moved to just before ReLo-based sampling is introduced; it concretely illustrates a problem with PER and demonstrates how ReLo-based sampling addresses it.
4. Related to my two previous comments: I feel ReLo needs more motivation. Section 4.6 does a good job of motivating one aspect of ReLo, though I would really like to see motivation for the other aspects (forgetting, stochastic rewards). I am willing to raise my score if the authors can (1) clarify which benefits of ReLo the experiments highlight, and (2) better motivate ReLo, e.g. through the use of additional tasks that highlight each potential benefit of ReLo.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. Can the authors include the returns for the tasks shown in Figure 2? Does large TD error correspond to poor performance for these tasks? Since LaBER addresses issues with PER, it would be informative to see the TD error for LaBER in Figure 2 as well.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments and suggestions to improve the paper. We strongly believe that these suggestions will make the paper better. We address all the concerns and questions below.

### How ReLo can help even without stochasticity in rewards or dynamics
Yes, this can be attributed to how it prevents forgetting. We motivate ReLo by illustrating how prioritizing the TD error can be problematic when the reward or dynamics are stochastic (Section 4.6). However, ReLo can also help in preventing forgetting. In the function approximation setting, it is generally quite difficult to obtain the correct values of every state accurately, because a change in the network affects the values of the entire state space. When an agent enters a region of the state space that it has not explored before, it is bound to get a spike in the TD error for all these new samples, and PER will repeatedly sample these states. This might make the value estimates of slightly older states inconsistent because we are in a function approximation setting. ReLo would prevent this by allowing the agent to still sample based on the reducible loss, so it would only prioritize learnable samples.

### How ReLo can prevent forgetting
We thank you for this suggestion, and we performed a similar experiment where we confirmed that ReLo can indeed prevent forgetting. We elaborate on this and share interesting insights in the Common Author Response.

### Section 4.6
We shall move this section in the revised version of the paper.
### Benefits of ReLo
In this paper we study the effect of the ReLo criterion on
- handling noisy/stochastic points (Section 4.6)
- mitigating forgetting of previous tasks (forgetting experiment in the common response)

We also highlight its
- generality and robustness to the mechanism and frequency of target network updates (EMA/hard-copy target updates)
- applicability to varied tasks with discrete or continuous control, with pixel or proprioceptive inputs.

### Returns for experiments in Figure 2
The returns are given in Table 2 in the supplementary material.

### Relationship between validation TD Error and Policy Performance
We have addressed this in the Common Author Response. To summarize, there is indeed a correlation between the validation TD error and the policy performance in the trained off-policy agents.

### TD Errors for LaBER
We included the validation TD error for LaBER in the Common Author Response. We will add the training TD error in the revised version of the paper.

---
Rebuttal Comment 1.1:
Comment: Thank you for the additional experiments and clarification on the benefits of ReLo. I do still have a few concerns, and currently maintain my score. In the common response pdf, the success rates in Fig. 2 don't seem to be statistically significant; the standard deviations overlap quite a bit. Can the authors perform a paired t-test at a 95% confidence level for ReLo vs. PER and ReLo vs. Uniform to determine if these results are significant? It may also help to increase the training budget. I have the same concern for Fig. 3 in the main paper; while ReLo avoids sampling a state with low learnability, it's unclear if ReLo's return is any better than PER or Uniform. Overall, the results show that ReLo produces smaller TD errors during training (Fig. 2, Fig. 6 in appendix), though it's unclear if this reduction improves performance. In Fig.
3 of the appendix, ReLo clearly improves data efficiency in 3/9 tasks (quadruped run, quadruped walk, walker walk) and maybe in walker walk. In Fig. 4 of the appendix, it improves data efficiency only in HalfCheetah, and in Fig. 5, there's no obvious improvement offered by ReLo. I suspect many of the return results in Table 1 of the common response pdf are not statistically significant; could you also provide significance tests for this table and, for instance, highlight/bold cells where ReLo outperforms the others with significance?

---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for this additional feedback and for the concerns regarding statistical significance. We address them sequentially below.

### Forgetting Experiment
We increased the training budget to 1M environment steps (keeping the budget for task A constant at 100K) and 60 seeds, and report the results below. There is minimal overlap between the confidence intervals now, and ReLo still shows the least degradation in performance compared to the baselines.

| Algorithm | Task A | Task B |
|:------------|:-----------------:|:-----------------:|
| PER | 0.29 (0.19, 0.39) | 0.26 (0.16, 0.36) |
| Uniform | 0.43 (0.32, 0.54) | 0.40 (0.29, 0.51) |
| ReLo | 0.63 (0.52, 0.74) | 0.74 (0.65, 0.84) |

### Pitfalls of TD Error Prioritization (Figure 3)
While the variance can also be reduced with a higher training budget and more seeds, the main aim of Figure 3 was not to show the improved performance of ReLo in terms of rewards but to highlight cases where TD error prioritization fails under stochasticity in the environment due to poor sampling (the bar chart in Figure 3, which is statistically significant). We agree that running it with a higher training budget will increase the readability of the figure. These experiments are currently running and we will include them in the revised version of the paper.

### Significance of Table 1 in Common Response
We thank the reviewer for pointing this out.
We performed a paired t-test on the returns and the validation TD error in DMC. For the return, the following environments where ReLo does better are statistically significant: Quadruped Run, Quadruped Walk, Walker Run and Walker Walk. For the validation TD errors, all the environments where ReLo has better validation TD are significant. Accordingly, we have bolded the corresponding rows in the TD error and return correlation table and present it below. We will also update the paper accordingly.

| Environment | $TD_{Best}$ | $Return_{Best}$ |
|:---------------|:----------|:--------------|
| CheetahRun | PER | PER |
| FingerSpin | **ReLo** | ReLo |
| HopperHop | **ReLo** | Baseline |
| QuadrupedRun | **ReLo** | **ReLo** |
| QuadrupedWalk | LaBER | **ReLo** |
| ReacherEasy | **ReLo** | Baseline |
| ReacherHard | Baseline | ReLo |
| WalkerRun | **ReLo** | **ReLo** |
| WalkerWalk | **ReLo** | **ReLo** |
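For reference, the paired t-test requested above compares per-seed score pairs via the statistic below; the p-value would then come from the t distribution with n-1 degrees of freedom (e.g. via `scipy.stats.ttest_rel`). This is a generic textbook sketch, not the authors' analysis code, and the function name is ours.

```python
import math

def paired_t_statistic(xs, ys):
    """t = mean(d) / (std(d) / sqrt(n)) for paired differences d = x - y,
    using the sample (n - 1) variance of the differences."""
    n = len(xs)
    diffs = [x - y for x, y in zip(xs, ys)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)
```

With per-seed returns for two algorithms as `xs` and `ys`, |t| is compared against the critical value at the chosen confidence level.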
Summary: The authors propose a simple modification of prioritized experience replay that prevents unlearnable samples from being prioritized. Prioritized experience replay (PER) prioritizes samples with high TD errors, but samples with irreducible TD errors due to stochasticity may also be prioritized. This hurts learning, since these samples are unlearnable. Thus, the authors propose to prioritize only learnable samples by using reducible losses as priorities. The reducible losses are measured by the difference of the TD errors with respect to the online Q-network and the target Q-network. The experimental results show that the proposed method demonstrates consistent improvements over PER and uniform sampling on various common benchmarks.
Strengths:
- The idea of prioritizing reducible loss is a critical insight for prioritized experience replay.
- The method is easy to implement.
- The performance improvement seems to be consistent, though I have a few concerns about experimentation details.
Weaknesses:
- Lack of justification for why the difference between the online and target networks can be a good estimate of reducible loss. A low online network loss may not indicate that the Q-function learns the right Q-values, since online networks may be changing too fast and cannot provide stable value targets.
- The experiments on Atari need more justification. Since not every Atari game is tested, the reason for choosing these games should be given. Do these games cover all types of games? Are they particularly challenging for PER? Also, training only 2M frames makes the experiments weak. It's understandable that training 200M frames is not attainable for most people, but it's worth considering why training 2M frames would be sufficient to tell the difference between PER (Rainbow) and ReLo. It seems that the performance of ReLo and PER on Atari is pretty close. For now, the results on Atari are a bit inconclusive.
It's possible that in these compute-restricted settings, ReLo and PER won't have significant differences. Overall, I think the insight made in this paper is important. However, the current experimental results do not clearly demonstrate why this insight matters, since the performance is not significantly improved. On Atari, it's quite close to Rainbow (which uses PER). In other domains, ReLo does better than PER but is quite close to the baseline (uniform sampling). To strengthen this paper, I suggest the authors look for tasks that clearly show the performance gain of ReLo.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
- Why does PER degrade in Mujoco (OpenAI Gym) but not in the DeepMind Control Suite?
- In Table 1, ReLo doesn't have lower validation TD errors than PER and the baseline in CheetahRun. Can the authors comment on why?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Not mentioned in the paper. I suggest the authors think about limitations when the target network is not a good held-out set and how to construct a better held-out set. It would also be good to provide some theoretical analysis to show why the chosen priority will improve the convergence rate to the correct Q-function, or a similar analysis to that made in LaBER.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for the useful suggestions and comments to help improve this paper. We have incorporated the recommendations into the paper and discuss them below.

### Motivation of ReLo in RL and target networks
In Mindermann et al. (2022), the hold-out model is trained on a validation dataset drawn from the same distribution as the training dataset. In fact, the implementation uses a subset of the training set to train the validation model. In RL, however, we deal with non-stationary data distributions, so it is not possible to train a hold-out model. Nevertheless, the problem of the learnability of a sample still exists in RL. If we observe that the TD error has been high for a long time, then it might be the case that at those points there is inherent noise (e.g. the agent receives a random reward every time), making such points highly unlearnable. PER would emphasize such points, whereas ReLo will not. Thus, a lagging version of the online model (the target network) can be used to capture such points. We believe target networks can be a good approximation for the hold-out model for two reasons:
- Even though both models are trained on the same transitions, as explained earlier, there can be significant change in the data distribution that the online model sees but the target model does not until the next time the parameters are copied. As the policies approach close-to-optimal behavior, the difference in behavior between the online and the target network might reduce. However, as Schaul et al. (2022) mention, the policies keep changing even when close to convergence, which means at no point are the distributions for training the online and target models the same.
- This is just an approximation for the hold-out model, and we only use it as a prioritization scheme. So even the points that have low priority are still sampled.
Thus, in cases where the approximation does not hold well, it won’t affect training in a major way, as those points will still be sampled.

### Atari Experiments
We have addressed your concerns in the Consolidated Author Response.

### Additional examples of the benefits of ReLo
We have added another example to show how ReLo can prevent forgetting when switching to a new task. In addition, in our experiments in the paper, we show that ReLo is more robust and can be widely applied to many domains, whereas PER only has limited benefits in particular environments.

### Poor Performance of PER in Mujoco
From our inspection of the training process, we believe this could be due to instabilities in training caused by high TD errors. During training of the PER agent, the TD error would sometimes explode and not recover.

### Higher Validation TD Error on CheetahRun
ReLo does not outperform the baselines in this particular task. It could be that prioritizing the TD error works really well in this environment, which is why PER is the best performing algorithm.

### Limitations
We will add a separate limitations paragraph in the Conclusions section in the revised version of the paper.
* Separate held-out set instead of target networks: one approach would be to collect a subset of trajectories from the environment during the training process to create an evolving held-out dataset. These trajectories would be only in the held-out buffer and not the training buffer. The main RL agent would learn from the training buffer, and in parallel, we could learn a new Q network on only the held-out buffer. However, this process would consume additional computational resources. In general, since the samples collected are non-i.i.d., target networks can be a good approximation for this held-out model.
* Theoretical analysis: we agree it would be interesting to perform a theoretical analysis of the change in training dynamics and convergence rate introduced by sampling with ReLo.
We have mentioned this as an avenue for future work in the conclusion of the paper.

---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. The responses answered my questions and addressed two of my main concerns: why the target network is used and why the Atari100k environments are tested. However, the response didn't answer the rest of the questions (see questions). Also, the new experiments in the gridworld are not well motivated. Why ReLo prevents forgetting is not explained. Does forgetting happen in other environments? A clear explanation and analysis of why the proposed method works is also important for a good paper. That being said, since I believe that avoiding prioritizing data with irreducible loss is worth being presented to the community, I increased my rating to 5.

---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for the thoughtful discussion and for reconsidering their score. We address the further concerns raised in your comment below.

### Why PER is bad in Mujoco
Thank you for pointing this out. We now have a strong answer validated by our experiments. We believe that the degraded performance in Mujoco is because of instability in learning caused by rapidly varying value estimates. To test this hypothesis, we studied the Walker2d environment, where PER obtains a mean reward of ~700, compared to the baseline (uniform sampling) which obtains a mean reward of ~2800. This is surprising, since PER does learn to perform well in WalkerWalk, which is the DMC equivalent of Walker2d (they have similar agent morphologies). An important difference between the two environments is that Walker2d has early termination (the episode ends when the walker falls down) while WalkerWalk does not. The early terminations could make predicting the value of the fail-state distribution difficult, as the walker falls in different ways. There could be a lot of noise in the reward at that stage, which can make the TD estimate noisy too.
We hypothesize that since PER samples datapoints proportional to the TD error, these states would be repeatedly sampled, as the noisy estimates would keep the TD error high. This means PER would be less likely to prioritize the samples corresponding to good behavior, which could have lower TD error than the noisy fail states. We looked at the value estimate of the fail state in the baseline, PER and ReLo agents and observed that this was the case. These are calculated over 40 episodes.

| Method | Mean (CIs) | Min | Max |
|:---------|:-------------------------:|:---------:|:-------:|
| Baseline | 29.055 (-62.551, 120.662) | -87.8392 | 258.484 |
| PER | 149.982 (-76.382, 376.346) | -115.291 | 793.237 |
| ReLo | 18.225 (-0.307, 36.756) | -14.4796 | 49.37 |

There is high variance in the predicted value of the fail state for PER, meaning that invariably the TD error for these points would be high. But further training on noisy points does not help and instead makes the problem worse, causing the value estimate to diverge. This can cause instabilities in training and potentially derail learning. Finally, we created a modified version of Walker2d without early terminations, and PER achieves much better performance in this environment (a mean score of 1943.65 compared to 700.5 in the original Walker2d), validating our hypothesis. Besides removing early termination, the other parameters of the environment and the hyperparameters of the PER agent were the same in both experiments. We also looked at the value of the initial states (the environment is randomly initialized, so there is an initial state distribution), and PER has higher variance in the predicted value even here.
| Method | Mean | Min | Max |
|:---------|:--------------------------:|:-------:|:-------:|
| Baseline | 227.916 (215.415, 240.417) | 215.929 | 252.337 |
| PER | 249.379 (177.265, 321.494) | 177.968 | 337.663 |
| ReLo | 203.893 (195.357, 212.428) | 193.014 | 214.811 |

This analysis adds credence to our hypothesis that PER suffers from high variance in value estimates, which hurts learning. Additionally, these experiments also show that ReLo has the least variance in the predicted value of the state (initial or fail state), highlighting how ReLo is a more stable prioritization scheme.

### Why ReLo has worse validation TD error in Cheetah Run
ReLo also achieves lower returns than the baselines in Cheetah Run, and worse return usually correlates with worse validation TD error. The degradation in performance is not large, however, and all algorithms manage to solve the task.
Rebuttal 1:
Rebuttal: Thank you for taking the time to review our work. We greatly appreciate your valuable feedback and comments, and we will incorporate the suggestions into the revised version of the paper. We would like to clarify a few points that multiple reviewers mentioned before addressing each comment individually.

### ReLo can help prevent forgetting
We created a 6x6 gridworld consisting of two rooms, with a goal state in each room. The left room is called Room A and the right Room B. We define Task A as the agent starting in Room A and reaching the goal state in Room A, and similarly define Task B as the agent starting in Room B and reaching the goal in Room B. There is a single gap in the wall allowing the agent to explore both rooms, but a time limit is implemented such that the agent cannot reach both goals in one episode. For the first 100K environment steps, the agent begins in Room A and we block access to Room B, so during this stage the agent only learns about Task A. After 100K steps, the agent starts in Room B, allowing it to learn Task B. The agent no longer starts in Room A, thereby no longer collecting data about Task A, and must retain its ability by replaying the relevant transitions from the buffer. We train three agents, a baseline DQN agent, a PER DQN agent and a ReLo DQN agent, and monitor their performance on both tasks during training. We evaluate performance over 50 seeds, providing the average success rates at the end of training in the table below. The training curves and a visualization of the environment are included in the PDF in the general response.

| Algorithm | Task A | Task B |
|:----------|:------:|:------:|
| PER | 0.40 | 0.64 |
| Uniform | 0.32 | 0.72 |
| ReLo | 0.80 | 1.00 |

From the training curves, we can see that there is a clear drop-off in performance on Task A after 100K steps, when the agent can no longer actively collect data on the task.
However, the ReLo agent exhibits the least degradation on Task A while also outperforming the baseline and PER agents on Task B. This experiment clearly shows how ReLo helps the agent replay relevant data points that could otherwise have been forgotten.

### Relationship between Validation TD Error and Performance (DXpG, Mxrp)
We analyzed the validation TD error and the return for environments in the DMC suite and observed that there is indeed a correlation between the two metrics. Of the 9 games tested, in 5 games the method with the best (lowest) validation TD error is also the method with the best (highest) policy performance. There is a similar correlation between the worst validation TD error and the worst performing policy.

### Atari Experiments (ax9T, hZPB, DXpG)
The subset of games was chosen from the games used in the Atari100K benchmark. This is a benchmark for evaluating sample efficiency in Atari, where agents are provided with a budget of only 100K interactions with the environment. The suite was chosen to cover a range of games where non-random performance can be obtained in the 100K data regime. Hence, we believe this subset is a good suite of games to validate our hypothesis that ReLo improves sample complexity.

Pdf: /pdf/ceeee82aeecca99b7b2861e3901d18d99eb1c153.pdf
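The two-room gridworld used in the forgetting experiment above could be sketched as follows. Only the two-room/two-task structure, the 6x6 size, the single wall gap and the time limit come from the description; the cell layout, start and goal positions, action set and unit goal reward are our assumptions.

```python
# Minimal sketch of the two-room 6x6 gridworld (Task A: left room, Task B:
# right room). Coordinates are (row, col); layout details are assumed.
class TwoRoomGridworld:
    SIZE = 6
    WALL_COL = 3   # cells with col < 3 are Room A, col >= 3 are Room B
    GAP_ROW = 2    # the single gap in the wall is in this row

    def __init__(self, task="A", time_limit=20):
        self.task = task
        self.time_limit = time_limit
        self.goal = (5, 0) if task == "A" else (5, 5)

    def reset(self):
        self.t = 0
        # Start in the room matching the current task (assumed positions).
        self.pos = (0, 1) if self.task == "A" else (0, 4)
        return self.pos

    def step(self, action):  # 0: up, 1: down, 2: left, 3: right
        r, c = self.pos
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        nr, nc = r + dr, c + dc
        blocked = not (0 <= nr < self.SIZE and 0 <= nc < self.SIZE)
        if not blocked and (c < self.WALL_COL) != (nc < self.WALL_COL):
            # Crossing between rooms is only possible through the gap row.
            blocked = nr != self.GAP_ROW
        if not blocked:
            self.pos = (nr, nc)
        self.t += 1
        reward = 1.0 if self.pos == self.goal else 0.0
        done = self.pos == self.goal or self.t >= self.time_limit
        return self.pos, reward, done
```

Switching `task` after 100K steps while keeping a shared replay buffer reproduces the forgetting setup: transitions from Task A then survive only in the buffer.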
NeurIPS_2023_submissions_huggingface
2023