Robust Conformal Prediction Using Privileged Information
Accept (poster)
Summary: This paper focuses on conformal prediction with "privileged information" in constructing prediction intervals under missingness/noise. The proposed quantile-of-quantile approach handles the difficulty that the "privileged information" is not available for test data and achieves finite-sample coverage. Strengths: The paper is well written and well organized, presenting both solid theoretical guarantees and comprehensive simulation studies. The presentation is very good. Weaknesses: The intuition behind the notion of "privileged information" is not well explained. It would be better if examples of Z in real applications could be introduced and justified in the beginning. There are also hyperparameters, e.g. \beta, in the proposed approach, for which ablation studies are not provided. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. [Conditional independence assumption] Consider the scenario with split conformal prediction where Z is a function of X that is possibly determined from the first half of the training data, e.g. Z is the feature from X that correlates the most with Y. In this case, the conditional independence assumption may not hold precisely, and a robustness result would help make the theory complete. 2. [Choice of beta] An ablation study on the choice of the hyperparameter beta in the simulation would be helpful in understanding the role of beta; a tradeoff in terms of accuracy could be expected. 3. [Definition of Z] In some real applications, the "privileged information" is not pre-defined; instead, it is usually learned from data. It would be very interesting to investigate the "optimal" partition of the information in X into features and "privileged information". Sufficient dimension reduction, self-supervised training, and related techniques could be promising. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are discussed in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Comment: We appreciate your review of our work and thank you for the helpful suggestions and interest in our work. In what follows we respond to your concerns in detail. ## The privileged information We thank the reviewer for raising this point, which is related to the one raised by Reviewer 3YMV. For the convenience of the reviewer, we repeat our answer below. In the updated manuscript, in Example 1 and Example 2 we expanded on the intuition behind the privileged information. Specifically, in the noisy response setup outlined in Example 1, we explained that since the PI, $Z_i$, is the information about the annotator, it is likely to explain the corruption appearances $M_i$. That is, in this example, the features $X_i(0)$ and the ground truth label $Y_i(0)$ do not provide additional knowledge about the corruption indicator given $Z_i$, i.e., $ X_i(0), Y_i(0) \indep M_i \mid Z_i=z$. Additionally, we added the following example inspired by a real-world medical task. Consider a medical setup where patients are being selected for a costly diagnosis, such as an MRI scan. Here, $X_i(0)=X_i(1)$ consists of the more standard medical measurements of the i-th patient, such as age, gender, medical history, and disease-specific measurements. The PI $Z_i$ is the information manually collected by the doctor to choose whether the patient should be examined by an MRI scan. This information is obtained through, e.g., a discussion of the doctor with the patient, or a physical examination, and could include, for instance, shortness of breath, swelling, blurred vision, etc. The response $Y_i(0)$ is the disease diagnosis obtained by the MRI scan, and $Y_i(1)='\texttt{NA}'$. The missingness indicator $M_i$ equals 0 if the doctor decides to conduct an MRI scan, and 1 otherwise. At test time, our goal is to assist the doctors in future decisions before examining the patients, and hence the test PI $Z_\text{test}$ is unavailable. 
This task is relevant in situations where the number of available doctors is insufficient to examine all patients. Here, $Z_i$ explains the missingness $M_i$, and $M_i$ does not depend on $X_i$ or $Y_i$ given $Z_i$. Finally, we note that PCP can produce valid uncertainty sets even if the independence assumption $(X \indep M) \mid Z$ is not satisfied. In this case, we instead assume that both $X,Z$ explain the corruption indicator, namely, $(Y \indep M) \mid (Z,X) $, and that the features are uncorrupted $X(0)=X(1)$. Also, the weights $w_i$ are instead defined as: $w_i = \frac{\mathbb{P}(M=0)}{ \mathbb{P}(M=0 \mid X=x_i, Z=z_i)}$. We will clarify this point in the revised manuscript as well. ## The choice of $\beta$ Thank you for bringing up this important point, which was also raised by reviewer hB3A. First, we emphasize that Theorem 1 holds for any choice of $\beta\in (0,\alpha)$. Therefore, $\beta$ only affects the sizes of the uncertainty sets. Intuitively, as $\beta \rightarrow \alpha$, a higher quantile of the weighted distribution of the scores is taken, and a lower quantile of the $Q_i$’s is taken. Similarly, as $\beta \rightarrow 0$, a lower quantile of the weighted distribution of the scores is taken, and a higher quantile of the $Q_i$’s is taken. An optimal $\beta$ can be considered as the one that leads to the narrowest intervals. Such an optimal $\beta$ can be computed in practice over a grid of values in $(0,\alpha)$, using a validation set. Nonetheless, to keep Algorithm 1 (PCP) simple and intuitive, we did not optimize for $\beta$ in the paper. In our experiments, we chose a $\beta$ close to 0, so that the quantile of the weighted distribution of the scores chosen by PCP is close to the quantile chosen by (the infeasible) WCP. In response to this comment, we added this discussion to the text, and we conducted an ablation study for the choice of $\beta$ on a synthetic dataset. 
The results of the ablation study are provided in the PDF file in the global response. This experiment indicates that the smallest intervals are achieved for $\beta$ that is close to 0, and the interval sizes are an increasing function of $\beta$. Yet, it is important to understand that different results could be obtained for different datasets. Therefore, we recommend choosing $\beta$ using a validation set, as explained above. Thank you for the opportunity to discuss this issue. ## Definition of Z Definitely! We agree that in some tasks it might be difficult to collect privileged information. Indeed, we view the problem of choosing an “ideal” privileged information that satisfies the conditional independence assumption as a thrilling future research direction. --- Rebuttal Comment 1.1: Title: Follow-up comment Comment: Thank you for the comments! I'll maintain my score at this moment. I think it is a very interesting paper and it raises the idea of using side information to leverage (conformal) inference. I'll be more than happy to see a discussion section on (1) how to identify privileged information and (2) potential extension beyond the conditional independence assumption. Thanks! --- Rebuttal 2: Comment: We are appreciative of the reviewer's comments and for acknowledging our previous response. We are thankful for the opportunity to further clarify and expand on the points raised. Below, we address each comment in detail. ## Identifying privileged information Identifying appropriate privileged information is indeed a challenging task. In general, this requires expert knowledge, similar to the unconfoundedness assumption in causal inference applications used for treatment effect estimation, for example. In more detail, our solution lies in the conditional independence assumption: $(X(0), Y(0)) \indep M \mid Z=z$. In words, $Z$ should explain the information about the corruption appearances that is encapsulated in $X(0)$ and $Y(0)$. 
For instance, in our noisy labels setup in Example 1, the information about the annotator is a good candidate for privileged information, as the noise pattern depends directly on the annotator. Yet, the challenge here is that the conditional independence requirement cannot be directly tested in practice, as we only observe $(X(0), Y(0)) \mid M=0$. In view of unconfoundedness, our work introduces a relaxation of this assumption as we do not assume that $Z$ is observed at test time. From a practical perspective, we found that PCP can produce valid uncertainty sets even if the independence assumption is not satisfied. For example, in our real-world noisy response experiment from Section 4.3, PCP attained the target coverage rate even though the conditional independence assumption was not confirmed. We believe this is attributed to our formulation of $Z$, being a variable that is highly correlated to $M$. Intuitively, when $Z$ explains the corruption indicator $M$ well, it is sensible to believe that the conditional independence requirement is approximately satisfied. More formally, below we analyze the setting where the conditional independence assumption is violated and provide a lower bound for the coverage rate attained by PCP. In simple words, this new result reveals that PCP achieves a coverage rate that is closer to the nominal level as the conditional independence assumption violation is smaller. ## Extension beyond the conditional independence assumption We are grateful for the opportunity to discuss this topic. We propose an initial extension of Theorem 1 to a setting where the conditional independence assumption is not fully satisfied. For the simplicity of this initial extension, we assume $X(0) \indep M \mid Z=z$. The independence assumption $ Y(0) \indep M \mid Z=z$ is equivalent to assuming that the density of $Y(0) \mid M=m, Z=z$ is the same for $m \in \\{0,1\\}$, formally: $$f_{Y(0) \mid M=0, X=x,Z=z}(y; 0,x,z) = f_{Y(0) \mid M=1,X=x, Z=z}(y; 1,x,z). 
$$ In our extension, we relax this assumption and instead require that $\forall x\in \mathcal{X}$, there exists $\varepsilon_x \in\mathbb{R}$ such that the difference between the two densities is bounded by $\varepsilon_x$: $$\forall y\in\mathcal{Y}, z\in\mathcal{Z}: | f_{Y(0) \mid M=0, X=x,Z=z}(y; 0,x,z) - f_{Y(0) \mid M=1,X=x, Z=z}(y; 1,x,z) | \leq \varepsilon_x .$$ $\textbf{Theorem [Robustness of PCP to conditional independence violation]}$ Suppose that $\\{(X_i(0),X_i(1), Y_i(0),Y_i(1),Z_i, M_i)\\}\_{i=1}^{n+1}$ are exchangeable, $P_{Z}$ is absolutely continuous with respect to $P_{Z \mid M=0}$, and $\forall x\in\mathcal{X}$ there exists $\varepsilon_x \in\mathbb{R} $ such that: $$ \forall y\in\mathcal{Y}, z\in\mathcal{Z}: | f_{Y(0) \mid M=0, X=x,Z=z}(y; 0,x,z) - f_{Y(0) \mid M=1,X=x, Z=z}(y; 1,x,z) | \leq \varepsilon_x .$$ Then, the coverage rate of the prediction set $C^{PCP}(X^\textup{test})$ constructed according to Algorithm 1 is lower bounded by: $$ \mathbb{P}(Y^\textup{test} \in C^{PCP}(X^\textup{test})) \geq 1-\alpha- \mathbb{E}_{X,Z}[ | C^\texttt{PCP} (X) | \varepsilon_X \mathbb{P}(M=1\mid X,Z)] . $$ We omit the proof due to the limitation of space. We can send the proof in a separate comment. This result provides a lower bound for the coverage rate of PCP in the setting where the conditional independence assumption is not exactly satisfied. Intuitively, as $\varepsilon_x$ decreases, i.e., as the two distributions $Y(0) \mid M=m, Z=z$ for $m\in \\{0,1\\}$ are closer to each other, the lower bound is tighter, and closer to the target level. Similarly, as the two distributions diverge, the lower bound becomes looser. We once again thank the reviewer for these insightful suggestions, which significantly improve our paper. We will include these discussions in the revised manuscript.
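To make the weighted-quantile machinery underlying WCP and PCP concrete, here is a minimal numpy sketch. The helper name `weighted_quantile` and its conventions (normalized weighted CDF, left-continuous inverse) are our own illustration; the exact $\alpha$- and $\beta$-dependent levels used by PCP are those specified in Algorithm 1 of the paper.

```python
import numpy as np

def weighted_quantile(scores, weights, q):
    """q-quantile of the weighted empirical distribution of `scores`.

    Weights are normalized to sum to one, and the quantile is the
    smallest score whose weighted CDF reaches level q.
    """
    scores, weights = np.asarray(scores, float), np.asarray(weights, float)
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cdf = np.cumsum(w) / np.sum(w)
    idx = np.searchsorted(cdf, q, side="left")
    return s[min(idx, len(s) - 1)]
```

With uniform weights ($w_i \equiv 1$) this reduces to the ordinary empirical quantile of split conformal prediction; the covariate-shift weights $w_i$ tilt the calibration distribution toward the test distribution.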
Summary: This paper introduces a method to create prediction sets with guaranteed coverage in the presence of training data that is corrupted by missing or noisy variables. The approach is an extension of conformal prediction that works by assuming access to privileged information available during training. This information is used to account for distribution shifts caused by the corruptions. The proposed method is supported by theoretical coverage guarantees, and empirical examples are used to demonstrate that the approach produces more reliable and informative predictions compared to existing methods on real and synthetic datasets. Strengths: This paper stands out for its originality in addressing issues caused by corrupted training data through a novel extension of conformal prediction. Through a rather common assumption of access to privileged information, the authors are able to develop an effective solution for obtaining conformal prediction sets despite the distribution shift caused by the corrupted data. The main results are rigorously proven, and the presentation of the results and proofs is clear. The difficulty in proving Theorem 1 is well explained, and thus the quality and significance of obtaining the result is evident. The practicality of this result is apparent in that it potentially opens new possibilities for applying conformal prediction in high-stakes applications where there may be corrupted data. Weaknesses: The empirical evaluation, though comprehensive, could benefit from a broader range of real-world datasets. Also, some discussion around access to and potential surrogates for the privileged information could better highlight the scope of the work. For example, some discussion on what could constitute privileged information, and ideally even what characteristics of the information would make it most useful to the method, would be interesting and really enhance the work. 
Technical Quality: 4 Clarity: 4 Questions for Authors: Could some discussion around what constitutes "ideal" privileged information for this particular method be possible? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes, sufficiently addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We very much appreciate your positive feedback and interest in our work. We thank the reviewer for classifying our contribution as a novel one. We also thank the reviewer for their helpful comments and suggestions. In what follows, we address your comments in detail. $ \newcommand{\indep}{\perp \\\!\\\! \perp}$ ## The Privileged Information We thank the reviewer for raising this point, which is related to a comment raised by Reviewer R7TW. In the updated manuscript, in Example 1 and Example 2 we expanded on the intuition behind the privileged information. Specifically, in the noisy response setup outlined in Example 1, we explained that since the PI, $Z_i$, is the information about the annotator, it is likely to explain the corruption appearances $M_i$. That is, in this example, the features $X_i(0)$ and the ground truth label $Y_i(0)$ do not provide additional knowledge about the corruption indicator given $Z_i$, i.e., $ X_i(0), Y_i(0) \indep M_i \mid Z_i=z$. Additionally, we added the following example inspired by a real-world medical task. Consider a medical setup where patients are being selected for a costly diagnosis, such as an MRI scan. Here, $X_i(0)=X_i(1)$ consists of the more standard medical measurements of the i-th patient, such as age, gender, medical history, and disease-specific measurements. The PI $Z_i$ is the information manually collected by the doctor to choose whether the patient should be examined by an MRI scan. This information is obtained through, e.g., a discussion of the doctor with the patient, or a physical examination, and could include, for instance, shortness of breath, swelling, blurred vision, etc. The response $Y_i(0)$ is the disease diagnosis obtained by the MRI scan, and $Y_i(1)='\texttt{NA}'$ is the missing value. The missingness indicator $M_i$ equals 0 if the doctor decides to conduct an MRI scan, and 1 otherwise. 
At test time, our goal is to assist the doctors in future decisions before examining the patients, and hence the test PI $Z_\text{test}$ is unavailable. This task is relevant in situations where the number of available doctors is insufficient to examine all patients. Here, $Z_i$ explains the missingness $M_i$, and $M_i$ does not depend on $X_i$ or $Y_i$ given $Z_i$. Finally, we note that PCP can produce valid uncertainty sets even if the independence assumption $(X \indep M) \mid Z$ is not satisfied. In this case, we instead assume that both $X,Z$ explain the corruption indicator, namely, $(Y \indep M) \mid (Z,X) $, and that the features are uncorrupted $X(0)=X(1)$. Also, the weights $w_i$ are instead defined as: $w_i = \frac{\mathbb{P}(M=0)}{ \mathbb{P}(M=0 \mid X=x_i, Z=z_i)}$. We will clarify this point in the revised manuscript as well.
Summary: The authors introduce a calibration method called Privileged Conformal Prediction (PCP) to generate prediction sets that guarantee coverage on uncorrupted test data, even when the target label (Y) and/or input features (X) in the calibration data samples are corrupted (e.g., missing or noisy variables). The key innovation is leveraging privileged information (PI)—additional features (Z) available during training but not at test time—to handle distribution shifts induced by corruptions. They assume that the input and target features of clean data (X(0),Y(0)) are independent of the corruption indicator variable (M) given the privileged information (Z). This allows the authors to treat this setting as a specific case of covariate shift (and weighted conformal prediction), with the added challenge that the PI variable is not available at test time (Ztest). To address this, they propose a reformulation of the weighted conformal prediction framework, where an estimate of the non-conformity score threshold without Ztest is obtained by considering a conservative estimate based on a quantile of the calibration thresholds. Experiments on real and synthetic datasets show that PCP achieves valid coverage rates and constructs more informative predictions than existing methods that lack theoretical guarantees. Strengths: The paper is well-presented, motivated, clear, and easy to follow. The problem seems relevant and novel within the context of conformal prediction, as far as I know. The authors effectively present the hypothesis and challenges of their problem. Discussing the two-stage approach before introducing the proposed solution is beneficial, as one might initially consider estimating Ztest from the input features and building the weight based on that estimate. 
I think the proposed method, where the authors estimate the non-conformity score threshold without Ztest by considering a conservative estimate based on a quantile of the calibration thresholds, is well-motivated and supported by theoretical guarantees, as presented in Theorem 1. Moreover, experimental results show the benefits of the proposed approach. Weaknesses: I do not see major weaknesses with this work; the authors address the limitations of their approach in Section 5. Technical Quality: 3 Clarity: 3 Questions for Authors: . Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful for the time and effort you put into this review. We are also very appreciative of your encouragement and positive assessment of our work. Your feedback is very important to us! Thank you again for your support.
Summary: This paper is studying the problem of conformal prediction in the presence of data (covariate or label) corruption leveraging privileged data. They build upon the framework of weighted conformal prediction by introducing a novel leave-one-out weighting technique which produces a conservative (upper-bound) estimate of the original threshold of the weighted conformal prediction. They further theoretically and experimentally evaluate the coverage validity of their method. Strengths: - The presented framework is very general and can potentially be applied in a range of problems in practice. - Their method is very intuitive and insightful in the sense that their leave-one-out technique can potentially be applied to adjacent problems in conformal prediction. - The presentation of the paper is very nice. I particularly like the authors' decision to present the "two-stage naive" method first to prepare the readers to fully absorb the underlying challenge of the problem. Weaknesses: I am willing to give a thumbs-up for this paper. However, before that I would appreciate it if the authors could respond to the following concerns: 1) I don't understand how one can compute the quantiles (Q_i) using eq. (8). It looks like one needs to know the values of w_i in order to compute Q_i. However, I cannot find an explanation in the paper on how to compute w_i using data. A formulation of w_i is given in the paper using the probability distribution of M and Z. Are the authors assuming that the distribution of (Z, M) is known? If not, how can one compute w_i? If the w_i is meant to be estimated from data, first, how to do that estimation, and second, how does that estimation affect the coverage validity theorems? This is an important concern, and either way, this should be clearly stated before presenting the algorithm. In the current format, the algorithm is not complete! 2) It is interesting, yet concerning, that there is no trace of the choice of \beta in the presented theory. 
Can the authors comment on the choice of \beta from the points of view of theory and practice? It is odd to me how the algorithm might produce meaningful prediction sets for both \beta = 0.00001 and \beta = 0.99999! It is then natural to ask how one should tune \beta in practice and how that choice affects the prediction set size and coverage validity. Technical Quality: 3 Clarity: 3 Questions for Authors: - How to compute w_i? - How to tune \beta? I might ask more questions after I hear the response of the authors. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There is a limitation section that covers most of the limitations. However, I believe some of the limitations discussed in the very last section of the paper (specifically all the theoretical assumptions) must be presented and discussed much earlier in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive and valuable feedback and suggestions. In what follows, we address your concerns in detail. ## The weights $w_i$ We thank the reviewer for raising this point. As the reviewer suggested, the real ratios of likelihoods, $w_i$, are required to provide the validity guarantee in Theorem 1. While PCP can be applied with estimates of $w_i$, which can be computed with estimates of $\mathbb{P}(M=0 \mid Z=z)$ according to equation (3), the validity guarantee does not hold in this case. This restriction is similar to WCP, which also requires the true weights $w_i$ to provide a validity guarantee. The effect of inaccurate estimates of $w_i$ on the coverage rate attained by PCP could be an exciting future direction to explore, e.g., by borrowing ideas from [1]. Following the reviewer's comment, we updated Algorithm 1 (PCP) in the text and added $w_i$ as an input. Furthermore, in Section 3.2 we explained that if the real weights $w_i$ are unavailable, they can be extracted from estimates of $\mathbb{P}(M=0 \mid Z=z)$ according to equation (3), and this conditional probability can be estimated from the training data using any off-the-shelf classifier. We also clarified in the updated manuscript that Theorem 1 does not hold if PCP is not used with the oracle weights. [1] Yonghoon Lee, Edgar Dobriban, and Eric Tchetgen Tchetgen. Simultaneous conformal prediction of missing outcomes with propensity score $\epsilon$-discretization. arXiv preprint arXiv:2403.04613, 2024. ## The choice of $\beta$ Thank you for bringing up this important point, which was also raised by reviewer R7TW. First, we emphasize that Theorem 1 holds for any choice of $\beta\in (0,\alpha)$. Therefore, $\beta$ only affects the sizes of the uncertainty sets. Intuitively, as $\beta \rightarrow \alpha$, a higher quantile of the weighted distribution of the scores is taken, and a lower quantile of the $Q_i$’s is taken. 
Similarly, as $\beta \rightarrow 0$, a lower quantile of the weighted distribution of the scores is taken, and a higher quantile of the $Q_i$’s is taken. An optimal $\beta$ can be considered as the one that leads to the narrowest intervals. Such an optimal $\beta$ can be computed in practice over a grid of values in $(0,\alpha)$, using a validation set. Nonetheless, to keep Algorithm 1 (PCP) simple and intuitive, we did not optimize for $\beta$ in the paper. In our experiments, we chose a $\beta$ close to 0, so that the quantile of the weighted distribution of the scores chosen by PCP is close to the quantile chosen by (the infeasible) WCP. In response to this comment, we added this discussion to the text, and we conducted an ablation study for the choice of $\beta$ on a synthetic dataset. The results of the ablation study are provided in the PDF file in the global response. This experiment indicates that the smallest intervals are achieved for $\beta$ close to 0, and that the interval sizes are an increasing function of $\beta$. Yet, it is important to understand that different results could be obtained for different datasets. Therefore, we recommend choosing $\beta$ using a validation set, as explained above. Thank you for the opportunity to discuss this issue. ## Limitations We thank the reviewer for raising this topic. We added a discussion about the limitations of PCP in Section 3.2. Specifically, we clarified that the true conditional corruption probability $\mathbb{P}(M=1 \mid Z=z)$ must be known to provide a theoretical coverage validity guarantee. We thank the reviewer for these comments, which greatly improve our paper! --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses. The argument and the numerical evaluation for the value of $\beta$ are well taken. I also encourage the revisions with respect to the values of $w_i$, as suggested by the authors. 
What is still missing for me is a principled way of estimating $w_i:=\frac{\mathbb{P}(M=0)}{\mathbb{P}\left(M=0 \mid Z=Z_i\right)}$. In the case of WCP, the estimated ratios are likelihood ratios of the two distributions, which can be effectively done with unlabeled data (at least when the covariates are low dimensional), and there is a rich literature on that matter. Here one has to estimate a specific ratio. Therefore, I would appreciate it if the authors could elaborate more on their comment "this conditional probability can be estimated from the training data, using any off-the-shelf classifier". --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their comments and for acknowledging our previous response. Below, we provide a detailed explanation of the approach we recommend for estimating $w_i$, which we also employed in our experiments. First, we estimate the conditional corruption probability given $Z$, i.e., $\mathbb{P}(M=0 \mid Z=z)$, using the training and validation sets with any off-the-shelf classifier, such as a random forest, XGBoost, or a neural network. This classifier takes the PI $Z$ as an input and outputs an estimate for the conditional corruption probability, which we denote by $\hat{p}(M=0 \mid Z=z)$. Notice that this classifier can be fit on unlabeled data, similar to the approach suggested by the reviewer. In our experiments, we primarily used neural networks, and occasionally random forests or XGBoost. The specific models used for each dataset are detailed in Table 2 of Appendix D1. Next, we estimate the marginal corruption probability directly from the data: $$\hat{p}(M=0)=\frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\\{M_i=0\\}$$ Finally, the estimated weights are computed according to equation (3), using the estimated probabilities: $$ \hat{w}_i = \hat{w}(z_i) = \frac{\hat{p}(M=0)}{\hat{p}(M=0 \mid Z=z_i)} $$ We appreciate the opportunity to elaborate on this integral aspect of our method and we will clarify this point in the revised manuscript. 
We apologize for any confusion and hope this discussion resolves the reviewer’s concerns. Please let us know if there are any questions, comments, or concerns left.
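The three-step recipe above (estimate $\mathbb{P}(M=0 \mid Z=z)$, estimate the marginal $\mathbb{P}(M=0)$, take the ratio) can be condensed into a short sketch. To keep it self-contained we treat $Z$ as discrete and use empirical group frequencies in place of the off-the-shelf classifier the authors recommend; `estimate_weights` is a hypothetical helper name, not code from the paper.

```python
import numpy as np

def estimate_weights(Z, M):
    """Estimate w_i = P(M=0) / P(M=0 | Z=z_i) from calibration data.

    Z : 1-D array of (discrete) privileged-information values.
    M : 1-D array of corruption indicators (0 = observed, 1 = corrupted).
    For discrete Z the conditional probability is estimated by empirical
    frequencies; with continuous Z one would fit a classifier instead.
    """
    Z, M = np.asarray(Z), np.asarray(M)
    p_marginal = np.mean(M == 0)                        # \hat{p}(M=0)
    weights = np.empty(len(Z))
    for z in np.unique(Z):
        mask = (Z == z)
        p_cond = np.mean(M[mask] == 0)                  # \hat{p}(M=0 | Z=z)
        weights[mask] = p_marginal / max(p_cond, 1e-6)  # guard against zero
    return weights
```

Samples whose PI value is associated with heavy corruption receive weights above one, up-weighting the part of the clean distribution that is underrepresented among the observed data.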
Rebuttal 1: Rebuttal: Dear Reviewers, Thank you for your time and effort in reviewing our submission and providing valuable feedback and suggestions. In response to the reviewers' comments, we attached to this reply a PDF file containing the results of an ablation study analyzing the effect of $\beta$ on Algorithm 1 (PCP). Once again, we thank the reviewers for helping us improve our paper! Pdf: /pdf/cffce7a7d9e06a46c79d399157f45fb20d36a522.pdf
NeurIPS_2024_submissions_huggingface
2024
HydraViT: Stacking Heads for a Scalable ViT
Accept (poster)
Summary: This study proposes a training scheme called stochastic dropout training and presents the resulting models, called HydraViT. From the observation that a smaller version of ViT can be understood as a subset of a larger version, the authors propose to sample a subnetwork of a larger ViT. Although this scheme trains the subnetwork of ViT, during inference the size of ViT can be flexibly chosen considering hardware specs. Experiments demonstrate the validity of the proposed method. Strengths: I would like to acknowledge the good motivation for choosing the subnetwork of ViT for training by slicing its dimension. Further, incorporating order importance can be useful for other purposes during training and inference. The flexible choice of dimension during inference can be useful for several practical scenarios, as described in the manuscript. Weaknesses: When we change the size of ViT, such as hidden dim, head size, and embedding dim, can we guarantee consistent behavior of ViT? In other words, if several dimensions of ViT change, ViT might exhibit inconsistent behavior. For example, the self-attention operation is defined as $\mathrm{softmax}(\frac{Q K^\top}{\sqrt{d_k}})V$ and calibrates its scale using $d_k$, the dimension of the query and key. I would like to know how the authors incorporated different dimensions into the self-attention layers. The concept of training a subnetwork might be similar to Dropout. I would say that though Dropout applies stochastic dropping without order-awareness, the stochastic dropout training proposed in this study incorporates the ordering, starting from the first elements. In this regard, the proposed method can be understood as a special case of Dropout. I am not saying that the contribution is weak due to similarity; however, I would like to know whether the authors adopted a $1/p$ scaling similar to Dropout. Specifically, the Dropout scheme drops elements during training and uses all elements during the test phase. 
To have a consistent output from Dropout, we apply $1/p$ scaling during training. For $p=0.5$, we amplify the output of Dropout by a factor of two. By adopting this practice, we obtain a consistent output scale. In consideration of this, training a subnetwork of ViT may require scale calibration similar to Dropout. Similarly, though the size of ViT can be flexibly chosen during inference, this may require scale calibration with respect to the size of ViT to have consistent behavior of ViT. Technical Quality: 2 Clarity: 3 Questions for Authors: See weaknesses. Although I would like to acknowledge the motivation and novelty of the proposed method, changing dimensions of ViT may cause inconsistent behavior of ViT in self-attention or may require scale calibration for an intermediate feature map similar to Dropout. The proposed method may seem plausible at first glance, but technically, we should be careful when changing the dimension of ViT. I would like to know whether the authors addressed these issues. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors discussed several limitations as occasion arises. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
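For concreteness, the $1/p$ scaling the review describes is what is usually called inverted dropout. Below is a minimal numpy sketch of that standard practice (our illustration only, not a claim about HydraViT's implementation; `keep_prob` is the probability that a unit survives).

```python
import numpy as np

def inverted_dropout(x, keep_prob, rng):
    """Train-time inverted dropout: zero each unit with probability
    1 - keep_prob and rescale survivors by 1/keep_prob, so that the
    expected output equals x and no test-time rescaling is needed."""
    mask = rng.random(x.shape) < keep_prob
    return x * mask / keep_prob
```

With `keep_prob = 0.5` the surviving activations are doubled, matching the review's example; at test time the function is simply not applied.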
Rebuttal 1: Rebuttal: We thank reviewer aEuL for their remarks and their thoughtful review. We answer the questions below: > When we change the size of ViT, such as hidden dim, head size, and embedding dim, can we guarantee consistent behavior of ViT? In other words, if several dimensions of ViT change, ViT might exhibit inconsistent behavior. For example, the self-attention operation is defined as $softmax(\frac{QK^\top}{\sqrt{d_k}})V$ and calibrates its scale using $d_k$, the dimension of the query and key. I would like to know how the authors incorporated different dimensions into the self-attention layers. 1- There appears to be a misunderstanding. In Vision Transformers, each attention head corresponds to a **subset** of the embedding, and the scaling factor $d_k$ relates to the embedding dimension assigned to **each head**, which is $\frac{\text{embedding dim}}{\text{number of heads}}$, and for ViT's variants this value is always 64. Unlike SortedNet, in the design of HydraViT, we adjust both the number of heads and the corresponding embedding dimensions (the numerator and denominator of the fraction), so we do not have this inconsistency. The following table details the $d_k$ for each configuration of HydraViT:

|Model|Embedding Dim|#Heads|Dim per Head ($d_k$) for K, Q, and V|
|:---:|:---:|:---:|:---:|
|HydraViT-12H|768|12|64|
|HydraViT-11H|704|11|64|
|HydraViT-10H|640|10|64|
|HydraViT-9H|576|9|64|
|HydraViT-8H|512|8|64|
|HydraViT-7H|448|7|64|
|HydraViT-6H|384|6|64|
|HydraViT-5H|320|5|64|
|HydraViT-4H|256|4|64|
|HydraViT-3H|192|3|64|

2- In Figure 7, we illustrated the t-SNE representation of the last layer of HydraViT with different numbers of heads. As we can see, by adding more heads, the sparsity of the output of the last layer becomes more compact and forms a subset of the previous layer's output, demonstrating consistent behavior. This ensures that the model's behavior remains stable when varying the dimensions. 
3- Additionally, by closely examining the logits for each class reported in Figure 8, we observe that, for example, in Figure 8.b, which represents a relatively challenging image, the differences in the logits—i.e., the confidence of the classification—gracefully increase. This further indicates that the embeddings do not contradict each other and behave consistently across varying dimensions. > The concept of training subnetworks might be similar to Dropout. I would say that though Dropout applies stochastic dropping without order-awareness, the stochastic dropout training proposed in this study incorporates the ordering, starting from the first elements. In this regard, the proposed method can be understood as a special case of Dropout. I am not saying that the contribution is weak due to similarity Thank you for your comment. Most current dynamic models typically use either pruning [1] methods to remove redundant weights from the model or routing [2] methods to identify the best possible subnetwork for an input. However, these approaches introduce their own complexities, such as additional parameters or GMACs, to find the best parts. Although these methods perform very well, we asked ourselves the following question: instead of identifying these important parts, can we train the model in a way that **orders** the information based on its importance, so we don't have to look for it for each image? If dropout is a well-established technique for enhancing generalization, can a **biased** version of dropout be used to organize information in a structured manner? Our results show that, despite the simplicity of the dropout idea, it can achieve superior performance. To enhance understanding and provide clearer insight into our motivation, we will clarify this design intuition in the revised paper. > I would like to know whether the authors adopted a $1/p$ scaling similar to Dropout. 
Specifically, the Dropout scheme drops elements during training and uses all elements during the test phase. To have consistent output from Dropout, we apply $1/p$ scaling during training. For $p=0.5$, we amplify the output of Dropout by a factor of two. By adopting this practice, we obtain a consistent output scale. In consideration of this, the training subnetwork of ViT may require scale calibration similar to Dropout. Similarly, though the size of ViT can be flexibly chosen during inference, this may require scale calibration with respect to the size of ViT to have consistent behavior of ViT. This does not apply to HydraViT. In normal Dropout, the neurons dropped during training are used during inference, necessitating $1/p$ scaling to maintain consistent output scales. In HydraViT, however, the training and inference phases are **identical**, which means *neurons are being dropped during both training and inference*. For example, if $k$ heads are dropped during training, the model is evaluated with the same $k$ dropped heads. Therefore, there is no need for such scaling in HydraViT. We will add this clarification to the design discussion for better clarity. > Questions: We have addressed this in the preceding sections, and we hope our clarifications provide a clearer understanding of HydraViT. ------------ References: [1]: DynamicViT: Efficient Vision Transformers and CNNs with Dynamic Spatial Sparsification, NeurIPS 2021 [2]: Flextron: Many-in-One Flexible Large Language Model, ICML 2024 --- Rebuttal Comment 1.1: Title: Thank you for the response. Comment: Thank you for your detailed response and clarification. Now I understand that the proposed method is free of the inconsistency issues I mentioned, such as $d_k$. I raised my rating (4->6). I hope that the authors add these clarifications in the next version of this manuscript. Good luck!
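To make the scaling point settled in this thread concrete, here is a minimal, editor-added sketch (not the authors' code; it only assumes the ViT convention of 64 dims per head described above). Inverted Dropout rescales by $1/p$ because training and test use different subnetworks, whereas slicing the same leading heads at both train and inference time needs no rescaling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard inverted Dropout: units are dropped at training time only, so
# surviving activations are rescaled by 1/p to match the test-time scale.
p = 0.5                              # keep probability
x = np.ones(100_000)
mask = rng.random(x.shape) < p
train_out = x * mask / p             # E[train_out] matches test_out
test_out = x

# HydraViT-style slicing (as described in the rebuttal): the SAME leading
# slice of the embedding is used during training and inference, so the two
# phases are identical and no 1/p rescaling is needed.
head_dim, k = 64, 3                  # keep the first 3 of 12 heads
emb = rng.normal(size=12 * head_dim)
sub_train = emb[: k * head_dim]
sub_infer = emb[: k * head_dim]      # identical subnetwork at inference
```

The empirical mean of `train_out` matches `test_out` only in expectation, which is exactly why Dropout needs the rescaling; the sliced subnetwork matches exactly.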
Summary: Building on the concept of Matryoshka Representation Learning, the authors propose a novel variant of Vision Transformers (ViT) that improves quality-speed trade-offs. Specifically, the proposed ViT model trains on subsets of embedding dimensions and associated attention heads, which are ordered by importance. In each training iteration a subset of dimensions is trained. The authors demonstrate the improved performance in a series of qualitative and quantitative experiments. Strengths: The paper addresses the efficiency of Vision Transformers. With the increasing size of current models and the ongoing trend that larger models perform better, it is of great importance to the research community to improve the quality / size trade-off. I want to particularly highlight the excellent presentation of the paper. The contributions are precisely defined and very clearly communicated and explained. Related to the point above, the paper sports a good discussion of related work and clearly defines the relative positioning of the paper at hand to related work. Another key strength of the paper is the experimental section. The paper includes both good quantitative and qualitative evaluations. Weaknesses: This paper does not have many weaknesses. If I had to pick anything, the qualitative gain over the baselines is relatively limited. I can imagine that making stochastic training perform efficiently on modern hardware is challenging. It would be great to see an extensive discussion about how to efficiently implement such stochastic training. Technical Quality: 4 Clarity: 4 Questions for Authors: As mentioned in the weaknesses section, I am interested in how the stochastic training impacts training time and hardware utilization. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors do not address limitations of the paper. I do not see key missing limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer 1s2T for their remarks and their thoughtful review. We answer the questions below: > It would be great to see an extensive discussion about how to efficiently implement such stochastic training **Rebuttal**: This is a very interesting question. In HydraViT, we have modified all components of the ViT architecture to accept an additional **cropping** input variable. This modification enables us to selectively crop out the weights and gradients for the forward and backward passes. Consequently, if during training the random number generator produces a value of $k$, the forward/backward pass for a batch in HydraViT with $k$ heads is equivalent to the forward/backward pass of a batch in a separate model with $k$ heads. Therefore, the average training time for HydraViT with $i$ to $j$ heads is roughly as follows: \begin{equation} \text{Average Training Time} = \frac{\sum_{k=i}^{j} T_k}{j - i + 1} \end{equation} where $T_k$ represents the training time for a ViT model with $k$ heads. For HydraViT with 3-12 heads, this average training time would be comparable to that of a separate model with 8 heads. We will add this to the appendix of our paper for more clarity. > Limitations: The authors do not address limitations of the paper. I do not see key missing limitations. We appreciate your feedback. As mentioned in the checklist, we have discussed limitations throughout the paper in relation to each result. We will incorporate a separate section in the final part of the paper to summarize all of the limitations for more clarity and transparency; please see the general rebuttal. --- Rebuttal Comment 1.1: Title: Final Review Comment: I would like to thank the authors for their exhaustive response to both my questions and the questions by the fellow reviewers. I keep my positive score and remain a proponent of this paper.
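The average-training-time formula in the rebuttal above can be sanity-checked with a short editor-added sketch; the linear per-epoch cost $T_k = k$ is a toy assumption for illustration, since real costs depend on hardware and implementation:

```python
def avg_training_time(T, i, j):
    """Average per-epoch cost when the head count k is drawn uniformly
    from {i, ..., j}, i.e. sum(T_k for k in i..j) / (j - i + 1)."""
    return sum(T[k] for k in range(i, j + 1)) / (j - i + 1)

# Toy linear cost model: training a k-head model costs k units per epoch.
T = {k: float(k) for k in range(3, 13)}
avg = avg_training_time(T, 3, 12)   # midway between the 3- and 12-head costs
```

Under this toy model the average lands between the 7- and 8-head costs, consistent with the claim that HydraViT with 3-12 heads trains at roughly the cost of an 8-head model.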
Summary: ViTs that allow users to dynamically select the amount of RAM consumed or latency at deployment are valuable. This paper proposes HydraViT, a training procedure that enables plain ViTs to perform relatively well when attention heads and embedding dimensions are dropped on demand. During training, HydraViT samples the number of attention heads ($H$), selects the corresponding embedding dimension ($H \times 64$), extracts the subnetwork, and updates the subnetwork's parameters. HydraViT achieves a better trade-off between compute and performance than prior work. Strengths: 1. The paper is well motivated, well presented, and clear. Resource flexibility is often a missing feature of pretrained models and is an important topic of study. 2. The experimental conditions are strong, i.e., the DeiT procedure is used, which is a commonly used recipe for training ViTs from scratch on ImageNet. DeiT III is a superior recipe; however, I don’t expect an improved training recipe will change the conclusions of this paper. 3. HydraViT significantly outperforms SortedNet, especially at higher throughputs. And it offers far more flexibility than MatFormer. 4. More heads leading to more compact representations is an interesting finding. Weaknesses: 1. The only reported results are on the ImageNet-1k validation set. There are no segmentation or object detection experiments, or other ImageNet-1k test sets — which are easy to implement and would be a valuable addition to the paper. 2. To me, HydraViT is a straightforward combination of SortedNet (which adjusts embedding dimension) and DynaBERT (which adjusts the number of attention heads and MLP size). 3. Missing comparison to DynaBERT trained from scratch (Table 3). Considering DynaBERT is 1 of 3 baseline methods, I expect a direct comparison to it when trained from scratch. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Some experiments initialize models from DeiT-tiny, which has dim=192. 
How are the weights of the full network, i.e., ViT-B, initialized in this case? Why not initialize from DeiT-B? 2. If one wishes to train 3 models — say ViT-Tiny, ViT-Small, and ViT-Base — do you advise training these three models separately or training a HydraViT with 3 sub-networks of the same size as the independent models? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: I don't believe limitations are adequately addressed. Potential limitations include: lack of scaling to larger models and datasets beyond ImageNet-1k. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer Yk3V for their remarks and their thoughtful review. We answer the questions below: > Evaluation on Additional Datasets? Thank you for your suggestion. We have evaluated our model and baselines on 5 ImageNet variants. Please see the general rebuttal (G.1). > Comparison with SortedNet and DynaBERT? We apologize for any confusion caused by the original explanation. To clarify these distinctions, we will revise our explanation and highlight the following differences. While HydraViT, DynaBERT, and SortedNet all address scalability, the approaches differ significantly: 1- DynaBERT is mainly a gradient-based method that determines the importance score of each attention head by evaluating variations in the loss function when a head is removed. 2- Due to its design, DynaBERT requires Knowledge Distillation (KD) in all layers, resulting in significant training overhead. 3- Since DynaBERT uses KD for all blocks, it must maintain the same dimensionality as the original (teacher) model for distillation. This limits its scalability to the number of heads and specific parts of the MLP layers. In contrast, HydraViT does not rely on gradients to find the important attention heads. Instead, it trains the model such that heads are inherently ordered by importance. Additionally, HydraViT does not use KD, allowing scalability across all blocks, including MLPs, CNNs, normalization, MHA, and matrix multiplications. Furthermore, DynaBERT focuses on Transformers for NLP tasks, whereas our approach is for ViTs. SortedNet adjusts the embedding dimension but keeps the number of heads in MHA fixed at 12. This can result in inconsistent behavior, as noted by reviewer “aEuL.” In transformer blocks, each attention head corresponds to a subset of the embedding. For example, in the ViT, each head has a dimension of 64, so embeddings with dimensions [0:64] always correspond to the first head, [64:128] to the second head, and so on. 
In HydraViT, when we change the embedding dimension, we also distribute the embedding to the heads in a way that ensures [0:64] is always assigned to the first head, [64:128] to the second head, and so on. This coupling of heads with the embedding dimension maintains consistency. In contrast, SortedNet, by always having 12 heads, results in varying embedding dimensions for each head when the embedding dimension is changed, leading to inconsistencies such as different scaling and therefore lower accuracy. Additionally, SortedNet primarily evaluates with CNNs on CIFAR-10 and Transformers on NLP tasks. > Results of DynaBERT from scratch are missing? DynaBERT uses KD for all its blocks. However, for fairness, we trained HydraViT and all baselines without KD. We attempted to train DynaBERT from scratch without KD, but the accuracy was nearly zero. We attribute this to its design, which necessitates guidance like KD or a proper initialization for effective training. We will add this to the table for clarity. > Initialization of Weights from DeiT-Tiny? To initialize the model with DeiT-T (3 heads), we load its weights into the corresponding positions, i.e., the first 3 heads within the HydraViT (12 heads) skeleton, and the remaining 9 heads are initialized with random weights. During stochastic training, when a subnetwork with 3 heads is extracted, it benefits from pre-trained weights, maintaining a starting performance equivalent to the pre-trained DeiT-T (72.2% accuracy). > Why not initialize from DeiT-B? This is a very interesting question. In HydraViT, the attention heads are treated as “stacks”, with each layer of heads built on top of the previous ones. The smallest submodel, with 3 heads, is equivalent to DeiT-T. Initializing with DeiT-T positions this 3-head submodel at its optimal starting point and ensures that larger submodels, which also contain these 3 heads, also start from a good local optimum. 
By employing the stochastic dropout training introduced in HydraViT, additional heads (4th, 5th, etc.) are trained iteratively on top of the initial three, creating a layered "stack" of attention heads. DeiT-B initialization benefits HydraViT with 12 heads but doesn't effectively leverage smaller submodels with fewer heads, as these 12 heads are not always used with smaller submodels. Nevertheless, in the initial steps of our research, we trained HydraViT with 3, 6, and 12 heads using both DeiT-T and DeiT-B for initialization. These were the results after 400 epochs of training (refer to Table 2):

|Initialization| 3H | 6H | 12H |
|:-:|:-:|:-:|:-:|
| DeiT-B |70.83|79.62|81.84|
| DeiT-T |73.01|79.82|80.95|

As the table shows, although the model initialized with DeiT-B has higher accuracy with 12 heads, the average accuracy of submodels initialized with DeiT-T is higher, which supports our explanation. > Should one train ViT-T, ViT-S, and ViT-B separately or use HydraViT with three sub-networks? This very much depends on your use case. We argue that HydraViT is a competitive alternative that can be used instead of separately trained models. But ultimately, with a unified model you will lose out on a bit of accuracy, as you cannot specialize for each head configuration as you can with separately trained models. Since our evaluation primarily focuses on accuracy, we recommend HydraViT if accuracy is your main concern. However, there might be side effects that we have not yet investigated. For example, if your use case involves attention maps or embeddings for specific purposes, such as in VLMs, where two models are coupled together, HydraViT's behavior could differ from that of separately trained models. Additionally, HydraViT simplifies deployment by requiring only one model, which reduces memory usage and loading delays compared to using multiple separately trained models. > Lack of limitations? Please see the general rebuttal (G.4). 
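The initialization scheme described earlier in this rebuttal (DeiT-T weights placed into the leading 3-head block of the HydraViT skeleton, the rest random) can be sketched as follows; this is an editor-added illustration with a random stand-in for the pretrained weights, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
head_dim = 64
dim_tiny, dim_base = 3 * head_dim, 12 * head_dim   # 192 and 768

# Hypothetical stand-in for one pre-trained DeiT-T projection matrix.
w_tiny = rng.normal(size=(dim_tiny, dim_tiny))

# HydraViT skeleton: DeiT-T weights fill the leading 3-head block, and the
# remaining entries are randomly initialized, as the rebuttal describes.
w_hydra = rng.normal(size=(dim_base, dim_base))
w_hydra[:dim_tiny, :dim_tiny] = w_tiny

# Extracting the 3-head submodel recovers the pre-trained weights exactly,
# so that submodel starts at DeiT-T's performance.
sub = w_hydra[:dim_tiny, :dim_tiny]
```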
--- Rebuttal Comment 1.1: Title: Final Review Comment: I thank the authors for their rebuttal. I am very familiar with SortedNet but not DynaBERT, so I appreciate the explanation. I also appreciate the added evaluations, I believe it strengthens the paper. Overall, I think the core contribution of this paper is simple — it splits the model along the embedding dimension, reducing the number of heads. Although this can be seen as a "straightforward" innovation over SortedNet, I like it and believe it could be very useful to the community. I will increase my score from 5 to 6.
Summary: The paper proposes a new training scheme for ViTs to enable creating models with varying sizes (in terms of the number of attention heads in multi-head attention layers). The full version of the model can then be used when more computational resources are available, and smaller portions of the model (with fewer attention heads, sorted by importance) can be used when less computational power is available. The authors perform experiments on ImageNet-1k and compare the results with DeiT and some techniques to obtain more ‘dynamic’ models. Strengths: - The idea is interesting, simple, straightforward in interpretation, and I believe that it could be useful in some practical applications. - It is a nice direction to treat deep neural networks in a more ‘modular’ manner. - The methodology is clearly described, and the authors provide many helpful visualizations. Weaknesses: - The paper puts a lot of stress on the suitability of different subnetworks in their approach and its adaptability for different hardware platforms and mobile devices (e.g. “HydraViT achieves adaptability across a wide spectrum of hardware environments while maintaining performance.”), however they do not test their approach on different hardware platforms (they test their approach on an NVIDIA A100 80GB, for which the hard constraints are not valid at all). - The authors do not focus on the memory aspect in the part “during inference, HydraViT can dynamically select the number of heads based on the hardware demands” - while in terms of computation, dynamic reduction of the size can help, I assume that the parameters of the ‘dropped’ heads are still stored on a given hardware platform (and therefore take up disk space). Also, what about the delays caused by dynamic changes of the model architecture in the given example (video streams)? It would be nice to provide some numbers on how much time it takes to rebuild the model. 
- Although the authors state in the checklist that “Error bars are not reported because it would be too computationally expensive”, it should be stated as a limitation of the experiments, because it decreases the chance to fairly compare the algorithms. At least some smaller portions of the results could be given with error bars to support the findings and increase the soundness. - The authors do not focus on the fact that although DeiT-T is an equivalent of the 5.7M-parameter HydraViT (which they state), when trained from scratch, HydraViT obtains much lower accuracy (more than 3 p.p. less) compared to the case in which DeiT’s weights are used as a base. Also, it is difficult to fairly compare some of the results as the setups are not equivalent. - The authors motivate their work by stating that their approach is the answer to building models with different sizes for the same model family: “Despite not having a significant accuracy difference, each of these models needs to be individually trained”. In fact, smaller and larger models do exhibit large accuracy differences, which can even be seen in Tab. 2 in this paper (DeiT-T achieves 72.2 accuracy compared to 81.8 for DeiT base). Technical Quality: 3 Clarity: 3 Questions for Authors: - “We also show that training HydraViT for more epochs can further improve accuracy.” - isn’t it caused, among other things, by the fact that not all images are seen by all attention maps during each epoch (as a result of subnetwork sampling)? - What possible impacts can the proposed sampling technique have on the generalization of the minimum and larger subnetworks? The first part is always used in the subnetwork sampling procedure, and the last attention heads are trained the least often. These aspects should be discussed more thoroughly. - Tab. 3: What is the reason for the large difference between a network trained from scratch and initialized with DeiT-T for the smallest model? 
They should be equivalent, as the first 3 attention maps are used in each training step. - Why are the results for HydraViT trained from scratch not given in Tab. 3? - How much time does it take to rebuild the model? Is it suitable for the example application mentioned in the paper - video streams? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations could be better discussed. While the authors state in the checklist that "while we did not include a separate section, we do discuss limitations for every result.", it would be nice to squeeze it somewhere in the last section to make it more visible and transparent. I mention some example limitations below: - E.g. not having the error bars and comparing the performance of the models (especially those trained from scratch) based on single runs can be perceived as a limitation. Although the authors state in the checklist (which is a plus) that “Error bars are not reported because it would be too computationally expensive”, it would be nice to state it as a limitation. - Also, the authors do not discuss what possible impacts significantly increasing the number of epochs can have on the learning/generalization/overfitting of the minimum/maximum subnetworks. - The authors do not discuss the limitations of applying the dynamic model rebuilding in practice. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer 2Apv for their remarks and their thoughtful review. We answer the questions below: > Q1. The paper emphasizes the adaptability of different subnetworks for various hardware platforms but only tests on an A100, where constraints are not applicable. A1. We acknowledge this oversight and will revise the text to manage expectations more appropriately. Our main focus with HydraViT is on efficiency and scalability on the same device, rather than explicitly fitting on smaller hardware. However, metrics such as GMACs and params are consistent across different platforms. Additionally, the skeleton of HydraViT is identical to DeiT, and other researchers have evaluated the latency and performance metrics of DeiT on various devices. For instance, FastViT [1] evaluates DeiT on the iPhone 12 Pro, MobileViT [2] on the iPhone 12, SPViT [3] on the ZCU102 FPGA and Galaxy S20, and GhostNetV3 [4] on the Huawei Mate 40 Pro. These studies provide insight into the expected performance and latency of HydraViT on different hardware, indirectly supporting our claims about HydraViT's adaptability. Please see the general rebuttal (G.4). > Q2: [....] I assume that the parameters of the ‘dropped’ heads are still stored on a given hardware platform. A2. Based on hardware demands, only a subset of the network is loaded into RAM for inference. Alternatively, the entire model can be loaded into RAM with only a portion of the weights utilized according to the specific demands. We will clarify this in the paper. Additionally, as discussed in response to Q3, the loading latency should be low enough for most use cases. > Q3: The paper does not address delays caused by dynamic model changes. A3. We will add a table in the appendix of the paper reporting loading latency on the A100. However, loading latency also depends on PCI latency and other hardware factors, not just the GPU. 
| Model | Latency |
|:-:|:-:|
| HydraViT-12H | 138.6 ms ± 5.8 |
| HydraViT-6H | 133.3 ms ± 6.1 |
| HydraViT-3H | 74.1 ms ± 1.4 |

As you can see, the loading latency is very low and should be sufficiently fast for most use cases. > Q4: Lack of error bars A4: Error bars are often omitted due to computational costs. We provide code, initialization details, and weights to ensure reproducibility. Also, most papers in the field follow this practice. However, for some settings, we conducted multiple training runs and observed no noticeable differences. Please see the general rebuttal (G.4). > Q5: The authors do not focus on the fact that HydraViT with 5.7M parameters is 3 p.p. worse when trained from scratch compared to using DeiT’s weights as initialization A5. Thank you for pointing this out. In HydraViT, the attention heads are treated as “stacks”, with each layer of heads built on top of the previous ones. By initializing HydraViT with DeiT-T, we essentially position the submodel with three heads at its “optimum position” in the loss space, i.e., 72.2% accuracy. Therefore, during training, “HydraViT from scratch” needs to move the accuracy of the submodel with three heads from 0%, while the “HydraViT initialized” starts at 72.2% and can leverage this to improve accuracy further. We will adjust our evaluation and focus on this difference more. > Q6: The paper claims minimal accuracy differences across model sizes, but there is a gap between smaller and larger models (DeiT-T and DeiT-B). A6. Thank you for pointing this out. We acknowledge this for DeiT-T and DeiT-B. However, DeiT-S and DeiT-B show only a 2% accuracy difference despite DeiT-B having 4x more parameters. We will clarify this in the manuscript to ensure a clear understanding. > Q7. The paper states that training HydraViT for more epochs improves accuracy. Could this be due to subnetwork sampling causing not all images to be seen by all attention maps A7. Yes, this is a valid observation. 
However, it is important to note that due to subnetwork sampling, the **training computational complexity** (number of parameters to be optimized) of HydraViT with 3-12 heads for one epoch is significantly lower than that of DeiT-B with 12 heads; see the general rebuttal. In addition to the mentioned reason, we believe the main factor is that HydraViT optimizes 10 different loss functions simultaneously, which adds complexity to the optimization process and, as a result, requires more training. We will add these observations to the appendix of the paper. > Q8. The effect of the proposed sampling training on generalization? A8. We appreciate you bringing this to our attention. This aspect was missing from our evaluation, and we will address it in our revised manuscript. Please see the general rebuttal for more details (G2: Generalization of Submodels). > Q9. Accuracy difference between HydraViT scratch and HydraViT initialized with DeiT-T in Tab. 3? A9. Please see A5. > Q10. Why are the results for HydraViT trained from scratch not given in Tab. 3? A10. Thank you for pointing this out. We initially wanted to focus on showcasing HydraViT's maximum potential by training it to 800 epochs, assuming that from-scratch results would not significantly impact the evaluation. Nonetheless, after receiving the reviews, we initiated the from-scratch training, which takes approximately 10 days (initiated on Thursday). Once this process is complete, we will update Table 3 and provide a comment with the new results. > Q11: Model rebuilding latency? A11. Please see A3. > Limitations The comments raised by the reviewer here were already pointed out by them in the weaknesses section, so we will not repeat them. Also please refer to the general rebuttal for more details. 
--- [1]: FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization, CVPR 2023 [2]: Separable Self-attention for Mobile Vision Transformers, TMLR 2023 [3]: SPViT: Enabling Faster Vision Transformers via Latency-aware Soft Token Pruning, ECCV 2022 [4]: GhostNetV3: Exploring the Training Strategies for Compact Models --- Rebuttal Comment 1.1: Title: New results: HydraViT from scratch 800 epochs Comment: **UPDATE.** As promised, we have completed the from-scratch training for HydraViT-800e. The results are as follows and will be added to Tables 3 and 4 in the revised version of the paper. The results for HydraViT-800e initialized with DeiT-T have already been reported in Table 3 of the paper.

| Model | Accuracy from scratch |
|:-:|:-:|
| HydraViT-3H | **68.78** |
| HydraViT-4H | 74.83 |
| HydraViT-5H | 78.07 |
| HydraViT-6H | **79.84** |
| HydraViT-7H | 80.98 |
| HydraViT-8H | 81.54 |
| HydraViT-9H | 81.73 |
| HydraViT-10H | 81.84 |
| HydraViT-11H | 81.90 |
| HydraViT-12H | **81.93** |

When HydraViT was trained for 300 epochs, none of its submodels reached their best local optima. Therefore, incorporating a pre-trained DeiT-T model during this phase enhanced accuracy across all submodels. However, with 800 epochs of training, the submodels had sufficient time to move toward their best possible local optima. In this extended training scenario, adding a pre-trained DeiT-T model provided initial guidance for smaller submodels, such as HydraViT-3H, which improved their accuracy. This initialization, however, resulted in a slight decrease in accuracy for larger submodels, such as HydraViT-12H, favoring smaller submodels (see Table 3). --- Rebuttal Comment 1.2: Comment: Thank you for your very thorough response to my comments. I hope that the authors will add these additional results and discussions to their manuscript. As the authors have done a lot of work to improve the paper, I am changing my rating from 6 to 7, good luck!
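The update-frequency asymmetry discussed in this thread (the first heads are updated on every batch, the last heads least often) can be illustrated with a small editor-added simulation; the step count is arbitrary and the uniform sampling over 3-12 heads follows the training scheme described in the rebuttals:

```python
import numpy as np

rng = np.random.default_rng(0)
head_dim, min_heads, max_heads = 64, 3, 12

# Count how often each embedding coordinate is updated under stochastic
# dropout training: each step samples k uniformly from {3, ..., 12} and
# updates only the first k*64 coordinates of the full 12-head embedding.
steps = 10_000
update_counts = np.zeros(max_heads * head_dim)
for _ in range(steps):
    k = rng.integers(min_heads, max_heads + 1)  # upper bound is exclusive
    update_counts[: k * head_dim] += 1

# The first 3 heads are touched on every step, while the 12th head is only
# updated when k == 12 (about 1 step in 10), consistent with the reviewer's
# observation that the last attention heads are trained the least often.
```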
Rebuttal 1: Rebuttal: We would like to extend our sincere thanks to all the reviewers for their time and valuable feedback. Your insights help us improve the clarity and quality of our work. In the following sections, we address the key points raised by the reviewers and provide clarifications and additional details as requested. **G1.** HydraViT and Baselines on New Datasets: As suggested, we have run new experiments to conduct an extended, comprehensive evaluation of HydraViT and its baselines across five new datasets: ImageNet-Real, ImageNet-V2, ImageNet-R, ImageNet-A, and ImageNet-Sketch. The comparisons are based on GMACs vs. Accuracy and Throughput vs. Accuracy across these datasets. Our findings indicate that, with the exception of HydraViT (9-12 heads) on ImageNet-R, HydraViT consistently outperforms its baselines on these strongly augmented datasets. The detailed results are visualized in the PDF attached to this comment and will be added to the paper for further reference. **G2.** Generalization of Submodels: Our training logs indicate that the stochastic training method used in HydraViT does not lead to overfitting across subnetworks. Instead, it effectively reduces the loss for all submodels simultaneously. We have included a plot of validation loss for all subnetworks of HydraViT-800e in the attached PDF, which demonstrates that stochastic training minimizes loss uniformly across all networks. This plot and discussion will be added to the paper. **G3.** Training Overhead of HydraViT: The computational load for training HydraViT with 12 heads for one epoch is significantly lower than that for training DeiT-B (12 heads). Specifically, HydraViT uses a uniform random number generator to select a subnetwork with 3 to 12 heads for each batch, updating “only” those weights. 
Consequently, the overhead of training HydraViT with a variable number of heads (3-12) is roughly comparable to training DeiT with 8 heads, which is much lower than the overhead for DeiT-B with 12 heads. For a more detailed comparison, please refer to our response to Reviewer "1s2T".

**G4.** Limitations: In response to the suggestions regarding addressing limitations, we will include a dedicated section in the revised paper to discuss the limitations of HydraViT:

- Training and Accuracy: HydraViT optimizes 10 loss functions simultaneously, which increases the computational load on the optimization process. As a result, it requires more training iterations to achieve accuracy comparable to that of individually trained models such as DeiT-T, DeiT-S, and DeiT-B. However, by training multiple models within a unified framework, HydraViT ultimately requires much less total training time than training each of these 10 models for 300 epochs individually (a total of $10 \times 300 = 3000$ epochs).
- Evaluation on Different Models: While HydraViT has been evaluated on DeiT-T, DeiT-S, and DeiT-B configurations, which have the same number of layers, it has not yet been applied to larger models like DeiT-L with more layers. We plan to explore this in future work.
- Efficiency Across Different Hardware: The main focus of HydraViT is efficiency on the same device, rather than evaluating HydraViT across different hardware configurations. Although we assessed our model from various perspectives, such as GMACs, RAM usage, number of parameters, and throughput on an A100, we did not explicitly measure latency on heterogeneous devices. However, since HydraViT's skeleton is exactly like that of the general DeiTs, other existing literature [1,2,3,4] has already evaluated the latency and performance metrics of DeiT on various devices, which indirectly supports our claims about HydraViT's adaptability.
- Training Overhead and Reproducibility: Due to the training overhead, and since it is not common practice in the related community, we did not report error bars for our experiments. However, to ensure reproducibility, we have provided the code, weights, and initialization details.

We appreciate the reviewers’ feedback and will update the manuscript to address these points thoroughly. [1]: FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization, CVPR2023 [2]: Separable Self-attention for Mobile Vision Transformers, TMLR 2023 [3]: SPViT: Enabling Faster Vision Transformers via Latency-aware Soft Token Pruning, ECCV2022 [4]: GhostNetV3: Exploring the Training Strategies for Compact Model Pdf: /pdf/63349612e47a37ce68559be2292cedbdb1588945.pdf
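The per-batch submodel sampling described in G3 can be sketched as follows. This is a minimal, hypothetical illustration with a toy stand-in model (`DummyHydra`, `stochastic_training`, and `train_step` are our own names, not the authors' code); the real HydraViT forwards and backwards through the sampled transformer submodel and updates only its weights.

```python
import random

class DummyHydra:
    """Toy stand-in for a weight-shared model; it only records which
    head counts were trained (hypothetical, for illustration)."""
    def __init__(self):
        self.trained_head_counts = []

    def train_step(self, batch, n_heads):
        # in the real model: forward/backward through the n_heads submodel
        self.trained_head_counts.append(n_heads)

def stochastic_training(model, batches, min_heads=3, max_heads=12, seed=0):
    """Sketch of the G3 scheme: one uniformly sampled submodel per batch."""
    rng = random.Random(seed)
    for batch in batches:
        n_heads = rng.randint(min_heads, max_heads)  # uniform over 3..12
        model.train_step(batch, n_heads)
    return model

model = stochastic_training(DummyHydra(), range(1000))
# the expected head count per batch is (3 + 12) / 2 = 7.5, which is why
# the per-epoch cost is roughly that of an 8-head DeiT
assert 3 <= min(model.trained_head_counts) and max(model.trained_head_counts) <= 12
```

The uniform sampling is what makes the average training cost land near the midpoint of the head range, consistent with the "roughly comparable to DeiT with 8 heads" claim above.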
NeurIPS_2024_submissions_huggingface
2024
A generalized neural tangent kernel for surrogate gradient learning
Accept (spotlight)
Summary: The paper addresses the challenge of applying gradient-based training methods to neural networks with non-differentiable activation functions, such as binary and spiking neural networks. These networks use surrogate derivatives to enable gradient descent, but this approach lacks theoretical foundation. The authors propose a generalization of the NTK, called the surrogate gradient NTK (SG-NTK), to provide a rigorous theoretical basis for SGL. Strengths: 1. Novel extension of the NTK to non-differentiable activation functions via surrogate gradients, building on the results in [1,2,3,4]. 2. Strong empirical validation through numerical experiments. [1] Gaussian process behavior in wide deep neural networks [2] Neural tangent kernel: Convergence and generalization in neural networks. [3] Wide and deep neural networks achieve consistency for classification [4] Wide neural networks of any depth evolve as linear models under gradient descent Weaknesses: 1. The complexity of the theoretical framework may limit accessibility for practitioners. 2. Additional experiments on real datasets (MNIST, CIFAR10, ImageNet) could further validate the approach. Technical Quality: 3 Clarity: 3 Questions for Authors: What is the application of the results? The ReLU activation function is commonly used, and I believe it can be handled by previous works. Are there any commonly used activation functions that can be handled in your paper but not in previous works? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The results in this paper are only applicable to sufficiently wide networks with random initialization. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and for your helpful suggestions. We appreciate the focus on the applicability of our work.

- While the mathematical details behind the NTK theory are generally complex and not particularly accessible to practitioners, the analytic NTK can be easily calculated for different data sets, network depths and activation functions, e.g., using the Neural Tangents package (see citations in line 843). The analytic NTK then shows what the class of functions learned with gradient descent looks like, without actually having to train a network. As we write in lines 22 to 23, this is possible whenever the activation function is differentiable or at least semi-differentiable. This also includes the case of the ReLU activation function. However, whenever gradient descent is not applicable, we cannot define the NTK anymore. This includes binary neural networks, time-discrete spiking neural networks and any activation function with jumps. Surrogate gradient learning is then commonly used by practitioners, see lines 102 to 120. Surrogate gradient learning introduces the surrogate derivative, which is an additional unknown (besides network depth and activation function) that changes the class of functions learned with SGL. So far, there is no theoretical tool like the SG-NTK to systematically choose the surrogate derivative and the network depth. In particular, the SG-NTK yields a prediction for the class of functions that are learned with SGL.
- We agree that our approach would benefit from additional experiments. Moreover, as we write in lines 338 to 339, "[a more] rigorous analysis should be carried out on how the connection between SGL and the SG-NTK carries over to activation functions with jumps, as shown by our simulations".
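Surrogate gradient learning, as discussed in this exchange, pairs a non-differentiable forward pass with a smooth derivative used only in the backward pass. A minimal numerical sketch of the sign/erf pairing considered in the paper (the function names are ours; the derivative of $\mathrm{erf}(mx)$ is $(2m/\sqrt{\pi})\,e^{-(mx)^2}$, which concentrates around zero as $m$ grows):

```python
import math

def sign_forward(x):
    # non-differentiable forward pass of the sign activation
    return -1.0 if x < 0 else 1.0

def erf_surrogate_grad(x, m=5.0):
    # surrogate backward pass: d/dx erf(m*x) = (2m/sqrt(pi)) * exp(-(m*x)**2);
    # smooth everywhere, sharply peaked at 0 for large m
    return (2.0 * m / math.sqrt(math.pi)) * math.exp(-((m * x) ** 2))
```

In SGL, `sign_forward` would be used in the forward computation and `erf_surrogate_grad` substituted wherever the chain rule calls for the (ill-defined) derivative of sign.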
Summary: The paper considers an extension of neural tangent kernel methods for the analysis of training of neural networks with non-differentiable activations via surrogate gradient learning. The basic approach is to define a generalized NTK (the SG-NTK) based on the quasi-Jacobian matrix (that is, the Jacobian constructed using the surrogate gradients) rather than the (ill-defined) Jacobian based on the actual gradients of the activations. This construct is shown to be deterministic (distributionally determined) in the infinite width limit. For the sign activation (with $\mathrm{erf}_m$ acting as surrogate gradient) both the NTK and SG-NTK are derived. Finally, some experimental results are given that show that the distribution of networks trained with the SG-NTK matches reality. Strengths: - The paper is clearly presented. - Motivation, aims etc are clear and compelling. - The paper appears to fill an important gap in existing literature (admittedly I am relatively new to this area of research so there may be predecessors I am unaware of). - The derivations given seem correct to the best of my understanding, though I did skim some of the proofs in the rather long appendix. - The experimental results would appear to confirm that the SG-NTK matches with reality. Weaknesses: - it would perhaps be more useful to present figure 1 with a log y scale as the linear scale used, combined with the divergence at $\Delta \alpha = 0$, tends to flatten all features of the NTK, masking the differences between the NTKs away from the point of divergence. - also regarding figure 1 is it feasible to plot the limiting case $m \to \infty$ with the point of divergence elided? Technical Quality: 4 Clarity: 3 Questions for Authors: See previous sections. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors have addressed limitations adequately and there appear to be no obvious negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging and positive review and for your helpful suggestions. - We have refrained from using logarithmic scales to keep the plot as simple as possible and to facilitate the comparison between Figure 1 and Figure 2. However, we agree that a logarithmic y-scale helps to illustrate the divergence at $\Delta \alpha = 0$ and the convergence at $\Delta \alpha \not = 0$ in Figure 1, and provide an updated figure with a quasi-logarithmic y-axis using the inverse hyperbolic sine (see global response PDF, Figure R2). - Yes, we agree with this suggestion and add the analytic NTK for $m \to \infty$ as in Figure 2 (see global response PDF, Figure R2). Note that this singular kernel, $\Theta_\mathrm{sign}$, can never be fully plotted with a logarithmic y-axis, since $\Theta_\mathrm{sign}(\Delta \alpha) \to \infty$ as $\Delta \alpha \to 0$. --- Rebuttal Comment 1.1: Comment: Thank you for the response, I'm happy to keep my score of 8. --- Reply to Comment 1.1.1: Comment: Thank you for your quick response, we are glad that we were able to answer your questions.
Summary: This paper explores the neural tangent kernel (NTK) with regard to surrogate gradient learning for non-differentiable activation functions. The authors show that the standard neural tangent kernel is not equipped to deal with such activation functions, which cause the kernel function to become singular. They provide a derivation of a surrogate-derivative-based NTK that can handle such activations, thus further generalizing the NTK to a larger subset of networks. Strengths: This paper provides a brand new derivation of the NTK for non-smooth ANN activations and thus is very original. The overall paper is of high quality and clearly developed. The results of this paper are of high significance and allow for new avenues for application of NTK analysis. Weaknesses: While the paper is well developed overall, there are a few spots that can benefit from additional clarity regarding notation: Line 144 (Definition 2.1) - $r_l(m)$ refers to the number of neurons for a particular layer; however, $m$ is not directly described. My understanding is that it is referring to the output size dimension, but this is not immediately clear. Line 215 - I am assuming that $\delta(z)$ refers to the Dirac delta function (delta distribution). Line 244 - Here we have $\delta_{ij}$ which I assume is *not* the Dirac delta and instead a constant for a given kernel matrix entry. Line 289 - Since we are talking about divergence, notationally it would be better to use $\Theta_{\text{erf}_m} \not\rightarrow \Theta\_{\text{sign}}$ or specify the formal definition of divergence *or* avoid it altogether. In regard to the experiments, line 303 mentions that the NTKs diverge as $m \rightarrow \infty$; however, this does not seem to be clear from the plots, especially since $m=20$ does not seem to be sufficient to show the divergence occurring.
In addition, since we are discussing divergence, I believe the paper could benefit from including the average error values between the analytic and empirical kernels, since graphs tend to do a poor job of illustrating this. Technical Quality: 4 Clarity: 3 Questions for Authors: Are $\delta(z)$ and $\delta_{ij}$ the same? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Everything is adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive and thorough review and for your helpful suggestions.

- We apologise for any confusion caused by the use of the variable $m$ in lines 133 to 137 and will change the variable. The parameter $m$ is used in Definition 2.1 to be able to consider a limit by taking $m \to \infty$. It does not refer to any dimension or size of the network itself, but rather indexes the limit. This is described directly after Definition 2.1 in lines 148 to 149: "Every element of $\mathcal{R}_L$ provides a way to take the widths of the hidden layers to infinity by setting $n_l = r_l(m)$ for any $1 \leq l < L$ and considering $m \to \infty$". Example: We have two hidden layers and hence layer widths $n_0, n_1, n_2, n_3$. Suppose that $n_0 = n_3 = 2$ are fixed, layer one grows quadratically, and layer two grows cubically. Then we have $r_1(m) = m^2$ and $r_2(m) = m^3$.
- Yes, $\delta(z)$ denotes the delta distribution. We will add this information right after the first occurrence of the delta distribution.
- Yes, $\delta_{ij}$ denotes the Kronecker delta, i.e., $\delta_{ij} = 1$ if $i=j$ and $\delta_{ij} = 0$ otherwise. To avoid confusion with the delta distribution, we will add a clarification after the first occurrence of the Kronecker delta.
- It is true that $\Theta_{\mathrm{erf}_m}$ converges to the singular kernel $\Theta_\mathrm{sign}$, $\Theta_{\mathrm{erf}_m} \to \Theta_\mathrm{sign}$, because this is how $\Theta_\mathrm{sign}$ is defined in lines 221 to 222. This means that $\Theta_{\mathrm{erf}_m}(x,y) \to \infty$ as $m\to\infty$ for $x = y$. To be more precise, we will instead write in line 289, "We numerically illustrate the divergence of the analytic NTK, $\Theta_{\mathrm{erf}_m}$, as $m \to \infty$, [...]".
- It is unclear in which way the plots do not sufficiently illustrate the divergence.
Comparing the y-axis and the size of the peak at $\Delta \alpha = 0$ for $m=1,5,20$, the effect of the divergence can be seen, as the peak grows from $<5$ for $m=2$ to $>100$ for $m=20$. Figures can never fully confirm or disprove convergences or divergences and we will weaken the formulation in lines 302 to 303, "the plots confirm that the analytic NTKs diverge". In addition, we introduce a common quasi-logarithmic y-axis using the inverse hyperbolic sine to better illustrate the convergence for all $\Delta\alpha \not= 0$ (see global response PDF, Figure R2), as suggested by Reviewer Ljsu. By comparing the analytic NTKs for finite $m$ with the singular kernel for $m \to \infty$ in Figure R2, we can indeed see that $m=20$ is large enough to illustrate the convergence to the singular kernel. Finally, we agree that average error values help to illustrate the convergence of the empirical kernels to the analytic kernels and provide them in Figure R3 and Figure R4 (see global response PDF). --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal as well as your clarifications and additions to your manuscript. I read through the other reviews and your rebuttals as well. Given the additional changes and the clarity of your presentation, I am comfortable to bump up my score to an 8. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to consider the other reviews and rebuttals. We are glad that we were able to answer your questions and we appreciate the score improvement.
Summary: The paper adapts the neural tangent kernel framework to surrogate gradient learning (and so to learning in spiking neural networks). Strengths: - The paper generalizes NTK to (some algorithms for) spiking neural networks, which is an important scenario for neuroscience and neuromorphic computing - Like the original NTK results, analyzing learning dynamics with this approach seems easy (i.e. it works like kernel regression, Eq. 7) - The paper is well-written and self-contained Overall, I think this paper is important for theoretical analysis of spiking neural networks, and contains novel results. I didn't closely follow the derivations in the appendix, but I'm familiar with the NTK literature/proofs, and the approach of this paper seems reasonable to me (+ it follows the original derivations to some extent). Weaknesses: The main weakness is the NTK approach itself: NTK is known to be a poor approximation to learning in deep networks in many cases, which limits what kind of conclusions we can make with NTK-driven theoretical analysis. However, this weakness is not specific to this paper. Technical Quality: 3 Clarity: 3 Questions for Authors: - I don’t see how definition 2.2 iii can work for strictly increasing width functions. Sequential limits mean some of the widths stay fixed, right? - Fig.1 should have the black line at the back I think, otherwise it’s not clear if the peaks always overlap Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging, thorough and detailed review and for your helpful suggestions.

- **Short answer:** Indeed, by parameterizing all hidden layer widths with the parameter $m$ and considering $m \to \infty$, we cannot cover the sequential infinite-width limit as described in [1]. This is because the parameterization requires all hidden layer widths to be finite during the entire limit procedure, which is not the case for the sequential limit. The fact that the width functions are strictly increasing does not create this problem. However, by following the inductive proof of the sequential limit, we construct a parameterized limit using the width functions (see next paragraph). A consequence of this proof technique is that we have no control over the rates at which the hidden layer widths diverge. However, this is not a problem if one wants to prove weak convergence as defined in Definition 2.2 (iii); see also Section E.1, Lemma E.1 and E.2. This is why we wrote "In practice, this means that the statement holds as $n_1,\dots,n_L \to \infty$ sequentially". We agree that this wording is not very clear and we will add a more elaborate clarification.

**Connection between the sequential infinite-width limit and our notion of weak convergence:** The simplest form of a sequential limit takes the form $\lim_{n_1 \to \infty} \left( \lim_{n_2 \to \infty} f(n_1,n_2) \right)$. Let us assume that $\lim_{n_2 \to \infty} f(n_1,n_2) = \hat{f}(n_1)$ exists for all $n_1$ and that $\lim_{n_1 \to \infty} \hat{f}(n_1) = a$ also exists. If $f$ is continuous, one can always find a parameterization $n_1(m), n_2(m)$, such that $\lim_{m \to\infty} f(n_1(m),n_2(m)) = a$. Now, the sequential limits in [1] correspond to the limit of the form $\lim_{n_1 \to \infty} \left( \lim_{n_2 \to \infty} f(n_1,n_2) \right)$. Following this analogy, we provide a way to find the parametrization $n_1(m), n_2(m)$ with Lemma E.1.
This allows us to show convergence in the weak sense as defined in Definition 2.2 (iii). **Why parameterised limits are preferable:** There are two reasons why we think that our proposed way of unifying different infinite-width limits is relevant, even though it excludes the sequential infinite-width limit. First, in practice, the hidden layer widths can never be set to infinity, as required by the sequential infinite width limit. In this sense, it is more meaningful in practice to consider the weak convergence we have introduced instead of the sequential limits. Second, the infinite-width limits considered by [2] and [3] are both parameterized and provide elaborate ways of dealing with the infinite-width limit. Our definition shows how they relate to each other. - We have added grid lines in all revised plots (see global response PDF) to better show the horizontal alignment of the peaks. In addition, the quasi-logarithmic scaling in Figure R2 (see global response PDF) improves the visibility of the vertical alignment of the peaks. The black line is not sufficiently visible if it is placed at the back of the plot. [1] Jacot, Arthur, Franck Gabriel, and Clément Hongler. "Neural tangent kernel: Convergence and generalization in neural networks." Advances in neural information processing systems 31 (2018). [2] Lee, Jaehoon, et al. "Wide neural networks of any depth evolve as linear models under gradient descent." Advances in neural information processing systems 32 (2019). [3] Matthews, Alexander G. de G., et al. "Gaussian process behaviour in wide deep neural networks." arXiv preprint arXiv:1804.11271 (2018). --- Rebuttal Comment 1.1: Comment: Thank you for the answer! This response addressed my questions, so I'm keeping the score of 8. (The only note is that $f(n_1, n_2)$ would need to have additional conditions to guarantee exchangeability of limits, but that's a small point.) --- Reply to Comment 1.1.1: Comment: Thank you for your quick response. 
We agree that the continuity of $f$ does not guarantee the exchangeability of limits. A small note to avoid misunderstandings: We are interested in finding a parameterization that corresponds to a particular sequential limit, not in changing the order of that sequential limit.
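The relation between sequential and parameterized limits discussed in this thread can be illustrated with a toy example (our own choice of $f$, purely illustrative and not from the paper): the sequential limit of $f(n_1, n_2) = 1/n_1 + 1/n_2$ is $0$, and a single-index parameterization with strictly increasing, always-finite widths tracks the same value.

```python
def f(n1, n2):
    # toy continuous function of two widths whose sequential limit
    # lim_{n1 -> inf} ( lim_{n2 -> inf} f(n1, n2) ) equals 0
    return 1.0 / n1 + 1.0 / n2

# a parameterization n1(m) = m, n2(m) = m**2 (both widths strictly
# increasing and finite for every m) approaches the same limit
vals = [f(m, m ** 2) for m in (10, 100, 1000)]
assert vals[0] > vals[1] > vals[2]  # monotonically shrinking toward 0
```

As the rebuttal notes, continuity alone does not let one exchange the order of the limits; the parameterization is chosen to follow one particular sequential limit.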
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their thorough reviews and helpful suggestions. We provide four additional figures in the attached PDF, where we have implemented the suggestions of Reviewer PShm and Reviewer Ljsu. The figures address the question raised by Reviewer L5ST. In Figure R1, we have added grid lines to Figure 1 to illustrate that the peaks overlap horizontally. In Figure R2, we have added grid lines and an asinh-scaling for the y-axis (approximately linear for small absolute values and logarithmic for large absolute values) to Figure 1. Moreover, we have added a plot of the singular kernel, which we obtain from the analytic NTK as $m \to \infty$. Note that this allows for a nice comparison with Figure 2, where we have plotted the non-singular kernel, which we obtain from the analytic SG-NTK as $m \to \infty$. In Figure R3 and Figure R4, we have plotted the mean squared errors between the empirical and analytic kernels for Figure 1 and Figure 2 respectively. We can see the convergence of the empirical SG-NTKs in R4, in accordance with our theoretical results. We answer the individual reviews in the order of reviewer comments. Pdf: /pdf/f9f82eb6bd5c0908db9929db8608cc2d751309ce.pdf
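The asinh scaling used in Figure R2 behaves as described (approximately linear for small absolute values and logarithmic for large ones), since $\operatorname{asinh}(y) = \ln\bigl(y + \sqrt{y^2 + 1}\bigr) \approx \ln(2y)$ for large $y$. A quick numerical check:

```python
import numpy as np

# asinh is ~linear near zero and ~logarithmic for large |y|, so it can
# show both small-scale kernel structure and a diverging peak in one plot
y = np.array([0.1, 1.0, 100.0, 10000.0])
t = np.arcsinh(y)
assert abs(t[0] - 0.1) < 0.01                  # near-linear for small y
assert abs(t[3] - np.log(2 * 10000.0)) < 1e-4  # ~ log(2y) for large y
```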
NeurIPS_2024_submissions_huggingface
2024
Towards Unified Multimodal Editing with Enhanced Knowledge Collaboration
Accept (spotlight)
Summary: The paper introduces UniKE, a novel multimodal editing method that addresses challenges in knowledge editing for Multimodal Large Language Models. UniKE unifies intrinsic knowledge editing and external knowledge resorting by vectorized key-value memories. By disentangling knowledge representations into semantic and truthfulness spaces, UniKE promotes collaboration between intrinsic and external knowledge editing, enhancing the post-edit MLLM's reliability, generality, and locality. Extensive experiments validate UniKE's effectiveness. Strengths: 1. UniKE establishes a unified framework for intrinsic knowledge editing and external knowledge resorting. 2. Extensive experiments demonstrate that UniKE consistently maintains excellent reliability, generality, and locality. Weaknesses: 1. The presentation is confusing. For example, in intrinsic knowledge editing, how do you actually edit the intrinsic knowledge in the FFN? Do you follow the same pipeline as T-Patcher, or do you add an additional neural network? Although intrinsic knowledge is considered as key-value pairs, they are different in the end. 2. The unified framework and disentanglement of knowledge representations into semantic and truthfulness spaces introduce additional complexity to the editing process. If I understand correctly from the implementation details, it requires more than 15k additional triplets to train the encoders. 3. In the experiments, I think $\beta$ is another hyperparameter, but I did not find an experiment discussing it. Technical Quality: 3 Clarity: 2 Questions for Authors: Why would you add extra 10 key-value pairs in the FFN of the last four transformer layers in intrinsic knowledge editing? It seems that $\zeta$ is quite sensitive. Could you explain that? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive comments. We will address your concerns point by point.

**Q1:** The presentation is confusing. For example, in intrinsic knowledge editing, how do you actually edit the intrinsic knowledge in the FFN?

**A1:** We apologize if our manuscript causes any confusion. **The intrinsic knowledge editing is implemented by adding 10 knowledge neurons in the FFN of the last four transformer layers** with a pipeline similar to T-Patcher's. The addition of neurons can be understood as an expansion of the dimension of the weight matrix in the FFN, as shown in Eq.3. This process retains the original $d\times d'$ weights in the FFN while appending a learnable parameter of dimensions $d\times n_e$ as the new neurons. And Eq.2 demonstrates why intrinsic knowledge can be viewed as key-value pairs in this context. Although this pipeline may seem similar to T-Patcher, **the key innovation of our paper lies in consistently transforming in-context editing to the latent level and integrating it into the self-attention**, which allows both external and intrinsic knowledge editing to **follow a unified paradigm within a cohesive framework**. Furthermore, **our intrinsic knowledge editing is actually an improvement of T-Patcher**. By synergistically unifying the paradigms of both editing methods and incorporating knowledge disentangling, we identify a truthful editing direction, $\zeta$, which **further guides intrinsic knowledge into a more generalizable direction**. To understand how $\zeta$ enhances intrinsic knowledge editing, please refer to Eq.6, which is inspired by the success of [Chen et al 2022]. [Chen et al 2022] AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition. NeurIPS 2022.

**Q2:** The unified framework and knowledge disentanglement introduce additional complexity to the editing.

**A2:** Thank you for the valuable feedback. We would like to address your concerns from two points.
**(2.1)** The unified framework and knowledge disentanglement incur almost **no additional time or GPU memory overhead** during the editing process. Both the truthfulness and semantic encoders consist of several MLP layers **with only 4M parameters** (the MLLM contains 7B parameters), which **remain frozen** during editing. As shown in **Table 10 in the Rebuttal PDF, these encoders enhance editing performance with NEGLIGIBLE additional time and memory overhead**. You can also refer to **Table 7 in the Rebuttal PDF** to compare the editing speed and performance of UniKE with the baselines. It shows that UniKE achieves significant performance advantages without incurring additional resource costs or computation time. Therefore, we think that the additional complexity they bring to the editing process is acceptable.

**(2.2)** Leveraging 15k additional triplets to train the encoders is a **one-time** pre-training process, taking less than an hour. Once pre-trained, the encoders are **frozen** for downstream knowledge editing tasks, **adding no training-time cost to knowledge editing.** As shown in **Table 10 in the Rebuttal PDF**, when the pre-trained encoders are removed, performance decreases significantly. So we think that the time cost of this one-time pre-training is acceptable. Furthermore, such pre-training is common in knowledge editing. Methods like MEND and SERAC also require pre-training, **with pre-training time often exceeding one day, which is 24 times longer than UniKE's pre-training.** ***[Please Refer to Table 10 in the Rebuttal PDF for the Results]***

**Q3:** I did not find an experiment discussing $\beta$.

**A3:** We apologize for neglecting to discuss the value of $\beta$. In our experiments, we set $\beta$=1.0. And we conducted further experiments, ranging $\beta$ from 0.0 to 1.6, to evaluate the performance.
As shown in **Figure 1 in the Rebuttal PDF**, within the range of 0.8 to 1.6, UniKE's performance shows no significant variation with changes in $\beta$. **This indicates that $\zeta$ is robust to the change of $\beta$**. ***[Please Refer to Figure 1 in the Rebuttal PDF for the Results]***

**Q4:** Why would you add extra 10 kv pairs in the FFN of the last 4 transformer layers in intrinsic knowledge editing? It seems that $\zeta$ is quite sensitive.

**A4:** Thank you for the question. We would like to address your concerns from two points.

**(4.1)** The implementation detail for intrinsic knowledge editing is to add an extra 10 kv pairs to the FFN of the last four transformer layers. "Extra" means that **we preserve the original parameters of the MLLM while achieving intrinsic knowledge editing through the additionally added kv pairs**. Moreover, we choose these hyperparameters because **we find them to be the most suitable** for multimodal editing. Of course, we maintain a comparable number of neurons and transformer layers in the T-Patcher baseline for fair comparison.

**(4.2)** Figure 4(c) in our paper highlights the importance of selecting an appropriate method for constructing the editing direction $\zeta$. Specifically, using a random vector for $\zeta$ degrades performance due to the introduction of noise during editing, which hinders the MLLM from learning effectively. In contrast, our proposed method for constructing $\zeta$ shows a noticeable performance improvement (especially in generality), as we learn a generalizable editing direction from a large number of samples, addressing the inherent limitations of generality in intrinsic knowledge editing. Moreover, once the construction of $\zeta$ is complete, as seen in **A3** above, adjusting its weight $\beta$ has little impact on editing performance.
It indicates that **the constructed $\zeta$ itself is robust to weight variations, although its construction method is sensitive.**

Once again, we express our heartfelt gratitude for your valuable suggestions! In the revised version of the paper, we will improve the descriptions to make them clearer. --- Rebuttal Comment 1.1: Comment: The rebuttal has addressed most of my concerns. I will raise my score to 6. --- Reply to Comment 1.1.1: Comment: Thank you for raising the score. Your valuable suggestions greatly contribute to the quality of our paper. And we are deeply grateful for your insightful suggestions on our work! --- Rebuttal 2: Title: Tables 7, 10 and Figure 1 in the Rebuttal PDF Comment: To facilitate your reading, we also paste our additional experimental results here, **which are consistent with the tables / figures in the rebuttal PDF**.

**Table 7:** The computational speed, resource utilization and performance of each method. We use the average results of five metrics (Reliability, T-Generality, M-Generality, T-Locality, and M-Locality) as the performance measure.

| Method | GPU memory | Editing time for each sample | Avg performance |
|:----------|:----------:|:----------------------------:|:---------------:|
| FT | 22G | 6.1s | 60.6 |
| KE | 24G | 5.8s | 74.7 |
| T-Patcher | 18G | 4.7s | 80.4 |
| MEND | 36G | 5.2s | 90.3 |
| IKE | 20G | 1.6s | 65.5 |
| SERAC | 49G | 3.6s | 76.4 |
| **UniKE** | 18G | 5.0s | **95.2** |

**Table 10:** Editing time cost and performance with/without encoders for UniKE. The time refers to the average editing or inference time for one sample. Gen is the average result of T-Generality and M-Generality, while Loc is the average result of T-Locality and M-Locality.

| Method | GPU Memory | Editing time | Inference time | Rel. | Gen. | Loc. |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| w/o encoders | 17.7GB | 4.92s | 0.212s | 96.2 | 91.2 | 90.3 |
| UniKE | 17.8GB | 5.04s | 0.217s | **97.4** | **94.6** | **93.5** |

**Table corresponding to Figure 1 in Rebuttal PDF:** Editing performance for different values of $\beta$. Generality is the average result of T-Generality and M-Generality, while Locality is the average result of T-Locality and M-Locality.

| $\beta$ | 0.0 | 0.4 | 0.8 | 1.0 | 1.2 | 1.6 |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| Rel. | 97.0 | 97.7 | 97.8 | 98.0 | 97.5 | 97.9 |
| Gen. | 91.7 | 93.3 | 95.0 | 95.1 | 94.8 | 94.7 |
| Loc. | 91.6 | 93.0 | 93.6 | 93.8 | 93.2 | 93.5 |

We hope we have effectively addressed your concerns. Discussions are always open. Thank you once again for your time and insightful comments!
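The neuron-addition step described in A1 (appending key-value pairs to an FFN while keeping the original $d \times d'$ weights frozen) can be sketched as follows. This is a hypothetical NumPy illustration; `expand_ffn` and the initialization scale are our own choices, not the authors' implementation.

```python
import numpy as np

def expand_ffn(W_key, W_value, n_extra, rng):
    """Sketch of T-Patcher-style neuron addition: freeze the original
    FFN key/value weights and append n_extra learnable key-value pairs
    (the new knowledge neurons)."""
    d, _ = W_key.shape  # W_key: d x d', W_value: d' x d
    new_keys = 0.01 * rng.normal(size=(d, n_extra))
    new_values = 0.01 * rng.normal(size=(n_extra, W_value.shape[1]))
    W_key_new = np.concatenate([W_key, new_keys], axis=1)        # d x (d' + n_extra)
    W_value_new = np.concatenate([W_value, new_values], axis=0)  # (d' + n_extra) x d
    return W_key_new, W_value_new

rng = np.random.default_rng(0)
W_key = rng.normal(size=(64, 256))
W_value = rng.normal(size=(256, 64))
W_key_new, W_value_new = expand_ffn(W_key, W_value, n_extra=10, rng=rng)
assert W_key_new.shape == (64, 266) and W_value_new.shape == (266, 64)
assert np.allclose(W_key_new[:, :256], W_key)  # original weights preserved
```

During editing, only the appended columns/rows would receive gradient updates, which matches the rebuttal's point that the original MLLM parameters are preserved.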
Summary: This paper proposes UniKE, a novel multimodal editing method that establishes a unified perspective for intrinsic knowledge editing and external knowledge resorting. On this basis, the authors combine both types of knowledge editing methods, executing them in the latent space with a unified paradigm. Furthermore, this paper proposes to disentangle the knowledge representations into the semantic and truthfulness spaces, effectively enhancing the collaboration between intrinsic knowledge and external knowledge resorting. Extensive experimental results show that UniKE achieves promising results under various settings, ensuring that the post-edit MLLM maintains excellent reliability, generality, and locality. Strengths: (1) I think knowledge editing for MLLMs is a relatively new topic. Previously, a few studies merely adapted existing knowledge editing methods from the NLP field into multimodal domain. To the best of my knowledge, this paper is the first to conduct a detailed and systematic analysis of the strengths and weaknesses of existing methods when applied to editing multimodal LLMs. (2) The proposed method is very novel and effective. Previous efforts in knowledge editing show significant differences between intrinsic knowledge editing methods and external knowledge resorting methods. In this work, the authors ingeniously convert in-context editing into the format of feature shifting, achieving a unification of the editing paradigms that can operate simultaneously within the same transformer layer with synergistic correlation. I find this to be a very inspiring design. Moreover, the design of knowledge collaboration is closely integrated with this unified paradigm. (3) The experiments are very solid and thorough, clearly demonstrating that UniKE effectively addresses multimodal knowledge editing tasks under various setups. Meanwhile, the authors have also provided the implementation code for the experiments. 
(4) I commend the authors for conducting an extensive set of ablations and analyses, which are very helpful in understanding the impact of each component within UniKE. (5) Additionally, I believe that the method proposed by the authors is not only applicable to knowledge editing tasks. By converting in-context learning into the representation space and avoiding the need to increase the context window space, it better synergizes with parameter update learning. I consider this to have significant implications for further studies on how to construct more powerful MLLMs. Weaknesses: (1) In the NLP community, some studies discuss the resilience of knowledge editing methods to overediting [1] by adopting the contrastive knowledge assessment [2] (CKA). Unlike the locality property that measures whether LLMs forget previous knowledge, overediting can be understood as excessive generalization to seemingly similar but unrelated samples. Although there may be no current work on multimodal editing that discusses the phenomenon of overediting, I encourage the authors to add relevant experiments for a straightforward comparison of the resilience to overediting among each method (UniKE, MEND, T-Patcher, and IKE). (2) A more challenging task of knowledge editing is counterfactual editing, where the edited answer $y$ to the question $x$ can sometimes be counterfactual to the real world. A typical counterfactual editing dataset in the NLP community is called COUNTERFACT [3], which more accurately reflects the true effectiveness of knowledge editing methods by avoiding the effects of LLMs knowing this knowledge before editing. I encourage the authors to construct multimodal counterfactual editing datasets and conduct more experiments to verify whether UniKE performs better in counterfactual editing scenarios compared to MEND, T-Patcher and IKE. [1] Zheng, Ce, et al. "Can we edit factual knowledge by in-context learning?." arXiv preprint arXiv:2305.12740 (2023). [2] Dong, Qingxiu, et al.
"Calibrating factual knowledge in pretrained language models." arXiv preprint arXiv:2210.03329 (2022). [3] Meng, Kevin, et al. "Locating and editing factual associations in GPT." Advances in Neural Information Processing Systems 35 (2022): 17359-17372. Technical Quality: 4 Clarity: 4 Questions for Authors: See the weaknesses. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have adequately discussed the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable comments and high appreciation of our work! We are encouraged that our research is recognized as having significant implications for further studies on constructing more powerful MLLMs. We will address your concerns point by point. **Q1:** In the NLP community, some studies discuss the resilience of knowledge editing methods to overediting by adopting the contrastive knowledge assessment (CKA). Unlike the locality property that measures whether LLMs forget previous knowledge, overediting can be understood as excessive generalization to seemingly similar but unrelated samples. Although there may be no current work on multimodal editing that discusses the phenomenon of overediting, I encourage the authors to add relevant experiments for a straightforward comparison of the resilience to overediting among each method (UniKE, MEND, T-Patcher, and IKE). **A1:** Thank you for raising such a professional question! To construct the dataset for overediting evaluation, we randomly select a portion of the test data from the E-VQA task. We then make minor modifications to the text questions in each example so that they appear similar to the original questions but differ in semantics, thus constructing the test data for overediting. To improve experimental efficiency, we conduct experiments on MiniGPT-4 with the setup of one-step editing. The results of the CKA evaluation for each method are listed in the following table. We can see that UniKE outperforms all other methods. The results indicate that **UniKE is less affected by overediting, which further demonstrates its robust capabilities.**

Table 1: The results of CKA evaluation on MiniGPT-4 with the setup of one-step editing.

| Method | T-Patcher | MEND | IKE | UniKE |
|:-|:-:|:-:|:-:|:-:|
| **CKA** | 1.38 | 1.33 | 1.47 | **1.58** |

We also present the results in Table 8 of the Rebuttal PDF.
&nbsp; **Q2:** A more challenging task of knowledge editing is counterfactual editing, where the edited answer to the question can sometimes be counterfactual to the real world. A typical counterfactual editing dataset in the NLP community is called COUNTERFACT, which more accurately reflects the true effectiveness of knowledge editing methods by avoiding the effects of LLMs knowing this knowledge before editing. I encourage the authors to construct multimodal counterfactual editing datasets and conduct more experiments to verify whether UniKE performs better in counterfactual editing scenarios compared to MEND, T-Patcher, and IKE. **A2:** Thank you for the insightful question. We first construct a counterfactual editing dataset based on the MMEdit dataset, ensuring that the MLLM did not have prior knowledge of the editing target. We also conduct experiments on MiniGPT-4 with one-step editing. The results of the counterfactual editing are shown in the following table. **It can be seen that UniKE significantly outperforms existing methods in this more challenging editing scenario, fully demonstrating the effectiveness of our method.**

Table 2: Performance of counterfactual editing on MiniGPT-4.

| Method | Rel | T-Gen | M-Gen | T-Loc | M-Loc | Avg |
| :- | :-: | :-: | :-: | :-: | :-: | :-: |
| T-Patcher | 80.0 | 65.9 | 57.7 | 84.2 | 88.3 | 75.2 |
| MEND | 90.6 | 83.2 | 74.1 | 93.5 | 82.1 | 84.7 |
| IKE | 90.3 | 83.7 | **81.5** | 44.1 | 5.0 | 60.9 |
| UniKE | **90.8** | **84.7** | 80.7 | **94.9** | **94.5** | **89.1** |

We also present the results in Table 9 of the Rebuttal PDF. &nbsp; We will add these experiments to the main body or the appendix of our paper. Thank you once again for your insightful and professional feedback! --- Rebuttal 2: Comment: Thank you for your rebuttal response.
I think that over-editing evaluation and counterfactual editing are critical tasks that can reflect whether a model truly possesses the capabilities of knowledge cognitive learning in knowledge editing scenarios. It's impressive to see that UniKE performs well on these two challenging tasks. I consider UniKE to be a strong contribution, and I will raise my score to 9. --- Rebuttal Comment 2.1: Comment: Thank you for raising the score. We deeply appreciate your recognition of our work and the constructive advice you've offered!
Summary: UniKE is a unified framework for multi-modal knowledge editing that includes three main aspects: 1. Knowledge Separation: UniKE divides knowledge into factuality and semantic spaces to manage and coordinate different types of knowledge more effectively. 2. Knowledge Collaboration: In the factuality space, UniKE standardizes new knowledge based on a learned factuality distribution, enhancing reliability and generality. In the semantic space, it adjusts the integration of external knowledge based on relevance to the input samples, maintaining locality. 3. Multi-step Editing: UniKE supports single-step, multi-step sequence, and cross-task editing while maintaining high reliability, generality, and locality. Through these innovations, UniKE significantly improves performance in multi-modal knowledge editing tasks. Strengths: 1. The paper introduces UniKE, a novel framework that seamlessly integrates intrinsic and external knowledge editing, enhancing the model’s ability to handle complex multimodal information effectively. The motivation behind the work is clear, and the article is well structured, guiding the reader through the innovative approach and its benefits. 2. By disentangling knowledge into semantic and truthfulness spaces, the proposed method ensures robust collaboration between different types of knowledge, significantly improving the model's reliability, generality, and locality. The method's effectiveness is demonstrated through comprehensive experiments across various settings, consistently outperforming existing state-of-the-art methods. 3. UniKE's design allows for application across different multimodal models and editing scenarios, making it a versatile and robust solution for enhancing multimodal language models. Weaknesses: 1. The paper does compare UniKE with other intrinsic knowledge editing and external knowledge resorting methods and highlights its efficiency in several aspects.
However, it lacks a detailed discussion on computational speed and resource utilization. 2. The reliance on fine-tuning for knowledge updates could lead to overfitting, especially if the model is frequently updated. This could impact the model’s generalization abilities, making it less effective in unforeseen or less frequent scenarios. 3. The paper lacks detailed explanations for the evaluation metrics in Line 240. In Table 2, across multiple experiments, it is unclear why there is little difference compared to the SERAC method in the T-Loc metric, but a significant difference in the M-Loc metric. 4. The paper lacks detailed information about the parameter "n" used in the contrastive learning formula, making it difficult to understand its impact on model performance, and does not discuss how different values of "n" might influence the results and effectiveness of the method. Technical Quality: 3 Clarity: 3 Questions for Authors: Please address the comments in the weaknesses section. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations of the article have been discussed by the authors Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your kind feedback and valuable comments. We will address your concerns as follows. **Q1:** Lacking a detailed discussion on computational speed and resource utilization. **A1:** Thank you for the suggestion. In **Table 7 of the Rebuttal PDF**, we list the computational speed and resource utilization of each method. As can be seen, compared to most baselines, **UniKE achieves significant performance advantages without incurring additional resource costs or computation time**. Moreover, although external knowledge editing methods (IKE, SERAC) are faster in terms of editing speed, their performance is unacceptable, **lagging 20-30 points behind UniKE**. These results demonstrate the superiority of UniKE. ***[Please Refer to Table 7 in the Rebuttal PDF for the Results]*** &nbsp; **Q2:** The reliance on fine-tuning for knowledge updates could lead to overfitting. **A2:** First, the objective and evaluation metrics of knowledge editing include not only successfully completing edits but also ensuring the post-edit MLLM can generalize over equivalent input neighbors (**Generality**) and maintain consistent output for irrelevant inputs (**Locality**). In fact, **Generality and Locality can be considered as two aspects of evaluating overfitting: the new knowledge should generalize well without disrupting the generalization of unrelated old knowledge.** In our method, to maintain Generality and Locality and prevent overfitting, **we retain the original model parameters while combining both intrinsic and external knowledge editing within a unified framework**. Intrinsic knowledge editing **introduces a minimal amount of tunable parameters** in a small subset of FFNs **to preserve locality**. Meanwhile, external knowledge editing incorporates retrieved in-context representations (with no tunable parameters) into the latent space of the transformer **to enhance generality**.
We also propose knowledge disentangling to promote knowledge collaboration. In the semantic space, intrinsic knowledge helps select appropriate external knowledge, **preventing locality disruption**. In the truthfulness space, external knowledge identifies a generalizable editing direction, regulating intrinsic knowledge and **alleviating its restriction on generality**. The experimental results (Tables 2, 3, and 4 in our paper) demonstrate that UniKE achieves better locality and generality in various multimodal editing scenarios, **and more effectively addresses the issue of overfitting compared to baselines.** &nbsp; **Q3:** The paper lacks detailed explanations for the evaluation metrics. In Table 2, across multiple experiments, it is unclear why there is little difference compared to the SERAC method in the T-Loc metric, but a significant difference in the M-Loc metric. **A3:** We apologize for any confusion regarding the evaluation metrics. In Appendix B.1, we detail the definitions of the five evaluation metrics (see Eqs. 11-15). In brief, these metrics assess accuracy across different input samples: Reliability on edit targets, T-Locality on unrelated QA tasks, M-Locality on unrelated VQA tasks, T-Generality on samples that rephrase the text inputs based on the edit targets, and M-Generality on samples that redraw image inputs based on the edit targets. Furthermore, to analyze why the difference in the T-Loc metric is small while the difference in M-Loc is significant, we examined some intermediate outputs of SERAC during editing. We discover that the reason is **related to the pipeline of SERAC**. SERAC adopts a counterfactual model while keeping the original model unchanged. It employs a scope classifier to determine if a new input falls within the range of stored edit examples. If the input matches any cached edit, SERAC outputs the counterfactual model’s prediction based on the input and the most probable edit.
Otherwise, the original model’s prediction is given. For T-Loc, the input samples contain only text information, which differs significantly from the input format of edited samples. This makes the counterfactual model less likely to activate, so the original model's prediction is output, yielding better locality. However, for M-Loc, the input samples, like the edited samples, contain multimodal information. As the scope classifier is trained to activate the counterfactual model for multimodal inputs even if they are unrelated to the editing target, **the incorrect choices made by the scope classifier result in significantly worse M-Loc performance of SERAC, causing the observed phenomenon.** &nbsp; **Q4:** The paper lacks detailed information about the parameter "n" used in the contrastive learning formula, and does not discuss how different values of "n" might influence the results and effectiveness of the method. **A4:** Thank you for raising an important concern! "n" is a crucial hyperparameter worth discussing. First, n represents the number of training samples used for contrastive learning during encoder pre-training. These training data come from the in-context knowledge representations we constructed; Appendix C details how we prepare and extract them. In our experiments, we set n to approximately 16000. Building on this, we perform an ablation study on the value of n to test how different numbers of pre-training samples affect the final editing performance. As shown in **Figure 2 in the Rebuttal PDF**, performance keeps improving as n increases from 0 to 8000; beyond this, increasing n from 8000 to 16000 yields only marginal gains.
**It demonstrates that our encoder training is data-efficient and only requires relatively small amounts of data to achieve effective knowledge representation disentangling.** ***[Please Refer to Figure 2 in the Rebuttal PDF for the Results]*** &nbsp; Thank you once again for your time and valuable feedback! --- Rebuttal 2: Comment: Dear Reviewer:  Thank you very much for your kind feedback and valuable comments. I have carefully read your comments. However, it appears that the content from the "Strengths" section may have been inadvertently pasted into the "Weaknesses" section. As a result, the weaknesses you mentioned still seem to describe the strengths of our work. This makes it challenging to identify and address the specific concerns you may have intended to highlight.   Could you please review your comments once more and provide additional clarification on the weaknesses you observed? Your insights are crucial for improving our paper, and I am keen to address all your concerns effectively.   Thank you again for your time and valuable feedback! --- Rebuttal Comment 2.1: Title: Clarification on Weakness Comment: I apologize for the mistake in my previous feedback. I have reviewed my comments and provided the correct feedback on the weaknesses of your paper. I hope this clarifies my observations and helps you improve your paper. --- Rebuttal 3: Title: Table 7 and Figure 2 in the Rebuttal PDF Comment: To facilitate your reading, we also paste our additional experimental results here, **which are consistent with the tables / figures in the rebuttal PDF**. &nbsp; **Table 7:** The computational speed, resource utilization and performance of each method. We use the average results of five metrics (Reliability, T-Generality, M-Generality, T-Locality, and M-Locality) as the performance measure. 
| Method | GPU memory | Editing time for each sample | Avg performance |
|:----------|:----------:|:----------------------------:|:---------------:|
| FT | 22G | 6.1s | 60.6 |
| KE | 24G | 5.8s | 74.7 |
| T-Patcher | 18G | 4.7s | 80.4 |
| MEND | 36G | 5.2s | 90.3 |
| IKE | 20G | 1.6s | 65.5 |
| SERAC | 49G | 3.6s | 76.4 |
| **UniKE** | 18G | 5.0s | **95.2** |

&nbsp; **Table corresponding to Figure 2 in Rebuttal PDF:** Editing performance on different values of $n$. Generality is the average result of T-Generality and M-Generality, while Locality is the average result of T-Locality and M-Locality.

| $n$ | 0 | 4000 | 8000 | 12000 | 16000 |
| :--- | :---: | :---: | :---: | :---: | :---: |
| Rel. | 96.2 | 97.0 | 97.8 | 98.1 | 98.0 |
| Gen. | 91.2 | 93.3 | 94.5 | 94.9 | 95.1 |
| Loc. | 90.3 | 92.2 | 93.3 | 93.7 | 93.8 |

&nbsp; We hope we have addressed all of your concerns. Discussions are always open. Thank you once again for your constructive suggestions!
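*(Postscript to A3, for clarity.)* The SERAC-style routing we described there can be summarized with a minimal sketch. This is an illustrative simplification under our own reading of the pipeline, not SERAC's actual implementation; all names (`serac_predict`, `scope_classifier`, etc.) are hypothetical placeholders.

```python
# Hedged sketch of SERAC-style routing: a scope classifier decides whether
# an input falls within the scope of any cached edit; in-scope inputs go to
# the counterfactual model, out-of-scope inputs fall back to the frozen base
# model. All names here are illustrative, not SERAC's real API.

def serac_predict(x, edit_cache, scope_classifier, counterfactual_model,
                  base_model, threshold=0.5):
    """Route input x to the counterfactual model or the original base model."""
    # Score every cached edit against the input; keep the best match.
    scores = [scope_classifier(x, edit) for edit in edit_cache]
    if edit_cache and max(scores) > threshold:
        best_edit = edit_cache[scores.index(max(scores))]
        # In-scope: predict from the input and the most probable edit.
        return counterfactual_model(x, best_edit)
    # Out-of-scope: the original model's prediction is given.
    return base_model(x)
```

Under this view, T-Loc stays high because text-only inputs rarely push the scope score above the threshold, while unrelated multimodal inputs can still trigger the counterfactual branch, matching the M-Loc degradation we observe.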
Summary: This paper proposes UniKE, a novel multimodal editing method that establishes a unified perspective and paradigm for intrinsic knowledge editing and external knowledge resorting. Within such a unified framework, the authors further promote knowledge collaboration by disentangling the knowledge representations into the semantic and truthfulness spaces. Extensive experiments validate the effectiveness of UniKE, which ensures that the post-edit MLLM simultaneously maintains excellent reliability, generality, and locality. Strengths: 1. The paper is well-written and easy to follow, and the motivation is clear and reasonable. 2. The paper gives a unified perspective on the intrinsic refinement of knowledge and the strategic reorganization of external knowledge, which can enhance subsequent research endeavors. 3. The experimental settings (one-step editing, sequential editing, cross-task editing) are fair. Weaknesses: 1. The authors only edit Q-Former-style MLLMs (MiniGPT-4, BLIP-2); more foundation models should be compared, e.g., LLaVA. Besides, the improvement on BLIP-2 (Tab. 1) seems incremental. 2. More recent model editing baselines should be compared to prove the effectiveness of the proposed methods. 3. The paper focuses on MLLM editing; what are the main differences between MLLM editing and LLM editing? What are the specific designs for multimodal models? I notice that some compared methods are proposed for LLM editing. Can the proposed methods be used on LLM editing? The authors should give some explanations and experimental results if possible. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper briefly mentioned limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the valuable comments! We are encouraged to see that our work can enhance subsequent research endeavors. We will address your concerns point by point. **Q1**: More foundation models should be compared (LLaVA). The improvement on BLIP-2 seems incremental. **A1**: Thank you for raising the important concern. We first leverage LLaVA1.5 to perform multimodal editing, and then we will show that the improvement on BLIP-2 OPT is not incremental. **(1.1) Multimodal Editing on LLaVA** We conduct both one-step editing and 10-step cross-task editing on LLaVA1.5. The results are shown in **Tables 1 and 2 of the rebuttal PDF**. With LLaVA as the backbone, **UniKE still balances all three target properties well, outperforming all baseline methods** in both one-step editing and cross-task editing. This demonstrates that UniKE is **model-agnostic and effective across various types of MLLMs**. ***[Please Refer to Tables 1 and 2 in the Rebuttal PDF for the Results]*** &nbsp; **(1.2) Analysis on BLIP-2 OPT** Though the improvement of one-step editing on BLIP-2 OPT (in Table 1 of our paper) may seem incremental, we aim to demonstrate the significant performance advantage of UniKE on BLIP-2 OPT from three aspects: (I) Across all ten metrics of BLIP-2 OPT one-step editing, UniKE achieves the best results in five metrics, with the remaining suboptimal results only marginally lower than the best. We also calculate the average performance of each method across all ten metrics, as shown in the following table. **The average result of UniKE in BLIP-2 OPT one-step editing significantly surpasses other methods, exceeding the second-best method (MEND) by 5.1 points**.
Table: Avg performance of all metrics on BLIP-2 OPT one-step editing.

| Method | FT | KE | T-Patcher | MEND | IKE | SERAC | **UniKE** |
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| AVG Perf. | 56.2 | 70.3 | 82.3 | 90.5 | 65.3 | 77.6 | **95.6** |

(II) In one-step editing on BLIP-2 OPT, **UniKE is the only method that consistently performs well across all metrics, whereas other methods have at least one notably poor metric**. For instance, MEND's M-Gen(E-VQA), M-Gen(E-IC), and M-Loc(E-IC) results are 15.7, 19.0, and 14.9 points lower than UniKE's, respectively. IKE and SERAC perform especially poorly on M-Loc, with accuracy below 10%. (III) Additionally, the relatively low difficulty of the one-step editing task may allow some baselines to maintain relatively high evaluation scores. In more challenging scenarios such as multi-step cross-task editing, UniKE's advantage becomes more apparent. Specifically, in **BLIP-2 OPT cross-task editing** with higher difficulty (shown in **Table 3 of the Rebuttal PDF**), **UniKE consistently outperforms all other baselines in all metrics, demonstrating a more significant performance advantage**. ***[Please Refer to Table 3 in the Rebuttal PDF for the Results]*** &nbsp; **Q2**: More recent baselines should be compared. **A2**: Thank you for the suggestion. We further compare UniKE with two recent baselines proposed in 2024: MENMET and WISE (a concurrent work). We leverage MiniGPT-4 to conduct both one-step editing and cross-task editing. The results are shown in **Tables 4 and 5 of the Rebuttal PDF**. **The performance of UniKE surpasses these recent baselines, further demonstrating the effectiveness of UniKE.** [Tan et al. 2024] MENMET: Massive Editing for Large Language Models via Meta Learning. ICLR 2024. [Wang et al. 2024] WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models. 23 May 2024.
***[Please Refer to Tables 4 and 5 in the Rebuttal PDF for the Results]*** &nbsp; **Q3**: What are the main differences between MLLM editing and LLM editing? What are the specific designs for multimodal models? Can UniKE be used on LLM editing? **A3**: Thank you for your questions. We will address them from three points: **(3.1)** The main differences between MLLM editing and LLM editing lie in **task difficulty**. An LLM stores single-modality NLP knowledge within its parameters, while an MLLM stores multi-modality knowledge, making MLLM editing more challenging due to the need to edit knowledge from multiple modalities to fix errors. **Although mainstream editing methods proposed for both are similar, current methods can effectively edit LLMs but not MLLMs** [Chen et al. 2023]. **(3.2)** [Chen et al. 2023] attempted specific designs for editing MLLMs such as **editing the unique Qformer**, but found **these methods less effective than directly applying LLM editing methods**, as shown in the following table. This is because the bottleneck for MLLMs lies in enabling LLMs to reason with multimodal input, rather than in extracting visual information. Therefore, **mainstream MLLM editing methods still focus on editing the LLM and directly adopt LLM editing techniques**. However, directly applying LLM editing methods for multimodal editing still fails to balance both generality and locality. In this paper, we discuss the shortcomings of existing LLM editing methods when applied to multimodal scenarios and develop a unified framework along with knowledge disentangling, which leverages the strengths and mitigates the weaknesses of each method type, leading to more effective multimodal editing.

| | Methods editing Qformer | Methods editing LLM | UniKE |
|:-|:-:|:-:|:-:|
| AVG Multimodal Editing Performance | 71.8 | 89.3 | **95.2** |

**(3.3)** Since LLM editing is merely a simpler case of MLLM editing, UniKE, which successfully addresses MLLM editing, can also be effectively used for LLM editing.
Following previous LLM-editing work, we use GPT-J-6B to conduct both one-step and multi-step editing on the ZsRE benchmark. As shown in **Table 6 of the Rebuttal PDF, UniKE still outperforms all baselines in LLM editing**. ***[Please Refer to Table 6 in the Rebuttal PDF for the Results]*** &nbsp; We will integrate these experiments into our paper. Thank you again for the valuable feedback! [Chen et al. 2023] Can We Edit Multimodal Large Language Models? EMNLP 2023. --- Rebuttal Comment 1.1: Title: Official Comment Comment: Thanks for your response. The rebuttal addresses most of my concerns. I will raise my score to 7. --- Reply to Comment 1.1.1: Comment: Thank you for raising the score. Your valuable suggestions greatly contribute to the quality of our manuscript. Thank you again for your precious time! --- Rebuttal 2: Title: Tables 1-6 in the Rebuttal PDF Comment: To facilitate your reading, we also paste our additional experimental results here, **which are consistent with the tables in the rebuttal PDF**. &nbsp; **Table 1**: Performance of one-step editing on LLaVA1.5 (We average the results on E-IC and E-VQA).

| Method | Rel. | T-Gen. | M-Gen. | T-Loc. | M-Loc. | **Avg** |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| FT | 67.4 | 57.9 | 54.8 | 67.1 | 63.2 | 62.1 |
| KE | 72.7 | 65.4 | 55.3 | 82.2 | 67.3 | 68.4 |
| T-Patcher | 89.0 | 76.5 | 69.0 | 81.2 | 81.3 | 79.6 |
| MEND | 95.4 | 92.6 | 78.3 | 83.5 | 80.3 | 86.0 |
| IKE | 93.4 | 85.1 | 77.9 | 27.7 | 3.2 | 57.5 |
| SERAC | **96.3** | 92.4 | 85.5 | 83.3 | 7.7 | 73.0 |
| **UniKE** | 95.7 | **92.8** | **88.4** | **86.0** | **86.4** | **89.9** |

&nbsp; **Table 2**: Performance of cross-task editing on LLaVA1.5.

| Method | Rel. | T-Gen. | M-Gen. | T-Loc. | M-Loc. | Avg |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| FT | 66.9 | 57.2 | 51.0 | 62.3 | 54.4 | 56.4 |
| KE | 69.8 | 60.3 | 52.5 | 79.3 | 62.1 | 64.8 |
| T-Patcher | 81.2 | 60.0 | 57.4 | 77.4 | 76.5 | 70.5 |
| MEND | 90.4 | 84.3 | 73.8 | 78.6 | 76.0 | 80.6 |
| SERAC | 92.1 | 88.3 | 82.5 | 82.2 | 1.2 | 69.3 |
| **UniKE** | **92.2** | **89.2** | **83.8** | **82.7** | **84.7** | **86.5** |

&nbsp; **Table 3**: Performance of cross-task editing on BLIP-2 OPT.

| Method | Rel. | T-Gen. | M-Gen. | T-Loc. | M-Loc. | Avg |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| FT | 57.2 | 49.9 | 43.2 | 52.2 | 49.7 | 50.4 |
| KE | 64.2 | 60.1 | 57.2 | 83.5 | 59.2 | 64.8 |
| T-Patcher | 83.1 | 69.7 | 65.9 | 84.5 | 77.9 | 76.2 |
| MEND | 84.2 | 82.4 | 74.9 | 91.4 | 80.2 | 82.6 |
| SERAC | 90.8 | 89.2 | 84.1 | 90.0 | 1.7 | 71.2 |
| **UniKE** | **91.1** | **90.6** | **88.2** | **91.7** | **85.6** | **89.4** |

&nbsp; **Table 4**: Comparison with recent baselines for one-step editing on MiniGPT-4 (We average the results on E-IC and E-VQA).

| Method | Rel. | T-Gen. | M-Gen. | T-Loc. | M-Loc. | Avg |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| MENMET | 97.0 | 96.2 | 82.4 | 98.0 | 85.2 | 91.8 |
| WISE | 97.2 | 92.2 | 88.7 | 98.4 | **88.2** | 93.0 |
| **UniKE** | **97.4** | **96.6** | **92.6** | **98.8** | 88.1 | **94.7** |

&nbsp; **Table 5**: Comparison with recent baselines for cross-task editing on MiniGPT-4.

| Method | Rel. | T-Gen. | M-Gen. | T-Loc. | M-Loc. | **Avg** |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| MENMET | 88.4 | 87.2 | 78.0 | 86.1 | 82.5 | 84.4 |
| WISE | 89.2 | 85.4 | 83.4 | 87.8 | 83.6 | 85.9 |
| **UniKE** | **90.7** | **88.2** | **86.8** | **90.4** | **83.8** | **88.0** |

&nbsp; **Table 6**: Performance of each method on the LLM editing task (ZsRE) for one-step editing and 200-step editing.
| | ONE-STEP EDITING | | | | 200-STEP EDITING | | | |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Method** | **Rel.** | **Gen.** | **Loc.** | **Avg.** | **Rel.** | **Gen.** | **Loc.** | **Avg.** |
| FT | 77.4 | 76.7 | 35.5 | 63.2 | 19.5 | 17.2 | 5.4 | 14.0 |
| KE | 20.6 | 20.1 | 81.3 | 40.7 | 7.6 | 6.8 | 65.8 | 26.7 |
| T-Patcher | 97.1 | 95.0 | 96.2 | 96.1 | 81.4 | 70.6 | 91.3 | 81.1 |
| MEND | 98.2 | 97.7 | 97.4 | 97.8 | 0.0 | 0.0 | 0.0 | 0.0 |
| In-Context Editing | 99.4 | 97.2 | 59.2 | 85.3 | - | - | - | - |
| SERAC | 88.6 | 87.9 | 99.9 | 92.1 | 24.0 | 23.2 | **96.4** | 47.9 |
| MENMET | 99.1 | 86.8 | 97.4 | 94.4 | 82.9 | 73.6 | 90.2 | 82.2 |
| WISE | 98.8 | 96.3 | **99.9** | 98.3 | 82.8 | 74.7 | 95.5 | 84.3 |
| **UniKE** | **99.5** | **97.9** | 99.6 | **99.0** | **85.1** | **76.7** | 95.6 | **85.8** |

&nbsp; These experiments will be integrated into the main body or the appendix of our paper. We hope we have addressed all of your concerns. Discussions are always open. Thank you again for your time and valuable suggestions!
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their insightful and valuable comments! Overall, we are encouraged that they find that: (1) The motivation is **clear and reasonable**, supported by a well-structured article. *(Reviewer mp35, Reviewer w8aR, Reviewer V1xr)* (2) UniKE establishes **a unified framework** for intrinsic knowledge editing and external knowledge resorting, which is **novel and effective**. *(All Reviewers)* (3) The experiments are **solid and thorough**, clearly demonstrating that UniKE consistently maintains excellent reliability, generality, and locality **across various settings**. *(All Reviewers)* (4) UniKE has **significant implications for further studies** on constructing more powerful MLLMs, can **enhance subsequent research endeavors**, and is a **versatile and robust solution** for enhancing multimodal language models. *(Reviewer mp35, Reviewer w8aR, Reviewer V1xr)* &nbsp; To address the concerns raised by the reviewers, we have conducted several additional experiments to further demonstrate the superiority of UniKE from various perspectives. We include these experimental results **in the rebuttal PDF**, which contains 10 tables and 2 figures. (1) In Tables 1-3, we perform one-step editing and cross-task editing with LLaVA 1.5 as the backbone, also conducting cross-task editing on BLIP-2 OPT. The results demonstrate that UniKE is effective across various types of MLLMs. (2) In Tables 4-5, we compare UniKE with more recent baselines, further demonstrating the proficiency of our proposed UniKE. (3) In Table 6, we apply UniKE to LLM editing tasks, showing that UniKE can effectively address LLM editing tasks in pure NLP scenarios. (4) In Tables 8-9, we leverage UniKE on two interesting and challenging tasks with newly constructed data (overediting evaluation and counterfactual editing), finding that UniKE minimizes the influence of overediting and effectively addresses counterfactual editing tasks.
(5) In Table 7, Table 10, and Figures 1-2, we further analyze the editing time efficiency of UniKE and provide additional ablation studies, proving the robustness of UniKE's design. &nbsp; These experiments will be integrated into the main body or the appendix of our paper. Next, we will address each reviewer's detailed concerns point by point. We hope to address all of your concerns. Discussions are always welcome. Thank you! Pdf: /pdf/6a1a8815ed45bbd510d0137a735f8f4a9c1046c2.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Localize, Understand, Collaborate: Semantic-Aware Dragging via Intention Reasoner
Accept (poster)
Summary: This paper proposes a novel insight that transforms the "how to drag" issue into a two-step "what-then-how" process by introducing an intention reasoner and a collaborative guidance sampling mechanism. They also identify image quality issues and design quality guidance to enhance performance. Experiments show its superiority in semantic-aware drag-based editing. Strengths: 1. The issue of "inherent ambiguity of semantic intention" seems crucial and interesting. Shifting the "how to drag" issue into the two-step "what-then-how" paradigm can improve the controllability of the dragging operation. 2. The paper is well-written and easy to follow. Weaknesses: Major: 1. How were the single experimental results (like Figure 4) selected from the diverse results conforming to the intention? If they were manually chosen, could this lead to unfair comparisons? 2. Providing the inferred potential intentions, source, and target prompts corresponding to each generated example may help explain the reasons for superior performance. For instance, in the third row of Fig. 5 on the left side, considering that the hand and the dragging operation should not be related to the corresponding prompt, why do other methods struggle with handling the hand information, while the proposed method can manage it effectively? Minor: 1. Fig. 5 caption, misspelling of DragonDif(f)usion. Technical Quality: 3 Clarity: 3 Questions for Authors: I believe that, fundamentally, the user's dragging operation has an intuitive single expectation, but the ambiguity of the operation itself leads to an ill-posed one-to-many mapping. Moreover, this mapping is difficult to traverse and may not even include the user's actual expectation among the diverse versions. Therefore, is it possible to further narrow down the range of expected dragging results, or allow the user to explicitly indicate their needs through additional operations? For other detailed questions and suggestions please refer to the major weaknesses.
Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. We are encouraged that you find our ''what-then-how'' paradigm to be novel and effective. Here is our response to address your concerns. > *W1: How were the single experimental results (like Figure 4) selected from the diverse results conforming to the intention? If they were manually chosen, could this lead to unfair comparisons?* As discussed in Section 3.1, the intentions of the single experimental results are selected based on confidence probabilities to make a fair comparison. We will include relevant explanations in the revised version to avoid misunderstandings. > *W2: Providing the inferred potential intentions, source, and target prompts corresponding to each generated example may help explain the reasons for superior performance. For instance, in the third row of Fig. 5 on the left side, considering that the hand and the dragging operation should not be related to the corresponding prompt, why do other methods struggle with handling the hand information, while the proposed method can manage it effectively?* (1) Thanks for your suggestion. We present these prompts in Fig. 13 and Fig. 14 in rebuttal PDF. We will incorporate the corresponding prompts in the updated version of our paper. (2) The intention, source prompt and target prompt of the third row of Fig. 5 are "move the cup to the top right.", "a cup of coffee on the bottom left." and "a cup of coffee on the top right.", respectively. - **Semantic guidance focusing more on the cup means less variation in other areas (hands)**. By using semantic-aware prompts, semantic guidance directs the editing process to focus on the cup area, thus avoiding changes to the hand. - **Quality guidance guarantees better image quality and avoids hand distortion.** We propose quality guidance, which maintains image quality via a score-based classifier in the editing process. 
This approach improves overall image quality and prevents hand distortion. > *W3: Misspelling of DragonDif(f)usion in Fig. 5 caption.* Thank you for pointing out the misspelling error. We will correct it in the updated version. > *Q1: The user's dragging operation has an intuitive single expectation, but the ambiguity of the operation itself leads to an ill-posed one-to-many mapping. Moreover, this mapping is difficult to traverse and may not even include the user's actual expectation among the diverse versions. Therefore, is it possible to further narrow down the range of expected dragging results, or allow the user to explicitly indicate their needs through additional operations?* (1) **Covering the user needs.** Our approach explores diverse plausible user intentions through repeated sampling with LLMs, significantly increasing the likelihood of covering the user's request. **For users without clear intentions**, we could present a diverse set of plausible intentions for them to choose from, inspiring new ideas, without requiring further input from the user. **For users with clear intentions**, we consider various scenarios (rigid, non-rigid, rotating, etc.) and use repeated sampling with LLMs to explore diverse possibilities, as outlined by Brown B, et al. [r14]. This approach significantly increases the likelihood of covering the user's actual intention. (2) **Allowing flexible interaction.** The intention reasoner allows the users to further narrow down the range of expected dragging results. Specifically, the user can input external constraints to the LLM to limit the scope of generated intents. They can also select the desired intent from a variety of generated intents. (3) **Advantages of the intention reasoner.** It offers significant benefits over user-providing intentions. The intention reasoner can handle vague requests, express complex needs, discover potential needs, and reduce cognitive load. [r14] Brown B, et al. 
Large language monkeys: Scaling inference compute with repeated sampling. arXiv, 2024. --- Rebuttal 2: Comment: Thank you for your responses. After considering other reviews and feedback, I'm inclined to maintain my score of 5 and am leaning toward accepting the paper. However, it could still go either way.
Summary: This paper aims to address the limitation of current dragging-based image editing methods that understand the intentions of users. To this end, the proposed method leverages the reasoning ability of LLMs to infer possible intentions, which are used to provide (asymmetric) semantic guidance in editing. Furthermore, a collaborative guidance sampling method that jointly exploits semantic guidance, quality guidance, and editing guidance is proposed. Among these three types of guidance, quality guidance is provided by a self-trained discriminator with an aesthetic score and images generated by a baseline. The experimental results on the DragBench dataset indicate that the proposed method outperforms several existing methods. Strengths: 1. This paper is well-written and the ill-posedness of dragging-based image editing is indeed a problem that should not be ignored. 2. The proposed LLM-based reasoner and the collaborative guidance sampling strategy improve the baseline performance. Weaknesses: 1. Despite the performance gain, there are observable artifacts in the synthesized images as follows: * Page 7, Figure 4, the last row, windows are missing. * Page 8, Figure 5, 1st row, right column, the shape and the texture of the mailbox are changed. * Page 16, Figure 7, 2nd row, wheels are changed; the shape of the target in the second last row cannot be preserved either. 2. Utilizing LLMs to infer intentions is interesting. However, it requires further clarification: * How can we ensure that the predicted outputs of LLMs (even with the highest confidence) really coincide with the intentions of users? * To the best of my knowledge, the selected baselines do not involve semantic priors from LLMs. Hence, the comparison might be unfair. To fully demonstrate the advantages of the proposed method, text/instruction-based methods should be included for comparison. * Even with the provided visualization of the ablation study (Fig. 
6), it is still unclear how the introduced components affect the results. Sometimes the full model generates artifacts that do not appear in the results of the variants of the proposed method (e.g., the round bottom of the spinning top in Fig. 6 is changed mistakenly by the full model). Technical Quality: 2 Clarity: 4 Questions for Authors: Q1: In A.8 the limitations section, why the proposed method is training-free as it includes a trainable quality discriminator? Q2: How is the computational complexity of the proposed method? Q3: How do we select the optimal one from the $n$ sampled outputs from the LLM? Q4: Except for the GScore metric proposed by GoodDrag, has any other commonly-used quality metric (e.g., LPIPS) used for evaluation? Q5: What will the results look like if the predicted intentions and dragging movements are contradictory? I might change my score depending on the responses to the above questions. Confidence: 4 Soundness: 2 Presentation: 4 Contribution: 2 Limitations: Yes, the authors have discussed the limitations and the potential negative societal impact of the proposed method in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback. Below is our response. > *W1: Despite the performance gain, there are observable artifacts.* We acknowledge these artifacts, a common challenge for drag editing [DragDiffusion, Shi et al. [2024]], but they do not diminish the overall effectiveness of our method. (1) **Diverse Generation Capabilities:** Our "what-then-how" paradigm offers a novel insight, demonstrating significant improvements in the diversity and reasonability of generated intentions and in the interpretability of editing outcomes. (2) **Comparative Analysis:** Quantitatively, our method achieves superior editing accuracy and image quality compared to others. Qualitatively, as Fig. 4 shows, we successfully positioned all objects in the target area, enhancing overall harmony and reducing distortions. (3) **Adaptive adjustment:** Our method navigates the trade-off between editing accuracy and image quality [DiffEdit, Couairon, et al. [2023]]. Users can utilize adaptive controls to prioritize editing precision or visual quality based on their preferences. > *W2-1: How to ensure that the predicted outputs coincide with user intentions.* We use repeated sampling to explore various plausible user intentions, increasing the probability of covering their true intentions [r10]. **For users with clear intentions**, we utilize advanced intention reasoning techniques and repeated sampling to explore different scenarios (e.g., rigid, non-rigid, rotating). This repeated sampling can boost the probability of the LLMs' outputs [r10] aligning with the user's true intentions. **For users with unclear intentions**, our system automatically generates plausible semantic intentions by analyzing context and inferring implicit needs, reducing the need for explicit user input and providing reasonable intention estimates.
The interactive intention reasoner also **allows users to adjust and refine their intentions in real-time**, helping align the generated results with user preferences and evolving needs, ensuring relevance and accuracy. Through these integrated strategies, our method effectively covers and aligns with user intentions, delivering high-quality predictive results. [r10] Brown B, et al. Large language monkeys: Scaling inference compute with repeated sampling. arXiv, 2024. > *W2-2: Comparisons with text/instruction-based methods.* We present quantitative results (Table r3) and qualitative results (Fig. 13 in the Rebuttal PDF). The results show that we outperform them by a significant margin. This advantage may be attributed to the fact that drag editing primarily involves spatial manipulation, whereas the training data of existing text/instruction-based methods [r11, r12, r13] do not include samples with spatial position changes [Drag, Pan et al., 2023].

Table r3: Comparisons with text-based methods.

| | InstructPix2Pix [r11] | MagicBrush [r12] | ZONE [r13] | Ours |
| --- | --- | --- | --- | --- |
| Mean Distance (↓) | 52.34 | 55.77 | 56.83 | **20.46** |
| GScore (↑) | 7.14 | 7.16 | 6.83 | **7.37** |

[r11] Brooks T, et al. Instructpix2pix: Learning to follow image editing instructions. In CVPR, 2023. [r12] Zhang K, et al. Magicbrush: A manually annotated dataset for instruction-guided image editing. In NeurIPS, 2023. [r13] Li S, et al. Zone: Zero-shot instruction-guided local editing. In CVPR, 2024. > *W2-3: How the introduced components affect the results.* (1) **Effects of the components.** Removing the intention reasoner results in semantically incorrect results (e.g. the shape of the spinning top and wheel). Removing the quality guidance reduces image quality (e.g.
artifacts in the spinning top and the red frame of the front wheel). (2) **Reasonableness of the shape change of the spinning top.** The source and target prompts are "a photo of a wide spinning top" and "a photo of a narrow spinning top." Since the red bottom is part of the spinning top and included in the editing area, narrowing it is semantically consistent. Moreover, the collaboration between the intention reasoner and quality guidance results in a more pronounced narrowing effect in the full implementation. If the user wants to preserve the bottom part, they can exclude it from the editing area (see Fig. 15 in the Rebuttal PDF). > *Q1: The confusion of the expression training-free.* Thanks. We will correct it in the revised version. > *Q2: The computational complexity.* Our method has a relatively small inference time and comparable memory requirements.

Table r4: Computational complexity.

| | DragDiffusion | FreeDrag | DragonDiffusion | DiffEditor | Ours |
| --- | --- | --- | --- | --- | --- |
| Time (s) ↓ | 80 | 92 | 30 | 35 | 48 |
| Memory (GB) ↓ | 12.8 | 13.1 | 15.7 | 15.7 | 15.8 |

> *Q3: Details of how to select the optimal one from the LLM outputs.* We utilize the LLM to infer N times. Each output contains text $d_j$ and a corresponding confidence probability $P(d_j)$, which reflects the quality of the text. We then sample based on the confidence probabilities. Details will be added to the revised version. > *Q4: Performance on LPIPS?* Our method achieves better or comparable performance. However, we would like to point out that as LPIPS is trained on limited images, it may not effectively differentiate the performance of different models [GoodDrag, Zhang et al. [2024b]].

Table r5: Quantitative comparisons.
| | DragDiffusion | FreeDrag | DragonDiffusion | DiffEditor | Ours |
| --- | --- | --- | --- | --- | --- |
| 1-LPIPS (↓) | 0.137 | 0.116 | 0.124 | **0.113** | 0.114 |

> *Q5: What if the predicted intentions and dragging movements are contradictory?* We admit that semantic intent will have a side effect if they are contradictory. Fortunately, no contradictions occur in our experiments, which we attribute to the powerful reasoning ability of the LLM [r8]. Even if contradictions do arise, users can reply to the LLM, which will incorporate additional information to satisfy their needs [r9]. --- Rebuttal Comment 1.1: Comment: I thank the authors for their efforts in addressing my concerns. I tend to maintain my score as borderline accept, but as mentioned by the other reviewer, it can go either way.
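The selection step the authors describe in Q3 above (sampling one of the $N$ LLM outputs $d_j$ in proportion to its confidence probability $P(d_j)$) amounts to simple weighted random sampling. A minimal illustrative sketch, assuming the outputs arrive as (text, confidence) pairs; the candidate intentions and function name below are hypothetical, not taken from the paper:

```python
import random

def sample_intention(candidates, rng=None):
    """Pick one generated intention, weighted by its confidence probability.

    candidates: list of (text d_j, confidence P(d_j)) pairs.
    """
    rng = rng or random.Random()
    texts = [text for text, _ in candidates]
    weights = [conf for _, conf in candidates]
    # random.choices normalizes the weights, so P(d_j) need not sum to 1.
    return rng.choices(texts, weights=weights, k=1)[0]

# Hypothetical candidate intentions for a "drag the face upward" edit:
candidates = [
    ("a woman looking up", 0.6),
    ("a woman with her head tilted back", 0.3),
    ("a woman raising her chin", 0.1),
]
choice = sample_intention(candidates, rng=random.Random(0))
```

A fixed seed is used only to make the sketch reproducible; in practice each call would draw a fresh sample, which is why the reviewer notes that repeated runs of LucidDrag can produce different results.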
Summary: This paper presents a novel framework called LucidDrag for drag-based image editing. Compared to previous methods, LucidDrag first reasons about the intention of the dragging operation using an LLM (GPT 3.5) and provides semantic guidance for the following editing process. For better image fidelity, the authors also design a GAN-based discriminator as the quality guidance, obtaining improved results. The authors perform sufficient comparison experiments and ablation studies to support their claims and verify the effectiveness of their proposed components. The authors also promise to release the code. Strengths: 1. The writing is good except for some minor issues. 2. The authors note that drag-based editing has an inherently ill-posed nature, because multiple editing results may correspond to the same input image and dragging conditions. Based on this observation, the authors attempt to introduce additional prompts to provide semantic guidance for better editing results. 3. The comparisons and ablation studies are sufficient to support the claims. Weaknesses: 1. It's a good idea to introduce additional prompts to help the editing process. However, why use an Intention Reasoner to "guess" multiple intentions and then sample between them? Maybe a better way is for the users to provide their true intention, with LLMs just used to normalize the prompts? So, I'm skeptical about the utility of the Intention Reasoner. 2. Some confusions: - Line 162-163, how do you get reasonable intentions only taking the generated description of the object of interest 'O', the original image caption 'C', and drag points 'P' as input? Take Fig. 2 as an example: 'O' is "The nose of a woman", 'C' is "A woman", 'P' is the dragging points. There is no information about the original image, so how can LLMs output intentions like "A woman looking up"? Where does the "looking up" come from? - Line 173, $z^{gud}_T$ should be $z^{gen}_T$?
or should the $z^{gen}_T$ on the left in Fig. 2 be $z^{gud}_T$? Maybe the notations in Fig. 2 should be modified. - Fig. 4, LucidDrag needs additional generated prompts as inputs. You should also demonstrate these prompts to help us understand the editing process. BTW, due to sampling generated intentions based on the confidence probabilities (Line 169), the generated results of LucidDrag should be different. Maybe showing different results can also help us to understand. Technical Quality: 2 Clarity: 3 Questions for Authors: Refer to the weakness. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['Ethics review needed: Deception and harassment'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. We are encouraged by your recognition of the value of our work and your acknowledgment that our experiments sufficiently support our claims. Below is our response addressing your concerns. > *W1: It's a good idea to introduce additional prompts to help the editing process. However, why use an Intention Reasoner to "guess" multiple intentions and then sample between them? Maybe a better way is for the users to provide their true intention, with LLMs just used to normalize the prompts?* Thank you for your suggestion. While allowing users to directly provide their true intent and using LLMs to normalize prompts is indeed feasible, especially when users can clearly articulate their needs, the intention reasoner offers several significant advantages: (1) **Handling vague requests.** Users' requests often lack specificity, such as "Drag the horse's head to the top right" in Fig. 1. The intention reasoner can analyze context data to infer multiple potential intentions ("long necks" or "heads up" or "closer"), allowing users to select the most appropriate one for execution. (2) **Expressing complex needs.** Articulating complex manipulations involving multiple points or objects can be challenging for users. The intention reasoner can generate precise descriptions automatically, ensuring accurate and consistent adjustments, thereby simplifying the user's task. (3) **Discovering potential needs.** As shown in Fig. 1, the intention reasoner can present multiple possibilities, enabling users to discover and explore various editing choices they might find beneficial, thus enhancing the overall experience. (4) **Reducing cognitive load.** Many users, particularly beginners, may struggle to provide their intentions precisely. The intention reasoner can infer the potential intentions, alleviating the need for detailed instructions and significantly improving operational efficiency.
Additionally, by leveraging the reasoning ability of LLMs, we can generate a variety of reasonable intentions and produce high-quality results. Generating diverse results is a challenging task [r7]. Our method can be used to construct an image editing dataset with diverse editing outcomes, which may inspire future work. [r7] Corso G, et al. Particle guidance: Non-IID diverse sampling with diffusion models. In ICLR, 2024. > *Confusion1: How do you get reasonable intentions only taking the generated description of the object of interest 'O', the original image caption 'C', and drag points 'P' as input?* The intention reasoner can produce reasonable results because large models possess in-context learning, spatial understanding, and reasoning abilities [r8]. Additionally, previous work has demonstrated that LLMs can generate reliable results through these abilities [r9]. For the case in Fig. 2, we first combine the inputs with detailed task descriptions and in-context examples to ensure the rationality of the output. Then, given the position of the source point and target point, the LLM can comprehend the dragging direction. Finally, combined with the region of interest ("the nose of a woman") and the whole image description ("a woman"), the LLM can deduce a reasonable intention, e.g. "looking up". This result is logical because dragging the face to the upper left may be an attempt to make the person look up. The LLM can easily accomplish this kind of logical reasoning [r8, r9]. [r8] Zhang Y, et al. LLM as a mastermind: A survey of strategic reasoning with large language models. arXiv, 2024. [r9] Lian L, et al. LLM-grounded Video Diffusion Models. In ICLR, 2024. > *Confusion2: Notations of $Z_T^{gud}$ and $Z_T^{gen}$ in Fig. 2 and Line 173.* Thank you for highlighting the difficulty in interpreting the notations of $Z_T^{gud}$ and $Z_T^{gen}$. To clarify the pipeline, the input image is inverted to $Z_T^{gud}$ and the generation process starts with $Z_T^{gen}$.
In terms of value, we initialize $Z_T^{gen}$ with $Z_T^{gud}$, i.e., $Z_T^{gen} = Z_T^{gud}$. We will improve the clarity of these notations in the updated version. > *Confusion3: Should demonstrate generated prompts in Fig. 4. Suggest to show different results of various generated intentions.* (1) We present these prompts in Fig. 13 in the Rebuttal PDF. We will incorporate the corresponding prompts in the updated version of our paper. (2) In Fig. 3 of the main paper, we present the results of various intentions **with different editing targets.** We supplement the results of various intentions **with the same editing target** in Fig. 13 in the Rebuttal PDF file. The results show that various text intentions generated by the intention reasoner with the same editing target will result in similar results and have some differences in details. --- Rebuttal 2: Title: Looking forward to the response from Reviewer 7sLK Comment: Dear Reviewer 7sLK, We have tried our best to address all the concerns and provided as much evidence as possible. May we know if our rebuttals answer all your questions? We truly appreciate it. Best regards, Author #9041 --- Rebuttal Comment 2.1: Title: Looking forward to the response from Reviewer 7sLK Comment: Thank you again for reviewing our manuscript. We have tried our best to address your questions (see our rebuttal in the top-level comment and above). Please kindly let us know if you have any follow-up questions or areas needing further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that could be helpful.
Summary: The paper introduces a novel framework for semantic-aware drag-based image editing. Specifically, to address the limitations in understanding semantic intentions and generating high-quality edited images, it utilizes an intention reasoner to deduce potential editing intentions and a collaborative guidance sampling mechanism that integrates semantic guidance and quality guidance. Experimental results validate the effectiveness of the proposed method in producing semantically coherent and diverse image editing outcomes. Strengths: 1. The paper is well-organized and easy to follow. 2. The proposed "what-then-how" paradigm for drag-based editing is novel and sound. 3. The proposed collaborative guidance is also interesting and has been shown to be effective. 4. The proposed method is shown to outperform the existing methods on various editing tasks. Weaknesses: 1. It seems that the used LVLM and LLM models are not finetuned, and I am wondering how different models perform on the task. Are the confidence probabilities reliable or meaningful without any fine-tuning? 2. The efficiency of different methods should be provided for comparison. 3. The limitations are recommended to be added to the main paper. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weakness part. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are discussed in the supplementary material. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. We are encouraged that you find our "what-then-how" paradigm and collaborative guidance mechanism to be novel and effective. Below are our responses to your concerns. > *W1: How do different LVLMs and LLMs perform on the task? Are the confidence probabilities reliable or meaningful without any fine-tuning?* (1) **We conduct experiments to examine the performance of different LVLMs and LLMs in the Intention Reasoner module.** Specifically, we utilize Osprey [r1] and Ferret [r2] for the LVLM and Vicuna [r3], LLama3 [r4], and GPT 3.5 [r5] for the LLM. We test various combinations, with Osprey+GPT3.5 being the default setting in our paper. As shown in Table r1, all combinations outperform the experiment without the Intention Reasoner, confirming its reliability without fine-tuning. This reliability stems from two factors: the LVLMs are trained with large-scale point-level labeled data and can easily achieve point-level understanding [r1]. Therefore, they can understand the user-given points without further fine-tuning. For the LLMs, state-of-the-art LLMs have been proven to possess strong spatial reasoning abilities [r6], enabling them to deduce reasonable intentions without fine-tuning. (2) **The confidence probabilities are reliable and meaningful without fine-tuning. We present a qualitative analysis in Fig. 12 in the Rebuttal PDF.** As discussed above, the LLM's powerful reasoning capabilities guarantee its reliability without fine-tuning. The confidence probability reflects the quality of the output text of the LLM. As shown in Fig. 12, a higher confidence probability indicates that the intention of the output is more reasonable, leading to better editing results.
Table r1: Results with different LVLMs and LLMs.

| | w/o Intention Reasoner | Ferret+Vicuna | Ferret+LLama3 | Ferret+GPT3.5 | Osprey+Vicuna | Osprey+LLama3 | Osprey+GPT3.5 (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mean Distance (↓) | 23.66 | 22.49 | 21.96 | 20.65 | 20.84 | 20.48 | **20.46** |
| GScore (↑) | 6.76 | 7.12 | 7.11 | 7.35 | 7.27 | 7.13 | **7.37** |

[r1] Yuan Y, et al. Osprey: Pixel understanding with visual instruction tuning. In CVPR, 2024. [r2] You H, et al. Ferret: Refer and ground anything anywhere at any granularity. arXiv preprint arXiv:2310.07704, 2023. [r3] Chiang WL, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. In URL: https://vicuna.lmsys.org, 2023. [r4] Touvron H, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [r5] OpenAI. Chatgpt. In URL: https://openai.com/blog/chatgpt, 2022. [r6] Gurnee W, et al. Language models represent space and time. In ICLR, 2024.

> *W2: The efficiency of different methods.* We present the efficiency of different methods in Table r2. Our method has a relatively small inference time and comparable memory requirements.

Table r2: Efficiency of different methods.

| | DragDiffusion | FreeDrag | DragonDiffusion | DiffEditor | Ours |
| --- | --- | --- | --- | --- | --- |
| Time (s) ↓ | 80 | 92 | 30 | 35 | 48 |
| Memory (GB) ↓ | 12.8 | 13.1 | 15.7 | 15.7 | 15.8 |

> *W3: The limitations are recommended to be added to the main paper.* Thanks for your valuable suggestion. We will move the limitations section from the supplementary materials to the main paper in the updated version. --- Rebuttal Comment 1.1: Comment: Thank you for addressing the initial concerns raised. I will keep my score unchanged.
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and insightful comments, which have helped improve our paper. We are encouraged that the reviewers recognized the importance of addressing the ill-posedness of dragging-based image editing, which we aim to solve (Reviewers 17V7, 6UnY). We appreciate their positive feedback on the novelty of our 'what-then-how' paradigm (Reviewers ND9N, 6UnY), the effectiveness of our collaborative guidance (Reviewers ND9N, 7sLK, 17V7, 6UnY), and the sufficiency of our experiments in supporting our claims (Reviewers ND9N, 7sLK, 6UnY). We have provided detailed answers to each question below. We hope that our response addresses these concerns. Pdf: /pdf/087f632ba1257231be1f3cabd37bc173515e973a.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Operator World Models for Reinforcement Learning
Accept (poster)
Summary: This paper extends policy mirror descent to the RL setting. They solve the problem of requiring an exact value function by formulating an approximate value function based on operators over the transition and reward functions, which yields a closed-form solution. Then they use this approximate value function for policy improvement. Lastly, they provide theoretical analysis on convergence guarantees and error bounds while explicitly laying out assumptions. They also perform experiments on toy RL problems and show that their algorithm outperforms typical RL baselines on these low-dimensional problems. Strengths: 1) This paper's presentation is great. How this work relates to prior work is clearly discussed. Key concepts from this paper are well explained and well motivated. 2) Theoretical analysis is sound and shows promising results. 3) Comparison to RL baselines, even on toy problems, is much appreciated. Results are encouraging. 4) Discussions of limitations and assumptions are detailed. 5) Key results of this paper, i.e., value functions with closed-form solutions via operators, are novel and significant. Weaknesses: 1) I would like to see experiments on more complicated MDPs, with large state-action spaces, to test the scaling properties. I recognize that this paper already makes large contributions, so this is not necessary for now, but it is a great future direction. 2) I would like to see extensions to continuous action spaces. Again not necessary for now, but it seems like it should be possible given the formulation with additional approximations. Minor: Typo on line 335 Technical Quality: 4 Clarity: 4 Questions for Authors: How well can this algorithm scale to higher-dimensional settings? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Authors have adequately addressed limitations and assumptions. I see no potential negative societal impacts as a result of this work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. - __Experiments on more complicated MDPs__: we agree with the reviewer and we are very excited to test our methods on more complicated environments. As the reviewer pointed out, our main focus in this work was to prove the novel theoretical contributions of Prop. 3, Cor. 7, and Thm. 9. Our experiments in section 4 are meant as empirical support to our theoretical findings. As we discussed in the conclusions of this work, competitive evaluation of POWR on richer MDPs requires additional research on: (i) scaling the method to high-dimensional settings (see also our additional comments below), and (ii) combining POWR with deep representation learning schemes to learn the operator-world model beyond settings suited for Sobolev representations (e.g. images). - __POWR in Continuous Action Spaces__: that is a very interesting question. We refer to our global reply here on OpenReview since this was a topic of interest also for other reviewers. - __Scaling to high-dimensional settings__: breaking down this question into the computational and statistical perspectives: - _Statistics_: from the statistical perspective (namely Thm. 9), the method is not (directly) affected by the dimensionality of the problem. The learning rates in Thm. 9 mainly depend on the norms of the operator world model and reward function as elements of the feature space where learning is carried out (cf. the final bound reported in Thm. A.11 in the appendix). These norms can be interpreted as capturing “how well suited” the chosen feature space is to represent the operator world model. To (partially) answer the reviewer’s question, in the case of the Sobolev feature spaces used in this work, there exists a relation between the dimensionality of the ambient space (namely the state space) and the norm of a function in terms of its smoothness (see [28,29]). 
In other words - and very loosely speaking - the larger the space dimension, the smoother the target function needs to be to maintain a small norm. This means that if the reward function or the transition operator is not very smooth, we might incur large constants in Thm. 9, hence slower rates. Choosing the hypothesis space is key to fast convergence, and we have discussed how to extend POWR with other hypothesis spaces than Sobolev spaces in our conclusion and future work. This behavior is well-understood, albeit rather technical, in the kernel learning literature for traditional Sobolev spaces (for instance, see Sec. 3 and Ex. 3 in [B], or combine Chapter 5 in [28] with the results on interpolation spaces from [C] or [D]; references provided below). It is less clear how this interpretation will extend to less traditional feature spaces. We thank the reviewer for the question. We will add the discussion above as a remark following Thm. 9. - _Computations_: by leveraging the kernel trick argument from Prop. 3, the dimensionality of the ambient space or the feature space does not come into play. In contrast, computations are affected by the number of samples observed (as is the case with most kernel methods). However, as observed in the recent kernel literature, this issue can be easily mitigated by adopting random projection approaches such as Nystrom sampling [25,26], significantly reducing computations without compromising model performance. - __Typos__: Thanks for pointing out typos. __Additional references__ [B] Cucker and Smale. "On the mathematical foundations of learning." Bulletin of the American Mathematical Society, 2002. [C] Smale and Zhou. "Learning theory estimates via integral operators and their approximations." Constructive Approximation, 2007. [D] Caponnetto and De Vito. "Optimal rates for the regularized least-squares algorithm." Foundations of Computational Mathematics, 2007. --- Rebuttal Comment 1.1: Comment: Thank you for your comments.
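The Nyström-style reduction mentioned in the rebuttal above can be sketched generically. The following is a standard illustration with a Gaussian kernel on hypothetical random data, not the authors' POWR implementation:

```python
import numpy as np

def nystrom_features(X, landmarks, gamma=1.0):
    """Approximate Gaussian-kernel features via Nystrom sampling.

    Returns Phi such that Phi @ Phi.T approximates K(X, X) using
    m << n landmark points, so downstream linear algebra costs
    O(n m^2) instead of O(n^3)."""
    def k(A, B):
        # Gaussian kernel matrix from pairwise squared distances
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    K_mm = k(landmarks, landmarks)        # small m x m kernel
    K_nm = k(X, landmarks)                # cross kernel, n x m
    w, V = np.linalg.eigh(K_mm)           # eigendecomposition of landmark kernel
    w = np.clip(w, 1e-10, None)           # guard against tiny negative eigenvalues
    # Phi = K_nm V diag(w)^{-1/2}, so Phi Phi^T = K_nm K_mm^{-1} K_mn
    return K_nm @ V / np.sqrt(w)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
landmarks = X[rng.choice(500, 50, replace=False)]
Phi = nystrom_features(X, landmarks)
K_approx = Phi @ Phi.T
```

With m landmarks, subsequent solves scale in m rather than the full sample count n, which is the computational saving the rebuttal alludes to.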
Summary: This paper proposes an approach (the first PMD approach adapted to the RL setting) that can learn a world model using conditional mean embeddings (CME). The operatorial formulation of RL is used to express the action-value function in closed form via matrix operations. The proposed algorithm is proved to converge to the global optimum, and validated by simple/preliminary experiments. Strengths: To the best of my knowledge, this work is the first one to apply PMD to the RL setting with a provable global convergence guarantee. The references and previous work are cited and well-discussed. The theoretical contribution of this work is sound and solid. The proposed method is validated to be more sample efficient than the current state-of-the-art RL algorithms like PPO, DQN, and TRPO. Weaknesses: Line 88: "Borrowing from the convex optimization literature", please give the exact reference. Line 97: at the end, lack of a "space". From Line 91 to the end of the manuscript, it seems like the hyper-links of references do not work. They lead me to incorrect places, and make it hard to follow the idea. The experimental validation is limited to simple/toy environments in OpenAI Gym. In the caption of Figure 1, I would change "dark lines" to "solid line". The authors should also mention the reward threshold is the "success" reward in the caption, not just in the main text. It is probably better to add the label of the dashed line in the plot. Since Figure 1 is in log scale, the authors should show the learning curves starting from 0 or 10^1, so that we can observe the full learning process. Technical Quality: 2 Clarity: 3 Questions for Authors: In Figure 1, it is a bit surprising that the variance of the proposed method in (a) and (b) is 0 (or almost 0). I know that general RL algorithms will still have some fluctuation. Maybe the authors can give some intuition as to why the proposed method achieves such small variance? In Figure 1(c), why does the proposed method stop at 10^4 steps? 
I would expect to see the full learning curve; otherwise it is not convincing that the algorithm is convergent. Does POWR work in continuous action spaces? Even for simple tasks in Gym such as Pendulum, MountainCarContinuous, LunarLanderContinuous, etc. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The major limitation of this work is the empirical validation, where the authors only consider three toy/simple environments in OpenAI Gym. Also, the experiments are limited to discrete action spaces. It would be better to see if the proposed algorithm works in continuous action spaces, e.g., in robotics. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and kind words. - __Line 88 - references__: the requested references are provided in the same sentence referred to by the reviewer, line 90. They are references [19,20] in the paper, namely: [19] Beck and Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 2003. [20] Sébastien Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 2015. - __X-axis Range for Plots__: In Fig. 1, we reported the average reward starting from 10^3 because, for most environments, the region $[0, 10^3]$ did not provide much information. However, we agree with the reviewer that the full plot is useful to have a complete picture. Please refer to the pdf attached to our global post here on OpenReview. - __Variance in Figure 1 (a) and (b)__: Both FrozenLake and Taxi are simple deterministic tabular environments. Once POWR has collected enough evidence about the reward function and the transition operator, there are no fluctuations anymore: the mean reward is constant, and the variance converges to zero. This is precisely the advantage of learning an operator-based world model rather than adopting an implicit one from which we can only sample possible trajectories. We also point out that the behavior is different for MountainCar, since the environment has infinitely many states, and POWR still incurs errors in the approximation of the transition operator. - __Fig. 1 (c) stops at ~10k steps__: Thanks for pointing this out. Indeed, in the original figure the convergence of POWR for MountainCar was not fully evident from the plot. We have re-run the experiment for a larger number of epochs. See the pdf attached to our global post here on OpenReview. Given the short rebuttal period, we ran POWR collecting over 40K samples. 
The new results show that POWR retains its performance after reaching the “success” threshold, suggesting that it achieved empirical convergence. - __POWR in Continuous Action Spaces__: that is a very interesting question. We refer to our global reply here on OpenReview since this was a topic of interest also for other reviewers. - __Other feedback__: We thank the reviewer for pointing out typos and other issues/suggestions. We will amend the paper accordingly. (Concerning the broken hyperlinks, this was due to the way we separated the paper from the appendices/supplementary material. Hyperlinks work as intended in the full paper pdf provided in the supplementary material uploaded with the original submission.) --- Rebuttal Comment 1.1: Title: Rebuttal Comment: Thank you for providing the updated Figure 1. It indeed looks more reasonable. But is there a reason not to run 10^6 timesteps for MountainCar? I understand that the rebuttal has limited time, but MountainCar is a very easy problem, which should not take that long. Will the authors plot 10^6 timesteps in the final version of the paper? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the prompt reply. Yes, we will align with the other baselines in the final version of the paper, running experiments up to 10^6 steps, as stated in the attached PDF. Given the theoretical nature of this work and the limited time available, we have not thoroughly optimized our implementation of POWR, which resulted in longer runtimes. While there is significant room for optimization—and we are excited to advance POWR further—we would like to highlight that the main focus of this work was to prove the novel theoretical contributions of Proposition 3, Corollary 7, and Theorem 9. Our experiments demonstrate POWR's statistical efficiency, achieving success at least an order of magnitude faster than competitors.
Summary: (This review has been updated after revert of desk reject) Strengths: (updated after revert of desk reject) It's a solid theoretical paper with nice writing and experimental results on continuous control benchmarks. Weaknesses: (updated after revert of desk reject) The experimental results and more interpretation/justification of the results are limited. It would be stronger if the authors could further connect the experiments with the theoretical results. Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We kindly point out to the reviewer that the decision on desk rejection was reverted by the Program Chairs (the initial desk rejection had been due to the automated checker's failure to detect the NeurIPS checklist in the supplementary material; this happened to several papers this year). Should the reviewer have any questions, we’d be glad to answer them. --- Rebuttal Comment 1.1: Comment: Thanks authors for the reminder. I updated the review and it's short, but I'm holding (weakly) a positive opinion. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their feedback. We appreciate the reviewer’s suggestion to enhance the connection between our experimental results and the theoretical analysis. In the experimental section (lines 367/368), we have already mentioned that our method, POWR, converges to a global optimum (as demonstrated in Theorems 7 and 9), and exhibits smaller sample complexity compared to other baselines, which aligns with the expected behavior of world models. Following the reviewer’s suggestion, we will make these connections clearer. Additionally, we would like to emphasize that the main contributions of our paper are theoretical. In particular, many highly influential works in this area such as [10, 11, 12] do not include an experimental section at all. Nonetheless, we included experiments to provide additional support for our theoretical claims. While the experimental section serves as a supplementary validation, our primary focus remains on the theoretical advancements presented in the paper.
Summary: The paper presents a practical implementation of Policy Mirror Descent (PMD) for Reinforcement Learning (RL). PMD requires knowledge of the action value function of the current policy at each iteration. Existing methods that approximate the action-value function depend on the ability to restart the Markov Decision Process (MDP) from each state multiple times. Instead, this paper proposes learning estimators of the reward function and the transition model and combining them to approximate the action-value function. To facilitate this, an RL formulation using operators is introduced, enabling the use of conditional mean embeddings for approximation. The authors study the convergence of the proposed algorithm to the global optimum and compare it with other RL methods on classic Gym environments. Strengths: - The paper is well-written, and the authors thoroughly introduce their method in relation to existing techniques. Weaknesses: - Considering the connection between the proposed method and world models, the authors should discuss world models more thoroughly [1-3]. [1] Schmidhuber, J. Making the world differentiable: On using supervised learning recurrent neural networks for dynamic reinforcement learning and planning in non-stationary environments. (1990) [2] Schmidhuber, J. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. (1990) [3] Ha, D., and Schmidhuber, J. World Models (2018) Technical Quality: 3 Clarity: 3 Questions for Authors: See above. The paper is rather technical, and several sections are beyond my expertise. I am looking forward to discussing it further with other reviewers to potentially increase the score for this submission. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review. In particular, we thank the reviewer for the additional references. We will add them to the paper with the following expanded discussion in the introductory section of our paper, on line 42 (we denoted with R1,R2,R3 the references suggested by the reviewer): _The notion of world models for RL has been introduced by Ha and Schmidhuber in [R3] (building on ideas from [R1,R2]), where RNNs are used to learn the transition probability of the MDP. Traditional world model methods such as those proposed in [R3,16] emphasize learning an implicit model of the environment in the form of a simulator. The simulator can be sampled directly in the latent representation space, which is usually of moderate dimension, resulting in a compressed and high-throughput model of the environment._ --- Rebuttal Comment 1.1: Title: Official response Comment: Thank you for your response. Please note that the claim "The notion of world models for RL was introduced by Ha and Schmidhuber in [R3]" is incorrect. World models in RL were already introduced in Reference R1 (1990) and in Dyna ([4], 1991). I trust that the authors will accurately discuss the background of world models. Having read the other reviews and the authors' responses, I am happy to increase the score for this submission. [4] Sutton, R., Dyna, an integrated architecture for learning, planning, and reacting. (1991) --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the feedback and for raising their original score. We will update the introduction to accurately reflect the prior work on world models in RL, as the reviewer suggested.
Rebuttal 1: Rebuttal: We thank all reviewers for their feedback on our work. In this global reply, we share our insights on the extension of our results to continuous action spaces, a question shared among reviewers. In addition, in the attached pdf we report Figure 1 updated according to the suggestions of reviewer __41Mf__. Extending POWR to action spaces of infinite cardinality could in principle be naively tackled by approximating the normalization integral in the denominator of eq. (13) via sampling methods such as Monte Carlo (MC) sampling. This approach, however, poses two questions: - _Computational_: the integral approximation is conditioned on the input state, implying that whenever we need to evaluate the policy on a new state, a new approximation is required. Unrolling this computation backward through all PMD iterations is potentially computationally expensive. - _Approximation_: it is not clear whether this is a good approximation of the ideal solution to the PMD step in equation (2) and how it will impact the convergence rates proved in the theoretical section. This makes the naive approach to extending POWR via MC sampling potentially limited. An alternative promising approach is to approximate the solution to the PMD step in equation (2) via an iterative optimization strategy. This would still introduce approximation errors, but they could be controlled via the optimization rates for mirror descent. In particular, the recent work in [A] [A] _Aubin-Frankowski, Korba, and Léger. "Mirror descent with relative smoothness in measure spaces, with application to sinkhorn and em." Advances in Neural Information Processing Systems 35 (2022): 17263-17275._ shows that it is possible to cast and perform mirror descent on general probability spaces (a question that was still open) while maintaining convergence rates analogous to the finite setting. Based on our preliminary investigation, adapting the analysis in [A] to Policy Mirror Descent is feasible. 
However, the open questions in this sense are: (i) how to concretely perform the optimization of eq. (2) in practice, and (ii) whether this additional iterative procedure would compromise the computational efficiency of POWR’s pipeline. We hope that these comments provide some additional context to the question posed by the reviewers, and explain why continuous action spaces ended up not being addressed in the paper at this time. Pdf: /pdf/adc9753d52eab89fea40a36e0737b766ac1dabcf.pdf
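To make the naive MC option above concrete, here is a minimal sketch of estimating the state-conditional normalization integral for a softmax-style policy over a one-dimensional action interval. The quadratic critic Q and the step size eta are illustrative placeholders, not the quantities in the paper's eq. (13):

```python
import numpy as np

def mc_policy_density(state, actions, Q, eta, rng, n_samples=4096, lo=-1.0, hi=1.0):
    """Evaluate a policy proportional to exp(eta * Q(state, a)) at `actions`,
    normalized by a Monte Carlo estimate of
        Z(state) = integral over [lo, hi] of exp(eta * Q(state, a)) da.
    A fresh estimate of Z is needed for every new state, which is the
    computational concern raised in the first bullet above."""
    a_mc = rng.uniform(lo, hi, size=n_samples)
    # MC estimate of the integral: interval length times mean of the integrand
    Z = (hi - lo) * np.mean(np.exp(eta * Q(state, a_mc)))
    return np.exp(eta * Q(state, actions)) / Z

# toy quadratic critic, purely for illustration
Q = lambda s, a: -(a - 0.3 * s) ** 2
rng = np.random.default_rng(0)
grid = np.linspace(-1.0, 1.0, 2001)
density = mc_policy_density(0.5, grid, Q, eta=2.0, rng=rng)
```

The returned values integrate to one over the action interval only up to the MC error in Z, which is exactly the approximation question raised in the second bullet.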
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Prior-itizing Privacy: A Bayesian Approach to Setting the Privacy Budget in Differential Privacy
Accept (poster)
Summary: For data owners, determining the appropriate privacy level based on the standard interpretation of differential privacy—which limits the likelihood of distinguishability from an adversary's perspective—can be challenging. This paper addresses operational interpretations of (pure) differential privacy by examining the worst-case multiplicative and additive gap between the prior and posterior probabilities of distinguishability when observing the DP mechanism's output. These gaps are defined in terms of relative and absolute disclosure risk. The paper breaks down disclosure risk into two components: a) identifying the presence of an individual's data, and b) identifying certain sensitive features of an individual whose data is present. By contextualizing these two types of disclosure risks, the paper suggests that data owners should create prior-to-posterior risk profiles that reflect their acceptable levels of privacy risk. These risk profiles can then be translated into a pure DP budget by solving an optimization problem or, for a single risk profile, using the provided closed-form solution. The paper includes examples of reasonable risk profiles for specific data-disclosure scenarios and outlines the DP budget required under different prior probabilities of distinguishability by an adversary. Strengths: - The paper addresses a crucial issue in differential privacy: determining the appropriate DP budget. Examining the gap between prior and posterior probabilities of distinguishability from an adversary's perspective, before and after observing the mechanism's output, is a promising approach. - Decomposing disclosure risk into the risk of detecting inclusion and the risk of inferring sensitive attributes is valuable for understanding the impact of various DP budgets. This approach underscores that DP can mitigate multiple types of privacy risks, albeit with varying levels of effectiveness. 
- The proposed framework provides flexibility in modeling an acceptable disclosure risk profile and includes a closed-form expression for deriving a compliant DP budget from it. Weaknesses: - The paper suggests shifting the focus from fixing the budget constant $\epsilon$ to fixing a disclosure risk profile (both absolute and relative), which depends on prior risk probabilities $p_i$ and $q_i$. However, this approach complicates the decision-making process for data owners. They either need to define the disclosure risk behavior for an exhaustive range of prior probabilities $p_i, q_i \in [0,1]$, despite the actual population following specific (but unknown) $p_i$ and $q_i$ values, or they must estimate a reasonable range of these prior probabilities for the population (without using their own data) before determining the disclosure risk behavior within that range. - Decomposing privacy risk into multiple parts, as done in the paper, has certain flaws that have not been considered. This decomposition requires fixing a subgroup $\mathcal{S}$ that is specific to a violation the data curator wants to effectively prevent. - Firstly, deriving a DP budget based on one (or a few) choices of such subsets $\mathcal{S}$ is not secure. This is because for every such choice, it is possible to create a pathological mechanism $T^*$ that has a relative disclosure risk of 1 but is $\infty$-DP. Such a mechanism $T^*$ would ensure that the chosen sub-group captured by the subset $\mathcal{S}$ has no influence on the output, but subgroups that haven't been chosen are always revealed. - Secondly, for $d$ (binary) features, there are $2^d$ choices of $\mathcal{S}$, and revealing any of them could be considered a privacy violation. It is impractical for a data curator to tailor the privacy risk profile for each of these potential privacy violations to derive the overall DP budget $\epsilon$. 
- Finally, the gradation expressed by first modeling the inclusion bit $I_i \in \{0,1\}$ and then the sub-group disclosure bit $Y_i \in \mathcal{S}$ has not been explored thoroughly. By following this chain of thought, it should be possible to define a sequence of increasing specificity, going from the inclusion of a record, to the record being in a sub-population, to the record further being in a sub-sub-population, and so on. The way the $\epsilon$-DP budget affects the prior-posterior gap as we zoom in on a specific record could be crucial and help in deciding the appropriate budget to set. - It is not clear how the risk profiles considered in Examples 1 and 2 are reasonable and how one can derive such risk profiles for complex situations. The recommendations on choosing a risk profile in lines 229-239 highlight that setting a risk profile can be tricky. A methodology to help set the DP budget should be much more straightforward, which is not evident in these cases. - The framework is proposed only for $\epsilon$-DP. In practice, $(\epsilon, \delta)$-DP is the more common notion. The paper does not discuss whether and how the framework can be extended to set the $(\epsilon, \delta)$-DP budget. ### Minor Points: - The assumptions made in the paper, although standard, are incorrectly motivated: - Assumption 1, that the output distribution modeled by the adversary using $\mathcal{M}$ matches the actual output distribution of the mechanism, is not to rule out edge-case adversaries exploiting floating point attacks or ignoring the underlying mechanism. It is standard to make Assumption 1 because, in privacy, a data curator needs to protect against the worst-case adversary who knows exactly how the mechanism $T^*$ works, even though a reasonable real-world adversary might not. The rationale is that if the privacy arguments work for such a strong adversary, they will also work for a weaker one. 
- Assumption 2, that $Y_{-i}$ is independent of $Y_i$, is also standard, but not because assuming otherwise would make the adversary considerably stronger. If the data points are independent, but the adversary models them as dependent, the adversary would perform poorly due to the mismatched assumption. It is standard to make this assumption because differential privacy is designed to capture the effect that a single atomic unit of data has on the output distribution. If we assume that $Y_{-i}$ is dependent on $Y_i$, a single atomic unit would be the entire input dataset. - Lemma 1 is obvious: $\epsilon$-DP with respect to the add or remove operation trivially gives $2\epsilon$-DP with respect to the replacement operation because replacement can be simulated as an addition followed by a removal. Technical Quality: 2 Clarity: 2 Questions for Authors: Following are some questions that would help me understand the presented examples better: - In Figure 2, deviation of what quantity is being compared? - When does it make sense to set the relative disclosure risk to $\infty$ as done in equation (16) and (17)? - In Figure 1, what are the values of $\epsilon_i$ when $q_i \leq 0.1$ where $p_i = 0.05$? The plots in this range seem to be truncated. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The paper has several limitations regarding applicability of the techniques which haven't been discussed. I suggest the authors to shed some light on the following limitations: - How to choose the disclosure subsets $\mathcal{S}$ without knowing what a potential adversary might be interested in inferring? - How to decide the appropriate $\epsilon$ from the disclosure risk profile as plotted in Figure 1? For certain choices of $p_i, q_i$, the values of $\epsilon$ could be excessively lax. 
On the other hand, if we choose the smallest $\epsilon$ pessimistically, the value might be very close (or identical) to the trivial solution in equation (7). - What are the guidelines that may help a data owner in deciding an appropriate disclosure risk-profile that caters to their specific setting? - What is a fail-safe disclosure-risk profile that should be included with the other risk profiles to ensure that blatant non-privacy never happens? - Can the framework be extended to $(\epsilon, \delta)$-DP? Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
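For reference, the replacement argument the review alludes to can be written out in one line. This is standard group-privacy reasoning in generic $\epsilon$-DP notation, not necessarily the paper's exact statement of Lemma 1:

```latex
% D and D' differ by replacing one record; let D'' be D with that record removed,
% so D -> D'' is a removal and D'' -> D' is an addition. For any measurable event A:
\Pr[T^*(D) \in A] \;\le\; e^{\epsilon}\,\Pr[T^*(D'') \in A]
                  \;\le\; e^{2\epsilon}\,\Pr[T^*(D') \in A],
% hence epsilon-DP under add/remove implies 2*epsilon-DP under replacement.
```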
Rebuttal 1: Rebuttal: Thank you for the thoughtful comments. To frame our response, we begin with two general points. First, our method is intended for the standard definition of DP, and we presume the agency will apply a DP algorithm with the selected epsilon. Thus, the DP guarantee associated with the selected epsilon applies to all subgroups, not only the group that satisfies the particular $\mathcal{S}$. Second, our work is an initial paper on determining an appropriate privacy budget for a given data release; there is more to be done. Based on our review of the literature, practitioners often have difficulty understanding how to set an epsilon that provides a satisfactory balance of risk and utility (see line 20 of the text). Yet decision-makers in statistical agencies have decades of experience interpreting disclosure risk measures when establishing (legacy) confidentiality protection methods. By linking formal privacy guarantees to these concepts, we intend to provide decision-makers familiar with statistical risk measures tools to assess risk-utility trade-offs of different epsilon. Weaknesses - We agree that implementation details may be complex, as can be the case for engineering DP solutions in general. Whether working with risk profiles is less complicated for decision-makers than reasoning about the risk-utility trade-offs for different epsilon using some other ad hoc heuristic is a matter of opinion, which may differ by decision-maker or problem setting. Our intention is to provide a framework for additional paths to setting epsilon that can be developed further to accommodate specific implementations. - The privacy guarantee comes from using a DP algorithm with the selected epsilon. The $\mathcal{S}$ (henceforth, S) is a tool for interpretation and does not affect the guarantee (other than through influencing the choice of epsilon), which applies for all individuals in the data. 
Of course, tuning epsilon based on a specific S and utility evaluations may result in a looser or tighter guarantee than if the agency were to consider other S or utility evaluations. This seems inevitable with any method for selecting epsilon that considers the risk-utility trade-off. For example, the Census Bureau relied on specific reconstruction attack success rates and utility metrics to decide the ultimate privacy budget (and algorithm itself) for the 2020 census products. The final product may have changed with other disclosure risk or utility evaluations. Nonetheless, the reviewer’s point suggests that the agency should include in S any secrets it deems critical to protect. This need not be every possible S—as the reviewer notes, this is impractical for any method that seeks to manage the risk and utility trade-off—but it could involve evaluations of multiple risk profiles. The question then, which we did not consider in this initial paper, is how the agency should select among them if they present different recommendations. We thank the reviewer for raising this issue and will revise the paper to point to it as future research. - We agree that some agencies’ risk profiles may be more complex than those in the text. Our goal is to lay out a general framework, with the expectation that researchers and practitioners could tailor implementations for their particular settings; we leave these developments to future work. That said, the literature indicates that practitioners presently do not understand how to set epsilon to ensure desirable risk-utility trade-offs. Given the lack of standards, even simple risk profiles can be helpful for managing and understanding the trade-offs. - See our response to reviewer HWnJ, where we discuss the extension to approximate DP. Minor Points - Thank you for this perspective on our assumptions. We are updating their presentation accordingly. 
Questions

- In Fig 2, to avoid privacy leakage we consider hypothetical deviations of 0.25, 0.5, 1, and 2 from the target of 6.0. See the text beginning at line 268. To make this clearer, we are adding a note to the caption.
- The aim in selecting risk profiles of this form is to simplify the analysis. In Ex 1, the agency focuses on a class of adversaries who already know the target’s demographic information; such adversaries have $q_i = 1$. Operationally, the bound on the relative risk is set to $\infty$ for $q_i \neq 1$ in (16) to represent that these adversaries are outside the analysis. Ex 2 and (17) employ similar logic with a different class of adversaries. We do not suggest setting $\epsilon = \infty$. We are updating the text to clarify this point.
- In Fig 1, $\epsilon_i$ is large for small $q_i$, e.g., $\epsilon_i = 5.4$ when $q_i = 0.01$. We truncate values >3 for readability. We are updating the caption to clarify.

Limitations

- The agency can decide which S it considers particularly sensitive; e.g., it may consider whether an individual has a disease more sensitive than their age. When multiple S are of interest, the agency has a decision problem on its hands. As with designing DP solutions in general, the agency must prioritize some of these S over others, e.g., using decision-theoretic criteria. This is an important topic for future work. We thank the reviewer for pointing it out and are updating the text to highlight this topic.
- We take the minimum epsilon as the recommendation (see (13)). This could be a pessimistic choice, although, as discussed in our response to reviewer QVWW, there still can be large gains over the baseline recommendation.
- We envision agencies could determine risk profiles analogously to the solicitation of utility functions in decision theory, e.g., by considering a series of bets. Since our work is a first effort at defining a framework, we leave this development to future work and update the text accordingly.
- There is no risk of blatant non-privacy since the release satisfies DP with a finite epsilon.
- See our response to reviewer HWnJ, where we discuss the extension to approximate DP.

--- Rebuttal Comment 1.1: Comment: I appreciate the responses, and I agree that we need interpretations of DP that allow setting the parameters appropriately. > Yet decision-makers in statistical agencies have decades of experience interpreting disclosure risk measures when establishing (legacy) confidentiality protection methods. Decision-makers rely on academics for deciding on the risk measurement and appropriate budget to set, in my experience. There is rarely a clear consensus on risk measurements despite the decades of involvement in such decisions. While the framework proposed is a nascent attempt at solving a complex problem, the complexity of the approach in general is a considerable drawback. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read and respond to our rebuttal. We appreciate your comment that setting epsilon in this (or any) principled way can be complex. We imagine you might concur that the whole process of implementing formal privacy solutions in genuine contexts is a highly complex endeavor. That said, the literature on setting epsilon in DP in a formal manner is underdeveloped. Despite its importance, there are many settings where we are aware of no existing method for this task (prior to our work). We are optimistic that our approach offers a framework that can be further developed to provide options for addressing this gap in the literature. Thank you again for your detailed feedback throughout the review process. We appreciate the time you devoted to our work.
Summary: This paper proposes a Bayesian framework for determining the DP budget \eps. In particular, the authors develop a mathematical technique based on how much posterior risk the agencies are willing to accept given some prior risk, and the \eps obtained through their formulation is unique. Strengths: Although there has been much debate on the topic of how to determine the privacy budget and different flavors of this formulation have been proposed in prior works, I have not seen this formulation in the context of agencies deciding their \eps privacy budget. The authors also do a great job of differentiating their work from prior works that have proposed similar ideas. The ideas in this paper seem novel and useful for real-world scenarios. Weaknesses: This paper only discusses the setting of \eps in the context of pure DP. It does not discuss whether these techniques could be extended to weaker notions of DP such as approx DP. Since in most real-world applications agencies use approx DP notions, I feel this is an important discussion point that is currently missing. Technical Quality: 3 Clarity: 3 Questions for Authors: How do your methods extend to weaker notions of DP? (see comment in weaknesses) Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weaknesses comment. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for raising this point. A few works, such as Kasiviswanathan & Smith (2014) and Kifer et al. (2022), discuss Bayesian semantics of approximate DP, although the details differ from the semantic characterization we use in this work for pure DP. We imagine one could use these results as the basis for a similar framework for approximate DP, following our framework as a blueprint. We did some initial investigations of this and found the generalization not to be straightforward. Since our work is a first effort at formalizing the selection of $\varepsilon$ by linking it to statistical disclosure risk profiles, we opted to focus on the pure DP setting and leave this extension for future work. But we agree with the reviewer about the significance of this extension and are updating the text to highlight that it is an important open problem, and we suggest some possible first steps for that future research.
Summary: This paper proposes a novel method for selecting epsilon that comes with a natural adversarial interpretation. They characterize an adversary’s auxiliary knowledge by two prior probabilities: the prior belief that an individual participated in the dataset, and the prior belief that an individual’s record has some characteristic. Based on these two parameters, an institution can choose a “risk profile”, that maps an adversary’s prior knowledge to their allowable increased posterior knowledge after viewing the DP output. The authors propose a mapping from risk profile to maximum allowable epsilon. A major insight is that risk profiles which allow adversaries with weak priors to learn more than those with strong priors allow for larger epsilon values than a constant profile would allow (constant bound on posterior/prior ratio). The authors go on to provide instructive examples of risk profiles that different institutions may want, and the resulting epsilon bounds. They include an analytic experiment based on Durham infant mortality rate data. Strengths: This paper addresses the two primary limitations of differential privacy at once: 1) an interpretable way of setting the privacy loss parameter, and 2) an analytic justification of higher epsilon values. The paper writing is strong: their framework is laid out very clearly, and the examples and guidelines are useful and clarifying. The appendix is thorough and supplements most content that I wanted to see added. Of course, much ink has been spilled over adversarial interpretations of epsilon and different adversarial threat models (especially Bayesian ones). I felt that the authors did their homework, and nicely positioned their work in the literature. Weaknesses: The two main weaknesses that I noticed revolve around baseline clarity: Little is offered in the main paper to show improvements over the naive baseline ($\epsilon = \log(r)/2$). 
I noticed a note on this after Figure 2, but felt it should be highlighted more. The improvement in utility over the baseline $\tilde{a} = 0$ is nonzero, but not enormous. The question is whether that improvement warrants the increased complexity of understanding the $\tilde{a} > 0$ regime (i.e. the complexity of choosing a non-constant risk curve). A more thorough exploration of risk profiles and the above baseline: take a risk profile with minimum $r'$. Set its baseline to the constant risk profile with value $r'$. Can we say anything about the difference between their epsilons? Are there settings where this gap is very large? If so, are these settings realistic? I apologize if I missed the above addressed in the paper, but to me this is the most critical question at hand. If we can’t make statements about this gap, then we don’t know how much we can gain over the baseline. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper identifies its assumptions early on. I think the major limitation is the relatively small boost in epsilon value (and therefore utility) for the risk curves demonstrated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the useful feedback regarding baseline clarity. On the question of demonstrating improvements over the baseline, the extended versions of Examples 1 and 2 in the supplement demonstrate the potential for substantial improvements over the baseline. For example, Table 4 and the accompanying discussion (lines 862-866) show an increase by more than a factor of two in the recommended $\varepsilon$ in all cases. We agree that this comparison is an important aspect of our contribution and are updating the discussion of the examples in the main text to highlight this improvement. The second point you raise is quite thought-provoking and not one we had previously considered. The gap between the baseline and our recommendation can be large. As an illustrative example, suppose an agency has a simple, “point” risk profile, where for $p_i = 0.5$ and $q_i = 1$, they desire a relative risk bound of $r' < 2$. That is, $r^*(0.5, 1) = r'$. The baseline recommends $\epsilon = \log(r')/2 < 0.35$. The recommendation from Theorem 2 can be shown to have the form $\epsilon = \log(r'/(2 - r'))$. This diverges as $r' \to 2$. Thus, the gap between the baseline and the recommendation from Theorem 2 can be arbitrarily large for $r'$ sufficiently close to $2$. The example above likely does not represent a realistic disclosure risk profile of a real-world agency, but it is instructive in determining when our method can outperform the baseline. In general, the baseline’s recommendation is smaller because it is necessarily enforcing a relative risk of $r'$ for adversaries with small priors. If $r' = 2$, then an adversary with prior probability $p_i q_i = 0.001$ is allowed a posterior probability of at most $0.002$, which is very restrictive and leads to a small $\varepsilon$ recommendation.
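The point-profile comparison described above can be checked numerically. Below is a minimal sketch (Python; the function names are our own choices) evaluating the baseline recommendation $\log(r')/2$ against the Theorem 2 form $\log(r'/(2 - r'))$ as $r'$ approaches 2:

```python
import math

def eps_baseline(r_prime):
    # Naive constant-profile recommendation: every adversary's
    # posterior/prior ratio is bounded by r', giving eps = log(r')/2.
    return math.log(r_prime) / 2

def eps_point_profile(r_prime):
    # Theorem-2-style recommendation for the point profile
    # r*(0.5, 1) = r' discussed above (valid for 1 < r' < 2).
    return math.log(r_prime / (2 - r_prime))

# The gap grows without bound as r' approaches 2.
for r in (1.5, 1.9, 1.99):
    gap = eps_point_profile(r) - eps_baseline(r)
    print(f"r'={r}: baseline={eps_baseline(r):.3f}, "
          f"point profile={eps_point_profile(r):.3f}, gap={gap:.3f}")
```

Running the loop shows the divergence argument concretely: the point-profile recommendation blows up near $r' = 2$ while the baseline stays below $\log(2)/2 \approx 0.35$.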
If adversaries with small priors are ignored - as in the above point example - or allowed to have large relative risks - as in the extended versions of Examples 1 and 2 - then the recommendation from our method will outperform the baseline, and possibly by a substantial amount. We agree that a more thoughtful discussion around these points provides useful context to our examples and are adding the main points we discuss above to the paper. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. I've revised my score from 4 to 6. I want to emphasize that *the* value of adopting this methodology is the promise of an increased DP epsilon, and thereby improved utility. With the authors agreeing to update the discussion of the examples in the main text to highlight improvement over the baseline, I have decided to raise my score. The above comment on the possibility of arbitrary improvement over the baseline is interesting. A more complete analysis on when semantically meaningful profiles make arbitrary (or large) improvements on epsilon would encourage me to improve my score further. I missed the fact that Table 4 (implicitly) shows a 2x improvement of epsilon, which does feel significant. However, the fact that the table shows $\tilde{r}$ from which I need to derive the baseline (naive) $\epsilon = \log(\tilde{r})/2$ to see the improvement needs to be fixed. The fact that this most central point -- improvement of epsilon over the naive bound -- can only be numerically derived from a table deep in the appendix (page 25) is poor presentation that I hope the authors will improve in the main paper. Thank you again for the above clarified points --- Reply to Comment 1.1.1: Comment: We appreciate your willingness to raise your score and your helpful comments on our work. 
As you suggest, we will prominently emphasize the increase in $\varepsilon$ from our method generally and, in particular, clarify the gains in $\varepsilon$ from Table 4 as we update the main text. We also appreciate your suggestion of further analysis of when/why risk profiles admit increased $\varepsilon$. This is something we plan to investigate as part of our future work in developing this framework. Thank you for the research topic suggestion and the fruitful conversation.
Summary: The paper proposes a novel framework for setting an appropriate privacy budget via controlling Bayesian posterior probabilities of disclosure. The connection is established through the risk profile, an upper bound on disclosure risk involving privacy parameters. Theoretical justification and empirical evaluation are also provided to support this framework. Strengths: * The paper links differential privacy to Bayesian disclosure risk through the risk profile, providing a reasonable framework for selecting privacy parameters. * The paper provides some ready-to-use tools for implementing this framework, supported by empirical results. * The paper is well-written, and the authors provide a comprehensive discussion on its connection to related work. Weaknesses: * The framework is designed for releasing discrete statistics, and it is unclear whether it can be extended to release continuous statistics. Technical Quality: 3 Clarity: 3 Questions for Authors: Can the framework proposed in this paper be extended to handle cases where $T(Y)$ has continuous support? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors address this part in the limitations section of the checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for noting this omission. Generalization of our results from the discrete case to the continuous case can be accomplished by replacing sums with integrals and probability mass functions with probability density functions throughout the theorem statements and proofs. We focus on the discrete case throughout the document for clarity of presentation. We apologize that this was not clear in the original manuscript and are updating the text to highlight this point. --- Rebuttal Comment 1.1: Title: Official comment by reviewer XDrZ Comment: Thank you for answering my question. I will keep my positive score.
NeurIPS_2024_submissions_huggingface
2024
GTA: Generative Trajectory Augmentation with Guidance for Offline Reinforcement Learning
Accept (poster)
Summary: This paper tackles the challenges of Offline RL, which involves learning effective decision-making policies from static datasets without online interactions. The authors introduce Generative Trajectory Augmentation (GTA), a novel approach that uses a diffusion model to enhance offline data by augmenting trajectories to be both high-rewarding and dynamically plausible. GTA partially noises original trajectories and denoises them with classifier-free guidance based on amplified return values. The experimental results indicate that GTA, as a general data augmentation strategy, enhances the performance of widely used offline RL algorithms in both dense and sparse reward settings. Strengths: (1) Interesting and important topic: The guided data augmentation for offline RL is interesting. Success in this domain will further benefit the real-world applications of offline RL algorithms. (2) Extensive experiments: the authors provide extensive experiments to validate their proposed method. Weaknesses: (1) Lack of theoretical guarantee to support the algorithm: The authors only provide empirical results, lacking theoretical analysis of the performance of the proposed method. For other potential weaknesses, please see my questions. Technical Quality: 3 Clarity: 3 Questions for Authors: I will adjust my rating score based on the rebuttal results. (1) How do you select the hyperparameter \alpha in the denoising with amplified return guidance in equation (8)? Can you also provide ablation study results for different selections of \alpha? (2) In the generation phase, do you generate whole trajectories or subsequences of trajectories? (3) Is the proposed method sensitive to the hyperparameter \mu? Based on the description in Figure 4 and Appendix F.3, when \mu = 1, it only performs well on the dataset with low quality.
For high-quality datasets with abundant expert demonstrations, does your method have trouble preserving and reconstructing the high-rewarding trajectories? (4) Can you provide comparison results with diffusion-based offline RL methods? For example, Decision Diffuser [1] and AdaptDiffuser [2]. (5) In the ablation study on reweighted sampling, what base offline RL algorithm do you use for D4RL score calculation? (6) Since the reweighted sampling can implicitly change the dataset distribution, can you also apply the reweighted sampling technique to other baseline methods? Reference: [1] Ajay, A., Du, Y., Gupta, A., Tenenbaum, J., Jaakkola, T., & Agrawal, P. (2022). Is conditional generative modeling all you need for decision-making? arXiv preprint arXiv:2211.15657. [2] Liang, Z., Mu, Y., Ding, M., Ni, F., Tomizuka, M., & Luo, P. (2023). AdaptDiffuser: Diffusion models as adaptive self-evolving planners. arXiv preprint arXiv:2302.01877. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, the authors discussed the limitations in their paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback, which can enhance our manuscript. > **(Weakness 1)** Lack of theoretical guarantee to support the algorithm: The authors only provide empirical results, lacking theoretical analysis of the performance of the proposed method. While existing work [1] proposes theoretical bounds between conditioning value and generated value, these proofs rely on stringent assumptions, which may not align with complex tasks in Offline RL benchmarks. Empirically, on GTA, we conducted an analysis of the correlation between the conditioning value and the rewards of generated trajectories. As illustrated in Figure 1, attached in pdf, we observed Pearson correlation coefficients of 0.55, 0.91, and 0.99 for halfcheetah-medium, medium-replay, and medium-expert datasets, suggesting that GTA can generate trajectories aligned with the conditioned return. Augmented datasets with high-rewarding trajectories can aid offline RL policies in reaching higher performance. GTA has demonstrated consistent performance gains through extensive experiments across various environments. Thorough ablation studies have further proven the effectiveness of our proposed method. We would like to emphasize the empirical results presented in our paper, which underscore the effectiveness of our approach. [1] Yuan, Hui, et al. "Reward-directed conditional diffusion: Provable distribution estimation and reward improvement." Advances in Neural Information Processing Systems 36 (2024). > **(Question 1)** How to select the hyperparameter $\alpha$ in the denoising with amplified return guidance in equation 8? **(Question 3-1)** Is the proposed method sensitive to the hyperparameter $\mu$? As discussed in the general response, GTA outperforms other baselines in locomotion tasks even with a single hyperparameter configuration ($\mu=0.25, \alpha=1.1$), demonstrating the robustness of our method. 
We also provide a general recipe for selecting hyperparameters for new tasks: increase $\mu$ and $\alpha$ for low-quality datasets to promote further performance gains. > **(Question 2)** In the generation phase, do you generate whole trajectories or generate subsequences of trajectories? Instead of generating the entire trajectory, we focus on generating subtrajectories. Please refer to Table 8 in Appendix E.1 for more details about subtrajectory generation. > **(Question 3-2)** Based on the description in Figure 4 and Appendix F.3, when \mu = 1, it only performs well on the dataset with low quality. Does GTA have trouble preserving and reconstructing the high-rewarding trajectories? In addition to the dynamic MSE and oracle reward presented in Appendix F.4, we include data quality metrics for trajectories generated with a higher $\mu$ in the table below. The results indicate that while a $\mu$ of 0.75 results in a higher dynamic MSE than a $\mu$ of 0.25, it still maintains a lower value than SynthER. This demonstrates that GTA does not encounter significant issues in reconstructing high-rewarding trajectories.

Table 1. Dynamic MSE of generated data on halfcheetah-medium-expert-v2.

| | SynthER | GTA ($\mu=0.25, \alpha=1.3$) | GTA ($\mu=0.75, \alpha=1.3$) |
|-|-|-|-|
| Dynamic MSE ($\times 10^{-2}$) | 0.91 | 0.79 | 0.90 |
| Optimality | 7.7 | 10.07 | 11.56 |

> **(Question 4)** Can you provide comparison results with diffusion-based offline RL methods? For example, Decision Diffuser and AdaptDiffuser. We provide a comparison between diffusion planners [2, 3, 4] and GTA in Tables 1 and 2 attached in the PDF. As shown there, GTA outperforms diffusion planners in locomotion tasks and demonstrates significant efficiency in test-time evaluation. As mentioned in the related work section, our method resembles diffusion planners in terms of generating trajectories with high return guidance.
However, adopting diffusion as an augmentation method and utilizing model-free offline RL policies transfers the extensive computational burden of diffusion models from the evaluation stage to the data preparation stage. This bypasses a crucial challenge of diffusion planners, which require extensive cost for action sampling. [2] Janner, Michael, et al. "Planning with diffusion for flexible behavior synthesis." arXiv preprint arXiv:2205.09991 (2022). [3] Ajay, Anurag, et al. "Is conditional generative modeling all you need for decision-making?." arXiv preprint arXiv:2211.15657 (2022). [4] Liang, Zhixuan, et al. "AdaptDiffuser: Diffusion models as adaptive self-evolving planners." arXiv preprint arXiv:2302.01877 (2023). > **(Question 5)** What is the baseline algorithm in the ablation study of reweighted sampling? We chose TD3BC for the ablation study of reweighted sampling because TD3BC requires the least computational cost among the offline RL baselines (IQL, CQL, MCQ). > **(Question 6)** Can you also apply the reweighted sampling technique to other baseline methods? We conducted experiments on the effect of reweighted sampling for other baselines. Specifically, we sample transitions from the buffer with probability proportional to their reward during policy training. Table 1 describes the performance of reweighted sampling on other baselines. We find that baselines with reweighted sampling underperform GTA.

Table 1. Experiment results of reweighted sampling on baselines.

| | Original | S4RL | SynthER | GTA |
|-|-|-|-|-|
| Halfcheetah-medium | 48.52±0.39 | 48.70±0.31 | 48.73±0.83 | 57.84±0.51 |

--- Rebuttal Comment 1.1: Comment: Dear Reviewer UiuM, thanks for giving the authors a detailed list of questions. Did the authors answer these to your satisfaction or did they raise any concerns for you about accepting this paper? If so, it would be great to clarify while we can still interact with the authors. Thanks!
--- Rebuttal Comment 1.2: Comment: I appreciate the detailed response, most of my questions have been addressed. Regarding the ablation study against \alpha and \mu, and the comparison results with reweighted sampling, I believe they should be evaluated more comprehensively with more testing values and tasks. Considering the limited time of the rebuttal phase, I understand that the authors can not provide the full results. However, I suggest to have a more detailed discussion regarding this part in their revision. After reading the reviews from other reviewers and the authors' responses, I decided to raise my score from 5 to 6 in favor of acceptance. --- Reply to Comment 1.2.1: Comment: Thank you for your thoughtful response and detailed review of our rebuttal. Following your suggestion, we will expand the ablation study presented in Appendix G by conducting additional experiments across various tasks and incorporating these results into the revised manuscript. If you have any further suggestions or considerations, please feel free to share them with us. Once again, we sincerely appreciate your insightful feedback.
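As an aside on the correlation analysis the authors describe in this thread (Pearson coefficients between the conditioning value and the returns of generated trajectories), the check amounts to a few lines. The arrays below are synthetic stand-ins of our own, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: the return values the diffusion model was
# conditioned on, and the (noisily aligned) returns of the trajectories
# it actually generated.
cond_returns = rng.uniform(0.0, 100.0, size=500)
gen_returns = cond_returns + rng.normal(0.0, 10.0, size=500)

# Pearson correlation between conditioning value and generated return.
pearson_r = np.corrcoef(cond_returns, gen_returns)[0, 1]
print(f"Pearson r = {pearson_r:.2f}")
```

A coefficient near 1 indicates the sampler honors the conditioning return, which is the sense in which the reported 0.55/0.91/0.99 values support the guidance mechanism.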
Summary: To improve the quality of offline datasets, this paper proposes a generative data augmentation approach that leverages diffusion models. Moreover, with the adoption of a partial noising and denoising framework with amplified return guidance, trajectories can be guided towards high-rewarding regions. Finally, with the generated trajectories, existing offline RL methods can be utilized to learn the optimal policy. Strengths: The paper is clearly organized, and the experimental results show the effectiveness of the proposed approach. Also, the separation of data generation with diffusion models from policy learning mitigates the possible time and computation cost in policy learning. Moreover, trajectory-level data generation can capture the transition dynamics, which is beneficial for environments with sparse rewards. Weaknesses: The proposed approach is mainly a direct combination of existing techniques, e.g., the diffusion model for data generation, added noise for exploration, and existing offline RL techniques for policy learning; the theoretical contribution of the paper is trivial. Also, the experimental results are still not sufficient, e.g., the lack of comparisons of time and computation costs between different methods, and only two data augmentation baselines are compared. Moreover, the proposed approach is limited to trajectories with reward signals, which is not the case in many real-world applications. Furthermore, it is unclear to me whether there are any theoretical guarantees to push the generated trajectory toward the high-rewarding region. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In section 4.2, by denoising with amplified return guidance, the authors claim the trajectory can be guided towards high-rewarding regions; are there any theoretical guarantees for this statement? 2.
How about the whole computation and time cost of the proposed approach when it is compared with other diffusion-free baselines? 3. As shown in the experiments, for different environments, different noise levels are required for the optimal performance, then how can we determine the noise level for a new task? 4. The proposed approach aims to deal with trajectories with reward signals, however, in reality, it may be difficult to obtain such signals, can the proposed approaches be extended to the cases where no reward signals can be provided? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed some limitations, but for some other limitations, e.g., the computation and time cost of the proposed approach, the application of the proposed approach in some real-world applications with no reward signal provided should be further stated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your critical reviews and valuable feedback. > **(Weakness 1)** The proposed approach is mainly a direct combination of current techniques. Please note that we propose the hypothesis that augmenting the offline dataset with feasible and high-rewarding trajectories would boost the performance of offline RL algorithms. To achieve this, we made a novel combination of various techniques. GTA involves (1) generating trajectory-level data to capture sequential relationships between transitions and long-term transition dynamics, (2) introducing partial noising to control the exploration level of generated trajectories, and (3) guiding diffusion model with amplified return to find high-rewarding trajectories. We effectively balance exploration and exploitation with two strategies carefully tailored for diffusion models, as both strategies can seamlessly be integrated with the forward and reverse process of diffusion models. Our method consistently outperforms several baselines across different environments, highlighting the effectiveness of our proposed method. We want to emphasize that numerous machine learning studies continue to explore innovative combinations of established techniques. > **(Question 1)** In section 4.2, by denoising with amplified return guidance, the authors claim the trajectory can be guided towards high-rewarding regions, are there any theoretical guarantee for this statement? There is existing work [1] proposing theoretical bounds between conditioning value and generated value. However, these proofs rely on stringent assumptions, making it challenging to directly apply them to GTA. Meanwhile, related works with return guidance [2, 3, 4] show that the conditionally sampled trajectories yield high rewards, demonstrating the effectiveness of return-guidance. 
To verify that our conditional sampling indeed generates high-reward trajectories, we conducted an analysis of the correlation between the conditioning value and the rewards of generated trajectories. As illustrated in Figure 1 attached in the PDF, we observed Pearson correlation coefficients of 0.55, 0.91, and 0.99 for the halfcheetah-medium, medium-replay, and medium-expert datasets, respectively. This indicates that the diffusion model effectively samples trajectories aligned with the conditioned return. [1] Yuan, Hui, et al. "Reward-directed conditional diffusion: Provable distribution estimation and reward improvement." Advances in Neural Information Processing Systems 36 (2024). [2] Janner, Michael, et al. "Planning with diffusion for flexible behavior synthesis." arXiv preprint arXiv:2205.09991 (2022). [3] Ajay, Anurag, et al. "Is Conditional Generative Modeling all you need for Decision Making?." The Eleventh International Conference on Learning Representations. [4] Liang, Zhixuan, et al. "AdaptDiffuser: Diffusion Models as Adaptive Self-evolving Planners." International Conference on Machine Learning. PMLR, 2023. > **(Question 2)** How about the whole computation and time cost of the proposed approach when it is compared with other diffusion-free baselines? As mentioned in the general response, we compare the whole computation and time cost of the proposed approach with other diffusion-free baselines. For model-based methods, we need to train a dynamics model and generate synthetic trajectories during policy training. Table 2 attached in the PDF summarizes the whole training and evaluation time cost of the algorithms. While training dynamics models takes less time than training diffusion models, model-based RL takes much longer for policy training as it rolls out synthetic trajectories during training. > **(Question 3)** As shown in the experiments, for different environments, different noise levels are required for the optimal performance, then how can we determine the noise level for a new task?
As mentioned in the general response, we find that even if we fix $\mu$ across different environments, we achieve higher performance than other baselines. We also observe that we can improve the performance further if we increase $\mu$ for exploration when we have a low-quality dataset. Therefore, we recommend selecting $\mu$ based on dataset quality for a new task instead of extensive online tuning.

> **(Question 4)** The proposed approach aims to deal with trajectories with reward signals; however, in reality, it may be difficult to obtain such signals. Can the proposed approach be extended to cases where no reward signals can be provided?

Numerous offline RL methods [5, 6, 7, 8, 9] aim to learn a decision-making policy that maximizes expected cumulative rewards. In this context, we want to emphasize that GTA focuses on enhancing the optimality of offline datasets while minimizing the degradation of dynamic plausibility to improve the performance of offline RL algorithms. Therefore, generating trajectories without any reward signal is beyond our scope, as they cannot be augmented toward high-rewarding trajectories and are not the objective of RL problems. Please note that we conducted extensive experiments on realistic and challenging tasks such as AntMaze, FrankaKitchen, Adroit, and pixel-based environments.

[5] Fujimoto, Scott, and Shixiang Shane Gu. "A minimalist approach to offline reinforcement learning." Advances in Neural Information Processing Systems 34 (2021): 20132-20145.

[6] Kumar, Aviral, et al. "Conservative q-learning for offline reinforcement learning." Advances in Neural Information Processing Systems 33 (2020): 1179-1191.

[7] Kostrikov, Ilya, Ashvin Nair, and Sergey Levine. "Offline Reinforcement Learning with Implicit Q-Learning." International Conference on Learning Representations.

[8] Lyu, Jiafei, et al. "Mildly conservative q-learning for offline reinforcement learning."
Advances in Neural Information Processing Systems 35 (2022): 1711-1724.

[9] Yu, Tianhe, et al. "Combo: Conservative offline model-based policy optimization." Advances in Neural Information Processing Systems 34 (2021): 28954-28967.

--- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response and additional experimental results; most of my concerns have been addressed. I'd like to raise my score to 6. Best.

--- Reply to Comment 1.1.1: Comment: Thank you for your kind response and for thoroughly reviewing our rebuttal. While most of your concerns have been addressed, if there are any remaining issues or points of discussion, please feel free to share them with us. We are always ready to engage in further discussion. Once again, we appreciate your thoughtful feedback.
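As a side note for readers, the Pearson-correlation check described in the rebuttal above (between the return the diffusion model is conditioned on and the return of the trajectory it actually generates) can be reproduced with a short sketch. The data here is synthetic and hypothetical, standing in for the real conditioning values and generated-trajectory rewards.

```python
import numpy as np

# Hypothetical stand-ins for the rebuttal's analysis: the returns the
# diffusion model was conditioned on, and the (noisy) returns actually
# achieved by the generated trajectories.
rng = np.random.default_rng(0)
conditioned = rng.uniform(100.0, 400.0, size=500)
generated = conditioned + rng.normal(0.0, 30.0, size=500)

# Pearson correlation coefficient, as reported per dataset in the rebuttal.
r = np.corrcoef(conditioned, generated)[0, 1]
```

A coefficient near 1 (the rebuttal reports 0.55, 0.91, and 0.99 depending on the dataset) indicates the sampler tracks the conditioned return.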
Summary: This paper builds on ideas from SynthER but adds classifier-free guidance to boost the returns of the generated trajectories. This makes sense as the resulting data has higher quality, and the paper is a totally sensible next step in the series of works building upon SynthER, which is an exciting direction for research. The paper is well executed and a solid contribution.

Strengths:

* A highly relevant area building on synthetic data generation for offline RL. Out of all the current active areas in RL research, this one benefits the most from the current foundation model/large data regime, and scaling offline RL has been shown to be highly impactful in areas such as robotics (e.g. RTX).
* The method is not overly complicated and builds upon the recent SynthER paper, presented at NeurIPS 2023. This is an example of a simple idea that makes a great deal of sense, and it is well executed.
* The paper reads well.
* The experiments go beyond what was done in SynthER, including some new environments. It is great to see this, as RL needs to keep pushing for more complex benchmarks and not just sticking to D4RL and Atari "because that's what the previous paper did".

Weaknesses: There are no major weaknesses here; it is a solid paper taking a nice step in an exciting general direction of research. The below points are fairly minor:

* The authors could cite Ball et al. 2021 "Augmented World Models" as another example of data augmentation in offline RL.
* The authors could discuss how this relates to another paper building on SynthER, Policy-Guided Diffusion by Jackson et al. This is very recent work, so it makes sense that it is not needed as a baseline, but the comparison may be worth including in the text.

Technical Quality: 3 Clarity: 3 Questions for Authors: NA Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Nice to see this in the main body rather than the Appendix. It would be good to add a discussion on scalability if possible.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and valuable feedback!

> **(Weakness 1)** The authors could cite Ball et al 2021 "Augmented World Models" as another example of data augmentation in offline RL.

Thank you for pointing out this crucial paper. The paper [1] augments a learned model with simple transformations and approximates the augmentation for unseen environments in a self-supervised fashion for zero-shot generalization. While we focus on single-task offline RL problems, it is highly related in terms of data augmentation methods for offline RL problems. We will cite the paper in the related work section.

> **(Weakness 2)** The authors could discuss how this relates to another paper building on SynthER, Policy-Guided Diffusion by Jackson et al. This is very recent work so makes sense it is not needed as a baseline, but the comparison may be worth including in text.

Thank you for highlighting an exciting concurrent work. Policy-Guided Diffusion (PGD) [2] generates synthetic trajectories for augmentation with classifier guidance from the target policy. On the other hand, GTA generates trajectories from the original trajectory and introduces a partial noising and denoising framework. To generate high-rewarding trajectories, GTA uses amplified return guidance. While the methods differ slightly, both papers share the ultimate objective of generating high-rewarding trajectories while retaining low dynamics error to improve offline RL algorithms. We will add a comparison with PGD in the manuscript.

[1] Ball, Philip J., et al. "Augmented world models facilitate zero-shot dynamics generalization from a single offline environment." International Conference on Machine Learning. PMLR, 2021.

[2] Jackson, Matthew Thomas, et al. "Policy-guided diffusion." arXiv preprint arXiv:2404.06356 (2024).

--- Rebuttal Comment 1.1: Title: SGTM Comment: Thanks for the response!

--- Reply to Comment 1.1.1: Comment: Thank you for your kind response!
Please let me know if you have any further suggestions or adjustments you would like us to consider.
Summary: The paper introduces Generative Trajectory Augmentation (GTA), a data augmentation approach for Offline Reinforcement Learning (RL) that enhances the quality of static datasets by generating high-rewarding and dynamically plausible trajectories using a conditional diffusion model. GTA partially noises original trajectories and then denoises them with classifier-free guidance via conditioning on an amplified return value. The authors demonstrate that GTA improves the performance of various offline RL algorithms across several benchmark tasks.

Strengths:
- GTA integrates seamlessly with existing offline RL methods. It builds on previous advancements, and new offline RL algorithms could also benefit from it, given its agnostic nature towards the specific RL method used.
- The ability to generate high-return trajectories that do not exist in the logged data is a significant advantage. This feature potentially improves the performance of offline RL algorithms by enriching the dataset with valuable transitions, as supported by the experimental results.
- The experimental results presented in the paper show significant improvements across various offline RL algorithms and environments. Additionally, thorough ablations are provided, highlighting the impact of different components and hyperparameters on the overall performance. This comprehensive evaluation demonstrates the practical effectiveness of GTA.

Weaknesses:
- The claim that GTA-generated data adheres to the dynamics of the environment (lines 145-148) seems unfounded. Is there a principled argument on why diffusion models would learn the dynamics of the environment well? Especially where the goal is to create transitions outside the dataset. Figure 5 does not conclusively show that GTA is dynamically plausible, especially for tasks like Cheetah compared to methods like S4RL.
- Compared to S4RL, the augmented trajectories appear less plausible.
This issue is evident in tasks like HalfCheetah, yet GTA still achieves better rewards. It is unclear why this does not affect the final results.
- Based on Appendix B.2, the performance of GTA appears to depend heavily on finely-tuned hyperparameters, as suggested by the different values used for different environments in Table 6. This raises concerns about the generalizability of the method, indicating that it might rely on online manual tuning.

Technical Quality: 3 Clarity: 3

Questions for Authors:
- How do you ensure that the generated trajectories are plausible and adhere to the environment's dynamics? What mechanisms are used to pick $\alpha$ to handle cases where the return used for high-reward guidance is not reasonable? How do you ensure that the generated trajectories remain valid and useful?
- Does each task require a new $\mu$, $\alpha$ hyperparameter? How does this impact the generalizability and practical application of GTA?
- What is the proportion of augmented trajectories to original ones in the final dataset? Is the entire offline dataset kept before training, with augmented trajectories added on top? Do all the offline datasets have the same size? Are 5 million augmented transitions added regardless of the original dataset's size?
- Could you clarify if Figure 3 is based on a real example or if it is just an illustration of the method's intuition? If it is the latter, how can you ensure it accurately represents the method's practical application?

Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations and potential impacts in their paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and valuable feedback!

> **(Weakness 1-1)** Is there a principled argument on why diffusion models would learn the dynamics of the environment well? Especially where the goal is to create transitions outside the dataset. **(Question 1-1)** How do you ensure that the generated trajectories are plausible and adhere to the environment's dynamics?

As you correctly pointed out, the diffusion model of GTA does not explicitly ensure dynamic plausibility. Instead, it implicitly learns the dynamics governing the trajectory by learning the data distribution. GTA concentrates on enhancing the optimality of the dataset while reducing the degradation of the environment's dynamics in the augmented dataset to improve the performance of offline RL algorithms. Therefore, we will adjust our statement from "ensuring dynamic plausibility" to "minimizing the degradation of dynamic plausibility". We apologize for overclaiming on our proposed method.

However, our method still has the potential to generate high-rewarding and dynamically plausible trajectories by virtue of two novel strategies: the *partial noising and denoising framework* and *amplified return guidance*. As discussed in Appendix F.1, amplified return conditioning prevents the partially noised trajectories from being denoised conditioned on excessively high values, yielding lower dynamic MSE. Additionally, Table 1 shows the dynamic MSE of the generated dataset for differing levels of $\mu$. The result demonstrates that a low $\mu$ does help preserve more dynamics information.

Table 1. Dynamic MSE of generated data on walker-medium-v2.
|$\mu$|Dynamic MSE ($\times 10^{-2}$)|
|-|-|
|0.1|2.80|
|0.25|3.02|
|0.5|3.04|
|0.75|3.34|
|1.0|3.32|

To sum up, even though GTA does not explicitly enforce the model to generate a dynamically plausible dataset, we introduce the *partial noising and denoising framework* and *amplified return guidance* to minimize the degradation of the dynamics of generated data.

> **(Weakness 1-2, Weakness 2)** Figure 5 does not conclusively show that GTA is dynamically plausible, especially for tasks like Cheetah compared to methods like S4RL, yet GTA still achieves better rewards. It is unclear why this does not affect the final result.

As the reviewer noted, the trajectories generated by GTA exhibit larger dynamic MSE than S4RL in some cases like HalfCheetah. However, we observe that the dynamic plausibility of GTA is still better than that of other baselines such as SynthER. More importantly, GTA augments trajectories with higher rewards, which significantly affects the performance of offline RL policies.

> **(Weakness 3)** The performance of GTA appears to depend heavily on finely-tuned hyperparameters. **(Question 2)** Does each task require a new $\mu$, $\alpha$?

To address the concern that the reviewer mentioned, we compared GTA with other baselines using a single parameter setting ($\alpha=1.1$, $\mu=0.25$). The results demonstrate that fixed parameter configurations generally show performance gains across locomotion tasks, indicating that GTA may not require extensive online tuning. Furthermore, we propose a guideline for determining the appropriate $\mu$ and $\alpha$ for new tasks to boost performance. Please refer to the general response for a more detailed description.

> **(Question 1-2)** What mechanisms are used to pick $\alpha$ to handle cases where the return used for high-reward guidance is not reasonable? How do you ensure that the generated trajectories remain valid and useful?

As you mentioned, excessively guiding towards high rewards can result in invalid trajectories.
This phenomenon occurs, as shown in Table 17 in Appendix F.1, when we generate trajectories conditioned on the maximum return that can be achieved in the environment. To prevent conditioning on an unreasonable return, we introduced amplified return guidance, which involves multiplying the original trajectory's return by $\alpha$ for conditioning. It controls the exploitation level of GTA, and we observe that our method empirically generates valid and useful trajectories that improve the performance of RL algorithms.

> **(Question 3)** What is the proportion of augmented trajectories to original ones? Is the entire offline dataset kept, with augmented trajectories added on top? Do all the offline datasets have the same size? Are 5M augmented transitions added regardless of the original dataset's size?

GTA augments 5M transitions for D4RL and 1M transitions for VD4RL, regardless of the different sizes of the original datasets. We follow the procedure of the prior method SynthER for a fair comparison. After the augmentation, we add the augmented trajectories to the original dataset. There is no reason to exclude the original dataset, as it preserves the true dynamics of the environment. Please refer to Table 10 of Appendix G.4 for more discussion on the size of the augmented dataset.

> **(Question 4)** Could you clarify if Figure 3 is based on a real example or if it is just an illustration of the method's intuition? If it is the latter, how can you ensure it accurately represents the method's practical application?

While Figure 3 of section 4.2 is presented for intuitive comprehension, our empirical results in Table 2 clearly demonstrate the relationship between $\mu$, oracle reward, and the deviation from the original trajectory. Table 2 shows that both oracle reward and deviation from the original dataset tend to increase as $\mu$ becomes larger.
These results align well with Figure 3, which illustrates that trajectories deviate from the original trajectories towards high-rewarding regions as $\mu$ gets larger.

Table 2. Analysis of dataset generated by GTA with different $\mu$.

|$\alpha$ = 1.4|Oracle Reward|Deviations ($\times 10^{-1}$)|
|-|-|-|
|offline data|4.77|0.00|
|$\mu$ = 0.1|4.84|0.01|
|$\mu$ = 0.25|5.06|0.24|
|$\mu$ = 0.5|7.21|11.31|
|$\mu$ = 0.75|7.23|21.50|
|$\mu$ = 1.0|7.24|21.99|

--- Rebuttal Comment 1.1: Comment: Thank you for your answers.

--- Reply to Comment 1.1.1: Comment: We would like to express our gratitude once again for your feedback, which has guided our paper in a more positive direction. In response to your comments, we will revise our paper to include additional details about Figure 5 and provide practitioner guidance for the hyperparameter settings. We are fully prepared to respond to any additional discussions or inquiries you may have, so please feel free to reach out at any time. Thank you.
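For readers unfamiliar with the guidance mechanism discussed in this thread, here is a minimal sketch of classifier-free guidance with an amplified return condition. The epsilon-prediction networks below are hypothetical lambdas, not the authors' model; only the combination rule (condition on $\alpha$ times the original return, then mix conditional and unconditional predictions) reflects the idea described in the rebuttal.

```python
import numpy as np

# Sketch (assumed epsilon-prediction parameterization): one classifier-free
# guidance step where the condition is the original trajectory's return
# amplified by alpha > 1, biasing denoising toward higher-return regions.
def guided_eps(eps_cond, eps_uncond, x, orig_return, alpha, w=1.5):
    amplified = alpha * orig_return      # amplified return condition
    e_c = eps_cond(x, amplified)         # conditional noise prediction
    e_u = eps_uncond(x)                  # unconditional noise prediction
    return e_u + w * (e_c - e_u)         # guidance mixture

# Toy stand-in networks: the conditional branch shifts with the return value.
eps_uncond = lambda x: np.zeros_like(x)
eps_cond = lambda x, r: np.full_like(x, 0.01 * r)

eps = guided_eps(eps_cond, eps_uncond, np.zeros(4), orig_return=100.0, alpha=1.3)
```

With these stand-ins, a larger `alpha` simply scales the guidance signal; in the paper's setting it controls the exploitation level of the generated trajectories.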
Rebuttal 1: Rebuttal: We sincerely thank the review committee for their detailed feedback. We appreciate the recognition of our paper's strengths, highlighted by the reviewers: **Originality** (CNng, ABdH), **Significance** (CNng, WvGz, ABdH, UiuM), and **Extensive experiments** (CNng, WvGz, ABdH, UiuM). In response to the reviewers' feedback, we provide a summary of the additional experiments we conducted:

- **Comparison with offline model-based RL and diffusion planners (Table 1)** We present a table that compares the performance of GTA with model-based RL and diffusion planners in locomotion tasks. We observe that GTA outperforms recent baselines in terms of average performance.
- **Analysis of computational cost (Table 2)** We present a table that shows the total computational time for training GTA and other methods. We find that while the training cost of GTA is relatively high, the evaluation time is much faster than other methods, especially when compared with diffusion planners.
- **Guidance for selecting $\mu$ and $\alpha$ for a new task** We would like to address the concerns regarding the generalizability of GTA to new tasks, specifically the hyperparameter settings for noise level $\mu$ and guidance multiplier $\alpha$. We conducted experiments across gym locomotion environments using a single set of hyperparameters ($\mu=0.25, \alpha=1.1$). As shown in the table, GTA outperforms SynthER even with a single hyperparameter setting, highlighting the generalizability of our method.

Table 1. D4RL normalized score on locomotion environments with fixed $\alpha$ and $\mu$. The experiments are conducted with TD3BC.
| Env | None | SynthER | GTA ($\mu = 0.25, \alpha=1.1$) |
| --- | --- | --- | --- |
| halfcheetah-m-r | 44.64 ± 0.71 | 45.57 ± 0.34 | **46.48 ± 0.39** |
| halfcheetah-m | 48.42 ± 0.62 | 49.16 ± 0.39 | **49.22 ± 0.52** |
| halfcheetah-m-e | 89.48 ± 5.50 | 85.47 ± 11.35 | **94.98 ± 1.66** |
| hopper-m-r | 65.69 ± 24.41 | **78.81 ± 15.80** | 72.86 ± 28.42 |
| hopper-m | 61.04 ± 3.18 | 63.70 ± 3.69 | **66.16 ± 4.89** |
| hopper-m-e | 104.08 ± 5.81 | 98.99 ± 11.27 | **107.09 ± 2.69** |
| walker-m-r | 84.11 ± 4.12 | **90.67 ± 1.56** | 86.02 ± 8.98 |
| walker-m | 84.58 ± 1.92 | **85.43 ± 1.14** | 85.42 ± 1.30 |
| walker-m-e | 110.23 ± 0.37 | 109.95 ± 0.32 | **110.67 ± 0.89** |
| Average | 76.92 ± 2.66 | 78.64 ± 2.38 | **79.88 ± 3.35** |

We also provide a general recipe for selecting hyperparameters on new tasks. While setting ($\mu=0.25, \alpha=1.1$) generally works, we observe that for low-quality datasets, increasing $\mu$ and $\alpha$ leads to further improvements by promoting exploration, as shown in the tables below.

Table 2. D4RL normalized score on medium-quality locomotion environments with fixed $\alpha$ and $\mu$. The experiments are conducted with TD3BC.

| Env | None | SynthER | GTA ($\mu = 0.5, \alpha=1.3$) | GTA ($\mu = 0.75, \alpha=1.3$) |
| --- | --- | --- | --- | --- |
| halfcheetah-m | 48.42 ± 0.62 | 49.16 ± 0.39 | **57.92 ± 0.48** | 57.85 ± 0.27 |
| hopper-m | 61.04 ± 3.18 | 63.70 ± 3.69 | **68.46 ± 1.32** | 61.58 ± 5.00 |
| walker-m | 84.58 ± 1.92 | 85.43 ± 1.14 | **88.38 ± 2.70** | 68.42 ± 2.22 |
| Average | 64.68 | 66.10 | **71.59 ± 0.45** | 68.42 ± 2.22 |

Table 3. D4RL normalized score on medium-replay-quality locomotion environments with fixed $\alpha$ and $\mu$. The experiments are conducted with TD3BC.
| Env | None | SynthER | GTA ($\mu = 0.25, \alpha=1.1$) |
| --- | --- | --- | --- |
| halfcheetah-m-r | 44.64 ± 0.71 | 45.57 ± 0.34 | **46.48 ± 0.39** |
| halfcheetah-m | 48.42 ± 0.62 | 49.16 ± 0.39 | **49.22 ± 0.52** |
| halfcheetah-m-e | 89.48 ± 5.50 | 85.47 ± 11.35 | **94.98 ± 1.66** |
| hopper-m-r | 65.69 ± 24.41 | **78.81 ± 15.80** | 72.86 ± 28.42 |
| hopper-m | 61.04 ± 3.18 | 63.70 ± 3.69 | **66.16 ± 4.89** |
| hopper-m-e | 104.08 ± 5.81 | 98.99 ± 11.27 | **107.09 ± 2.69** |
| walker-m-r | 84.11 ± 4.12 | **90.67 ± 1.56** | 86.02 ± 8.98 |
| walker-m | 84.58 ± 1.92 | **85.43 ± 1.14** | 85.42 ± 1.30 |
| walker-m-e | 110.23 ± 0.37 | 109.95 ± 0.32 | **110.67 ± 0.89** |
| Average | 76.92 ± 2.66 | 78.64 ± 2.38 | **79.88 ± 3.35** |

In summary, we propose a general configuration of hyperparameters for GTA that mostly enhances baseline performance, and we also provide a recipe for selecting hyperparameters to allow researchers to apply GTA to new tasks without extensive tuning.

Pdf: /pdf/2c97b008599935b88009374651434e35476fe593.pdf
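To make the role of the noise level $\mu$ in the recipe above concrete, here is a minimal sketch of the partial noising step, assuming a standard DDPM forward process (the denoiser and the actual trajectory format are omitted; the array shapes are hypothetical): the original trajectory is diffused only up to step $\lfloor \mu T \rfloor$, so a small $\mu$ keeps generated data close to the dataset (exploitation) while a large $\mu$ permits more deviation (exploration).

```python
import numpy as np

def partially_noise(traj, mu, T=1000, beta_min=1e-4, beta_max=0.02, seed=0):
    """Diffuse `traj` forward for floor(mu * T) of T DDPM steps."""
    betas = np.linspace(beta_min, beta_max, T)
    alpha_bar = np.cumprod(1.0 - betas)        # cumulative signal retention
    k = max(int(mu * T) - 1, 0)                # partial stopping step
    noise = np.random.default_rng(seed).normal(size=traj.shape)
    return np.sqrt(alpha_bar[k]) * traj + np.sqrt(1.0 - alpha_bar[k]) * noise

traj = np.ones((16, 8))                        # toy (horizon, features) slab
light = partially_noise(traj, mu=0.1)          # stays close to the original
heavy = partially_noise(traj, mu=1.0)          # close to pure noise
```

Denoising then starts from the partially noised trajectory rather than from pure noise, which is why low $\mu$ settings in the tables above preserve more of the original dynamics.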
Dataset source: NeurIPS_2024_submissions_huggingface
Conference year: 2024
Summary: The paper presents Generative Trajectory Augmentation (GTA), which is aimed at improving offline reinforcement learning (RL). GTA uses a diffusion model conditioned on high returns to generate high-rewarding trajectories. These trajectories are used to augment the static datasets used to train offline RL algorithms. The augmentation process involves partially adding noise to existing trajectories and then refining them using a denoising mechanism guided by an amplified return value. The authors demonstrate that GTA improves the performance of popular offline RL algorithms across tasks in the D4RL benchmark. They also analyze the quality of the augmented trajectories, showing improvements in both data optimality and novelty over two baselines (S4RL and SynthER).

Strengths:

**Originality:** The use of a conditional diffusion model for trajectory augmentation is a novel approach that adds computation overhead at the data creation stage instead of the policy learning stage.

**Quality:** The paper includes experiments across tasks from the D4RL benchmark, demonstrating the effectiveness of GTA in various settings, including dense and sparse reward tasks, high-dimensional robotics tasks, and a pixel-based observation task.

**Clarity:** The paper is well-organized and presents the methodology, experiments, and results clearly and logically. The authors provide an anonymized link to their code, enhancing the reproducibility and transparency of the proposed method.

**Significance:** GTA is shown to be compatible with offline RL algorithms, making it a flexible solution for data augmentation in offline RL. By adding novel high-rewarding trajectories to the datasets in the D4RL benchmark, GTA shows improvement in the performance of offline RL algorithms.

Weaknesses: 1.
As shown in Figures 4 (a)(b) and Table 25, the performance of GTA is sensitive to the choice of hyperparameters, such as the noising ratio ($\mu$) and the multiplier for the conditioned return ($\alpha$), which might require extensive tuning for different tasks.
2. While empirical results show that the generated trajectories using GTA are dynamically plausible (Figure 5), this is not explicitly enforced in GTA and might be a byproduct of generating high-rewarding trajectories. The reviewer believes that further evaluation of GTA on real-world data is needed to make claims regarding GTA's dynamic plausibility, such as in lines 6-9 (“In response, we introduce … and dynamically plausible”) and lines 39-40 (“GTA is designed … dynamic plausibility”).
3. Related to weakness #2, while the paper shows GTA's effectiveness on standard benchmarks, it lacks validation in real-world applications where the dynamics and data distributions may differ significantly from simulated environments. This poses a question on the real-world applicability of GTA.
4. The paper does not include any experiments comparing GTA with model-based RL baselines. Although the authors state that both approaches perform data augmentation at different stages (data generation vs policy learning), the reviewer believes that a comparative study should be included. This is because both approaches learn a separate model to generate augmented data.

Minor:
- Line 48: “... any offline RL algorithms …” should be “... any offline RL algorithm …”.
- A pink curve is present in Figure 4 (b), but its legend is missing. The blue curve (denoting $\mu = 1.0$) is missing in this figure.

Technical Quality: 3 Clarity: 3

Questions for Authors:
1. Why is IQL the only chosen baseline (and not TD3-BC, CQL, MCQ) for the tasks listed in Table 2 (Adroit and FrankaKitchen)?
2. Why is DrQ+BC the only chosen baseline and Cheetah-run the only chosen pixel-based observation task in Table 3?
Why were results on the other baselines (IQL, TD3-BC, CQL, and MCQ) not presented for the pixel-based task(s)?
3. How compute-intensive (GPUs or hours) is running GTA on the pixel-based observation tasks?

Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2

Limitations:
1. As the authors have mentioned, while empirical results on the D4RL benchmark tasks show that GTA has low dynamic MSE, in tasks where dynamic violations have a critical impact, the performance boost may not be significant.
2. The current evaluation is limited to simulated tasks, and the effectiveness of GTA in real-world offline RL tasks remains to be validated.
3. As shown in Figures 4 (a)(b) and Table 25, the performance of GTA is sensitive to the hyperparameters ($\mu$ and $\alpha$), which might limit its applicability.

Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful review!

> **(Weakness 1, Limitation 3)** The performance of GTA is sensitive to the choice of hyperparameters, which might require extensive tuning for different tasks.

As illustrated in the tables in the general response, we demonstrate that GTA outperforms other data augmentation baselines with a single set of hyperparameters across locomotion tasks. This result indicates that we may not need to extensively tune such hyperparameters. Furthermore, we also provide a general recipe for selecting hyperparameters for a new task: choosing high $\mu$ and $\alpha$ for low-quality datasets to promote exploration.

> **(Weakness 2-1)** Dynamic plausibility is not explicitly enforced in GTA and might be a byproduct of generating high-rewarding trajectories.

As you mentioned, GTA does not explicitly enforce dynamic plausibility. Instead, it implicitly learns the dynamics through the trajectories, which inherit the true dynamics of the environment, resulting in the generation of plausible trajectories. We want to clarify that dynamic plausibility is not a byproduct of generating high-rewarding trajectories. Table 1 compares the dynamic MSE of all augmented trajectories to that of high-rewarding trajectories (top 10% in terms of reward). We found that the dynamic MSE in both cases is similar, validating our claim.

Table 1. Dynamic MSE of data generated with GTA using halfcheetah-medium-v2.

| | total trajectories | high-rewarding trajectories (top 10%) |
|-|-|-|
| Dynamic MSE ($\times 10^{-2}$) | 0.843 | 0.846 |

> **(Weakness 2-2, 3, Limitation 2)** GTA lacks validation in real-world applications where the dynamics and data distributions may differ significantly.

Thank you for suggesting a problem setting that enhances our manuscript. To verify the effectiveness of GTA on real-world data, we conducted further experiments on the NeoRL benchmark [1].
NeoRL is composed of datasets with narrower data distributions, enforced stochasticity, and aleatoric uncertainty to evaluate how offline RL algorithms behave in realistic environments. We train GTA on the halfcheetah-v3-medium-noise-1000 and halfcheetah-v3-low-noise-1000 datasets of the NeoRL benchmark to evaluate the performance. As shown in Table 2, GTA enhances the performance of the base algorithm, indicating that GTA can also generate trajectories with real-world datasets.

Table 2. Experiment results on the NeoRL benchmark. Experiments are conducted with 4 random seeds. The base algorithm is CQL.

| Env | Original | S4RL | GTA |
|:- |:- |:- |:- |
| Halfcheetah-v3-L-1000 | 4352.58 ± 79.81 | 4320.42 ± 76.65 | **4359.08 ± 46.74** |
| Halfcheetah-v3-M-1000 | 6162.49 ± 713.11 | 5759.13 ± 224.87 | **6173.46 ± 739.10** |

[1] Qin, Rong-Jun, et al. "NeoRL: A near real-world benchmark for offline reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 24753-24765.

> **(Weakness 4)** The paper does not include any experiments comparing GTA with model-based RL baselines.

Thank you for your insightful comment. As illustrated in the general response, we compare offline model-based RL methods, such as MOPO, MOReL, and COMBO, with GTA. We take the results of the baselines from their original papers. For GTA, we report the scores with TD3BC and CQL as base offline RL algorithms. As shown in Table 1 of the attached PDF, GTA outperforms offline model-based RL algorithms in locomotion tasks. This suggests that using a diffusion model to augment trajectories is more beneficial than learning and exploiting a single-transition dynamics model [2, 3].

[2] Janner, Michael, et al. "Planning with Diffusion for Flexible Behavior Synthesis." International Conference on Machine Learning. PMLR, 2022.

[3] Jackson, Matthew Thomas, et al. "Policy-guided diffusion." arXiv preprint arXiv:2404.06356 (2024).

> **(Question 1)** Why is IQL the only chosen baseline for Adroit and FrankaKitchen?
For the Adroit and FrankaKitchen tasks, IQL is the most reliable baseline among the offline RL algorithms we adopted (TD3BC, MCQ, CQL). TD3BC exhibits near-zero performance on these tasks. MCQ does not provide any hyperparameter setting for these tasks, while the main hyperparameter $\tau$ in MCQ significantly influences the performance of the algorithm. Although CQL offers hyperparameter settings for these tasks, its training is notoriously unstable, leading to significant performance fluctuations.

> **(Question 2)** Why is DrQ+BC the only chosen baseline and Cheetah-run the only chosen pixel-based observation task in Table 3? Why were results on the other baselines (IQL, TD3-BC, CQL, and MCQ) not presented for the pixel-based task(s)?

To evaluate GTA on more diverse algorithms and environments, we additionally conduct experiments on the walker-walk environment and add the BC algorithm as a baseline, as done in [4]. As shown in Table 3, we achieve better performance than other baselines across both environments and algorithms, indicating the generalizability of our method.

Table 3. Experiment results on pixel-based observation tasks. The mean scores in the table are calculated over 3 seeds.

| | BC | | | DrQ+BC | | |
|:-|:-|:-|:-|:-|:-|:-|
| | Original | SynthER | GTA | Original | SynthER | GTA |
| Cheetah-run | 40.0 | 30.0 | **41.1** | 45.8 | 31.2 | **49.8** |
| Walker-walk | 37.8 | 24.4 | **40.0** | 39.7 | 23.8 | **41.1** |

[4] Lu, Cong, et al. "Synthetic experience replay." Advances in Neural Information Processing Systems 36 (2024).

> **(Question 3)** How compute-intensive (GPUs or hours) is running GTA on the pixel-based observation tasks?

Please note that the diffusion model for pixel-based observation tasks generates a latent vector rather than raw pixel observations. On a single RTX 3090, training GTA in a pixel-based environment for 500k gradient steps requires 50 hours. We will fix the minor issues raised by the reviewer in our manuscript.
--- Rebuttal 2: Comment: Thank you for your detailed response and for providing additional supporting and clarifying empirical results. In line with reviewer **WvGz**, I am still skeptical about the paper's claims regarding GTA ensuring "dynamic plausibility". I would be more comfortable if such a claim is toned down; for instance, augmented trajectories generated from benchmark datasets using GTA exhibit dynamic plausibility. I am satisfied with the author's responses to the rest of my questions and concerns and have raised my rating from 5 to 6. --- Rebuttal Comment 2.1: Comment: We would like to express our gratitude for your thorough review and thoughtful comments on our rebuttal. Following your suggestion, we will tone down our statement from "ensuring dynamic plausibility" to "minimizing the degradation of dynamic plausibility". We apologize for the overclaiming of our proposed method. If you have any additional suggestions or considerations, please feel free to share them with us. Once again, we sincerely appreciate your thoughtful feedback.
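The "dynamic MSE" metric that recurs throughout this discussion can be illustrated with a small sketch. The reference dynamics here is a hypothetical linear model standing in for the environment's true (or learned) transition function, and the "generated" transitions are toy data, not the paper's augmented datasets.

```python
import numpy as np

def dynamic_mse(states, actions, next_states, dynamics_fn):
    """Mean squared error between reference-predicted and generated next states."""
    return float(np.mean((dynamics_fn(states, actions) - next_states) ** 2))

# Toy reference dynamics s' = A s + B a (hypothetical stand-in).
rng = np.random.default_rng(1)
A, B = 0.9 * np.eye(3), 0.1 * np.ones((3, 2))
ref = lambda s, a: s @ A.T + a @ B.T

s = rng.normal(size=(100, 3))
a = rng.normal(size=(100, 2))
# "Generated" transitions: true next states plus a small perturbation,
# mimicking an augmented dataset that mildly violates the dynamics.
gen_next = ref(s, a) + rng.normal(0.0, 0.05, size=(100, 3))

mse = dynamic_mse(s, a, gen_next, ref)   # small value => dynamically plausible
```

A lower value means the generated transitions stay consistent with the reference dynamics, which is how the $\mu$-ablation tables in the rebuttals above compare settings.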
null
null
null
null
null
null
Multi-Winner Reconfiguration
Accept (poster)
Summary: The authors study the multi-winner reconfiguration model in the approval setup. They focus on the following rules: AV, SAV, PAV, and CC. While AV and SAV can be solved in polynomial time, CC and PAV cannot. Therefore, the authors provide a more refined analysis of these latter two methods using the FPT approach. Strengths: The first contribution of this paper is the introduction of the new model. The second important contribution is a detailed study of four voting rules, particularly CC and PAV, since they cannot be computed in polynomial time. Besides, the paper is overall clearly written (although a bit dry). Weaknesses: It is a bit worrisome that there are side comments from the authors in the appendix. I also find one sentence in the “Conclusion” section a bit unusual. The authors wrote, “Our preliminary experimental investigations (see Appendix C) indicate that for most randomly generated committees, a reconfiguration path not only exists but can also be efficiently determined using a straightforward heuristic.” There is quite a large section regarding the experiments in the appendix. I think either this experimental part should be completely removed, or there should be a section (or subsection) about it in the main body. It is not a big issue, but it is a bit strange. To be honest, I don’t know whether in such a case I should comment on and review these experiments or not. Technical Quality: 3 Clarity: 3 Questions for Authors: . Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the helpful feedback. We wish to apologize for forgetting to remove the two resolved to-do notes in the appendix. Weakness: “I also find one sentence in the ‘Conclusion’ section a bit unusual...” We believe the experimental section is of interest to anyone who wishes to know some preliminary results on how long the reconfiguration paths are in practice and how easy they are to find. However, our paper’s main contribution is the model and the complexity results, and we did not wish to distract the readers from that. Also due to space constraints, we decided to put the description and analysis of the experimental study into the appendix. --- Rebuttal Comment 1.1: Comment: I acknowledge that I have read the author rebuttal.
Summary: The paper studies the multi-winner reconfiguration problem. The goal is to find a path transforming one winning committee into another without changing/decreasing the score too much along the path. An example of this problem is switching products for streaming providers. This paper studies the problem under four voting rules: CC, PAV, AV, and SAV. The authors show that the problem is solvable in polynomial time under AV and SAV, and conduct a detailed complexity analysis for CC and PAV with respect to multiple groups of parameters. Strengths: + Novelty: first to bring the reconfiguration problem into a social-choice setting. + Well-motivated with real-world examples. I am persuaded by the streaming-provider case. + Thorough and solid complexity analysis. Weaknesses: - Section 3 is not well-organized. It's almost a list of theorems and proofs without any high-level implications. - I can't follow Proposition 4's proof. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. What is the most important difference between your modeling of multi-winner reconfiguration and the reconfiguration problem in the previous work? 2. I'm a bit confused about the parameters of the complexity analysis. For example, $\ell$ is not an input in the problem definition, but your results contain $\ell$, and the proof of Proposition 4 says the path contains at most $\ell$ steps. Could you explain the relationship between the inputs of the computational problems, the parameters of the complexity analysis, and the constraints on the steps? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the helpful feedback. Q1: "What is the most important difference between your modeling of multi-winner reconfiguration and the reconfiguration problems in the previous work?". We’re not quite sure which reconfiguration problems in the previous work you would like us to discuss more, but we would like to emphasize that we’ve discussed differences to the graph reconfiguration problems and to the sequential multi-winner problem by Bredereck et al. in lines 86-87 and 106-111, respectively. In the following, we summarize the most relevant differences: - Graph reconfiguration problems have in general three types of modifications: token jumping, token addition, and token removal (token referring to the vertices). Our model generalizes token jumping. The crucial difference to the graph reconfiguration problems in the literature lies in the definition of closeness between two committees that may be on the reconfiguration path. We use two aspects: the score and the symmetric difference. In the literature, usually only a symmetric difference of size 2 is allowed, and all the solutions on the path must be valid solutions (e.g., each must be a vertex cover). We generalize both of these settings. - Our reconfiguration is also fundamentally different from the one introduced by Bredereck et al. First, the input is different. In our model, a starting and an end committee are additionally given, while in theirs no such committees are given. Second, our objective is to look for a path of committees that are close to each other, closeness being defined by the score and symmetric difference, whereas they look for a sequence of committees that satisfy additional constraints, such as requiring that alternatives remain in a number of consecutive committees. Third, their problem is NP-hard while our problem may be PSPACE-complete in general. Igarashi et al. study a reachability problem in the fair-division context. 
The underlying problem is different from ours and the reconfiguration step is more restricted. Ito et al. study how to reform envy-free matching by swapping an assigned item with an unassigned item. The setting is different from ours and the underlying studied problem as well. Q2: "I'm a bit confused about the parameter of the complexity analysis...". Our problem is in general PSPACE-complete, which implies that even a shortest reconfiguration path can be of exponential length wrt. m and n. Having the parameter \ell restricts the search space to paths of length at most \ell. This may help to design algorithms that scale well with \ell. For instance, Proposition 4(i) tells us that determining the existence of a reconfiguration path of length at most \ell can be done in polynomial time if both \ell and \delta_c have constant values. However, \ell does not always help. For instance, Proposition 2 implies that the problem remains NP-hard even if every voter approves only two alternatives, and we are looking for a reconfiguration path of length two. We will improve upon Proposition 4's readability. --- Rebuttal Comment 1.1: Comment: Thank you for your response! It addresses my questions. I will stand by my positive recommendation.
Summary: The paper studies the multiwinner setting with approval preferences. The authors propose a new framework of multi-winner reconfiguration, where the goal is to select a sequence of committees such that (1) the subsequent pairs of committees do not differ too much from one another, (2) the final committee is better or, at least, not "too much worse", from the initial one. The quality of a committee in the second part is measured using either Chamberlin-Courant scores, Proportional Approval Voting scores, Approval Voting scores or Satisfaction Approval Voting scores. The paper solves the following computational problem: having two committees W and W', and having fixed bounds on the maximal number of candidates that could be changed between committees and on the maximal score loss in comparison to the initial committee, decide whether it is possible to reconfigure W into W'. The question is polynomial for AV and SAV scores. For CC and PAV scores it is PSPACE-complete, while the authors propose several FPT algorithms and other complexity results for special cases where certain parameters of the model are constant. Strengths: The paper is very well-written, its structure and motivation are clear. I especially appreciate the real-world examples presented in the Introduction. The obtained results are technically challenging - as far as I checked, they are sound to me. Weaknesses: The only weakness that came to my mind is that the current definition of the multi-winner reconfiguration path is somewhat limiting --- for now it is impossible to apply it for other multiwinner rules that are not based on maximizing some kind of score (like the Method of Equal Shares, or the Phragmen's method), which otherwise would be a natural follow-up question of the paper. 
Hence, if anyone would like to further analyze different rules in this context, there would still be a need to reinvent the whole reconfiguration setup (and potentially, analyze all the four rules studied by the authors again, under the reinvented definition). This could limit the significance of the contribution, but it does not change my overall positive opinion about the paper. Technical Quality: 4 Clarity: 4 Questions for Authors: You can respond to my doubts in the "Weaknesses" section -- in particular, do you have an idea how this setting can be generalized to the rules not based on maximizing committee score? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The limitations are adequately addressed and there is no potential negative societal impact of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the feedback and an interesting question. Q: “do you have an idea how this setting can be generalized to the rules not based on maximizing committee score?” There are two ways to extend our model to other voting rules. - We could just follow the graph reconfiguration model and restrict the reconfiguration paths to winning (optimal) committees only. This does not require us to introduce any distance measure to an optimal committee. Hence, it works for all voting rules. - To allow for nearly optimal committees, we only need to define a distance measure from each committee to an optimal one. For example, Phragmén’s rule can be seen as a greedy approach to the leximax-Phragmén-rule, that is, minimizing the maximum voter load and subject to that, the second highest voter load, and so on. (See Rule 10 in the book “Multi-Winner Voting with Approval Preferences” by Lackner and Skowron.) We could set the loads of the voters under the selected starting committee as the initial “score”, define \delta_s as a vector as well, and only accept committees W’’ whose loads are lexicographically smaller than or equal to that of our starting committee W + \delta_s; that is, there is an integer i \in {0,...,n}, where n is the number of voters, such that the i - 1 highest loads of W’’ are equal to the i - 1 highest loads of W + \delta_s, and the i^th highest load of W’’ is smaller than the i^th highest load of W + \delta_s. These vector comparisons can also be translated to number comparisons, if so desired. This allows us to compare Phragmén to the four voting rules studied in our paper without reinventing the whole setup. --- Rebuttal Comment 1.1: Comment: Thank you, this makes sense. I believe it would be nice to add this to the discussion of potential future work. Since my main concern was addressed, I increase my score.
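The lexicographic comparison of load vectors described in the rebuttal above can be sketched in a few lines (a minimal illustration, not from the paper; the function name and example loads are hypothetical). Sorting both load vectors in decreasing order and comparing them as sequences implements exactly the rule described: compare the highest loads first, then the second highest, and so on.

```python
def leximax_leq(loads_w2, loads_ref):
    """Return True iff the load vector of a candidate committee W'' is
    lexicographically <= the reference vector (loads of W plus delta_s)
    after sorting both in decreasing order.

    Python compares lists element by element, which is exactly the
    lexicographic comparison of the sorted-descending load vectors.
    """
    return sorted(loads_w2, reverse=True) <= sorted(loads_ref, reverse=True)

# Strictly smaller maximum load than the reference -> accepted
print(leximax_leq([1.0, 2.0, 2.5], [3.0, 0.5, 0.5]))  # True
# Equal maximum load, but larger second-highest load -> rejected
print(leximax_leq([3.0, 2.0, 0.0], [3.0, 1.0, 1.0]))  # False
```

The unsorted inputs show that the helper does not assume any particular voter ordering; ties on the first i - 1 highest loads fall through to the i-th comparison automatically.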
Summary: The paper studies an attractive reconfiguration problem in the context of multi-winner elections. Suppose we have an n-voter m-candidate approval election and work with a committee-scoring voting rule outputting size-k committees. Fix such a rule r (in the paper, this is either approval voting, proportional approval voting, satisfaction approval voting, or Chamberlin-Courant). The goal is, given two committees W != W’ such that their score under r differs by at most some number x, and given a number y, to compute (or decide the existence of) a chain of committees W = W_0, W_1, …, W_t = W’ such that any two consecutive committees’ scores differ by at most x under r and the symmetric difference between them has size at most 2y (more precisely, the number of added and removed elements between any two consecutive committees are both at most y). The authors study the classical and parameterized complexity of this problem, almost tracing the complete landscape of its complexity with respect to a large number of natural parameters and combinations thereof. The paper also provides an experimental appendix on computing such chains in practice. Strengths: The paper is quite well-written and motivated. I enjoyed reading it. The parameterized complexity analysis is carefully executed, with well-chosen parameters and interesting findings. The paper’s initial segment introduces concepts at an appropriate pace, giving examples where they are due. I was particularly happy with the existence of this sentence: “We will see that all intractability results already hold for the restricted case (x = 0, y = 1), while all algorithmic results hold for the general case.” I think a paper like this could be a nice read for a general AI/ML audience (which is relatively rare for parameterized complexity papers). Moreover, I think the findings will be interesting to the specialized audience interested in multi-winner elections and/or voting in general. 
The experiments in the appendix clearly add a bit more color to the paper, which would otherwise be a pure complexity paper (I believe these should be more prominently featured earlier in the paper, even if just for a few sentences). Weaknesses: - One could argue that such “heavy” parameterized complexity papers shouldn’t appear in AI venues. While I tend to agree with this statement in general, as I already said, this paper could be a welcome exception. - The model permits chains of committees where a certain alternative is included/excluded multiple times throughout the chain. In clearer terms: it would be strange if, say, Netflix repeatedly added and removed a certain movie throughout the course of 1 month (extreme example: the movie is available only every second day). I’d say it’s more realistic to ask that once a certain alternative in W \setminus W’ is removed, it should never be added back. (**) - L131-138: {I am familiar with these definitions and I understand what you meant to write.} The way you work here with multi-winner rules implicitly assumes that they are irresolute (i.e., nondeterministic = return a set of solutions), since otherwise neutrality is almost meaningless; e.g., say m = 2 and k = 1 and we have two identical candidates, which one would a neutral resolute rule choose? This also makes me wonder about anonymity (outcome does not depend on the names of the voters). What do you actually assume here? (*) I am positive none of these omissions break your results in any way, but I think this should be clarified and revised in the paper. - L190-191: I think the correspondence only holds for paths that start at W (the vertex set of the graph is defined in terms of the score of W, so the statement as made can’t be true). I understand the idea you wanted to convey though, i.e., we care about subpaths that are to be used in a longer path that starts at W. 
- I had great difficulty understanding the first half of the proof of Proposition 1 (the one appealing to previous work). This is not because the rough idea is unclear to me, but because of the details; e.g., “determining whether a committee is on a reconfiguration path” seems as hard as answering the whole question. Instead you might have meant “whether it’s in the vertex set of the graph”. I would advise to revise this part. Minor: - You should highlight more in the first few pages that the path need not necessarily be a shortest path. In hindsight, it’s never stated otherwise, and there is no reason why one would believe that, but I still somehow managed to think it’s shortest for a few minutes (maybe because one of the parameters you introduce early on is the length of the shortest one!). - L56: It can’t be PSPACE-complete because it’s not a decision problem. I assume you wanted to say PSPACE-hard. - L39: The “But…” sentence reads a bit strange. Consider adding a comma or combining with the previous sentence. - Abstract: “minor yet impactful modifications” sounds a bit weird in this context. Instead of helping me, the addition of the word “impactful” made me start pondering what “impactless” changes would even mean in this model. The “meaningful” you use on L24 is a bit better, but really I wonder whether an adjective is even needed here. - L128: “If not” -> “Unless” - L129: Maybe use \emph{} over the word “type” to show that this is a definition. - L165: “Let” -> “Assume” - L178: “we use” -> “we write” - L194: Remove “the”. - L195: “there is no” -> “hence no” - I understand the need for them, but LaTeX is just not very appropriate for imbricated proofs (i.e., Claim 1.1 inside the proof of Theorem 1). The \qed symbol above L234 looks slightly strange if the reader is not accustomed to it. - L245: “remains” seems inappropriate here since the former part of the statement doesn’t concern W[1]-hardness. 
Another comment I will make here, but more generally applicable: it might be worth saying that the proof uses an optimal committee in the statement of the theorem (this is announced earlier in the paper, so it would be nice to have this clearly mentioned also in the theorem statements). I understand that this might have the side-effect of overburdening some statements, but I would ask that you consider clarifying this when revising the paper. - L267: “approach [to] the reconfiguration” - L294: some early intuition as to why this distinction will be needed would be nice to have. - Footnote on page 7: “is” -> “was” - In general: I would vote to use the same counter for Theorems, Lemmas and Propositions (it looks a bit strange at times, like on page 7). - L390: Remove “as to”. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Please clarify (*) above. 2. Would any of your results still apply to the alternative setting described in (**) above? 3. You mention that your hardness results apply even to the setting of working exclusively with optimal committees, which I personally find very attractive (and maybe it should even be somehow mentioned in the technical statements, similar to a point I made earlier). However, both PAV and CC are hard to compute, so it could be unrealistic to assume that access to optimal committees is available. Would any of your findings translate to assuming that the starting committee is the output of some approximation of PAV or CC? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: In general, the paper is honest about the quality of the work and where the unknown starts. I have no complaints in this regard. No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the detailed and helpful feedback. Q1: "Please clarify (*) above. What do you actually assume here?" We assume that the voting rules are irresolute, as we look into the case where there are multiple optimal committees and we consider score-based voting rules without tie breakers. Our considered score-based voting rules are neutral. We will make this clearer in our final revision. Although we didn’t discuss anonymity, our algorithmic results for the score-based voting rules (i.e., Propositions 3-5) do work for anonymous and non-anonymous voting rules. The remaining algorithmic results are for CC, PAV, AV, and SAV, which are known to be anonymous. Q2: “Would any of your results still apply to the alternative setting described in (**) above?" Yes, Proposition 3, Proposition 4(i), and Lemma 1(i) would still hold. Note that for Proposition 3, the running time would be longer; however, it would still be FPT wrt. m and XP wrt. k. For the hardness, we would lose PSPACE-hardness by adding this restriction, because the length of a valid reconfiguration path would be polynomially bounded in m. Due to this, a polynomial-size certificate would exist to verify a solution, meaning that the problem would be contained in NP. Q3: “Would any of your findings translate to assuming that the starting committee is the output of some approximation of PAV or CC". All algorithmic results should still hold. We conjecture that the hardness results should also hold. Second weakness: "The model permits chains of committees,...." We chose to allow removing an alternative and adding it back to a committee because it may be necessary to do so in order to obtain a reconfiguration path. For example, consider a profile with six voters, called v1 to v6, and five alternatives, called a to e. The alternatives are approved as follows: A(a)={v1,v2,v3}, A(b)={v4,v5,v6}, A(c)={v1,v2,v6}, A(d)={v3,v4,v5}, A(e)={v1}. 
Suppose we use the CC rule. If we set \delta_s=0 and \delta_c=1, then there is a reconfiguration path between the committees {a,b,e} and {c,d,e} only if we are allowed to switch out e and switch it back in afterwards. This means that a store that tries to keep an optimal set of items on display may want to use such a solution. Fourth weakness: "L190-191:..." You are correct; this is a typo and will be fixed. Fifth weakness: "I had great difficulty understanding the first half of the proof of Proposition 1..." Thanks for the comment. We will improve the first half of the proof in our final revision by describing Ito et al.’s general PSPACE-membership more clearly.
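The example in the rebuttal above can be checked mechanically. The sketch below (a brute-force breadth-first search with hypothetical helper names, not the authors' algorithm) enumerates single-swap reconfiguration steps (\delta_c = 1) whose CC score never drops below the starting score (\delta_s = 0), and confirms that every such path from {a,b,e} to {c,d,e} must switch e out and back in:

```python
from collections import deque

# Approval profile from the rebuttal: voters v1..v6, alternatives a..e, k = 3.
approvers = {
    "a": {1, 2, 3}, "b": {4, 5, 6}, "c": {1, 2, 6}, "d": {3, 4, 5}, "e": {1},
}
alts = sorted(approvers)

def cc_score(committee):
    """Chamberlin-Courant score: number of voters approving some member."""
    return len(set().union(*(approvers[a] for a in committee)))

def find_path(start, goal, must_keep=frozenset()):
    """BFS over committees reachable by single swaps (delta_c = 1) whose
    CC score never drops below the starting score (delta_s = 0)."""
    opt = cc_score(start)
    queue, seen = deque([(start, (start,))]), {start}
    while queue:
        W, path = queue.popleft()
        if W == goal:
            return path
        for out in W - must_keep:      # alternative switched out
            for into in alts:          # alternative switched in
                if into in W:
                    continue
                W2 = (W - {out}) | {into}
                if W2 not in seen and cc_score(W2) >= opt:
                    seen.add(W2)
                    queue.append((W2, path + (W2,)))
    return None

start, goal = frozenset("abe"), frozenset("cde")
path = find_path(start, goal)
# A path exists, and e temporarily leaves the committee along it; forbidding
# e's removal via must_keep makes the instance infeasible, as claimed.
```

Running the search yields, e.g., {a,b,e} -> {a,b,c} -> {a,c,d} -> {c,d,e}, while `find_path(start, goal, must_keep=frozenset("e"))` returns `None`: the only full-coverage committees containing e are the two endpoints, which differ by more than one swap.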
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Rethinking Model-based, Policy-based, and Value-based Reinforcement Learning via the Lens of Representation Complexity
Accept (poster)
Summary: The paper presents an analysis of the representation complexity, i.e., the necessary complexity of a circuit, across different paradigms in reinforcement learning. The authors show, with several reductions to well-known theoretical complexity classes, that MDPs exist in which representing a model is "easy" while representing a value function or policy is "hard". Strengths: The paper dives deep into an established conjecture in reinforcement learning, that learning a model of an environment can be easier than learning a value function. With a clever reduction to a simple exemplary class of MDPs, the authors show that if "difficulty" is measured by the complexity of the circuit necessary to represent the respective function, then classes of problems exist in which this is indeed true. The mathematical derivations and proofs are straightforward and seem correct to the extent that I am able to verify them, although my expertise lies in RL and not complexity theory. They provide stronger results than previous work in the same direction [1] and the paper seems like a good follow-up work. [1] On Representation Complexity of Model-Based and Model-Free Reinforcement Learning, Hanlin Zhu, Baihe Huang, Stuart Russell, ICLR 2024. Weaknesses: I have one major confusion which I list under questions since it is hopefully easily addressable in the rebuttal. I think the problem of the "strictness" of the hierarchy should be very carefully addressed. It only weakens the conclusion, all the proofs are correct as far as I was able to verify, but I would encourage the authors to engage carefully. This is my main reason for recommending rejection: I think the work is interesting and meaningful, but the conclusion seems wrong due to the problem outlined below. I am very happy to discuss this point in the rebuttal and raise my scores accordingly if appropriate. 
Technical Quality: 3 Clarity: 3 Questions for Authors: The authors explicitly call the studied phenomenon a hierarchy, but I am uncertain if this is actually true. It seems intuitive that there should be MDPs in which the situation is reversed: representing the value function is easy, while representing the full model is difficult. This seems especially true if we don't talk about strict error-free representation, but allow for errors. One intuitive reason for me to conjecture this is the existence of an arbitrarily complex model (e.g. one that transitions from a 3-SAT formula with high likelihood to one of two states depending on whether the formula is satisfiable) but with 0 reward and therefore a trivial value function as well. I think this is my biggest problem with the paper so far: the construction implies the existence of one direction for the hierarchy, but does not imply that the other direction cannot exist. This also makes the empirical section a bit more difficult: Why should we assume that the Mujoco locomotion environments fall into the category of problems described in the paper and not into the alternative? Empirically this seems to be true. Is there some reason to believe that "realistic" MDPs will exhibit this complexity relationship more, or is this an artifact of the choice of environments? Other, minor issues: - All figures are barely legible at a "reasonable" zoom level; I would encourage the authors to quickly redesign these. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We appreciate your feedback and will address each of your concerns individually. --- **Q1:** The authors explicitly call the studied phenomenon a hierarchy, but I am uncertain if this is actually true. It seems intuitive that there should be MDPs in which the situation is reversed: representing the value function is easy, while representing the full model is difficult. This seems especially true if we don't talk about strict error-free representation, but allow for errors. One intuitive reason for me to conjecture this is the existence of an arbitrarily complex model (e.g. one that transitions from a 3-SAT formula with high likelihood to one of two states depending on whether the formula is satisfiable) but with 0 reward and therefore a trivial value function as well. I think this is my biggest problem with the paper so far: the construction implies the existence of one direction for the hierarchy, but does not imply that the other direction cannot exist. **A1:** Thank you for your question. We have acknowledged in our paper that our complexity hierarchy does not hold for all MDPs, and we provide a detailed discussion in Appendix C.2. For the convenience of the reviewer, we paste Appendix C.2 below: > First, we wish to underscore that our identified representation complexity hierarchy holds in a general way. Theoretically, our proposed MDPs can encompass a wide range of problems, as any $\mathsf{NP}$ or $\mathsf{P}$ problems can be encoded within their structure. More crucially, our thorough experiments in diverse simulated settings support the representation complexity hierarchy we have uncovered. 
In fact, we have a generalized result establishing a hierarchy between policy-based RL and value-based RL, as stated in the following proposition: > **Proposition:** Given a Markov Decision Process (MDP) $\mathcal{M}=(\mathcal{S}, \mathcal{A}, H, \mathcal{P}, r)$, where $\mathcal{S}\subset\{0,1\}^n$ and $|\mathcal{A}|=O(\mathsf{poly}(n))$, the circuit complexity of the optimal value function will not fall below the optimal policy under the $\mathsf{TC}^0$ reduction. > However, our representation complexity hierarchy is not valid for all MDPs. For instance, in MDPs characterized by complex transition kernels and zero reward functions, the model's complexity surpasses that of the optimal policy and value function (*this aligns with the "counterexample" noted by the reviewer*). However, these additional MDP classes may not be typical in practice and could be considered pathological examples from a theoretical standpoint. We leave the fully theoretical characterizing of representation hierarchy between model-based RL, policy-based RL, and value-based RL as an open problem. For instance, it could be valuable to develop a methodology for classifying MDPs into groups and assigning a complexity ranking to each group within our representation framework. **Q2:** This also makes the empirical section a bit more difficult: Why should we assume that the Mujoco locomotion environments fall into the category of problems described in the paper and not into the alternative? Empirically this seems to be true. Is there some reason to believe that "realistic" MDPs will exhibit this complexity relationship more, or is this an artifact of the choice of environments? **A2:** Thank you for your question. It's difficult to argue that Mujoco environments fall into the category of constructed MDPs for theoretical understanding. 
However, constructed MDPs and Mujoco environments (or more general real-world applications) share similar features: they encode complex decision-making problems into relatively simple models (reward and transition). Our theoretical framework aims to characterize a broad range of real-world problems by incorporating $\mathsf{P}$ or $\mathsf{NP}$ problems into the transition. This way of study aligns with the theoretical understanding of deep learning, where researchers typically demonstrate or explain phenomena (such as benign overfitting) in ideal theoretical models (such as linear regression or two-layer neural networks). Moreover, we want to emphasize that our work uses a completely new perspective --- the expressive power of neural networks --- to study RL problems, which is highly related to our experiments and can be considered an important step in bridging the gap between theory and practice. Finally, for the reasons stated above, we believe the demonstrated representation hierarchy, although not universal, characterizes a wide range of RL problems and is definitely not an artifact of the choice of environments. --- We sincerely hope the reviewer will reconsider their rating of our work, which is the first comprehensive study of representation complexity. We are also open to further discussion if the reviewer has any concerns about the correctness of our proof or the contribution of our research. --- Rebuttal Comment 1.1: Title: Answer Comment: Dear authors, thanks for the clarifications. I acknowledge that I missed the discussion in appendix C, I feel like this is a very important point that might deserve some more prominent space in the main paper. I still don't fully agree with the argument: the benchmarks are investigated here are all designed to have little to no task irrelevant features such as distractions. Generalizing from these to RL problems in general seems like it can mislead the community. 
The results also violate a lot of commonly held intuition: in many cases predicting the exact consequences of your actions is very hard, e.g. how many blades of grass are crushed by a step on the lawn, but from the perspective of any task, this is irrelevant. I wish I could provide a nice reference here, but I can't find one at the moment. I don't have a problem at all with your formal statements and I do applaud the novel and insightful technique. My whole problem is with the nuance of the framing, which I do think is very important to get right to frame your (very interesting) results in the right way. I am willing to increase my score provided you can give the nuance of this discussion some space in the main paper. Since I want to be an optimistic and constructive reviewer, I have updated my score to recommend acceptance; I hope you address my concerns. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our response and update the score. We appreciate your feedback and will certainly follow your suggestions to polish our paper.
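The zero-reward counterexample discussed in this thread can be illustrated numerically (a minimal sketch, not from the paper; the state/action counts, horizon, and random kernel are arbitrary choices): value iteration on an MDP with an arbitrarily complicated transition kernel but identically zero rewards yields an identically zero optimal value function.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, horizon = 8, 3, 5

# Arbitrary (here: random) transition kernel; P[s, a] is a distribution over
# next states. Its complexity becomes irrelevant once the reward is zero.
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=-1, keepdims=True)
r = np.zeros((n_states, n_actions))  # zero reward everywhere

# Finite-horizon value iteration: V_h(s) = max_a [ r(s,a) + E_{s'}[V_{h+1}(s')] ]
V = np.zeros(n_states)
for _ in range(horizon):
    Q = r + P @ V        # shape (n_states, n_actions)
    V = Q.max(axis=1)

print(np.allclose(V, 0.0))  # True: the optimal value function is trivial
```

This only illustrates the direction of the reviewer's point; the paper's hierarchy concerns the circuit complexity of representing these objects, not the cost of computing them by value iteration.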
Summary: This paper studies three RL paradigms: model-based RL, policy-based RL, and value-based RL from the perspective of representation complexity. The authors demonstrate that representing the model emerges as the easiest task, followed by the optimal policy, with the optimal value exhibiting the highest representation complexity. Strengths: - The paper is well written. The problem studied in this paper is well-motivated and very interesting. - Analyzing model-based RL, policy-based RL, and value-based RL from the perspective of representation complexity, transitioning from simple scenarios to broader ones, is impressive. Weaknesses: - From the perspective of MLP, using simple 2 or 3-layer MLPs to calculate approximation error to validate conclusions provides limited insights for modern deep RL. - Although it is a theory paper, I would have liked to see more experiments designed to validate the conclusions. Technical Quality: 3 Clarity: 3 Questions for Authors: - Intuitively, could you explain in detail the fundamental reasons for the different representation complexities of the optimal policy and optimal value function under different settings in Sec. 3 and Sec. 4? - Maybe you should describe more examples to intuitively understand the representation complexity of the model, optimal policy, and optimal value function. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper is mainly theoretical, and no algorithm is implemented. There is no specific potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We appreciate your feedback and will address each of your concerns individually. --- **W1:** From the perspective of MLP, using simple 2 or 3-layer MLPs to calculate approximation error to validate conclusions provides limited insights for modern deep RL. **Response:** We appreciate the reviewer's point regarding the complexity of MLPs in modern deep RL. However, our experiments are conducted in Mujoco environments, where the approximation challenges are relatively modest. In these settings, MLPs with 3 layers and 256 hidden units each have been shown to perform near the state-of-the-art (SOTA) when utilizing algorithms such as TD3 and SAC. This indicates that the complexity of the function approximators required for these environments is not high. Consequently, using simple 2 or 3-layer MLPs as our testbed is a reasonable choice for the scope of our study. Moreover, as Mujoco environments are classic and extensively utilized benchmarks within the deep RL community, insights gained from these experiments hold significant relevance. **W2:** Although it is a theory paper, I would have liked to see more experiments designed to validate the conclusions. **Response:** We recognize the value that additional experiments would bring, particularly in complex environments (such as robotics), to reinforce our conclusions. Unfortunately, due to constraints in resources, extensive experimentation in diverse real-world scenarios was not feasible within the scope of this study. However, we selected Mujoco environments for our experiments because they are classic and widely recognized benchmarks in the deep reinforcement learning community. The experiments are conducted in 4 environments and repeated with 5 random seeds, and the results are therefore sufficient to support our theoretical results and contribute to the general understanding of our theoretical claims. 
If the reviewer has more concrete suggestions to enhance the experiments, we are happy to incorporate them. **Q1 & Q2:** (1) Intuitively, could you explain in detail the fundamental reasons for the different representation complexities of the optimal policy and optimal value function under different settings in Sec. 3 and Sec. 4? (2) Maybe you should describe more examples to intuitively understand the representation complexity of the model, optimal policy, and optimal value function. **Response:** Thank you for your question. The fundamental difference in representation complexity for optimal policies and value functions in Sec. 3 and Sec. 4 stems from the nature of the decision-making process in various reinforcement learning (RL) scenarios. In Sec. 3, we consider environments with long-term dependencies where the optimal policy needs to encapsulate complex strategies due to the high branching factor of possible future state trajectories. Here, the representation complexity of both the policy and the value function is high since they must incorporate information about the consequences of actions over many time steps. This is akin to games like chess, where the policy must be sophisticated enough to navigate a vast tree of possible moves. In contrast, Sec. 4 addresses environments with shorter planning horizons. The optimal policy in such settings can be less complex because it can focus on immediate states, which is sufficient for making a good decision. However, the optimal value function may still require a complex representation to accurately predict long-term returns from any given state due to the potential variability in future rewards. An example provided is a robotic gripping task where the immediate action (gripping with the correct force) is straightforward, but the long-term implications (successfully gripping various objects without dropping) add complexity to the value function. 
In the [Mujoco environment](https://www.gymlibrary.dev/environments/mujoco/index.html) (e.g., [Ant-v4](https://www.gymlibrary.dev/environments/mujoco/ant/)), the transition kernels and reward functions are relatively simple functions governed by a few rules. In contrast, the optimal policy and optimal value function are so complex that they require a large neural network to approximate and a well-designed RL algorithm to learn. In summary, the complexity of the optimal policy and value function representations is intrinsically linked to the depth of foresight required for decision-making in a given RL scenario. We will add more analysis in our paper to illustrate this distinction and its implications for the design of RL algorithms. --- Please let us know if you have any further questions. If your concern is addressed, we would appreciate it if you would reconsider your score in light of our clarification. --- Rebuttal Comment 1.1: Comment: Dear reviewer, Thank you again for reviewing our paper. As the discussion period ends soon, we'd like to know if our response has adequately addressed your concerns. If not, we're happy to provide further clarification. We greatly appreciate your time and support. Best, Authors --- Rebuttal Comment 1.2: Comment: Thank you for your detailed response to my review. The authors' rebuttal has addressed some of my concerns. My intuition remains in favor of acceptance, though my knowledge of the area of representation complexity is limited.
Summary: This paper delves into understanding the inherent representation complexities associated with three different RL categories: model-based RL, policy-based RL, and value-based RL. Drawing on computational complexity theory and the expressiveness of neural networks (MLPs), the paper posits a hierarchy in which representing the underlying model is the simplest, followed by the optimal policy, and finally, the optimal value function, which is the most complex to represent. Theoretical analyses and empirical results support these claims. Strengths: - The paper offers a new understanding that could improve RL algorithms by examining RL paradigms through the lens of representation complexity. - The paper includes deep RL experiments that align with the theoretical findings, offering practical evidence of the proposed hierarchy. - The paper bridges theoretical insights with deep RL by discussing the expressiveness of MLPs. Weaknesses: - While the theoretical insights are significant, the paper does not extensively explore their direct implications for real-world RL applications. - Previous experiments show model-based RL is more sample-efficient than model-free RL, aligning with this paper's finding that representing the dynamic model is easier. However, as training progresses, model-free methods often outperform model-based ones. Could the theories in the paper explain this? While the policy is more challenging to represent initially, it allows model-free methods to optimize more efficiently once learned. - Could the proposed hierarchy of representation complexity be applied to other types of neural network architectures beyond MLPs, such as transformers? Technical Quality: 3 Clarity: 3 Questions for Authors: Please answer the points mentioned in the weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper does not discuss its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We appreciate your feedback and will address each of your concerns individually. **Q1:** While the theoretical insights are significant, the paper does not extensively explore their direct implications for real-world RL applications. **A1:** Thank you for recognizing the theoretical insights of our work. In fact, our research empirically demonstrates a consistent representation complexity hierarchy, accompanied by a theoretical understanding from the perspective of deep neural networks (a completely new perspective in RL theory!). Our work also has implications for explaining why model-based RL is more sample-efficient. We view our work as the beginning of a comprehensive understanding of representation complexity in RL; therefore, further direct implications for RL algorithms in real-world applications are left for future work. **Q2:** Previous experiments show model-based RL is more sample-efficient than model-free RL, aligning with this paper's finding that representing the dynamic model is easier. However, as training progresses, model-free methods often outperform model-based ones. Could the theories in the paper explain this? While the policy is more challenging to represent initially, it allows model-free methods to optimize more efficiently once learned. **A2:** Our theoretical analysis focuses on the **representation complexity** of RL paradigms, explaining the initial sample efficiency of model-based methods. We recognize that as training advances, the direct policy representation in model-free methods may lead to more efficient optimization, potentially outperforming model-based approaches. This difference primarily stems from the distinct **optimization properties** of various reinforcement learning algorithms. We emphasize that **representation complexity and optimization properties are two parallel and equally important aspects**. 
Our work fills a gap in understanding representation complexity in RL theory. We fully agree with the reviewer that integrating optimization efficiency with representation complexity to comprehensively understand the performance differences between various RL algorithms is a promising (and challenging) direction for future research. We appreciate the reviewer highlighting this important area of investigation. **Q3:** Could the proposed hierarchy of representation complexity be applied to other types of neural network architectures beyond MLPs, such as transformers? **A3:** Our hierarchy of representation complexity **can be applied to the transformer architecture**. Please see the **general response** for the details. --- Please let us know if you have any further questions. If your concern is addressed, we would appreciate it if you would reconsider your score in light of our clarification. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal! My concerns have been addressed, so I decide to keep my positive rating. --- Reply to Comment 1.1.1: Comment: We are happy to hear that we've addressed your concerns. Thank you for your valuable time reviewing our paper.
Summary: This paper delves into the representation complexity in different RL paradigms. It focuses on the function class needed to represent the underlying model, optimal policy, or optimal value function. Strengths: 1. The study uses time complexity and circuit complexity to theoretically analyze the representation complexity among RL paradigms. It introduces new classes of MDPs (3-SAT MDPs, NP MDPs, CVP MDPs, and P-MDPs) to showcase the differences in complexity. 2. The paper finds that models and policies can often be effectively represented by MLPs, while optimal value functions face limitations, providing insights that may inform future research. Weaknesses: See questions. Technical Quality: 3 Clarity: 3 Questions for Authors: I apologize for my limited familiarity with the representation complexity area. I have read the paper thoroughly and examined the theorems. I appreciate your work, but I feel I cannot make a solid judgment. So if my questions are off the mark, please correct me. 1. Are the findings regarding the representation complexity hierarchy consistent across a wide range of task settings, or do they vary significantly with different types of tasks? 2. Can you provide case studies or real-world examples for understanding the representation complexity hierarchy that has led to improved sample efficiency in practical applications? I understand you have provided some explanations in the paper. If this question adds extra burden, feel free to disregard it. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors do not have an explicit limitations section or paragraph. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer for the positive feedback and appreciation for our work. **Q1:** Do the findings regarding the representation complexity hierarchy hold consistently across different task settings, or do they vary significantly with various types of tasks? **Response:** Yes, our findings on the representation complexity hierarchy demonstrate consistent behavior across a diverse array of task settings. (i) Theoretically, we can encode *any* $\mathsf{NP}$ or $\mathsf{P}$ problem into the construction of MDPs and establish the desired representation complexity hierarchy, characterizing a wide range of problems. (ii) Empirically, we have conducted experiments across various simulated environments, each designed to test different aspects of task complexity. These environments range from simple binary classification tasks to more complex structured prediction problems. In all these settings, the hierarchy we have identified remained robust and consistent. Thank you for raising this important question and we will further emphasize this in our revision. **Q2:** Can you provide case studies or real-world examples for understanding the representation complexity hierarchy that has led to improved sample efficiency in practical applications? **Response:** Thank you for your question. Our research findings indicate that the model typically benefits from the lowest representation complexity, which may explain why model-based reinforcement learning generally achieves better sample efficiency in real-world applications (see Appendix C.1 for more details). Moreover, we'd like to emphasize that our work primarily focuses on providing the first comprehensive theoretical understanding of representation complexity across various reinforcement learning paradigms. We hope this understanding will inspire follow-up empirical work and lead to practical advancements in sample efficiency. 
--- Rebuttal Comment 1.1: Title: Thank the authors for the rebuttal! Comment: Thank you for the detailed rebuttal! My concerns have been resolved. Apologize again for my limited knowledge in your area. Wish you good luck with this paper! --- Reply to Comment 1.1.1: Comment: We are pleased that our response has addressed your concerns. Thank you for taking the time to review our paper.
Rebuttal 1: Rebuttal: We appreciate all the reviewers for reviewing our paper. We have provided comprehensive responses separately and demonstrate in our general response that our findings can be extended to deep reinforcement learning with **transformer** architectures. Here are some informal theorems and proof sketches. We will incorporate these theorems in the next version of this paper. > **Theorem:** Assuming that $\mathsf{TC}^0\neq\mathsf{NP}$, the optimal policy $\pi_1^*$ and optimal value function $Q_1^*$ of the $n$-dimensional 3-SAT MDP and $\mathsf{NP}$ MDP defined with respect to an $\mathsf{NP}$-complete language $\mathcal{L}$ cannot be represented by a Transformer with constant layers, polynomial hidden dimension (in $n$), and ReLU as the activation function. > **Theorem:** Assuming that $\mathsf{TC}^0\neq\mathsf{P}$, the optimal value function $Q_1^*$ of the $n$-dimensional CVP MDP and $\mathsf{P}$ MDP defined with respect to a $\mathsf{P}$-complete language $\mathcal{L}$ cannot be represented by a Transformer with constant layers, polynomial hidden dimension (in $n$), and ReLU as the activation function. > **Proof:** According to Lemma I.7 in our paper and the previous work [1], a Transformer with logarithmic precision, a fixed number of layers, and a polynomial hidden dimension can be simulated by an $\mathsf{L}$-uniform $\mathsf{TC}^0$ circuit. On the other hand, the computation of the optimal policy and optimal value function for the 3-SAT MDP and $\mathsf{NP}$ MDP is $\mathsf{NP}$-complete, and the computation of the optimal value function for the CVP MDP and $\mathsf{P}$ MDP is $\mathsf{P}$-complete. Therefore, the theorem holds under the assumptions $\mathsf{TC}^0\neq\mathsf{NP}$ and $\mathsf{TC}^0\neq\mathsf{P}$. > **Theorem:** The reward function $r$ and transition kernel $\mathcal{P}$ of the $n$-dimensional 3-SAT MDP and $\mathsf{NP}$ MDP can be represented by a Transformer with constant layers, polynomial hidden dimension (in $n$), and ReLU as the activation function. 
> **Theorem:** The reward function $r$, transition kernel $\mathcal{P}$, and optimal policy $\pi^*$ of the $n$-dimensional CVP MDP and $\mathsf{P}$ MDP can be represented by a Transformer with constant layers, polynomial hidden dimension (in $n$), and ReLU as the activation function. > **Proof Sketch:** It is important to note that the MLP is a submodule of the Transformer. According to Theorems 5.2 and 5.4, an MLP with constant layers, polynomial hidden dimension (in $n$), and ReLU activation can represent these functions. Given an input sequence of states, the Transformer can just use the MLP module to calculate the corresponding functions. [1] William Merrill and Ashish Sabharwal. The parallelism tradeoff: Limitations of log-precision transformers. Transactions of the Association for Computational Linguistics.
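Taken together, the four theorems above can be summarized compactly (the ordering notation below is ours, used informally for "no harder to represent than"):

```latex
% Informal summary of the representation hierarchy for the constructed MDPs,
% assuming TC^0 != P and TC^0 != NP:
\underbrace{r,\ \mathcal{P}}_{\substack{\text{model: constant-layer}\\ \text{Transformer suffices (both } \mathsf{P}\text{ and }\mathsf{NP}\text{ MDPs)}}}
\;\preceq\;
\underbrace{\pi^*}_{\substack{\text{policy: representable for }\mathsf{P}\text{ MDPs,}\\ \text{not for }\mathsf{NP}\text{ MDPs}}}
\;\preceq\;
\underbrace{Q_1^*}_{\substack{\text{value: not representable}\\ \text{even for }\mathsf{P}\text{ MDPs}}}
```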
NeurIPS_2024_submissions_huggingface
2024
Mitigating Reward Overoptimization via Lightweight Uncertainty Estimation
Accept (poster)
Summary: Quantifying the uncertainty of the reward model output can mitigate the issue of reward over-optimization. This paper introduces a lightweight method for quantifying reward uncertainty in RLHF, which can be integrated into existing trained reward models. The authors then propose a distributionally robust optimization procedure to counter overoptimization during policy improvement. Experimental results verify the proposed method. Strengths: 1. The paper is generally easy to follow. 2. Uncertainty estimation is an important topic in LLMs and worth specialized study. 3. The proposed method performs well in experiments. Weaknesses: 1. The proposed method requires calculating the matrix inverse $M_D^{-1}$, which can be numerically unstable and time-consuming when the dataset size $N$ is big. 2. The theoretical motivation (Theorem 3.1) has a substantial gap from the proposed practical method, e.g., requiring an infinitely wide neural network. 3. The proposed uncertainty measure $U^{CI}$ has no statistical justification/guarantee and seems to only upper bound the difference between the true reward and the learned reward, as seen in Eq. (4). It is unclear if $U^{CI}_{x,y}$ can measure the "uncertainty" of the estimated reward. And if so, does it measure epistemic or aleatoric uncertainty? And is it possible to construct any statistically valid confidence set based on the proposed uncertainty measure? 4. The proposed method requires reference responses, which is a non-standard and stronger assumption for RLHF's reward learning and policy optimization. This casts doubt on the practical value of the proposed method and the fairness of the experimental comparison. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. What is the definition of "uncertainty" that this paper centers around? 2. Why will the uncertainty measure $U^{CI}$ be smaller when $(x,y)$ is close to the training data samples and $U^{CI}_{x,y}$ be higher when $(x,y)$ is far from the training data? 3. 
L185-186, how is $C^r_{\delta}$ constructed such that the ground truth reward is included with probability $1-\delta$? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have addressed the limitations of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Reply to Reviewer fmPe > [Q1] "Clarification on uncertainty: (1) definition of uncertainty in this paper, epistemic or aleatoric uncertainty? (2) The proposed uncertainty measure has no statistical justification/guarantee and seems to only upper bound the difference between the true reward and the learned reward, as seen in Eq. 4. (3) Assumption in Theorem 3.1; (4) Calculating $M_D$ can be numerically unstable and time-consuming when the dataset size is big." **For (1)**, in this paper, we focus on quantifying the **epistemic uncertainty** of the estimated reward from the learned reward model. Formally, assume the reward model is parameterized by $\varphi$. For a prompt-response pair $(x,y)$, let $r\_{\hat{\varphi}}(x,y)$ denote the estimated reward, and $r^*(x,y)$ denote the ground-truth reward under the optimal parameterization $\varphi^*$. The reward uncertainty $U^{\delta}\_{x,y}$ then implies that with probability $1-\delta$, the inequality $$|r\_{\hat{\varphi}}(x,y) - r^*(x,y)| \leq U^{\delta}\_{x,y}$$ holds. In other words, with probability $1-\delta$: $$r^*(x,y) \in C^\delta\_{x,y} := [r\_{\hat{\varphi}}(x,y)-U^{\delta}\_{x,y}, r\_{\hat{\varphi}}(x,y)+U^{\delta}\_{x,y}]$$ where $C^\delta_{x,y}$ is called the confidence interval of $r_{\hat{\varphi}}(x,y)$. **For (2)**, we thank the reviewer for pointing out an area that may have caused unnecessary misunderstanding. As discussed above and in Theorem 3.1 (lines 151-153), the uncertainty indeed has a statistical justification: it holds with probability $1-\delta$, and the uncertainty value is determined by $\delta$ (typically, the smaller $\delta$ is, the larger the uncertainty value will be). **For (3)**, we agree with the reviewer that Theorem 3.1 relies on certain assumptions regarding network architectures, specifically that the network width is infinitely wide. Such an assumption is commonly adopted in analyses involving the Neural Tangent Kernel, as in [1]. 
This is also why we empirically examined the effectiveness of the proposed lightweight uncertainty estimation using a synthetic setup with known ground-truth rewards in Section 5.1 (Figure 1). The results demonstrate that the proposed lightweight uncertainty estimation can accurately capture the divergence between the ground truth and estimated proxy rewards, effectively signalling over-optimization. **For (4)**, we thank the reviewer for pointing out this typo. The correct formula for calculating $M_D$ is $M_D = \lambda I + \frac{1}{N}\sum_{i=1}^N \sum_{y \in \{y_c^i, y_r^i\}}e(x_i, y)e(x_i,y)^T$. With $\lambda >0$, $M_D$ is a positive definite matrix, ensuring the existence of its inverse. Moreover, calculating $M_D$ requires only a single pass through the reward training data. This computational cost is significantly lower compared to querying each reward ensemble for every sample during RL training, as done by ensemble-based methods. In addition, the memory cost of maintaining $M_D$ is much lower than keeping multiple reward models in memory. [1] Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, and Sanjeev Arora. A kernel-based view of language model fine-tuning. ICML 2023. > [Q2] "Why will the uncertainty measure be smaller when (x,y) is close to the training data samples and be higher when (x,y) is far from the training data?" Recall that for a prompt-response pair $(x,y)$, its uncertainty is calculated through $$U_{x,y}^{CI} = b\sqrt{e(x,y)^T M_D^{-1} e(x,y)},$$ where $e(x,y)$ denotes the last-layer embedding for the pair. The term $\sqrt{e(x,y)^T M_D^{-1} e(x,y)}$ represents the Mahalanobis distance, which measures the distance of the point $(x,y)$ from the distribution of the training data captured by $M_D$. Thus, if the pair $(x,y)$ is far from the training data samples, this results in a larger distance and, consequently, a higher uncertainty value. > [Q3] "Clarification on Reference Responses." 
The reference response can be any reasonably good answer, as long as it achieves a positive reward on average. In our experiments, we found that this requirement can be met even when sampling responses from the SFT policy. Therefore, it does not introduce additional burdens in practical scenarios. Moreover, if high-quality reference responses are used, such as annotated good responses from users or responses generated by a well-performing model, the performance of AdvPO can be further enhanced, as demonstrated on the TLDR dataset in Table 1. This is because the inclusion of reference responses prevents AdvPO from being overly or wrongly pessimistic by guiding policy optimization towards the direction of the reference responses while optimizing against pessimistic rewards, as shown by Lemma D.1. For a fair comparison, we have incorporated PPO-ref, a modified version of PPO that includes reference responses, as one of our baselines. Table 1 demonstrates the benefits of AdvPO over PPO-ref through addressing over-optimization. Additionally, we also compare AdvPO with ensemble-based methods that use reference responses. Further details can be found in the response to CQ1. > [Q4] "In L185-186, how is $C_{\delta}^r$ constructed?" We cannot construct $C_{\delta}^r$ in practical scenarios with large parameter sizes. However, the success of the lightweight uncertainty estimation and the theoretical insight from Theorem 3.1 imply that the uncertainty primarily stems from the inaccuracy in the estimated projection weights. Thus, AdvPO opts for a relaxation that minimizes an upper bound (lines 195-204), resulting in the objective in Eq. (5), where the minimization is now taken over the projection weights instead of the reward functions. Experimental results in Sections 5.2 and 5.3 show the effectiveness of AdvPO, even with this relaxation. We hope that the above has addressed all the reviewer's concerns and that the reviewer would consider raising their score. 
We are happy to answer any additional questions the reviewer might have. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Dear authors, Thank you so much for your detailed responses. I have increased my rating to 6. --- Reply to Comment 1.1.1: Title: Reply to the response. Comment: Thank you for your positive feedback. If you have any further questions or need additional clarification, please feel free to ask. We are more than happy to engage in further discussion.
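The lightweight uncertainty estimation discussed in the thread above (a regularized second-moment matrix $M_D$ built from last-layer embeddings, followed by a Mahalanobis-distance term and the resulting confidence interval) can be sketched numerically. This is a minimal sketch with randomly generated stand-in embeddings and hypothetical values for $\lambda$ and the scale $b$, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 16, 200                 # embedding dim, number of training prompts
lam, b = 1.0, 1.0              # regularizer lambda and scale b (hypothetical)

# Stand-ins for last-layer embeddings e(x_i, y) over chosen/rejected responses.
E_train = rng.normal(size=(2 * N, d))

# M_D = lambda * I + (1/N) * sum_i sum_{y in {y_c^i, y_r^i}} e e^T
M_D = lam * np.eye(d) + (E_train.T @ E_train) / N
M_inv = np.linalg.inv(M_D)     # positive definite since lambda > 0

def uncertainty(e):
    """U^CI = b * sqrt(e^T M_D^{-1} e): a Mahalanobis-style distance."""
    return b * np.sqrt(e @ M_inv @ e)

def confidence_interval(r_hat, e):
    """[r_hat - U, r_hat + U]: interval meant to contain r* w.p. 1 - delta."""
    u = uncertainty(e)
    return r_hat - u, r_hat + u

# An embedding far from the training distribution yields larger uncertainty.
e_near = E_train[0]
e_far = 10.0 * rng.normal(size=d)
```

A pair whose embedding lies far from the training distribution gets a wider interval, matching the intuition given in the answer to Q2.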
Summary: This paper aims to tackle the problem of reward model overoptimisation in RLHF. To do this, they propose a new method for quantifying the uncertainty of the reward model on a given input, and penalise the reward of the policy during RLHF training based on this uncertainty estimation. The uncertainty estimation is calculated from the final layer embeddings of the reward model, and the authors provide several theoretical results to show that the adversarial objective they initially propose can be optimised with standard RL algorithms. Empirically, they show in simulated settings that their method performs better than mean ensembles and LoRA ensembles on both Anthropic HH and TL;DR datasets in terms of final performance and qualitatively mitigating overoptimisation. Several ablation studies are performed to demonstrate the importance of each of the components of the proposed algorithm. Strengths: The method the paper introduces is interesting and novel, and the benefit of lightweight uncertainty estimation vs training and holding in GPU memory an entire ensemble is substantial. While using uncertainty to address overoptimisation is not novel, this approach to quantifying uncertainty is. The theoretical analysis in the work seems sound and gives insight and intuition about how and why their method is effective. The presentation and clarity of the paper are good, and the paper is generally clear and easy to read. The problem the paper tackles is important, and while a variety of work currently exists in this space, there is not yet a clear solution to the problem of overoptimisation, so this work is useful and significant in this regard. Weaknesses: ### Insufficient comparison to baselines In general, the comparison to baselines in the settings you consider isn't sufficient to support the claims that the method is outperforming existing SOTA. 
It would be beneficial to compare to WARM from https://arxiv.org/abs/2401.12187 (or a variant of it that just uses the ensemble models), and to UWO from Coste et al. This is specifically about the results in Figure 2 and Table 1. I think it's important to compare against an ensemble of RMs which are all the same size as the RM used by AdvPO. While this doesn't normalise the model sizes between the ensemble and the single RM, often the limiting factor here will be how good the largest pretrained model you have access to is, which means comparing to an ensemble of RMs with the same size as the RM used for AdvPO makes sense. AdvPO wouldn't need to beat this baseline, but it would be useful to see the comparison. ### Unclear simulated setting In the results of section 5.2, the gap between the trained and gold reward model is 7B -> 13B, which is quite small. Most previous works have had a gap of at least 1.4B -> 7B, and often much larger. This makes the setting less analogous to the setting with real human preferences. It would be beneficial to do these experiments with smaller trained RMs and policies but the same gold RM. ### Limited model sizes While it's an easy comment to make, it is the case that the results are only for one policy and RM size in both sections. It would be beneficial to have results on additional model sizes (either smaller or larger), to ensure the method works robustly across scales. ## Summary Overall I'm currently giving the paper a borderline accept (5). I think the promise of the method and results outweigh potential issues with the experimental setting and baseline comparison currently. I'd be willing to raise my score to a 6 or 7 if comparisons to more baselines were made, and experiments with different model sizes were performed. [EDIT]: I have raised my score to a 6 given the baseline being compared to was stronger than I previously thought. 
Technical Quality: 2 Clarity: 3 Questions for Authors: - In Figure 1c, why does ENS-3B start much higher than ENS-7B and CI? - What is the ENS-s method exactly? Are you using the variance of the ensemble, or something else? How do you choose the hyperparameter for the weight of this variance? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations of the work somewhat, but I would appreciate more discussion of the limitations of the experimental setup (smaller models, GPT-4 evaluation, limited realism of datasets). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Reply to Reviewer NjZD First, we would like to thank the reviewer for the comprehensive review of our paper and for acknowledging the novelty and benefits of our proposed AdvPO. We have carefully read through your review and added corresponding experiments. We hope the following clarifies any misunderstandings. > [Q1] "Comparisons to baselines: (1) comparison to ensemble-based baselines. (2) What is the ENS-s method exactly? (3) Compare against an ensemble of RMs which are all the same size as the RM used by AdvPO." **For (1)**, we would like to clarify a misunderstanding. We have already included ensemble models (UWO from Coste et al.) in our experiments, both in the uncertainty analysis in Figure 1 (ENS-3B and ENS-7B) and for the resulting policy in Table 1 ("ENS-s"). In Figure 1, we found that the proposed lightweight uncertainty (CI) surpasses reward ensembles with comparable parameter sizes (ENS-3B, which ensembles 3x3B reward models). In Table 1, we also observe that AdvPO consistently outperforms ENS-s, which ensembles 3x3B reward models for reward and uncertainty calculation, on both datasets. It is worth noting that this ensemble has more parameters than the reward model we used (7B), and hence we believe this is indeed a fair comparison. Additionally, we performed a further evaluation of ENS-s in line with the experimental framework depicted in Figure 2. The results are shown in Figure 2 in the attached PDF. We can observe that on the Anthropic HH dataset, with reliable uncertainty estimation as shown in Figure 1, ENS-s helps mitigate overoptimization, i.e., the gold rewards do not decrease as PPO optimization progresses. Although we found it needs a higher KL penalty than AdvPO, its performance is still not as effective as AdvPO's. On the TLDR dataset, however, without reliable uncertainty estimation, the performance of ENS-s is significantly worse even with a higher KL penalty.
**For (2)**, our implementation of ENS-s strictly follows the UWO implementation from Coste et al. [1], using three 3B reward ensembles. The reward during PPO training is computed according to Eq. (3) in the paper (Eq. (5) in [1]). Specifically, we compute the average value from the three reward models and subtract their uncertainty, which is determined as the variance of the rewards from the different reward models. The weight of this variance is treated as a hyperparameter, which is searched within the range [0.01, 0.05, 0.1, 0.5, 1]. We then select the hyperparameter that achieves the highest reward on the validation set. **For (3)**, we tried to run ensembles with three 7B reward models. However, we encountered out-of-memory (OOM) issues, since this requires keeping six 7B models in memory (one policy model, one reference model, one critic model, and three reward models). We would also like to thank the reviewer for pointing out the WARM paper. WARM is still based on the idea of reward ensembles, but averages them in weight space. However, this approach still requires additional reward training and tries to perform **mean optimization**, i.e., optimizing towards average rewards from reward ensembles **without considering uncertainties**. We will incorporate these discussions into the final version of our paper. [1] Coste, Thomas, et al. "Reward Model Ensembles Help Mitigate Overoptimization." The Twelfth International Conference on Learning Representations, 2024. > [Q2] "Unclear simulated setting & Limited model sizes: Experiments with smaller trained RMs and policies but the same gold RM." We thank the reviewer for their valuable suggestions. In response, we have conducted experiments using 3B policy models and 3B reward models, while maintaining the same gold reward model, as the reviewer suggested.
Due to time constraints, we primarily focused on exploring the effectiveness of lightweight uncertainty estimation with both the policy model and reward models initialized from a 3B model (OpenLLaMA3B). Once we achieve reliable uncertainty estimation, leveraging such uncertainties to mitigate overoptimization becomes straightforward. The experimental setup follows Section 5.1, with the only difference being the model size. Besides the proposed lightweight uncertainty method (CI), we also employ ENS-3B, which uses three 3B models and thus requires 3x the computational resources, for reference. The experimental results are shown in Figure 3 in the attached PDF. We can observe from Figures 3b and 3d that on both datasets, as the difference between gold and proxy rewards increases, the uncertainty calculated by our CI also rises, indicating the reliability of the uncertainty estimation method. Moreover, similar to the experiments on 7B models, we can observe in Figures 3a and 3c that CI remains effective in signalling the divergence of gold and proxy rewards. > [Q3] "In figure 1 c, why does ENS-3B start much higher than ENS-7B and CI?" We thank the reviewer for the detailed review and for pointing out some parts that caused unnecessary misunderstanding. Different uncertainty estimation methods have different scales, so for ease of illustration, when plotting Figure 1 we rescaled the uncertainties of each method to the [0,1] range using min-max scaling. The fact that ENS-3B starts much higher than ENS-7B and CI suggests that its estimated uncertainty is initially high but decreases as the PPO optimization steps progress (possibly because the model collapses as samples move further away from the original data, and hence the variance decreases). This trend implies that the uncertainty estimates are not reliable, as over-optimization leads to divergence between the proxy reward and the golden reward (i.e., the uncertainty should increase).
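For concreteness, the min-max rescaling described above can be sketched as follows (a minimal NumPy sketch on our part; the actual plotting code may differ):

```python
import numpy as np

def minmax_scale(uncertainties):
    """Rescale one method's uncertainty trajectory to [0, 1] so that
    methods operating on different scales can share a common axis."""
    u = np.asarray(uncertainties, dtype=float)
    return (u - u.min()) / (u.max() - u.min())

# A trajectory that starts high and decays (ENS-3B-like behaviour).
scaled = minmax_scale([6.0, 4.0, 2.0])  # -> [1.0, 0.5, 0.0]
```

After this normalization, only the relative trend within each method is comparable across curves, which is why a higher starting point for ENS-3B reflects its own initial-versus-final uncertainty rather than an absolute magnitude.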
We hope that the above has addressed all the reviewer's concerns and that the reviewer would consider raising their score. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your response and clarification. It is currently very unclear in the paper that you compare against UWO from Coste et al., as opposed to the mean ensemble or WCO. It would be beneficial to make that clearer, as well as the discussion of hyperparameter choice. I think it would also be beneficial to compare against the other methods there as well, not just UWO. > [on WARM] However, this approach still requires additional reward training and tries to perform mean optimization, i.e., optimizing towards average rewards from reward ensembles without considering uncertainties I agree that conceptually it may be different from your method, but I believe it is still a necessary baseline to compare against, given it is tackling the same problem of overoptimisation. Thanks for the rest of your response and clarification; it was very helpful. I am happy to raise my score to a 6, given that the baseline being compared against is better than I thought (I believed it was the mean ensemble, but it is actually UWO). I would raise my score higher if there were more baseline comparisons or results for the 7B ensemble reward models (although I realise those are unlikely in the remaining time). --- Rebuttal 2: Title: Reply to the response Comment: We thank the reviewer for the instant feedback and valuable suggestions. We will revise the paper to make it much clearer that we compare against UWO from Coste et al. [1] and discuss the hyperparameters in detail. We want to further explain the selection of baselines. We only compared to UWO since, as observed in [1], UWO works best compared to WCO or mean ensembles. We also thank the reviewer for pointing out the WARM paper. As previously discussed, WARM aims to learn a robust reward model through a weighted average of multiple reward models.
We want to further remark that this is orthogonal to our methods. Our methods can be plugged into any learned reward model to calculate uncertainties and leverage them for better policy optimization under that reward model. Moreover, following the reviewer's strong suggestion to compare with 7B reward ensembles, we are trying to run experiments with 7B reward ensembles, and we will post the results when available. (Note that this incurs a significant computational cost with 3x7B reward models, while our method only employs one 7B model; as the reviewer remarked, AdvPO wouldn't need to beat this baseline.) [1] Coste, Thomas, et al. "Reward Model Ensembles Help Mitigate Overoptimization." ICLR 2024. --- Rebuttal Comment 2.1: Title: Results on comparison with 7B ensembles. Comment: We thank the reviewer for the valuable suggestions. In response, we compare AdvPO with ENS-7B, which uses three 7B reward ensembles (as defined in Eq. (3) in the paper). Note that AdvPO only requires one 7B reward model. The weight of uncertainty in Eq. (3) is still treated as a hyperparameter and searched within the range [0.01, 0.05, 0.1, 0.5, 1]. We then select the hyperparameter that achieves the highest reward on the validation set. The results are shown in the following table.

| | Anthropic HH | | | | TLDR | | | |
|---|---|---|---|---|---|---|---|---|
| | **Win** | **Tie** | **Lose** | $\Delta$ | **Win** | **Tie** | **Lose** | $\Delta$ |
| AdvPO v.s. ENS-7B | 29.3% | 48.8% | 21.9% | $\uparrow$ 7.4 | 60% | 7% | 33% | $\uparrow$ 27 |
| AdvPO v.s. ENS-s (ENS-3B) | 43.0% | 26.5% | 30.5% | $\uparrow$ 12.5 | 77% | 3% | 20% | $\uparrow$ 57 |

We can observe that with larger reward ensembles, the performance of ENS-7B is much closer to AdvPO. However, AdvPO still achieves better performance than ENS-7B, especially on the TLDR dataset with good reference responses.
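For reference, the ENS-s/UWO reward described in this thread (ensemble mean minus a weighted intra-ensemble variance, with the weight searched over a grid) can be sketched as follows. This is our own sketch; the function and variable names are illustrative, not taken from the paper's or Coste et al.'s code:

```python
import numpy as np

def uwo_reward(member_rewards, var_weight):
    """UWO-style ensemble reward: mean of the member rewards minus a
    weighted variance across the ensemble members.
    `member_rewards` has shape (num_models, batch_size); `var_weight` is
    the hyperparameter searched over [0.01, 0.05, 0.1, 0.5, 1]."""
    r = np.asarray(member_rewards, dtype=float)
    return r.mean(axis=0) - var_weight * r.var(axis=0)

# Three reward models scoring two responses; weight from the searched grid.
rewards = uwo_reward([[1.0, 0.5], [1.2, 0.4], [0.8, 0.6]], var_weight=0.1)
```

The variance term penalizes responses on which the ensemble members disagree, which is the uncertainty signal the ensemble baselines rely on.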
Summary: This paper introduces uncertainty-based methods to tackle the over-optimization issue in RLHF. Drawing inspiration from neural bandits, the authors first propose a lightweight uncertainty estimator based on the final embedding layer. They then formulate the problem as an adversarial optimization task. Empirical experiments are conducted using the Anthropic HH and TL;DR datasets. Strengths: This paper presents a lightweight method for uncertainty quantification of point-wise rewards, which reduces memory usage compared to standard ensemble-based approaches. Additionally, experimental results demonstrate its effectiveness in mitigating over-optimization issues at the 3B and 7B scales. Weaknesses: 1. In Table 1, the ensemble-based baselines do not incorporate reference responses, which makes the ablation study less complete. 2. The selection of the last embedding layer appears ad hoc and requires further analysis to justify it over other layers. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Table 1, ENS-s achieves similar performance to PPO on both the Anthropic HH and TL;DR datasets, which seems to contradict the results shown in Figure 1, where ENS-3B appears to help mitigate over-optimization. Could the authors provide more clarification regarding this discrepancy? 2. I observed that LoraEns performs significantly worse than other methods. Is training a LoraEns reward model more challenging than training a standard reward model, thereby making it less accurate? Providing additional results on reward model accuracy or variance, particularly for ensemble-based models, would be helpful. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Reply to Reviewer 2q4b We would like to thank the reviewer for the thought-provoking questions to improve our manuscript. We would also like to thank the reviewer for acknowledging that our “experimental results demonstrate its effectiveness in mitigating over-optimization issues at the 3B and 7B scale”. Here below, we will address the reviewer's questions and clarify any misunderstandings. > [Q1] "In Table 1, ensemble-based baselines do not Incorporate Reference Responses." Please refer to our Common Response [CQ1]. > [Q2] "The selection of the last embedding layer appears ad hoc and requires further analysis to justify the choice of different layers." We thank the reviewer for asking this clarification question. Our work is the first to investigate leveraging the internal representation of the reward model for uncertainty quantification. We focus on the last layer due to: 1. The reward is acquired through direct projection of last layer embeddings. 2. As discussed in Section 3.1, previous work suggests that the last layer embedding captures generalized and rich information. Specifically, [1,2] show that freezing the network up to its last layer and retraining only the projection head with a smaller dataset, free of spurious correlations, improves robustness. [3] indicates that even fine-tuning an LLM's last layer embedding with noisy labels can yield high performance in subsequent classification tasks when the projection weight is accurately derived from ground-truth labels. 3. Using only the last layer embedding makes our methods easy to compute and scalable. Thus, we focused on the last layer in our project and have demonstrated its effectiveness. We also appreciate the reviewer for highlighting this interesting future direction. We will explore the potential of other intermediate layers for more accurate estimates in future work. [1] Kirichenko P, Izmailov P, Wilson A G. 
Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations. ICLR 2023 [2] LaBonte T, Muthukumar V, Kumar A. Towards last-layer retraining for group robustness with fewer annotations. NeurIPS 2023 [3] Burns, Collin, et al. "Weak-to-strong generalization: Eliciting strong capabilities with weak supervision." > [Q3] "Clarification on the performance of ENS-s and PPO in Table 1." We first want to clarify that the results in Table 1 are based on pairwise comparisons; for each prompt, we directly compare responses generated by two models. Thus, a response from AdvPO might be better than those from both PPO and ENS-s, while ENS-s's response could still be better than PPO's. We would also like to clarify that there is no contradiction between the results in Table 1 and Figure 1. For example, in Table 1 on the Anthropic HH dataset, ENS-s has a winning rate of 30.5% against AdvPO, while PPO has a winning rate of 20% against AdvPO, indicating the benefit of ENS-s over PPO. We further conducted a direct comparison between ENS-s and PPO on the Anthropic HH dataset. The win-tie-lose rates are as follows:

| Anthropic HH | **Win** | **Tie** | **Lose** | $\Delta$ |
|----------------|---------|---------|----------|----------|
| ENS-s v.s. PPO | 37.5% | 29% | 33.5% | 4% |

One can observe that ENS-s performed slightly better than PPO, aligning with the observation in Figure 1 that ENS-3B appears to help mitigate over-optimization on the Anthropic HH dataset. Here we only report the direct comparison on the Anthropic HH dataset: as discussed in Section 5.3, we adopted a mixed evaluation strategy with human annotation involved, and considering the high cost of human annotation together with the observation in Figure 1 that ENS-3B is not reliable on the TLDR dataset, we did not run this comparison on TLDR.
> [Q4] "Performance of LoRA ensembles, and the accuracy and variance of reward models." The accuracy and variance of fully finetuned 7B reward models, LoRA-based reward ensembles from 7B models, and fully finetuned 3B reward models are as follows:

| | Number of ensembles | Anthropic HH (mean acc) | Anthropic HH (std) | TLDR (mean acc) | TLDR (std) |
|----------------------------------|---|--------|-----------------|--------|-----------------|
| Fully finetuned 7B reward model | 1 | 0.7137 | - | 0.667 | - |
| LoRA-based 7B reward models | 5 | 0.6779 | $6\times10^{-3}$ | 0.6278 | $2\times10^{-3}$ |
| Fully finetuned 3B reward models | 3 | 0.6973 | $5\times10^{-4}$ | 0.6465 | $2\times10^{-3}$ |

We observe that LoRA-based ensembles suffer from lower accuracy and higher variance compared to fully finetuned models, leading to worse performance. We thank the reviewer again for the comprehensive review to improve our paper. If the above rebuttal and additional experiments have addressed the reviewer's concerns, we would appreciate it if the reviewer could consider raising their score. --- Rebuttal 2: Title: Replying Rebuttals Comment: Thank you for your response. I appreciate the additional results provided by the authors. I would raise my score to 6.
Summary: This paper studies the reward model overoptimization problem in RLHF. Specifically, the authors introduce a lightweight approach using adversarial policy optimization, provide corresponding justifications, and conduct an extensive empirical study to verify the proposed approach. Strengths: This paper studies an important problem in RLHF, the solution is lightweight, and its effectiveness is clearly demonstrated through experiments. The presentation is clear and the paper is easy to follow. Weaknesses: Please see the questions section. Technical Quality: 3 Clarity: 3 Questions for Authors: In the main text, it would be great to also show quantitative results on the uncertainty. To enhance clarity, I would like to see two different types of presentation of the results: 1. (quantitative) the correlation between the estimated uncertainty and the reward differences 2. (qualitative) a scatter plot showing the relationship between reward differences and estimated uncertainty (they are currently averaged in the figures) How scalable is the proposed method? To be specific, the ensemble approaches may achieve better performance by using more RMs (regardless of the cost of doing so), and in the original paper on scaling laws of RM overoptimization, another approach is to use larger RMs. How does the proposed method trade off between cost and performance? I would suggest the authors provide an algorithmic table of the proposed method to further enhance readability and clarity. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see the questions section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Reply to Reviewer JbJ9 First of all, we would like to thank the reviewer for the encouraging comments as well as the clarification questions. In particular, we would like to thank the reviewer for acknowledging that this paper "studies important problems in RLHF" and that its "effectiveness is clearly demonstrated through experiments". In the following, we clarify the remaining questions that the reviewer mentioned in their review. > [Q1] "In the main text, it would be great to also show quantitative results on the uncertainty. To enhance the clarity, I would like to see two different types of presentations of the results: (1) (quantitative) correlation between the estimated uncertainty and the reward differences; (2) (qualitative) a scatter plot showing the relationship between reward differences and estimated uncertainty (they are currently averaged in the figures)" **Quantitative:** We calculated the Pearson correlation between the estimated uncertainty and the reward differences. The results are as follows:

| Dataset | ENS-3B | ENS-7B | CI |
|--------------|--------|--------|-------|
| Anthropic HH | 0.787 | 0.980 | 0.984 |
| TLDR | -0.594 | 0.909 | 0.994 |

We can observe that CI achieves a Pearson correlation similar to ENS-7B, although the latter employs three 7B models. Furthermore, CI surpasses ENS-3B, a reward ensemble with comparable parameters. **Qualitative:** In terms of a qualitative scatter plot, we would like to mention that we have already added standard deviations in Figure 1, which we believe also demonstrates qualitatively the significance of the correlations between the reward differences and the uncertainty. > [Q2] "(1) Scalability of the proposed method and its trade-off between cost and performance. (2) Effect of the number of RMs and the size of the RM ensemble in ensemble-based approaches." **For (1)**, we thank the reviewer for pointing out this interesting point.
The proposed method aims to enhance RL optimization by leveraging lightweight uncertainty to mitigate overoptimization. It still falls under the umbrella of the RLHF pipeline, so it follows the general scalability of RLHF, meaning that larger policy or reward models will generally lead to better performance [1]. On the other hand, our method does not require additional reward model training or maintaining multiple reward models in memory, making it highly scalable. Our method only requires precomputing and maintaining the matrix $M_D \in \mathbb{R}^{d\times d}$, which summarizes all the last-layer embeddings observed in the reward training data, where $d$ is the dimension of the last-layer embedding in the reward model. **For (2)**, we found that simply scaling the number of ensemble members without increasing the reward model size does not improve performance. Specifically, we extended our analysis of uncertainty estimation in Figure 1 to include a configuration with five 3B ensembles, denoted as ENS-3B-5, given that accurate uncertainty estimation is a prerequisite for subsequent processes. The results are shown in Figure 1 in the attached PDF. We observed that ENS-3B-5 performs similarly to the three 3B ensembles (ENS-3B). Specifically, on the TLDR dataset, where ENS-3B showed poor performance as discussed in Section 5.2, ENS-3B-5 also failed to achieve reliable uncertainty estimation, as shown in Figure 1b in the attached PDF. This suggests that the size of the reward model might be more crucial than the number of ensemble members. However, ensembling larger RMs incurs higher computational and memory costs. This again highlights the benefit of the proposed lightweight uncertainty estimation method, which does not require additional memory. If feasible, larger reward models can be leveraged to boost RLHF performance. [1] Gao L, Schulman J, Hilton J. Scaling laws for reward model overoptimization, ICML 2023 > [Q3] "The algorithmic table of the proposed method."
We thank the reviewer for this suggestion and have added the corresponding algorithmic table to the attached PDF. We hope that the above has addressed any outstanding questions and that the reviewer would consider raising their score if all the questions have been appropriately answered. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response, I would like to keep my positive score.
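As an illustration of the lightweight uncertainty discussed in this thread, a confidence width built from the precomputed matrix $M_D$ could look as follows. This is a hedged sketch of the general recipe only: we assume the standard neural-bandit form $\sqrt{\phi^\top M_D^{-1} \phi}$ over last-layer embeddings, and the paper's exact CI formula, regularization, and function names may differ:

```python
import numpy as np

def build_m_d(train_embeddings, ridge=1.0):
    """Precompute M_D = ridge*I + sum_i phi_i phi_i^T from the last-layer
    embeddings of the reward-model training data (a d x d matrix)."""
    d = train_embeddings.shape[1]
    return ridge * np.eye(d) + train_embeddings.T @ train_embeddings

def ci_uncertainty(m_d, phi):
    """Bandit-style width sqrt(phi^T M_D^{-1} phi) for a new response
    embedding phi; large when phi points away from the training data."""
    return float(np.sqrt(phi @ np.linalg.solve(m_d, phi)))

m_d = build_m_d(np.zeros((0, 3)))                        # no data: M_D = I
u_far = ci_uncertainty(m_d, np.array([3.0, 4.0, 0.0]))   # -> 5.0
m_d2 = build_m_d(np.array([[3.0, 4.0, 0.0]]))            # seen this direction
u_near = ci_uncertainty(m_d2, np.array([3.0, 4.0, 0.0])) # smaller than u_far
```

Because only the $d \times d$ matrix is stored, no extra reward models need to be trained or kept in memory, which is the scalability point made above.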
Rebuttal 1: Rebuttal: # Common Response > [CQ1] "Comparison with ensemble-based approach with reference response incorporated." Following the reviewers' suggestions, we incorporated a variant of ENS-s, called ENS-ref, that leverages the same set of reference responses as our proposed method AdvPO. More specifically, ENS-ref optimizes the following objective: $$\max_{\pi_{\theta}} \mathbb{E}_{x, y \sim \pi_{\theta}(\cdot | x)} \left[ r_{\rm ENS}(x,y)\right] - \mathbb{E}_{x, y_{\rm ref}}\left[ r_{\rm ENS}(x,y_{\rm ref})\right] - \beta \mathbb{D}_{\text{KL}}(\pi \| \pi_{\rm sft}),$$ where $r_{\rm ENS}(x,y)$ is the reward of ENS-s, as defined in Eq. (3) in the paper. We then follow the same experimental setup as in Section 5.3 to perform RL optimization and compare the resulting policy with AdvPO. The results are as follows:

| | Anthropic HH | | | TLDR | | |
|-------------|---------|---------|----------|---------|---------|----------|
| | **Win** | **Tie** | **Lose** | **Win** | **Tie** | **Lose** |
| AdvPO v.s. ENS-ref | 38% | 40.5% | 21.5% | 76% | 5% | 19% |

We can observe that AdvPO consistently outperforms ENS-ref on both datasets, even when the references have been added to ENS-s. Pdf: /pdf/4c6d90746829a28e59c0fca59f6ffbe160c86ede.pdf
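A minimal sketch of the per-prompt training signal implied by the ENS-ref objective above (function and argument names are illustrative, not from the paper's code; `r_ens_*` stands for the ensemble reward of Eq. (3) evaluated on the policy and reference responses):

```python
import numpy as np

def ens_ref_signal(r_ens_policy, r_ens_reference, kl_to_sft, beta):
    """Per-prompt signal for the ENS-ref objective:
    r_ENS(x, y) - r_ENS(x, y_ref) - beta * KL(pi || pi_sft)."""
    return (np.asarray(r_ens_policy, dtype=float)
            - np.asarray(r_ens_reference, dtype=float)
            - beta * np.asarray(kl_to_sft, dtype=float))

# Two prompts: each policy response is baselined against its reference.
signal = ens_ref_signal([2.0, 1.0], [1.5, 1.2], [0.1, 0.2], beta=0.5)
```

Subtracting the reference reward centers the signal per prompt, which is the same role the reference responses play in AdvPO, making this a like-for-like baseline.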
NeurIPS_2024_submissions_huggingface
2024
Input-to-State Stable Coupled Oscillator Networks for Closed-form Model-based Control in Latent Space
Accept (spotlight)
Summary: In this paper, the authors introduce Coupled Oscillator Networks (CONs), extending the idea of coRNNs but with the difference that the generalized force is symmetric, enabling the definition of a potential energy expression which can be exploited for energy-shaping-based control methods. It is proven that this model is asymptotically stable and, given an input, input-to-state stable. It is shown that CONs provide similar performance compared to other state-of-the-art methods. Then, the proposed network is used together with a variational autoencoder to learn a low-dimensional representation of the dynamics of a soft robot. Finally, a PID controller with an energy-shaping part is designed to steer the soft robot to a given desired state. The main contribution is the new network structure, which allows the potential energy expression to be exploited. Strengths: - It was a pleasure to read the article as it is nicely written, and all steps are explained in detail and with clarity. - Computing the solution of the CON by splitting the dynamics into a linear part (where the solution can be computed in closed form) and a nonlinear part is an original idea. - The new structure of the proposed network, which enables energy-based control methods, opens doors for many interesting applications and might be a significant contribution. Weaknesses: - Theorem 1 seems to be trivial as it elaborates on the (nonlinear) coupling of passive systems (damped mass-spring systems), which is always passive since it does not "produce" any energy and thus is inherently stable. (Same for Theorem 2.) In this sense, the theoretical contribution seems limited. Is there anything I missed? - The statement in Line 266 "PID controller has several well-known drawbacks, such […] steady-state errors (in case the integral gain is chosen to be zero)" is misleading, as in this case we would say it's a PD controller.
- The main contribution seems to be the new network structure that allows the potential energy to be exploited for energy-shaping methods. The VAE and controller are existing methods. Thus, a more detailed elaboration on the performance and limitations of CONs would be beneficial, as they are evaluated on soft-robotics datasets only. Technical Quality: 3 Clarity: 3 Questions for Authors: - Can you add the inference time for the different methods in Table 1? If that's not possible, could you give some general comments on the inference time of CONs? - Figure 3 visualizes that the controller with the feed-forward part leads to heavy oscillations in the system. Is that due to a poorly tuned controller, or do you see the reason in the CON model? UPDATE: New score after discussion Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer Ys99 (R1) We thank the Reviewer very much for the kind words, for their interest in our research activities, and for the very insightful comments that they have provided. Because of space constraints, we respond to the request for inference times and the questions about the oscillations of the controller in the global rebuttal. ## Is the stability of CON trivial? > R1: Theorem 1 seems to be trivial as it elaborates on the (nonlinear) coupling of passive systems (damped mass-spring systems), which is always passive since it does not “produce” any energy and thus, is inherently stable. (Same for Theorem 2). In this sense, the theoretical contribution seems limited. Is there anything I missed? We thank the Reviewer for their question and for allowing us to elaborate further on this topic. In short, we differ with the Reviewer's technical arguments and, consequently, with their conclusions on the triviality of Theorem 1. In essence, we recognize that the Reviewer's arguments are well founded in the context of port-Hamiltonian theory. However, substantial extra properties/assumptions are needed to reach the Reviewer's conclusion. Thus, the PH way of proving Theorem 1 would be interesting and proper, but it would definitely not result in the trivial proof the Reviewer is hinting at. More precisely, we point out two main points of technical disagreement, which we list below. ### (A): Passivity does not imply Stability Passivity, even in the scalar case, does not imply (global) stability. Additional assumptions on the (global) convexity of the energy landscape are needed, which would be hard to impose. As a trivial counterexample, think of a damped mass placed atop an infinite hill. To further illustrate this point, we propose two simple passive-but-not-globally-stable CONs that deviate slightly from the assumptions in Theorems 1 and 2. 1. Take a passive CON as stated in Eq. (2) in the scalar case with $K=1$ and $D=0.4$.
Conflicting with the assumptions in Theorem 1, we select $W=-5$. Now, depending on the choice of $b$, this scalar system has either 1, 2, or 3 equilibria (see Fig. R1 in the global response). All global stability guarantees are lost if the system has multiple attractors. 2. We study a scalar CON with $K=-1$, positive damping $D=0.4$, and $W = 0$, $b=0$, resulting in the EOM $\ddot{x} + K x + D \dot{x} = \tau$. We take the passive output $o = \dot{x}$, and we prove passivity according to Def. 6.3 (Khalil, 2001) using the storage function $V(x) = K x^2 + \dot{x}^2$: $\dot{V}(x, \tau) = \tau \dot{x} - D \dot{x}^2 \leq \tau o.$ However, this system is unstable, as can be easily assessed by looking at the linearization at the equilibrium. Fig. R2 reports the globally repulsive vector field, which exhibits a single unstable equilibrium at the origin. Therefore, we conclude that passivity is not sufficient to prove global asymptotic stability. ### (B): Stable harmonic oscillators do not imply Stable Networks Even if the individual systems (i.e., the harmonic oscillators) are stable, care needs to be taken when coupling them so as not to create an unstable network. We illustrate this with the following example: consider a CON of dimensionality two with $K = [[1.0, -1.4],[-1.4, 1.0]]$, $D = \mathrm{diag}(0.4, 0.4)$, and $W = \mathrm{diag}(3, 3)$. It can easily be shown that the oscillators individually, with the EoM $\ddot{x}_i + k x_i + d \dot{x}_i + \tanh(w x_i) = 0$, where $k=1$, $d=0.4$, and $w=3$, are globally asymptotically stable. However, the linear stiffness matrix is not positive definite (it has a negative eigenvalue), and therefore the system is not globally asymptotically stable. This is illustrated in Fig. R3 of the global response. ## PID vs. PD > R1: The statement in Line 266 “PID controller has several well-known drawbacks, such […] steady-state errors (in case the integral gain is chosen to be zero)” is misleading as in this case we would say it’s a PD controller.
We thank the Reviewer for their comment and for pointing out the mistake. We agree that this sentence is indeed badly written and confusing. In the final version of the paper, we will remove the subsentence "_steady-state errors (in case the integral gain is chosen to be zero)_". ## Performance and Limitations of CON > R1: The main contribution seems to be the new network structure that allows to exploit the potential energy for energy shaping methods. The VAE and controller are existing methods. We thank the Reviewer for their comment. We want to stress that the proposed network imposes a beneficial inductive bias for conserving global stability and ISS. Therefore, we consider the two proofs (i.e., GAS and ISS) important contributions of the paper. Furthermore, we also regard the closed-form approximation of the CON dynamics as an important tool for deploying oscillator networks in practice. > R1: Thus, a more detailed elaboration on the performance and limitation of CONs would be beneficial as it is evaluated for soft robotics data sets only. For this rebuttal, we have performed additional experiments involving non-soft-robotic datasets. They demonstrate that the CON network can also learn the latent dynamics of other mechanical systems, such as a mass spring, a pendulum, and a double pendulum with friction, effectively and with SOA performance. We refer to the global rebuttal for more details. In totality, the results provided in the paper and in the rebuttal show that the performance of the CON model is on par with other SOA methods while adding physical structure and stability guarantees. As detailed in Section 6.2, the strong assumptions needed to provide global stability guarantees are the primary limitation of the presented CON network, making it unsuitable for applications where complex attractor dynamics (e.g., multiple attractors, strange attractors, etc.) are required. 
Relaxing the stability assumptions could make the CON network also suitable for these applications. --- Rebuttal Comment 1.1: Comment: Thank you for your response! You are right that passivity does not imply *asymptotic* stability but most often stability. That said, your counterexamples do not convince me because of the following: 1. Example: It is pretty obvious that for a negative W the system's equilibrium can be unstable as you introduce a negative stiffness coefficient in the system. Thus, I'd still say that the result seems trivial as the system seems always stable (asymptotically stable) for positive damping and stiffness coefficients. 2. Example: Your analysis is wrong. Following Khalil, the storage function must be a "positive semidefinite function", which is obviously not the case in your example for a negative K. I don't want to look into the details, but I think that for the linear system as shown in example 2, passivity leads to (asymptotic) stability. Please provide a valid counterexample if my belief is wrong. Finally, it's the same story for deriving stability from the single oscillators to the network. Of course you can make the stiffness matrix negative definite to achieve a non-stable network but again, if all parameters are positive (definite), meaning defined so that you would expect the system to be stable, I think that the system is stable. I'd change my opinion if you can provide a counterexample where the parameters are positive (definite) but would lead to an unstable system / network. --- Reply to Comment 1.1.1: Title: Technical response to Reviewer Ys99: Example 2 Comment: ## Example 2: A system where the W-coordinate transformation does not help The proof underlying Theorem 1 is only (relatively) simple because the coordinate transformation $x_\mathrm{w} = W x$ helps us to identify the potential energy function in the $\mathcal{W}$ coordinates, where the hyperbolic nonlinearity operates elementwise.
However, this is not always possible. For example, consider the system $\ddot{x} + K x + D \dot{x} + W^{\mathrm{T}} \tanh(Wx + b) = 0$, where $K$, $D$, $W \succ 0$ are positive definite matrices. Unfortunately, the unactuated dynamics in the $\mathcal{W}$ coordinates are now given by $M_\mathrm{w} \ddot{x}_\mathrm{w} + K_\mathrm{w} x_\mathrm{w} + D_\mathrm{w} \dot{x}_\mathrm{w} + W^{\mathrm{T}} \tanh(x_\mathrm{w} + b) = 0$, for which we cannot easily derive a potential energy function (i.e., integrate), as the hyperbolic term $W^\mathrm{T} \tanh(x_\mathrm{w} + b)$ is not (easily) separable. Remark: In contrast to Example 1, this system actually has a single equilibrium point. This example motivates why we chose the proposed CON network architecture such that we can (easily) derive the kinetic and potential energy terms, subsequently prove GAS & ISS, and perform model-based control. --- Rebuttal 2: Title: General response to Reviewer Ys99 Comment: > Finally, it's the same story for deriving stability from the single oscillators to the network. Of course you can make the stiffness matrix negative definite to achieve a non-stable network but again, if all parameters are positive (definite), meaning defined so that you would expect the system to be stable, I think that the system is stable. I'd change my opinion if you can provide a counterexample where the parameters are positive (definite) but would lead to an unstable system / network. We appreciate the direct nature of the Reviewer's comment, which indeed raises a compelling point, with which, however, we still disagree. In our next response, we will try again to convince the Reviewer of our argument, although proving that something is _not trivial_ is quite a challenging task, as triviality is very subjective. So, before delving into it, we would like to take the opportunity to zoom out. Indeed, at the moment, we see the risk that we are debating a relatively narrow point while it may be that we essentially agree on several key aspects.
Or at least, we believe we can work to find common ground on these while continuing our discussion in parallel. 1. We believe that Theorem 2 is the main _stability proof_ contribution of the paper. To the best of our knowledge, we are the first in this community/problem setting to provide explicit input-to-state stability guarantees, including convergence rates into the region of attraction (Theorem 2). We believe that the proof of Theorem 2 is far from trivial. Does the Reviewer disagree on this point? 2. In this sense, we agree with the Reviewer that the proof of Theorem 1 is comparably simpler (although not trivial!) and less impactful than the one of Theorem 2. Would renaming "_Theorem 1_" as a "_Lemma_" or even as a "_Proposition_" better reflect the opinion of the Reviewer on the importance of this contribution? 3. Finally, we realize now that we did not do a great job, with the manuscript and with our previous answers, of conveying the message that the relative simplicity of the proof of Theorem 1 is no accident. The proof of Theorem 1 could be derived via _standard_ arguments from the control of mechanical systems _because_ we designed the CON network in a particular way. So the Theorem 1 statement, with its relative simplicity, implicitly stresses this point. If the Reviewer agrees on this point, we are happy to include in the revision any remarks or changes that they see fit to better bring this point home. Specifically, the proof in its current form is possible because (a) a coordinate transformation exists that allows us to identify a Lyapunov candidate, and (b) the system has a potential energy. We see these two design choices as a contribution in themselves, as they make the proof simpler than it would otherwise have been. We want to stress that even small modifications to the network formulation and the underlying assumptions in Theorem 1 would have made the proof much more difficult (or even impossible).
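The two eigenvalue claims in the counterexamples above can be checked in a few lines. The following is a minimal illustrative sketch (only the matrices are taken from the examples; the variable names are ours):

```python
import numpy as np

# Scalar CON from Example 2 above: x'' + K x + D x' = tau with K = -1, D = 0.4.
# For tau = 0, the dynamics of the state [x, x'] are governed by the companion
# matrix A below (since x'' = -K x - D x' = x - 0.4 x').
A = np.array([[0.0, 1.0],
              [1.0, -0.4]])
eigs_A = np.linalg.eigvals(A)
print(eigs_A)  # one eigenvalue has a positive real part -> unstable equilibrium

# Coupled network from part (B): the individual oscillators are stable, but the
# network stiffness matrix has a negative eigenvalue, i.e., it is not positive
# definite.
K = np.array([[1.0, -1.4],
              [-1.4, 1.0]])
eigs_K = np.linalg.eigvalsh(K)  # ascending order
print(eigs_K)  # eigenvalues -0.4 and 2.4
```

One root of the characteristic polynomial $\lambda^2 + 0.4\lambda - 1$ is positive, confirming the instability of the scalar system, and the network stiffness matrix is indefinite despite each oscillator being individually stable.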
--- Rebuttal 3: Title: Technical response to Reviewer Ys99: Example 1 Comment: > Finally, it's the same story for deriving stability from the single oscillators to the network. Of course you can make the stiffness matrix negative definite to achieve a non-stable network but again, if all parameters are positive (definite), meaning defined so that you would expect the system to be stable, I think that the system is stable. I'd change my opinion if you can provide a counterexample where the parameters are positive (definite) but would lead to an unstable system / network. We are now ready to go back to our technical discussion with the discussed counterexamples. These, hopefully, also better illustrate our point #3. Essentially, our argument here is that even positive-definite stiffness & damping matrices are not sufficient to claim global asymptotic stability. This should answer the Reviewer's direct technical question. In the following, we will give two examples of modifications to the CON network (in the original coordinates) that would have, according to the Reviewer's argument, appeared to be stable, but for which global stability would be hard to prove. ## Example 1: A system with multiple equilibria and without a valid potential energy We consider the slightly modified network dynamics $\ddot{x} + K x + D \dot{x} + W^{-1} \tanh(Wx + b) = 0$, where $K, D, W \succ 0$ are positive definite matrices. The equilibria of this system are given by the characteristic equation $K \bar{x} + W^{-1} \tanh(W \bar{x} + b) = 0$. For the system to be globally asymptotically stable, it would need to have a single equilibrium point.
However, if we simulate the system with $K = [[5.0, -2.2], [-2.2, 1.0]] \succ 0$ (positive definite, as it is symmetric with positive eigenvalues $0.027$ and $5.973$), $D = \mathrm{diag}(0.2, 0.2) \succ 0$, and $W = [[1.0, -2.2], [-2.2, 5.0]] \succ 0$ (positive definite, as it is symmetric with positive eigenvalues $0.027$ and $5.973$), we notice that the system is actually bistable with two attractors at $\bar{x}_1 = [-212.6, -475.2]$ and $\bar{x}_2 = [212.6, 475.2]$. This is illustrated in the time series plot and phase portrait attached at https://anonymous.4open.science/r/neurips24-20062-rebuttal-7770. Therefore, the system is **not** globally asymptotically stable. For $\tau_\mathrm{pot} = -K x - W^{-1} \tanh(Wx + b)$ to be a valid potential force, it would need to satisfy the property $\frac{\partial \tau_\mathrm{pot}}{\partial x} = \left ( \frac{\partial \tau_\mathrm{pot}}{\partial x} \right )^\mathrm{T}$. Therefore, we derive $\frac{\partial \tau_\mathrm{pot}}{\partial x} = -K - W^{-1} \mathrm{diag}(\mathrm{sech}^2(Wx + b)) W$. Its transpose is given by $\left ( \frac{\partial \tau_\mathrm{pot}}{\partial x} \right )^\mathrm{T} = -K - W^\mathrm{T} \mathrm{diag}(\mathrm{sech}^2(Wx + b)) W^{-\mathrm{T}}$. Therefore, $\tau_\mathrm{pot}$ only stems from a potential iff $W^{-\mathrm{T}} = W$, i.e., $W^\mathrm{T} W = \mathbb{I}$ (i.e., $W$ is orthogonal). This is not the case for a general positive-definite $W$. This example shows that even if the matrices are positive definite, the system can still have multiple equilibria (i.e., lose global asymptotic stability) and lack a valid potential energy function. --- Rebuttal 4: Comment: Thank you for your response! I hope that you don't see my direct nature as rude because that was not at all my intention. 1. Yes, I agree 2. and 3. No need to rename it, but I definitely like the perspective that you intentionally designed the network so that you can find a "simple" proof. That would be great to emphasize in the paper.
In particular, thank you for the new example. I appreciate your work here and now understand the details better. I will raise my score by 3 points.
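The equilibria reported in Example 1 above can be verified numerically. A minimal sketch, assuming $b = 0$ (no offset is specified for the reported equilibria; the matrices are taken from the rebuttal):

```python
import numpy as np

# Matrices from Example 1 (Rebuttal 3 above); b = 0 is an assumption.
K = np.array([[5.0, -2.2], [-2.2, 1.0]])
W = np.array([[1.0, -2.2], [-2.2, 5.0]])
W_inv = np.linalg.inv(W)

def equilibrium_residual(x):
    """Characteristic equation of the equilibria: K x + W^{-1} tanh(W x) = 0."""
    return K @ x + W_inv @ np.tanh(W @ x)

x_bar = np.array([212.6, 475.2])
# The residual is ~0 (up to the rounding of the reported coordinates) at
# +/- x_bar and exactly 0 at the origin, so the system has at least three
# equilibria and cannot be globally asymptotically stable.
print(np.linalg.norm(equilibrium_residual(x_bar)))
print(np.linalg.norm(equilibrium_residual(-x_bar)))
print(np.linalg.norm(equilibrium_residual(np.zeros(2))))
# Both K and W are positive definite (all eigenvalues positive):
print(np.linalg.eigvalsh(K), np.linalg.eigvalsh(W))
```

The check confirms the rebuttal's point: $K, D, W \succ 0$ alone does not rule out multiple equilibria.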
Summary: - This paper proposes Coupled Oscillator Networks (CONs): networks consisting of coupled one-dimensional damped harmonic oscillators, coupled through a neuron-like connection. - It is shown that under some constraints, the unforced coupled oscillators have a single, globally stable equilibrium and the forced system is globally ISS stable. - As the CON has no closed-form solution, the authors propose to split the solution into one decoupled linear, analytically solvable term and a residual non-linear term. For this system, a closed-form approximation is derived (CFA-CON). - High-dimensional observations are encoded via a VAE into a compressed latent space that is trained to reconstruct future observations that are predicted via a CON latent dynamics model. This system is trained jointly. - “CONs are an ideal fit for learning latent dynamics as they guarantee that the latent states stay bounded.” - The approaches (CON with two different latent dimensions, S and M, and CFA-CON) are evaluated on a simulated soft-robots dataset and benchmarked against other latent-space dynamics methods based on neural ODEs or autoregressive models. - When predicting the system state, CON-M performs on par with state-of-the-art methods, while CON-S and CFA-CON perform almost as well and are computationally cheaper. - For control, potential-shaping strategies are combined with PID, resulting in a latent-space control law. A forcing decoder is then trained to predict control inputs. This approach strongly outperforms a simple latent-space PID controller. Strengths: - Very interesting and (to my knowledge) novel approach of controlling robots in a latent dynamics space, modeled through a network of coupled oscillators with provable stability guarantees. - Extensive proofs and detailed descriptions of implementation. - Promising experimental evaluation. - Very concise and complete presentation of their approach. Weaknesses: - Limited experimental evaluation on a single simulated soft robot.
- It is unclear how the obtained results generalize to other robots, in particular those with less smooth contact dynamics. Technical Quality: 4 Clarity: 3 Questions for Authors: - Have you tried the method on other (non-soft) robots? Do you have any intuition on how your method would perform in environments with many contact forces? - Have you considered non-physical systems? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Many limitations of the method are clearly stated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer W9L3 (R2) We thank the Reviewer for their careful reading and the encouraging comments. In the following, we respond to the questions raised by the Reviewer, which also relate to the mentioned weaknesses. ## Application of CON to non-soft robots > R2: Have you tried the method on other (non-soft) robots? We thank the Reviewer for inquiring about results on non-soft robots. We provide additional results on various mechanical/robotic systems in the global rebuttal. > R2: Do you have any intuition on how your method would perform in environments with many contact forces? First, we want to stress that we have not conducted any experiments with contact-rich systems yet and reserve this interesting challenge for future work. We hypothesize that the highly discontinuous dynamics of contact-rich systems could present a challenge for the method in its present form. Still, we envision that the method could be augmented to handle such systems. For example, we could add stick-slip friction or similar mechanisms to the CON dynamics to increase their expressiveness while maintaining the physical structure. Furthermore, contact-rich systems often have multiple equilibria, which would violate the stability conditions introduced in this work. As touched on in Section 6 of the paper, these strong global stability guarantees would probably need to be relaxed. ## Application of CON to non-physical systems > R2: Have you considered non-physical systems? We thank the Reviewer for their interest in this topic.
While we do think that the CON / CFA-CON models could potentially be useful in other applications where strong stability guarantees are required, we focus in this paper on learning the dynamics of physical/mechanical systems, as here we can (a) leverage the shared stability characteristics between the original and the latent-space systems, and (b) exploit the mechanical structure of the CON model for control. To make this focus clear, we will, in the final paper, modify the sentence starting on line 69 to (with changes marked in **bold**): _We resolve all the above-mentioned challenges by proposing Coupled Oscillator Networks (CONs), a new formulation of a coupled oscillator network that is inherently Input-to-State Stable (ISS), **for learning the dynamics of physical systems,** and subsequently exploiting its structure for model-based control in latent space._ This work was specifically focused on how to impose structural biases that enforce second-order Lagrangian structure and stability properties. However, we believe that the proposed model can be used more broadly, beyond mechanical systems, whenever strong stability guarantees are needed. This will be the focus of future research. --- Rebuttal Comment 1.1: Comment: Thank you for the response. **Application of CON to non-soft robots** I agree with the authors' view that their method might underperform for systems with highly discontinuous dynamics. Since this paper is framed as a method for prediction and control of systems, with soft robots being only one application area (rather than a method for the control of soft robots), I would appreciate a lucid analysis of the classes of systems/application areas in which the method might perform strongly/weakly (including clear reasoning why, and potentially a remedy left to future work). Please consider including this, both from a systems-theory and an application standpoint, in the paper.
**Application of CON to non-physical systems** Thank you for explicitly limiting the application area to physical systems. --- Reply to Comment 1.1.1: Title: Response to Reviewer W9L3 Part 2 Comment: ## Planned additions to the Appendix Furthermore, we could envision adding a more detailed discussion (similar to the one below), or alternatively a table, to the Appendix: ### Systems for which we would expect the proposed method to work - **Mechanical systems with continuous dynamics, dissipation, and a single, attractive equilibrium point:** The proposed method is a very good fit for such systems, as the real system and the latent dynamics share both the energetic structure and the stability guarantees. Examples of such systems include many soft robots, deformable objects with dominant elastic behavior, other mechanical structures with elasticity, etc. - **Local modeling of (mechanical) systems that do not meet the global assumptions:** Even if the global assumptions of the proposed method are not met, the method can still be applied to model the local behavior around a locally asymptotically stable equilibrium point of the system (i.e., in the case of multi-stability). For example, the method could be used to locally model the behavior of a robotic leg in contact with the ground, a cobot's interaction with its environment, etc. ### Systems for which we could envision the proposed method to work under (minor) modifications - **Mechanical systems without dissipation:** The proposed method would currently not work well for mechanical systems without any dissipation, as (a) the original system will likely not have a globally asymptotically stable equilibrium point, and, more importantly, (b) we currently force the damping learned in latent space to be positive definite.
However, these systems are not common in practice, as friction and other dissipation mechanisms are omnipresent, and the proposed method can learn very small damping values (e.g., for the mass-spring+friction system). A possible remedy could be to relax the positive definiteness of the damping matrix in the latent space, allowing for zero damping. This would allow the method to work for systems without dissipation, such as conservative systems. Examples of such systems include a mass-spring system without damping, the n-body problem, etc. - **(Mechanical) systems with discontinuous dynamics:** The proposed method might underperform for systems with highly discontinuous dynamics, such as systems with impacts, friction, or other discontinuities. In these cases, the latent dynamics might not capture the real system's behavior accurately, and the control performance of feedforward + feedback will very likely be worse than that of pure feedback. Again, the method should be able to capture local behavior well. A possible remedy for learning global dynamics could be to augment the latent dynamics with additional terms that capture the discontinuities, such as contact and friction models (e.g., stick-slip friction). - **(Mechanical) systems with multiple equilibrium points:** The original system having multiple equilibria conflicts with the stability assumptions underlying the proposed CON latent dynamics. In this case, as seen, for example, in the pendulum+friction and double pendulum+friction results, the method might work locally but will not be able to capture the global behavior of the system. A possible remedy could be to relax the global stability assumptions of the CON network. For example, the latent dynamics could be learned in the original coordinates of CON while also allowing $W$ to be negative definite. This would allow the system to have multiple equilibria & attractors. Examples of such systems include a robotic arm under gravity, pendula under gravity, etc.
- **(Mechanical) systems with periodic behavior:** The proposed method will likely not work well for systems with periodic behavior, as they do not have a single, attractive equilibrium point. Examples of such systems include a mass-spring system with a periodic external force, a pendulum with a periodic external force, some chemical reactions, etc. Again, it is likely possible to apply the presented method to learn a local behavior (i.e., not completing the full orbit). A possible remedy could be to augment the latent dynamics with additional terms that capture the periodic behavior, such as substituting the harmonic oscillators with Van der Pol oscillators to establish a limit cycle or a supercritical Hopf bifurcation. --- Reply to Comment 1.1.2: Title: Response to Reviewer W9L3 Part 3 Comment: ### Systems for which we would not expect the proposed method to work - **Nonholonomic systems:** The proposed method likely would not work well for nonholonomic systems, as neither the structure (e.g., physical constraints) nor the stability characteristics would be shared between the real system and the latent dynamics. Examples of such systems include vehicles, a ball rolling on a surface, and many mobile robots. - **Partially observable and non-Markovian systems:** As the CON dynamics are evaluated based on the latent position and velocity encoded from the observation of the current time step and the observation-space velocity, we implicitly assume that the system (a) is fully observable and (b) satisfies the Markov property. This assumption might not hold for partially observable systems, such as systems with hidden states or systems with delayed observations. Examples of such cases include settings where the system is partially occluded or where there are insufficient (camera) perspectives covering the system.
Furthermore, time-dependent material properties, such as viscoelasticity or hysteresis, that are present and significant in some soft robots and deformable objects are not captured by the method in its current formulation. --- Rebuttal 2: Title: Response to Reviewer W9L3 Part 1 Comment: > _RW9L3:_ I agree with the authors' view that their method might underperform for systems with highly discontinuous dynamics. Since this paper is framed as a method for prediction and control of systems, with soft robots being only one application area (rather than a method for the control of soft robots), I would appreciate a lucid analysis of classes of systems/application areas in which the method might perform strongly/weakly (including a clear reasoning why; and potentially a remedy left to future work). Please consider including this, both from a systems theory and an application standpoint into the paper. We appreciate the Reviewer's feedback and suggestion. We fully agree with the Reviewer that specific categories or examples of systems for which the proposed method might be suitable or unsuitable should be discussed in the paper. In the following, we will detail the planned changes. ## Planned changes to the Introduction First, we will add the following sentence to the end of the introduction (i.e., Section 1): > The proposed methodology is particularly well-suited for learning the latent dynamics of mechanical systems with continuous dynamics, dissipation, and a single, attractive equilibrium point. Examples of such systems include many soft robots, deformable objects with dominant elastic behavior, other mechanical structures with elasticity, or locally for other mechanical systems such as robotic manipulators, legged robots etc. For these systems, we can fully leverage the structural prior of the proposed latent dynamics including the integrated stability guarantees. 
If the system is actuated, the learned dynamics can subsequently be exploited for model-based control, as demonstrated in Section 5. ## Planned changes to the _Limitations_ section We will also extend the _Limitations_ section (i.e., Section 6): > **Limitations.** While we think our proposed method shows great potential and opens interesting avenues for future research, there are currently certain limitations. For example, the proposed method of learning (latent) dynamics implicitly assumes that the underlying system adheres to the Markov property (e.g., the full state of the system is observable), that it can be approximated by a system with mechanical structure, and that it has an isolated, globally asymptotically stable equilibrium. This is, for example, the case for many mechanical systems (e.g., some continuum soft robots, deformable objects, and elastic structures) with continuous dynamics, convex elastic behavior, and dissipation, and whose time-dependent effects (e.g., viscoelasticity, hysteresis) are negligible. Even if these conditions are not met globally, the method can be applied to model the local behavior around an asymptotically stable equilibrium point of the system (e.g., robotic manipulators, legged robots) with added stability benefits for out-of-distribution samples. Alternatively, the method could be extended to relax some of these assumptions, e.g., by allowing for multiple equilibria, zero damping, or by incorporating additional terms to capture discontinuous dynamics (e.g., stick-slip models) or periodic motions (e.g., limit cycles such as the Van der Pol oscillator). For some physical systems, such as nonholonomic systems, partially observable systems, or systems with non-Markovian properties, the proposed method might not be suitable. Examples of such systems include mobile robots and systems with hidden states or delayed observations.
Finally, the application of this method to non-physical systems, such as financial systems, social networks, or other complex systems, is out of the scope of this work. _THE DISCUSSION OF OTHER LIMITATIONS OF THIS PAPER WHICH ARE ALREADY MENTIONED IN THE INITIAL PAPER SUBMISSION CONTINUES HERE..._
null
null
null
null
Rebuttal 1: Rebuttal: # Global Rebuttal ## Performance of CON on non-soft-robotic datasets (R1 & R2) > R1: Thus, a more detailed elaboration on the performance and limitations of CONs would be beneficial as it is evaluated for soft robotics data sets only. > R2: Have you tried the method on other (non-soft) robots? We thank both reviewers for their interest in the performance of CON on non-soft-robotic datasets. For this rebuttal, we compared the performance of CON against the baseline methods on three additional mechanical, non-soft-robotic datasets: a mass-spring with friction (_M-SP+F_) (i.e., a damped harmonic oscillator), a single pendulum with friction (_S-P+F_), and a double pendulum with friction (_D-P+F_). These datasets are based on an interesting publication by Botev et al. (2021) [25], which appeared in the _NeurIPS 2021 Track on Datasets and Benchmarks_ and benchmarks various models for learning latent-space dynamics. The results, which we will refer to as Table R1 of the global response PDF, show that the NODE model slightly outperforms the CON network on the _M-SP+F_ and _S-P+F_ datasets. However, as the datasets do not consider system inputs, we can remove the input mapping from all models (e.g., RNN, GRU, coRNN, CON, and CFA-CON). With that adjustment, the CON network has the fewest parameters among all models, and in particular two orders of magnitude fewer than the NODE model. Therefore, we find it very impressive that the CON network is roughly on par with the NODE model. For the _D-P+F_ dataset, we can conclude that the CFA-CON model offers the best performance across all methods. Finally, most of the time, the CON & CFA-CON networks outperform the other baseline methods that have more trainable parameters. ## Inference time of CON (R1) > R1: Can you add the inference time for the different methods in Table 1? If that’s not possible, could you give some general comments on the inference time of CONs?
We thank the Reviewer for their question about the inference time of the various methods. We note that the number of training steps per second for all methods included in Table 1 was already reported in the original submission, in Table 4 of Appendix D. For this rebuttal, we performed additional evaluations of the inference time (i.e., without computation of the loss function and gradient descent) of the various models and report the results in Table R2 of the global response PDF. ## Oscillations of the controller with FF term (R1) > R1: Figure 3 visualizes that the controller with a feed-forward part leads to heavy oscillations in the systems. Is that due to a poorly tuned controller or do you see the reason in the CON model? We thank the Reviewer for their question and for raising the topic. When tuning the gains of PID-like controllers, a trade-off naturally exists between transient behavior (e.g., oscillations and overshooting) and response time. In this case, we chose gains that minimized the response time while still allowing for stable behavior. The oscillations are caused by a combination of (a) the underdamped nature of the system and (b) the magnitude of the proportional term. Importantly, to have a fair comparison, we kept the gains of the feedback controller the same for both the _P-satI-D_ and _P-satI-D + FF_ cases. A higher proportional term is beneficial for the response time (and the performance) of the _P-satI-D_, while it leads to overshooting and oscillations in the _P-satI-D + FF_ case. We stress that this is not an inherent problem of the feedback controller; it can be mitigated by tuning the feedback gains differently. For this rebuttal, we tuned a controller with a reduced proportional term and an increased damping term, and the results, included as Fig. R4 in the global response PDF, show that the oscillations and overshooting are both significantly reduced. Pdf: /pdf/bb6e6a3368a784758e0c33029a1d71c2564d231d.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
General Detection-based Text Line Recognition
Accept (poster)
Summary: This manuscript provides a new approach to line recognition of offline handwritten and printed documents with regard to multilingualism. The method recognizes a text line by detecting its characters simultaneously using transformers. This approach is interesting for document analysis. Promising results have been obtained on different datasets. Strengths: This paper is well written and organized. Since the method recognizes the characters simultaneously, the approach is efficient. The paper explores the impact of using a synthetic dataset to improve accuracy in practice. Weaknesses: This is not a whole system for the field of document analysis. The addressed task is a part of document analysis that depends on line segmentation. This means that if the previous part of the process (line segmentation) has errors, these are propagated to the next steps. Therefore, this is an essential task for the work. What happens if you have a whole document for the task? What happens to your results if you use existing methods that segment lines? The method is not generally suitable for multilingual contexts. If there are several characters in different languages, the paper offers no clear solution. You would therefore need separate models for each language. The method for generating your synthetic data should be explained in more detail in the Appendix, ideally clarified with an illustration. Please check your text again for some errors, e.g., line 263 ("annotations annotations"). Is line 230 correct? "the English Volume 0002" Technical Quality: 2 Clarity: 2 Questions for Authors: When I compared your transformer architecture to the original [57], Figure 2 is different from the original. They use query selection and matching. Why do you not have these parts? I can see them in your description, but not in your illustration. You cite Figure 1 in subsection 4.2; why do you give the figure as Figure 1 without citing it earlier?
What happens if you have a document with multiple languages? Do you test your method for the type of input, or should you train your network for this type of data? Why don't you create synthetic data for other languages and fine-tune a model based on English? Have you considered this scenario? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: They showed some limitations in Figure 4, but more samples need to be added in an Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and insightful questions. We appreciate the comments on the organization and clarity of the paper, as well as the recognition of the efficiency of our method in recognizing characters simultaneously. We address the concerns and questions raised below. ## Weaknesses ### This is not a whole system for the field of document analysis. [...] What happens to your results if you use existing methods that segment lines? The most common practice in OCR and HTR is to evaluate text recognition on cropped text lines, not the entire pipeline. Very few papers address both tasks (e.g., DAN, FasterDAN). However, as detailed in the common rebuttal, we have begun to implement a model similar to ours, based on DINO-DETR, which is capable of detecting lines (Figure 1 of the rebuttal PDF) with accuracy close to the state of the art in baseline detection, as shown in Table 1 of the rebuttal PDF. We use this model to detect lines on IAM and run our text recognition model; performance is reported in Table 2. ### Please check your text again for some errors, e.g. line 263 (annotations annotations). Is line 230 correct? "the English Volume 0002" We thank the reviewer for noticing these errors. We have reviewed the text and made the necessary corrections. ## Questions ### When I compared your transformer architecture to the original [57], Figure 2 is different from the original. They use a query selection and matching. Why don't you have these parts? I can see them in your description, but not in your illustration. Indeed, we use query selection at the end of the encoder as well as a denoising task for the decoder, similar to DINO-DETR. We initially removed these from Figure 2 for simplicity, but recognize that this can create confusion and will add them back. We will correct the figure for the camera-ready version. ### You cite Figure 1 in subsection 4.2, why do you give the figure as Figure 1 without citing it earlier?
Thank you for pointing this out. This figure illustrates the capabilities of our method and the idea of our approach; we will refer to it explicitly in the introduction. ### What happens if you have a document with multiple languages? Since our method does not heavily rely on language modelling (especially our general model), it is possible to use or fine-tune it on datasets that encompass multiple languages. We tested training a single model on a mix of Latin, French and Germanic printed manuscripts with 8 different historical fonts, and it works effectively with a CER of 1.68% (note that better performance can likely be obtained by learning different models for different fonts, since different letters can have similar shapes in different fonts). We provide visual examples of results in Figure 5 of the rebuttal PDF, and we will include more in the appendix. ## Limitations ### They showed some limitations in Figure 4. But it needs to add more samples in an Appendix. We will add an appendix to the paper with more examples of failure cases. We will also add more qualitative results similar to Figures 2, 3, 4 and 5 in the rebuttal PDF. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will increase the rating.
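As an aside on the metric quoted above: CER (character error rate) is conventionally the character-level Levenshtein edit distance between the prediction and the reference, divided by the reference length. A minimal stdlib sketch of the standard formula (ours, not the authors' evaluation code):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein edit distance / reference length."""
    m, n = len(reference), len(hypothesis)
    # prev[j] = edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n] / max(m, 1)
```

Under this definition, a CER of 1.68% corresponds to roughly 1.7 character edits per 100 reference characters.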
Summary: The paper introduces a novel transformer-based character detection approach for text line recognition. The authors use a diverse set of synthetic data to enable the localization part of the detection network to generalize to unseen characters during training. The transformer-based detector can identify all characters in a text line in parallel, and a masking strategy has been adopted to encourage detection interactions. The method proposed by the authors also includes a process to fine-tune the detection network using only line-level annotations. Strengths: S1: The approach is novel for character detection for text line recognition using transformer-based models. S2: The authors demonstrate strong performance across several datasets and outperform prior work on cipher recognition. S3: The model generalizes well to unseen characters and variations in text because the authors utilize synthetic data. Weaknesses: W1: Several typos throughout the paper, e.g. in the Fig 1 caption: “Our model is general can be”. W2: The authors could provide additional information on the fine-tuning process and the adaptability of the masking strategy. Did they explore other strategies? W3: The performance on real-world datasets that contain moderate-to-large amounts of noise has not been extensively discussed by the authors. Technical Quality: 3 Clarity: 3 Questions for Authors: Q1: How does the model handle noisy datasets or texts with degraded quality? Q2: What were the computational resource requirements? Can this approach be extended to real-world problems in a different setting? Q3: How would LLMs affect or change the impact of this approach? Have the authors considered this aspect, and can the proposed method do better than some existing LLMs? Q4: Can this approach be extended to multi-line text detection and recognition?
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: L1: The efficiency of the proposed method on real-world datasets with noise has not been fully explored. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and insightful questions. We appreciate their recognition of the originality of our method. Below, we address each of the reviewer's concerns and provide additional information to clarify our approach. ## Weaknesses ### W1: Several typos throughout the paper. Ex: typo in the Fig 1 caption: “Our model is general can be” We thank the reviewer for pointing out these issues. We have carefully reviewed the paper and corrected the typos. ### W2: The authors can provide additional information on the fine-tuning process and the adaptability of the masking strategy. Did they explore other strategies? Regarding the masking strategy, we found empirically that combining both horizontal and vertical masking yielded the best results, similar to the approaches in [59, 10]. In addition, we used blur and random scaling as data augmentation strategies. Fine-tuning details, including the number of iterations and learning rate, are provided at the end of Section 3.2. Additional details on pre-training and fine-tuning will be included in an appendix, and the code will be released publicly. ### W3: The performance on real-world datasets that contain some-to-large amount(s) of noise has not been extensively discussed by the authors. We respectfully disagree with this assessment, as addressed in the common rebuttal. We have provided additional visual examples of READ and Copiale in Figures 2 and 3 of the rebuttal PDF, illustrating the challenges posed by these datasets. To further demonstrate the robustness of our model, we have included additional qualitative results for another cipher, Ramanacoil, in Figure 4 of the rebuttal PDF. Our model performs well on this dataset, which includes very slanted lines, leading to multiple lines being cropped jointly. We will include more examples for each dataset in the appendix.
This appendix will also include visual examples from five other ciphers from the ICDAR2024 Competition on Handwriting Recognition of Historical Ciphers, which we did not include in the paper. ## Questions ### Q1: How does the model handle noisy datasets or texts with degraded quality? Can this approach be extended to real-world problems in a different setting? As explained in response to W3, our method has been evaluated on several challenging real-world datasets, and we show additional results on challenging data in the rebuttal PDF. The method has been successfully applied to various settings, including printed text (Google 1000), handwritten text (IAM, READ, RIMES), multiple languages, and noisy data (ciphers, RIMES). It outperforms the state of the art in cipher recognition and remains competitive on historical data (READ) that contains degraded-quality text. ### Q2: What were the computational resource requirements? The pre-training took approximately one week, conducted on an RTX A6000 with 32 GB of memory, involving 225k iterations with a batch size of 4, as detailed in Section 3.1. The generation of synthetic data is expensive, accounting for 20% of the total time. Fine-tuning lasted 2 days, with details provided in Section 3.2. Iterations are faster during fine-tuning since no synthetic data is generated and the CTC loss is not costly to compute. ### Q3: How would LLMs affect or change the impact of this approach? 1. Multimodal LLMs capable of OCR cannot fully replace this approach. They will struggle with complex documents such as handwritten historical documents (e.g., READ) or ciphers that include rare symbols. 2. LLMs can complement our approach in several ways. For example, they can be used as external language models for post-processing to improve results, as demonstrated in [A]. However, this requires specific fine-tuning of the models.
Indeed, we attempted to use ChatGPT 3.5 on the IAM dataset to refine predictions, but this approach was not successful due to the LLM hallucinating words, as the IAM dataset tends to split sentences and words. It is also possible to use an LLM to initialize part of the architecture, such as the decoder, as done in TrOCR. However, end-to-end training then requires significant computational resources. [A] Thomas, Alan, Robert Gaizauskas, and Haiping Lu. "Leveraging LLMs for Post-OCR Correction of Historical Newspapers." Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING 2024. 2024. ### Q4: Can this approach be extended to multi-line text detection and recognition? We appreciate the reviewer's question regarding the extensibility of our approach to multi-line text detection and recognition. As detailed in the common rebuttal, we have begun to implement a model similar to ours, based on DINO-DETR, which is capable of detecting lines (Figure 1 of the rebuttal PDF) with accuracy close to the state of the art in baseline detection, as shown in Table 1 of the rebuttal PDF. We use this model to detect lines on IAM and run our text recognition model; performance is reported in Table 2. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions and comments. However, I believe the work requires some major revisions. Given this, I will maintain my score of 6: Weak Accept, because the work presented is novel and interesting and the authors have given good explanations in the rebuttal. It just requires some major revisions.
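To make the combined horizontal/vertical masking strategy mentioned in the rebuttal concrete, here is a minimal, illustrative sketch of strip masking on a 2D image represented as a list of rows. The function name and parameters are ours, not the authors' augmentation pipeline; blur and random scaling would be applied as separate transforms.

```python
import random

def mask_strips(img, n_h=1, n_v=1, max_frac=0.2, rng=None):
    """Zero out random horizontal and vertical strips of a 2D image (list of rows).
    Returns a masked copy; n_h / n_v strips, each at most max_frac of the dimension."""
    rng = rng or random.Random(0)
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for _ in range(n_h):                      # horizontal strips (full rows)
        size = max(1, int(h * max_frac * rng.random()))
        y0 = rng.randrange(0, h - size + 1)
        for y in range(y0, y0 + size):
            out[y] = [0] * w
    for _ in range(n_v):                      # vertical strips (full columns)
        size = max(1, int(w * max_frac * rng.random()))
        x0 = rng.randrange(0, w - size + 1)
        for y in range(h):
            for x in range(x0, x0 + size):
                out[y][x] = 0
    return out
```

Combining both strip directions in one augmentation mirrors the "horizontal and vertical masking" combination the rebuttal reports as working best.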
Summary: This paper presents a novel detection-based approach to text line recognition for both printed (OCR) and handwritten text (HTR), covering Latin, Chinese and cipher characters. Traditional detection-based methods have been largely neglected in HTR due to the difficulty of reading characters separately and the high cost of character-level annotation. The authors propose a solution to these challenges through three main insights: (i) using synthetic pre-training with diverse data for character localisation across different scripts; (ii) using modern transformer-based detectors to handle multiple character instances simultaneously, using a masking strategy to ensure consistency; and (iii) fine-tuning a pre-trained detection model with approximate character localisation using line-level annotations on real data, even with different alphabets. Strengths: - Originality: The method is highly original, proposing a detection/classification approach to character recognition that differs from most state-of-the-art methods, which typically rely on autoregressive decoding from images of lines or pages. - Synthetic training data: The authors propose a way to train models using fully synthetic data, eliminating the need for actual character-level annotations. - Model efficiency: The model is relatively small (~40M parameters), requiring only 100k synthetic line samples for training. - Error analysis: This approach allows for a better understanding of errors, distinguishing between detection and classification errors. - Computational cost: Characters can be independently predicted in parallel, reducing computational cost, especially without a language model. - Adaptability: The method can be easily adapted to any alphabet with minimal training data. - Code release: The authors are committed to releasing the code, facilitating further research and application.
Weaknesses: - Lack of new technical contributions: Despite the originality of the approach, the paper lacks novel technical contributions. The model, training strategy, fine-tuning with CTC, data augmentation and synthetic data generation are not novel. - Performance on handwritten documents: The method is not competitive on Latin handwritten documents and is outperformed by existing methods on Chinese handwritten documents. It does outperform on cipher recognition, but this is a more specialised and less researched area. - Scope of evaluation: The framework is only evaluated on perfectly segmented lines of text, raising questions about its performance on full pages. Full page processing would require additional steps such as text line detection and reading order retrieval, which could impact performance. - Impact of line detection: There are concerns about how the quality of line detection would affect character detection and recognition, particularly in cases where vertical or horizontal lines merge. Technical Quality: 2 Clarity: 2 Questions for Authors: - Error cases: Can you provide examples of challenging real-world examples, such as rotated, upside-down or slanted lines; blank lines; vertically merged lines; strikethrough text; translucent paper; and mixed printed/handwritten characters? - Pre-training: How long was the model trained, on what hardware (GPUs), and why was pre-training limited to 100k lines of text? Was this value determined experimentally? - Input image size: What was the input image size used for training and evaluation? - Computational cost: How do you explain the difference in computational cost between DINO-DETR (25ms/line) and FasterDAN (7ms/line)? Why were comparisons made using a batch size of 1? Please provide a more comprehensive comparison, including batch sizes, CPU vs. GPU performance, impact of language modelling, input image sizes, and model parameters. 
- Language model decoding: What is the impact of a language model on inference speed? Can character recognition/classification still be parallelized with a language model? Can you decode on GPU with a KenLM language model, and what size of N-gram language model do you use? - Model release: Will the model be released to the public? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: - Conclusion: The conclusion lacks depth and provides no insight into future improvements to the method. - Paper and writing: - Figure 2: The figure is unclear and should be improved. - Table 1: The table is misplaced; it appears on page 6 but is referenced on page 7. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review. We are glad they found the paper highly original, efficient, and adaptable. We address the concerns and questions raised below. ## Weaknesses ### Lack of new technical contributions. We respectfully disagree with this assessment, as detailed in the common rebuttal and the answer to R-hupU. ### Performance on handwritten documents. We respectfully disagree, as answered in the common rebuttal. ### Impact of line detection. We agree that the method used to extract text lines has an impact on performance. However, evaluating on cropped text lines is the most common practice in OCR and HTR, rather than evaluating an entire pipeline. Very few papers (e.g., DAN, FasterDAN) address both tasks. In cases where the line detection quality is poor and multiple lines appear in one crop, our method can still learn to detect only the characters of the line of interest, as shown in Figure 4 of the rebuttal PDF. ## Questions ### Error cases. In the paper, our method has been evaluated on several challenging real-world datasets with such issues. In Table 1 of the paper, we report results on the READ and Copiale datasets, which include translucent paper and slanted lines, as shown in Figures 2 and 3 of the rebuttal PDF. The IAM and RIMES datasets include strikethrough text, examples of which we will include in an appendix. Our method should perform well on mixed printed and handwritten text, as it has been effective on both printed (Google 1000) and handwritten datasets (IAM, READ, RIMES, HWDB), and on datasets mixing multiple fonts and languages, as shown in Figure 5 of the rebuttal PDF. We see "rotated, upside-down" text as outside the scope of the paper: we consider that correctly detecting the orientation and shape of such lines is the role of the line detector. Lines can be slightly slanted, as is the case in several of our datasets and the one reported in Figure 4 of the rebuttal PDF. ### Pre-training.
The pre-training took approximately one week, conducted on an RTX A6000 with 32 GB of memory, involving 225k iterations with a batch size of 4, as detailed in Section 3.1. The generation of synthetic data is expensive, accounting for 20% of the total time. Fine-tuning lasted two days (the number of iterations is given in Section 3.2). The number of synthetic lines was not tuned; we simply generated a high number of lines. We believe that using a very large number of text lines during pre-training is not necessary since our method only learns a limited implicit language model. However, we will explore the impact of the synthetic training dataset size systematically in the final version (since this requires re-training several models, it was not feasible within the rebuttal period). ### Computational cost. As explained in Section 4.2, we evaluated the inference speed of FasterDAN on whole documents, and the inference time was divided by the average number of lines. This is not a fair comparison for our method, since our model does not work on whole documents while FasterDAN does. As explained in the common rebuttal, we aim to adapt our model for whole documents by using an additional model to guide attention, thereby reducing inference time like FasterDAN. When we run both methods on individual lines, our method requires 67ms for inference, while FasterDAN takes 140ms. **Batch Size:** Comparisons were made using a batch size of 1 to ensure a fair evaluation of inference time. Below, we provide a table comparing inference speeds in ms across different batch sizes:

| Batch Size | 1 | 2 | 4 | 8 | 16 | 32 |
|-|-|-|-|-|-|-|
| Ours | 74 | 44 | 34 | 31 | 28 | 32 |
| TrOCR | 271 | 208 | 111 | 59 | 49 | 46 |
| FasterDAN Document / average number of lines | 56 | 35 | 21 | 16 | 16 | 21 |
| FasterDAN Lines | 140 | 74 | 28 | 16 | 8 | 7 |

**Image Sizes:** We agree that image size impacts speed.
However, text lines are approximately the same size and are resized to a maximum width of 1330 pixels. **Model Parameters:** We agree that model parameters impact both inference time and performance, but analyzing these effects in detail would require extensive experimentation. Our focus was on demonstrating the effectiveness of our method in handling text recognition and character segmentation efficiently. ### Language model decoding. We employ the KenLM library and the PyTorch CTC decoder, which do not support GPU acceleration. We have measured the inference speed on the RIMES dataset on a single CPU to be 960 ms per example. While this could be parallelized using CPU multiprocessing, that optimization is beyond the scope of our current work. Our aim was to illustrate that our predictions can be further refined through the use of a language model. One could certainly employ an LLM that supports GPU decoding. The N-gram model used in our experiments is of size 6, tuned on the validation set. ### Model release: Will the model be released to the public? Yes, as stated in the introduction, all models will be released publicly, including pre-trained and fine-tuned models. ## Limitations ### The conclusion lacks depth and provides no insight into future improvements to the method. We can extend the conclusion by adding the following future directions: 1. **Text recognition on whole documents**: detailed in the common rebuttal. 2. **Advanced language models**: improving recognition by integrating more advanced language models, such as a GPT decoder. 3. **Unsupervised learning**: implementing character segmentation with a reconstruction loss, allowing the model to learn the characters (alphabet) of a dataset and perform text recognition without any annotations, similarly to [A]. [A] Siglidis, Ioannis, et al. "The Learnable Typewriter: A Generative Approach to Text Analysis." arXiv preprint arXiv:2302.01660 (2023).
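The role of the n-gram language model described above is essentially hypothesis rescoring: combine the recognizer's score with a weighted LM score and keep the best candidate. A toy, self-contained sketch with an add-k smoothed bigram LM (purely illustrative; the corpus, weights, and function names are ours, whereas the paper uses a 6-gram KenLM model inside a CTC beam-search decoder):

```python
import math
from collections import Counter

def bigram_logprob(sentence, bigrams, unigrams, vocab_size, k=1.0):
    """Add-k smoothed bigram log-probability of a word sequence."""
    words = ["<s>"] + sentence.split()
    lp = 0.0
    for prev, w in zip(words, words[1:]):
        lp += math.log((bigrams[(prev, w)] + k) /
                       (unigrams[prev] + k * vocab_size))
    return lp

def rescore(nbest, bigrams, unigrams, vocab_size, lm_weight=0.5):
    """Pick the hypothesis maximizing recognizer score + lm_weight * LM score."""
    return max(nbest, key=lambda h: h[1] + lm_weight *
               bigram_logprob(h[0], bigrams, unigrams, vocab_size))[0]

# Toy LM "trained" on a tiny corpus.
corpus = ["the cat sat", "the cat ran", "the dog sat"]
unigrams, bigrams = Counter(), Counter()
for line in corpus:
    ws = ["<s>"] + line.split()
    unigrams.update(ws)
    bigrams.update(zip(ws, ws[1:]))
vocab = {w for line in corpus for w in line.split()}

# Two hypotheses with near-identical recognizer scores; the LM breaks the tie.
best = rescore([("the cat sat", -1.0), ("tha cat sat", -0.9)],
               bigrams, unigrams, len(vocab))
```

The same shallow-fusion idea carries over to a real setup, where the LM score is folded into each beam-search step rather than applied to a finished n-best list.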
### Paper and writing As addressed in our response to RjpUH, we will clarify Figure 2 and have corrected the placement of Table 1. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answer. I'm convinced of the value of the proposed method, mainly because of its originality compared with current methods. Without being revolutionary, it opens up interesting perspectives and I would like to see it tested further. The availability of the code and models will allow this. I'm going to raise my score.
Summary: This paper treats text line recognition as an object detection task and proposes a two-stage training approach based on DINO-DETR. In the first stage, synthetic data with bounding box information is used to predict the bounding boxes and categories of text, due to the absence of character-level annotations for text line datasets. In the second stage, real text line data is employed to sequentially fine-tune the classifier and the entire model. The proposed method achieves state-of-the-art performance on cipher text. Strengths: - This paper is well-written and easy to follow. - The evaluation is well done and compares against several strong baselines. Weaknesses: - The ablation study is rather simple. - The work relies more on engineering skill than on methodological innovation. Technical Quality: 3 Clarity: 3 Questions for Authors: All results in the paper were pre-trained on Latin text and fine-tuned on several datasets. What if we pre-trained and fine-tuned on data in the same language? For example, pre-training on synthetic Chinese text lines followed by fine-tuning on real Chinese datasets. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The proposed method performs well only on the cipher text recognition task. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and positive feedback on the paper, in particular the positive comments on the clarity of the paper and the thorough evaluation against strong baselines. We address the concerns and questions raised below. ## Weaknesses ### The ablation study is rather simple. In our ablation we evaluate the impact of the pre-training dataset and of erasing during pre-training and fine-tuning, and we demonstrate the importance of fine-tuning the complete network, including the bounding box prediction, despite the fact that there is no supervision on bounding boxes during fine-tuning. We will happily add any other ablation that the reviewer thinks would clarify aspects of our method. ### The work relies more on engineering skill than on methodological innovation. We respectfully disagree with this assessment. As explained in the common rebuttal, building an OCR/HTR method based on a transformer detection architecture required methodological innovation and not mere engineering, which we believe is why it had never been demonstrated. While the pre-training losses are the same as DINO, the fine-tuning with CTC for such an architecture is novel and requires (i) ordering the queries differently for each text line according to the bounding boxes, (ii) adapting the class probabilities produced by the DINO architecture, and (iii) adapting the CTC loss, which can be done by introducing additional blank tokens, as detailed in our methodology section. We believe that it is due to these methodological contributions, and not heavy engineering, that our method is, to the best of our knowledge, the first method competitive on many datasets that jointly performs text recognition and character segmentation while processing characters in parallel rather than autoregressively. ## Questions ### All results in the paper were pre-trained on Latin text and fine-tuned on several datasets. What if we pre-trained and fine-tuned on data in the same language?
For example, pre-training on synthetic Chinese text lines followed by fine-tuning on real Chinese datasets. We used pre-trained models for each language (French, German, English) and one general model trained on random text for the cipher and Chinese datasets. We noticed that this information was missing from Section 3.1; we will include all of it in the camera-ready version. As shown in Table 4 of the ablation study, pre-training the model on the target language with a masking strategy leads to better results on IAM. We will compare performance with generic and language-specific pre-training for all datasets in the final version of the paper. Regarding Chinese, we have pre-trained a model on Chinese characters and are currently fine-tuning it on CASIA. We achieved an AR of 94.3 and a CR of 95.3, surpassing the performance reported in the paper by 2.1%. ## Limitations ### The proposed method performs well only on the cipher text recognition task. We refer the reviewer to the common rebuttal where we address this limitation.
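The ordering-plus-CTC idea from the rebuttal (sort detection queries by bounding box position, then treat their class predictions as a CTC sequence with blank tokens) can be illustrated with a greedy-decoding sketch. Names and values here are ours, not the authors' code:

```python
BLANK = "<blank>"

def queries_to_text(queries):
    """Order detection queries left-to-right by bbox x-center, then apply a
    CTC-style collapse: merge adjacent repeated labels and drop blanks."""
    ordered = [label for _, label in sorted(queries, key=lambda q: q[0])]
    out, prev = [], None
    for label in ordered:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return "".join(out)

# Hypothetical per-query predictions: (bbox x-center, predicted class).
preds = [(0.10, "c"), (0.22, "a"), (0.35, BLANK), (0.48, "t"), (0.41, "t")]
```

During fine-tuning with only line-level annotations, the CTC loss would then be applied to the ordered per-query class probabilities against the line transcription, with no supervision on the boxes themselves.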
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback. We are pleased that our approach was found to be highly original (R9JMV), novel (RKLkw), efficient (R9JMV, RjpUH), rigorously evaluated (RhupU, RKLkw), and well written (RhupU, RjpUH). We address below the main concerns raised by several reviewers. ## Lack of Technical Novelty (RhupU, RKLkw) We respectfully disagree with this assessment. Building an OCR/HTR method based on a transformer detection architecture, allowing for parallel processing of characters instead of the sequential processing typical in current methods [13, 15, 19, 25, 30, 38], required methodological innovation and not mere engineering, which we believe is why it had never been demonstrated. While the pre-training losses are the same as DINO-DETR, the fine-tuning with the CTC loss for such an architecture is novel and requires several ideas and contributions: (i) ordering the queries differently for each text line according to the bounding boxes, (ii) adapting the class probabilities produced by the DINO-DETR architecture, and (iii) adapting the CTC loss, which can be done by introducing additional blank tokens, as detailed in our methodology section. We believe that it is due to these methodological contributions, and not heavy engineering, that our method is, to the best of our knowledge, the first method competitive on many datasets that jointly performs text recognition and character segmentation while processing characters in parallel rather than autoregressively. ## Lack of Evaluation on Challenging Real Datasets (R9JMV, RKLkw) Our method has been evaluated on the historical READ dataset, which includes Germanic manuscripts, and on two ciphers. These datasets are characterized by noise and degradation, and can have slightly slanted lines, as shown in Figures 2 (READ) and 3 (Copiale) of the rebuttal PDF.
We have also included additional examples from two extra challenging datasets: the Ramanacoil cipher (Figure 4) and a collection of early-modern prints from the ICDAR24 Competition on Multi Font Group Recognition and OCR (Figure 5). The Ramanacoil cipher (from 1674) includes very slanted lines, and the crops can include multiple lines, as shown in Figure 4 of the rebuttal PDF. Our model successfully recognizes the correct line despite the presence of the upper and lower lines. The early-modern prints dataset features eight different fonts and three languages (Old French, Old German, and Latin). We trained a single model capable of handling these diverse fonts and languages, as illustrated in Figure 2. The model also processes lines containing mixed fonts and languages (German and Latin) within the same line, as shown in Figure 5. Note that better performance can likely be obtained by learning different models for different fonts, since different letters can have similar shapes in different fonts. ## Competitive Results Limited to Ciphers We respectfully disagree. Our approach demonstrates strong performance across the datasets mentioned in the paper. While our results on the IAM dataset are below the state of the art, it is important to note that leading methods like DTrOCR and TrOCR use significantly larger architectures than ours, with up to 10 times more parameters, extensive computational resources (e.g., TrOCR used 32 V100 GPUs and a batch size of 1024), and much larger-scale training datasets (hundreds of millions of printed text lines for TrOCR). We believe that, given comparable computational resources, similar performance could be achieved by our model. Our method also provides a valuable contribution by efficiently addressing both text recognition and character segmentation, which is not achieved by any current method. Following the suggestion of RhupU, we pre-trained a new model on synthetic lines of Chinese using characters from CASIA v2.
We are currently fine-tuning this model on CASIA v2 and have achieved an AR of 94.3% and a CR of 95.3%, which is already 2.1% above the performance reported in the paper, reducing the gap with the state of the art. Moreover, the training loss has not had time to converge during the rebuttal period, and better results can be expected after convergence. ## Text Recognition on Whole Documents We acknowledge the value of a system that can handle both text detection and recognition. However, the standard in text recognition typically involves evaluating crops of individual lines [19, 25, 30, 38]; few methods address this combined task [12, 13]. Nevertheless, we provide proof-of-concept results in the rebuttal PDF for a model similar to ours, derived from DINO, which performs text line detection. This method predicts 8 points for the baseline of the text and 2 for the line above it to segment the line, enabling both baseline evaluation on standard datasets and bounding-box extraction. It is pre-trained on synthetic data and fine-tuned on real data. The method achieves results close to the state of the art in baseline detection, as reported in Table 1 of the rebuttal PDF. We also provide visual examples of line predictions on complex datasets (cBAD2019) and IAM in Figure 1 of the rebuttal PDF. Finally, we provide the CER for combined line detection and recognition on IAM in Table 2. This is a promising first step, but it could be improved by fully integrating it with our approach and training it end-to-end, using the line detections to guide the attention weights of our text recognition model. Recognition for several lines could then be performed in parallel, similar to FasterDAN. As with our model, pre-training could be performed in a completely supervised way, while fine-tuning could be done with only page-level supervision.
However, we believe that demonstrating a detection-based model for text lines is a significant enough contribution for this paper, and introducing a complete pipeline would dilute the contribution and limit the diversity of datasets on which the full approach can be evaluated. Pdf: /pdf/9fe66e7d633f7b0315f24b0aa877e6355dfe4ec4.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Multi-Scale Representation Learning for Protein Fitness Prediction
Accept (poster)
Summary: This work targets protein fitness prediction and introduces a sequence-structure-surface multi-modality (aka multi-scale) self-supervised learning scheme. The results show that S3F outperforms baseline algorithms and achieves SOTA on the ProteinGym benchmark. I like the inspiration and favor S3F's promising performance. However, the study is very close to some recently published papers at ICLR 2024 (i.e., ProteinINR and PPIFormer) but completely ignores them. A fair comparison and analysis of the differences is necessary, and I am afraid this is a big reason for me to give a borderline score. Strengths: (1) I advocate the research direction of pretraining on all feasible protein modalities, comprising sequence, structure, and surfaces. Each representation form has its strengths, and incorporating them is beneficial for zero-shot protein fitness prediction. (2) The authors conduct extensive ablation studies over function type, MSA depth, taxon, and mutant depth. This helps readers better understand the impact of each component of their model. The OOD examination on unseen protein families is also encouraging and demonstrates the method's superiority. (3) In Section 4.4, the authors investigate the impact of structure quality and observe a clear performance drop when lower-quality structures are used. This phenomenon underscores the importance of accurate structures for fitness prediction. Notably, there have already been some studies that try to bridge the gap between representations of real and predicted structures [A], and I would recommend the authors take a look if seeking further performance improvement. [A] Protein 3D Graph Structure Learning for Robust Structure-Based Protein Property Prediction. AAAI 2024. (4) The visualization and experimental analysis are elegant. I learned a lot from them. Weaknesses: (1) Missing of some closely related baselines and relevant work. GearNet [A] is one of the earliest studies in structural pretraining.
ProteinINR [B] also leverages sequence, structure, and surfaces in a self-supervised way. The only difference is that they are not specifically designed to solve zero-shot problems. Based on these facts, the so-called "multi-scale representation learning" of S3F is no longer novel to me. From my point of view, both GearNet and ProteinINR can be simply adjusted to realize zero-shot prediction. Thus, it would be more interesting to see whether S3F surpasses them in all categories of downstream tasks. Besides, there are also some appealing self-supervised methods to predict mutant effects. RDE [C] pretrains a structural encoder by masking and predicting side-chain angles. The authors also did not discuss this line of research. Last but not least, PPIFormer [D] pretrained a structural encoder with simple MLM on complex structures and can conduct zero-shot fitness prediction. I suppose it should also be compared and mentioned. [A] Protein Representation Learning by Geometric Structure Pretraining. ICLR 2023. [B] Pre-training Sequence, Structure, and Surface Features for Comprehensive Protein Representation Learning. ICLR 2024. [C] Rotamer Density Estimator is an Unsupervised Learner of the Effect of Mutations on Protein-Protein Interaction. ICLR 2023. [D] Learning to design protein-protein interactions with enhanced generalization. ICLR 2024. (2) Some important relevant works are missing. For instance, the authors claim that the weights of ESM-2 are frozen and only the structural encoder is tuned. This practice is very similar to the one in [A]. I would recommend the authors at least cite it. [A] Integration of Pre-trained Protein Language Models into Geometric Deep Learning Networks. Communications Biology 2023. (3) The authors used dMaSIF to generate the surface based on the backbone structures of proteins. This raises two doubts. Firstly, the side-chain atoms are ignored for surface generation, so the generated surface should be smaller than the standard one.
Secondly, the fast-sampling algorithm developed by dMaSIF has unavoidable randomness: different random seeds produce different surfaces. Have the authors taken these two potential negative effects into consideration? Have the authors adopted software like PyMol to acquire the surfaces? Technical Quality: 3 Clarity: 3 Questions for Authors: (1) In line 256, the authors said "... by ensembling them with EVE predictions through the summation of their z-scores". I am not familiar with this alignment part. Can you please explain the details more? (2) The title uses "multi-scale" representations of proteins. However, from my personal point of view, multi-scale refers to atom-scale, residue-scale, and protein-scale. As this study proposes to leverage sequence, structure, and surfaces, "multi-modality" would be more appropriate. (3) The authors claimed that the backbone structures remain unchanged post-mutation. This is fine in most settings. However, did the authors consider a more challenging circumstance: what if the structure varies significantly due to the mutation? [A] proposes a co-learning framework to simultaneously forecast the fitness change and the structural update. Do the authors consider this factor? [A] Thermodynamics-inspired Structure Hallucination for Protein-protein Interaction Modeling. ICLR submission. (4) If I understand it correctly, the objective is to develop an unsupervised model that can predict a score for each mutant to quantify the changes in fitness values relative to the wild-type. Thus, it is more related to the mutant effect prediction task rather than pure fitness prediction. I would recommend the authors use "mutant effect prediction" instead of "fitness prediction" to better depict the target problem. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. We have replied to each of your questions below and included additional analyses that we believe greatly strengthen our submission. >**C1: Missing of some closely related baselines and relevant work.** Thank you for pointing out additional recent baselines we could have considered. We follow your suggestion and compare against 4 additional baselines: ESM-GearNet, ESM-GearNet-Edge, RDE and PPIFormer. We could not include ProteinINR, since no public code is currently available for this work. We report the performance of these new baselines in the following table.

Table A. Average Spearman correlation over 217 ProteinGym assays

| Method | Spearman |
|:----:|:----:|
| ESM-GearNet | 0.412 |
| ESM-GearNet-Edge | 0.432 |
| RDE | 0.220 |
| PPIFormer | 0.224 |
| S2F | 0.454 |
| **S3F** | **0.470** |

**Note that all the new baselines significantly underperform our proposed methods.** RDE and PPIFormer are specifically designed for predicting the mutational effects on protein-protein interactions, a task that fundamentally differs from protein fitness prediction. We had initially considered GearNet as our structure backbone in earlier stages of development, but eventually switched to GVP as it provided superior fitness prediction performance, as can be seen in the table above. >**C2: Some important relevant works are missing. For instance, the author claims that the weights of ESM-2 are frozen and only the structural encoder is tuned. This practice is very similar to the one in [A].** Thank you for the suggestion -- we will include this work in the background section of the final version. >**C3: There are two doubts about dMaSIF surface generation: (1) ignoring side-chain atoms makes the surface smaller; (2) randomness in surface generation. Has the author adopted software like PyMol?** Several great points!
We answer your questions as follows:

- Ignoring side-chain atoms indeed makes the surface smaller, but it results in significant computational savings for surface generation and message passing. Our results demonstrate that we can achieve significant performance gains with limited computational overhead, making our implementation appealing to practitioners. If we set aside computational considerations, an ideal modeling strategy would likely involve all-atom mutant structures and surfaces. We plan to explore such approaches in future work.
- Randomness in surface generation is indeed unavoidable. To mitigate this effect, we use a high-resolution surface (resolution = 1.0Å, with 6K-20K points) and construct a dense surface-to-backbone correspondence graph (using the 20 nearest surface points). This approach ensures that our learned surface representations remain robust across different surfaces. Given your feedback, we tested S3F using surfaces generated with five different seeds and found that the standard deviation of the overall performance is approximately 0.001, thereby confirming the robustness of our learned surface features.
- There are several good software options for surface generation, including PyMol. We chose dMaSIF as it offered an efficient GPU implementation and could easily be integrated into our codebase, but other software suites could have been used similarly.

>**C4: Can you please explain the details of the alignment part?** Since the model outputs from EVE are unnormalized delta ELBOs, model predictions from EVE and S3F are on unrelated scales. To facilitate model ensembling, we thus first standard-normalize each model's predictions separately, then take their arithmetic average. We will add this clarification in the revision. >**C5: "Multi-modality" would be more appropriate than "multi-scale" in the title.** Thank you for the suggestion. We will change the title accordingly in the final version.
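The z-score ensembling described in the C4 answer above can be made concrete with a short sketch. All variable names and toy numbers below are illustrative, not taken from the paper:

```python
import numpy as np

def zscore(scores):
    """Standardize a vector of model scores to zero mean, unit variance."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.mean()) / scores.std()

def ensemble(eve_scores, s3f_scores):
    """Ensemble two models whose outputs live on unrelated scales:
    z-score each model's predictions separately, then average them."""
    return (zscore(eve_scores) + zscore(s3f_scores)) / 2.0

# Toy example: two models scoring the same four mutants on different scales.
eve = [-12.3, -10.1, -15.8, -11.0]   # e.g. unnormalized delta ELBOs
s3f = [0.2, 0.9, -0.4, 0.5]          # e.g. log-likelihood ratios
combined = ensemble(eve, s3f)
```

Because each model is standardized independently, neither model's scale dominates the average, while the relative ordering each model assigns is preserved in its contribution.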
>**C6: Did the author consider a more challenging circumstance: what if the structure varies significantly due to the mutation? [A] proposes a co-learning framework to simultaneously forecast the fitness change and the structural update. Does the author consider this factor?** Thank you for bringing this work to our attention. The idea looks very interesting and related. We will reference this paper in the final version and consider the idea in future work. >**C7: I would recommend the author use "mutant effect prediction" instead of "fitness prediction" to better depict the target problem.** We used the particular terminology from the ProteinGym [1] work for consistency, although the two expressions are used synonymously in the relevant literature. [1] ProteinGym: Large-Scale Benchmarks for Protein Fitness Prediction and Design --- Rebuttal Comment 1.1: Title: Update Comment: Thanks for your response. I am satisfied with the additional experiments and the outstanding performance compared to those important baselines. However, I still believe the way of generating surfaces using dMaSIF is not a smart choice despite its GPU-parallel efficiency. Nevertheless, I appreciate the effort in answering my questions and would like to raise my score to 5. I hope the authors can incorporate the changes from the rebuttal period into the final revision. --- Reply to Comment 1.1.1: Title: Concluding remarks Comment: Dear reviewer, Thank you very much for reading through our responses and for raising your score. We will make sure to include the changes discussed above in the final revision. Please let us know if there are any other points we can clarify before the discussion period ends. Kind regards, The authors
Summary: The paper presents a multimodal framework that integrates protein sequence, structure, and surface information to predict protein fitness. The task of protein fitness prediction is a critical quality assessment of protein embeddings. Protein language models (pLMs) are used for sequence representation, which is embedded in a graph representation of the protein structure and processed with a geometric vector perceptron (GVP). dMaSIF, a protein surface representation model, is used to encode the protein surface, together with pLM embeddings of the nearest residues. Surface features and sequence-structure features are integrated by concatenation and linear layers to predict masked residue identities. The model is evaluated on ProteinGym, a gold-standard protein fitness benchmark dataset, against various pLM models and multiple sequence alignment (MSA)-based models, and achieves favorable results in terms of Spearman correlation with mutational effects. Evaluations are further run on AlphaFold-predicted structures and less representative proteins to demonstrate generalization. Strengths: **Originality** This is one of the first papers that integrates pLMs, protein structure, and surface representations together to predict protein fitness and shows improvement on benchmarks. It is also novel to combine MSA information with it to further improve the predictive power of the model. **Quality** The submission is technically sound, with a detailed description of the framework and extensive experimental results on benchmark datasets with appropriate reasoning and explanation. **Clarity** The paper is clear to read and easy to follow. The presentation of results is simple yet effective. **Significance** This method achieves SOTA performance on the protein fitness prediction benchmark and provides a standardized way to integrate three modalities of proteins together for a unified representation. Weaknesses: 1. There is inadequate referencing of prior work on similar topics, e.g.
multimodal fusion and protein surface representation learning. The idea of integrating surfaces into protein representations is not novel (see for example [1]), but this is neither acknowledged nor benchmarked in the paper. 2. Even though the performance of the proposed method is better than competing methods, the gain is marginal. As the authors write in the abstract, '...these sequence-structure models have so far achieved only incremental improvements when compared to the leading sequence-only approaches.' However, SaProt (sequence+structure) provides a 0.35 correlation gain compared to ESM2 (seq), while S3F (proposed) provides 0.13 compared to SaProt. The advantage brought by the surface representation is even more incremental. 3. Statistical results should be provided for Figure 2 when possible. How does performance vary across the 217 assays? References: [1] https://arxiv.org/pdf/2309.16519 Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Line 195: ESM features are integrated into surface representations. Why not just use the surface feature itself, since it already contains chemistry and geometry information? 2. Line 218: 'we avoid information leakage from surfaces by removing the top 20 closest surface points for each selected residue'. How do you ensure 20 is sufficient? 3. Line 230: What percentage of results are output from ESM? 4. Figure 3: Why do some residues have a negative correlation with model scores? Is there any quantitative analysis of the differences rather than just qualitative visualization? 5. Line 347: 'ignoring side-chain information'. If the side chain is ignored, how is surface information obtained? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The majority of experiments in the paper are conducted with experimental structures rather than predicted structures. And the results show that ordinary prediction quality could harm the performance of the method.
Thus, it is not clear if the model can be finetuned on AFDB to achieve better performance because of the quality of side-chain conformations in predicted structures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. We have responded to each of your questions below and included additional analyses that we believe significantly improve our submission. >**C1: The idea of integrating surface into protein representation is not novel, for example [1], but is neither acknowledged nor benchmarked in the paper.** We kindly refer the reviewer to our clarifications regarding the novelty and contributions from our work in our response to all reviewers above. Our background section (section 2) referenced several prior works that had already leveraged structure and surface features for general protein representation learning -- AtomSurf belongs to the same category, and we will include it as an additional reference in the revision. However, we would like to reiterate that we did not benchmark against any of these prior works leveraging surface features since none of them, including AtomSurf, supports (zero-shot) fitness prediction, which is the core task we focus on. The novelty of our work comes through the careful architecture design to best leverage structure and surface features to achieve state-of-the-art protein fitness prediction performance. >**C2: The advantage brought by surface representation is more incremental.** Please review our clarifications regarding the significance of our performance lift in our response to all reviewers above (point 2). The advantage brought by incorporating surface features in terms of fitness prediction performance is substantial: +0.02 Spearman between S2F and S3F across the 217 assays from ProteinGym. This is as much as the performance lift conferred by incorporating structural features (S2F vs ESM2). Together, structural and surface features provide a performance lift (+0.04 Spearman) that is comparable to factoring in epistasis vs not (Potts vs PSSM), which can hardly be characterized as incremental to fitness prediction performance. 
>**C3: Statistical results should be provided for figure 2 when possible. How is the variation of performance on 217 assays?** Thank you for suggesting this analysis. As discussed in our overall response to all reviewers, we compute the non-parametric bootstrap standard error for the difference in Spearman performance between a given model and the best overall model, using 10k bootstrap samples. We do this given the inherent variation in experimental noise and data quality across assays, which leads all models to consistently perform lower on certain assays and higher on others. By focusing on the standard error of the *difference* in model performance, we abstract away this assay-dependent variability, and provide a quantity that better reflects the statistical significance of the performance lift between our best model and any other baseline in ProteinGym. Please see the detailed results in the attached pdf, which confirms that the performance lift of our best model (S3F-MSA) vs the prior best baseline (SaProt) is statistically significant. >**C4: ESM features are integrated into surface representations. Why not just use surface feature itself because it contains chemistry and geometry information already?** We chose to combine ESM features with *geometric* features (such as Gaussian curvatures and Heat Kernel Signatures) to initialize the surface features. We believe that ESM can capture more informative chemical features by identifying residue types and co-evolutionary information, making it a strong complement to geometric surface features. >**C5: How to ensure removing 20 closest surface points for each selected residue is sufficient?** We empirically chose this number of closest surface points to remove based on validation accuracy during pre-training. For instance, without removing these points, the pre-training accuracy quickly reaches 100%, suggesting likely information leakage. 
However, after removing these points, the S3F accuracy drops down to 52.4%, comparable to that of S2F (51.0%). >**C6: What percentage of results are output from ESM?** Out of 2.46 million mutations, approximately 0.35 million (14%) rely on ESM predictions only due to low-quality structures. We also benchmarked the results of S2F and S3F without pLDDT filtering, yielding aggregate Spearman performance scores of 0.449 and 0.461, respectively. These results remain significantly higher than the ESM performance (0.414 Spearman). >**C7: Why do some residues have negative correlation to model scores? Is there any quantitative analysis of differences rather than just qualitative visualization?** These negative correlations are already present in the ESM-based predictions, likely reflecting residue type preferences in ESM that do not align with experimental results. By introducing structural and surface features, our methods mitigate this effect to some extent. To quantitatively evaluate how much S2F and S3F improve over ESM methods, we calculated the average Spearman correlation in the regions of interest (residues 234-252 and 266-282). The results for ESM, S2F, and S3F are 0.301, 0.397, and 0.443 respectively, demonstrating that the introduction of structural and surface features can more effectively capture epistatic effects. >**C8: If the side chain is ignored, how to get surface information?** Since point mutations can significantly alter the side-chain structure, we chose to only keep the backbone structure for surface generation resulting in a smaller surface graph that is broadly applicable to all mutated sequences for the same protein family. This approach significantly reduces the cost of surface generation and message passing, making model training and inference more efficient, while yielding the significant performance improvement discussed above and keeping computations tractable on the hardware we had access to. 
--- Rebuttal Comment 1.1: Title: Reviewer response Comment: Thank you for the response. If I understand correctly, the surface is calculated only from the backbone structure? Then this definition largely deviates from what people normally conceive of as the true protein surface, which is mostly dictated by side chains. I believe the wording regarding the surface is misleading and would cause confusion for an audience interested in protein surface representation. --- Reply to Comment 1.1.1: Title: Concluding remarks Comment: Dear reviewer, Thank you very much for reading through our responses and for the additional question. Our surface features are indeed derived from the backbone structure only, which is both an approximation that has been used in prior literature (see for example [1]) and one that has led to strong empirical performance in our experiments. We will further clarify this point in the revised manuscript, as well as include the rationale we provided above for adopting this approach. Is there any other point of concern that we could help clarify before the discussion period ends? Kind regards, The authors [1] Hua et al. Effective Protein-Protein Interaction Exploration with PPIretrieval
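The paired-bootstrap standard error described in the rebuttal above (response C3) can be sketched as follows. All data and the function name here are illustrative placeholders, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_se_of_diff(per_assay_a, per_assay_b, n_boot=10_000):
    """Paired bootstrap over assays: resample assays with replacement and
    measure the spread of the mean difference in per-assay Spearman scores.
    Working with the *difference* abstracts away assay-level variability
    that shifts all models up or down together."""
    a = np.asarray(per_assay_a, dtype=float)
    b = np.asarray(per_assay_b, dtype=float)
    n = len(a)
    idx = rng.integers(0, n, size=(n_boot, n))   # resampled assay indices
    diffs = (a[idx] - b[idx]).mean(axis=1)       # mean difference per resample
    return float(diffs.std())

# Toy example: hypothetical per-assay Spearman scores for two correlated models.
model_a = rng.normal(0.49, 0.15, size=217)
model_b = model_a - 0.007 + rng.normal(0, 0.02, size=217)
se = bootstrap_se_of_diff(model_a, model_b)
```

Because the two score vectors are highly correlated across assays, the standard error of the difference is far smaller than the standard error of either model's mean score alone, which is what makes small but consistent lifts statistically detectable.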
Summary: This paper proposes a new protein fitness prediction model, the Sequence-Structure-Surface (S3F) model, which integrates protein sequence information from a protein language model embedding, protein structure information processed through a Geometric Vector Perceptron (GVP) module, and protein surface information processed through the dMaSIF model and a GVP module. After pre-training S3F on CATH (to be specific, pre-training the GVP modules while freezing the protein language model weights), the model outperforms state-of-the-art baselines on zero-shot protein function prediction. The paper also analyzes the breakdown of these results across different fitness prediction settings, finding that all settings benefit from adding structure, surface, or MSA information on top of sequence information, with the largest gains arising in binding and stability assays and in settings where protein language models have limited or biased training data. Strengths: The paper is well written and presents a parameter-efficient, novel way to include protein surface information in a model for fitness prediction. The paper contains results that will be interesting to the protein modeling community, showing that modeling the surface explicitly improves function prediction beyond other baseline methods that include structure information. It seems very interesting to investigate why such surface information is not easily captured by other methods that do utilize structure information. Weaknesses: 1. The paper does not include results or analysis on the sensitivity of results to various hyperparameters of the model, such as the width, depth, and hidden dimension of the GVP modules, the number of surface points to include in a neighborhood, and the choice of protein language model for embeddings.
Since the paper focuses on the ProteinGym benchmark for its only evaluation, it would be nice to see more in-depth experimentation and/or ablation of different parts of the S3F model. 2. The central claim that surface information is important could also be strengthened with more experiments to understand how pre-processing structural information into surface features extracts useful information that is not as easily accessible with structure-based models that don’t do similar feature engineering/pre-processing. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Table 1, the bottom half of the table compares alignment-based models with S2F and S3F ensembled with EVE scores. It would be interesting to see results for each of the baselines in the top half of the table (e.g. MIF-ST, ProtSSN, SaProt) also ensembled with EVE scores. 2. What is your current intuition about why structure-aware fitness prediction models are unable to leverage surface information to its full extent? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors assess limitations of their work, including the fact that their method is limited to considering substitutions, and cannot handle insertions or deletions. I think this is a salient point to keep in mind, which limits the applicability of their method to real protein design tasks. See also weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. We provide detailed responses to your questions below. >**C1: Lack of analysis for various hyperparameters of the model. It would be nice to see more in-depth experimentation and/or ablation of different parts of the S3F model.** During model development, we performed our main hyperparameter search based on the validation-set accuracy of the S2F model (e.g., number of layers, number of hidden dimensions), and used these same optimal values for S3F given the substantial compute costs required to process the surface features for pre-training. We provide below detailed ablation results for these model hyperparameters, as well as modality ablations. While the former tend to have a relatively marginal impact on downstream performance, the latter clearly demonstrate the contributions of structural and surface information in S3F.

Table A. Ablation Study and Hyperparameter Analysis.

| Method | Spearman |
|:----:|:----:|
| S2F wider (512-dim) | 0.446 |
| S2F deeper (8-layer) | 0.457 |
| S3F w/o structure & surface | 0.414 |
| S3F w/o structure | 0.392 |
| S3F w/o surface (S2F) | 0.454 |
| **S3F** | **0.470** |

Note: we used 5 layers in our final S3F models, instead of the 8 layers that provide marginally better performance for S2F, due to a GPU memory bottleneck when leveraging surface features. >**C2: The central claim that surface information is important could also be strengthened with more experiments to understand how pre-processing structural information into surface features extracts useful information that is not as easily accessible with structure-based models that don't do similar feature engineering/pre-processing.** Surface message-passing is intended to capture fine-grained structural aspects that complement the coarse-grained features learned through structure message-passing. The additional ablations in Table A above clarify the respective benefits of including the different modalities.
>**C3: In Table 1, the bottom half of the table compares alignment-based models with S2F and S3F ensembled with EVE scores. It would be interesting to see results for each of the baselines in the top half of the table (e.g. MIF-ST, ProtSSN, SaProt) also ensembled with EVE scores.** We report the ensembling results with these 3 baselines in the table below. We observe that S3F outperforms existing baselines in both regimes: when no MSA is available, S3F outperforms the original MIF-ST, ProtSSN and SaProt; when MSAs are available and we are willing to train protein-specific alignment-based models (e.g. EVE) to increase fitness prediction performance, S3F-MSA achieves statistically significantly higher performance than MIF-ST-MSA, ProtSSN-MSA and SaProt-MSA.

Table B. Model Performance with EVE Ensembling.

| Method | Spearman | Std. Error of Diff. to Best Score |
|:----:|:----:|:----:|
| MIF-ST-MSA | 0.475 | 0.005 |
| ProtSSN-MSA | 0.480 | 0.002 |
| SaProt-MSA | 0.489 | 0.004 |
| **S3F-MSA** | **0.496** | **0.000** |

>**C4: What is your current intuition about why structure-aware fitness prediction models are unable to leverage surface information to its full extent?** Explicitly including relevant information is a way to inject inductive bias into a task. For example, while structural information can be implicitly encoded in protein language models, we still want to use structure to augment the pLM. Similarly, although surface information can be derived from protein structures, explicitly including it helps us better learn useful features for binding assays. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you to the authors for their additional experiments and careful response. I think the paper presents nice empirical results improving upon state-of-the-art ProteinGym performance, although the improvements are somewhat small (e.g. SaProt-MSA compared to S3F-MSA) and limited to that benchmark.
Based on the responses and other reviewer comments, I will keep my score. --- Reply to Comment 1.1.1: Title: Concluding remarks Comment: Dear reviewer, Thank you very much for reviewing our responses. As we understand it is the remaining point of concern, we would like to share a few concluding thoughts on the significance of our performance improvements. We consider the ProteinGym benchmark, with its 2.4 million mutation effect labels, to be the most critical benchmark for protein fitness prediction, akin to what ImageNet is for computer vision. While our experiments were focused on this benchmark, we believe it is the most relevant for the task we addressed. Regarding the improvement magnitude, we appreciate your feedback and would like to note the following: 1) SaProt-MSA was not an existing baseline but rather one we specifically computed for this rebuttal based on your suggestion. This result may offer valuable insights for the community, and we plan to include it in our revision. 2) Even a ~1-point increase in Spearman correlation across the 217 assays from ProteinGym represents a meaningful advancement, which we believe underscores the importance of leveraging additional data modalities, such as surface information, as we have proposed in this work. Please let us know if there are any other points we can clarify before the discussion period ends. Kind regards, The authors
Summary: This paper augments a protein language model with two additional modalities: structure and surface information. The authors show that they can effectively use this information to predict protein function slightly better (though at SOTA level on relevant benchmarks) than sequence-only pLMs. Strengths: Significance: Perhaps the strongest point of the paper is that it clearly shows structure and surface data can be leveraged to improve zero-shot performance over sequence-only methods. The improvement over models that only use sequence is relatively small, however. Clarity: The paper is well written and has a good flow and logical steps. Soundness: For the most part the right experiments are being done, the benchmark data are correct, and the relevant models are tested (see weaknesses for a notable exception). Weaknesses: Novelty/Originality: I can't tell how this paper is a real advance on (https://arxiv.org/pdf/2204.02337, ref 33 in this paper), which was published two years ago with a nearly identical title. Yes, the test set is different (evaluated on ProteinGym), and some features of the architecture are also different, but the authors' choice of not benchmarking against Holoprot on the same dataset is problematic, if for no other reason than that it is the clearest way of showing that the performance advance is due to architectural improvement and not simply to additional features (because the "idea" of combining sequence, surface and structure has already been executed). From the abstract: "Moreover, the function of certain proteins is highly dependent on the granular aspects of their surface topology, which have been overlooked by prior models." Reference 33 (https://proceedings.neurips.cc/paper/2021/file/d494020ff8ec181ef98ed97ac3f25453-Paper.pdf) in your own citations is exactly about this, connecting sequence, structure and surface. Can you explain?
- Relatedly I think the similarity (almost identical: "Multi-Scale Representation Learning on Proteins") in the paper title is a poor decision, if not for the appearance of plagiarism, then for the fact that it makes the difference between the papers even less clear. It should be titled to emphasize the difference. That paper is also doing fitness prediction. Technical Quality: 3 Clarity: 4 Questions for Authors: I think the authors really need to contrast this work with reference 33 and argue why it’s a meaningful conceptual or performance advance. I think the paper would be better if the authors would do at least two ablation studies where at least - structure is dropped - both structure and surface are dropped (dropping sequence is more complex but it may also be interesting). It is surprising to me that in the non-MSA case S2F basically does not improve on benchmarks, and it could be that the structure data is adding no performance on top of the surface. Comparing to the original sequence-only model in this architecture would also help clarify how much improvement is simply due to other factors than surface information. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Conceptually the paper is doing something that has been done before, with some small innovations. It is relevant however that it can indeed consistently improve on existing language models. ::::: Updating my previous score to 5 Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the very thoughtful comments and suggestions. We address each of your questions below, including several additional analyses which we believe significantly strengthen our submission. >**C1: The improvement is relatively small compared to models that only use sequence.** The best fitness prediction models using only sequence information are TranceptEVE [1] and GEMME [2] -- two methods leveraging multiple sequence alignments. Their aggregate Spearman performance on the 217 assays from the ProteinGym zero-shot substitution benchmark is 0.456 for both (Table 1). In contrast, the performance of our best model (S3F-MSA) is 0.496. This is a massive performance lift (+0.04 Spearman), especially since it is averaged across 217 diverse DMS assays. To put things in perspective: this performance lift is larger than the performance lift between a PSSM and a Potts model (EVmutation) (0.359 vs 0.395) -- a difference which can hardly be characterized as relatively small, given the critical importance of epistasis in protein fitness. Over the past 5+ years, all 50+ fitness prediction baselines introduced after DeepSequence, the first deep learning-based approach for fitness prediction in 2018 [3], collectively moved the aggregate Spearman from 0.419 to 0.457 (SaProt). This also represents a +0.04 Spearman performance lift -- our work extends over that best baseline by an extra +0.04 Spearman. Finally, we note that the current best baseline on ProteinGym that combines sequence and structure features is SaProt, with a 0.457 aggregate Spearman. The corresponding performance lift over the best sequence-only methods (+0.001 Spearman) *is* marginal, illustrating the complexity of effectively integrating new data modalities to boost zero-shot fitness prediction performance.
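For readers unfamiliar with the metric discussed above, the "aggregate Spearman" is the per-assay rank correlation between model scores and DMS labels, averaged over all assays. A minimal illustrative sketch (pure Python, not the benchmark's actual code; average-rank handling of ties is omitted and the data are hypothetical):

```python
import math

def spearman(pred, truth):
    # Spearman rho = Pearson correlation of the ranks (ties not handled here)
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    r1, r2 = ranks(pred), ranks(truth)
    m1, m2 = sum(r1) / len(r1), sum(r2) / len(r2)
    num = sum((a - m1) * (b - m2) for a, b in zip(r1, r2))
    den = math.sqrt(sum((a - m1) ** 2 for a in r1) * sum((b - m2) ** 2 for b in r2))
    return num / den

def aggregate_spearman(assays):
    # assays: list of (model_scores, dms_labels) pairs, one per assay
    return sum(spearman(p, y) for p, y in assays) / len(assays)
```

An aggregate score of 0.456 vs 0.496 therefore summarizes a shift of the whole distribution of per-assay correlations, not a single dataset.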
>**C2: Novelty of the paper compared with HoloProt.** We kindly refer the reviewer to our clarifications regarding the novelty and contributions from our work in our response to all reviewers above. In particular, we would like to reiterate that, while our work is not the first to leverage structure and surface features for protein *representation learning*, it is the first to focus on and support *zero-shot protein fitness prediction*. This is a significant difference as methods developed for general protein representation learning do not enable zero-shot fitness prediction or, if they do, typically underperform methods specifically developed for it [4]. For instance, in the work introducing HoloProt, Somnath et al. describe an approach for general protein representation learning using structure and surface features. The authors show that these representations can then be leveraged (via supervision) for several downstream tasks -- namely, ligand binding affinity prediction and Enzyme-Catalyzed Reaction Classification (which the authors refer to as “function prediction” in their abstract). Nothing in the work from Somnath et al. covers fitness prediction, let alone in the zero-shot context, and it is not clear how one could use the corresponding architecture to address the task that we focus on in our work. >**C3: Ablation study for the effect of structure and surface.** Thank you for the suggested ablation analyses. We provide the corresponding results in the table below, confirming the performance lift from the various modalities involved. Table A. Ablation Study. |Method|Spearman| |:----:|:----:| |S3F w/o structure & surface (ESM2)|0.414| |S3F w/o structure|0.392| |S3F w/o surface (S2F)|0.454| |**S3F**|**0.470**| Notes: 1) The ablation dropping both structure and surface features was already in our original submission, and corresponds to just using the underlying pLM (i.e., ESM2).
2) Surface message-passing is designed to capture fine-grained structural aspects that enhance the coarse-grained features learned by our S2F (sequence+structure) model. However, relying solely on these fine-grained features without the context from structural features, as we do in the ablation removing structural inputs, appears to be detrimental to performance. >**C4: It is surprising to me that in the non-MSA case S2F basically does not improve on benchmarks, and it could be that the structure data is adding no performance on top of the surface.** We believe this statement is inaccurate as the performance lift of S2F over ESM2 (the underlying pLM) is +0.04 aggregate Spearman (0.454 vs 0.414), which, as we argue in our response to your first comment above (C1), corresponds to a transformational performance lift, on par with factoring in epistasis vs not, or commensurate with the performance lift obtained from over 5 years of deep learning literature for protein fitness prediction. [1] Notin, et al. "TranceptEVE: Combining family-specific and family-agnostic models of protein sequences for improved fitness prediction." [2] Laine, et al. "GEMME: a simple and fast global epistatic model predicting mutational effects." Molecular Biology and Evolution. [3] Riesselman, Adam J., John B. Ingraham, and Debora S. Marks. "Deep generative models of genetic variation capture the effects of mutations." Nature Methods. [4] Notin, et al. "ProteinGym: Large-scale benchmarks for protein fitness prediction and design." NeurIPS. --- Rebuttal Comment 1.1: Title: Thank you Comment: I thank the authors for their responses. Having read their answers, together with the other reviewers, I'm comfortable increasing my score. I do still feel like the conceptual advance on the previous paper and performance advance over the best benchmarks (given the auxiliary data it needs) is not at the level of a clear accept.
--- Reply to Comment 1.1.1: Title: Concluding remarks Comment: Dear reviewer, Thank you very much for your final feedback and for raising your score! As we near the end of the discussion period, we wanted to share a few concluding remarks regarding the novelty aspect of our work, as it appears to be the remaining point of concern. Firstly, we would like to emphasize that achieving the performance we obtained on ProteinGym required significant craftsmanship to optimally leverage the structural and surface features within our proposed architecture. In our view, this represents one of the many forms of novelty that NeurIPS aims to highlight. Secondly, our work is the first to explicitly demonstrate the value of these modalities for fitness prediction performance, offering another novel insight that we believe will be highly valuable for practitioners. Lastly, our method introduces a model-agnostic approach to augment protein language models with structural and surface features. This innovative aspect ensures that our approach can be seamlessly applied to enhance future protein language models as they continue to evolve and improve. Please let us know if there are any other points we can clarify before the discussion period ends. Kind regards, The authors
Rebuttal 1: Rebuttal: Dear reviewers, We sincerely thank you for the time spent engaging with our paper and really appreciate the thoughtful comments. Based on your feedback, we have conducted additional experiments to further explore the strengths of our proposed approach, and have also clarified all points you had raised. We believe the submission is much stronger as a result. We summarize the key points of feedback and how we addressed them as follows: 1. **Novelty and contributions of this work (reviewers ChRY, rjUq, 6ssw)** - The main criticism across reviews is the perceived lack of novelty over the prior protein representation learning works that had already introduced approaches to leverage protein structure and surface information. We would like to emphasize that all these prior works had been strictly focused on protein representation learning, and that none of them supports zero-shot protein fitness prediction -- which is the core task that our work focuses on. Thus, while methods to efficiently process protein structure & surface information are not novel, the most adequate way to use these modalities to obtain state-of-the-art protein fitness prediction performance is novel. - Doing so is both non-trivial and of critical practical importance for the field of computational biology. It is important because fitness prediction is about understanding which proteins are functional and, as such, is one of the most critical challenges underlying successful protein design: it is easy to generate novel proteins -- it is much harder to design functional ones.
Additionally, zero-shot fitness prediction enables the quantification of mutation effects in settings where available experimental labels are scarce and/or difficult to collect (e.g., environmental sensitivity, PTMs, allosteric regulation). - Our work presents an effective approach to augment protein language models (e.g., ESM) with structure and surface features: the underlying pLMs have mediocre zero-shot fitness prediction performance on their own, but our suggested approach endows them with state-of-the-art performance. Lastly, since our method is pLM-agnostic, we expect continuous progress in protein language modeling to yield even higher fitness prediction performance when combined with our approach. 2. **Significance of performance improvement (reviewers ChRY, rjUq, 6ssw)** - We thoroughly evaluated our models against the 217 deep mutational scanning assays from the ProteinGym benchmarks. Our S3F and S3F-MSA models significantly outperform all 70+ baselines already present in ProteinGym, including recent top-performing models such as SaProt and ProtSSN (Table 1). As suggested by reviewers, we also compared against additional baselines, and significantly outperformed these as well (see next section). - The overall performance increase (+0.04 Spearman) compared to the best baseline (SaProt) is substantial and comparable to significant modeling improvements, such as accounting for epistatic effects versus not (i.e., the delta between a PSSM and a Potts model). For more details, please refer to our response to C1 from reviewer ChRY. - To quantify the statistical significance of the performance, we follow the same methodology as in ProteinGym and compute the non-parametric bootstrap standard error of the difference between the Spearman performance of a given model and that of the best overall model (10k bootstrap samples). Our performance deltas with prior methods are all statistically significant (see the attached pdf). 3.
**New experiments conducted after reviews (all reviewers)** Based on the feedback from all reviewers we conducted several new analyses as follows: - Additional ablation keeping surface features but removing structure (reviewer ChRY; see C3): confirms the necessity to leverage both structure and surface features - Additional hyperparameter results for our GVP (reviewer 6FTz; see C1): confirms optimality of chosen hyperparameters - Sensitivity analysis wrt surface features (reviewer 6ssw; see C3): confirms performance is stable under various random seeds - Additional baselines (reviewer 6ssw; see C1): confirms the superiority of our proposed architecture and the non-triviality in properly leveraging structure-based features for protein fitness predictions - Statistical significance for fitness performance (reviewer rjUq; see C3): confirms performance deltas are all statistically significant In addition to this overall response, we provide detailed responses to all comments raised by each reviewer. Please do reach out to us if you would like us to clarify any remaining points. Thank you, The authors Pdf: /pdf/a6f61535a7ca402715a3e5d10da36a2da9ccfa7d.pdf
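The non-parametric bootstrap methodology described in point 2 of the global rebuttal can be sketched as follows (pure Python; the function name and inputs are hypothetical, and the per-assay Spearman scores are assumed to be precomputed):

```python
import random
import statistics

def bootstrap_se_of_mean_diff(model_rho, best_rho, n_boot=10_000, seed=0):
    """Non-parametric bootstrap SE of the mean per-assay Spearman difference.

    model_rho / best_rho hold one Spearman score per assay (e.g. 217 values).
    Assays are resampled with replacement; the SE is the standard deviation
    of the bootstrap distribution of the mean difference.
    """
    diffs = [m - b for m, b in zip(model_rho, best_rho)]
    rng = random.Random(seed)
    n = len(diffs)
    boot_means = [
        sum(rng.choice(diffs) for _ in range(n)) / n
        for _ in range(n_boot)
    ]
    return statistics.stdev(boot_means)
```

A performance delta would then be called statistically significant when it is large relative to this standard error, which is the comparison scheme the rebuttal attributes to ProteinGym.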
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Improving Adversarial Robust Fairness via Anti-Bias Soft Label Distillation
Accept (poster)
Summary: This paper aims to improve robust fairness in the adversarial distillation setting. The proposed method adaptively assigns a smaller temperature for hard classes and a larger temperature for easy classes during the adversarial distillation training. The smaller temperature means stronger supervision intensity for hard classes, realizing dynamic re-weighting in the training. Experiments on CIFAR-100 and CIFAR-10 show some improvements in the worst-class performance at the cost of overall robustness degradation. Strengths: (1) The proposed algorithm is easy to implement. (2) Worst-class performance is enhanced at some cost of overall robustness. Weaknesses: (1) The proposed algorithm is to allow stronger supervision for hard classes with a smaller temperature during adversarial knowledge distillation. It seems to have similar effects to re-weighting, i.e., assigning a smaller class weight on hard classes for the KL loss during the knowledge distillation. What's the difference between the proposed method and such a baseline in nature? Why could the proposed method achieve better results? (2) Though the proposed method improves the worst class robustness, the overall robustness can't be maintained. (3) Regarding the proof of Theorem 1, how do we get the conclusion in Eq. (19) with Eq. (18)? From my point of view, the two terms should be the opposites of each other in Eq. (19). (4) In the proof, the assumption that "the easy class error risk is less than the hard class error risk, so the gradient expectation of the easy class is higher than the gradient expectation of the hard class" is used multiple times. However, why is it established? What's the relationship between the gradients and the error risk mathematically? Technical Quality: 2 Clarity: 2 Questions for Authors: (1) The proposed method has close relations with re-weighting. The authors should clarify the differences between the proposed method and the re-weighting.
Why could the proposed method achieve better performance than re-weighting? (2) There seem to be some problems in the proof of theorem 1. What's the relationship between the gradients and the error risk mathematically? (3) The proposed method can achieve better worst-class performance. However, the overall robustness degrades. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Limitations are discussed in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
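The mechanism under review here -- a smaller distillation temperature yielding stronger supervision for a class -- can be made concrete with a minimal sketch (pure Python; illustrative only, not the authors' implementation) of a KL distillation term with a per-class teacher temperature:

```python
import math

def softmax(logits, tau):
    # Temperature-scaled softmax; a smaller tau gives a sharper distribution
    z = [v / tau for v in logits]
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def distill_kl(student_logits, teacher_logits, tau_s, tau_t):
    """KL(teacher softened at tau_t || student softened at tau_s)."""
    p_t = softmax(teacher_logits, tau_t)
    p_s = softmax(student_logits, tau_s)
    return sum(t * (math.log(t) - math.log(s)) for t, s in zip(p_t, p_s))

# A smaller teacher temperature (as assigned to hard classes) sharpens the
# target distribution and increases the KL against an under-trained (here:
# uniform) student, i.e. stronger supervision intensity for that class:
student = [0.0, 0.0, 0.0]                                       # uniform student
teacher = [2.0, 0.0, 0.0]
loss_hard = distill_kl(student, teacher, tau_s=1.0, tau_t=0.5)  # small temperature
loss_easy = distill_kl(student, teacher, tau_s=1.0, tau_t=2.0)  # large temperature
```

With a uniform student, the KL equals log K minus the teacher's entropy, so sharpening the teacher (smaller tau_t) necessarily increases the loss -- which is the re-weighting-like effect the review asks about.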
Rebuttal 1: Rebuttal: Thank you for your constructive comments. We have taken great care to address all your concerns as follows: **Comment1 (weakness(1) or question(1)): What's the difference between the proposed method and the re-weighting method? Why could the proposed method achieve better results?** **Answer1:** The mentioned baseline (re-weighting) has been applied in Fair-ARD, while our ABSLD is a re-temperating method. We argue that the re-weighting and re-temperating methods follow different ideologies for seeking robust fairness. **The re-temperating method is more direct and accurate than the re-weighting method.** Specifically, in the optimization process, the essential optimization goal is to reduce the loss between the model's predictions and the labels. Re-temperating directly adjusts the labels, and its effect is therefore directly and accurately reflected in the final optimization results of the model. Re-weighting, in contrast, adjusts the loss proportion for different classes, which only indirectly affects the model's optimization goal. **In addition**, we think **re-weighting and re-temperating will not conflict with each other.** Here we try to combine the re-weighting and re-temperating strategies. As shown in Table 11 of the overall response PDF, we find that this combination can achieve better robust fairness compared with the re-temperating strategy alone, which demonstrates that **these two approaches mutually promote the improvement of robust fairness.** **Comment2 (weakness(2) or question(3)): Though the proposed method improves the worst class robustness, the overall robustness can't be maintained.** **Answer2:** Actually, due to the “bucket effect”, the security of a system often depends on the security of the weakest component. To shore up the model's weakest classes as much as possible, we focus on improving the model's worst-class robustness.
Although the overall robustness remains unchanged or slightly decreases, **our ABSLD obtains the highest robust fairness compared with other methods.** Meanwhile, ABSLD shows the best comprehensive performance of fairness and robustness (NSD) compared with other methods, which means **obtaining the highest robust fairness while sacrificing the least average robustness.** **Comment3 (weakness(3) or question(2)): Question towards Eq.(18) and Eq.(19)** **Answer3:** We sincerely appreciate your careful and professional check of Theorem 1. We apologize that some writing mistakes exist in Eq.(19). Actually, from Eq.(18), we can obtain the following conclusion combined with Eq.(14): $$ \mathbb{E}(\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial z_{c-}(x_{c-})}) =\mathbb{E}(\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial z_{c+}(x_{c+})})=\mathbb{E}(p_{c}^{I}(x_{c}) - p_{c-}^{\lambda1}(x_{c-}))=\mathbb{E}(p_{c}^{I}(x_{c}) - p_{c+}^{\lambda1}(x_{c+})), $$ $$ \mathbb{E}(\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial z_{c-}(x_{c+})}) =\mathbb{E}(\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial z_{c+}(x_{c-})})=\mathbb{E}(p_{c}^{I}(x_{c}) - p_{c-}^{\lambda1}(x_{c+}))=\mathbb{E}(p_{c}^{I}(x_{c}) - p_{c+}^{\lambda1}(x_{c-})). $$ The revised conclusion does not negatively impact the subsequent theoretical proofs. **Comment4 (weakness(4) or question(2)): Unclear description of the assumption.** **Answer4:** We apologize for the unclear description of this assumption. Here we further clarify it.
Actually, the entire assumption is that "if the model is a uniform distribution before the optimization process and the easy class error risk is less than the hard class error risk after the optimization process, then the gradient expectation of the partial derivative of the easy class's optimization objective with respect to the model parameters is higher than the gradient expectation of the partial derivative of the hard class's optimization objective with respect to the model parameters". **The mathematical explanation for this assumption is as follows:** In the initial state, we assume that the model is a uniform distribution, and in this case, the error optimization risk is the same for the easy and hard classes: $$ \mathbb{E}(KL(f(x_{c-};\theta_{I}),P_{\lambda1}))=\mathbb{E}(KL(f(x_{c+};\theta_{I}),P_{\lambda1})), $$ then we simplify the optimization of the model into a one-step gradient iteration process, which means we only consider the initial state before optimization and the final state after optimization. The model parameter is updated from the initial $\theta_{I}$ to the optimized $\theta_{opt}$.
Since easy classes perform better than hard classes after the optimization process, the easy class error risk is smaller than the hard class error risk: $$ \mathbb{E}(KL(f(x_{c-};\theta_{opt}),P_{\lambda1}))<\mathbb{E}(KL(f(x_{c+};\theta_{opt}),P_{\lambda1})), $$ the above result means the gradient expectation of the partial derivative of the easy class's optimization objective with respect to the model parameters ($\mathbb{E}(\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial \theta})$) is higher than that of the hard class ($\mathbb{E}(\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial \theta})$): $$ \mathbb{E}(\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial \theta})>\mathbb{E}(\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial \theta}), $$ so we can obtain the above assumption. --- Rebuttal Comment 1.1: Title: Thanks for the responses from the authors Comment: Thanks for the feedback from the authors. Some problems still confuse me. (1) The proposed algorithm is to allow stronger supervision for hard classes with a smaller temperature during adversarial knowledge distillation. It seems to have similar effects to re-weighting. The re-temperating method adjusts the strength of optimization by assigning different temperatures while the re-weighting can directly adjust the weights for each class. Could the authors provide any theoretical analysis to show the differences in nature between them? I don't see any advantages of re-temperating over re-weighting. (2) Fairness is important but the overall robustness is also important. (3) I still can't follow the logic that the larger the loss value the larger the gradients in Answer 4. Could you show the detailed derivatives between the loss value and the gradients? --- Reply to Comment 1.1.1: Comment: Thank you again for your valuable comments.
**Comment5: The proposed algorithm is to allow stronger supervision for hard classes with a smaller temperature during adversarial knowledge distillation. It seems to have similar effects to re-weighting. The re-temperating method adjusts the strength of optimization by assigning different temperatures while the re-weighting can directly adjust the weights for each class. Could the authors provide any theoretical analysis to show the differences in nature between them? I don't see any advantages of re-temperating over re-weighting.** **Answer5:** Here we re-examine and re-explain the advantages of our method, and hope this clarifies the distinction: Although both methods can play the role of adjusting the optimization strength, an obvious difference exists: **The re-temperating method is directly used to solve the fairness problem by avoiding overfitting of the easy class and underfitting of the hard class, while the re-weighting method can only alleviate overfitting or underfitting by adjusting the optimization strength, but cannot fundamentally avoid it.** Specifically, the fairness problem is essentially that the model overfits the easy class and underfits the hard class, leading to low error risk for the easy class and high error risk for the hard class. Here we formulate the loss function without fairness constraints as follows: $$ L(x^{adv},x;f _ s,f _ {t}) = \frac{1}{C _ {hard}} \sum_{j=1}^{C _ {hard}} \underbrace{KL(f_{s}(x_{j}^{adv}), f _ {t}({x} _ {j}))} _ {\text{underfitting to hard class}}+\frac{1}{C _ {easy}} \sum_{i=1}^{C _ {easy}} \underbrace{KL(f _ s(x _ i^{adv}), f _ {t}({x} _ i))} _ {\text{overfitting to easy class}}, $$ for example, as shown in Figures 6 and 7 in the paper, without additional constraints, the error risk gap between different classes increases as the optimization process continues, which means the error risk of the easy class decreases more and more compared to the error risk of the hard class.
The results demonstrate that overfitting to easy classes and underfitting to hard classes definitely happen and have a direct relationship with the fairness problem. The re-weighting method only controls the optimization strength by assigning different weights to different classes; generally speaking, the weights $w_j$ of the hard class are larger than the weights $w_i$ of the easy class. However, since the final optimization term $KL(f_s(x^{adv}), f_{t}({x}))$ has not changed, it merely alleviates, but does not eliminate, the phenomenon of overfitting easy classes and underfitting hard classes, and thus cannot fundamentally solve the fairness problem. We formulate the loss function with re-weighting constraints as follows: $$ L _ {re-weight}(x^{adv},x;f _ s,f _ {t}) = \frac{1}{C _ {hard}} \sum _ {j=1}^{C _ {hard}} w _ j * \underbrace{KL(f _ s(x _ j^{adv}), f _ {t}({x} _ j))} _ {\text{underfitting to hard class}}+\frac{1}{C_{easy}} \sum_{i=1}^{C _ {easy}} w _ i *\underbrace{KL(f _ s(x _ i^{adv}), f _ {t}({x} _ i))} _ {\text{overfitting to easy class}}, $$ different from the re-weighting method, the re-temperating method directly changes the label smoothness degree via a larger temperature $\tau_{i}^t$ for the easy class and a smaller temperature $\tau_{j}^t$ for the hard class to design a new optimization term $KL(f_s(x^{adv};\tau^s), f_{t}^{'}({x};\tau^t))$: we increase the smoothness degree for easy classes to alleviate the overfitting, and reduce the smoothness degree for hard classes to alleviate the underfitting. In this way, we can achieve relatively normal-fitting for both easy and hard classes and fundamentally solve the fairness problem.
We formulate the loss function with re-temperating constraints as follows: $$ L _ {re-temperate}(x^{adv},x;f _ s,f _ {t}^{'}) = \frac{1}{C _ {hard}} \sum _ {j=1}^{C _ {hard}} \underbrace{KL(f _ s(x _ j^{adv};\tau^s), f _ {t}^{'}({x} _ j;\tau _ {j}^t))} _ {\text{normal-fitting to hard class}}+\frac{1}{C _ {easy}} \sum_{i=1}^{C _ {easy}} \underbrace{KL(f _ s(x_i^{adv};\tau^s), f _ {t}^{'}({x} _ i;\tau_{i}^t))} _ {\text{normal-fitting to easy class}}. $$ --- Reply to Comment 1.1.2: Comment: **Comment6: Fairness is important but the overall robustness is also important.** **Answer6:** We also believe that both overall robustness and fairness are important, and we try to improve fairness while maintaining the overall robustness as much as possible. Although our method obtains the highest robust fairness while sacrificing the least overall robustness, and we achieve the best performance on Normalized Standard Deviation (NSD), e.g., ABSLD reduces the NSD by 0.028, 0.032, 0.017, and 0.024 compared with the best baseline method against FGSM, PGD, CW, and AA for ResNet-18 on CIFAR-10 (Table 1 in the paper), this trade-off phenomenon does exist, similar to other fairness research [1], [2], [3], [4]. We believe that how to resolve the trade-off between overall robustness and fairness is one of the directions that should be further explored in the future. 1. Xu, H., Liu, X., Li, Y., Jain, A., Tang, J.: To be robust or to be fair: Towards fairness in adversarial training. ICML (2021). 2. Ma, X., Wang, Z., Liu, W.: On the tradeoff between robustness and fairness. NeurIPS (2022). 3. Li, B., Liu, W.: WAT: Improve the worst-class robustness in adversarial training. AAAI (2023). 4. Zhang, Y., Zhang, T., Mu, R., Huang, X., & Ruan, W.: Towards Fairness-Aware Adversarial Learning. CVPR (2024). **Comment7: I still can't follow the logic that the larger the loss value the larger the gradients in Answer 4.
Could you show the detailed derivatives between the loss value and the gradients?** **Answer7:** **Here we further explain the detailed derivatives as follows:** In the initial state, we assume that the model is a uniform distribution, and in this case, the error optimization risk is the same for the easy and hard classes: $$ \mathbb{E}(KL(f(x_{c-};\theta_{I}),P_{\lambda1}))=\mathbb{E}(KL(f(x_{c+};\theta_{I}),P_{\lambda1})), $$ then we simplify the optimization of the model into a one-step gradient iteration process, which means we only consider the initial state before optimization and the final state after optimization. The model parameter is updated from the initial $\theta_{I}$ to the optimized $\theta_{opt}$. Since easy classes perform better than hard classes after the optimization process, the easy class error risk is smaller than the hard class error risk: $$ \mathbb{E}(KL(f(x_{c-};\theta_{opt}),P_{\lambda1}))<\mathbb{E}(KL(f(x_{c+};\theta_{opt}),P_{\lambda1})), $$ at this time, we assume that the loss is continuous and differentiable in the model parameter $\theta$, so we can approximate the gradient expectations of the partial derivatives $\mathbb{E}(\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial \theta})$ and $\mathbb{E}(\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial \theta})$ as follows: $$ \mathbb{E}(\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial \theta})\approx\mathbb{E}(\frac{KL(f(x_{c-};\theta_{opt}),P_{\lambda1})-KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\theta_{opt}-\theta_{I}}), $$ $$ \mathbb{E}(\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial \theta})\approx\mathbb{E}(\frac{KL(f(x_{c+};\theta_{opt}),P_{\lambda1})-KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\theta_{opt}-\theta_{I}}), $$ combined with the relationship between the easy class error risk and the hard class error risk, we can obtain the result that the gradient magnitude expectation of the partial derivative of the easy class's optimization objective with respect to the model
parameter ($\mathbb{E}(|\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial \theta}|)$) is higher than that of the hard class ($\mathbb{E}(|\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial \theta}|)$), as follows: $$ \mathbb{E}(|\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial \theta}|)>\mathbb{E}(|\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial \theta}|), $$ so we can obtain the above assumption. --- Rebuttal 2: Comment: Thank you again for your valuable comments. **Comment8: The most important thing is why re-temperating is better than re-weighting. The conclusion is a little bit weird to me that re-temperating can fundamentally solve the fairness problem while re-weighting just alleviates the problem. There should be a rigorous analysis.** **Answer8:** For the study of soft labels, researchers usually explore and explain their effectiveness from an experimental perspective [1], [2]. Following these previous studies, **to demonstrate that our method can suppress the overfitting of the easy class**, we directly compare the class-wise error risk gap between the train set and test set for the re-weighting method (Fair-RSLAD) and the re-temperating method (ABSLD) on the easy classes (classes 1, 2, 9, and 10). The smaller the error risk gap, the less serious the overfitting. The experimental settings of the Fair-RSLAD and ABSLD methods are exactly the same except for the re-weight and re-temperate operations. The error risk is computed with the Cross Entropy Loss. Since we cannot display the risk change curves as figures here, we select several typical checkpoints on CIFAR-10 of ResNet-18 in different training periods (the checkpoints at the 200-th, 250-th, and 300-th training epochs).
The results are shown in the following Table-A8-1: It can be seen that, compared with the re-weighting method, the re-temperating method has a smaller error risk gap between the train set and test set for all these classes, which shows that re-temperating brings better generalization and can effectively suppress overfitting of the easy class. Meanwhile, **to demonstrate that our method can suppress underfitting of the hard class**, we also list the error risk on the test set (Table-A8-2) for the hard classes (classes 3, 4, and 5); the smaller the class-wise error risk, the less serious the underfitting. The results show that the re-temperating method has a smaller error risk on the test set for hard classes, which shows that re-temperating can effectively suppress the underfitting of the hard class. **So the results directly and effectively illustrate the correctness of our explanation at the experimental level.** **Table-A8-1: the error risk gap between the train set and test set for easy classes.** |method|checkpoint|Class1|Class2|Class9|Class10| |:-----:|:----:|:------:|:----:|:----:|:----:| |reweight(Fair-RSLAD)|200-th|0.2812|0.2756|0.3317|0.3162| |**retemperate(ABSLD)**|200-th|**0.1061**|**0.1075**|**0.1414**|**0.1359**| |reweight(Fair-RSLAD)|250-th|0.4252|0.3971|0.4401|0.4481| |**retemperate(ABSLD)**|250-th|**0.1560**|**0.1587**|**0.1971**|**0.1914**| |reweight(Fair-RSLAD)|300-th|0.4559|0.4174|0.4326|0.4553| |**retemperate(ABSLD)**|300-th|**0.1691**|**0.1654**|**0.2012**|**0.1937**| **Table-A8-2: the error risk on the test set for hard classes.** |method|checkpoint|Class3|Class4|Class5| |:-----:|:-----:|:----:|:----:|:----:| |reweight(Fair-RSLAD)|200-th|1.571|1.861|1.503| |**retemperate(ABSLD)**|200-th|**1.511**|**1.734**|**1.394**| |reweight(Fair-RSLAD)|250-th|1.485|1.814|1.358| |**retemperate(ABSLD)**|250-th|**1.480**|**1.658**|**1.324**| |reweight(Fair-RSLAD)|300-th|1.498|1.780|1.455| |**retemperate(ABSLD)**|300-th|**1.483**|**1.613**|**1.360**| From a theoretical
analysis, soft label methods can reduce the overfitting of unnecessary information by introducing a smoothness degree to one-hot labels. **We believe that the re-temperating method fully utilizes the superiority of soft labels and reasonably decouples this superiority from smoothing the labels of the entire dataset to smoothing the labels of each class.** Under the premise of effective model capabilities, **the re-temperating method improves the quality of the information that the model needs to learn, leading to better model performance**: for the easy classes, the model can easily overfit the noise in the training data, so re-temperating introduces more uncertainty or noise at the label level by increasing the smoothness degree to avoid such overfitting; for the hard classes, due to the relatively complex characteristics of their samples, the model tends to converge slowly and underfit, so re-temperating introduces less uncertainty or noise at the label level by reducing the smoothness degree and gives the model a clearer supervision signal to promote its learning of key semantic information. **The re-weighting method is more like a process of redistributing knowledge information based on class performance, but the information quality does not change in this process**, and the model may still overfit some noise in the training data for the easy classes and underfit some key semantic information for the hard classes. 1. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. CVPR (2016). 2. Müller, R., Kornblith, S., Hinton, G. E.: When does label smoothing help? NeurIPS (2019). --- Rebuttal 3: Comment: Thank you again for your valuable comments. **Comment9: It seems that the authors try to bridge the loss value and the gradients with Taylor's Formula. Apply Taylor's Formula to the function $f(x)$, we get that $f(x) = f(0) + xf^{'}(x)$.
However, it is still not obvious to draw the conclusion that $\mathbb{E}(f^{'}(x))>\mathbb{E}(f^{'}(y))$ when $\mathbb{E}(f(x))>\mathbb{E}(f(y))$.** **Answer9:** Just as you say, we derive our results similarly to Taylor's formula, $f^{'}(x)=\frac{f(x)-f(0)}{x}$. We definitely cannot get the result $\mathbb{E}(f^{'}(x))>\mathbb{E}(f^{'}(y))$ when $\mathbb{E}(f(x))>\mathbb{E}(f(y))$. However, we can obtain $\mathbb{E}(|f(x)-f(0)|)>\mathbb{E}(|f(y)-f(0)|)$ if $\mathbb{E}(f(0))>\mathbb{E}(f(y))>\mathbb{E}(f(x))>0$, and then we can further obtain $\mathbb{E}(|f^{'}(x)|)>\mathbb{E}(|f^{'}(y)|)$. **Here we further explain the detailed derivation and correct the erroneous derivation as follows:** In the initial state, we assume that the model output is a uniform distribution, and in this case the error optimization risk is the same for the easy and hard classes: $$ \mathbb{E}(KL(f(x_{c-};\theta_{I}),P_{\lambda1}))=\mathbb{E}(KL(f(x_{c+};\theta_{I}),P_{\lambda1})), $$ then we simplify the optimization of the model into a one-step gradient iteration process, which means we only consider the initial state before optimization and the last state after optimization.
The model parameter is updated from the initial $\theta_{I}$ to the optimized $\theta_{opt}$; since the easy classes perform better than the hard classes after the optimization process, the easy class error risk is smaller than the hard class error risk: $$ 0<\mathbb{E}(KL(f(x_{c-};\theta_{opt}),P_{\lambda1}))<\mathbb{E}(KL(f(x_{c+};\theta_{opt}),P_{\lambda1}))<\mathbb{E}(KL(f(x_{c-};\theta_{I}),P_{\lambda1}))=\mathbb{E}(KL(f(x_{c+};\theta_{I}),P_{\lambda1})), $$ based on the above result, we have: $$ \mathbb{E}(|KL(f(x_{c-};\theta_{opt}),P_{\lambda1})-KL(f(x_{c-};\theta_{I}),P_{\lambda1})|) > \mathbb{E}(|KL(f(x_{c+};\theta_{opt}),P_{\lambda1})-KL(f(x_{c+};\theta_{I}),P_{\lambda1})|). $$ At this point, we assume that the loss is differentiable and continuous in the model parameter $\theta$, so we can approximate the expectations of the partial derivatives $\mathbb{E}(\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial \theta})$ and $\mathbb{E}(\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial \theta})$ as follows: $$ \mathbb{E}(\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial \theta})\approx\mathbb{E}(\frac{KL(f(x_{c-};\theta_{opt}),P_{\lambda1})-KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\theta_{opt}-\theta_{I}}), $$ $$ \mathbb{E}(\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial \theta})\approx\mathbb{E}(\frac{KL(f(x_{c+};\theta_{opt}),P_{\lambda1})-KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\theta_{opt}-\theta_{I}}), $$ combined with the relationship between the easy class error risk and the hard class error risk, we obtain that the **expected gradient absolute value** of the partial derivative of the easy class's optimization goal with respect to the model parameter ($\mathbb{E}(|\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial \theta}|)$) is higher than the **expected gradient absolute value** of the partial derivative of the hard class's optimization goal with respect to the model parameter
($\mathbb{E}(|\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial \theta}|)$) as follows: $$ \mathbb{E}(|\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial \theta}|)>\mathbb{E}(|\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial \theta}|), $$ so we can obtain the above assumption. Actually, the **expected gradient absolute value** of the partial derivative of a class's optimization goal with respect to the model parameter denotes the difficulty of the model learning that class: the easier the class, the larger the expected gradient absolute value. --- Rebuttal Comment 3.1: Comment: Thanks for your responses. For Q8, although your experiments show the advantages of re-temperating over re-weighting, why is that? Where do the benefits come from? For Q9, it seems that whether the conclusion is established also depends on the sign of $\theta_{opt} - \theta_{I}$. Based on the authors' responses, I will keep my initial rating. --- Reply to Comment 3.1.1: Comment: Thank you again for your valuable comments. **Comment10: For Q8, although your experiments show the advantages of re-temperating over re-weighting. But why is that? Where do the benefits come from?** **Answer10:** Actually, we have explained this in Q8; the explanation is as follows: ''From a theoretical analysis, soft label methods can reduce the overfitting of unnecessary information by introducing a smoothness degree to one-hot labels.
**We believe that the re-temperating method fully utilizes the superiority of soft labels and reasonably decouples this superiority from smoothing the labels of the entire dataset to smoothing the labels of each class.** Under the premise of effective model capabilities, **the re-temperating method improves the quality of the information that the model needs to learn, leading to better model performance**: for the easy classes, the model can easily overfit the noise in the training data, so re-temperating introduces more uncertainty or noise at the label level by increasing the smoothness degree to avoid such overfitting; for the hard classes, due to the relatively complex characteristics of their samples, the model tends to converge slowly and underfit, so re-temperating introduces less uncertainty or noise at the label level by reducing the smoothness degree and gives the model a clearer supervision signal to promote its learning of key semantic information. **The re-weighting method is more like a process of redistributing knowledge information based on class performance, but the information quality does not change in this process**, and the model may still overfit some noise in the training data for the easy classes and underfit some key semantic information for the hard classes.'' **Comment11: It seems that whether the conclusion is established also depends on the sign of $\theta_{opt}-\theta_{I}$.** **Answer11:** Since we take the **absolute value** of this term, the sign of $\theta_{opt}-\theta_{I}$ has no impact on the final result: $$ \mathbb{E}(|\frac{\partial KL(f(x_{c-};\theta_{I}),P_{\lambda1})}{\partial \theta}|)>\mathbb{E}(|\frac{\partial KL(f(x_{c+};\theta_{I}),P_{\lambda1})}{\partial \theta}|). $$ We apply this absolute-value conclusion in the later derivation, so it does not affect any subsequent conclusions.
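The re-temperating intuition discussed in this thread (temperature controls the smoothness of teacher soft labels) can be sketched numerically. The logits and temperature values below are hypothetical and chosen only for illustration; they are not values from the paper:

```python
import math

def soften(logits, tau):
    """Temperature-scaled softmax: larger tau -> smoother (higher-entropy) soft label."""
    exps = [math.exp(l / tau) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(p):
    """Shannon entropy of a distribution, used here as a smoothness measure."""
    return -sum(q * math.log(q) for q in p if q > 0)

logits = [4.0, 1.0, 0.5]          # hypothetical teacher logits for one sample
smooth = soften(logits, tau=4.0)  # easy class: raise the temperature
sharp = soften(logits, tau=0.5)   # hard class: lower the temperature
assert entropy(smooth) > entropy(sharp)
```

Raising the temperature pushes the distribution toward uniform (more label-level uncertainty for easy classes), while lowering it concentrates mass on the top class (a clearer supervision signal for hard classes).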
Summary: The paper introduces Anti-Bias Soft Label Distillation (ABSLD), a method aimed at improving robust fairness in deep neural networks. The paper identifies the smoothness degree of soft labels as a critical factor influencing the robustness imbalance between classes. ABSLD mitigates the robust fairness problem by adjusting the class-wise smoothness degree of soft labels during the knowledge distillation process. By assigning sharper soft labels to harder classes and smoother ones to easier classes, ABSLD reduces the error risk gap between classes. Extensive experiments on datasets like CIFAR-10 and CIFAR-100 demonstrate that ABSLD outperforms state-of-the-art methods in achieving both robustness and fairness. Strengths: 1. The proposed ABSLD method is innovative and provides a fresh perspective by focusing on the smoothness degree of soft labels, differing from the existing re-weighting approaches. 2. The paper provides a theoretical analysis to support the proposed method, strengthening the validity of the claims. 3. Extensive experiments on different datasets and models demonstrate the effectiveness of ABSLD. Weaknesses: 1. The paper provides many implementation details, but some aspects, such as the choice of hyperparameters (e.g., learning rate, temperature adjustments), could be discussed more thoroughly. Clarifying these details would help replicate the results and understand the method's practical implications. 2. The paper lacks comparisons with more related works and SOTA methods, e.g., [1][2]. 3. What is the advantage of using Knowledge Distillation (KD)? The baselines used in the paper do not seem strong enough, and current state-of-the-art approaches [1][2] appear to offer better performance. Additionally, employing a teacher model increases the practical costs of time and memory. 4. How about re-weighting + re-temperating? Or using re-temperating directly via label smoothing without a teacher model? 5. I am concerned that the comparison between ABSLD and Fair-ARD is due to the hyper-parameters used.
[1] WAT: improve the worst-class robustness in adversarial training, AAAI 2023. [2] Towards Fairness-Aware Adversarial Learning, CVPR 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: See the weaknesses above. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments. We have taken great care to address all your concerns as follows: **Comment1: The choice of hyperparameters (e.g., learning rate, temperature adjustments)** **Answer1:** Following your suggestion, we further discuss the hyper-parameter selection of the initial temperature learning rate $\beta$ and the initial teacher's temperature $\tau_{k}^{t}$ on CIFAR-10 of ResNet-18. As Tables 9 and 10 in the overall response PDF show, the values of $\beta$ and $\tau_{k}^{t}$ only slightly influence the final results within a proper range, and the selected hyper-parameters (initial $\beta$ of 0.1 and initial $\tau_{k}^{t}$ of 1) are reasonable. **Comment2: Lack of more related works and SOTA methods, e.g., [1][2].** **Answer2:** Following your suggestion, we compare our ABSLD with those two methods (WAT [1] and FAAL [2]). WAT [1] and FAAL [2] are re-weighting methods, which differ from our ABSLD (a re-temperating method). Since FAAL does not provide reproducible open-source code, we directly quote the experimental results on CIFAR-10 of PRN-18 from FAAL's original paper. Following FAAL, we add the EMA operation to maintain consistency in the experimental setup. As the following table shows, our method outperforms FAAL by 1.02% and 0.2% in average robustness and worst-class robustness under AA attack, which demonstrates the effectiveness.

| Method | AA Avg. (%) | AA Worst (%) |
| :----: | :----: | :----: |
| WAT | 46.16 | 30.70 |
| FAAL | 49.10 | 33.70 |
| **ABSLD(ours)** | **50.12** | **33.90** |

**Comment3: What is the advantage of using Knowledge Distillation (KD)? The baselines used in the paper do not seem strong enough, and current state-of-the-art approaches [1][2] appear to offer better performance. Additionally, employing a teacher model increases the practical costs of time and memory.** **Answer3:** **Firstly**, we think that applying KD has the following advantages: 1.
**Adversarial Robustness Distillation (ARD) is currently a state-of-the-art type of adversarial training method.** ARD can effectively bring strong adversarial robustness to trained models within the framework of KD. Therefore, based on this impressive performance, we can maximally pursue approaches that bring both strong overall robustness and fairness, just as in previous work [3]. 2. **Knowledge Distillation itself can bring competitive robust fairness.** We can notice that KD-based methods themselves (e.g., RSLAD and AdaAD) have competitive robust fairness compared with the other baseline methods (e.g., TRADES, FRL, and CFA). We believe that the KD-based method is more amenable to improving the robust fairness of the model, so we select Knowledge Distillation as the baseline. **Secondly**, the results in **Answer2** demonstrate the effectiveness of our ABSLD compared with WAT [1] and FAAL [2]. **Thirdly**, for ABSLD, the teacher model is only applied to generate the soft labels without updating its parameters; thus, the additional cost is limited. Meanwhile, we measure the time cost and memory on CIFAR-10 of ResNet-18. As the following table shows, we think the costs of time and memory are acceptable given the performance improvements over mainstream methods.

| Method | Time (Avg. Epoch) | GPU Memory |
| :----: | :----: | :----: |
| SAT | 175s | 2764MiB |
| WAT | 284s | 3624MiB |
| ABSLD(ours) | 224s | 3832MiB |

**Comment4: How about re-weighting + re-temperating? Or using re-temperating directly via label smoothing without a teacher model?** **Answer4:** Thank you for your valuable comment. Following your suggestion, we try to combine the re-weighting and re-temperating strategies and evaluate the adversarial robust fairness.
As the results of Table 11 in the overall response PDF show, we find that re-weighting + re-temperating can achieve better robust fairness than the re-temperating strategy alone, which demonstrates that **these two approaches mutually promote the improvement of fairness without conflicting with each other.** Meanwhile, we also explore the performance of applying re-temperating directly via label smoothing without a teacher model, and add the performance of label smoothing as a baseline for comparison. The results of Table 12 in the overall response PDF show that although re-temperating directly via label smoothing is better than the baseline, a certain gap still exists compared to the ARD-based method. **Comment5: I am concerned that the comparison between ABSLD and Fair-ARD is due to the hyper-parameters used.** **Answer5:** For Fair-ARD, we completely retain the hyper-parameter settings and select the best-performing versions reported in Fair-ARD's original paper (Fair-ARD for CIFAR-10 and Fair-RSLAD for CIFAR-100). To further clarify the effectiveness, we compare our ABSLD with Fair-ARD's different versions on CIFAR-10 of ResNet-18, including Fair-ARD, Fair-IAD, Fair-MTARD, and Fair-RSLAD, and the results of Table 13 in the overall response PDF show that our ABSLD has better performance than all of these Fair-ARD versions. In particular, for the comparison between Fair-RSLAD and ABSLD, except for the method-specific hyper-parameters (e.g., Fair-RSLAD's for re-weighting or ABSLD's for re-temperating), the baseline (RSLAD) and its corresponding hyper-parameters are completely the same. **Thus, the effectiveness of ABSLD compared with Fair-ARD does not come from intentional hyper-parameter selection.** 1. Wat: improve the worst-class robustness in adversarial training. AAAI (2023). 2. Towards Fairness-Aware Adversarial Learning. CVPR (2024). 3. Revisiting adversarial robustness distillation from the perspective of robust fairness. NeurIPS (2023).
Summary: This paper explores the issue of robust fairness in deep neural networks (DNNs), particularly focusing on the disparity in robustness between different classes in adversarial training (AT) and adversarial robustness distillation (ARD) methods. The authors propose a novel method called Anti-Bias Soft Label Distillation (ABSLD) that aims to mitigate this problem by adjusting the smoothness degree of soft labels for different classes during the knowledge distillation process. ABSLD adaptively reduces the error risk gap between classes by re-tempering the teacher's soft labels with different temperatures, which are determined based on the student's error risk. Extensive experiments demonstrate that ABSLD outperforms existing AT, ARD, and robust fairness methods in terms of a comprehensive metric that combines robustness and fairness, known as the Normalized Standard Deviation. The paper contributes to the literature by providing both empirical observations and theoretical analysis on the impact of soft label smoothness on robust fairness, and by advancing a new technique within the knowledge distillation framework to achieve better adversarial robust fairness. Strengths: 1. Well-designed experiments: used (Tiny-)ImageNet, set up AA as the attack, and leveraged NSD as the metric. Weaknesses: 1. In my humble opinion, I do not regard adversarial "fairness" as a critical problem. It is different from other fairness problems (e.g., demographic features) which bring social impacts. 2. For technical contribution, soft labels, knowledge distillation, as well as re-temperature are well-known techniques. This paper applies these techniques to the specific problem, but does not provide specific designs with respect to the problem. 3. According to the experimental results, this method also suffers the trade-off between clean accuracy and robustness, which indicates it does not solve the critical issue in adversarial training.
Technical Quality: 2 Clarity: 3 Questions for Authors: Please help to check weaknesses. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Also in weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments. We have taken great care to address all your concerns as follows: **Comment1: In my humble opinion, I do not regard adversarial "fairness" as a critical problem. It is different from other fairness problems (e.g., demographic features) which bring social impacts.** **Answer1:** Actually, we think **the adversarial fairness problem is an issue worthy of further study in the field of adversarial robustness.** Previous research has always focused on improving overall robustness; however, adversarially-trained models may exhibit high robustness on some classes while significantly low robustness on other classes. Due to the "bucket effect", the security of a system often depends on the security of its weakest component. Specifically, an overall robust model appears relatively safe to model users; **however, a robust model with poor robust fairness will lead attackers to target the model's vulnerable classes, which poses significant security risks to potential applications.** For example, an autonomous driving system that is highly robust for inanimate objects on the road but lacks robustness when detecting pedestrians may be misled by adversarial examples, leading to traffic accidents [6]. Due to this importance, many studies have been published to address this critical issue, e.g., FRL [1] (ICML 2021), FAT [2] (NeurIPS 2022), BAT [3] (AAAI 2023), WAT [4] (AAAI 2023), CFA [5] (CVPR 2023), Fair-ARD [6] (NeurIPS 2023), and FAAL [7] (CVPR 2024). **We believe that related research is necessary and will make a positive contribution to the secure application of AI.** **Comment2: For technical contribution, soft labels, knowledge distillation, as well as re-temperature are well-known techniques.
This paper applies these techniques to the specific problem, but does not provide specific designs with respect to the problem.** **Answer2:** Here we mainly want to clarify the technical contribution of the proposed method. Previous works always apply the re-weighting ideology to achieve robust fairness across different types of classes; however, as another important factor in the optimization objective, the role of the labels has been ignored by previous researchers. Inspired by this, we explore robust fairness from the perspective of samples' soft labels. To the best of our knowledge, **we are the first to explore the labels' effects on the adversarial robust fairness of DNNs**, which is different from the existing sample-based perspective. We find that the smoothness degree of samples' soft labels for different types of classes can affect robust fairness, from both empirical observation and theoretical analysis. To further enhance adversarial robust fairness, we propose a specific and novel method named ABSLD. Specifically, we re-temperate the teacher's soft labels to adjust the class-wise smoothness degree and further reduce the student's error risk gap between different classes. The extensive experiments show that our ABSLD outperforms other state-of-the-art methods on the robust fairness problem. **We sincerely believe that our proposed method is innovative and effective.** **Comment3: According to the experimental results, this method also suffers the trade-off between clean accuracy and robustness, which indicates it does not solve the critical issue in adversarial training.** **Answer3:** Actually, **ABSLD is proposed to solve the adversarial robust fairness problem, which is distinct from the accuracy-robustness trade-off problem.** We believe that even if the overall robustness of a model is high, poor robustness on a specific class of data can still pose security issues.
In order to remedy the model's shortcomings as much as possible, we focus on improving the model's robust fairness. 1. Xu, H., Liu, X., Li, Y., Jain, A., Tang, J.: To be robust or to be fair: Towards fairness in adversarial training. ICML (2021). 2. Ma, X., Wang, Z., Liu, W.: On the tradeoff between robustness and fairness. NeurIPS (2022). 3. Sun, C., Xu, C., Yao, C., Liang, S., Wu, Y., Liang, D., Liu, X., Liu, A.: Improving robust fairness via balance adversarial training. AAAI (2023). 4. Li, B., Liu, W.: Wat: improve the worst-class robustness in adversarial training. AAAI (2023). 5. Wei, Z., Wang, Y., Guo, Y., Wang, Y.: Cfa: Class-wise calibrated fair adversarial training. CVPR (2023). 6. Yue, X., Mou, N., Wang, Q., Zhao, L.: Revisiting adversarial robustness distillation from the perspective of robust fairness. NeurIPS (2023). 7. Zhang, Y., Zhang, T., Mu, R., Huang, X., Ruan, W.: Towards Fairness-Aware Adversarial Learning. CVPR (2024).
Summary: This paper discusses the robust fairness problem, which is essential to solve for reducing concerns surrounding class-based security. The paper mainly analyzes the inheritance of robust fairness during adversarial robustness distillation (ARD). It finds that student models only partially inherit robust fairness from teacher models. To address this, the authors examine how the degree of smoothness of samples' soft labels influences class-wise fairness in ARD. They empirically and theoretically show that appropriately assigning class-wise smoothness degrees of soft labels during ARD can be beneficial for achieving robust fairness. Therefore, as a solution to the fairness problem associated with ARD, they propose Anti-Bias Soft Label Distillation (ABSLD), a knowledge distillation framework designed to reduce error risk gaps between student classes by adjusting the class-specific smoothness degree of the teacher's soft labels during training, controlled by assigned temperatures. The authors provide experiments to demonstrate their method's effectiveness. Strengths: - The empirical and theoretical analysis of the impact of the smoothness degree of soft labels on class-wise fairness is interesting and well documented. - Within the knowledge distillation framework, the use of temperature to control the smoothness of the teacher's soft labels during training has proven to reduce the student's class-wise risk gap, thus improving class-wise fairness. Weaknesses: - The evaluations on common corruptions are missing. (Does this method transfer robustness and fairness to common corruptions as well?) - The evaluations against related fairness works, WAT [1] and FAT [2], are missing. - Most of the fairness-focused baselines in the paper (CFA, BAT, FRL) are trained using PRN-18. It may be better to evaluate the current approach using PRN-18 as well (optional).
- The authors should explore whether their method is also fair and robust for ViTs. Architectures suitable for training with ARD and AT approaches are available in the robustness benchmark [3]. - A few ARD-related evaluations, such as IAD and MTARD in combination with the Fair-ARD approach as mentioned in Fair-ARD [4], need to be evaluated. - The written comparison of ABSLD with Fair-ARD can be improved. How different are the approaches from each other, and what benefits ABSLD the most over Fair-ARD? [1] Li, B., Liu, W.: Wat: improve the worst-class robustness in adversarial training. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, pp. 14982–14990 (2023). [2] Zhang, J., Xu, X., Han, B., Niu, G., Cui, L., Sugiyama, M., Kankanhalli, M.: Attacks which do not kill training make adversarial learning stronger. In: International Conference on Machine Learning, pp. 11278–11287. PMLR (2020). [3] Croce, F., Andriushchenko, M., Sehwag, V., Debenedetti, E., Flammarion, N., Chiang, M., Mittal, P., Hein, M.: RobustBench: a standardized adversarial robustness benchmark. [4] Yue, X., Mou, N., Wang, Q., Zhao, L.: Revisiting adversarial robustness distillation from the perspective of robust fairness. NeurIPS (2023). Technical Quality: 3 Clarity: 3 Questions for Authors: How does the smoothness degree of different classes (easy & hard) evolve during training? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Broader Impact: Potential positive and negative societal impacts of the work are missing, but the checklist marks them as included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments. We have taken great care to address all your concerns as follows: **Comment1: The evaluations on common corruptions are missing.** **Answer1:** We have chosen two common corruptions: Gaussian noise and colour channel transformation. As Table 8 in the overall response PDF shows, ABSLD improves worst-class robustness by 0.9% and 1.8%, and reduces NSD by 0.012 and 0.01 on CIFAR-10 of ResNet-18 under these two corruptions. The results show that ABSLD can transfer robust fairness to common corruptions. **Comment2: The evaluations on WAT [1] and FAT [2] are missing.** **Answer2:** Here we add the comparison with WAT [1] and FAT [2]. As the following table shows, ABSLD improves worst-class robustness by 0.8% and reduces NSD by 0.03 compared with the second-best method on CIFAR-10 of ResNet-18 under AA attack.

| Method | AA Avg.(%) | AA Worst(%) | AA NSD |
| :----: | :----: | :----: | :----: |
| WAT | 46.19 | 30.20 | 0.286 |
| FAT | 41.83 | 16.80 | 0.409 |
| **ABSLD(ours)** | **50.25** | **31.00** | **0.256** |

**Comment3: The application of PRN-18.** **Answer3:** We select ResNet-18 as the trained model following Fair-ARD [3], which is also the common setting in Adversarial Robustness Distillation (e.g., ARD, IAD, and RSLAD). To further verify the effectiveness, we also train PRN-18 on CIFAR-10 via ABSLD, and the results in the following table demonstrate the effectiveness of ABSLD on PRN-18.

| Method | AA Avg.(%) | AA Worst(%) | AA NSD |
| :----: | :----: | :----: | :----: |
| FRL | 45.90 | 25.40 | - |
| CFA | 50.03 | 26.50 | 0.301 |
| **ABSLD(ours)** | **50.12** | **33.90** | **0.263** |

**Comment4: Fairness and robustness in ViTs.** **Answer4:** Here we train ViT-B following [4]. The results show that ABSLD achieves better performance than the baseline method (RSLAD), so ABSLD is effective when applied to ViTs and is not limited to CNNs.
| Method | AA Avg.(%) | AA Worst(%) | AA NSD |
| :----: | :----: | :----: | :----: |
| RSLAD | **49.07** | 17.50 | 0.379 |
| **ABSLD(ours)** | 47.73 | **20.70** | **0.327** |

**Comment5: IAD and MTARD in combination with Fair-ARD need to be evaluated.** **Answer5:** Following your suggestion, we evaluate Fair-ARD, Fair-IAD, Fair-MTARD, and Fair-RSLAD (different versions of Fair-ARD) for comparison with ABSLD on CIFAR-10 of ResNet-18. As Table 13 in the overall response PDF shows, ABSLD improves worst-class robustness by 5.6% and reduces NSD by 0.049 under AutoAttack, which further demonstrates the effectiveness. **Comment6: The written comparison of ABSLD with Fair-ARD can be improved.** **Answer6:** **There are two major distinctions between Fair-ARD and our ABSLD:** 1. From the optimization perspective, ABSLD designs a new loss function by adjusting **different smoothness degrees of soft labels** for different classes, while Fair-ARD modifies the existing loss function by adjusting **different weights** for different classes. 2. In terms of method design, ABSLD applies **the optimization error risk as a metric** to adaptively re-temperate the label smoothness degree for different classes, while Fair-ARD applies **the least number of PGD steps for generating adversarial examples as a metric** to adaptively re-weight different classes. **We think the following advantage benefits ABSLD the most over Fair-ARD:** we argue that the re-weighting method (e.g., Fair-ARD) and the re-temperating method (our ABSLD) follow different implementation paths to seek robust fairness, and **the re-temperating method is more direct and accurate than the re-weighting method.** **Specifically**, in the optimization process, the essential optimization goal is to reduce the loss between the model's predictions and the labels. Re-temperating directly adjusts the labels, and its effect is directly and accurately reflected in the final optimization results of the model.
In contrast, re-weighting adjusts the loss proportion for different classes, which only indirectly affects the model's optimization goal. **In addition**, due to the different implementation paths, **the re-weighting and re-temperating methods do not conflict with each other.** Here we try to combine the re-weighting and re-temperating strategies. As the results of Table 11 in the overall response PDF show, we find that this combination can achieve better robust fairness than the re-temperating strategy alone, which demonstrates that **these two approaches mutually promote the improvement of robust fairness.** **Comment7 (Question1): The smoothness degree of different classes.** **Answer7:** During the training process, **the soft labels of the easy classes become smoother, and those of the hard classes become sharper.** In Figure 7 of the overall response PDF, we visualize the trend of the smoothness degree for an easy class (2nd) and a hard class (4th) on CIFAR-10, which further confirms our explanation. Here we measure the smoothness degree through information entropy, where larger entropy denotes a smoother label distribution and smaller entropy a sharper one. 1. Wat: improve the worst-class robustness in adversarial training. AAAI (2023). 2. Attacks which do not kill training make adversarial learning stronger. ICML (2020). 3. Revisiting adversarial robustness distillation from the perspective of robust fairness. NeurIPS (2023). 4. When adversarial training meets vision transformers: Recipes from training to architecture. NeurIPS (2022).
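The class-wise adjustment described in Answer7 (easy classes get smoother labels, hard classes get sharper ones) can be sketched as follows. The update rule, the step size `beta`, and the risk values are illustrative assumptions chosen for exposition, not the paper's exact ABSLD algorithm:

```python
# Hedged sketch: class-wise temperature update driven by per-class error risk.
def update_temperatures(taus, class_risks, beta=0.1):
    """Raise tau (smoother labels) for easy classes whose risk is below average,
    lower tau (sharper labels) for hard classes whose risk is above average."""
    avg = sum(class_risks) / len(class_risks)
    return [tau + beta * (avg - risk) for tau, risk in zip(taus, class_risks)]

taus = [1.0, 1.0, 1.0, 1.0]
risks = [0.2, 0.8, 0.3, 0.7]  # hypothetical per-class error risks
new_taus = update_temperatures(taus, risks)
assert new_taus[0] > 1.0 > new_taus[1]  # easy class smoother, hard class sharper
```

Driving the temperatures by the deviation of each class's risk from the average is one simple way to shrink the class-wise error risk gap over training.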
Rebuttal 1: Rebuttal: This response contains mainly an overall response PDF with details as follows: Figure 7: Information entropy change curve of teacher soft labels for hard classes and easy classes. Table 8: Results on two common corruptions, for Gaussian Noise(GN) and Colour Channel Transformations(CCT). Table 9: Discussion about different Initial $\tau_{k}^{t}$. Table 10: Discussion about different Initial $\beta$. Table 11: Comparison between reweighting, retemperating, and their combination. Table 12: Comparison between Labelsmoothing, re-temperating via Labelsmoothing, and re-temperating via KD (ABSLD). Table 13: Comparison between ABSLD and different Fair-ARD's versions. Pdf: /pdf/f7a68d9e92406e808fb1aa84a2ae185f447ddb27.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
SkipPredict: When to Invest in Predictions for Scheduling
Accept (poster)
Summary: The paper studies the effectiveness of machine-learned predictions for queueing systems, and falls into the area of learning-augmented algorithms / algorithms with predictions. In particular, it studies the M/G/1 queue with Poisson arrivals and i.i.d. service times, with the objective of minimizing the average response time. This problem has previously been studied with service time predictions or with single-bit predictions indicating whether a job is 'short' or 'long' depending on a certain threshold. Moreover, it has previously been assumed that predictions are free. In contrast to these previous works, in this paper an algorithm needs to 'pay' for receiving predictions. This payment can either be some additional cost (external cost), or an additional load on the machine (server time cost) which delays the execution of other jobs. The latter is motivated by the scenario that the prediction itself first needs to be computed. Moreover, in this paper an algorithm has access to both (cheaper) 1bit predictions and (more expensive) service time predictions. The authors propose the algorithm 'SkipPredict', which first queries a 1bit prediction for every job that arrives and, in case this returns 'long', also queries the service time prediction. 'Short' jobs are generally preferred and scheduled First-Come-First-Serve (FCFS). Furthermore, they study a second algorithm 'DelayPredict', which only uses service time predictions. In contrast to SkipPredict, it starts scheduling every job until it can be categorized as 'long', and only then queries a service time prediction. The authors compare both algorithms to the baselines FCFS, 1bit (only cheap predictions) and SPRPT (only expensive predictions). They find that, in theory and in empirical experiments, the benefit of SkipPredict and DelayPredict depends on how the threshold, the system load, and the costs of the two prediction types relate.
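The two-stage rule summarized above can be sketched as a priority function. This is a hypothetical illustration based on the review's description, not the paper's code; the names, the dictionary representation of jobs, and the oracle stand-ins for the predictors are my own:

```python
def skippredict_priority(job, cheap_predict, expensive_predict):
    """Priority key for a job under a SkipPredict-style policy (lower serves first).

    'Short' jobs (per the cheap 1-bit prediction) get class 0 and are ordered
    FCFS by arrival time; 'long' jobs get class 1 and are ordered by their
    predicted size (SPRPT-style), paying for the expensive prediction.
    """
    if cheap_predict(job) == "short":      # cheap 1-bit prediction for every job
        return (0, job["arrival"])         # FCFS among short jobs
    size = expensive_predict(job)          # expensive size prediction, long jobs only
    return (1, size)                       # shortest predicted size first

# Hypothetical usage: order a queue of jobs by priority.
jobs = [
    {"id": "a", "arrival": 0.0, "size": 9.0},
    {"id": "b", "arrival": 1.0, "size": 0.5},
    {"id": "c", "arrival": 2.0, "size": 4.0},
]
T = 2.0                                               # short/long threshold
cheap = lambda j: "short" if j["size"] <= T else "long"  # oracle stand-in
expensive = lambda j: j["size"]                          # oracle stand-in
order = sorted(jobs, key=lambda j: skippredict_priority(j, cheap, expensive))
# Short job b is served first (FCFS), then long jobs c and a by predicted size.
```

In the paper's cost models, the expensive prediction inside the `else` branch would additionally incur either an external cost or server time.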
Strengths: - The idea of using a two-stage prediction setup composed of a cheap but less expressive and an expensive but more expressive prediction is, to the best of my knowledge, new in the area of learning-augmented algorithms, and might have an impact on this field. - I really like the idea that the generation of a prediction requires processing volume itself. I think such a model could also be interesting for other scheduling problems, also in adversarial models. - The results show that the use of this two-stage prediction setup helps to lower the overall cost compared to the baselines for certain instances. Weaknesses: - I think the main weakness of this work is that the results seem not to be very surprising. In particular, I think it is clear that the benefit of a two-stage algorithm like SkipPredict heavily depends on how the costs relate. I am somewhat missing hard statements which summarize the findings more precisely. - Moreover, the two main components of SkipPredict, 1bit and SPRPT, have been previously studied and analyzed. Thus, to me it seems that the paper does not provide many interesting techniques or analyses. Given the conceptually interesting models, I tend to rate this paper as borderline right now. However, I am not too familiar with queueing theory and, thus, cannot for sure assess the weight of the second weakness I mentioned. Technical Quality: 3 Clarity: 3 Questions for Authors: Typos: - Line 114 "somewhat" - Line 224 missing whitespace Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thoughtful feedback and understand your concerns. We believe the perceived "lack of surprise" in our results may stem from our presentation, which we will improve in the revision. - While it may seem intuitive that the effectiveness of a two-stage algorithm depends on relative costs, our key contribution is the formalization of this intuition through precise, closed-form formulas for two new algorithms (SkipPredict and DelayPredict). These formulas calculate the mean response time of jobs with size predictions, accounting for prediction costs. This quantitative approach can provide exact thresholds for when predictions become beneficial, moving beyond a high-level qualitative understanding. - To better illustrate the potential applications of our work, we plan to discuss how our approach could be relevant to practical scenarios and show how our formulas might be used to assess the trade-offs of utilizing predictions. - We appreciate the opportunity to clarify the point on the use of 1bit and SPRPT (previously studied algorithms). First, the previous analyses of 1bit and SPRPT did not take costs into account; this is one of our contributions. Second, SkipPredict (and its analysis) does not directly rely on the 1bit and SPRPT analyses in deriving the closed-form average response time. Rather, SkipPredict represents a novel approach that, in extreme cases, converges to these known algorithms: - When the threshold T is set to infinity, SkipPredict converges to 1bit (or more precisely, 1bit with an analysis that takes costs into account). - When the threshold T is set to 0, it converges to SPRPT (or more precisely, SPRPT with an analysis that takes costs into account). For a general T, neither 1bit nor SPRPT can be used as components. Their analyses are included primarily as baselines, and because the previous analyses of 1bit and SPRPT were incomplete and arguably impractical, as they did not consider costs.
Moreover, our DelayPredict algorithm doesn't make use of the cheap prediction (1bit) at all -- instead, we use the idea of having an initial bound on processing act as an implicit "prediction". We will clarify this in the revised manuscript to better convey the innovation and importance of our approach in addressing this significant gap in previous research. - We agree that our paper would benefit from more explicit statements summarizing our key findings. In our revision, we will include a dedicated section that clearly articulates our main results and their implications, emphasizing the novel contributions of our work. - Thanks for pointing out the typos; we will fix them in the revised paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. I have the impression that several issues can indeed be fixed by a strongly refined presentation. As I wrote in my initial review, the paper introduces several interesting concepts, which are interesting for the area of learning-augmented algorithms. Thus, I decided to slightly raise my score from 4 to 5. However, I would like to leave it open to the AC/SACs how to value potential improvements via a strongly refined presentation.
Summary: This paper studies a scheduling problem that aims to minimize the expected response time, where jobs arrive online and the algorithm needs to decide the priority of jobs. The paper proposes a new algorithm called SkipPredict. Namely, the algorithm first uses a cheap prediction to partition the whole job set into two parts: a long job set and a short job set. Jobs in the short job set have the highest priority, and no further prediction is made for them; the algorithm ranks these short jobs by the first-come-first-served rule. For jobs in the long job set, an expensive prediction is made to predict the size of these long jobs. Then, the algorithm ranks these long jobs using the shortest predicted remaining processing time first rule. The paper considers two different models: in the first model, predictions incur some extra cost, and in the second model, predictions consume server time. For each model, the total cost of the algorithm is defined as the total expected response time plus the prediction cost. The authors provide theoretical proofs to compute the formula for the total cost of the proposed SkipPredict algorithm, and then run experiments on some datasets. Strengths: The general motivation of the paper is good. Namely, it considers the case where predictions do not come for free. The algorithm needs to optimize the scheduling objective together with the prediction cost. On the positive side, I expect that it will have a positive impact on practical applications. I also expect that one may be able to abstract some interesting theoretical models from this paper, so it's likely to influence future work. Weaknesses: I have to say that I usually work on the theoretical analysis of algorithms (e.g., approximation or online algorithms). I am not in the right position to judge the quality and novelty of the experimental paper.
To me, the presentation of the paper is not clear; the authors describe an algorithm in the model section instead of defining a problem. Besides this, the authors give the formula for the total cost of SkipPredict in Table 1, but it is not clear how good these formulas are from a theoretical perspective. Technical Quality: 2 Clarity: 3 Questions for Authors: N.A. Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing the motivation behind our work and its potential impact on practical applications. We appreciate your concern about the clarity of the presentation, particularly the flow from problem definition to algorithm description. We understand this may have obscured the primary contribution of our paper. To recap our objectives and contributions (see also the general Author Rebuttal): - Our goal: To develop a scheduling framework that accounts for the cost of predictions, addressing a gap in current learning-augmented algorithms for scheduling. - Why this goal: In real data systems, predictions consume resources and time. The common assumption in learning-augmented algorithms that predictions are cost-free does not reflect realistic scenarios. - Our main contribution: Comparison and analysis of multiple new algorithms (SkipPredict and DelayPredict). We note we also re-analyzed previous algorithms (1bit and SPRPT) under our more realistic models that take cost into account, extending the previous work. We derive closed-form formulas that calculate the mean response time of jobs with size predictions, given prediction costs. We show that when prediction costs are set to zero, our formulas for our new algorithms align with our new formulas for prior algorithms, demonstrating the robustness and generalizability of this analysis approach. In the appendix, we also explain how to generalize our formulas to non-fixed costs. These formulas are both theoretically significant and practically applicable. For instance, if a user can estimate prediction costs (using profiling), our formulas can be used to determine the thresholds beyond which utilizing predictions yields performance gains. We aim to enhance the presentation of our work in the following ways: - We recognize that the presentation of the problem definition before the algorithm introduction could be improved.
In revising, we will look at restructuring this section to enhance clarity of the problem definition and highlight our end results more clearly. - We will provide additional context to better illustrate the significance of the formulas we have derived and their generalizations.
Summary: Motivated by recent prediction based scheduling of ML jobs in data centers, the paper considers the problem of prediction cost aware scheduling to optimize mean response time. It considers two cost models - external and server time. The paper proposes a novel algorithm, SkipPredict, that uses a two level hierarchical prediction with a cheap short/long prediction at the first level and an expensive total size prediction at the second level. It then uses this information to schedule short jobs as FCFS and long jobs by SRPT. The paper presents an analysis of the expected response time for both models using the SOAP framework and present a comparison against FCFS, one-level FCFS+SRPT and SRPT. Finally, it provides experimental evidence of SkipPredict’s superior performance on synthetic and real world datasets. Strengths: - The paper is the first to consider the important aspect of cost of prediction in scheduling - The proposed algorithm, SkipPredict, elegantly utilizes an efficient hierarchical scheme and is practical to implement - Thorough and well presented analysis of SkipPredict as well as the baselines including generalisation to non-fixed costs - Baselines compared against are comprehensive Weaknesses: - While the motivation for the problem comes from scheduling ML jobs, the algorithm and analysis are solely queueing theoretic arguments which makes me question the relevance/interest of this work to the NeurIPS community. MLSys/SIGMETRICS perhaps might be a better fit. - Further, I believe none of the experiments presented reflect the characteristics of a machine learning job. The Exponential and Weibull distributions of service considered in the synthetic data are unlike the very light tailed (almost deterministic) nature of ML jobs. The real world datasets consist of CPU computations which make them unlikely to be from ML jobs. 
Technical Quality: 4 Clarity: 4 Questions for Authors: - While the paper analytically shows that SkipPredict does better than the baselines for any general service distribution, it would also be good to see this in the experiments with distributions/datasets representing ML jobs - (Minor) It would be nice to have a short explanation of how the SOAP framework is used in the main paper as well instead of just in the appendix. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the feedback and the thoughtful suggestions. - We understand the concern about the relevance of our queuing theoretic approach to NeurIPS. Our approach relies on queuing theory to develop algorithms that efficiently manage resources in systems, opening new directions in learning-augmented algorithms, a topic of growing importance in the NeurIPS community. As mentioned by reviewer R-OTX6, there has been related previous work on prediction costs for more abstract problems that have appeared in this community, with more limitations (such as a budget on the number of predictions), rather than our cost optimization framework. Unlike typical approaches in learning-augmented algorithms that assume free predictions, our work challenges this by addressing the realistic scenario of dedicated resources for jobs. This is particularly important in scheduling, where, as in our Server Cost model, the same server uses time to make the prediction as well as to execute the work of the jobs being scheduled. We believe our work will inspire new models and algorithms in the broader context of learning-augmented algorithms. Importantly, these ideas have potential applications in various predictive systems, including those for Large Language Models (LLMs), where efficient resource management and accurate cost assessment of predictions are crucial for optimal performance. - We appreciate the reviewer's observation regarding the characteristics of ML jobs. To clarify: (1) Our evaluation framework is designed to capture scheduling for general jobs, with ML-type jobs being a subset of these. Our theoretical results hold for any general distribution, including those typical of ML workloads. 
(2) The presented real datasets focus on scheduling jobs with predicted service times in large-scale distributed systems, which aligns with previous work (e.g., Mitzenmacher and Dell’Amico; The Supermarket Model With Known and Predicted Service Times). (3) We appreciate the reviewer's point about better representing ML job characteristics. We are not aware of, nor have we found through web searches, definitive information about specific statistical distribution models that accurately describe machine learning (ML) job distributions. However, based on general knowledge of job distributions, exponential and normal distributions could potentially model ML jobs. In our paper, we evaluate our approach using exponential and Weibull distributions, and as part of the rebuttal, we have included the results for an example normal distribution in the provided PDF. (We can readily include more experiments in the final version.) Regarding the real-world datasets we used, if the reviewer or others can provide or suggest real ML datasets, we would be glad to test our approach on them and include the results in our revised paper. (4) These additions will demonstrate that SkipPredict's performance extends to scenarios that closely resemble ML workloads, complementing our experimental results that show SkipPredict outperforms baselines for the tested service distributions. - We appreciate this suggestion; adding an overview of the SOAP framework to the main paper would enhance reader understanding. We will add a concise overview of how SOAP is applied in our work and an explanation of the rank function of the jobs in each model. --- Rebuttal 2: Comment: - Acknowledge the recent interest in the cost of predictions for learning-augmented algorithms and their role in building infrastructure for ML systems - Appreciate the additional experiments with the lighter-tailed normal distribution - Agree that public datasets with ML job distributions are unavailable (or at least I am unaware of those as well).
I would appreciate it if the authors include a remark explaining the characteristics of the real-world dataset in the final version. - Given the strengths of the paper in the novel model of the costs, solid theory with closed form expressions and well-presented analysis, I increase my score to 7
Summary: The paper considers job scheduling in the M/G/1 queueing model when the system has access to predictions regarding job lengths. The paper explicitly considers the cost incurred for obtaining the predictions in such a system. Two models are considered - (i) external cost: obtaining predictions incurs a fixed cost, but does not affect the service time of a job, and (ii) server time cost: predictions are obtained via operations that themselves need to be scheduled, incurring time that delays jobs. The authors consider a setting in which there are two kinds of predictions - (i) a cheap 1-bit prediction that only classifies whether the job length is above or below a fixed threshold T, and (ii) an expensive prediction that predicts the actual job length. The authors propose a natural algorithm, called SkipPredict, that first obtains cheap predictions for all jobs to classify each job as either "short" or "long". All jobs classified as "short" are scheduled via FCFS. The algorithm then obtains expensive predictions for all long jobs, which are then scheduled using the shortest predicted remaining time first rule. The authors analyze this algorithm under both cost models using the SOAP analysis framework and obtain explicit expressions for the mean response times of jobs. Strengths: - The paper raises an interesting question. Most work on learning-augmented algorithms assumes that predictions are available to the algorithm for "free", which is certainly not true in practice. Explicitly modeling prediction costs is a good area for further research. The server time cost model that models the delays due to predictions is especially interesting for scheduling problems. Weaknesses: - The paper is missing some references to prior work. For example, (i) "Online algorithms with Costly Predictions" (Drygala et al) considers costly predictions in other learning augmented settings.
(ii) "Parsimonious Learning-Augmented Caching" (Im et al) considers caching with the goal of using few predictions. (iii) many papers on learning-augmented scheduling. Technical Quality: 4 Clarity: 3 Questions for Authors: N/A Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for supporting our paper. We appreciate your feedback on missing references and we agree that these references are relevant to our study. In particular, we acknowledge the relevance of the paper 'Online algorithms with Costly Predictions', along with the related work 'Advice Querying under Budget Constraint for Online Algorithms'. We believe these papers (which appeared in AISTATS and NeurIPS, respectively) show the relevance of this area and our work to the NeurIPS community. We note these papers addressed standard online problems, such as the ski rental problem. However, we believe that the queueing setting introduces unique complexities not present in these more traditional problems. Notably, much of the literature in this area focuses on 'budgeted' settings, where the number of predictions is limited, whereas our work seeks to optimize overall costs considering both prediction and operational costs. In our revision, we will: - Incorporate the suggested references and other pertinent works into our related works section. - Provide a more comprehensive review of learning-augmented scheduling literature.
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback. We are encouraged they found the paper raises an interesting question (R-oTX6, R-oree) and is the first to consider the important aspect of the cost of prediction in scheduling (R-ckzA, R-kmdF), which is relevant for learning-augmented algorithms, a growing field in the NeurIPS community. Reviewers R-oTX6, R-oree, and R-kmdF believe our approach will have a positive impact on practical applications and influence future work, potentially resulting in interesting theoretical models derived from this paper (R-oree, R-kmdF). We are glad they found our proposed algorithm elegant (R-ckzA), with a thorough and well-presented analysis of SkipPredict (R-ckzA), a comprehensive evaluation against baselines (R-kmdF), and significant improvements. In our paper, we evaluate our approach using exponential and Weibull distributions; as a reviewer (R-ckzA) asked to see more results, we have here provided results for a normal distribution example (see uploaded PDF), where we also see benefits from our algorithms. One concern, reported mainly by reviewers less familiar with the field (R-oree, R-kmdF), was a lack of clarity regarding the primary contribution of the paper. Our key contribution is the comparison and analysis of multiple new algorithms (SkipPredict and DelayPredict). We derive closed-form formulas that calculate the mean response time of jobs with size predictions, accounting for the prediction cost in two different models (external cost and server time cost) for each algorithm. More generally, previous works in scheduling with predictions, as well as other related areas where predictions could be used, often ignore the cost of the prediction in the analysis or design of the system, even if it arises in experiments. In this respect, our contribution is to raise the bar for future work by formally incorporating the costs of predictions.
We do this through several additional contributions, including: providing two general cost models; re-analyzing previous algorithms (1bit and SPRPT) with costs; and examining the idea of using multiple, different-cost predictions to improve performance with SkipPredict. In addition, we acknowledge the relevance of the paper 'Online algorithms with Costly Predictions" suggested by R-oTX6, along with the related work 'Advice Querying under Budget Constraint for Online Algorithms'. We believe these papers (which appeared in AISTATS and NeurIPS, respectively) show the relevance of this area and our work to the NeurIPS community. We note these papers addressed standard online problems, such as the ski rental problem. However, we believe that the queueing setting introduces unique complexities not present in these more traditional problems. Notably, much of the literature in this area focuses on 'budgeted' settings, where the number of predictions is limited, whereas our work seeks to optimize overall costs considering both prediction and operational costs. We acknowledge that we could have emphasized and elaborated on the formulas more clearly, as they represent the main theoretical advancement of our work. We address some specific questions below and will incorporate all feedback in the final version. Pdf: /pdf/222f543e49ac2b9f23220a25caa1c07cb3dff365.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Data Attribution for Text-to-Image Models by Unlearning Synthesized Images
Accept (poster)
Summary: The authors propose a new approach for data attribution for text-to-image models that utilizes machine unlearning. They then perform multiple experiments to demonstrate that the proposed method is competitive with other methods. Strengths: - The paper is generally well-written, clearly structured, and easy to follow. - The problem of data attribution in text-to-image models is interesting. - Multiple experiments are conducted to verify the claims. Weaknesses: - This is not really a weakness, but I wonder if the leave-K-out model can still generate a target image just under a different prompt. Perhaps a metric that sees how hard it is to perform Textual Inversion [1] could be used? References: [1] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or, “An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion” Technical Quality: 3 Clarity: 3 Questions for Authors: - See weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the feedback and suggestions. ### **Can Textual Inversion be used for evaluation?** In general response G3, we verify that Textual Inversion is not an ideal choice to check whether a concept is represented by leave-K-out models.
Summary: The paper discusses how we can identify influential examples by unlearning the synthesized images in text-to-image generation models. Unlearning synthesized images leads to an increase in the loss for the most influential training examples for the generation of synthesized images. The paper relies on a well-known Fisher-information regularization in order to avoid catastrophic forgetting. The paper evaluates image generation quality in two ways: 1) Retraining the model from scratch after removing the influential examples from the training set, where poor synthetic image generation quality is seen as a proxy for identifying the most influential training examples. 2) Using ground truth attribution provided in a Customized Model Benchmark. Strengths: + The authors did a great job telling the story, motivating the problem and discussing related work. + The evaluation metrics and benchmarks are well-described. The authors show the effectiveness of their approach both qualitatively and quantitatively. Weaknesses: + It is unclear how robust the synthetic image generation for a given caption is. It would be good to estimate the sensitivity of the generation quality to subtle changes in the caption. + The overall theoretical justification is not very clear. It is not clear how unlearning affects the overall utility of the model. + Figure 2 mentions that their approach is qualitatively better than DINO and JourneyTRAK, but how can we tell that it is quantitatively better? It might be that the proposed method is better for some examples and worse for others. In the implementation details section, line 212, the paper suddenly starts mentioning DDPM loss without explaining what it is and how it is applied in their method. In the methods section there is no mention of it; instead the paper discusses a different loss, the EWC loss. This is a bit confusing. + In Figures 3 and 2, DINO is listed as a baseline against which the authors evaluated the proposed method.
It is, however, not an influence-function approximation or an unlearning method. It seems that the authors used it for image similarity. At the same time, they also compare against JourneyTRAK, which is an influence-approximation approach. This is a bit confusing. + Figure 4: where exactly are the error bars in the plot? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Since \hat{z} is not part of the training dataset, how do we know that its effect is unlearned appropriately from the model with the elastic weight consolidation loss? 2. How sensitive is \hat{z} to subtle changes in the caption? 3. Have you considered comparing your work against DataInf: https://arxiv.org/abs/2310.00902? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper discusses the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
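The Fisher-information regularization this review refers to is the standard elastic weight consolidation (EWC) penalty. A minimal sketch of how such a penalty shapes an unlearning objective, using a diagonal Fisher approximation; this is my own illustration with hypothetical values, not the paper's implementation:

```python
import numpy as np

def ewc_regularized_loss(unlearn_loss, theta, theta_orig, fisher, lam):
    """Unlearning objective with an EWC penalty.

    unlearn_loss : scalar loss encouraging the model to forget the target
                   (e.g., a loss computed on the synthesized image).
    fisher       : diagonal Fisher information estimated at theta_orig;
                   large entries mark weights important for retained
                   knowledge, so moving them is penalized more strongly.
    """
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_orig) ** 2)
    return float(unlearn_loss + penalty)

theta_orig = np.array([1.0, -0.5, 2.0])   # hypothetical original weights
fisher = np.array([10.0, 0.1, 5.0])       # hypothetical importance estimates
theta = np.array([1.1, 0.5, 2.0])         # candidate weights after unlearning
# Moving a high-Fisher weight by 0.1 costs 10*0.01 = 0.1 in squared terms,
# the same as moving a low-Fisher weight by a full 1.0 (0.1*1.0 = 0.1).
```

Minimizing `unlearn_loss` subject to this penalty forgets the target while discouraging drift on weights that carry the retained knowledge, which is the catastrophic-forgetting safeguard the review describes.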
Rebuttal 1: Rebuttal: Thanks for the helpful suggestions and comments. ### **Sensitivity to subtle changes in captions** We clarify that our work focuses on finding training images that influence a generated image, not how changes in captions affect generation quality. From our experience, the model is not very sensitive to subtle caption changes (e.g., misspellings). ### **Theoretical Justification** In Appendix A, we show the theoretical connection between influence functions and our approach, both of which make approximations toward characterizing the effect of removing a training point and evaluating its effect on a synthesized image (or vice versa). As described in L165-173 of the main text, a “perfect” attribution is intractable, since it requires combinatorially searching the set of influential images, and our method and influence functions both serve as “proxy” solutions to this. We validate our method through the leave-K-out evaluation, showing our "proxy" is more effective than baselines such as influence functions. We will clarify this in the text. ### **How unlearning affects the overall utility of the model** We would like to clarify that we use unlearning to attribute images **generated from the original model**. Hence, the unlearned model will not be used to generate images for end users but instead only serves as a part of the attribution algorithm. In addition, in our general response G1, we analyze and report the effectiveness of our unlearning method. ### **How to say an attribution algorithm is better “quantitatively”?** We provide quantitative evaluation in Figure 4 in the main text, and Figs. 7 & 8 in the supplementary, showing our method consistently outperforms various baselines. ### **Mention of DDPM loss** DDPM loss is introduced by Ho et al. [3], and is the standard loss used to train diffusion models. This corresponds to $\mathcal{L}(\hat{z}, \theta)$ in Eq. 4 of the main text.
We will clarify this, include a detailed description of DDPM loss, and add a citation in the revision. ### **Using DINO as a baseline** For completeness, we indeed use an array of strong baselines, including feature similarity, as is standard practice [1, 2]. Our method outperforms similarity-based baselines in most cases (Fig. 4, 7, and 8 of the main text). ### **Error bars in Figure 4** We show standard errors in the top plot in Figure 4, but the error bars are very small and negligible. To clarify this, following reviewer ZhUS’s suggestion, we will report a table that includes the standard errors in our revision (see Tab. 1 in our response PDF, which summarizes Figs 4, 7, 8). ### **How to know if synthetic content is unlearned appropriately?** Please see G1 in our general response, where we quantitatively verify that the target image is forgotten, while other information is retained. ### **Comparison to DataInf** As suggested, below, we compare our work against DataInf, evaluating their performance with metrics proposed in Sec. 5.1 of the main text: leave-K-out model’s (1) Loss change and (2) deviation of generation.

| | Loss Change $\uparrow$ (K=500) | Loss Change $\uparrow$ (K=1000) | Loss Change $\uparrow$ (K=4000) | MSE $\uparrow$ (K=500) | MSE $\uparrow$ (K=1000) | MSE $\uparrow$ (K=4000) | CLIP $\downarrow$ (K=500) | CLIP $\downarrow$ (K=1000) | CLIP $\downarrow$ (K=4000) |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| DataInf | 0.0032±0.0001 | 0.0034±0.0002 | 0.0036±0.0002 | 0.038±0.004 | 0.039±0.005 | 0.045±0.005 | 0.78±0.02 | 0.78±0.02 | 0.77±0.02 |
| Ours | **0.0051±0.0003** | **0.006±0.0005** | **0.0087±0.0006** | **0.054±0.006** | **0.055±0.004** | **0.059±0.005** | **0.75±0.02** | **0.69±0.02** | **0.62±0.02** |

Our method significantly outperforms DataInf quantitatively. We show a qualitative example in Fig. 4 of the response PDF.
DataInf often attributes images that are not visually similar, making them less likely to be influential to the synthesized images. We believe DataInf performs poorly because it is designed for LoRA fine-tuned models, rather than text-to-image models trained from scratch. According to DataInf’s paper, the method uses a matrix inverse approximation to compute the influence function efficiently, but this approach results in a pessimistic error bound. While DataInf’s authors mentioned that the error is more tolerable in LoRA fine-tuning, this error may not be acceptable for data attribution in text-to-image models trained from scratch. We will add results and discussion in the revision. Thanks for the reference. Due to time constraints, we average over 20 synthesized image queries here, but will show full results with 110 queries in the revision. **Citations** [1] Wang et al. Evaluating Data Attribution for Text-to-Image Models. [2] Singla et al. A Simple and Efficient Baseline for Data Attribution on Images. [3] Ho et al. Denoising Diffusion Probabilistic Models. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Thank you authors for the detailed response. I have a question while looking at Table 1 in the PDF. Is this table for 1 generated image or is this an aggregate for multiple generated images? How is the domain of those generated images identified? How do we ensure that our evaluation of generated images is representative? How does the generation change if we say: "bus", "white bus" vs "big white bus" for the K-leave-out test in Figure 1? In Figure 1, are `Related images` the ones that are being removed during the K-leave-out experiment? If not, then it is not clear which images are left out for the K-leave-out experiment. Is it possible to include textual captions for generation into the visual results? --- Reply to Comment 1.1.1: Comment: Thanks for your quick response. - Table 1 is consistent with the practice in the paper, as described in L236.
The results are aggregated over 110 generated images. - The generated images' prompts are from the MSCOCO validation set. This ensures that generated images are representative of the training set (also MSCOCO). - The generated images don’t change much when we change “bus” in the caption to “white bus” or “big white bus”. - No, only the attributed training images from the target are removed (see top row of Fig. 2, main paper). The “related images” are *generated* images, from captions similar to the target prompt. As seen in the figure, the leave-K-out model successfully removes the target image while preserving the related images. Please see general response G2 for more detail. - Yes, we will include captions in our revision, similar to Figs 2&3 in the main paper.
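The deviation measurements discussed throughout this thread reduce to two standard formulas: pixel-space mean squared error between the original and leave-K-out generations, and CLIP-style similarity between their embeddings. A small sketch, using plain cosine similarity as a stand-in for similarity between CLIP image embeddings (the actual pipeline with a CLIP encoder is assumed, not shown):

```python
import numpy as np

def mse_deviation(img_a, img_b):
    """Mean squared error between two images (arrays with values in [0, 1])."""
    return float(np.mean((img_a - img_b) ** 2))

def cosine_similarity(emb_a, emb_b):
    """Cosine similarity between two embedding vectors (CLIP-style)."""
    return float(np.dot(emb_a, emb_b) /
                 (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

# A "forgotten" target should show high MSE / low similarity versus the
# original model's output; retained concepts should show the opposite.
rng = np.random.default_rng(1)
original = rng.random((8, 8))
identical = original.copy()
assert mse_deviation(original, identical) == 0.0
assert abs(cosine_similarity(original.ravel(), identical.ravel()) - 1.0) < 1e-9
```

In the rebuttal's evaluation, larger MSE and lower CLIP similarity on the target image (relative to related and other images) indicate successful, selective forgetting.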
Summary: This paper proposes a novel method for data attribution in text-to-image diffusion models. The key idea is to unlearn a synthesized image by optimizing the model to increase its loss on that image, while using elastic weight consolidation to avoid catastrophic forgetting. The authors then identify influential training images by measuring which ones have the largest increase in loss after this unlearning process. The method is evaluated through rigorous counterfactual experiments on MSCOCO, where models are retrained after removing the identified influential images. It outperforms baselines like influence functions and feature matching approaches. The method is also tested on a benchmark for attributing customized models. Strengths: 1. The paper addresses the issue of data attribution, which is important for understanding the behavior of generative models and the contribution of training data. 2. The paper proposes an innovative and effective approach to data attribution that outperforms existing methods on counterfactual evaluations. 3. The authors conduct extensive experiments, including computationally intensive retraining, to thoroughly validate the effectiveness of the proposed method. Weaknesses: 1. The evaluations are limited to medium-scale datasets like MSCOCO. It's unclear how well the approach would scale to massive datasets, such as LAION, which is used to train state-of-the-art text-to-image models. 2. The method is computationally expensive, requiring many forward passes over the training set to estimate losses. This may limit scalability to large datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the performance of the unlearning method vary with different sizes and types of datasets? 2. How does the unlearning method perform in terms of efficiency compared with other methods?
Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the helpful suggestions and feedback. ### **Efficiency comparison with other methods** Following the reviewer’s suggestion, we compare our method’s efficiency with other methods. To proceed with the efficiency analysis, we provide a brief overview of each type of method. - **Our unlearning method**: we obtain a model that unlearns the synthesized image query and check the loss increases for each training image after unlearning. - **TRAK, JourneyTRAK**: precompute randomly-projected loss gradients for each training image. Given a synthesized image query, these methods first obtain its randomly-projected loss gradient and then match it with the training gradients using the influence function. For good performance, both methods require running the influence function on multiple pre-trained models (e.g., 20 for MSCOCO). - **Feature similarity**: standard image retrieval pipeline. Precompute features from the training set. Given a synthesized image query, we compute feature similarity for the entire database. We compare the efficiency of each method. **Feature similarity is the most run-time efficient** since obtaining features is faster than obtaining losses or gradients from a generative model. However, feature similarity doesn’t leverage knowledge of the generative model. Our method outperforms various feature similarity methods (Fig. 4 of the main paper). **Our method is more efficient than TRAK/JourneyTRAK in precomputation.** Our method’s precomputation is much more efficient, in both runtime and storage, than TRAK and JourneyTRAK. Our method only requires computing and storing loss values of training images from a single model. On the other hand, both TRAK and JourneyTRAK require pre-training extra models (e.g., 20) from scratch, and precomputing & storing loss gradients of training images from those models.
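The TRAK-style query-time scoring described here (dot products between a query's projected gradient and stored training gradients, averaged over an ensemble of models) can be sketched as follows. All shapes, the random projection, and the synthetic gradients are illustrative stand-ins; TRAK's actual estimator includes additional terms beyond this simplified matching step.

```python
import numpy as np

def attribution_scores(query_grads, train_grads):
    """Simplified TRAK-style scores: average dot product between a query's
    projected gradient and each training point's projected gradient,
    taken over an ensemble of independently trained models.

    query_grads: (num_models, proj_dim)
    train_grads: (num_models, num_train, proj_dim)
    returns:     (num_train,) averaged attribution scores
    """
    per_model = np.einsum('md,mnd->mn', query_grads, train_grads)
    return per_model.mean(axis=0)

rng = np.random.default_rng(0)
num_models, num_train, dim, proj_dim = 20, 100, 512, 32
# One random projection matrix per model, Johnson-Lindenstrauss style.
proj = rng.normal(size=(num_models, dim, proj_dim)) / np.sqrt(proj_dim)

raw_train = rng.normal(size=(num_models, num_train, dim))
raw_query = raw_train[:, 7, :]  # query gradient aligned with train point 7

train_grads = np.einsum('mnd,mdk->mnk', raw_train, proj)
query_grads = np.einsum('md,mdk->mk', raw_query, proj)

scores = attribution_scores(query_grads, train_grads)
assert scores.shape == (num_train,)
assert scores.argmax() == 7  # the aligned training point scores highest
```

This makes the trade-off discussed in the rebuttal concrete: the per-query work is cheap dot products, but the `(num_models, num_train, proj_dim)` gradient store and the ensemble of pretrained models must be precomputed.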
**Our method is less efficient when estimating attribution score.** TRAK and JourneyTRAK obtain attribution scores by taking a dot product between synthesized image gradient and the stored training gradient features. The methods require calculating and averaging such dot product scores from the 20 pretrained models. On the other hand, as acknowledged in our limitations (Section 6 of the main text), though our model unlearns efficiently (e.g., only 1-step update for MSCOCO), getting our attribution score involves estimating the loss on the training set, which is less efficient than dot product search. Tradeoff-wise, our method has a low storage requirement at the cost of higher runtime. Our main objective is to push the envelope on the difficult challenges of attribution performance. Improving computation efficiency of attribution is a challenge shared across the community [1, 2], and we leave it to future work. ### **Scaling up data attribution to massive datasets** In the community, scaling data attribution to massive datasets remains a huge challenge in terms of both algorithm and evaluation [1, 2]. Our solution to mitigate this issue is to: 1. Provide the gold-standard leave-K-out evaluation on a moderately-sized dataset (MSCOCO) 2. Test on a Customized Model Benchmark, focusing on attributing personalized/customized large-scale text-to-image models. While we don't have the tech-giant-level resources to run full LAION-scale experiments, the MSCOCO evaluation is already the most computationally extensive evaluation in this space. **Citations** [1] Park et al. TRAK: Attributing Model Behavior at Scale. [2] Grosse et al. Studying Large Language Model Generalization with Influence Functions. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The authors have addressed my concerns. Based on their rebuttal, I have decided to upgrade my evaluation to the weak accept. --- Reply to Comment 1.1.1: Comment: Thanks for increasing the rating. 
We will incorporate the feedback in our revision.
Summary: Authors propose a method for identifying training images that need to be removed from the training set of a generative model to prevent a single specific “bad” (undesired) output from occurring in its output. Authors propose to directly unlearn the synthesized “bad” image, evaluate how this unlearning changes the training loss across training examples, and retrain the model omitting highly affected training examples - instead of evaluating how including each training example affects the “bad” output. Authors found that the resulting approach worked best when combined with prior techniques for preventing catastrophic forgetting in classifier unlearning (e.g. optimizing only a set of weights and performing unlearning steps using Newton-Raphson iterations on the unlearned example using the original Fisher matrix). Authors show that retraining generators on pruned training sets indeed prevents the retrained generator from generating the undesired image - more specifically, authors show that the loss on the undesired output increases (more so compared to baseline methods) and that rerunning the retrained generator with the same random seed and text conditioning indeed does not produce the “bad” example. Strengths: While at first the result might seem trivial (since authors both optimize and measure the same metric - “the model loss”) the finding is in fact very much non-trivial since authors retrain the model while pruning images that were affected most by unlearning the problematic output, so it might have happened that removing these images did not prevent the output from occurring - so reported findings are indeed both interesting and valuable for the community (potentially beyond unlearning work). Authors provide a random baseline and all provided results are significantly different from random (confidence intervals seem unnecessary).
Authors provide qualitative evaluation (Fig 3) suggesting that the desired effect is indeed observed in practice and not only in metrics. Overall, the paper is well written and does a very good job of motivating both the problem and the proposed solution. Authors provide an extensive literature review that manages to be helpful even to a person without prior experience with unlearning (me). Weaknesses: While I enjoyed reading the first half of the paper, I have serious concerns regarding the quality of the results section. While authors provide many qualitative results in the main paper, the burden of making sense of quantitative results (e.g. comparing them to ablations) is not only entirely on the reader, but is made worse by a complete lack of any raw numbers for cross-comparison (e.g. no tables - only bar plots w/o numbers), and putting the vast majority of quantitative results into supplementary without providing that much more context there. For example, Figure 4 (top) - colors are very similar and lines are very close, making results not really legible. Given that performance is evaluated only at three data points, I see no reason why this cannot be a table - that would aid both reproducibility and comprehension (e.g. cross-comparison). I also do not quite understand the point of reporting the "equivalent to X random points” metric - it is never even explicitly discussed in the main paper. All results for the MSE and CLIP metrics and all ablations (also no tables, only bar plots without numbers) are in the supplementary, but even there not much additional context for interpreting these results is provided. Some claims are discussed and are just never qualitatively verified - e.g. authors claim that the proposed technique is specifically designed to not lead to catastrophic forgetting, but I could not find any evidence that would support that.
Moreover, from Figure 3 it might seem like it not only forgot how to generate this specific bus, but might have forgotten how to generate all buses - I am not entirely sure if that is the intended behavior. I would appreciate a qualitative confirmation that the model did not just “forget everything”. I have minor concerns regarding the delta G metric. While Georgiev et al. [12] supposedly already showed in an accepted (workshop) submission that two models trained independently on the same or similar datasets generate similar images when primed with the same noise input - justifying the existence of metric (3) - I’d appreciate some results confirming that this is indeed the case for these models in these experiments. For example, if authors showed at least qualitatively that removing images most influential for generating a particular “bad” bedroom from the training set indeed does not in any way affect the generation of unrelated images (e.g. an image of mountains), that would help. Or, even better, by performing textual inversion on the “undesired” input after removing images that affect it the most, to show that it can no longer be represented by the model. Otherwise, there is no way to tell if a particular “bad generation” just “migrated” elsewhere. Another minor concern: since authors performed multiple gradient update steps, I wonder whether other methods can also be used with multiple update steps.
Technical Quality: 3 Clarity: 2 Questions for Authors: In addition to the weaknesses listed above, here are some questions/concerns I had while reading the paper: Figure 1 - it is not said explicitly, but after several attempts, I figured out that the “bad” output that we want to remove in this figure is “bedrooms”, so it makes sense that the method assigned high loss (bottom orange bars) and “picked” bedroom images; the fact that only a single image (mountain) either “picked by the method” or “remains in the training set” is confusing; it might be worth keeping only two bedrooms, adding a couple more images with low loss, and somehow highlighting that “bedrooms are picked for removal”, not “only a single mountain is picked for keeping”. L130 vs L104 - it is not clear why you need to introduce \tilde \theta if you already have \theta, especially given that in (2) the RHS appears to not depend on \theta (w/o \tilde) and L157 also uses \theta (w/o \tilde), so the use of \tilde there is somewhat inconsistent and confusing; looking at L483-499 I think I understand now that the implied difference between the two is “optimal” vs “updated”, but I think this distinction is somewhat confusing in the main paper. L132 - \epsilon is never introduced at this point, so this is a bit confusing. L137-139 - trained from the same random noise or generated from the same random noise? worth rewording. L168-172 - these lines appear to not follow from the previous paragraphs introducing the notation for the attribution algorithm since it already provides per-sample scores (and therefore does not require 2^K evaluations); or not, and each tau(\hat z, z_i) is assumed to potentially involve iterating over all subsets of the training set (w/ and w/o z_i) - if so, please elaborate. Figure 2 - “Notably, our method better matches the poses of the buses (considering random flips during training) and the poses and enumeration of skiers.” - I would not say that it is very apparent from the figure.
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Authors do mention some limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thorough comments and suggestions. ### **Results Presentation** **Tables.** Thanks for the suggestion. We include a table for baseline comparison in Tab. 1 of the response PDF, which corresponds to results in Figs. 4, 7, and 8 of the main text. We will include this, along with a similar table for ablation studies, in our revision. **More context for supplemental results.** We will discuss our ablation studies more in the main text in our revision. To elaborate on the findings as reported in Appendix C.1, we find that using a subset of weights to unlearn leads to better attribution in general. We test three weight subset selection schemes (Attn, Cross Attn, Cross Attn KV), all of which outperform using all weights. Among them, updating Cross Attn KV performs the best, consistent with findings from model customization [1,2] and unlearning [3]. In addition, we discussed the deviation of generated output (delta G metric) in Section 5.1 of the main text (L271-278). Due to the space limit, we included the corresponding figures in the Appendix. We are happy to move some related figures (Figs. 7 or 8) to the main text in the revision. **“Equivalent to X random points” metric.** We introduced this metric in L229 of the main text. We proposed this metric to convert from DDPM loss changes (which are not intuitive to understand) to a budget of images (which is more understandable). For example, removing 500 images (0.4% of the dataset) predicted by our approach causes the same performance drop as randomly removing around half of the dataset. We will clarify this in the revision. ### **Evidence for preventing catastrophic forgetting** Please see G1 in our general response, where we quantitatively verify that the target image is forgotten, while other information is retained. ### **Did the leave-K-out model “forget everything” about buses?** The leave-K-out model “forgets” the specific query bus, while retaining other buses and other concepts.
Fig. 1 of the response PDF shows a qualitative study of this. We also report a quantitative study of this in the general response G2. ### **Justification of delta G metric** In general response G2, we follow the suggestion and study whether leave-K-out models affect the generation of unrelated images. In general response G3, we also verify that Textual Inversion cannot faithfully reconstruct the images, therefore making it a less ideal choice to check whether a concept is represented by leave-K-out models. ### **Can other methods use multiple update steps?** Existing influence function methods for text-to-image models are based on a local linear approximation. The approximation is based on a single, infinitesimal network update, so it is not compatible with multiple update steps. Feature similarity methods leverage fixed features, so network updates will not apply to these methods. ### **Other clarifications** **Fig. 1 images.** We will include more non-bedroom images in Fig. 1 in the revision. **The use of $\tilde{\theta}$.** Yes, $\tilde{\theta}$ refers to the “optimal” pretrained model. Since it is a constant term in the EWC regularization loss (Eq. 4), we use a different term to separate it from $\theta$, the “updated” parameters for unlearning. We will make the notation more consistent in the revision and rename $\tilde{\theta}$ to $\theta_0$ for more clarity. **Introducing $\epsilon$.** In L132, $\epsilon$ is the noise map introduced in L105. We will clarify this by mentioning it is the noise map again in L132. **Generated from the same random noise.** In L137-139, the images are generated from the same random noise. We will rephrase the sentence as follows: “Georgiev et al. 
find that images generated from the same random noise $\epsilon$ have little variations, even when they are generated by two independently trained diffusion models on the same dataset.” **Beginning of method section.** To resolve the confusion about L168-172, we clarify our formulation as follows: 1. If we had infinite compute and a fixed budget of K images, we could search for every possible subset of K images and train models from scratch. If removing the set of K images leads to the most “forgetting” of the synthesized image, this set should be the most influential set. 2. Of course, the above is impractical, so we simplify the problem by individually estimating the influence of each training point. 3. One way to estimate the influence of a training image is to obtain a model that unlearns the training image. However, for attribution, it is expensive to run unlearning on every single training image. 4. To resolve this issue, we instead apply unlearning to the *synthesized image* and then assess how effectively each training image is also forgotten as a result. It is much faster as we only need to run unlearning once. We will reorganize the beginning of the method section to make this clear. **Citations** [1] Kumari et al. Multi-Concept Customization of Text-to-Image Diffusion. [2] Tewel et al. Key-Locked Rank One Editing for Text-to-Image Personalization. [3] Kumari et al. Ablating Concepts in Text-to-Image Diffusion Models. --- Rebuttal Comment 1.1: Title: rebuttal Comment: I appreciate the effort to explain the original motivation (for metrics used, etc.) and I agree with it. I also find Table 1 in the rebuttal pdf much more convincing (and thank you for error bars). Having read the rebuttal message, pdf, and rebuttals for other reviews, I think the final paper would be greatly improved by the addition of results and formatting changes (format results as tables) proposed during review. 
I think Figure 1 in the rebuttal also mostly addresses my concerns re G delta metric (shows that other images are indeed reconstructed almost perfectly, confirming claims of Georgiev et al). For completeness, I'd appreciate Related/Other MSE/CLIP measurements for other baselines (if images are generated and stored, this should not be difficult)? And (later) I would encourage authors to put in more work into Textual Inversion experiments - people report much better reconstruction quality than what was provided in the rebuttal Fig 3. With the addition of (many) quantitative and qualitative evaluations added in the rebuttal, this submission shapes up into a convincing, well-motivated work. My only concern is that with that many changes, a significant rewrite of the second half of the paper will take place - and that will not be peer-reviewed. Given that the authors did a good job with both 1) the first half of the original submission and 2) provided convincing results in this rebuttal - I tend to believe that they will be able to rewrite the second half of the paper incorporating all the feedback and experiments provided above. In light of the previous paragraph, I increased my final rating to Accept, but I encourage authors to do a thorough reword of the second half of the paper. --- Reply to Comment 1.1.1: Comment: Thanks for increasing the rating. We will incorporate the feedback in our revision and reword/reorganize the second half of the paper.
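The unlearning objective discussed in this thread - raise the loss on the target while an elastic-weight-consolidation (EWC) penalty anchors parameters that the Fisher information marks as important for everything else - can be sketched in a toy form. This is a plain gradient-step version with a quadratic stand-in target loss and scalar Fisher values; the paper itself uses Newton-style updates on a weight subset, so this is only a hedged illustration of the mechanism, not the authors' algorithm.

```python
import numpy as np

def unlearn_step(theta, theta0, fisher, lr=0.01, lam=1.0):
    """One step minimizing: -L_target(theta) + lam/2 * F * (theta - theta0)^2,
    i.e. raise the target loss while EWC keeps high-Fisher parameters
    close to the pretrained values theta0.
    Toy target loss: L_target(theta) = 0.5 * ||theta||^2 (a stand-in)."""
    target_grad = theta  # dL_target/dtheta for the toy quadratic loss
    grad = -target_grad + lam * fisher * (theta - theta0)
    return theta - lr * grad

theta0 = np.array([1.0, 1.0])
fisher = np.array([100.0, 0.01])  # param 0 "important" for other concepts

theta = theta0.copy()
for _ in range(100):
    theta = unlearn_step(theta, theta0, fisher)

# High-Fisher parameter barely moves; low-Fisher parameter absorbs the
# unlearning update, which is the selective-forgetting behavior at issue.
assert abs(theta[0] - theta0[0]) < 0.02
assert theta[1] > 2.0
```

The point of the toy run: the EWC term turns an otherwise indiscriminate loss-maximization into one that channels the update into directions the Fisher matrix deems unimportant, which is why the rebuttal's "Ours" row retains other concepts better than the SGD ablation.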
Rebuttal 1: Rebuttal: We thank the reviewers for their helpful comments. We are happy that reviewers found that our paper motivated the problem well (ZhUS, UBLR), provided an extensive literature review (ZhUS, UBLR), proposed interesting findings (ZhUS, YhLa), and conducted extensive experiments (8wwb, YhLa). We will provide clarifications to questions shared across reviewers here. ### **(G1) Effectiveness in Unlearning Synthesized Images (ZhUS, UBLR)** Our attribution method relies on unlearning synthesized images, making it crucial to have an unlearning algorithm that effectively removes these images without forgetting other concepts. Following reviewers ZhUS and UBLR’s suggestions, we analyze the performance of our unlearning algorithm itself and ablate our design choices. We construct experiments by unlearning a target synthesized image and evaluating: - **unlearning the target image**: We measure the deviation of the regenerated image from the original model’s output—the greater the deviation, the better. - **retaining other concepts**: We generate 99 images using different text prompts and evaluate their deviations from the original model’s output—the smaller the deviation, the better. We measure these deviations using mean square error (MSE) and CLIP similarity. We evaluate across 40 target images, with text prompts sampled from the MSCOCO validation set. We compare to the following ablations: - **SGD** refers to swapping our method’s Newton update steps (Eq. 5 in main text) to the naive baseline mentioned in L174 of the main text, where we run SGD steps to maximize the target loss without EWC regularization. - **Full weight** refers to running our Newton update steps on all of the weights instead of Cross Attn KV. The following table, along with Fig. 2 of the response PDF, shows the comparison. 
| | Target MSE ($\uparrow$) | Target CLIP ($\downarrow$) | Other MSE ($\downarrow$) | Other CLIP ($\uparrow$) |
|---|---|---|---|---|
| SGD | 0.081±0.003 | 0.67±0.01 | 0.033±0.0004 | 0.83±0.002 |
| Full weight | 0.086±0.005 | 0.7±0.01 | 0.039±0.001 | 0.86±0.002 |
| Ours | **0.093±0.004** | **0.65±0.01** | **0.022±0.0004** | **0.89±0.002** |

As shown in the figure and the table, both our regularization and weight subset optimization help unlearn the target image more effectively, without forgetting other concepts. ### **(G2) Do leave-K-out models forget other images? (ZhUS)** In Sec. 5 of the main paper, we show that leave-K-out models forget how to generate target synthesized image queries. Reviewer ZhUS raises an interesting question about whether these models forget unrelated images, too. Our findings show that the answer is no: leave-K-out models forget only the specific concepts while retaining others. Following reviewer ZhUS’s suggestion, we study how much the leave-K-out model’s generations deviate from those of the original model in three categories: - **Target images**: the attributed synthesized image. Leave-K-out models should forget these—the greater the deviation, the better. - **Related images**: images synthesized by captions similar to the target prompt. We obtain the most similar 100 captions from the MSCOCO val set using CLIP’s text encoder. Leave-K-out models should not forget all of them—the smaller the deviation, the better. - **Other images**: images of unrelated concepts. Prompts are 99 different captions selected from the MSCOCO val set. Leave-K-out models should **not** forget these—the smaller the deviation, the better. In Fig. 1 of the response PDF, we find that the leave-K-out model “forgets” the query bus image specifically while retaining other buses and other concepts. Similar to G1, we quantitatively measure deviations using mean square error (MSE) and CLIP similarity. We evaluate 40 pairs of target images and leave-K-out models.
The following table shows the results.

| | Target MSE ($\uparrow$) | Target CLIP ($\downarrow$) | Related MSE ($\downarrow$) | Related CLIP ($\uparrow$) | Other MSE ($\downarrow$) | Other CLIP ($\uparrow$) |
|---|---|---|---|---|---|---|
| K=500 | 0.054±0.004 | 0.719±0.0123 | 0.039±0.0003 | 0.862±0.0009 | 0.041±0.0003 | 0.788±0.0014 |
| K=1000 | 0.058±0.0043 | 0.675±0.0136 | 0.041±0.0003 | 0.855±0.0009 | 0.041±0.0003 | 0.788±0.0014 |
| K=4000 | 0.06±0.0036 | 0.612±0.0136 | 0.046±0.0004 | 0.831±0.0011 | 0.041±0.0003 | 0.787±0.0014 |

We find that target images have larger MSE and lower CLIP similarity than related images and other images. Also, as the number of removed influential images (K) increases, the target image error increases rapidly while other images stay almost the same. Interestingly, related images’ errors increase with larger K, but the errors are still much smaller than those of target images. This can be due to the fact that as K increases, the group of influential images can start affecting other related concepts. ### **(G3) Running Textual Inversion for Leave-K-Out Models? (ZhUS, YhLa)** Both reviewers ZhUS and YhLa suggest using Textual Inversion to check whether leave-K-out models forget the target concept. In fact, we find that Textual Inversion cannot reconstruct a given image faithfully. In Fig. 3 of the response PDF, we run Textual Inversion on the original model, and the inverted results are variations of the reference images instead of faithful reconstructions. This finding coincides with the actual goal of Textual Inversion: generate variations of a user-provided image. Hence, we believe that deviation of generation (delta G) is a suitable choice for evaluation. In the previous response (G2), we further analyze and validate this choice. Pdf: /pdf/e114136ed2c757d3f6b3488d4687ef3873a83938.pdf
NeurIPS_2024_submissions_huggingface
2024
Repurposing Language Models into Embedding Models: Finding the Compute-Optimal Recipe
Accept (poster)
Summary: The paper fine-tunes Pythia LLMs with different sizes for information retrieval (IR) applications. The paper investigates the effect of different model sizes, FLOPs, fine-tuning methods such as LoRA or bias tuning, and some important hyperparameters (e.g., dimension in LoRA) on loss and downstream applications (measured by MTEB). Some takeaways are interesting and counterintuitive (e.g., the larger model does not necessarily achieve better loss). Strengths: This paper conducts very extensive experiments and analyses in the main paper and appendix. I appreciate all the efforts and believe that it would be very useful for practitioners who want to create their own embedding models from LLMs. Weaknesses: Although I think the experiments in this paper are valuable, I think the main goal of the analyses is not very practical. Personally, I think the term Compute-Optimal from Chinchilla (https://arxiv.org/pdf/2203.15556) and this paper is very misleading because it only considers the training cost and ignores the inference cost. In practice, the LLMs are often trained much longer than this "optimal" value. For example, Chinchilla-optimal for an 8B LLM is just 200B tokens, but LLaMA 3 is trained using over 15T tokens (https://ai.meta.com/blog/meta-llama-3/). For the information retrieval (IR) models, the inference cost might be even more important compared to LLMs. In addition to the main complaint above, I think that some details could be explained more clearly (see the questions below). Finally, I think that the figures with FLOPs on the x-axis and different curves for different model sizes are much easier to read than what the paper has now (model size on the x-axis and different curves for different FLOP budgets), but it might be my personal preference. Technical Quality: 3 Clarity: 2 Questions for Authors: 1.
Although you show that the correlation between MTEB and loss is very high (−0.892) overall, some important trends seem to differ, which might affect your takeaway message. For example, the best MTEB score always comes from the largest model in Figures 7-10, but the lowest loss often comes from a smaller model in Figures 2 and 4. The bias tuning method is significantly worse in terms of loss, but it seems to achieve the best MTEB score when the FLOP budget is 5e+18 in Figure 11. When you write your takeaways, why not also include the MTEB results? 2. I don't understand l_r and l_c in lines 134 and 135. Why don't we need to use labels.T for l_c? If this is a standard loss, please cite it. Otherwise, please provide more explanation/motivation for this loss. 3. Why do you choose to use the English partition of the BAAI BGE dataset, which is designed for Chinese embedding? If this is not a common choice, you should explain it further. 4. Did you get the performance on the full MTEB? Does the full MTEB give you different trends? In the figures, MTEB performance means performance on the task subset, right? I think you should make that clear in those figures (MTEB performance -> MTEB subset performance). Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The current limitation section is good, and adding some discussion of inference cost could make it better. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their insightful feedback! Addressing your questions and concerns: **(W1) The term Compute-Optimal [...] is very misleading because it only considers the training cost and ignores the inference cost [which is important in practice].** We do agree that our usage of the “compute-optimal” term is somewhat imprecise – we have updated the paper, pointing out that we mean the training cost only. We have also added an appropriate paragraph in the limitations section. We believe that there is value in deriving the functional relationship between the model size, training method, data, and the final training loss (which is also done in many previous works on scaling laws). This relationship clearly characterises the ingredients needed to achieve a particular performance. We refrain from taking inference costs into account, as they are hard to estimate. However, we stress that by having the functional form, the reader can analyse their particular use case, which may take into account the sum of the training and inference costs as a constraint, and find the optimal iso-total-cost models. **(W2) I think that figures with FLOPs on the x-axis and different curves for different model sizes are much easier to read than what the paper has now [...]** Thank you for this suggestion – to increase the readability, we have added plots where the x-axis is FLOPs, not the model size. During experimentation we were looking at both versions and decided that it is better to present loss-vs-size plots, as the IsoFLOP profiles in them directly answer our motivating question: given the FLOP budget, what is the optimal model size one should use? **(Q1) Although you show that the correlation between MTEB and loss is very high (−0.892) overall, some important trends seem to be different, which might affect your takeaway message. [...]** We thank the Reviewer for raising this. 
Ultimately, the most suitable metric to use is the one most relevant to the reader. We believe people who have the need to fine-tune their own embedding models and the resources to do so should use their own benchmarks to derive the scaling laws most accurate for them. Without access to the relevant benchmark, we find the validation loss to be the smoothest metric to base scaling laws upon (again, this approach is typical in scaling-law works). The MTEB results have higher variances and do not necessarily align with the user’s requirements. **(Q2) I don't understand `l_r` and `l_c` in lines 134 and 135. Why don't we need to use `labels.T` for `l_c`? If this is a standard loss, please cite. [...]** It is a standard symmetric contrastive loss used throughout the contrastive learning literature for training textual embedding models [3,4,5,6] (and in other applications [7]). The specific formulation included in our paper originates from [3] (Section 2.2). We cite this work (please see line 126), but we will emphasise it more to avoid confusion – thank you for your suggestion! As for the question about `labels.T` for `l_c`: it should remain `labels` – we just use PyTorch notation here; the first argument to the cross entropy function is a matrix, and the second argument is a vector indicating, for each row, the index of the “correct” element. (We link the documentation in the footnote on page 3.) **(Q3) Why do you choose to use the English partition of the BAAI BGE dataset, which is designed for Chinese embedding? [...]** The reason for choosing BAAI BGE is that this dataset is a massive collection of data for training embeddings. Since we do not consider scaling laws under data constraints like [1], we needed a large enough dataset on which none of our experiments would exceed 1 epoch of training; BAAI BGE was the only published dataset satisfying our constraints. 
As for why we used only the English partition of BAAI BGE: we want to explore a family of decoder-only models across different parameter scales, and the Pythia family was the only option available at the time of our experiments that offers many sub-billion-parameter models, which are common for embedding models. The Pythia family was pre-trained on The Pile [2], which is an English-only dataset, and hence these models lack Chinese knowledge and capabilities. Therefore, we only focused on the English partition of BAAI BGE. We have updated the paper to include the full explanation provided here. **(Q4) Did you get the performance of full MTEB? Does the full MTEB give you different trends? [...] you should make it clear in those figures [that only a subset of MTEB was used].** The Reviewer is correct here—for practical computational purposes, we did use a subset of MTEB instead of the whole benchmark. (As we explain in Appendix B, this subset was selected in a principled way to be representative of the whole MTEB.) During the rebuttal period, we checked that the results on the whole suite (except MSMARCO, which is the most expensive to run) are highly correlated with our selected subset. To this end, we compared the two evaluations for 8 checkpoints from the full fine-tuning experiment, and the Pearson correlation is 0.976. This information has been included in the revised version of the paper. We have also fixed the captions to “MTEB subset performance.” ### References [1] Muennighoff et al.: Scaling data-constrained language models. NeurIPS 2023\ [2] Gao et al.: The Pile: An 800GB dataset of diverse text for language modeling. arXiv 2021\ [3] Neelakantan et al.: Text and Code Embeddings by Contrastive Pre-Training. arXiv 2022\ [4] Wang et al.: Text Embeddings by Weakly-Supervised Contrastive Pre-training. arXiv 2022\ [5] Wang et al.: Improving Text Embeddings with Large Language Models. 
arXiv 2024\ [6] Günther et al.: Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models. arXiv 2023\ [7] Eysenbach et al.: Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference. arXiv 2024 --- Rebuttal 2: Title: Thank you for your detailed explanation Comment: The authors address my concerns well. I have no other major reasons to reject this paper, so I raise my score from 6 to 7. --- Rebuttal Comment 2.1: Title: Thank you Comment: We thank the reviewer for their prompt response and for increasing the score of our manuscript. We are also happy to answer new questions if any arise. Best regards, Authors
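The standard symmetric contrastive loss discussed in (Q2) can be sketched in PyTorch as follows. This is a minimal illustration, not the paper's exact implementation; the names `q`, `d` and the temperature value are assumptions:

```python
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(q, d, temperature=0.05):
    """q, d: (batch, dim) query/document embeddings; row i of q matches row i of d."""
    q = F.normalize(q, dim=-1)
    d = F.normalize(d, dim=-1)
    logits = q @ d.T / temperature            # (batch, batch) similarity matrix
    labels = torch.arange(q.size(0))          # correct column index for each row
    l_r = F.cross_entropy(logits, labels)     # rows: each query vs. all documents
    l_c = F.cross_entropy(logits.T, labels)   # columns: each document vs. all queries
    return (l_r + l_c) / 2
```

Note that, as the authors explain, the second `cross_entropy` call transposes the *logits* rather than the labels: after transposition, row *i* again has its correct element at index *i*, so the same `labels` vector applies.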
Summary: This paper proposes a new algorithm to automatically find optimal configurations of model sizes, data quantities, and fine-tuning methods for contrastively pre-trained text embedding models at different computational budget levels. Specifically, the paper analyzes the choices of different configurations and then designs an algorithm to predict the optimal architecture based on the budget. The paper also designs a new task of finding the optimal model configuration given a limited budget. The paper first introduces the basics, including scaling laws, the constrained optimization problem, contrastive learning, fine-tuning methods, and the computational cost formulas. The paper then experiments with different fine-tuning methods to find the optimal loss with respect to the number of parameters. Furthermore, the paper aims to find the relationship between the loss and the number of parameters (trainable/total) and the number of training tokens. The paper then fits the plots to find the best-fitting equations. Strengths: 1. The paper proposes a new computation optimization problem: given a fine-tuning corpus and a family of pre-trained decoder-only language models of different sizes, minimize the contrastive loss subject to a fixed computational budget. The newly proposed problem is important and has the potential to expand. 2. The paper conducts a comprehensive analysis of existing fine-tuning techniques with different hyperparameters. The paper visualizes the results with clear figures and concludes with several useful results. The paper also predicts the optimal loss for different budgets. The paper evaluates the downstream results. 3. The paper provides code, additional evaluation results and analysis in the Appendix, and detailed computational resources. Weaknesses: 1. The paper's generalization ability is limited. As mentioned in the limitations, the proposed computation law is derived only on the Pythia family; additional language models might better reflect the generalization ability. 
The paper also only focuses on contrastive fine-tuning. 2. The whole paper mainly focuses on the performance analysis. It would be better to include additional theoretical support for the phenomena that appear in the paper. The paper can also be strengthened by testing the proposed performance law on some unobserved configurations. 3. The paper claims to also analyze the relationships between the data quantities and the performance. However, I cannot find it in Section 4.2. The paper fails to follow the NeurIPS citation format, which uses numbers for references/citations. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper includes a limitations section in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their effort to assess our work! Below we address each of the concerns. **(1a) The paper's generalization ability is limited. As mentioned in the limitation, the proposed computation law is only on the Pythia; additional language models might better reflect the generalization ability.** We agree that having additional models would allow for a better reflection of the paper’s generalisation. During the rebuttal we ran experiments with a very recent model – Gemma 2b [1], using LoRA and full fine-tuning methods. Importantly, we found that **the loss formula with parameters fitted to the Pythia data points adequately describes the behaviour of Gemma 2b training for the ‘medium’ budgets**. At the extremely low computational budget, Gemma performs better than the prediction, which is intuitively caused by Gemma's generally higher capability compared to the models in the Pythia suite; also, scaling laws tend to be much less regular in smaller regimes – see Figure 4 in the pdf attachment. Due to time restrictions, we did not perform the two highest-budget trainings, but we believe that the results so far are a strong indication of the generalisability of our findings. We will update our paper to contain those new findings once we have the complete results for Gemma. **(1b) The paper also only focuses on contrastive finetuning.** We intentionally focus only on contrastive learning: this has become a standard approach to obtaining useful embedding models. We note that there has been no study before ours that helps to select the most compute-efficient variant of contrastive learning approaches in decoder-only models, rendering our study a worthwhile contribution. **(2) The whole paper mainly focuses on the performance analysis. It would be better to include additional theoretical support for the phenomenon that appears in the paper. 
The paper can also be strengthened by testing the proposed performance law on some unobserved configurations.** We provide theoretical support by utilising a classical risk decomposition model (similar to that of the Compute-Optimal Scaling Laws paper [2]), and adapting it to account for the difference in forward and backward computation parameters in Section 4.3. With this theoretical framework we describe the loss behaviour in terms of the model parameters (both forward and backward), and the amount of data. We also show that the scaling law generalises to the very recent Gemma 2B model (see above). **(3a) The paper claims to also analyse the relationships between the data quantities and the performance. However, I cannot find it in Section 4.2.** In this paper, we focused on investigating the compute-bounded setting – the data-quantity factor is only present implicitly as it can be inferred from the FLOP limit and the model size. However, we now realize that the data-centric perspective is also very valuable and we should analyse it explicitly, especially in the light of the claim in line 157. For this reason, we *attach Figure 3* to our 1-page pdf attachment with plots. A simple and practical conclusion can be inferred from it – if the data is restricted, one should choose the biggest model available and use full fine-tuning or LoRA with a rank on the bigger side. We have updated our paper to contain this plot and conclusion. We note that these conclusions can be achieved using the raw experimental data that we provide – please, see the link at the bottom of page 5 of our paper. **(3b) The paper fails to follow the NeurIPS citation format [...].** Thank you for noticing it! This has already been fixed. ### References [1] Mesnard et al.: Gemma: Open Models Based on Gemini Research and Technology. arXiv 2024\ [2] Hoffmann et al.: Training compute-optimal large language models. 
arXiv 2022 --- *We are grateful for the Reviewer’s feedback and hope the concerns are appropriately addressed. If that is the case, we kindly ask you to reconsider the paper score.* --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed explanation! The authors have addressed my concern. I will raise my score to 7. --- Rebuttal 2: Title: Thank you Comment: We again thank you for your feedback, and we are grateful for your support of our work!
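The forward/backward FLOP accounting underlying the scaling-law discussion in this rebuttal can be sketched with the widely used approximation of roughly 2N FLOPs per token for the forward pass and 4N per token for the backward pass. This is a generic illustration under that common approximation; the function names and the `n_fwd`/`n_bwd` split are assumptions, not the paper's exact cost formulas:

```python
def training_flops(n_fwd, n_bwd, tokens):
    """Approximate training cost: ~2*N FLOPs/token forward, ~4*N FLOPs/token backward.

    n_fwd: parameters active in the forward pass (the total model size);
    n_bwd: parameters charged for the backward pass (equal to n_fwd for full
    fine-tuning; smaller for parameter-efficient methods, however estimated).
    """
    return (2 * n_fwd + 4 * n_bwd) * tokens

def tokens_for_budget(flop_budget, n_fwd, n_bwd):
    """Number of training tokens affordable under a fixed FLOP budget."""
    return flop_budget // (2 * n_fwd + 4 * n_bwd)
```

For full fine-tuning (`n_bwd == n_fwd`) this reduces to the familiar C ≈ 6·N·D; e.g., an 8B-parameter model trained on 200B tokens costs about 9.6e21 FLOPs, matching the Chinchilla-style figures quoted in the first review.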
Summary: This paper investigates how to effectively train text embedding models from pre-trained decoder-only language models while considering computational budget constraints. The authors explore the influence of model sizes, fine-tuning methods, and computational budgets on the performance of the embedding models. The results demonstrate that full fine-tuning is optimal for lower computational budgets, whereas low-rank adaptation fine-tuning is more effective for higher computational budgets. Strengths: (1) This paper focuses on the important problem of finding the optimal training settings for embedding models within a fixed compute budget. (2) This paper conducts extensive experiments across 8 model sizes, 6 compute budgets, and 4 tuning methods. (3) Although the paper uses contrastive loss as the primary measure of model performance, the authors show a strong correlation between contrastive loss and downstream task performance. Weaknesses: (1) In L51-52, it is stated that "given a fixed computational budget, predicts the optimal network architecture, data quantity, and parameter-efficient fine-tuning hyperparameters". However, data quantity, a very crucial factor, is not investigated in the paper. (2) The conclusion that full fine-tuning and low-rank adaptation fine-tuning produce optimal models at lower and higher computational budgets respectively seems somewhat superficial. (3) I have concerns about the selection of the MTEB subset. Retrieval is a crucial task for which embedding models can be used. However, only SciFact is evaluated. SciFact is not a representative retrieval dataset, as its corpus is relatively small (only 5,183 documents). (4) Figure 1 is somewhat hard to read since it is difficult to distinguish the lines representing full fine-tuning, LoRA, and block freezing. 
(5) The average pooling method to extract representations is not entirely reasonable, as the autoregressive characteristic would cause the outputs of the first tokens to lack information about the later tokens. (6) L141-146 is not entirely accurate. Hard negative mining methods like [1] and [2] do not require this two-stage training procedure. This two-stage training procedure is more of a recent trend in training powerful general-purpose embedding models. [1] Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. ICLR 2021. [2] Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, Shaoping Ma. Optimizing Dense Retrieval Model Training with Hard Negatives. SIGIR 2021. Technical Quality: 2 Clarity: 3 Questions for Authors: Please see my comments in Weaknesses. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The paper has a proper Limitations and future work section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for detailed and informative feedback. Below we address each of the concerns. **(1) [Although lines 51-52 state so,] data quantity, a very crucial factor, is not investigated in the paper.** In this paper, we focused on investigating the compute-bounded setting – the data-quantity factor is present implicitly, as it can be inferred from the FLOP limit and the model size. Following your comment, we recognise that the data-centric perspective is also very valuable. For this reason, we *attach Figure 2* to our 1-page pdf file with plots. A simple and practical conclusion can be inferred from it – if the data is restricted, one should choose the biggest model available and use full fine-tuning or LoRA with a rank on the bigger side. We have updated our paper to contain this plot and conclusion. We note that these conclusions can be reached using the raw experimental data that we provide – please see the link at the bottom of page 5 of our paper. **(2) The conclusion that full fine-tuning and low-rank adaptation fine-tuning produce optimal models at lower and higher computational budgets respectively seems somewhat superficial.** We do agree with the Reviewer that our study did not reveal patterns that are unexpected or peculiar. Nevertheless, we argue that our findings constitute a solid empirical grounding and thus a concrete scientific contribution towards helping researchers and practitioners working with embedding models. It is the first study devoted to establishing compute-efficient approaches to contrastive fine-tuning of decoder-only language models. Given the recent activity in this area, we found it to be a serious research gap that needed to be addressed. Also, please note that our study concerns not only the selection of the optimal method amongst the four pre-selected ones; we also go deeper and analyse important hyper-parameters of those methods (LoRA rank and active block fraction). 
This was not previously studied, with the notable exception of a limited analysis in [2]. **(3) [...] Retrieval is a crucial task for which embedding models can be used. However, only SciFact is evaluated. Scifact is not a representative retrieval dataset as its corpus is relatively small [...].** Our selection of SciFact was guided by its strong correlation (0.927) with the overall performance in the retrieval tasks category, as evidenced by Table 11 of MTEB [8]. Nevertheless, we concur with the Reviewer's observation regarding the size of the SciFact benchmark. Therefore, we have now run new experiments to investigate the *correlations* between the performance on SciFact and two large retrieval benchmarks: *Natural Questions* [6] (2.6M test data points) and *DBPedia* [7] (4.6M test data points), for our best models for each (FLOP, method) combination; the correlations are *0.976* and *0.971* respectively (also see Figure 3 of the attachment). We hence conclude that, despite its modest size, SciFact is a satisfactory representative of the retrieval category. This clarification has been added to the paper. **(4) Figure 1 is somewhat hard to read since it is difficult to distinguish the lines representing full fine-tuning, LoRA, and block freezing.** We thank the Reviewer for the suggestion. We have updated the figure to make it more legible in Figure 1 of the attachment. **(5) The average pooling method to extract representation is not entirely reasonable, as the autoregressive characteristic would cause the outputs of the first tokens to lack information about the latter tokens.** While using the average pooling method may indeed seem unintuitive, it works very well in practice. In *Appendix D, section 3*, we compare the two most popular methods for extracting embeddings – average and last pooling – for Pythia 410m, fine-tuned with the FLOP budget of 9.6e+16. 
We perform evaluation on the whole MTEB [8] without MSMARCO, and we find the difference in performance is 0.47 vs 0.36 in favour of average pooling. We note that average pooling has been used in several previous works [1,2,3]. Moreover, the average pooling method can, in theory, converge to the ‘last’ pooling method by zeroing out all the previous tokens. Hence, we argue the average pooling method is empirically and theoretically justified. **(6) L141-146 is not entirely accurate. Hard negative mining methods like [4] and [5] do not require this two-stage training procedure [which] is more of a recent trend in training powerful general-purpose embedding models.** We indeed overlooked the existing approaches for mining hard negatives. We have rewritten the mentioned paragraph as shown below (changes in bold) – if something is still imprecise, we would appreciate feedback! > […] which can be incorporated into the contrastive loss to promote learning more precise representations. **Moreover, there are works that focus on approaches for mining hard negatives which result in better training data in the context of specific downstream tasks [4,5].** > However, **recent works aiming at training powerful, general-purpose embedding models** that do rely on datasets with hard negatives often arrive at embeddings by having two distinct training phases […] ### References [1] Li et al.: Towards General Text Embeddings with Multi-stage Contrastive Learning. arXiv 2023\ [2] Wang et al.: Improving Text Embeddings with Large Language Models. arXiv 2024\ [3] Wang et al.: Text Embeddings by Weakly-Supervised Contrastive Pre-training. arXiv 2022\ [4] Xiong et al.: Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. ICLR 2021\ [5] Zhan et al.: Optimizing Dense Retrieval Model Training with Hard Negatives. SIGIR 2021\ [6] Kwiatkowski et al.: Natural Questions: A Benchmark for Question Answering Research. 
TACL 2019\ [7] Hasibi et al.: DBpedia-Entity v2: A Test Collection for Entity Search. SIGIR 2017\ [8] Muennighoff et al.: MTEB: Massive Text Embedding Benchmark. EACL 2023 --- Rebuttal Comment 1.1: Comment: Again, we thank the reviewer for their constructive feedback. As we have passed the middle of the discussion period, we would like to confirm whether our answers are comprehensive and satisfactory. Should any new questions arise, we would be happy to answer them.
Summary: This paper focuses on the efficient contrastive training of text embedding models using pre-trained decoder-only language models. The main contribution is an algorithm that determines the best configuration of model sizes, data amounts, and fine-tuning methods for different computational budgets. Through extensive experimentation, the authors developed a guideline that helps practitioners choose the most suitable design options for their text embedding models. The study finds that full fine-tuning is best for lower computational budgets, while low-rank adaptation fine-tuning is optimal for higher budgets. Strengths: The research question posed, “What is the best embedding model one can train from a backbone decoder-only LLM with a fixed budget?” is highly valuable and important. It addresses a critical need within the NLP and IR domains. The paper boasts a robust experimental setup, and the conclusions drawn from these experiments are insightful. The analysis based on empirical results provides a useful reference for advancements in both NLP and IR fields. This thorough approach enhances the reliability and applicability of the research findings. Weaknesses: The paper does not introduce a new methodology. The techniques employed, such as scaling laws, extracting representations from transformers, and contrastive fine-tuning, are all well-established in the literature. There are several existing studies that the paper fails to acknowledge, which could provide a richer context and enhance the literature review. Notable omissions include works like “LLM-Oriented Retrieval Tuner” and “Scaling Laws for Dense Retrieval.” Acknowledging and discussing these related studies could strengthen the paper by positioning it within the existing research landscape more accurately. Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to the previous sections. 
Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 4 Limitations: This paper discusses the limitation in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their effort to assess our work! Below we respond to the two points of critique. **(1) The paper does not introduce a new methodology. The techniques employed, such as scaling laws, extracting representations from transformers, and contrastive fine-tuning, are all well-established in the literature.** We agree with the Reviewer that our study does not introduce novel techniques – and this was not our intention in this project. We rather aimed to investigate and compare existing modern approaches in the practical, limited compute-budget setting. Before our work, no study had been devoted to establishing compute-efficient approaches to contrastive fine-tuning of decoder-only language models. Given the recent activity in this area, we found it to be an important research gap that needs to be addressed. We believe that our findings constitute a *concrete scientific contribution* to helping researchers and practitioners working with embedding models. **(2) There are several existing studies that the paper fails to acknowledge, which could provide a richer context and enhance the literature review. Notable omissions include works like “LLM-Oriented Retrieval Tuner” and “Scaling Laws for Dense Retrieval.”** Thank you for pointing out these two recent works! We have extended the related work section to include both of them. Specifically, starting from line 95 (new text in bold): > These approaches are all applicable to embedding models. Muennighoff [1] explored a simple parameter-efficient approach to repurpose GPT models into embedding models where only the bias tensors of the transformer model are updated [2]. **Sun et al. [3] developed a parameter-efficient method designed specifically for fine-tuning embedding models.** However, a systematic study of what parameter-efficient methods are optimal under what scenarios has not been performed, which is the aim of our work. 
Also, starting from line 84: > **The single most related investigation is a concurrent work [4], where the scaling laws for encoder models for retrieval are investigated. There are significant differences in the settings we consider. Our work focuses on investigating the process of fine-tuning decoder-only models for good-quality embeddings.** Moreover, our main goal is to find which strategy for achieving good embeddings is optimal in a budget-restricted setting, while taking into account the popularity of applying parameter-efficient methods for fine-tuning, like LoRA or partial model freezing. ### References [1] Muennighoff: SGPT: GPT sentence embeddings for semantic search. arXiv 2022\ [2] Zaken et al.: BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models. ACL 2022\ [3] Sun et al.: LLM-Oriented Retrieval Tuner. arXiv 2024\ [4] Fang et al.: Scaling Laws For Dense Retrieval. SIGIR 2024 --- *We hope that we adequately addressed the Reviewer’s concerns. If that is the case, we kindly ask for a reconsideration of the paper score.* --- Rebuttal Comment 1.1: Comment: Dear reviewer, we thank you again for your review. We would like to confirm whether our rebuttal answers are satisfactory. Please let us know if you have any new questions.
Rebuttal 1: Rebuttal: We thank the Reviewers for their effort and constructive feedback. We believe that it will lead to a substantially better version of our paper. We are pleased that the Reviewers found * the research question our work addresses relevant and practical [GCkf, 4Bw9, zJJS, xguk], * our experimental setup extensive and robust [GCkf, 4Bw9, xguk], * the conclusions drawn insightful [GCkf, xguk]. We are also grateful for all the constructive concerns that have been raised and make a substantial effort to address them all. Below is a summary of the main points of our rebuttal, whereas detailed responses are provided below for each individual review. --- ## Summary of the rebuttal * We provide responses – largely supported by new preliminary experimental results (see below) – concerning all the issues raised by the Reviewers, most importantly: * transferability of our conclusions to other models [zJJS], * omitting the explicit analysis of the data-constrained setting in our paper [4Bw9, zJJS], * the robustness of our evaluation [xguk, 4Bw9]. * We resolved to update our paper for the camera-ready version with new experiments, namely: * the results from training the Gemma 2b model, * the evaluation results on a more realistic retrieval dataset. * We improved the text of the paper in a few places: * we added new related work (kindly pointed out by Reviewer GCkf), * we made the discussion about the hard negatives more accurate (per suggestion of Reviewer 4Bw9), * we extended our limitations section with a paragraph admitting that our study remains inference-cost-agnostic (thanks to the comment by Reviewer xguk). --- ## New experimental results During the limited rebuttal time, we have managed to run a number of experiments regarding the raised concerns: transferability of our conclusions to other models [zJJS] and our MTEB subset choice for evaluation [xguk, 4Bw9]. We give details in the answers to the reviewers indicated in the brackets. 
In a nutshell: * We found that for a new model – Gemma 2B – for two tested methods (full fine-tuning and LoRA), the results follow the same pattern as for our Pythia experiments. * We found that there is a very strong correlation between our selected subset and the whole MTEB evaluation. * We also confirmed that regarding the retrieval task specifically, the results on the SciFact benchmark that we used are highly correlated with much larger retrieval benchmarks. --- We believe that this makes our paper stronger, and again, we are grateful for the Reviewers’ suggestions! Pdf: /pdf/a2c2344ad75bd1b912ef9735dc4b98d63a28b7e1.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Adaptive Passive-Aggressive Framework for Online Regression with Side Information
Accept (poster)
Summary: The paper introduces a new method called passive-aggressive (PA) online linear regression with side-information regularization to solve the problem of stock prediction and allocation. The paper introduces PA-online regression as a variant of traditional online linear regression in two ways: 1. the parameter updates only happen when the prediction error exceeds an adaptive threshold - traditional online regression performs the update in every round - and 2. the parameters are projected back to side information - traditional online regression projection is towards an all-zero vector to minimize long-term regret. The paper then applies the algorithm to a stock prediction problem, where the objective is to mimic a stock index without knowing the portfolio allocation ahead of time and the side information is the return of every stock symbol. It is unclear to me how the proposed algorithm maps to the real problem or the exact guarantees the authors claimed, due to the authors' use of unconventional notation and an unclear description of the noise-generation assumptions. Strengths: The paper builds upon a rich body of literature on online regression algorithms. The authors utilized existing techniques from the optimization literature, such as online proximal-gradient descent and successive convex approximation (which is similar to batch gradient descent with line search). The introduced algorithm is validated on simulated and semi-real-world stock prediction and index tracking problems. Weaknesses: The algorithm is not very well motivated. For example, I am not sure why the authors elected to use PA-online regression with an adaptive threshold instead of regular online regression. The introduction of side-information regularization is abrupt and does not serve the purpose of minimizing online regret that is commonly expected of online learning algorithms. The claim of a regret guarantee is presented without defining an objective function. 
The paper uses notation that is inconvenient for the general audience. For example, in the regression problems, the authors used cap{r} as the covariate variable and lower{r} as the outcome variable. They should use cap{x} and lower{y} instead. The authors did not explain the noise-generating model, leading to confusion in the general understanding of the problem setup. On Line 207, the authors suggest r_{t,i} as the observed stock return and r_t^b as the index return at time t. In this case, I would assume that the error is generated from not knowing the weight parameter w. But, on Line 222, the authors then suggested that the true index return contains Gaussian noise even if the actual weight of the index components is known. This confuses me. Lastly, all of these explanations should happen at the beginning of the paper, instead of the experiment section. Technical Quality: 2 Clarity: 2 Questions for Authors: I have trouble understanding the problem setup or the motivations of the threshold-update techniques. Nor do I understand the definition of the objective function in the regret guarantees. I feel the paper has a point, but I am confused by the flow of the presentation. Edit: The author responses alluded to some unique properties of the passive-aggressive algorithm (mostly computational) that need to be discussed before the introduction of the algorithm. Alternative solutions should also be explored. I raised my score by 1 and lowered my confidence as a reflection of the authors' detailed answers. Confidence: 1 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: No. The paper did not discuss the motivations of the proposed algorithm, let alone its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback on our paper. We appreciate your careful review and the constructive comments you provided. Below, we address your comments and questions in detail. ### Response to Weaknesses: > The algorithm is not very well motivated. For example, I am not sure why the authors elected to use PA-online regression with an adaptive threshold instead of regular online regression. **Reply:** Thank you for your comment. We would like to clarify that the PA method, as introduced by Crammer et al. (2006), is one of the most popular approaches for online regression due to its simplicity and efficiency. One of the key features of PA is its ability to provide a closed-form solution for the weight update in unconstrained problems, as shown in Equation (2). However, traditional PA methods face challenges in determining the optimal threshold parameter $\varepsilon$ and adapting to scenarios where additional side information is available. The threshold $\varepsilon$ is crucial since it determines when the weight vector is updated. Additionally, relying solely on tracking accuracy without considering side information may limit the method's potential performance. Our proposed adaptive framework enhances PA by integrating side information and adaptively updating the threshold to address the issues. > ... The introduction of side-information regularization is abrupt and does not serve the purpose to minimize online regret that is commonly expected of online learning algorithms. The claim of regret guarantee is presented without defining an objective function. **Reply:** We acknowledge that our primary goal for introducing side information is not solely to minimize the tracking error of online regression. Instead, we aim to enhance other potential performance aspects of the model. For example, in regression models, introducing sparsity might increase the error, but it can improve the model's robustness and performance in other areas. 
The purposes of introducing side information are twofold: 1. **Potential Performance Enhancement:** By leveraging side information, we can enhance various aspects of model performance, such as robustness and adaptability. This is particularly important in practical applications where different performance metrics need to be balanced. 2. **Assisting in the Selection of $\varepsilon$:** Side information can facilitate adaptively selecting the optimal threshold parameter $\varepsilon$. This adaptive selection process is crucial for achieving a balance between tracking accuracy and other performance metrics, thereby improving the overall effectiveness of the algorithm. Our objective is to achieve the optimal trade-off between tracking accuracy and side performance. The objective loss function for the regret bound in Theorem 2 is $$ f_t(\varepsilon) =\underset{\mathbf{w}\in\mathcal{W}}{\inf}\left[h_t(\mathbf{w})+\frac{1}{2\lambda}||{\mathbf{w}-\widehat{\mathbf{w}}_{t+1}(\varepsilon)}||_2^2 \right], $$ which is explicitly defined in Equation (6). The regret bound in Theorem 2 shows that our designed adaptive method for selecting $\varepsilon$ converges to the optimal setting in hindsight, thereby achieving the optimal trade-off. We also verify this numerically in Figure 2 of Section 4.1. We will highlight our motivation for introducing side information and the objective loss function in the revised paper. > The paper uses inconvenient notations for the general audience. For example, in the regression problems, the authors used cap{r} as the covariate variable and lower{r} as the outcome variable. They should use cap{x} and lower{y} instead. **Reply:** Thank you for your valuable feedback. We will improve the organization of the paper by moving the details of the synthetic data generation model, including noise generation and other relevant aspects, to the Preliminary section. 
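As background for the PA method referenced in the replies above (Crammer et al., 2006), the closed-form update can be sketched in a few lines. This is a minimal PA-I-style illustration on a toy noiseless stream, with our own variable names; it is not the paper's APAS algorithm, which additionally adapts the threshold $\varepsilon$ and regularizes toward side information.

```python
import numpy as np

def pa_regression_step(w, x, y, eps, C=1.0):
    """One PA-I update for online regression with eps-insensitive loss.

    The weights stay unchanged ("passive") when the prediction error is
    within eps; otherwise they move just far enough ("aggressive"), via
    the closed-form step size tau, to correct the violation.
    """
    loss = max(0.0, abs(y - w @ x) - eps)
    if loss == 0.0:
        return w
    tau = min(C, loss / (x @ x))
    return w + np.sign(y - w @ x) * tau * x

# Toy noiseless stream generated from a fixed target weight vector.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8])
w = np.zeros(3)
for _ in range(500):
    x = rng.normal(size=3)
    w = pa_regression_step(w, x, w_true @ x, eps=0.01)

print(np.abs(w - w_true).max() < 0.1)  # w has converged near w_true
```

The key property the rebuttal appeals to is visible here: each update has a closed form, so no inner optimization loop is needed, but the behavior hinges entirely on the fixed threshold `eps`.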
> The authors did not explain the noise-generating model, leading to confusions in the general understanding of the problem setup... I would assume that the error is generated from not knowing the weight parameter w. But, on Line 222, the authors then suggested that the true index return contains a Gaussian noise even if the actual weight of the index components is known. This confuses me. **Reply:** It's common in regression modeling to assume the oracle of the regression problem and then generate synthetic observations and labels with a certain level of noise (Sen and Srivastava, 2012; Wang et al., 2018). Different levels of noise can be applied to assess model performance, with Gaussian noise being the most commonly used to represent uncertainty and randomness in data. To address the concerns raised, we will provide a more detailed explanation of the noise-generating model in the revised paper. Sen, A. and Srivastava, M. Regression analysis: theory, methods, and applications. Springer Science & Business Media, 2012. Wang, S., Gittens, A., and Mahoney, M. W. Sketched ridge regression: Optimization perspective, statistical perspective, and model averaging. Journal of Machine Learning Research, 18(218):1–50, 2018. > Lastly, all of the explanations should happen at the beginning of the paper, instead of the experiment section. **Reply:** We will provide more details of the synthetic data generation model, including noise generation and other relevant aspects, in the Preliminary section. > I have troubles understanding the problem setup or the motivations of the threshold-update techniques. I neither understand the definition of the objective function in the regret guarantees. I feel the paper has a point, but I am confused by the flow of the presentation. **Reply:** We appreciate your feedback. We will highlight the problem setup and motivations in the revised paper based on previous discussions. 
We hope we can address your concerns and enhance the overall clarity and impact of our paper. --- Rebuttal 2: Title: Please engage with the authors and acknowledge that you have read the rebuttal Comment: Dear reviewer, Thanks again for your thoughtful review. As this paper is a bit borderline, I would like to know if the rebuttal had any effect on your review. Please provide an update to your review, acknowledge that you have read the rebuttal, and clarify if you want to adjust your score. Thanks! Your AC.
Summary: This paper proposes a novel adaptive version of the passive-aggressive (PA) method for online linear regression such that it incorporates additional side information within its optimization objective while adaptively updating the threshold above which the weight parameter in the regressor is updated. This method allows the incorporation of additional side information in the optimization objective while performing online regression without degrading the tracking error. The paper also proposes an efficient version of this algorithm based on successive convex approximation (Scutari et al. 2013). Additionally, regret bounds are provided for non-convex loss functions. The work is supported by real and simulated experiments that track a stock index using market data while also enhancing monetary returns. Strengths: The paper explores an interesting modification of the passive-aggressive (PA) framework where it is possible to incorporate additional side information within its objective while maintaining the original tracking accuracy. The problem formulation and the solution are intuitive. The experimental results suggest that the proposed method strikes a reasonable tradeoff between tracking accuracy and optimizing for an additional goal of enhancing monetary returns from stocks. In general, the original PA method provides better tracking accuracy, but the modified version in the paper provides comparable but slightly worse accuracy while significantly enhancing the monetary returns as compared to all the baselines. Weaknesses: It would be interesting to have some experiments with different kinds of additional side information incorporated into the objective and to evaluate whether the time complexity is affected by it. Also, it would be interesting to track the adaptively selected \epsilon parameter and its relationship with tracking error. The above experiments may provide some more insight into the novel PA method. 
Technical Quality: 3 Clarity: 3 Questions for Authors: Are the experiments corresponding to Fig 2, 3 and 4 conducted using the efficient version of the algorithm (i.e. Algorithm 2)? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper would benefit from an explicit section on limitations, discussing the assumptions and avenues for improvement. Discussion on potential negative societal impact may be necessary as well (e.g. design of side information functions that have societal implications depending on the downstream application). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and constructive review of our paper. We are pleased that you found our novel adaptive version of the passive-aggressive (PA) method for online linear regression to be an interesting and significant contribution, particularly in its ability to incorporate additional side information without degrading tracking error. Below, we address your comments and questions in detail. ### Response to Weaknesses: > It would be interesting to have some experiments with different kinds of additional side information incorporated into the objective and to evaluate whether the time complexity is affected by it. **Reply:** Thank you for your insightful suggestion. To evaluate whether the time complexity is affected by the inclusion of different types of side information, we conduct additional experiments utilizing various forms of side information beyond the log return $h_t(\mathbf{w}) = -\log(1+\mathbf{r}_t^{\mathsf{T}}\mathbf{w})$, such as: + **switching cost:** $h_t(\mathbf{w}) = ||\mathbf{w} - \mathbf{w}_t||_1$ + **weighted $\ell_1$ norm**: $h_t(\mathbf{w}) = \sum_{i=1}^N \rho_i |w_i|$ + **group Lasso:** $h_t(\mathbf{w}) = \sum_{i=1}^{m}\rho_{i}||w_{|\mathcal{G}_i}||_2$, where $\mathcal{G}_i, \dots, \mathcal{G}_m$ are $m$ disjoint groups We evaluate the performance of the proposed efficient method with different kinds of side information functions over $100$ randomized trials, comparing the average CPU time (in seconds) in the following table: | | log return | switching cost | weighted $\ell_1$ norm | group Lasso | | -------- | --------------------- | --------------------- | ---------------------- | --------------------- | | $N=500$ | $0.00084$ | $0.00084$ | $0.00045 $ | $0.00162$ | | $N=1000$ | $0.00119$ | $0.00084$ | $0.00084$ | $0.00252$ | | $N=2000$ | $0.00181$ | $0.00156$ | $0.00113$ | $0.00344$ | | $N=5000$ | $0.00335$ | $0.00356$ | $0.00282$ | $0.00702$ | From the above table, we can see the group Lasso incurs higher CPU 
times, especially for larger dimensions. This increase is due to the complexity of calculating the norm for disjoint groups, which is computationally more intensive. In general, while the type of side information can impact the computational time, the APAS framework maintains efficiency across different scenarios. We will include these results and analysis in the revised paper to demonstrate the versatility and efficiency of our APAS framework. > Also, it would be interesting to track the adaptively selected $\varepsilon$ parameter and its relationship with tracking error. **Reply:** In Figure 2b, we have included a comparison between the adaptively selected $\varepsilon$ by APAS and a fixed $\varepsilon$ used in PAS (which is essentially APAS with fixed $\varepsilon$ setting). For PAS with a fixed $\varepsilon$ setting, both the smallest $\varepsilon$ and the largest $\varepsilon$ fail to achieve the best tracking error, likely due to overemphasis or underestimation of real-time tracking error, thus compromising long-term tracking accuracy. This indicates the critical problem in finding the optimal parameter setting to achieve the best trade-off performance. Conversely, under the same excess cumulative return, the adaptively selected $\varepsilon$ in APAS achieves nearly the best tracking accuracy in most cases. This indicates that APAS effectively balances tracking accuracy and side performance, achieving an optimal trade-off. We will provide a more detailed analysis of the relationship between the adaptively selected $\varepsilon$ and the tracking error in the revised paper. ### Response to Questions: > Are the experiments corresponding to Fig 2, 3 and 4 conducted using the efficient version of the algorithm (i.e. Algorithm 2)? **Reply:** Yes, the experiments corresponding to Figures 2, 3, and 4 are conducted using the efficient version of the algorithm (i.e., Algorithm 2). We will clarify this in the revised manuscript to ensure that this is explicitly stated. 
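For concreteness, the side-information functions timed in the table earlier in this rebuttal can each be written in one line. This is a hypothetical NumPy sketch (the weights, returns, and group structure are our own toy values), not code from the paper:

```python
import numpy as np

def log_return(w, r):
    """Negative log portfolio return: h(w) = -log(1 + r^T w)."""
    return -np.log1p(r @ w)

def switching_cost(w, w_prev):
    """Turnover penalty: h(w) = ||w - w_prev||_1."""
    return np.abs(w - w_prev).sum()

def weighted_l1(w, rho):
    """Weighted sparsity penalty: h(w) = sum_i rho_i |w_i|."""
    return rho @ np.abs(w)

def group_lasso(w, groups, rho):
    """Group sparsity: h(w) = sum_i rho_i ||w_{G_i}||_2 over disjoint groups G_i."""
    return sum(r_i * np.linalg.norm(w[g]) for g, r_i in zip(groups, rho))

w      = np.array([0.2, 0.3, 0.1, 0.4])        # hypothetical portfolio weights
w_prev = np.array([0.25, 0.25, 0.25, 0.25])
groups = [np.array([0, 1]), np.array([2, 3])]  # hypothetical disjoint groups

print(round(float(switching_cost(w, w_prev)), 2))                        # 0.4
print(round(float(weighted_l1(w, np.ones(4))), 2))                       # 1.0
print(round(float(group_lasso(w, groups, np.array([1.0, 1.0]))), 2))     # 0.77
```

The group Lasso's extra per-group norm computation is consistent with the higher CPU times reported for it in the table.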
### Response to Limitations: > The paper would benefit from an explicit section on limitations, discussing the assumptions and avenues for improvement. Discussion on potential negative societal impact may be necessary as well (e.g. design of side information functions that have societal implications depending on the downstream application). **Reply:** Thank you for your insightful comment. In the revised manuscript, we will add a dedicated section on limitations. We will highlight scenarios where the performance of APAS may be suboptimal, such as in environments with extremely high volatility and skewed data, or when side information is unreliable. Additionally, we will discuss avenues for improvement, including potential enhancements to adaptive selection and robustness to different types of side information. Moreover, we will include a discussion on the potential negative societal implications of our work. For example, in financial applications, the use of certain side information might inadvertently favor specific market behaviors. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed discussion and additional work on experiments. I am keeping my accept rating. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful review and for taking the time to engage in the discussion. We appreciate your support and are glad that the additional experiments have reinforced your positive evaluation of our work.
Summary: The paper presents a novel Adaptive Passive-Aggressive online regression framework with Side Information (APAS) framework that addresses the challenges faced by the traditional Passive-Aggressive (PA) method, such as selecting optimal thresholds and adapting to complex scenarios with additional metrics. The APAS framework integrates side information to enhance weight selection and evaluation, adaptively choosing the threshold parameter to ensure convergence to the optimal setting. The paper also introduces an efficient implementation using the Successive Convex Approximation (SCA) technique, significantly reducing computational complexity. Numerical experiments demonstrate the model’s superior performance in achieving low tracking error and high returns compared to traditional PA methods. Strengths: 1. The integration of side information into the PA framework and the adaptive threshold selection address important limitations of existing methods. 2. Both the theoretical analysis and experimental validation demonstrate the effectiveness and robustness of APAS. 3. The paper is well-written and structured. 4. APAS may have potential applications in different domains, such as financial index tracking. Weaknesses: 1. Some mathematical derivations and theoretical explanations could be complemented with some high-level, more intuitive explanation to help understand. 2. The discussion of the implications of the results and the comparison with related work could be expanded to provide a clearer understanding of the contributions and their significance. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could you provide more insights into the choice of the trade-off lambda and its impact on the performance of the APAS? 2. How does the APAS framework perform in scenarios with highly volatile or noisy side information? Will there be any performance degradation? 3. 
Besides financial index tracking, is there any other domain where the integration of side information is also beneficial? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations of their work, particularly in the context of the assumptions and scope of the theoretical analysis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and insightful feedback on our paper. We appreciate your positive comments and the constructive suggestions you provided. Below, we address your comments and questions in detail. ### Response to Weaknesses: > 1. Some mathematical derivations and theoretical explanations could be complemented with some high-level, more intuitive explanation to help understand. **Reply:** Thank you for your insightful suggestions. To enhance comprehension, we will include more explanations in the revised paper. For example, we will highlight our motivation for designing the weight selection and evaluation to balance tracking accuracy and side performance. Additionally, we will elucidate the practical meaning of the regret bound, demonstrating the convergence performance relative to the oracle for the optimal trade-off. > 2. The discussion of the implications of the results and the comparison with related work could be expanded to provide a clearer understanding of the contributions and their significance. **Reply:** We will expand the discussion on how our results compare with existing methods, highlighting the unique advantages and limitations of APAS. For example, compared to the traditional PA method, APAS achieves a superior trade-off between tracking error and side performance. Additionally, we will discuss the broader implications of our results and their potential applications. ### Response to Questions: > 1. Could you provide more insights into the choice of the trade-off lambda and its impact on the performance of the APAS? **Reply:** The trade-off parameter $\lambda$ quantifies the preference between tracking accuracy and side performance, which is crucial for balancing this trade-off, as illustrated in Figure 2a. In principle, $\lambda$ can be chosen based on the expected scale of side information and the desired balance between error minimization and side performance through cross-validation. 
For example, if we have a desired range of tracking error, we can use the bisection method during cross-validation to find the $\lambda$ that achieves the required tracking error with the best side performance. In the revised paper, we will provide more detailed guidance on choosing $\lambda$. > 2. How does the APAS framework perform in scenarios with highly volatile or noisy side information? Will there be any performance degradation? **Reply:** Thank you for your insightful comment. To illustrate the performance of APAS under highly volatile and noisy data, we conducted simulations following the same steps as in our synthetic data experiments in Section 4.1. Previously, we considered Gaussian noise $\omega\sim\mathcal{N}(0, \delta^2)$ and Gaussian-distributed side information data $\mathbf{r}_t\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})$. To evaluate the performance of APAS with highly volatile and noisy data, we generate heavy-tailed noise $\omega$ and data $\mathbf{r}_t$ based on the Student's t-distribution, using the same mean and variance settings. The degrees of freedom of the Student's t-distribution are set to 3, representing significant heavy tails. The following table shows **the tracking error** of APAS with different combinations of noise and data distributions. Specifically, the column "Gaussian noise + t-distribution data" means the noise $\omega$ is generated from a Gaussian distribution and the side information data $\mathbf{r}_t$ is generated from a Student's t-distribution. In general, the difference in tracking error is small, indicating the robust performance of APAS in highly volatile data scenarios. 
||Gaussian noise + Gaussian data|Gaussian noise + t-distribution data|t-distribution noise + Gaussian data|t-distribution noise + t-distribution data| |-|:-:|:-:|:-:|:-:| |$\lambda=1\times10^{-2}$|0.000157|0.000159|0.000164|0.000174| |$\lambda=5\times10^{-2}$|0.000198|0.000206|0.000198|0.000204| |$\lambda=1\times10^{-1}$|0.000261|0.000271|0.000259|0.000261| |$\lambda=2\times10^{-1}$|0.000354|0.000369|0.000353|0.000346| The following table compares the **excess cumulative return** under different combinations of noise and data distributions. For heavy-tailed noise (i.e., t-distribution noise), there is a mild performance degradation. Interestingly, for heavy-tailed data, there is a modest improvement, illustrating the robustness of APAS. This is mainly due to the increased chances of outliers in positive side performance for heavy-tailed data. These results show that APAS is robust to heavy-tailed data with adaptivity in tilting the weight towards positive side information. ||Gaussian noise + Gaussian data|Gaussian noise + t-distribution data|t-distribution noise + Gaussian data|t-distribution noise + t-distribution data| |-|:-:|:-:|:-:|:-:| |$\lambda=1\times10^{-2}$|0.0240|0.0272|0.0240|0.0238| |$\lambda=5\times10^{-2}$|0.0416|0.0458|0.0382|0.0407| |$\lambda=1\times10^{-1}$|0.0537|0.0613|0.0487|0.0525| |$\lambda=2\times10^{-1}$|0.0619|0.0754|0.0560|0.0585| We will include these experiments and analyses in the revised paper to demonstrate how APAS handles scenarios with high volatility data. > 3. Besides financial index tracking, is there any other domain where the integration of side information is also beneficial? **Reply:** The integration of side information can be beneficial in various domains. For instance, consider a broader setting where side information functions are used to manage total switching costs when updating the weight vector. 
In this scenario, we can set the side information as $h_t(\mathbf{w}) =||\mathbf{w}-\mathbf{w}_t||_1$, which represents the cost associated with changes in the weight vector. Other potential applications include recommendation systems, healthcare, and supply chain optimization, where contextual data can be applied to improve relevant performance. We will include a discussion of potential applications in the revised paper, illustrating the broader applicability of the APAS framework. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. It addresses most of my questions. Therefore, I would like to keep my acceptance rating. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your dedicated time and effort in reviewing our paper. We are glad our response addressed your concerns. Thank you very much for your insightful comment and kind reply.
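The variance-matched heavy-tailed sampling described in the volatility experiment earlier in this rebuttal (Student's t with 3 degrees of freedom, rescaled so its variance matches the Gaussian baseline) can be sketched as follows; the function name and numeric values are our own:

```python
import numpy as np

def heavy_tailed_like(rng, sigma, size, df=3):
    """Student-t samples rescaled to standard deviation `sigma`.

    A t-distribution with df > 2 has variance df / (df - 2); dividing by
    sqrt(df / (df - 2)) matches the variance of N(0, sigma^2) while
    preserving the heavy tails.
    """
    return sigma * rng.standard_t(df, size=size) / np.sqrt(df / (df - 2))

rng = np.random.default_rng(0)
gauss = rng.normal(0.0, 0.02, size=100_000)          # Gaussian baseline
heavy = heavy_tailed_like(rng, 0.02, size=100_000)   # heavy-tailed analogue

print(0.015 < heavy.std() < 0.03)                 # scale matches the baseline
print(np.abs(heavy).max() > np.abs(gauss).max())  # but tails are much heavier
```

Matching the first two moments this way is what lets the experiment attribute any performance change to tail heaviness rather than to a change of scale.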
Summary: This paper focuses on online regression problems for handling large-scale streaming data. The Passive-Aggressive (PA) method is a well-established approach for online regression, but existing work struggles with determining optimal thresholds and adapting to complex scenarios with side information. To solve this issue, this paper proposes a novel adaptive framework that allows finer adjustments to the weight vector in PA using side information. Theoretical and empirical studies are presented to validate the effectiveness of this novel adaptive framework. Strengths: 1. Online regression is an important problem in machine learning because it doesn't rely on strong assumptions and its models demonstrate robustness with regret guarantees in challenging scenarios in practice. 2. A regret bound is shown to prove that the novel adaptive framework is able to converge to the optimal setting at the order of $O(\sqrt{T})$. Assumptions 1 and 2 are mild. 3. An efficient algorithm is carefully designed in Section 3.3 to make the proposed method practical. 4. Both synthetic and real-world experiments are conducted to show the effectiveness of the proposed framework. Weaknesses: Clarity remains the biggest weakness of this paper. See my questions below. Technical Quality: 4 Clarity: 2 Questions for Authors: 1. Since $\lambda$ is the trade-off parameter that quantifies the preference between tracking accuracy and side performance and it is one of the inputs of Algorithms 1 and 2, the choice of $\lambda$ is very important. How should we choose $\lambda$ in a principled way? 2. What are $D, G$ in Assumptions 1 and 2? Are they constants or parameters? If they are parameters, do they appear in the upper bound in Theorem 2? 3. Again in Theorem 2, the upper bound only shows the dependence on $T$, which is too rough. A full upper bound is highly suggested, at least listing all important parameters. 4. In Proposition 1, "... converges in a finite number of iterations" is too rough. 
How large is the number of iterations? Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and the time you took to review our paper. We appreciate your recognition of the sound theoretical framework and the empirical validation of our method. Below, we address your comments and questions in detail. > 1. Since $\lambda$ is the trade-off parameter that quantifies the preference between tracking accuracy and side performance and it is one of the inputs of Algorithms 1 and 2, the choice of $\lambda$ is very important. How should we choose $\lambda$ in a principled way? **Reply:** Thank you for your valuable feedback. The trade-off parameter $\lambda$ is indeed crucial for balancing tracking accuracy and side performance, as illustrated in Figure 2a. In principle, $\lambda$ can be chosen based on theoretical considerations of the problem at hand, such as the expected scale of side information and the desired trade-off between error minimization and performance related to side information. Specifically, $\lambda$ can be chosen based on domain knowledge and through cross-validation. For example, if we have a desired range of tracking error, we can use the bisection method during cross-validation to find the $\lambda$ that achieves the required tracking error with the best side performance. We will include a detailed discussion on the choice of $\lambda$ in the revised paper. > 2. What are $D, G$ in Assumptions 1 and 2? Are they constants or parameters? If they are parameters, do they appear in the upper bound in Theorem 2? **Reply:** $D$ and $G$ in Assumptions 1 and 2 are constants that depend on the problem setting, such as the data distribution and feature space characteristics. Specifically, $D$ represents the upper bound of the threshold parameter $\varepsilon$, and $G$ represents the upper bound of $|\partial f_t(\varepsilon)|$ for all $t$ and $\varepsilon$. These constants are crucial for bounding the regret in Theorem 2, as detailed in Line 400 of Appendix A. > 3. 
Again in Theorem 2, the upper bound only shows the dependence on $T$, which is too rough. A full upper bound is highly suggested, at least listing all important parameters. **Reply:** Thank you for your suggestions. The detailed upper bound for the regret in Theorem 2 can be found in line 400 of Appendix A, and is reproduced below: $$ R_T \leq \frac{D^2}{\eta_T} + \frac{G^2}{2}\sum_{t=1}^{T} \eta_t \leq 2\sqrt{\frac{D^3G}{\nu}}\sqrt{T} = O(\sqrt{T}). $$ In the revision, we will include this detailed upper bound in the main body of the paper. > 4. In Proposition 1, "... converges in a finite number of iterations" is too rough. How large is the number of iterations? **Reply:** Describing convergence as occurring in a finite number of iterations is standard in the optimization literature (Facchinei and Pang, 2003; Scutari et al., 2013). This general description is meant to indicate that an optimization algorithm will reach convergence after a certain number of steps. The empirical convergence speed of our proposed algorithm is illustrated in Figure 6 of Section 4.3. The left panel shows that our proposed efficient algorithm achieves an approximate gap of $1 \times 10^{-7}$ relative to the optimal value within just 4 or 5 iterations. The convergence speed of our proposed algorithm is closely related to the problem dimension. The right panel of Figure 6 illustrates the average CPU time across different problem dimensions, showing that the computational time increases moderately with the problem dimension. We will include a more detailed discussion on the number of iterations required for convergence in the revised paper to provide clearer insights into the performance of our algorithm. Facchinei, F. and Pang, J.-S. Finite-dimensional variational inequalities and complementarity problems. Springer, 2003. Scutari, G., Facchinei, F., Song, P., Palomar, D. P., and Pang, J.-S. 
Decomposition by partial linearization: Parallel optimization of multi-agent systems. IEEE Transactions on Signal Processing, 62(3):641–656, 2013. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: The authors' rebuttal addressed my questions, and my rating remains positive. I hope the paper will be improved by incorporating all points mentioned in the rebuttal. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback and insightful suggestions. We will ensure they are incorporated to enhance the paper.
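As a sanity check on the $O(\sqrt{T})$ rate quoted in the rebuttal above, the standard decaying-step-size argument can be sketched as follows. This is a hedged reconstruction assuming $\eta_t = \eta_1/\sqrt{t}$; the authors' specific constant $2\sqrt{D^3G/\nu}$ follows from their particular choice of $\eta_1$ and the parameter $\nu$ of their setting, which are not reproduced here.

```latex
% Generic bound for online gradient descent with step sizes \eta_t = \eta_1/\sqrt{t}.
\begin{align*}
\sum_{t=1}^{T} \eta_t
  &= \eta_1 \sum_{t=1}^{T} \frac{1}{\sqrt{t}}
   \;\le\; \eta_1 \int_{0}^{T} \frac{dt}{\sqrt{t}}
   \;=\; 2\eta_1 \sqrt{T}, \\
R_T &\le \frac{D^2}{\eta_T} + \frac{G^2}{2}\sum_{t=1}^{T}\eta_t
   \;\le\; \frac{D^2}{\eta_1}\sqrt{T} + G^2 \eta_1 \sqrt{T}
   \;=\; \Big(\frac{D^2}{\eta_1} + G^2\eta_1\Big)\sqrt{T}
   \;=\; O(\sqrt{T}).
\end{align*}
```

Any constant choice of $\eta_1 > 0$ yields the $O(\sqrt{T})$ rate; tuning $\eta_1$ only changes the leading constant.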
NeurIPS_2024_submissions_huggingface
2024
ODGEN: Domain-specific Object Detection Data Generation with Diffusion Models
Accept (poster)
Summary: This paper tackles the task of controllable image generation from bounding boxes and text prompts for training object detectors. To fine-tune diffusion models to specific domains effectively and deal with the challenge of concept bleeding, this paper proposes a method called ODGEN. In the proposed method, the diffusion model is first fine-tuned on both entire images and cropped foreground regions. Then, text lists and image lists are utilized as input for ControlNet to avoid concept bleeding and generate high-quality images. The experimental results show that the proposed method outperforms the state-of-the-art methods for controllable image generation on seven types of datasets in terms of FID score. In addition, the object detectors trained on the generated images from the proposed method achieve better performance than other methods. Strengths: i) The proposed method is quite simple yet effective. Its relative ease of implementation is helpful for computer vision practitioners and future researchers. Also, the task tackled in this paper is of practical benefit. ii) This paper is well-written and easy to follow. The motivation for tackling the task is clearly described in Sec. 1, and related works are well-summarized in Sec. 2. The proposed method is clearly explained at both the idea level and procedure level in Sec. 3. iii) The experiments demonstrate the high performance of the proposed method. The proposed method achieved better FID scores than other controllable image generation methods on seven types of object detection datasets. Additionally, training with synthetic images from the proposed method improved the performance of YOLO-based object detectors significantly. Weaknesses: My concerns are mainly about the lack of detailed analysis: i) This paper claims that fine-tuning with both cropped foreground regions and entire images is one of its contributions. However, an ablation study to confirm the effectiveness of this approach is missing. 
ii) From Fig. 4, it looks like the corrupted label filtering is performed only for the proposed method (ODGEN). Because corrupted label filtering is a simple technique that can be applied to other methods as well, comparisons between ODGEN and the other methods should be provided both without and with filtering for fairness. iii) This paper provides only the results with 200 real images. I'm curious how the performance of the proposed method and other methods changes when more real images are available for training. iv) In Sec. 1, concept bleeding is raised as one of the challenges in this task. Although Fig. 6 provides an example, I would like to see a quantitative evaluation to check whether it was effectively addressed by the proposed method. Technical Quality: 3 Clarity: 3 Questions for Authors: See iii) in Weaknesses. How do the proposed method and other methods perform when more real images are available? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are adequately discussed in Sec. A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
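Since both the review and the paper evaluate detectors with mAP@.50, where a prediction counts as a true positive when its IoU with a ground-truth box reaches 0.5, a minimal sketch of the underlying IoU computation may be useful context (an illustrative helper, not code from the paper):

```python
def iou(box_a, box_b):
    # Boxes given as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# mAP@.50 counts a prediction as a true positive when IoU >= 0.5.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → 0.14285714285714285
```

mAP@.50:95 simply averages the same computation over IoU thresholds from 0.50 to 0.95 in steps of 0.05.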
Rebuttal 1: Rebuttal: Thanks for your valuable comments. We provide detailed responses below to resolve your concerns. $\textbf{1. Ablations for fine-tuning on both cropped foreground objects and entire images:}$ The fine-tuning process is the basis for the subsequent training of the object-wise conditioning module. Fine-tuning on both cropped foreground objects and entire images makes the fine-tuned models capable of synthesizing images of foreground objects, which are needed to build image lists in the next step. We provide some visualized samples in Fig. 1 of the PDF file attached to the global response. It shows that the fine-tuned model is capable of generating diverse foreground objects and complete scenes. The training of the object-wise conditioning module is designed to realize layout control based on the fine-tuned model and uses the image lists containing synthesized images of foreground objects as conditions. Therefore, it is hard to conduct an ablation study that fine-tunes the diffusion model without cropped foregrounds: models fine-tuned only on entire images have difficulty generating the foreground objects required by image lists, since they cannot bind the prompts to the corresponding objects in entire images that may contain multiple categories of objects. $\textbf{2. Post-processing step with the corrupted label filtering:}$ See the analysis and experiments provided in part 1 of the global response. We provide results of all methods without post-processing to compare the generation capability directly and fairly. $\textbf{3. Experiments with a larger number of real images:}$ See the experiments provided in parts 2 and 3 of the global response. We provide experiments with different methods trained on the COCO training set (80k images) and add experiments of our ODGEN trained on 1000 images from the Apex Game and Underwater Object datasets. $\textbf{4. 
Quantitative evaluation of concept bleeding:}$ This work mainly focuses on the concept bleeding problem across different categories. The mAP of YOLO models trained on synthetic images only (Tab. 3 of our paper) can serve as a proxy task showing that concept bleeding is alleviated, since our approach achieves state-of-the-art performance in the layout-image consistency of complex scene generation conditioned on bounding boxes. In addition, we add a quantitative evaluation with BLIP-VQA ($\uparrow$) [a], which employs the BLIP model to identify whether the contents of synthesized samples are consistent with the text prompts. The results are shown in the table below. Our ODGEN outperforms other methods and achieves results close to the ground truth (real images from the COCO dataset sharing the same labels with the synthetic images). The results are averaged over 41k synthetic images following the labels of the COCO validation set. Method | ReCo | GLIGEN | ControlNet | GeoDiffusion | MIGC | InstanceDiffusion | ODGEN (ours) | Ground Truth (provided as reference) ------|------|-------|-------|-------|------|-------|------|------ BLIP-VQA ($\uparrow$) | 0.2027 | 0.2281 | 0.2461 | 0.2114 | 0.2314 | 0.2293 | $\textbf{0.2716}$ | 0.2745 [a] K. Huang, K. Sun, E. Xie, Z. Li, and X. Liu, T2i-compbench: A comprehensive benchmark for open-world compositional text-to-image generation, Advances in Neural Information Processing Systems, vol. 36, 2024. --- Rebuttal 2: Comment: Dear Reviewer FcFR, Thanks for your positive feedback. We are glad that most of your concerns have been well addressed. We add a detailed explanation for your remaining concern. 1. $\textbf{Specific Domains}$: Taking the Robomaster dataset in RF7 as an example, most images in the dataset contain multiple categories of objects like "armor", "base", "watcher" and "car", which are either unfamiliar to Stable Diffusion or different from what Stable Diffusion tends to generate with the same text prompts. 
For a model fine-tuned on entire images only, there is no guidance on which parts of the entire images correspond to these objects. As a result, such fine-tuned models cannot synthesize correct images of foreground objects given text prompts like "a base in a screen shot of the robomaster game". We agree with you that "adding experimental results with the models fine-tuned on entire images only" can make our statement more convincing. We provide FID results of foreground objects synthesized by models fine-tuned on both entire images and cropped foreground objects (text prompts are composed of the names of objects in the image and the scene name, e.g., "a car and a base in a screen shot of the robomaster game"), and by models fine-tuned on entire images only, in the table below for comparison: Table: FID ($\downarrow$) results of foreground objects in the Robomaster dataset synthesized by models fine-tuned on different data. Object categories | watcher | armor | car | base | rune -------|--------|-------|-------|--------|------- Models fine-tuned on entire images only | 384.94 | 532.56 | 364.46 | 325.79 | 340.11 Models fine-tuned on both cropped objects and entire images | $\textbf{124.45}$ | $\textbf{136.27}$ | $\textbf{135.66}$ | $\textbf{146.50}$ | $\textbf{128.95}$ It shows that our approach helps fine-tuned models generate images of foreground objects better. Similarly, we also provide results on the Road Traffic dataset, which is relatively more familiar to Stable Diffusion than the Robomaster dataset. Table: FID ($\downarrow$) results of foreground objects in the Road Traffic dataset synthesized by models fine-tuned on different data. 
Object categories | vehicle | traffic light | motorcycle | fire hydrant | crosswalk | bus | bicycle -------|--------|-------|-------|--------|-------|-------|------- Models fine-tuned on entire images only | 241.64 | 240.55 | 213.90 | 205.22 | 253.40 | 238.63 | 153.76 Models fine-tuned on both cropped objects and entire images | $\textbf{105.27}$ | $\textbf{90.71}$ | $\textbf{143.30}$ | $\textbf{53.05}$ | $\textbf{97.33}$ | $\textbf{118.58}$ | $\textbf{65.61}$ These results also show the improvement of our approach over models fine-tuned on entire images only. Since links are not allowed in the responses, we cannot provide supplemental visualized results here but will add them to the revised manuscript. 2. $\textbf{General Domains}$: The fine-tuning step is designed for specific datasets. For datasets in general domains like COCO, Stable Diffusion can generate the same categories of objects as those in the COCO dataset without fine-tuning. Therefore, we skip the fine-tuning step and directly use Stable Diffusion to synthesize images of foreground objects, which we use to build image lists. The results in our paper show the substantial improvement achieved by our ODGEN compared with prior works. Thanks for your valuable suggestions and efforts in reviewing this paper again! Authors --- Rebuttal Comment 2.1: Comment: Thanks again for the response. Because my remaining concern has been addressed, I'm happy to raise my rating. --- Rebuttal 3: Comment: Dear Reviewer FcFR, Thank you for your positive feedback and efforts in reviewing this paper again! Authors
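For readers unfamiliar with the FID ($\downarrow$) metric used throughout these tables: it is the Fréchet distance between Gaussians fitted to feature vectors of real and synthetic images (Inception features in practice). A minimal numpy sketch of the formula, assuming the feature vectors have already been extracted by some network:

```python
import numpy as np

def sqrtm_psd(m):
    # Matrix square root of a symmetric PSD matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def fid(feats_a, feats_b):
    # Frechet distance between Gaussians fitted to two sets of feature vectors.
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Tr((cov_a cov_b)^{1/2}) = Tr((cov_b^{1/2} cov_a cov_b^{1/2})^{1/2}),
    # and the inner matrix is symmetric PSD, so sqrtm_psd applies.
    s = sqrtm_psd(cov_b)
    covmean = sqrtm_psd(s @ cov_a @ s)
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b)
                 - 2.0 * np.trace(covmean))
```

Identical feature sets give FID near zero; the large FID values in the tables above reflect how far the synthesized foreground-object features are from the real ones.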
Summary: This work is about generating synthetic (annotated) data for object detection with diffusion models that are tuned for a specific image domain. Generating synthetic annotated data can be useful for model training in situations where data is scarce. The authors propose a specific pipeline and new modules to generate such data. First, an off-the-shelf diffusion model is fine-tuned on a specific domain, for instance a specific object detection dataset. This is done both with the full images and crops of foreground objects (via the given bounding boxes). The statistics of object categories, number of instances, and bounding boxes are estimated. Then, a new layout is sampled from the distribution, and the class-specific objects are individually synthesized with the tuned diffusion model. These are individually placed at their sampled bounding box locations to build a so-called image list. Then, a ControlNet model is used that encodes this image list along with a text list of the object categories, and outputs the final synthesized image. The newly designed text and image encoders for the ControlNet are trained. Finally, a verification step is done at the object level (classifying whether the generated image contains the desired object inside a certain bounding box). The synthesized data is evaluated with the FID metric, and also used to train object detectors, which are then evaluated with mAP. Strengths: - The problem the paper tries to address is important and has several use cases, maybe even beyond what the paper lists. - I especially like the adaptation to specific domains, which is certainly relevant for many real-world use cases. - I enjoyed reading the paper; it's well written, has a good flow, and was easy to understand. - The results seem to be clearly better than prior works like GLIGEN. Weaknesses: - Positioning with respect to some prior work. 
- In the related work section, the paragraphs on layout-to-image generation & dataset synthesis contain some highly related works (like InstanceDiffusion or others listed as the second group of data synthesis approaches). However, these works are only briefly described but not contrasted against the proposed method. So, what's the difference from InstanceDiffusion, for example? - Why are there no comparisons to InstanceDiffusion in the experiments, or even simpler data augmentation techniques like "copy-paste"? - The object distributions are simplified compared to real distributions. - Using a Gaussian distribution for a discrete variable seems odd. There will be some probability of sampling a negative number, and if you clip, you ultimately end up with a different distribution than what was estimated, I guess. - It seems that all estimated distributions regarding the bounding boxes are independent. I assume that location and areas are dependent. Are the distributions modeled independently on purpose? As in, is it better for simulation to have independent distributions? Or is this just done for simplicity? And was the impact of this evaluated empirically? - Missing details on the usefulness of synthetic data - For the domain-specific setting, it seems that the diffusion models were also tuned only with the 200 selected domain-specific images (according to line 212). I found this information to be crucial in judging whether or not the experiments are valid. I suggest highlighting this aspect already in Section 4.1 - and maybe also discuss why this information is crucial. My thinking here is that if you used more data for tuning the diffusion model than you use to train the detectors, it would be unclear where performance gains come from. It could just come from the additional data used to train the diffusion model, rather than the images being synthetic. 
- Also, does the same conclusion on object detector performance gains from Table 2 also hold when scaling up real and synthetic images? From a practical point of view, annotating 800 more images with bounding boxes would likely be affordable in most cases. So, I'm missing an experiment like in Table 2, but with more images for both real and synthetic images, e.g., 1000 and 25000 real and synthetic images. - For the general domain, it's great to see in Table 3 that the detector is better when trained on ODGEN data compared to any other synthetic data. However, the real reference points should be (a) the 10k real images and (b) the combination of 10k real and K synthetic images. Technical Quality: 3 Clarity: 4 Questions for Authors: - When you fine-tune the diffusion model with driving data, don't you have issues with limited diversity? I understand that you might only be interested in the same label spaces as defined in the dataset. But for real applications, one is often interested in simulating something that is missing in a dataset. For instance, a driving dataset may contain lots of regular cars, vans and trucks. But you also want to generate emergency vehicles like police cars or ambulances. - Does the filtering rate of the "corrupted label filtering" step correlate with object occlusions? As in, are there more issues (during either the generation or even the classification/filtering) for occluded/overlapping objects? - An interesting application of this paper could be to improve language-based object detectors like in [A]. This work relies on GLIGEN and could benefit from ODGEN. The difference would be that the general semantic knowledge of the diffusion model would be leveraged. References: - [A] Generating Enhanced Negatives for Training Language-Based Object Detectors. Zhao et al. CVPR'24 Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The limitations are adequately discussed in the appendix. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
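To make the reviewer's point about the simplified layout distributions concrete, here is a hypothetical sketch of the kind of sampler under discussion: the object count is a rounded-and-clipped Gaussian (so the realized distribution differs from the fitted one, exactly as the review notes), and box positions and sizes are drawn independently of each other. All names and parameter choices are illustrative assumptions, not the paper's actual pipeline.

```python
import random

def sample_layout(count_mean, count_std, img_w=512, img_h=512, n_max=10):
    # Object count: a Gaussian sample rounded to an integer and clipped to
    # [1, n_max]. Clipping/rounding means the realized count distribution is
    # not the estimated Gaussian; a Poisson or an empirical histogram over
    # counts would avoid this mismatch.
    n = max(1, min(n_max, round(random.gauss(count_mean, count_std))))
    boxes = []
    for _ in range(n):
        # Sizes and positions sampled independently of each other -- the
        # simplification the reviewer asks about.
        w = random.uniform(0.1, 0.5) * img_w
        h = random.uniform(0.1, 0.5) * img_h
        x = random.uniform(0.0, img_w - w)
        y = random.uniform(0.0, img_h - h)
        boxes.append((x, y, x + w, y + h))
    return boxes
```

A joint model over (position, size) fitted from the training set would capture the dependence the reviewer suspects, at the cost of a more complex estimator.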
Rebuttal 1: Rebuttal: Thanks for your valuable comments. We provide detailed responses below to resolve your concerns. $\textbf{1. Comparison with InstanceDiffusion:}$ We didn't include InstanceDiffusion since it includes a UniFusion module that requires at least bounding box labels and segmentation masks as training data, whereas our approach and the other methods included for comparison only need bounding box labels. We add a comparison by employing the open-source InstanceDiffusion model to generate images following the same setups as Tab. 3 in our paper (mAP results are provided in the YOLOv5s/YOLOv7 format): Method | FID($\downarrow$) | mAP@.50($\uparrow$) | mAP@.50:95($\uparrow$) --------|--------|--------|-------- MIGC | 21.82 | 9.54/16.01 | 4.67/8.65 InstanceDiffusion | 23.29 | 10.00/17.10 | 5.42/10.20 ODGEN (ours) | $\textbf{16.16}$ | $\textbf{18.90}$/$\textbf{24.40}$ | $\textbf{9.70}$/$\textbf{14.20}$ We also include InstanceDiffusion in the experiments mentioned in the second part of the global response (YOLO models trained on 80k real images vs. YOLO models trained on 80k real images + 20k synthetic images). Our approach outperforms InstanceDiffusion, as shown by the results in Tab. 2 in the PDF file attached to the global response. Regarding other works in the second group of data synthesis approaches: earlier work like LayoutDiffusion [64] is not implemented with latent diffusion models and thus is not included for comparison, and works [21, 37] are designed for the semantic segmentation task while our ODGEN is designed for object detection. $\textbf{2. Comparison with copy-paste:}$ The copy-paste method requires segmentation masks to obtain the cropped foreground objects, which are not provided by the RF7 datasets used in this paper. Besides, our approach only requires bounding box labels, which are easier to annotate than masks. $\textbf{3. 
Dataset synthesis pipeline design:}$ We agree that the current method cannot exactly reproduce the foreground object distributions in the training datasets. The number of objects in an image is sampled from a joint distribution across different categories, while the positions and sizes of bounding boxes are independent. The current dataset synthesis pipeline is designed for simplicity and serves as a method to compare the fidelity and trainability of different methods. As noted in the limitations part (Suppl. A), the dataset synthesis pipeline has not been fully optimized yet, which would be interesting to explore in future work. $\textbf{4. Details of the training data:}$ The whole training process on domain-specific datasets, including the fine-tuning on both cropped objects and entire images and the training of the object-wise conditioning module, depends only on the 200 images used for training the baseline YOLO detectors. Besides, the foreground/background discriminator used for corrupted label filtering is also trained and evaluated on the 200 images. We will highlight this point in the revised manuscript to make it clearer. Thanks for your valuable advice. $\textbf{5. Experiments with more training data:}$ See the experiments added in parts 2 and 3 of the global response. $\textbf{6. Experiments for the trainability evaluation on COCO:}$ See the experiments of training YOLO with "80k real vs. 80k real + 20k synthetic" images added in part 2 of the global response. The current trainability experiments on COCO (Tab. 3 in our paper) are designed to evaluate the trainability of synthetic data alone. We train YOLO models on synthetic datasets and evaluate them on real COCO data (sampled from the COCO validation set). We find that the results clearly discriminate between methods, and our approach achieves improvements by a large margin compared with existing methods. $\textbf{7. 
Generalization to novel categories:}$ It is an interesting topic to evaluate the generalization to novel categories not included in the training process. We use ODGEN trained on COCO to synthesize samples containing objects not included in COCO, such as moon, ambulance, and tiger. We show visualized samples in Fig. 4 in the PDF file enclosed in the global response. It shows that ODGEN trained on COCO is capable of controlling the layout of novel categories. However, we find that the generation quality is not stable enough, which may be impacted by the fine-tuning process on the COCO dataset. We are also trying to train a generic ODGEN on about 3000k images covering more than 3000 categories. We hope to obtain a powerful and generic model covering most common categories in future work. $\textbf{8. Corrupted label filtering:}$ The label filtering step is not applied to experiments on COCO, and the filtering rate of the "corrupted label filtering" step is fixed for experiments on RF7. The proposed discriminator is only designed to distinguish foreground from background and does not discriminate specific categories; it is designed to filter the boxes in which no objects are synthesized. Even if object A is partially occluded by object B, the crop still contains object B, so the discriminator has access to object B's information and still judges the box as foreground. Therefore, we do not find apparent performance drops on occluded objects. The accuracy of the discriminators on the test sets sampled from the RF7 datasets is over 99% (details are provided in Suppl. D.2, Lines 505-509). $\textbf{9. Benefiting other works with our ODGEN:}$ It is good to know that work [A] may benefit from this paper. We also hope that ODGEN can help and inspire the community in the future. [21] Y. Jia, et al. Dginstyle: Domain generalizable semantic segmentation with image diffusion models and stylized semantic control. CVPR 2024 workshop. [37] D. Peng, et al. 
Diffusion-based image translation with label guidance for domain adaptive semantic segmentation. ICCV 2023. [64] G. Zheng, et al. Layoutdiffusion: Controllable diffusion model for layout-to-image generation. CVPR 2023. --- Rebuttal Comment 1.1: Title: Supplemental: synthesizing overlapping objects Comment: Cases with overlapping objects are more complex and challenging to synthesize than other cases. As shown by the visualized samples provided in Figs. 5, 10, 11, and 12 in our paper, our approach has made great progress in generating overlapping objects and outperforms prior works on layout-image consistency.
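The corrupted-label filtering described in point 8 of the rebuttal above (a foreground/background discriminator that drops boxes in which no object was actually synthesized) can be sketched as follows. This is a hypothetical illustration: `is_foreground` stands in for the trained discriminator, and images are nested lists rather than tensors.

```python
def filter_corrupted_labels(image, boxes, is_foreground, keep_threshold=0.5):
    # Keep only the boxes whose cropped region the discriminator scores as
    # foreground; boxes where generation failed to place an object are dropped.
    kept = []
    for (x1, y1, x2, y2) in boxes:
        crop = [row[x1:x2] for row in image[y1:y2]]
        if is_foreground(crop) >= keep_threshold:
            kept.append((x1, y1, x2, y2))
    return kept
```

Because each crop contains everything inside the box, a box whose object A is occluded by object B still scores as foreground, which matches the rebuttal's explanation of why occlusion does not hurt the filter.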
Summary: ODGEN uses a diffusion-based generation model to create novel images to train object detectors. Object bounding boxes along with the objects' textual descriptions are given as conditioning for the generation step. With these generated images they can improve the detector's performance. Strengths: The proposed generation pipeline can be used to enrich the training dataset with more high-quality images. Since multiple objects can exist in the same image, using bounding boxes along with the textual description seems to be the better approach. The paper is well written and the evaluation is done with standard datasets. The improvement over the baselines shows the efficacy of the approach. Weaknesses: 1. There is a high generation cost when there are multiple objects in the same image, which reduces the practicality of the proposed approach. 2. It would be interesting to see if the generated images improve the detector's performance where it originally failed due to lack of data, such as when objects are partially occluded. 3. How are the statistics of the object layout taken? 4. Why were only 5k images generated? Can we see some study of gradually adding synthetic data, like 1k, 5k, 10k, 20k, and some plateauing of the detector performance? 5. It would only be fair to compare with other generation approaches with the same post-generation filtering applied to them. How does ODGEN compare then? 6. In line 111, it is misleading to say "a new method to fine-tune the diffusion model", as it is nothing more than just fine-tuning. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weakness section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. We provide detailed responses below to resolve your concerns. $\textbf{1. Generation cost:}$ Before training and inference, we can generate an offline library of foreground objects to accelerate the process of building image lists. With the offline library, we can randomly pick images from it to build image lists instead of synthesizing new images of foreground objects every time. In the generation stage (Fig. 2c in our paper), our approach pads both the image and text lists to a fixed length. Therefore, the computational cost of inference with offline libraries doesn't increase with more foreground objects. Other methods like InstanceDiffusion [a] and MIGC [b] need more time for training and inference with more objects. Taking the models trained on COCO as an example, to generate an image with 6 bounding boxes on a V100 GPU, ODGEN takes 10 seconds, ControlNet takes 8 seconds, and InstanceDiffusion takes 30 seconds. During inference, we compare using the same offline image library as in training against using a totally different image library. We get very close results, as shown in the table below (mAP metrics are provided in the YOLOv5s/YOLOv7 format): Offline Image Library | FID ($\downarrow$) | mAP@.50 ($\uparrow$) | mAP@.50:95 ($\uparrow$) -----|-----|-----|----- Same as training | 16.01 | 18.90/24.40 | 9.70/14.20 Different from training | 16.16 | 18.60/24.20 | 9.52/14.10 Both outperform other methods significantly, as shown in Tab. 3 of our paper. This indicates that ODGEN is capable of extracting category information from the synthesized samples of foreground objects in image lists rather than depending on particular images of foreground objects. 
ODGEN is not constrained by the image library of foreground objects used in training and can generalize to newly generated offline image libraries consisting of novel synthesized samples of foreground objects, which ensures that the use of offline libraries won't reduce the practicality of ODGEN. $\textbf{2. Detector performance improvement with synthetic data:}$ We provide several visualized detection results of YOLO models trained with and without synthetic data in Fig. 3 of the PDF file enclosed in the global response. It shows that the synthetic data helps detectors detect some occluded objects. $\textbf{3. Object layout statistics:}$ As illustrated in Sec. 3.3 (Lines 167-173) of our paper, the statistics of the object layout are taken from the training dataset. For example, ODGEN trained on 200 images takes the statistics from those 200 images. $\textbf{4. The number of synthetic images:}$ We have added ablations on the number of training samples (1k, 3k, 5k, and 10k) in Tab. 1 in the PDF file attached to the global response. They show that using 5k synthetic images achieves results close to using 10k and outperforms using 1k or 3k synthetic images under our experimental setups. $\textbf{5. Post-generation filtering step:}$ See the analysis and experiments provided in part 1 of the global response. $\textbf{6. Expression:}$ We will change the expression to "We propose to fine-tune the diffusion model with domain-specific images and cropped foreground objects" in the revised manuscript to clarify the difference between our approach and fine-tuning on entire images only. [a] InstanceDiffusion: Instance-level Control for Image Generation, CVPR 2024 [b] Migc: Multi-instance generation controller for text-to-image synthesis, CVPR 2024 --- Rebuttal 2: Comment: Dear Reviewer PYXX, We are writing to kindly remind you that only about one day remains for discussion. 
Could you please confirm that you have read the rebuttal and let us know whether you still have any concerns about our work? Thanks for your efforts in reviewing this paper and providing valuable comments. Authors. --- Rebuttal Comment 2.1: Comment: Thanks for the rebuttal; after reading it I don't have any further questions. --- Reply to Comment 2.1.1: Comment: Dear Reviewer PYXX, We are glad that all your concerns have been resolved. We appreciate your efforts in reviewing our paper and providing feedback during the discussion period. Authors
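The fixed-length padding described in point 1 of the rebuttal above (so that inference cost does not grow with the number of conditioned objects) can be sketched as follows. Names and padding values are illustrative assumptions, not the paper's implementation:

```python
def pad_condition_lists(image_list, text_list, max_objects=10,
                        empty_image=None, empty_text=""):
    # Pad both conditioning lists to a fixed length so the conditioning input
    # has the same shape regardless of how many objects are present.
    assert len(image_list) == len(text_list) <= max_objects
    pad = max_objects - len(image_list)
    return image_list + [empty_image] * pad, text_list + [empty_text] * pad
```

With padded lists, the conditioning encoder always processes `max_objects` slots, which is why the reported per-image generation time stays flat as the box count grows.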
Summary: This paper proposes the ODGEN method to generate high-quality images conditioned on bounding boxes. ODGEN fine-tunes Stable Diffusion (SD) on domain-specific datasets to enhance image quality in specialist domains, designing a novel strategy to control SD with object-wise text prompts and synthetic visual conditions to alleviate 'concept bleeding'. Experimental results on the COCO-2014 dataset demonstrate that ODGEN surpasses other methods in control capability. Additionally, the authors design a dataset synthesis pipeline using ODGEN, showing that using additional generated training data can improve the performance of state-of-the-art (SOTA) object detectors. Strengths: + The ODGEN method proposed in this paper outperforms previous methods and shows significant improvements. + This paper is well organized and written. The overall paper is easy to follow. Weaknesses: There are some unclear implementation details, and a few experiments are missing. Please refer to the Questions. Technical Quality: 3 Clarity: 3 Questions for Authors: - When fine-tuning the pre-trained diffusion model, the authors use not only the entire images from the dataset but also crops of foreground objects (i.e., resized to 512 x 512). I'm curious whether this approach helps in generating particularly small objects. - During actual inference, what is the visual relationship between the image list $c_{il}$ (i.e., in Eqn. 2) and the final generated image? The authors should provide some visual examples to illustrate this. - The authors mention that the object-wise conditioning used in the paper can alleviate "concept bleeding" in multi-instance generation. I believe it is necessary to conduct experiments on the COCO-MIG benchmark [65], which assesses the model's ability to control both positioning and attributes, to elaborate on the attribute binding capability of ODGEN. 
[65] Migc: Multi-instance generation controller for text-to-image synthesis, CVPR'24 - Compared to GLIGEN and MIGC, ODGEN achieves better results in overlapping cases. Which module's design primarily contributes to this improvement? As this is a major issue in the industry, I hope the authors can emphasize this in the methods section. - Have the authors considered open-sourcing the code and model? I believe this would greatly benefit the community. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The proposed method requires fine-tuning the entire Stable Diffusion model and training an additional ControlNet. This often requires significant computational resources, making reproduction challenging. - I hope the authors can open-source the corresponding model and code, as this would greatly benefit the community. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
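The review's first question concerns cropping foreground regions and resizing them to 512 x 512 for fine-tuning. A minimal nearest-neighbor sketch of that preprocessing step (pure Python over nested lists for illustration; a real pipeline would use an image library):

```python
def crop_and_resize(image, box, size=512):
    # Crop the bounding box (x1, y1, x2, y2) from a row-major nested-list
    # image, then nearest-neighbor resize the crop to size x size.
    x1, y1, x2, y2 = box
    crop = [row[x1:x2] for row in image[y1:y2]]
    h, w = len(crop), len(crop[0])
    return [[crop[i * h // size][j * w // size] for j in range(size)]
            for i in range(size)]
```

Upscaling small crops this way is exactly why fine-tuning on them can help with small objects: the model sees each foreground object at full resolution instead of as a few pixels inside the entire image.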
Rebuttal 1: Rebuttal: Thanks for your valuable comments. We provide detailed responses below to resolve your concerns. $\textbf{1. Fine-tuning on foreground objects:}$ Both the model fine-tuning and the object-wise conditioning module are designed to enhance foreground object generation, but are not limited to small objects. These two parts are conducted in sequence and contribute to synthesizing both large and small objects together. Given an object detection dataset, we fine-tune Stable Diffusion with the proposed approach to make it capable of synthesizing all kinds of foreground objects and to provide visual prompts for the training of the object-wise conditioning module. The image lists in the object-wise conditioning module provide the category and position information for foreground objects. As shown by the visualized samples and quantitative results, our approach obtains improvement in generating dense and small objects compared with other methods and achieves better layout-image consistency. $\textbf{2. Visual relationship between image lists and synthesized images:}$ The image list is designed to provide the category and localization information for the control of foreground object layouts. To build the image lists, we synthesize object images that share the same categories as the objects in the final generated image and paste them at the same positions as those objects. The objects in the image lists and the objects in the generated images may all be apples or cakes, for example, but have different shapes and colors. More visualized samples are provided in Fig. 2 of the PDF file enclosed in the global response. $\textbf{3. Attribute binding:}$ Our approach is designed to synthesize object detection datasets that focus on the categories of foreground objects. As a result, attribute binding is not in scope for now. Our work focuses on the concept bleeding problem of categories while MIGC pays additional attention to attributes. Taking the right part of Fig. 
6 in our paper as an example, our ODGEN fixes the concept bleeding problem of categories and generates the concepts of motorcycle and vehicle correctly. We plan to address the challenge of attribute binding in future work to obtain ODGEN models with powerful attribute binding capability by training on datasets containing annotations of attributes (e.g., Visual Genome [a]). We will also highlight the difference between ODGEN and MIGC in the revised manuscript. $\textbf{4. Overlapping objects:}$ The image list in the object-wise conditioning module contributes to the improvements in handling object occlusion. Our ODGEN introduces guidance for objects with synthesized visual prompts. We employ an image list to paste the visual prompt for each object separately on different empty canvases. As a result, ODGEN has access to the localization of overlapping objects with visual knowledge in the image list. We ablate this design in the ablations part (the middle part of Fig. 6 in our paper) to show that it contributes to the synthesis of overlapping objects. Experiments also show that our method contributes to the improvement in complex scene synthesis compared with prior works. We will emphasize this part in the method section of the revised manuscript. $\textbf{5. Open-source:}$ We have plans to open-source the code and model weights. [a] Krishna R, Zhu Y, Groth O, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 2017, 123: 32-73. --- Rebuttal 2: Comment: Dear Reviewer N8et, We are writing to kindly remind you that the time left for discussion is only about one day. Could you please confirm that you have read the rebuttal and check if you still have any concern about our work? Thanks for your efforts in reviewing this paper and proposing valuable comments. Authors. --- Rebuttal Comment 2.1: Comment: Thanks for your efforts in rebuttal. The response has addressed my concerns. 
After reading other reviewers' comments and the response, I would like to raise my rating. --- Reply to Comment 2.1.1: Comment: Dear Reviewer N8et, We are glad that all your concerns have been resolved. We appreciate your efforts in reviewing our paper and providing feedback during the discussion period. Authors
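The per-object canvas construction described in point 4 of the rebuttal above (pasting each visual prompt on its own empty canvas) could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function name, array layout, and box format are assumptions.

```python
import numpy as np

def build_image_list(crops, boxes, H=512, W=512):
    """Paste each synthesized object crop onto its own blank canvas at its
    bounding-box location, so overlapping objects stay separable.

    crops: list of (h, w, 3) uint8 object crops
    boxes: list of (x, y, w, h) target boxes matching the crop sizes
    returns: list of (H, W, 3) canvases, one per object
    """
    canvases = []
    for crop, (x, y, w, h) in zip(crops, boxes):
        canvas = np.zeros((H, W, 3), dtype=np.uint8)
        canvas[y:y + h, x:x + w] = crop  # one object per canvas
        canvases.append(canvas)
    return canvases
```

Because each object lives on its own canvas, a downstream conditioning module can attend to overlapping objects without their pixels mixing on a single image.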
Rebuttal 1: Rebuttal: We thank all reviewers for your efforts in reviewing this paper and providing so many valuable comments. We are glad that all reviewers acknowledge that the performance of our work is better than prior methods on the box-to-image generation task and most reviewers find our paper well-written and easy to follow. Besides, reviewers 2vbs and FcFR point out that the challenge tackled by this paper has many practical use cases, even beyond what this paper lists. We provide detailed responses to resolve reviewers' concerns and will add the discussion and experiments in the rebuttal to the revised manuscript. We first address several common concerns and provide supplemental figures and tables in the attached PDF: $\textbf{1. Post-processing with the corrupted label filtering (Reviewer PYXX and FcFR):}$ We design this step to filter out labels for objects that are not generated successfully, which may be caused by some unreasonable boxes obtained with the pipeline in Fig. 3 in our paper. For COCO experiments (Tab. 3 in our paper), this step is not applied to any method since we directly use labels from the COCO validation set and hope to synthesize images consistent with the real-world labels. For RF7 experiments (Tab. 2 in our paper), this step is only applied to our ODGEN. For a fair comparison of the generation capability of different methods, we skip this step on RF7 here. We provide results in the Table below: Table: mAP@.50:.95 ($\\uparrow$) of YOLOv5s/YOLOv7 on RF7. ODGEN (with or without post-processing) leads to greater improvement than other methods on all 7 datasets. 
Datasets | Baseline | ReCo | GLIGEN | ControlNet | GeoDiffusion | ODGEN w/o post-processing | ODGEN w/ post-processing
---|---|---|---|---|---|---|---
real + synth # | 200 + 0 | 200 + 5000 | 200 + 5000 | 200 + 5000 | 200 + 5000 | 200 + 5000 | 200 + 5000
Apex Game | 38.3/47.2 | 25.0/31.5 | 24.8/32.5 | 33.8/42.7 | 29.2/35.8 | $\textbf{39.8}$/$\textbf{52.6}$ | $\textbf{39.9}$/$\textbf{52.6}$
Robomaster | 27.2/26.5 | 18.2/27.9 | 19.1/25.0 | 24.4/32.9 | 18.2/22.6 | $\textbf{39.0}$/$\textbf{33.3}$ | $\textbf{39.6}$/$\textbf{34.7}$
MRI Image | 37.6/27.4 | 42.7/38.3 | 32.3/25.9 | 44.7/37.2 | 42.0/38.9 | $\textbf{46.1}$/$\textbf{41.5}$ | $\textbf{46.1}$/$\textbf{41.5}$
Cotton | 16.7/20.5 | 29.3/37.5 | 28.0/39.0 | 22.6/35.1 | 30.2/36.0 | $\textbf{40.5}$/$\textbf{42.1}$ | $\textbf{42.0}$/$\textbf{43.2}$
Road Traffic | 35.3/41.0 | 22.8/29.3 | 22.2/29.5 | 22.1/30.5 | 17.2/29.4 | $\textbf{38.2}$/$\textbf{43.2}$ | $\textbf{39.2}$/$\textbf{43.8}$
Aquarium | 30.0/29.6 | 23.8/34.3 | 24.1/32.2 | 18.2/25.6 | 21.6/30.9 | $\textbf{32.0}$/$\textbf{38.4}$ | $\textbf{32.2}$/$\textbf{38.5}$
Underwater | 16.7/19.4 | 13.7/15.8 | 14.9/18.5 | 15.5/17.8 | 13.8/17.2 | $\textbf{18.9}$/$\textbf{21.6}$ | $\textbf{19.2}$/$\textbf{22.0}$

It shows that the corrupted label filtering step only contributes a small part of the improvement. Without this step, our method still outperforms the other methods significantly. Results of the other methods were provided in Tab. 6 of our paper as well. In addition, as illustrated in the limitations part (Suppl. A), the current dataset synthesis pipeline is designed to compare different methods and can be improved further in future work. We don't see this part as the main contribution of our paper. $\textbf{2. Scaling up on COCO (Reviewer Fyyh and 2vbs):}$ We conduct experiments by adding 20k synthetic images to the 80k training images. 
We train YOLO models on the COCO training set (80k images) as the baseline and on the same 80k real images + 20k synthetic images generated by different methods for comparison. The COCO validation set contains 41k images; we use the labels of 20k of them as conditions to generate the synthetic set and use the other 21k real images for evaluation. The results are shown in Tab. 2 in the attached PDF file. It shows that ODGEN improves the mAP@.50:.95 by 0.5\% and outperforms the other methods. $\textbf{3. Scaling up on RF7 (Reviewer Fyyh, 2vbs, and FcFR):}$ Apart from the experiments on COCO provided above, we add additional experiments with 1000 training images from the Apex Game and the Underwater Object dataset here. We provide the results of YOLO models trained on the combination of real and synthetic data below. We conduct experiments on ODGEN, ReCo, and GeoDiffusion. ReCo and GeoDiffusion may benefit from larger-scale training datasets since they need to fine-tune more parameters in both the UNet in Stable Diffusion and the CLIP text encoder. GLIGEN struggles to adapt to new domains and ControlNet performs worse on layout control than ODGEN, so they are not included in this part (also due to limited computational resources and time during the rebuttal). The corrupted label filtering step is not used for any method. Table. mAP@.50/mAP@.50:.95 ($\uparrow$) results of ODGEN trained on larger-scale datasets. 
Datasets | Apex | Apex | Apex | Underwater | Underwater | Underwater
---|---|---|---|---|---|---
real + synth # | 1000 + 0 | 1000 + 5000 | 1000 + 10000 | 1000 + 0 | 1000 + 5000 | 1000 + 10000
YOLOv5s ODGEN (Ours) | 83.2/53.5 | $\textbf{83.3}$/$\textbf{53.5}$ | $\textbf{83.6}$/$\textbf{53.6}$ | 55.6/29.2 | $\textbf{59.6}$/$\textbf{32.5}$ | $\textbf{56.3}$/$\textbf{29.8}$
YOLOv5s ReCo | 83.2/53.5 | 78.7/46.9 | 82.0/46.9 | 55.6/29.2 | 55.1/28.4 | 55.9/29.1
YOLOv5s GeoDiffusion | 83.2/53.5 | 80.0/47.2 | 82.5/47.5 | 55.6/29.2 | 54.2/27.9 | 54.3/28.0
YOLOv7 ODGEN (Ours) | 83.8/55.0 | $\textbf{84.4}$/$\textbf{55.2}$ | $\textbf{84.0}$/$\textbf{55.0}$ | 54.6/28.3 | $\textbf{58.2}$/$\textbf{29.8}$ | $\textbf{62.1}$/$\textbf{31.8}$
YOLOv7 ReCo | 83.8/55.0 | 80.5/50.7 | 79.2/49.9 | 54.6/28.3 | 56.5/28.7 | 56.4/30.1
YOLOv7 GeoDiffusion | 83.8/55.0 | 81.2/51.0 | 81.0/50.5 | 54.6/28.3 | 57.0/28.9 | 55.8/28.9

When trained on larger datasets, the baselines (trained on real data only) become stronger but our ODGEN still benefits detectors with synthetic data and outperforms other methods. Pdf: /pdf/2bf371eed3fe2b98c40ddd6e7f6b9a03b1a91612.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proposes a novel data synthesis pipeline, ODGEN, which uses diffusion models to generate high-quality and controllable datasets for object detection. They first fine-tune the pre-trained diffusion models on both cropped foreground objects and entire images. Next, they control the diffusion model using synthesized visual prompts with spatial constraints and object-wise textual descriptions. For each image, the object number per category, bounding box location and size are sampled from estimated normal distributions. The sampled values are used to generate text lists, image lists, and global text prompts which are fed to ODGEN to generate the image. Strengths: The method addresses a practical problem in machine learning, and new effective techniques are employed in the application of diffusion models to achieve significantly better synthesis quality than existing approaches. For example, two-step encoding (CLIP text tokenizer -> stacking -> encoder) of the textual condition enables the ControlNet to capture the information of each object with separate encoding and alleviates the concept bleeding problem of multiple categories. Also, encoding individual image lists rather than pasting all objects on a single image can effectively avoid the influence of object occlusions. The experimental validation is thorough, covering multiple benchmarks and providing detailed comparisons with baseline methods. Ablation analysis is insightful. Weaknesses: As mentioned in section D2, the encoder architecture (used for image and text embedding) differs for different datasets, which potentially limits the generalizability of the approach. This should be discussed. For the data scarcity experiment, the authors sample only 200 images as the training set for all datasets, which is interesting but not sufficient for the analysis. 
More extensive evaluation can be done with higher ratios of real:synthetic data for datasets like ApexGame, Robomaster, and Underwater (and even larger datasets like COCO). Restricting real images to 200 might lead to sub-optimal “Domain-specific Diffusion Model Fine-tuning”, which may be the reason why the “synthetic only” version in Table 7 doesn’t perform very well. Kindly comment on this. ODGEN and ReCo have similar FID scores on COCO (while the mAP is quite different), which could be discussed/explained in the paper. There are no comparisons provided with the fully supervised approach (models trained on the complete real training set) on representative datasets. This would help evaluate the trainability of the proposed approach over data augmentation techniques such as copy-paste. The computational complexity (and training time) of the proposed approach should be discussed and compared with existing methods. Technical Quality: 3 Clarity: 3 Questions for Authors: As mentioned in section D2, the encoder architecture (used for image and text embedding) differs for different datasets, which potentially limits the generalizability of the approach. This should be discussed. Restricting real images to 200 might lead to sub-optimal “Domain-specific Diffusion Model Fine-tuning”, which may be the reason why the “synthetic only” version in Table 7 doesn’t perform very well. Kindly comment on this. ODGEN and ReCo have similar FID scores on COCO (while the mAP is quite different), which could be discussed/explained in the paper. The computational complexity (and training time) of the proposed approach should be discussed and compared with existing methods. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
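The layout sampling step described in the summary above (per-category object counts and box parameters drawn from normal distributions estimated on the real dataset) might look roughly like the sketch below. All names, the parameter format, and the clipping choice are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def sample_layout(count_stats, box_stats, rng=None):
    """Sample a synthetic layout: per-category object counts and (x, y, w, h)
    boxes drawn from estimated normal distributions.

    count_stats: {category: (mean_count, std_count)}
    box_stats:   {category: (mean_xywh, std_xywh)}, each a length-4 array
    returns: list of (category, [x, y, w, h]) with coordinates clipped to [0, 1]
    """
    rng = rng or np.random.default_rng()
    layout = []
    for cat, (mu_n, sd_n) in count_stats.items():
        n = max(0, int(round(rng.normal(mu_n, sd_n))))  # sampled object count
        mu_b, sd_b = box_stats[cat]
        for _ in range(n):
            box = np.clip(rng.normal(mu_b, sd_b), 0.0, 1.0)
            layout.append((cat, box.tolist()))
    return layout
```

The sampled layout would then be rendered into the text lists, image lists, and global prompt that condition the generator.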
Rebuttal 1: Rebuttal: Thanks for your valuable comments. We provide detailed responses below to resolve your concerns. $\textbf{1. Encoder architectures:}$ In this paper, we change the channel number according to the maximum number of objects that can be found in a single image. For datasets like MRI in which most images contain only one object, we can use fewer channels to make the model more lightweight. For a generic model trained on large-scale datasets, we can set the channel number to a value high enough to cover most circumstances. For example, we are working on a model trained on about 3000k images and set the input channels to 150 to make it applicable to images containing at most 50 foreground objects. $\textbf{2. More extensive evaluation with larger-scale datasets:}$ See the experiments provided in parts 2 and 3 in the global response. $\textbf{3. ``Synthetic only" not performing well:}$ To the best of our knowledge, current synthetic data are mostly used as augmentation data rather than for training detectors on them alone. A similar conclusion was drawn by prior work like GeoDiffusion [a]. Although our approach has achieved improvement in complex scene synthesis, there still exist noticeable gaps between real data and synthetic data. For example, we train ODGEN on 80k images from COCO and synthesize 10k images to train YOLO models from scratch. However, compared with the YOLO trained on 10k real images from scratch, the YOLO model trained on synthetic data only still falls behind (YOLOv7 mAP@.50: trained on 10k synthetic images only 24.40 vs. trained on 10k real images only 53.20, same labels applied). We look forward to further improvement in future work with more powerful generative models and control methods to narrow the gaps. Besides, as illustrated in the limitations part (Suppl. A), the current dataset synthesis pipeline (Fig. 
3 in our paper) is not fully optimized and cannot reproduce the distributions of foreground objects completely, which may also lead to worse performance when training on synthetic data only on RF7 datasets (labels used for COCO experiments are directly obtained from real images). $\textbf{4. FID and mAP:}$ FID is used to evaluate the distance between the distributions of generated samples and real samples by extracting features from images and computing their mean and covariance values. However, it is hard for FID to reflect the condition (layout)-image consistency, which is required by the object detection task and can be reflected by mAP results. Taking Fig. 5 in our paper as an example, ReCo failed to generate the bed within the orange box or cups within the purple boxes. Similar misalignment phenomena can be found in Fig. 10, 11, and 12 as well. Such misalignment hurts the detector's training and makes ReCo perform much worse on trainability (in terms of mAP) than our ODGEN. $\textbf{5. Comparison with the fully supervised approach:}$ We presented fully supervised experiments with training on real data only as the baseline on RF7 datasets in Tab. 2 of our paper. Similar experiments on the COCO dataset are added to part 2 of the global response. The copy-paste method requires segmentation masks to get the cropped foreground objects, which are not provided by the RF7 datasets employed in this paper. Besides, our ODGEN can be implemented without segmentation masks. $\textbf{6. Computational complexity and training time:}$ Taking the model for the COCO dataset as an example, our ODGEN shares a very close scale of parameters with ControlNet (trainable parameters: ODGEN 1231M vs. ControlNet 1229M; parameters in the UNet of Stable Diffusion are included). 
We provide the training time for 1 epoch on COCO with 8 V100 GPUs of different methods in the Table below (the training code of MIGC is not open-source and thus not included here):

Method | ReCo | GLIGEN | ControlNet | GeoDiffusion | ODGEN (ours)
---|---|---|---|---|---
Training time (hrs) | 3 | 7 | 4.5 | 3.2 | 5.6

[a] GeoDiffusion: Text-Prompted Geometric Control for Object Detection Data Generation, ICLR 2024.
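The FID computation described in point 4 of the rebuttal above (comparing feature means and covariances via the Fréchet distance) can be sketched with plain numpy. This is a minimal illustration, not the authors' evaluation code: real implementations first extract features with an Inception network, and the matrix square root is computed here via an eigendecomposition of the symmetrized product.

```python
import numpy as np

def _trace_sqrt_product(c1, c2):
    # Tr((C1 C2)^{1/2}) computed via the symmetric form (C1^{1/2} C2 C1^{1/2})^{1/2}
    w1, v1 = np.linalg.eigh(c1)
    s1 = (v1 * np.sqrt(np.clip(w1, 0, None))) @ v1.T  # C1^{1/2}
    w = np.linalg.eigvalsh(s1 @ c2 @ s1)
    return np.sqrt(np.clip(w, 0, None)).sum()

def fid_from_features(f1, f2):
    """Fréchet distance between Gaussians fitted to two feature sets:
    ||mu1 - mu2||^2 + Tr(C1) + Tr(C2) - 2 Tr((C1 C2)^{1/2}).

    f1, f2: (n_samples, feat_dim) arrays of image features.
    """
    mu1, mu2 = f1.mean(axis=0), f2.mean(axis=0)
    c1 = np.cov(f1, rowvar=False)
    c2 = np.cov(f2, rowvar=False)
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(c1) + np.trace(c2)
                 - 2.0 * _trace_sqrt_product(c1, c2))
```

Because the metric only sees feature means and covariances, two generators can score similar FID while differing sharply in layout-image consistency, which is exactly why mAP and FID can disagree as discussed above.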
Text-Infused Attention and Foreground-Aware Modeling for Zero-Shot Temporal Action Detection
Accept (poster)
Summary: This paper proposes a Ti-FAD framework for the zero-shot temporal action detection (ZSTAD) task, where the goal is to locate and classify unknown action classes. The proposed Ti-FAD features a mutual cross-attention integration module for detection, and leverages text-related sub-action information to mitigate the action bias problem in ZSTAD. Strong performances are achieved on THUMOS14 and ActivityNet v1.3. Strengths: 1. The motivation is strong and persuasive. Indeed, the common sub-action bias may mislead the classification. This problem greatly affects the performance of visual-only and cross-modal methods. 2. The proposed text-infused cross attention and foreground-aware head are technically sound, and demonstrate strong effectiveness in the ablation study. 3. The paper is overall well-written and easy to read. 4. The final performances are strong, achieving state-of-the-art performances on THUMOS14 and ActivityNet v1.3. Weaknesses: This paper is overall of good quality. I do not spot major technical weaknesses. 1. What exactly is the text used for the text feature extraction? The authors only mention they use a text encoder for text feature extraction (lines 105-106) but do not mention the textual prompts. 2. Baseline performance. The authors propose a strong cross-modal baseline in Sec 3.2, whose performance already outperforms several existing methods (Fig. 2 (b)). Can the authors provide a more detailed breakdown to identify the most effective part of the baseline? 3. The authors only use I3D visual features for experiments in Table 1; it would be helpful to evaluate with CLIP features to demonstrate the generalization ability to different features. 4. Limitations are not discussed in the paper. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. What are the textual prompts used for text feature extraction? 2. What are the most important designs in the baseline to make it outperform existing methods? 
Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Limitations are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **W1,Q1) What exactly is the text used for the text feature extraction? The authors only mention they use a text encoder for text feature extraction (Lines 105-106) but do not mention the textual prompts.** We simply use "{class name}" as the text prompt without any prefix or contextual text. We will add a detailed description of the text prompt in Line 107 to provide a clear view. Furthermore, to show that Ti-FAD's performance does not depend on the type of text prompt, we compare the performance of our baseline and Ti-FAD with the most utilized types of text prompts following previous works [2,3,4,6] on THUMOS14 in the 50%-50% setting.

||Text Prompt|Baseline mAP@0.3|mAP@0.5|mAP@0.7|mAP@Avg|Ti-FAD mAP@0.3|mAP@0.5|mAP@0.7|mAP@Avg|
|-|-|-|-|-|-|-|-|-|-|
|(a)|"a video of action {classname}"|53.8|33.9|9.7|32.8|55.6|42.6|20.9|40.4|
|(b)|Prompt Augmentation [5]|54.3|34.6|10.3|33.4|56.3|42.7|20.6|40.5|
|(c)|Prompt Ensemble [5]|53.8|34.3|10.2|33.1|56.8|43.0|20.3|40.7|
|(d)|**"{classname}"**|**55.9**|**35.9**|**10.5**|**34.5**|**57.0**|**43.3**|**21.2**|**41.2**|

(a) is utilized in [2,3,6]. (b) refers to using the 28 templates of the prompt, utilized in UnLoc [4]. (c) refers to using the average embedding vector from the 28 templates at the inference, utilized in UnLoc [4]. The result demonstrates that our Ti-FAD shows similar performance regardless of the type of text prompt. We will add this experiment to the Appendix. > **W2,Q2) Baseline performance. The authors propose a strong cross-modal baseline in Sec 3.2, whose performance already outperforms several existing methods (Fig. 2 (b)). Can the authors provide a more detailed breakdown to identify the most effective part of the baseline?** The most effective part of our baseline is integrating text and visual information throughout the entire detection process. 
As noted in Lines 31-33, most existing methods [2,3,6] utilize pre-extracted visual features, which are not integrated with textual information throughout the entire detection process. This separated process limits their ability to fully leverage the contextual information between both modalities. To demonstrate the effectiveness of cross-modal fusion in our baseline, we conduct a comparative analysis of ActionFormer without the cross-modal fusion and our cross-modal baselines. To facilitate a clearer understanding of our experimental models, we additionally provide illustrations of (a), (b), and (c) in Fig. B of the PDF in the global response.

||Model|mAP@0.3|mAP@0.5|mAP@0.7|mAP@Avg|
|-|-|-|-|-|-|
|(a)|ActionFormer w/o cross-modal fusion|30.3|15.1|3.5|16.1|
|(b)|Cross-modal baseline (self-attn)|42.2|24.0|5.5|23.8|
|(c)|**Cross-modal baseline (cross-attn)**|**55.9**|**35.9**|**10.5**|**34.5**|

This table shows that ActionFormer without cross-modal fusion (a) exhibits inferior performance because text and video are not properly aligned, highlighting the importance of cross-modal fusion. Furthermore, the cross-attention baseline (c) outperforms the self-attention baseline (b). We will add these results and illustrations to the Appendix to emphasize the contributions of our cross-modal part. > **W3) The authors only use I3D visual features for experiments in Table 1; it would be helpful to evaluate with CLIP features to demonstrate the generalization ability to different features.** We have revised Table 1 to include an experiment utilizing CLIP as the visual encoder. Please refer to Table 1 in the PDF of the global response. As shown in the revised Table 1, the performance using I3D features still outperforms the use of CLIP features because the simultaneous utilization of spatiotemporal information is crucial in the temporal action detection area. 
Therefore, in the original manuscript, we primarily utilize I3D as a visual encoder, following previous works [2,3,6]. However, we agree that these experiments utilizing CLIP as a visual encoder are essential to demonstrate the generalization ability of our method with different features. > **W4) Limitations are not discussed in the paper.** We have mentioned some limitations in Sec. 5 on lines 302-304. However, we acknowledge the need for a more detailed discussion. We will extend the limitation part. --- Rebuttal Comment 1.1: Comment: Thanks for providing the detailed response and conducting additional experimental results. My concerns have been well-addressed, and I would like to keep my initial rating as weak accept.
Summary: This paper deals with the problem of Zero-Shot Temporal Action Detection (ZSTAD). The authors propose a simple cross-modal ZSTAD baseline with good performance. To address the issue that the cross-modal baseline over-focuses on common sub-actions, the paper further proposes a Ti-FAD module to focus on text-related visual parts. The method is evaluated on two popular datasets. Strengths: 1. The paper is generally well-written and easy to follow. The proposed method is well presented. 2. The issue that the cross-modal baseline over-focuses on common sub-actions is well recognized and analyzed. The proposed Ti-FAD module seems to be effective on this issue. 3. The performance of the proposed method is promising. Weaknesses: 1. The two branches in Figure 2(a) and Figure 3(a) should be cross-connected before cross-attention. Now they look like they are independent. 2. The TiCA module seems a bit counter-intuitive. After obtaining the Smask, which describes which visual parts are most text-related, you didn't use it as an attention mask for cross-attention from text to vision, but instead, as a bias for vision-to-text attention. What was the rationale behind this decision? It might seem more intuitive to use the latter operation, where the Smask masks out unimportant visual segments, allowing the text to focus more on the important visual fragments. Besides, I would like to see a comparison in the experiment. 3. The so-called "Foreground-Aware" approach refers to the model's ability to predict temporal boundaries on its own, without relying on an offline proposal generator, is that right? The design in the Foreground-Aware Head is quite common in temporal action detection methods, and I don't see anything particularly unique about it. Is this a core contribution of the article? Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors state limitations in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **W1) The two branches in Fig. 2(a) and 3(a) should be cross-connected before cross-attention.** We apologize for any potential confusion caused by Fig. 2(a) and 3(a). As shown in Eq. (1) and (6), ${F'}^{(l)}\_{vid}$ and ${F'}^{(l)}\_{txt}$ are used as the inputs for the cross-attention, and these notations are also depicted inside X-MHA and TiCA in Fig. 2(a) and 3(a). To enhance the clarity of our method, we have added cross-connected arrows to both figures. The updated figures are included in the PDF of the global response. We hope that adding the cross-connected arrows can clarify our method. > **W2) The TiCA module seems a bit counter-intuitive. After obtaining the $S_{mask}$, which describes which visual parts are most text-related, you didn't use it as an attention mask for cross-attention from text to vision, but instead, as a bias for vision-to-text attention. What was the rationale behind this decision? It might seem more intuitive to use the latter operation, where the $S_{mask}$ masks out unimportant visual segments, allowing the text to focus more on the important visual fragments. Besides, I would like to see a comparison in the experiment.** In this paper, our main goal is to ensure that the cross-attention part utilizes text-related discriminative visual features, regardless of the cross-attention direction. We primarily focus on obtaining the discriminative parts using the salient attentive mask (SAM, $S_{mask}$), which encodes which temporal locations are more text-related. Therefore, the direction of the cross-attention does not change the meaning of the SAM. Furthermore, we believe that a video-centric approach is more beneficial than a text-centric approach because zero-shot TAD involves more extensive information from video than from text (which just contains the names of actions). 
The rationale behind this decision aligns with our experimental results comparing three approaches: using $S_{mask}$ as an attention bias for text-to-vision cross-attention, vision-to-text cross-attention, and both directions of cross-attention. To apply text-to-vision cross-attention, we transpose $S\_{mask} \otimes \mathbf{1}\_{C}$ in Eq. (6). The performance comparison is shown in the tables below.

**50%-50% setting**

|Method|THUMOS14 mAP@0.3|mAP@0.5|mAP@0.7|mAP@Avg|ActivityNet v1.3 mAP@0.5|mAP@0.75|mAP@0.95|mAP@Avg|
|-|-|-|-|-|-|-|-|-|
|Baseline|55.9|35.9|10.5|34.5|44.1|26.1|3.1|26.6|
|Text-to-vision|**57.4**|42.5|20.0|40.6|50.8|31.9|**5.6**|32.0|
|Vision-to-text (Ours)|57.0|**43.3**|**21.2**|**41.2**|50.6|**32.2**|5.2|32.0|
|Both|56.2|42.8|**21.2**|40.7|**50.9**|32.1|**5.6**|**32.1**|

**75%-25% setting**

|Method|THUMOS14 mAP@0.3|mAP@0.5|mAP@0.7|mAP@Avg|ActivityNet v1.3 mAP@0.5|mAP@0.75|mAP@0.95|mAP@Avg|
|-|-|-|-|-|-|-|-|-|
|Baseline|59.1|39.3|12.2|37.4|47.0|27.2|3.8|28.3|
|Text-to-vision|62.3|47.8|23.6|45.4|**53.9**|34.5|6.3|34.3|
|Vision-to-text (Ours)|**64.0**|**49.7**|24.1|**46.8**|53.8|**34.8**|**7.0**|**34.7**|
|Both|63.6|48.8|**24.5**|46.5|**53.9**|34.4|6.1|34.2|

As shown in the tables, utilizing video features as the query (vision-to-text) slightly outperforms the other variants. As you mentioned, text-to-vision also allows the text features to focus on the related visual features and significantly improves the performance compared to our baseline. Our model shows similar performance regardless of whether $S_{mask}$ is used for text-to-vision or vision-to-text cross-attention. These results indicate that the most crucial part of our TiCA is the salient attentive mask (SAM, $S_{mask}$), and the choice between text-to-vision and vision-to-text cross-attention is less critical. We will revise the Ti-FAD section to emphasize the focus on $S_{mask}$ more clearly. 
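The idea of using $S_{mask}$ as an additive attention bias, discussed in the exchange above, can be illustrated with a minimal single-head sketch. This shows the text-to-vision direction (where the transposed mask biases attention over video segments); projection matrices, multi-head structure, and the exact form of Eq. (6) are omitted, so this is an assumption-laden simplification rather than the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def text_to_vision_attn(F_txt, F_vid, S_mask):
    """Single-head cross-attention where text queries attend to video keys,
    with a per-segment text-relatedness score added as an attention bias.

    F_txt:  (C, d) class-name text features (queries)
    F_vid:  (T, d) video segment features (keys/values)
    S_mask: (T,)   text-relatedness score per temporal segment
    """
    d = F_txt.shape[-1]
    logits = F_txt @ F_vid.T / np.sqrt(d)  # (C, T) similarity logits
    logits = logits + S_mask[None, :]      # bias toward text-related segments
    attn = softmax(logits, axis=-1)        # normalize over segments
    return attn @ F_vid                    # (C, d) video-enriched text features
```

A large score on one segment drives the attention of every class query toward that segment, which is the intended effect: text-related segments dominate the fused representation.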
> **W3) The so-called "Foreground-Aware" approach refers to the model's ability to predict temporal boundaries on its own, without relying on an offline proposal generator, is that right?** As you mentioned, our model does not require an offline proposal generator. However, the "Foreground-Aware" approach is not related to predicting temporal boundaries. As described in Lines 173-174, the "Foreground-Aware head" aims for our model to focus more on foreground action segments in a class-agnostic manner. > **W3) The design in the Foreground-Aware Head is quite common in temporal action detection methods, and I don't see anything particularly unique about it. Is this a core contribution of the article?** Our core novelties are constructing a solid cross-modal baseline, as acknowledged by Reviewer EXtU, and proposing Ti-FAD, which addresses the issue of over-focusing on common sub-actions. In this context, we mainly focus on capturing text-related sub-action parts using the salient attentive mask (SAM, $S_{mask}$) in TiCA and reducing the influence of unrelated parts by suppressing the background in the foreground-aware head. Therefore, our contribution does not lie only in adopting the foreground-aware head in our Ti-FAD. Furthermore, while some aspects of the foreground-aware head are similar to existing methods, this is the first application of foreground-awareness in the zero-shot temporal action detection area. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed rebuttal. My concerns have been addressed. Considering the overall quality of the paper and the comments of other reviewers, I keep my rating at "Borderline accept".
Summary: The paper provides a solution for zero-shot temporal action detection (TAD). It mainly addresses distinguishing between similar actions that share common sub-actions. The method introduces a cross-attention in the model and two actionness losses. They benchmark on the two standard TAD datasets, THUMOS and ActivityNet, and outperform existing methods by a significant margin. Strengths: 1) The proposed method outperforms existing methods by a significant margin. 2) The paper mentions an interesting issue of the classes in the dataset containing a common sub-action, which could confuse the model. 3) The paper addresses a challenging and relatively underexplored task of zero-shot TAD. Weaknesses: 1) From my understanding, none of the previous methods deployed the ActionFormer baseline. This is a robust baseline, as you mentioned in L75. You only report the results of the cross-modal baseline, claiming that introducing the cross-modal model boosts the performance over the existing models. However, I don't see the results of using the ActionFormer detector without the baseline modifications. I think that would give a more fair view of the contributions of the cross-modal part. 2) Your first contribution states the novelty of the cross-modal baseline. From what I understand from Sec. 3.2, you are introducing a well-established cross-attention module into ActionFormer. That is also important to consider in 1). 3) How often does the baseline model need clarification due to a common action bias? For example, if you plot the confusion matrix, would you actually see that it most often predicts the wrong class containing the common action? Furthermore, do you think this issue is specific to datasets like THUMOS, where the granularity of actions is not that fine? 4) [L70-71] If you are using CLIP, you must provide a specific text (prompt) for this model, even if that is just a class name.
Some models, such as UnLoc, just used one prompt to produce their zero-shot results, which augments the class name with a prefix text. This does not seem so different from what you do. 5) Please clarify more explicitly what SAM stands for (I assume it should be the salient attentive mask). The first time it is used in the text is in Figure 3, and it is already an abbreviation. 6) [Table 1] Does UnLoc do prompt tuning? I don't see any mention in the paper of any learnable prompts. You also report the results of UnLoc using a simple single prompt only, so I would not say that they are doing much engineering on the prompt side for these results. Technical Quality: 2 Clarity: 2 Questions for Authors: Why are you using the I3D features and not the CLIP vision encoder features? Also, since you don't fine-tune the text encoder, did you try using a larger CLIP-L? The results in UnLoc with a larger CLIP-L are better than with CLIP-B. I am curious to know if that would hold for you, given that you only use CLIP for the text part (I am not asking for an experiment on this one, just in case you have run it already). Overall, I think the results on THUMOS and ANET are good, but I am concerned about why you didn't include the ActionFormer baseline. I am willing to change my opinion if my points are addressed. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors briefly mentioned the limitations in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **W1,W2) ActionFormer detector without cross-modal modification. I think that would give a more fair view of the contributions of the cross-modal part.** We appreciate your in-depth comment, which allows our cross-modal part to be viewed more fairly. We conduct a comparative analysis of ActionFormer without cross-modal fusion and our cross-modal baselines on THUMOS14 in the 50%-50% setting. To facilitate a clearer understanding of our experimental models, we additionally provide illustrations of (a), (b), and (c) in Fig. B of the PDF in the global response. ||Model|mAP@0.3|mAP@0.5|mAP@0.7|mAP@Avg| |-|-|-|-|-|-| |(a)|ActionFormer w/o cross-modal fusion|30.3|15.1|3.5|16.1| |(b)|Cross-modal baseline (self-attn)|42.2|24.0|5.5|23.8| |(c)|**Cross-modal baseline (cross-attn)**|**55.9**|**35.9**|**10.5**|**34.5**| This table shows that ActionFormer without cross-modal fusion (a) exhibits inferior performance because text and video are not properly aligned, highlighting the importance of cross-modal fusion. Furthermore, the cross-attention baseline (c) outperforms the self-attention baseline (b). We will add these results and illustrations to the Appendix to provide a fairer view of our cross-modal part. > **W3) How often does the baseline need clarification due to a common action bias? If you plot the confusion matrix, would you see that it most often predicts the wrong class containing the common action?** Unlike classification tasks, zero-shot TAD involves detecting and localizing actions within continuous video streams. This makes it challenging to visualize misclassifications in the form of a confusion matrix. Instead, we utilize Average Precision (AP) following the previous work [1], which provides a comprehensive measure of performance in detecting and classifying actions. To show which action classes are most affected by common sub-action bias, we compare the per-unseen class AP on THUMOS14 in Fig. 4 (Main paper) and A (Appendix).
By comparing AP between our baseline and Ti-FAD, we show that our Ti-FAD successfully addresses the common sub-action bias by improving the AP for action classes such as *LongJump*, *HighJump*, *PoleVault*, *JavelinThrow*, and *BasketballDunk* that share the common sub-action part *Running*. These improvements are not only reflected in the performance metrics but also align with our expectations. > **W3) Do you think this issue is specific to datasets like Thumos, where the granularity of actions is not that fine?** We observe that sharing common sub-actions between multiple classes also occurs in other datasets, such as ActivityNet v1.3. For example, actions such as *Painting*, *Cleaning sink*, *Washing face*, *Cleaning windows*, *Hand car wash*, *Cleaning shoes*, *Ironing clothes*, *Hand washing clothes*, and *Washing dishes* share the common sub-action part *Wiping motion with hands*, leading to a similar issue. Furthermore, we provide additional per-unseen class AP results for 20 classes of ActivityNet v1.3 compared with our cross-modal baseline results in the PDF of the global response. The AP results for all classes will be added to the Appendix. These results demonstrate that the common sub-action bias issue is not isolated to THUMOS14. > **W4) You must provide a specific text prompt. Some models, such as UnLoc, just had one prompt, which augments the class name with a prefix text.** We simply use "{classname}" as the text prompt without any prefix or contextual text. We will add a detailed description of the text prompt in Line 107 to provide a clear view. Furthermore, to show that Ti-FAD's performance does not depend on the type of text prompt, we compare the performance of our baseline and Ti-FAD with the most commonly used types of text prompts following previous works [2,3,4,6] on THUMOS14 in the 50%-50% setting.
||Text Prompt|Baseline mAP@0.3|mAP@0.5|mAP@0.7|mAP@Avg|Ti-FAD mAP@0.3|mAP@0.5|mAP@0.7|mAP@Avg| |-|-|-|-|-|-|-|-|-|-| |(a)|"a video of action {classname}"|53.8|33.9|9.7|32.8|55.6|42.6|20.9|40.4| |(b)|Prompt Augmentation [5]|54.3|34.6|10.3|33.4|56.3|42.7|20.6|40.5| |(c)|Prompt Ensemble [5]|53.8|34.3|10.2|33.1|56.8|43.0|20.3|40.7| |(d)|**"{classname}"**|**55.9**|**35.9**|**10.5**|**34.5**|**57.0**|**43.3**|**21.2**|**41.2**| (a) is used in [2,3,6]. (b) refers to using the 28 prompt templates used in UnLoc [4]. (c) refers to using the average embedding vector from the 28 templates at inference, used in UnLoc [4]. The result demonstrates that Ti-FAD shows similar performance regardless of the type of text prompt. We will add this experiment to the Appendix. > **W5) Clarifying the term SAM.** We apologize for any confusion caused by the unclear term. We will replace "SAM" with "salient attentive mask (SAM)" in Line 158, where it is used for the first time. > **W6) Does UnLoc do prompt tuning?** We apologize for any confusion in Table 1. As you mentioned, UnLoc [4] does not use prompt tuning. We have revised Table 1, including all of UnLoc's reported versions. Please check Table 1 in the PDF of the global response. > **Q1) Why are you using the I3D features and not the CLIP features?** In video understanding tasks [2,3,6], using snippet-level features is generally more effective than frame-level features for capturing temporal details because addressing simultaneous spatio-temporal information is essential for video understanding. For a fair comparison, we add the results using the CLIP features in Table 1. The revised Table 1 shows that I3D features still outperform overall. Even when using CLIP as a video encoder, our Ti-FAD still shows competitive performance compared to previous methods using I3D features. Please check Table 1 in the PDF of the global response.
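For readers unfamiliar with option (c), here is a small sketch of the prompt-ensemble idea (averaging the normalized per-template text embeddings, as in CLIP's zero-shot evaluation). The encoder below is a deterministic stand-in, not a real CLIP model, and the templates are illustrative:

```python
import numpy as np

DIM = 512  # CLIP-B text embedding width

def encode_text(prompt):
    # Stand-in for a frozen text encoder: a deterministic pseudo-embedding
    # derived from the prompt string (illustration only, not CLIP).
    seed = sum(ord(c) for c in prompt)
    v = np.random.default_rng(seed).standard_normal(DIM)
    return v / np.linalg.norm(v)

TEMPLATES = ["{}", "a video of action {}", "a photo of a person doing {}"]

def ensemble_embedding(classname):
    # Prompt ensemble: embed every template, average the normalized vectors,
    # then re-normalize to get one class embedding for inference.
    embs = np.stack([encode_text(t.format(classname)) for t in TEMPLATES])
    mean = embs.mean(axis=0)
    return mean / np.linalg.norm(mean)

emb = ensemble_embedding("LongJump")
print(emb.shape, float(np.linalg.norm(emb)))
```

At inference, class scores are then cosine similarities between the video feature and each class's single averaged embedding, so the ensemble adds no per-template cost at test time.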
> **Q2) Did you try using a CLIP-L?** Following most existing methods [2,3,6], we mainly utilize CLIP-B. For a more comprehensive comparison, we have updated the results using CLIP-L in Table 1. Please check Table 1 in the PDF of the global response. --- Rebuttal 2: Comment: Dear authors, I appreciate all your answers! I have a short follow-up question: Why does using CLIP-L versus CLIP-B decrease the performance for your model? --- Rebuttal Comment 2.1: Comment: We appreciate your constructive feedback once again and your additional question, which has allowed us to clarify our findings. We also want to express our gratitude for the time and effort you have dedicated to reviewing our work. --- Based on our analysis, it appears that there is no significant difference in performance between CLIP-B and CLIP-L as text encoders in most cases. We have split our findings into two scenarios based on the video encoder (CLIP or I3D): **Video encoder: CLIP** As shown in the revised Table 1, using CLIP-L as a text encoder performs worse than using CLIP-B at the 50%-50% setting on THUMOS14. However, using CLIP-L as a text encoder shows superior performance compared to CLIP-B at the 75%-25% setting on THUMOS14 and both settings (50%-50% and 75%-25%) on ActivityNet v1.3. **Video encoder: I3D** As shown in the revised Table 1, using CLIP-L as a text encoder shows worse performance compared to CLIP-B at the 50%-50% setting on THUMOS14 and the 75%-25% setting on ActivityNet v1.3. However, using CLIP-L as a text encoder shows superior performance compared to CLIP-B at the 75%-25% setting on THUMOS14 and the 50%-50% setting on ActivityNet v1.3. In conclusion, this mixed performance indicates that the choice of text encoder does not significantly impact performance because **zero-shot temporal action detection involves more extensive information from video than text (which just contains the names of actions)**.
--- We hope that our overall rebuttal and this note have addressed your concerns thoroughly. We respectfully ask that you consider raising your final review score.
Rebuttal 1: Rebuttal: Dear All Reviewers, We sincerely appreciate the reviewers' thoughtful feedback on our paper. We are grateful for the time and effort you have taken to review our manuscript. All constructive comments allowed us to develop our paper even further. In this global response, we address three aspects: 1. **Our Core Novelties** 2. **Key Strengths Acknowledged by Reviewers** 3. **Additional Experiments & Revised Table and Figures (PDF)** --- ### **Our Core Novelties** 1. **Novel Cross-Modal Baseline**: - We introduce a solid cross-modal Zero-Shot Temporal Action Detection (ZSTAD) baseline, which shows strong performance compared to the previous methods. Our cross-modal baseline enables the detector to capture the full information of both video and text modalities throughout the entire detection process by eliminating the step of pre-extracting foreground candidate proposals used in previous ZSTAD methods. 2. **Identifying a Key Issue**: - Our approach identifies the common sub-action bias issue causing confusion in our cross-modal baseline. For example, in the THUMOS14 case, action classes such as *LongJump*, *HighJump*, *PoleVault*, *JavelinThrow*, and *BasketballDunk* can share the common sub-action part *Running*. In the ActivityNet v1.3 case, action classes such as *Painting*, *Cleaning sink*, *Washing face*, *Cleaning windows*, *Hand car wash*, *Cleaning shoes*, *Ironing clothes*, *Hand washing clothes*, and *Washing dishes* can share the common sub-action part *Wiping motion with hands*. These common sub-actions hinder the model from learning the fine-grained alignment between video and text information, especially in a ZSTAD environment where the amount of video information is overwhelmingly larger than the text information (which contains only the name of the action). 3.
**Model Innovation**: - The proposed Text-infused attention and Foreground-aware Action Detection (Ti-FAD) addresses the common sub-action bias issue by (1) capturing text-related sub-action parts, and (2) distinguishing action segments from the background. Our Ti-FAD shows promising performance on THUMOS14 and ActivityNet v1.3. --- ### **Key Strengths Acknowledged by Reviewers** We are encouraged by the reviewers' recognition of our approach in several key areas: 1. **Outperforming Existing Methods** (Reviewers EXtU, QEJb, and mst5): - Our method outperforms existing methods by a significant margin. 2. **Addressing the Important Issue of Common Sub-Action Confusion** (Reviewers EXtU, QEJb, and mst5): - Our paper identifies and addresses the profound issue of sub-actions causing confusion in our baseline. 3. **Clarity and Readability** (Reviewers QEJb and mst5): - Our paper is well-written and easy to follow. 4. **Tackling an Unexplored and Challenging Task** (Reviewer EXtU): - Our paper addresses an unexplored and challenging zero-shot approach in the temporal action detection area. --- ### **Additional Experiments & Revised Table and Figures (PDF)** 1. **Table 1**: The revised version of Table 1 in the original manuscript. This table additionally contains three aspects: (1) additional experiments for CLIP visual encoders, (2) additional experiments for the CLIP-L text encoder, and (3) a previously missing entry for a prior work. 2. **Figure 2(a) and Figure 3(a)**: Updated model figures for improved clarity. We have added cross-connected arrows to depict the connection of cross-attention. 3. **Figure B**: Illustration of the baseline approach to clarify the experimental setup in Reviewer EXtU's comments (W1, W2) and Reviewer mst5's comments (W2, Q2). This figure demonstrates the differences in baseline architectures between the ActionFormer model without cross-modal fusion and the cross-modal baselines (self-attention and cross-attention).
Our responses to Reviewer EXtU's comments (W1, W2) and Reviewer mst5's comments (W2, Q2) contain the comparative experimental results and discussion about this part. 4. **Figure C**: Per-unseen class AP for 20 classes in ActivityNet v1.3. This figure shows the improvements of our Ti-FAD compared to our cross-modal baseline, which indicates that our Ti-FAD addresses common sub-action confusion in various datasets. --- **References** [1] L. Zhang et al., "ZSTAD: Zero-Shot Temporal Activity Detection," *CVPR*, 2020. [2] S. Nag et al., "Zero-Shot Temporal Action Detection via Vision-Language Prompting," *ECCV*, 2022. [3] C. Ju et al., "Prompting Visual-Language Models for Efficient Video Understanding," *ECCV*, 2022. [4] S. Yan et al., "UnLoc: A Unified Framework for Video Localization Tasks," *ICCV*, 2023. [5] A. Radford et al., "Learning Transferable Visual Models from Natural Language Supervision," *ICML*, 2021. [6] T. Phan et al., "ZEETAD: Adapting Pretrained Vision-Language Model for Zero-Shot End-to-End Temporal Action Detection," *WACV*, 2024. --- We sincerely appreciate all constructive comments and believe that the revisions and additional experiments have strengthened our paper. We are also committed to making our code publicly available upon acceptance of the paper, to facilitate further research in this area. We kindly request your consideration for a positive evaluation. Once again, we appreciate all your valuable feedback. Pdf: /pdf/97588dc8f047b6a76b64d6f18de2a2536c9493bf.pdf
NeurIPS_2024_submissions_huggingface
2024
Dimension-free Private Mean Estimation for Anisotropic Distributions
Accept (poster)
Summary: This paper tackles the problem of DP mean estimation for high-dimensional distributions exhibiting anisotropy, meaning the variances along different directions are highly non-equal. Prior works on this problem were plagued by a "curse of dimensionality", requiring sample complexities at least on the order of the square root of the ambient dimension d, even in cases where the non-private setting permits much lower sample sizes. The authors make two main contributions to address this limitation. Firstly, for the case when the covariance matrix $\Sigma$ is known, they provide a DP algorithm achieving sample complexity independent of the dimension d. Instead, the sample complexity depends solely on $tr(\Sigma^{1/2})$, which can be substantially smaller than d when the distribution is highly anisotropic. The bound matches the optimal non-private sample complexity up to logarithmic factors. Secondly, the authors develop a DP algorithm for the unknown covariance setting that improves upon prior work by reducing the dimension dependence from $1/2$ to $1/4$, while also depending on properties of the diagonal entries of $\Sigma$. Strengths: - Paper is clear and well written, though it is mathematically heavy. The analysis, supported by proof sketches, appears technically sound. - The motivations of improved rates with certain covariance structure are intuitive. Weaknesses: - The results involving known covariance are straightforward. The benefits come from (1) application of Tsfadia et al. and (2) rescaled noise addition, which is similar to Aumüller et al. - There is still a $d^{1/4}$ dependence left. Technical Quality: 3 Clarity: 3 Questions for Authors: Major questions with potential score raising: - Is it possible to establish a lower bound strictly higher than Theorem 1.2 for the unknown covariance case? This could help identify an intrinsic gap between the known and unknown covariance settings beyond logarithmic factors.
- The authors mention that for special cases of eigenvalue/variance decay, the unknown covariance bound can be improved to nearly match the known case up to log factors. What are the key bottlenecks or assumptions preventing a fully dimension-independent bound for unknown covariance? The remaining $d^{1/4}$ dependence seems to stem from coordinates with smaller variances - is this avoidable under some assumptions on the decay behavior? Minor questions: - In line 113, is the last term in the sample complexity bound for unknown covariance correct? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are well addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and interest in our results. - **Lower bound for the case of unknown covariance**: We are not certain that our results are optimal in the unknown covariance case, even assuming a diagonal covariance matrix, and we agree that identifying an intrinsic gap between the known and unknown covariance would be very interesting (additionally since this gap does not exist for privately learning the mean in the Mahalanobis norm). Known lower bound techniques for private statistical estimation (in particular, the most popular fingerprinting technique for exponential families) do not apply as is to our problem. However, our current intuition is that there is a case for which the trace term is constant and the $d^{1/4}$ term is necessary. Therefore, we think that it is possible that the $d^{1/4}$ term cannot be improved. - **When can we achieve a dimension-independent bound under unknown covariance**: Indeed, the additional error in our results for the unknown covariance case stems from the $d-k$ coordinates of smaller variance, where $k$ is the number of top-variance coordinates we choose to learn, and we show that we can always learn up to roughly $\varepsilon^2n^2$ coordinates without increasing the sample complexity. In particular, the error incurred in the mean estimation step is $(\lVert\sigma_{1:k}\rVert_1 + \sqrt{d-k} \lVert \sigma_{k+1:d}\rVert_2) / \varepsilon n$. The first term is already smaller than the optimal $\lVert\sigma_{1:d}\rVert_1$, and in our result we show that the second term can always be upper-bounded by $\sqrt{d}\lVert\sigma_{1:d}\rVert_1/\sqrt{k}$. When this inequality is tight for some $k$, our analysis of this algorithm is tight.
More generally though, there are interesting cases when the term $\sqrt{d-k}\lVert\sigma_{k+1:d}\rVert_2$ is smaller than the optimal $\lVert\sigma_{1:d}\rVert_1$, for $k$ at most $\varepsilon^2n^2$, and then our results match the ones for known covariance, up to logarithmic factors in $d$. One example is the case of exponentially decaying variances: if $\sigma_i=\sigma_1 e^{-(i-1)}$, then it suffices to learn only the top $k=\log(d)$ variances, because the error incurred by the last term is $\sqrt{d-k}\lVert\sigma_{k+1:d}\rVert_2\approx 1/\sqrt{d}$, which is smaller than the optimal $\lVert\sigma_{1:d}\rVert_1=O(1)$. Learning the top $\log(d)$ variances using the sparse-vector technique can be achieved with a number of samples polylogarithmic in $d$, adding only logarithmic factors to the optimal sample complexity. - Line 113 omits several logarithmic factors with respect to $\delta$, $\beta$, but indeed there is a typo in this particular term: the exact dependence is $\log^2(d)+\log^{1.5}(d)/\varepsilon\gamma + \log(d)/\varepsilon\gamma^2$. We will add the full calculation for this case of interest in the appendix. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification and I have updated my score. I still think a lower bound for unknown covariance (even if not tight) would be convincing.
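The two-term tradeoff discussed in this thread can be checked numerically. The sketch below evaluates the rebuttal's error expression $(\lVert\sigma_{1:k}\rVert_1 + \sqrt{d-k}\lVert\sigma_{k+1:d}\rVert_2)/\varepsilon n$ on two toy spectra (our own illustrative numbers, not experiments from the paper):

```python
import numpy as np

def error_proxy(sigma, k, eps, n):
    # (||sigma_{1:k}||_1 + sqrt(d-k) * ||sigma_{k+1:d}||_2) / (eps * n),
    # the two-term error bound discussed in the rebuttal.
    d = len(sigma)
    head = float(np.sum(sigma[:k]))
    tail = float(np.sqrt(d - k) * np.linalg.norm(sigma[k:]))
    return (head + tail) / (eps * n)

d, eps, n = 1024, 1.0, 100

# Exponentially decaying variances sigma_i = e^{-(i-1)}: learning only the top
# k ~ log(d) coordinates already drives the tail term below ||sigma||_1 = O(1).
sigma_exp = np.exp(-np.arange(d, dtype=float))
k = int(np.log(d))
tail = np.sqrt(d - k) * np.linalg.norm(sigma_exp[k:])
print(tail < np.sum(sigma_exp))   # tail is dominated by the optimal l1 term

# Flat (isotropic) spectrum: no choice of k improves on ||sigma||_1 / (eps * n),
# consistent with a residual dimension dependence in the unknown-covariance case.
sigma_flat = np.ones(d)
best = min(error_proxy(sigma_flat, kk, eps, n) for kk in range(d + 1))
print(best >= np.sum(sigma_flat) / (eps * n) - 1e-9)
```

For the flat spectrum the head and tail terms trade off exactly (head $k$ plus tail $d-k$ sums to $d$ for every $k$), which illustrates why anisotropy is essential for beating the isotropic rate.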
Summary: The paper presents new differentially private algorithms for estimating the mean of high-dimensional data, addressing the inefficiencies of traditional methods that suffer from the "curse of dimensionality." The proposed estimators are tailored for anisotropic subgaussian distributions, where data signals are concentrated in fewer dimensions. These estimators achieve optimal sample complexity that is independent of dimensionality when the covariance is known and improve the sample complexity for unknown covariance cases from $d^{1/2}$ to $d^{1/4}$. Strengths: The paper studied interesting problems and the writing is clear. The authors give both upper bound and lower bound for the problem. Weaknesses: 1. I understand the space of the main text is limited but there is no conclusion. 2. There is no experimental design to verify their theoretical findings. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The lower bound does not totally match the upper bound. Is it possible to improve the gap? 2. In Theorem 3.1., the range of $\epsilon$ is from 0 to 10. Actually, 10 is a weak privacy guarantee in DP in practice. Why do you take this value? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There is no experiment. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and suggestions to improve the presentation of our paper. - We agree that having a conclusion would be preferable and since the final manuscript can be a page longer we will certainly add one (and move discussion for future work from the appendix) in the main body. - **Lower bound for the case of unknown covariance**: We are not certain that our results are optimal in the unknown covariance case, even assuming a diagonal covariance matrix, and we agree that this is a very interesting question. We know of special cases (e.g., diagonal matrices whose singular values follow an exponential decay or isotropic matrices) for which our results are optimal up to logarithmic factors. Our intuition is that there is a case for which the trace term is constant and the $d^{1/4}$ term is necessary. Therefore, we think that it is possible that the $d^{1/4}$ term cannot be improved. - **Range of epsilon**: Thank you for pointing this out – we understand that the range $(0,10)$ seems somewhat arbitrary, and we will clarify this in the updated version. Please note that the theorem still holds for $\varepsilon\in(0,1)$, which would correspond to a reasonably strong privacy guarantee. However, we wanted to point out that the theorem holds more generally, for $\varepsilon\in(0,c)$ where $c$ is some constant larger than $1$, since $\varepsilon>1$ is still encountered in real-world applications, and might be a regime of interest for those. The choice of constant is due to approximations we take in the privacy proof and it has not been optimized (so the theorem could possibly hold for $\varepsilon\in(0,c)$, where $c>10$). However, as you mentioned, 10 seems to be large enough already, so we did not find it necessary to explore the exact constant $>10$ for which our proof goes through.
- **Simulations**: We focus on the theoretical analysis of our algorithms and prove their privacy and accuracy guarantees, but we do expect that the known-covariance algorithm will perform well, since the constants involved in our error are provably small. The results for the unknown covariance case are less conclusive and we agree that some experimentation might be interesting in this case. However, we would like to point out that ours are the first methods that are applicable for $n\leq \sqrt{d}/\varepsilon$, which makes it difficult to choose an appropriate baseline to compare to for smaller sample sizes.
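For intuition only, here is a toy simulation in the spirit of the known-covariance estimator discussed in this thread. It replaces the FriendlyCore filtering step with naive coordinate-wise clipping and uses unoptimized constants, so it is a sketch of the anisotropic noise scaling (per-coordinate scale $\sqrt{\sigma_i \sum_j \sigma_j}/\varepsilon n$, giving total noise on the order of $tr(\Sigma^{1/2})/\varepsilon n$), not the paper's algorithm:

```python
import numpy as np

def private_mean_known_cov(X, std, eps, clip=5.0, rng=None):
    # Toy sketch: clip each coordinate to +/- clip * std_i, average, then add
    # anisotropic Gaussian noise with per-coordinate scale
    #   2 * clip * sqrt(std_i * sum_j std_j) / (eps * n),
    # so the expected l2 noise norm scales like tr(Sigma^{1/2}) / (eps * n).
    # (The actual algorithm uses FriendlyCore-style filtering and tuned constants.)
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    clipped = np.clip(X, -clip * std, clip * std)
    noise_scale = 2 * clip * np.sqrt(std * np.sum(std)) / (eps * n)
    return clipped.mean(axis=0) + noise_scale * rng.standard_normal(d)

rng = np.random.default_rng(1)
d, n = 2000, 500
std = 1.0 / (1.0 + np.arange(d))        # anisotropic: sum(std) ~ log d << sqrt(d)
X = std * rng.standard_normal((n, d))   # mean-zero Gaussian data, diagonal covariance
est = private_mean_known_cov(X, std, eps=1.0, rng=rng)
print(est.shape, float(np.linalg.norm(est)))  # l2 error stays well below 1
```

On this spectrum the total noise is governed by $\sum_i std_i \approx \log d \approx 8$ rather than $\sqrt{d} \approx 45$, which is the dimension-free phenomenon the rebuttal describes, even though this sketch makes no formal privacy claim.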
Summary: This paper considers the problem of DP mean estimation and the focus is on the high dimensional settings where the distribution is nearly low rank (or tr(Sigma)<<d), and the error metric is l_2. Prior work in this setting still requires sqrt(d) samples to achieve any non-trivial error which is sub-optimal. In the known covariance setting this paper achieves sample complexity n = tr(Sigma)/alpha^2 + tr(Sigma^1/2)/(alpha*eps) which can be significantly smaller than sqrt(d) in the high dimension setting. The algorithm filters out the outliers using the FriendlyCore algorithm [62] and leverages the propose-test-release framework. For the unknown covariance, one will need to first have a rough estimate of the covariance such that appropriate noise can be added to guarantee privacy. However, dimension-free estimation of the covariance is impossible, so the idea is to estimate the top-K coordinates with the largest variances, and simply add small and isotropic gaussian noise to the remaining directions. Combining the error incurred in the top and bottom coordinates gives a d^(1/4) dependency in the dimensionality. On the lower bound side, they prove their known covariance result is optimal up to logarithmic factors. It is unclear what the optimal sample complexity for the unknown covariance should be. Strengths: The DP mean estimation problem in the high dimensional setting is a fundamental problem in differential privacy, and the paper achieves optimal results for the known covariance setting and makes progress in the unknown covariance setting. The ideas for unknown covariance seem novel to me. The sample complexity of mean estimation in the unknown covariance setting, posed in this paper, remains an interesting open problem. Weaknesses: The techniques for the known covariance setting have been developed previously. Technical Quality: 4 Clarity: 4 Questions for Authors: I don't have questions for the authors.
Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and interest in our results.
Summary: This paper presents a new method for estimating the mean of a subgaussian distribution, such that differential privacy is guaranteed (i.e., the final result does not provide too much identifying information about any individual sample). For the proposed method, in the case of known covariance, the sample complexity depends only on the trace of the (1/2) power of the covariance matrix. This means that, if the distribution is highly anisotropic (with the variance mostly concentrated in only a few dimensions) then the sample complexity does not depend explicitly on the full dimensionality of the space *at all*, only on the dimensionality of the lower-dimensional space where "most" of the variance lies. This improves on the prior SOTA result, from (Aumüller et al 2023), where the sample complexity explicitly contains a factor of sqrt(d), where d is the full dimensionality of the space. The core algorithm proceeds by first filtering outliers, using a differentially-private filtering mechanism proposed by (Tsfadia et al 2022), and then simply taking the mean of the remaining samples and adding Gaussian noise. A matching lower bound is also proven, showing that we can't in general do any better asymptotically than O(trace(Σ^(1/2))), up to logarithmic factors. A variant of the algorithm is proposed for the case of unknown covariance as well, which has sample complexity with an explicit dependence on d^(1/4), which is still an improvement over the prior SOTA d^(1/2). Strengths: - The problem being addressed is important, given the increasing relevance of privacy guarantees in machine learning. - The improvements from prior work are clearly contextualized. Sufficient background information is provided to understand the major points of the paper. - To my first estimation, the technique appears sound, although this is not my area of expertise and I am not qualified to rigorously evaluate the technical correctness of the proposed algorithm.
Weaknesses: - The order of the presentation could be improved. The preliminaries section could be presented earlier in the paper, before the definitions given are referred to. - No empirical tests are performed. Including a numerical simulation of the algorithm would increase the reader's confidence in the correctness of the theoretical results, as well as give an idea of the tightness of the bound in practice. Technical Quality: 3 Clarity: 3 Questions for Authors: - Is it possible to derive a separate, tighter lower bound for the case of unknown covariance? Or is it possible that we may in the future improve from the d^(1/4) result in this paper? Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are addressed adequately in the Future Work section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and suggestions to improve the presentation of our paper. - Thank you for the suggestion on the presentation! We will consider including definitions from the preliminaries earlier in the introduction to formalize the intuition we want to convey. Please let us know if there was a specific definition that would have been helpful to see earlier. - **Lower bound for the case of unknown covariance**: We are not certain that our results are optimal in the unknown covariance case, even assuming a diagonal covariance matrix, and we agree that this is a very interesting question. We know of special cases (e.g., diagonal matrices whose singular values follow an exponential decay or isotropic matrices) for which our results are optimal up to logarithmic factors. Our current intuition is that there is a case for which the trace term is constant and the $d^{1/4}$ term is necessary. Therefore, we think that it is possible that the $d^{1/4}$ term cannot be improved. - **Simulations**: We focus on the theoretical analysis of our algorithms and prove their privacy and accuracy guarantees, but we do expect that the known-covariance algorithm will perform well, since the constants involved in our error are provably small. The results for the unknown covariance case are less conclusive and we agree that some experimentation might be interesting in this case. However, we would like to point out that ours are the first methods that are applicable for $n\leq \sqrt{d}/\varepsilon$, which makes it difficult to choose an appropriate baseline to compare to for smaller sample sizes. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for responding to my comments. To be more specific about the ordering of the paper, the FriendlyCore procedure, and the distinction between "pure" DP and $(\epsilon,\delta)$-DP, could be introduced earlier, before being referenced. I am keeping my original "Accept" score.
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper studies mean estimation for multivariate Gaussian distributions. It's well known that this problem under differential privacy suffers from a curse of dimensionality: the sample complexity of estimating the mean (in expected $\ell_2^2$ error) scales with $\sqrt{d}$ where $d$ is the dimension. However, the lower bounds are for isotropic covariance matrices, whereas in the real world, covariance matrices are often far from isotropic. The main intuition is that when covariance matrices are far from isotropic, there is a gap between large and small singular values, and hence the number of 'important dimensions' is relatively small: the object behaves 'more lower-dimensional' than the dimension $d$ indicates. They consider two settings, the known and unknown covariance cases. 1) In the known covariance case, they show a dimension-independent bound (for approximate DP) that depends instead on the sum of (square roots of) singular values. They argue that this is optimal up to logarithmic factors (and also that no pure DP algorithm can achieve such a dimension-independent bound). 2) For the unknown covariance case, one approach is estimating the covariance and then applying the known-covariance algorithm, but this might be prohibitive since estimating the covariance is known to require at least $d^{3/2}$ samples asymptotically. The authors instead show that it's possible to achieve a bound with a $d^{1/4}$ dependence instead. For the known covariance case, the authors use the known FriendlyCore algorithm with a carefully specified predicate to remove outliers (points far from the mean in any coordinate), which allows them to bound the sensitivity and then add dimension-independent noise. The lower bounds are adaptations of the packing and fingerprinting approaches used for pure and approximate DP. In the unknown covariance case, they treat large and small singular values differently.
Firstly, they learn the identities and values of the top $k$ singular values up to multiplicative factors and use the known covariance algorithm to estimate the mean restricted to these coordinates. For the other coordinates, they use Hölder's inequality to bound the $\ell_2$ norm of the singular values. For small singular values, the variance does not need to be as accurately estimated since these directions are less important. Balancing $k$ to optimize the cumulative error gives the sample complexity bound. Strengths: 1) Gaussian mean estimation is a fundamental problem, and this paper suggests a way to deal with the curse of dimensionality that privacy imposes for this problem, by considering non-worst case instances (anisotropic covariances). This is an interesting direction that will likely spawn future work. The paper also does a nice job of suggesting future directions of research in this space. 2) The paper combines known techniques in privacy in clever ways to obtain their upper and lower bounds. The problem of private Gaussian mean estimation has seen lots of prior investigation, and they also do a good job of explaining how a lot of these techniques inherently give dimension-dependent bounds. Weaknesses: 1) While the results are interesting, they mostly follow from applying known techniques from the privacy literature: the additional technical insight of this paper is rather limited. 2) The unknown covariance case as presented was confusing to me: is there an assumption that the covariance matrix is diagonal? This seems to be used in the author's proofs and approaches, but it wasn't clearly described in the paper and could use clarification. I don't see how to extend the techniques of the authors to deal with general covariance matrices. Technical Quality: 4 Clarity: 3 Questions for Authors: 1) Line 829, I believe the authors mean $\|\sigma_{\mathrm{bot}}\|_2$.
2) Do known lower bounds for estimating the covariance matrix depend on $d$ even with large singular value gaps? Could one hope to get bounds for this problem that depend similarly on the sum of singular values? I am curious if existing results truly rule out the approach of estimating the covariance matrix before using a known covariance algorithm. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
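The recipe the summary describes for the known-covariance case (remove coordinate-wise outliers, bound the sensitivity of the mean, add noise) can be sketched in a few lines. The following is a toy numpy illustration only, not the paper's FriendlyCore-based algorithm: the non-private median anchor, the clip radius `c`, and the Gaussian-mechanism calibration are simplifying placeholders rather than a rigorous privacy analysis.

```python
import numpy as np

def private_mean_known_cov(X, sigma_diag, eps, delta, c=3.0, rng=None):
    """Toy sketch: clip each coordinate to a known-scale window around the
    coordinate-wise median, then release the clipped mean plus Gaussian noise.
    Not the paper's algorithm; c and the noise calibration are placeholders."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = X.shape
    center = np.median(X, axis=0)            # rough anchor (non-private; toy only)
    lo, hi = center - c * sigma_diag, center + c * sigma_diag
    Xc = np.clip(X, lo, hi)                  # one record moves each coordinate of the
    sens = 2.0 * c * sigma_diag / n          # clipped mean by at most 2*c*sigma_i / n
    noise_std = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return Xc.mean(axis=0) + rng.normal(0.0, noise_std, size=d)

rng = np.random.default_rng(1)
d = 5
sigma = np.array([4.0, 2.0, 1.0, 0.5, 0.25])   # anisotropic per-coordinate scales
mu = np.arange(d, dtype=float)
X = mu + sigma * rng.standard_normal((4000, d))
est = private_mean_known_cov(X, sigma, eps=1.0, delta=1e-6, rng=rng)
err = np.linalg.norm(est - mu)
```

Because the injected noise scales with each coordinate's own $\sigma_i$ rather than with a single worst-case bound, the total error tracks the anisotropy of the covariance instead of the ambient dimension, which is the qualitative point of the paper's result.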
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and interest in our results. - **The case of unknown general covariance**: Our main result for unknown covariance (Theorem 1.3) includes the sum of the square-roots of the diagonal elements of the covariance matrix, $\sum_{i=1}^d \Sigma_{ii}^{1/2}$. This way, we present a result for general covariance without making any assumptions on the covariance matrix. However, this sum is more easily interpretable when the covariance is indeed diagonal: the sum is equal to $\mathrm{tr}(\Sigma^{1/2})$ when the covariance matrix is diagonal, and can be larger otherwise. In the paper, we focus on the case of diagonal covariance in places, for ease of exposition when describing our techniques, to compare with other baseline approaches which only work for diagonal covariance matrices, or to compare with our result for the case of known covariance which includes the $\mathrm{tr}(\Sigma^{1/2})$ term. We understand how this can be confusing and we will clarify that we make this assumption for simplicity when we do. Thank you for the suggestion. - Indeed, Line 829 has a typo and this is the correct term. Thank you for noting it! - **On the approach of privately estimating the covariance matrix before using a known-covariance mean estimation algorithm**: We do not know of any lower bounds specifically aimed at privately estimating a covariance matrix with large singular value gaps. The existing lower bounds for covariance estimation in spectral norm which apply to Gaussian data ([Kamath Mouzakis Singhal 2022], extended to the low-accuracy regime later by [Narayanan 2023]) use almost-isotropic covariance matrices as the lower bound instance, so they do not go through under a large singular value gap assumption.
But we would not expect them to: there are cases [Singhal Steinke 2021] where assuming a known large singular value multiplicative gap in the order of $O(d^2)$ allows us to learn the subspace of the top eigenvectors (and subsequently their singular values) with a sample complexity roughly polynomial in the dimension of the subspace, which does not depend on $d$. However, we note that in any case, we expect that going through covariance estimation to achieve our goal would be a lossy step. The reason is that for the case of (almost) identity covariance, the sample complexity of covariance estimation in spectral norm would be $d^{1.5}/\alpha\varepsilon$, whereas our current upper bound for the unknown covariance case is $d/\alpha\varepsilon+d^{0.75}/\alpha^{1/2}\varepsilon$, which is smaller (and optimal for the regime $\alpha\leq \sqrt{d}$ which is usually of interest). We will update our manuscript to include the discussion. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their clarifications!
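The comparison of the two sample-complexity expressions in the rebuttal can be sanity-checked at a concrete point of the stated regime $\alpha \leq \sqrt{d}$ (the particular values of `d`, `alpha`, and `eps` below are arbitrary choices of ours):

```python
# Compare the two sample-complexity expressions from the rebuttal at one
# point of the regime alpha <= sqrt(d); the chosen values are illustrative.
d, alpha, eps = 10_000, 10.0, 1.0
assert alpha <= d ** 0.5

via_cov_estimation = d ** 1.5 / (alpha * eps)                        # d^{3/2}/(alpha*eps)
direct_upper_bound = d / (alpha * eps) + d ** 0.75 / (alpha ** 0.5 * eps)

print(via_cov_estimation, direct_upper_bound)   # the direct bound is far smaller here
```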
Identifying Latent State-Transition Processes for Individualized Reinforcement Learning
Accept (poster)
Summary: The paper addresses the challenge of identifying individualized state-transition processes in reinforcement learning (RL) when individual-specific factors are latent. The authors propose a novel framework, the Individualized Markov Decision Processes (iMDPs), to incorporate these latent factors into the state-transition processes. They provide theoretical guarantees for the identifiability of these latent factors and present a practical method for learning them from observed state-action trajectories. Experiments on various datasets demonstrate the effectiveness of their method in identifying latent state-transition processes and optimizing individualized RL policies. Strengths: 1. Theoretical guarantees for identifiability under various conditions. 2. Effective practical method for learning individualized RL policies from observed data. 3. Comprehensive experiments on synthetic and real-world datasets demonstrating the effectiveness of the proposed method. Weaknesses: 1. The method might be challenging to generalize to all types of RL problems, especially those with instantaneous causal influences within states. 2. Some sections could be simplified for broader accessibility, and future work should address the limitations mentioned. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. How does the proposed framework scale with larger datasets and more complex environments? Are there any computational limitations or bottlenecks that need to be addressed? 2. Can the framework handle real-time adaptation in dynamic environments where latent factors might change over time? If so, how would this be implemented? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: 1. The framework assumes that the latent individual-specific factors are time-invariant. In real-world scenarios, individual-specific factors can evolve over time, which this model does not account for. 2.
The framework might face challenges in scaling to environments with very high-dimensional state and action spaces, which are common in some real-world applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The method might be challenging to generalize to all types of RL problems, especially those with instantaneous causal influences within states.** **A1:** Thank you for pointing out this scenario. It is indeed possible that there are instantaneous causal influences within states in the considered problem. However, even with these influences, our framework can still identify the latent individual-specific factors, and the theoretical results will not be affected. In the instantaneous case, since the states are observed, we can treat the states at each time step as a whole and input them into the current estimation framework. Thus the instantaneous causal influences within states will not affect our current estimation framework and theorem results. **Q2: Some sections could be simplified for broader accessibility, and future work should address the limitations mentioned.** **A2:** Thank you for your suggestion. In the updated manuscript, we have reorganized the sections to improve accessibility and clarity. Additionally, we have added a separate Limitations section to discuss how we address limitations, including instantaneous causal influences. Future work could extend our current framework from a causal perspective. For instance, we could develop modules that explicitly account for causal dependencies within states, using techniques such as causal graphical models and advanced inference methods that handle instantaneous causal relationships [1]. Such extensions could be directly integrated into our current framework, and we have been working on this problem. [1] Li, Zijian, et al. "On the Identification of Temporally Causal Representation with Instantaneous Dependence." arXiv preprint arXiv:2405.15325 (2024). **Q3: How does the proposed framework scale with larger datasets and more complex environments? Are there any computational limitations or bottlenecks that need to be addressed? 
The framework might face challenges in scaling to environments with very high-dimensional state and action spaces, which are common in some real-world applications.** **A3:** In the updated manuscript, we validated our algorithm in the AhnChemoEnv in DTRGym [2] and inventory management tasks [3]. AhnChemoEnv is designed to simulate cancer treatment through chemotherapy, allowing realistic modeling of tumor growth and response to treatment. Inventory management is an important real-world problem that aims to keep inventories of goods at optimal levels to minimize inventory costs while maximizing revenue from demand fulfillment. We test the performance of our algorithm on the inventory with state dimensions of 50, 100, and 200. The experimental results (see Figure 1 and Figure 3 in REBUTTAL.pdf) show that our framework outperforms other algorithms in terms of initial reward and final reward. Specifically, our method shows a significant jump-start compared to non-policy adaptation, validating the effectiveness of our approach. Unlike the meta-gradient method, which requires continuous adjustment of learning strategies during training and thus converges more slowly with less significant adaptation effects, our algorithm is more efficient. Additionally, training across multiple tasks in multitask RL can be time-consuming, and identifying which new task corresponds to a previously trained task can be challenging. Our algorithm addresses this by directly estimating $\kappa$ without requiring prior knowledge. Furthermore, policy distillation heavily depends on the performance of teacher models; insufficiently trained teacher models can negatively impact the final performance. Our algorithm, however, does not rely on source policy performance. Instead, it optimizes the policy based on the new environment, leading to better final performance. 
Our findings suggest that while the algorithm scales reasonably well, certain computational limitations, such as increased processing time and memory consumption, need to be addressed. In high-dimensional scenarios, the use of hardware accelerators such as GPUs, as well as fine-tuning the model, is essential to maintain optimal performance. [2] Luo, Zhiyao, et al. "DTR-Bench: An in silico Environment and Benchmark Platform for Reinforcement Learning Based Dynamic Treatment Regime." arXiv preprint arXiv:2405.18610 (2024). [3] Sun, Yuewen, et al. "ACAMDA: Improving Data Efficiency in Reinforcement Learning Through Guided Counterfactual Data Augmentation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 14. 2024. **Q4: Can the framework handle real-time adaptation in dynamic environments where latent factors might change over time? If so, how would this be implemented? The framework assumes that the latent individual-specific factors are time-invariant. In real-world scenarios, individual-specific factors can evolve over time, which this model does not account for.** **A4:** Thank you for pointing out this interesting scenario. It is indeed possible for the latent individual-specific factor to be time-variant in the considered problem. We believe our framework can be extended to handle time-varying cases, although establishing the theoretical identifiability is highly non-trivial. For instance, if we directly allow these latent factors to be time-varying, it seems hopeless to recover the latent variables, since each individual may have a specific latent factor. Therefore, further constraints would be needed. For instance, the theoretical identifiability might benefit from the constraint that the total number of possible values of the latent factors over time and across different individuals is finite. 
We hope this important extension can be achieved by researchers in the field by leveraging our framework and further considering the idea in [4]. [4] Hu, Yingyao, and Matthew Shum. "Nonparametric identification of dynamic models with unobserved state variables." Journal of Econometrics 171.1 (2012): 32-44. --- Rebuttal 2: Title: Could you please let us know whether our responses properly addressed your concern? Comment: Dear Reviewer pu6U, Thank you again for your valuable comments. Your suggestions on the experiments were very helpful, and we are eager to learn whether our additional experimental results and our responses have properly addressed your concerns. Due to the limited time for discussion, we hope to see your feedback and hope for the opportunity to respond to your further questions, if there are any. Yours sincerely, Authors of submission 12027
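The iMDP setting discussed above (a time-invariant latent factor $\kappa$ entering the state transition) can be illustrated with a toy simulation: individuals sharing the same $\kappa$ produce similar next-state statistics at a fixed state-action pair, so they can be grouped without observing $\kappa$. All environment details below (the linear transition, the $\kappa$ values, the grouping threshold) are our own illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two latent groups with time-invariant individual factors kappa (invented values).
true_kappa = np.array([0.2, 0.2, 0.2, 0.9, 0.9, 0.9])
s_star, a_star = 1.0, 1.0   # fixed state-action pair used for grouping

# Each individual's next-state samples at (s*, a*):
#   s' = kappa * s + a + noise   (toy linear transition, invented for illustration)
samples = {
    i: true_kappa[i] * s_star + a_star + 0.05 * rng.standard_normal(200)
    for i in range(len(true_kappa))
}

# Group individuals whose empirical next-state means are close (threshold invented).
means = np.array([samples[i].mean() for i in samples])
groups = (means > means.mean()).astype(int)   # 0 = low-kappa group, 1 = high-kappa group
```

With enough revisits of the same state-action pair, the empirical means separate cleanly and the recovered grouping matches the latent one, which is the intuition behind grouping individuals by their transition behavior.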
Summary: The authors consider a problem where the underlying dynamical process is an MDP but with a time-invariant latent factor. The authors provide examples of such problems in the real-world. To solve these problems, the authors propose a new mathematical framework called Individualized Markov Decision Processes (iMDPs). Furthermore, the authors provide theoretical guarantees and an algorithm for learning these latent factors from observational data. The authors then demonstrate the performance of this approach on multiple datasets. Strengths: I think the problem considered is a very relevant and useful problem. The authors propose a novel approach to solve this problem and present theoretical guarantees and analyses along with some empirical experiments to validate their approach. The paper is, for the most part, well written and clear. Weaknesses: The authors could have considered simultaneous estimation and control, which is often needed in real-world problems. Furthermore, in choosing baselines for comparisons, the authors have not considered RL methods that take into account trajectory history. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How does this compare with meta learning or transfer learning or a POMDP where each individual is a "slightly different" environment? While the authors state that these models are different, in terms of learning and practical considerations, how do these frameworks differ? 2. Does identifiability of these latent factors get influenced by the policy being followed to generate the data? 3. Why is the individual-specific latent factor $\kappa$ not included in the reward function? 4. What does the term $\mathbb{P}(s, a | u)$ imply in equation (1)? Since $u$ is not observed by the decision maker, how can its action $a$ be conditioned on $u$? 5. $d_s$ and $d_a$ are not defined in line 110. Can the authors add these definitions? 6. Is this learning online or offline? 
From Line 246 it appears that the authors only consider the offline case. However, their control part requires an online setup as mentioned in Line 335. Specifically is the learning of $\kappa$ and optimal policy simultaneous? 7. If an online setup was available, how would the authors propose to use exploration to better identify $\kappa$? Will this exploration to learn $\kappa$ be harmful/costly? 8. Can the authors define the term "post-nonlinear temporal" model in Line 162? 9. Can the authors define the covariance matrices $\Sigma_{\mathbf{A_i}, \mathbf{B_i}}$ in Line 173? 10. Is $n$ in Theorem 4.3 same as $d_s$ in Line 110? 11. Is the iMDP assumed to be an infinite horizon setup? 12. Why are the quantization loss and commitment loss defined after equation (4) the same? Also in Line 9 in Algorithm 2. 13. Is the reconstruction loss differentiable? Can the authors explain as it involves a quantization step as well? Or does this term only compute the gradient with respect to parameters of the decoder? 14. Is the policy adaptation step specified equivalent to augmenting the input given to the policy function or Q function with $\kappa$? 15. In the caption for Figure 3, it would be helpful if the authors could define the shaded regions around the curves. 16. While considering RL baselines, why haven't algorithms for POMDPs that take the entire history into account been considered? 17. Can the authors explain the motivation behind the ablation study starting in Line 317? 18. What do the authors mean by the limitation about "instantaneous causal influences" in Line 389? 19. Can the authors give more details in the proof of Theorem 4.2? (Line 678 onwards) 20. In Line 678, is $n$ the number of state components? 21. I could not follow the proof for Theorem 4.3. There are several concepts introduced in the appendix and often the proof refers to a concept explained later. 
It would be helpful if the authors could provide some motivation/intuition for these various constructions/concepts. 22. In line 912, arg max is a set operator and hence $=$ should be replaced with $\in$. 23. In Line 7 in Algorithm 1, what is the policy used to collect the trajectory samples? 24. Can the authors give the specific values for the dimensions and other architecture details used in their numerical examples? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations of their work in the Conclusions section and in the Impact Statement section (Appendix H). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **A1:** Thanks for the helpful questions. Here is a brief summary of the differences. We also empirically compared our method with meta-learning and transfer-learning techniques and reported results in REBUTTAL.pdf. - **Meta-learning** trains the model on a variety of tasks so that it can efficiently apply what it has learned to new tasks. Unlike our method, it does not assume a time-invariant latent factor, has no guarantee of identifiability, and does not provide a clear clue of adaptation. - **Transfer learning** focuses on leveraging knowledge from one domain to improve performance in a new domain while facing challenges such as negative transfer. Our method can identify the latent individual-specific factors and realize individualized decision-making with interpretability. - **POMDPs** focus on dealing with incomplete information and making decisions under uncertainty with time-varying hidden states. Our work emphasizes individualized decision-making considering latent factors that influence state transitions with fully observable states. **A2:** In the current theoretical treatment, yes, the identifiability is influenced by the policy. According to our proof technique, this assumption is needed. Thank you for pointing it out and we have made this condition explicit in the updated paper. At the same time, it might be possible to avoid this assumption although the proof seems to be nontrivial. **A3:** In our setting, we assume that different groups have no different reward weightings. Thus $\kappa$ only influences the states and transition dynamics. **A4:** We believe this might be a misunderstanding. $\mathbb{P}(s,a|u)$ represents the joint distribution of $(s,a)$ for each individual. Here, $u$ denotes different individuals which is explicitly observed. This notation is used for mathematical convenience. **A5, A10, A20:** Both $n$ in Theorem 4.3 and $d_s$ in line 110 represent the state dimension, and $d_a$ denotes the action dimension. 
We didn’t find $n$ in line 678; perhaps you referred to another line number? If you mean the $n$ in lines 166, 728, or 739, it also represents the state dimension. **A6, A7:** The estimation network is pre-trained offline. When a new individual arrives, we estimate $\kappa$ and adapt the policy simultaneously, and the policy is adapted through new interactions. Such exploration is necessary to better identify $\kappa$ and discover the optimal policy in RL. This has been provided explicitly in the updated paper. **A8, A9:** The definition of the post-nonlinear temporal model and covariance matrices are provided in Appendix C.1 and C.4.1, respectively. In light of your question, we have also moved them to the main paper in the updated paper. **A11:** Yes, we assume an infinite horizon setup in iMDP. **A12, A13:** This might be a misunderstanding. The quantization loss and the commitment loss are different and serve unique purposes. The key is in the positioning of the stop gradient (sg[·]) [1], showing how these losses influence the encoder and the codebook differently. The stop gradient allows the model to handle the non-differentiable quantization step, ensuring that gradients are computed for the encoder and the rest of the model, making the reconstruction loss differentiable. [1] Van Den Oord, Aaron, and Oriol Vinyals. "Neural discrete representation learning." Advances in neural information processing systems 30 (2017). **A14:** Yes, we identify and incorporate $\kappa$ into the policy function to learn an individualized policy. **A15:** The shaded regions represent the standard deviation. We have added an explanation in the revision. **A16:** Our problem and POMDP address different settings. We assume fully observable states influenced by latent individual-specific factors, rather than partially observable states. Under a similar setting, we compared our method with meta-learning and transfer learning techniques.
**A17:** The ablation study analyzes the contributions of different components in the framework. The sequential encoder aligns our framework with the theorem requirements, and the conditional decoder meets our data generation process. The noise estimator is used to increase robustness. **A18:** Our current framework only considers transitions from time $t$ to $t+1$. However, in some cases, there may be causal influences that happen instantaneously within the same time step, which is beyond the scope of our current discussion. **A19:** Individuals sharing the same latent factor can be grouped due to similar state transitions. For a specific state-action pair $(s^*, a^*)$, the next state $s'$ should be consistent across individuals in the same group. We define $t^j$ as the time of the $j$-th occurrence of $(s^*, a^*)$ and, for each individual, collect the next states $X = \{ s_{t^j+1} \mid j = 1, 2, \dots \}$. By comparing $X$ across different individuals, we group those with similar transition behaviors and achieve identifiability by Lemma B.1. **A21:** We appreciate your feedback; indeed, this part is dense and involves many concepts and derivations. The proof for Theorem 4.3 has two parts. - Structure Identifiability: Lemma C.1 shows that the rank of the sub-covariance matrix of the observed variables equals the number of minimal choke sets, which represent the minimal sets that separate the two observed variable sets forming the sub-covariance matrix. The difference between the calculated rank and the expected choke factors indicates the number of latent factors. - Parameter Identifiability: We estimate $\alpha$ and $\beta$, which describe the influence of states and actions, using a regression model. The latent coefficient $\lambda$ is identified using orthogonal techniques in factor analysis. **A22:** We have revised the notation accordingly. **A23:** The trajectory samples are collected using a random policy.
**A24:** The summary of architecture details is provided in REBUTTAL.pdf and more details have been added in the updated manuscript. --- Rebuttal 2: Title: Could you please let us know whether our responses properly addressed your concern? Comment: Dear Reviewer t6CB, Thank you again for your valuable time dedicated to reviewing our paper and for your helpful suggestions. We particularly appreciate your questions regarding the problem setting, theorem, and framework. So we are eager to see whether our responses properly addressed your concerns and would be grateful for your feedback. With best wishes, Authors of submission 12027 --- Rebuttal Comment 2.1: Comment: I thank the authors for their detailed explanations to my questions. Based on these responses, I have increased my score. However, there are some points that I have not yet understood: A. "POMDPs focus on dealing with incomplete information and making decisions under uncertainty with time-varying hidden states. Our work emphasizes individualized decision-making considering latent factors that influence state transitions with fully observable states". I did not understand this. If the states are fully observable, then in an MDP framework, the transitions should not depend on any other factor. Am I missing something here? B. Regarding A12 and A13, I find the definitions: 1. $\mathcal{L}_{Quant} = \sum_i \lVert sg[z_i] - e_i \rVert^2$ 2. $\mathcal{L}_{Commit} = \sum_i \lVert e_i - sg[z_i] \rVert^2$ to be identical. (I have left out the $m$ in the subscript on the RHS due to a formatting error.) What am I missing here? --- Reply to Comment 2.1.1: Comment: Thank you so much for your time and feedback. We sincerely appreciate you updating your recommendation. Please let us answer your questions below. **Follow-Up Q1: If the states are fully observable, then in an MDP framework, the transitions should not depend on any other factor. Am I missing something here?** **Follow-Up A1**: Thanks for the good question.
According to the definitions [1-2], the difference between POMDPs and MDPs is that in a POMDP, the agent is unable to directly observe the current state $s$. Instead, the POMDP agent only receives a noisy or partial observation $o$, which is decided by the sensor model $O(o|s, a)$ (or sometimes $O(o|s)$). In our problem setting, although the states $s$ are fully observable, the transition probability $\mathbb{P}(s'|s, a, \kappa)$ is influenced by the unobserved variable $\kappa$, along with the observed variables ($s$ and $a$). The dependency of the transition on observed or unobserved variables does not necessarily determine whether the environment is modeled as an MDP or a POMDP. Therefore, despite including a latent variable, our setting conforms to the definition of an MDP. [1] Kaelbling, Leslie Pack, Michael L. Littman, and Anthony R. Cassandra. "Planning and acting in partially observable stochastic domains." Artificial intelligence 101.1-2 (1998): 99-134. [2] Igl, Maximilian, et al. "Deep variational reinforcement learning for POMDPs." International conference on machine learning. PMLR, 2018. **Follow-Up Q2: Regarding A12 and A13, I find the definitions to be identical. What am I missing here?** **Follow-Up A2**: Sorry for the confusion and you are right. Thank you for pointing out this typo, and we have corrected it in the updated manuscript. The quantization loss and commitment loss should be $\mathcal{L}_{\text{Quant}} = \sum_i \lVert \text{sg}[z_{m,i}] - e_{m,i} \rVert^2$ and $\mathcal{L}_{\text{Commit}} = \sum_i \lVert z_{m,i} - \text{sg}[e_{m,i}] \rVert^2$, respectively.
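The role of sg[·] in these two losses can also be checked numerically: they agree in value, and only the placement of the stop gradient decides which argument receives gradient. A minimal numpy sketch (hand-written derivatives stand in for autograd; all values and names are toy choices of ours):

```python
import numpy as np

def l_quant(z, e):
    # L_Quant = sum_i || sg[z_i] - e_i ||^2 : z is treated as a constant,
    # so only the codebook vectors e receive gradient.
    return np.sum((z - e) ** 2)

def l_commit(z, e):
    # L_Commit = sum_i || z_i - sg[e_i] ||^2 : e is treated as a constant,
    # so only the encoder outputs z receive gradient.
    return np.sum((z - e) ** 2)

z = np.array([1.0, 2.0])     # encoder outputs (toy values)
e = np.array([0.5, 2.5])     # nearest codebook entries (toy values)

# Forward values are identical...
same_value = np.isclose(l_quant(z, e), l_commit(z, e))

# ...but the gradients flow to different arguments:
grad_quant_e = 2 * (e - z)   # d L_Quant / d e  (z held constant by sg)
grad_commit_z = 2 * (z - e)  # d L_Commit / d z (e held constant by sg)
```

This is exactly why the reviewer found the written formulas "identical": the distinction is invisible in the forward pass and only matters for which parameters (codebook vs. encoder) each loss updates.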
Summary: The authors of this paper establish the identifiability of latent state-transition processes in reinforcement learning (RL) and propose a practical method for learning these processes from observed state-action trajectories. The focus is on personalized reinforcement learning (RL), where different individuals may exhibit different state-transition processes influenced by latent individual-specific factors. The study introduces the concept of a personalized Markov decision process (iMDP) and provides theoretical guarantees for learning state-transition processes with latent factors. Experiments on multiple datasets demonstrate the effectiveness of the method in inferring these factors and learning personalized strategies. Strengths: 1. **Innovation and Importance**: The paper addresses a significant challenge in RL—identifying latent state-transition processes influenced by individual-specific factors. This is crucial for optimizing personalized strategies in fields like healthcare and education. 2. **Theoretical Contribution**: The authors provide theoretical guarantees for the identifiability of latent factors, which is a significant contribution to the field of personalized RL. 3. **Experimental Validation**: The method has been experimentally validated on multiple datasets, showcasing its effectiveness in practical applications. 4. **Modular Framework**: The proposed iMDP framework is modular, allowing for independent study and improvement of each component, which is beneficial for further research and practical applications. Weaknesses: 1. **Broader Experimental Scope**: Although the experiments cover multiple datasets, including more diverse environments to test the generality of the findings would be valuable, such as testing in more complex high-dimensional RL environments. 2. 
**Comparison with Other Methods**: A more comprehensive comparison with other state-of-the-art methods in meta-RL and multi-task reinforcement learning could strengthen the validation of the proposed method. 3. **Discussion in Appendix**: Adding a discussion in the appendix about the differences and connections between iMDP and meta-RL, context-based RL, and multi-task reinforcement learning would be beneficial. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. **Consideration of Transformer as Encoder**: In Section 5.1, "temporal dependency from the sequential observations," the use of Transformers as the primary encoder was not considered. Supplementing with relevant experimental comparisons would provide clearer insights for readers. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The paper explicitly discusses its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Broader experimental scope:** Thank you very much for your suggestion. In the updated manuscript, we validated our algorithm in the AhnChemoEnv environment in DTRGym [1] and inventory management tasks [2]. AhnChemoEnv is designed to simulate cancer treatment through chemotherapy, allowing realistic modeling of tumor growth and response to treatment. Inventory management is an important real-world problem that aims to keep inventories of goods at optimal levels to minimize inventory costs while maximizing revenue from demand fulfillment. We tested the performance of our algorithm on the inventory task with state dimensions of 50, 100, and 200. The experimental results (see Figure 1 and Figure 3 in REBUTTAL.pdf) show that our framework outperforms other algorithms in terms of initial reward and final reward. Detailed analyses are provided in Q2. [1] Luo, Zhiyao, et al. "DTR-Bench: An in silico Environment and Benchmark Platform for Reinforcement Learning Based Dynamic Treatment Regime." arXiv preprint arXiv:2405.18610 (2024). [2] Sun, Yuewen, et al. "ACAMDA: Improving Data Efficiency in Reinforcement Learning Through Guided Counterfactual Data Augmentation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 14. 2024. **Q2: Comparison with other methods:** We appreciate your suggestion. In the updated manuscript, we have added more benchmarks and further compared our algorithm against (1) meta gradient RL, (2) multitask RL, (3) policy distillation, and (4) non-policy adaptation. Additionally, we have included two evaluation metrics: (1) initial reward, which measures the initial performance benefiting from policy adaptation, and (2) final reward, which measures the performance after the full training process. Please refer to REBUTTAL.pdf for the detailed results. The comparisons suggest that our method achieves the highest initial and final rewards compared to the benchmarks. 
Specifically, it shows a significant jump-start compared to non-policy adaptation, validating the effectiveness of our adaptation approach. The meta-gradient method optimizes the hyperparameters of the learning algorithm by calculating the gradient of the learning process, allowing rapid adaptation to new tasks as they change. However, due to the continuous adjustment of learning strategies during training, it converges more slowly and the adaptation effect is less significant compared to our algorithm. Multitask RL improves learning efficiency by sharing model strategies across different tasks. This requires first training policies on multiple tasks, which can be time-consuming (and even risky) during exploration. Moreover, identifying which new task corresponds to a previously trained task can be challenging. Our algorithm addresses this by estimating directly using $\kappa$ without requiring prior knowledge. Policy distillation transfers the knowledge of already trained teacher models to a student model, allowing the student to perform well across multiple tasks. However, this approach highly relies on the performance of the teacher models; insufficiently trained teacher models can negatively impact the final performance. Our algorithm does not depend on the source policy performance; subsequent policy optimization is based on the new environment, leading to better final performance. **Q3: Discussion in Appendix:** In the updated manuscript, we have added a section discussing the differences and connections between iMDP and meta-RL, context-based RL, and multi-task reinforcement learning, summarized below. - **Meta-learning** trains a learning model on a variety of tasks so that it can efficiently apply what it has learned to new tasks. Unlike our method, it does not assume a time-invariant latent factor, has no guarantee of identifiability, and does not provide a clear clue of adaptation. 
iMDP captures how an individual's belonging to a certain group affects their interactions within an environment, allowing for individualized policy adaptation. Moreover, iMDP provides a guarantee of identifiability and develops a corresponding estimation framework that potentially offers better interpretability. - **Contextual MDPs** consider the general contextual influence on transition probabilities and rewards. However, the context variables are assumed to be partially observable and do not guarantee the identifiability of the context variables. In our case, when the latent factors are finite, our method guarantees group-wise identifiability even when the transition processes are nonparametric. In the cases of infinite latent factors, identification could be achieved under proper assumptions. - **Multi-task RL** involves learning policies for a variety of tasks simultaneously. The goal of the agent is to perform well on all these tasks, which may have similar or different objectives. It often involves sharing information between tasks to improve learning efficiency and policy performance. Instead of focusing on policy optimization for all tasks, iMDP identifies latent individual-specific factors that implicitly influence the decision-making process. These factors indicate the unique properties of each individual, providing explanatory clues for policy adaptation. **Q4: Consideration of transformer as encoder:** We appreciate your insightful suggestion. In the revision, we have incorporated transformers into our framework to perform a comparative analysis against the models currently in use, and the results are reported in Figure (c) in REBUTTAL.pdf. Although both frameworks can achieve identifiability, the Transformer encoder can achieve faster convergence compared with our previous framework.
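The iMDP idea discussed in this rebuttal, where a time-invariant latent group factor governs each individual's transition dynamics, can be illustrated with a toy tabular sketch. This is not the authors' implementation; the state space, the two groups, and their transition kernels below are made-up assumptions for illustration only.

```python
import numpy as np

def rollout(P, policy, kappa, s0, T, rng):
    """Simulate one individual's trajectory in a toy tabular iMDP.

    P[kappa][s][a] is a distribution over next states: the latent
    group factor kappa selects the individual's transition kernel,
    which is what makes the process individualized.
    """
    states, actions = [s0], []
    s = s0
    for _ in range(T):
        a = policy(s)
        s = rng.choice(len(P[kappa][s][a]), p=P[kappa][s][a])
        actions.append(a)
        states.append(s)
    return states, actions

# Two hypothetical groups with opposite dynamics over 2 states, 1 action.
P = np.array([
    [[[0.9, 0.1]], [[0.9, 0.1]]],   # group 0: drawn toward state 0
    [[[0.1, 0.9]], [[0.1, 0.9]]],   # group 1: drawn toward state 1
])
rng = np.random.default_rng(0)
states0, _ = rollout(P, lambda s: 0, kappa=0, s0=0, T=200, rng=rng)
states1, _ = rollout(P, lambda s: 0, kappa=1, s0=0, T=200, rng=rng)
```

Two individuals started in the same state and following the same policy thus produce visibly different trajectories purely because of their latent group, which is the signal the estimation framework tries to recover.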
Summary: The paper titled "Identifying Latent State-Transition Processes for Individualized Reinforcement Learning" addresses the challenge of optimizing individualized reinforcement learning (RL) policies by focusing on latent individual-specific factors that influence state transitions. This is particularly significant in domains like healthcare and education, where individual differences can causally affect outcomes. The primary contribution of the paper is the establishment of the identifiability of latent factors that drive individualized state-transition processes. The authors propose a novel framework called Individualized Markov Decision Processes (iMDPs) that incorporates these latent factors into the RL framework. The key advancements include: 1. The paper provides theoretical guarantees for the identifiability of latent factors under both finite and infinite conditions, making it possible to distinguish different underlying components in state transitions. 2. The authors develop a generative-based method to effectively estimate these latent individual-specific factors from observed state-action trajectories. This method employs a variational autoencoder with a vector quantization layer to discretize the latent space and ensure accurate factor estimation. 3. The proposed method is empirically validated across various datasets, demonstrating its effectiveness in identifying latent state-transition processes and improving the learning of individualized policies. 4. The paper outlines a two-stage approach for policy learning, where the estimated latent factors are used to tailor RL policies for individuals, thereby enhancing the policy’s adaptability to different environments and individuals. Strengths: - The introduction of the Individualized Markov Decision Processes (iMDPs) framework is a novel approach to integrating latent individual-specific factors into reinforcement learning (RL). 
This approach goes beyond existing models by considering the fixed influence of these latent factors on state transitions, addressing a significant gap in the current literature. - The paper establishes theoretical guarantees for the identifiability of these latent factors under both finite and infinite conditions. - The use of variational autoencoders (VAEs) with vector quantization to discretize the latent space and estimate the latent factors is, unsurprisingly, a straightforward application of these techniques in the context of RL. - The experimental validation is thorough, with the proposed method tested on multiple datasets, including synthetic data and real-world scenarios like the Persuasion For Good corpus. The experiments are well-designed to demonstrate the effectiveness of the method in different settings, showcasing its robustness and versatility. Weaknesses: **Assumptions on Latent Factors** The paper assumes that the latent individual-specific factors are time-invariant, which might not hold in all real-world scenarios. For instance, in healthcare, a patient's condition can change over time, affecting the state-transition processes. Addressing this limitation could involve extending the framework to account for time-varying latent factors or providing a discussion on how to handle such scenarios. **Limited Real-World Validation** While the empirical validation includes synthetic datasets and a real-world dataset (Persuasion For Good corpus), the application domains are somewhat limited. The healthcare and education examples are discussed theoretically but not empirically validated with real-world data. To strengthen the paper, the authors could include experiments using real-world healthcare or education datasets to demonstrate the practical utility and robustness of their method in these critical applications. A possible healthcare platform is [1], where there are 4 environments with customizable PK/PD variation setups. 
**Policy Adaptation for New Individuals** The approach for policy adaptation for new individuals involves initializing the policy based on the estimated group factor and fine-tuning it with new interactions. However, the paper lacks detailed evaluation metrics or benchmarks to assess the effectiveness and efficiency of this adaptation process. Including more quantitative analysis and comparisons with existing methods for policy adaptation would enhance the understanding of the benefits and limitations of the proposed approach. ### Actionable Suggestions 1. Add a discussion of time-varying latent factors in your limitation section. 2. Incorporate real-world experiments in healthcare or education to validate the practical utility of the method. 3. Provide detailed evaluation metrics and benchmarks for the policy adaptation process. References [1] Luo, Zhiyao, et al. "DTR-Bench: An in silico Environment and Benchmark Platform for Reinforcement Learning Based Dynamic Treatment Regime." arXiv preprint arXiv:2405.18610 (2024). Technical Quality: 3 Clarity: 3 Questions for Authors: How effective is the policy adaptation process for new individuals compared to existing methods? Can the authors provide quantitative benchmarks or metrics to evaluate this process? How does the variability in initial states across individuals impact the performance and identifiability of the latent factors? Are there specific strategies to handle high variability in initial states? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Privacy Concerns: The individualized nature of the proposed method implies that it relies on detailed personal data. This raises potential privacy concerns, especially in sensitive domains like healthcare and education. The authors should discuss how they plan to handle privacy issues, possibly suggesting the use of privacy-preserving techniques such as differential privacy. 
Robustness and Reliability: Another potential impact is related to the robustness and reliability of the individualized policies. In critical applications, such as healthcare, incorrect policy recommendations could have serious consequences. The authors should address its limitations from this angle. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
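The vector-quantization layer that this review's summary attributes to the estimation method can be illustrated with a minimal numpy sketch. This is only the nearest-codebook lookup; the codebook values and latent points below are invented for illustration, and a real VQ-VAE additionally needs straight-through gradient estimation and commitment/codebook losses, which are omitted here.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each continuous latent z[i] to its nearest codebook entry.

    z:        (n, d) encoder outputs
    codebook: (K, d) learned discrete codes
    Returns the quantized latents and the chosen code indices.
    """
    # Pairwise squared distances between latents and codebook entries.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

# Hypothetical 2-D codebook with K = 3 discrete codes.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
z = np.array([[0.1, -0.2], [0.9, 1.2]])
zq, idx = vector_quantize(z, codebook)
```

The index `idx` is the discrete latent factor estimate: each individual's continuous encoding is snapped to one of K codes, which is what makes a finite set of latent groups recoverable.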
Rebuttal 1: Rebuttal: **Q1: Discussion of time-varying latent factors:** Thank you for pointing out this interesting scenario. It is indeed possible for the latent individual-specific factor to be time-variant in the considered problem. We believe our framework can be extended to handle time-varying cases, although establishing the theoretical identifiability is highly non-trivial. For instance, if we directly allow these latent factors to be time-varying, it seems hopeless to recover the latent variables, since each individual may have a specific latent factor. Therefore, further constraints would be needed. For instance, the theoretical identifiability might benefit from the constraint that the total number of possible values of the latent factors over time and across different individuals is finite. We hope this important extension can be achieved by researchers in the field by leveraging our framework and further considering the idea in [1]. [1] Hu, Yingyao, and Matthew Shum. "Nonparametric identification of dynamic models with unobserved state variables." Journal of Econometrics 171.1 (2012): 32-44. **Q2: Additional real-world experiments in healthcare:** Thank you very much for your suggestion and for providing the healthcare platform. In the updated manuscript, we validated our algorithm in the AhnChemoEnv environment in DTRGym [2] and created different groups with PK/PD variation. The experimental results (see Figure 1 in REBUTTAL.pdf) show that our framework outperforms other algorithms in terms of initial & final reward. Detailed analyses are provided in Q3. [2] Luo, Zhiyao, et al. "DTR-Bench: An in silico Environment and Benchmark Platform for Reinforcement Learning Based Dynamic Treatment Regime." arXiv preprint arXiv:2405.18610 (2024). **Q3: Evaluation and discussion on policy adaptation process:** We appreciate your suggestion. 
In the updated manuscript, we have added more baselines and further compared our algorithm against (1) meta gradient RL, (2) multitask RL, (3) policy distillation, and (4) non-policy adaptation. Additionally, we have included two evaluation metrics: (1) initial reward, which measures the initial performance benefiting from policy adaptation, and (2) final reward, which measures the performance after the full training process. Please refer to REBUTTAL.pdf for the detailed results. Our method achieves the highest initial and final rewards compared to the baselines. Specifically, it shows a significant jump-start compared to non-policy adaptation, validating the effectiveness of our adaptation approach. The meta-gradient method optimizes the hyperparameters of the learning algorithm by calculating the gradient of the learning process, allowing rapid adaptation to new tasks as they change. However, due to the continuous adjustment of learning strategies during training, it converges more slowly and the adaptation effect is less significant compared to our algorithm. Multitask RL improves learning efficiency by sharing model strategies across different tasks. This requires first training policies on multiple tasks, which can be time-consuming (and even risky) during exploration. Moreover, identifying which new task corresponds to a previously trained task can be challenging. Our algorithm addresses this by estimating directly using $\kappa$ without requiring prior knowledge. Policy distillation transfers the knowledge of already trained teacher models to a student model, allowing the student to perform well across multiple tasks. However, this approach highly relies on the performance of the teacher models; insufficiently trained teacher models can negatively impact the final performance. Our algorithm does not depend on the source policy performance; subsequent policy optimization is based on the new environment, leading to better final performance. 
**Q5: Discussion on variability in initial states:** This is a great question! We can still establish the identifiability of the latent variable for each trajectory provided by each user. However, since each trajectory has a finite (usually pretty small) length, the estimated values might be noisy. The second step is to merge latent variables that should have the same value. A straightforward method is as follows: we first estimate the value of the latent variable for each individual and then cluster the estimates to decide which individuals should be classified into the same group. A more principled way to address this issue has yet to be developed; we have conducted additional experiments to verify the feasibility of the straightforward method and report the results in Figure (a-b) in REBUTTAL.pdf. The initial state distributions are defined with two types: normal and uniform. For the normal distribution, the means are set to [0, 1, 1] and the standard deviations are set to [1, 2, 1], respectively. For the uniform distribution, the range for each dimension is defined with lower bounds [0, -1, 1] and upper bounds [1, 1, 1.5]. As you can see from the experiment, although the initial states have high variability, the estimated values of the latent factors for the 200 individuals ultimately separate cleanly and can be divided into 4 groups. **Q6: Impact on privacy issues \& robustness and reliability:** In the updated manuscript, we have included a separate limitation section to discuss how we address these limitations. - Potential privacy risks: We could use de-identifying techniques and remove direct identifiers such as names and zip codes, apply masking techniques such as data perturbation, and use pseudonymization to replace private identifiers with artificial ones. Incorporating differential privacy techniques is also interesting and we will leave it as future work. 
- Robustness and reliability: We could provide an assumption checklist to help users determine whether our work is applicable to their specific scenarios, thereby avoiding misuse and improving reliability. --- Rebuttal Comment 1.1: Comment: Thank you for the response. The authors have addressed most of my major concerns. I have no further questions, and I am happy to raise the score to 7. --- Reply to Comment 1.1.1: Comment: Dear Reviewer cEn6, Thank you so much for your time and feedback. It means a lot to us. We are so happy that most of your major concerns were properly addressed. Wish you all the best, Authors of submission 12027
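The estimate-then-cluster procedure described in this rebuttal (estimate a latent value per individual, then merge individuals by clustering the estimates) can be sketched with a tiny 1-D k-means. The group values, noise level, and number of individuals below are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

def cluster_latents(x, k, iters=50):
    """Tiny 1-D k-means: merge noisy per-individual latent estimates
    into k groups. Quantile initialization keeps centers ordered and
    avoids empty clusters on well-separated data."""
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return labels, centers

# Hypothetical setup: 200 individuals from 4 latent groups, each
# trajectory yielding a noisy scalar estimate of its group's value.
rng = np.random.default_rng(1)
true_group = rng.integers(0, 4, size=200)
group_values = np.array([0.0, 2.0, 4.0, 6.0])
estimates = group_values[true_group] + 0.2 * rng.normal(size=200)
labels, centers = cluster_latents(estimates, k=4)
```

When the per-trajectory noise is small relative to the separation between group values, the recovered cluster labels coincide with the true groups, mirroring the 4-group separation reported in the rebuttal's figure.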
Rebuttal 1: Rebuttal: We thank all reviewers for their time dedicated to reviewing the paper and valuable comments. We have revised the manuscript accordingly as described below. Concerns about the experiments are addressed collectively in the REBUTTAL.pdf. Your further feedback, if any, would be appreciated. Pdf: /pdf/9e05b2d8993639f6552be9707d086205d8006891.pdf
Dataset source: NeurIPS_2024_submissions_huggingface
Conference year: 2024
Synthetic Programming Elicitation for Text-to-Code in Very Low-Resource Programming and Formal Languages
Accept (poster)
Summary: Recent advances in large language models (LLMs) for code applications have demonstrated remarkable zero-shot fluency and instruction following on challenging code-related tasks, ranging from test case generation to self-repair. However, the authors note that these models struggle to compose syntactically valid programs in programming languages that were unrepresented during pre-training, known as very low-resource programming languages (VLPLs). VLPLs are critical in various settings, including domain-specific languages for internal tools and tool-chains, and legacy languages. Inspired by program elicitation, the authors propose creating a hallucinated library within a high-resource language that can be automatically compiled to the VLPL. This library enables the LLM to generate and self-repair code within the syntax of a familiar language. Specifically, the authors introduce Synthetic Programming Elicitation and Compilation (SPEAC), an approach that enables LLMs to generate syntactically valid code even for VLPLs. The authors empirically evaluate the performance of SPEAC in a case study and find that, compared to existing retrieval and fine-tuning baselines, SPEAC produces syntactically correct programs more frequently without sacrificing semantic correctness. Strengths: + Important area + Novel idea + Good performance Weaknesses: - Missing some details - Fail to consider code generation approaches via self-repair or self-debugging - Only conducted experiments on UCLID5 Technical Quality: 2 Clarity: 3 Questions for Authors: Overall, I appreciate the idea presented in this paper. The research tackles a critical problem in the field of LLMs for code generation, particularly focusing on VLPLs, which are often overlooked but essential in many applications. The concept of using MAX-SAT to bridge the gap between high-resource and very low-resource languages is innovative and offers a new direction for future research. 
The empirical results demonstrate that SPEAC outperforms existing baselines in producing syntactically correct programs without losing semantic accuracy. However, the authors need to address some issues. 1. **Missing some details**: The authors omit important details about the semantic score. How is the semantic score computed? Is it done automatically or labeled manually? If labeled manually, how many people were involved, and what is the kappa score? 2. **Fail to consider code generation approaches via self-repair or self-debugging**: While the proposed SPEAC approach is promising, the paper does not adequately address how it compares to or integrates with existing methods that leverage self-repair or self-debugging capabilities in LLMs. These approaches are also relevant to this paper. 3. **Only conducted experiments on UCLID5**: The evaluation of SPEAC is limited to a single case study on UCLID5. Broader validation across multiple VLPLs would enhance the generalizability of the findings. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: See Questions, thanks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. Please see below for our answers to the specific questions posed. **6oM2-Q1: Details of the semantic score?** The semantic scores were manually judged by one professor with experience teaching formal methods (undergraduate and graduate level courses). This small number of judges is definitely a limitation and we will make this explicit in the paper. However, the key takeaway from the results is that Eudoxus lifts the syntactic correctness from 2/33 (Fine-tuned GPT-3.5-turbo) to 28/33 (Eudoxus using GPT-3.5-turbo) while not terribly harming semantic correctness. **6oM2-Q2: Self-repair or self-debugging?** Please see response to reviewer 9eZc question 3 (9eZc-Q3). **6oM2-Q3: Only UCLID5?** The goal of the UCLID5 case study was to conduct an in depth demonstration of our techniques for a very low resource language that looks very different from high resource languages and has a wide range of features. UCLID5 supports software verification, like Boogie (Barnett et al. 2005); hardware verification, like ABC (Brayton and Mishchenko 2010); and can even be used for modeling distributed systems, like P (Desai et al. 2013). We acknowledged that including only one case study is a threat to validity (lines 328-331) but UCLID5’s wide range of features is evidence for the generalizability of our approach. --- Rebuttal 2: Comment: Thanks. I like the idea of this paper and have decided to give it a score of 7. I was initially wavering between a 6 and a 7 because while the idea of this paper is really good, the number of human judges is too small. Ultimately, I settled on a 7.
Summary: In this paper, the authors present a novel approach called SPEAC (Synthetic Programming Elicitation and Compilation) to enable large language models (LLMs) to generate syntactically valid code for very low-resource programming languages (VLPLs). The approach involves creating a hallucinated library within a high-resource language, which can be compiled into the VLPL. The paper includes a case study demonstrating that SPEAC outperforms existing methods in producing syntactically correct programs. Evaluation results demonstrate that SPEAC outperforms all baselines on syntactic correctness (see “Parse Rate” in Tab 1), achieving full syntactic correctness on 78% of benchmarks. Strengths: The paper addresses a novel problem of generating code for VLPLs by leveraging LLMs, which is a relatively unexplored area. The empirical evaluation demonstrates the effectiveness of SPEAC compared to other baselines. Weaknesses: The claims made about the significant improvements achieved by SPEAC are not fully supported by a diverse set of experiments. The paper is poorly organized and lacks a clear, logical flow. Technical Quality: 2 Clarity: 1 Questions for Authors: The first paragraph in Section 2 is very disorganized. If the authors aim to discuss LLMs for code generation, they should focus on them rather than the benchmarks. It would be beneficial to include more examples of LLMs and either remove the benchmarks discussion or introduce benchmarks in a separate section dedicated to the code generation area. The experimental section is weak. Reviewers can only find the experiment results on the eighth page. The authors should present the experimental setup, results, and analysis more prominently and earlier in the paper to support their claims better. Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. Please see below for our answers to the specific questions posed. **9mNj-Q1: Reorganization of the paper?** We are happy to make all the suggested organization changes under the questions header. Specifically, we are happy to (1) reorganize the first paragraph of Section 2; (2) introduce benchmarks in a separate section; and (3) present the experimental setup, results, and analysis more prominently and earlier in the paper. We estimate these changes should take no more than one day of work. **9mNj-Q2: More LLMs?** We are happy to include more examples of LLMs, as suggested, although our initial experimentation suggests that their performance generating syntactically correct UCLID5 code will be comparable to the OpenAI models (i.e., poor). If there are specific LLMs the reviewer believes we should add, we are happy to follow suggestions. --- Rebuttal Comment 1.1: Comment: Thank you for the responses, and sorry for the late response due to a flurry of proposals and review dues. I have read the other reviewers' rebuttals and the authors' response. I have increased my overall assessment as some of my concerns have been addressed, while other concerns are shown below. **Other LLMs** I recommend conducting experiments with OpenCodeInterpreter, DeepSeek-Coder, XwinCoder, CodeLlama, WizardCoder, and StarCoder2 among open-source LLMs.
Summary: The paper presents a framework to generate programs given natural language and targeting very low-resource programming languages. They first choose a language well-represented in the training data (in this case Python), and then make the LLM generate in a subset of that language, and then compile the generated program to the target language. When the generated program is checked by formal techniques and found to be inconsistent, some parts of the program are replaced with holes and fixed with LLM again. They apply this method to a very low-resource language VLPL and show that it is able to generate programs that parse (24 out of 33 test problems) and 11 out of 24 have fully correct semantic meaning, while the baseline few-shot prompting and fine-tune methods obtain near zero parsable programs. Strengths: * The results are very strong compared to the baseline few-shot prompting and fine-tune methods, which obtain near zero parsable programs. * The framework is described in a general way and potentially applicable to other low-resource programming languages as well. * The system integrating LLM program generation and formal methods for checking the program is very interesting and demonstrates the effectiveness of the method empirically. Weaknesses: A major concern of the paper is that there should be other stronger and popular methods to use as baselines. For example, a common method is iterative fixing of unparsable code by prompting LLM with the parser feedback. Also, many-shot examples (more than one) and including language specification in the prompt may also help, which is also a very standard prompting technique among the community. The method presented here would likely still work better, but it would be more informative to compare with those methods and see how much improvement is made. Technical Quality: 3 Clarity: 3 Questions for Authors: * Would few-shot prompting baseline methods with more than one example work better? 
* How does constraint decoding compare to the method here to get a parsable program in VLPL instead of Python subset? * How does the method compare with reflexion-style methods where iterative prompting is used to fix the problem by including feedback from the parser in the prompt at each iteration? * It is mentioned that there is a set of regression tests comprised of 317 tests taken directly from the UCLID5 GitHub repository. How are these tests used? Are they used to evaluate besides the results on the 33 test problems in Table 1? * How does the self-instruct with fine-tune method work specifically? How many synthetic programs are generated? * Can you provide the full prompt that is used for generation and fixing? Is the code in the appendix A included in the prompt in addition to the prompt template shown in Fig 4? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. Please see below for our answers to the specific questions posed. **9eZc-Q1: Would few-shot prompting work better?** Adding more examples improves both the baseline's performance and Eudoxus' performance (which currently uses no examples, see Fig. 4). Using few-shot prompting with 3 in-context examples leaves gpt-3.5-turbo's performance at zero successful compilations. For gpt-4-turbo, we find that both with and without CoT it produces successful compilations for 3/33. **9eZc-Q2: Comparison with constrained decoding?** Our work is complementary to constrained decoding in two ways. First, one of the pitfalls of constrained decoding is that it will perform poorly if the external constraints do not align with the underlying vocabulary of the LLM (Beurer-Kellner et al. 2024), i.e., if the desired output is "unnatural" to the LLM then constrained decoding does poorly. Synthetic programming elicitation could help bridge this gap for very low resource programming languages. Second, most existing constrained decoding techniques support only context-free languages (Beurer-Kellner et al. 2024) but most programming languages are at least context-sensitive. Our MaxSMT-driven repair can be applied after constrained decoding to fix more complicated, context-sensitive errors, like type-checking bugs. **9eZc-Q3: Comparison with reflection style methods?** We gave gpt-3.5 and gpt-4 compiler feedback in a self-repair loop starting from a zero-shot prompt and found no benefits. Specifically, our repair prompts provided the LLMs with their previously generated code and the corresponding compiler error. We let both LLMs iterate up to five times (same as Eudoxus in our empirical evaluation) but neither LLM was able to successfully repair its own code. We believe this is because the baseline LLM outputs are too far from correct outputs: there are just too many errors to fix. 
This is consistent with observations from existing work for verification languages, where self-repair alone shows only modest improvements (e.g., Loughridge et al. 2024). Reflection-style methods may be more impactful when combined with more advanced techniques (as opposed to a zero-shot starting point), like Eudoxus. Our LLM repair step is currently extremely simple, giving no compiler feedback and doing no self-debugging (see Fig. 4b). Combining our work with complementary reflection-style methods is a promising avenue for future work. **9eZc-Q4: How are the regression tests used?** The 317 tests are used for fine-tuning and for synthetic programming elicitation. See 9eZc-Q5 for more information on fine-tuning. For synthetic programming elicitation, we wrote natural language descriptions for a subset of the regression tests and used these natural language descriptions (without the corresponding regression test) to study the “natural” behavior of the LLMs. We mention this on lines 199-200 but will make it clearer. **9eZc-Q5: How does self-instruct with fine-tuning work?** For self-instruct, we took existing tests in UCLID5 and asked the off-the-shelf LLM to create a plausible natural language description. This provides us with 317 training pairs of natural language and valid UCLID5 on which to fine-tune. Just as in the self-instruct paper (Wang et al. 2023), the natural language descriptions (questions) are synthetic but the programs (solutions) are not synthetic. We briefly describe this on lines 295-297 but will clarify and add the appropriate citation. **9eZc-Q6: Providing the full prompt?** Yes and yes! In fact, all of this is already included in the supplementary material that we submitted with the paper (but with possibly insufficient documentation, which we apologize for). You can see the full initial prompt in `get_sketch_prompt` of `eudoxus-main/src/eudoxus/llm/prompts.py` and the full hole-filling prompt in `get_complete_prompt` of the same file. 
We are also happy to add the full conversations for every benchmark in the supplementary material along with a few in a new appendix (so that one does not need to execute the code to see a full conversation).
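The zero-shot self-repair baseline described in 9eZc-Q3 (feed the LLM its previous code plus the compiler error, iterate up to five times) can be sketched as a simple compile-repair loop. A minimal sketch, assuming hypothetical stand-ins `generate`, `repair`, and `compile_check` for the LLM calls and the UCLID5 compiler; this is not Eudoxus' actual implementation:

```python
def self_repair_loop(task, generate, repair, compile_check, max_iters=5):
    """Generate code, then feed compiler errors back for up to max_iters rounds."""
    code = generate(task)
    for _ in range(max_iters):
        ok, error = compile_check(code)
        if ok:
            return code, True
        # Reflexion-style step: previous code + compiler error go back to the LLM.
        code = repair(task, code, error)
    ok, _ = compile_check(code)
    return code, ok


# Toy stand-ins: the "compiler" demands a trailing semicolon,
# and the "LLM" repair simply appends one.
gen = lambda task: "module main {}"
rep = lambda task, code, err: code + ";"
chk = lambda code: (code.endswith(";"), None if code.endswith(";") else "missing ';'")

fixed, success = self_repair_loop("spec", gen, rep, chk)
```

With real LLM outputs that are "too far from correct", as the rebuttal notes, the loop simply exhausts its iterations without ever reaching `ok == True`.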
Summary: The paper addresses the challenges of LLMs in generating code for very low-resource programming languages (VLPLs), which are not represented in their pre-training data. Traditional methods to enhance LLM efficacy in this domain include prompting, constrained decoding, and fine-tuning. The authors propose a novel approach called SPEAC, which leverages natural programming elicitation to align with LLMs' tendencies, including their common hallucinations. The technique involves translating a high-resource language (P) into a target VLPL (T) using a subset of P and iterative error repair through LLMs. The largest consistent sub-tree of a generated program in P is identified using a MAX-SMT solver, and errors are iteratively fixed. The approach is demonstrated on the UCLID5 verification language, showing significant improvement in syntactic correctness compared to traditional fine-tuning methods. Strengths: - The idea presented is novel, and the results are impressive, demonstrating significant improvements over traditional methods. - The paper is well-written and engaging. The introduction clearly explains the contributions and the problem being addressed. The running example in Section 3 is helpful for understanding the proposed method. Weaknesses: I did not find any major weakness in the paper; however, the paper is focused on UCLID5 programs, which represent a relatively narrow set of use cases, and it is not obvious how well this method will generalize to other VLPLs. Technical Quality: 4 Clarity: 4 Questions for Authors: In the paper, the authors propose using a subset of the high-resource language P, referred to as C, which can be transformed into the target low-resource language T. How is the existence of C determined in general? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Section 5.1 discusses the process of defining the subset C of P at a high level, but it lacks rigorous detail. 
I would appreciate seeing a more elaborate theory developed around this process in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. Please see below for our answers to the question posed. **tGyy-Q1: How is the existence of C determined?** Determining the existence of C is a design process. It is possible that some target languages are so esoteric that no subset of a high-resource language is even remotely compatible. However, showing that UCLID5, a language that is nothing like Python, has a good C in Python is evidence for the existence of a language C for many target languages. We described the general process for finding the child language C through grounded theory in Section 5.1 interleaved with the specific process for UCLID5 (mostly on lines 223 to 241) but we omitted some details due to space constraints. We will add detail to the description of the general process by drawing on general guidelines for using grounded theory in software engineering (e.g., Stol et al. 2016) and specific case studies (e.g., Barke et al. 2023). We will add detail to the specific process for UCLID5 by including concrete examples, possibly in the appendix. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thank you for your response and explanation. Please include general guidelines and specific case studies in the next iterations of the paper. My question has been addressed, and I have no further inquiries.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Advancing Spiking Neural Networks for Sequential Modeling with Central Pattern Generators
Accept (spotlight)
Summary: This paper proposes a new position encoding technique, CPG-PE, for SNNs, which is inspired by the central pattern generator in the human brain and improves the ability of SNNs to process sequence data. Experimental results show that CPG-PE outperforms traditional SNNs in multiple fields such as time-series prediction, text classification, and image classification. Strengths: 1. The bio-inspired CPG-PE technique is proposed to enhance the sequence processing capability of SNNs. 2. The effectiveness of CPG-PE is verified in multiple fields, including time series, text classification, and image classification. 3. The performance of SNNs in various tasks is significantly improved, outperforming traditional models. 4. CPG-PE is designed with compatibility with neuromorphic hardware in mind, facilitating deployment in practical applications. Weaknesses: 1. No experiments were conducted on the large-scale ImageNet image classification dataset. 2. Other encoding methods in SNNs, such as rate coding [1] and temporal coding [2;3], were not discussed. [1] Kim Y, Park H, Moitra A, et al. Rate coding or direct coding: Which one is better for accurate, robust, and energy-efficient spiking neural networks?[C]//ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022: 71-75. [2] Han B, Roy K. Deep spiking neural network: Energy efficiency through time based coding[C]//European Conference on Computer Vision. Cham: Springer International Publishing, 2020: 388-404. [3] Comsa I M, Potempa K, Versari L, et al. Temporal coding in spiking neural networks with alpha synaptic function[C]//ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020: 8529-8533. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. I suppose the traditional positional encoding in Spikformer can be seen as direct encoding in SNNs. 
In contrast, the CPG-PE proposed in this paper is more like a temporal encoding with dynamic expressions related to time series. Could the authors discuss the differences between other encoding schemes [1;2;3] and positional encoding in SNNs within the related work? 2. Can the authors provide experimental results of CPG-PE on the ImageNet image classification dataset? 3. Some typos: Formulas 4, 5, 6, 7, 8, and 10 are missing commas. There should be a comma after each formula. I would be pleased to raise the score if the authors address my concerns. [1] Kim Y, Park H, Moitra A, et al. Rate coding or direct coding: Which one is better for accurate, robust, and energy-efficient spiking neural networks?[C]//ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022: 71-75. [2] Han B, Roy K. Deep spiking neural network: Energy efficiency through time based coding[C]//European Conference on Computer Vision. Cham: Springer International Publishing, 2020: 388-404. [3] Comsa I M, Potempa K, Versari L, et al. Temporal coding in spiking neural networks with alpha synaptic function[C]//ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020: 8529-8533. Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The author has discussed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are delighted that our contribution is well recognized, and we thank you for the valuable suggestions, which truly enhance the quality of our paper and make it more understandable for the wider community. Responses to your concerns are presented as follows: ### **1. ImageNet Dataset (W1, Q2)** Thanks for your suggestion. We have conducted experiments with Spikformer without positional encoding (PE), Spikformer with relative positional encoding (RPE), and Spikformer with our proposed CPG-PE on the ImageNet dataset. The results are as follows:

| Model | Param(M) | ImageNet |
| --------------------------- | -------- | -------- |
| Spikformer w/o PE | $15.50$ | $69.46$% |
| Spikformer w/ RPE \[1\] | $16.81$ | $70.24$% |
| Spikformer w/ CPG-PE \[ours\] | $15.66$ | $71.17$% |

Specifically, we set the depth to 8 and the dimension of representation to 384. For other experimental settings, we have faithfully followed the guidelines in \[1\]. From the table, we can see that CPG-PE performs well on large-scale image datasets. However, due to time constraints, we were unable to conduct experiments on Random PE or Float PE. We believe that the above results demonstrate the effectiveness of our proposed CPG-PE in positional encoding. We will add these experimental results to Table 3 in our revised manuscript. ### **2. Discussion on temporal coding and rate coding (W2, Q1)** Thanks for your suggestion. (1) We carefully reviewed the literature you provided and found your suggestions valuable for improving the quality of our paper. In the revised manuscript, we will include the following discussion in the related work section (due to the character limitation, we omit some references here but will include them in the revised manuscript): "Spiking Neural Networks (SNNs) employ several coding methods to encode input information, each offering unique advantages. 
Direct coding \[1\]\[2\], the simplest form and widely used in image tasks, directly associates spikes with specific values or events, providing straightforward and interpretable outputs but often lacking efficiency for complex tasks. Rate coding \[3\]\[4\], where the input is represented by the frequency of spikes within a given timeframe, is more robust and widely used but can be less precise due to its reliance on averaged spike rates. Temporal coding \[5\]\[6\] (a.k.a. latency coding) encodes information based on the timing of individual spikes, allowing for high temporal precision and efficient representation of dynamic inputs, though it can be computationally demanding. In addition, Delta coding \[7\] represents changes in input signals through spikes, focusing on differences rather than absolute values, which can enhance efficiency and response times but may introduce complexity in decoding. Each of these methods contributes to the versatility and applicability of SNNs in various domains, from neuroscience to artificial intelligence. The SNNs we considered in this paper fall into the category of rate coding, since backpropagation is conducted on spike rates. Meanwhile, CPG-PE can be considered as converting temporal information into the spike rates of a group of neurons (Equations 11-12), and this is why CPG-PE can improve performance for sequential data. It is possible to introduce temporal-coding learning algorithms for the CPG neurons to tackle more complex sequence structures, which remains future work." (2) In the Related Work section, we have discussed the positional encoding used in SNNs. ### **3. Typos (Q3)** Thanks for the reminder. We will add commas after formulas 4-10 to ensure readers are not misled. ### Reference [1] Zhou Z, Zhu Y, He C, et al. Spikformer: When Spiking Neural Network Meets Transformer[C]//The Eleventh International Conference on Learning Representations (ICLR). 2023. [2] Yao M, Hu J, Zhou Z, et al. Spike-driven transformer[J]. 
Advances in Neural Information Processing Systems (NeurIPS), 2023. [3] Kim Y, Park H, Moitra A, et al. Rate coding or direct coding: Which one is better for accurate, robust, and energy-efficient spiking neural networks?[C]//ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022: 71-75. [4] Lv C, Xu J, Zheng X. Spiking Convolutional Neural Networks for Text Classification[C]. // The Eleventh International Conference on Learning Representations (ICLR). 2023. [5] Han B, Roy K. Deep spiking neural network: Energy efficiency through time based coding[C]//European Conference on Computer Vision. Cham: Springer International Publishing, 2020: 388-404. [6] Comsa I M, Potempa K, Versari L, et al. Temporal coding in spiking neural networks with alpha synaptic function[C]//ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020: 8529-8533. [7] Yoon Y C. LIF and simplified SRM neurons encode signals into spikes via a form of asynchronous pulse sigma–delta modulation[J]. IEEE transactions on neural networks and learning systems, 2016, 28(5): 1192-1205. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I think this is a nice bit of discussion and could be added to the manuscript. In light of the additional discussion, I'd like to raise my score to a 7. This is an interesting piece of work and would be a nice addition to NeurIPS.
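As an illustration of the coding schemes surveyed in the rebuttal above (rate coding versus temporal/latency coding), here is a minimal sketch, not the paper's implementation, of encoding a scalar in $[0,1]$ into a spike train of length `T`; the deterministic spreading of rate-coded spikes is an assumption made for simplicity (rate coding is often stochastic in practice):

```python
import numpy as np

def rate_code(x, T):
    """Deterministic rate coding: emit about round(x*T) spikes, spread evenly
    over T time steps (duplicate indices from rounding could merge spikes)."""
    n = int(round(x * T))
    s = np.zeros(T, dtype=int)
    if n > 0:
        idx = np.linspace(0, T - 1, n).round().astype(int)
        s[idx] = 1
    return s

def latency_code(x, T):
    """Temporal (latency) coding: a single spike whose timing carries the
    value -- larger x fires earlier."""
    s = np.zeros(T, dtype=int)
    s[int(round((1 - x) * (T - 1)))] = 1
    return s
```

Rate coding reads out the value as the average firing frequency, while latency coding reads it out from the time of the first spike, matching the precision/robustness trade-off described above.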
Summary: This paper introduces a novel positional encoding method for SNNs called CPG-PE, inspired by central pattern generators (CPGs) in biological neural systems. The authors demonstrate both theoretically and empirically that CPG-PE can effectively capture positional information in sequential data while maintaining the spike-based nature of SNNs. The approach is evaluated on a range of tasks including time-series forecasting, text classification, and image classification. Strengths: 1. Strong theoretical foundation: The authors mathematically demonstrate how CPG-PE relates to conventional sinusoidal positional encoding used in transformers. 2. Comprehensive empirical evaluation across time-series forecasting, text classification, and image classification tasks shows consistent performance improvements when incorporating CPG-PE. 3. The method is biologically plausible and potentially compatible with neuromorphic hardware, as it can be implemented using leaky integrate-and-fire neurons. 4. Well-structured paper with clear motivation, methodology, and insightful analysis of CPG-PE properties and their relationship to biological CPGs. Weaknesses: 1. Typographical error on line 158 where X is incorrectly specified as belonging to the real number domain instead of the binary domain {0,1} for spike data. 2. The authors should clarify the similarities and differences between the decay mechanism in CPG-PE and that of LIF neurons. Technical Quality: 3 Clarity: 3 Questions for Authors: See my weakness part. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed limitations and future work in Appendix E, which is commendable. They discuss the challenges of applying CPG-PE to non-sequential data like images and propose potential solutions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your comments and suggestions, which enhanced the quality of our paper. Responses to your concerns and questions are hereby presented: ### **1. Typographical error (W1)** Apologies for the confusion caused by this typo. In line 158, $X$ is the input spike matrix and it belongs to the binary domain $\{0,1\}$ rather than the real domain $\mathbb{R}$, i.e., $X \in \{0,1\}^{T \times B \times L \times D}$. This error will be corrected in our revised manuscript. ### **2. Similarities and differences between the decay mechanism of CPG-PE and that of LIF neurons (W2)** (1) If we understand your question correctly (please kindly let us know if we misunderstood), the question is about the differences between the decay mechanisms of *CPG neurons* and LIF neurons rather than of *CPG-PE*. To be clear, CPG indicates that the neuron (together with other neurons in a CPG circuit) plays the role of a rhythmic output generator, while LIF is a specific dynamical model of a spiking neuron. CPG-PE is our proposed method, which leverages the spikes of CPG neurons for encoding positional information in sequential data. Therefore, CPG-PE and LIF neurons are orthogonal concepts, and we have proved that CPG-PE could be composed of LIF neurons (as shown in Appendix C), or be manually set (for simplicity, as used in our experiments). (2) Regarding the decay mechanism, as shown in Equation (2), $\beta$ controls the decay rate of LIF neurons. However, our proposed CPG-PE does not have a decay mechanism. We would greatly appreciate it if you could provide further clarification on this issue, and we are happy to address it. Please let us know if you have any remaining questions or concerns; we are committed to addressing the issues. --- Rebuttal Comment 1.1: Title: Thank you Comment: The reviewer thanks the authors for the discussion. It addressed all of my concerns. I decide to keep my score.
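To make the decay mechanism discussed above concrete, here is a minimal single-neuron LIF sketch in which `beta` plays the decay role that the rebuttal attributes to $\beta$ in Equation (2); the parameter values and the hard-reset rule are illustrative assumptions, not the paper's exact dynamics:

```python
def lif_neuron(inputs, beta=0.9, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron: the membrane potential V
    decays by a factor beta at each step, integrates the input current, and
    emits a spike followed by a hard reset when V crosses the threshold."""
    v, spikes = 0.0, []
    for i in inputs:
        v = beta * v + i          # leak (decay) + integrate
        if v >= threshold:
            spikes.append(1)      # fire
            v = 0.0               # hard reset
        else:
            spikes.append(0)
    return spikes
```

With `beta=0.5` and a constant sub-threshold input of 0.6, the potential needs three steps to accumulate past the threshold, illustrating how the decay factor delays firing; a CPG-PE neuron, by contrast, has no such leak term.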
Summary: The lack of an effective and hardware-efficient spike-form positional encoding strategy in SNNs has been a consistent motivation for this study. Drawing inspiration from the central pattern generators (CPGs) in the human brain, which produce rhythmic patterned outputs without requiring rhythmic inputs, this work proposes a novel PE technique for SNNs, termed CPG-PE. Extensive experiments across various domains show superior performance with CPG-PE. Strengths: 1. To the best of my knowledge, this is the first work on position encoding in SNNs, laying the foundation for efficient sequence modeling in SNNs. 2. The approach utilizes the coupling of multi-neuron pulse signals as position encoding, which is innovative. 3. The authors demonstrated the effectiveness of this method across various tasks. 4. The spike-position encoding generation method, through mutual inhibition between two groups of spiking neurons, is brain-inspired and hardware-friendly. Weaknesses: 1. This type of positional encoding, through aggregation, introduces a small number of additional parameters and computational overhead. Please provide ablation experiments demonstrating that the performance improvement is not solely due to these factors. 2. I suggest the authors include results on ImageNet to demonstrate the effectiveness on large-scale datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In ANNs, positional encoding is typically added to features. Please analyze the similarities and differences of this aggregation-based positional encoding compared to the PE used in ANNs. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I believe the authors' discussion on limitations is comprehensive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and suggestions, which are valuable for enhancing our paper. We are pleased that our contributions are well recognized. Responses to your concerns and questions are hereby presented: ### **1. Ablation experiments on parameters (W1).** Thanks for your suggestion. (1) Firstly, while our method increases the parameter count compared to the version without positional encoding (w/o PE), this increase is relatively small. For example, on CIFAR10, the parameter count of Spikformer with CPG-PE is only $0.17$M (~$2.17$% of the total parameters) larger than that of Spikformer w/o PE. (2) Secondly, we conducted ablation experiments by reducing the parameter count of Spikformer with CPG-PE to be comparable to Spikformer w/o PE, allowing for a more direct performance comparison. The results are as follows:

| Model | CIFAR10 | CIFAR10-DVS | CIFAR100 |
| ----------------------------------- | ------------------------------ | ------------------------------ | ------------------------------ |
| Spikformer w/o PE | Param: $8.00$M, Accuracy: $93.77$% | Param: $1.99$M, Accuracy: $76.40$% | Param: $8.04$M, Accuracy: $73.59$% |
| Spikformer w/ CPG-PE \[Equal Param\] | Param: $7.99$M, Accuracy: $94.60$% | Param: $1.99$M, Accuracy: $78.00$% | Param: $8.02$M, Accuracy: $76.91$% |

Specifically, we adjusted the representation dimension of Spikformer with CPG-PE to make the parameter counts similar to those of Spikformer without PE. For instance, on CIFAR10, we changed the dimension from $384$ to $380$, resulting in a parameter count of $7.99$M, which is almost equal to $8.00$M. As shown in the table above, we can conclude that when the parameter counts are equal, Spikformer with CPG-PE significantly outperforms Spikformer without PE across all image classification datasets, proving our proposed CPG-PE is effective. ### **2. ImageNet Dataset (W2).** Thanks for your suggestion. 
We have conducted experiments with Spikformer without positional encoding (PE), Spikformer with relative positional encoding (RPE), and Spikformer with our proposed CPG-PE on the ImageNet dataset. The results are as follows:

| Model | Param(M) | ImageNet |
| --------------------------- | -------- | -------- |
| Spikformer w/o PE | $15.50$ | $69.46$% |
| Spikformer w/ RPE \[1\] | $16.81$ | $70.24$% |
| Spikformer w/ CPG-PE \[ours\] | $15.66$ | $71.17$% |

Specifically, we set the depth to 8 and the dimension of representation to 384. From the table, we can see that CPG-PE performs well on large-scale image datasets. However, due to time constraints, we were unable to conduct experiments on Random PE or Float PE. We believe that the above results demonstrate the effectiveness of our proposed CPG-PE in positional encoding. We will add these experimental results to Table 3 in our revised manuscript. ### **3. Similarities and differences between CPG-PE and addition-based PE in ANNs (Q1).** In ANNs, especially in language tasks, the difference between using addition and concatenation for positional encoding is not significant \[2\]\[3\]. However, in spiking neural networks (SNNs), addition is not suitable for spike matrices because it can result in non-binary values, which is one of the motivations for designing SNN-PE. This paper addresses this issue by first concatenating the spike matrices and then using a projection layer to project the result back to the initial dimension. ### Reference [1] Zhou Z, Zhu Y, He C, et al. Spikformer: When Spiking Neural Network Meets Transformer[C]//The Eleventh International Conference on Learning Representations (ICLR). 2023. [2] Rosendahl J, Tran V A K, Wang W, et al. Analysis of positional encodings for neural machine translation[C]//Proceedings of the 16th International Conference on Spoken Language Translation. Association for Computational Linguistics. 2019. [3] Ke G, He D, Liu T Y. 
Rethinking Positional Encoding in Language Pre-training[C]//International Conference on Learning Representations (ICLR). 2021. --- Rebuttal 2: Title: Response to the rebuttal Comment: Thank you for your detailed response. The experiments have alleviated my concerns about the overhead of CPG-PE. The further validation on the large-scale ImageNet has demonstrated its generalization and effectiveness. Therefore, I would like to increase my score.
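The concatenate-then-project scheme described in the response above can be sketched as follows; the shapes, the random binary inputs, and the projection matrix `W` are hypothetical illustrations, not the paper's trained layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary spike input X: (L positions, D features); spike-form PE: (L, D_pe).
L, D, D_pe = 6, 8, 4
X = rng.integers(0, 2, size=(L, D))
PE = rng.integers(0, 2, size=(L, D_pe))

# Element-wise addition X + PE could yield the value 2 (non-binary), so
# instead concatenate along the feature axis -- the result stays binary --
# then project back to dimension D with a (hypothetical) linear layer.
concat = np.concatenate([X, PE], axis=-1)   # (L, D + D_pe), still in {0, 1}
W = rng.standard_normal((D + D_pe, D))
out = concat @ W                            # (L, D), fed to spiking neurons
```

The key point is that the binary constraint is preserved up to the projection, whose output is then consumed by spiking neurons rather than emitted as spikes directly.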
Summary: This paper introduces central pattern generators (CPGs) from neuroscience into the SNN framework as a novel method for position encoding. Through mathematical derivation, it is proven that the existing abstract PE methods in transformers are actually a particular solution for a specific type of CPG. The effectiveness of CPG is validated through experiments across several domain benchmarks. Strengths: 1. This article connects existing abstract Positional Encoding (PE) methods in transformers with Central Pattern Generators (CPGs) in the human brain through mathematical derivation, showing that the former can be viewed as a specific mathematical solution to the membrane potential dynamics of the latter. This presents an interesting viewpoint. 2. The paper is well-organized and clearly written, offering high readability. Readers can effortlessly grasp the authors' intentions, supported by both the textual explanations and accompanying illustrations. Weaknesses: 1. The authors have not convinced me why the problem addressed in this paper is very important, i.e., why the existing PE methods in SNNs are such a big problem that they need to be improved by CPG. 2. The method proposed in the paper is simple and does not provide enough inspiring insights; the contribution is relatively limited. 3. The implementation of CPG-PE on hardware involves coupled nonlinear oscillators that require frequent updates of neuron membrane potentials, entailing floating-point computations and memory read-write operations, which result in additional energy expenditures. The paper should scrutinize and analyze whether the performance enhancements afforded by this encoding method justify the additional energy costs. This trade-off demands a detailed examination to assess its viability in practical applications. 4. In Sec3.1, why is "F(x)=b<=0, H(y)=d<=0" followed after “...gain membrane voltage with constant speed” instead of “F(x)=b>0, H(y)=d>0”? 5. 
In Sec4.2, does the CPG-Full method replace all linear layers in the model with CPG-Linear layers? Why is its performance not as good as that of CPG-PE? Can you provide further analysis and explanation? Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to weakness 3, 4, 5 Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments and suggestions, which are valuable for enhancing our paper. The following are responses to your individual concerns: ### **1. Why is positional encoding important to SNNs (W1)?** Currently, SNNs have been applied to a variety of tasks beyond image processing, including time-series forecasting [1] and text classification [2]. However, due to the lack of a suitable positional encoding (PE) method, it remains challenging for SNNs to capture indexing information, rhythmic patterns, and periodic data. Existing PE methods for SNNs do not meet the essential criteria: uniqueness of each position and formulation in spike-form. Inspired by CPG neurons, we propose CPG-PE, which performs well on both sequential and image tasks. Additionally, we ensure its hardware-friendliness and provide insights into the role of CPGs in neuroscience (Section 6). ### **2. The method is simple and does not provide enough inspiring insights; contribution is relatively limited (W2).** > "Everything should be made as simple as possible, but no simpler." – Albert Einstein (1) We believe simplicity is a strength of our method (as it is effective), not a weakness. As it is easy to implement, there is a higher probability that more researchers can benefit from leveraging our method on their models to improve performance. Furthermore, since CPG-PE is novel to the SNN community, complex models and algorithms with even better capacity may emerge following our work, but it is necessary that we first comprehensively demonstrate that a simple CPG-PE can consistently improve the performance of SNNs. (2) Our contributions can be summarized as: **(1) Pioneer in the exploration of spike-version PE for SNNs.** To the best of our knowledge, our proposed CPG-PE is the first work on spike-version PE in SNNs, which is both brain-inspired and hardware-friendly. We corrected the inappropriate PE designs previously used in SNNs. 
**(2) Consistent Performance Gain.** CPG-PE enhances the performance of SNNs across a wide range of tasks, including time-series forecasting, text classification, and image classification. **(3) Insights on neuroscience and neuromorphic hardware.** We mathematically proved that the traditional PE in Transformers is a particular solution of the membrane potential variations in a specific type of CPG, and we also proved that CPG-PE can be implemented with 2 LIF neurons so that it will not introduce any burden of redesigning hardware. ### **3. Frequent updates of neuron membrane potentials bring additional floating-point operations and energy on hardware (W3).** We fully agree, and indeed, we have discussed this aspect in our paper. (1) **Physical Implementation of CPG-PE** We have proposed an approach to physically implement CPG-PE with only 2 LIF neurons, introducing no extra effort in chip design. A CPG-PE neuron can be viewed as an autonomic neuron that will emit a burst of $K$ spikes after resting for $R$ time steps. Through rigorous derivation (see **Appendix C**), we prove that the system behaves periodically with a period $T = (R + K)\Delta T$, i.e., CPG-PE can be implemented with 2 LIF neurons. This approach eliminates the need for additional floating-point computations and memory read-write operations on neuromorphic hardware to frequently update neuron membrane potentials. (2) **Support from Existing Circuit Implementations** Existing circuit implementations on hardware [3][4] strongly support our proposed CPG-PE, as these works are based on a similar idea of **coupled LIF neurons**. Circuits based on complementary metal-oxide-semiconductor technology [3] simplify membrane potential calculations, reduce power consumption significantly, and eliminate the need for additional memory to store membrane potentials. Furthermore, memristor-based CPG circuits [4] have been shown to minimize circuit complexity and increase energy efficiency. 
Therefore, we believe that our proposed approach is hardware-friendly, without requiring additional operations on hardware, which has also been acknowledged by other reviewers (3Lmi, s3xa, Pjez). ### **4. Misleading typo in Section 3.1 (W4).** We apologize for this typo and the resulting confusion. It should be: "$\mathbf{F}(\mathbf{x})=b>0, \mathbf{H}(\mathbf{y})=d>0$." After our thorough inspection and verification, we confirm that this does not affect the derivation results, and Equations 6-10 remain valid. We will correct this typo in our revised manuscript. ### **5. Why is the performance of CPG-Full not as good as that of CPG-PE (W5)?** (1) CPG-Full stands for replacing all linear layers with CPG-Linear layers. CPG-Linear was introduced in Appendix D. The original CPG-PE models the inputs, while CPG-Full models the hidden states. (2) The difference between the two is not significant. In terms of metrics, the average $R^2$ is identical, and the $RSE$ difference is not statistically significant (p-value=$0.9351$, Student's *t*-test). (3) We conducted this experiment for two main reasons: Firstly, to explore how to implement modular CPG, hypothesizing that simply replacing Linear with CPG-Linear would allow the PE functionality to work effectively. Secondly, this experiment is supplementary, aiming to confirm that the performance improvement of SNNs is due to the functionality of PE rather than an enhancement in representation capability, as CPG-Full does not affect representation capability. Please kindly let us know if there is any remaining concern. We are committed to addressing your concerns. [1] Lv C, et al. Efficient and Effective Time-Series Forecasting with Spiking Neural Networks. ICML, 2024. [2] Bal M, et al. Spikingbert: Distilling bert to train spiking language models using implicit differentiation. AAAI, 2024. [3] Vogelstein J, et al. Dynamic control of the central pattern generator for locomotion. Biological Cybernetics, 2006. [4] Dutta S, et al. 
Programmable coupled oscillators for synchronized locomotion. Nature Communications, 2019. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thank you for the response to my questions. The idea of building positional encoding into SNNs is interesting, and I am willing to increase my score to 6.
Rebuttal 1: Rebuttal: # Global Response We express our gratitude to all the reviewers for their valuable insights and for acknowledging our contributions to advancing the sequential modeling ability of SNNs through central pattern generators. We are encouraged by the comments highlighting the strengths of our work: - Clear motivation, novelty, and innovation (Reviewers 3Lmi, s3xa) - Effectiveness / consistent performance improvement (Reviewers 3Lmi, s3xa, Pjez) - Hardware-friendliness and biological plausibility (Reviewers 3Lmi, s3xa, Pjez) - Comprehensive experiments and analysis (Reviewers 3Lmi, s3xa, Pjez) - Well-organized and clearly written (Reviewers P6YU, s3xa) A major concern most reviewers share is the evaluation on a large-scale image classification benchmark, i.e., ImageNet. We agree with this comment, and to address it, we have conducted experiments with Spikformer without positional encoding (PE), Spikformer with relative positional encoding (RPE), and Spikformer with our proposed CPG-PE on the ImageNet dataset. The results are as follows: | Model | Param (M) | ImageNet Acc (Top-1) | | --------------------------- | -------- | -------- | | Spikformer w/o PE | $15.50$ | $69.46$% | | Spikformer w/ RPE | $16.81$ | $70.24$% | | Spikformer w/ CPG-PE \[ours\] | $15.66$ | $71.17$% | In particular, we set the depth to 8 and the dimension of the representation to 384. From the table, we can see that CPG-PE performs well on large-scale image datasets. However, due to time constraints, we were unable to conduct experiments on Random PE or Float PE. We believe that the above results demonstrate the effectiveness of our proposed CPG-PE for positional encoding. One comment among the weaknesses concerns our contributions.
We would like to highlight several key contributions that distinguish our work from existing literature: **(1) Pioneering the exploration of positional encoding for SNNs.** To the best of our knowledge, our proposed CPG-PE is the first spike-based positional encoding for SNNs, and it is both brain-inspired and hardware-friendly. We corrected the inappropriate positional encoding designs previously used in SNNs. **(2) Consistent performance gain with a simple method.** CPG-PE enhances the performance of SNNs across a wide range of tasks, including time-series forecasting, text classification, and image classification. **(3) Insights on neuroscience and neuromorphic hardware.** We mathematically proved that the traditional positional encoding in Transformers is a particular solution of the membrane potential variations in a specific type of CPG. Additionally, we demonstrated that CPG-PE can be implemented with 2 LIF neurons, ensuring it does not introduce any burden of redesigning hardware. We will address the concerns and polish our paper in the revised version to enhance its clarity and accessibility for the wider community. To summarize the updates: 1. Extensive experimental results on a large-scale dataset, specifically ImageNet. 2. Further discussion on spike encoding methods, including temporal coding and rate coding, in the related work section. 3. Revision of typos and proofreading for better language. We are confident that our work contributes to the NeurIPS community by advancing neuromorphic AI and potentially computational neuroscience. We are happy to answer follow-up questions from the reviewers if anything remains unclear.
NeurIPS_2024_submissions_huggingface
2024
QGFN: Controllable Greediness with Action Values
Accept (poster)
Summary: The paper "QGFN: Controllable Greediness with Action Values" introduces a novel approach to enhance Generative Flow Networks (GFNs) by incorporating action-value estimates (Q-values) to control the greediness of sampling policies. This method, called QGFN, includes three variants—p-greedy, p-quantile, and p-of-max—each designed to balance the generation of high-reward samples with the maintenance of diversity. Through comprehensive experiments on tasks like molecule generation and RNA design, the authors demonstrate that QGFN significantly improves the generation of high-utility samples while preserving diversity, providing a practical and effective approach for reward-driven generative tasks. Strengths: - **Innovative Approach**: The introduction of Q-values to control the greediness of sampling policies in Generative Flow Networks (GFNs) is a novel and creative solution to the challenge of balancing high-reward sample generation with diversity. - **Comprehensive Evaluation**: The paper includes thorough experiments across various tasks, such as molecule generation and RNA design, providing strong empirical evidence for the effectiveness of QGFN. - **Ablation Study**: The paper provides in-depth ablation studies on the hyperparameters of GFNs. Weaknesses: - **QGFN variants matter**. This paper does not provide a method to select different QGFN variants. Different variants of QGFN have very different performances in different environments. In Section 5, the authors claim that p-of-max is suitable for small action spaces, and p-quantile is suitable for large action spaces. What if different states have different action spaces? For example, in the graph combinatorial optimization problems [1], earlier states have a much larger action space than later states, and the action space size will change (decrease) during the sampling.
I think compared with a fixed action space, this setting probably requires the use of different QGFN variants on different states during the sampling. - **Lack of more complex environments**. Therefore, I wonder about QGFN's performance on graph combinatorial optimization problems, such as MIS (maximum independent set) [1]. - There are some missing recent works that deal with sampling high-reward candidates with diversity, such as [2, 3]. - I am happy to raise my scores if my concerns are resolved. [1] Zhang, D., Dai, H., Malkin, N., Courville, A., Bengio, Y., & Pan, L. (2023). Let the flows tell: Solving graph combinatorial optimization problems with GFlowNets. arXiv preprint arXiv:2305.17010. [2] Chen, Y., & Mauch, L. (2023). Order-Preserving GFlowNets. arXiv preprint arXiv:2310.00386. [3] Jang, H., Kim, M., & Ahn, S. (2023). Learning Energy Decompositions for Partial Inference of GFlowNets. arXiv preprint arXiv:2310.03301. Technical Quality: 3 Clarity: 3 Questions for Authors: See **Weaknesses** Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See **Weaknesses** Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We appreciate the time and effort you put into reviewing our work. Below, we address your questions and comments in detail. We hope this clarifies our approach and findings. > QGFN variants matter. This paper does not provide a method to select different QGFN variants. Different variants of QGFN have very different performances in different environments. In Section 5, the authors claim that p-of-max is suitable for small action spaces, and p-quantile is suitable for large action spaces. What if different states have different action spaces? For example, in the graph combinatorial optimization problems [1], earlier states have a much larger action space than later states, and the action space size will change (decrease) during the sampling... This is a great question, and as the reviewer points out, we find that the optimal variant tends to be a function of the action space. Although not always optimal, the $p$-greedy variant consistently outperforms its GFN counterpart in finding modes and is a safe default that is less dependent on the action space. We would also like to highlight that a variant such as $p$-of-max QGFN will select a variable number of actions (any action whose $Q$ value is at least a fraction $p$ of the best action's $Q$ value), and might be a suitable choice for varying action-space sizes. It is also likely that more appropriate mixtures exist. Additionally, note that in both the fragment-based molecule generation task and the atom-based molecule generation task (QM9), the size of the action space is highly variable, as it depends on the number of generated fragments/atoms at each timestep. > Lack of more complex environments. Therefore, I wonder about QGFN's performance on graph combinatorial optimization problems, such as MIS.
[1] While we feel that the fragment-based molecule task is complex (Figure 2; $|\mathcal{X}|\approx 10^{100}$), we address the concern by training QGFN on the MIS codebase [1] with default hyperparameters. From preliminary experimentation over the last few days, we show that we can improve upon the baseline model's performance. | METHOD | Small - Metric Size | Small - Top 20 | |----------------------|---------------------|----------------| | FL | 18.20 | 18.72 | | FL - QGFN (p-greedy) | 18.21 | **19.06** | | FL - QGFN (top-p) | 18.20 | 18.75 | | FL - QGFN (high-p) | **18.26** | 18.74 | Table A: Maximum independent set experimental results on the small dataset (200 to 300 vertices) using FL-GFlowNets (Pan et al., 2023). We sample 20 solutions for each graph configuration and report the average and top results. Due to the limited time for training during the rebuttal period, we anticipate that experimenting with different hyperparameters could yield even better performance. We will include these extended results in the revised paper for a more thorough evaluation. > There are some missing recent works that deal with sampling high-reward candidates with diversity, such as [2, 3] We appreciate that other GFN methods exist that increase the diversity-reward quality of sampling. However, our experimentation focuses on showcasing how QGFN can be easily integrated into the GFN framework across various tasks with improved performance. In other words, one _could_ train OP-QGFNs and LED-QGFNs. As shown in the above experimentation on MIS, we successfully used QGFN variants on FL-GFlowNets (Pan et al., 2023) and showed that they can adapt to varying action spaces dynamically and provide a boost in performance. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. However, I think the question, "How to choose the best QGFN variant given different environments?", is still unsolved.
Given a state $s$ and the action space $A(s)$ at this state, there should be more detailed instructions on how to choose the QGFN variant and $p$. Regarding the best $p$, I have additional concerns below, following the question from the first reviewer. From your rebuttal, the choice of $p$ in Table 1 is based on Figure 6. The chosen $p$ values for the p-greedy, p-of-max, and p-quantile QGFN are 0.4, 0.9858, and 0.93 respectively, which are very different from each other (and very strange numbers). According to Figure 6, p-of-max close to the "Greediest" ($p=1$) will perform worse than "Least Greedy" ($p=0$). Therefore, the choice of an appropriate $p$ is non-trivial. (Also, the color lines for the left two and right two subfigures seem to be inconsistent, and the 3rd subfigure has no numbers on the x-axis.) I think choosing the training $p$ using Figure 6 is not appropriate. If you are given a trained $P_F$ and $Q$, you can enumerate different $p$s at inference to select the best $p$ for combining them. However, as I reread the paper, the different $p$s are chosen during training; I do not think it is acceptable to enumerate all different $p$ values in training, since it would be very computationally expensive to retrain the model for every $p$ and unfair to other methods. I temporarily decrease my score to 4. If the above concerns are resolved, I am willing to raise my score again.
Below are our detailed responses to your questions. We hope these address your concerns. We are happy to answer any further questions you may have. Let's start with the choice of $p$. We make two claims: - A, $p$ can be treated like a hyperparameter, set once during training, - B, $p$ (and $\mu$) can be changed after training to _even further_ improve the reward-diversity quality of samples. Figures 2, 3, 4, 5, and 6 (3rd subplot) show A. Treat $p$ as a hyperparameter, employ standard hyperparameter search. Even in this setting, QGFN shows promise. We hope this clarifies our approach to the reviewer. In a desire to show how malleable our method is, we went further and claimed B. This is shown in Figures 6 (two left figures) and 9 and Table 1. Without any retraining, we can get even more performance out of the models by tweaking one single parameter, $p$. **What is this "Greediest" to "Least Greedy" spectrum?** We chose this as an axis rather than 0-1 because 0-1 makes little sense for p-of-max. Instead (and we wrote this in l290-292 but clearly need to make this more prominent in the text), we use 0.9-1 for p-of-max and 0-1 for p-quantile and p-greedy. Below 0.9, p-of-max in practice samples from $P_F$, so it would have been a waste of space to show the 0-0.9 values. To overlay all the results on a single plot, it made more sense to us to simply show the evolution of behavior as a function of "greediness" rather than put too many details and multiple overlapping axes. This in retrospect may have been a confusing choice for the reader, and we will change this to make it clearer. Next, while 0.9858 may seem unusual and arbitrary, it is computed during inference with $p$ chosen from `np.linspace(0.9, 0.999, 16)`. Similarly, 0.93 is the second-to-last value of `np.linspace(0, 1, 16)`.
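To make these grids concrete, the "strange" values fall straight out of the evenly spaced candidate lists described above (a small sketch; the variable names are ours, not the authors'):

```python
import numpy as np

# Inference-time candidate grids for p, as described in the rebuttal:
# 16 evenly spaced values in [0.9, 0.999] for p-of-max, and 16 evenly
# spaced values in [0, 1] for p-quantile (and p-greedy).
p_of_max_grid = np.linspace(0.9, 0.999, 16)
p_quantile_grid = np.linspace(0, 1, 16)

# 0.9858 is simply the 14th candidate of the p-of-max grid, and 0.93 is
# the second-to-last candidate of the p-quantile grid (14/15, rounded).
print(round(float(p_of_max_grid[13]), 4))    # 0.9858
print(round(float(p_quantile_grid[-2]), 2))  # 0.93
```

So neither value was hand-picked; both are just grid points from a standard linear sweep over the greediness parameter.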
> I think choosing training $p$ using Figure 6 is not appropriate The 3rd subplot of Figure 6 does **not** show $p$ determined **after** training; it shows the result of a hyperparameter search for $p$. Why would it not be appropriate to choose a training hyperparameter from the result of this? > I do not think it is acceptable to enumerate all different $p$ in training We believe we perform standard hyperparameter search: $p$ is chosen before any training occurs. This means $p$ is treated like any other hyperparameter, such as the learning rate or batch size. This is common practice in method papers. We want to clarify that we are **not** adjusting $p$ during training based on inference performance. Claim B is entirely about what happens at inference, _after_ having trained models. This distinction between the two claims allows our approach to be computationally feasible and fair when compared to other methods. Note that for baselines, we _also_ do a hyperparameter search on greed-controlling parameters, notably $\beta$. Part of the results of this search are shown in Figure 6's left two subplots. This is fair, standard practice. Finally on this point, we apologize for the color confusion of Figure 6; we will correct this in the revision. > there should be more detailed instruction on how to choose the QGFN variants and $p$ We did not explicitly state this, as our focus is on showing the usefulness of the _existence_ of $Q$-$P_F$ mixtures, which we think is a valuable NeurIPS-level scientific contribution in itself, rather than advising on a specific approach. However, we understand that readers may be curious about which variant to use, and we reiterate what we wrote in our rebuttal, and will emphasize in our paper, that starting with $p$-greedy (with $p=0.5$) is a reliable option. We speculate in our paper that variants have effects which partially depend on the action space because of the nature of our tasks.
These speculations are based on evidence, but instructions for the general case would require vastly more evidence and compute, on many more types of action spaces than what we have. We hope to impress upon the reviewer that this would be worthy of an entirely different 9-page paper. We have introduced a novel method, shown that it works, and that it works because of the hypothesized mechanism; we went beyond that and showed that many variants of the idea are both sensible and capable of producing interesting results. We agree with the reviewer that this paper would be stronger if we had a general recipe, but providing such a comprehensive solution would be beyond the intended scope and contribution of this paper. --- Rebuttal 2: Comment: Thanks for your response. Regarding the additional $Q$, I think the fact that training $Q$ and the GFN costs only 5-10% more than training a standard GFN is very important for justifying the inclusion of the additional $Q$, because modified GFNs also require more training time than standard GFNs. I raise the score to 5. The reason I do not raise it higher is the following. > Even if QGFN took twice as long to train, if it means 3x more high-reward candidates. But the problem is that it does not have so much performance gain, especially given the lack of stronger baselines. In the last part, I actually meant that this paper should contain more baselines like local search GFN and LED GFN, and compare QGFN directly with them (not incorporate $Q$ into them), since they are the most up-to-date GFN variants and do not require an additional $Q$. The claim that $Q$ can improve these GFN variants is additionally from your rebuttal, not from the reviewers. Regarding these two claims, that "Q is better than GFN variants" and "Q can improve the GFN variants", the only result is that FL+Q is slightly better than FL.
Again, I understand the limited rebuttal time can be an issue, but the first claim should not require much coding to verify given these methods are all open-source.
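For readers following this thread, the three $Q$-$P_F$ mixtures under discussion can be sketched roughly as follows. This is our reading of the descriptions in the reviews and rebuttal (GFN at $p=0$, greedy $Q$ at $p=1$; p-of-max keeps any action whose $Q$ value is at least a fraction $p$ of the best); the exact masking semantics and the assumption of non-negative $Q$ values are simplifications of ours, not the paper's definitive formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_action(pf_probs, q_values, p, variant):
    """Pick one action index by mixing the GFN policy P_F with Q(s, .).

    Sketch simplification: assumes non-negative Q values for the
    p-of-max rule (reasonable when rewards are positive).
    """
    pf = np.asarray(pf_probs, dtype=float)
    q = np.asarray(q_values, dtype=float)
    if variant == "p-greedy":
        # With probability p act greedily w.r.t. Q, else sample from P_F.
        if rng.random() < p:
            return int(np.argmax(q))
        return int(rng.choice(len(pf), p=pf))
    if variant == "p-of-max":
        # Keep any action whose Q value is at least a fraction p of the best.
        keep = q >= p * q.max()
    elif variant == "p-quantile":
        # Keep actions at or above the p-th quantile of Q values.
        keep = q >= np.quantile(q, p)
    else:
        raise ValueError(f"unknown variant: {variant}")
    masked = np.where(keep, pf, 0.0)
    masked /= masked.sum()  # renormalize P_F over the kept actions
    return int(rng.choice(len(pf), p=masked))
```

At $p = 0$ each rule essentially falls back to sampling from $P_F$; increasing $p$ concentrates sampling on higher-$Q$ actions, which matches the GFN-to-greedy interpolation debated above.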
Summary: The paper focuses on improving high-reward sample collection, i.e., exploitation, in training GFlowNets. The motivation stems from the fact that the flow may pursue states that lead to many low-reward states rather than focusing on states that lead to high-reward states. To this end, the authors propose incorporating Q-value estimation, i.e., expected future reward, and interpolating between flow and Q to make a transition. Experimentally, the proposed method shows promising performance in fragment and atom-wise molecule generation and RNA-sequence generation. Strengths: - The paper is well-written and easy to follow, with a well-developed motivation and idea. - It seems interesting that the compositional nature of the generation model can be considered to improve the performance of GFlowNets for practical applications. - This paper discusses various design choices for incorporating Q-value estimation, which offer different perspectives on "greediness." - Overall, the experiments are well-done, and the reported results demonstrate the effectiveness of the proposed model. Weaknesses: No major weaknesses. See questions below. Technical Quality: 3 Clarity: 4 Questions for Authors: - Are there failure scenarios where QGFN shows poor performance compared to GFN? (opposite of Figure 1) - I am curious if QGFN performs better than a policy interpolated between GFlowNets trained with low temperature and high temperature (with $p$). - Is there an analytic form of the sampling distribution of terminal states for the proposed methods? - It would be interesting to illustrate the sample efficiency for discovering modes in a toy example, e.g., hyper-grid. - Line 160: "good.." => "good." Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have acknowledged the limitations of their approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to read our paper. We appreciate your questions and have provided detailed answers below. > Are there failure scenarios where QGFN shows poor performance compared to GFN? (opposite of Figure 1) We haven't encountered such scenarios in practice other than $p$ being set too high. In such cases, the generated samples tend to be overly similar to each other, reducing the diversity and generalization of the model. Additionally, it is possible that QGFN could show poor performance if the underlying GFN is improperly trained. If the GFN is suboptimal, the combination of $Q$ could amplify this problem. > I am curious if QGFN performs better than a policy interpolated between GFlowNets trained with low temperature and high temperature (with $p$). While we haven't performed this exact experiment, we can extrapolate from Figure 6b that this interpolation would still be less diverse than QGFN, stemming from the loss in diversity in low-temperature (high $\beta$) GFN. > Is there an analytic form of the sampling distribution of terminal states for the proposed methods? We haven't been able to derive any interesting results in this respect. In the limits of $p$ one simply retrieves either the GFN behavior or the greedy $Q$ policy behavior. Note that in some restricted settings (think of the bandit case for example), sampling from QGFN is provably at least as good as sampling from GFN. We include such an analysis as well as intuition-building for the general case in the global response. > It would be interesting to illustrate the sample efficiency for discovering modes in a toy example, e.g., hyper-grid. While the hyper-grid example has been used in many GFlowNet papers, we do not believe it to be particularly interesting other than as a sanity check of convergence; i.e. it should not be used to rank algorithms. 
The main reason is that the setting is too simple to get much generalization out of the DNN approximating $P_F$ and/or $F$ (cf. [1]). We can nonetheless add such an experiment to the appendix as an additional sanity check. [1] Investigating Generalization Behaviours of Generative Flow Networks, Lazar Atanackovic, Emmanuel Bengio, 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed clarifications. I have no further comment.
Summary: The paper proposes jointly learning a $Q$ function and a policy network $P_F$ to improve the search for high-valued states when training GFlowNets. To achieve this, the authors develop three sampling strategies for composing $Q$ and $P_F$ ($p$-greedy, $p$-of-max, and $p$-quantile), and show that the resulting algorithm, termed QGFN, often leads to faster discovery of modes relative to GFlowNet baselines. In spite of QGFN’s notable performance, I believe the paper would greatly benefit from a more clearly described solution and from the inclusion of stronger, optimization-focused baselines. I will be happy to increase my score if the enumerated weaknesses and questions below are properly addressed during the rebuttal period. Strengths: 1. **Well-described problem**. Section 3 highlights the issues of an exclusively GFlowNet-oriented approach to the search of high-valued states with clarity, namely, the large probability mass associated with low-reward states. 2. **Intuitively sound method with strong empirical results**. When the $Q$ function is accurately learned, QGFN — which interpolates between a GFlowNet (p = 0) and a DQN (p = 1) — should yield samples with higher rewards for larger $p$. This behavior is experimentally confirmed. However, the distribution from which QGFN samples, even when $Q$ is perfectly estimated, is mostly unclear. See weakness 4 below. 3. **Extensive experimental campaign**. (However, important baselines are missing.) Experiments include four commonly used benchmark tasks for GFlowNets, and QGFN often outperforms alternative methods by a large margin. Weaknesses: 1. **Missing important baselines**. FL-GFlowNets (Pan et al., 2023) and LED-GFlowNets (Jang et al., 2024) exhibited strong performance for the optimization-focused tasks of molecule and set generation. Notably, in contrast to the greediness $p$ of QGFN, these methods do not require tuning of a hyperparameter that may unpredictably modify the sampling distribution.
Authors should include FL-GFlowNets and LED-GFlowNets in Figures 2, 3, and 4. Have the authors considered increasing the sharpness of the distribution by modifying the temperature in LSL-GFN? 2. **Training of the $Q$-network is insufficiently described**. Lines 163-175 are hard to follow, and Algorithm 1 in Section B does not provide sufficient details for understanding the training of QGFN. If I understand correctly, the adoption of multi-step returns with the horizon size set as the maximum trajectory length implies that each $R(s_{t})$ in Equation (2) corresponds to the expected reward among $s_{t}$’s children under the chosen sampling policy. In any case, an unambiguous equation representing the learning objective for $Q$ should be included in Section 4. 3. **Difference between each sampling strategy is unclear**. In practical applications, a sampling strategy would have to be chosen, as selecting among the proposed approaches can be excessively time-consuming — even without retraining the model. However, $p$-greedy, $p$-of-max, and $p$-quantile seem to perform relatively differently when the benchmark task is modified. A general approach for choosing a sampling strategy should, then, be proposed by the authors. Otherwise, the method is hardly usable in real applications. 4. **QGFNs lack theoretical guarantees**. Important questions are left unanswered. What if $Q$ is not properly trained? How do we know that $Q$ is inaccurately learned? What is the sampling distribution for a given $p$ and sampling strategy? To improve QGFN’s reliability, a theoretical analysis should be included. Does a sufficiently trained QGFN find, on average, more rewarding samples than standard GFlowNets? 5. **Rationale for choosing $p$ is not well-defined**. Experiments are filled with cryptically chosen numbers.
In Table 1, the $p$ values for $p$-of-max and $p$-quantile are set to $0.9858$ and $0.93$, respectively; in Figure 3, a “mode” is defined as a molecule with Tanimoto similarity smaller than $0.70$ and a reward larger than $1.10$. What is the rationale behind such numbers? How can a practitioner choose $p$? Also, if the choice of $p$ relied on a grid search over $N$ values in $[0, 1]$, a fair comparison with GFN-TB and GFN-SubTB in Table 1 should allow these methods to sample $N$ times more trajectories than QGFN. However, it is unclear whether this observation was considered for the experiments defining Table 1. 6. [Minor] Typos. Algorithm 1 refers to $\mathcal{L}\_{flow}$ and $\mathcal{L}\_{huber}$, which are undefined. I assume $\mathcal{L}\_{flow}$ is $\mathcal{L}\_{TB}$, but I could not find a definition for $\mathcal{L}\_{huber}$. Technical Quality: 2 Clarity: 2 Questions for Authors: * Why does QGFN provide more diverse samples than, e.g., GFN-TB in Table 1 even when $p$ is set as high as $p = 0.9858$? Do we need to train a GFN? Given the large values of $p$ (above 0.9), it seems that the GFN is not very helpful. I wonder whether an untrained $P_F$ would produce similar results. * What is $k$ in line 230? * Figures 9 and 11 are quite difficult to grasp. Please consider visually encoding the greediness of the method as the size or opacity of points in a scatter plot. * Also, the convergence speed of GFlowNets via TB minimization is known to be dramatically affected by the learning rate of $\log Z$, which the authors set to $10^{-3}$. I wonder whether the relative results would remain the same if this learning rate were increased to $10^{-2}$ or $10^{-1}$.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We appreciate the time and effort you put into reviewing our work. Our work aims to provide an empirical analysis of the method we introduce, a novel idea that can be applied to any GFN. While we understand the reviewer's emphasis on theoretical guarantees, it is worth noting that many significant contributions in the field do not include such guarantees. We believe that our approach, with thorough empirical analysis, is a strong contribution to the field and aligns with the standards of many accepted papers in major ML conferences. Overall, we hope to have addressed the reviewer's concerns in our response, and hope they would reconsider their score. We are committed to improving the presentation of the paper, particularly to dispel any sense of choices and (hyper)parameters being arbitrary. > baselines, temperature in LSL-GFN We compared against six baselines typically used in the GFN literature, covering characteristics like greediness and diversity. Further, the QGFN idea can be applied to any GFN, including FL-QGFNs and LED-QGFNs. In this spirit, we trained FL-QGFNs for Table A of Reviewer eVPz's rebuttal. Yes, if a base GFN algorithm is already optimal, QGFN may not improve it, but this is unlikely to be the case in general. Our contribution is to show that mixtures of action values and GFN policies have desirable behaviors; we did not intend to show that a specific QGFN implementation is SoTA. For LSL-GFN, we tested the effects of increasing sharpness during inference using a range of $\beta$ values (training $\beta\sim U[1,256]$), and chose $\beta = 78$, with rewards closest to SubTB for comparison (larger $\beta$ greatly increased similarity). We will include these results in our revision in Table 1 (see attached pdf). > Training of the $Q$-network We will include the exact objective of $Q$ in $\S$ 4 and Alg. 1. 
The reviewer is correct that in the max-horizon case $Q_\theta(s,a)$ converges to the expected reward. This is the crux of lines 163-175, where we attempt to convey this unexpected reliance on max-horizon returns. > Difference between strategies Section 5.1 highlights that the best strategy depends on the environment, particularly the number of relevant actions. Although not foolproof, this is a reasonable starting point. That QGFN is adjustable post-hoc is a strength of the method which can be used here. > QGFNs lack theoretical guarantees - The distribution from which a perfect QGFN samples is hard to define theoretically. We would be happy to find such a result, but we would ask the reviewer not to hold us to impossible standards. For example, Schulman et al. did not know what distribution PPO sampled from, but produced an empirically well-understood method. It took two years for proper analysis using two-time-scale methods to provide us with strong theory. - In specific cases one can show that the expected reward of QGFN is at least as much as the GFN's. We include analysis for the bandit case and intuition-building for the general case in the global response. We would like to emphasize that we've performed extensive empirical work beyond measuring performance to show that our method indeed works according to its hypothesized mechanism. - _What if $Q$ is not properly trained?_ Early in training we observe that QGFNs tend to outperform baselines. At that stage it is unlikely that $Q$ is precise. - _How do we know if $Q$ is accurate?_ This can be measured, e.g. in Fig 7, showing an imperfect $Q$. Note that this $Q$ is still improving the sampling of high-reward diverse objects. On $p$ and other values: - As explained in $\S5.1$ 244-288 and $\S6$ 280-286, and shown in Figure 6, the $p$s of Table 1 are chosen based on Figure 6. - _Why this Tanimoto threshold?_ 0.7 has been used in every GFlowNet paper with this task; we simply reuse this choice from the literature.
- _How can one choose $p$?_ We are unsure what the reviewer means here. There are standard ways to choose (hyper)parameters, and we show in Figs. 6 and 9 that, e.g., sampling from the model with different $p$s and measuring the diversity-reward trade-off works.
- _Baselines should be allowed to sample $N$x times._ This is incorrect, because we are reporting averages. As a thought experiment, if we sampled infinitely many samples for them, the _average_ reward and similarity would converge to some values. Table 1 uses 1000 samples.
- _A better test would be to log the time of training QGFN + tuning $p$._ The cost of training $Q$ is small (thanks to parallelism), and taking ~1000 samples with different $p$s takes a minute. That QGFN is adjustable without any retraining is a strength of the method.

Questions:

- _Why is QGFN [with $p=0.9858$] more diverse than GFN-TB?_ Note that this is for **$p$-of-max QGFN**: if many actions have a $Q$ value that is at least 98.58% of the max $Q$, then they can all be sampled by $P_F$, retaining the diversity of $P_F$ while sampling more high-reward objects. Note QGFN isn't strictly more diverse than GFN, but for $n$ samples its _number of modes_ will be higher because less time is spent sampling low rewards.
- _With large $p$, is QGFN helpful?_ As above, a good $P_F$ is still needed, and a $p$ close to but less than 1 does **not** entail the greedy policy, except for the $p$-greedy mixture (where $p \approx 0.5$ is best; see Fig. 9 & l. 287-292).
- _Would an untrained $P_F$ be similar?_ This is an interesting question! We compared a trained $P_F$ to an untrained $P_F$ (trained $Q$ for both). The latter performs reasonably for high $p$, but is still far from what QGFN can reach (see attached pdf, Fig. 1).
- $k$ in line 230 refers to the bit width of actions that we later use (lines 259-261), as per Malkin et al. We will clarify this.
- On the style of Figs. 9 & 11: we were uncertain about this, and use a revised style in Fig. 1 of the attached pdf. We would welcome feedback.
- For us, large learning rates for $Z$ were not helpful, and too often caused runs to diverge.

---

Rebuttal Comment 1.1: Title: Clarification on $p$ and other values
Comment: We apologize for a mistake in our rebuttal: inference-time $p$ values are chosen based on Figure 9, not Figure 6. Figure 6's left two subplots use these inference-time $p$ values, while Figure 6's third subplot shows the effect of different (fixed) $p$ values used for training.
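The two mixture policies discussed in this thread ($p$-greedy and $p$-of-max) can be sketched as per-step sampling rules. This is an illustrative sketch based on the descriptions above, not the authors' implementation; `pf_probs` and `q_values` are hypothetical placeholders for the GFN policy $P_F(\cdot|s)$ and the learned $Q(s,\cdot)$ at the current state.

```python
import numpy as np

def p_greedy_sample(pf_probs, q_values, p, rng):
    """p-greedy mixture: with probability p take the argmax-Q action,
    otherwise sample an action from the GFN policy P_F."""
    if rng.random() < p:
        return int(np.argmax(q_values))
    return int(rng.choice(len(pf_probs), p=pf_probs))

def p_of_max_sample(pf_probs, q_values, p, rng):
    """p-of-max mixture: mask out actions whose Q value is below
    p * max(Q), renormalize P_F over the survivors, then sample.
    If many actions have Q close to the max, they all stay sampleable,
    retaining the diversity of P_F."""
    mask = q_values >= p * np.max(q_values)
    masked = np.where(mask, pf_probs, 0.0)
    masked /= masked.sum()
    return int(rng.choice(len(pf_probs), p=masked))
```

With $p$ close to 1, $p$-of-max keeps every action whose $Q$ is within a small fraction of the maximum, which is why it does not collapse to the greedy policy.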
Rebuttal 1: Rebuttal: We appreciate all feedback from the reviewers. To provide more analysis on QGFN, we have conducted an analysis using a bandit example. We also use some derivations to illustrate the general case. We hope these examples illustrate the behavior of QGFN. We will include this analysis in our revised manuscript.

### Analysing $p$-greedy

Consider the bandit setting where trajectories are 1 step and just consist of choosing a terminal state. Let $p_G(s) = R(s)/Z$. Let $0<p<1$; then with $\mu(s'|s) = (1-p)P_F(s'|s) + p\mathbb{I}[s'=\arg\max Q(s,s')]$, and assuming there is only a single argmax $s^*$, we have $p_\mu(s) = (1-p)R(s)/Z + p\mathbb{I}[s=\arg\max R(s)]$. This means that for every non-argmax state, $p_\mu(s) = (1-p) p_G(s) < p_G(s)$. We get that $\mathbb{E}_\mu[R] > \mathbb{E}_G[R]$:

$$\begin{align}
\mathbb{E}_\mu[R] - \mathbb{E}_G[R] &= \sum_s p_\mu(s) R(s) - \sum_s p_G(s) R(s)\\
&= \left(p + (1-p)R(s^*)/Z - R(s^*)/Z\right) R(s^*) + \sum_{s\neq s^*} \left((1-p)R(s)^2/Z - R(s)^2/Z\right)\\
&= pR(s^*) - pR(s^*)^2/Z - \sum_{s\neq s^*} pR(s)^2/Z\\
&= p/Z \left(R(s^*)Z - R(s^*)^2 - \sum_{s\neq s^*} R(s)^2\right), \qquad Z=\sum_s R(s)\\
&= p/Z \left(R(s^*)\Big[\sum_s R(s)\Big] - R(s^*)^2 - \sum_{s\neq s^*} R(s)^2\right)\\
&= p/Z \left(R(s^*)\Big[\sum_s R(s)\Big] - \sum_{s} R(s)^2\right)\\
&= p/Z \left(\sum_{s} R(s^*)R(s) - R(s)^2\right)
\end{align}$$

Since $R(s^*) > R(s)$ for every $s \neq s^*$ and rewards are positive, $R(s^*)R(s) \geq R(s)^2$ (with equality only at $s^*$), so the last sum is positive, and therefore $\mathbb{E}_\mu[R] - \mathbb{E}_G[R] > 0$.

In the more general case, we are not aware of a satisfying closed form, but consider the following exercise. Let $m(s,s') = \mathbb{I}[s'=\arg\max Q(s,s')]$. Let $F'$ be the "QGFN flow", which we decompose as $F'=F_G + F_Q$, where we think of $F_G$ and $F_Q$ as the GFN and Q-greedy contributions to the flow.
Then:

$$\begin{align}
F'(s) &= \sum_{z\in\mathrm{Par}(s)} F'(z)\left((1-p)P_F(s|z) + p\, m(z,s)\right)\\
&= \sum_z F'(z)(1-p)P_F(s|z) + \sum_z F'(z)\, p\, m(z,s)\\
&= \sum_z \left(F_G(z)(1-p)P_F(s|z) + F_Q(z)(1-p)P_F(s|z)\right) + \sum_z F'(z)\, p\, m(z,s)\\
&= (1-p)F_G(s) + \sum_z \left(F_Q(z)\mu(s|z) + F_G(z)\, p\, m(z,s)\right)
\end{align}$$

Recall that $p(s)\propto F(s)$. Intuitively, then, the probability of being in a state is reduced by a factor $(1-p)$, but possibly increased by extra flow that has two origins: first, flow $F_Q$ carried over by $\mu$, and second, new flow "stolen" from $F_G$ at parents when $m(z, s)=1$, i.e. when $s$ is the argmax child. This suggests that flow (probability mass) in a $p$-greedy QGFN is smoothly redirected towards states with locally highest reward, from ancestors of such states. Conversely, states which have many ancestors for which they are not the highest-rewarding descendant will have their probability diminished.

Pdf: /pdf/a3d9c86e598480d160b3eef00778b5f7bd92683f.pdf
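The bandit inequality above can be checked numerically. The snippet below is an illustrative verification (with a hypothetical reward vector) that the gap $\mathbb{E}_\mu[R] - \mathbb{E}_G[R]$ matches the closed form $\frac{p}{Z}\sum_s \left(R(s^*)R(s) - R(s)^2\right)$:

```python
import numpy as np

def expected_rewards(R, p):
    """Expected reward under the GFN terminal distribution p_G = R/Z,
    and under the p-greedy mixture mu = (1-p) p_G + p * delta_{argmax R}."""
    Z = R.sum()
    p_G = R / Z
    mu = (1 - p) * p_G
    mu[np.argmax(R)] += p          # extra mass on the argmax state s*
    return float(p_G @ R), float(mu @ R)

R = np.array([1.0, 2.0, 5.0])      # hypothetical bandit rewards; R(s*) = 5
p = 0.3
e_G, e_mu = expected_rewards(R, p)
# Closed form from the derivation: (p/Z) * sum_s (R(s*) R(s) - R(s)^2)
closed_form = p / R.sum() * np.sum(R.max() * R - R**2)
```

Here `e_mu - e_G` equals `closed_form` and is strictly positive whenever the argmax is unique, matching the derivation.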
NeurIPS_2024_submissions_huggingface
2024
Vitron: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing
Accept (poster)
Summary: This paper proposes a unified vision LLM for various downstream visual tasks, including understanding, generating, segmenting, and editing. The proposed framework addresses the problems with respect to images, videos, text, and human interactions. The authors also propose a hybrid method to integrate discrete textual instructions and continuous signal embeddings. Strengths: From low-level to high-level semantics, the proposed method addresses vision problems in a unified framework. In addition, the generation and editing tasks are also included in this framework. The proposed instruction-passing mechanism over discrete text and continuous embeddings helps address the different downstream tasks. The experiments are comprehensive. Weaknesses: There are no obvious technical weaknesses from my side. However, I do have some questions about this paper. 1. The authors put "pixel-level" in a key position in this paper. From my side, as the proposed model supports different visual tasks, many downstream tasks among them are not pixel-level, such as grounding. Why did the authors highlight this concept? 2. The implementation details are missing, especially for the synergy module. How can the readers know the re-training and design process in detail? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. To what extent can the "unification" be used for multimodal tasks? 2. The authors proposed statements about instructions and cooperation in Lines 57-59. How did they reflect these aspects in the designed model and experiments? 3. How did the model distinguish the features from task-specific and task-invariant parts? In other words, did the authors do something to assign these aims during the model training? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: As most parts are clear to me, I am looking forward to the answers to the above questions. 
I suggest adding more details of the proposed module and the training scheme in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We take your opinion “**There is no obvious technical weaknesses from my side**” as the greatest acknowledgment of the value of our work. Thank you very much! Your affirmation provides us with great motivation to further improve our research. ----- **Q1: The authors put "pixel-level" in a highly key position in this paper. From my side, as the proposed model supports different visual tasks, among them, many downstream tasks are not pixel-level such as grounding** **A**: From a broader perspective, the research on MLLM is very vibrant and rapid. And future research on MLLM will inevitably develop towards a more unified approach, covering more modalities in breadth and achieving more powerful task processing capabilities in depth. This work focuses more on the latter. For visual MLLM, the initial visual MLLMs could only support coarse-grained, instance-level understanding or generation, but a key problem here is object hallucination due to the lack of pixel-grounding capabilities. To solve these issues, the community has gradually developed more advanced visual MLLMs that can support fine-grained, pixel-level visual understanding and generation, including visual grounding, segmentation, editing, etc. It's reasonable because to achieve stronger task performance, it is necessary to have stronger visual processing capabilities. In fact, if a model has fine-grained visual processing capabilities, it will definitely have stronger capabilities in coarse-grained tasks. Therefore, we emphasize achieving pixel-level visual capability in this paper. ----- **Q2: The implemental details are missing, especially the synergy module** **A**: Apologies for the lack of necessary technical descriptions due to the 8-page limit. Please refer to Appendix E.3, 'Cross-task Synergy Learning', where we provide an extended introduction about this module. 
Also, in Appendix F, 'Extended Details of Experimental Settings', we extended the introduction to the overall framework's implementation. Moreover, our provided code includes detailed implementation information. Our system, including the code and all data, will be open-sourced. We will continue to improve the open-source code and provide a detailed manual. ----- **Q3: To what extent can the "unification" be used for multimodal tasks?** **A**: Unification refers to using a single model or system to execute a variety of tasks, just as one ChatGPT can perform all NLP tasks. For vision multimodal tasks, we have categorized all visual tasks into four orthogonal grand categories. This way, we use a separate backend module to support each category. Specifically, for categories like visual comprehension, our system will cover almost all image/video-to-text scenario tasks. Going beyond the unification of pixel-level visual MLLM, in this paper we also emphasize achieving synergy across multiple modalities and tasks to realize a true generalist towards human-level AI. We can understand synergy as a type of generalization ability; only when MLLM achieves generalization across different modalities and tasks can we say we have reached the next level of MLLM towards human-level AI, similar to how ChatGPT achieves cross-task synergy. However, we have not yet seen such generalization capabilities in MLLM (or multimodal unified models). The Cross-task Synergy Learning mechanism proposed in Vitron is a key starting point towards that goal. 
The former aids in accurately invoking different backbone modules (thanks to the LLM’s proficiency in task dispatching), while the latter supplements with richer modality-preserved visual features that cannot be directly described through discrete text. As depicted in Fig. 2, the LLM outputs 1) text responses for users, 2) text instructions for module invocation, and 3) feature embeddings of special tokens. Both text instructions and feature embeddings are passed to backbone modules. In Figure 4, we explored these two different message-passing mechanisms to determine whether discrete textual instruction is more beneficial or whether continuous signal embedding is better for building a multi-modal generalist. Also, there we validated the pros and cons of the proposed hybrid method of message passing. ----- **Q5: How did the model distinguish the features from task-specific and task-invariant parts?** **A**: Technically, we employ adversarial training to decouple task-specific from task-invariant features. We first let different backbone visual specialists make task predictions based on these two features (via concatenation). Meanwhile, we encourage a third-party discriminator (acting as a classifier) to determine which is the current task based solely on the shared feature representation. Ideally, once the discriminator can no longer accurately identify the task, the shared feature can be considered the most purified and broadly applicable across tasks. We further detailed formal task modeling in Appendix E.3, 'Cross-task Synergy Learning'. During the Embedding-oriented Decoder Alignment Tuning stage, we align the feature embedding with all the visual module’s input encoders via the decoding-side projection layers. We do this feature alignment learning by minimizing the distance between the projected feature embedding and the module’s input encoder. 
For example, for diffusion-based image or video generation, we may directly use the textual condition encoder, while keeping all other modules fixed. ----- **Q6: I suggest adding more details of the proposed module and the training scheme in the appendix.** **A**: Thank you for your suggestion; we will adopt it and reflect it in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses to my questions and comments. However, the response to Q3 is still confusing to me. And after checking other reviewers' comments, I have a similar concern regarding the novelty. Thus, I am inclined to decrease the rating a bit to Weak Accept. --- Reply to Comment 1.1.1: Title: Re-response to Reviewer Comment: Dear reviewer #vBVg, Thank you for your response and feedback. If possible, could you please specify the points of confusion in Q3? We would be more than happy to provide further explanations and clarifications. --- Regarding the novelty of this work, we'd like to re-emphasize it again. **From the idea perspective:** 1. We introduce, for the first time, a grand unified vision MLLM, Vitron, which is capable of instance- and pixel-level understanding, generating, segmenting, and editing of both images and videos in a "one-for-all" manner. 2. We also propose the idea of achieving cross-modal and cross-task synergy (i.e., generalization ability) for realizing true generalist capabilities towards human-level AI. This is also the core and most valuable aspect of our work. **From the technical perspective:** 1. We have devised a novel hybrid instruction-passing mechanism that combines explicit textual instructions with implicit signal representations, enabling Vitron to support comprehensive and powerful fine-grained visual processing capabilities. 2. 
We introduce the cross-task synergy learning mechanism, which allows Vitron to achieve visual generalization not merely by invoking or recalling tools like an agent, but by emphasizing the synergy across multiple modalities and tasks. ----- We were deeply grateful to receive your very strong support, i.e., **Strong Accept**. Now that you have further questions, we are eager to address them. We look forward to your further feedback, and we hope you will give us the chance to address any concerns you may have. Best regards
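The adversarial decoupling described in the Q5 answer above (a third-party discriminator classifying the task from the shared feature alone, which the shared encoder learns to fool) could be composed, at a very high level, as follows. This is an illustrative sketch of the loss structure only, not Vitron's actual implementation; `W_disc` and `task_loss` are hypothetical placeholders:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over task logits
    e = np.exp(x - x.max())
    return e / e.sum()

def synergy_losses(shared, specific, task_id, W_disc, task_loss):
    """Sketch of adversarial feature decoupling:
    - the specialist consumes the concatenation [shared; specific];
    - a linear discriminator (W_disc) tries to recover the task id
      from the shared feature alone;
    - the shared encoder is trained against the discriminator, so once
      the discriminator cannot identify the task, `shared` carries only
      task-invariant information."""
    spec_in = np.concatenate([shared, specific])     # specialist input
    probs = softmax(W_disc @ shared)                 # discriminator over tasks
    disc_loss = -np.log(probs[task_id] + 1e-12)      # minimized by discriminator
    enc_loss = task_loss(spec_in) - disc_loss        # encoder maximizes disc_loss
    return disc_loss, enc_loss
```

At the adversarial optimum the discriminator is reduced to guessing uniformly over tasks, i.e. `disc_loss` approaches `log(num_tasks)`.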
Summary: This paper introduces a universal pixel-level vision LLM designed for comprehensive understanding, generating, segmenting, and editing both static images and dynamic videos. Technically, the authors propose using the LLM as the core brain, incorporating different encoders for images, videos, and pixel-level regions to extend the comprehensive capabilities of LLMs, and various decoders for images, videos, and segmentations to extend the generative capabilities of LLMs. Additionally, a well-rounded hybrid message-passer, incorporating discrete textual instructions and continuous signal embeddings, is designed to pass messages from the LLM to backend decoders. A novel cross-task synergy module is introduced to enhance the synergy between different tasks. To demonstrate the efficacy of the proposed method, the authors conduct experiments on 12 visual tasks and evaluate across 22 datasets. Extensive experiments show that the proposed method can handle multiple vision-language tasks, spanning from visual comprehension to visual generation, and from low-level pixel understanding to high-level semantic understanding. Strengths: 1. The motivation of this work is clear and reasonable. Building a universal generalist is a trending topic, achieving an "one for all" capability. Images and videos are two core vision interfaces through which humans understand the world; thus, powerful vision-language generalists should be capable of comprehending and generating images and videos. 2. This work presents a well-designed unified framework, integrating the SoTA specialists and enabling performance across various vision-language task groups from vision comprehension to generation, and from low-level pixel understanding to high-level semantic comprehension. 3. This work proposes a hybrid instruction passing method by combining discrete textual instructions and continuous signal feature embeddings to ensure effective and precise information passing to the modules. 4. 
The authors design a fine-grained spatial-temporal vision grounding instruction tuning method, enabling sufficient pixel-level visual perception. Furthermore, they apply adversarial training to enhance synergy between different tasks, achieving overall performance improvement. 5. The experiments are extensive and solid, conducted on various datasets and tasks, including vision segmentation, fine-grained vision understanding, and vision generation. The experimental results show that after integrating cross-task synergy learning, the framework consistently improves performance across the majority of tasks. 6. This work builds a text invocation instruction tuning dataset. If it is publicly available, it will be beneficial to the development of related research. Overall, I like this paper. I think the proposed idea is novel and interesting, and also intuitive. I tend to believe the contribution, novelty and quality of the paper to be high. Weaknesses: 1. The authors may need to clarify the advantages of their work over existing generalists like HuggingGPT and Visual ChatGPT, which also aim to be generalist models to some extent. 2. The authors propose a hybrid message-passing mechanism and provide a corresponding ablation study. But it would be better to have an intuitive demonstration of the information complementation between discrete textual instructions and continuous feature embeddings. For example, which types of information are passed by textual instructions and which are conveyed by the embeddings. 3. GPT-4 is employed to construct the text invocation-oriented instruction tuning datasets. The detailed prompt templates and some examples should be provided. 4. Some symbols need clarification, such as the J and F in Table 3. 5. A value bar should be added to intuitively show the synergy across different tasks. Additionally, the calculation of the synergy between tasks should be clarified. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. 
I have a question that I would like to discuss with the authors: What are the current limitations of the unified model? Given that the performance of specialists is still not very satisfactory, what is the significance of a unified model? In the future, if the form of specialist tasks changes (e.g., the emergence of vision LMs rather than LLMs being at the core), what modifications would the current unified model need to undergo? 2. If there are better specialists available, how can they be replaced, and what would be the cost of doing so? 3. Are other modalities, such as audio or 3D, being considered for inclusion with fine-grained capabilities? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing our work and providing insightful comments that significantly enhance the quality of our paper. Here are our responses to your concerns and questions, and we hope to gain your further support. --- **Q1: The authors may need to clarify the advantages of their work over existing generalists like HuggingGPT and Visual ChatGPT, which also aim to be generalist models to some extent.** **A**: Vitron is not just a combination or tool invocation; it introduces a new concept: the necessity of synergy. This is what differentiates Vitron from other agent-based generalist works, including Visual ChatGPT, HuggingGPT, and LLaVA-Plus. These methods operate like agents, merely invoking external modules through LLM, which is ultimately ineffective because the resultant generalist cannot surpass meta-specialists. To achieve a truly stronger generalist, cross-modal/cross-task synergy is essential, similar to how ChatGPT achieves cross-task synergy in NLP tasks. Synergy is crucial for unlocking the native multimodal emergent capabilities of MLLM. ----- **Q2: The authors propose a hybrid message-passing mechanism and provide a corresponding ablation study. It would be beneficial to have an intuitive demonstration of how discrete textual instructions complement continuous feature embeddings.** **A**: This concept is straightforward. For simple user queries, textual instructions can successfully invoke the backend module to produce the correct result. However, for more complex user intents or queries, additional supplementary information and features help the backend module understand how to execute the task accurately. For instance, in an image caption task, textual instructions suffice by simply instructing the backend module to "describe the given image". 
But for a text-to-video generation task, instructions like "generate a video based on the text 'a dog walking in the park'" may need further visual information in the form of feature embeddings, such as details describing the dog or the park. ----- **Q3: GPT-4 is employed to construct text invocation-oriented instruction tuning datasets. Detailed prompt templates and examples should be provided.** **A**: Thank you for pointing this out. We have exemplified a prompt for a video tracking task in Appendix E.1. We will provide complete prompt templates for all tasks in the revision, and our data will be fully open-sourced. ----- **Q4: Some symbols need clarification, such as the J and F in Table 3.** **A**: Apologies for not explaining these symbols. For video object segmentation on DAVIS 17, we use the mean Jaccard $J$ index and mean boundary $F$ score, along with mean $J$ & $F$, to evaluate segmentation accuracy. ----- **Q5: A value bar should be added to intuitively show the synergy across different tasks. Additionally, the calculation of the synergy between tasks should be clarified.** **A**: We conducted this analysis in Appendix G.3 Cross-task Synergy Study, and Figure 6 provides a visualization of the result. We calculate cross-task synergy by recording Vitron's performance on individual tasks without Cross-task Synergy Learning and after incorporating it, then comparing the performance improvements for the same tasks. Figure 6 shows the normalized improvement visualization. ----- **Q6: What are the current limitations of the unified model? What is the significance of a unified model if the performance of specialists is still not very satisfactory?** **A**: As emphasized before, achieving true generalist capabilities towards human-level AI necessitates cross-modal and cross-task synergy. We view synergy as a form of generalization ability. 
Only when MLLM achieves this across different modalities and tasks can we speak of advancing to the next level of MLLM towards human-level AI, similar to ChatGPT's synergy in NLP tasks. However, we have yet to see such generalization capabilities in MLLM (or unified multimodal models). The Cross-task Synergy Learning mechanism proposed in Vitron is a key starting point towards this goal. ----- **Q7: If better specialists are available, how can they be replaced, and what would be the cost?** **A**: Vitron supports replacing backend specialist modules. However, the entire system might need corresponding retraining and alignment learning. Fortunately, Vitron currently uses the state-of-the-art specialists. ----- **Q8: Are other modalities, such as audio or 3D, being considered for inclusion with fine-grained capabilities?** **A**: While unifying various modalities, including sound, images, and 3D information, is an important trend (as demonstrated by NExT-GPT), the main focus of this work remains on the vision modality. Nonetheless, achieving more modality unification and synergy is also a significant goal of this work. We hope the proposed Cross-task Synergy Learning mechanism will receive more attention and study in subsequent research. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I think most of my questions and concerns have been addressed. Thus, I increase the score to "Strong Accept" rating. --- Reply to Comment 1.1.1: Title: Thank you for your strong support! Comment: Dear Reviewer #d2kF, Thank you again for your continued greater support. We appreciate it very much! We will make the necessary adjustments in the revision. Thank you once again for your insightful feedback. Best
Summary: This paper proposed VITRON, a multimodal generalist that supports a wide range of vision tasks, treating images and videos as a unified entity. It combines discrete textual instructions with continuous signal embeddings for effective function invocation. The model is trained on fine-grained spatiotemporal vision-language alignment to enhance its pixel-level visual capabilities. A synergy module is developed to maximize the sharing of fine-grained visual features across different visual tasks, improving overall performance. Overall, VITRON aims to be a comprehensive system for vision-related tasks, demonstrating strong capabilities across various benchmarks. Strengths: VITRON introduces a novel hybrid method for message passing, combining discrete textual instructions with continuous signal embeddings. It proposes pixel-level spatiotemporal vision-language alignment learning, advancing fine-grained visual capabilities. The paper demonstrates extensive capabilities across 12 visual tasks evaluated on 22 datasets, showcasing high performance. It includes a cross-task synergy module that optimizes task-invariant fine-grained visual features, enhancing task cooperation. VITRON represents a significant step towards a unified AI capable of understanding, generating, segmenting, and editing both images and videos. Weaknesses: The proposed system is complicated but the paper is not well-structured. A clearer exposition of the system’s components and their interactions would greatly benefit the reader’s comprehension. **Literature Review**: The paper would benefit from a more comprehensive literature review, particularly discussing related works such as Visual ChatGPT [1], HuggingGPT [2], InternGPT [3], ControlLLM [4], GPT4Tools [5], LLaVA-Plus [6], etc. A comparative analysis with these methods in the experimental section would provide valuable context and benchmarking. 
**Technical Clarifications**: There are several areas in the paper where further clarification is needed. * The necessity of both Module name and Invocation command (Line 874) could be better justified. Is using the Invocation command alone enough? * Why does the Video Segmentation module need Region results? In my opinion, the region should be predicted by the backend models. The role of the Region results in the Video Segmentation module warrants further explanation, as it seems the backend model should predict these. * The alignment of Task-specific and Task-invariant Fine-grained Features with the inputs of Backend Visual Specialists is unclear, especially since these specialists are frozen during training. Specifically, how are the Task-specific Feature and Task-invariant Fine-grained Feature connected to the backend models? Or can the Backend Visual Specialists work without any input limitation? **Data Format Consistency**: The data format inconsistency between Lines 880-881 and page 23 should be addressed to avoid confusion about task requirements and tool usage. Does it mean that the tasks on page 23 do not require using tools? **Impact of Features**: More insight into how the Task-specific and Task-invariant Fine-grained Features affect the overall system performance would be beneficial. **Baseline Comparisons**: Including training-free baselines that utilize GPT-4v with in-context learning and ReACT [7] for tool invocation would provide essential benchmarks for comparison. **Model Results**: As this paper claims that "VITRON surpasses existing SoTA specialists' performance", it is recommended to present the results of specialist models in the experimental tables as reference performances. Furthermore, as VITRON is positioned as a generalist, comparisons with other generalist methods would be fair enough. **Tool-use Accuracy**: An exploration of tool-use accuracy across the 12 tasks covered by VITRON is necessary to validate the method's efficacy. 
**Execution of Backend Modules**: Clarification is needed on what constitutes the successful execution of backend modules in Figure 4. In addition, why is text instruction more conducive to the successful execution of backend modules, while soft feature embedding seems to be more useful in terms of specific task performance? Generally, the paper's contributions are incremental. To my knowledge, single-tool invocation is not complicated, especially with the limited scale of the toolset. There are several papers that can solve tasks by chaining multiple tools. [1] Wu C, Yin S, Qi W, et al. Visual chatgpt: Talking, drawing and editing with visual foundation models[J]. arXiv preprint arXiv:2303.04671, 2023. [2] Shen Y, Song K, Tan X, et al. Hugginggpt: Solving ai tasks with chatgpt and its friends in hugging face[J]. Advances in Neural Information Processing Systems, 2024, 36. [3] Liu Z, He Y, Wang W, et al. Interngpt: Solving vision-centric tasks by interacting with chatgpt beyond language[J]. arXiv preprint arXiv:2305.05662, 2023. [4] Liu Z, Lai Z, Gao Z, et al. Controlllm: Augment language models with tools by searching on graphs[J]. arXiv preprint arXiv:2310.17796, 2023. [5] Yang R, Song L, Li Y, et al. Gpt4tools: Teaching large language model to use tools via self-instruction[J]. Advances in Neural Information Processing Systems, 2024, 36. [6] Liu S, Cheng H, Liu H, et al. Llava-plus: Learning to use tools for creating multimodal agents[J]. arXiv preprint arXiv:2311.05437, 2023. [7] Yao S, Zhao J, Yu D, et al. React: Synergizing reasoning and acting in language models[J]. arXiv preprint arXiv:2210.03629, 2022. Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to **Weaknesses**. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please refer to **Weaknesses**. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We feel honored by your many constructive comments, and appreciate that you went through our paper so carefully. Below we try to address your concerns or misunderstandings. If you find our response effective, please consider increasing your rating. ---- **Q1: The system is complicated, but the paper is not well-structured** **A**: Sec. 3 mainly introduces the framework components, and the three subsections of Sec. 4 further explain three important tuning mechanisms. We will further polish the paper's structure. --- **Q2: needs a more comprehensive literature review, such as Visual ChatGPT, HuggingGPT, InternGPT, ControlLLM, GPT4Tools, LLaVA-Plus, etc.** **A**: Actually, HuggingGPT and LLaVA-Plus are already included. The other works you listed, as well as the latest works, will all be covered in the revision. --- **Q3: several clarifications are needed** **Q3.1: The necessity of both Module name and Invocation command (Line 874)? Is using the Invocation command alone enough?** **A**: It is necessary to consider both the Module Name and Invocation Command. The module name determines which module to invoke, while the command serves as input to that module. Another benefit is that it allows the LLM to explicitly recognize which task module should accurately process the user's input demand. --- **Q3.2: Why does the Video Segmentation module need Region results?** **A**: Users may specify a particular area of interest to segment/track. For example, if a user specifies an area on a sketch pad, this area can be directly passed to the backend module. If the user specifies a region of interest using natural language, the LLM itself needs to determine and output this region result for the backend module. 
--- **Q3.3: how to connect Task-specific and Task-invariant Fine-grained Features to the backend models?** **A**: The Task-specific Feature and Task-invariant Fine-grained Feature embeddings are concatenated into one representation, which is then fed to the feature encoder of the backend module. For example, for the video generation module of the diffusion-based ZeroScope, the concatenated feature representation is fed directly to the UNet. Before connecting this feature vector to the backend module, we align the two representations. --- **Q4: The data format inconsistency between Line 880-881 and page 23** **A**: Correct. The prompt in Lines 880-881 is used to tune the invocation of tools, while the prompts on page 23 do not involve tool invocation. Therefore, these prompts are not necessarily in the same format. --- **Q5: how the Task-specific and Task-invariant Fine-grained Features affect the overall performance** **A**: In Sec. G.3, the Cross-task Synergy Study, we provide a general overview of how different tasks utilize the Task-invariant Fine-grained Features and their impacts. We will consider conducting a more detailed study of the separate impacts of these features in the revision. --- **Q6: should include training-free baselines that utilize GPT-4v with in-context learning and ReAct [7] for tool invocation.** **A**: Thanks. We implemented this experiment. Since GPT-4v does not support video, and given the time constraints of the rebuttal period, we implemented only one image processing task, __referring image segmentation__ on RefCOCOg. | | Val | Test | |-----------------------|------|------| | NExT-Chat | 67.0 | 67.0 | | SEEM | 65.7 | 65.8 | | GPT-4v + SEEM | 64.9 | 65.1 | | GPT-4v + ReACT + SEEM | 65.4 | 65.4 | | Vitron | 67.9 | 68.9 | Because Vitron is trained fully jointly, it outperforms the two pipeline systems even though their backbone, GPT-4v, is stronger. As seen, the pipeline systems did not surpass the backend specialist (SEEM).
--- **Q7: the paper claims “VITRON surpasses existing SoTA specialists”, should present the results of specialists** **A**: Actually, our results (Tables 1-11) cover some important SoTA specialists, such as G-DINO for video grounding and MakeVideo for video generation, where Vitron achieved better performance. Thanks; we will consider covering more specialists and generalists in the revision. --- **Q8: how is tool-use accuracy?** **A**: This is already shown in Fig. 4, specifically on the right-hand side of the bar chart (i.e., Execution Rate). --- **Q9.1: what constitutes successful backend execution in Fig 4?** **A**: For successful execution, it is crucial to accurately determine the module name and provide accurate input information, which includes both commands and some necessary features. --- **Q9.2: why is text instruction more conducive to successful execution, while soft embedding is more useful for task performance?** **A**: The reason is straightforward: text instructions can only ensure that the backend module for the downstream task is selected correctly, but some critical information cannot be conveyed through pure text alone. Soft feature embeddings, on the other hand, provide very detailed additional information, giving the backend module a more complete set of input features, which helps achieve better results on the output tasks. --- **Q10: contributions are incremental. the single tool invocation is not complicated** **A**: We believe the most valuable aspect of our paper is that Vitron is not simply about combining or invoking tools; it proposes a new idea, namely that synergy is necessary. This is what distinguishes Vitron from other agent-based generalist works (including Visual ChatGPT, HuggingGPT, and LLaVA-Plus). These existing systems operate purely as agents, which is of limited value, as the resulting generalist cannot surpass the underlying specialists.
To achieve a truly more powerful generalist, it is key to realize cross-modal/cross-task synergy, akin to how ChatGPT achieves cross-task synergy across NLP tasks. Synergy is pivotal to unlocking the native multimodal emergent capabilities of MLLMs. --- Rebuttal 2: Title: Please let us know whether we address all the concerns Comment: Dear reviewer #BLKJ, Thank you for the comments on our paper. We have posted our response to your comments. Please let us know if you have additional questions so that we can address them during the discussion period. We really hope you will consider raising the score if we have addressed your concerns well. Thanks again! --- Rebuttal Comment 2.1: Comment: I still have a question that needs to be clarified. The backend modules are frozen, as you mentioned, but the embeddings generated by MLLMs will be fed into the backend modules. I noticed the embedding alignment tuning for decoders in Lines 202-205. It seems the alignment between MLLMs and decoders just follows the idea from NExT-GPT. This alignment, in my view, is extremely important and should not be explained in just two sentences. I need the authors' clarification on this issue. Generally, I may raise my score, but I still question the novelty of this work. --- Reply to Comment 2.1.1: Title: Re-Response to Reviewer Comment: Dear reviewer #BLKJ, Thank you for your reply and feedback. We apologize for not providing a detailed description of this part in our manuscript. This was mainly due to the many system components in Vitron and the limited space available, which required us to carefully balance the content. Here, we provide further clarification and a more detailed description of the section on "Embedding-oriented Decoder Alignment Tuning". (Apologies for taking a day to respond; we have been attending a conference and faced some scheduling conflicts.)
---- Our description in the main text is rather general (hence just a few sentences); however, the alignment strategies in this section are specifically designed according to the functionalities of the different backend modules, so the design for each module varies slightly. **Overall Design and Principles:** We believe that text instructions are simple and efficient, yet explicit textual instructions carry limited information and may lack detail. Therefore, we designed an implicit signal representation to provide additional, detailed information. This is also the key difference between Vitron and NExT-GPT w.r.t. the backend alignment part. Technically, partially following NExT-GPT (yes, we did), we introduce a set of signal tokens, [SIG_1], ... [SIG_N], to guide the downstream task processes. But unlike NExT-GPT, which introduces different signal tokens for different modalities, we utilize the same set of signal tokens for all functional tasks in our Vitron system. **Alignment Learning Design:** We have designed different alignment strategies according to the backbones of the specialists. Specifically, we have divided our approaches into two categories: 1) **Diffusion-based backend module,** for tasks like image and video generation, and image and video editing. Under this diffusion architecture, the explicit text instruction is modeled by the text encoder, yielding the explicit instruction representation $h^{exp}$. The implicit signal representation is divided into task-specific features and task-invariant features to capture synergy across tasks, which are then fused/concatenated as the final implicit instruction representation $h^{imp}$. Through a projection module, these two types of features ($h^{exp}$ and $h^{imp}$) are concatenated and mapped as the conditional input into the pre-trained UNet of the diffusion model.
During alignment learning, we aim for the instruction representation to be instruction-specified and detail-enriched, effectively guiding the diffusion-based module to generate high-fidelity images and videos. Our learning objectives include a) the diffusion loss, which directly learns the correspondence between the instruction representation and the conditional generation characteristics of the diffusion model, and b) minimizing the distance between the instruction representation and the instruction feature from the text encoder of the text-to-image generation model, enriching the instruction with more details for finer-grained features. Note that the backend module remains frozen and does not require updates (as described in Lines 202-205 of the paper); updates are applied through backpropagation to the signal feature embedding/representation (from the LLM). 2) **Non-diffusion-based backend module,** for tasks like image segmentation and video segmentation, via SEEM. This model architecture inherently includes a textual encoder for receiving textual instructions and, separately, a feature encoder for encoding visual embeddings. Thus, we input text instructions into the textual encoder and the enriched implicit feature $h^{imp}$ (with more fine-grained details) into the visual feature encoder. Just as in the process above for the diffusion-based backend, we obtain a final, efficient instruction representation. Then, during the alignment learning phase, our objectives include a) the learning target of segmentation itself and b) minimizing the distance between the implicit feature $h^{imp}$ and the vision feature of the gold segmented image/video. Our code implementation includes all the technical details and will be made completely open. We will update this point in the revision (preferably with some figure illustrations, most probably in the Appendix). ---- Finally, if you have any further questions, we are willing to provide additional responses.
We sincerely hope you can increase your score. Best regards
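To make the feature routing described in this rebuttal thread concrete, here is a minimal numpy sketch of concatenating an explicit text-instruction embedding with the implicit signal representation (task-specific plus task-invariant features) and projecting it into a conditioning space for a frozen backend. All dimensions, the single linear projection, and the variable names are illustrative assumptions, not Vitron's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not Vitron's actual sizes).
d_text, d_sig, d_cond = 768, 256, 1024

# Explicit instruction representation h_exp from a text encoder, and the
# implicit signal representation h_imp, itself the fusion of task-specific
# and task-invariant fine-grained features.
h_exp = rng.normal(size=d_text)
h_task_specific = rng.normal(size=d_sig)
h_task_invariant = rng.normal(size=d_sig)
h_imp = np.concatenate([h_task_specific, h_task_invariant])

# A linear projection maps the concatenated features into the conditioning
# space consumed by the frozen backend (e.g. a diffusion UNet); only this
# projection and the upstream embeddings would receive gradient updates.
W_proj = rng.normal(size=(d_cond, d_text + 2 * d_sig)) * 0.01
condition = W_proj @ np.concatenate([h_exp, h_imp])
print(condition.shape)  # (1024,)
```

In the alignment described above, the backend stays frozen and only the signal embeddings (and projection) are trained, so a sketch like this captures the forward data flow rather than the training losses.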
Summary: This paper proposes a universal vision LLM, named Vitron, for comprehensive understanding, generating, segmenting, and editing of both images and videos. The model incorporates SOTA visual specialists as the backend to support various visual tasks. A cross-task synergy module is devised to maximize the use of task-invariant visual features. Experiments demonstrate the effectiveness of Vitron on 12 visual tasks. Strengths: + The paper is well-written and easy to follow. + The proposed model integrates multiple visual tasks, including understanding, generation, segmentation, and editing. + Experiments show that the performance of Vitron is comparable to SOTA methods on each visual task. Weaknesses: Some SOTA methods are not included in the experiments. For instance, CM3leon [1] 7B achieves 4.88 FID on COCO-Captions. MAGVIT-v2 [2] reaches 53 FVD on UCF. [1] Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning, 2023. [2] Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation, 2023. Technical Quality: 4 Clarity: 4 Questions for Authors: Refer to weaknesses. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your time in writing comments, and especially for your strong recognition of our paper. Below we provide our response. ---- **Q1. Some SOTA methods are not included in the experiments. For instance, CM3leon [1] 7B achieves 4.88 FID on COCO-Captions. MAGVIT-v2 [2] reaches 53 FVD on UCF.** **A**: Thank you for pointing this out. In fact, CM3leon utilizes retrieval-augmented pretraining, which retrieves additional information to help achieve a 4.88 FID on COCO-Captions. For a fair comparison, we should consider the performance of CM3leon without retrieval, which is 10.82 FID on COCO-Captions and does not outperform our Vitron's performance (7.57 FID). As for MAGVIT-v2, we note that the settings in their paper differ from ours, even though both use the UCF-101 dataset for video generation. MAGVIT-v2 is focused on class-conditional video generation, i.e., label-to-video generation, whereas Vitron is focused on image(frame)-to-video generation, mainly following the setting of the baseline DynamiCrafter. Therefore, these scores are not directly comparable. Since MAGVIT-v2 is not yet open-sourced, we are unable to run their model on image(frame)-to-video generation for a direct comparison. However, we will certainly include these two works in the revision and will further cover the latest released relevant works that we failed to include when submitting to NeurIPS. Additionally, compared to coarse-grained vision generation and understanding, our Vitron system has an advantage in fine-grained vision tasks (such as editing and grounding). Moreover, there is ample room for further performance improvement across tasks, as Vitron has not yet been fully fine-tuned on the relevant datasets. Lastly, we want to emphasize the core and most valuable aspect of this paper: Vitron is not just about combining or invoking tools; it proposes a new idea, namely that the model must be capable of exhibiting synergy.
This is the core of Vitron that differentiates it from other similar agent-based generalist models, which simply use LLMs to invoke external modules; such an approach is of limited value, as the resulting generalist cannot surpass the underlying specialists. To achieve a truly more powerful generalist, it is essential to realize cross-modal/cross-task synergy, akin to how ChatGPT achieves cross-task synergy across NLP tasks. Synergy is equally key to realizing the native multimodal emergent capabilities of MLLMs. If you find our work valuable, we hope you will consider raising your score. Thank you very much!
Rebuttal 1: Rebuttal: # General Response to All Reviewers --- Dear Reviewers, We sincerely appreciate the detailed and constructive comments you have provided on our work. We are fully committed to integrating your suggestions into our revision. We feel very encouraged that the reviewers find our work novel, interesting, and intuitive. Your support is deeply appreciated. Here we would like to re-emphasize the significant and distinctive contributions of this work. We believe three keywords represent this project well: **Unification**, **Fine-grained Vision**, **Synergy**. 1. **Unification.** Research on MLLMs is currently vibrant and fast-moving. Future research on MLLMs will inevitably develop in a more unified direction, i.e., covering more modalities in breadth and achieving more powerful task-processing capabilities in depth. This work focuses on the latter, specifically on vision MLLMs. For the first time, this paper proposes a grand unified vision MLLM, Vitron, handling understanding, generating, segmenting, and editing of both images and videos in a "one-for-all" manner. 2. **Fine-grained Vision.** In the visual MLLM community, early visual MLLMs could only support coarse-grained, instance-level understanding or generation. A key problem there is object hallucination due to the lack of pixel-grounding capabilities. Subsequently, more advanced visual MLLMs supporting fine-grained, pixel-level visual understanding and generation (visual grounding, segmentation, editing, and more) have emerged. We devise a hybrid instruction-passing mechanism and various pixel-level vision-language spatiotemporal alignment learning objectives, enabling the proposed Vitron model to support comprehensive and powerful fine-grained visual processing capabilities. 3.
**Synergy.** Vitron achieves a visual generalist not simply through tool invocation (like an agent); rather, it emphasizes achieving synergy across multiple modalities and tasks. Synergy can be understood as a type of generalization ability: only when an MLLM achieves generalization across different modalities and tasks can we say we have reached the next level of MLLMs toward human-level AI. This is similar to how ChatGPT achieves cross-task synergy across NLP tasks. However, we have not yet seen such generalization capabilities in existing MLLMs. The Cross-task Synergy Learning mechanism proposed in Vitron is a key starting point toward that goal. This is also the most valuable aspect of this paper, and we hope this work will inspire more related research to further explore this mechanism. We will fix all remaining issues and improve the manuscript. To address all your concerns and questions, we have prepared a comprehensive response, including additional experiments where necessary. If you have any further questions or feedback, please don't hesitate to interact with us. We are more than willing to provide clarifications and address any additional concerns you may have. Best regards
NeurIPS_2024_submissions_huggingface
2024
Solving Sparse & High-Dimensional-Output Regression via Compression
Accept (poster)
Summary: The paper proposes the Sparse & High-dimensional-Output REgression (SHORE) model to address challenges in Multi-Output Regression (MOR). SHORE introduces a two-stage framework to handle high-dimensional outputs efficiently. Theoretical analysis shows that the framework maintains training and prediction loss while being computationally scalable. Empirical results validate these findings, demonstrating the framework’s efficiency and accuracy in handling SHORE for modern MOR applications. Strengths: - Overall, this paper is well-written and well-organized. - The two-stage algorithm proposed in the paper is easy to grasp and implement. Notably, by introducing a random projection matrix, it reduces the dimension of $Y$ and largely relaxes the RIP-type condition in sparsity-constrained optimization. - The theoretical analysis is solid and includes a high-level interpretation of the proof. The imposed assumptions are mild and likely to (approximately) hold in practice. Weaknesses: - The prediction stage is significantly different from the classical manner (i.e., $Z X$). It involves solving sparsity-constrained optimization (SCO) problems. It would be good to highlight this difference. Additionally, solving SCO is more computationally intensive than the classical method, so more discussion (in both theory and experiments) is necessary. - Algorithm 1: Steps 3-4 resemble iterative hard thresholding [1, 2]. It would be better to discuss this in the appropriate place. - Lines 217-218: The setting for $y$ seems to imply that only $s$ columns have non-zero values while the remaining columns always take zero values. Under this assumption, why not filter out some columns according to the magnitude of the columns of $y$? - It would be better to explicitly describe how OMP is compared. Is it used for solving (3) or (1)? - Lines 310-311: Elastic net may have better performance in a low signal regime [3].
- Considering these assumptions are mild in practice, why not test the algorithm on real-world datasets and present the results in the main text? - Lines 437, 467, 542: Incomplete sentences. Perhaps they can be changed to "Recall the following theorem from the main text." Technical Quality: 3 Clarity: 3 Questions for Authors: - Theorem 4: $\hat{y}$ in the theorem is the optimal solution of (3). How about the algorithmic solution? - The selection of $\eta$ is much easier than in [2]. However, it would be good to provide some default values for practical usage. Furthermore, is it possible to select $\eta$ in a more data-driven way, for example, by leveraging the value of the loss function as in [4]? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations in this paper. ### Reference - [1] Blumensath, Thomas, and Mike E. Davies. "Iterative hard thresholding for compressed sensing." Applied and Computational Harmonic Analysis 27.3 (2009): 265-274. - [2] Jain, Prateek, Ambuj Tewari, and Purushottam Kar. "On iterative hard thresholding methods for high-dimensional m-estimation." Advances in Neural Information Processing Systems 27 (2014). - [3] Hastie, Trevor, Robert Tibshirani, and Ryan Tibshirani. "Best subset, forward stepwise or lasso? Analysis and recommendations based on extensive comparisons." Statistical Science 35.4 (2020): 579-592. - [4] Wang, Zezhi, et al. "Sparsity-Constraint Optimization via Splicing Iteration." arXiv preprint arXiv:2406.12017 (2024). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
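As context for the two-stage framework this review discusses, here is a minimal numpy sketch of the compress-then-train idea (stage 1): compress the high-dimensional sparse outputs with a random projection and fit a regression in the compressed space. The Gaussian projection, the least-squares fit, and all dimensions are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions): n samples, d features,
# K-dimensional sparse outputs compressed to m << K, sparsity s.
n, d, K, m, s = 200, 10, 500, 60, 3

X = rng.normal(size=(n, d))
# Each output vector has only s non-zero entries (supports may differ).
Y = np.zeros((n, K))
for i in range(n):
    Y[i, rng.choice(K, size=s, replace=False)] = rng.normal(size=s)

# Stage 1 (training): compress outputs with a random Gaussian matrix Phi
# and fit a linear map X -> Phi @ y in the m-dimensional space.
Phi = rng.normal(size=(m, K)) / np.sqrt(m)
Yc = Y @ Phi.T                                # compressed outputs, (n, m)
W, *_ = np.linalg.lstsq(X, Yc, rcond=None)    # least-squares fit, (d, m)

# Stage 2 (prediction) would recover a sparse y from the compressed
# prediction X_new @ W via a sparsity-constrained solver.
Yc_hat = X @ W
print(Yc_hat.shape)  # (200, 60)
```

This is only the skeleton the review refers to; the paper's actual training objective and the sparsity-constrained prediction stage (the SCO problem the reviewer highlights) replace the plain least-squares and the omitted recovery step.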
Rebuttal 1: Rebuttal: ## Response We sincerely thank the reviewer for their positive and encouraging feedback! Table 1 mentioned below can be found in the PDF file uploaded with the global author rebuttal. **[Response to Comparison with SCO]** Thank you for highlighting this aspect! If we understand correctly, the classical manner you mentioned denotes predicting SHORE *without compression*. Please refer to the global author rebuttal parts "Proposed prediction stage vs. General SCO" and "Proposed prediction stage vs. Classical manner" for detailed discussions. We will add these discussions to the introduction and problem setup sections. **[Response to Comparison with iterative hard thresholding]** Thank you for your suggestion. We will add the following discussion to Section 3.1 after introducing the proposed two-stage framework SHORE. Both the iterative hard thresholding methods (Blumensath et al. 2009, Jain et al. 2014) and the proposed Algorithm 1 share the same insight from the vanilla projected gradient method for nonconvex constrained minimization. Compared with existing iterative hard thresholding methods, Step 4 (the projection step) projects the resulting vector onto a more general feasible set $\mathcal{V}_s^K \cap \mathcal{F}$ with additional constraints beyond sparsity, which might require non-trivial modifications in designing projection oracles and in the convex analysis. In particular, we discuss the different projection methods under different $\mathcal{F}$ from the perspective of optimization (see Appendix A.5.1 on page 19). We also discuss the bound on the stepsize $\eta$ that guarantees the algorithm's convergence. **[Response to non-zero entries of y]** Thank you for pointing out this ambiguity! We only assume that each column of $\boldsymbol{Y}$, i.e., $\boldsymbol{y}^{i}$ for $i = 1, \ldots, n$, has $s$ non-zero components.
However, the support sets of two distinct samples, e.g., $\boldsymbol{y}^{i}$ and $\boldsymbol{y}^{i'}$ with $i \neq i'$, might differ, which does not imply row-sparsity of the matrix $\boldsymbol{Y}$ (i.e., that only $s$ rows/labels are non-zero). **[Response to Comparison with OMP]** Thank you for pointing this out! We will add an explicit description of both the OMP algorithm and how we compare it with our algorithm in the camera-ready version. Here, we would like to point out that OMP is an algorithm for solving the sparse regression problem, which differs somewhat from the problem proposed in the prediction stage. See the global rebuttal for details. **[Response to Comparison with Elastic Net]** Thank you for your insightful observation! We conducted several small experiments and found that Elastic Net works better in a low-SNR regime (refer to Table 1). We will add relevant comparisons in the revision. **[Response to testing on real-world datasets]** Thank you for identifying this point. Because of resource limitations, we only conducted several experiments on relatively small real datasets where $K \approx 31,000$, which are shown in Appendix 7. Currently, we are preparing more computational resources to conduct experiments on larger real-world datasets. **[Response to typos]** Thank you for pointing these out! We will correct them in the revision and do more proofreading. **[Response to $\hat{y}$ and the algorithmic solution]** Thank you for asking this! We discussed the algorithmic solution, denoted by $\boldsymbol{v}^{(T)}$, just below Theorem 4; see lines 273-275. We will highlight the results in a remark in the revision. **[Response to the selection of $\eta$]** In our numerical experiments, the default setting is $\eta = 0.9$, which satisfies the convergence condition proposed in Theorems 2 and 4 for all $\delta \in (0, \frac{4}{9})$. For a data-driven setting of $\eta$, we note that (Wang et al.
2024) became accessible on arXiv only after the NeurIPS 2024 submission deadline; it introduces a novel iterative algorithm with a tuning-free property and a linear convergence guarantee. We believe this is a good direction for future work and will discuss it in Section 5. ## Reference **[Blumensath et al. 2009]** Thomas Blumensath and Mike E Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3):265–274, 2009. **[Jain et al. 2014]** Prateek Jain, Ambuj Tewari, and Purushottam Kar. On iterative hard thresholding methods for high-dimensional m-estimation. Advances in Neural Information Processing Systems, 27, 2014. **[Wang et al. 2024]** Zezhi Wang, Jin Zhu, Junxian Zhu, Borui Tang, Hongmei Lin, and Xueqin Wang. Sparsity-constraint optimization via splicing iteration. arXiv preprint arXiv:2406.12017, 2024. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you so much for the detailed response and additional results. Most of my concerns have been cleared. I hope this discussion and clarification can be incorporated into the revised manuscript, which should benefit the paper's quality and contribution. I've increased my rating to 7. --- Reply to Comment 1.1.1: Comment: Thank you again for the helpful comments and critiques! We will add the discussions, clarifications, and additional numerical results to the revised manuscript.
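The iterative-hard-thresholding connection discussed in this thread can be sketched as follows. This is generic textbook IHT with a plain sparsity projection (Algorithm 1's Step 4 generalizes this projection to $\mathcal{V}_s^K \cap \mathcal{F}$, which is not reproduced here), and the problem sizes are made up for illustration; the step size $\eta = 0.9$ mirrors the default reported in the response:

```python
import numpy as np

def hard_threshold(v, s):
    """Project v onto the set of s-sparse vectors: keep the s entries of
    largest magnitude and zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def iht(Phi, b, s, eta=0.9, iters=300):
    """Minimize ||Phi v - b||^2 subject to ||v||_0 <= s by alternating a
    gradient step with the hard-thresholding projection."""
    v = np.zeros(Phi.shape[1])
    for _ in range(iters):
        v = hard_threshold(v - eta * Phi.T @ (Phi @ v - b), s)
    return v

rng = np.random.default_rng(0)
K, m, s = 100, 60, 3
Phi = rng.normal(size=(m, K)) / np.sqrt(m)   # random compression matrix
v_true = np.zeros(K)
v_true[[5, 17, 42]] = [1.0, -2.0, 1.5]       # s-sparse ground truth
v_hat = iht(Phi, Phi @ v_true, s)
print(np.count_nonzero(v_hat))               # at most s by construction
```

Whether the iterates actually recover `v_true` depends on the restricted isometry of `Phi` and on the step size; the sketch only illustrates the gradient-step-plus-projection structure that the response relates to Steps 3-4 of Algorithm 1.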
Summary: In this paper, the authors propose a new method for solving the high-dimensional multi-output regression (MOR) problem through compression. The authors introduce a two-stage framework that incorporates output compression to achieve computational efficiency while maintaining accuracy. Theoretical results demonstrate the framework's scalability and error bounds, while empirical results validate its performance. Strengths: - The authors clearly define the MOR problem and propose methods aimed at achieving good performance. - The paper is well-written and well-structured, containing comprehensive theoretical analyses, including training loss bounds and convergence guarantees. Weaknesses: - [Comparison Methods] The authors compare the model's performance with some existing prediction baselines such as Orthogonal Matching Pursuit and Elastic Net. Neither of these is among the latest sparse regression methods, so it would be interesting to compare with more recent work. I therefore wonder whether methods for sparse high-dimensional regression such as [1,2,3,4] could be discussed in the paper and included in the performance comparisons. Also, many of the methods mentioned in the literature review are not taken into consideration beyond Elastic Net, which makes the conclusions less convincing from the numerical-results perspective. - [Missing sparsity levels] The paper does not report the sparsity level used with the synthetic dataset, even though the sparsity level is pre-defined in the experimental settings. Since the set-up sparsity level is the natural baseline for evaluating the sparsity recovered by the model, it is important to report it in the paper. [1] Bertsimas, D., and B. Van Parys. "Sparse high-dimensional regression: Exact scalable algorithms and phase transitions (2017)."
arXiv preprint arXiv:1709.10029 (2019). [2] Bertsimas, Dimitris, and Bart Van Parys. "Sparse high-dimensional regression." The Annals of Statistics 48.1 (2020): 300-323. [3] Liu, Liu, et al. "High dimensional robust sparse regression." International Conference on Artificial Intelligence and Statistics. PMLR, 2020. [4] Sun, Qiang, et al. "SPReM: sparse projection regression model for high-dimensional linear regression." Journal of the American Statistical Association 110.509 (2015): 289-302. Technical Quality: 3 Clarity: 3 Questions for Authors: - [The influence of the compression hyperparameter m] As the compressed dimension $m$ increases, the method's performance improves significantly. I therefore wonder how the training time depends on $m$: it would be interesting to see how the training time grows on real datasets as $m$ changes, since the reported training time is only a range, while the computational complexity of the second stage is O(Km); the training time should certainly increase with $m$. - [The selection of m] I understand there is a trade-off between model performance and the compressed output dimension. However, if the dataset is completely unknown, selecting $m$ by trial and error would increase the training time. I therefore wonder whether this is an issue when the method is finally applied in real applications. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response We sincerely thank the reviewer for their positive assessment of the paper's clarity, methodology, and theoretical contributions. Figure 1 mentioned below can be found in the PDF file uploaded with the global author rebuttal. **[Response to Comparison with Sparse Regression and related references]** Thank you for these suggestions on related literature and possible baselines. We will add the following comparisons and literature review to the revised version, and some numerical experiments will be conducted as suggested. **Proposed prediction stage vs. Sparse regression:** Please refer to the global author rebuttal part "Proposed prediction stage vs. Sparse regression" for detailed discussions. We will also add the following literature reviews (Bertsimas et al. 2019, Bertsimas et al. 2020, Liu et al. 2020, Sun et al. 2015) to Section 3 in our camera-ready version: - **[Bertsimas et al. 2019, Bertsimas et al. 2020]:** This work proposes a binary convex reformulation (BCR) of sparse regression with an additional squared-$\ell_2$ regularizer and devises a cutting-plane method for solving the proposed BCR. The authors demonstrate its scalability under their sample-generating model. Compared with our result, the proposed cutting-plane method lacks an iterative convergence guarantee, even though it ensures finding a globally optimal solution, at the cost of exponential time complexity. - **[Liu et al. 2020]:** This paper proposes an algorithm for high-dimensional sparse regression with a constant fraction of corruptions, which is a different problem setting from ours. - **[Sun et al. 2015]:** This paper develops a sparse projection regression modeling (SPReM) framework to perform multivariate regression, focusing on addressing the low statistical power of many standard statistical approaches. However, the authors of Sun et al.
2015 do not provide any iterative convergence result for their proposed coordinate descent algorithm (Algorithm 1). **[Response to missing sparsity levels]** Thank you for your feedback regarding the importance of including the sparsity levels in the discussion of our synthetic-dataset experiments. We would like to point out that the sparsity level is indeed mentioned on line 298 on page 8, where $s = 3$. However, we understand that this might not have been sufficiently emphasized. To address this, we will make the description more explicit and highlight it in figure captions and subplot titles. **[Response to the influence of the compressed output dimension $m$: model performance]** The performance is relatively stable once $m$ exceeds a certain worst-case lower bound. Typically, this worst-case lower bound is a fixed number much smaller than the number of labels $K$, given the training dataset (see the response on "The selection of $m$" below). In our numerical experiments, due to resource limitations, we only conducted experiments on relatively small real datasets where $K \approx 31,000$. Thus, the ratio of the worst-case lower bound on the compression dimension to the number of labels $K$ is high compared to large datasets, which is why the accuracy in our experiments improves significantly as $m$ increases. We have now tested relatively larger $m$; see the third panel of Figure 1 and the real-world dataset part of the global author rebuttal for detailed discussions. **[Response to the influence of the compressed output dimension $m$: computational complexity]** Thank you for pointing this out. Yes, the computational complexity increases with $m$. As noted above, the accuracy is relatively stable once $m$ exceeds the worst-case lower bound.
Thus, there is no strong motivation to increase the hyperparameter $m$ beyond such a worst-case lower bound, which leads to an "upper bound" on the computational complexity. See the second panel in Figure 1 in the global rebuttal for details. **[Response to the selection of $m$]** Thank you for your feedback regarding the selection of $m$ for real applications without prior knowledge. As noted in Remark 3 (lines 182-184), we provide a theoretical worst-case lower bound on $m$ that, with high probability, ensures the compression matrix $\Phi$ satisfies the RIP property. More specifically, the worst-case bound is given by $m \ge \dfrac{12}{\delta^2}(3s \ln{9} + 3s\ln{\dfrac{e|\mathcal{V}|}{3s}}+\ln{\dfrac{2}{\tau}})$ where $s$ represents the sparsity level. As implemented in our numerical experiments, we use the worst-case bound above and repeat the experiments ten times to reduce the impact of randomness. In real applications, the dimension of the dataset is typically known, and the sparsity level $s$ can be chosen close to the average number of non-zero labels in the dataset, allowing us to compute the worst-case lower bound on $m$ accordingly. ## Reference **[Bertsimas et al. 2019]** D Bertsimas and B Van Parys. Sparse high-dimensional regression: Exact scalable algorithms and phase transitions. arXiv preprint arXiv:1709.10029, 2019. **[Bertsimas et al. 2020]** Dimitris Bertsimas and Bart Van Parys. Sparse high-dimensional regression. The Annals of Statistics, 48(1):300–323, 2020. **[Liu et al. 2020]** Liu Liu, Yanyao Shen, Tianyang Li, and Constantine Caramanis. High dimensional robust sparse regression. In International Conference on Artificial Intelligence and Statistics, pages 411–421. PMLR, 2020. **[Sun et al. 2015]** Qiang Sun, Hongtu Zhu, Yufeng Liu, and Joseph G Ibrahim. SPReM: sparse projection regression model for high-dimensional linear regression. Journal of the American Statistical Association, 110(509):289–302, 2015.
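For concreteness, the worst-case bound stated in the response above is straightforward to evaluate; a minimal sketch, where the default values of the RIP constant $\delta$ and failure probability $\tau$ are illustrative assumptions, not values taken from the paper:

```python
import math

def rip_lower_bound(s, num_labels, delta=0.5, tau=0.01):
    """Evaluate the worst-case lower bound
    m >= (12/delta^2) * (3s*ln 9 + 3s*ln(e*|V|/(3s)) + ln(2/tau)),
    where num_labels plays the role of |V|, delta is the RIP constant,
    and tau is the failure probability (illustrative defaults)."""
    return (12.0 / delta**2) * (
        3 * s * math.log(9)
        + 3 * s * math.log(math.e * num_labels / (3 * s))
        + math.log(2.0 / tau)
    )

# A dataset with roughly 31,000 labels and sparsity level s = 3:
m_min = math.ceil(rip_lower_bound(3, 31_000))
```

Note that the bound grows only logarithmically in the number of labels, which is consistent with the response's point that a fixed $m$ much smaller than $K$ suffices on large datasets.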
--- Rebuttal Comment 1.1: Comment: Thank you for your response; that resolves my concerns about the paper. I have slightly increased my rating for this paper. --- Reply to Comment 1.1.1: Comment: Thank you for the insightful comments and willingness to raise your score! We will add the related references, discussions, clarifications, and additional numerical results to the revised manuscript.
Summary: This manuscript proposes a new approach, which the authors refer to as Sparse & High-dimensional-Output REgression (SHORE), to tackle linear regression problems promoting sparse predictions. A major component of the authors' approach is to reduce the computational cost by first compressing the signal with a random normal matrix and then producing sparse predictions in the second phase. The manuscript provides a theoretical analysis of the accuracy of the sparse prediction. The authors further include numerical investigations supporting their theoretical findings. Strengths: The manuscript is well-written and tackles an interesting problem. Weaknesses: Comparison to state-of-the-art methods such as FISTA (Fast Iterative Shrinkage-Thresholding Algorithm) or ADMM (Alternating Direction Method of Multipliers) would strengthen the contribution of this work. A clear comparison highlighting the advantages of the proposed method in terms of convergence speed, memory efficiency, or specific problem suitability would be valuable. Additionally, the paper could benefit from exploring connections to randomized linear algebra, particularly regarding the selection of the matrix $\Phi$. The paper Randomized Numerical Linear Algebra: Foundations & Algorithms by Martinsson and Tropp provides a comprehensive overview of how random projections can be leveraged to achieve efficient solutions in high-dimensional problems. Further, Equation (1) is a simple linear regression or least squares problem, $\|Y-ZX\|_F^2 = \|\mathrm{vec}(Y) - (X^\top \otimes I)\,\mathrm{vec}(Z)\|_2^2$, which does not require a new name such as SHORE. Technical Quality: 4 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
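The vectorization identity the review invokes can be checked numerically; a quick sketch we add for illustration (the dimensions are arbitrary; `vec` is column-major stacking, so that $\mathrm{vec}(ZX) = (X^\top \otimes I)\,\mathrm{vec}(Z)$):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((4, 6))   # linear regressor
X = rng.standard_normal((6, 5))   # input matrix
Y = rng.standard_normal((4, 5))   # output matrix

lhs = np.linalg.norm(Y - Z @ X, "fro") ** 2
# flatten("F") performs column-major (Fortran-order) vectorization
rhs = np.linalg.norm(
    Y.flatten("F") - np.kron(X.T, np.eye(4)) @ Z.flatten("F")
) ** 2
assert np.isclose(lhs, rhs)
```

This confirms the reviewer's reformulation is valid; the authors' reply concerns whether the matrix form is nonetheless preferable for structured (e.g., low-rank) regressors.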
Rebuttal 1: Rebuttal: ## Response Thank you for your positive feedback! Figure 1 mentioned below can be found in the uploaded PDF file in the 'global' author rebuttal. **[Response to FISTA or ADMM]** Thank you for your comments. We have conducted some preliminary numerical experiments comparing against FISTA, as you suggested. Please check the first panel in Figure 1 in the global response. Moreover, FISTA has difficulty meeting the stopping criteria in Appendix 6.1 on page 21. We suspect that this is due to the special structure of our proposed model versus the general SCO. **[Response to other randomized linear algebra]** Thank you for providing these references. As you mention, how to select the projection matrix is an important open problem that may significantly influence the accuracy of the final predicted output in both theory and practice (see also Blum et al. (2005), Zhang et al. (2015), Pillai et al. (2011)). We will add this to Section 5 as one possible future direction. **[Response to SHORE name issue]** Thank you for raising this point. We will add a discussion to the camera-ready version based on the following explanations. We agree that Equation (1) for the training stage can be reformulated as a linear regression, as you presented. However, the SHORE framework we propose refers to a two-stage framework, which includes both the training stage (Equation (2), the compressed version of Equation (1)) and the prediction stage (Equation (3)). Moreover, we would like to mention that vectorizing the output matrix $\boldsymbol{Y}$ and linear regressor $\boldsymbol{Z}$ might impede the framework's ability to extend to other structured settings. For example, in some specific applications, the linear regressor $\boldsymbol{Z}$ is assumed to be low-rank, and it is more natural to formulate $\boldsymbol{Z}$ as a matrix rather than a vector. ## Reference **[Blum et al. (2005)]** Avrim Blum.
Random projection, margins, kernels, and feature-selection. In International Statistical and Optimization Perspectives Workshop "Subspace, Latent Structure and Feature Selection", pages 52–68. Springer, 2005. **[Zhang et al. (2015)]** Shengping Zhang, Huiyu Zhou, Feng Jiang, and Xuelong Li. Robust visual tracking using structurally random projection and weighted least squares. IEEE Transactions on Circuits and Systems for Video Technology, 25(11):1749–1760, 2015. **[Pillai et al. (2011)]** Jaishanker K Pillai, Vishal M Patel, Rama Chellappa, and Nalini K Ratha. Secure and robust iris recognition using random projections and sparse representations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(9):1877–1893, 2011. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. I have slightly increased my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We truly appreciate your feedback and would be grateful if you could consider raising your rating score, as we have taken steps to address the concerns you mentioned.
Rebuttal 1: Rebuttal: ## Responses to All Reviewers & Area Chairs We would like to thank the reviewers for their constructive and high-quality feedback. The manuscript has been revised based on the comments given in the three reviewers' reports. ### Comparisons & Differences We first highlight the differences between our proposed prediction stage, general sparsity-constrained optimization (SCO), sparse regression, and the classical manner (i.e., the prediction stage of SHORE without compression). The following discussions will be added to the revised version. - **Proposed prediction stage vs. General SCO:** The prediction stage requires solving a sparsity-constrained optimization (SCO) problem of the general form $\min_{||\boldsymbol{\alpha}||_0 \leq k} ||\boldsymbol{\beta} - \boldsymbol{A}\boldsymbol{\alpha}||_2^2$. Finding the globally optimal solution of an SCO problem is, in the worst case, computationally hard. In contrast with general SCO, the problem we study in the prediction stage takes the random projection matrix $\boldsymbol{\Phi}$ satisfying the restricted isometry property (RIP) as its $\boldsymbol{A}$ and uses $\widehat{\boldsymbol{W}}\boldsymbol{x}$, with $\widehat{\boldsymbol{W}}$ obtained from the compressed training stage, as its $\boldsymbol{\beta}$. As a result, and as presented in Theorem 2 (page 5) and Theorem 4 (page 7), the proposed efficient algorithm (Algorithm 1 -- PGD) ensures global linear convergence to a ball with center $\widehat{\boldsymbol{y}}$ (the optimal solution of the prediction stage) and radius $O(||\boldsymbol{\Phi}\widehat{\boldsymbol{y}} - \widehat{\boldsymbol{W}}\boldsymbol{x}||_2)$, which might not be true for general SCO problems. - **Proposed prediction stage vs.
Sparse regression:** Although both the proposed prediction stage and sparse high-dimensional regression share a similar optimization formulation, $\min_{||\boldsymbol{\beta}||_0 \leq k} ~~ ||\boldsymbol{y} - \boldsymbol{X}^{\top}\boldsymbol{\beta}||_2^2,$ the proposed prediction stage (equation (3) under line 149 on page 4) is distinct from sparse regression in the following respects: - **Underlying Model:** Most existing works on sparse high-dimensional regression assume that samples are generated i.i.d. from the linear relationship $\boldsymbol{y} = \boldsymbol{X}^{\top}\boldsymbol{\beta}^* + \boldsymbol{\epsilon}$ with an underlying sparse ground truth $\boldsymbol{\beta}^*$. In the proposed prediction stage, we do not assume any additional underlying model on the samples unless otherwise specified. The problem we study in the prediction stage takes the random projection matrix $\boldsymbol{\Phi}$ satisfying the RIP as its $\boldsymbol{X}^\top$ (whereas $\boldsymbol{X}^{\top}$ in sparse regression does not ensure the RIP) and uses $\widehat{\boldsymbol{W}}\boldsymbol{x}$, with $\widehat{\boldsymbol{W}}$ obtained from the compressed training stage, as its $\boldsymbol{y}$. - **Problem Task:** Sparse regression aims to recover the sparse ground truth $\boldsymbol{\beta}^*$ given a sample set $\{(\boldsymbol{x}^i, \boldsymbol{y}^i)\}_{i = 1}^n$ with $n$ i.i.d. samples. In contrast, the task of the proposed prediction stage is to predict a sparse high-dimensional output $\widehat{\boldsymbol{y}}$ given a random projection matrix $\boldsymbol{\Phi}$ and a single input $\boldsymbol{x}$. Therefore, due to the distinct underlying models and problem tasks, existing methods and results for sparse regression cannot be directly applied to the proposed prediction stage. - **Proposed prediction stage vs.
Classical manner:** As mentioned in Remark 1 & Remark 2 on page 4, the compressed SHORE (both training stage & prediction stage) enjoys better computational complexity compared with the classical manner, especially in settings with high-dimensional outputs. ### Numerical Experiments We then report some preliminary results of numerical experiments as suggested by reviewers (see Figure 1 and Table 1). Other numerical experiments, including relatively sophisticated optimization methods and large real instances, will be reported later in the camera-ready version due to resource and time limitations. - **Comparing with FISTA:** The first panel in Figure 1 compares the proposed algorithm (Algorithm 1 -- PGD) with FISTA. As we can observe, the proposed algorithm outperforms FISTA in precision, especially when $m$ is small (heavily compressed). - **Prediction time:** The second panel in Figure 1 reports the prediction running time (measured in seconds) of the implemented prediction method with early stopping (see lines 618-620, Section A.6.1 on page 21), solved by PGD under different compressed output dimensions $m$. As we can observe, the running time first decreases dramatically, then increases almost linearly with respect to $m$. This phenomenon occurs because the maximum number of iterations $T$ is 30 in the implemented prediction method with early stopping, which is relatively large: as $m$ increases but is still less than 1000, the actual number of iterations drops dramatically due to the early-stopping criteria; after $m$ passes 1000, the actual number of iterations stays at 5, and the running time then grows linearly as the dimension $m$ increases. - **Real-world dataset:** The third panel in Figure 1 shows that for a real-world dataset with 31,000 labels, the model performance of our proposed algorithm becomes stable once $m$ exceeds some worst-case lower bound.
- **Comparing with elastic net under the low SNR regime:** As observed in Table 1, the elastic net achieves precision similar to that of the proposed PGD under a low SNR regime. It is true that the elastic net performs much better in the low SNR regime than in the high SNR regime. However, we would like to point out that the elastic net runs much slower than PGD: the prediction time for the elastic net is around 1,000 seconds, while that of PGD is around 10 seconds. Pdf: /pdf/f2b2bce9275dad05b371dc2990c8da48effc2699.pdf
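As a reference point for the prediction stage discussed in this rebuttal, projected gradient descent on a sparsity constraint amounts to gradient steps followed by hard thresholding. The sketch below is our illustration of that general scheme, not the paper's Algorithm 1; the dimensions, step size, and synthetic data are illustrative assumptions:

```python
import numpy as np

def hard_threshold(v, k):
    """Project v onto {y : ||y||_0 <= k} by keeping its k largest-magnitude entries."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

def pgd_sparse(Phi, b, k, n_iters=30):
    """Projected gradient descent sketch for min_{||y||_0 <= k} ||Phi y - b||_2^2."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1/L for the smooth quadratic part
    y = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        y = hard_threshold(y - step * Phi.T @ (Phi @ y - b), k)
    return y

# Illustrative use: a Gaussian random projection with RIP-friendly 1/sqrt(m) scaling.
rng = np.random.default_rng(1)
m, K, k = 80, 200, 3
Phi = rng.standard_normal((m, K)) / np.sqrt(m)
y_true = np.zeros(K)
y_true[rng.choice(K, size=k, replace=False)] = np.array([1.0, -2.0, 1.5])
b = Phi @ y_true
y_hat = pgd_sparse(Phi, b, k)
```

With the step size at most the reciprocal of the largest eigenvalue of $\boldsymbol{\Phi}^\top\boldsymbol{\Phi}$, each iteration does not increase the objective, which is the standard setting in which hard-thresholding schemes admit convergence guarantees.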
NeurIPS_2024_submissions_huggingface
2024
Chain of Thoughtlessness? An Analysis of CoT in Planning
Accept (poster)
Summary: This paper evaluates the effectiveness of CoT prompting on reasoning problems within the Blocksworld domain, revealing that performance gains from CoT are limited and heavily reliant on problem-specific prompts, with diminishing returns as problem complexity increases. Strengths: - The study employs a well-defined case study within the Blocksworld domain, providing a concrete context to evaluate CoT prompting. - The study's examination of example generality and problem complexity offers a structured and comprehensive approach to evaluating CoT's effectiveness. Weaknesses: 1. The paper omits all experimental details except the prompts. The lack of description of experimental details, including the temperature and other parameters used on the datasets, affects the reproducibility of the paper. 2. The paper is not very clear. The core point of the paper is that CoT is very sensitive to prompts and demonstrations. However, the paper spends a lot of time exaggerating the failure of CoT-style demonstration. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. I seriously doubt the test of Table 1. Why does the Lexicographic Stacking Prompt only have 17 samples (*/17), while other prompts have 270 samples? This comparison is completely unfair. The model may only perform well on these 17 examples. 2. I think there is an obvious overclaim in this paper. This paper only shows that the current CoT is very sensitive in this kind of action-related task. In addition, the effectiveness of CoT plans is often verified on natural language mathematical tasks. Because the expression of mathematics plus natural language is freer, and the planning logic is more general, demonstration can be used to stimulate better CoT-style output, including various types such as ICL CoT, Zero-shot CoT, Plan-and-Solve, Complex-CoT, and a series of other methods. 3. In fact, as shown in Figure 1, the absolute improvement of CoT has become smaller.
The essential reason is that the model itself cannot solve such a complex problem. It is not that the CoT strategy is not good. In fact, compared with direct output, the relative increase in performance (CoT Acc / Direct Acc) actually improves. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thorough review. Responses to questions: 1. The lexicographic stacking problem is a special case. For a given number of blocks, there is only one problem which requires stacking them all in lexicographic order, as the syntactic stringency of the order fully determines the problem. All of our instances are between three and 20 blocks total, thus giving us exactly 17 possible lexicographic problems to test. You are exactly correct that the model may only perform well on these examples–in fact, they are constructed to explicitly require only syntactic matching. We will make this clearer in the text and description, and we will highlight this in the final table by shading the relevant squares a different color. 2. Section 6 extends our results to previously studied natural language problems. We took domains that had been examined in the original CoT paper–coinflip and last letter concatenation–and created versions where more reasoning steps are necessary. Neither of these is "action-related". In both cases, previous work has claimed CoT learns the necessary procedure by example. In the same section, we also examine a very simplified natural language arithmetic domain where the model only needs to repeatedly simplify one-digit-by-one-digit arithmetic expressions. We discuss the results and point out that we see the same trends as before in letter concatenation and arithmetic simplification. As for the varieties of CoT listed: In-Context Learning (ICL) CoT and Zero-shot are both explicitly covered. If by Complex CoT the reviewer is referring to Fu et al.'s 2023 paper Complexity-Based Prompting for Multi-step Reasoning, then, to quote from it: "We observe a clear trend on both GSM8K and MathQA: complex prompts perform on par with simple prompts on hard cases, while achieving more clear gains on cases with fewer number of reasoning steps." This directly matches our results.
Low reasoning step number problems are very amenable to CoT improvements, but these gains disappear when generalization is attempted. 3. We assume the reviewer is referring to Figure 2. We’re not saying that CoT doesn’t improve raw performance. We’re saying that the mechanism underlying it does not seem to involve learning the procedure or algorithm demonstrated. If it did, then we’d expect to see generalizable performance improvements. Table-to-stack problems are not particularly complex: every block is on the table, and the model is tasked with stacking them in a predetermined order. The stacking prompt proceeds very similarly to an explicit n-shot plan-and-solve prompt: first the model figures out which is the bottom block, and then it figures out, step-by-step, which block goes on top of the previously placed block. Every one of these steps requires only a simple syntactic transformation that LLMs are known to excel at, but when the model is tasked with doing this for instances larger than the ones demonstrated, it fails to extend the procedure correctly. This is even clearer in the case of last letter concatenation. There, the model need only list every word and its last letter and then concatenate them together. It is able to do this perfectly for up to a few words, but afterwards, performance quickly plummets. Our main point, however, is that CoT does not work the way that it has been described in previous work. The model may do better, but it doesn’t learn and apply a new algorithm. **About experimental details:** All code and data in this paper will be made public, together with instructions for setting up and running the same tests with any API-accessible LLM. We will add the details about the exact models and temperatures we used to the paper. They are reproduced below: Temperature was set to 0 for all experiments except those already explicitly mentioned in the current text (self-consistency and some single-digit arithmetic). 
We used the static models when possible: GPT-4-Turbo is gpt-4-turbo-2024-04-09 and GPT-4 is gpt-4-1106 in the OpenAI API. Claude-3-Opus was accessed mid-April 2024. The instances within the Blocksworld domain were generated using the PDDL generators provided by the International Planning Competitions. Within each problem class, an equal number of instances were generated across the number of blocks. The intended test set for zero-shot, progression proof, and universal algorithm had a total of 270 instances (15 instances per # of blocks). For the stacking prompt, the test set had a total of 261 instances, as there are only 6 stack combinations for 3-block problems. Finally, there was only one instance per # of blocks for the lexicographic case. The coinflip, lastletter, and arithmetic domains are detailed in appendices A.3, A.4, and A.5. CF and LLC are both domains extended directly from Wei's 2022 Chain of Thought paper, modified to allow for arbitrary-length instances. Exact distributions are in the appendix. The multi-step arithmetic domain is a synthetic domain. We will add the random generation procedure for problems that we used to create the dataset: generate the innermost number m; uniformly rejection sample an operation # and number n, such that n#m=k is an integer from 1 to 9; if the number of reasoning steps is enough, stop; otherwise, set m<-k and jump to the rejection sampling step. **The core point of our paper** is not that CoT is sensitive to prompts and demonstrations, but that CoT, contrary to previous claims, does not in-context teach LLMs general and robust algorithmic procedures. We show that CoT depends on specific prompts being narrowly constructed and customized to the generality of the problem class and the length complexity of the instance itself.
Our results provide critical evidence that counters the current claims and consensus that CoT unlocks human-like procedural reasoning abilities within LLMs from a few demonstrations, and suggest that pattern matching rather than procedure following may be a better explanation for its success. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Your rebuttal indeed clarified some of my misunderstandings. I have decided to raise my overall score. 1. **It seems that over-claiming still exists.** As you mentioned, the CoT prompt does not make the model robustly learn the correct algorithmic procedures. However, in most of your tasks, CoT indeed shows an improvement compared to Direct, which precisely proves that LLMs can learn a step-by-step thought. 2. **Prompt sensitivity:** Additionally, I still believe that the contribution of this paper is limited to the fact that CoT is only sensitive to specific prompting or relies on a certain way of thought. In fact, to solve certain types of problems, the demonstration or prompting strategy is inherently domain-specific. The clearer and more relevant the logical format is for a given problem, the faster and better reasoning performance can be achieved. This kind of discussion seems unnecessary. For example, if I demonstrate or guide anyone through Program-of-Thought logic, as humans, we cannot completely follow that logic to solve commonsense reasoning problems that cannot be strictly expressed. 3. **Insufficient consideration:** Based on the previous point, using a specific logic to solve specific tasks is inherently a shortcut for humans, while general logic is often inefficient for uncommon tasks with special detailed settings (including so-called blocksworld, letter concatenation, and coin flipping). I believe this is beyond doubt.
Therefore, I still think it is a shortcoming that this paper does not discuss benchmarks like GSM8K, MATH, and CommonsenseQA, which are natural language problems requiring general logic. --- Rebuttal 2: Comment: Thank you for your reply. 1. Perhaps we are misunderstanding, but this response seems contradictory: robustly learning and applying a correct algorithmic procedure and learning a "step-by-step thought" are the same thing. Our claim is that, while the improvements seen do exist, they are brittle in ways that robustly learning the requisite step-by-step process wouldn't be–raw improvement does not contradict this. The evidence necessary for our claim is the rapid deterioration of this improvement on instances that require more reasoning steps, a phenomenon we demonstrate across multiple diverse multi-step reasoning domains, from variants of Blocksworld to last letter concatenation and arithmetic expression simplification. In the camera ready version, we will make much clearer both this distinction and how we are using the evidence we gather to show the claim we make. 2. Outside of zero-shot methods ("let's think step by step"), manual CoT construction is always domain-specific, and requires demonstrating an in-context procedure for each exemplar. We do not claim anything about CoT's sensitivity to particular prompts. Consider the Last Letter Concatenation domain: performance is perfect for smaller instances, yet falls quickly on larger ones, despite the prompt and procedure being the same in all cases. Our contribution is to provide critical evidence that the intuitions underlying previous work–that CoT demonstrations of procedures allow LLMs to learn those procedures and apply them in-context, an anthropomorphization arising from the response format engendered by the technique and the popular name for it–miss the mark. We constrain our claims and evaluations to CoTs that are designed for and applicable to the problems they're presented with. 3.
In its current form, our paper does explicitly discuss GSM8k, CommonsenseQA, MAWPS, AsDiv, and others. We go further in depth on arithmetical reasoning benchmarks and construct a simplified natural language synthetic benchmark in section 6, under the heading "Multi-step Arithmetic on Single Digit Numbers". In particular, we discuss a fundamental limitation of these benchmarks: they generally require only a couple of reasoning steps. Only ten percent of all GSM8k problems (a benchmark which explicitly attempted to increase the number of reasoning steps necessary) require more than five steps, and none are over eight. This sort of narrow problem distribution is insufficient to properly evaluate whether the model has learned and applied the correct procedure or is merely pattern matching to syntactically similar templates. Note also the quote we brought up in our rebuttal from the Complex CoT paper, which we will include together with a deeper discussion in our camera ready version: "We observe a clear trend on both GSM8K and MathQA: complex prompts perform on par with simple prompts on hard cases, while achieving more clear gains on cases with fewer number of reasoning steps." Robust generalization is still not seen even with additional prompt engineering or manipulation. Furthermore, as mentioned in our paper, CommonsenseQA and similar domains do not explicitly require multi-step reasoning. In fact, the CoT exemplars given in Wei's 2022 Chain of Thought paper for CSQA are all exactly two steps. Because CSQA questions require broad knowledge, and are very amenable to memorization, there is no way to scale the dataset to instances that necessarily require more steps–these are problems that mainly test what the model knows rather than whether the model can reason in a generalizable manner. --- Rebuttal Comment 2.1: Comment: Thank you for your detailed response. I have no additional questions.
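The random generation procedure for the multi-step arithmetic domain, as described in the rebuttal above, can be sketched as follows; the operand range 1..9 and the nesting format of the expression string are our illustrative assumptions:

```python
import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def gen_instance(n_steps, rng=random):
    """Generate a nested single-digit arithmetic expression requiring n_steps
    simplification steps, with every intermediate value an integer in 1..9."""
    m = rng.randint(1, 9)              # innermost number
    expr = str(m)
    for _ in range(n_steps):
        while True:                    # uniform rejection sampling of (op, n)
            n, op = rng.randint(1, 9), rng.choice(list(OPS))
            k = OPS[op](n, m)
            if k == int(k) and 1 <= k <= 9:
                break
        expr, m = f"({n} {op} {expr})", int(k)   # wrap, then set m <- k
    return expr, m                     # expression and its final value
```

For example, a three-step instance might look like `(2 * (8 - (3 + 4)))`, which simplifies to 2; the number of parenthesized layers directly controls the number of reasoning steps required.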
Summary: This paper evaluates the efficacy of Chain of Thought (CoT) prompting in improving the reasoning capabilities of large language models (LLMs) in planning tasks. The authors analyze CoT's performance in the Blocksworld domain, a classical planning problem, and extend their findings to other synthetic tasks. They demonstrate that CoT prompts only show meaningful improvements when the provided examples are highly specific to the problem class, and that these improvements quickly deteriorate as the complexity of the problems increases. The paper challenges the notion that CoT enables LLMs to learn general algorithmic procedures, suggesting instead that performance gains are largely due to pattern matching rather than genuine algorithmic understanding. Strengths: 1. **Comprehensive Evaluation**: The paper provides a thorough analysis of CoT prompting across various problem domains, including Blocksworld, Coin Flip, Last Letter Concatenation, and multi-step arithmetic. This breadth ensures that the findings are robust and generalizable. 2. **Detailed Analysis**: The authors dissect the performance of LLMs on different levels of problem complexity, offering insights into the limitations of CoT prompting as problem size and complexity increase. 3. **Critical Perspective**: The paper critically examines the assumptions behind CoT prompting, providing evidence that contradicts the widely held belief that CoT enables LLMs to learn and apply general reasoning strategies. Weaknesses: 1. **Weak Motivation**: The Chain-of-Thought paper [1] claims that "chain-of-thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting." However, the findings from this paper, specifically in lines 84 to 88, reiterate the same point, stating that CoT prompts act as exemplars and involve "pattern matching." 2. 
**Unrelated Title**: The paper primarily evaluates CoT prompts in a single planning task, Blocksworld, and the results indicate that CoT is still useful, as shown in Table 1. However, the title "Chain of Thoughtlessness" does not accurately reflect the content or findings. 3. **Lack of Smooth Transitions**: The content in the introduction and related work sections lacks order, making the paper structure disjointed and difficult for readers to follow. The transitions between paragraphs are abrupt and lack coherence. For example, using subheadings in the related work section could help group related content into subsections, creating a more logical flow. 4. **Section 3 Placement**: The background information in Section 3 could be integrated into a subsection of the related work, providing a smoother introduction to the planning tasks. 5. **Unclear Conclusions**: The conclusions in lines 231 to 235 are not well-supported by the results. Table 1 shows that Zero-shot CoT, Blocksworld Universal Algorithm CoT, Stacking CoT Prompt, and Lexicographic CoT Stacking Prompt all improve performance, with some showing significant improvements. However, the paper claims that the CoT approach does not enhance performance, which contradicts my understanding. Perhaps my interpretation of this table is incorrect. 6. **Lack of Examples in Main Text**: Lines 196 to 220 should include specific examples directly in the main text, as they are central to the experiments. Currently, these examples are relegated to the appendix, which diminishes their impact and hinders understanding of the remaining sections. 7. **Insufficient Experimental Explanation**: The experimental section lacks clarity and organization. Each experiment should be clearly described, explaining what each aims to demonstrate, such as Experiment A for generality and Experiment B for complexity. 8.
**Missing Statistics about Test Set**: There are no statistics provided about the Blocksworld test set, leaving the question distribution unclear. This is essential evidence for evaluating the generality of CoT prompts. 9. **Unaddressed Human Labor Cost**: The paper frequently mentions the drawback of CoT prompts requiring human labor but does not provide any comparative analysis or solutions. This point does not directly support the paper's argument and feels extraneous. 10. **Writing Typos**: - Line 72: PDDL is introduced as an acronym without explanation. - Line 239 likely refers to Figure 2, which should be corrected. > Reference: > [1] Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., ... & Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35, 24824-24837. Technical Quality: 1 Clarity: 1 Questions for Authors: N/A Confidence: 4 Soundness: 1 Presentation: 1 Contribution: 1 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Respectfully, we found this review highly inconsistent and in places incoherent. We wonder if there was some transmission/saving error when the reviewer posted it. Nevertheless, let us respond to the review as it was posted. We hope our clarifications persuade you to rethink your overall evaluation of the paper. Lines 84-88 are a summary of our results: “Overall, this case study calls into question assumptions about the generalizable effectiveness of chain of thought, and suggests that LLMs do not learn new, general algorithms in context, but instead rely on some form of pattern matching to achieve prompt-design-specific performance increases.” The reviewer’s quote is from Wei’s 2022 paper introducing Chain of Thought. The full context from the paper: > We explore how generating a chain of thought—a series of intermediate reasoning steps—significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain-of-thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Our paper directly challenges the generality of this claim. While CoT does improve performance in terms of raw accuracy in many domains, our results show that it does not do so in a way that generalizes across problem complexities, despite including explicit demonstrations of algorithms that do generalize along these dimensions. Thus, we position our paper as a counterpoint to Wei’s “lower bound” approach–our results give evidence about the upper bound of CoT efficacy as part of a critical examination of the assumption that CoT unlocks human-like procedural reasoning from a few examples. There may be some confusion about the meaning of the title. The full title is “Chain of Thoughtlessness? An Analysis of CoT in Planning”. 
The word “thoughtlessness” does not refer to the opposite of thought, but to the opposite of thoughtfulness. What we are questioning with this title is whether “chains of thought” generated by LLMs are produced in a careless way–one which does not seem to involve generalizable procedure learning but instead looks to consist of slapdash pattern matching. We would also argue that the word “thought” in the phrase “chain of thought” is already very loaded and anthropomorphized, and so is fair to criticize. Furthermore, we examine three domains other than blocksworld: coinflip, last letter concatenation, and an arithmetic benchmark. The first two are direct extensions of previous work that made explicit claims about the breakthrough efficacy of CoT on these domains. We confirm that CoT prompting does generalize fairly far in the coinflip domain, but find that on last letter concatenation and arithmetic expression simplification, performance drops substantially when the domain is extended, showcasing that the model did not correctly replicate the procedure demonstrated in the prompt. We wish to reiterate that the central claim of our paper is not that CoT does not lead to any performance improvement, but, to quote the reviewer’s strengths section, to offer evidence against “the widely held belief that CoT enables LLMs to learn and apply general reasoning strategies.” We will clarify this point further in the final copy, and we will change the phrase “does not meaningfully enhance performance” to “does not robustly generalize.” We’d also like to point out that, in table 1, Zero-shot CoT improves or worsens (depending on the model) performance by around one percentage point, making no meaningful difference, and that the Blocksworld Universal algorithm is already a fairly narrow prompt which can only perform well on a loose relaxation of the domain. 
Note that the improvements become more significant the more granular and problem-specific the prompts become–which is a point we make in the paper: to effectively utilize CoT, the user needs to greatly restrict the problem or write different CoTs for many subproblems. We will add this analysis to the appendix. Blocksworld instances were generated using PDDL generators provided by the International Planning Competitions. The intended test set for zero-shot, progression proof and universal algorithm had a total of 270 instances (15 instances per # of blocks between 3 and 20). For the stacking prompt, the test set had a total of 261 instances as there are only 6 stack combinations for 3 block problems. Finally, there was only one instance per # of blocks for the lexicographic case. We will add these details to the appendix. On formatting: - Almost every paragraph begins with a linking sentence introducing the new topic. We will also improve the prose: lines 40-47 will be rearranged to improve flow, lines 48-53 will be mostly removed. We can add headings to the paragraphs in the related work as follows: What is CoT?, Enhancements to CoT, Problems with CoT, Generalizability of CoT, Current Opinions. We will also rework the final paragraph of the related work section to increase clarity and readability. - Section 3 will be a subsection of related work. - Due to the amount of detail provided in each prompt, it is infeasible to provide examples in the main body. We compromise by putting the prompts in the appendix and creating figure 1, which illustrates each of the three levels of generality in the blocksworld domain. We will make it clearer in the text which section corresponds to which problem. - We do provide a definition of PDDL in the background section on line 146, but we will add a parenthetical to the introduction to clarify. - We have corrected the figure references. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. 
First, I apologize for the unstructured review. My main concern is that your paper seems to overclaim, and the experiments are not convincing, which has also been mentioned by other reviewers. I will wait for the responses from the other reviewers before deciding whether to raise the score. --- Rebuttal 2: Comment: Thank you for your reply. We reiterate that our global response as well as the response to you clearly points out that we are only claiming that CoT doesn’t robustly learn the algorithmic procedure demonstrated in prompt, as shown by the performance deteriorating as the number of reasoning steps increases (despite the fact that the given procedure does solve the problem). This is not contradicted by raw improvements on problems similar to the given CoT exemplars. As we pointed out in our individual response to you, your original review seems to have misread this. We would also like to draw your attention again to the fact that in addition to planning tasks, we have done experiments on extensions of standard CoT benchmarks–including last letter concatenation (Section 6)--and those results too are in agreement with our claim about CoT not leading to general procedure learning. Finally, we note–as you yourself can readily see–that all the other reviewers have had significantly more positive assessments of the paper. We look forward to your reconsideration of your assessment.
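For readers unfamiliar with the last letter concatenation benchmark referred to in this exchange, its ground truth is a one-line procedure; the sketch below is our own illustration (not the authors' code), where generalization is probed by querying with more names than the CoT exemplars demonstrated:

```python
def last_letter_concat(names):
    """Ground truth for the last-letter-concatenation task:
    concatenate the final letter of every word in each name."""
    return "".join(word[-1] for name in names for word in name.split())

# CoT exemplars demonstrate the step-by-step procedure on short inputs;
# the benchmark then tests longer inputs than any exemplar showed.
assert last_letter_concat(["Amy Brown"]) == "yn"
assert last_letter_concat(["Amy Brown", "Carl Diaz"]) == "ynlz"
```

The task is trivially algorithmic, which is what makes the reported failure to length-generalize informative.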
Summary: The paper conducts a systematic study of claims that chain-of-thought (CoT) prompting unlocks reasoning abilities in LLMs. In particular, the paper evaluates the ability of LLMs to a) learn a simple algorithm from demonstrations annotated with reasoning steps ("thoughts") provided as part of the input prompt, and b) to generalize to harder instances in the same problem class. CoT prompt variants are evaluated on a carefully constructed set of simple planning problems (e.g., Blocksworld) and simplified variants. The experimental evaluation demonstrates that CoT works better when the prompt includes reasoning demonstrations on examples "similar" to the query (test) problem and when the test problem class is sufficiently easy (small, specific). The paper makes the case that CoT enables a form of syntactic pattern matching rather than algorithmic reasoning. Overall, the paper provides deep empirical insight into CoT prompting, clearly demonstrating its limited ability to generalize to larger tasks requiring multi-step reasoning when given smaller sized demonstrations, a task easily handled by sound planning algorithms. Strengths: + The paper tackles an important topic of large interest to the community. The extent of LLMs abilities aren't fully understood, especially on challenging tasks (reasoning, planning) and it is important to carefully assess claims of new abilities on these tasks. + The paper performs a rigorous empirical evaluation using the classical and easily-understood domain of Blocksworld as well as other domains. By evaluating a variety of general-to-specific prompts across problem distributions, the paper is able to thoroughly assess claims of reasoning abilities. + The result of this careful evaluation is strong evidence of the limited ability of CoT to induce algorithmic reasoning in LLMs using only annotated demonstrations. Rather, some form of syntactic pattern matching seems to be occurring. 
+ The paper is extremely well written and easy to understand. The appendixes are richly detailed. Weaknesses: - I didn't spot any major weaknesses in this paper. Some minor nitpicks are in the questions. Technical Quality: 4 Clarity: 4 Questions for Authors: - This may be out of scope but I'd be curious to understand how much variance there is in these results wrt to the prompt inputs. Specifically, how robust are the results wrt (semantically) small changes to the system prompt, examples, thoughts, formatting, etc.? - (Line 200) I'm probably missing something but where exactly is the meta prompt explaining plan correctness in A.6.2? - Some references seem to be incorrect or missing. - (Line 239) Should it be "Figure 2" (instead of "Figure 3")? - (Line 253) Should it be "Table 2" (instead of "Table 3")? - Should Figure 3 and Table 3 be referenced somewhere in Sec 6.1? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. The progression proof CoT prompt does include a meta-prompt, but we failed to include it in the original draft. Here it is: > The plan correctness is defined in terms of states resulting from executing the actions in the plan. An action is executable in a state when all its preconditions hold in that state. The state resulting from the action execution consists of everything in the previous state with the addition and deletion of add and delete effects of the action. Plan correctness is defined as follows: if the first action in the plan is applicable in the initial state, i.e., its preconditions are all present there; and the second action is applicable in the state resulting from applying the first action to the initial state, this process continues until the state resulting from the application of the last action in the last but one state gives rise to the final state where all the goals are satisfied. We will add this to the appendix. We’ve also fixed all the missing and incorrect references, including the ones you mentioned. Thank you for catching these! As to your question, we did also test multiple prompts across our experiments, to ensure that minor details of prompt selection did not impact our results, and chose the best performing ones for fairness. In the Blocksworld domains, we tried four varieties of the universal prompt, ranging from more to less explicit algorithm demonstrations. We compared a CoT that required finding the base block first and ones that instead serialized the goal conjuncts, eventually settling on the prompt that gave the best performance. We also compared varieties of lexicographic prompt while looking for a clearcut example where CoT guarantees length generalization over n-shot prompting, and we checked reverse lexicographic problems, as well as problems where the given query had the opposite order of the prompts given. 
We found some variation across all of these, but nothing that contradicted the general trends we observed in our reported findings. All data we gathered, whether featured in the main body of the paper or not, will be publicly released on github, and we will add a discussion of these prompt variations to the appendix. We also considered prompt variations in the non-planning domains: In the coin flip and last letter concatenation domains, we generated the same kinds of prompts that Wei’s 2022 Chain of Thought paper used. However, as we needed more varied names, we drew ours from a different source (the US Social Security administration, as discussed in appendices A.3 and A.4), and partitioned these into 1, 2, and 3-length names. We measure the length as the number of tokens the GPT-4-Turbo tokenizer (CL100K_Base) requires to encode them. We found no significant differences in performance between them, and included equal mixtures of all three kinds of prompts in our reported experiments. We also varied the number of examples–from 1 to 3–and the correctness of the examples: either all correct examples or all incorrect. While incorrect examples do sometimes show a small decrease in performance, it is almost the same, and the overall trend is unaffected (see also Invalid Logic, Equivalent Gains, Schaeffer et. al. 2023 for a more thorough analysis of wrong CoTs). In the letter concatenation task, we also tried a prompt where, instead of saying “let’s think step by step” we merely added a “[Thought]” tag to the end of the direct zero-shot prompt. This retained some of the improvement of normal zero-shot CoT, but didn’t do quite as well. In the arithmetic task, we varied the requirements of the CoT from free-form thoughts to requiring intermediate answers to be tagged to requiring intermediate answers and computations to be tagged. We also tried explicitly including an instruction that every intermediate computation will be a single digit integer. 
Performance was roughly the same across all, and the overall downward trend was unaffected by these modifications. We will add these additional experiments and associated charts together with this discussion to the appendix. --- Rebuttal Comment 1.1: Title: Re. author response Comment: I thank the authors for their detailed response to all reviewers. After reading the other reviews and the comments, I'm now more positive about the paper. In my opinion, the authors have been clear about their central claims both in the paper and comments ("we are only claiming that CoT doesn’t robustly learn the algorithmic procedure demonstrated in prompt, as shown by the performance deteriorating as the number of reasoning steps increases"), and proceed to demonstrate its empirical validity rigorously. I think this paper adds valuable empirical insight into the limitations of current LLMs, which is sufficient for me to recommend acceptance. Beyond that, I think this paper would be a valuable addition to the growing body of work in LLM-based planning.
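The plan-correctness definition quoted in the rebuttal above translates directly into a state-progression check. A minimal sketch of such a checker (our illustration with facts encoded as plain strings, not the authors' evaluation harness):

```python
def validate_plan(init, goals, plan):
    """State-progression check of plan correctness, following the quoted
    definition: an action is executable when all its preconditions hold in
    the current state; the resulting state is the previous state minus the
    action's delete effects plus its add effects; the plan is correct when
    every action executes in turn and the final state satisfies all goals."""
    state = set(init)
    for pre, add, delete in plan:  # action = (preconditions, add effects, delete effects)
        if not set(pre) <= state:
            return False  # precondition failure: plan is not executable
        state = (state - set(delete)) | set(add)
    return set(goals) <= state

# Two-block example: pick up A from the table, then stack it on B.
init = {"ontable A", "ontable B", "clear A", "clear B", "handempty"}
pickup_a = ({"ontable A", "clear A", "handempty"},
            {"holding A"},
            {"ontable A", "clear A", "handempty"})
stack_a_b = ({"holding A", "clear B"},
             {"on A B", "clear A", "handempty"},
             {"holding A", "clear B"})
```

Here `validate_plan(init, {"on A B"}, [pickup_a, stack_a_b])` succeeds, while dropping the pickup makes the stack action inapplicable and the check fails.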
Summary: The paper aims to show that Chain of Thought style prompting does not result in generalisation of reasoning, instead relying on pattern matching to improve performance. It argues that if CoT results in language models learning algorithms in context, then prompts describing general procedures should result in similar performance gains to prompts describing task specific examples. Experiments are performed on planning problems in the Blocksworld domain using prompts at different levels of generality. The results show smaller performance gains from general prompts compared to large gains for more specific prompts. Through further experiments on planning problems in other domains, the paper shows that CoT does not generalise to more complex problems than those presented as examples. The paper concludes that improvements from CoT are due to pattern matching rather than in-context learning of algorithms. Strengths: 1. Novel and interesting evaluation of how the performance gains from CoT can vary with the level of generality of CoT described in the prompt. 2. Comprehensive evaluation of how CoT fails to generalise to problems of higher complexity across a variety of tasks. 3. The blocksworld experiments as well as other synthetic benchmarks are a useful contribution for evaluating reasoning in a scalable manner. Weaknesses: While the results presented do indicate that more general demonstrations of CoT do not perform as well as task specific ones, they also do show improvement over standard prompting. For example in Table 1, GPT4 jumps from 7% to 28% using the Blocksworld Universal Algorithm. A similar trend is present in Table 2 and Table 3. It is unclear to what extent this is from pattern matching or from algorithmic generalisation. The paper may be at risk of overstating its claim that CoT is not inducing any reasoning. Discussion and further analysis of this would be helpful. 
A concern about the "Progression Proof CoT" prompt: The prompt includes demonstrations of successful plans with details of the actions and states. However, it does not include any reasoning / algorithmic description of how the plan was obtained from the problem. To be considered a legitimate Chain of "Thought" prompt, should not this prompt include description of a general procedure to derive the plans? Otherwise, it is difficult to see of the LLM can be expected to generalise reasoning from a description of the final answer without intermediate reasoning "thoughts". Overall, a novel and interesting contribution but there are some concerns about evaluation and the conclusions drawn from them. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. In 5.2, the paper says "only the most specific and least applicable prompts retain anywhere near this performance improvement". Is this referring to the "Blocksworld Universal Algorithm" or the "Stacking Prompt"? In the case of the former, is this also not a fairly general prompt? More broadly my concern is that Table 2 shows that the Universal Algorithm Prompt achieves similar performance to the Stacking prompt, showing that a more general CoT can perform as well as a more specific one, which is contrary to the central claim of the paper. 2. The performance of GPT4 on Lexicographic Stacking seems surprisingly low with the n-shot prompt (considering the CoT prompt achieves 94%). Was this due to the particular examples provided in the n-shot prompt? 3. Table 3 does not seem to be referenced in the text? 4. Does 5.1 contain an incorrect reference to Figure 3? 5. Does 5.2 contain an incorrect reference to Table 3? 6. Can the n-shot prompts be included in the appendix? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: I could not find any discussion of the limitations of the work. 
One limitation is that the work does not provide methodologies for reasoning with LLMs that go beyond the limitations described. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thorough review. Analyzing the data using only the tables is somewhat misleading. Figures 2 and 3, as well as the appendix figure A.1.1 show more clearly that the bulk of the improvement for all prompts is on the few-block problems, whereas if the procedure shown in these CoTs were followed, we would expect this improvement to be much more robust on examples of larger length than those shown in the examples. **To the point about the progression proof prompt:** “Chain of Thought” is a loosely defined term in the literature. Intermediate “thoughts” vary from piece-by-piece rationales to compositional reasoning steps. The progression proof CoT we use is taken from previous literature that studied LLM performance on this domain ([1] call it a “state-tracking CoT prompt”). At each step, this prompt provides not just reasons for the given action, but also the current state from which to determine which action to take next. In their 2023 survey of CoT techniques, Zhang et. al. specify that “the intermediate processes of CoT reasoning [...] can encompass solutions, intermediate reasoning steps, or any relevant external knowledge pertaining to a question.” The progression proof provides partial solutions and their effects, thus making it a valid CoT instantiation. In fact, the procedure provided, if followed, should ensure that the output plan is valid. Yes, this prompt doesn’t provide an algorithm that precisely specifies every part of the reasoning process, but neither does any CoT–all CoTs assume that the model can handle some parts automatically. For example, in grade school arithmetic tasks, the selection of relevant numbers and the exact sequencing of the steps needed to solve the problem (which can be the hardest part of the problem, something which is exploited by teachers who introduce unnecessary additional information) is not specified in the chain of thought prompts provided in the original Chain of Thought paper (Wei et. 
al. 2022). Note also that, while the progression proof guarantees the executability of the output plan, we have data showing that this guarantee is not respected by the LLMs we test, thus showing they did not execute the given procedure: If they had, they would have generated a high or perfect percentage of plans that were executable. GPT-4-Turbo generated 9.36% executable plans, Claude-3-Opus was 59.63%, and GPT-4 was 38.15%. We will include these figures in the appendix as additional analysis of the LLMs’ inability to execute the procedures demonstrated in the CoT. **The central claim of our paper** is that, because CoT prompts do not lead to the model robustly learning the correct algorithmic procedure, effective CoT prompts require both strong specificity to the problem class and specificity to the length complexity of the problem itself. Our data generally supports the problem class specificity hypothesis, as zero shot CoT does worse than the progression proof, the progression proof CoT generally does worse than the universal algorithm, the three-part universal algorithm does worse on two out of three models than the two-part stacking prompt (which is itself a version of the universal algorithm that assumes all blocks are already on the table, but is otherwise identical to it–hence how close their performance is), and the stacking prompt does worse than the lexicographic prompt, which does the best. The other part of our central claim–that CoT demonstrations do not induce length generalization–is supported not just across all but the most trivial (the lexicographic case) of our blocksworld domains, but is also shown very clearly by our analysis of the last letter concatenation and arithmetic expression simplification tasks (see Figure 3). **On lexicographic stacking performance:** In all of our problems, both the n-shot and the CoT prompts use the same examples. 
The lexicographic n-shot prompts tend to fail because the model neglects exactly this part of the rules: “I can only pick up a block if the block is on the table and the block is clear. A block is clear if the block has no other blocks on top of it and if the block is not picked up.” Instead the model will stack A on B, then pick up B (which is illegal based on the explicit rule cited because B is not clear) and stack it on C, and so forth. GPT-4 fails all lexicographic problems which are larger than the examples given in this exact same manner. The CoT prompt specifies a two-part procedure: first determine which block is the base of the tower, then stack all the blocks in the correct order on that base. This is sufficient to correct the issue in most, but not all, cases, most likely by aligning the most probable completion with the correct semantics in this particular case. **Typos and formatting issues:** Thank you for pointing these out! Missing references have been added and incorrect ones have been fixed. We will add the n-shot prompts to the appendix, as well as releasing the entire codebase and data publicly on github. [1] Valmeekam K, Marquez M, Sreedharan S, Kambhampati S. On the planning abilities of large language models-a critical investigation. Advances in Neural Information Processing Systems. 2023 Dec 15;36:75993-6005. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response. After considering the clarifications in the rebuttal, I better understand the paper's claim that CoT does not robustly generalize underlying algorithms from prompts. I believe this has been demonstrated through sufficient empirical analysis and is a valuable contribution. I have decided to increase my score given these considerations.
Rebuttal 1: Rebuttal: In addition to the reviewer-specific rebuttals, we provide this global response to a couple of common ways the reviewers misunderstood the claims/contributions of our paper. **Central claim of our paper:** The central claim of our paper is not that CoT can’t improve raw accuracy on static benchmarks. We demonstrate that CoT depends on specific prompts being narrowly constructed and customized to the generality of the problem class and the length complexity of the instance itself. Our experiments show that while CoT leads to performance improvements for the narrow problem classes it is tailored to, the model fails to robustly learn the demonstrated procedure/algorithm that would allow it to generalize to other problems which are also solved by that procedure. Our results provide critical counterevidence to the current consensus that CoT unlocks human-like procedural reasoning abilities within LLMs from a few demonstrations [1]. This suggests that CoT’s success is guided more by pattern matching than procedure following. **Going beyond planning:** While our initial interest was understanding the limitations of CoT in planning, we believe that the limitations we unearth are fundamental to CoT and also apply to other problems. Specifically, we extend our experiments to extensions of domains that have been studied by the original CoT paper [1] (Coinflip and Last Letter Concatenation), creating instances within those domains where more reasoning steps are required. As grade school arithmetic is a very commonly studied domain in the CoT literature, we created a much-simplified synthetic natural language arithmetic domain. This domain requires simplifying parenthesized expressions by repeatedly applying one of the four basic arithmetical operations on one digit numbers. All intermediate answers stay one digit, so that the only math required consists of composing operations we know every LLM can evaluate perfectly. 
Results on Last Letter Concatenation and One-digit arithmetic are consistent with those of Blocksworld, showcasing a performance collapse that suggests CoT prompting has failed to teach the model the necessary underlying procedure to generalize properly. **Tables vs. Plots:** While we gave the results of our experiments both in the form of tables and plots, we note that the plots provide a lot more insight into the performance of CoT. One trend that the plots show very clearly is how the effectiveness of CoT degrades as the problem size increases past that of exemplars in the CoT prompt. This can be seen across both planning and non-planning instances (such as last letter concatenation), and provides strong evidence that CoT does not generally induce models to learn and apply the procedure demonstrated in the prompt. [1] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, Le QV, Zhou D. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems. 2022 Dec 6;35:24824-37.
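The one-digit arithmetic domain described in this global response is easy to instantiate. The sketch below is our own illustration (not the authors' code), using Python's `ast` module and reading `/` as exact integer division (an assumption, since all intermediate answers in the domain are single digits); it evaluates an expression while checking the single-digit invariant:

```python
import ast

def eval_one_digit(expr):
    """Evaluate a parenthesized arithmetic expression while checking the
    domain invariant: every intermediate answer stays a single digit."""
    ops = {ast.Add: lambda a, b: a + b, ast.Sub: lambda a, b: a - b,
           ast.Mult: lambda a, b: a * b, ast.Div: lambda a, b: a // b}

    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        val = ops[type(node.op)](walk(node.left), walk(node.right))
        assert 0 <= val <= 9, "intermediate answer left the one-digit domain"
        return val

    return walk(ast.parse(expr, mode="eval").body)

# ((3 + 4) - (2 * 3)) simplifies as 7 - 6 = 1, one digit at every step
assert eval_one_digit("((3 + 4) - (2 * 3))") == 1
```

Because every intermediate step is a single-digit operation the model can evaluate perfectly, any performance collapse on deeper expressions isolates the failure to compose steps rather than arithmetic ability.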
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Nearly Lossless Adaptive Bit Switching
Reject
Summary: The paper addresses challenges in model quantization for deep neural networks (DNNs), focusing on optimizing quantization-aware training (QAT) across multiple bit-widths with weight-sharing. To this end, this paper introduces a novel quantization method that exploits the highest integer precision to achieve nearly lossless bit-switching, reducing storage without relying on full precision. Key contributions include: (1) Adaptive Learning Rate Scaling: A technique that dynamically adjusts learning rates for different precisions to address competitive interference and inconsistent gradient issues during one-shot joint training. (2) Double Rounding: An extension of the one-step rounding quantizer in fixed-precision quantization to improve accuracy. Experimental results on the ImageNet-1K dataset show that the proposed methods surpass state-of-the-art approaches in both multi-precision and mixed-precision scenarios, achieving higher efficiency and accuracy. Strengths: - This submission is well written, with good figures in Sec. 4. - The authors conduct extensive experiments on multiple datasets and multiple networks. Weaknesses: - Some analysis is missing. For example, I'm wondering whether the second rounding leads to more quantization errors: the first rounding is used to produce INT8 weights and a second rounding is then performed to quantize to lower bit-widths, so quantizing twice may cause more clipping and rounding errors; some analysis could enhance the strength of the proposed methods. - Some designs should be further clarified, e.g., why is ALRS applied only to the scaling factors? Intuitively, the weights at small bit-widths suffer large gradient variance induced by the STE, and thus they should also benefit from using a smaller LR. - Fig. 1 is a bit confusing, some colored arrows are not well explained. 
- This work essentially lies in the area of mixed-precision quantization, so I think it is better to compare against more MPQ research (e.g., HAQ, DNAS, LIMPQ, etc.) in Sec. 4. Moreover, some recent papers on multi-bit-width quantization are missing, e.g., [1] (PTQ-based) and [2][3] (QAT-based), which could be included in the Related Work. [1] Xu, Ke, et al. "PTMQ: Post-training Multi-Bit Quantization of Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 14. 2024. [2] Tang, Chen, et al. "Retraining-free model quantization via one-shot weight-coupling learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [3] Zhong, Yunshan, et al. "MultiQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization." arXiv preprint arXiv:2305.08117 (2023). Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to the weaknesses. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please refer to the weaknesses. Overall, this paper currently needs more experiments and analysis to show that some of its designs are reasonable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your careful reading, helpful comments, and constructive suggestions, which have significantly improved the presentation of our manuscript. We are delighted with the identification of the novelty and effectiveness of the proposed method. We have carefully addressed all comments, and below are our responses: **W1**: Does the second rounding lead to more quantization errors? **A**: Thank you for your valuable comment. Firstly, we acknowledge that rounding twice introduces more quantization error than rounding once. However, compared to previous methods, the advantage of our method is that it only saves the quantization parameters (i.e., scales) for the highest bit-width (e.g., INT8). This allows us to save only the highest-bit weights and switch to lower bits while eliminating the multiplication and division operations brought by multiple scales during adaptive bit-switching, accelerating the model's inference. Additionally, model training can compensate for this loss, achieving nearly lossless bit-switching. The ultimate goal of this approach is to achieve faster inference with adaptive bit-switching on hardware. **W2**: Why is ALRS applied only to the scaling factors and not to the weights of small bit-widths? **A**: Thank you for your kind comments. We apologize for the lack of clarification in that statement. Firstly, ALRS is only applicable to the quantization scale factor because it is quite sensitive during quantization training and determines the final model's convergence performance. In our multi-precision or mixed-precision scenarios, weights are shared, and directly scaling their values would cause other precisions to fail to converge due to severe competition between different precisions. 
Additionally, applying the ALRS strategy to the quantization scales of different precisions (including small bit-widths) is indirectly equivalent to scaling the weights of that bit-width, because $weight_i = Round(weight_{i-1} / scale_i) * scale_i + \delta_w(x) \cdot lr$ and $scale_i = scale_{i-1} + \delta_s(x) \cdot ALRS$, where $\delta_w$ and $\delta_s$ denote the gradients of $weight_{i-1}$ and $scale_{i-1}$ respectively, $ALRS$ denotes the adaptive learning rate of the quantization scaling factors, and $lr$ denotes the learning rate of the weights. Even so, we also tried applying ALRS to the weights but did not achieve better performance. We hope this explanation clarifies the rationale behind the argument. **W3**: Fig. 1 is a bit confusing; some colored arrows are not well explained. **A**: Thank you for your kind feedback. In Figure 1: (1) The red arrows indicate the expression of different precisions obtained through double rounding; the green arrows then indicate the combined optimization using the ALRS technique to achieve multi-precision, followed by the HASB technique to further achieve a mixed-precision model. (2) The black arrows in the network structure indicate the data flow between networks. (3) The gray dashed lines represent the techniques required at different stages. We will further refine Figure 1 in the revised manuscript. **W4**: It is better to compare more MPQ (e.g., HAQ, DNAS, LIMPQ, etc.) research, and [1] (PTQ-based) and [2][3] (QAT-based) could be included in the Related Work. **A**: Thank you for your suggestions. We have carefully read the literature you provided: HAQ [4], DNAS [5], and LIMPQ [6]. We find that DNAS mainly designs a differentiable NAS to search for a hardware-aware efficient network without considering quantization. However, we will incorporate DNAS's efficient NAS techniques to further enhance the performance of our mixed-precision quantization.
Regarding HAQ, it uses reinforcement learning to integrate the hardware accelerator's feedback into the design loop for the quantization strategy. However, the learned networks are hardware-customized and do not follow specific bit-width standards, so a direct fair comparison with our method may not be possible. Nevertheless, we will attempt to combine HAQ's techniques in the future to further improve the performance of our method. For LIMPQ, please refer to our response to reviewer bT7w **W4**. Lastly, we promise to include the literature you suggested, [1] (PTQ-based) and [2][3] (QAT-based), in the related work section of the revised manuscript. [4] HAQ: Hardware-Aware Automated Quantization with Mixed Precision [5] FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search [6] Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance --- Rebuttal 2: Comment: Thanks for the response. I have carefully read the rebuttal and the other reviews, and the response addressed my concerns. I will increase my rating, and I hope the authors can incorporate these additional results, discussions, and references into the revision. BTW, DNAS refers to the paper entitled "Mixed precision quantization of convnets via differentiable neural architecture search". --- Rebuttal 3: Comment: Dear reviewer T2Zq, Thanks for your positive comments. We will add the additional results, discussions, and references in the final revision. We are pleased to answer your questions and concerns. We sincerely hope that you will consider raising the final score. In this case, we will be greatly inspired and will contribute to the community permanently. Thanks again for your time and efforts on this paper. Best regards Authors --- Rebuttal Comment 3.1: Comment: Dear Reviewer T2Zq, Sorry to bother you again. Thank you for your valuable reply.
We have carefully addressed the points raised and have incorporated the suggestions provided to further strengthen the manuscript. Notably, other reviewers have expressed positive evaluations, which we have reflected upon and integrated into the revisions. Given these enhancements and the overall positive reception, we kindly ask if you would consider re-evaluating your score, taking into account the improvements made. Your thoughtful reassessment would be greatly appreciated and would contribute significantly to the overall quality and impact of our work. Thank you once again for your time and consideration. Best regards, Authors
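The double-rounding scheme discussed in the rebuttal thread above (store only int8 weights and a single scale, then derive lower bit-widths by rounding again) can be sketched as follows. This is our own minimal illustration of the idea, not the authors' code; the function name and clipping conventions are assumptions.

```python
import numpy as np

def double_rounding(w_fp, scale8, target_bits):
    """Sketch of double rounding: quantize to int8 once, then derive
    lower-bit weights by rounding the stored int8 values again.
    Only the int8 weights and one shared scale need to be stored."""
    # First rounding: full-precision -> int8 with the shared scale.
    q8 = np.clip(np.round(w_fp / scale8), -128, 127)
    if target_bits == 8:
        return q8 * scale8
    # Second rounding: int8 -> lower bit-width (divide by 2^k and round).
    k = 8 - target_bits
    qmax = 2 ** (target_bits - 1) - 1
    q_low = np.clip(np.round(q8 / 2 ** k), -qmax - 1, qmax)
    # Dequantize with the shared scale, adjusted by the power-of-two factor.
    return q_low * (scale8 * 2 ** k)
```

Note how the low-bit scale is just the int8 scale times a power of two, which is why no extra quantization parameters need to be stored for the lower precisions.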
Summary: The paper proposes a QAT scheme to jointly optimize a single model with different precisions. The authors apply their scheme to various CNN-based models on the CIFAR-10 and ImageNet datasets. Strengths: 1. The paper is well-written 2. The ablation study is strong in my opinion, and they evaluate various aspects of their scheme Weaknesses: 1. I think the main limitation of the paper is the models and datasets. I believe that the study should be done on larger models (LLMs, for example) as an architecture goal. For example, the authors show that they do not save an FP32 master copy of the model in their scheme. However, ResNet-style models (or MobileNet) are easy to fit in even moderate GPUs, and I don't think the FP32 master copy is a big problem in that case (please correct me if I'm wrong). 2. I couldn't find source code to reproduce the results of the paper on my side. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The paper assumes a power-of-two relation between different precisions. Do you have any theoretical intuition for this assumption, or is this just because of fast HW implementation? 2. What is the cost of HMT in different networks? How can we compare it against the e2e runtime? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your careful reading, helpful comments, and constructive suggestions, which have significantly improved the presentation of our manuscript. We have carefully addressed all comments, and below are our responses: **W1**: The study should be done on larger models (LLMs), and the FP32 master copy is not a big problem on moderate GPUs. **A**: Thank you for your kind comments. Sorry for the confusion. Firstly, the advantage of saving only the highest integer bit-width (int8), as we propose, may lie in small terminal devices (e.g., smartphones, small drones, etc.). Hardware implementations are not limited to GPUs but may include CPUs or FPGAs with limited resources and communication bandwidth, as well as Arm processors. Secondly, we acknowledge your point that the technique we propose may have greater advantages in the field of LLMs. Due to limited time and resources, we conducted multi-precision experiments on a small LLM [1] without using ALRS and distillation; please refer to Table 1 of the attached PDF in the global response. Note that, besides not quantizing the embedding layer and head layer, we also do not quantize the SiLU activation in the MLP, because its sensitivity causes non-convergence, and we set batch size = 16. The training process is shown in Figure 1 (TinyLLaMA-1.1B) of the attached PDF. **W2**: Couldn't find source code to reproduce the results of the paper. **A**: Thank you for your kind feedback. Our open-source code can be accessed by clicking the last word "here" in the abstract, which is a hyperlink. As reviewer rFvS also points out, one of our advantages is that "Code is given." **Q1**: Is there any theoretical intuition for assuming a power-of-two relation between different precisions, or is this just because of fast HW implementation? **A**: Thanks for the nice question. We apologize for the lack of clarity in that statement.
The “power-of-two relation” arises because we use the same quantization parameters, i.e., scale, for models of different precisions when learning the weights. Switching from a higher bit-width to a lower bit-width is equivalent to clipping the lower bits of the weight values, retaining the higher bits. For example, the first four bits of the 8-bit weight are the same as the 4-bit weight. This is also equivalent to multiplying the scale value by 2 to the power of k, where k is the difference between the two bit-widths. Of course, in conventional multi-precision training where quantization parameters are not shared among different precisions, there is no “power-of-two relation” in their scales. The ultimate goal of this approach is to achieve faster inference with adaptive bit-switching on hardware. We hope this explanation clarifies the rationale behind the argument. **Q2**: What is the cost HMT in different networks? How can we compare it against the e2e runtime? **A**: Thank you for your kind question. The cost of computing the HMT (Hessian Matrix Trace) in different networks takes a few minutes (e.g., approximately 2 minutes for ResNet18 and 5 minutes for ResNet50). Since this computation is done offline and then directly used for end-to-end inference, this part of the computation can be considered negligible. [1]TinyLlama: An Open-Source Small Language Model. --- Rebuttal 2: Title: Reply Comment: Thank you so much for answering my questions and providing more results. For some reason, I couldn't find the code :-) I will increase my score. --- Rebuttal 3: Comment: Dear reviewer Aa14, Thanks for your positive comments. We are sorry that you can't find the code we provided for some reason. We will forward an anonymous code link to you via AC, and we promise to make the code publicly available on GitHub later. We are pleased to answer your questions and concerns. We will add the relevant discussions in the final version. 
We sincerely hope that you will consider raising the final score. In this case, we will be greatly inspired and will contribute to the community permanently. Thanks again for your time and efforts on this paper. Best regards Authors --- Rebuttal Comment 3.1: Comment: Dear Reviewer Aa14, Sorry to bother you again. Thank you for your valuable reply. We have carefully addressed the points raised and have incorporated the suggestions provided to further strengthen the manuscript. Notably, other reviewers have expressed positive evaluations, which we have reflected upon and integrated into the revisions. Given these enhancements and the overall positive reception, we kindly ask if you would consider re-evaluating your score, taking into account the improvements made. Your thoughtful reassessment would be greatly appreciated and would contribute significantly to the overall quality and impact of our work. Thank you once again for your time and consideration. Best regards, Authors
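The Hessian Matrix Trace (HMT) cost discussed in Q2 of the thread above is typically kept low because the trace can be estimated from Hessian-vector products alone, e.g. with Hutchinson's stochastic estimator. The sketch below is our own illustration of that standard technique, not the authors' implementation; the toy explicit Hessian is only there so the true trace is known.

```python
import numpy as np

rng = np.random.default_rng(0)

def hutchinson_trace(hvp, dim, n_samples=1000):
    """Estimate trace(H) from Hessian-vector products only, via
    Hutchinson's estimator: E[v^T H v] = trace(H) for Rademacher v."""
    est = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=dim)
        est += v @ hvp(v)
    return est / n_samples

# Toy example with an explicit symmetric H, so the true trace (3.0) is known.
# In practice hvp would come from autodiff, without materializing H.
H = np.array([[2.0, 0.3], [0.3, 1.0]])
approx = hutchinson_trace(lambda v: H @ v, dim=2)
```

Because only matrix-vector products are needed, the offline cost scales with the number of probe vectors rather than the parameter count squared, consistent with the "a few minutes per network" figure quoted in the rebuttal.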
Summary: This paper discusses advanced methods in multi-bit model quantization. Specifically, this paper proposes a method for one-shot joint training of multiple precisions. To this end, the authors introduce a double-rounding quantizer that leverages the highest integer precision to achieve nearly lossless bit-switching while reducing storage requirements. Moreover, they also propose an Adaptive Learning Rate Scaling technique that adjusts learning rates dynamically for different precisions. The two proposed techniques mitigate the competitive interference between bit-widths caused by inconsistent gradients of different precisions during biased gradient estimation. They also extend their Double Rounding method to support one-shot mixed-precision training and develop a Hessian-aware Bit-width sampling strategy. Experimental results on the ImageNet-1K classification task show that their methods outperform state-of-the-art one-shot joint QAT in both multi-precision and mixed-precision scenarios. Strengths: - Eliminating the costs of retraining for mixed-precision quantization is a meaningful and challenging topic. - The end-to-end experiments are sufficient, and the presentation is good. Weaknesses: - More uniqueness analysis is needed. The use of Hessian information seems a bit trivial: each layer's Hessian is just compared with the averaged Hessian trace. Firstly, as shown in recent zero-cost NAS research [1], architectural proxies become less effective as training goes on, so I'm not sure the Hessian information obtained on the initial full-precision model will remain useful as the quantization-aware training continues. Moreover, the sampling probability is modified with a simple ascending heuristic, which is not Hessian-aware. - Also applies here: the design of the double-rounding quantizer is similar to Bit-Mixer, Adabits, and ABN. Specifically, ABN also uses - ALRS needs further ablations.
In ALRS, the authors use a fixed scaling ratio for each bit-width, e.g., 8-bit is 1, 6-bit is 0.1, and 4-bit is 0.01; the choice of these scaling factors still requires more ablation studies and discussion. - More comparisons needed. Since this paper adopts an ILP-based search algorithm to find optimal subnets, it is better to compare with ILP-based mixed-precision quantization papers, e.g., [2] and [3]. [1] A Deeper Look at Zero-Cost Proxies for Lightweight NAS [2] Mixed-precision neural network quantization via learned layer-wise importance, ECCV 2022. [3] Hawq-v2: Hessian aware trace-weighted quantization of neural networks, NIPS 2020. Technical Quality: 3 Clarity: 3 Questions for Authors: - How do the authors perform KD for the proposed method? Is an external teacher used, or is it only distilled from the highest precision with in-place distillation? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your careful reading, helpful comments, and constructive suggestions, which have significantly improved the presentation of our manuscript. We are delighted with the identification of the novelty and effectiveness of the proposed method. We have carefully addressed all comments, and below are our responses: **W1**: Using Hessian information seems a bit trivial, and the sampling probability is modified with a simple ascending heuristic, which is not Hessian-aware. **A**: Thank you for your valuable comment. Firstly, we acknowledge that this method still has room for improvement, and we will refine it in the future by incorporating zero-cost NAS research or more advanced algorithms. However, the sampling probability is indeed modified using a simple heuristic approach based on the sensitivity (Hessian trace) of different layers during the training phase. During the inference phase, when using ILP, the Hessian information is also used as a constraint factor to align with the training phase, thereby avoiding the need for retraining, which can arguably be considered Hessian-aware. We hope this explanation clarifies the rationale behind the argument. **W2**: The design of the double-rounding quantizer is similar to Bit-Mixer, Adabits, and ABN. Specifically, ABN also uses. **A**: Thank you for your thoughtful comment. Similar to Bit-Mixer, Adabits, and ABN, our method learns different bit quantization parameters through shared weights, but the specific implementations are different. Firstly, Adabits and Bit-Mixer primarily achieve bit-switching through the Floor operation, while our method switches using two rounding operations. Secondly, ABN's formula appears similar to our double rounding, but the key difference is that in ABN different precisions use different quantization parameters. In contrast, our double rounding only updates the quantization parameters of the highest INT weight during training.
Finally, in bit-switching scenarios, storing only one shared quantization parameter with our double rounding is hardware-friendly and reduces the computation overhead of the scale (a floating-point value). **W3**: The scaling factors of ALRS need further ablations. **A**: Thank you for your valuable suggestion. We have conducted more ablation experiments on the scaling factors, and the results can be seen in Table 4 of the attached PDF in the global response. It can be observed that the performance differences of the model under different scaling factor settings are not significant, further proving the effectiveness of ALRS. **W4**: It is better to compare with ILP-based mixed-precision quantization papers, e.g., [2] and [3]. **A**: Thanks for your kind suggestion. We have carefully read the literature [2] and [3] you provided. However, we find that only the results in Table 2 of [2] regarding ResNet18's 3MP (Top-1: 69.7) can be fairly compared with ResNet18's 3MP (Top-1: 69.92) in Table 3 of our main text. The other configurations refer either to ResNet50's w-bits=3MP, a-bits=4MP in [2] or to ResNet50's w-bits=2MP, a-bits=4MP in [3]. We will further attempt similar bit-width configurations for a fair comparison in the revised version of the paper. **Q1**: How do you perform KD for the proposed method? **A**: Thank you for your valuable question. We drew inspiration from the progressive in-place distillation of [4], i.e., self-distillation, but apply it to the multi-precision quantization method in this paper. Specifically, higher bit-widths distill to their neighboring lower bit-widths: 8-bit distills to 6-bit, 6-bit distills to 4-bit, and 4-bit distills to 2-bit. [4] Self-Knowledge Distillation with Progressive Refinement of Targets. --- Rebuttal Comment 1.1: Comment: Thanks for addressing the questions and comments in the previous round. I have also read the other reviewers' comments and remain positive about my rating.
--- Rebuttal 2: Comment: Dear Reviewer bT7w, Thanks for your positive comments. We are pleased to address your questions and concerns. We will add the relevant discussions in the final version. We sincerely hope that you will consider raising the final score. In this case, we will be greatly inspired and will contribute to the community permanently. Thanks again for your time and efforts on this paper. Best regards, Authors --- Rebuttal Comment 2.1: Comment: Dear Reviewer bT7w, Sorry to bother you again. Thank you for your valuable reply. We have carefully addressed the points raised and have incorporated the suggestions provided to further strengthen the manuscript. Notably, other reviewers have expressed positive evaluations, which we have reflected upon and integrated into the revisions. Given these enhancements and the overall positive reception, we kindly ask if you would consider re-evaluating your score, taking into account the improvements made. Your thoughtful reassessment would be greatly appreciated and would contribute significantly to the overall quality and impact of our work. Thank you once again for your time and consideration. Best regards, Authors
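The Hessian-aware sampling heuristic debated in W1 of the thread above (layers whose Hessian trace exceeds the average are biased toward higher bit-widths via an ascending weighting) can be sketched roughly as below. This is our own reading of the rebuttal's description, not the paper's exact algorithm; the function name and the specific weights are assumptions.

```python
import random

def sample_bit_widths(hessian_traces, candidate_bits=(8, 6, 4, 2)):
    """Sketch of a Hessian-aware stochastic bit-width sampler:
    layers whose Hessian trace is above the average are biased
    toward higher precisions; insensitive layers sample uniformly."""
    avg = sum(hessian_traces) / len(hessian_traces)
    bits = []
    for trace in hessian_traces:
        if trace > avg:
            # Sensitive layer: ascending weights favour high precision.
            weights = [4, 3, 2, 1]
        else:
            weights = [1, 1, 1, 1]  # uniform for insensitive layers
        bits.append(random.choices(candidate_bits, weights=weights)[0])
    return bits
```

During one-shot mixed-precision training, such a per-layer configuration would be resampled each iteration, so sensitive layers spend more updates at high precision on average.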
Summary: The authors propose a bit-switching quantization method using Double Rounding, which applies rounding twice to achieve nearly lossless switching without storing a full-precision model. They also introduce Adaptive Learning Rate Scaling (ALRS) to adjust learning rates dynamically across precisions, ensuring consistent quantization updates. Additionally, they develop Hessian-Aware Stochastic Bit-switching (HASB) for one-shot mixed-precision training, optimizing bit-width distribution based on layer sensitivity, thus eliminating retraining stages. Strengths: 1. The ALRS heuristic can help practitioners who wish to train multi-precision models 2. The authors made extensive experiments on vision models and compare to previous methods 3. Most sections are well written 4. Code is given Weaknesses: **Novelty** is limited, and I am not highly convinced that the problem is important. 1. The main contribution is to save not the 32-bit weights and separate quantization parameters but only the highest bit-width, using a pretty straightforward idea of double rounding during training 2. ALRS is based on an observation and a heuristic to fix it. It is nice and helps when trying to use 2 bits as well. Yet, I am not sure it is important for methods that don't use the double rounding. **Motivation** 3. Since we usually don't switch models based on data, I am not sure why this is important. Do we really have edge devices that switch model precision on a daily basis and thus need to store the 32-bit model in small local memory? Can you elaborate on why and where multi-precision is really important? 4. No results on more recent models (LLMs) Technical Quality: 3 Clarity: 3 Questions for Authors: I don't understand why you claim the ALRS method was inspired by LARS. The only similarity seems to be the name. Can you explain the connection? Can you provide a scenario where your method is particularly important?
You state that "if different precision losses separately compute gradients and directly update shared parameters at each forward process, it attains better accuracy when combined with our ALRS training strategy." However, this involves updating the gradients four times more frequently, which is inefficient (4× backward passes and optimizer steps). This seems equivalent to small-batch versus large-batch training. Have you considered simply increasing the learning rate with the conventional multi-precision training approach? It might achieve the same results. In Table 4, have you run the same number of forward/backward passes with the {8, 6, 4, 2} and {4, 3, 2} bit-widths? Since you have only three bit-widths in the latter, you might need to run 4/3× more iterations to ensure the total number of updates is the same. Have you tried that? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors partially discuss limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your careful reading, helpful comments, and constructive suggestions, which have significantly improved the presentation of our manuscript. We have carefully addressed all comments, and below are our responses: **W1**: Using a pretty straightforward idea of double rounding during training. **A**: Thank you for your comment. Although the concept of double rounding appears simple, learning shared quantization parameters that maintain the characteristics of low integer precision makes it possible to switch to other lower bits almost losslessly. Our study verifies both the feasibility and efficiency of this method. **W2**: Is ALRS important for methods that don't use double rounding? **A**: Thanks for your constructive comment. In fact, ALRS is a general technique for multi-precision training. We have conducted ablation experiments on other methods; please refer to Table 2 of the attached PDF in the global response. The results further validate the effectiveness of our proposed ALRS. **W3**: Are there edge devices that switch model precision daily? Why and where is multi-precision important? **A**: Thank you for your valuable comment. Unfortunately, it seems that no specific edge devices currently implement adaptive bit-switching during the inference phase. However, existing hardware already has the capability for such bit-switching (for example, the Nvidia A100 supports INT8, INT4, and Binary [2], and small AI chips support INT4 and INT2 [3]). This potential has not yet been fully developed, but we believe this technology will become widespread in the future for the following reasons: - **Model Compression**: Different terminal devices provide varying storage and computational resources. To reduce the cost of repeated quantization training and simultaneously provide multiple versions of models with different sizes [1], multi-precision is an effective means to solve this problem.
- **Scenario-Based Precision Switching**: Depending on the scenario, model precision can be switched in real time. For instance, in autonomous driving, complex road scenarios require high precision to ensure real-time accuracy, while simple road conditions allow switching to lower precision to save energy. - **Large Language Models (LLMs)**: For complex logical tasks, a high-precision model may be needed to provide more reliable answers (similar to GPT4-pro), whereas for simple conversational tasks, a low-precision model can provide reliable answers (similar to GPT4o or GPT4o-mini) to accelerate inference. **W4**: No results on more recent models (LLMs)? **A**: Thank you for your nice comment. Due to limited time and resources, we conducted multi-precision experiments on a small LLM [4] without using ALRS and distillation; please refer to Table 1 of the attached PDF. Note that, besides not quantizing the embedding layer and head layer, we also do not quantize the SiLU activation in the MLP, because its sensitivity causes non-convergence, and we set batch size = 16. The training process is shown in Figure 1 (TinyLLaMA-1.1B) of the attached PDF. **Q1**: Explain the connection between the ALRS method and LARS. **A**: Thank you for your comment. LARS first uses a separate learning rate for each layer instead of each weight. Secondly, the magnitude of updates is controlled according to the weight norm to better manage the training speed. Our ALRS borrows LARS's weight-norm update strategy and modifies it into a gradient-norm update strategy to adapt the learning rates of different precisions. We hope this explanation clarifies the rationale behind the argument. **Q2**: Provide a scenario where your method is particularly important. **A**: Please refer to the answer to W3. **Q3**: Simply increasing the learning rate with the conventional multi-precision training approach? **A**: Thanks for your kind comment.
Initially, we attempted conventional multi-precision training by simply increasing the learning rate, but this led to non-convergence and even training collapse. We believe this is due to the excessive sensitivity of the quantization scale and the severe competition between different precisions. **Q4**: Need to run 4/3× more iterations to ensure the same total number of updates. **A**: Thanks for the nice suggestion. We have conducted experiments on the {8, 6, 4, 2}-bit and {4, 3, 2}-bit settings using more epochs. Please refer to Table 3 of the attached PDF, where we find a slight improvement. [1] Once-for-all: Train one network and specialize it for efficient deployment. [2] NVIDIA A100 Tensor Core GPU Architecture. [3] A 7-nm Four-Core Mixed-Precision AI Chip With 26.2-TFLOPS Hybrid-FP8 Training, 104.9-TOPS INT4 Inference, and Workload-Aware Throttling. [4] TinyLlama: An Open-Source Small Language Model. --- Rebuttal Comment 1.1: Title: Answer to the authors' rebuttal Comment: I would like to thank the authors for their detailed responses. The authors addressed my questions effectively. Due to their additional experiments and the fact that ALRS has been shown to improve accuracy for other multi-precision methods, I have raised my score to 5. --- Rebuttal 2: Comment: Dear reviewer rFvS, Thanks for your positive comments. We are pleased to answer your questions and concerns. We will add the relevant discussions in the final version. We sincerely hope that you will consider raising the final score. In this case, we will be greatly inspired and will contribute to the community permanently. Thanks again for your time and efforts on this paper. Best regards Authors --- Rebuttal Comment 2.1: Comment: Dear Reviewer rFvS, Sorry to bother you again. Thank you for your valuable reply. We have carefully addressed the points raised and have incorporated the suggestions provided to further strengthen the manuscript.
Notably, other reviewers have expressed positive evaluations, which we have reflected upon and integrated into the revisions. Given these enhancements and the overall positive reception, we kindly ask if you would consider re-evaluating your score, taking into account the improvements made. Your thoughtful reassessment would be greatly appreciated and would contribute significantly to the overall quality and impact of our work. Thank you once again for your time and consideration. Best regards, Authors
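The ALRS answer in the thread above describes replacing LARS's weight-norm rule with a gradient-norm rule to adapt the scale's learning rate across precisions. The sketch below is our own minimal reading of that description, not the paper's exact formula; the normalization target and function name are assumptions.

```python
import numpy as np

def alrs_learning_rate(base_lr, scale_grad, eps=1e-8, target_norm=1.0):
    """Sketch of a gradient-norm-based learning-rate scaling in the
    spirit of ALRS: the quantization scale's learning rate is divided
    by the gradient norm so that update magnitudes stay comparable
    across bit-widths, whose scale gradients differ greatly in size."""
    g_norm = np.linalg.norm(scale_grad)
    return base_lr * target_norm / (g_norm + eps)
```

Compare with LARS, where the per-layer rate is proportional to ||w|| / ||g||; here only the gradient norm is used, since in multi-precision training the shared weights should not be rescaled per precision.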
Rebuttal 1: Rebuttal: We extend our sincerest gratitude to the AC and reviewers for their constructive comments, which greatly improve this work! Pdf: /pdf/0e169581017f64d687bfabc48e0c1386332dca8f.pdf
NeurIPS_2024_submissions_huggingface
2024
Global Lyapunov functions: a long-standing open problem in mathematics, with symbolic transformers
Accept (poster)
Summary: This paper presents a method using sequence-to-sequence transformers to find Lyapunov functions for dynamical systems, mainly polynomial systems. Even though it illustrates some new advances in LLMs in solving mathematical problems, the reviewer doubts the mechanism and the necessity of using large models such as transformers to learn Lyapunov functions for simple polynomial systems, as well as the fairness of the comparisons made in the paper. This work could be a very interesting and good workshop paper to show the advances of LLMs, but the reviewer doesn't think there are significant academic contributions to the math community. Strengths: This paper finds a way to explore or show the capacity of transformers, or LLMs in general, to solve existing mathematical problems, that is, finding Lyapunov functions for nonlinear dynamical systems. Also, the backward data generation method is interesting. Weaknesses: Even though the authors tried to make the story interesting and appealing, it seems that a deeper understanding of Lyapunov stability analysis is needed when improving this paper in the future. For example, Lyapunov functions are useful for providing stability guarantees for equilibrium points of dynamical systems, but there are no such equilibrium points in the three-body problem (line 29). Please see the introduction of Chapter 3, Lyapunov stability, in 'Nonlinear Systems' by Khalil, H. K., one of the references of this work. Moreover, it seems that the authors didn't fully understand the stability concepts when writing this paper. In line 27, they stated that the goal is "discovering the Lyapunov functions that control the global stability of dynamical systems", while in Def. 3.2, they had the definition of 'stable'. Stable and globally stable are different, which should be made precise in an academic paper. Also, that's different from asymptotically stable. Meanwhile, 'control' is definitely not the correct wording here.
Last but not least, finding a Lyapunov function (candidate) for a nonlinear system is not a very hard problem now, even for high-dimensional (polynomial) systems, but verifying that it is indeed a valid Lyapunov function satisfying the Lyapunov conditions is the bottleneck, which is difficult to address for non-polynomial or high-dimensional systems. As shown in the paper ['Neural Lyapunov Control'](https://papers.nips.cc/paper_files/paper/2019/hash/2647c1dba23bc0e0f9cdf75339e120d2-Abstract.html), one can easily use a one-hidden-layer feedforward neural network with 6 hidden neurons to learn a Lyapunov function for many 2- or higher-dimensional nonlinear systems, which are not necessarily polynomials. That said, the results shown in this work are not impressive at all, and the Lyapunov function candidates found using transformers still lack correctness, which needs to be verified against the Lyapunov conditions, given a new nonlinear system. Technical Quality: 2 Clarity: 2 Questions for Authors: There are quite a few interesting questions that can be asked: 1. It's pretty shocking and interesting to see that the authors called SOSTOOLS, which was proposed and developed more than two decades ago. Similar to the basic Lyapunov analysis explained above, it seems that the authors were unclear about the latest development of toolboxes or numerical methods for finding Lyapunov functions for nonlinear systems. Even in this year's HSCC conference, there are two tool papers on finding Lyapunov functions for general nonlinear systems: [FOSSIL 2.0](https://dl.acm.org/doi/10.1145/3641513.3651398) and [LyZNet](https://dl.acm.org/doi/10.1145/3641513.3650134). Both of them can handle non-polynomial cases. The authors did a comparison with FOSSIL, but they claimed that it achieved <5% accuracy. I don't think they gave the toolbox a fair enough try. 2.
With bullet point #1 in mind, this transformer method needs a very long training time and an extremely large amount of data. However, with the aforementioned two toolboxes, for some systems, including the simple polynomial systems shown in Appendix C, valid Lyapunov functions can be found within a few seconds, and they are formally verified with SMT solvers. The same holds for SOSTOOLS: the obtained Lyapunov functions are guaranteed to be valid. On the contrary, the performance of the proposed method on some datasets is not satisfactory, such as in Table 2 and Table 5. 3. Whether in the control and systems community or in the applied mathematics community, people barely solve for Lyapunov functions by hand, just as they barely add two very large numbers by hand. The comparison with so-called 'human mathematicians' is not reasonable. 4. In line 189, '$e^i$ are diverse enough to span all of $\mathcal{H}_x$'. When is it 'diverse' enough to span the space? 5. It's not surprising to see that after adding some data from forward datasets to the backward ones, the performance is much better. It's very likely that there are some issues with the backward datasets, while the forward datasets are guaranteed to be correct with SOS. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: The fairness of the comparisons and the correctness of the learned Lyapunov functions, as discussed above. Also, even though the authors claimed that their method worked well with non-polynomial systems, the comparisons and examples are mainly for polynomial systems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation of the paper. **there are no equilibrium points in the three-body problem** We agree; we mention this problem to emphasize the importance of stability in general. That said, there are periodic orbits which, up to a change of variables, are equilibrium points of a non-autonomous system. **Stable and globally stable are different, which should be precise as an academic paper.** “Globally stable” meant “has a global Lyapunov function”. This is indeed different from asymptotic stability (LAS or GAS) or exponential stability, for which we make no claim, but it coincides with GAS when the largest invariant set of $\{x\in\mathbb{R}^{n} \mid f(x)\cdot \nabla V(x) = 0\}$ is the equilibrium (LaSalle). The Lyapunov functions we generate are proper; we will add this to our definition and update the paper to achieve a level of rigor comparable to what we submit to math journals. **finding a Lyapunov function for a nonlinear system is not a very hard problem** We disagree, at least for non-polynomial systems: finding a global Lyapunov function for some particular examples may be easy, but in general it is a very hard problem and still an active area of research in mathematics (see for instance [Adda & Bichara, IJPAM, 2012] or [Agbo Bidi, Almeida, Coron, 2024]). **As shown in Neural Lyapunov Control, one can easily use a one-hidden feedforward neural network [...] to learn a Lyapunov function.** Neural Lyapunov Control works well on its showcase examples, but often does not converge for the systems we consider. It achieves 2-22% accuracy on polynomial test sets, and <1% on non-polynomial ones. This may be because it was designed for stabilization problems, and performs worse when there is no control feedback. **the Lyapunov function candidates they found using transformers still lack correctness** All the Lyapunov functions we find for polynomial systems are guaranteed true using an SOS checker.
For non-polynomial systems, following your suggestion, we now use an SMT solver to theoretically ensure correctness (with no change in performance). **the authors called SOSTOOLS which was proposed more than two decades ago** SOSTOOLS is still one of the most widely recognized toolboxes to find Lyapunov functions. While it is two decades old, the underlying SDP solvers, at the core of the computation, have evolved, and we are using up-to-date SDP solvers. **The authors were unclear about the latest development [...] for finding Lyapunov functions [...] FOSSIL 2.0 and LyZNet. [...] I don't think they give the toolbox a fair enough try.** We did provide a comparison to FOSSIL. Following your comments, we experimented extensively with the very recent FOSSIL 2.0 and LyZNet. We were able to reproduce their results, and we confirm they have very low accuracy on our polynomial and non-polynomial test sets (see the rebuttal PDF). We will share our code and datasets. We believe the low accuracy of these models on our test sets is due to the fact that they address a different problem: discovering **local or semi-global** Lyapunov functions, while we target global Lyapunov functions. Overall, we believe our work is related, but complementary. We will update the discussion accordingly. **this transformer method needs a very long training time and an extremely large amount of data.** Our model needs to be trained once from *synthetic* data, before it can be used at very little cost for any system, a common paradigm in machine learning. In contrast, the smaller models in FOSSIL and LyZNet must be retrained for every system. **with the two toolboxes [...] Lyapunov functions [...] are formally verified with SMT solvers. The same for SOSTOOLS [...]
the performances of the proposed method on some datasets are not satisfactory, such as Table 2 and Table 5.** Our Lyapunov functions are now also formally verified with an SMT solver (this makes little difference in the results, see the author rebuttal). On Tables 2 and 5 (out-of-distribution and discovery) our methods perform better than SOSTOOLS and the toolboxes. **people barely solve Lyapunov functions by hand. The comparison with so-called 'human mathematicians' is not reasonable** When classical Lyapunov functions and classical techniques (LMI, BMI, SOS, etc.) don’t work, mathematicians (in our community at least) do need to find the form of the Lyapunov functions by hand (see for instance [Adda & Bichara, IJPAM, 2012], [Friedrich et al, SICON, 2023] for ODE systems, or [Hayat, Shang, JMPA, 2021] for an example, although on PDE). Our models may help “guess” candidate Lyapunov functions in these situations. Since finding Lyapunov functions is often a textbook exercise in control theory courses, comparing our models with master students in such courses seemed to provide a fair evaluation of the hardness of the problems. **In line 189 [...] when is it 'diverse' enough to span the space?** We mean that $e_{i}(x)$, $1 \leq i \leq n$, should be a generating family of $\mathcal{H}_{x}$. Thanks for the comment, we’ll clarify. **It's not surprising to see that after adding some data from forward datasets to the backward ones, the performances are much better** What is surprising is the proportion involved: performance on both distributions improves after adding 50 forward samples to 1 million backward ones (0.05%). This is an important result, since OOD is a difficult problem in machine learning. **It's very likely that there are some issues with the backward datasets** The backward datasets are guaranteed correct by design: the steps of the process ensure the conditions from Def. 3.2 hold (see section 5.1 and Appendix A.2).
**the comparisons and examples are mainly for polynomial systems** Our framework can handle both polynomial and non-polynomial systems. We provide more comparisons on polynomial systems because it is a strong baseline with a long history. We will add more comparisons for non-polynomial systems. --- Rebuttal Comment 1.1: Title: Post-rebuttal Comment: Thanks for the responses and the newly added experiments. After reviewing them, I remain unconvinced by the comparisons and the supporting arguments presented. Additionally, the references provided in the responses are difficult to locate and follow in their current format. I personally still do not see significant contributions to the community (at least in my community). - As I stated in my previous comments, finding Lyapunov function candidates for nonlinear systems is not hard. We can use neural networks or other methods to find one (candidate) easily. The critical issue lies in guaranteeing the correctness of these functions. The proposed method cannot address this problem. As the authors themselves stated, the outputs of the transformer model still require formal verification via SOS or SMT solvers if one needs to ensure their correctness. In addition, for the neural network method in the “Neural Lyapunov Control” paper, I believe that modifying the neural network structure slightly and removing the feedback control component u should yield better accuracy. The claim that the accuracy should be as low as suggested is questionable, especially since FOSSIL is built on this framework, and the case without control input should be easier to solve for low-dimensional systems. - I also strongly doubt the claim that it can be used to find a Lyapunov function "for any system". The results are unlikely to generalize to arbitrary nonlinear systems. The systems most likely must fall within the categories represented in the training dataset, given the generalization limitations of neural networks.
Moreover, I struggle to envision a scenario where generating a Lyapunov function in a rapid, "plug-and-play" manner would be necessary. - Furthermore, the authors argued that the proposed method has better performance in finding global Lyapunov functions. This raises the question of why finding a global Lyapunov function is more important (than the other two). In which practical scenarios, such as robotics or power systems, do we need the global Lyapunov function? It’s better to clarify the significance of this contribution further. - While it is true that exercises in control theory courses or publications often involve finding Lyapunov functions by hand, all of them have a particular structure and are definitely not general nonlinear systems. Given that this paper aims to solve a mathematical problem using machine learning methods, it is crucial that it be written with rigor, as the authors themselves acknowledge. In my opinion, the current version is not ready for publication, and I would like to keep my score unchanged. --- Reply to Comment 1.1.1: Title: Response Comment: Thank you for your response. We believe most of our disagreement stems from different mathematical communities and motivations: We are focusing on training transformers to find explicit symbolic global Lyapunov functions for globally stable systems. Except for some polynomial systems, this is an open problem in the area of dynamical systems. No ML methods for this exist. In fact, addressing such an open mathematical problem would be a first for symbolic transformers, which so far have only been applied to problems with known solutions (integration, etc.). We are primarily motivated by the (pure) math problem in itself, rather than its engineering applications. Your focus seems to be in a related field: finding (implicit) local or semi-global Lyapunov functions, with a motivation coming from engineering applications. For this, ML solutions, such as FOSSIL, do exist.
We make no claim as to the relative scientific importance of the two problems. However, these are very different problems, as our experiments with FOSSIL and other neural solvers suggest: these tools, designed for semi-global Lyapunov functions, do not perform well on the global problem. To make this clearer, we propose to add the word "global” to the title of our paper, and to add a clarification about the two problems in the introduction and the related work. We believe that judging the novelty of our work, targeted at a specific problem of pure mathematics, by comparing it to a related but different mathematical problem with a focus on engineering applications is not appropriate. Concerning the references, because of the character limit of the rebuttal we couldn’t add the full links, but only the authors' names, the journal when appropriate, and the year. We add the full references below for convenience: - Adda, P., & Bichara, D. (2012). Global stability for SIR and SIRS models with differential mortality. International Journal of Pure and Applied Mathematics, 80(3), 425-433. [Link](https://arxiv.org/pdf/1112.2662) - Bidi, K. A., Almeida, L., & Coron, J. M. (2023). Global stabilization of sterile insect technique model by feedback laws. arXiv preprint arXiv:2307.00846. [Link](https://arxiv.org/pdf/2307.00846) - Friedrich, J., Göttlich, S., & Herty, M. (2023). Lyapunov stabilization for nonlocal traffic flow models. SIAM Journal on Control and Optimization, 61(5), 2849-2875. [Link](https://epubs.siam.org/doi/full/10.1137/22M152181X) - Hayat, A., & Shang, P. (2021). Exponential stability of density-velocity systems with boundary conditions and source term for the H2 norm. Journal de mathématiques pures et appliquées, 153, 187-212. [Link](https://doi.org/10.1016/j.matpur.2021.07.001)
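The verification step debated throughout this thread (checking a candidate against the Lyapunov conditions with SOS or SMT tools) can be illustrated with a minimal symbolic check. This is a hedged sketch using `sympy` on a toy 2-D polynomial system of our own choosing; neither the system nor the candidate comes from the paper:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)

# Toy polynomial system (illustrative, not taken from the paper):
#   x1' = -x1 + x2,   x2' = -x1 - x2**3
f = sp.Matrix([-x1 + x2, -x1 - x2**3])

# Candidate Lyapunov function to verify.
V = x1**2 + x2**2

# Lie derivative of V along f: V_dot = grad(V) . f
grad_V = sp.Matrix([sp.diff(V, v) for v in (x1, x2)])
V_dot = sp.expand((grad_V.T * f)[0, 0])

# Conditions in the style of Def. 3.2, checked where sympy can decide them:
# V(0) = 0, V > 0 away from the origin, and V_dot <= 0 everywhere.
# Here V_dot expands to -2*x1**2 - 2*x2**4, non-positive for all (x1, x2).
assert V.subs({x1: 0, x2: 0}) == 0
assert sp.simplify(V_dot + 2 * x1**2 + 2 * x2**4) == 0
```

For harder candidates, where the sign of `V_dot` is not evident by inspection, this is exactly where an SOS program (as in SOSTOOLS) or an SMT solver takes over.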
Summary: In this paper, the authors propose a new method of generating synthetic training samples from random solutions, i.e. Lyapunov functions for dynamical systems in the form of ordinary differential equations. They demonstrate that a sequence-to-sequence transformer trained on such data can produce Lyapunov functions better than both existing algorithmic solvers and humans. More specifically, for polynomial systems, the method proposed in the paper can find Lyapunov functions for 10.1% of randomly generated polynomial systems where the state-of-the-art algorithm can only find 0.7%; furthermore, the method can produce a Lyapunov function for 12.7% of randomly generated non-polynomial systems. Strengths: The Lyapunov function problem is an important problem. The performance of the method proposed in the paper is impressive and promising. Weaknesses: The novelty of the data generating methods, those described in Sec. 5, is not clearly stated and explained. Currently, it reads as if the results are obtained accidentally. Technical Quality: 3 Clarity: 3 Questions for Authors: It is not clear how the forward and backward generation methods are different from what was proposed in the past, for example those in Lample & Charton (2019), Prajna et al., 2002, and Prajna et al., 2005. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As I have pointed out, the paper does not clearly describe the difference between what it proposes and some existing methods, thus making it quite difficult to see the main reasons for the rather remarkable experimental results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your interest and your in-depth reading of the paper. **The novelty of the data generating methods, those described in Sec. 5, is not clearly stated and explained. Currently, it reads as if the results are obtained accidentally** We agree, thank you for raising this. We provide a rationale for the generation method we use in the author rebuttal (section B). We will update the paper accordingly. **It is not clear how the forward and backward generation methods are different from what was proposed in the past, for example those in Lample & Charton (2019), Prajna et al., 2002, and Prajna et al., 2005.** The relation to Lample and Charton, and the novelty of our approach, are detailed in section A of the authors’ rebuttal and will be made precise in the revised version. Concerning Prajna et al., 2002, and Prajna et al., 2005, our approach is very different: because Prajna et al. design a deterministic mathematical algorithm, there is no notion of data generation; their algorithm is thought of as a tool for mathematicians and engineers, given an input system. We use the algorithm they designed (and subsequent more recent versions using the same approach) as a rejector for rejection sampling and as a verifier of the Lyapunov functions provided by the model in polynomial cases. **As I have pointed out, the paper does not clearly describe the difference between what it proposes and some existing methods, thus making it quite difficult to see the main reasons for the rather remarkable experimental results.** Thank you for pointing this out, we will clarify this in the revised paper. We believe that the main differences with existing methods are the following: Differences with Lample and Charton, 2019 - Charton, Hayat, Lample, 2020 * These papers address problems with known solutions, for which forward generation of large training sets is possible; we address an open problem still unsolved in mathematics.
This raises new difficulties: backward generation and OOD generalization. To our knowledge this is the first application of transformers, trained on synthetic data, to solve an open problem end to end. * We develop a bespoke technique for generating the backward dataset, which mitigates the risks discussed by Yehudah et al. * We introduce a new approach to prevent performance from dropping when the model is tested on distributions different from the training distribution. This relies on injecting a tiny proportion (0.05%) of forward examples into the backward dataset. Differences with Prajna et al. and other Sum-Of-Squares (SOS) approaches * On the method side, SOS approaches are deterministic solvers relying on Semidefinite Programming, very different from our approach; we use these or similar algorithms for testing the output of our model. * Our methods allow us to handle non-polynomial systems and polynomial systems that have no sum-of-squares Lyapunov functions. * On average over the different datasets, our models have better performance and shorter evaluation/inference time. Differences with recent neural methods (“Neural Lyapunov Function”, FOSSIL 2.0, LyZNet): * Our approach is very different: we use a Transformer and treat the mathematical problem end-to-end symbolically. Neural methods rely on numerical approximation/simulation of the dynamics, provide an implicit solution, and need to be re-trained on every system. * Our methods have much higher performance on both polynomial and non-polynomial datasets (see section 6.3 and our new results in the author rebuttal).
* We provide symbolic expressions for the Lyapunov functions, whereas existing neural methods learn an implicit “black box” Lyapunov function. * Our method learns and provides global Lyapunov functions on the whole of $\mathbb{R}^{n}$ (rather than semi-global Lyapunov functions on a compact subset of $\mathbb{R}^{n}$, which is an easier, although still challenging, problem). --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for providing detailed responses to all the questions. I will also increase my score to 5.
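The backward idea at the center of this exchange (sample the solution first, then build a system it provably certifies) can be sketched in a few lines. The construction below, $f = -g(x)\,\nabla V$ with $g > 0$, is a deliberately simplified illustration of the principle; it is not the paper's actual generation procedure from Section 5.1 / Appendix A.2:

```python
import random
import sympy as sp

def backward_sample(n=2, seed=0):
    """Sample a pair (f, V) such that V is a Lyapunov function of x' = f(x)
    by construction (simplified illustration of backward generation).

    V is a random positive-definite polynomial of diagonal form, and
    f = -g(x) * grad(V) with g strictly positive, so that
    grad(V) . f = -g(x) * |grad(V)|^2 <= 0 holds automatically.
    """
    rng = random.Random(seed)
    xs = sp.symbols(f"x1:{n + 1}", real=True)
    # Positive definite and proper: even powers with positive coefficients.
    V = sum(rng.randint(1, 3) * x ** (2 * rng.randint(1, 2)) for x in xs)
    g = 1 + sum(x**2 for x in xs)  # strictly positive factor
    f = [sp.expand(-g * sp.diff(V, x)) for x in xs]
    return xs, f, V

xs, f, V = backward_sample()
V_dot = sp.expand(sum(sp.diff(V, x) * fi for x, fi in zip(xs, f)))
# V(0) = 0 and V_dot = -g * |grad V|^2 <= 0 hold by construction.
```

The paper's procedure is much richer (cross terms, non-polynomial classes, reformatting steps), but the invariant is the same: correctness of the pair (f, V) is guaranteed by the way it is built, not checked after the fact.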
Summary: This paper trains a transformer to predict Lyapunov functions for both polynomial and non-polynomial dynamical systems. They generate a dataset combining "forward-generated" problems — solutions discovered by existing solvers on randomly generated problems — with "backward-generated" problems — problems generated by sampling a random solution, and then building a system solved by that solution. The trained transformer achieves great performance. On a test dataset of randomly generated problems, it solves 12% of problems, compared to 1% solved by the state-of-the-art solver. In addition, the model solves 84% of 75 problems in the FSOSTOOLS dataset where human experts only solved 9.33%. Strengths: - The paper is well-written, and clearly described, so that I could understand it even without having previously encountered Lyapunov functions. - The related work section is good. - The dataset generation is very carefully described, and lots of important design decisions are included. There is even more detail in the Appendix. Despite not being an expert, it seems like the authors really know what they're doing, and have rigorously tested different interesting choices of what to include in the dataset. - The results are really cool! The authors test out-of-domain generalization well, by showing performance on a randomly generated dataset. Indeed, some of the models evaluated were only trained using "ground truth" solver solutions from a different randomly generated dataset. But the model gets 10x higher performance than the ground truth solver on this random dataset. - The big question for papers like this is how well the models generalize outside the train set. Due to the difficult nature of the problem, it's hard to have really good test sets, but the authors are aware of the importance of this, and do lots of OOD generalization tests of their systems, and report positive results.
- Again, the authors are quite thorough with their documentation and evaluation of their system. The paper feels very polished and professional and the results are impressive. - Progress on this problem could lead to other work using deep learning for mathematics. Weaknesses: 1. The algorithmic approach is not new — training a transformer on math problems for which a large dataset can be generated — but this is okay given the careful effort needed for, and given to, the dataset creation process, and the importance of the problem and the performance achieved. 2. I find all the different dataset variants included to be a bit confusing. The paper could be easier to read with a reduced number of variants presented in the main section of the paper. 3. I think the discussion section goes a little far in making suggestions about LLM reasoning ability given success on this dataset. The sentences are true, but it's not necessarily true that performance here is equivalent to reasoning. The transformers are probably doing something more like "intuition", given the amount of data they are seeing and the quickness with which they generate an answer. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. findlyap solves the same number of problems as SOSTOOLS, right? 2. Aren't the other forward datasets also datasets that SOSTOOLS can solve, since it's used to generate them? If so, what decision went into the characteristics of the test set over those used for training? 3. Are the polynomial systems for FSOSTOOLS homogeneous, non-homogeneous, etc.? I'm not an expert on the math at all, but those adjectives were used for the train set. I'm trying to understand, because performance on the test set is the main way of knowing how good an ML model is. But if the test set is too similar to the train set, then the results aren't as meaningful. Of course, you have the results on the "random systems" dataset. 4.
One idea could be to try to do some sort of expert iteration with your model: use it as the base solver when generating a forward dataset, and use newly solved problems (since it's better at solving than `findlyap` on the random systems dataset) to further fine-tune the model. This probably isn't that useful for increasing performance, but it would be a really cool proof of concept to see the performance keep going up on the random systems dataset over some iterations! 5. Is there a real-world (or "real math") application of finding Lyapunov functions? For example, what kinds of things do the existing tools get used for, and how might this model be used as a superior version? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The limitations are addressed well in the discussion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and comments. Here are a few answers and clarifications. **The algorithmic approach is not new** We agree. See section A of the authors’ response for a clarification of our novel contributions. Summarizing, * we demonstrate the applicability of backward generation to an open problem (instead of one that we know how to solve, like integration). [Yehudah et al, ICML, 2020] explained why this is not straightforward. The adjustments we had to implement (section B of the author response) prove their point, * we show that mixing a tiny number of forward examples into the backward training set helps mitigate the OOD problem. **I find all the different dataset variants included to be a bit confusing** We will change this in the revised version, and limit the main paper to four datasets: * Two forward sets: Fbarrier and FLyap, for two different problems: barrier functions and Lyapunov functions, * Two backward sets: BPoly and BNonPoly, for polynomial and non-polynomial systems. Results on other datasets will be moved to the Appendix. **the discussion section goes a little far in making suggestions about LLM reasoning ability given success on this dataset. (...) The transformers are probably doing something more like "intuition"** We agree, and we will clarify this paragraph. We used the word “reasoning” because it is common in current LLM literature, but we agree that its overuse makes the claim confusing. We changed it to “intuition”, as you suggest. In particular, we propose the following rewrite of the end of the discussion: Our results suggest that transformers can indeed be trained to discover solutions to a hard problem of symbolic mathematics that humans solve through reasoning, and that this is enabled by a careful selection of training examples, instead of a change of architecture.
We do not claim that the Transformer is reasoning; it may instead solve the problem by a kind of “super-intuition” that stems from a deep understanding of a mathematical problem. **Questions** **1- findlyap solves the same number of problems as SOSTOOLS, right?** Yes. findlyap is a Python analog of SOSTOOLS, which we needed because our computing cluster cannot run Matlab. It solves the same problems as SOSTOOLS, up to differences in speed and memory usage, which may cause different timeouts and failures. **2- aren't the other forward datasets also datasets that SOSTOOLS can solve (...)? if so, what decision went into the characteristics of the test set over those used for training?** Yes. Since SOS methods are the main techniques known to find the symbolic global Lyapunov functions we need for our forward training sets, in this paper the word “forward” is essentially equivalent to “solvable with SOSTOOLS” (for this reason, findlyap accuracies on forward datasets, 100% by design, are not reported in Table 4). The three forward datasets include systems of increasing difficulty for SOSTOOLS. FHom and FNHom both focus on the easier “barrier Lyapunov function discovery” problem. FNHomProper focuses on the harder problem of discovering a Lyapunov function for a general polynomial system. The number of equations, the degree, and the range of coefficients were selected so that findlyap could find a Lyapunov function in a reasonable amount of time (a few minutes per problem). The forward test sets, being held-out samples of the forward training sets, have the same distribution. We ensure that no overlap exists between train and test sets. **3- are the polynomial systems for FSOSTOOLS homogenous, nonhomogenous, etc?** This was indeed missing, thank you for pointing it out. FSOSTOOLS contains non-homogeneous proper examples that correspond to the Lyapunov function problem with the most generic distribution of polynomials.
**4- try to do some sort of expert iteration with your model: use it as the base solver when generating a forward dataset, and use newly solved problems to further fine-tune the model** Thank you very much for this suggestion! We experimented with it during the rebuttal period. Specifically, we created a sample of verified model predictions for polynomial systems, added it to the original training sample, and continued training the model. We note that adding about 1,000 correct predictions to the 1 million original training samples improves our performance on the “into the wild” test sets (section 6.4), from 11.7% to 13.5% for Poly3, and from 9.6% to 11.9% for Poly5: a 15% increase in accuracy! The performance on other test sets is unaffected. We are still experimenting with this, and will add our results in the updated paper. Thanks again for suggesting this! **5- Is there a real-world (or "real math") application of finding Lyapunov functions?** Yes, practical applications of stability analysis exist in fields as diverse as supply chains, the space industry, car flow regulation on highways, chemical processes, etc. A typical engineering application is the stabilization problem. Given a dynamical system that can be acted upon, e.g. hydraulic gates controlling flow on a river, or nozzles controlling the trajectory of satellites, we want to select actions that keep the system stable (i.e. water levels remain constant, the satellite stays in orbit). Mathematically, this amounts to setting a parameter of the dynamical equations so that the system has a Lyapunov function. In many cases, a global Lyapunov function cannot be found, and one must settle for a local or semi-global solution, which guarantees that the system remains stable so long as it is subjected only to small perturbations. This is typical in robotics. However, when a global Lyapunov function can be found, more efficient controls can be designed.
This is particularly important in biological systems (see for instance [Agbo Bidi, Almeida, Coron, 2024] for the regulation of mosquito populations), epidemiology (see [Adda & Bichara, IJPAM, 2012]) or traffic flow regulation (see [Hayat, Piccoli, Truong, SIAP, 2023]). --- Rebuttal Comment 1.1: Comment: Great response, and thank you for your comments and responses! I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for your response! And thank you again for your comments, which were very helpful!
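The expert-iteration experiment discussed in question 4 above amounts to a generate-verify-retrain loop. The sketch below is a schematic with hypothetical stand-ins, not the paper's code: `model_propose` plays the role of sampling candidates from the transformer, `verify` the role of the SOS/SMT checker, and `fine_tune` the role of continued training.

```python
def expert_iteration(systems, model_propose, verify, fine_tune, rounds=3):
    """Generate-verify-retrain loop (hypothetical sketch): only certified
    solutions are kept and fed back as additional training data."""
    training_set = []
    for _ in range(rounds):
        solved = []
        for system in systems:
            candidate = model_propose(system)
            if verify(system, candidate):  # keep only verified solutions
                solved.append((system, candidate))
        training_set.extend(solved)
        fine_tune(training_set)  # a better model should solve more next round
    return training_set

# Toy instantiation just to exercise the loop: a "system" is an integer,
# its "Lyapunov function" is its square, verification checks that relation.
demo = expert_iteration(
    systems=[1, 2, 3],
    model_propose=lambda s: s * s,
    verify=lambda s, c: c == s * s,
    fine_tune=lambda data: None,
)
```

The key property of such a loop, echoed in the rebuttal's numbers, is that only verified pairs ever enter the training set, so the fine-tuning data stays correct by construction even as the model changes.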
Summary: This study addresses the problem of designing Lyapunov functions of dynamical systems via learning. The existence of a Lyapunov function is a sufficient condition for a dynamical system's stability. However, its design is not established except for systems with sum-of-squares polynomials. The Authors propose a novel dataset construction framework on which Transformer models can be trained. Extensive experiments demonstrate that Transformer models successfully learn to design Lyapunov functions on several types of datasets, with moderate generalization ability across datasets. It is also shown that the injection of out-of-distribution samples into the training set significantly boosts the success rate. Though still restricted to small systems, this study suggests that the learning approach is a promising direction for addressing hard problems of symbolic mathematics. Strengths: - This study clearly presents the problem and the approach. Overall, the writing is clean and accessible to diverse readers. - A novel backward dataset generation algorithm is proposed to train Transformer models for Lyapunov function design. - The experiments validate the proposed approach from various aspects. Particularly, the generalization across datasets is tested well. Weaknesses: I appreciate the overall contributions of this study. I'd like to raise two weaknesses, one for the methodology (backward generation) and one for the experiments. **Major comments** Backward generation. While the Authors claim that the backward generation is the key innovation, the explanation of the generation process is very limited in the main text. I understand the restriction of pages, but it would be better to elaborate on the method a little more to give intuition to the readers. While the details are presented in Appendix A.2, it mainly provides the procedures, and the rationale is not much given.
I was not able to fully understand the procedures either, because of the unclarity of several notations (see Minor comments below). Plus, many mappings and transformations appear, but their intention and necessity are unclear to me. I encourage the Authors to discuss the range of the Lyapunov functions and $f$ covered by the proposed generation method and whether this is large enough or not, including, if any, the important classes of functions that cannot be covered for now. As for the experiments, assuming practical use, it is important to know whether the Lyapunov function given by Transformers is really a Lyapunov function of the targeted system, and if not, whether this is because of a simple failure or the non-existence of such a function. Discussion and experiments seem missing in this part. **Minor comments** - Some notations are unclear and inconsistent. For example, vectors $\mathbf{z}$ and $\mathbf{z}_{j}^{i}$ are in bold font, while other vectors such as $x$ are not. The subscript notations $(\nabla V)_i(x)$ [line 177] and $(\nabla V(x))_{\tau_2(i)}$ in Eq. (10) are not defined. Assuming that the subscript refers to the $i$-th entry of a vector and $(\nabla V)_i(x)$ means $(\nabla V(x))_i$, the first term of the equation in [line 177] appears to be a scalar while the second term is a vector. The Authors may represent a vector by $(h_{\pi(i)})_i$ in the first term, but still, this is a row vector while the second term is a column vector. - Forward/backward methods and Forward/backward generations are both used. I recommend that the Authors keep the terminology consistent. - [line 183] and the is --> and there is - [line 224] .In -> . In - [line 489] What does $Id$ mean (the identity map?)? - [line 492] "$g$ is positive" sounds a bit weird, as $g$ is a function. - [line 489] Written $g = 1$ but $g$ is supposed to be a function. - [Eq.
10] Does $\mathbf{e}^i_j$ refers to the $j$-th entry of $\mathbf{e}^i$? Then, it should be unbold as it is a scalar. I also encourage the Authors to include more related works that exploit Transformers to address hard math problems. **Shortest Vector Problem (a series of works by the same group)** - "SALSA: Attacking Lattice Cryptography with Transformers," Emily Wenger, Mingjie Chen, François Charton, Kristin Lauter **Gröbner basis computation** - "Learning to Compute Gröbner Bases," Hiroshi Kera, Yuki Ishihara, Yuta Kambe, Tristan Vaccon, Kazuhiro Yokoyama If not restricted to Transformers, **Detection of terminal singularity** - "Machine learning detects terminal singularities," Tom Coates, Alexander M. Kasprzyk, Sara Veneziale **Integer programming** - "Machine Learning for Cutting Planes in Integer Programming: A Survey," Arnaud Deza, Elias B. Khalil Technical Quality: 3 Clarity: 2 Questions for Authors: I'd like the Authors to answer the weaknesses raised above. Plus, - What do the "pre-defined set of increasing functions" and "pre-defined set of bounded-functions" refer to at [lines 479, 485]? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations are adequately presented. For the potential improvements, see Weakness and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and comments. **Backward generation. It would be better to elaborate on the method a little more.** We agree. See section B of the author rebuttal for a better presentation of our methods, explaining the motivation of the different steps. We will update the paper in this direction, and add further explanations in the appendix. **I encourage the authors to discuss the range of the Lyapunov function and $f$** Thank you for bringing this up. One of the main advantages of the generation method we propose is that it allows us to parametrize the functions $V$ and $f$, by selecting the functions $V_\text{proper}$, $V_\text{cross}$, $g_i$ and $h_i$ in different classes. This enables us to generate polynomial or non-polynomial Lyapunov functions, for polynomial or non-polynomial systems. At present, we generate functions defined by the four operations, powers, and usual functions (exp, ln, cos, sin, …), but this could be extended to any other functions, so long as we can specify their gradient. As a consequence, our framework can generate any function $f$ that has a closed formula. The main limitation is the restriction on $V$. According to definition 3.2, any non-negative function with a strict minimum of 0 at the origin, and infinite at infinity, would do, but sampling such a function in the general case is likely an NP-hard problem. Instead, we sample $V$ in the smaller class of functions that can be written $V = V_\text{proper}+V_\text{cross}$, with $V_\text{cross}$ a product of non-negative functions, such as sums of squares, not necessarily polynomial, composed with bounded functions, and $V_\text{proper}$ either a positive definite homogeneous polynomial (in the polynomial case) or a function of the form $V_\text{proper} = \sum_{i=1}^{n} f_{i}(x_{i})$ with $f_i$ a composition of an even polynomial with a strictly increasing function (currently limited to $\exp$, $\ln(1+x^2)$, $\sqrt{1+x}$, but this could be extended). 
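As a toy illustration of the $V_\text{proper}$ class just described, here is a minimal sketch of sampling such a function. This is our own sketch, not the paper's generator: we use $t \mapsto e^t - 1$ as the strictly increasing function (it is 0 at 0, so $V(0) = 0$), which is an assumption made for this example.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_v_proper(n):
    # V_proper(x) = sum_i f_i(x_i), with f_i an increasing function composed
    # with an even polynomial. Hypothetical minimal version: the increasing
    # function is t -> exp(t) - 1, so that V vanishes exactly at the origin.
    coeffs = rng.uniform(0.5, 2.0, size=(n, 2))
    def v(x):
        p = coeffs[:, 0] * x ** 2 + coeffs[:, 1] * x ** 4  # even polynomial, p(0) = 0
        return float(np.sum(np.expm1(p)))
    return v

v = sample_v_proper(3)
assert v(np.zeros(3)) == 0.0   # strict minimum 0 at the origin
assert v(np.ones(3)) > 0.0     # positive away from it
```

Since the even polynomials have positive coefficients, each $f_i$ is non-negative and grows to infinity, so the two intrinsic conditions of the definition hold by construction.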
Still, this is a larger class than the functions considered in SOS approaches, where $V_\text{proper}$ must be of the second form with $f_i$ even polynomials and $V_\text{cross}$ a sum of squares. It also covers all the textbook examples we have seen. Another limitation is that, for now, we reject functions $f$ that are defined on a smaller domain than $\mathbb{R}^{n}$, because we are interested in the **global** stability problem. Future improvements would include operators with a restricted domain of definition, and a focus on the stability problem on a restriction of $\mathbb{R}^{n}$. Following your suggestion, we’ll add a discussion on the class of functions that can be covered and the limitations in our revised version. **It is important to know whether the Lyapunov function given by Transformers is really the Lyapunov function of the targeted system, and if not, whether this is because of a simple failure or the non-existence of such a function.** Thank you for the comment. All model predictions are passed to a verifier that checks whether the prediction is a correct Lyapunov function for the input system. For polynomial systems, the verifier is a theoretical checker (based on sum-of-squares); for non-polynomial systems it was a numerical checker. In the new version we added a theoretical checker for the non-polynomial case (based on SMT, see our author rebuttal, section D). The use of an external checker guarantees that all verified model predictions are indeed correct Lyapunov functions. It may happen, however, that correct model predictions are flagged as incorrect, because of a failure in the checker. There are four cases: * The verifier is too restrictive. Polynomial systems may have non-polynomial Lyapunov functions, but SOS checkers always assume sum-of-squares Lyapunov functions. * The verifier aborts because of timeout or memory overflow, while checking a correct solution. 
* The verifier aborts because of timeout or memory overflow, while checking a wrong solution, for a system that is globally stable. * The verifier aborts because the system is not globally stable (so no Lyapunov function exists). The first case can be identified by changing the verifier; we believe it is rare (polynomial systems with only non-polynomial Lyapunov functions are a very special case). Discriminating between the second case and the last two amounts to telling whether the verifier aborted for “good or bad” reasons. This cannot be done in a systematic way, but we can estimate the frequency of these failures by counting the number of verifier failures on the generated datasets, for which we know the Lyapunov functions. Following your suggestion, we estimated these failure rates and will report them in the updated paper. For non-polynomial systems with known Lyapunov functions, we observe that the SMT solver fails in about 18% of the cases. Discriminating between the last two cases amounts to determining whether a system is globally stable. This is an open question; in fact, the only way to verify it is to find a Lyapunov function. The only estimate we can have is for special classes of systems that we know are globally stable (polynomial systems that SOS methods can solve, gradient flow systems deriving from a potential). We will report the accuracy on gradient flow systems in the revised version. **What do the "pre-defined set of increasing functions" and "pre-defined set of bounded-functions" refer to at [lines 479, 485]?** Currently the pre-defined set of increasing functions is ($\exp$, $\sqrt{1+x}$, $\ln(1+x)$) and the pre-defined set of bounded functions is ($\cos$, $\sin$). We didn’t initially specify them in the procedure because these sets are generation parameters that can be user-defined (one only has to specify the function and its gradient), but we have included them now in the revised version, thanks for the comment. **Minor comments** Thank you for spotting these. 
We corrected them, and added the references you suggested. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. Overall, the rebuttal improved my understanding of the work. Based on the discussion, I highly recommend the authors improve the presentation, including the notation and an elaboration of the proposed method with the core idea of each step, and more. The study itself is interesting; the problem, method, and experiments seem reasonable. I increased my score to 6, assuming that the authors will carefully update the manuscript. I'll also be watching the further responses from the other reviewers. --- Reply to Comment 1.1.1: Comment: Thank you for your response, and thanks again for your comments, which helped us improve the paper. We will carefully update the manuscript with your suggestions. In particular, we will improve the notations, include a rationale concerning the generation procedure, and detail the core idea of each step in the generation, as you suggest.
Rebuttal 1: Rebuttal: We thank the reviewers for their comments and suggestions, which helped improve the paper. Here, we cover questions asked by several reviewers, and present the new experiments we ran for this rebuttal. The results are in the appended PDF file. **A- On novelty, and comparison with Lample and Charton (2019)** Lample and Charton introduced the backward approach as a fast data generation technique for symbolic integration. They noticed that it creates out-of-distribution (OOD) generalization issues. Models trained on backward data perform badly on forward-generated test sets. [Yehuda et al., ICML 2020] suggested that backward generation, when applied to hard problems, tends to target easier subproblems. This limitation comes on top of OOD issues. Ours is the first attempt to actually use a backward approach to train a model to solve an open problem, for which only a very small number of forward examples are available. We make two novel contributions. * 1- We introduce complex generation techniques, like the orthogonal decomposition (step 3), to prevent backward generation from collapsing into solving easier subproblems. While these tricks are problem specific, they show that the limitations described by Yehuda et al. can be mitigated. * 2- We show that OOD generalization issues can be greatly reduced if a tiny number of forward-generated examples is added to the backward-generated training set (50 examples in a million, 0.005%). This approach, related to the priming technique (Jelassi et al., 2023) for length generalization, is, to our knowledge, novel. **B- Rationale of backward generation** Our models are trained on pairs of problems and solutions. When a solver exists, we can use it to generate a training set, by computing the solutions of random problems. This is the forward method. This approach fails for open problems, for which solvers do not exist, or are costly and limited to simple cases. 
The backward method creates a training set by sampling random solutions (Lyapunov functions), and deriving associated problems (dynamical systems). Lyapunov functions must satisfy three conditions (def. 3.2). One depends on the system and two are intrinsic: having a strict minimum at 0, and tending to infinity at infinity. In Step 1, we sample a function V that satisfies the two intrinsic conditions. To do so, we write $V=V_\text{proper}+V_\text{cross}$, where $V_\text{proper}$ belongs to a class with a guaranteed strict minimum at zero – e.g. sums of one-variable functions, positive definite polynomials – and $V_\text{cross}$ belongs to a large class of non-negative functions, valued 0 at the origin, but with no guarantee of a strict minimum. This class of functions is larger than those considered in usual SOS methods. In Steps 2 to 4, we address the third condition of Def. 3.2: $\nabla V(x)\cdot f(x)\leq 0$ for any $x\in\mathbb{R}^{n}$. A naive solution would be $f(x) = - \nabla V(x)$, but this would greatly reduce the class of systems we generate, and turn the Lyapunov function discovery problem (find $V$ from $f$) into an easier integration problem (find $V$ from $-\nabla V$). This is the point made by Yehuda et al. To prevent this, we transform the naive solution $f(x) = - \nabla V(x)$ in two ways: * Step 4: we multiply it, along each coordinate axis, by random non-negative functions $h_i^2$. Because this does not change the sign of $f \cdot \nabla V$, Def. 3.2 still holds. * Steps 2 and 3: we add a random function orthogonal to $\nabla V$, $\sum^p_{i=1} e_i(x) g_i(x)$, with $e_i$ random vectors orthogonal to $\nabla V$, and $g_i$ random functions. Since $e_i \cdot \nabla V=0$, Def. 3.2 still holds. This guarantees that the resulting system $f$ spans a very large set of systems, because any $f$ satisfying $\nabla V(x)\cdot f(x)\leq 0$ can be written as the sum of a function collinear to $\nabla V(x)$ and a function orthogonal to $\nabla V(x)$. 
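As a toy numerical illustration of Steps 2–4, the following sketch builds a 2D system $f$ from a candidate $V$ and checks the sign condition. All concrete choices here ($V = x_1^2 + x_2^2$, the damping factors $h_i^2$, a single orthogonal direction obtained by a 90° rotation of the gradient) are our own simplifications, not the paper's generator:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_V(x):
    # gradient of V(x) = x1^2 + x2^2 (hypothetical simple proper candidate)
    return 2.0 * x

def f(x):
    g = grad_V(x)
    # Step 4 analogue: damp -grad V coordinate-wise by non-negative factors h_i^2
    h2 = np.array([1.0 + x[0] ** 2, 2.0])
    # Steps 2-3 analogue: add a component orthogonal to grad V
    # (in 2D, a 90-degree rotation of the gradient is orthogonal to it)
    e = np.array([-g[1], g[0]])
    return -h2 * g + np.sin(x[0]) * e

# Third condition of Def. 3.2, grad V(x) . f(x) <= 0, holds by construction:
# the orthogonal part contributes nothing to the dot product, and the damped
# part contributes -(h_1^2 (dV/dx1)^2 + h_2^2 (dV/dx2)^2) <= 0.
for _ in range(1000):
    x = rng.uniform(-5.0, 5.0, size=2)
    assert grad_V(x) @ f(x) <= 1e-9
```

Recovering $V$ from this $f$ is no longer a plain integration problem, which is the point of the orthogonal decomposition.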
This mitigates Yehuda et al.'s limitations, by preventing the model from guessing the solution by reversing the generative process. **C- Comparison with neural Lyapunov methods** Reviewer S4kD raised concern about recent neural methods already solving this problem. We consider our approach to be complementary to theirs. First, we focus on a different problem: proving that a system has a global Lyapunov function (valid in the full space) in symbolic form. Neural methods, on the other hand, find implicit (black box) semi-global Lyapunov functions. While finding semi-global functions, and the corresponding region of attraction, is an important problem in many engineering applications, it is a different problem: we aim at proving stability over the full space. Second, in our experiments, we note that our methods perform much better than neural Lyapunov models (including the most recent Fossil 2.0 and LyzNet, which we added to our baselines), see the rebuttal PDF. We believe this is due to the fact that neural methods solve a different problem, as underlined above. Third, the methods are fairly different. We train a model only once, on generated data, and then use it to predict explicit Lyapunov functions. Neural methods need to be retrained on every system considered, and provide implicit “black box” solutions. **D- New experiments** * We added an SMT verifier to theoretically guarantee the output of the models in the non-polynomial case. It confirms the performance of our models. * We provide an extensive comparison to neural methods such as FOSSIL 2.0 and LyZNet (see PDF) * We tested a suggestion by reviewer LBjn (thanks again!): fine-tuning a model on its verified predictions. We notice that the addition of 1000 verified predictions to our training set of 1 million improves performance on the “into the wild” test sets by about 15%, while not affecting the other test sets. Adding more examples seems to be detrimental, as it decreases the performance on other benchmarks. 
Experiments are ongoing; we will update the paper with the new results. Pdf: /pdf/ab9bde9deae0b0a7843e0ebe8baa05223f95bec8.pdf
NeurIPS_2024_submissions_huggingface
2024
This Too Shall Pass: Removing Stale Observations in Dynamic Bayesian Optimization
Accept (poster)
Summary: The paper proposes a novel algorithm for dynamic Bayesian optimization (DBO). DBO differs from traditional BayesOpt as the black-box function to be optimized is changing in time, and the goal of the optimization procedure is to keep track of the optimum across a continuous time index. The continuous time component adds an interesting dimension to the problem: the black-box sampling frequency becomes important, to be able to query the function frequently enough to keep track of the optimum. This means there is a necessity for the algorithm and acquisition function solve to be as efficient as possible. One important factor in the acquisition function solve time is the training of the GP surrogate model. In particular, the training is of order $O(N^3)$ in the number of training data-points. In a dynamical system where the function evolves with time, certain training-points can become redundant, meaning we can make the optimization more efficient if we discard them. The paper proposes a method to choose which points are worth discarding. The main intuition behind the method is that a point which does not change the *future* predictive distribution of the GP (with respect to time) is a point that is no longer needed in the data-set. Therefore, the authors propose a criterion that looks at the predictive distribution under the full data-set $D$ and measures the 2-Wasserstein distance to a GP trained on $D \backslash (x_i, y_i)$, where $(x_i, y_i)$ is a training point that could potentially be removed. To make the measure interpretable (in terms of the magnitude of the criterion), we can normalize by looking at the 2-Wasserstein distance between the GP trained on the full data-set and the prior GP, giving the final criterion. Unfortunately, actually calculating the criterion is too computationally expensive, especially when trying to reduce costs to have a high sampling frequency. 
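For reference, the 2-Wasserstein distance between two multivariate Gaussians (the building block of the criterion above) has a well-known closed form; a small sketch with hypothetical helper names, not code from the paper:

```python
import numpy as np

def psd_sqrt(a):
    # symmetric PSD matrix square root via eigendecomposition
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def w2_gaussians(m1, c1, m2, c2):
    # W2^2(N(m1,c1), N(m2,c2)) = ||m1-m2||^2
    #   + tr(c1 + c2 - 2 (c2^{1/2} c1 c2^{1/2})^{1/2})
    s = psd_sqrt(c2)
    cross = psd_sqrt(s @ c1 @ s)
    d2 = np.sum((m1 - m2) ** 2) + np.trace(c1 + c2 - 2.0 * cross)
    return np.sqrt(max(d2, 0.0))

# pure mean shift with equal covariances: the distance is just the mean gap
d = w2_gaussians(np.zeros(2), np.eye(2), np.array([3.0, 0.0]), np.eye(2))
```

For identical Gaussians the distance is 0, and for a pure mean shift with equal covariances it reduces to the Euclidean distance between the means; the paper's criterion integrates such marginal comparisons over the future part of the domain.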
The paper then shows that both 2-Wasserstein distances can be upper-bounded, and proposes using the upper bounds in place of the actual distances, creating an approximate criterion. In the appendix, the magnitude of the approximation error is investigated in practice. While it is shown to be non-negligible, the division in the criterion helps balance out the error in part, and most importantly, the empirical results of using the criterion are strong. Once the criterion is established, the paper introduces the algorithm: before every acquisition function solve, points are removed from the data-set in a greedy manner, i.e., the point with the least relevancy is compared against a budget, and if the budget allows it, then the point is removed and the budget is decreased; this is repeated until the budget is exhausted. The budget is defined by the amount of relative error allowed by removing data-points. Numerical results show that a trade-off is required between removing enough information to be computationally tractable and keeping enough data to identify the optimum, and through empirics the authors are able to provide a recommended hyper-parameter value. Then the algorithms are tested on 10 synthetic functions and 2 real-world experiments. The proposed algorithm is the strongest in 10 of the 12 benchmarks. Strengths: - (Originality) The paper proposes a novel algorithm in an understudied area of Bayesian optimization. The paper provides a strong theoretical backing of the method and a good empirical study. Some important literature connections seem to have been missed, though. - (Quality) All claims in the paper seem backed up and well justified. Details of the method are well reported, and important parts of the algorithm, such as sensitivity to hyper-parameters and approximation error, are investigated. - (Clarity) The paper is exceptionally well written, and the method is easy to understand. 
- (Significance) As mentioned in the paper, dynamic Bayesian optimization is an understudied area; however, I can think of many possible applications of it. I believe the method to be potentially significant in many applications. Weaknesses: - Perhaps the biggest weakness is a lack of comparison with, and overall lack of discussion of, sparse Gaussian processes and their relationship to online learning. Indeed, methods for selecting inducing points share similarities with the proposed algorithm, e.g. by greedily making the data-set smaller based on information criteria (see Section 9 of [1]), and even with application to online learning and BO [2, 3, 4] (the final reference is very recent, so it was impossible for the authors to include, but it looks relevant). Such methods seem to have the potential of performing well in this setting, and if possible the proposed method should be compared against them, but at least they should be discussed. - The proposed approximation of the 2-Wasserstein distance criterion is potentially loose; however, the authors recognize this and the empirical algorithmic performance is strong. - A lot of the synthetic testing is carried out on non-dynamical benchmarks. While the importance of the method is that it is able to maintain a small data-set and still carry out BO well, I believe a larger pool of experiments that includes the main application domain is important. [1] Quinonero-Candela, Joaquin, and Carl Edward Rasmussen. "A unifying view of sparse approximate Gaussian process regression." The Journal of Machine Learning Research 6 (2005): 1939-1959. [2] Galy-Fajou, Théo, and Manfred Opper. "Adaptive inducing points selection for Gaussian processes." arXiv preprint arXiv:2107.10066 (2021). [3] Moss, Henry B., Sebastian W. Ober, and Victor Picheny. "Inducing point allocation for sparse Gaussian processes in high-throughput Bayesian optimisation." International Conference on Artificial Intelligence and Statistics. PMLR, 2023. [4] Maus, Natalie, et al. 
"Approximation-Aware Bayesian Optimization." arXiv preprint arXiv:2406.04308 (2024). Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed at length in Appendix C and H. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer NVX3, Thank you for the detailed review. We are discussing below the weaknesses and questions you have raised. Also, please make sure to read the global response as we discuss some of your questions there. **Lack of discussion of sparse Gaussian processes.** Thank you for pointing out this very interesting connection between the sparse GP literature and our work. Although these works are definitely related to our problem (because they also seek to discard some observations while still preserving the quality of the GP inference), applying them to a dynamic setting would require some non-trivial modifications. More precisely, (i) In their current form, these works introduce solutions that take place in a spatial domain only. Consequently, they try to approximate an exact GP over a bounded domain, and assume that they can place inducing inputs arbitrarily in this very same domain. Conversely, in the dynamic setting, the domain is constantly changing and the domain where the inducing inputs can be located is restricted to the Cartesian product of the spatial domain and the past $[0, t_0]$, with $t_0$ the present time. Moreover, the exact GP should be approximated on a different, complementary, unbounded domain, that is the Cartesian product between the spatial domain and the future $[t_0, +\infty)$. (ii) Most sparse GP solutions have a hyperparameter that controls the number of inducing inputs to place. Unfortunately, this hyperparameter should be set specifically for each objective function. Conversely, W-DBO can adapt its dataset size depending on the nature of the objective function, without the need to adjust its hyperparameter. Still, we acknowledge the connections described by the reviewer. Upon acceptance, we will definitely discuss the mentioned papers in Section 2 of the camera-ready version. 
**The proposed approximation of the 2-Wasserstein distance criterion is potentially loose.** We plan to study the approximation of the ratio of Wasserstein distances further in future work, in order to better understand it theoretically. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I agree that the sparse GP methodology is relevant but cannot be applied in the current version of the paper, and that non-trivial work would need to be carried out. Adding a discussion of these challenges and the connections with the current problem would be sufficient. Thanks further for the clarification about the synthetic benchmarks being dynamic; I do agree it should be made clearer. I remain positive about the work, and keep my recommendation of acceptance.
Summary: The authors propose W-DBO, a statistical distance-based criterion for removing "stale" observations in the dynamic BO setting. Observations are removed based on their impact on the GP globally, as measured by an approximate integrated Wasserstein distance, for which the authors prove the approximation quality. Results show that W-DBO outperforms the competition in terms of regret over time. Strengths: __Relevant problem__: The optimization of time-varying functions appears highly relevant, and seems underexplored. __Intuitive solution__: Measuring staleness by the impact on the GP globally is a good idea. __Theory__: There is a good amount of theory included on the proposed algorithm. Weaknesses: Unfortunately, I have a substantial concern about the evaluation, which I believe requires the authors to (ideally) release the code of their experimental setup and, unfortunately, re-run experiments with a more conventional setup. __Benchmarking__: The authors do not appear to use an ARD kernel in their work. As far as I can tell, the method is not restricted to a single lengthscale for all dimensions, so is there a reason for this unconventional choice? Notably, using a non-ARD kernel is an issue of fairness in benchmarking. Since almost all of the chosen test functions (Rastrigin, Schwefel, Shekel, Ackley, Styblinski-Tang and, to a lesser degree, Rosenbrock) are symmetric, this choice is rather convenient. Specifically, as soon as one lengthscale is estimated correctly, all of them will be, which is _very_ beneficial for performance. For one, ABO estimates all of the lengthscales, and will as such be at a large disadvantage. The authors mention that standard GP-UCB is non-ARD as well (App. G3), which could very well be the explanation for its good performance on the symmetric functions. To emphasize this even more, the obvious non-symmetric function (Hartmann-3) is one where W-DBO and GP-UCB perform worse. 
(Sidenote: Hartmann-6 is fairly uniform in its lengthscales.) In short: Can the authors please run their experiments with an ARD kernel on the symmetric test functions? Right now, the non-ARD kernel provides a large advantage, specifically in relation to the algorithms that run ARD (ABO at least). I do not believe running a non-ARD kernel is warranted, since it is not a design choice that is relevant to the dynamic setting and provides an outsized and unrealistic advantage. __Relevance of "stale" observations__: It is not apparent to me that "stale" observations are a large problem if there is a time-varying component to the kernel which naturally decreases their relevance with time. As such, the algorithm seems to strictly target high-throughput settings due to the computational savings, which is a more restrictive setting than dynamic BO generally. __Evidence for removing stale observations__: While the criterion makes sense, I fail to see evidence that the removed observations are indeed "stale". What is good evidence for this? Simply plotting downstream regret performance (especially when the benchmarking has other flaws) is not a good metric for staleness. An illustrative example of which observations are being removed is a must. However, I also encourage the authors to assess what a good metric for staleness is, and present that in their results. __Benchmarking, part 2__: The authors propose a dynamic BO algorithm, but the synthetic benchmarks appear to all be non-dynamic (there is no time-dependence in any of the benchmarks in App. G2). I would have expected these to have a time-varying component in order for the algorithm to be relevant. If the benchmarks indeed do not have a time-varying component, shouldn't standard GP-UCB be the gold standard for performance? __Benchmarking, part 3__: Please add standard errors to the tables. 
__Content in Appendices__: There is a lot of important content that has been moved to the supplementary material, most importantly the variable definitions in 4.1 (App. A & B) and the intuition for the Wasserstein ratio in 3.2 (App. E). I believe the authors should try and make room for these important pieces in the main paper, possibly by shortening the background section or moving some of it to the SM. __Normalization by empty GP__: The idea is novel and appears intuitive, but it is not thoroughly explained as to why the metric is appropriate. I recommend the authors elaborate and add a 1D illustrative example, possibly compared to simply using the Wasserstein distance. Minor: - "BO inference" is used numerous times in place of "GP inference", which is the more precise description of what is occurring. - L160: "uses" -> "used" Technical Quality: 1 Clarity: 2 Questions for Authors: Questions are apparent from the "Weaknesses" section. Confidence: 5 Soundness: 1 Presentation: 2 Contribution: 2 Limitations: Limitations have been addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer iNzn, Thank you for the detailed review. We are discussing below the weaknesses and questions you have raised. Also, please make sure to read the global response as we discuss some of your questions there. **On the usage of an ARD kernel.** We make no mention of setting an ARD kernel for ABO, nor for any other DBO algorithms. Appendix G.1 lists the kernels used for each solution. Let us detail precisely the number of hyperparameters that each solution has to infer: _GP-UCB and ET-GP-UCB_. * Spatial cov.: Matérn-5/2 * Temporal cov.: N/A * Number of parameters: 3 (scale $\lambda$, spatial lengthscale $l_S$, noise level $\sigma^2_0$) _R-GP-UCB and TV-GP-UCB_. * Spatial cov.: Matérn-5/2 * Temporal cov.: fixed kernel with decay parameter $\epsilon$ * Number of parameters: 4 (scale $\lambda$, spatial lengthscale $l_S$, decay $\epsilon$, noise level $\sigma^2_0$) _ABO and W-DBO_. * Spatial cov.: Matérn-5/2 * Temporal cov.: Matérn-3/2 * Number of parameters: 4 (scale $\lambda$, spatial lengthscale $l_S$, temporal lengthscale $l_T$, noise level $\sigma^2_0$) Although we acknowledge the usefulness of anisotropic kernels in practice, we would like to address (i) the issue of “conventionality” of ARD kernels in benchmarking, and (ii) the performance of W-DBO with an ARD kernel. (i) Among the DBO papers (*i.e.*, [1, 2, 3]), the vast majority of the benchmarking was done with a non-ARD kernel. More specifically, [1] explicitly uses a non-ARD kernel, [2] uses an ARD kernel on 1 experiment out of 7 (cf. Tables 4-10 in [2]), and [3] makes no mention of an ARD kernel. This makes us consider that using a non-ARD kernel for benchmarking is fairly standard, at least in the DBO literature. We do not understand therefore why the comprehensive benchmarking would be unconventional and/or unfair towards some solutions. 
Upon acceptance, we propose to explicitly list the kernel hyperparameters for each DBO solution in Appendix G.1 to prevent any confusion about our benchmarking. (ii) Let us discuss how our framework can be applied to anisotropic kernels. Most of our analysis remains unchanged, but we need additional convolution formulas because the ones provided in Tables 3 and 4 only hold under Assumption 3.2. We now illustrate this point by providing additional results with the anisotropic SE kernel. Consider a $d \times d$ diagonal matrix $\Sigma = \text{diag}\left(l_1, \cdots, l_d\right)$ that gathers the $d$ lengthscales in its diagonal. It can be shown that the anisotropic convolution formula is $(k_S * k_S)(\mathbf x) = \pi^{d / 2} \det\left(\Sigma\right) e^{-\mathbf x^\top \Sigma^{-2} \mathbf x / 4}$. Observe that the formula reduces to the isotropic formula described in Table 3 when $l_1 = \cdots = l_d = l_S$. Additional experiments with the ARD SE kernel are listed in the PDF, which had to be uploaded in the global rebuttal (where they are discussed too). **I fail to see that the removed observations are stale. Provide an illustrative example of which observations are removed and a metric for staleness.** The concept of staleness is used in the title, as well as in Sections 1 and 2 to introduce our work, because it is a term commonly used in the literature, but we replace it with the broader concept of “relevancy” from Section 3 onwards. Relevancy can be temporal (an observation that is too distant in time to be relevant for GP inference, a.k.a. staleness) but also spatial (an observation that is too close to another to bring substantial information for GP inference). In other words, staleness coincides with temporal relevancy, whereas W-DBO accounts for both temporal and spatial relevancy. 
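Returning to the anisotropic convolution formula above: a quick numerical sanity check is possible in $d = 1$, where the formula reads $(k_S * k_S)(x) = \sqrt{\pi}\, l\, e^{-x^2/(4l^2)}$. The sketch below is ours, not from the paper, and compares the closed form against a direct Riemann sum:

```python
import numpy as np

def conv_formula(x, ls):
    # (k_S * k_S)(x) = pi^{d/2} det(Sigma) exp(-x^T Sigma^{-2} x / 4),
    # specialized to a diagonal Sigma = diag(ls)
    d = len(ls)
    return np.pi ** (d / 2) * np.prod(ls) * np.exp(-np.sum((x / ls) ** 2) / 4.0)

# direct numerical self-convolution of the SE kernel in d = 1
l, x0 = 0.7, 0.5
u = np.linspace(-20.0, 20.0, 200001)
du = u[1] - u[0]
num = np.sum(np.exp(-(u / l) ** 2 / 2.0) * np.exp(-((x0 - u) / l) ** 2 / 2.0)) * du
assert abs(num - conv_formula(np.array([x0]), np.array([l]))) < 1e-8
```

The agreement confirms the stated reduction to the isotropic case when all lengthscales coincide.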
We provide a metric for the relevancy of an observation $\mathbf x_i$, namely the Wasserstein distance between the posterior GP conditioned on the entire dataset $\mathcal{D}$ and the posterior GP conditioned on $\tilde{\mathcal{D}} = \mathcal{D} \setminus \{\mathbf x_i\}$, defined on the Cartesian product of the spatial domain $\mathcal{S}$ and the future $[t_0, +\infty)$, with $t_0$ the present time. We use this metric because it is intuitive and it properly captures the concept of observation relevancy (e.g., it is 0 when removing an observation does not change the GP posterior in any way, and it can be arbitrarily large otherwise, depending on how the removal of an observation affects the posterior). We remove observations according to a normalized version of this metric, to ensure that the removed observations are indeed “stale” or, more broadly speaking, “irrelevant”. We also provide an example illustrating which observations are being removed. Animated visualizations are provided in the supplementary material, and are discussed in Appendix G.4. **Please add standard errors to the tables.** Due to lack of space, we did not report the standard errors directly in Table 2. However, they are provided graphically whenever applicable, namely in Figures 1-3 and 6-17. To further address this weakness while respecting the NeurIPS template, we propose to replicate Table 2 in a dedicated appendix of the camera-ready version, where we will also provide the standard errors. **There is a lot of important content that has been moved to the SM.** Upon acceptance, we will use the additional page allowed to move some important content from the SM to the main paper, possibly also reducing the Background section to make room for this material. **Typos.** Thank you for pointing out these typos. Upon acceptance, we will fix them in the camera-ready version. **References** [1] I. Bogunovic, J. Scarlett, and V. Cevher. Time-varying Gaussian process bandit optimization. 
In AISTATS, pages 314–323. PMLR, 2016. [2] P. Brunzema, A. von Rohr, F. Solowjow, and S. Trimpe. Event-triggered time-varying bayesian optimization. arXiv preprint arXiv:2208.10790, 2022. [3] F. M. Nyikosa, M. A. Osborne, and S. J. Roberts. Bayesian optimization for dynamic problems. arXiv preprint arXiv:1803.03432, 2018. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks to the authors for their response. __Benchmarking:__ Thanks to the authors for the clarification. Ultimately, while ARD is clearly the convention in BO (I cannot speak for DBO specifically, so I thank the authors for clarifying) the issue is one of fairness in evaluation. As mentioned previously, ARD kernels will be disadvantaged on symmetric functions. I was under the impression that ABO ran an ARD kernel (their paper seems to suggest it), but if the authors say that all methods run a non-ARD kernel, I will take their word for it. __Evidence for removal:__ There has not been evidence that suggests that the removal criterion is actually efficient. One would expect there to be a comparison between not removing at all (should be ideal) and removing stale observations (should be close to ideal if the method is potent). As stated previously, regret performance as a measure of this is not convincing enough, in my view. The ablation plot on $\alpha$ is a start, but running W-DBO without removing any observations (and an otherwise identical setup to W-DBO) as a gold standard is, in my view, a must-have for all experiments. __Wasserstein Distance figure:__ I think this is a good addition. I would also consider adding where the removed observation was located. ______ Given unconvincing experimental evidence that the proposed method does what it seeks to do (namely remove unimportant observations), otherwise unconvincing experimental results, and a presentation that could use some polish, I will maintain my rating.
--- Reply to Comment 1.1.1: Title: More Clarifications Comment: Dear Reviewer iNzn, Thank you for your feedback on our response. We further address your points below. **Benchmarking** We agree with the reviewer that this discussion boils down to an alleged unfairness issue in the benchmarking. We think that this issue has been put to rest in three ways: (i) we have shown that most benchmarking in the *DBO* literature was conducted with a non-ARD kernel, (ii) we have made it clear that, for each experiment in the paper, each DBO solution uses a non-ARD kernel and (iii) we have run additional experiments with an ARD kernel to make sure that W-DBO was still the best-performing solution with ARD kernels. **Evidence for removal** An instance of W-DBO that does not remove any point would have a behavior very close to the behavior of TV-GP-UCB. In fact, when it does not remove any point, W-DBO is simply TV-GP-UCB with a different temporal kernel. This suggested instance of W-DBO would therefore be quite redundant with TV-GP-UCB, and would not exhibit superior performance for the same reason TV-GP-UCB does not. As time goes by, TV-GP-UCB and similar solutions that do not remove irrelevant observations (e.g., ABO and the instance of W-DBO suggested by the reviewer) experience a larger and larger GP inference time because irrelevant observations accumulate in their datasets. This increasingly large inference time prevents these solutions from querying the objective function as often as W-DBO, which regularly removes observations that are deemed irrelevant according to the definition in the paper (which all four reviewers find intuitive and reasonable). Because we benchmark the solutions in a continuous time setting, failing to query the objective function often enough ultimately hinders the optimization of the dynamic objective function by limiting how much two consecutive observations can be correlated. 
In other words, W-DBO outperforms TV-GP-UCB and ABO precisely because it removes some observations. At the same time, recall that W-DBO also outperforms R-GP-UCB and ET-GP-UCB, two solutions that also remove observations but in a less refined way. This leads us to two conclusions: (i) a version of W-DBO that does not remove any irrelevant observation would **not** be the gold standard suggested by the reviewer and (ii) the very fact that W-DBO exhibits better performance than all the other DBO solutions (ABO and TV-GP-UCB on the one hand, R-GP-UCB and ET-GP-UCB on the other hand) is strong evidence that the observations that it removes are indeed irrelevant, or at least, irrelevant enough that removing them is a better strategy than keeping them in the dataset. Finally, we acknowledge that our paper opens multiple interesting research questions about observation relevancy, and does not provide all the answers to these questions. In our opinion, this is more a strength than a weakness. W-DBO is an original answer to the DBO problem that outperforms the existing solutions while opening various research questions, which is why we believe our paper is of interest to the BO community. We hope our answers have lifted any remaining misunderstanding.
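As an illustration of the relevancy criterion discussed in this thread, here is a self-contained toy sketch (all names are assumed; it uses the closed-form 2-Wasserstein distance between 1-D Gaussian posterior marginals, $W_2^2 = (\mu_1-\mu_2)^2 + (\sigma_1-\sigma_2)^2$, averaged over test points, rather than the paper's process-level distance or its normalization). Removing a near-duplicate observation barely moves the posterior, while removing an isolated one moves it a lot:

```python
import numpy as np

def se_kernel(a, b, ls=0.5):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ls ** 2))

def gp_posterior(X, y, Xs, noise=1e-4):
    # Standard GP regression posterior mean and stddev at test points Xs.
    K = se_kernel(X, X) + noise * np.eye(len(X))
    Ks = se_kernel(X, Xs)
    Kss = se_kernel(Xs, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, np.sqrt(np.maximum(var, 0.0))

def relevancy(X, y, i, Xs):
    # Toy score: average W2 between Gaussian marginals of the posterior
    # with and without observation i (illustration only).
    m1, s1 = gp_posterior(X, y, Xs)
    keep = np.arange(len(X)) != i
    m2, s2 = gp_posterior(X[keep], y[keep], Xs)
    return np.mean(np.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2))

X = np.array([0.0, 0.05, 1.0, 2.0])   # x=0.05 is nearly a duplicate of x=0
y = np.sin(X)
Xs = np.linspace(0.0, 2.0, 50)
scores = [relevancy(X, y, i, Xs) for i in range(len(X))]
# The near-duplicate point (index 1) scores far lower than the isolated
# point at x=2 (index 3): it is "irrelevant" in the sense above.
print(scores)
```

This mirrors the spatial side of relevancy; the temporal side works analogously through the time kernel.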
Summary: The paper proposes a new algorithm for dynamic Bayesian optimization (DBO). To develop this new algorithm, the authors first derive a Wasserstein distance-based criterion that is a way of measuring how relevant a given collected data point is during optimization. Since dynamic functions change over time, each observation collected during DBO becomes less relevant over time, until eventually the computational cost of continuing to consider the observation outweighs any benefit it provides. When this becomes the case it therefore makes sense to remove these older observations from the dataset, especially since we care a lot about reducing computational cost since having high sampling frequency is especially important in DBO. In order to define a principled way to remove these irrelevant observations during DBO, the authors use their Wasserstein distance-based criterion to measure the relevancy of collected data points and decide whether to remove them from the dataset during the course of optimization. This strategy leads to the authors' novel DBO algorithm (W-DBO), which the authors show to perform better than relevant baselines with a convincing set of experimental results. Strengths: Originality: The authors' proposed Wasserstein distance-based criterion and resultant W-DBO algorithm is clearly a novel approach to solve a relevant problem in DBO. Quality: The paper is very well-written. Additionally, the figures and tables are all of good quality - they are both easy to parse and do a nice job of displaying relevant results. Clarity: The paper is clear and easy to follow from start to finish. The figures and tables are clear and easy to read. The paper is also clearly motivated and it’s easy to understand what the authors did and why.
Significance: It is obvious to me that the problem the authors seek to solve here (DBO algorithms keeping around increasingly irrelevant data points despite the need to maintain high sample efficiency over time) is important and relevant to the community. The authors' W-DBO algorithm provides a reasonable and intuitive solution to this problem. Convincing results: The experimental results do indeed show that W-DBO outperforms other methods on a large number of BO tasks. Weaknesses: Typo (Line 305): “to the best of your knowledge” should instead be “to the best of our knowledge”. One ablation I would’ve liked to see in this paper: The paper makes it clear that leveraging quantification of relevancy of observations can be used to improve DBO performance with their W-DBO algorithm. However, it is less clear to me from the experimental results that using the authors' proposed Wasserstein distance-based criterion for relevancy quantification is necessarily better than another strategy for quantifying relevancy. While the Wasserstein distance-based criterion the authors propose is intuitive and clearly works well in this setting, it would be interesting to see a direct comparison against other simpler (or even ad-hoc) methods for attempting to quantify how relevant a given observation is. For example, rather than using the Wasserstein distance-based criterion, one could just assume some constant rate of decline in quantitative relevancy over time. I do not think that this additional experiment is necessary for this paper to be accepted, but I do think it would strengthen the paper by showing the importance of using the authors' proposed Wasserstein distance-based criterion specifically for relevancy quantification in this setting. Technical Quality: 3 Clarity: 3 Questions for Authors: The authors state that computational biology is a field that makes “heavy use of DBO”.
I am curious what scenarios in comp bio require the use of DBO rather than just traditional BO? In what scenarios do we have biological black-box functions that are changing over time? It might be useful to have some additional citations and/or discussion of this in the paper. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer JgwK, Thank you for the detailed review. We are discussing below the weaknesses and questions you have raised. Also, please make sure to read the global response as we discuss some of your questions there. **Direct comparison against other simpler (or even ad-hoc) methods for relevancy quantification (e.g., constant rate of decline over time).** TV-GP-UCB assumes a constant decreasing rate of the covariance function over time. Neglecting spatial relevancy, this precisely translates into a constant decreasing rate of the relevancy of an observation over time. Our approach (W-DBO) differs from TV-GP-UCB in three ways: (i) W-DBO can use an arbitrary time kernel instead of a fixed kernel for TV-GP-UCB (see [1]), (ii) W-DBO has a more sophisticated quantification of relevancy (*i.e.*, the Wasserstein distance) that involves the relevancy both in the time dimension (staleness) and in the spatial domain, (iii) W-DBO is able to remove observations when they are deemed irrelevant. Together, (i), (ii) and (iii) explain the difference in the performance of TV-GP-UCB and the performance of W-DBO. One could use the very same core idea of our framework (a distance function on stochastic processes) and come up with other definitions of relevancy. As an example, one could define the relevancy of an observation using the KL-divergence on the two GP posteriors. However, this would require a very different analysis to find an approximation of this distance, which would lead more to a new paper than to an ablation study. **The authors state that computational biology is a field that makes “heavy use of DBO”.** In Section 1, we cite computational biology as an application of BO, not of DBO. However, it is true that in our concluding remarks of Section 6, we cite computational biology as a field that could apply DBO. This is a typo that will be removed from the camera-ready version upon acceptance. Thank you for pointing it out. 
**References** [1] Ilija Bogunovic, Jonathan Scarlett, and Volkan Cevher. Time-varying gaussian process bandit optimization. In Artificial Intelligence and Statistics, pages 314–323. PMLR, 2016. --- Rebuttal Comment 1.1: Title: Rebuttal Acknowledgement Comment: I would like to thank authors for their response and addressing the points I raised in my review. I am happy to keep my assessment of their work the same.
Summary: This paper addresses the challenge of optimizing a time-varying black-box function using GP-based Dynamic Bayesian Optimization (DBO). Unlike traditional Bayesian Optimization (BO), DBO seeks to handle dynamic functions where the optimum changes over time by incorporating time in the GP model covariance function. With the goal of expediting DBO, this paper proposes a strategy to remove ‘irrelevant’ data from the GP model, specifically by removing points that minimally affect the distribution over future predictions. To this end, the authors introduce a Wasserstein distance-based criterion (and associated bounds/approximations) and propose the W-DBO algorithm. This algorithm dynamically removes irrelevant observations, maintaining good predictive performance and high sampling frequency. Numerical experiments demonstrate W-DBO's superiority over state-of-the-art methods. Strengths: - The introduction of a Wasserstein distance-based criterion to measure the relevancy of observations is an intuitive and interesting idea that effectively addresses the introduced challenge of minimally altering the GP predictions. - The proposed W-DBO algorithm appears to perform well on the extensive numerical experiments, demonstrating the importance of removing irrelevant observations. The algorithm remains computationally efficient, avoiding the prohibitive growth of dataset size over time. - The strategy is relatively generalizable; a Wasserstein distance-based approach can be integrated with any BO algorithm to identify candidates for removal from the dataset, enhancing applicability of this paper across various domains. Weaknesses: - The motivation for this challenge is largely based on GP-based Bayesian optimization, which might not always be applicable for this problem setting. Specifically, GP-based BO is most relevant when samples are expensive (and are limited as a result).
When samples are cheap to obtain, derivative-free optimization methods such as evolutionary strategies are more relevant, but these are not compared. - While the numerical experiments are extensive, they are limited to synthetic examples—most of which are not time-dependent (Appendix G.2). The inclusion of some real-world examples and/or dynamic optimization case studies could strengthen the motivation and highlight applicability of the proposed method. - The scalability of the W-DBO algorithm itself (e.g., checking the proposed metric for all points) in high-dimensional spaces or with very large datasets is not fully addressed, which could be a limitation in some applications. Technical Quality: 3 Clarity: 3 Questions for Authors: - Is there ever a downside to the ‘greedy’ removal of points? E.g., could removing the first selected point prevent otherwise being able to remove two points? - Can the algorithm remove observations that become relevant again in the future? This is difficult to track, since $\mathcal{GP}_\mathcal{D}$ is updated with the removed sample in Algorithm 1, meaning there is no tracking of the original full model with no data removed. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, discussed in appendices. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer mGHg, Thank you for the detailed review. We are discussing below the weaknesses and questions you have raised. Also, please make sure to read the global response as we discuss some of your questions there. **The scalability of W-DBO (e.g., in high-dimensional spaces and/or with large datasets) is not discussed.** The Wasserstein distance is applied in the output space, which is one-dimensional regardless of the dimensionality of the input space, *i.e.*, the objective function domain. Our framework, which measures the observations' relevancy and removes the irrelevant observations, can be seen as a simple post-processing stage running at the end of each iteration. As a consequence, it can be used in conjunction with any BO algorithm. The W-DBO algorithm presented in this paper exploits vanilla GP-UCB, and as such, will struggle in high-dimensional input spaces, precisely because GP-UCB struggles in high-dimensional input spaces. However, our framework could be paired with any state-of-the-art high-dimensional BO algorithm (e.g., [1]) and show good performance when optimizing high-dimensional dynamic black-boxes. This is an interesting future work, thank you for suggesting it. Regarding very large dataset sizes, W-DBO suffers from the same limitations as any BO algorithm. This is because it also manipulates inverses of Gram matrices, which scale with the dataset size. That is precisely why we argue that removing irrelevant observations gives W-DBO a decisive advantage over other DBO solutions. Nevertheless, upon acceptance, we will discuss this limitation explicitly in Appendix H of the camera-ready version. **Is there ever a downside to the ‘greedy’ removal of points?** Our framework could easily be extended to the power set $2^\mathcal{D}$ of the dataset $\mathcal{D}$, in order to measure the relevancy of a subset of observations, instead of only quantifying the relevancy of a single observation.
The greedy approach causes W-DBO to overestimate the impact of the observation removals on the GP posterior. To rapidly get an intuition explaining this, let us describe a simple example. Consider a dataset that contains an irrelevant pair of observations $\mathcal{P} = \\{\mathbf x_i, \mathbf x_j\\}$, in the sense that $W_2\left(\mathcal{GP}\_\mathcal{D}, \mathcal{GP}\_{\mathcal{D} \setminus \mathcal{P}}\right) = 0$. Furthermore, let us assume that $W_2\left(\mathcal{GP}\_\mathcal{D}, \mathcal{GP}\_{\mathcal{D} \setminus \\{\mathbf x_i\\}}\right) = W_2\left(\mathcal{GP}\_{\mathcal{D} \setminus \\{\mathbf x_i\\}}, \mathcal{GP}\_{\mathcal{D} \setminus \mathcal{P}}\right) = \epsilon > 0$. If W-DBO were able to directly capture the irrelevancy of the pair $\mathcal{P}$, it would have removed it without consuming any of its removal budget. However, the greedy removal procedure as described in Algorithm 1 causes W-DBO to remove $\mathbf x_i$ first and $\mathbf x_j$ next. Consequently, it will consume $2\epsilon$ from its removal budget and, in that sense, it will overestimate the impact of removing $\mathbf x_i$ and $\mathbf x_j$. Although working with the power set $2^\mathcal{D}$ of the dataset $\mathcal{D}$ is clearly more advantageous to avoid such budget depletion caused by greedy removals, the combinatorial explosion of $|2^\mathcal{D}|$ makes this strategy prohibitive in practice, even for moderately-sized datasets. That is why it is not pursued in the paper. Upon acceptance, we will include this remark in Section 4 of the camera-ready version. Thank you for pointing it out.
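To make the budgeted greedy procedure concrete, here is a minimal sketch (the function names and the toy relevancy score are hypothetical, not the paper's Algorithm 1 or its Wasserstein criterion): repeatedly remove the currently least-relevant observation until the accumulated score would exceed the removal budget. This is exactly the structure under which the $2\epsilon$ overestimation above arises:

```python
# Illustrative sketch of a greedy, budgeted removal loop (hypothetical names).
# `relevancy` stands in for the normalized Wasserstein criterion; here it is
# an arbitrary toy function, NOT the paper's formula.
def greedy_removal(obs, relevancy, budget):
    """Remove least-relevant observations while the summed (greedily
    accumulated) relevancy of removals stays within `budget`."""
    obs = list(obs)
    spent = 0.0
    removed = []
    while obs:
        # Scores are recomputed after every removal, as in a greedy scheme.
        scores = [relevancy(o, obs) for o in obs]
        i = min(range(len(obs)), key=scores.__getitem__)
        if spent + scores[i] > budget:
            break
        spent += scores[i]
        removed.append(obs.pop(i))
    return obs, removed, spent

# Toy temporal relevancy: older observations (smaller t) are less relevant.
def toy_relevancy(o, current):
    _, t = o
    return 0.1 * t

obs = [("a", 1), ("b", 2), ("c", 3), ("d", 4)]
kept, removed, spent = greedy_removal(obs, toy_relevancy, budget=0.75)
print(kept, removed, spent)  # oldest three removed, newest kept
```

With a set-valued criterion one could instead score whole subsets at once, but as noted above the $|2^\mathcal{D}|$ enumeration is prohibitive.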
**Can the algorithm remove observations that become relevant again in the future?** For usual stationary, decreasing kernels (e.g., Squared-Exponential or Matérn) and because of Assumption 3.2, the covariance between a function value at a time $t$, *i.e.*, $f(\mathbf x, t)$, and a function value at a future time $t'$, *i.e.*, $f(\mathbf x', t')$ with $t < t'$, can only decrease as $t'$ gets further and further away from $t$. Consequently, the relevancy of observing $f(\mathbf x, t)$ will only decrease as time goes by. An interesting case that is not discussed in the paper is when the time kernel $k_T$ is a periodic kernel, e.g., $k_T(t, t') = \exp\left(-\frac{2 \sin^2\left(\pi |t - t'| / p\right)}{l^2}\right)$. The objective function is thus periodic along its time dimension. In that case, the concept of stale observation vanishes completely since the observation of $f(\mathbf x, t)$ will always be useful to predict $f(\mathbf x, t')$, even when $t' \gg t$. The Wasserstein distance between the two posterior GPs with this time-periodic kernel can be shown to diverge to $+\infty$, and so does our approximation, as it indeed should. **References** [1] A. Bardou, P. Thiran, and T. Begin. Relaxing the Additivity Constraints in Decentralized No-Regret High-Dimensional Bayesian Optimization. In The Twelfth International Conference on Learning Representations, 2024. --- Rebuttal 2: Title: Response to Rebuttal Comment: Thanks for the response. In particular, I see I have misunderstood how the experimental case studies were defined (see authors' global response), and so this stated weakness is largely alleviated. I have raised my score correspondingly, and look forward to the authors' stated clarifications in the camera-ready (or otherwise future) version.
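The periodic-kernel remark in the reply above is easy to check numerically (a standalone sketch; the kernel expression is the one quoted in the reply, with assumed period $p$ and lengthscale $l$): the covariance is fully restored after every period, so an observation never becomes stale.

```python
import math

# Periodic time kernel from the reply:
# k_T(t, t') = exp(-2 sin^2(pi |t - t'| / p) / l^2)
def k_T(t, tp, p=2.0, l=1.0):
    return math.exp(-2.0 * math.sin(math.pi * abs(t - tp) / p) ** 2 / l ** 2)

# An observation at t = 0 is maximally informative about t = n*p for any n,
# however distant: the covariance returns to 1 at every multiple of the period.
print(k_T(0.0, 2.0), k_T(0.0, 200.0))   # both ~1.0
# Mid-period, the covariance dips to exp(-2 / l^2) but never decays further.
print(k_T(0.0, 1.0))
```

This is why the concept of staleness vanishes for a periodic time kernel, and why a distance defined over the unbounded future can diverge there.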
Rebuttal 1: Rebuttal: Dear Reviewers, We thank you all for your detailed reviews. In this global response, we address some weaknesses and questions that were raised in more than one review. We also discuss the additional figures shown in the PDF uploaded with the global rebuttal. **This is a high-throughput problem, BO is not the ideal candidate for this setting.** Most evaluations of DBO algorithms are performed in a setting that assumes a constant time step between two consecutive iterations (e.g., see [1, 2]). The underlying assumption is that the objective function is expensive to evaluate, making the GP inference time complexity negligible. In Section 2, we argue that this is an unreasonable assumption to make for two reasons, regardless of how expensive an evaluation of the objective function is: (i) Unlike static BO, the GP inference time in DBO limits how much two consecutive observations can be correlated. This in turn impacts the quality of the GP inference. (ii) Considering optimization tasks with very large (e.g., infinite) time horizons, the GP inference time will eventually become more expensive than the evaluation of the objective function itself, if nothing is done to prevent the dataset size from diverging. Now, if the objective function is expensive to evaluate, then a sample-efficient strategy should still be used, and BO is indeed the gold standard in this area. In other words, our problem setting still calls for sample-efficient optimization techniques (e.g., BO) because the objective function is expensive to evaluate. Our problem setting is also more general than the one considered in [1, 2], since it relaxes the assumption of a fixed time step between two consecutive iterations. **The benchmarks are synthetic and not time-dependent.** Among 12 benchmarks, 10 are synthetic while two are real-world experiments (WLAN and Temperature). 
Although the dynamic nature of the two real-world experiments is quite explicit, we agree that we did not stress enough the fact that all synthetic examples are also time-dependent. Time is simply taken as the last dimension (*i.e.*, the $d$-th dimension) of each $d$-dimensional synthetic function, while the first $(d-1)$ dimensions form the spatial domain. This explains why GP-UCB is not the best-performing strategy on the synthetic benchmarks. This is stated in Appendix G.1, but not clearly enough in the main text. We will make this point clear in the camera-ready version by mentioning at the beginning of Section 5 that the temporal domain is always the $d$-th dimension of each $d$-dimensional objective function. To further avoid confusion, we will also rewrite Appendix G.2 to make the time variable $t$ explicitly appear in the definition of each synthetic function. **Additional experiments with an ARD kernel** In the PDF uploaded with the rebuttal, you can find additional experiments run with the anisotropic SE kernel (see Figure 1). The results are quite similar to the isotropic case, although the average response time and the average regret of each solution are larger. Both of these observations can be explained by the fact that each DBO solution has to infer more hyperparameters (more precisely, three spatial lengthscales instead of one) at each iteration. We plan to add these results to a dedicated Appendix in the camera-ready version. Thank you for pointing out that it would be beneficial to extend our work to the anisotropic case. **1D example motivating the normalization by an empty GP.** A 1D example could indeed help motivate the normalization by an empty GP. You can find it in the PDF uploaded with the rebuttal. We discuss it below.
Figure 2 illustrates that the very same non-normalized Wasserstein distance ($W_2(\mathcal{GP}\_\mathcal{D}, \mathcal{GP}\_{\tilde{\mathcal{D}}}) = 0.46$ in both cases) can lead to different results on the posteriors, depending on their covariance parameters (here, only the lengthscale is considered). The left part of Figure 2 depicts two widely different posteriors, while the right part of Figure 2 depicts two similar posteriors. A good metric of relevancy should capture this difference. Figure 3 depicts four different scenarios illustrating that the normalized Wasserstein distance is able to capture this difference. In fact, when the metric is low (*i.e.*, close to $0$), it always implies that the two posteriors are similar, regardless of the covariance hyperparameters (*i.e.*, the lengthscale in this example). Similarly, when the metric is large (*i.e.*, close to $1$), the two posteriors are very different, regardless of the covariance hyperparameters. Upon acceptance, we plan to add this example to Appendix E of the camera-ready version. Thank you for pointing out that a 1-dimensional example would be beneficial. **References** [1] Ilija Bogunovic, Jonathan Scarlett, and Volkan Cevher. Time-varying gaussian process bandit optimization. In Artificial Intelligence and Statistics, pages 314–323. PMLR, 2016. [2] Paul Brunzema, Alexander von Rohr, Friedrich Solowjow, and Sebastian Trimpe. Event-triggered time-varying bayesian optimization. arXiv preprint arXiv:2208.10790, 2022. Pdf: /pdf/d0aeedcc1fe206ab28f8f9fcafa9a651c6fa0bb0.pdf
NeurIPS_2024_submissions_huggingface
2024
Rethinking 3D Convolution in $\ell_p$-norm Space
Accept (spotlight)
Summary: The paper proposes using the \( \ell_p \)-norm, specifically the \( \ell_1 \)-norm, to replace the classic squared \( \ell_2 \)-norm convolution in 3D tasks. The \( \ell_1 \)-norm kernel function relies on addition, reducing computational cost. An initial gradient implementation revealed insufficient gradient values. To address this, the authors gradually transition the significant gradient from \( \ell_2 \) to \( \ell_1 \) as training progresses and employ momentum updates and learning rate scheduling. By replacing traditional 3D networks with their \( \ell_1 \)-norm networks, the paper demonstrates competitive performance on ModelNet classification and S3DIS semantic segmentation tasks at significantly lower costs. Strengths: i) The proposed 3D \( \ell_1 \)-norm Net is novel and inspiring for 3D tasks, and somewhat broadly applicable. ii) The proposed tricks (the optimizer and the customized operator) let the \( \ell_1 \)-norm network achieve competitive performance at a lower cost. iii) The proofs in the appendix are detailed and careful. Weaknesses: Minor typo errors: i) In L-651, it should be "The *\ell_p*-norm" rather than "The \ell_p norm". ii) A few words require consistent capitalization. iii) Formulas in separate rows may be missing equation numbers; it is recommended to unify this style. iv) The reference format is not uniform enough. Technical Quality: 3 Clarity: 4 Questions for Authors: i) The current experiments seem to be on small datasets. Does the current method perform well on larger datasets? ii) During training, the key idea of this paper is to use the $\ell_2$ gradient. As far as I am concerned, another potential way is to use $\ell_\infty$ gradients. The dual space of $\ell_\infty$ is the $\ell_1$ norm; thus, the gradients of $\ell_\infty$ will be sparse. Why did the authors not try this idea to speed up the training time?
iii) $\ell_p$-norm space measurements can also be seen in other fields, such as image analysis [1] and inpainting [2]. The authors claim that these methods can't be **directly** applied to 3D tasks; why? iv) The proposed lower bound and upper bound are Eq.(13) and Eq.(14); why did the authors design the bounds this way? [1] Generalized 2-D principal component analysis by Lp-norm for image analysis. [2] Rank-One Matrix Approximation With ℓp-Norm for Image Inpainting. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: More generalization experiments on other datasets can be considered. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We are very happy to receive such an enthusiastic review! # For Weaknesses: ***W1: Minor typo errors*** Thanks for your advice! We have carefully rechecked all the writing issues and will revise them in the next version, including consistent capitalization, formulas, reference format, etc. # For Questions: ***Q1: More experiments on larger datasets*** Please refer to Sec.E.5.1, where we conduct the task *Garments pose estimation* using the baseline *GarmentNets*. In this task, the input is partial point clouds (incomplete and occluded point clouds), and the output is complete point clouds. This is a large-scale dataset, named GarmentNets Simulation, which contains six garment categories with a total data volume of *1.72TB*, including Dress, Jump, Skirt, Top, Pants, and Shirt. ***Q2: Why not use $\ell_\infty$ gradients*** The $\ell_{\infty}$ norm isn't a good choice. Notice that for any vector $y=(y(1),y(2),...,y(n))$, $\Vert y \Vert_{2}=\sqrt{\sum_{i=1}^{n}y(i)^{2}}$ and $\Vert y \Vert_{\infty}= \max_{1\le i \le n} |y(i)|$. So the gradient of $\Vert y \Vert_{2}$ is $(y(1),y(2),y(3),...,y(n))/\Vert y \Vert_{2}$ and the gradient of $\Vert y \Vert_{\infty}$ is $(0,0,0,...,0,\text{sign}(y(j)),0,...,0)$, where $j$ is the index of the entry of $y$ with maximal absolute value. Notice that there is only one non-zero entry in the gradient of the $\Vert y \Vert_{\infty}$ norm, which means only one dimension will be updated in each iteration. So $\ell_{\infty}$ is very inefficient. In practical experiments, we find that it easily leads to a highly unstable training process, which is corroborated by the results of Tab.2 (the quantitative results show that the $\ell_{\infty}$-norm Net has suboptimal classification performance).
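The sparsity argument in Q2 can be made concrete with a few lines of NumPy (an illustrative sketch, not the paper's implementation): the $\ell_2$ gradient spreads signal over every coordinate, while the $\ell_\infty$ gradient touches exactly one.

```python
import numpy as np

def grad_l2(y):
    # d||y||_2 / dy = y / ||y||_2 : every coordinate receives a gradient.
    return y / np.linalg.norm(y)

def grad_linf(y):
    # d||y||_inf / dy = sign(y_j) e_j, where j attains max_i |y_i| :
    # only a single coordinate is updated per step.
    g = np.zeros_like(y, dtype=float)
    j = np.argmax(np.abs(y))
    g[j] = np.sign(y[j])
    return g

y = np.array([0.5, -3.0, 1.2, 2.9])
print(grad_l2(y))    # dense: 4 non-zero entries
print(grad_linf(y))  # sparse: [0, -1, 0, 0]
```

One update direction per step is precisely why training on the raw $\ell_\infty$ gradient is slow and unstable.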
***Q3: Why 2D methods can't be directly applied to 3D tasks*** 3D point clouds and 2D images are inherently different forms of data organization, owing to the sparsity and lack of ordering of 3D point clouds. Concretely, unlike an array of pixels in an image, a point cloud is a set of points in no particular order. In other words, a network that consumes a set of $N$ 3D points needs to be invariant to the $N!$ orderings of the input set, which differs greatly from the location-sensitive RGB image domain. This leads to undesirable results when $\ell_p$-norm based methods from 2D tasks (such as image analysis and inpainting) are used directly on 3D tasks. ***Q4: About the lower and upper bounds in Eq.(13) and Eq.(14)*** As mentioned in L239-L244, we hope to achieve larger update magnitudes and faster convergence rates during the initial stages of training. To this end, a promising scheme for learning rate design is maintaining a higher rate in the early training phase and returning to a lower rate in the later phase. We therefore set the lower bound and upper bound as Eq.(13) and Eq.(14), respectively. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer VAmn Comment: Thanks for the clarifications. As the authors have clearly addressed my concerns, I'd like to increase my score, and I'm happy to recommend it for acceptance.
Summary: The paper proposes using the $\ell_{p}$ norm in 3D convolution as a substitute for the inner product. Out of different choices, the authors pick the $\ell_{1}$-norm because it's faster and uses less energy than the inner product. This is because the $\ell_{1}$-norm relies on addition, which is simpler computationally. To optimize the $\ell_{1}$-norm-based convolution models, they propose a tailored optimization approach leveraging a Mixed Gradient Strategy and a Dynamic Learning Rate Controller. Finally, the method is evaluated on the ModelNet10 and ModelNet40 datasets, showcasing its usage in learning point cloud features for object classification. Strengths: 1 ) The writing and organization of this paper are great. 2 ) The experiments are sufficient in both the main paper and the supplement, ranging from global tasks to semi-dense and dense prediction. 3 ) The idea of replacing traditional convolution with an $\ell_p$-based convolution is quite foundational and novel for 3D tasks. 4 ) The proposed methods can be easily extended to other 3D backbones, which can be more environmentally friendly. 5 ) The proposed methods demonstrate good performance on most tasks (such as ModelNet classification and S3DIS semantic segmentation) at a significantly lower cost. Weaknesses: 1 ) The baseline models should be introduced in more detail, including the model structure and the integration (i.e., replacement) method of the proposed approach, as in Table 8 of the ablation experiments. 2 ) The layout of Table 5 needs to be further optimized, such as splitting it into two independent tables for parallel layout. Technical Quality: 4 Clarity: 3 Questions for Authors: 1 ) Is there any basis for the authors to assume that the mean of data **X** is 0 when discussing robustness? 2 ) The authors discuss regret without detailed motivation. How does it work? I wonder if it is necessary to introduce the regret.
3 ) There are many symbols in this article; it is recommended to add more explanations in a separate section. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: See weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
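To make the summary's core idea concrete, here is a hedged numpy sketch contrasting the usual inner-product response with an $\ell_p$-norm response at a single kernel position. The negation turns the distance into a similarity; the shapes and the exact sign convention are our illustrative assumptions, not the paper's definition.

```python
import numpy as np

def inner_product_response(patch, kernel):
    # standard convolution response at one position
    return float(np.sum(patch * kernel))

def lp_response(patch, kernel, p=1):
    # negated ell_p distance as a similarity; with p = 1 this needs
    # only subtractions, absolute values, and additions
    return -float(np.sum(np.abs(patch - kernel) ** p) ** (1.0 / p))

rng = np.random.default_rng(1)
patch = rng.normal(size=(27,))    # e.g. a flattened 3x3x3 neighborhood
kernel = rng.normal(size=(27,))

# a kernel laid over an identical patch attains the maximal response, 0
assert lp_response(kernel, kernel) == 0.0
assert lp_response(patch, kernel) < 0.0
```

The $p=1$ case is the addition-only variant the review highlights as cheaper in compute and energy than multiply-accumulate.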
Rebuttal 1: Rebuttal: Thank you for your constructive comments. Answers to some of your questions are as follows. # For Weaknesses ***W1: About baselines*** For the classification and segmentation tasks, we use PointNet and PointNet++ as the baselines. PointNet processes point clouds by embedding each point independently using a shared MLP and then aggregating global features through max pooling. PointNet++ extends this approach by incorporating hierarchical feature learning, grouping points into local neighborhoods and applying PointNet-like operations at multiple scales. For the completion task, we use OccNet, DMC, and IF-Nets as the baselines. OccNet uses the network to define a continuous occupancy function, predicting the probability of a point being inside the object, instead of using discrete voxel grids. DMC extracts a triangulated mesh to provide the final surface reconstruction. IF-Nets capture fine details through a hierarchical architecture, learning features at multiple scales for accurate reconstruction of complex geometries. In these experiments, the traditional convolutions are all replaced, rather than partially replaced as in Tab. 8. ***W2: About layout*** Thanks for your advice! In the revised version, we will split it into two parallel tables, each with 8 different shapes. # For Questions ***Q1: About data $X$*** The assumption that the mean of the data $X$ is zero was made primarily to simplify the mathematical derivations in our analysis. Given that $G$ represents random noise, the difference in variance between $G$ and $X + G$ becomes negligible when $X$ is considered constant. This simplification does not impact the demonstration of the robustness properties of the $\ell_2$-norm, particularly in the context of convolution operations. Our focus was on evaluating the robustness of the $\ell_2$-norm under noise conditions, and the zero-mean assumption for $X$ allowed us to streamline the proofs without loss of generality. 
In practical scenarios, even if $X$ has a non-zero mean, the robustness properties of the $\ell_2$-norm would still hold due to its inherent nature of minimizing the effect of noise. ***Q2: About the regret*** Our standpoint is that regret is a valuable addition to our work, especially in the context of analyzing and proving the convergence properties of the optimization process. Regret, defined as the difference between the actual performance of an algorithm and the best possible performance in hindsight, provides a comprehensive measure of the efficiency and effectiveness of the learning or optimization algorithm. Introducing regret into our analysis has several advantages, for example: 1) Convergence analysis. By analyzing the regret, we can demonstrate that the optimization process converges over time, thus providing a rigorous justification for the algorithm's performance guarantees. 2) Flexibility in stochastic environments. In scenarios involving uncertainty or noise, regret analysis helps in understanding the robustness and adaptability of the algorithm. It provides insights into how the algorithm performs under varying conditions and helps in identifying potential areas for improvement. In the appendix, we demonstrate how the regret bounds are derived and how they relate to the convergence of our proposed algorithm. By doing so, we offer a more robust theoretical foundation and validate the efficacy of our approach. ***Q3: About notations*** For a clearer expression, we have reiterated the notations in L191-L192. In the revised version, we will add an additional section called 'Notations' for readers' reference before Sec. 3 *Methodology*. Thanks for your advice! --- Rebuttal Comment 1.1: Title: Official Comment Comment: I appreciate the authors addressing my concerns. The analyses presented in Q1 (the mean of the data $X$ can be 0) and Q2 are valuable, as they allow readers to understand the theory and motivation more clearly. 
Now I do not have further questions for the authors. I agree that the current work is significant enough to justify acceptance.
Summary: This paper addresses the challenge of enhancing the representational capacity of traditional convolution methods. To tackle this issue, the authors introduce a novel convolution approach based on the $\ell_p$-norm and offer customized optimization strategies to expedite the training process. Extensive theoretical and empirical results verify that the proposed algorithms demonstrate competitive performance compared to other baselines. Strengths: The concept of using an $\ell_p$ norm-based kernel to develop a convolution operator is intriguing. The proposed method has the potential to notably enhance the flexibility of the traditional convolution operator. Weaknesses: 1. The writing of this paper requires improvement, particularly in providing additional details regarding the proposed method. The current version of this manuscript may lead to reader confusion. 2. The role of the theoretical results presented in this paper is unclear. There appears to be a gap between the theoretical guarantees and the empirical evidence provided. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. The authors ought to include a formal definition of the "$\ell_p$ norm space" concept that appears in the title of this paper. 2. The formulation of the proposed convolution operator in Eq. 2 on the 35th line of page 2 lacks an intuitive explanation, which may confuse the reader. 3. What precisely is the definition of $\mathcal{P}(S)$ in Eq. 3? Additionally, the primary method proposed in this paper is only briefly introduced in the introduction without further detailed description, making it less reader-friendly. 4. It appears that the proposed training strategy "Dynamic Learning Rate Controller" is unrelated to the $\ell_p$ convolution method, if I haven't overlooked anything. 5. Several errors are evident in Theorem 2. 
Initially, the mismatch between the subscripts $k$ and $t$ in the statement "we could show that for any convex functions $\lbrace h_k \rbrace_{t=1}^{T^*}$ " is notable. Furthermore, it would be beneficial if the authors could clarify the significance of $h$. Moreover, although this theorem addresses convex functions, it is crucial to acknowledge that numerous deep learning tasks entail non-convex functions. In conclusion, the reasoning behind the authors' choice to include a regret analysis in Theorem 2 within this paper lacks clarity. 6. Building on the previous point, it is advisable for the authors to undertake a conventional convergence analysis for the proposed optimization algorithms. This recommendation stems from the availability of numerous well-established mathematical tools designed for assessing convergence in offline optimization, particularly under non-convex or unbounded gradient conditions. These tools are often better suited for the intricacies of deep learning environments. 7. In the experimental section, it is recommended that the authors provide a comprehensive outline of the setup for the online learning tasks they consider, particularly detailing how data arrives sequentially. Additionally, the experimental section overlooks sequential decision tasks or time series tasks. Therefore, I suggest that the authors include these related tasks to further strengthen the justification for providing a regret guarantee in Theorem 2. Besides, as this paper is affiliated with the learning theory track, the empirical results presented seem somewhat disconnected from the theoretical findings. Specifically, while Theorem 2 offers a regret guarantee for the proposed method, the metric used in the experimental section does not measure regret. 
[1] Suggala, Arun Sai, and Praneeth Netrapalli. "Online non-convex learning: Following the perturbed leader is optimal." Algorithmic Learning Theory. PMLR, 2020. [2] Suggala, Arun, and Praneeth Netrapalli. "Follow the perturbed leader: Optimism and fast parallel algorithms for smooth minimax games." Advances in Neural Information Processing Systems 33 (2020): 22316-22326. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the kind and constructive comments! Please see below for responses. # For Weaknesses ***W1: About additional details*** We will add more details from the appendix in the revised version. ***W2: About the gap between theoretical guarantees and the empirical evidence*** Due to the space limitation, we provide a clearer correspondence between the *main* theoretical contributions and experiments below. 1. The Universal Approximation Theorem ensures that different $\ell_p$ Nets can converge and extract features effectively. Corresponding empirical evidence is given in Tab. 1 ($\ell_p$ Nets with different $p$ values all have a certain classification ability). 2. For robustness, the corresponding experiments in Section E.7 illustrate the performance under different noises. 3. By introducing regret, the network integrated with the proposed optimization strategy is proven to retain its convergence properties. The corresponding empirical results can be found in global tasks, semi-dense predictions, dense predictions, and ablation results (Tab. 9). # For Questions ***Q1: About the formal definition*** The concept in the title is not "$\ell_p$ norm space" in isolation but rather "3D Convolution in $\ell_p$-norm Space". We have provided its formal definition in lines 34-36, including the formulaic expression (Eq. 2) and conceptual diagram (Fig. 1). ***Q2: About Eq. 2*** The variables are first defined in Eq. 1 (refer to L20-L25) and are used consistently throughout the paper. This approach minimizes redundancy and maintains narrative flow. ***Q3: About the primary method*** 1) As described in L106-L113, $S$ is the input data, while $P(\cdot)$ is the proposed $\ell_p$-PointNet++; therefore, $P(S)$ is the extracted features (the output). 2) The primary methods of this paper are the $\ell_p$-norm Nets and the optimization strategy, which are introduced in Sec. 3.3 and Sec. 4, respectively. 
***Q4: About training strategy*** This strategy is independent of the convolution method's *design*; instead, it is specifically tailored for *optimizing the training* of the convolution method, just as described in L239-L244. ***Q5: About Theorem 2*** 1) The correct definition should be $\{h_t\}_{t=1}^{T^*}$, as should the following equation. 2) $h_t$ represents the objective function that needs to be optimized at time $t$. 3) As described in L174-L175, regret helps to analyze and prove the convergence properties of the optimization process. In the regret analysis, $h_t$ quantifies the cost or penalty incurred when a state $x_t$ is taken at step $t$. The objective is to minimize the total loss over time, leading to better long-term outcomes or global minimizers. For non-convex optimization problems, although we require each $h_t$ to be convex, the total loss function $\sum_t h_t$ may have multiple local minima and saddle points, and the regret-based online learning optimization method can be well applied to such non-convex optimization problems. Besides, since numerous commonly used loss functions are either convex or can be approximated by convex functions, our theoretical results offer valuable insights and practical guidance. 
***Q6: About conventional convergence analysis*** We provide the proof of the convergence theorem using conventional concentration inequalities under convex function conditions (refer to Appendix C for notation definitions): Let $\mathcal{F} \subseteq \mathbb{R}^n$ be a convex feasible set, and $f \in \mathcal{F}$; then we have: $f(\mathbf{x}_t)\leq f(\mathbf{x}^*)+\nabla f(\mathbf{x}_t)^T(\mathbf{x}_t-\mathbf{x}^*)$ Next, by calculating and combining Eq. 25 in Appendix C, for some large $t = T$, we have: $T\cdot(f(\mathbf{x}_t)-f(\mathbf{x}^*))\leq\sum_{t=1}^T(f(\mathbf{x}_t)-f(\mathbf{x}^*))$ $\leq\sum_{t=1}^T\langle g_t,x_t-x^*\rangle$ $=\sqrt{T}\cdot\left(\frac{B_{\infty}^{2}\cdot n\cdot p_{1}^{-1}}{2(1-q_{1})}\cdot(1+2q_{0}q)+\frac{2\cdot\alpha_{2}(1)B_{2}^{2}}{1-q_{1}}\right)-\frac{\alpha_{2}(1)B_{2}^{2}}{1-q_{1}}$ Thus, we obtain: $\Vert f(\mathbf{x}_t)-f(\mathbf{x}^*)\Vert \leq\frac{1}{\sqrt{T}}\cdot\left(\frac{B_\infty^2\cdot n\cdot p_1^{-1}}{2(1-q_1)}\cdot(1+2q_0q)+\frac{2\cdot\alpha_2(1)B_2^2}{1-q_1}\right)=O\left(\frac{1}{\sqrt{T}}\right)$ Besides, for any function that can be represented as a sum of convex functions $\sum_N f_N(x)$, $\mathcal{F}_N \subseteq \mathbb{R}^n$, our proof still holds. This is also a setting where the regret method can be applied. However, for some non-convex $f$ that cannot be so represented, this proof may not hold. Nevertheless, under the online learning regret framework in our paper, it can still assist us in understanding the convergence. ***Q7: Suggestions on measuring regret*** Thank you for your advice. In the revised version, we will provide a comprehensive outline including the details of the data stream, the incremental processing of each data point at time $t$, and the continuous model updates. More importantly, to achieve greater consistency between theory and experiments, we will include additional evaluations that specifically measure regret in the revised version. 
Concretely, we will implement a dedicated evaluation framework to compute and analyze the cumulative regret incurred. By doing so, we aim to empirically validate the regret bound established in Theorem 2 and provide a more explicit connection between theoretical guarantees and observed performance. Furthermore, we will explore more regret metrics, such as instantaneous regret and average regret. ***Q8: More comparisons*** The experimental setting in this paper is based on offline learning algorithms, which have less strict convergence requirements than online learning methods. Since online and offline algorithms operate under different task settings, a direct comparison would be unfair. In future work, we plan to standardize all baseline methods to an online setting and conduct comparative experiments. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for addressing most of my concerns. However, I am still confused about the choice to conduct the theoretical analysis under a convexity assumption, especially considering the well-established body of analysis that relies solely on smoothness for non-convex optimization. Therefore, I will maintain my current score. --- Reply to Comment 1.1.1: Title: Official Comment by Authors of Paper 1528. Comment: Thank you for your reply. Regarding the issue you are concerned about, we will further clarify it below: ---- ① Firstly, non-convex functions present significant challenges in theoretical analysis due to the limited tools available for studying their behavior. However, since most commonly used loss functions are either convex or can be approximated by convex functions, our theoretical results offer valuable insights and practical guidance. 
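As an illustration of the regret metrics mentioned above (our own toy sketch, not the authors' evaluation framework), the following code runs online gradient descent on a stream of quadratic losses $h_t(x)=(x-z_t)^2$ and tracks instantaneous, cumulative, and average regret against the best fixed action in hindsight. The loss, step size, and stream are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
z = rng.normal(loc=1.0, scale=0.5, size=T)   # targets of h_t(x) = (x - z_t)^2

x, xs = 0.0, []
for t in range(T):
    xs.append(x)
    grad = 2.0 * (x - z[t])
    x -= grad / (2.0 * (t + 1))              # a 1/t-style step size
xs = np.array(xs)

x_star = z.mean()                            # best fixed action in hindsight
instantaneous = (xs - z) ** 2 - (x_star - z) ** 2
cumulative = np.cumsum(instantaneous)
average = cumulative / np.arange(1, T + 1)

assert cumulative[-1] > 0.0            # the player pays a positive total regret
assert average[-1] < average[T // 10]  # but the average regret shrinks over time
```

Sublinear cumulative regret (so vanishing average regret) is exactly the kind of behavior a bound like Theorem 2's is meant to certify.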
---- ② Here, we also provide the proof of the convergence theorem using conventional concentration inequalities under a ***non-convex function*** and Hölder continuous gradient assumption (refer to Appendix C for notation definitions): **Assumption 1**. Let $\alpha \in (0, 1]$ and $L > 0$; we assume that the gradient of $f(x)$ is $\alpha$-Hölder continuous in the sense that: $\Vert \nabla f(x_t)-\nabla f(x^*) \Vert \leq L \Vert x_t-x^* \Vert_2^\alpha,\quad\forall x_t,x^*\in\mathbb{R}^n$ As demonstrated in [1], for any function $f(x): \mathbb{R}^n \mapsto \mathbb{R}$ with an $\alpha$-Hölder continuous gradient, we have the following lemma, which plays an important role in our analysis. It provides a quantitative measure of the accuracy of approximating $f(x)$ with its first-order approximation. **Lemma 1.** Let $f : \mathbb{R}^n \mapsto \mathbb{R}$ be a differentiable function. Let $\alpha \in (0, 1]$ and $L > 0$. If for all $x_t, x^* \in \mathbb{R}^n$ $\Vert \nabla f(x_t)-\nabla f(x^*) \Vert_2\leq L \Vert x_t-x^* \Vert_2^\alpha,$ then we have $f(x^*) - f(x_t) \leq \langle x^* - x_t, \nabla f(x_t) \rangle + \frac{L}{1 + \alpha} \Vert x^* - x_t \Vert_2^{1 + \alpha}.$ Let $\mathcal{F} \subseteq \mathbb{R}^n$ be a feasible set satisfying Assumption 1, and $f \in \mathcal{F}$; we have: $$f(x^*) - f(x_t) \leq \langle x^* - x_t, g_t \rangle + \frac{L}{1 + \alpha} \Vert x^* - x_t \Vert_2^{1 + \alpha}$$ Next, by calculating and combining Eq. 25 in Appendix C, since $m_t$ gradually approaches the optimum $m^*$, it is reasonable to believe that $x_t$ also approaches the optimum $x^*$. 
So for some large $t = T$, we have: $T\cdot(f(\mathbf{x}_t)-f(\mathbf{x}^*))\leq\sum_{t=1}^T\langle g_t,x_t-x^*\rangle+\sum_{t=1}^T\frac{L}{1+\alpha}\Vert x^*-x_t\Vert_2^{1+\alpha}$ $=\sqrt{T}\cdot\left(\frac{B_{\infty}^{2}\cdot n\cdot p_{1}^{-1}}{2(1-q_{1})}\cdot(1+2q_{0}q)+\frac{2\cdot\alpha_{2}(1)B_{2}^{2}}{1-q_{1}}\right)-\frac{\alpha_{2}(1)B_{2}^{2}}{1-q_{1}}+\frac{L}{1+\alpha}\sum_{t=1}^{T}\left(\sum_{i=1}^{n}\alpha(t)^{1/2}(x^{*}(i)-x_{t}(i))\right)^{2+2\alpha}$ Combining the argument above and noticing that $\alpha(t)^{-1} \leq p_1^{-1} \cdot \sqrt{T}$, we have: $\Vert f(\mathbf{x}_{t})-f(\mathbf{x}^{*})\Vert \leq\frac{1}{\sqrt{T}}\cdot\left(\frac{B_{\infty}^{2}\cdot n\cdot p_{1}^{-1}}{2(1-q_{1})}\cdot(1+2q_{0}q)+\frac{2\cdot\alpha_{2}(1)B_{2}^{2}}{1-q_{1}}+\frac{B_{\infty}^{2}\cdot n^{2+2\alpha}\cdot L}{(1+\alpha)\cdot p_{1}}\right)=O\left(\frac{1}{\sqrt{T}}\right)$ Therefore, for any function with an $\alpha$-Hölder continuous gradient, our proof of convergence still holds. This is also a setting where the regret method can be applied. However, for some singular non-convex $f$, this proof may not hold. Nevertheless, under the online learning regret framework in our paper, it can still assist us in understanding the convergence. # Reference [1] Unregularized online learning algorithms with general loss functions. *Applied and Computational Harmonic Analysis 2017*. ---- ③ In practice, the experiments demonstrate that the proposed network integrated with the proposed optimization strategy retains its convergence properties. This is supported by Sec. 5 and Sec. E. ---- If our rebuttals do not address your concerns to some extent, we would be happy to have further discussions. Your feedback is an important reference for us to improve the quality of our paper, and we attach great importance to it. Thank you again for your time and effort. We look forward to your reply.
Summary: This paper introduces a new convolution method based on the \( L_p \)-norm. The authors provide a theoretical foundation by proving the universal approximation theorem for \( L_p \)-norm networks and analyzing the robustness and feasibility of \( L_p \)-norms in 3D tasks. Several key findings are highlighted in this work: 1. \( L_\infty \)-norm convolution is prone to feature loss. 2. \( L_2 \)-norm convolution essentially performs a linear transformation in traditional CNNs. 3. \( L_1 \)-norm convolution is an economical and effective method for feature extraction. To further enhance the capabilities of \( L_1 \)-norm based networks, the paper proposes a series of customized training and optimization strategies. In the experimental section, the authors apply their methods to classical 3D networks such as PointNet and PointNet++, achieving competitive performance at a lower cost. In summary, the \( L_1 \)-norm network can achieve similar performance to traditional convolutional networks but with reduced computational cost and lower instruction latency. Strengths: -- This paper is clear, easy to follow, and well organized. -- The proof of the universal approximation theorem for \( L_p \)-norm Nets and the analysis of the robustness and feasibility of \( L_p \)-norms are interesting and beautiful. -- The comparative results between different $\ell_{p}$-norm-based convolutions presented in this paper are valuable, offering a meaningful technical reference for further method design and subsequent research in 3D vision. Weaknesses: There are some weaknesses/concerns that need to be discussed: -- Noise distribution is an interesting issue, but this paper only analyzes the impact of Gaussian noise when considering random noise. How do the authors handle other noise distributions? -- In this paper, how the idea of the \( L_p \)-norm convolution came about is not discussed in detail, although its mechanism is well illustrated. 
Technical Quality: 3 Clarity: 4 Questions for Authors: -- This paper claims that \( L_p \)-norms are more robust than the inner product; however, why is only the \( L_2 \)-norm case proved? How about the other cases when $p$ differs? -- In Table 2, the results from the \( L_2 \)-norm Net are also competitive; why not choose the \( L_2 \)-norm Net for further study instead of the \( L_1 \)-norm? -- In Sec 3.1, regarding the proposed universal approximation, I wonder why the second part is presented through integration? This is confusing compared with other works such as "Multilayer feedforward networks are universal approximators". Overall, I have some questions about the theory and motivation above, which I hope will be answered. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable comments that help us improve our work. The following is a careful response and explanation regarding the weaknesses and questions. # For Weaknesses ***W1: About noise distribution.*** In this work, we focused on Gaussian noise because it is the most prevalent type of noise in real-world scenarios. Specifically, Gaussian noise offers advantages such as its mathematical tractability and the central limit theorem's implications, which make it a common assumption in many practical applications. Here, we provide additional proof under any symmetric, independent and identically distributed (i.i.d.) noise with standard component variance $\operatorname{Var}[G(i)] = 1$. Since our data is normalized to a standard interval, selecting the variance of the noise to be 1 is appropriate. Without loss of generality, let $\mathbb{E}[G(i)] = 0$. According to the *delta method*, we have $\operatorname{Var}[G(i)^2] = C^{\prime} < \infty$. Now, assuming $\operatorname{Var}[\Vert G+P_{t}-K\Vert_2] \approx \operatorname{Var}[\Vert G\Vert_2]$, and that $G$ follows some i.i.d. noise distribution $G \sim p_{noise}$, we have: $\operatorname{Var}\big[\Vert G\Vert_2\big] = \mathbb{E}_{G \sim p_{noise}}\big[\Vert G\Vert_2^2\big] - \big(\mathbb{E}_{G \sim p_{noise}}[\Vert G\Vert_2]\big)^2$ According to Equation 19 in the paper, we also have: $\mathbb{E}\left[\frac{\Vert G\Vert_2}{\sqrt{m}}\right]\geq\frac{1}{2}\cdot\left(2-\mathbb{E}\left[\left(\frac{\Vert G\Vert_2^2}{m}-1\right)^2\right]\right).$ Considering $\mathbb{E}\left[\left(\frac{\Vert G\Vert_2^2}{m}-1\right)^2\right]=\left(\mathbb{E}\left[\frac{\Vert G\Vert_2^2}{m}\right]-1\right)^2+\operatorname{Var}\left(\frac{\Vert G\Vert_2^2}{m}\right)$ $=\left(\frac{1}{m}\sum_i^m\big((\mathbb{E}[G(i)])^2+\operatorname{Var}[G(i)]\big)-1\right)^2+\frac{1}{m^2}\left(\sum_i^m\operatorname{Var}[G(i)^2]\right)$ $=\frac{C^{\prime}}{m}$ Combining these inequalities and equations, we get $\mathbb{E}_{G\sim p_{noise}}\left[\Vert G\Vert_2\right]\geq\frac{\sqrt{m}}{2}\cdot\left(2-\frac{C'}{m}\right).$ Therefore, by the above inequality, we have: $\operatorname{Var}[\Vert G\Vert_2]<m-\frac{m}{4}\cdot\left(2-\frac{C'}{m}\right)^2=C'-\frac{{C'}^2}{4m}=O(1)$ Thus we have shown that $\operatorname{Var}[\Vert G+P_{t}-K\Vert_{2}] = O(1)$ holds in general for any finite noise occurring in nature. ***W2: About the idea of the proposed method.*** Here, let us reiterate our motivation: Traditional 3D convolution employs the inner product as the similarity measurement between the filter and the input feature. While the inner product is a common choice, it is not the only possible similarity measurement function. For instance, similarity measurement using $\ell_p$-norms has proven to be an effective strategy, widely utilized in 2D tasks (*as discussed in Related Work*). However, one of the key features of 3D point clouds is their sparseness. $\ell_p$-norm models are reported to have natural advantages in handling sparse data, as they can extract relevant information from sparse data within a larger receptive field. 
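The claim that $\operatorname{Var}[\Vert G\Vert_2]$ stays $O(1)$ for symmetric i.i.d. unit-variance noise can be spot-checked numerically. The sketch below is our own Monte Carlo check (not from the paper), comparing Gaussian, uniform, and Laplace noise at a fixed dimension $m$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, trials = 256, 20000

def norm_variance(sampler):
    G = sampler((trials, m))          # many independent noise vectors
    return float(np.linalg.norm(G, axis=1).var())

# symmetric i.i.d. noises, each scaled to unit component variance
gauss = norm_variance(lambda s: rng.normal(size=s))
unif = norm_variance(lambda s: rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=s))
lap = norm_variance(lambda s: rng.laplace(scale=1.0 / np.sqrt(2.0), size=s))

# Var[||G||_2] stays O(1): it does not grow with the dimension m
assert max(gauss, unif, lap) < 5.0
```

Rerunning with larger `m` leaves all three variances essentially unchanged, which is exactly the dimension-free behavior the bound asserts.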
This automatic complexity reduction facilitates efficient algorithm training. **Despite these advantages, understanding the characterization of 3D convolution in $\ell_p$-norm space and proposing effective algorithms that incorporate the properties of 3D tasks have not yet been fully explored.** # For Questions ***Q1: About other cases when $p$ differs.*** Investigating the robustness of other $\ell_p$-norms is indeed important; hence, we conducted additional numerical simulations to explore the behavior of different $\ell_p$-norms, as shown in Tab. 1 of the main paper. Our findings suggest that as $p$ increases, the robustness of the norm in the presence of random noise improves. Specifically, we observed a consistent decrease in variance under random noise conditions with increasing $p$. This indicates that higher $p$ values tend to mitigate the impact of noise more effectively than lower ones, including the $\ell_2$-norm. The improvement in robustness can be attributed to the nature of the $\ell_p$-norm, which places more emphasis on larger deviations as $p$ increases. This property helps in diminishing the influence of random noise, which typically manifests as smaller perturbations. Therefore, norms with larger $p$ values are inherently more resistant to such disturbances. ***Q2: Why not choose the $\ell_2$-norm Net for further study.*** As discussed in the appendix, $\ell_2$-norm Nets can actually be considered a translation of traditional convolutions. ***Q3: About universal approximation.*** The integration is employed to assess the approximation quality of the neural network in an averaged sense across the entire domain $J$. Specifically, by considering the integral of the absolute error between the neural network output and the true function value, we aim to measure the overall approximation capability of the network. In other words, if the integral of the error is small, it means that the error is small on the vast majority of possible data points. 
This approach aligns with the concept of the mean approximation error, where a lower integral value indicates that the neural network provides a good approximation to the target function $g$ on average. It offers a holistic perspective on the network's performance, ensuring that even if the error at some individual points might be non-negligible, the overall approximation remains satisfactory when considered over the entire set $J$. In contrast to pointwise approximation guarantees, the integral approach provides a more flexible and often more practical metric for assessing the quality of function approximation, especially in high-dimensional spaces. This method ensures that the neural network can adequately capture the function's behavior in a broader context, making it a valuable tool for evaluating the network's performance in real-world scenarios.
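The contrast drawn in this reply can be written compactly (our notation, with $N$ the network and $g$ the target function): classical results such as Hornik et al. bound the worst-case error, while the averaged guarantee bounds the integrated error over $J$.

```latex
% sup-norm (pointwise) guarantee, as in classical universal approximation:
\sup_{x \in J} \bigl| N(x) - g(x) \bigr| < \epsilon
% averaged (integral, L^1) guarantee, as discussed in the reply:
\int_{J} \bigl| N(x) - g(x) \bigr| \, dx < \epsilon
```

On a bounded domain $J$, a sup-norm guarantee implies the integral one up to the factor $|J|$, so the averaged form is the weaker but often more tractable requirement.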
NeurIPS_2024_submissions_huggingface
2024
DenseFormer: Enhancing Information Flow in Transformers via Depth Weighted Averaging
Accept (poster)
Summary: The authors propose to use a weighted average of previous layer outputs as the input for successive layers. The weights are learned parameters and can also take negative values. The weighting module can be coarse: it is enough to insert it only in some layers, and it is enough to attend to a subset of previous layers, which significantly lowers the overhead resulting from the weighting. The authors show perplexity gains on language modelling on OpenWebText2. They analyze the learned weights and uncover interesting patterns. Strengths: The method is easy to use, improves perplexity, and shows interesting and consistent weight patterns. The overhead is minimal, less than for depth-wise attention, and the perplexity in a large-data regime is better than that of the concurrent DenseNets. Weaknesses: The evaluation could be stronger: It would be nice to evaluate the zero-shot performance on some downstream tasks (e.g., BLiMP, CBT, PIQA). This should be easy to implement and does not require additional training. The evaluated transformers are narrow ($d_{model}=768$) and very deep (48 layers for the 378M and 72 for the 548M parameter models). It is unclear how the gains would transfer to a more traditional setup (e.g., a 350M GPT-3 has 24 layers). The effects might be smaller with fewer layers if the main role of the DWA module is to provide additional shortcuts for the gradients. The unusually narrow $d_{model}$ can provide an additional advantage for DWA because there is less space to store everything in the residual directly, and the shortcuts can help with that. There are two ways to improve this: either show the gains with a more standard aspect ratio or show that the narrow+deep DWA model is better than a more traditionally shaped DWA model (both of these should be parameter-matched). Technical Quality: 3 Clarity: 4 Questions for Authors: In line 133, the authors write, "the outputs are stored in KV cache to facilitate decoding.". 
Depending on the implementation, this is not what is typically done. To avoid recomputation, the K and V values are stored instead of the residual. However, I agree with the authors that this is not an issue since only the state of a single column has to be stored, which is typically negligible. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Already discussed in the weaknesses section. The authors should add some discussion on the limitations of the evaluation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our work. 1. **On the KV cache comment (line 133).** We thank the reviewer for bringing this to our attention and agree the statement in line 133 is incomplete. As the reviewer rightfully pointed out, while we do not need to store the past layers' outputs, a small price has to be paid to store the intermediary states for the current token. This cost is small and does not grow with the sequence length. We will make this clear in our next revision. 2. **On the width used in our experiments.** We agree that our DenseFormer architecture helps to solve the capacity bottleneck of the residual stream typically present in transformers. As such, a narrow DenseFormer can match the performances of a wider Transformer. This is an important advantage as increasing the width also increases the training and inference costs in terms of both compute and memory. To investigate the impact of the model's width on the perplexity, we trained multiple 24-layer models with different widths of $384$, $768$, and $1536$. The results are summarized in the following table: | Width | Transformer | DenseFormer | Parameters | | -----------| ----------- | ----------- | ----------- | | 384 | 31.039 | **30.120** | 62M | | 768 | 25.020 | **24.631** | 209M | | 1536 | 21.597 | **21.279** | 757M | | 1664 | 21.313 | - | 881M | We observe that the gap—while smaller for wider models—does not disappear. Moreover, a $1536$-wide DenseFormer performs better than a $1664$-wide Transformer, despite having $124$M fewer parameters. In general, as the complexity of the task increases, followed by the need for more capacity, it is more compute- and memory-efficient to use a DenseFormer instead of a Transformer. This is especially relevant when we think of deploying LLMs on devices with hardware constraints. Additionally, other hypotheses exist that could explain the advantage of DenseFormers over Transformers. 
For instance, we mention in our work how the DWA weights can learn to subtract earlier representations from later representations, which potentially helps disentangle processing the current token from predicting the next token. Evidence for this can be seen in Fig. 4 and Fig. 5. We hope the above comments address your concerns and that you will consider raising your score. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the time and effort invested in their rebuttal. I appreciate their new experiment investigating more standard model shapes and am glad to see that the gains still hold. Thus, I am increasing the score. --- Reply to Comment 1.1.1: Comment: We are pleased that the reviewer found our response helpful and sincerely thank them for their decision to increase their score.
Summary: The work introduces DenseFormer, a variant of the Transformer architecture using special connections between blocks. Each transformer block in a DenseFormer may look at a weighted average of all previous blocks' outputs (instead of simply the output of the previous block). The authors run extensive experiments showing the performance improvement of such a design and show that this simple change beats the baseline model on the Pareto frontier of model quality and speed. The authors also run good ablations and showcase different variants of DenseFormer. Strengths: 1. **The idea.** The idea behind the work is simple, in a very positive way, with a good story. The implementation is straightforward (even if optimizations are useful), the method isn't costly during training, and what costs it does incur it makes up for in performance. Moreover, I believe the technique introduced could improve not only Transformer-based language models but Transformers in different modalities as well, and even other kinds of residual networks apart from the Transformer - not only "classical" CNN-based ResNets, but also new architectures like Mamba. 2. **Experimental setup.** Experiments are extensive, and results show consistent improvement. Every metric that I'd want to see is present and measured, like inference throughput, training speed, and comparison to models equal in inference/training time (both through an increase in size and through an increase in training iterations). 3. **Improvements on the Pareto frontier.** The significant improvements achieved with DenseFormer compared to the "vanilla" Transformer should interest both the research community and industry applications. What is more, **the authors provide excellent, easy-to-understand code**, so even when I had doubts about how things were implemented, I could easily check the source code; thank you so much.
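The mechanism summarized above (each block reading a learned weighted average of the embedding and all earlier blocks' outputs) can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the `block` callables and plain-list weights are placeholders.

```python
# Minimal sketch of depth-weighted averaging (DWA) as described in the review.
# `blocks` are callables standing in for transformer blocks; `alphas[i]` holds
# the i + 2 DWA weights applied after block i (embedding + all outputs so far).

def denseformer_forward(blocks, alphas, x0):
    outs = [x0]        # outs[0] is the embedding, outs[j] the output of block j-1
    stream = x0
    for i, block in enumerate(blocks):
        y = block(stream)                 # ordinary transformer block
        outs.append(y)
        # DWA step: re-mix everything computed so far into the residual stream
        stream = sum(w * o for w, o in zip(alphas[i], outs))
    return stream
```

Putting all of `alphas[i]`'s weight on the most recent output reduces this to plain block stacking, which is one way to see that the change strictly generalizes the standard residual stream.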
Weaknesses: **Non-standard model shape (issue essentially resolved during the rebuttal).** By model shape, I mean the relative dmodel and nlayers. I am extremely worried about whether the results would hold up on standard model shapes. The work uses a dmodel of 768, with the smallest models being 48 layers deep and the biggest model being 90 layers. This is extremely deep and thin. For reference, I will use GPT3 model shapes, as they are reasonably tuned and still used by the community today (see Table 2.1 on page 8 of https://arxiv.org/pdf/2005.14165). The only model with a dmodel of 768 has just 12 layers, 4x shallower than **the shallowest** model used by the authors. This 768/12 ratio was also used by BERT-base. The common ratio used by the community, shared by GPT3 6.7B, llama7B, and Mistral, is a dmodel of 4096 with 32 layers - still shallower than any model used by the authors, and 6x as wide! Chinchilla 80B and LLama 80B use 8192/80, with depth comparable to the authors' models. Only when we get to "the GPT3", 175B, do we finally reach a model deeper than any of the authors' models, but with a whopping 12288 dmodel. The above issue concerns me because one possible interpretation of DenseFormer improvements is essentially increasing the information throughput/capacity of the residual stream, as the layers can communicate directly without using the limited residual stream. With those hyperparameters artificially bottlenecking the throughput of the baseline model's residual stream (compared to the optimal hyperparameters), every technique that effectively circumvents the bottleneck gains an unfair advantage, and the results may not reflect the improvement in real-world models. I am unsure how big of an impact this nonstandard architecture has (maybe it doesn't have any impact?), but it leads me to trust the results less.
While I trust that running wider models isn't feasible given computational constraints, I would suggest the authors run models of "standard" shape, especially when it comes to the baselines. For example, seeing the results of a smaller model, like 768/12 (and/or fewer than 12 layers while keeping the dmodel of 768), would showcase the impact of changing the width/depth ratio of the model on the relative improvement achieved by DenseFormer. Those experiments would also be relatively cheap (I'm not suggesting the authors pre-train 7B models, of course). The saving grace for now is that the code exists and everything is easy to reproduce by the community, so I'm leaning accept even with this issue standing. However, running those experiments would be perfect. **Learning rate, hyperparameter tuning.** These are less important than the depth/width ratio, but still important. How was the learning rate (and other parameters, if any) chosen? Was it tuned for the baseline? Was it tuned at all? This, again, is important, especially for those deep models, which are generally harder to train and less stable. **Minor** 1. Number of heads. In line 198, "We use models with 8 heads", while in line 602, "12 attention heads". One of those lines is incorrect (I assume 12 heads is used, though). 2. I needed to see the code to see the details about the residual connection inside the block and where the layer norm is (is Pre-LN used, in particular). Adding a "Block" implementation to the paper's naive implementation code would make it easier for future readers to check this. That said, the code is very nicely written. 3. It would be worth adding that the memory overhead (line 130) is only negligible if no activation checkpointing is used (and it is commonly used). Also, for inference, the overhead is negligible only if no optimizations are used to shrink the KV cache (and again, things like grouped attention are commonly used).
Technical Quality: 3 Clarity: 4 Questions for Authors: See weaknesses. The primary concern is the architecture shape, and the secondary is hyperparameter tuning. With this primary issue resolved, this would be outstanding work. Another question: can we get a more detailed analysis of how exactly the technique differs from previous work, apart from being applied to Transformers? My understanding is that it has multiple novel parts that were adapted specifically, but a more detailed comparison would be welcome. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: Properly addressed, except for the architecture shape (see weaknesses). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your careful consideration of our work and your valuable feedback. We provide the following comments to further clarify our contributions: 1. **On the non-standard model shapes.** As you have correctly pointed out, using DenseFormer “alleviates the information capacity bottleneck of the residual stream”. It is also true that increasing the hidden dimension is another possibility for resolving this bottleneck. Indeed, we conjecture that given a DenseFormer, it is possible to build a standard model with a much larger hidden dimension and show that it can theoretically perform the same operations as the original DenseFormer. As such, it is possible that the gap becomes less pronounced as we increase the width. However, increasing the hidden dimension comes with significant resource overheads, leading to larger models and an additional memory footprint (e.g. a larger KV cache). To further strengthen our claims, we additionally ran experiments with $24$-layer DenseFormers at widths of $384$, $768$, and $1536$ and compared them with standard Transformers. The perplexities for those models are presented in the following table:

| Width | Transformer | DenseFormer | Parameters |
| ----- | ----------- | ----------- | ---------- |
| 384   | 31.039      | **30.120**  | 62M        |
| 768   | 25.020      | **24.631**  | 209M       |
| 1536  | 21.597      | **21.279**  | 757M       |
| 1664  | 21.313      | -           | 881M       |

For the larger widths of $1536$ and $1664$, we tuned the learning rate in $\{0.001,0.0005,0.0003\}$ and found $0.0005$ to be best. We can see that the gap between the two architectures still persists. We also validate the above conjecture by measuring the perplexity of a Transformer model with a larger hidden size of $1664$, which is worse than that of a DenseFormer with a smaller hidden size of $1536$, despite a $124$M parameter difference.
Furthermore, given the above discussion regarding the trade-off between capacity, width, and using DenseFormer, we expect the gap to widen when moving to more demanding settings which require additional capacity (e.g. longer sequence lengths, or training on more tokens). 2. Moreover, while current architectures use larger hidden dimensions, this may be because, until now, increasing the hidden dimension has been the only avenue to address the said information bottleneck. In our work, we point out that using DWA additionally allows operations such as subtracting the first layer's output, which we have demonstrated is actually learned by the model. 3. **On the learning rate and hyperparameter tuning.** We tried our best to tune the hyperparameters for the baseline models at different model depths given the available resources. We then used the same settings when training DenseFormers. We tested learning rate values in $\{0.0001,0.0003,0.0005,0.0007,0.001,0.002\}$, and found $0.001$ to be systematically better. We also tuned the number of warmup steps for a $48$-layer standard Transformer, and used the best values found in all subsequent experiments. We will include these clarifications in our next revision. 4. **On the memory overhead.** We agree that when activation checkpointing is used, the overhead might become more noticeable at training time, and we thank the reviewer for pointing this out. We note that at the decoding stage during inference, we do not need to store the intermediate outputs for already processed tokens. The overhead of storing the intermediate outputs for a single token remains negligible in comparison with the KV cache. We will include these clarifications in the next revision of our paper. 5. **On the difference with relevant prior works.** The most relevant prior method might be DenseNet. DenseNet is composed of a succession of Dense blocks.
Within each block, convolution operations are interleaved with concatenation operations, which concatenate the outputs of all the previous convolution blocks. Therefore, within a dense block, the size of the representation keeps increasing. The hidden size is reduced using transition blocks consisting of convolution and pooling. Compared to DenseNet, our approach is minimalist: we simply take a linear combination of past layers' outputs. This removes the need for transition blocks, only adding a very small number of parameters. We will include this discussion in the related work section of our next revision and try to make the distinction from prior work clearer. We hope the above comments address your concerns and that you will consider raising your score. --- Rebuttal 2: Comment: I want to thank the authors for their response, and **I have raised my score (from 6 to 7)** - as I assume a reasonable response will be given to the remaining questions regarding the training setup (see below). **On model shapes.** Thank you for providing experimental results. I'm glad to see that you also included a pretty standard architecture, 1536/24 (e.g., GPT-Large), and a wider range of model parameters (881M vs. 62M, a 14x difference, instead of around 2x in the original work). While ideally I would recommend going for more standard model shapes, especially with a reasonable width-to-depth ratio, I agree the presented results show that DenseFormer should show gains in practice as well. Those experiments show considerably worse perplexity for a given model size than the experiments in the paper. What was their training length? I assume this must have been the critical difference—or was it something else? It looks like 16k steps or so instead of 40k. Or was something else changed as well? **On other things.** The rebuttal answered my questions. Just to confirm - the learning rate tuning was done separately for the Transformer baseline and separately for DenseFormer.
Am I understanding this correctly? **Change in score from 6 to 7.** Assuming that the above results (with an explanation of the differences in the training setup or of the worse perplexity in those wider models), a brief note on hyperparameter tuning like the one in the rebuttal, and the expanded comparison to DenseNet like the one in the rebuttal will be included in the paper or the appendix for the benefit of readers - I raise my score (from 6 to 7), as I'd like this work to be seen at NeurIPS. --- Rebuttal Comment 2.1: Comment: We thank the reviewer for increasing their score. We will add the new width experiment to our next revision, along with a description of the hyperparameter tuning procedure. * **Experimental details for the width experiment.** As we changed the width of the models and reduced the depth, we had to re-tune the learning rate. For both Transformers and DenseFormers alike, we ended up using a learning rate of $0.001$ for the widths of $384$ and $768$, and a learning rate of $0.0005$ for models of width $1536$. In the interest of time, we decided to reduce the batch size from $400$ to $128$. As a result, the models are trained on fewer tokens, which explains the gap in perplexity. The remaining hyperparameters are the same as for our other experiments, i.e. we use $40k$ iterations and a sequence length of $256$. * **Concerning the tuning of the learning rate for DenseFormer and Transformer models.** We extensively tuned the learning rates for our Transformer models at all depths. We then mostly applied those learning rates to our DenseFormer models. On a small set of experiments, we verified that the added DWA modules do not seem to affect the optimal learning rate value. We thank the reviewer again for their valuable feedback, and remain at their disposal in case any further clarification is needed.
Summary: The paper introduced DenseFormer -- a simple and effective architecture that boosts a transformer language model's performance by adding trainable residual connections to all the previous layers. The paper discusses intuitions behind the architecture (that it enhances information flow between earlier and later layers) and design choices for the quality-cost tradeoff. Empirical results on language modeling tasks validate the effectiveness of the architecture. The paper also presents empirical observations of the residual weights, which exhibit consistent patterns that might be tied to the effectiveness of the architecture. Strengths: Originality: while the idea of residual connections, including trainable ones, has been proposed before, it is still surprising that a simple trainable combination of residual streams can be both effective and have consistent weight patterns. The design of this architecture and the findings on its effectiveness and weight patterns are novel. Clarity: the paper presents the idea in a clear and easy-to-follow fashion. Various design choices are discussed regarding the quality-cost trade-off, and important issues such as whether the architecture remains effective if introduced in later training stages are discussed. Significance: the simplicity and effectiveness of the method is quite appealing for consideration in practical language models. Weaknesses: A minor weakness is the robustness of the architecture across tasks: the paper focuses on the language modeling task, while transformers have been used for vision, speech, and general time series data. A discussion on the robustness of DenseFormer on other tasks would strengthen the paper. Technical Quality: 3 Clarity: 4 Questions for Authors: Regarding the memory consumption of DenseFormer, the paper mentioned that it has "Negligible Memory Overhead" since the previous layers' outputs are stored in the KV-cache.
This seems to be at odds with the most common practice, in which the actual key and value vectors are stored in the KV-cache, not the layers' outputs. The reviewer assumes that the authors are referring to a different way of caching -- caching the layers' outputs and re-computing the key and value projections at inference time, which would be consistent with the negligible memory overhead claim but introduces additional computation. A clarification on this point would be helpful. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your consideration of our work and your valuable feedback. We make the following comments to provide further clarifications: 1. We completely agree on the ambiguity regarding the negligible memory overhead claim and thank you for bringing it to our attention. We point out that at the decoding stage during inference, we do not need to store the intermediate outputs for already processed tokens. The overhead of storing the intermediate outputs for a single token remains negligible in comparison with the KV cache. However, a more careful implementation is needed to avoid additional memory usage during the pre-fill stage as you suggested. We will clarify this in the revision of our paper. 2. While we expect our results to extend to other tasks as well, we decided to focus on text for this work to save on resources. We agree that extending the results to other tasks would be an excellent direction for future work. Thank you.
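The rebuttal's claim that the per-token overhead is negligible next to the KV cache can be illustrated with a back-of-envelope count. The model dimensions below are hypothetical, not taken from the paper:

```python
# Back-of-envelope comparison (illustrative numbers only): the KV cache stores
# K and V per layer for every past token, whereas the extra DWA state is one
# block-output vector per layer for the current token only.

def kv_cache_floats(n_layers, seq_len, d_model):
    """Floats held by a standard KV cache (K and V for every layer and token)."""
    return 2 * n_layers * seq_len * d_model

def dwa_extra_floats(n_layers, d_model):
    """Floats held for the current token's intermediate outputs (+ embedding)."""
    return (n_layers + 1) * d_model

# e.g. a hypothetical 48-layer, 768-wide model decoding at a 4096-token context:
ratio = dwa_extra_floats(48, 768) / kv_cache_floats(48, 4096, 768)
```

For these illustrative numbers the ratio is on the order of 1e-4, i.e. negligible, matching the rebuttal's argument; as the reviewer notes, the comparison changes if the KV cache itself is shrunk by optimizations.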
Summary: The paper proposes to construct the current transformer block's input as a weighted average of all inputs from previous transformer blocks. The weights are static and learned during the training process. To reduce computational complexity, the method comes with a dilated version controlled by modulo and division relations. Experiments are conducted to observe the changes in model size, train/test speed, and PPL. Models are trained from scratch on OpenWebText2. Results show better PPL, or similar PPL with a smaller/faster model. Strengths: the method is well defined, and experimental results are good Weaknesses: not a rigorous study regarding experiment comparison with related work, dilation mechanism design Technical Quality: 2 Clarity: 3 Questions for Authors: 1. the abstract is a little misleading, L5 says "100B parameters", but experimental models are at the level of millions of parameters. 2. the proposed method should consider a generalization into a triangular/pyramid structure. what happens if there is DWA on top of DWA and again on top of DWA? 3. the sparsity design by modulo and division is rule based (with no intuition on hyperparameter setting), not learned from training. is it possible a simple regularization could be much better? 4. the whole design is meant for training models from scratch. it is not useful given that many well-trained transformers have been released. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our work. We hereby answer the reviewer’s questions in order: 1. The number of additional parameters required by DenseFormer increases quadratically with the model’s depth. We mentioned 100B parameter models as those large scale models are typically the deepest and represent a worst case scenario. This serves to emphasize that even in this worst case, the number of added parameters is negligible. 2. If we understand correctly, it is suggested to add DWA on top of other DWAs. Given that the current DenseFormer architecture is already mapping any block output to all future block inputs, adding additional connections on top would not make the model more expressive. Please let us know if we misunderstood your statement. 3. We opt for rule based sparsity for simplicity and to maintain hardware compatibility. Other types of sparsity do not easily yield speedups or memory savings and require modifying the loss, possibly introducing additional training overheads. Furthermore, as can be seen in Fig 3.a., the rule based sparsity is already doing quite well, allowing 4x5 DenseFormers to achieve a very close performance to 1x1 DenseFormers. The remaining gap seems too small to justify deploying adaptive sparsity techniques. 4. The reviewer mentions it is not relevant to do research on better transformer training given open-source models are being released. We strongly disagree with the reviewer on this statement. Just because pre-trained LLMs are open sourced does not mean it is not worth researching more efficient ways to train LLMs. The field is extremely fast-paced and new models are continuously trained and released. Our research---by making models more parameter efficient---increases the expected return from pre-training models, and therefore promotes the training of more models. We hope we have answered your concerns, in which case we would appreciate it if you could raise your score. 
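One hypothetical reading of the rule-based ("modulo and division") sparsity discussed in point 3 can be sketched as a fixed connectivity mask. The exact indexing convention, and which factor of the k×d notation plays which role, are assumptions here, not taken from the paper:

```python
# Hypothetical illustration of rule-based DWA sparsity: a DWA module is placed
# only every `period` blocks, and it only averages past outputs whose index
# matches the current depth modulo `dilation`. The point is that the pattern is
# fixed by a simple rule and hardware-friendly, not learned via regularization.

def dwa_mask(n_blocks, period, dilation):
    mask = {}
    for i in range(1, n_blocks + 1):
        if i % period == 0:  # DWA applied only periodically
            mask[i] = [j for j in range(i + 1) if j % dilation == i % dilation]
    return mask
```

Setting `period = dilation = 1` recovers the dense 1x1 pattern in which every block averages all past outputs.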
--- Rebuttal 2: Comment: 1. parameter scale can greatly affect model performance. current experimental results are useful for million-level models. however, the method may be more useful, less useful, useless, or possibly performance-hurting for billion-level models. the hypothesis is inclined toward less useful or useless, as large models are harder and much more challenging to improve. 2. the question is looking to see the potential or the upper bound of performance improvements from using DWA. it is straightforward to test by recursively adding DWA on top of DWA to form a pyramid structure, which should not be an issue considering DWA parameter amounts are quite small. 3. rule-based sparsity design is easy to test and start with. given that sparsity design is an important part of the paper, taking up two sections (3.2 & 3.3) of the method writing, it is expected that the authors would naturally explore other simple design options, such as regularization-based methods. 4. the proposed method seems to only require adding a few parameters, but at the cost of training the entire model from scratch. in fact, such a limitation is directly or indirectly related to comments 1/2/3 above, because the huge training cost hinders many scientific explorations. Overall, it is challenging and expensive for the authors or the community to verify its effectiveness on billion-level models, which are common in the community. to be clear, this is not a question/requirement on the authors' experimental equipment setup, but a question on the applicability, generalization, and usefulness (on billion-level models) of the method. --- Rebuttal Comment 2.1: Comment: I am responding to a reviewer as another reviewer. I agree with Reviewer Ljsg in some respects.
For example, **(point 1)** I also think the mention of the 100B model in the abstract is misleading, and it should be phrased differently without implying the method was tested on such models (replacing the sentence with, e.g., "adding less than a percent of total model params"). I also partially agree with Reviewer Ljsg that **(point 4)** pretraining models is costly, and it is a clear limitation of the technique that it requires pretraining. For sure, I'd like to see experiments on Transformer-to-DenseFormer fine-tuning of Llama or other models, probably as future work. With that said, the arguments about "the cost of training the entire model from scratch" or "the huge training cost hinders many scientific explorations" **apply not only to this paper but to all papers on improving LLM pretraining** - so I would not hold that argument against the authors. I'd like to see more research on more efficient and better pretraining, after all. I'd suggest to the authors, e.g., extrapolating the performance of the model to larger scales (like comparing scaling laws for DenseFormer vs. for Transformer), but such an extrapolation would also require more experiments. Regarding **(point 2)**, it seems to me that adding DWA on top of DWA would not change the model's expressiveness. Reading Reviewer Ljsg's suggestion, I don't see how the proposed technique would be meaningfully different from what is already tested in the paper. Regarding **(point 3)**, while I agree with Reviewer Ljsg that regularization-based sparsity would be scientifically interesting, I also agree with the authors that the existing static sparsity pattern seems to capture the majority of the gains. Given this, it seems to me that any kind of regularization-based sparsity would be impractical because it introduces additional complexity both in terms of training (auxiliary losses) and in hardware/performance optimizations.
The computational budget for such experiments could be better spent elsewhere (e.g., larger models or more experiments for determining scaling laws). --- Rebuttal 3: Comment: We thank reviewers 1YUF and EGgo for their responses and for their support. We also thank the AC for their comment. We agree with these responses and hope they address reviewer Ljsg's concerns. We remain available to answer any further questions. Concerning the mention of 100B parameter models in the abstract, we will change it in our next revision. We also agree with reviewer 1YUF that comparing scaling laws for DenseFormers and Transformers would be a convincing way to verify that our results generalize to larger model sizes. However, deriving those scaling laws requires running experiments at a scale that needs more computation than is available to us.
NeurIPS_2024_submissions_huggingface
2024
Fairness and Efficiency in Online Class Matching
Accept (poster)
Summary: This paper studies the online bipartite matching problem where the agents (offline nodes) are partitioned into multiple classes. Upon the arrival of an item (online node), it needs to be immediately matched to a free agent or discarded, and the goal is to optimize fairness among different classes and efficiency simultaneously. The main contribution of this paper is a simple randomized non-wasteful algorithm that simultaneously guarantees $1/2$-approximate class envy-freeness (CEF), $1/2$-approximate class proportionality (CPROP), and $1/2$-approximate utilitarian social welfare (USW). This is the first randomized non-wasteful algorithm that achieves class fairness guarantees. Furthermore, this paper complements this positive result by showing that no randomized non-wasteful algorithm can admit $\alpha$-CEF with $\alpha > 0.761$. When items are divisible, this paper also shows that no deterministic algorithm can achieve better than $0.67$-CEF. Finally, this paper studies the notion called the price of fairness, which characterizes the trade-off between fairness and efficiency. This paper shows that any randomized algorithm that achieves $\alpha$-CEF cannot achieve strictly better than $\frac{1}{1 + \alpha}$-USW. Strengths: This paper studies a well-motivated and interesting problem. Also, this paper is well-written and well-structured. All the results shown in this paper are non-trivial, and I believe that the community will be interested in them. In particular, I found the algorithmic result quite elegant and natural, for which the paper also provides interesting and technical analysis. The price of fairness notion seems to be a good complement to prior results, and it would be exciting to know the trade-off curve between possibly conflicting objectives. Weaknesses: While the price of fairness has been extensively considered by the fair division literature (e.g., [1]), this paper doesn't provide any pointers to this thread of research. 
Minor: - Line 151: Should $\textbf{y}$ be a bundle instead of an allocation? - Line 165: Typo: $V_i(Y_j^*(X))$. - Line 197: "is is". - In the paragraph starting from Line 238, there is no description for relating $Y_j$ to $A_i$. - Line 255: "admits". [1] The Price of Fairness for Indivisible Goods. Xiaohui Bei, Xinhang Lu, Pasin Manurangsi, Warut Suksompong. Technical Quality: 3 Clarity: 3 Questions for Authors: For Line 249-250, when constructing the vector for item $o$, does the vector depend on the history including the previously arrived items and the actions made by the algorithm so far? In both cases, it's not obvious to me why the probability in Line 251 holds. Can the authors further explain? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Certain references are missing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their kind words with respect to our results and their presentation. We also appreciate the noted typos, which have since been corrected. On the noted weakness about comparisons to other price of fairness (PoF) results: the first set of results on the PoF goes back to Bertsimas et al. and Caragiannis et al. Bertsimas et al. analyze the upper bound on the utility (egalitarian social welfare) loss incurred by fairness notions such as proportional fairness and max-min fairness for divisible goods. In particular, a main takeaway of their results is that for a small number of players, the PoF stays relatively small, e.g., for two players the PoF is at most 8.6% for proportional fairness and 11.1% for max-min fairness. Caragiannis et al. further obtain more extensive results in a similar vein, in particular for the notions of proportionality, envy-freeness, and equitability, for allocations of divisible and indivisible goods and chores. As pointed out by Bei et al., their significant limitation in the indivisible setting is that their guarantees do not hold for every problem instance, as the results are not established via a worst-case analysis. Bei et al. tackle such limitations by investigating the PoF from a worst-case perspective, under various notions including Nash social welfare, envy-freeness up to one good, balancedness, and egalitarian social welfare. We note that all of these results do not directly apply to our setting, since our notion of class envy-freeness is not equivalent to any of the properties above. To your final question, the $x_{a,o}$ are independent of the rounding outcomes in the prior steps; hence the inequality holds, for each agent being selected at least once, with the given probability bound. *References:* Bertsimas et al.: The Price of Fairness, D. Bertsimas, V. Farias, N. Trichakis, OR'11 Caragiannis et al.: The Efficiency of Fair Division, I. Caragiannis, C. Kaklamanis, P. Kanellopoulos, M.
Kyropoulou, TCS’12 Bei et al.: The Price of Fairness for Indivisible Goods, X. Bei, X. Lu, P. Manurangsi, W. Suksompong, TCS’21 --- Rebuttal Comment 1.1: Comment: Thanks for your response! I will keep my score positive.
Summary: The paper studies the online bipartite matching problem from a fairness perspective. One set of the bipartition (agents) is known a priori, and the other set (items) arrives online, wherein edges to each arriving item are revealed and the algorithm must (possibly) select an edge to include in the matching before the next item arrives. It is well known that the best competitive ratio for the basic problem is (1-1/e). In this work, the agents are also partitioned into disjoint “classes” (e.g., demographic groups, interest groups, etc.). The goal is to compute an approximately fair and efficient matching under the following notions: - Efficiency: - Competitive ratio on the size of the matching computed — the traditional objective, here referred to as the utilitarian social welfare or USW in keeping with the agent interpretation. - Non-wastefulness — there should not be any agent-item pair with both the agent and item unsaturated but an unselected edge between them. In other words, the matching should be maximal. This implies a 1/2-approximation on the previous USW objective via the typical maximal matching argument. - Fairness: - Class envy-freeness (CEF) — In the end, no class of agents should be able to redistribute the total allocation of items to another class amongst themselves in a way that would lead to a larger matching than that assigned to the group. Approximations are multiplicative on these total matching sizes. - Class Proportional Fairness (CPROP) — A given class considers the best case over all possible (even divisible) global matchings, of the minimum over classes of the optimistic total matching size they might expect by redistributing the items assigned to that other class amongst themselves. The matching satisfies CPROP if every class gets at least this total matching size. Approximation is again multiplicative on the matching size. The paper provides four primary results: 1.
For the positive result, a randomized algorithm is studied that assigns each new item to a class uniformly at random, among those with an agent liking that item, and then to an agent within that class liking the item uniformly at random. This algorithm is non-wasteful by definition and thus 1/2-USW by corollary. The authors further show that the algorithm achieves a 1/2 approximation to CEF and CPROP. 2. The paper also provides impossibility of approximation results. In the same indivisible matching setting as above, they show that no non-wasteful algorithm can achieve better than ~0.761-approximate CEF. 3. Furthermore, in the divisible matching setting (where items can be fractionally allocated), no non-wasteful algorithm can achieve better than ~0.67-approximate CEF. 4. Finally, an inverse proportionality is shown to exist between approximation of CEF and approximation on the USW objective. Strengths: The model, while shared with prior work, is a nice extension to the existing online fair division literature, particularly in the subadditivity of class valuations due to the matching constraints, and the paper provides a nice analysis of the expected class envy under a randomized algorithm that may be of independent technical interest. The paper also provides tighter upper (impossibility) bounds for approximating CEF non-wastefully and (in the price of fairness result) while approximating USW. The writing is generally clear. I particularly appreciate that proof sketches of at least the constructions for the counterexamples are provided in the main body along with diagrams to aid the reader. The paper also does a good job of surveying relevant related work in context. Weaknesses: The positive algorithmic contribution seems very limited, as the only algorithm described (Algorithm 1) is essentially to allocate at random (random non-wasteful class, then a random non-wasteful agent within that class).
The relationship to the immediate prior work “Class Fairness in Online Matching” and the novel contribution of the current work are not clearly described. For example, lines 63-65 introduce the current results by saying “…we provide the first non-wasteful algorithm that simultaneously obtains approximate class fairness guarantees in expectation...” although the prior work cited above also defines non-wasteful algorithms that obtain approximate class fairness guarantees; the difference being that those are ex-post guarantees from deterministic algorithms rather than merely in expectation. More specifically, that prior work developed a deterministic algorithm for indivisible items for the same problem that achieves a deterministic/ex-post (rather than merely in expectation) guarantee of non-wastefulness, 1/2-USW, 1/2 MMS (maximin share), and 1/2-CEF1. Motivation is discussed in terms of prior work asking about randomized algorithms, but that was presumably motivated by a desire to achieve stronger guarantees rather than simply to introduce randomization for its own sake, and it is not clear that the new results under random allocation are significantly stronger. While the novel analysis technique is nice and the impossibility results are helpful, the significance and impact of the positive algorithmic result seems unclear overall. Minor comment: Several of the references should be updated to refer to the peer-reviewed and published versions of papers rather than the archival pre-prints. For example, the immediate prior work the current paper follows up on, “Class Fairness in Online Matching” is referenced from arXiv rather than the published 2023 AAAI paper. Technical Quality: 3 Clarity: 2 Questions for Authors: Could the authors clarify the sense in which the current algorithmic results for the indivisible case are a substantial improvement over those obtained in the prior work Class Fairness in Online Matching? 
In particular, why is (1/2)-CEF and (1/2)-CPROP in expectation to be preferred to (1/2)-CEF1 and (1/2)-MMS ex post, and in what sense(s) is the Random algorithm superior or an improvement over the Match-And-Shift algorithm? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: No concern Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
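For concreteness, the Random algorithm this review summarizes (a uniformly random eligible class, then a uniformly random eligible agent within it) might look like the following minimal sketch. The function name, input format, and helper structure are our illustrative assumptions, not the authors' code; the sketch is non-wasteful by construction since an item is discarded only when no unmatched agent likes it.

```python
import random

def random_nonwasteful_match(items, classes, likes, seed=0):
    """One-pass online matching: each arriving item goes to a uniformly
    random class containing an unmatched agent that likes it, then to a
    uniformly random such agent in that class (non-wasteful by design)."""
    rng = random.Random(seed)
    matched = {}                       # agent -> item
    for item in items:
        # classes that still contain an unmatched agent liking this item
        eligible = [c for c in classes
                    if any(a not in matched and item in likes[a] for a in c)]
        if not eligible:
            continue                   # nobody likes it: discarding is still non-wasteful
        cls = rng.choice(eligible)
        agents = [a for a in cls if a not in matched and item in likes[a]]
        matched[rng.choice(agents)] = item
    return matched
```

A quick sanity check on a two-class instance confirms the maximality property: after the pass, no unmatched agent still likes an unassigned item.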
Rebuttal 1: Rebuttal: We thank the reviewer for their comprehensive feedback and kind words regarding the problem setting and approximate results (both upper and lower bound). For the first noted weakness on the algorithmic contribution, we refer the reviewer to our general comments made to Reviewer gpQ6 on the approximate guarantee and nontriviality of our algorithm. For the inquiry regarding the preferences for (1/2)-CEF and (1/2)-CPROP in expectation over (1/2)-CEF1 and (1/2)-MMS ex post, as well as the comparison between the Random algorithm and the Match-And-Shift algorithm, we emphasize that the prior work was unable to provide any approximate guarantees for the CEF objective. Moreover, (1/2)-CEF is a stronger guarantee than their (1/2)-CEF1 by definition, as it ensures a greater degree of equity between groups while maintaining the same approximate fairness factor. It is also known in the fair division literature that EF1 allocations are guaranteed to exist and are computable in polynomial time whereas EF is not guaranteed to exist. For proportionality, (1/2)-CPROP is fundamentally stronger than (1/2)-MMS. CPROP considers the maximum minimal satisfaction achievable over all possible (including divisible) matchings, whereas MMS is limited to indivisible matchings. This distinction means that CPROP offers a more robust and flexible framework for ensuring fairness. The simplicity of the Random algorithm and its non-trivial yet strong O(1) expected guarantees make it highly practical and promising. The inherent flexibility of randomized algorithms allows for a degree of leeway in the allocation process, accommodating minor errors or misestimations without significant degradation of the overall fairness or efficiency of the system.
This flexibility as compared to deterministic algorithms is crucial in real-world applications where exact outcomes may be unpredictable or where slight deviations from the ideal are acceptable for the sake of greater overall efficiency or user satisfaction. Furthermore, the Random algorithm is nearly optimal at the class level (see our impossibility construction) and simple to implement, while ensuring the improved and more practical expected guarantees noted above. This makes the Random algorithm superior in environments where conditions are dynamic or partially unknown at the time of decision-making. We believe these methods offer significant advantages for complex allocation tasks, particularly when equity and adaptability are paramount. Lastly, we thank the reviewer for noting the potentially outdated bibtex entries. We have gone through and rectified the references which needed updates. We hope the reviewer will consider increasing their score if the above sufficiently addresses the concerns. --- Rebuttal Comment 1.1: Comment: I acknowledge that I have read the author rebuttal. Thank you for the discussion around my concerns and questions. I appreciate and agree that CPROP is stronger than MMS and CEF is stronger than CEF1, but wish to reemphasize my point that, for example, (1/2)-CEF in expectation is incomparable with (1/2)-CEF1 in the worst case, and that, in my opinion, it is somewhat misleading to characterize the results as the first non-wasteful class fairness guarantees when Class Fairness in Online Matching, AAAI 2023 also gives a non-wasteful algorithm with a (different and incomparable) class fairness guarantee. I appreciate that the randomized algorithm may be simpler to implement in a complex environment, and that is a reasonable advantage. --- Rebuttal 2: Comment: We appreciate the reviewer for being very responsive.
We agree that ex-ante guarantees and ex-post guarantees are usually not directly comparable; however, the following are explicitly mentioned as main open problems in Hosseini et al. 2023: 1. Any deterministic algorithm for "divisible" items with CEF better than 1-1/e and NW 2. Any deterministic algorithm for "divisible" items with any reasonable CEF, NW, and USW better than 1/2. 3. Any randomized algorithm for "indivisible" items with any reasonable CEF, NW and USW. In particular, neither their randomized algorithm nor their deterministic one has any CEF guarantee, but only 0.593-PROP, 1/2-USW and NW. In addition, to fill this gap in the randomized results (the lack of a CEF guarantee), they further analyze another wasteful algorithm with a CEF guarantee, but it is necessarily wasteful, which we believe is the reason that they pose open problem 3 explicitly. Hence, we believe our result in this context explicitly resolves open problem 3 with a further guarantee on CPROP. We hope this meets the reviewer's expectations and would appreciate your understanding as well as a possible increase in the score on par with the other reviewers.
Summary: The paper addresses the online bipartite matching problem with a focus on class fairness, proposing the first randomized non-wasteful algorithm that balances class envy-freeness, class proportionality, and utilitarian social welfare. It introduces the concept of the "price of fairness," highlighting the trade-off between fairness and optimal matching, where increased fairness results in decreased utilitarian social welfare. Strengths: -The work resolves a long-standing conjecture by developing a non-wasteful randomized algorithm that achieves non-trivial fairness guarantees in online matching problems, complementing existing bounds on class envy-freeness (CEF), class proportionality (CPROP), and utilitarian social welfare (USW). - It introduces the concept of the "price of fairness", showing the inverse proportionality between fairness and optimality, and presents an impossibility result demonstrating the trade-off between achieving CEF and maximizing USW. Weaknesses: - The motivating applications are not clearly described in the current context. - No experimental results are provided to show the actual performance of the proposed algorithm. Technical Quality: 3 Clarity: 3 Questions for Authors: none Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: For an expansion on motivating examples of this problem setting: the study of online matching under fairness constraints is motivated by the challenges posed by the advent of Internet economics and new marketplaces, which demand solutions that are both transparent and fair, as highlighted in Moulin’s “Fair Division in the Internet Age”. These applications, where items must be matched to agents immediately and irrevocably as they arrive, include allocating advertisement slots [Mehta et al., 2007], assigning packets in switch routing [Azar and Richter, 2005], distributing food donations [Lee et al., 2019], and matching riders to drivers in ridesharing platforms [Banerjee and Johari, 2019]. Much of this work ignores the notion of fair matching. For example, a food bank that is distributing food must allocate food as it arrives due to perishability, and it’s important to ensure that these resources are sent to locations in such a manner that serves all communities equitably. With respect to the lack of experimental validation, we emphasize that this work is intended to be largely theoretical. Capturing this problem precisely with an experimental procedure raises several design questions and would be an interesting direction of future research. Specifically, since our analysis is worst-case it would be interesting to see how our algorithm performs in practice. We hope the reviewer will consider increasing their score if the above sufficiently addresses the concerns!
Summary: This paper studies the online bipartite matching problem with class fairness guarantee. In this problem, the offline vertices are divided into $k$ classes, and the challenge is to match each online vertex to an unsaturated offline vertex while providing guarantees on various fairness metrics (e.g., class envy-freeness (CEF), class proportionality (CPROP)) and efficiency metrics (e.g., utilitarian social welfare (USW)). Previous work [16] has designed deterministic non-wasteful algorithms to achieve tight guarantees for these metrics. This paper introduces the first non-wasteful randomized algorithm that simultaneously guarantees 1/2-CEF, 1/2-CPROP, and 1/2-USW. It also presents more general upper bound results (for both deterministic and randomized algorithms) for CEF in both indivisible and divisible settings. Additionally, this paper formalizes the price of fairness in the fair matching setting, providing an upper bound on efficiency (USW) for a given CEF guarantee. Strengths: - This paper represents the first attempt to study non-wasteful randomized algorithms with a CEF guarantee. The analysis of the studied RANDOM algorithm is non-trivial. It also provides general upper bounds (including for randomized algorithms) for CEF, which supplements the existing results. - The hard instances used to prove the upper bound are interesting and can be useful for future research. - The topic of online matching with class fairness guarantees is important and timely in the field of algorithmic fairness, although I feel a bit concerned regarding whether this theoretical problem is sufficiently motivated for the NeurIPS community. Weaknesses: - The authors claim to have resolved the open problem raised by ref [16]: Can a randomized algorithm for matching indivisible items achieve any reasonable CEF approximation together with either non-wastefulness or a USW approximation?
However, I am not fully convinced that a "reasonable" CEF means 1/2, as a non-wasteful deterministic algorithm that provides such guarantees already exists (Theorem 1 in ref [16]). Although the analysis for the current randomized algorithm is non-trivial, the significance of a 1/2 CEF is unclear. - Typos, e.g., in line 165 page 4, $V_i(Y_j^*(X)) \to V_i^*(Y_j(X))$ Technical Quality: 3 Clarity: 3 Questions for Authors: - Can you explain the significance of the 1/2-CEF1 guarantee for non-wastefulness? For example, is it challenging for other natural randomized algorithms to achieve $\alpha$-CEF with $\alpha > 1/2$ while ensuring non-wastefulness or a USW guarantee? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading of our work, highlighting the novelty of our analysis and impossibility constructions, as well as constructive feedback. We also appreciate the identification of a typo which has since been rectified. We here address the noted weaknesses and related question raised. For the open problem raised by [16] on non-wasteful randomized algorithms: it is important to highlight that the 1/2-CEF guarantee provided by our algorithm represents a significant advancement in the field, effectively resolving the open question from prior work. This result is the first of its kind to ensure a nontrivial guarantee of fairness in expectation under the stringent non-wasteful constraint. The noted nontriviality of analyzing the natural “random” algorithm to achieve this level of fairness without wasteful allocations underscores the novelty and utility of our approach. We also emphasize that the original Hosseini paper was only able to achieve an approximate 1-1/e guarantee with a randomized algorithm when allowing for wastefulness. By restricting ourselves to non-wasteful and fair allocations, our task becomes considerably more challenging. For a more extensive comparison of our result against that of [16], we refer the reviewer to our remarks to reviewer h2C8 which delineates how the fairness guarantees differ (and are improved). Moreover, as discussed in our upper bound construction for a randomized algorithm, we highlight that an optimal algorithm must implement a randomization procedure at the class level. An algorithm that does not incorporate this can be exploited by an adversary. Thus, although our algorithm seems natural, it is strongly motivated by this consideration. The nontrivial analysis, which relies on doubling techniques to properly bound the expected guarantees, will likely be useful in all future works that analyze algorithms abiding by these constraints. 
We acknowledge that surpassing the 1/2-CEF barrier remains a formidable challenge, one that we pose as an open problem to the research community. We hope that our work will inspire future research on this problem. Furthermore, we hope the reviewer will consider increasing their score if the above sufficiently addresses the concerns! --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for your reply. I agree that the randomized algorithm achieves a stronger guarantee in expectation (1/2-CEF and 1/2-CPROP) compared to the deterministic algorithm (1/2-CEF1 and 1/2-MMS) in [16]. However, I still believe that the goal of introducing randomization should be primarily to achieve a CEF guarantee greater than 1/2. After reading your rebuttal and reconsidering the paper's potential influence, I think that proposing an initial attempt at addressing this challenging problem and bringing the open question to the field are good contributions. I have therefore increased my score to 6.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
An Analysis of Elo Rating Systems via Markov Chains
Accept (poster)
Summary: The paper proves results about the Elo rating system, as used for instance in chess, using tools from probability and Markov chain theory. The Bradley-Terry-Luce model assumes a true ranking exists and that the Elo ranking evolves via a Markov chain when two players compete against each other. Although the stationary measure of the Elo process provides a biased estimator of the true ranking, this bias can be sufficiently small to provide a good approximation of the true ranking nonetheless. The main results consist of finite-time l2 bounds between the true ranking and the estimated Elo ranking, that show how the Elo process can approximate the true ranking. Some experimental results are also given. Strengths: The results proved are of interest, especially as the existing literature on the subject is scarce and in particular there have not been many works studying the Elo process as a Markov chain. The paper is quite well-written and does a good job at conveying the main ideas of the proof. The mathematical content is serious and I have not detected any major mistake in the proofs. Weaknesses: I do not see any particular weakness to the present manuscript. Technical Quality: 3 Clarity: 3 Questions for Authors: Here are some specific remarks and suggestions: p.2-3 In Def. 2.1 I think it is a bit surprising at first that $\rho$ does not appear explicitly. I suggest specifying that $\rho$ is the true ranking and that $I$ beats $J$ with probability $p_{IJ}(\rho)$ p.3 l.3 "$\pi$ the equilibrium distribution": I suggest adding a reference where existence and uniqueness of the invariant distribution is proved p.3 l.100 the notation $\succeq$ is not defined. It does not seem to be used elsewhere in the paper. p.3 l.103 "is order $n$" should be "is \emph{of} order $n$". "This implies $\lambda_q \leq 4/n$ always": could you provide a short argument for this?
p.4 l.146 "Since $n / \lambda_q \geq 1$": I think the authors mean $\lambda_q \leq 4/n$ p.5 "Even an alternative definition of the spectral gap has been proposed [13]": I think this sentence is a bit misleading as it could suggest that the spectral gap of [13], defined as in the manuscript as the second smallest singular value of the Laplacian, can be used to control the convergence to equilibrium of a Markov chain, as is done classically using multiplicative or additive reversibilization. [13] shows however this notion of spectral gap is related to convergence of empirical averages. p.5 in l.200 and l.202 what is the $\Delta$ sign at the end of the equations? p.5 l.242 the notation $\tilde{O}$ is not defined p.15 Prop. B.1 I think there is a missing $1/2$ factor in the Dirichlet characterization of $\lambda_q$. p.15 l.541 "Eloupdate" should be "Elo update" p.16 l.548 [29] is a stack exchange post, which to me is not a good reference. I would either give a classical reference or not give any reference at all, given the result stated is simply the convergence theorem on Hilbert spaces p.16 Appendix C: the notation $X^{1/2}$ is not introduced p.17 l.589 "We now turn to the bias" should be "We now turn to the variance" p.18 l.598 I think two factors $1/4$ and $1/2$ have been swapped: the maximal variance of a $[0,1]$ valued random variable is $1/4$, so the $\eta^2$ terms add up to give $\eta^2 / 2$ instead of $\eta^2 / 4$. 
This error is maintained throughout the proof and eventually gives a factor $4$ instead of $3$ in Theorem 2.7 p.18 l.607: "$z:=x^{0} - \mathbb{E} [X^0]$" should be "$z := X^{0} - \mathbb{E} [X^0]$" p.18 l.611-612 "the maximal variance of such a random variable is $q_k /2$": there should be an additional factor $\eta^2$ p.19 l.617 the step-size $\eta$ is missing in this equation p.21 l.677-679 the notations and remarks given in this paragraph have already been used in Appendix C, so I suggest moving it there p.21 in the equation after l.679 what is $\pi_{i,j}$? p.21 l.683 $\bar{z} \in [\bar{y},\bar{x}]$ should be an index of the minimum p.21 in the second equation after l.685 I think the notation $E_{\ast}$ is not defined p.22 Are ergodicity and irreducibility sufficient for Theorem D.5 to hold? I think the results from [35] require a stronger form of "uniform ergodicity". p.22 In Theorem D.5 "Let $t_{\mathrm{mix}}$ denote its mixing time": I think it would be worth giving a definition of the mixing time. Also it depends on a parameter $\epsilon$, so the authors should specify whether they consider a particular $\epsilon$, e.g. 1/4. Also I find it slightly confusing that the notation $t_{\mathrm{mix}}$ is also used, e.g. in Theorem D.7, to denote an upper bound on the mixing time. p.22 In Theorem D.7 the result seems to be stated (and is proved) for a $1$-Lipschitz function. In general I expect there should be a dependency on the Lipschitz constant p.22 l.717 use only one notation among $\mu_f$ or $\pi(f)$ for the expectation. p.23 l.738 again the mysterious $\Delta$ sign p.23 l.741-743 about the generic constants: I haven't checked all the calculations but I think they are correct; however, I must say the implicit assumptions made on $\eta, \lambda_q$, etc. are not always super clear. For instance p.25 I do not see why $t \geq \kappa^{-1} \geq n$. Also, why does the equation of l.812 establish a bound in $\delta^{1/3} = n^{-8}$ while in l.814 it becomes $\delta^{1/6}$?
p.23 l.756 "Let $s \in \{0,1 \}$": there is a slight abuse of notation in that $s$ is also used at the end of the proof as a summation index. Maybe write $S$, although I admit there is little risk of confusion. p.26 in l.821 there is a missing expectation p.26 l.829 I think using the Lipschitz condition requires applying the triangle inequality first, in which case there should be norms inside the summation for the step I and step IV terms p.26 l.831 "The second term is at most $\delta^{1/3}$": but the previous step concluded with a bound of $\delta^{1/6} / n = n^{-5}$. This probably does not affect the result, but I think these inequalities based on implicit assumptions changing from line to line are confusing and could be made clearer. p.26 l.834: the right-hand side of the inequality on $t_{\mathrm{mix}}(1/4)$ is $800 t_{\mathrm{mix}}$ given what precedes, not $t_{\mathrm{mix}} = 800 \ldots$. p.27 l.844 $\sum_{k} | A^{t,T}_{k} - \rho_j |_1$: write this either as a sum or as an $\ell^1$ norm p.27 Proof of Theorem 2.5: why introduce the new notation $\Pi_k$? p.29 l.895 "the single edge $\{ c_k, c_1 \}$": I guess this is in fact $\{ v_k, v_1 \}$ p.29 l.898: I am not sure I understand how the cycles are sampled: is it each with probability $1/3$? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper addresses the limitations of the results, which are grounds for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer greatly for their extremely thorough review of our paper. We are pleased they found the results of interest and importance, as well as finding the presentation well-written. No general points were raised, but many (very helpful) minor points were. We just wanted to highlight a few here. (4) p.3 l.103. This follows from the Dirichlet characterisation (Prop B.1, with the corrected factor of 1/2). Fix $k$ and take $z_i \coloneqq \mathbf{1}\{i = k\} - 1/n$. This gives Rayleigh quotient $n/(n-1) \cdot \sum_l q_{k,l}$. Since $\sum_k \sum_l q_{k,l} = 2$, we can choose $k$ so that $\sum_l q_{k,l} \le 2/n$. Hence, $\lambda \le 2/(n-1) \le 4/n$. (6) p.5. You are correct that this version is for empirical averages, *not* mixing/similar. We shall clarify this. (7) p.5 in l.200 and l.202. This triangle merely indicates the end of a remark/similar, analogously to a square at the end of a proof. We first saw this used in one of Geoffrey Grimmett's books, and liked it. (11) p.16 l.548. We weren't aware that this is just the convergence theorem on Hilbert spaces. Functional analysis is far from our area of expertise. And, actually, looking online, we don't see exactly how this relates. If you would be able to provide a reference, that would be very appreciated. We agree a Stack Exchange post is not the best reference, but we found the proof referenced quite elegant, so we decided to cite it. (19) p.21. This was supposed to be $p_{i,j}(\rho)$. At one stage, we were using $\pi_{i,j}$ for the true probabilities and $p_{i,j}$ for the estimated ones, but decided it was better to be explicit, writing $p_{i,j}(\cdot)$ where $\cdot$ varies. Apologies for missing this one; it will be corrected. (22) p.22. You're right: *uniform* ergodicity is required. Fortunately, chains on compact spaces which converge in TV are almost always uniformly ergodic in practice, and this is the case for the noisy Elo chain.
In fact, positive curvature immediately implies this. We will address this lack of clarity in the camera-ready version. (24) p.22 Theorem D.7. You're right again: we meant *1*-Lipschitz functions. This is of course without loss of generality since, if $f$ is L-Lipschitz, then $g(x) := f(x)/L$ is 1-Lipschitz, so this is enough for the proof. But we would need to add in the dependence on the Lipschitz constant as you rightly point out. We will clarify this in the revised version of our manuscript. (27) p.23 l.741-743 (31) p.26 l.831. Good spot. We changed the definition of $\delta$ (namely, the power of $n$) whilst writing the paper, and have clearly made a couple of mistakes. Hopefully, there is no cause for doubt that the argument works, but we definitely should correct the numbers so that it's all above board. (32) p.26 l.834. Good spot, we have conflicting definitions of $t_{\mathrm{mix}}$, differing by a factor 800. We shall correct this, making sure that the correct constants appear in the appropriate places. (36) p.29 l.898. We first sample a permutation $\Sigma$ from the Birkhoff-von Neumann decomposition; we then decompose $\Sigma$ into disjoint cycles; for each disjoint cycle we sample one of the three matchings described in the bullet list in the paper, each with probability $1/3$. So at the end, edges coming from all cycles in $\Sigma$ will appear in the final sampled matching, but not all edges of a given cycle, only a vertex-disjoint subset. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: I thank the authors for their response and hope my feedback has been helpful. Regarding the "convergence theorem on Hilbert spaces", I actually meant projection, not convergence, and it seems indeed that many textbooks never mention the contraction property of the orthogonal projection, so a reference is not easy to find. Overall this was a very minor point.
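The Rayleigh-quotient argument in point (4) of the rebuttal can be checked numerically. Below is a minimal sketch under our own assumptions (a uniform matching distribution over pairs and a hypothetical helper `rayleigh_quotient`); for the rebuttal's test vector the quotient evaluates to $n/(n-1) \cdot \sum_l q_{k,l} = 2/(n-1)$, which is indeed at most $4/n$.

```python
from itertools import combinations

def rayleigh_quotient(q, z):
    """Dirichlet form over L2 norm, following the rebuttal's Prop B.1
    characterisation (with the corrected factor of 1/2)."""
    n = len(z)
    num = 0.5 * sum(q[i][j] * (z[i] - z[j]) ** 2
                    for i in range(n) for j in range(n))
    den = sum(zi ** 2 for zi in z)
    return num / den

n = 10
# uniform matching distribution over unordered pairs, symmetrised so that
# the total mass sums to 2, matching the rebuttal's normalisation
q = [[0.0] * n for _ in range(n)]
pairs = list(combinations(range(n), 2))
for i, j in pairs:
    q[i][j] = q[j][i] = 1.0 / len(pairs)

# rebuttal's test vector z_i = 1{i = k} - 1/n, for a k with small row sum
k = min(range(n), key=lambda r: sum(q[r]))
z = [(1.0 if i == k else 0.0) - 1.0 / n for i in range(n)]
bound = rayleigh_quotient(q, z)
assert bound <= 4.0 / n  # spectral gap lambda_q <= Rayleigh quotient <= 4/n
```

For the uniform distribution every row sums to $2/n$, so the bound is tight up to the constant: the quotient equals $2/(n-1)$.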
Summary: This paper provides convergence guarantees for the time-averages of a natural dynamics (Elo), modeled on the Bradley-Terry-Luce framework. If the system has $n$ players and $t$ is the number of time steps, the error is shown to be $(e^{4M}/\lambda_q)(n^{-\alpha} + \log(n) \sqrt{1/t})$ with probability $1 - n^{-\beta}$ in $\ell_2$-norm, where $\alpha, \beta > 0$ are universal constants (under assumptions on the learning stepsizes), $M > 0$ is the truncation constant and $\lambda_q$ the spectral gap of the matching graph (tournament design). The authors further investigate the problem of finding efficient matching graphs (tournament designs) to accelerate the convergence of the Elo process. Although the analysis is *online* (the Elo ratings are adapted along the matches), the convergence guarantees nearly match the best known *offline* guarantees (computing the ratings in batches, after all the matches have been played). Strengths: - The model is clear and well explained. - The result is pretty difficult. Despite its difficulty, there is a real effort of presentation, to provide intuition and to split the proofs into comprehensible steps (e.g., section D.4). - The statements are clear and the results are interesting. Weaknesses: ### (minor) Reordering a few results? I believe that slightly reordering the results in section 2.1 would help to streamline the paper. For instance, remark 2.4 (plus lines 194-195) would be better *after* the statement of Theorem 2.5 in my opinion, because they make more sense once one has read the main statement. It isn't clear at first glance that Theorems 2.7 and 2.8 are preliminary results of Theorem 2.5, and it confused me a little. Although they are interesting on their own, it may be worth putting them in a dedicated subsection. ### (minor) Typography The paper does not exploit the large text width of NeurIPS' template.
For instance: - at lines 597 and 599, the $\le$ and $=$ are misaligned - at line 606, the second block of aligned equations does not require a linebreak (from what I can measure). Other equations could enjoy such improvements (e.g., line 829). Technical Quality: 3 Clarity: 3 Questions for Authors: ### General questions - Analyzing this Elo system seems very natural to do with an SGD approach. Is there any such analysis in the literature? - Why do the asymptotic convergence guarantees depend on the number of players? Is this an inevitable feature? I would expect that, if the stepsize is small enough (or vanishing), the system overall converges to the true Elo values. Is this not the case? - From a more SGD viewpoint and looking at the average dynamics (the dynamics in line 18 are a Robbins-Monro scheme of a continuous-time gradient descent), is the objective function convex? If so, SGD is known to have a few theoretical guarantees and I am curious to know how they compare with your results. ### Miscellaneous - At line 178, you say that Elo doesn't converge in total variation. Is this because the learning stepsize is constant? At the next line, you seem to claim that this comes from the fact that $X^t$ is finitely supported while $\pi$ has continuous support. It seems wrong to me because if $\hat{\mu}_n$ denotes the empirical distribution (finitely supported) obtained by sampling the Gaussian $\mu = \mathcal{N}(0, 1)$, I think that $$ ||\hat{\mu}_n - \mu || \rightarrow 0 $$ almost surely, where the norm is the total variation distance. - You use the terminology "negatively curved" for "contraction" and "non-positively curved" for "non-expansive". Is there a reason for this? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The guarantees only hold under the assumption that *there exists* a true rating of players, $\rho$, itself generating the outcomes of matches.
It is known that in general, the strength of human players does not follow a total order, and is more rock-paper-scissors-like ($i$ beats $j$, $j$ beats $k$ yet $k$ beats $i$). ### Miscellaneous - Despite the effort provided by the authors to make the proof clear, the review time was too short for me to provide a technical review of the results, especially because the present work is not exactly my domain specialty. Hence the present review is mostly high-level and the confidence level is low. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate that the reviewer found our results interesting. We thank the reviewer for their comments about presentation and typography: we will definitely take them into consideration when preparing the revised version of our manuscript. We now reply to their general questions. 1. We have not been able to find an analysis of Elo with an SGD approach in the literature. See also point 3 below. 2. We are not exactly sure what the reviewer means by this, so we kindly ask them to point out if our reply does not address their question. The convergence rate needs to depend in some way on the number of players, since each player needs to have played at least once for convergence to happen. If the step size vanishes, then yes, the system converges to the true ratings (this is a consequence of Thm 2.5 in our paper). However, a non-vanishing step-size, no matter how small it is, will result in a non-zero bias. 3. The function is convex and we agree we could have studied Elo using known theoretical guarantees for SGD. However, using off-the-shelf theorems for SGD, as far as we know, results in non-optimal rates and guarantees only in expectation. That does not mean that a more sophisticated analysis from an optimisation point of view could not address these issues, but then we are not sure this would actually yield a simpler analysis. In addition to that, we believe a Markov chain analysis has its own intrinsic value since it allows us to separate the contributions to the convergence rate of the bias and of the mixing time of the underlying process. Miscellaneous questions: 1. We do not think the example mentioned by the reviewer is correct. We believe the total variation distance between the empirical distribution of a sequence of Gaussian samples and the Gaussian distribution itself is actually equal to one.
For a reference, see the first page of "No Empirical Probability Measure Can Converge in the Total Variation Sense for all Distributions" by Luc Devroye and László Győrfi in The Annals of Statistics, Vol. 18, No. 3 (Sep., 1990), pp. 1496-1499. \ The reason Elo does not converge in total variation, however, is actually simpler: when we fix the starting point of the process, after t steps, the distribution is supported on a deterministic set of finite size: for each Elo step, you can choose one of the $n \choose 2$ pairs of players and we have at most two outcomes. Assuming the invariant distribution of the process is atomless, then the total variation distance between the distribution of Elo and the invariant distribution cannot converge to zero as t goes to infinity. We further remark that this observation has been made before by Aldous (reference [7] in our paper). 2. We use the term curvature in the sense of Ollivier because it is a common concept in the analysis of Markov chains. We can try and make our writing clearer for a broader audience. About the limitation pointed out by the reviewer: we agree the Bradley-Terry-Luce model is not necessarily an accurate representation of reality. It would be very interesting to study extensions of the Bradley-Terry-Luce model beyond linear orderings of players, but this is outside the scope of our contribution. --- Rebuttal Comment 1.1: Comment: Thank you. You have addressed most of my concerns (including question 2). For the miscellaneous question 1, I was mistaking convergence in distribution for convergence in TV. For my failed counter-example you can simply set $A_n := \mathbb{R} \setminus \mathrm{supp}(\mu_n)$ and you get $\mu(A_n) = 1 \ne 0 = \mu_n(A_n)$.
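To make the dynamics discussed in this thread concrete, here is a minimal simulation sketch (not the authors' code; the number of players, step size, and horizon are all illustrative): online Elo updates under a Bradley-Terry-Luce model with uniformly random matchings, together with the running time-average of the ratings. With a small constant step size, the time-average should land near the true (centered) ratings, up to the bias induced by the non-vanishing step size.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n = 6                                  # number of players (illustrative)
rho = rng.normal(0.0, 1.0, n)
rho -= rho.mean()                      # true BTL ratings, centered

x = np.zeros(n)                        # online Elo state
avg = np.zeros(n)                      # running time-average of the ratings
eta = 0.05                             # constant step size -> small non-zero bias
T = 100_000
for t in range(T):
    i, j = rng.choice(n, size=2, replace=False)      # uniform random matching
    win = rng.random() < sigmoid(rho[i] - rho[j])    # BTL match outcome
    g = win - sigmoid(x[i] - x[j])                   # Elo update term
    x[i] += eta * g
    x[j] -= eta * g                                  # zero-sum update keeps ratings centered
    avg += (x - avg) / (t + 1)                       # incremental time-average

err = float(np.linalg.norm(avg - rho))
```

With these (made-up) settings, `err` comes out far smaller than the norm of the true rating vector, illustrating the convergence of the time-averages rather than of the last iterate.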
Summary: The authors analyze Elo ratings under the BTL assumption and show that the Elo update system converges to the true ratings of the players with high probability. They do this analysis for an online setting, compared to other work that has analyzed this for the offline setting (where a pool of data is collected). They then show how this leads to a tournament design. Strengths: Elo is a popular algorithm that is used from chess to sports like football and baseball. As such, it is of broad interest, and these results show why this method works. The authors do a thorough analysis and provide empirical data to support their conclusions. Weaknesses: The work analyzes an algorithm that is already popular, and so the contribution does not bring any new ideas for improving the rating system. This raises the question of the magnitude of the contribution. Technical Quality: 3 Clarity: 4 Questions for Authors: Due to the reviewer's lack of expertise in this area, this is left blank Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and in particular for highlighting that our work provides a thorough analysis of an algorithm with broad interest. About the weakness highlighted, first, we believe that before improving upon a technique, it is important to understand that technique well. Elo is popular, as pointed out, yet has very little theoretical backing. Our paper is a step in this direction, and further work might be inspired by our theoretical understanding to obtain improvements on Elo. In particular, we highlight the biased nature of the Elo updates: it would be nice to design an unbiased update scheme *without* knowledge of the true ratings, but this seems difficult. Second, our observations about tournament design might be of practical interest to schemes which learn from pairwise comparisons, potentially leading to improvements. --- Rebuttal Comment 1.1: Comment: Thanks for your response
Summary: The paper presents a novel theoretical analysis of the Elo rating system, a well-established method for ranking player skills in online gaming and sports contexts, particularly in chess. The authors analyze the Elo system under the Bradley-Terry-Luce (BTL) model, employing techniques from Markov chain theory to demonstrate that the Elo system can learn model parameters at a competitive rate compared to state-of-the-art methods. The paper also explores the implications of this analysis for the design of efficient tournaments and draws parallels with the fastest-mixing Markov chain problem. The authors provide rigorous proofs and a discussion of limitations, showcasing the robustness of their analytical approach. Strengths: 1. **Theoretical Rigor**: The paper offers a thorough theoretical analysis of the Elo system, contributing to a deeper understanding of its performance under the BTL model. The use of Markov chain theory to analyze convergence rates adds significant value to the existing literature on the Elo ranking system. 2. **Connection to Tournament Design**: The paper's exploration of tournament design and its relation to the fastest-mixing Markov chain problem is a novel contribution, potentially offering practical insights for tournament organizers. 3. **Clarity and Structure**: The paper is well-structured, with clear explanations of the methods used and the results obtained. The inclusion of detailed proof guidelines and comparisons with related work helps readers understand the research. Weaknesses: 1. Many new variables are defined, but with little detailed explanation, making the paper challenging to understand. 2. The theoretical result in Theorem 2.5 is based on the somewhat strong assumption that “The (deterministic) time T is a burn-in phase, which allows the Elo ratings to get 'near' the true skills, after which we start averaging”.
So, this theorem only gives the convergence rate after the burn-in phase, during which the Elo ratings may get 'near' the true skills. It would be better if a convergence rate from the starting phase could be given, even though this may require some conditions, e.g., that the winning rates among players differ enough. 3. Does the convergence rate in this paper definitely outperform those in other papers? In section 2.2, this paper says “Our result improves the constant in front of M”, but the estimation error of this paper scales with $(\log n)^2$ while those of previous works scale only with $\log n$, so they are better than this paper's result. 4. What is the technical novelty of this paper? The form of the convergence rate result is very similar to those of previous works. 5. If the convergence rate is similar/comparable to previous work, the innovation of this paper is limited, because there already exist convergence rate analyses of the Elo rating system, and the usage of Markov chain theory does not gain a significant improvement in the convergence rate. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In Definition 2.3, what do $d_{ij}$ and $\Delta_{ij}$ mean? There seems to be a lack of detailed explanation of these symbols' physical meaning. 2. What does the symbol in the “equation” in line 110 mean? This symbol appears frequently throughout the paper, but with little explanation. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Please refer to weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and in particular for highlighting that our paper offers "a thorough theoretical analysis of the Elo system". We first address the weaknesses highlighted. 1. We will try and add further explanations for the various quantities used in the paper in the camera-ready version. Since the camera-ready version allows for an additional page, this should not be too hard. 2. We believe with some technical work it is possible to obtain a (potentially weaker) bound that does not require a burn-in time. This is due to the fact that the burn-in time T is at least a factor of log n smaller than the total "averaging" time t. \ Perhaps more pertinently, the time T is deterministic, and fully determined by measurable statistics of the system: the spectral gap λ_q of the choices q and the number n of players. It isn't some ill-defined concept of 'near', which can't actually be known. 3. Points 3-5 essentially ask what the improvements over previous work are. We would like to point out that our work is, as far as we know, the first to analyse "online" Elo, which is what is actually used in practice. Previous work obtained similar bounds only for *offline* algorithms for Bradley-Terry-Luce estimation, exploiting ad-hoc algorithms that are not actually commonly used in practice. \ Using a Markov chain approach allows us to obtain concentration results. Moreover, we believe it gives us a finer understanding of the process. Mixing is perhaps the most important of these, as it measures the time for the system to 'relax' to equilibrium. In other words, after mixing time steps, the system is close to the true ratings whp, regardless of the initial state. Since Elo is a commonly used algorithm in practice, we believe these results are interesting on their own. \ As a minor point, to reply to point 3, we remark that if M > log log n, then the constant in front of M is actually more important than the additional log(n) factor. Questions: 1.
$\Delta$ is simply the Laplacian of the Markov chain with transition rates $(q_{i,j})$. This is a standard matrix used in the analysis of Markov chains. You can view $d_{i,j}$ as the "degrees" of the graph with edge weights $(q_{i,j})$. 2. $x \asymp y$ simply means that there are absolute constants $c_1,c_2$ such that $c_1 x \le y \le c_2 x$. We will add an explanation to the revised version of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. This paper focuses on online Elo ratings, whereas previous works have concentrated on offline settings. This problem setting is interesting to me and addresses most of my concerns. Thus, I will maintain my positive score. Additionally, Reference [1], which also analyzes online Elo ratings, may be worth discussing. [1] Yan, Xue, et al. "Learning to identify top elo ratings: A dueling bandits approach." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 8. 2022. --- Reply to Comment 1.1.1: Comment: Thanks to the reviewer for bringing up reference [1]. We will discuss it in a bit more detail in the revised version of our paper, but we wanted to briefly mention that their setup is quite different from our traditional online Elo scenario, and it is rather more similar to active ranking: instead of observing players play against one another according to a fixed schedule decided before any game has been played, in [1] they actively choose the next pair of players to match up based on the outcome of previous matches.
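As a small, hypothetical numerical illustration of the quantities explained in this reply (the rate matrix is random and not from the paper): the graph Laplacian $\Delta = D - Q$ built from a symmetric rate matrix $(q_{i,j})$, the "degrees" $d_i$, and the spectral gap $\lambda_q$ as the second-smallest Laplacian eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
Q = rng.random((n, n))
Q = (Q + Q.T) / 2.0                   # symmetric matching rates q_{i,j}
np.fill_diagonal(Q, 0.0)
Q /= Q.sum()                          # normalize so the rates sum to one

d = Q.sum(axis=1)                     # "degrees" d_i of the weighted graph
Delta = np.diag(d) - Q                # graph Laplacian of the rates
eigenvalues = np.sort(np.linalg.eigvalsh(Delta))
lambda_q = eigenvalues[1]             # spectral gap: second-smallest eigenvalue
```

Every row of `Delta` sums to zero (the all-ones vector is in its kernel), and since this random graph is connected, `lambda_q` is strictly positive.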
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
On the Robustness of Spectral Algorithms for Semirandom Stochastic Block Models
Accept (poster)
Summary: The paper studies spectral algorithms for recovery in the stochastic block model (SBM). Specifically, they consider a nonhomogeneous SBM where there are two communities $P_1, P_2$ and edges inside the communities occur (independently) with probability between $p$ and $\bar{p}$ while edges between the communities appear with probability $q$. This is a generalization of the standard SBM where the edges inside communities occur with probability exactly $p$. The goal is to take a graph generated from the nonhomogeneous SBM (but without the community labels) and recover the communities $P_1, P_2$. The nonhomogeneous modification seems to only strengthen the "signal" about the communities but is actually a useful test case for the robustness of various recovery algorithms for SBM -- in fact many well-known algorithms for SBM may break in the nonhomogeneous case. The nonhomogeneous model is a special case of the more general semi-random model where edges may be arbitrarily added within communities and edges connecting different communities may be arbitrarily removed. The sharp characterization of when exact recovery of the communities is possible was shown in [ABH16]. Semidefinite programming algorithms achieve the sharp threshold and succeed in recovery even in the more general semi-random model [FK01]. The goal of this paper is understanding spectral algorithms for recovery, which may be more efficient than semidefinite programming algorithms. The main results of the paper are that under a certain gap condition on $p,\bar{p},q$ that resembles the characterization in [ABH16] but is worse by some constant factor, spectral bisection based on the unnormalized Laplacian solves the recovery problem whereas using the spectrum of the normalized Laplacian provably fails. The authors also prove a similar result in a modified setting called the deterministic cluster model. The authors also run numerical simulations to support their results. 
Strengths: Spectral clustering algorithms are important for a variety of applications, and understanding their robustness through the lens of semi-random models could potentially help bridge theory and practice. The paper has a neat conceptual takeaway about using the unnormalized Laplacian as opposed to the normalized Laplacian. Weaknesses: The concrete theoretical results are somewhat weak. The authors require an upper bound on the maximum edge probability that is comparable to $p,q$ -- this type of assumption isn't necessary for other algorithms such as SDPs. They also don't achieve the sharp information-theoretic threshold of [ABH16] that other algorithms do. The experiments are run on purely synthetic data, but it would be more compelling if there were experiments on real graphs. It is not clear that the nonhomogeneous SBM model proposed in the paper is a reasonable model for graphs that appear in practice because of this upper bound on the maximum edge probability and also the fact that no perturbations are allowed for the edges between different communities. Technical Quality: 3 Clarity: 3 Questions for Authors: . Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the helpful feedback! We address each concern here. > The authors require an upper bound on the maximum edge probability that is comparable to $p, q$ -- this type of assumption isn't necessary for other algorithms such as SDPs. When the degrees are sufficiently high (i.e., $p, q$ are around $n^{-1/2}$), we do prove that even with arbitrary edge additions within clusters, unnormalized spectral clustering achieves perfect recovery (please see the DCM – Model 2, lines 202-211 on page 5 and the corresponding Theorem 2). This model actually allows more power than adding in-cluster edges to an SBM. Indeed, the adversary can be adaptive, meaning they can draw arbitrary internal edges _after seeing_ the randomized crossing edges. As far as we are aware, such an adversary was not studied even for SDPs or other (more complex) polynomial time algorithms. However, the stronger model above requires high degree (around $\sqrt{n}$) to guarantee recovery, and it is an open question if we can push this result to the case of $p, q \sim (\log n)/n$ (see the discussion in Appendix A). A promising general direction here is to develop a better understanding of entrywise eigenvector perturbations for high-rank signal matrices. To our knowledge, our analysis is the first to give such a guarantee, but it will be interesting to do this for larger perturbations. > They also don't achieve the sharp information-theoretic threshold of [ABH16] that other algorithms do. Our bounds are within constants of the information-theoretic threshold, but yes, we do not know if it is possible to match it. Please refer to the response to reviewer e2Qn. > It is not clear that the nonhomogeneous SBM model proposed in the paper is a reasonable model for graphs that appear in practice because of this upper bound on the maximum edge probability and also the fact that no perturbations are allowed for the edges between different communities. 
Note that even the "vanilla" SBM has been a useful model in practice, by way of developing algorithms, and our model is a strict generalization. Our results provide evidence that unnormalized spectral clustering is a useful algorithm even with a significant amount of perturbation (though our proofs come with the limitation of the upper bound on max edge probability or a high-degree requirement). At a high level, the reason to study semirandom models is to understand if a given algorithm has overfit to some statistical properties about its input that should be morally irrelevant; in this sense, we believe that even with the restrictions, our results show a good level of robustness for spectral clustering. (We also refer to the discussion in lines 65-72 of the introduction, and our response to reviewer e2Qn.) --- Rebuttal Comment 1.1: Comment: Thank you for the response and addressing my concerns/questions. I still think it would be very interesting and potentially impactful to see how the proposed algorithms perform on real-world graphs!
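As an illustrative sketch of the algorithm discussed here (sizes and probabilities are made up, not taken from the paper): unnormalized spectral bisection on a small nonhomogeneous two-community instance, where each in-cluster pair gets its own edge probability drawn from $[p, \bar p]$ while crossing edges stay at $q$, and the partition is read off from the signs of the second eigenvector of $L = D - A$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # two communities of size n/2 (illustrative)
labels = np.repeat([0, 1], n // 2)
p_lo, p_hi, q = 0.30, 0.50, 0.05         # in-cluster probs vary in [p_lo, p_hi]

A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        if labels[i] == labels[j]:
            prob = rng.uniform(p_lo, p_hi)   # nonhomogeneous in-cluster probability
        else:
            prob = q                         # crossing edges are homogeneous
        A[i, j] = A[j, i] = float(rng.random() < prob)

L = np.diag(A.sum(axis=1)) - A           # unnormalized Laplacian
_, vecs = np.linalg.eigh(L)              # eigenvalues in ascending order
u2 = vecs[:, 1]                          # eigenvector of the 2nd-smallest eigenvalue
pred = (u2 > 0).astype(int)              # sign-based bisection
acc = max((pred == labels).mean(), (pred != labels).mean())  # up to label swap
```

With this strong a signal, the sign pattern of `u2` recovers the planted bipartition essentially perfectly.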
Summary: The authors investigate two-community spectral clustering algorithms applied to the 'Nonhomogeneous Symmetric Stochastic Block Model' (NSSBM). This model is characterized as a more encompassing semi-random framework, permitting more lenient variation in the choice of in-cluster edge probabilities for each node pair, in comparison to the conventional '(Symmetric) Stochastic Block Model' (SBM). The authors subsequently apply the results derived for the NSSBM model to a more adversarial deterministic model, incorporating the necessary adjustments. Finally, numerical simulation results are presented to support the theoretical insights from their study. Strengths: The idea makes sense and the problem is of interest to the community. The authors provide a theoretical guarantee of strong consistency of the spectral method for the NSSBM, which can have a high-rank ($\omega(n)$) adjacency matrix, whereas the counterpart is rank 2 for the traditional SBM model at the population level. Lemma B.14 ensures $\mathbf{u}^\star_2$ is always the second eigenvector of the Laplacian matrix even in the rank-$\omega(n)$ case (NSSBM). Weaknesses: (1) The NSSBM model that the authors propose is not that realistic, as perturbations may happen not only in in-cluster edges but also in out-of-cluster edges in practice. Adding edges only inside the clusters, or increasing the in-cluster edge probability, will only increase the signal-to-noise ratio (SNR) for the SBM model (Abbe 2018). (2) A comparison of the conditions for strong consistency for the NSSBM with those for existing models would help understand the paper better. Technical Quality: 3 Clarity: 3 Questions for Authors: Is it possible to extend the analysis to a model with varying choices of out-of-cluster probability for each pair of nodes, such that $q_{v, w}\in [\bar{q},q]$? It seems trivial at least at the population level, but I am not sure how the new change will affect the perturbation analysis.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the helpful feedback! We address each concern here. > Adding the edge only inside the clusters or promoting the in-cluster edge probability will only increase the signal-noise ratio (SNR) for the SBM model(Abbe 2018). This is typically the case for semi-random corruptions-- the SNR only improves compared to the uncorrupted (or purely stochastic) model. This means that the recovery problem is intuitively "easier", and it indeed is easier in an information theoretical sense. But as it turns out, algorithms for the stochastic case can fail. A classic example here is the semi-random planted clique problem, where a simple spectral algorithm can find cliques of size $\sim \sqrt{n}$ in the purely stochastic case, but it is known that the adjacency matrix-based spectral method fails under a monotone adversary and it is not known whether other spectral methods succeed to this threshold under the monotone adversary (see, e.g., pages 2-3 of https://timroughgarden.org/w17/l/l11.pdf). Thus in our setting, it is not clear at all if the simple spectral algorithm (which has a reputation for being "brittle") should achieve perfect recovery when we add edges within clusters (in fact, by our Theorem 3, the _normalized_ spectral clustering algorithm provably fails in such a model). So while it would be nicer to be able to show recovery under more general corruptions, even the simpler setting we considered requires several new ideas. >A comparison of the conditions of strong consistency for NSSBM with the existing model should help understand the paper better. A comparison of these conditions can be found in lines 191-195 of the submission and in equations (14) and (15) in Appendix F (where we show additional empirical results): when $p,q = \Theta(\log n / n)$, the threshold we obtain is within a constant of that of the SSBM. We can emphasize this more in the final version. 
> Is it possible to extend the analysis to the model with variant choices of out-cluster probability for each pair of nodes such that $q_{v,w} \in [\overline{q},q]$? It seems trivial at least at the population level but I am not sure how the new change will affect the perturbation analysis. This is a very interesting question, and it is posed as an open problem in Appendix A. In our current analysis, we crucially use the fact that the ideal second eigenvector can be easily characterized --- i.e., it is $u_2^{\star} = [-1_{n/2} \oplus 1_{n/2}]/\sqrt{n}$. Further, this means that the Laplacian corresponding the internal edge additions remains orthogonal to $u_2^{\star}$. But if we allow changing the out-cluster probabilities, then even at a population level, $u_2^{\star}$ need not have a simple structural form, which causes the current techniques to fail. We will add more of this discussion to the final version. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. I happily maintain my current evaluation.
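The structural fact used in this reply, that the Laplacian of edges added *within* clusters annihilates $u_2^{\star} = [-1_{n/2} \oplus 1_{n/2}]/\sqrt{n}$ (both endpoints of an in-cluster edge share the same $u_2^{\star}$ value), can be checked numerically in a few lines (a toy check, not code from the paper):

```python
import numpy as np

n = 8
# ideal second eigenvector: -1 on the first community, +1 on the second
u2 = np.concatenate([-np.ones(n // 2), np.ones(n // 2)]) / np.sqrt(n)

def edge_laplacian(i, j, n):
    """Laplacian of the single edge (i, j): +1 on the diagonal, -1 off it."""
    L = np.zeros((n, n))
    L[i, i] = L[j, j] = 1.0
    L[i, j] = L[j, i] = -1.0
    return L

# Edges added within a community: endpoints have equal u2 entries,
# so each such edge Laplacian maps u2 to the zero vector.
L_in = edge_laplacian(0, 2, n) + edge_laplacian(5, 7, n)
# A crossing edge, by contrast, does not annihilate u2.
L_cross = edge_laplacian(1, 6, n)
```

This is exactly why in-cluster edge additions leave $u_2^{\star}$ an eigenvector of the perturbed Laplacian, while changing crossing probabilities would destroy this simple structure.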
Summary: This paper considers several semirandom variants of the SBM, and investigates whether spectral algorithms achieve exact recovery. The authors give guarantees for the performance of spectral clustering from the unnormalized Laplacian in both a nonhomogeneous model, and a model in which the adversary has control over intra-community edges. Surprisingly, the normalized Laplacian is shown to perform worse than the unnormalized Laplacian in the nonhomogeneous model. Strengths: As far as I know, this is the first work that provides guarantees for spectral algorithms in semirandom settings. While the analysis is an adaptation of the work of Abbe et al. on entrywise eigenvector analysis, it requires some new ideas. In particular, the models studied are not low-rank. It is very interesting that the unnormalized Laplacian does better than the normalized Laplacian! Weaknesses: Sharp constants are not given. Technical Quality: 4 Clarity: 4 Questions for Authors: Could you please give an indication of which parts of your analysis could be tightened to yield sharp constants? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the supportive review! > Could you please give an indication of which parts of your analysis could be tightened to yield sharp constants? In the current analysis, the Lemmas B.5 and B.6 have constants that we have not optimized, for the sake of clarity in exposition. While it is possible to improve constants slightly by a more tedious analysis, it is an open problem whether they can be improved all the way to the information-theoretic threshold. This question is also conceptually interesting: for certain semirandom models and recovery regimes, the recovery threshold can shift [MPW16], whereas in other cases it does not. We will add this discussion to the final version of the paper. --- Rebuttal Comment 1.1: Title: Acknowledgement of rebuttal Comment: Thank you for the clarifications!
Summary: Spectral clustering has been a popular unsupervised method among mathematical statisticians and theoretical ML researchers. The first analysis of spectral clustering using perturbation analysis (Ng et al., 2001) appeared more than 20 years ago, which, of course, has a lot of limitations: sparsity is not accounted for, and the number of clusters does not grow as the number of vertices increases. The two major results that take care of these aspects are (Rohe et al., 2011) and (Lei and Rinaldo, 2015), which establish weak consistency of spectral clustering. On the other hand, there is a lot of literature on strong consistency, and in this context, the authors introduce the problem very well. This paper is aimed at studying the robustness of spectral algorithms to particular types of model misspecification, using semirandom adversaries. This paper provides several results on the kinds of semirandom models under which spectral clustering exactly recovers ground-truth bi-clusters. Strengths: The problem is introduced very well, and even though there are a lot of technical details, one can easily go through all the results. While the present problem setting and results are based on previous results in the literature, the contributions are respectable and provide more insights into the robustness of spectral clustering. Particularly, Theorem 3 is very interesting, which, along with Theorems 1 and 2, shows that the unnormalized case is more robust to monotone adversarial changes. Weaknesses: (1) One of the biggest problems with keeping the number of clusters constant and then providing results for "n" large is that in most practical situations the number of clusters grows with n. In that sense, the weak consistency results that are available in the literature are much more appealing. (2) Limited to bi-partitioning. Technical Quality: 4 Clarity: 4 Questions for Authors: (1) This paper does not deal with weak consistency results.
Can you comment on extending this kind of analysis to the weak consistency case, where one allows the number of clusters to grow? (2) One of the most important factors in studying spectral clustering under block models is sparsity. I could not find any comment about this in the paper. (3) Why the bi-partitioning constraint? Is it because, when we consider the top K eigenvectors, one needs to perform K-means, which can introduce further problems? (4) The Cheeger's inequality based description of hard instances for the normalized case is nice but not difficult to perceive. Considering that we have higher-order Cheeger inequalities, how do we extend this analysis to the multi-partition case? Just looking for a comment if the authors happen to know about these results. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the helpful feedback and questions! We address each concern here. > One of the biggest problems with keeping the number of clusters constant and then providing results for "n" large is that in most practical situations, the number of clusters grows with n. In that sense of weak consistency, results that are available in the literature are much more appealing. As you point out, this may constitute a gap between practical scenarios and the theoretical guarantees that are proven within the SBM community. To our knowledge, most of the literature that gives provable guarantees for this problem consists of results where $k$ is independent of $n$ [Abbe17, page 114]. > This paper does not deal with weak consistency results. Can you just comment on performing this kind of analysis to the weak consistency case where one allows the number of clusters to grow. As highlighted by the survey of [Abbe17], spectral methods actually do not succeed up to the KS threshold, and such thresholds in fact cannot be achieved under a monotone adversary [MPW16]. While there are spectral partitioning algorithms based on higher-order Cheeger inequalities (e.g. [LOT11]), these work with the _normalized_ Laplacian and apply to a different generative model from the one we study. By virtue of our Theorem 3 (and our answer to your last question), these algorithms seem unlikely to be robust to the kinds of perturbations we study in this work. It would be interesting to see whether spectral methods can achieve the threshold obtained by [MPW16] under a monotone adversary. > One of the most important factors in studying spectral clustering under blockmodels models is sparsity. I could not find any comment about this in the paper. The literature on block models often works in one of two regimes, the "sparse" regime where $p, q$ are $C/n$ for some constant $C$, and the "high degree" regime, where $p \gg (\log n)/n$. 
Our work falls into the latter setting, similar to many of the prior strong-consistency results. > Why is the bi-partitioning constraint? Is it because, when we consider top K eigenvectors, one needs to perform K-means which can introduce further problems? Yes, if we consider more eigenvectors, simply considering the sign does not suffice for obtaining a partition. For example, the works on higher-order Cheeger inequalities need more careful partitioning algorithms to obtain guarantees [LOT11]. Also, to our knowledge, it is not known whether spectral methods achieve the information-theoretic bound for exact recovery in the case of $k > 2$, even for standard generalizations of the SBM to $k > 2$ communities. Secondly, analyzing the k=2 setting is already quite non-trivial, e.g., proving that spectral algorithms achieve strong consistency for two communities was a longstanding open problem until relatively recently [AFWZ19]. While it is true that many papers in property-testing [CKKMP18] do consider the general setting of $k$, in the setting of SBMs, designing _any_ algorithm for recovery for $k>2$ often requires much more sophisticated algorithmic machinery: like the recent result of [MRW24], STOC’24 (for general $k$), generalizing the result of [DDNS21], FOCS’21 (for $k = 2$) for the robust community recovery regime. Such results are less appealing in practice than those concerning spectral algorithms, which are commonly used. > The Cheeger's inequality based description of hard instances for the normalized case is nice but not difficult to perceive. Considering that we have higher-order cheeger inequalities, how do we extend this analysis to the multi-partition case? Just looking for a comment if authors happen to know about these results. 
The Cheeger-based intuition of the hard instance for $k=2$ carries over to larger $k$: if the edges added by the adversary create a new, different, $k$-way sparsest cut, the embedding in the bottom $k$ eigenvectors of the normalized Laplacian should reflect (up to $poly(k)$ and square root losses) this new $k$-way sparsest cut as opposed to the planted one. [LOT11] Multi-way spectral partitioning and higher-order Cheeger inequalities (https://arxiv.org/abs/1111.1055) [MPW16] How Robust are Reconstruction Thresholds for Community Detection? (https://arxiv.org/abs/1511.01473) [Abbe17] Community Detection and Stochastic Block Models (https://arxiv.org/abs/1703.10146) [CKKMP18] Testing Graph Clusterability: Algorithms and Lower Bounds (https://arxiv.org/abs/1808.04807) [AFWZ19] Entrywise Eigenvector Analysis of Random Matrices with Low Expected Rank (https://arxiv.org/abs/1709.09565) [DDNS21] Robust recovery for stochastic block models (https://arxiv.org/abs/2111.08568) [MRW24] Robust recovery for stochastic block models, simplified and generalized (https://arxiv.org/abs/2402.13921) --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for your replies. Regarding allowing the number of clusters to grow: Check "Spectral clustering and the high-dimensional stochastic blockmodel" by Karl Rohe, Sourav Chatterjee, Bin Yu --- Reply to Comment 1.1.1: Comment: Thanks for pointing this out, we will add a pointer to this classic reference in the camera-ready version. However, note that the work only guarantees weak recovery and also assumes that $k$-means can be solved optimally, so the regime is different from ours.
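The sign-based spectral bipartition discussed in this thread (the $k=2$ case, where the sign of an eigenvector suffices) can be illustrated with a minimal sketch. This is the vanilla, non-robust algorithm on a clean two-community SBM; the parameters below are arbitrary illustration choices, not taken from the paper under review:

```python
import numpy as np

def sbm_adjacency(n, p, q, rng):
    """Adjacency matrix of a 2-community SBM; first n//2 nodes form community 0."""
    labels = np.array([0] * (n // 2) + [1] * (n - n // 2))
    probs = np.where(labels[:, None] == labels[None, :], p, q)
    upper = np.triu(rng.random((n, n)) < probs, 1).astype(float)
    return upper + upper.T, labels

def spectral_bipartition(A):
    """Partition nodes by the sign of the eigenvector of the
    second-largest adjacency eigenvalue (the k=2 sign trick)."""
    vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    v2 = vecs[:, -2]                 # eigenvector of 2nd-largest eigenvalue
    return (v2 > 0).astype(int)

rng = np.random.default_rng(0)
A, labels = sbm_adjacency(200, 0.6, 0.05, rng)
pred = spectral_bipartition(A)
# accuracy up to the global label flip
acc = max(np.mean(pred == labels), np.mean(pred != labels))
```

With this well-separated choice of $p, q$ the sign partition recovers the planted communities; for $k > 2$ one would instead embed into the top $k$ eigenvectors and run a rounding step such as $k$-means, as the rebuttal notes.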
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their positive comments and their support of our paper. We are excited to hear the feedback and have responded to questions/provided some clarifications in comments directly responding to each review.
NeurIPS_2024_submissions_huggingface
2024
Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment
Accept (poster)
Summary: During training of Diffusion Models, images are corrupted with standard Gaussian noise and the Diffusion Model is tasked to denoise the corrupted images. The Gaussian noise used during training is often sampled independently of the clean image. This submission presents the following "Immiscible Diffusion" method: within each mini-batch, we should reassign the pairs of images and noise samples, so that each image is corrupted with a ‘closer’ noise sample. The submission claims "Immiscible Diffusion" is faster to train to reach the same FID threshold, with quantitative evaluation, and "can catch more details and more feature of objects" with some cherry-picked qualitative results. Strengths: **S1/** The metaphor of “immiscible fluids” is creative. **S2/** The described method is simple. **S3/** L489: The authors say they will publish their code (although the paper claims it is only one line) Weaknesses: The main weaknesses and reasons to reject the paper are the following: **W1/** The notations and mathematical derivations are imprecise, unnecessarily complex, and potentially incorrect. **W2/** Similar methods have already been presented in previous literature. The submission does not discuss these existing works, nor compare with them. **W3/** The presentation, including the figures quality and writing, needs to be improved. **W4/** The presented results are not convincing. **More details:** **W1/ The notations and mathematical derivations are imprecise, unnecessarily complex, and potentially incorrect.** * The Equations in the Section 3.3 are very hard to understand, and I am not sure they are necessary as they make understanding more difficult. The section could simply be rephrased as follows without the need to introduce complex and potentially incorrect equations: When the noise samples and data points are sampled independently, the optimal (in MSE sense) denoising prediction from a completely noisy observation is the mean of the data distribution. 
If we additionally know that data points are only corrupted with "close" noise, then the optimal denoising prediction from a completely noisy observation is, intuitively, the average of only "close" data points, which can allow faster convergence to a data point at generation time. * The notation "When $t \to \infty$, $p(x_t|x_0) = N(0, I)$" in Equation 1 is not precise (equality? convergence?). * L155 states that Equation 2 indicates "the distributions of the denoised images for any noise data-point are the same, which is equal to the distribution of the image data", but Equation 2 does not contain any "denoised images", only clean images $x_0$ and their corrupted versions $x_t$. * In Equation 3, the formulas for $a$ and for $b$ are inverted. Also the "minus" sign should be in front of $a$ and not $b$. The equation further does not really make sense according to the paper's notation, since "$p(x_0|x_t) = p(x_0)$" only "when $t \to \infty$". It is not mathematically correct to replace only a few terms of an equation by their limit while keeping the dependence on $t$ in other terms. $a$ and $b$ are said to be constant, but their formulas clearly show they depend on $t$ (or are $a$ and $b$ the limits "when $t \to \infty$"?). * L158 states that the average of a large number of images is a solid color. This is incorrect; see, for instance, Figure 1 in **[1]**. * In Equation 5, the notation ${\{ \}}_{\textnormal{batch}}$ is not introduced and it is not clear. It seems to contradict Equation 6. * Equation 7 is not justified. Why would $f$ only depend on the difference between $x_t$ and $x_0$? **References:** **[1]** Torralba, Antonio, and Aude Oliva. "Statistics of natural image categories." Network: Computation in Neural Systems 14.3 (2003): 391. **W2/ Similar methods have already been presented in previous literature. 
The submission does not discuss these existing works, nor compare with them.** * The notions of "coupling" and "optimal transport" between data points and noise samples have already been presented in the previous literature. See for instance **[2]** and **[3]**. * To understand the significance of this submission, it would be necessary to discuss these similar works and highlight the differences, if any, and compare with such methods if possible. * As such, I do not see any novelty in this work (if I missed any, the authors failed to highlight the differences by discussing related work). **References:** **[2]** Tong, Alexander, et al. "Improving and generalizing flow-based generative models with minibatch optimal transport." arXiv Feb 2023 + TMLR + ICML Workshop **[3]** Pooladian, Aram-Alexandre, et al. "Multisample flow matching: Straightening flows with minibatch couplings." arXiv April 2023 + ICML **W3/ The presentation, including the figure quality and writing, needs to be improved.** * The parallel with "Immiscible Fluids" does not bring a better understanding of the method; no physics equations are given. What should we understand from Figure 2a? Even if this is schematic, where would the noise and data distributions be in this image? * The quality of Figure 3 is very bad. The images for predicted noise are completely saturated. * The authors fail to show any ablation on the "quantization method" they introduce. It is not clear if this is really necessary, and the choices of fp8 and fp16 appear rather arbitrary. * Many sentences do not make sense to me: * L163: "T=20 means the layer of pure noise" * L160: what are "rich image classes"? * L162: "the predicted noises of different layers" * Many typos: * L89 "its" for diffusion models * text written in math font in equations * L422: "our method accelerate" * L471: "our immsicible diffusion work" (2 typos) * Caption Figure 5: "can catch more details and more feature" * Several inconsistencies: * L93: DDIM is 10 steps. 
<-> the authors experiment with DDIM but never take less than 20 steps. * linear_sum_assignment from Scipy (Algorithm 1) <-> Hungarian matching (text) * Some generated images in Figures 8 and 9 are completely black. **W4/ The presented results are not convincing.** * All qualitative results presented in the main submission are cherry-picked. The only non-cherry-picked results are the generated images shown in the appendix, and those qualitative results seem very similar to me (no significant qualitative improvement). * The quantitative results are difficult to trust. There seem to be inconsistencies between the results: * The "-0.98" in the third row of Table 5 is inconsistent with the first two rows. * The plots have different scales and are not clear (why does the plot for ImageNet in Figure 4 start at 20k steps and not 0?) * The quantitative results mix consistency models and diffusion models in an unorganized way (e.g., the columns of Table 1 alternate between them), making it difficult to understand the results. * It is not clear why some datasets use the diffusion approach and some use the consistency approach. Did the authors experiment with both for each dataset and only report the best results? * The datasets and evaluation settings are not clearly presented. How many images are there in each dataset? Was the dataset split into training, validation and test sets? Are the reported FID scores computed on the training, validation or test set? The number of sampling steps is not reported for all experiments (e.g., Table 2). Only part of the quantitative results are shown (thus potentially cherry-picked too, e.g., no quantitative results for Stable Diffusion) ---- **Changes following the rebuttal and authors-reviewers discussion:** **Reduced soundness from 2 to 1**: The scope of the paper has been significantly expanded with the rebuttal (conditional generation, flipped-dimension OT, decreased noise levels). 
Some arguments in the paper do not hold anymore (e.g., the average of all images). The revised mathematical proof remains imprecise and incorrect. **Reduced contribution from 2 to 1**: Discussions following the rebuttal made me dive more deeply into the Conditional Flow Matching paper (https://arxiv.org/abs/2302.00482). Most, if not all, findings of the reviewed submission seem to be already known and analyzed more deeply in the Conditional Flow Matching paper, which additionally uses the Flow Matching formalism (a generalization of diffusion to more prior distributions than Gaussian noise) and contains sound proofs. I do not see any new meaningful contribution for the NeurIPS community in the reviewed submission. **W1** The authors did not address this concern correctly. To expand on the fact that the math assumption (Equation 7) is incorrect, suppose the target distribution is $P(x_0=0.1) = P(x_0=-0.1) = 0.5$. As $t \rightarrow +\infty$, it is clear that, if the batch size is large, $p(x_t|x_0=0.1)$ is approximately $2 \times N(x_t; 0, 1)$ if $x_t>0$ and approximately 0 if $x_t <0$. This is clearly not in the form of Equation 7. The ratio also does not decrease with the norm of $x_t - x_0$, as $p(x_t=-0.1|x_0=0.1) \approx 0$ while $p(x_t=1.1|x_0=0.1) \approx 0.4$. **W2** The authors did not address this concern correctly. They included new experiments to claim their method does not fit into "Conditional Flow Matching", but it seems that it still does (Conditional Flow Matching also seems to allow choosing the coupling via the joint distribution $q(x_0, x_1)$). **W3** The authors did not address this concern. The quality of Figure 3 is extremely low (all images are either extremely blurry or extremely saturated. It is possible the authors forgot something (dividing by 255?) when saving the images of the predicted noise, and that they upscaled the other images with interpolation or something). The authors did not comment on the inconsistencies/typos/sentences that need clarification. 
The parallel with physics appears decorative without supporting physics equations. **W4** is not addressed correctly. Qualitative results show marginal improvement on cherry-picked results only. Quantitative results are presented weirdly (e.g., the axes of the plots) with inconsistencies. **W5** The new experiments introduced in the rebuttal significantly extend beyond the original scope of the paper (e.g., introducing OT with flipped dimensions, decreasing noise levels, and including conditional generation). Arguments made in the paper do not hold in this expanded scope. For instance, the paper claims that the initial denoising direction points to the average of all images and is thus not meaningful; this is not true in the conditional-generation case, as the initial denoising direction points to the average of images of the desired class. Additionally, the authors seem to say that in immiscible diffusion, any noise sample can still lead to any class (so, is it miscible?), which needs to be discussed/explained/investigated more. **Reduced overall rating from 3 to 2**: Due to the (important) remaining concerns, the extent of the changes proposed in the rebuttal and discussion (would need additional reviews after changes are made), as well as the conflicts of the proposed changes with arguments contained in the original submission. **Increased confidence from 3 to 5**: I participated in the discussion and dived deeper into the CFM paper (which I was initially not very familiar with). I am certain about the need to reject the paper in its current state, with the possibility for the authors to address these issues in a future submission. ---- Technical Quality: 1 Clarity: 1 Questions for Authors: Here are my suggestions for this submission: **Q1/** Discussion and comparison of related works. Without a discussion of related work, it is not possible for me to see the novelty in this submission, given that similar methods have been presented in existing literature. 
(see also **W2**) **Q2/** The presentation quality (figures, clarity) and mathematical explanations should be improved (see also **W1**, **W3** and **W4**). **Q3/** All qualitative and quantitative results need to be shown clearly, without mistakes or cherry-picking (see **W4**). Confidence: 5 Soundness: 1 Presentation: 1 Contribution: 1 Limitations: **L1/** The authors apply some matching within each mini-batch, so there is now a dependence on batch size, and it is not clear the method would work with other batch sizes. For instance, on a GPU with little memory and a batch size of 1, the method does not change training at all compared to normal diffusion. **L2/** Some theoretical results on diffusion may not hold anymore, e.g. that the ideal predicted denoising direction corresponds to the score (gradient of log-likelihood) of the distribution of noisy data. **L3/** The method only works for unconditional generation, which is a severe limitation. (EDIT: the rebuttal contains 1 experiment on conditional generation, but discussion is missing (how come noising is immiscible but denoising is not [noise samples can still lead to any class]?) and arguments like "the average of images does not contain much meaningful information" do not hold anymore.) **L4/** Like any generative model, this work could be used unethically. It is crucial for researchers to consider designing safeguards to ensure the model can ignore inappropriate requests (e.g., training or fine-tuning your Immiscible Diffusion model on inappropriate images). Can you provide a small discussion on existing safeguards and how follow-up work can address potential misuse of your method? **L5/** The authors did not communicate the details (incl. license and terms of use) of the datasets or the models that they used via structured templates. The checklist incorrectly answered "NA" for "12. Licenses for existing assets". Flag For Ethics Review: ['No ethics review needed.'] Rating: 2 Code Of Conduct: Yes
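The in-batch noise reassignment under discussion (Algorithm 1's `linear_sum_assignment` call, which the review notes the text calls Hungarian matching; both name optimal bipartite matching) can be sketched minimally as follows. The L2 cost and the shapes are illustration assumptions, not the submission's exact implementation (which reportedly quantizes the cost to fp16/fp8):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def immiscible_noise_assignment(images, noise):
    """Permute the batch's noise samples to minimize the total
    image-noise squared L2 distance (optimal bipartite matching)."""
    b = images.shape[0]
    x = images.reshape(b, -1)
    n = noise.reshape(b, -1)
    # pairwise squared distances via ||x||^2 + ||n||^2 - 2 x.n^T
    cost = (x ** 2).sum(1)[:, None] + (n ** 2).sum(1)[None, :] - 2.0 * x @ n.T
    _, cols = linear_sum_assignment(cost)  # optimal column for each row
    return noise[cols]

rng = np.random.default_rng(0)
imgs = rng.standard_normal((16, 3, 8, 8))
eps = rng.standard_normal((16, 3, 8, 8))
eps_matched = immiscible_noise_assignment(imgs, eps)
d_id = ((imgs - eps) ** 2).sum()             # identity pairing
d_match = ((imgs - eps_matched) ** 2).sum()  # matched pairing, never larger
```

Since the identity permutation is one feasible matching, the matched total distance can never exceed the independent pairing's; the marginal distribution of the noise within the batch is unchanged (it is only permuted), which is the crux of the coupling discussion above.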
Rebuttal 1: Rebuttal: We thank you for the constructive feedback on our work. We would like to present the following updates: For W1, we have revised the mathematical-proof part in our rebuttal to reviewer nEkA (W1), taking your constructive suggestions into consideration. For W1, we compute the average of the images on the ImageNet dataset (1M images); the result is shown in Figure R1, demonstrating that the average of images does not contain much meaningful information, as we stated in the revised mathematical proof. For W2, we have included an additional discussion of our difference from OT-family methods in the global response for your reference. For L1, we have included a discussion of the influence of the batch size in the rebuttal to review-neka-Q1 for your reference. For L3, we have provided the performance of conditional generation with immiscible diffusion in the global response for your reference. --- Rebuttal Comment 1.1: Title: Answer to authors' individual rebuttal Comment: Thank you for your partial responses to my review. I have read the other reviews and the rebuttals (global one + individual ones). Unfortunately, the authors did not address most of my concerns; many questions/weaknesses remain unanswered. Nevertheless, to answer the authors: - **Mathematical proof in answer to nEkA W1**. This proof is still difficult to understand and unnecessary. When a denoising model is given pure noise as input, there is no doubt (classical derivations by setting the gradient to 0) that the optimal denoising prediction (in the MSE sense) is to predict the mean of the data distribution. In the case of immiscibility, similar derivations immediately show that the optimal denoising direction is to predict a weighted mean of the data distribution, where the weights are the probabilities of each data-noise pair. Furthermore, to get Equation 1, the authors assume that $a = \sqrt{1-\bar{\alpha}_t}$ approaches 1 and $b = \sqrt{\bar{\alpha}_t}$ approaches 0. 
It is unclear why the authors can use these limits for Equation 1, but not in Equations 3 and 8. - **Mathematical proof in answer to nEkA W1**. I have doubts regarding the assumption that $f$ only depends on the difference between $x_t$ and $x_0$. This assumption seems incorrect to me. - **Average of the images on the ImageNet dataset**. It can be seen in Figure R1 that the average is not constant, as the center looks browner. Also, the caption of Figure R1 is inconsistent with the text of the rebuttal (1k or 1M?). Furthermore, this assumption that the average is constant is destroyed when the authors experiment with conditional image generation (see the third bullet point in https://openreview.net/forum?id=kK23oMGe9g&noteId=eeBmFmG8tn ). - **W2**: See my answer in https://openreview.net/forum?id=kK23oMGe9g&noteId=eeBmFmG8tn - **L1**: The provided plot (Figure R9) is very weird; it is not clear why the curves do not start from the same height (same initial FID before starting training) and from training step 0. The authors did not verify/comment on the special case batch size = 1, and seem to claim that the batch size has little importance ("such influence does not vary significantly across selected BS" in the rebuttal to review-neka-Q1) - **L3**: See my answer in https://openreview.net/forum?id=kK23oMGe9g&noteId=eeBmFmG8tn --- Reply to Comment 1.1.1: Title: Response to Reviewer Da24's Comments on Our Rebuttal Comment: We thank the reviewer for reading our rebuttal and for the additional comments. We would like to offer some clarifications on the reviewer's reply. For those related to the global response, we have provided a reply in the global response thread. **Batch Size = 1 Problem** We found that a batch size of 1 hardly works for training any diffusion model *from scratch*. 
For example, DDPM [Ho et al., "Denoising Diffusion Probabilistic Models"] uses batch sizes of 128/64 for training CIFAR-10/CelebA-HQ respectively, and Stable Diffusion uses a batch size of 1200 for ImageNet [Rombach et al., "High-Resolution Image Synthesis with Latent Diffusion Models"]. We kindly point out that even when we have to deal with GPU memory issues, we normally use gradient accumulation instead of actually using a batch size of 1. More importantly, if we had to deal with a batch size of 1 in some extreme case, an alternative to our current assignment implementation is to generate several noise points at once, assign noises to images first, and then perform diffusion on each image and its corresponding noise. **Fig. R9 Starting Point Problem** Firstly, we would like to clarify that all reported curves in Fig. R9 start at the *same epochs*, with the difference in training steps caused by the difference in batch size. For FIDs "from training step 0", we would like to point out that at training step 0, the model is randomly initialized and contains no learned information. Therefore, per common practice, we empirically choose a starting epoch for reporting FIDs once they reach a reasonably low level, to save computational resources as well as to make the plots readable. This is common practice in many works (see Fig. 5 in Lipman et al., "Flow Matching for Generative Modeling" / Fig. 3 in Tong et al., "Improving and generalizing flow-based generative models with minibatch optimal transport."). Furthermore, it is quite clear in Fig. R9 that FIDs before the starting step would not change our claims about the influence of batch size on immiscible diffusion. **Influence of Batch Size** We respectfully disagree with reviewer Da24's claim that "The authors … seem to claim that the batch size has little importance". In our response to reviewer neka, we wrote that "Results in Fig. 
R9 shows that larger BS enjoys better FID, but the influence of immiscibility significantly outweighs that of BS, and such influence does not vary significantly across selected BS." We clearly state that batch sizes *have an influence* on diffusion performance; our experiment shows that, *compared to the influence of immiscible diffusion*, the influence of BS is relatively minor, and *the influence of immiscibility* does not vary *significantly* across the *selected* BS (128-512, as shown in Fig. R9). We feel that our claim is significantly different from the reviewer's comments, so we hope to provide a clarification to avoid misunderstanding. **For the Average of the Dataset** We kindly point out that the "1k" in ImageNet-1k means 1k *classes*, and ImageNet-1k is the name of a popular version of ImageNet. We also kindly refer the reviewer to our revised math proof (https://openreview.net/forum?id=kK23oMGe9g&noteId=cgCNb4SgFk), where we claim that "when the number of images is large enough, $\overline{x_0}$ contains little meaningful information", which corresponds to what Fig. R1 shows. We also observe very blurry average images, similar to Fig. R1, for the average of images in each class of ImageNet. **For Mathematical Proofs** We believe that the mathematical narrative is not "unnecessary": it provides rigor for our proposed immiscible diffusion, and math serves as a common language for readers outside the area to understand what we are doing. We kindly note that whether the limits of $a$ and $b$ are used in Eqns. 3 and 8 does not influence our main claim, so we do not explicitly discuss this in the proof. Also, we would like to point out that in the updated math proof, our $f$ is defined as $f(x_t-x_0, \text{batch size}, \ldots)$, so it *does not* depend only on the distance between $x_t$ and $x_0$.
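The batch-size-1 workaround the authors describe above (generate a few noise points at once and assign the closest one to the image) can be sketched as follows; the candidate count and the L2 distance are illustration assumptions:

```python
import numpy as np

def nearest_of_k_noise(image, k, rng):
    """Batch-size-1 variant: draw k candidate Gaussian noise samples
    and keep the one closest to the image in squared L2 distance."""
    cands = rng.standard_normal((k,) + image.shape)
    d = ((cands - image[None, ...]) ** 2).reshape(k, -1).sum(-1)
    best = int(np.argmin(d))
    return cands[best], float(d[best])

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))
eps, d_min = nearest_of_k_noise(x, 8, rng)
```

This keeps the per-image "assignment" bias of the in-batch matching without requiring a full cost matrix, at the price of no longer being an exact permutation of a fixed noise batch.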
Summary: This paper is motivated by the miscibility phenomenon in physics and transfers it to the diffusion training process, which is very interesting. The authors propose a distance-based noise-assignment strategy, which is simple but powerful for faster diffusion model training. The experiments and the theoretical analysis validate the core idea of the paper and show good results. Strengths: 1. The authors provide a faster diffusion model training strategy while maintaining generation quality. 2. The motivation is novel and interesting, and the writing is good. 3. The experiments show the robustness and universality of the proposed method and the efficiency of the proposed module. Weaknesses: 1. In my view, the proposed noise-assignment method is similar to fixing and memorizing the relationship between noise and the corresponding image, limiting the generation diversity of the diffusion model. However, the experiments show that FID can be further reduced; I hope the authors can explain this phenomenon. 2. As shown in the experiments, the noise-distance change is only 2%; why could such a minimal change bring a remarkable training speed-up? The authors should provide an in-depth theoretical analysis. 3. To validate my concern, please provide two experiments: i) the result generated by a noise sample and a minimally changed variant of it (only 10 pixel values changed, etc.); ii) the result generated by a certain noise sample with different seeds. 4. I'm curious about the scaling-law phenomenon. As shown with Stable Diffusion, training the diffusion model with large image data can bring surprising topological ability. However, with the way the authors train the diffusion model, the noise seems tied to certain images (same class, etc.), which will limit generation diversity with more data. I know this experiment requires more computational resources; the authors could instead provide a theoretical analysis of the scaling-law ability of the proposed method. 
Technical Quality: 3 Clarity: 3 Questions for Authors: See the weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your acknowledgement of our work. We also hope to emphasize that our method is one simple implementation for addressing the miscibility problem, which we have demonstrated to be important and which we hope will inspire more work. We hope this work can benefit the diffusion-model community beyond the implementation itself. We address your concerns below as we understand them: **W1 - Diversity** We sincerely thank you for the comment. Balancing model fitting for FID against diversity is truly hard. When there are no image-noise relationships, as shown in Fig. 3(a), the noisy layers cannot perform meaningful denoising due to miscibility, so the model is hard to optimize to fit the data; however, when the relationship becomes too tight, like "fixing and memorizing", there can logically be the diversity problems you raised. Our method balances between these two extremes: we perform image-noise assignment within a mini-batch. In this way, we avoid significant miscibility and make the noisy denoising layers effective (as shown in Fig. 3(b)) by encouraging the diffusion of each image into its surrounding area. On the other hand, considering the low batch size (~256 for DDIM on CIFAR) and the very high image dimension (32 \* 32 \* 3 = 3072 for CIFAR), such encouragement is very weak. Tab. 3 shows that the distance between image and noise only decreases by ~2% after assignment (with a stddev of >10%). Therefore, the diffusable area for each image is still very flexible and diversity can be kept. Our image-generation experiments further confirm these claims. Firstly, as suggested by your 3rd point, we show in Fig. R7 that immiscible diffusion does not significantly affect diversity. Furthermore, we apply immiscible diffusion to the conditional-generation problem with Stable Diffusion on ImageNet. Results in Fig. 
R2 show that image quality as well as FID are improved, which further supports that our immiscible diffusion balances miscibility and diversity well. In conclusion, we believe that immiscible diffusion strikes a balance between miscibility and diversity, thus providing faster training without significantly sacrificing diversity. **W2 - Theory for Performance Enhancement** We really appreciate your question. Immiscible diffusion works by making images less miscible in the noise space, as shown in Fig. 3, where we see that the denoiser learns which image to reconstruct after immiscible training. Our further theoretical experiment in Fig. R5 shows that such immiscibility is mainly achieved by pushing apart images that are far from each other in the noise space, as the fitted line only has a significantly positive value at the far right of the plot. In this way, we achieve more immiscible diffusion while avoiding major disturbance to diversity. The image-noise distance change of ~2% shows how minor our disturbance is, which supports that immiscible diffusion does not significantly affect diversity, as discussed in the last question. **W3 - 2 Proposed Experiments** We are grateful for the experiment designs you offered, which inspired us a lot. We perform both sampling experiments with the setting of Unconditional DDIM + CIFAR10 + Batch Size 256 + 187k Training Steps + 50 Sampling Steps: i) We generate images with a fixed noise, altered in 10/96/320 dimensions (10 pixels = 30 dimensions due to 3 channels), which are shown in Fig. R7. We found that altering 10 dimensions does not significantly alter the image for either the immiscible or the vanilla DDIM. With 96 dimensions altered, some minor changes are observed for both models. And when we altered 320 dimensions (~10%), both models show class alterations in generated images. 
In conclusion, we do not observe a significant diversity difference between the vanilla and the immiscible DDIMs. ii) The results can be found in Fig. R8. We observe that with the same noise, the global seed does not seem to have an impact on the generated image. We think this is caused by the lack of randomness in the DDIM sampling process. For DDPM, diverse classes of images are generated, as randomness in DDPM can also come from the sampling path, in addition to the initial noise. **W4 - Scaling Law** Thanks for the comment. We studied the scaling law through theoretical analysis. As illustrated above, our method barely changes the assumption that $P(X_T)$ should be Gaussian while pushing images away from each other in the noise space. This means our method still preserves the manifold of diffusion training while making the optimization at high noise levels easier (see Fig. 3 of our main paper). In that case, our method should not hurt the scaling-law behavior of diffusion models. We also conducted experiments on the whole ImageNet dataset and found that our immiscible Stable Diffusion can improve the performance of the vanilla Stable Diffusion model. This further demonstrates that our method can work on large-scale datasets (see general response). --- Rebuttal Comment 1.1: Comment: I appreciate the authors' answers, which address some of my concerns. The generalization ability and the motivation compared with OT and FM are definitely the most important problems of this paper. The visualization in Fig R8 confuses me, as even DDIM will generate different images for different seeds, and I disagree with the results. I agree with Reviewer h6mk that FID is not all there is to evaluating generalization, and the experiment they mentioned should have been added but was not. For the concern of Reviewer Da24, which is mainly about the theory, I am not an expert in this and will leave this problem to the AC. 
I will reduce my score to borderline accept (with the current evaluation benchmarks, it is truly hard to judge generalization ability, so I believe this method makes some contribution to diffusion model training). --- Reply to Comment 1.1.1: Title: Response to Reviewer y1v3's Comments on Our Rebuttal Comment: We thank the reviewer for reading and considering our responses. We hope to provide a few further clarifications on the problems raised by the reviewer: **1. About the generalizability** Following the suggestion of reviewer h6mk, we have added metrics other than FID on images class-conditionally generated by immiscible SD trained on ImageNet. We provide CLIP-Score and CMMD for evaluating image-prompt correlation and image quality, respectively. We evaluate these metrics on 50k images generated from conditional Stable Diffusion models trained for 20k steps, which corresponds to the settings reported in Fig. R2. The CLIP-Score is 28.55 for both the baseline and the immiscible model, with stddevs of 0.02 and 0.01, respectively. We measured it 3 times because the scores are so close, and our results further validate that there is no difference in CLIP-Score, indicating that the image-prompt correlation is not damaged by immiscible diffusion. For CMMD, the values for the baseline and the immiscible model are 1.436 and 1.385, respectively (the lower the better). This further confirms that our immiscible diffusion outperforms the vanilla class-conditional SD. **2. About Comparison to OT** In our Global Response (https://openreview.net/forum?id=kK23oMGe9g&noteId=8Wj7beyL9K), we have included a thorough discussion of our difference from OT-CFM, including theoretical analysis and additional experiments. We respectfully cannot agree that significant similarities exist between I-CFM and our method, and we have made our comments in the last part of our latest comment (https://openreview.net/forum?id=kK23oMGe9g&noteId=rYMAwGEnzS). We hope the reviewer will consider our opinions. 
**3. About DDIM on different seeds** We thank the reviewer for the question, and we would like to clarify that in the upper part of Fig. R8, we indeed observe no differences between the images generated by either vanilla or immiscible DDIMs with the same noise and different random seeds. In the bottom part, we additionally provide images generated from vanilla and immiscible *DDPMs*, with the same *initial* noise and different seeds. As random noises are added at each sampling step of DDPM, which are not held fixed and are influenced by the random seed, we see diverse images generated with both vanilla and immiscible DDPM.
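For reference, the determinism difference between the two samplers can be illustrated with toy reverse-process updates; this is a hedged sketch, with a stand-in noise-prediction function and an illustrative schedule, not our trained models:

```python
import numpy as np

def toy_eps(x, t):
    """Stand-in for a trained noise-prediction network (NOT a real model)."""
    return 0.1 * x

def sample(x_T, alpha_bar, ddpm, seed):
    """Run a toy reverse process from the same initial noise x_T.
    DDIM (eta = 0) uses no randomness after x_T; DDPM injects fresh noise each step."""
    rng = np.random.default_rng(seed)
    x = x_T.copy()
    for t in range(len(alpha_bar) - 1, 0, -1):
        ab_t, ab_prev = alpha_bar[t], alpha_bar[t - 1]
        eps = toy_eps(x, t)
        # deterministic DDIM-style update (eta = 0)
        x0_hat = (x - np.sqrt(1 - ab_t) * eps) / np.sqrt(ab_t)
        x = np.sqrt(ab_prev) * x0_hat + np.sqrt(1 - ab_prev) * eps
        if ddpm:
            # DDPM-style step adds seed-dependent noise (toy variance)
            x = x + 0.1 * rng.standard_normal(x.shape)
    return x
```

With the same `x_T`, the DDIM branch returns identical outputs regardless of the seed, while the DDPM branch does not, matching the behavior shown in Fig. R8.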
Summary: The authors propose an approach to mitigate the random correspondence of the noise-data mapping in vanilla diffusion models. They first assign target noise by minimizing the total image-noise pair distance within a mini-batch, and then diffuse the data into noise. Experimental results seem to demonstrate the potential of this approach. However, some of the arguments are mathematically sloppy and not well verified. Strengths: The approach sounds novel and interesting. Also, the implementation seems simple. Weaknesses: 1. The presentation is not reader friendly. I suggest reviewing DDIM and defining its notations first, explaining your motivation from DDIM, and then showing your algorithm/solution. 2. No strong evidence supports your claim that vanilla diffusion models are miscible. Any toy examples to support it? 3. The argument from L155-L161 is sloppy and hard to follow. For instance, Eq. (2) holds in what sense (density? certain divergence?). Indeed, I believe it will not hold for most diffusion models. 4. More justification and rationale is needed for the assumption in Eq. (7). Do the authors suppose it should hold for all $t$? If so, I do not think it is a correct assumption. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Is the proposed method, which does in-batch reassignment, sensitive to batch size? 2. Is the proposed algorithm applied to all $t_b$? In theory, all the arguments that the authors make may only hold for large timesteps. Thus, the proposed method should work effectively even if it is applied only at large diffusion times. It would be nice to see experimental results related to this. 3. Does the proposed method work effectively with fine-tuning? 4. Even though the authors propose to quantize to avoid a computation bottleneck, by how much does the batch-wise image-noise assignment increase the runtime of training? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Yes, authors have discussed potential limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your acknowledgement of the potential of our method. We also hope to emphasize that our method is one implementation addressing the miscibility problem, and we hope it inspires more work in this direction. Below we address your concerns: **W1 - Mathematical Proof** We re-write Section 3.3 below: In DDPM, for any image data-point $x_0$, at the last diffusion step $T$, i.e. $t = T$, the image is sufficiently wiped out and nearly only Gaussian noise remains. Therefore, $$ q(x_T \mid x_0) \approx \mathcal{N}(x_T; 0, I) \approx p(x_T), \quad \text{where} \quad x_T(x_0,\epsilon) = \sqrt{\overline{\alpha}_T}\, x_0 + \sqrt{1 - \overline{\alpha}_T}\, \epsilon, \quad \epsilon \sim \mathcal{N}(0, I). \[1\] $$ Using Bayes' rule and Equation 1, we find that for a specific $x_T$: $$ p(x_0 \mid x_T) = \frac{q(x_T \mid x_0) \, p(x_0)}{p(x_T)} \approx p(x_0). \[2\] $$ This indicates that the distribution of the images corresponding to any noise data-point is the same as the distribution of all images. The simplified training objective in DDPM is to predict the added noise $\epsilon(x_t, t)$. Inverting Equation 1, the noise mapping $x_0$ to $x_T$ is $\epsilon = a x_0 + b x_T$, so for a specific point $x_T$ in the noise space at diffusion step $T$, the optimal prediction is the posterior average $$ \epsilon(x_T, T) = \sum_{x_0} (a x_0 + b x_T)\, p(x_0 \mid x_T) = a \sum_{x_0} x_0\, p(x_0 \mid x_T) + b x_T \sum_{x_0} p(x_0 \mid x_T) = a \sum_{x_0} x_0\, p(x_0) + b x_T = a \bar{x}_0 + b x_T, \[3\] $$ where $a = -\frac{\sqrt{\overline{\alpha}_T}}{\sqrt{1-\overline{\alpha}_T}}$ and $b = \frac{1}{\sqrt{1-\overline{\alpha}_T}}$ are constants, and $\bar{x}_0$ is the average of the images in the dataset. When the number of images is large enough, $\bar{x}_0$ contains little meaningful information.
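The approximation in Eq. (2) can also be checked numerically on toy data: when $\overline{\alpha}_T$ is tiny, the likelihood $q(x_T \mid x_0)$ is nearly flat over the dataset, so the posterior over images is almost uniform. A minimal NumPy sketch (toy sizes and constants are ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 16                      # toy dataset: 50 "images" in 16 dimensions
x0 = rng.uniform(-1, 1, size=(n, d))
alpha_bar_T = 1e-4                 # schedule endpoint: almost pure noise

x_T = rng.standard_normal(d)       # one sample from p(x_T) = N(0, I)

# log q(x_T | x0) for each candidate image, up to a shared additive constant
log_lik = -np.sum((x_T - np.sqrt(alpha_bar_T) * x0) ** 2, axis=1) / (2 * (1 - alpha_bar_T))
post = np.exp(log_lik - log_lik.max())
post /= post.sum()                 # p(x0 | x_T) under a uniform prior p(x0) = 1/n

# the posterior is nearly uniform: x_T carries almost no information about x0
max_dev = np.abs(post - 1.0 / n).max()
```

The deviation from the uniform posterior is tiny, which is the miscibility statement: at step $T$, a noise point cannot tell which image it "came from".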
However, in our Immiscible Diffusion, while we still have $$ p(x_T) = \mathcal{N}(x_T; 0, I), \[4\] $$ for each specific data point $x_T$ or $x_0$, the conditional noise distribution does not follow the Gaussian distribution because of the batch-wise noise assignment: $$ p(x_T \mid x_0) \neq \mathcal{N}(x_T; 0, I). \[5\] $$ Instead of the Gaussian distribution, we assume that the assigned noise has a conditional distribution $$ p(x_T \mid x_0) = f(x_T - x_0, \text{batch size}, \ldots)\, \mathcal{N}(x_T; 0, I), \[6\] $$ where $f$ is a function denoting the influence of the assignment on the conditional distribution of $x_T$. By the definition of the linear assignment problem, $f$ decreases as the norm of $x_T - x_0$ grows (the L2 norm in our default setting). Therefore, from Eqns. 2 and 6, we have $$ p(x_0 \mid x_T) = f(x_T - x_0, \ldots)\, p(x_0), \[7\] $$ which means that for a specific noise data-point, the probability of denoising it to a nearby image data-point is higher than to a far-away image. For the noise-prediction task, we see that $$ \epsilon(x_T, T) = \sum_{x_0} (a x_0 + b x_T)\, p(x_0 \mid x_T) = a \sum_{x_0} f(x_T - x_0, \ldots)\, x_0\, p(x_0) + b x_T = a\, \overline{x_0 f(x_T - x_0, \ldots)} + b x_T, \[8\] $$ where $\overline{x_0 f(x_T - x_0, \ldots)}$ is the weighted average of $x_0$ with more weight on image data-points closer to the noisy data-point $x_T$. Therefore, the predicted noise leads to the average of nearby image data-points, which makes more sense than pointing to a constant. **W2 - Evidence on Miscibility** The evidence is in Fig. 3(a), which shows the predicted noise at each step, and the image generated solely with the predicted noise for that step, both from a trained DDIM. In this 20-step sampling, T=20 is the step denoising from pure noise while T=1 is the step outputting the image. We see that the predicted noise at T=20 looks messy, and the image generated with this noise carries little meaningful information.
This supports that the model cannot determine which image a noise point should be denoised to, i.e., miscibility. Mathematically, in vanilla diffusion models, the noise added to each image during training is $\mathcal{N}(0, I)$. Therefore, each image is projected into the noise space with the same probability distribution $\mathcal{N}(0, I)$, which is miscible. **W3&4 - Typo in Eqn. 2 and Eqn. 7** We appreciate your comment. We change the notation $t$ to $T$, denoting the last diffusion step, so that Eqn. 2 approximately holds given the new Eqn. 1. **Q1 - Discussion on Batch Size (BS)** To see its impact, we vary the BS for DDIM on CIFAR with 50 sampling steps. Results in Fig. R9 show that a larger BS enjoys a better FID, but the influence of immiscibility significantly outweighs that of BS, and this influence does not vary significantly across the selected BS values. **Q2 - Applying the Assignment at Which Steps** Yes! We further apply immiscible diffusion at large diffusion steps only, on conditional generation with Stable Diffusion (SD) and ImageNet, BS = 2048 and training steps = 20k. We compare: 1) vanilla diffusion; 2) immiscible diffusion at all steps; 3) immiscible diffusion at t > 50% of total diffusion steps; 4) immiscible diffusion at t > 75% of total diffusion steps. The FID results are 1) 22.44; 2) 20.90; 3) 21.23; 4) 21.51. We find that applying the assignment at large steps alone indeed improves performance compared to vanilla diffusion, matching the finding in Fig. 3 that our method improves large diffusion steps. Applying the image-noise assignment across all steps further improves performance, which saves the effort of tuning the step-threshold hyperparameter. **Q3 - Finetuning** Yes. We finetune conditional Stable Diffusion from the pre-trained stable-diffusion-v1.4 model on the ImageNet dataset with a batch size of 512. After 2.5k steps, the FID for immiscible and vanilla stable diffusion is 11.10 and 12.19, respectively, supporting that our method works effectively for fine-tuning.
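For concreteness, the batch-wise reassignment and its restriction to large timesteps (Q2) can be sketched as below. This is a minimal sketch assuming SciPy's `linear_sum_assignment` for the in-batch matching; the function and variable names are ours, not the paper's code:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def immiscible_noise(x0, t, T, frac=0.5, rng=None):
    """Sample training noise, then re-pair it with the images whose timestep
    t exceeds frac * T, minimizing the total squared L2 image-noise distance."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.standard_normal(x0.shape)
    idx = np.where(t > frac * T)[0]          # only the high-timestep subset
    if len(idx) > 1:
        xs = x0[idx].reshape(len(idx), -1)
        ns = noise[idx].reshape(len(idx), -1)
        # cost[i, j] = ||x0_i - noise_j||^2, expanded to avoid a large temporary
        cost = (xs ** 2).sum(1)[:, None] + (ns ** 2).sum(1)[None, :] - 2 * xs @ ns.T
        _, col = linear_sum_assignment(cost)
        noise[idx] = noise[idx][col]         # in-batch reassignment
    return noise
```

With `frac = 0`, every sample in the batch is reassigned, which corresponds to the all-steps variant; the identity pairing is always a feasible permutation, so the assigned total distance never exceeds the random pairing's.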
**Q4 - Runtime of the Assignment** We refer to Tab. 3 for the detailed time needed for the assignment with different batch sizes. In a typical setting (DDIM, batch size = 256), each training step takes \~460ms, of which the 6.7ms assignment time is truly negligible (~1.5% overhead). --- Rebuttal 2: Comment: Reviewer `nEkA`: It is only one day away, please provide your feedback to the authors' rebuttal. AC --- Rebuttal 3: Title: Thanks for the replies Comment: I appreciate the authors' clarification and have decided to increase my score. --- Rebuttal Comment 3.1: Title: Thanks for the Response Comment: We sincerely appreciate the reviewer's acknowledgement of our response, and we are grateful for all the time and effort provided by the reviewer.
Summary: This paper shows that the current diffusion training strategy diffuses each image into the entire noise space, making it difficult to optimize the model and thus slow to converge. Inspired by the fact that miscibility can be changed by various intermolecular forces in physics, this paper proposes Immiscible Diffusion, a simple and effective method that accelerates diffusion training by pre-assigning noises within each mini-batch before standard training, which can be implemented in just one line of code. This assignment operation can be accelerated with fp16/fp8, which greatly improves training efficiency while avoiding algorithmic complexity, and is applicable to various baselines, including Consistency Models, DDIM, and Stable Diffusion. Strengths: 1. This paper points out that existing diffusion training maps each image to all points in the Gaussian distribution, which means that each point can be denoised to any source image, leading to inherent difficulties in diffusion training. This is a new perspective on diffusion training acceleration, and it sounds reasonable. 2. The proposed assignment strategy is simple and effective; one line of code achieves a significant performance improvement without much computational complexity. 3. Immiscible Diffusion improves diffusion training speed while improving the FID metric, and it is orthogonal to existing diffusion training acceleration methods. 4. Clear and concise presentation, writing, and diagrams. Weaknesses: 1. The noise assignment strategy proposed in this paper essentially introduces a prior assumption into the diffusion training process: different images should correspond to different points in the Gaussian distribution.
This prior reduces the difficulty of diffusion training, but also raises concerns about practicality and generalization: - Practicality: The datasets for unconditional generation used in this paper are all based on image classification (CIFAR/CelebA/ImageNet), so these data distributions already meet the above prior assumption (each class is a kind of image distribution). However, for image datasets with more complex distributions, such as LAION, whether the proposed assignment strategy can still be used is an open question, and this paper lacks relevant experiments. You could demonstrate that this is not a problem by training unconditional generation on a subset of LAION. - Generalization: Since images from the same data distribution are assigned to a specific noise interval during training, the model may lose the ability to denoise each point in the noise space into any source image during inference, thereby damaging the generalization of the generated images. However, the paper lacks experiments evaluating the generalization of generated images. 2. Considering that all the unconditional generation results in the paper are trained on classification datasets, is the distribution of noise assigned by the proposed method strongly correlated with the category? Intuitively, after the proposed noise assignment, images of the same category should have closer noise in the current batch. Technical Quality: 3 Clarity: 4 Questions for Authors: Please refer to the Weaknesses Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are excited to hear that you agree with the strengths listed above, and we sincerely hope that the proposed perspective of image-noise matching can be adopted and further explored by the diffusion community. We address your concerns below: **W1 - Practicality** We thank you for the concern about whether immiscible diffusion works on datasets without a simple class structure, and we highly appreciate your suggestion of training on LAION. We train a conditional Stable Diffusion model on a subset of LAION (laion-10k) with a batch size of 512. After 25k steps, the FID for immiscible and vanilla diffusion is 124.55 and 146.69, respectively, showing that immiscible diffusion improves training efficiency on datasets with complex distributions. Note that the training steps are limited by time and computational resources. **W1 - Generalization** We fully agree that this concern deserves exploration, and we have been seriously considering it since submission. A typical example and an important application here is conditional image generation, where noise is randomly sampled and paired with the text prompt. We performed class-conditional image generation with Stable Diffusion on the ImageNet dataset. Our results in Fig. R2 show that immiscible diffusion outperforms the vanilla one: specifically, the FID for immiscible and vanilla diffusion is 20.90 and 22.44, respectively, and the training steps needed to reach FID < 23 are 12.5k for vanilla and 7.5k for immiscible diffusion. This suggests that the generalization problem is not significant for current immiscible diffusion methods, which is not surprising, as the image-noise distance shrinks only ~2% after the assignment. We hope this experiment helps improve our discussion on generalization. **W2 - Image Correlation According to Classes** We appreciate your interesting question, but the answer is no.
The images are correlated more with their own structure than with their category. In our response to Reviewer y1v3 - W3 - (1), we perform an experiment showing that when enough perturbation is added to a noise, the image changes to an image from another class, which supports our claim, as shown in Fig. R7. Furthermore, our experiment in W1-Practicality on LAION-10k, which has almost no class structure, shows that immiscible diffusion still works there. This also indicates that class structure is not a prerequisite for immiscible diffusion. --- Rebuttal 2: Comment: Thanks for your rebuttal; here is my reply: **W1-Practicality** Your reply partially addresses my concerns about the effectiveness of this method on complex datasets (rather than simple classification distributions). However, the training data for this experiment is only 10k images, and I think the data size should be at least about 10% of ImageNet (the dataset used for conditional generation in the paper) to prove its effectiveness. In addition, could you provide results comparing the FID improvement on ImageNet with the FID improvement on LAION under the same training setting (10k data + 25k training steps)? It would be interesting to explore how the method's performance differs across training sets with different data distributions. **W1-Generalization** The authors may have misunderstood my concerns. FID is good, but it is not an accurate measure of the quality and generalization of generated images. A lower FID does not mean your generative model is better in performance and generalization. * For performance, you can also report CLIP-Score [1] and CMMD [2] for conditional generation (class name as the text prompt). * Regarding generalization, I am worried that after training with your method, the model will sample pictures with the same pattern for different random initialization noises, thereby damaging the diversity of generated images.
To prove this, you can refer to this representation learning method [3] to see whether the features corresponding to generated images of the same category are evenly distributed on the feature plane, and calculate their quantitative values to determine whether there is enough diversity between the generated images. [1] Learning Transferable Visual Models From Natural Language Supervision, ICML 2021 [2] Rethinking FID: Towards a Better Evaluation Metric for Image Generation, CVPR 2024 [3] Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere, ICML 2020 **W2-Image Correlation according to Classes** I am satisfied with the answers and have no further questions or doubts. **About Rating** I tend to keep my score as is for now and will adjust the final score based on further explanations from the authors. --- Rebuttal 3: Title: Response to Reviewer h6mk's Comments on Our Rebuttal Comment: We are sincerely grateful for the reviewer's time and effort in reviewing our work! We hope to provide more information to address the reviewer's concerns: **W1 - Practicality** 1. Larger Complex Image Dataset We thank the reviewer for raising this concern. We actively sought to address it by training on the large-scale LAION dataset. However, we found that the full LAION dataset has been removed due to legal policy, so we cannot download it yet. We can only take the LAION-10K dataset, which is legally available on Huggingface ("wtcherr/LAION10K"). To address Reviewer h6mk's concern as best we can, we conduct experiments on the LAION-10K dataset; the results show that we improve the baseline by 22.14 FID, demonstrating the effectiveness of our method. Despite the relatively small scale, we believe it still shows a meaningful sign of life for our method on complex conditional generation. 2.
Comparison Experiments on LAION-10k and on 10k Images from ImageNet We thank the reviewer for proposing this experiment. We perform an additional experiment on 10k ImageNet images + 25k training steps with vanilla and immiscible conditional SD. The results are nearly the same as those on the LAION-10k dataset: the FIDs for vanilla SD on ImageNet (10k images) and LAION are 148.90 and 146.69, respectively, and those for immiscible SD are 128.66 and 124.55, respectively. The FID improvements are 20.24 and 22.14, respectively. The improvement is even slightly larger on LAION-10k than on ImageNet (10k images), supporting that immiscible diffusion does not lose practicality on complex datasets. **W1 - Generalization** We understand the concern about FID, and present additional evaluations to address it: For performance: We appreciate the suggestion of providing CLIP-Score and CMMD as additional evaluation metrics for the image-prompt correlation and the image quality, respectively. We evaluate these metrics on 50k images generated from conditional Stable Diffusion models trained for 20k steps, which corresponds to the settings reported in Fig. R2. The CLIP-Score is 28.55 for both the baseline and the immiscible model, with standard deviations of 0.02 and 0.01, respectively. We measured it three times because the scores are so close; the results further validate that there is no difference in CLIP-Score, indicating that the image-prompt correlation is not damaged by immiscible diffusion. For CMMD, the values for the baseline and immiscible model are 1.436 and 1.385, respectively (lower is better). This also demonstrates that our immiscible diffusion outperforms the vanilla class-conditional SD. For generalization: We sincerely thank the reviewer for proposing the additional experiment to prove the diversity of images under the same class condition.
We carefully read the provided paper, and we actively tried to design experiments showing image diversity with representation learning methods. However, previous representation learning is studied on top of feedforward models, and it is unclear how to investigate the visual representations of a multi-step diffusion model. Nevertheless, we have submitted new evidence in Fig. R10, which contains images generated with the same prompt by our immiscible class-conditional SD without cherry-picking, to the ACs with anonymous links. It clearly shows that the images are diverse within each class. We hope the release of these additional images can address the reviewer's concerns. **W2 - Image Correlation according to Classes** We thank the reviewer for acknowledging our response, and are sincerely grateful for the reviewer's efforts in reviewing our work. --- Rebuttal 4: Comment: Thanks to the authors for their efforts and reply. After reading the rebuttal carefully, I have decided to increase my score to Weak Accept, but I hope the authors can explain in detail the impact of the method on generalization in a subsequent version. Specifically, you can generate 100 images for each class, extract the image features of these 100 images, and measure the distances between these features to determine whether the images are diverse. If the generated images are diverse, their features will differ and will not be very close to each other, with no mode-collapse phenomenon as in GANs (where the features of all images are very close and collapse to a point in feature space); I believe t-SNE or the mentioned paper can help you verify this.
--- Rebuttal Comment 4.1: Title: Response to Reviewer h6mk's Comments on Our Rebuttal - 2 Comment: We sincerely appreciate the reviewer's acknowledgement of our response, and we are grateful for all the time and effort provided by the reviewer. We will implement the reviewer's interesting suggestion in our final version.
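As a concrete starting point, the diversity check the reviewer suggests could be sketched as below, using mean pairwise feature distance together with the uniformity metric of [3] (Wang & Isola, $\log \mathbb{E}\, e^{-2\|u-v\|^2}$ on the unit sphere). This is a hypothetical sketch; the feature extractor is assumed given and all names here are ours:

```python
import numpy as np

def diversity_metrics(features):
    """Given per-image features of shape (n, d), return (mean pairwise L2
    distance, uniformity) on the unit hypersphere. A mode-collapsed model
    yields near-zero distances and a uniformity value near 0; diverse
    features yield larger distances and a more negative uniformity."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)  # project to sphere
    sq = ((f[:, None, :] - f[None, :, :]) ** 2).sum(-1)             # pairwise ||u - v||^2
    off = ~np.eye(len(f), dtype=bool)                               # drop self-pairs
    mean_dist = np.sqrt(sq[off]).mean()
    uniformity = np.log(np.exp(-2.0 * sq[off]).mean())              # Wang & Isola, t = 2
    return mean_dist, uniformity
```

Running this per class on, e.g., 100 generated images would quantify whether their features spread over the sphere or collapse to a point, as the reviewer describes.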
Rebuttal 1: Rebuttal: Dear Reviewers and ACs, We are grateful for your time and effort in reviewing our work. We are glad to see that our work is recognized as reasonable (R-h6mk), novel and interesting (R-nEKA, R-y1v3), and simple and effective (R-h6mk, R-nEKA, R-y1v3). We are also encouraged to hear that our experiments are acknowledged to show robustness, effectiveness, and efficiency (R-y1v3), and that our writing is clear and concise (R-h6mk, R-y1v3). We sincerely appreciate the insightful comments, and we address the concerns as below: **Conditional Generation** We perform class-conditional image generation with Stable Diffusion (SD) on ImageNet-1k with a batch size of 2048. To cater to text-to-image generation, we use the class name as the text prompt for SD. Results in Fig. R2 show that the FID for immiscible class-conditional SD is 20.90, which is 1.53 lower than that of the SD baseline. Qualitative comparisons further confirm this enhancement, extending the effectiveness of immiscible diffusion to commonly-used conditional image generation. **Comparison to OT-CFM and related works** Thanks for pointing this out; we indeed missed this discussion in our submission and will include it in our final version. Our motivation is different from OT-CFM [1], Multisample Flow Matching [2], and Approximated-OT [3], and we target diffusion-based methods while most of these works target flow-matching-based methods. It is a coincidence that we share a similar algorithmic spirit, but we arrive at our motivation -- immiscibility -- from an orthogonal direction. The discussion is presented below: Theoretical Difference OT-CFM is designed to yield "straighter flows that can be integrated accurately in fewer neural network evaluations" [1], while immiscible diffusion aims to keep images immiscible in the noise space. OT focuses more on the straightness of the diffusion path, while immiscible diffusion focuses on the immiscibility of diffusion destinations.
Therefore, minimizing image-noise distance in a mini-batch is only one way to achieve immiscibility. As long as the images' miscibility is limited, it falls within our motivation. Of course, image-noise OT is one way that avoids adding crossings to the diffusion path. Additional Experiments Splitting the Two Proposals. To validate the effectiveness of immiscible diffusion alone, we design a controlled experiment that does not qualify as OT. We still use assignment, but compared to linear assignment computing distances between images and noises, we flip the dimensions of the noise when computing the cost for the assignment. For example, say the image and the noise are 3072-dimensional: an OT-qualified assignment computes the distance between corresponding dimensions, i.e. $\mathrm{Dist} = \sum_i \| x_{0,i} - \mathrm{noise}_i \|_2$. Our experiment changes this distance into $\mathrm{Dist}' = \sum_i \| x_{0,i} - \mathrm{noise}_{3072-i} \|_2$. By doing this, we still limit miscibility. However, this does not qualify as OT because the real distance is not optimized; in fact, the real distance is reduced by <1% in this way. Interestingly, when we perform the vanilla OT assignment and the flip assignment, on DDIM with 50 sampling steps, a batch size of 256, and no additional image normalization, they show similar performance after the initial steps, as shown in Fig. R3, with the OT one performing slightly better. This shows that the immiscible effect itself can help improve diffusion performance. Regarding their performance difference, note that the flip assignment causes crossings in the diffusion paths. As shown in Fig. R4, if we have 4 images x0 to x3 and 4 noises n0 to n3, when we flip the x and y axes, a path crossing (shown in the red circle) occurs, which adversely affects the FID. [1] Tong et al. "Improving and generalizing flow-based generative models with minibatch optimal transport." [2] Pooladian et al. "Multisample flow matching: Straightening flows with minibatch couplings." [3] Kim et al.
“Improving Diffusion-Based Generative Models via Approximated Optimal Transport” Domination of the Immiscible Effect over OT in Noise Assignment We further wish to understand whether the immiscible effect or the OT dominates the performance enhancement in image-noise assignment, so we compute some statistics. As shown in Tab. 3, the distance reduction after assignment is only 2%. We further calculated the standard deviation of the distances in a batch, which is ~10%. Therefore, the distance reduction is so small that, as suggested by R-y1v3, it is hard to believe it is the reason for the performance enhancement. On the other hand, we calculated the relation between images' distances and the distances between their assigned noise points, shown in Fig. R5. We see that after the assignment, images far apart are assigned to noise points far apart, which constitutes the concept of immiscibility. Our results in Fig. 3 also show that the vanilla diffusion model suffers from the miscibility problem while immiscible diffusion significantly reduces it. In conclusion, we believe the immiscible effect dominates the performance enhancement in image-noise assignment. Immiscible Diffusion with Image Projection We notice that simple image scaling, i.e. multiplying each image by a factor like 2.0 or 4.0, can also achieve this goal, as long as the factor is not so large that the image can no longer be wiped out during diffusion. During training, we multiply the images in DDIM, whose original standard deviation is 0.5, by factors of 2 and 4 to set the standard deviation to 1.0 and 2.0. Note that all images are centered, so there is no impact on their means. In Fig. R6, we see that after the scaling, the FID significantly improves, which suggests the effectiveness of immiscible diffusion from another angle. Adding the assignment can generally further increase the training speed, but its effect is weakened when the factor is large, as the miscibility problem is already largely addressed by the scaling.
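The flip-assignment control experiment can be sketched on toy data as follows (a hedged sketch assuming SciPy's `linear_sum_assignment`; sizes and names are ours). By construction, the true-cost assignment attains the minimum total real distance over all permutations, while the flip assignment pairs images and noises without optimizing the real distance:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
B, D = 64, 128                                   # toy batch of flattened "images"
x0 = rng.standard_normal((B, D))
noise = rng.standard_normal((B, D))

def total_real_distance(cost):
    """Solve the assignment defined by `cost`, then measure the TRUE total
    squared image-noise distance under the resulting pairing."""
    _, col = linear_sum_assignment(cost)
    return ((x0 - noise[col]) ** 2).sum()

true_cost = ((x0[:, None, :] - noise[None, :, :]) ** 2).sum(-1)
flip_cost = ((x0[:, None, :] - noise[None, :, ::-1]) ** 2).sum(-1)  # Dist.' on reversed dims

d_random = ((x0 - noise) ** 2).sum()             # default identity pairing
d_ot     = total_real_distance(true_cost)        # OT-qualified assignment
d_flip   = total_real_distance(flip_cost)        # flip assignment: immiscible, distance-blind
```

Since `d_ot` minimizes the true distance over all permutations, both the random pairing and the flip assignment are upper-bounded by it from below, while the flip assignment still induces a structured (immiscible-style) pairing.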
This offers another way to achieve immiscible diffusion, which will be added in the final version. Pdf: /pdf/eb4ead655a49cfd3459d1d6e58d244cceac58ab8.pdf
NeurIPS_2024_submissions_huggingface
2,024
On the Saturation Effects of Spectral Algorithms in Large Dimensions
Accept (poster)
Summary: This paper concerns the convergence rate of spectral methods, particularly kernel ridge regression (KRR) and kernel gradient flow (KGF), in large-dimensional settings where the sample size $n$ is of the same magnitude as a power $\gamma$ of the input dimension $d$, i.e., $n \asymp d^\gamma$. It reveals a new saturation-effect phenomenon in large dimensions, which differs from its fixed-dimensional counterpart. Specifically, it shows that in large dimensions, KRR still suffers from the saturation effect while KGF does not. The key techniques involve the use of analytic filter functions to characterize the regressors from different spectral algorithms, followed by standard concentration results on the bias-variance decomposition of the excess risk. Strengths: This paper offers solid theoretical results which improve over previous literature or reveal novel phenomena. It offers important insights on spectral algorithms in the large-dimensional setting, for example the interpolation (with different qualification $\tau$) of the learning rate between KRR and KGF, and the exact description of the phenomena *periodic plateau behaviour* and *polynomial approximation barrier*. This paper also offers numerical validation of its claims. Weaknesses: There is no major weakness spotted in this paper. Technical Quality: 4 Clarity: 4 Questions for Authors: I appreciate the results of this paper and hence look for possible extensions of the current result. This paper focuses on dot-product kernels $K=\Phi(\langle \cdot, \cdot \rangle)$ with inputs distributed uniformly on a hypersphere and with polynomial spectral decay. 1. By [Belkin2018] and [Haas2024], it seems the function $\Phi$ cannot be smooth but only finitely differentiable, say when $\Phi$ is induced by the ReLU-NTK. If $\Phi$ is smooth, for example if $K$ is the Gaussian kernel, then the spectral decay is exponential. Can the results in this paper extend to this case?
If yes, where is the adaptation? If no or not obvious, what would be the main technical difficulties? 2. In realistic settings, a uniform input distribution on a hypersphere is too restrictive. Could one relax the condition to distributions whose support is the whole sphere instead? I recall Lemma F.9 in [Haas2024] stating that the spectral decay is still polynomial in this case. Could one extend the analysis to this case? Also, I have some technical questions concerning the appendix. 3. In Eq (62) in Lemma D.7, the left hand side (LHS) should be independent of the noise, but why is there a term with $\sigma^2$ on the right hand side (RHS)? 4. Less a question and more a comment: I think there is a typo in Eq (42): it should be $... \phi\_j^2(x)\leq ...$ instead of $... \phi\_i^2(x)\leq ...$. Also, is Eq (44) actually redundant as a special case of Eq (43), given the notation mentioned in lines 602-603? Reference: - Haas, Moritz, et al. "Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension." Advances in Neural Information Processing Systems 36 (2024). - Belkin, Mikhail. "Approximation beats concentration? An approximation view on inference with smooth radial kernels." Conference On Learning Theory. PMLR, 2018. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: All assumptions and conditions are stated clearly in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for taking the time to read our paper and for providing valuable feedback. We are pleased to see that you not only accurately described the contributions of our work but also gave it high praise. We would like to address the questions and comments you raised regarding possible extensions of our research. **Author's response to Question 1:** Thank you for suggesting that we check our assumptions on two specific kernels. After a careful check of [Belkin2018] and [Haas2024], we find that $\Phi$ for both the ReLU-NTK and the Gaussian kernel is analytic, with all coefficients $a_j>0$, thus satisfying Assumption 1. Therefore, the results of this paper apply to both the ReLU-NTK and the Gaussian kernel. Please let us clarify in detail: - From the definition of the ReLU-NTK in [Bietti2021], $\Phi$ for the ReLU-NTK is analytic (see, e.g., page 6 in [Bietti2021]). Moreover, Corollary 3 of [Bietti2021] as well as Lemma B.2 of [Haas2024] both imply that all coefficients satisfy $a_j>0$. Notice that the proof of the positive-definiteness of the kernel in the above studies relies on [Gneiting2013], hence they also require that $\Phi$ is analytic. - From the definition of the Gaussian kernel on the sphere, we can show that $\Phi$ for the Gaussian kernel is analytic, with all coefficients $a_j>0$. Since both kernels satisfy Assumption 1, we have the following claims: - You are right: when $d$ is fixed, [Bietti2021] showed that the eigenvalues of the ReLU-NTK satisfy $ \mu_k \asymp k^{-d} $ and $ N(d, k) \asymp k^d $ for large $k$, leading to $\lambda_j \asymp j^{-\beta}$ with $\beta = (d+1)/d$; whereas it is known that the eigenvalues of the Gaussian kernel on the sphere decay exponentially (see, e.g., [Amnon2020]). Therefore, we may need different tools to deal with the rate of the excess risk of fixed-dimensional spectral algorithms with the ReLU-NTK versus the Gaussian kernel. 
- In large dimensions, the spectra of both the ReLU-NTK and the Gaussian kernel exhibit a strong block structure: $ \mu_k = \Theta(d^{-k}) $ and $ N(d, k) = \Theta(d^{k}) $ for $ k \leq p+1 $ (see, e.g., Lemmas D.11 and D.13). Therefore, our results hold for both the ReLU-NTK and the Gaussian kernel in large dimensions without any adaptation. --- **Author's response to Question 2:** We agree that a uniform input distribution on the sphere is restrictive. As noted in Remark 2.1, most studies analyzing spectral algorithms in large-dimensional settings concentrate on inner product kernels on spheres for two main reasons: - Firstly, harmonic analysis on the sphere is clearer and more concise. For example, the properties of spherical harmonic polynomials are simpler than those of orthogonal series on general domains. This clarity makes Mercer’s decomposition of inner product kernels more explicit, avoiding several abstract assumptions. - Secondly, there are few results available for Mercer’s decomposition of kernels on general domains, especially when tracking the dependence on the domain’s dimension. For example, no results determine the eigenvalues of Sobolev spaces in large dimensions. Thus, if we believe that large-dimensional KRR and KGF exhibit some new phenomena, it is prudent to start with a more tractable setting and then extend the results to general domains. Regarding your second question, thank you for bringing up the interesting work [Haas2024] and suggesting an extension of our analysis. We believe that the results in [Haas2024] can indeed be extended to large-dimensional settings, although we must carefully track the constants that depend on $d$. For example, the density in Lemma D.7 of [Haas2024] is lower and upper bounded by two constants depending on $d$. We appreciate your insightful suggestion. We will add the paper [Haas2024] to the discussion section of our manuscript, and we will consider extending our results to general domains in future work. 
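As a quick numerical sanity check of the claim in the response to Question 1 that the Gaussian kernel on the sphere is an inner product kernel with analytic $\Phi$ and strictly positive coefficients, the following is our own sketch (not part of the rebuttal; the unit bandwidth and dimension are illustrative):

```python
import numpy as np
from math import factorial

# Our own sketch (not from the rebuttal): on the unit sphere,
# ||x - x'||^2 = 2 - 2<x, x'>, so the Gaussian kernel with unit
# bandwidth reduces to an inner product kernel K = Phi(<x, x'>)
# with Phi(t) = exp(t - 1), whose Taylor coefficients
# a_j = e^{-1} / j! are all strictly positive, as Assumption 1 requires.
rng = np.random.default_rng(0)
d = 10
x, xp = rng.normal(size=(2, d))
x /= np.linalg.norm(x)
xp /= np.linalg.norm(xp)

gauss = np.exp(-np.linalg.norm(x - xp) ** 2 / 2)   # Gaussian kernel value
phi = np.exp(x @ xp - 1.0)                          # Phi(<x, x'>)
assert np.isclose(gauss, phi)

coeffs = [np.exp(-1.0) / factorial(j) for j in range(10)]
assert all(a > 0 for a in coeffs)
```

The same reduction explains why the exponential eigendecay of the Gaussian kernel in fixed dimension is compatible with the analytic-$\Phi$ assumption used in the paper.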
--- **Author's response to Question 3:** Thank you for pointing out the potential issue in Eq (62). We have revised our manuscript and updated Eq (62) to the following version: $$ \frac{ N_1 (\lambda) M_{1, \varphi}^2 (\lambda)}{n^2} = o\left( M_{2, \varphi} (\lambda) + \frac{\sigma^2}{n} N_{2,\varphi} (\lambda) \right). $$ Moreover, we followed your suggestion and added a rigorous definition of $o(1)$ in our manuscript: we say two (deterministic) quantities $U(d), V(d)$ satisfy $U(d) = o(V(d))$ if and only if for any $\varepsilon > 0$, there exists a constant $D_{\varepsilon}$, depending only on $\varepsilon$ and the absolute positive constants $\sigma, \kappa, s, \gamma, c_0, c_1, c_2, C_1, \cdots, C_8 > 0$, such that for any $d > D_{\varepsilon}$ we have $U(d)< \varepsilon V(d)$. We hope that the updated definitions clarify that Eq (62) is correct when $\sigma>0$ is an absolute constant defined in (1). --- **Author's response to Question 4:** Thank you for your careful reading of our proof and for pointing out the typo in Eq (42). We will correct it in the updated manuscript. Regarding your comment about Eq (44), we believe it is not redundant. In Eq (44), $\varphi_{\lambda}$ refers to the filter function of a generic spectral algorithm (not necessarily KRR), while $\varphi_{\lambda}^{\text{KRR}}$ in (43) specifically denotes the filter function for KRR. ### Reference: - Alberto Bietti and Francis Bach. "Deep equals shallow for ReLU networks in kernel regimes." In International Conference on Learning Representations (ICLR), 2021. - Tilmann Gneiting. "Strictly and non-strictly positive definite functions on spheres." Bernoulli, 19(4): 1327–1349, 2013. - Geifman, Amnon, et al. "On the similarity between the Laplace and neural tangent kernels." Advances in Neural Information Processing Systems 33 (2020): 1451-1461. --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed response. 
I am content to see that the results are valid for various kernels in the high-dimensional setting. I lean toward accepting this paper. --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive feedback. We are pleased that you find the results valid in high dimensions. Your support and recommendation to accept the paper are greatly appreciated.
Summary: In a large-dimension setting, i.e., where the dimension $d$ of the input grows polynomially with respect to the sample size $n$, this manuscript rigorously proves upper and lower bounds for spectral algorithms and shows the dependence on the qualification and the interpolation index. Consequently, the manuscript proves the saturation effect in spectral algorithms for large-dimensional data. Strengths: 1. Identifies several phenomena in large-dimensional spectral algorithms based on the derived rates. These phenomena are also illustrated by figures, making the explanations easy to follow. 2. Discovers that the threshold for triggering the saturation effect differs between the large-dimensional and fixed-dimensional settings. Specifically, in the large-dimensional setting, the saturation effect occurs when the interpolation index exceeds the qualification, whereas in the fixed-dimensional setting, it must be more than twice the qualification. Weaknesses: 1. By checking previous work [1,2] and the proofs in these works, it looks like Theorem 3.1 was established in Section 4 of [1], while Theorems 4.1 and 4.2 are direct extensions of partial results in Theorems 2 and 3 from [2]. For instance, the proofs of Theorems 4.1 and 4.2 are obtained by replacing the Tikhonov-regularized filter function in the variance and bias decomposition of [2] with a general filter function satisfying specific conditions such as C1 and C2. Such a proof trick was used in the previous extension from KRR (Tikhonov regularization) to general spectral algorithms, i.e., from [3] to [4]. However, unlike the extension from [3] to [4], the current manuscript seems to be a partial extension of [2] with a similar proof trick, as I mentioned before. Therefore, I have concerns about the technical contribution and novelty of this manuscript as a conference submission. This work seems more like an extension suited to a journal like JMLR, etc. 
I am just not sure whether such a partial extension of a previous article with almost the same proof technique is suitable for conference publication, or whether it would be better evaluated at a journal. I defer this judgment to the AC. Please disregard this comment if the AC deems the current context appropriate for conference publication. 2. I noticed there are some simulation experiments confirming the saturation effect for fixed-dimension KRR; see [5]. Is it possible to confirm the results in this manuscript? I understand that, given the rates are asymptotic, it might be hard to conduct thorough investigations due to the extremely large $d$. But I'm still curious whether any preliminary experiments can be done. [1] Lu, Weihao, et al. "Optimal rate of kernel regression in large dimensions." _arXiv preprint arXiv:2309.04268_ (2023). [2] Zhang, Haobo, et al. "Optimal Rates of Kernel Ridge Regression under Source Condition in Large Dimensions." _arXiv preprint arXiv:2401.01270_ (2024). [3] Zhang, Haobo, et al. "On the optimality of misspecified kernel ridge regression." _International Conference on Machine Learning_. PMLR, 2023. [4] Zhang, Haobo, Yicheng Li, and Qian Lin. "On the optimality of misspecified spectral algorithms." _Journal of Machine Learning Research_ 25.188 (2024): 1-50. [5] Li, Yicheng, Haobo Zhang, and Qian Lin. "On the Saturation Effect of Kernel Ridge Regression." International Conference on Learning Representations. (2024) Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I'm curious to know if it is possible to conduct a similar analysis under ultra-high-dimensional settings, where the dimension grows exponentially with the sample size, e.g., $d = \exp\{n^{\gamma}\}$. Do we need additional techniques to conduct these analyses? 2. Based on the figure, it looks like even when $s> 2\tau$, as long as $d$ grows with $n$, the saturation effect will not happen, which is different from the fixed-dimension setting. 
While this may be a consequence of the derived rate, can the authors provide some intuition behind this? 3. Is there a particular reason that the authors consider $\gamma \in [p(s+1),(p+1)(s+1))$ with $p$ an integer to derive the rates? Why is this ratio an integer? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable comments. Below, we address your concerns and questions in detail. Concern 1: Please let us clarify our novelties and contributions below. - Although the saturation effect has been observed for kernel ridge regression in the large-dimensional setting [2], its persistence for other spectral algorithms was unclear. We determine the exact rate of the excess risk for analytic spectral algorithms with qualification $\tau$. Key outcomes include: - Kernel gradient flow is minimax optimal in large dimensions, with an excess risk rate of $d^{-\min(\gamma-p, s(p+1))}$. - We proved the saturation effect for large-dimensional KRR when $s>1$ and for spectral algorithms with qualification $\tau$ when $s>\tau$. Regarding your concerns on the technical contributions: - [3][4] addressed the convergence rate of fixed-dimensional spectral algorithms when $0<s\leq 2\tau$, focusing on non-saturated cases where KRR and kernel gradient flow are rate-optimal. In contrast, they did not consider the saturation effect of spectral algorithms when $s > 2\tau$. - [1] determined the minimax rate of kernel regression for $f \in [\mathcal{H}]^s$ with $s=1$. The proof in [1] relies heavily on empirical process theory, requiring bounds on the empirical loss and its difference from the expected loss (excess risk). This approach does not generalize to cases where $s \neq 1$. - [2] provided a minimax lower bound for $s>0$ using integral operator techniques for KRR, hinting at a saturation effect but not generalizing to other spectral algorithms. We extended the results to general spectral algorithms in large dimensions using complex variable analysis (Appendix E.1 and [6]) to match upper and lower bounds. Large dimensions introduce unique challenges: - The polynomial eigendecay assumption that $\lambda_j \asymp j^{-\beta}$ does not hold in large dimensions, since the hidden constant factors in that assumption vary with $d$. 
- The embedding index assumption in [6] does not hold either. We develop a new condition in (63) and (74) to replace the embedding index assumption, and we verify this new condition in Appendix D.4. We believe our contributions are novel and valuable for the machine learning community, meriting publication at NeurIPS. **Reference: [6]** Li, Yicheng, et al. "Generalization Error Curves for Analytic Spectral Algorithms under Power-law Decay." --- Concern 2: Thank you for your suggestion. We agree that conducting thorough numerical investigations can be challenging due to the extremely large dimensionality $d$. Following your recommendation, we conducted two preliminary experiments. We will follow your advice to conduct more comprehensive experiments later and report the results in the updated manuscript. --- Question 1: In this paper, we consider the asymptotic framework $n \asymp d^\gamma$ with $\gamma>0$. As discussed in lines 97-115, many studies focus on the performance of spectral algorithms within this asymptotic framework. By comparing our work with these studies, we highlight the novelty and contributions of our results to the field. In contrast, we were not aware of similar studies under ultra-high-dimensional settings, hence we did not explore this scenario in our manuscript. Your insightful suggestion implies that this could be an interesting avenue for future research. When $n \asymp d^{\gamma}$ and $\gamma$ is sufficiently small, the excess risk in our results is of rate $d^{-\gamma}$. If we consider $\log(d)$ as a limiting case of $d^{\gamma}$, we might conjecture that the correct rate of the excess risk when $n \asymp \log(d)$ is $\log^{-1}(d)$. We believe your suggestion is promising, and we will consider conducting a similar analysis under ultra-high-dimension settings in future work. --- Question 2: Thank you for your question. 
We suspect there may be a typo in your expression; we believe you intended to state the distinction between our results and existing results in the fixed-dimension setting as follows: - Our results, Theorems 4.1 and 4.2, demonstrate that the saturation effect occurs in large-dimensional settings if and only if $s > \tau$. In Figure 2 (on page 13), we illustrate this with the spectral algorithm rate (blue line) and the minimax rate (orange line). The blue and orange lines exhibit non-overlapping regions if and only if $s > \tau$. Therefore, Figure 2 aligns with our claims. - In contrast, in the fixed-dimension setting, the saturation effect occurs if and only if $s > 2\tau$. Let us now provide some intuition behind the derived rate. We highlight that the periodic behavior of the rates with respect to $ \gamma $ in Theorems 4.1 and 4.2 is closely related to the spectral properties of inner product kernels for uniformly distributed data on a large-dimensional sphere. In Lemmas D.11 and D.13, we show that $\mu_k= \Theta(d^{-k})$ and $N(d, k) = \Theta(d^{k})$ for $k \leq p+1$. Moreover, recall that the leading terms for the bias and variance are given by $M_{2, \varphi}(\lambda) + \frac{\sigma^2}{n} N_{2,\varphi}(\lambda)$ (see Appendices D.1-D.3). Therefore, by comparing $M_{2, \varphi}(\lambda)$ with $\frac{\sigma^2}{n} N_{2,\varphi}(\lambda)$ (with calculations detailed in Lemma D.14) under the strong block structure of the spectrum, we found that the rate of the excess risk behaves periodically over each period $\gamma \in [p(s+1),(p+1)(s+1))$, $p = 0, 1, \cdots$, and that the saturation effect occurs when $s > \tau$. --- Question 3: The intervals $[p(s+1),(p+1)(s+1))$, $p=0, 1, \cdots$ arise naturally from our derivation process. Thus, we may focus on $\gamma\in [p(s+1),(p+1)(s+1))$. 
Regarding your second question about why this ratio is an integer, we are not entirely certain whether you are referring to the constant $\gamma$ or the integer $p=\lfloor\frac{\gamma}{s+1} \rfloor$; please let us know so that we can clarify further. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed response and explanation that confirms the technical contribution of this paper. Also, thanks for the additional experiments that enhanced the content of the manuscript. I have adjusted my rating, good luck! --- Reply to Comment 1.1.1: Comment: We are pleased that you recognize the technical contributions of our paper and the additional experiments we conducted. We will follow your suggestion to include these experiments in the updated manuscript. Thank you for raising your score.
Summary: ### Summary: The authors study the saturation of spectral algorithms (KRR & GF) in high dimensions where $n,d$ are both large, meaning that KRR cannot achieve the information-theoretic lower bound for overly smooth regression functions while kernel gradient flow (GF) can. Theorem 3.1 states the optimal convergence rate of kernel GF, which matches the provided minimax lower bound in Theorem 3.3. Moreover, they find that KRR is unable to achieve this lower bound (being suboptimal) for interpolation spaces with $s >1$. Strengths: ### Pros: - very well-written - tightening previous results on the minimax rate of kernel GF in high dimensions - proving the saturation of KRR in high dimensions Weaknesses: ### Cons: - the main results of this paper are stated on page 7; the presentation of the results is slow Technical Quality: 4 Clarity: 3 Questions for Authors: This is an interesting paper about the saturation of KRR in high dimensions. The authors provide several new results and the paper is well written and organized. - line 200 -- what does $f_\star$ mean? It is only defined on the next page. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your detailed review and thoughtful comments. We are grateful that you found our paper well-written and technically solid. Below, we address your concerns and questions in detail. **Author's response to Concern 1:** We appreciate your suggestion to present our results earlier in the paper. We agree that, in a 9-page conference paper, it is crucial to present the main results as early as possible. Therefore, we followed your recommendation and added a non-rigorous version of our main results (Theorems 4.1 and 4.2) in the contribution part (page 2 of our manuscript). For your convenience, we restate the non-rigorous version as follows: ### Theorem (Restatement of Theorems 4.1 and 4.2, non-rigorous) Let $s>0$, $\tau \geq 1$, and $\gamma>0$ be fixed real numbers. Let $p$ be the integer satisfying $\gamma \in [p(s+1), (p+1)(s+1))$. Then under certain conditions, the excess risk of a large-dimensional spectral algorithm with qualification $\tau$ is of order: $$ \Theta_{\mathbb{P}} ( d^{-\min ( \gamma-p, s(p+1) )} ) , \quad s \leq \tau $$ $$ \Theta_{\mathbb{P}} ( d^{-\min ( \gamma-p, \frac{\tau(\gamma-p+1)+p\tilde{s}}{\tau+1}, \tilde{s}(p+1) )} ), \quad s > \tau, $$ where $\tilde{s} = \min\{s, 2\tau\}$. We would greatly appreciate your further advice on how to better organize our paper. **Author's response to Question 1:** Thank you for highlighting this point. We will clarify the definition of $f_{\star}(x) = x[1]x[2]\cdots x[L]$ in line 200. Here, $f_{\star}$ represents the regression function, where $x[i]$ denotes the $i$-th component of $x$. We will ensure this definition is included in the revised manuscript. --- We thank you once again for your expertise and valuable feedback. Please let us know if you have any further questions. --- Rebuttal Comment 1.1: Comment: Thank you! I appreciate the authors' response. 
It is really helpful to have such non-rigorous versions of the results earlier in the paper, and I'm happy that the authors included this to improve the presentation of their draft. I continue to support this paper, so I keep my score positive. --- Reply to Comment 1.1.1: Comment: Thank you as well! We are pleased that the changes you suggested have enhanced the clarity and presentation of our work. We sincerely appreciate your positive assessment of our paper.
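As a concrete reading of the restated (non-rigorous) Theorems 4.1 and 4.2 above, the piecewise rate exponent can be evaluated directly. The sketch below is our own illustration (the parameter values are hypothetical, not the authors' settings) and shows how the $s > \tau$ branch can fall below the minimax exponent $\min(\gamma-p, s(p+1))$, i.e., saturation:

```python
import math

# Our own illustration: evaluate the exponent r such that the excess
# risk is Theta_P(d^{-r}), per the restated Theorems 4.1 and 4.2.
def excess_risk_exponent(gamma, s, tau):
    p = math.floor(gamma / (s + 1))   # gamma in [p(s+1), (p+1)(s+1))
    if s <= tau:                      # non-saturated branch
        return min(gamma - p, s * (p + 1))
    s_tilde = min(s, 2 * tau)         # saturated branch, s > tau
    return min(gamma - p,
               (tau * (gamma - p + 1) + p * s_tilde) / (tau + 1),
               s_tilde * (p + 1))

# For KRR (tau = 1) with a smooth target (s = 1.9 > tau) at gamma = 1.8,
# the exponent drops to 1.4, below the minimax exponent min(1.8, 1.9) = 1.8.
assert excess_risk_exponent(1.8, 1.0, 1.0) == 1.0
assert math.isclose(excess_risk_exponent(1.8, 1.9, 1.0), 1.4)
```

Sweeping `gamma` with `s`, `tau` fixed traces out the periodic plateau behaviour discussed in the rebuttal, with one period per interval $[p(s+1),(p+1)(s+1))$.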
Rebuttal 1: Rebuttal: Following the recommendation of Reviewer JAP1, we conducted two preliminary experiments using two specific kernels: the RBF kernel and the NTK kernel. Experiment 1 was designed to confirm the optimal rate of kernel gradient flow and KRR when $s=1$. Experiment 2 was designed to illustrate the saturation effect of KRR when $s>1$. **Experiment 1:** We consider the following two inner product kernels: 1. **RBF kernel with a fixed bandwidth:** $$ K^{\mathrm{rbf}}(x,x^{\prime}) = \exp{\left(-\frac{\|x-x^{\prime}\|_{2}^{2}}{2}\right)}, ~~x, x^{\prime} \in \mathbb{S}^{d}. $$ 2. **Neural Tangent Kernel (NTK) of a two-layer ReLU neural network:** $$ K^{\mathrm{ntk}}(x, x^\prime) := \Phi(\langle x, x^{\prime} \rangle), ~~x, x^{\prime} \in \mathbb{S}^{d}, $$ where $\Phi(t)=\left[\sin{(\arccos t)}+2(\pi-\arccos t)t\right]/ (2 \pi)$. The RBF kernel satisfies Assumption 1. For the NTK, the coefficients of $\Phi(\cdot)$, $(a_j)_{j=0}^{\infty}$, satisfy $a_{j} > 0, j \in \{0, 1\} \cup \{2,4,6,\ldots\}$ and $a_{j} = 0, j \in \{3,5,7,\ldots\}$ (see, e.g., [Lu2023]). As noted after Assumption 1, our results can be extended to inner product kernels with certain zero coefficients $a_j$. Specifically, for any $\gamma>0$, as long as $a_{j} > 0$ for $j = \lfloor \gamma \rfloor, \lfloor \gamma \rfloor+1$, the proof and convergence rate remain the same. Therefore, for $\gamma<2$ in our experiments, the convergence rates for the NTK are the same as for the RBF kernel. We used the following data generation procedure: $$ y_{i} = f_{*}(x_{i}) + \epsilon_{i}, ~~ i = 1, \ldots, n, $$ where each $x_{i}$ is i.i.d. sampled from the uniform distribution on $\mathbb{S}^{d}$, and $\epsilon_{i} \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,1)$. We selected training sample sizes $n$ and corresponding dimensions $d$ such that $n = d^{\gamma}$, $\gamma = 0.5, 1.0, 1.5, 1.8$. 
For each kernel and dimension $d$, we consider the following regression function $f_{*}$: $$ f_{*}(x) = K(u_{1},x) + K(u_{2},x) + K(u_{3},x), \quad \text{for some}\quad u_{1}, u_{2}, u_{3} \in \mathbb{S}^{d}. $$ This function is in the RKHS $\mathcal{H}$, and it is easy to prove that, for any $u \in \mathbb{S}^{d}$, Assumption 2 (b) in our revision holds for $K(u, \cdot)$ with $s=1$. Therefore, Assumption 2 holds for $s=1$. We used logarithmic least squares to fit the excess risk with respect to the sample size, resulting in the convergence rate $r$. The experimental results align well with our theoretical findings. **Experiment 2:** We use most of the settings from Experiment 1, except that the regression function is changed to $f_{*}(x) = \sqrt{\mu_2^{s}N(d, 2)} P_2(\langle \xi, x \rangle)$ with $s=1.9$, $P_2(t) := (d t^2-1)/(d-1)$ the Gegenbauer polynomial, and $\xi \in \mathbb{S}^{d}$. Notice that the addition formula $P_2(\langle \xi, x \rangle) = \frac{1}{N(d, 2)}\sum_{j=1}^{N(d, 2)}Y_{2, j}(\xi)Y_{2, j}(x)$ implies that $$ \|f_{*}\|_{[\mathcal{H}]^{s}}^2 = \frac{1}{N(d, 2)} \sum_{j=1}^{N(d, 2)} Y_{2, j}^2(\xi) = P_2(1) = 1, $$ hence $f_{*} \in [\mathcal{H}]^{s}$ and satisfies Assumption 2. Our experiment settings are similar to those on page 30 of [5]. We choose the regularization parameter for KRR and kernel gradient flow as $\lambda=0.05 \cdot d^{-\theta}$. For KRR, since Corollary D.16 suggests that the optimal regularization parameter is $\lambda \asymp d^{-0.7}$, we set $\theta=0.7$. Similarly, based on Corollary D.16, we set $\theta=0.5$ for kernel gradient flow. Additionally, we set $\gamma = 1.8$. The results indicate that the best convergence rate of KRR is slower than that of kernel gradient flow, implying that KRR is inferior to kernel gradient flow when the regression function is sufficiently smooth. ### References - Lu, Weihao, et al. "Optimal rate of kernel regression in large dimensions." arXiv preprint arXiv:2309.04268 (2023). 
- Li, Yicheng, Haobo Zhang, and Qian Lin. "On the Saturation Effect of Kernel Ridge Regression." International Conference on Learning Representations. (2024) Pdf: /pdf/0f52fcdad49a5a40e17ef2b7a71cecb15321ddf9.pdf
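The Experiment 1 recipe above can be condensed into a short script. The following is our own minimal sketch (the sample sizes, regularization constant, the choice $\gamma = 1$, and the use of the unit sphere in $\mathbb{R}^d$ are illustrative assumptions, not the authors' exact settings): sample inputs uniformly on the sphere, fit KRR with the RBF kernel, and estimate the convergence rate by log-log least squares.

```python
import numpy as np

# Our own sketch of the Experiment 1 recipe; all numeric choices below
# are illustrative, not the authors' settings.
rng = np.random.default_rng(0)

def sphere(n, d):
    """n i.i.d. uniform points on the unit sphere in R^d."""
    x = rng.normal(size=(n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def rbf(a, b):
    """RBF kernel matrix K[i, j] = exp(-||a_i - b_j||^2 / 2)."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / 2)

def krr_excess_risk(n, d, reg=1e-2, n_test=500):
    u = sphere(3, d)                       # anchors: f_* = sum_k K(u_k, .)
    xtr, xte = sphere(n, d), sphere(n_test, d)
    f = lambda x: rbf(u, x).sum(0)
    y = f(xtr) + rng.normal(size=n)        # sigma = 1 noise
    K = rbf(xtr, xtr)
    alpha = np.linalg.solve(K + n * reg * np.eye(n), y)
    pred = rbf(xtr, xte).T @ alpha
    return np.mean((pred - f(xte)) ** 2)   # empirical excess risk

# Logarithmic least squares along n = d^gamma (gamma = 1 here) gives the
# empirical convergence rate r, i.e., excess risk ~ n^{-r}.
ds = [20, 40, 80]
risks = [krr_excess_risk(n=d, d=d) for d in ds]
r = -np.polyfit(np.log(ds), np.log(risks), 1)[0]
```

Repeating this for the NTK $\Phi$ given above, and with the degree-2 Gegenbauer target of Experiment 2, would reproduce the two reported comparisons.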
NeurIPS_2024_submissions_huggingface
2024
Towards Flexible Visual Relationship Segmentation
Accept (poster)
Summary: This paper proposes a flexible framework that can effectively handle human-object interaction (HOI) detection, scene graph generation (SGG), and referring expression comprehension (REC) tasks. The proposed method further addresses the problem of promptable visual relationship segmentation and enables open-vocabulary segmentation. Its main idea is to leverage pretrained vision-language models to ground textual features to visual relationships inside images. Experimental results show that the proposed framework is able to achieve SOTA performance on standard, promptable, and open-vocabulary visual relationship detection/segmentation tasks. Strengths: 1. This paper is overall well-written and easy to follow. 2. The problem addressed by this paper is novel to me. Unifying various visual relationship detection/segmentation tasks within a single framework has seldom been studied in previous works, which might raise new research interest in this field. 3. The overall performance of the method is competitive. It is able to achieve SOTA results on most VRD tasks and outperform previous methods notably. Weaknesses: 1. According to Table 7, the major performance improvement of the proposed method comes from the supervision of the mask head, which is provided by SAM and never used by previous works. When only the box head is used, the results of the proposed method are notably behind the current SOTA methods on HICO-DET and VCOCO. This makes the benefit of the proposed unified framework questionable given such large performance drops. 2. Apart from introducing the segmentation head, the major technical contribution of the proposed method is unclear to me. The model architecture and losses are very similar to the previously proposed dual-decoder architectures in GEN-VLKT, RLIP, etc., and this part needs further clarification. 3. In Table 7, from the last row, we can see that the benefit brought by adding PSG is minor. 
Hence, the results on the PSG dataset should also be reported so that readers can better understand how unifying the framework benefits each individual task. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the Weaknesses section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have widely discussed the limitations of the proposed work and also its challenges in benchmarking. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and helpful feedback. Please refer to “The General Response to Reviewers” for the reply to the issue of the differences from previous methods, and refer to the PDF for the added tables for PSG results and mask-head-only results. We respond below to your other comments and questions. 1. **Major performance improvement comes from the mask head.** This performance drop can be attributed to the fact that our network architecture, derived from Mask2Former, is optimized for pixel-wise segmentation tasks and does not fully leverage the benefits of box-based supervision. To further investigate, we trained existing HOI detectors (CDN, STIP, GEN-VLKT) with only a mask head, using segmentation masks instead of box annotations. Surprisingly, all of these approaches led to reduced accuracy on HICO-DET, as shown in Table 2 of the attached PDF, indicating that these networks are not as effective with segmentation masks as with box annotations. We argue that our proposed method’s preference for mask annotations is analogous to previous methods’ preference for box annotations. 2. **The technical contribution of the proposed method is unclear.** While our architecture shares similarities with dual-decoder models like GEN-VLKT and RLIP, our key contributions are: (1) ***Unified Framework***: As shown in Table 1 of the main paper, our model integrates standard, promptable, and open-vocabulary VRS tasks into a single system, offering broader flexibility than previous methods. (2) ***Mask-Based Approach***: We utilize masks to handle various VRS tasks, allowing our model to adapt effectively to different types of annotations, including HOI detection and generic scene graph generation. (3) ***Dynamic Prompt Handling***: Our approach supports dynamic prompt-based and open-vocabulary settings, addressing the limitations of fixed-vocabulary models. 
As shown in Figure 4 of the main paper, our model can even combine the promptable query with the open-vocabulary ability to ground novel relational objects. Please refer to “The General Response to Reviewers”, where we elaborate more on the differences between our model and previous works. 3. **Results of PSG in Table 7 of the main paper.** Thank you for your suggestion. We have added the results for the PSG dataset in Table 7 of the main paper; please refer to the PDF. The updated results show that while adding HICO-DET and VCOCO can provide slight improvements for PSG, PSG itself does not significantly enhance performance on HICO-DET. This is due to the relatively small size and distinct distributions of these datasets, which may not always be mutually beneficial. We appreciate your input and hope this additional information clarifies the impact of unifying the framework across different tasks. --- Rebuttal 2: Comment: Dear Reviewer k2zH, We sincerely appreciate your recognition and valuable feedback on our paper. In response, we have conducted additional experiments regarding the mask head and the PSG task. We have also provided a detailed clarification of our contributions compared to previous works. We kindly ask whether these results and clarifications have adequately addressed your concerns. Please feel free to let us know if you have any further questions. Thank you very much! Best regards, Authors of Paper 3507 --- Rebuttal Comment 2.1: Comment: Dear reviewer, a reminder to take a look at the author's rebuttal and other reviews. Did the rebuttal address your concerns?
Summary: This work presents a novel approach for visual relationship segmentation that integrates the three critical aspects of a flexible VRS model: standard VRS, promptable querying, and open-vocabulary capabilities. By harnessing the synergistic potential of textual and visual features, the proposed model delivers promising experimental results on existing benchmark datasets. Strengths: 1) The authors introduce a flexible framework capable of segmenting both human-centric and generic visual relationships across various datasets. 2) The authors present a promptable visual relationship learning framework that effectively utilizes diverse textual prompts to ground relationships. 3) The proposed method shows competitive performance in both standard closed-set and open-vocabulary scenarios, showcasing the model’s strong generalization capabilities. Weaknesses: 1. The experimental comparison suffers from potential unfairness: contrasting the proposed method, which uses a Focal-L backbone, with others employing a ResNet backbone seems somewhat unreasonable. 2. Since this paper includes referring expression comprehension (REC) tasks in the abstract, it is unreasonable not to report experimental results on corresponding benchmarks like RefCOCO, RefCOCO+, and RefCOCOg. 3. While the research perspective of this paper is reasonable and innovative, the overall architecture design of Flex-VRS lacks novelty compared to previous related works. 4. More visualizations are needed to demonstrate the fine-grained masks generated by converting existing bounding box annotations from HOI detection datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the Weaknesses section for more details. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have thoroughly discussed the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and helpful feedback. Please refer to “The General Response to Reviewers” for the reply to the issue of backbone comparison, difference with previous methods and refer to the PDF for the added visualizations of masks generated from bounding box annotations. We respond below to your other comments and questions. 1. **Results on referring expression comprehension (REC) task.** - The referring expression comprehension (REC) tasks on benchmarks like RefCOCO, RefCOCO+, and RefCOCOg are designed to detect objects based on free-form textual phrases, such as "a ball and a cat" or "Two pandas lie on a climber." In contrast, the promptable VRS task in our work focuses on detecting subject-object pairs within a structured prompt format, such as <?, sit_on, bench> or <person, ?, horse>, as illustrated in Figure 1 of the main paper. Our model is designed to encode and compute similarity scores for each of these elements separately. - Adapting our model to standard REC tasks would require fundamental changes, as the REC format does not align with the structured, relational queries that our model is designed to handle. Our primary focus is on relational object segmentation based on a single structured query, which differs significantly from the objectives of REC benchmarks. Therefore, it is not reasonable to directly compare our results with those benchmarks without substantial modifications to the model. 2. **The architecture design lacks novelty.** - As highlighted in Table 1 of the main paper, our Flex-VRS framework introduces a flexible architecture that supports standard, promptable, and open-vocabulary visual relationship learning. The novelty of our approach lies in the seamless integration of these diverse functionalities within a single, unified system. This design enables flexible and dynamic interactions with visual relationships, setting our work apart from previous methods that often lack such comprehensive capabilities. 
- Furthermore, our model's one-stage design allows it to effectively handle different types of inputs and perform various tasks based on the input, thanks to our novel query mechanism. This innovative query design is a significant departure from previous works, offering a level of adaptability and flexibility that has not previously been achieved. 3. **More visualizations of generating masks from bounding box annotations.** Thank you for the suggestion. We've included additional visualizations in the attached PDF. These visualizations demonstrate how redundant bounding boxes are consolidated into a single mask, and how our IoU-based filtering effectively removes failure cases. --- Rebuttal 2: Comment: Dear Reviewer EDLE, We sincerely appreciate your valuable feedback and suggestions. In our rebuttal, we have provided explanations regarding the experimental comparisons, differences from REC tasks and previous works, and have included additional visualizations. We kindly inquire whether these results and clarifications have adequately addressed your concerns. Please feel free to let us know if you have any further questions. Thank you very much! Best regards, Authors of Paper 3507 --- Rebuttal Comment 2.1: Comment: Dear reviewer, a reminder to take a look at the author's rebuttal and other reviews. Did the rebuttal address your concerns? --- Reply to Comment 2.1.1: Comment: Dear Reviewer EDLE, Thank you again for taking the time to review our paper. Could you please check whether our rebuttal has addressed your concerns at your earliest convenience? The discussion period will end very soon. Thank you! Best regards, Authors of Paper 3507 --- Rebuttal 3: Title: Response to Authors Comment: The authors have addressed most of my concerns. After checking the peer review comments and the authors' responses, I decided to raise my score for this work.
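The structured prompt format discussed in the rebuttal above (triplets such as `<?, sit_on, bench>` or `<person, ?, horse>`, in contrast to free-form REC phrases) can be illustrated with a small parser. This is purely our own illustration of the format described in the rebuttal; the function name and error handling are hypothetical, not code from the paper.

```python
def parse_triplet_prompt(prompt: str):
    """Parse a structured VRS prompt of the form '<subject, predicate, object>'.

    Any slot may be the wildcard '?', marking the element the model is asked
    to ground. Illustrative sketch of the prompt format only.
    """
    inner = prompt.strip()
    if not (inner.startswith("<") and inner.endswith(">")):
        raise ValueError(f"not a structured prompt: {prompt!r}")
    parts = [p.strip() for p in inner[1:-1].split(",")]
    if len(parts) != 3:
        raise ValueError("expected exactly <subject, predicate, object>")
    subject, predicate, obj = parts
    return subject, predicate, obj

print(parse_triplet_prompt("<?, sit_on, bench>"))   # ('?', 'sit_on', 'bench')
print(parse_triplet_prompt("<person, ?, horse>"))   # ('person', '?', 'horse')
```

Each of the three slots is then encoded and scored separately, which is why a free-form REC phrase like "a ball and a cat" does not fit this interface without substantial changes.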
Summary: This work proposes an approach for visual relationship segmentation that integrates the three aspects of a VRS model: standard VRS, promptable querying, and open-vocabulary capabilities. The idea of the article is very good, but the performance seems to be lacking. Strengths: Enhancing HOI from the perspective of object segmentation is an interesting and promising idea. Weaknesses: 1-Previous work mostly uses ResNet series backbones, while the authors use a Focal-L Backbone. How do the FLOPs and parameter counts of this backbone compare to ResNet? I am concerned there may be an unfair comparison. 2-The performance of the proposed method does not seem to be particularly superior, which is a significant drawback. For example, in Tables 2 and 3, UniVRD, PViC, and RLIPv2 significantly outperform the proposed method in terms of box mAP metrics. Can the authors analyze this situation? 3-In line 208, the authors mention using SAM to generate masks. How do they handle the noise in these masks? Some masks might significantly deviate from the ground truth. 4-There are some typos, such as in line 25: "textttperson". Technical Quality: 3 Clarity: 3 Questions for Authors: see weakness Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and helpful feedback. Please refer to “The General Response to Reviewers” for the reply regarding the comparison of our backbone with ResNet, and refer to the PDF for visualizations of masks generated by SAM. We respond below to your other comments and questions. 1. **Performance seems not to be particularly superior. Can the authors analyze this situation?** (1) ***Training Data and Model Efficiency***: Our approach is designed to be more data-efficient and requires less training data compared to methods like RLIPv2 and UniVRD, which use extensive datasets. For example, they used both VG (108,077 images) and Objects365 (more than 2,000,000 images), while our model only used a single data source, e.g. HICO-DET (47,776 images), which is about 50× smaller. The fact that our model does not rely on these large datasets impacts the box mAP performance in direct comparisons. (2) ***Model Complexity and Two-Stage Approach***: UniVRD, PViC, and RLIPv2 employ two-stage methods and fine-tune object detectors using extra datasets, which contribute to their higher box mAP scores. In contrast, our model uses a one-stage approach without separate object detection fine-tuning. This simple design choice, while impacting box mAP, aligns with our goal of creating a more flexible and data-efficient model. (3) ***Model Capacity***: Our model, utilizing the Focal-L backbone, is significantly smaller in terms of capacity compared to UniVRD. For instance, the Focal-L backbone has 198M parameters compared to 632M in UniVRD with the LiT (ViT-H/14) model. Despite this difference, we achieved comparable results in terms of overall performance metrics (37.4 vs 38.1 on HICO-DET), which highlights the efficiency and effectiveness of our approach despite its smaller size. To conclude, our model prioritizes data efficiency and simplicity over sheer complexity, utilizing less training data and a one-stage approach without additional fine-tuning on extra datasets.
Despite having a significantly smaller model capacity, our approach still achieves comparable overall performance, though at the cost of slightly lower box mAP scores. This trade-off reflects our focus on creating a more flexible and efficient model rather than optimizing for specific metrics like box mAP. 2. **Noise handling for masks generated by SAM.** To address potential noise and inaccuracies in masks generated by SAM, we employ a filtering approach based on Intersection over Union (IoU). - IoU-Based Filtering: We compute the IoU between the generated masks and the original box annotations. Masks with an IoU score below a threshold of 0.2 are considered to have significant deviations from the ground truth and are filtered out. This threshold is chosen to balance the trade-off between including sufficient mask data and excluding those with substantial inaccuracies. - Rationale for IoU Threshold: The chosen IoU threshold helps ensure that only masks with a reasonable overlap with the ground truth annotations are retained. This threshold is set based on empirical evaluation and aims to minimize the impact of masks that are too noisy or incorrect, while still retaining as much useful data as possible. Please refer to the attached PDF for visualizations of generated masks from bounding box annotations. After applying this strategy, we conducted an analysis on 200 samples. We tested various thresholds and found that this one achieves the best balance between denoising and data retention (95% of valid data retained). 3. **Typos.** Thank you for pointing out the typo. We will ensure that the final version of the manuscript is thoroughly proofread to address any such errors. --- Rebuttal 2: Comment: Dear Reviewer W5TQ, We sincerely appreciate your recognition and valuable feedback on our paper. In response, we have analyzed the backbone issue and provided detailed comparisons and evaluation clarifications of our model against previous works.
Additionally, we have clarified the handling of noise in masks and will ensure this is clearly outlined in our revised version. We kindly inquire whether these clarifications have adequately addressed your concerns. Please feel free to let us know if you have any further questions. Thank you very much! Best regards, Authors of Paper 3507 --- Rebuttal Comment 2.1: Comment: Dear reviewer, a reminder to take a look at the author's rebuttal and other reviews. Did the rebuttal address your concerns?
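The IoU-based filtering described in the rebuttal to reviewer W5TQ above can be sketched as follows. This is a minimal reconstruction under our own assumptions (binary masks as NumPy arrays, boxes as `(x1, y1, x2, y2)` pixel coordinates, the box treated as a filled region), not the authors' actual code; only the 0.2 threshold comes from the rebuttal.

```python
import numpy as np

def mask_box_iou(mask: np.ndarray, box) -> float:
    """IoU between a binary mask and a bounding box treated as a filled region."""
    x1, y1, x2, y2 = box
    box_area = max(0, x2 - x1) * max(0, y2 - y1)
    inter = mask[y1:y2, x1:x2].sum()          # mask pixels falling inside the box
    union = mask.sum() + box_area - inter
    return float(inter) / union if union > 0 else 0.0

def filter_masks(masks, boxes, iou_threshold=0.2):
    """Keep only generated masks that overlap their box annotation enough."""
    return [(m, b) for m, b in zip(masks, boxes)
            if mask_box_iou(m, b) >= iou_threshold]
```

A mask with no overlap with its annotated box scores 0 and is dropped, while a plausible mask well inside its box easily clears the 0.2 threshold.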
Summary: This paper proposes a model to handle multiple visual relationship tasks, like HOI detection and Scene Graph Generation. The proposed model is based on vision-language models similar to CLIP. It handles different formulations like the standard close-set, open-vocabulary, and prompted settings. Strengths: - Unified model for HOI and SGG. - Multiple inference settings supported, like open-vocabulary, standard close-set, and prompted. - The writing is smooth, and the demonstration is generally OK. Weaknesses: - Why not detection, rather than segmentation? What's the necessity for segmentation-style output rather than traditional detection-style tasks? I would like more clarification on this. - Also, (if not using the segmentation setting,) this method could be compared with more methods. Currently, the set of compared methods still seems insufficient. - The general contribution and pipeline are similar to UniVRD, except that (1) this model is mask-based rather than box-based, and (2) a prompted setting is further supported. Thus, the contribution seems incremental. - The performance is basically the same as UniVRD and is not as good as RLIPv2. Technical Quality: 2 Clarity: 2 Questions for Authors: - What's the essential difference between this model and UniVRD? - What's the necessity for segmentation rather than detection? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and helpful feedback. Please refer to “The General Response to Reviewers” for the reply to the issue of the difference with previous methods, and refer to the PDF for an illustration of the importance of using segmentation masks. We respond below to your other comments and questions. 1. **Advantages of segmentation over detection.** - Precision and Reduced Redundancy: Traditional bounding boxes often include overlapping and ambiguous information, leading to redundancy. Segmentation masks, by accurately delineating object boundaries, provide a more precise and clear representation, reducing such redundancy, as also illustrated in [1] and [2]. - Enhanced Visual Understanding: Segmentation allows for a more detailed analysis of visual relationships, crucial for tasks like panoptic scene graph generation and human-object interaction detection. It enables the model to better distinguish between object parts and backgrounds, leading to improved performance in complex scene understanding. As shown in Table 7 of the main paper, using a box head leads to a performance drop of 7.5 mAP on HICO-DET and 4.5 mAP on VCOCO. - Comprehensive Contextual Analysis: Bounding boxes typically miss important contextual elements, such as background categories like roads, sky, and trees, as shown in the standard PSG task in Figure 1 of the main paper. Segmentation captures these elements, offering a more complete understanding of the scene, which is vital for tasks like image and video editing. - Unified Model: Additionally, object detection models often struggle to precisely extract foreground objects, which is why they are typically combined with segmentation models like SAM for fine-grained image tasks. Our approach, however, presents a unified model that can localize both subjects and objects, along with their corresponding segmentation masks. We show a visualization to illustrate this point in the attached PDF.
We hope this clarifies the advantages of using segmentation masks in our approach. 2. **Could compare with more methods if not using the segmentation setting.** Thank you for the valuable feedback! We actually conducted comparisons without the segmentation setting, as detailed in the paper. Besides computing mask mAP in the segmentation setting, we also computed box mAP in the non-segmentation setting. We did this by converting our segmentation masks into bounding boxes and transforming the box outputs of previous methods into masks. In Tables 2 and 3 of the main paper, we show comparisons with 13 previous methods, including one-stage and two-stage methods, with different backbones (ResNet, Swin, ViT) and training data, and in Tables 8 and 9 in the appendix, we show a full comparison with over 25 methods. [1] Yang, Jingkang, et al. "Panoptic scene graph generation." ECCV 2022. [2] Yang, Jingkang, et al. "Panoptic video scene graph generation." CVPR 2023. --- Rebuttal 2: Comment: Dear Reviewer LLLd, We sincerely appreciate your valuable feedback on our paper. In our rebuttal, we have discussed the differences between our work and previous studies, as well as provided clarifications regarding the segmentation of relational objects. We kindly inquire whether these clarifications have adequately addressed your concerns. Please feel free to let us know if you have any further questions. Thank you very much! Best regards, Authors of Paper 3507 --- Rebuttal Comment 2.1: Comment: Dear reviewer, a reminder to take a look at the author's rebuttal and other reviews. Did the rebuttal address your concerns? --- Reply to Comment 2.1.1: Comment: Dear Reviewer LLLd, Thank you again for your time to review our paper. Could you please check if our rebuttal has addressed your concerns at your earliest convenience? The deadline of the discussion period will end very soon. Thank you! Best regards, Authors of Paper 3507
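The mask-to-box conversion used for the box-mAP comparison in the rebuttal above (taking the tight bounding box of each predicted mask) is a standard operation and can be sketched as follows; the exact conversion in the paper may differ, this is our own reconstruction.

```python
import numpy as np

def mask_to_box(mask: np.ndarray):
    """Tight bounding box (x1, y1, x2, y2) of a binary mask, exclusive upper bounds."""
    ys, xs = np.nonzero(mask)   # row/column indices of all mask pixels
    if ys.size == 0:
        return None             # empty mask has no box
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1
```

With boxes derived this way from predicted masks, standard box-mAP tooling can score a mask-based model against box-based baselines.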
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback. Our proposed Flex-VRS is commended for its ability to support a variety of tasks (Reviewer LLLd, EDLE, k2zH), its integration of flexibility and open-vocabulary capability into the VRS model (Reviewer LLLd, EDLE), its competitive performance (Reviewer EDLE, k2zH), and its clear presentation (Reviewer LLLd, k2zH). We address some common questions from reviewers here and will incorporate the feedback in the revision. Please also refer to the attached PDF for added visualizations of generated masks and tables for PSG results and mask head results. 1. **Difference from previous methods (e.g. UniVRD).** We would like to clarify the contributions and differences between our method and previous work, including UniVRD and RLIPv2. - **Comparison with UniVRD** – *Methodology Difference*: UniVRD uses a two-stage approach, where the model first detects independent objects and then decodes relationships between them, retrieving boxes from the initial detection stage. In contrast, our method employs a one-stage approach, where each query directly corresponds to a <subject, object, predicate> triplet. This transition improves time efficiency from O(M×N) to O(K), where M is the number of subject boxes, N is the number of object boxes, and K is the number of interactive pairs. ***Our approach also provides greater flexibility by learning a unified representation that encompasses object detection, subject-object association, and relationship classification in a single model.*** (As shown in Figure 1, the subject-object pairs need to be accurately localized and understood.) Our model is superior in terms of model capacity and data efficiency. While we use much less training data (about 50× less, without using VG and Objects365) and our model with the Focal-L backbone is much smaller than UniVRD (164M vs 640M) with LiT (ViT-H/14), we achieve comparable results (37.4 vs 38.1 on HICO-DET).
- **Performance Comparison with RLIPv2** – *Scaling and Design*: While our method does not match RLIPv2 in performance, this is due to different design philosophies and goals. RLIPv2 is a two-stage approach optimized for large-scale pretraining and relies on separately trained detectors. ***Our model, however, is not designed for pretraining and does not include a separately trained detector. Our focus is on enhancing the flexibility of the VRS model without relying on extensive curated data (about 50× more: VG and Objects365).*** Thus, the differences in performance are attributed to the scale and design objectives rather than a direct comparison of methods. 2. **FLOPs and the number of parameters of the backbone compared to ResNet.** As shown in Tables 2 and 3 of the main paper, we have done extensive comparisons with previous methods, including backbones on ResNet-50/101/152, EfficientNet, Hourglass, Swin Transformers, and LiT architectures. For previous methods that utilize a ResNet backbone for HOI detection and PSG, we already conducted a thorough comparison, including VSGNet, ACP, and No-Frills, which used ResNet-152. To the best of our knowledge, larger ResNets, such as ResNet-200 and ResNet-269, are not used in previous methods on related tasks. For ResNet, we have included the largest model, ResNet-152, which has 65M parameters and 15 GFLOPs. Other baselines do not use the ResNet backbone; for example, UniVRD uses a LiT (ViT-H/14) backbone. It has 632M parameters and 162 GFLOPs, far more than our 198M parameters and 15.6 GFLOPs, but still performs worse than our model. Pdf: /pdf/a88de98b0414b972f78aeee497a7b3a531cfbd49.pdf
NeurIPS_2024_submissions_huggingface
2024
Nesterov acceleration despite very noisy gradients
Accept (poster)
Summary: The paper studies Nesterov's accelerated gradient method (NAG) for smooth and (strongly-)convex problems under the multiplicative noise model, where the variance of the stochastic gradients behaves as $O(\sigma^2 \| \nabla f(x) \|^2)$. The authors identify that NAG in its original momentum formulation can tolerate multiplicative noise for $\sigma \in [0,1)$. They propose a modified version which uses 2 step size sequences for the update and momentum steps, respectively, which are a constant factor apart from each other, and prove that their method achieves the optimal rates for any positive value of $\sigma$ for general convex and strongly convex problems. Strengths: **Clarity and Presentation**\ The manuscript identifies the key contributions in terms of the algorithmic design and explains how/why the particular parameter choices are necessary for the convergence with optimal rates. **Strengths**\ The authors identify the limits of NAG and propose an easy but nice fix to remedy the limitations under multiplicative noise. They are inspired by the continuous time aspect of NAG, similar to [Even et al., 2021] and [Liu and Belkin, 2018], but their modified construction of the Lyapunov function accommodates the 2-step-size formulation they have. Looking at the analysis for NAG and AGNES, one could see that the separation in the second term of the Lyapunov function, i.e., $\| b(n) ( x'_n - x_n ) + a(n) ( x'_n - x^* ) \|^2$, enables handling the multiplicative noise and the 2-step-size structure at the same time. This is a simple fix, but it proves to be useful. To my knowledge, this is the first work that successfully studies general convex and strongly-convex functions under multiplicative noise and proves optimal bounds. I find the results and the simplicity of the proposed method to be important.
Weaknesses: **Clarity and Presentation**\ I think it should be made clearer that the proposed method is a simple, modified version of NAG rather than a new framework, as the only difference is using 2 step size sequences, which are only a multiplicative factor of $\frac{1}{1 + \sigma^2}$ apart from each other. **Weaknesses** The equivalence with [Liu & Belkin, 2018] makes me regard this work as a new analysis for an existing framework. They also have overlapping proof ideas with [Even et al., 2021] (use of continuous-time perspective, similar Lyapunov function constructions). Having said that, the authors seem to have identified correct modifications of the existing methodologies. I strongly suggest including a dedicated section comparing the analysis techniques and explaining their main strengths/contributions in relation to the related work, as it is not clear from the manuscript. The results for the general convex case are not too interesting in my opinion. The main reason is the existing work on adaptive methods, which essentially prove that $\sum_{i=1}^{n} a_i^2 \| \nabla f(x_i) \|^2 \leq O(LD^2)$, where $D$ is the diameter of the bounded set of iterates. Please refer to [I - IV] for relevant results. I cannot say anything for sure before analyzing in detail, but looking at their proofs, it should be fairly possible for those methods to tolerate even unknown values of $\sigma$ when $\mu = 0$. Note that most probably this would not extend to strongly-convex problems. **References** [I] Levy, K.Y., Yurtsever, A., & Cevher, V. (2018). Online Adaptive Methods, Universality and Acceleration. ArXiv, abs/1809.02864. [II] Cutkosky, A. (2019). Anytime Online-to-Batch, Optimism and Acceleration. *Proceedings of the 36th International Conference on Machine Learning*, in *Proceedings of Machine Learning Research* 97:1446-1454. Available from https://proceedings.mlr.press/v97/cutkosky19a.html. [III] Kavis, A., Levy, K.Y., Bach, F.R., & Cevher, V. (2019).
UniXGrad: A Universal, Adaptive Algorithm with Optimal Guarantees for Constrained Optimization. Neural Information Processing Systems. [IV] Joulani, P., Raj, A., György, A., & Szepesvari, C. (2020). A simpler approach to accelerated optimization: iterative averaging meets optimism. International Conference on Machine Learning. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Could you please comment on other formulations of acceleration such as the optimal method in [V, Eq. (3.11)] and the method of similar triangles [VI]. Specifically, optimal method in [V] has a 2 step size formulation and it might already give the desired result for multiplicative noise. 2. In Figure 3, what happens to NAG when $\sigma = 0$? Please also include a case where $\sigma < 1$ to compare NAG and AGNES. 3. Could you elaborate on the contributions of your paper in terms of the theoretical analysis w.r.t. prior work? **References**\ [V] Nesterov, Y. (2005). Smooth minimization of non-smooth functions. Mathematical Programming, 103, 127-152. [VI] Gasnikov, A.V., Nesterov, Y.E. Universal Method for Stochastic Composite Optimization Problems. Comput. Math. and Math. Phys. 58, 48–64 (2018). https://doi.org/10.1134/S0965542518010050 Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: I would suggest the authors to move the Appendix B.2 to the main text and discuss the equivalence to the prior work clearly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
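To make the noise model in the review above concrete: below is a minimal sketch of a multiplicative-noise gradient oracle with $\mathbb{E}\|g - \nabla f(x)\|^2 = \sigma^2 \|\nabla f(x)\|^2$, together with the simplest robust response of shrinking the gradient step by $1/(1+\sigma^2)$, shown with plain gradient descent on a quadratic. This is our own illustration of the noise model, not the AGNES recursion from the paper; AGNES applies the analogous scaling inside a two-step-size NAG-type scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0   # noise level; note sigma > 1, the regime where plain NAG can fail
L = 1.0       # smoothness constant of f(x) = 0.5 * ||x||^2

def noisy_grad(x):
    """Multiplicative-noise oracle: E||g - grad f||^2 = sigma^2 ||grad f||^2."""
    grad = x  # gradient of 0.5 * ||x||^2
    return grad * (1.0 + sigma * rng.standard_normal())

# Shrinking the step by 1/(1 + sigma^2) keeps the expected squared error
# contracting: E[(1 - h(1 + sigma*xi))^2] = sigma^2 / (1 + sigma^2) < 1.
h = 1.0 / (L * (1.0 + sigma ** 2))
x = np.ones(3)
for _ in range(1000):
    x = x - h * noisy_grad(x)

print(np.linalg.norm(x))  # tiny despite the very noisy gradients
```

With the unscaled step $h = 1/L$ the same iteration would diverge in expectation for $\sigma \geq 1$, which mirrors the dichotomy the review describes for NAG.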
Rebuttal 1: Rebuttal: >I think it should be made clearer that the proposed method is a simple, modified version of NAG rather than a new framework, as the only difference is using 2 step size sequences, which are only a multiplicative factor of $\frac{1}{1+\sigma^2}$ apart from each other. We agree that a key point is that our method is a simple modification of NAG. We tried to highlight this in the abstract and will be happy to emphasize it more in the main text of a revised version of the paper. >The equivalence with [Liu \& Belkin, 2018] makes me regard this work as a new analysis for an existing framework. They also have overlapping proof ideas with [Even et al., 2021] (use of continuous-time perspective, similar Lyapunov function constructions). Having said that, the authors seem to have identified correct modifications of the existing methodologies. I strongly suggest including a dedicated section comparing the analysis techniques and explaining their main strengths/contributions in relation to the related work, as it is not clear from the manuscript. Thank you for this comment. We agree that our work should be regarded as a new analysis (and derivation) of the method of [Liu & Belkin, 2018]. In addition, the ideas behind the proof, in particular Lyapunov function arguments inspired by an analysis of the continuous dynamics, are fairly well-known in the optimization literature. We derived the method from our analysis of NAG independently and found the equivalence after the analysis was completed. The main novelty is a simple parametrization which simplifies the proofs of our main results (which are novel for this method). >The results for the general convex case are not too interesting in my opinion. The main reason is the existing work on adaptive methods, which essentially prove that $$ \sum_{i=1}^n a_i^2 \|\nabla f(x_i)\|^2 \leq O(LD^2), $$ where $D$ is the diameter of the bounded set of iterates. Please refer to [I - IV] for relevant results.
>I cannot say anything for sure before analyzing in detail, but looking at their proofs, it should be fairly possible for those methods to tolerate even unknown values of $\sigma$ when $\mu = 0$. Note that most probably this would not extend to strongly-convex problems. Thank you for pointing out this literature, with which we were previously not familiar; we plan to add it to our literature review. Based upon our reading, this work is very extensive and covers the situations of online optimization and adaptivity with respect to the (additive) noise level $\sigma$ and the smoothness parameter $L$ (which may even be infinite for some results, assuming that the objective function is Lipschitz-continuous). The second case excludes, for instance, strongly convex functions, since we have $$ f(x)\geq f(x^*) + \frac\mu2\|x-x^*\|^2, $$ i.e. the function grows quadratically at infinity. If the function were Lipschitz-continuous, it could grow at most linearly. Many merely convex functions also grow faster than linearly at infinity. Unbounded gradients particularly affect the multiplicative noise setting we consider: the multiplicative noise is our 'friend' close to the set of minimizers since it becomes small, but it is our 'enemy' at infinity since it becomes large and may send us in the wrong direction. In particular, we have no a priori bounds on the diameter of the trajectory or on the gradient estimates in $L^\infty$. The fact that we do not constrain the trajectory also allows us to formulate a result in the setting where the objective function does not have minimizers, in which case the optimizer trajectory *has to* diverge. It is not immediately apparent how the adaptive algorithms would fare in these situations, as they have not been studied with multiplicatively scaling noise. We agree that this is an important part of the literature and we will include a comparison to it in a revised version of the paper.
Still, we believe that we contribute new results even in the convex case here. >Could you please comment on other formulations of acceleration such as the optimal method in [V, Eq. (3.11)] and the method of similar triangles [VI]. Specifically, optimal method in [V] has a 2 step size formulation and it might already give the desired result for multiplicative noise. Again, we thank the reviewer for pointing out this work, which we were previously unaware of. Based upon our reading, the method in reference [V, Eq. (3.11)] is an optimal accelerated method in the plain convex (i.e. not strongly convex) case, which works even with a constraint (the convex set $Q$). Without further analysis, we do not know how this method will perform with multiplicative noise, but we believe that such an analysis will be too extensive to add to this paper and thus leave it to future work. We will certainly mention this work and its contribution to accelerated methods in the convex case in a revised version of the manuscript, however. We were also very intrigued by the method of similar triangles, which gives a nice way of deriving accelerated optimization methods for convex and strongly convex objectives that we were not aware of. However, we do not know how this method will generalize to the case of multiplicative noise without significant further analysis, which we again leave to future work. Of course, we will certainly mention the contributions of this paper in our revision. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer e67y Comment: Thank you for your responses. **Convex case and adaptive methods**\ Let me try to explain what I had in mind with more details. First, I do not suggest such methods are better suited for this problem; I can see how their analysis techniques are suitable for handling multiplicative noise. In the proof of adaptive methods, one cannot guarantee descent each iteration unless the method is equipped with linesearch. 
Therefore, the main strategy is to prove that the cumulative error of not knowing the true Lipschitz constant will be bounded by a constant that is governed by the initial step size, the Lipschitz constant, and the initial distance/bounded domain radius. At the end of the day, the effort of bounding this cumulative error is equivalent to showing that the adaptive step size has a strictly positive limit and is lower bounded by a large enough value. The step size is of the form: $$ \gamma_t = \frac{\gamma}{\sqrt{ \beta + \sum_{k=1}^{t} \alpha_k^2 \| \nabla f(\bar x_k) \|^2 }} $$ where $\gamma$ and $\beta$ are some initialization parameters, $\alpha_k$ is the averaging parameter that grows in the order of $O(k)$, and $\bar x_k$ is the weighted average of the sequence(s) generated by the algorithm with respect to the weights $\alpha_k$. Note that this step size is non-increasing; assume for simplicity that $\gamma_t$ incorporates the knowledge of $L$ and $\mu$. Then, one could show that, under **unbounded** gradients, the step size has a strictly positive limit, and the algorithm should converge at a rate of $O(1/T^2)$. This translates to showing that $\beta + \sum_{k=1}^{t} \alpha_k^2 \| \nabla f(\bar x_k) \|^2 \leq O(1)$. Multiplicative noise gives us a way to move from the noise vector to the full gradients. In its presence, the analysis will have an additional term of the form $\sigma^2 \sum_{t=1}^{T} \| \nabla f(x_t) \|^2$. The main upside is that this summation is in terms of the full gradient, but missing the scaling parameter, which could be added to the expression by upper bounding it. Then, we will have two summation terms of the same form; one has a multiplicative constant that depends on $L$ (the original error term) and the other has the variance term $\sigma^2$. The proof needs additional steps to verify this logic, but I believe it is not too complicated.
My point is not to downplay the results presented in this paper, but I would like to propose that there are multiple techniques that have the potential to achieve the accelerated rates under multiplicative variance. As I mentioned, this technique is not valid when we are aiming for a linear rate. Indeed, [Levy, 2017] has a result for (non-accelerated) adaptive methods for smooth and strongly convex problems. When the algorithm knows $L$ and $\mu$, it is possible to achieve (non-accelerated) linear rate, however, there is a multiplicative factor $T$, which also appears in the linear rate exponent. **References:** Levy, K. Y. Online to Offline Conversions, Universality and Adaptive Minibatch Sizes. NeurIPS 2017. --- Reply to Comment 1.1.1: Comment: We are grateful to the reviewer for going above and beyond in providing such a detailed outline, and we agree that this is a very interesting direction to pursue. We will look into this, possibly for a follow-up article, depending on how complex it will be to fill in the gaps.
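To make the step-size rule sketched in this exchange concrete, here is a minimal toy implementation (the way the step is applied to the iterates and the choice $\alpha_k = k$ are illustrative assumptions, not the scheme of [Levy, 2017] or any other cited work):

```python
import numpy as np

def adaptive_weighted_averaging(grad, x0, T, gamma=1.0, beta=1.0):
    """Toy sketch of the adaptive rule discussed above:
    gamma_t = gamma / sqrt(beta + sum_k alpha_k^2 ||grad f(xbar_k)||^2),
    where xbar_k is the weighted average of the iterates with weights
    alpha_k growing like O(k). The way the step is applied to x is an
    assumption made purely for illustration."""
    x = np.asarray(x0, dtype=float)
    xbar = x.copy()                    # running weighted average of iterates
    accum = beta                       # beta + sum_k alpha_k^2 ||g_k||^2
    weight_sum = 0.0
    for k in range(1, T + 1):
        alpha_k = float(k)             # averaging weights growing as O(k)
        weight_sum += alpha_k
        xbar = xbar + (alpha_k / weight_sum) * (x - xbar)
        g = grad(xbar)                 # (possibly noisy) gradient oracle
        accum += alpha_k ** 2 * np.dot(g, g)
        step = gamma / np.sqrt(accum)  # non-increasing adaptive step size
        x = x - step * alpha_k * g
    return xbar
```

On a simple quadratic, the accumulated gradient sum keeps the effective step bounded even though no Lipschitz constant is supplied, which is the mechanism the reviewer describes.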
Summary: This paper presents a new accelerated gradient method for smooth (possibly strongly) convex functions. It is assumed that the available gradients are noisy, where the noise level is proportional to the norm of the gradient. The authors show that Nesterov's accelerated gradient method (NAG) will converge in this setting when the constant of proportionality for the noise level is less than 1, but will diverge otherwise. To remedy this, the authors present the Accelerated Gradient descent with Noisy EStimators (AGNES) scheme. AGNES employs an additional parameter $\alpha$ in the momentum step, and the authors show that if $\eta = \alpha$ (where $\eta$ is the step size for NAG), then NAG is equivalent to AGNES. Theoretical guarantees are presented to show that AGNES converges in the convex case, and at a better rate in the strongly convex case. Due to the noise in the gradients, the theoretical results hold in expectation, and the authors also show almost sure convergence in Corollary 5. Numerical experiments are also presented to support their findings. Strengths: Writing. The paper was well written and presented. Clear motivation for the new algorithm was given, as well as a comparison with other results in the literature to put this work in context. I enjoyed reading it. Contribution. Problems where one encounters noisy gradients are abundant in the literature, so new algorithms for such tasks are welcome. The algorithm is conceptually simple, with few parameters to tune, and theoretical results guarantee convergence. Numerical experiments. The algorithm is supported by numerical experiments, which show the practical behavior of AGNES, and support the findings of the paper. Weaknesses: Personally, I would have preferred if the authors had used different notation. For example, in (1), there are two sequences of points, $\{x_n\}$ and $\{x_n'\}$.
But then, the authors use the notation $g_n = g(x_n',\omega_n)$, i.e., the gradient estimate does not have a prime, but it is associated with the sequence $x_n'$ involving a prime. I found it a bit jarring to read. To be honest, it probably would be even better to have used different letters to denote the two sequences, rather than a prime. I know it is hard to write in this style where a lot of important information needs to be put off until the appendix. However, in this case, it might have been an idea to have included the Literature Review (Appendix C) in the main body of the paper, and then taken out some of the length in the numerical experiments section (and put it in the appendix) to balance it out to 9 pages. Both of these are personal preferences for the authors to take note of (I am not asking them to do anything here). Technical Quality: 3 Clarity: 4 Questions for Authors: N/A Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their feedback and encouraging comments. > I would have preferred if the authors had used different notation. For example, in (1), there are two sequences of points, $x_n$ and $x'_n$. But then, the authors use the notation $g_n=g(x'_n, \omega_n)$, i.e., the gradient estimate does not have a prime, but it is associated with the sequence involving a prime. We thank the reviewer for the good suggestion, and we agree that this would make the notation more consistent. We will be happy to replace $g_n$ with $g'_n$ for NAG and AGNES to make it clearer that those gradients are estimated at $x'_n$. >it might have been an idea to have included the Literature Review (Appendix C) in the main body of the paper, and then taken out some of the length in the numerical experiments section (and put it in the appendix) to balance it out to 9 pages. We agree that the literature review being in the main body would help the readers put the current work in the context of prior works. We will be happy to use the additional content page that we would get, if accepted for publication, to move the literature review to the main body of the paper.
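For readers tracing the notation just discussed, a minimal sketch of NAG with the two sequences $x_n$, $x'_n$ and a multiplicative-noise oracle $g_n = g(x'_n, \omega_n)$ might look like this (an illustration of the setting only, not the authors' code; the AGNES update itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(grad, x, sigma):
    """Unbiased oracle whose noise level is proportional to the gradient
    norm: g = grad(x) * (1 + sigma * xi) with xi standard normal."""
    return grad(x) * (1.0 + sigma * rng.standard_normal())

def nag(grad, x0, eta, rho, sigma, T):
    """Nesterov's accelerated gradient with gradients evaluated at the
    extrapolated sequence x'_n, as in the review's description."""
    x = np.asarray(x0, dtype=float)
    x_prime = x.copy()
    for _ in range(T):
        g = noisy_grad(grad, x_prime, sigma)  # g_n = g(x'_n, omega_n)
        x_new = x_prime - eta * g             # gradient step
        x_prime = x_new + rho * (x_new - x)   # momentum extrapolation
        x = x_new
    return x
```

With $\sigma = 0$ this reduces to deterministic NAG; the paper's point is that convergence can fail once the proportionality constant of the noise exceeds 1.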
Summary: This paper proposes an accelerated gradient method which is applicable in stochastic convex optimisation with an unbiased but highly noisy estimator. The method is analysed in a continuous-time convergence framework. Special attention is paid to the application of the method in machine learning; corresponding numerical experiments were carried out which confirm the advantage of the proposed method over some other ones, including classical Nesterov's accelerated method. Strengths: The proposed method is indeed a good alternative to a classical NAG method when some simple problem statements are considered, which is the case in the problems chosen for practical evaluation. The analysis in continuous-time fashion seems unnecessary here, yet convenient for proving the results concisely and clearly. Most minor technical details are clarified in appendices. Weaknesses: The language requires proof-reading, at least with an automated tool, and should be made more formal. The same recommendation applies to the mathematical expressions: the text takes several liberties here and there, e.g. it uses notions that are never introduced and serve no purpose, applies functions of the same name to differently typed arguments, and exploits x* for convex functions which are not guaranteed to attain their infimum, which gives the impression of an informal draft, though I believe this can be remedied, taking into account that no significant details remain unclarified after reading the appendices. Similarly for the numerical experiments: std-shadows are not added to some of the plots in stochastic experiments, axes are not labelled, etc. Besides, the text contains many remarks on physical meaning, or on qualitative comparison of the proposed method with classical ones, which are so unclear that they contribute nothing to understanding and shift focus away from the factual results. To summarise, the text is far from publishable. Technical Quality: 3 Clarity: 1 Questions for Authors: The contribution of the paper is not clear.
Since we expect that a paper contributes either to theory or to practice, I tried to determine whether it does. In theory, it does not provide an exhaustive literature review (arxiv:2307.01497 is ignored, for example; the only comparison of theoretical guarantees is given in Figure 1 with SGD), which would be expected when yet another accelerated method is proposed; it competes with only SGD and NAG, i.e. the simplest methods, does not consider any structured optimisation settings, and does not provide a generalizable analysis with any new techniques, but only uses some known tools to show noise-robustness in the simplest setting, again without comparing the results with the literature on optimisation with state-dependent noise. In practice, it looks better when the comparison is carried out on non-toy examples like image classification, but the achieved improvement does not look significant in comparison to Adam, and its advantage is confirmed only on a few simple problems, which does not convince a practitioner that the method is worth using. Thus, this method does not seem helpful to the community, at least through its presentation in this text. Confidence: 4 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: The limitations are clear from reading. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their helpful feedback. > The language requires proof-reading [...] and should be made more formal [...] To summarise, the text is far from publishable. We understand that our presentation goes against the reviewer's stylistic preferences, and we are happy to integrate feedback into a revised version. To do so, we would request more specific feedback about the alterations the reviewer is suggesting. Since two other reviewers rated the presentation as '4: excellent', we are hesitant to consider changes without further clarification. We understand that this places additional strain on the reviewer's time, and we are very grateful for their service and the feedback they already provided. > text [...] exploits $x^*$ for convex functions which are not guaranteed to attain their infimum The existence of a minimizer $x^*$ is stated as part of the assumptions in Theorems 1 and 3. In fact, we study convex objective functions *without* minimizers in Theorem 7 (see also the paragraph above it). We are happy to rephrase the statement to emphasize this in a revised version. If a minimizer does not exist, we do not expect that the algorithm would yield quantitative guarantees since even the 'heavy ball ODE' (deterministic, continuous time) can be arbitrarily slow due to Lemma 4.2 in [1] (see also lines 870-883 in Appendix F). > std-shadows are not added to some of the plots in stochastic experiments The std-shadows are included in all experiments involving neural networks (albeit imperceptibly small in some plots compared to the oscillations of the trajectory, which obscure them somewhat). They are not included in the plots for convex optimization since they made the plots less clear. Standard deviations were small and comparable for all methods considered.
We did not find them to add much to the message: If $\mathbb E[f(x_n)] < \varepsilon$ for instance, then $f(x_n)< n\varepsilon$ with probability at least $1-1/n$ by Markov's inequality, directly yielding bounds on $f(x_n)$ since $\mathbb E [f(x_n)] \to 0$ fairly quickly. > In theory, it does not provide an exhaustive literature review (arxiv:2307.01497 is ignored, for example The literature review is in Appendix C. We would be very happy to move it to the main text in a revised version and dedicate the additional text page to it. In a large and complex field, the review is certainly not complete, but we provide more context with the literature there (see also the next point). We were indeed unaware of arxiv:2307.01497 and we will be happy to add it to the literature review. > the only comparison of theoretical guarantees is given in Figure 1 with SGD)[...], competes with only SGD and NAG, i.e. the simplest methods, We are emphasizing the advantages of acceleration in Figure 1, which are shared with other methods such as ACDM and CNM. We present those methods alongside AGNES in Figure 3. We would be happy to replace the label AGNES in Figure 1 by 'AGNES, ACDM'. We conjecture that the same holds for CNM, but the full result is not available in the literature. > without comparing their results with the literature on optimisation with state-dependent noise We would be happy to expand the discussion of the results of Vaswani et al. and Even et al. in a revised manuscript and to include the reference suggested above. > achieved improvement does not look significant in comparison to Adam We consider this work a (non-trivial) first step at designing algorithms which combine momentum and adaptivity in a more principled way. The simplicity of the algorithm, e.g. compared to ACDM, is an important part of this.
Even in our fairly simple examples, it can be seen that for more stochastic gradient estimates (smaller batch or larger model), AGNES is more stable in its performance compared to Adam. A similar trend would be expected when looking at very diverse datasets (although we do not test this hypothesis here). [1] Siegel, Wojtowytsch: A qualitative difference between gradient flows of convex functions in finite- and infinite-dimensional Hilbert spaces, *arXiv:2310.17610 [math.OC]*
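The tail bound invoked in this reply can be written out explicitly: for a nonnegative objective value $f(x_n)$ with $\mathbb E[f(x_n)] < \varepsilon$,

$$\mathbb P\big(f(x_n) \ge n\varepsilon\big) \;\le\; \frac{\mathbb E[f(x_n)]}{n\varepsilon} \;<\; \frac{1}{n},$$

i.e. $f(x_n) < n\varepsilon$ with probability at least $1 - 1/n$ (Markov's inequality applied to the nonnegative variable $f(x_n)$).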
Summary: This paper introduces and studies AGNES (Accelerated Gradient Descent with Noisy Estimators), a generalization of Nesterov's accelerated gradient (NAG) descent algorithm. First, they show that NAG's guarantees break in the high noise regime. Then they prove that AGNES can accelerate gradient descent at any noise scale in smooth convex and strongly convex minimization tasks. The paper also provides empirical evidence for AGNES's improved performance over existing methods in various settings such as ImageNet. Strengths: The strongest aspect of their paper is the theoretical result. The fact that they only require two parameters will make it easier to apply to practical applications, and it also leads to less space usage than Vaswani et al. Weaknesses: The optimizer comparison is done on small datasets and even then not done well, since the authors either do not sweep over hyperparameters such as the learning rate or sweep with a large factor such as 10. This reduces the certainty of the claim that AGNES outperforms NAG in the high noise regime for deep learning. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. As the authors mentioned, their algorithm is equivalent to a reparameterization of previous algorithms such as that in Liu and Belkin and also, I believe, the one in https://arxiv.org/abs/1803.05591. This should be highlighted more since currently the paper is written as presenting a new algorithm rather than presenting a better analysis. 2. Could the authors either add more sweeps over hyperparameters or add these limitations to the empirical claims (such as in Contribution 5)? I would be happy to increase my rating if the authors could answer the above queries. [Addressed] Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their helpful feedback and will be happy to incorporate it in a revised version. > This should be highlighted more since currently the paper is written as presenting a new algorithm rather than presenting a better analysis. We agree with the point that the novelty of our work lies in the analysis of this algorithm, which is a reparametrization of Liu and Belkin (2018)'s algorithm. We will emphasize this more clearly in the revision and further place our work in the broader context of similar accelerated algorithms. We derived AGNES as a modification of NAG independently in a parametrization particularly suited to proving acceleration in this setting and only realized the equivalence after. We maintain that the 'AGNES' version of the scheme is particularly suited to geometric interpretation and analysis, which is why we retained it. > This reduces the certainty of the claim that AGNES outperforms NAG in the high noise regime for deep learning... >Could the authors either add more sweeps over hyperparameters or add these limitations to the empirical claims (such as in Contribution 5)? The work is primarily theoretical and we are happy to acknowledge the limitations of our experimental section. To address the question at least on one realistic example, we ran additional experiments testing a much wider range of hyperparameters for NAG for the task of training ResNet34 on classifying images from CIFAR-10. The plots along with the complete details of the experiment are included in the pdf attached to the global rebuttal. The results indicate that AGNES outperforms NAG with little fine-tuning of the hyperparameters. 
Details of the experiment: We trained ResNet34 on CIFAR-10 with a batch size of 50 for 40 epochs using NAG with learning rate in {$8\cdot 10^{-5}, 10^{-4}, 2\cdot 10^{-4}, 5\cdot 10^{-4}, 8\cdot 10^{-4}, 10^{-3}, 2\cdot 10^{-3}, 5\cdot 10^{-3}, 8\cdot 10^{-3}, 10^{-2}, 2\cdot 10^{-2}, 5\cdot 10^{-2}, 8\cdot 10^{-2}, 10^{-1}, 2 \cdot 10^{-1}, 5\cdot 10^{-1}$} and momentum value in {$0.2, 0.5, 0.8, 0.9, 0.99$}. These 80 combinations of hyperparameters for NAG were compared against AGNES with the default hyperparameters suggested $\alpha = 10^{-3}$ (learning rate), $\eta = 10^{-2}$ (correction step), and $\rho = 0.99$ (momentum) as well as AGNES with a slightly smaller learning rate $5\cdot 10^{-4}$ (with the other two hyperparameters being the same). AGNES consistently achieved a lower training loss as well as a better test accuracy faster than any combination of NAG hyperparameters tested. The same random seed was used each time to ensure a fair comparison between the optimizers. Overall, AGNES remained more stable and while other versions of NAG occasionally achieved a higher classification accuracy in certain epochs, a moving time average of AGNES outperformed every version of NAG (results not plotted due to space constraints, but will be included in the revised manuscript). --- Rebuttal Comment 1.1: Title: Reponse to Authors. Comment: Thank you for the additional experiments. I am assuming that the authors will add limitations re "happy to acknowledge the limitations of our experimental section" in the final version. I will increase my score.
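The sweep described above can be organized with a simple grid harness. Here is a hedged sketch in which `train_and_eval` is a hypothetical callback (not from the paper) that trains the model with the given optimizer and seed and returns the final test accuracy:

```python
from itertools import product

# Grid from the experiment: 16 learning rates x 5 momentum values = 80
# NAG configurations, compared against two AGNES settings.
LRS = [8e-5, 1e-4, 2e-4, 5e-4, 8e-4, 1e-3, 2e-3, 5e-3,
       8e-3, 1e-2, 2e-2, 5e-2, 8e-2, 1e-1, 2e-1, 5e-1]
MOMENTA = [0.2, 0.5, 0.8, 0.9, 0.99]

def sweep(train_and_eval, seed=0):
    """Run every configuration with the same seed for a fair comparison."""
    results = {}
    for lr, mom in product(LRS, MOMENTA):
        results[("NAG", lr, mom)] = train_and_eval("NAG", seed, lr=lr, momentum=mom)
    # AGNES settings from the rebuttal: eta = 1e-2 (correction), rho = 0.99
    for alpha in (1e-3, 5e-4):
        results[("AGNES", alpha)] = train_and_eval("AGNES", seed, alpha=alpha, eta=1e-2, rho=0.99)
    return results
```

This yields the 82 runs (80 NAG + 2 AGNES) reported in the experiment.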
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their valuable feedback! The pdf attached contains an additional sweep over hyperparameters to compare AGNES and NAG as suggested by Reviewer 2pRK. Details of the experiment: We trained ResNet34 on CIFAR-10 with a batch size of 50 for 40 epochs using NAG with learning rate in {$8\cdot 10^{-5}, 10^{-4}, 2\cdot 10^{-4}, 5\cdot 10^{-4}, 8\cdot 10^{-4}, 10^{-3}, 2\cdot 10^{-3}, 5\cdot 10^{-3}, 8\cdot 10^{-3}, 10^{-2}, 2\cdot 10^{-2}, 5\cdot 10^{-2}, 8\cdot 10^{-2}, 10^{-1}, 2 \cdot 10^{-1}, 5\cdot 10^{-1}$} and momentum value in {$0.2, 0.5, 0.8, 0.9, 0.99$}. These 80 combinations of hyperparameters for NAG were compared against AGNES with the default hyperparameters suggested $\alpha = 10^{-3}$ (learning rate), $\eta = 10^{-2}$ (correction step), and $\rho = 0.99$ (momentum) as well as AGNES with a slightly smaller learning rate $5\cdot 10^{-4}$ (with the other two hyperparameters being the same). AGNES consistently achieved a lower training loss as well as a better test accuracy faster than any combination of NAG hyperparameters tested. The same random seed was used each time to ensure a fair comparison between the optimizers. Overall, AGNES remained more stable and while other versions of NAG occasionally achieved a higher classification accuracy in certain epochs, a moving time average of AGNES outperformed every version of NAG (results not plotted due to space constraints, but will be included in the revised manuscript). Pdf: /pdf/82cb408fb92abedbf81fc8a50b483643af51427e.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper introduces ``Accelerated Gradient Descent with Noisy Estimators" (AGNES), a variant of Nesterov's accelerated gradient descent (NAG), and proves that AGNES achieves an accelerated convergence rate regardless of the noise level relative to the gradient in both convex and strongly convex cases. Additionally, experimental results validate the effectiveness of AGNES. Strengths: It is impressive that a simple modification of NAG results in a robust theoretical convergence in stochastic settings. The theoretical analysis, including continuous analysis, seems consistent, and the experimental studies are detailed, particularly in the context of deep learning. Weaknesses: I wonder if AGNES is truly novel. In lines 34 to 36, the authors state, ``In this work, we demonstrate that it is possible to achieve the same theoretical guarantees as Vaswani et al. [2019] with a simpler scheme, which can be considered as a reparametrized version of Liu and Belkin [2018]'s Momentum-Added Stochastic Solver (MaSS) method''. Does this imply that AGNES is essentially the same as MaSS or other previously suggested algorithms? Please clarify this point. Technical Quality: 3 Clarity: 3 Questions for Authors: Are Theorems 1 and 2 new theoretical results or just induced by prior works? I recommend citing `Continuous-Time Analysis of AGM via Conservation Laws in Dilated Coordinate Systems. J. J. Suh, G. Roh, and E. K. Ryu' as a reference for the continuous-time analysis of NAG-type algorithms. Could the convergence results in Theorems 3 and 4 extend to the expectation of the gradient norm? The fifth line of the caption in Figure 6 needs spacing. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful feedback and comments. >Does this imply that AGNES is essentially the same as MaSS or other previously suggested algorithms? Please clarify this point. AGNES is equivalent to MaSS after a reparametrization (shown in Appendix B.2). We derived the algorithm from NAG independently in a way which made the proofs geometrically transparent and allowed us to draw a clear parallel between Theorems 1 and 2 for NAG and Theorems 3 and 4 for AGNES. After the analysis was completed, we realized the equivalence, but decided to work with the version which, in our opinion, allows for a simpler geometric interpretation and more transparent proofs. The results we obtain for AGNES had not been obtained for this time stepping scheme in either parametrization. We will update the introduction of the article to more prominently reflect our contributions and avoid any confusion about the novelty of the scheme. >Are Theorems 1 and 2 new theoretical results or just induced by prior works? To the best of our knowledge, Theorems 1 and 2 are new results that we did not find elsewhere in the literature. >I recommend to cite `Continuous-Time Analysis of AGM via Conservation Laws in Dilated Coordinate Systems. J. J. Suh, G. Roh, and E. K. Ryu' as the reference of continuous analysis of NAG type algorithm. We are grateful to the reviewer for making us aware of this reference, which we will be happy to include in our discussion of continuous-time dynamics. >Could the convergence result in Theorem 3 and 4 extend to the expectation of gradient norm? Yes, since L-smooth functions satisfy the inequality $\|\nabla f(x) \|^2 \leq 2L (f(x) - \inf f)$ (see line 616 in Appendix E for context), Theorems 3 and 4 automatically lead to a bound on $\mathbb E[\|\nabla f(x) \|^2]$ in both convex as well as strongly convex cases. A bound on $\mathbb E[\|\nabla f(x) \|]$ can then be obtained using Jensen's inequality, i.e. 
$\mathbb E[\|\nabla f(x) \|]^2 \leq \mathbb E[\|\nabla f(x) \|^2]$. We would be happy to add a remark about this in the final version of the article. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you for your response and I will keep my score.
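Written out, the chain of inequalities in this reply is

$$\mathbb E\big[\|\nabla f(x_n)\|\big]^2 \;\le\; \mathbb E\big[\|\nabla f(x_n)\|^2\big] \;\le\; 2L\,\mathbb E\big[f(x_n) - \inf f\big],$$

so any convergence rate for $\mathbb E[f(x_n) - \inf f]$ transfers to $\mathbb E[\|\nabla f(x_n)\|]$ at the square-root rate.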
Distributed-Order Fractional Graph Operating Network
Accept (spotlight)
Summary: This paper introduces a continuous Graph Neural Network (GNN) framework, DRAGON, that leverages distributed-order fractional calculus. Unlike traditional continuous GNNs that utilize integer-order or single fractional-order differential equations, DRAGON employs a learnable probability distribution over a range of real numbers for the derivative orders. This approach allows for a flexible superposition of multiple derivative orders, enabling the model to capture complex graph feature updating dynamics. The authors also provide a non-Markovian graph random walk interpretation under a specific anomalous diffusion setting. The authors conduct extensive experiments across various graph learning tasks and achieve competitive results. Strengths: The use of distributed-order fractional calculus in GNNs is novel to me, and expands the capabilities of continuous GNN models. They extend previous dynamical systems to operate under the DRAGON framework and generalize the scenarios to involve distributed-order fractional derivatives. The authors conduct experiments on different tasks and plenty of datasets and conduct sensitivity analysis to better evaluate their proposed framework. Weaknesses: 1. Can you provide more visualizations and explanations for the non-Markovian graph random walk interpretation? 2. Lack of code for reviewing and reproducibility. 3. The paper does not thoroughly address the scalability of the proposed framework in real-world applications with very large graphs. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How does the computational complexity and memory usage of DRAGON compare to other continuous GNN models in practice, especially for very large graphs? I have seen the time complexity comparison on Cora, but are there more experiments on a larger dataset since Cora is small. 2. Can the authors provide their code in order for better reviewing? 3. In Table 4, the performance of D-CDE is comparable to F-CDE; the improvements are not obvious. 4.
Can the authors provide more explanations for the intrinsic property 'its ability to capture flexible memory effects' mentioned in the paper? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors discuss limitations in Appendix I, such as the limitation to continuous GNNs. However, it would be better for the authors to include this discussion in the main text. Besides, the scalability and computational efficiency of the proposed framework in large-scale applications may need further discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weakness 1 & Question 4: Explanation of non-Markovian graph random walk with flexible memory **Response:** Thank you for your valuable comments and suggestions. **1. Further explanation for non-Markovian graph random walk with flexible memory:** Lines 211-223 of the manuscript detail the dynamics of the random walk. For enhanced clarity, here we include the corresponding transition probability $\mathbb{P}$ representation for the non-Markovian random walker at time $t$, which explicitly accounts for node positions throughout the entire path history $(\ldots, \mathbf{R}(t-n \Delta \tau), \ldots, \mathbf{R}(t-\Delta \tau))$, as shown in Eq.(S4). Here, $\mathbf{R}\left(t\right)$ represents the walker's position on the graph nodes $\\{\mathbf{x}_j\\}\_{j=1}^{|\mathcal{V}|}$ at time $t$. This model ensures that all historical states influence transitions, emphasizing the model's non-Markovian nature. We consider a random walker navigating over graph $\mathcal{G}$ with an infinitesimal interval of time $\Delta \tau>0$. We assume that there is no self-loop in the graph topology. The transition probability of the random walker is characterized as follows: \begin{align} \begin{aligned} & \mathbb{P}\left(\mathbf{R}(t)=\mathbf{x}\_{j_t} \mid \ldots, \mathbf{R}(t-n \Delta \tau)=\mathbf{x}\_{j_{t-n \Delta \tau}}, \ldots, \mathbf{R}(t-\Delta \tau)=\mathbf{x}\_{j_{t-\Delta \tau}}\right) \\\\ & = \begin{cases} \left(1- K \right)\psi_\alpha(n) & \text{if revisiting historical positions $\mathbf{R}(t-n\Delta \tau)$ with $j_{t}=j_{t-n\Delta \tau}$, i.e., the walker's wait time is $n\Delta \tau$ and stays at the same node,} \\\\ \left(K\frac{W_{j_{t-\Delta \tau} j_{t}}}{d_{j_{t-\Delta \tau}}}\right)\psi_\alpha(n) & \text{if jumping from historical positions $j_{t-n\Delta \tau}$ to $j_{t}$, i.e., the walker's wait time is $n\Delta \tau$ and jumps to $j_{t-n\Delta \tau}$'s neighbour $j_{t}$}. 
\end{cases} \quad \quad \quad \quad \quad \text{(S4)} \end{aligned} \end{align} where $K\coloneqq(\Delta \tau)^\alpha d_\alpha|\Gamma(-\alpha)|$ is a normalization coefficient, $j_{t-n\Delta \tau}$ is the node index visited at time $t-n\Delta \tau$, and $\psi_\alpha(n)$ is the probability that the walker's waiting time is $n\Delta\tau$. For a specific $\alpha$, the waiting time $\psi_\alpha(n)$ follows a power-law distribution $\propto n^{-\left(\alpha+1\right)}$. Additionally, our distributed-order fractional operator $\int D^\alpha \mathbf{X}(t) \mathrm{d} \mu(\alpha)$ acts as a flexible superposition of the dynamics driven by individual fractional-order operators $D^\alpha$. This approach allows for nuanced dynamics that adapt to diverse waiting times. Theorem 2 in our DRAGON framework demonstrates its capability to approximate any waiting time distribution $f(n)$ for graph-based random walkers, thereby providing versatility in modeling feature updating dynamics with varied memory incorporation levels. **2. Visualization of the random walk:** We kindly refer the reviewer to the attached one-page PDF for the visualization. ## Weakness 2 & Question 2: Missing implementation code (already included in our initial submission) **Response:** We would like to clarify that the implementation code has already been included in "Supplementary Material" of our initial submission. ## Weakness 3 & Question 1: Scalability of DRAGON **Response:** Thank you for your comments. We would like to clarify that our manuscript includes experimental results on the ogbn-products dataset, which is a large-scale graph with 2,449,029 nodes and 61,859,140 edges, as presented in **Table 17** of our manuscript. Regarding the scalability of our DRAGON framework, we introduce two numerical solvers, detailed in Eq.(32) and (36). 
For further details on error analysis and complexity, we kindly refer the reviewer to **our Response to Weaknesses 2 for Reviewer uHnh** and the **model complexity** discussion at the top of the **global response section**. Additionally, in the rebuttal, we have conducted a computational complexity comparison for the ogbn-arxiv dataset in **Table R1**. These results demonstrate that while our framework slightly increases computational costs compared to baseline continuous GNN models, it remains feasible for applications on large graph datasets. ## Question 3: Performance of D-CDE **Response:** Regarding classification performance, we acknowledge in Table 4 that our D-CDE marginally outperforms F-CDE. However, it is crucial to emphasize a key advantage: in the baseline FROND framework, the fractional derivative order $\alpha$ is a hyperparameter that requires tuning for each dataset. This tuning is effort-intensive and must be repeated to identify the optimal $\alpha$ for each new dataset. As shown by our results in Fig. 1 of the manuscript, performance significantly deteriorates with non-optimal $\alpha$ values, highlighting FROND's sensitivity to this parameter. In contrast, our DRAGON framework utilizes a set of fractional orders $\alpha_j$ and learns their weights $w_j$ automatically, removing the need for manual tuning of $\alpha$. This method provides a more practical and robust solution compared to FROND, especially in scenarios where manual hyperparameter tuning is impractical or costly. --- Rebuttal Comment 1.1: Comment: Thank you for your code, responses and additional experiments. I have raised the contribution score from 2 to 3. --- Reply to Comment 1.1.1: Comment: Thank you for increasing the contribution score. We appreciate this positive change! However, for clarification, we would like to note that in typical NeurIPS review processes, reviewers often adjust the overall rating rather than the contribution score alone.
--- Rebuttal Comment 1.2: Comment: Dear Reviewer smsh, May we kindly ask if you would consider revising the overall rating based on our responses? Sincerely, The Authors
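The waiting-time mechanism quoted in this thread, $\psi_\alpha(n) \propto n^{-(\alpha+1)}$, can be illustrated with a toy continuous-time random walk on a small graph. This is a sketch only: the jump probability and the truncation of the power law are our simplifications, and the paper's normalization constant $K$ is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def powerlaw_wait(alpha, n_max=1000):
    """Sample a discrete waiting time with psi_alpha(n) ~ n^{-(alpha+1)},
    truncated at n_max for simulation purposes."""
    n = np.arange(1, n_max + 1)
    p = n ** -(alpha + 1.0)
    return int(rng.choice(n, p=p / p.sum()))

def ctrw(adj, start, horizon, alpha, jump_prob=0.9):
    """Toy non-Markovian walker: wait a heavy-tailed time, then with
    probability jump_prob move to a uniformly random neighbour."""
    pos, t, path = start, 0, [(0, start)]
    while t < horizon:
        t += powerlaw_wait(alpha)              # heavy-tailed waiting time
        if rng.random() < jump_prob:
            neighbours = np.flatnonzero(adj[pos])
            pos = int(rng.choice(neighbours))
        path.append((t, pos))
    return path

# 4-cycle graph; smaller alpha means a heavier tail and longer waits
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
trajectory = ctrw(A, start=0, horizon=100, alpha=0.7)
```

A single fractional order $\alpha$ fixes one such power-law tail; the distributed-order operator in the rebuttal superposes many of them, which is what Theorem 2's approximation of arbitrary waiting-time distributions formalizes.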
Summary: The paper proposes a novel framework called DRAGON, which uses a learnable probability distribution over a range of real numbers for the fractional order in graph dynamics, generalizing previous continuous GNN models. The paper provides a non-Markovian graph random walk interpretation for the DRAGON framework, assuming the feature updating dynamics adheres to a diffusion principle. It also proves that the DRAGON framework can approximate any waiting time distribution for graph random walks. The paper integrates DRAGON into several existing continuous GNN models and conducts experiments on various graph benchmarks. The results show that the DRAGON framework outperforms other methods on long-range graph datasets and improves the performance of continuous backbones on homophilic and heterophilic datasets. Strengths: 1. The paper introduces the DRAGON framework, which incorporates distributed-order fractional calculus into continuous GNNs, a novel approach that goes beyond the traditional integer-order or single fractional-order differential equations used in previous GNN models. The use of a learnable measure over a range of real numbers for the fractional order is a unique feature of DRAGON, which allows for a flexible superposition of multiple derivative orders and captures complex graph feature updating dynamics beyond the reach of conventional models. 2. The paper is clearly written and well-structured. The paper provides a non-Markovian graph random walk interpretation for DRAGON, which is a new perspective in understanding the dynamics of graph neural networks. The experimental results are presented in a clear and organized way, with tables and figures that help readers easily understand the performance of the DRAGON framework compared to other methods. Weaknesses: 1. 
The DRAGON framework involves distributed-order fractional calculus and a learnable measure, which may increase the complexity of the model and make it more difficult to understand and implement. This complexity could potentially limit its adoption in practical applications. 2. The paper mainly focuses on a few datasets for evaluation, and the experiments could be further expanded to include more diverse and challenging datasets. Technical Quality: 4 Clarity: 3 Questions for Authors: see the above weaknesses Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: the author has adequately addressed the limitations in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weakness 1: Implementation and complexity details **Response:** Thank you for your insightful comments. We are pleased to provide additional explanation regarding the implementation and complexity of our framework. Solving the distributed-order FDE Eq.(10) consists of two steps: - **Step 1:** Discretizing the distributed-order derivative by classical quadrature rule. For example, suppose $w(\alpha)=\mu'(\alpha)$, applying composite Trapezoid rule [2-3] gives \begin{align*} \displaystyle\int_a^b D^\alpha X(t)d\mu(\alpha) = \frac{\Delta \alpha}{2} \left[w(\alpha_0) D^{\alpha_0}X(t) + 2\sum_{j=1}^{n-1} w(\alpha_j)D^{\alpha_j} X(t) + w(\alpha_n)D^{\alpha_n}X(t)\right]+ O((\Delta\alpha)^2), \quad \quad \quad \quad \quad \text{(S1)} \end{align*} where $\Delta\alpha=(b-a)/n$ and $\alpha_j=a+j\Delta\alpha$. After omitting small terms, we get the multi-term fractional differential equation Eq.(14). - **Step 2:** Solving the multi-term fractional differential equation Eq.(14) using fractional Adams–Bashforth–Moulton method Eq.(28) or Grünwald-Letnikov method Eq.(36). For additional details on **error analysis**, we kindly direct the reviewer to **our Response to Weaknesses 2 for Reviewer uHnh**. For the **model complexity**, please refer to the top **global response section**. We hope this explanation alleviates concerns regarding the complexity and illustrates the DRAGON framework's potential for broad applicability. ## Weakness 2: More diverse and challenging datasets **Response:** Thank you very much for your thoughtful suggestion. We value the chance to further elucidate the aims and outcomes of our study. Our research demonstrates how the DRAGON framework can enhance various continuous GNN models by integrating distributed-order fractional derivatives, thus boosting performance. Our experimental setup follows the methods outlined in the continuous GNNs literature. 
Additionally, in the manuscript, we have broadened our tests to include the Long Range Graph Benchmark, which features datasets such as Chemistry, to underscore the unique benefits of our DRAGON. The broad applicability of the DRAGON framework is notable, with potential uses in fields such as traffic forecasting [1] and learning system dynamics [2]. In the rebuttal, we have adapted integer-order differential equations in [1] to distributed-order fractional derivatives, expanding our framework's reach. Specifically, we have applied the DRAGON framework to the STG-NCDE model in preliminary time-series traffic forecasting tests, as demonstrated in Table R4. This preliminary result shows our framework's adaptability and effectiveness even without extensive parameter tuning. [1] Choi J, et al. Graph neural controlled differential equations for traffic forecasting. AAAI 2022 [2] Huang Z, et al. Coupled graph ODE for learning interacting system dynamics. KDD 2021. **Table R4: Forecasting error(%) for traffic time-series data** | | PeMSD4 | PeMSD4 | PeMSD4 | PeMSD7 | PeMSD7 | PeMSD7 | |--------------|------------|-------------|-------------|------------|-------------|-------------| | Model | MAE | RMSE | MAPE | MAE | RMSE | MAPE | | STGODE | 20.84 | 32.82 | 13.77 | 22.59 | 37.54 | 10.14 | | STG-NCDE | 19.21 | 31.09 | 12.76 | 20.53 | 33.84 | 8.80 | | D-STG-NCDE | 19.05 | 31.03 | 12.49 | 20.26 | 33.29 | 8.39 | --- Rebuttal 2: Comment: Dear Reviewer mRZs, We sincerely appreciate the time and expertise you have dedicated to reviewing our paper. We deeply value your support. May we kindly ask for your confirmation on whether our rebuttal has adequately addressed your concerns? Your feedback is crucial to us. Thank you once again for your invaluable contributions and for sharing your expertise. With sincere gratitude, The Authors
Summary: This paper presents a new type of continuous GNN that extends and unifies many continuous GNN variants. The paper mainly generalizes distributed-order fractional derivatives to continuous GNN dynamics, so that it now supports a mixture of fractional derivatives within a range of continuous orders. The authors also present many explanations for the proposed generalization of continuous GNNs, with theoretical support and connections to non-Markovian random walks. Extensive experiments that cover node-level and graph-level tasks are conducted, showing consistent improvement over existing baselines. Strengths: 1. The presented framework that relies on distributed-order fractional derivatives generalizes many existing methods in the continuous GNN area, providing good theoretical support and insight for future research. 2. While some terms and math are not familiar to me, the authors present the framework and main designs well, making them easy to follow. The motivations and theorem conclusions are easy to understand. 3. Extensive experiments are conducted, and the performance improvement is notable and consistent. Weaknesses: 1. While the authors have conducted extensive experiments covering various domains, the hyperparameter selection strategy is not mentioned. Given that the framework introduces many additional hyperparameters, the authors should clearly state how they choose all hyperparameters, and whether the choice is based solely on the validation dataset. 2. The authors mention how to solve DRAGON, which contains the distributed-order fractional differential equation. It seems that the final computation equation is just Equation 36 in the appendix. As the equation contains approximations, can the authors clearly state the approximation error relative to the true solution? Equation 36 appears to be a simple iteration-based equation, and I am not sure whether solving the distributed-order differential equation is that simple. 3. The complexity analysis is not very detailed. 
C and E are somewhat vague. Technical Quality: 3 Clarity: 3 Questions for Authors: Beyond the questions listed above, can the authors provide a more extensive computational complexity report? Tables 8 and 9 only study Cora; I would like to see the report on large datasets (large graphs and a large number of graphs). Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have clearly stated the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weakness 1: Hyperparameters chosen **Response:** We appreciate the reviewer's comments. Like GRAND, CDE, and FROND, we employ a grid search on the validation dataset to optimize common hyperparameters such as hidden dimensions, learning rate, weight decay, and dropout rate. Some details are in Table 18 in our manuscript. Here we also clarify the parameters $\alpha_j$ in our implementation, distinct from other continuous GNNs. As detailed in Sec. 4.1, we restrict $\alpha_j$ to $[0,1]$, with values evenly spaced. We consistently apply the same 10 levels of $\alpha_j$, from 0.1 to 1.0, across all datasets, each associated with learnable weight $w_j$. Our approach, therefore, avoids the need to extensively fine-tune the derivative order $\alpha$, unlike the FROND model. Furthermore, in Sec. G6, we demonstrate that this discretization is not sensitive to the number of $\alpha_j$ levels used. ## Weakness 2: Numerical solver and the approximation error **Response:** Solving the distributed-order FDE Eq.(10) consists of two steps: - **Step 1:** Discretizing the distributed-order derivative by classical quadrature rule. For example, suppose $w(\alpha)=\mu'(\alpha)$, applying composite Trapezoid rule [2-3] gives \begin{align*} \displaystyle\int_a^b D^\alpha X(t)d\mu(\alpha) = \frac{\Delta \alpha}{2} \left[w(\alpha_0) D^{\alpha_0}X(t) + 2\sum_{j=1}^{n-1} w(\alpha_j)D^{\alpha_j} X(t) + w(\alpha_n)D^{\alpha_n}X(t)\right]+ O((\Delta\alpha)^2), \quad \quad \quad \quad \quad \text{(S1)} \end{align*} where $\Delta\alpha=(b-a)/n$ and $\alpha_j=a+j\Delta\alpha$. After omitting small terms, we get the multi-term fractional differential equation Eq.(14). - **Step 2:** Solving the multi-term fractional differential equation Eq.(14) using fractional Adams–Bashforth–Moulton method Eq.(28) or Grünwald-Letnikov method Eq.(36). 
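The Step 1 discretization can be sketched in a few lines. The snippet below is an illustration only (hypothetical helper names, not our model code): given a density $w(\alpha)$, the composite trapezoid rule of (S1) produces the orders $\alpha_j$ and the coefficients of the multi-term equation Eq.(14).

```python
def trapezoid_multiterm(w, a, b, n):
    """Discretize int_a^b w(alpha) D^alpha X(t) dalpha by the composite
    trapezoid rule (S1) into a multi-term sum  sum_j coef_j * D^{alpha_j} X(t),
    with alpha_j = a + j * dalpha and an O((dalpha)^2) quadrature error."""
    dalpha = (b - a) / n
    alphas = [a + j * dalpha for j in range(n + 1)]
    # Endpoints carry weight dalpha/2, interior nodes weight dalpha, as in (S1).
    coefs = [dalpha * (0.5 if j in (0, n) else 1.0) * w(al)
             for j, al in enumerate(alphas)]
    return alphas, coefs

# Example: a uniform density w(alpha) = 1 on [0, 1] with n = 10 yields 11
# fractional orders whose coefficients sum to the exact integral, 1.
alphas, coefs = trapezoid_multiterm(lambda alpha: 1.0, 0.0, 1.0, 10)
```

The returned pairs $(\alpha_j, \text{coef}_j)$ are exactly the ingredients of the multi-term fractional differential equation handed to Step 2.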
__Therefore, the approximation error to the true solution consists of numerical quadrature error in Step 1 and numerical solver error in Step 2.__ The former one is clear from (S1). For the latter, we address the main idea by considering the general $n^{th}$ order multi-term fractional differential equation: $\sum_{j=1}^n w_j D^{\alpha_j} y(t) = f(t) $ with initial condition $y(0) = y_0$. We have > **1).** For fractional Adams–Bashforth–Moulton method Eq.(28), the multi-term fractional differential equations are equivalently transformed into a system of single-term equations, which is then addressed using the solver for single terms as detailed in Eq.(28). The approximation error for this solution is quantified as follows [1]: \begin{align*} \max_{j=0,1,\ldots,N} \left| y(t_j) - y_j \right| = O(h^{1+\min\{\alpha_j\}}), \quad \quad \quad \quad \quad \quad \quad \text{(S2)} \end{align*} where $y_j$ denotes the value of the solution at time $t_j$ as computed by the numerical method, and $y(t_j)$ represents the exact solution at time $t_j$, $h$ is the step size. > **2).** For Grünwald-Letnikov method Eq.(36), we apply Grünwald-Letnikov approximation [4] for each fractional derivative $D^{\alpha_j} y(t)$ that is given by: \begin{align*} D^{\alpha_j} y(t_i) = \frac{1}{h^{\alpha_j}} \sum_{k=0}^{i} (-1)^k \binom{\alpha_j}{k} [y(t_{i-k}) - y_0] +O(h). \end{align*} Using techniques in [5], we get the approximation error below \begin{align*} \max_{j=0,1,\ldots,N} \left| y(t_j) - y_j \right| = O(h). \quad \quad \quad \quad \quad \text{(S3)} \end{align*} The total error $E_{\text{total}}$ is actually a combination of approximation error in Step 1 and Step 2. [1].Diethelm K, et al. Detailed error analysis for a fractional Adams method. [2].Gao G, et al. Two alternating direction implicit difference schemes for two-dimensional distributed-order fractional diffusion equations. [3].Quarteroni A, et al. Numerical mathematics. [4].Podlubny I. 
Fractional differential equations. Elsevier, 1998. [5].Jin B, et al. Correction of high-order BDF convolution quadrature for fractional evolution equations. ## Weakness 3: Complexity analysis details **Response:** Thank you for your feedback. We clarify here the notation and detail the computational complexity for our DRAGON using the above two numerical solvers. > The term $E = \frac{T}{h}$ quantifies the discretization (iteration) steps necessary for the integration process. Here, $T$ represents the integration time, $h$ is the step size, and $E$ denotes the total number of iterations required. > The term $C$ denotes the computational complexity of function $\mathcal{F}$. For instance, setting $\mathcal{F}$ to the GRAND model for evaluating the soft adjacency matrix results in $C = O(|\mathcal{E}| d)$, where $|\mathcal{E}|$ represents the edge set size and $d$ the dimensionality of the features (cf. [6]). Alternatively, using the GREAD model results in $C=O((|\mathcal{E}| + |\mathcal{E}_2|)d + |\mathcal{E}| d\_{\text{max}})$, where $|\mathcal{E}\_2|$ counts the two-hop edges, and $d\_{\text{max}}$ is the maximum degree among nodes (cf. [7]). For a detailed analysis of the **model complexity**, please refer to the top **global response section.** [6] Chamberlain, Ben, et al. "Grand: Graph neural diffusion." ICML, 2021. [7] Choi, Jeongwhan, et al. "Gread: Graph neural reaction-diffusion networks." ICML 2023. ## Question 1: More computational complexity report We follow the suggestions and report the computational complexity on large datasets including Ogbn-arxiv, as well as on a large number of graphs within the Peptides-func and Peptides-struct datasets. The results of this analysis are detailed in **Tables R1, R2, and R3** in the top **global response section**. These results demonstrate that while our framework slightly increases computational costs compared to baseline continuous GNN models, it remains feasible for applications on large graph datasets. 
--- Rebuttal Comment 1.1: Comment: Thank you for this detailed response and the additional measurements. Your algorithm seems to be well designed, with both theoretical support and limited computational overhead. Based on (S2) and (S3), can you give some insight into the measurement of the real error in experiments? For example, how does the error change and affect performance with respect to $h$ in real experiments? --- Reply to Comment 1.1.1: Comment: Dear Reviewer uHnh, Thank you for your insightful questions, which have contributed to the enhancement of our paper. Could you please let us know if our responses have addressed your concerns adequately? We appreciate your guidance and look forward to your feedback. With sincere gratitude, The Authors --- Reply to Comment 1.1.2: Comment: Dear Reviewer uHnh, Thank you for your insightful questions, which have contributed to improving our manuscript. As the rebuttal period is drawing to a close, we kindly request your feedback on whether our responses have sufficiently addressed your concerns. We value your guidance and look forward to your further comments. Sincerely, The Authors --- Rebuttal 2: Comment: Thank you for the new feedback! We would first like to clarify potential ambiguities surrounding the term "error" by distinguishing between "performance error" and "numerical error": > Performance error refers to the efficacy of GNNs in tasks such as node classification, where the focus is on the model's ability to correctly predict outcomes. > Numerical error, on the other hand, concerns the accuracy of numerical solutions to fractional differential equations compared to the "true" solution trajectory, which is a key issue in computational mathematics. The two errors are related, but not fully equivalent to each other. The approximation error in (S2) and (S3) refers specifically to the "numerical error." 
For further clarity, we then conducted ablation studies on the Cora dataset to observe how errors change with respect to the step size $h$. > (1). In the first ablation study, we fixed all other parameters and varied the step size $h$ used in each experiment. We trained the model using different step sizes $h$ during the training phase and tested it with the corresponding step sizes during the test phase. The results are presented in **Table R5**. According to (S2) and (S3), the numerical approximation error should be large when $h$ is large. However, the results from Table R5 indicate that while classification performance deteriorates with larger step size $h$, it does not degrade to an unreasonable level and still maintains adequate classification performance. This occurs because both the training and testing phases follow the same discretization procedure, and although both are far from the true FDE solution, the loss function is designed to minimize the final classification error, resulting in satisfactory performance. > (2). In the second ablation study, we maintained fixed parameters as in the first study but changed our approach by training the model exclusively with a small step size $h=0.1$. During the testing phase, we varied the step size $h$. The results are presented in **Table R6**. According to (S2) and (S3), the numerical approximation error should be minimal when $h$ is small, and the solution with $h=0.1$ can be presumed close to the "true" solution trajectory of the FDE. We noted that when $h < 1$ and remains relatively small, the model still achieves good classification performance. This occurs because the "numerical error" is still minimal, keeping the numerical solution close to the "true" solution trajectory. However, as $h$ increases beyond this range, the "numerical error" grows significantly, diverging from both the "true" solution trajectory and the approximate solution with $h=0.1$. 
Consequently, classification performance deteriorates substantially. **Table R5: Step size and classification accuracy(\%) (training and test)** | Step size | 5 | 2 | 1 | 0.5 | 0.2 | 0.1 | |-------------|-------|-------|-------|-------|-------|-------| | Solver Eq.(28) | 80.24 | 82.98 | 83.11 | 83.18 | 83.15 | 83.11 | | Solver Eq.(36) | 80.03 | 82.33 | 82.57 | 82.91 | 83.11 | 83.19 | **Table R6: Step size and classification accuracy(\%) (test phase)** | Step size | 5 | 2 | 1 | 0.5 | 0.2 | 0.1 | |------------|-------|-------|-------|-------|-------|-------| | Solver Eq.(28) | 35.23 | 65.48 | 74.42 | 80.91 | 81.02 | 83.11 | | Solver Eq.(36) | 39.80 | 61.62 | 73.91 | 76.14 | 80.41 | 83.19 |
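These step-size effects can be reproduced in miniature on a scalar test equation. Below is a sketch, written purely for illustration of this discussion (it is not our model code), of the explicit Grünwald-Letnikov scheme for a single-term equation $D^\alpha y = f(y)$, using the recursively computed coefficients $c_k = (-1)^k \binom{\alpha}{k}$:

```python
import math

def gl_coefficients(alpha, n):
    """Coefficients c_k = (-1)^k * binom(alpha, k), via the standard recursion
    c_0 = 1, c_k = c_{k-1} * (k - 1 - alpha) / k."""
    c = [1.0]
    for k in range(1, n + 1):
        c.append(c[-1] * (k - 1 - alpha) / k)
    return c

def solve_gl(alpha, f, y0, T, h):
    """Explicit Grunwald-Letnikov scheme for D^alpha y = f(y), y(0) = y0:
    y_i = y0 + h^alpha * f(y_{i-1}) - sum_{k=1}^{i} c_k * (y_{i-k} - y0).
    The memory sum grows with i, so the cost is O(N^2) for N = T/h steps."""
    n = int(round(T / h))
    c = gl_coefficients(alpha, n)
    ys = [y0]
    for i in range(1, n + 1):
        memory = sum(c[k] * (ys[i - k] - y0) for k in range(1, i + 1))
        ys.append(y0 + h ** alpha * f(ys[i - 1]) - memory)
    return ys

# Sanity check: for alpha = 1 the scheme reduces to forward Euler, so solving
# D^1 y = -y with a small h tracks exp(-t), while a coarse h gives a larger error.
fine = solve_gl(1.0, lambda y: -y, 1.0, 1.0, 0.001)
coarse = solve_gl(1.0, lambda y: -y, 1.0, 1.0, 0.1)
```

For $\alpha = 1$ the recursion gives $c_0 = 1$, $c_1 = -1$, and $c_k = 0$ for $k \ge 2$, so the update collapses to $y_i = y_{i-1} + h f(y_{i-1})$; the $O(h)$ bound in (S3) then shows up as the familiar first-order accuracy loss at coarse step sizes, mirroring the trend in Tables R5 and R6.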
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their insightful comments and valuable suggestions. We greatly appreciate the feedback and have thoughtfully addressed each point in our detailed responses. In this "global" response, we provide a detailed analysis of the computational complexity of the DRAGON framework as suggested by reviewers uHnh, mRZs, and smsh. Additionally, we include further numerical results that substantiate our responses. Moreover, in the attached PDF file, we present visualizations of the non-Markovian graph random walk, as recommended by Reviewer smsh. **Model Complexity** In our manuscript, the time complexity of the DRAGON framework is discussed in Section 4.4. Here, we provide a more detailed analysis concerning the two numerical solvers employed within our study. For the Adams–Bashforth–Moulton method (Equation 28), we compute $\mathbf{X}\_{k+1} = \mathbf{X}\_0 + \frac{1}{\Gamma(\alpha)} \sum_{j=0}^{k} b_{j, k+1} \mathcal{F}(\mathbf{W}, \mathbf{X}\_j)$. This process necessitates repeated computation of $\mathcal{F}(\mathbf{W}, \mathbf{X}\_j)$ at each iteration. Direct computation leads to a complexity of $\mathcal{O}(C E^2)$. If we save the intermediate function evaluation values $\\{\mathcal{F}(\mathbf{W}, \mathbf{X}\_j)\\}\_j$, the total computational complexity over the entire process can be expressed as $\sum_{k=0}^E (C + O(k))$, where $O(k)$ represents the computational overhead of summing and weighting the $k$ terms at each step. We adopt this strategy in our code implementation. If the cost of weighted summing is minimal, the complexity reduces to $O(E C)$. For the Grünwald-Letnikov method (Equation 36), the computational complexity is $O(EC)$ since it requires no repeated computation of $\mathcal{F}(\mathbf{W}, \mathbf{X}_j)$, with only one $\mathcal{F}$ computation per iteration. > The term $E = \frac{T}{h}$ quantifies the discretization (iteration) steps necessary for the integration process. 
Here, $T$ represents the integration time, $h$ is the step size, and $E$ denotes the total number of iterations required. > The term $C$ denotes the computational complexity of function $\mathcal{F}$. For instance, setting $\mathcal{F}$ to the GRAND model for evaluating the soft adjacency matrix results in $C = |\mathcal{E}| d$, where $|\mathcal{E}|$ represents the edge set size and $d$ the dimensionality of the features (cf. [6]). Alternatively, using the GREAD model results in $C=O((|\mathcal{E}| + |\mathcal{E}_2|)d + |\mathcal{E}| d\_{\text{max}})$, where $|\mathcal{E}\_2|$ counts the two-hop edges, and $d\_{\text{max}}$ is the maximum degree among nodes (cf. [7]). This refined complexity analysis will be included in the revised paper version. [6] Chamberlain, Ben, et al. "Grand: Graph neural diffusion." ICML, 2021. [7] Choi, Jeongwhan, et al. "Gread: Graph neural reaction-diffusion networks." ICML 2023. **Table R1:** Computation time of models on the Ogbn-arxiv dataset | Model | D-GRAND-l | D-GRAND-nl | D-GraphCON-l | D-GraphCON-nl | |---------------|-----------|------------|--------------|---------------| | Inf. Time(s) | 0.083 | 0.139 | 0.141 | 0.196 | | Train. Time(s)| 0.33 | 0.67 | 0.57 | 0.92 | | Model | F-GRAND-l | F-GRAND-nl | F-GraphCON-l | F-GraphCON-nl | |---------------|-----------|------------|--------------|---------------| | Inf. Time(s) | 0.047 | 0.108 | 0.062 | 0.123 | | Train. Time(s)| 0.14 | 0.53 | 0.50 | 0.59 | | Model | GRAND-l | GRAND-nl | GraphCON-l | GraphCON-nl | |---------------|-----------|------------|--------------|---------------| | Inf. Time(s) | 0.038 | 0.099 | 0.044 | 0.105 | | Train. Time(s)| 0.10 | 0.51 | 0.15 | 0.55 | *** **Table R2:** Computation time of models on the Peptides-func dataset | Model | D-GRAND-l | F-GRAND-l | GRAND-l | |---------------|-----------|-----------|---------| | Inf. Time(s) | 0.324 | 0.275 | 0.259 | | Train. 
Time(s)| 2.853 | 2.298 | 2.034 | *** **Table R3:** Computation time of models on the Peptides-struct dataset | Model | D-GRAND-l | F-GRAND-l | GRAND-l | |---------------|-----------|-----------|---------| | Inf. Time(s) | 0.338 | 0.269 | 0.253 | | Train. Time(s)| 2.923 | 2.258 | 2.008 | Pdf: /pdf/e6b2b0e8222b8a83bf145b42041030b71f43478a.pdf
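As a concrete illustration of the caching strategy described in the model-complexity analysis above, here is a scalar sketch (hypothetical illustration code, not our actual graph implementation) of the fractional Adams–Bashforth predictor of Eq.(28), storing each function evaluation so that step $k$ performs one new function call plus an $O(k)$ weighted sum:

```python
import math

def frac_ab_predictor(alpha, f, y0, T, h):
    """Fractional Adams-Bashforth predictor of Eq.(28) for D^alpha y = f(y):
    y_{k+1} = y0 + (1 / Gamma(alpha)) * sum_{j=0}^{k} b_{j,k+1} * f(y_j),
    with b_{j,k+1} = (h^alpha / alpha) * ((k + 1 - j)^alpha - (k - j)^alpha).
    Each f(y_j) is computed once and cached, so step k costs one new f call
    plus an O(k) weighted sum: sum_{k=0}^{E} (C + O(k)) overall."""
    n = int(round(T / h))
    gamma_alpha = math.gamma(alpha)
    ys = [y0]
    f_cache = [f(y0)]  # cached evaluations, reused by every later step
    for k in range(n):
        acc = 0.0
        for j in range(k + 1):
            b = (h ** alpha / alpha) * ((k + 1 - j) ** alpha - (k - j) ** alpha)
            acc += b * f_cache[j]
        y_next = y0 + acc / gamma_alpha
        ys.append(y_next)
        f_cache.append(f(y_next))  # one new evaluation per step
    return ys

# Sanity check: for alpha = 1 every weight b_{j,k+1} equals h and the scheme
# reduces to the explicit Euler method, so D^1 y = -y tracks exp(-t).
ys = frac_ab_predictor(1.0, lambda y: -y, 1.0, 1.0, 0.001)
```

Without the cache, the inner sum would re-evaluate $f$ at every past point, giving the $\mathcal{O}(C E^2)$ cost noted above.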
NeurIPS_2024_submissions_huggingface
2024
Large Language Models Play StarCraft II: Benchmarks and A Chain of Summarization Approach
Accept (poster)
Summary: This paper introduces TextStarCraft II, a text-based environment to evaluate the strategic decision-making and planning capabilities of large language models (LLMs) in real-time scenarios within StarCraft II (SC2). The study addresses the limitations of traditional Chain of Thought (CoT) methods by proposing the Chain of Summarization (CoS) method, which enhances LLMs’ abilities to process complex information and make strategic decisions efficiently. Key experiments include testing various LLMs against SC2's built-in AI, evaluating commercial models on SC2 knowledge, and conducting human-AI matches to assess performance and strategic adaptability. Strengths: - Chain of Summarization (CoS) Method: The introduction of the CoS method improves LLMs' ability to summarize and process complex game states, leading to more effective decision-making in real-time scenarios. - Impact on AI Research: The study’s findings contribute to the broader field of AI research by providing insights into the capabilities and limitations of LLMs in handling complex, real-time decision-making tasks. Weaknesses: - Human Interaction Assurance: The paper does not clearly explain how LLMs interact with the game. From lines 120-121, it seems that human players might use LLM suggestions to operate the game. If this is the case, how do you ensure that players fully adhere to the LLM's suggestions? - Latency and Real-Time Feedback: In Section 5.3, detailed information on latency is not provided. Can your method provide real-time feedback? - Resource Dependency: The reliance on rule-based scripts for micro-management and the limitation to text-based inputs may restrict the diversity and applicability of AI strategies. Technical Quality: 2 Clarity: 2 Questions for Authors: N/A Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your thorough review and insightful comments. Below, we address your specific concerns and outline the changes we plan to make in the revised version. **Q1**: Human Interaction Assurance We apologize for any confusion caused by our unclear explanation. To clarify, in our TextStarCraft2 environment, the LLM agent interacts directly with the game environment without human intervention. The LLM receives text-based observations of the game state and outputs text-based decisions, which are then automatically translated into game actions by our system. There is no human intermediary executing LLM decisions. Instead, the LLM follows its own reasoning to decide which action to take (such as "Build Cybernetics Core" or "Attack enemy base"). We will revise this section in our paper to provide a more detailed explanation of how LLMs interact with the game environment. Thank you for pointing out the clarity issue. **Q2**: Latency and Real-Time Feedback The latency in our system is primarily determined by the inference speed and model size of the LLM. By leveraging fine-tuned open-source large language models (LLMs), our approach can achieve real-time feedback. Our Chain of Summarization (CoS) method is crucial in enabling this real-time feedback, significantly outperforming traditional Chain of Thought (CoT) methods in terms of responsiveness. For instance, with CoS, even smaller LLMs like the fine-tuned Qwen-2 7B can achieve real-time interaction (100 ms/step) and demonstrate competitive performance against human players. This level of real-time capability is not possible with standard CoT methods, highlighting the effectiveness of our CoS approach in facilitating swift strategic decision-making in complex, dynamic environments. 
Although larger models such as GPT-4, Claude, and Gemini still face challenges in achieving real-time feedback (1s/step) due to their size, complexity, and network transmission factors, our CoS method significantly reduces their response times compared to CoT. This improvement allows for near-real-time performance in many scenarios. We are continuously optimizing our system to further enhance the real-time capabilities of these larger models. Below are the delays observed for each step when different LLMs interact with the CoS method: | Model Type | Model Name | Delay (each step) | |----------------|--------------|-------------------| | Finetuned Open-Source LLM | Qwen2-7b | 98ms | | Finetuned Open-Source LLM | Qwen2-1.8b | 64ms | | Finetuned Open-Source LLM | LLAMA2-7B | 102ms | | Closed-LLM | GPT3.5 | 1.2s | | Closed-LLM | GPT4 | 2.3s | | Closed-LLM | Gemini-PRO | 0.3s | **Q3**:Resource Dependency Our LLM utilizes macro actions such as training units, constructing buildings, and researching technologies—commonly known in the StarCraft II community as "build orders." While rule-based scripts handle some micro-level actions, they do not constrain strategic or tactical variety. In our experiments, LLM agents have successfully executed diverse strategies. For example: - Mass Void Rays: A strategy focusing on producing a large number of powerful air units. - Stalker-Colossus: A balanced army composition combining ground and support units. - Carriers: A high-tech strategy centered around powerful capital ships. These examples demonstrate the LLM's ability to adapt unit compositions and respond to potential threats dynamically, showcasing a wide range of strategic approaches without limiting the AI's ability to innovate. 
Regarding the limitation to text-based inputs, we consider that focusing solely on text allows us to better assess the core capabilities of language models without the need to consider their abilities in processing other modalities like vision or speech. By expressing the strategic decision-making process through text interactions, we can concentrate more on evaluating the language models' strategic reasoning and planning abilities in complex and dynamic environments. On the other hand, we also recognize the potential benefits of incorporating visual information. We are actively exploring the integration of vision-language models (VLMs) into our framework. This multimodal approach could enhance the model's overall ability to process information and make AI decisions, potentially leading to more sophisticated and adaptable strategies. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. We have reviewed your feedback and noted that most of our concerns have been addressed.
Summary: The authors introduce TextStarCraft II, a framework for transferring the state of the Real Time Strategy (RTS) game StarCraft II (SC2) into text form, and Chain of Summarization (CoS), an extension of traditional Chain of Thought (CoT) style prompting for condensing information and accelerating LLM inference. The authors demonstrate their system is capable of defeating gold-level human players in SC2 in real-time matches. Strengths: The authors introduce a novel benchmark environment for LLMs in the form of TextStarCraft II, an environment that allows for a rigorous evaluation of an LLM's ability to do real-time, complex planning. They further introduce a new prompting method, CoS, to allow for information compression and multi-step inference to combat the otherwise prohibitively slow inference speed of LLMs when it comes to an RTS game. The performance of their approach when playing against pre-programmed AIs and human players demonstrates its effectiveness. Weaknesses: A weakness of the authors' approach is that there is a high likelihood that most modern LLMs were trained on text that included comprehensive strategy discussions of SC2 in text form. However, this is mostly mitigated by the authors' investigation of both open and closed-source LLMs in section 5.1. Still, it would be interesting to see whether these results hold in newer RTS games that are likely not to be found in most LLMs' training distribution. A lesser weakness the reviewer would like to suggest to the authors is that it may be worth considering staying away from StarCraft-specific terms in order to reach a wider audience that may be less familiar with the StarCraft series as a whole. Technical Quality: 3 Clarity: 3 Questions for Authors: 1.) When playing against human players, were any restrictions placed on the availability of units or maps? 2.) 
Do the authors believe the results would generalize across the RTS genre and has there been any investigation into this? 3.) To what extent were the macro and micro actions predefined? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors briefly touch on some limitations of their work, however it is unclear whether the topics addressed in the questions were not included due to being outside the scope of the work, or not relevant given some metric or detail the reviewer may have missed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. Below is our response to your inquiry. **Weaknesses** We appreciate your suggestion about StarCraft-specific terminology. In our revised version, we will strive to make our paper more accessible to a broader audience by clarifying or simplifying StarCraft-specific terms where possible. **Q1**: Restrictions in Human vs. LLM Agent Matches In our experiments, we maintained a full-range StarCraft II environment with no restrictions on maps, units, or game versions. We utilized 14 different maps from both the 2022 and 2023 ladder seasons and tested across two game versions (5.0.11 and 5.0.12). Human players engaged with LLM agents using standard input devices (keyboard and mouse) under typical 1v1 game settings, similar to the well-known AlphaStar vs. MaNa show matches. At the same time, to ensure fairness, we made sure that the LLM agent had access to the same game information as human players, for example: 1. Map Visibility: The LLM agent operates under the same fog-of-war constraints as human players. It does not have access to any information that would be hidden from a human player in a standard game, ensuring equal information availability for both sides. 2. Decision Frequency: We calibrated the LLM agent's decision-making frequency to be comparable to that of human players. This means the AI is not making decisions at a superhuman rate, but rather at intervals similar to what a skilled human player might achieve. These limits ensure our AI competes fairly with humans. The focus is on testing strategic thinking and decision-making, not mechanical advantages. **Q2**: Generalization Across the RTS Genre We agree that this is an interesting and valuable observation. Testing LLM capabilities on newer RTS games not included in their training data would indeed provide a fairer assessment and better reflect the models' adaptability to out-of-distribution tasks.
While creating a benchmark framework for newer RTS games is more challenging, we are actively working on extending our framework to more complex, recent RTS environments (such as Storm Gate and Battle ACE). We look forward to sharing these results in future work and welcome your continued interest in this direction. **Q3**: Predefined Macro and Micro Actions 1. We predefine macro actions as follows: - Building Construction: The LLM determines which buildings to construct based on its strategic analysis and current game state. This decision-making process follows established community conventions known as "build orders," which can be found at https://lotv.spawningtool.com/. - Unit Production: The LLM decides which units to produce, demonstrating its understanding of army composition and resource management. - Research and Upgrades: The LLM makes decisions on which technologies to research and upgrades to pursue, aligning with its overall strategy. - High-level Military Strategy: The LLM determines when to scout, defend, or launch attacks based on its assessment of the game situation. 2. Micro Actions and Execution: While the LLM focuses on high-level decision-making, rule-based scripts handle the execution details, for example: - Building Placement: Once the LLM decides to construct a building, rule-based scripts interact with the environment to execute the construction. - Unit Training Facilities: After the LLM decides which units to produce, scripts select the appropriate facilities to carry out these orders. By separating strategic decisions from tactical execution, our framework mirrors real-world scenarios where high-level planning is distinct from operational implementation. **Limitation** Regarding the limitations of our work, we will expand and clarify the following points in our revised version: 1.
Generality: We will discuss how our methods can be applied to other RTS games and mention that we are extending our framework to more complex games such as "Storm Gate" and "Battle ACE" to test the adaptability of LLMs. 2. Decision and Execution Balance: We will further explain the balance between LLM decision-making and rule-based execution, describing the separation of macro decisions (e.g., building selection, unit production) from micro execution (e.g., building placement, unit training) and their effectiveness in complex strategic environments. 3. Resource Limitations: We will describe the impact of resource limitations on system performance and introduce our plans to optimize resource usage, including exploring more efficient computing resources to improve overall performance and real-time decision-making capabilities. Thank you again for your insightful comments and feedback, which are crucial for improving the quality of our research. We look forward to continuing this dialogue and further enhancing the contributions of our work. --- Rebuttal Comment 1.1: Comment: Thank you for your explanations. I'll keep my current score.
Summary: This paper introduces TextStarCraft II, a benchmark designed to assess long-term strategic decision-making in the context of playing StarCraft II. It models the game-playing process through a pure textual representation and also develops a chain-of-summarization pipeline to aid the LLM's decision-making to achieve victory. They evaluate various LLMs under different prompting strategies against the built-in AI mode of StarCraft II. The experimental results indicate that prompting powerful closed-source LLMs (e.g., GPT-4) and fine-tuned open-source LLMs (e.g., QWEN-7B) can perform competitively with the Level 5 (the highest level) built-in AI of StarCraft II. Strengths: - The topic is interesting. - The contribution of proposing a long-term strategic decision-making benchmark for evaluating LLMs is beneficial to the community. - The evaluation is extensive. Weaknesses: - The objective of how LLMs achieve victory in the game is not clearly defined. - The process by which LLMs execute actions and make observations within the textual setting is insufficiently explained. Technical Quality: 3 Clarity: 3 Questions for Authors: The paper introduces a benchmark for evaluating large language models' (LLMs) long-term strategic decision-making capabilities in the context of playing StarCraft II. This benchmark models gameplay in a purely textual environment and includes the design of a chain-of-summarization pipeline to aid LLMs in playing and winning the game. The topic is interesting and beneficial to the LLM research community. However, I have several concerns, primarily about the overall pipeline setting and the mechanics of how LLMs engage with the game. - The objective of optimizing LLMs through prompting or fine-tuning to win the game appears to be inadequately defined. It remains unclear what the specific winning criteria for LLMs are and how these models compete against the built-in AI to meet these criteria.
- The methodology detailing how LLMs execute actions and observe the environment in each round lacks clarity. According to Figure 1, each round involves the LLM making decisions regarding necessary actions and translating multi-frame observations into text. However, it is not explicitly described what types of actions are executed or how multi-frame observations are represented within the purely textual environment. - The experimental results indicate that well-optimized LLMs, with appropriate prompting, can perform competitively with built-in AIs. It would be beneficial to provide deeper insights. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Apologies for the lack of clarity in our methodology description. **Q1**: Standard for Achieving Victory In our TextStarCraft2 environment, the victory conditions mirror those of a standard StarCraft II 1v1 game, consistent with established AI benchmarks such as AlphaStar, ROA-Star, or DI-Star. To secure a victory, the LLM agent must destroy all of the built-in AI's structures. This criterion is a standard setting across various StarCraft II AI competitions, ensuring that our benchmark aligns with common practices in the field. To beat the built-in AI, the LLM must engage in a complex series of long-term strategic decisions. This includes resource gathering, constructing buildings, developing technologies, and producing combat units. The LLM needs to outperform the built-in AI over an average game duration of approximately 20 minutes (7000 steps). This process typically involves intricate resource allocation, tactical maneuvering, and unit-countering strategies, among other sophisticated decisions. In our study, we observed some interesting behaviors from the LLM agent playing a strategy game. The LLM agent controlled one faction (called Protoss) against a built-in computer opponent controlling another faction (called Zerg). Here are two notable scenarios: 1. Predicting and Preparing for Attacks (Figure 3.b): - The LLM agent scouted the enemy base and noticed a large army. - It predicted an incoming attack and decided to build defensive structures. - About a minute later, when the enemy attacked, the LLM agent was well-prepared with both defensive buildings and its own army. 2. Adapting Army Composition (Figure 7): - In the first encounter, the LLM agent's initial army (mostly ground units, Zealots) struggled against the enemy's forces (Roaches and Hydras). - The LLM agent then decided to change its strategy, producing more flying units to counter the enemy's ground-based army.
- This change in tactics allowed the LLM agent to barely survive the first attack wave. - By the time the enemy launched a second attack, the LLM agent had built up enough flying units to defend its territory effectively. **Q2**: Action and Observation Mechanisms **Multi-frame Summarization**: Every K steps, we collect several frames of game-state information, each represented as a textual description. In each round of the game: 1. We represent each frame as the following text information: - Resources (e.g., "Minerals: 500, Vespene Gas: 200") - Buildings (e.g., "2 Gateway, 1 Nexus") - Units (e.g., "15 Zealots, 5 Carriers") - In-process activities (e.g., "Researching Warpgate: 70% complete") - Visible enemy units and structures - Current research status A single-frame summarization step then extracts the important information (like supply and current army composition) from each frame. After that, the multi-frame approach helps capture temporal dynamics and ongoing processes. We then prompt the LLM to summarize these multiple textual representations into a concise, structured format. This process is designed to mimic how human players process and prioritize game information. 2. Summarization Process: The LLM is instructed to summarize the multi-frame information in the following structured manner: 1. Game Overview: A snapshot of key metrics (e.g., game time, worker count, resources, supply). 2. Current Game Stage: An assessment of the game's progression. 3. Our Situation: - Units and Buildings: A summary of our army composition and structures. - Economy: An evaluation of our economic status. - Technology: An overview of our technological advancements. 4. Our Strategy: An interpretation of our current strategic position. 5. Enemy's Strategy: An analysis of the opponent's apparent strategy based on observable information. 6. Key Information: Highlighting crucial elements that require immediate attention or action.
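The multi-frame pipeline just described (frames rendered as text, single-frame extraction, then condensation into a structured prompt) could be sketched roughly as follows; the field names and summary format here are illustrative stand-ins, not the framework's actual schema:

```python
# Hypothetical sketch of the multi-frame summarization step; frame fields and
# the summary wording are our own guesses, not the paper's exact schema.

def frame_to_text(frame):
    """Render one game-state frame as a textual description."""
    return (f"Minerals: {frame['minerals']}, Gas: {frame['gas']}; "
            f"Supply: {frame['supply_used']}/{frame['supply_cap']}; "
            f"Army: {', '.join(f'{n} {u}' for u, n in frame['army'].items())}")

def single_frame_summary(frame):
    """Extract the fields deemed important for decision-making."""
    return {"supply": frame["supply_used"], "army": dict(frame["army"])}

def multi_frame_prompt(frames):
    """Condense K frames into one structured prompt for the LLM."""
    lines = [frame_to_text(f) for f in frames]
    summaries = [single_frame_summary(f) for f in frames]
    supply_trend = summaries[-1]["supply"] - summaries[0]["supply"]
    return ("Game Overview:\n" + "\n".join(lines) +
            f"\nKey Information: supply changed by {supply_trend} "
            f"over {len(frames)} frames.")

frames = [
    {"minerals": 300, "gas": 100, "supply_used": 30, "supply_cap": 46,
     "army": {"Zealot": 4}},
    {"minerals": 500, "gas": 200, "supply_used": 38, "supply_cap": 54,
     "army": {"Zealot": 8, "Stalker": 2}},
]
prompt = multi_frame_prompt(frames)
```

The real framework presumably uses the LLM itself for the condensation step; the point of the sketch is only the shape of the frames-to-prompt data flow.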
**Action Execution**: The LLM receives a textual description of the current game state and is prompted to decide on an action. It then outputs a text-based command, which our system interprets and executes in the game environment. The types of actions that can be executed include: - Building construction (e.g., "Build Cybernetics Core") - Unit production (e.g., "Train 5 Probes (Protoss workers)") - Research (e.g., "Research Blink Tech") - Tactical maneuvers (e.g., "Scout with 1 probe", "Attack enemy base") For example, a command might be "Build 1 pylon (provides more supply)" or "Attack enemy base". For more detailed examples of these textual interactions, please refer to Appendix F: Policy Interpretability Examples. **Q3**: Deeper Insights In our experiments, we observed that optimizing LLMs through strategic prompting is essential for competitive performance against built-in AI opponents. Here are some crucial insights: Prompt Engineering: Effective prompts blend game mechanics with strategic concepts. For instance, prompts like "Consider resource management, tech tree progression, and army composition" led to better results than simpler directives. Summarized game-state information at each decision point also enhanced performance. Prompt Sensitivity: LLMs showed high sensitivity to how decision scenarios are framed. Proactive prompts, such as "What potential threats should you prepare for?", generally led to better outcomes than reactive prompts. For example, in one test, GPT-3.5 with optimized prompts achieved an 84% win rate against the Level 4 built-in AI, significantly higher than the 12.5% win rate with basic prompts, largely due to improved resource management and anticipatory defensive strategies. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttals. My concerns are mainly addressed. I will raise my score to 6.
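As a concrete illustration of the text-command interface described in the rebuttal above, a minimal parser mapping LLM output strings to structured actions might look like this; the pattern and the action schema are our own assumptions, not the framework's actual implementation:

```python
import re

# Illustrative parser for text commands such as "Build 1 pylon(provide more
# supply)"; the framework's real interpreter may differ.
COMMAND_PATTERN = re.compile(
    r"(?P<verb>Build|Train|Research|Attack|Scout)\s+"
    r"(?:(?P<count>\d+)\s+)?(?P<target>.+)",
    re.IGNORECASE)

def parse_command(text):
    """Map an LLM output string to a structured action, or None if unparseable."""
    m = COMMAND_PATTERN.match(text.strip())
    if m is None:
        return None
    count = int(m.group("count")) if m.group("count") else 1
    # Strip trailing parenthetical explanations such as "(provide more supply)".
    target = re.sub(r"\s*\(.*\)$", "", m.group("target")).strip()
    return {"verb": m.group("verb").capitalize(), "count": count, "target": target}
```

A rule-based execution layer, as described in the rebuttal, would then dispatch each parsed action to the game environment (choosing building placement, selecting training facilities, and so on).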
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Neural Isometries: Taming Transformations for Equivariant ML
Accept (poster)
Summary: The paper proposes an autoencoder framework that encodes the input symmetries into isometries in the latent space. The equivariance in the latent space is captured by a functional map $\tau$, which is regularized to be an isometry. Instead of hard constraints, the equivariance of the system is encouraged via the optimized $\tau$ and the loss function design. The proposed method is evaluated on a homography-perturbed MNIST, conformal shape classification, and camera pose estimation tasks. The experiments show the proposed method performs on par with the handcrafted equivariant baselines and outperforms a baseline with a similar approach (Neural Fourier Transform). I’m positive about the paper. However, there are some main concerns that I would like to know the answers to. I’m willing to raise the scores once they are addressed. Strengths: 1. The paper is well-written and easy to follow. 2. The paper proposes a novel framework that softly models the latent space equivariance as functional maps. 3. Unlike handcrafted equivariant networks that are limited to certain groups, the proposed framework can model different input transformations. 4. The proposed method is shown to perform on par with the baselines in homNIST and camera pose estimation tasks, and outperforms the baselines in the conformal shape classification task. Weaknesses: 1. Although being shown to be effective in the experiments, the constraint of $\tau$ being an isometry seems a bit heuristic. The paper can benefit from some theoretical investigation on why constraining to being isometries helps the performance. 2. Following up on the previous point, can all transformations be modeled as isometric functional maps in the latent space? That being said, does restricting $\tau$ to isometries affect the types of transformations the network is able to model? 3. 
Since the equivariance is only obtained softly via the loss function design, it’s essential to see how much equivariant is maintained/lost in the latent space. I recommend the authors report some measures of equivariance and compare them with the handcrafted methods. (For example, equivariant error/loss in the latent space.) 4. There are several heuristic designs in the proposed framework, and some of them affect the performance significantly. For example, it is not explained clearly why a smooth multiplicative mask is required to ensure $\tau_\Omega$ is semi-diagonal. The effect of such approximation is also not discussed/evaluated. The second is the multiplicity loss, which greatly affects the performance of the framework. 5. I appreciate the comparisons with handcrafted equivariant networks. However, I believe the paper can be greatly strengthened by comparing it with the same autoencoder framework while not enforcing the latent equivariance and training with data augmentation. This can demonstrate the benefits of the proposed latent equivariant design. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Since there are no hard-coded constraints (except for the regularization) in the system, the proposed method can work with different input transformations. Do you think this work can be applied to automatic symmetry discovery tasks? 2. I understand that the inner product needs to be preserved to construct an isometry, but why does $\tau$ have to commute with $\Omega$? 3. Ln 207, “we show in experiments that a major benefit of our isometric regularization is that our multiplicity loss (promoting a diagonal-as-possible and thus sparse and compact $\tau_\Omega$) can serve as an effective substitute for access to triples.” Is this true for all three experiments, or is it only validated on homNIST? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As mentioned in the paper, the proposed network is unable to perform graph-based tasks. 
In addition, the equivariance in the latent space is not evaluated. As a result, it is unclear how the equivariance is preserved in the latent space. Lastly, by restricting the functional maps to be isometries, the transformations that the network can model might be limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
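The latent equivariance measure the reviewer asks for is typically a relative error of the form $\| f(g \cdot x) - \rho(g) f(x) \| / \| \rho(g) f(x) \|$, where $f$ is the encoder and $\rho(g)$ the latent representation of the transformation. A toy sketch with a linear stand-in encoder constructed to be exactly equivariant (not the paper's trained model):

```python
import numpy as np

# Toy sketch of the standard equivariance-error measure:
#   err = || f(g . x) - rho(g) f(x) || / || rho(g) f(x) ||
# W plays the role of a (linear) encoder and Q the latent isometry rho(g);
# both are stand-ins, not trained networks.

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # latent isometry rho(g)

def f(x):
    """Encoder (linear stand-in)."""
    return W @ x

def g(x):
    """World-space transformation, chosen so that f is exactly equivariant."""
    return np.linalg.inv(W) @ Q @ W @ x

xs = rng.standard_normal((4, 10))  # batch of observations as columns
err = np.linalg.norm(f(g(xs)) - Q @ f(xs)) / np.linalg.norm(Q @ f(xs))
# err is ~0 here by construction; for a trained model, this quantity measures
# how much equivariance is maintained or lost in the latent space.
```

For a trained autoencoder the same quantity is estimated over held-out samples and transformations, which is essentially what the handcrafted-equivariance baselines report.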
Rebuttal 1: Rebuttal: We thank the reviewer for highlighting that our paper is well-written and easy to follow, novel, and competitive with baselines, and for providing valuable feedback! We further appreciate your commitment to reconsidering your score - we have tried our best to address your feedback, but please let us know if we missed anything! ## Clarification: Automatic Symmetry Discovery We would like to clarify that this is *exactly* the goal of NIso. It *does not assume* upfront knowledge of the symmetry, but instead learns to be equivariant *automatically*. ## Why commute with $\Omega$? The goal of NIso is to discover symmetries (i.e. what is preserved between observations) of transformations. For instance, if the image transformation is a shift, then a particular property of the images is preserved: their frequencies. For any transformation that preserves some property, there exists a basis such that its functional map (FM) is block-diagonal in that basis. The basis is defined as the eigendecomposition of some operator $\Omega$. The FM being block-diagonal in the basis – with the size of each block corresponding to the eigenvalue multiplicity – means it will commute with $\Omega$ and equivalently preserve its frequencies (i.e. an isometry). Hence, we can view the problem of symmetry discovery as equivalent to jointly finding a set of maps relating observations *as well as* the operator they commute with. We will be happy to include a detailed theoretical exposition of these properties in the revision. ## Can all transformations be modeled? NIso can *exactly* model unitary transformations. Any other transformation can in theory be approximated with a sufficiently expressive latent space. We also note that some transformations that are not unitary in image space can be made so by estimating the correct latent space: consider the case of camera pose estimation from video.
From frame to frame, there are occlusions, so there exists no unitary map between the two images. However, there exists a unitary map between the states of the underlying 3D scenes. Our motivation with NIso is to take a first step towards a model that can discover the correct symmetries of the underlying problem. Functional maps can nevertheless be sensitive to occlusion and partiality. However, we note that NIso is already *surprisingly robust* to partiality in its inputs, as is evident in the Co3D experiments. Please see the related discussion in the general rebuttal. ## Report measures of equivariance and compare them with the handcrafted methods. We are happy to report that we were able to quantitatively evaluate the equivariance of our model following standard procedures [6-7, 43]. Please see the discussion in the general rebuttal and Tab. 1 in the attached PDF. ## Explanation of why a multiplicative mask is required to ensure diagonality. The multiplicative mask $P_\Lambda$ appears in the derivation of the closed-form solution to the constrained minimization problem in Equation (7) (Section A.2 of the supplement). Remarkably, the derivation reveals that enforcing commutativity with the diagonal matrix of eigenvalues $\Lambda$ (and thus block diagonality) simply reduces to element-wise multiplication with the “sharp” eigenvalue mask in Equation (19) before the Procrustes projection. ## Evaluate effect of smooth multiplicative mask. The “sharp” eigenvalue mask in Equation (19) is difficult to work with: evaluating equality between floating-point values is hard without some heuristic measure of closeness. Furthermore, we found that this approach results in poor backward gradient flow to the eigenvalues, to the point where NIso fails completely. Thus, we replace the “exact” mask with its “soft” analogue, approximating the Kronecker deltas via the exponential. While it induces some degree of error, it allows backprop through the mask.
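A minimal numerical sketch of this scheme, assuming a Gaussian relaxation of the Kronecker-delta mask and the standard SVD-based Procrustes projection (the paper's exact functional forms may differ):

```python
import numpy as np

# Sketch of a "soft" eigenvalue mask followed by a Procrustes projection.
# The Gaussian relaxation of the Kronecker delta is our assumption; the
# paper's Equation (19) may use a different form.

def soft_mask(eigvals, sigma=10.0):
    """P[i, j] ~ 1 when lambda_i == lambda_j, decaying smoothly otherwise."""
    d = eigvals[:, None] - eigvals[None, :]
    return np.exp(-sigma * d ** 2)

def procrustes(M):
    """Project M onto the nearest orthogonal matrix via SVD."""
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

eigvals = np.array([1.0, 1.0, 2.0, 3.0])  # one repeated eigenvalue
raw_map = np.random.default_rng(1).standard_normal((4, 4))
tau = procrustes(soft_mask(eigvals) * raw_map)
# The mask suppresses entries coupling distinct eigenvalues, so tau is pushed
# toward block-diagonal structure matching the eigenvalue multiplicities,
# while remaining differentiable in the eigenvalues.
```

The key property motivating the soft version is visible here: unlike a hard equality test on floating-point eigenvalues, the Gaussian mask has nonzero gradients with respect to `eigvals`, so the operator can be learned by backpropagation.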
We note that we are not the first to introduce a “soft” eigenvalue mask, with alternatives explored extensively in [Ren and Panine et al. 2019] and concurrent work [Ceng and Deng et al. 2024] employing a similar soft mask as a regularizer. ## Explanation of Multiplicity Loss. The point of the multiplicity loss is to make the operator “interesting” and force our framework to recover the symmetry of the transformation. Without the multiplicity loss, all eigenvalues could collapse to a single one. The operator is then a multiple of the identity, which would not constrain the FM in a meaningful way. The more distinct eigenvalues the operator has, the more it is forced to discover subspaces of the latent space that are preserved, i.e., symmetries. Intuitively, the multiplicity loss can also be seen as forcing the network to discover an "as-simple-as-possible" representation for the transformations by promoting diagonality. ## Effect of multiplicity loss across experiments Without the multiplicity loss, our method achieves poor results in all three experiments. On Conf. SHREC`11, without the multiplicity loss NIso fails to discover a meaningful notion of equivariance in the latent space and achieves a classification accuracy of only approximately 36%. In the CO3D experiments, we also indirectly evaluate its absence in the regime where we train and evaluate a version of NIso that does not learn an operator (and thus only enforces orthogonality). We find that this version performs worse than all other baselines. Together these results suggest that forcing the network to discover an operator with a diverse eigenspectrum via the multiplicity loss is integral to gaining a meaningful notion of equivariance in the practical sense. ## Compare to the baseline autoencoder framework with data augmentation. Thank you for suggesting this experiment and we are happy to report we executed it with strong results -- NIso still outperforms these baselines by a significant margin.
Please see the "Comparison with data-augmentation" section in the general rebuttal and Table 2 of the PDF. --- Rebuttal 2: Title: Address remaining questions in discussion period? Comment: Dear Reviewer Sfsw, Thank you again for your comments and feedback! As the end of the author / reviewer discussion period is fast approaching, we would love to hear your thoughts and see if we can address any remaining questions in the remaining time. Please let us know whether we addressed your comments and questions appropriately! Thank you! Best, the authors --- Rebuttal 3: Comment: I thank the authors for their efforts in the rebuttal. Many of my concerns have been addressed. It's good to see NIso can achieve a similar equivariance error to the handcrafted methods. It's also interesting to see patterns similar to spherical harmonics appear in the SO(3) case. It is also great to see NIso perform better than AE with augmentation. The added experiments strengthen the paper. In addition, I kindly disagree with other reviewers on the need to solve occlusion/partiality. Occlusion and imperfect symmetry modeling can be a different research topic on its own in the field. However, I recommend that the authors refrain from claiming that NIso can generally handle occlusion/partiality robustly, as it requires additional proof and verification that is not presented in the paper. Although most of my concerns have been addressed, there are still some that remain. The multiplicity loss still seems like a heuristic trick to me, and it has a significant impact on the system's performance. I would love to see a more theoretical probe into it in future work. Secondly, although it is shown experimentally that the proposed method can deal with transformations that are not unitary, it is still not guaranteed theoretically.
The authors provide a plausible explanation in response to Reviewer FFcP (there exists a domain in which these transformations are either unitary or isometric, and can be mapped to via a sufficiently expressive autoencoder). Although I personally agree with such an explanation, this is still a speculation without proper proof. I again suggest the authors refrain from making such a claim. Overall, the paper is strengthened after the rebuttal, and I believe the paper introduces a novel way to softly model equivariance in the latent space, which is a great addition to the community. Therefore, I have raised my score to reflect the above points. --- Rebuttal 4: Comment: Thank you for your response and for updating your score; we are glad that you find NIso will contribute to the NeurIPS community! Following your comments, we will update our paper to make sure not to overclaim. Specifically, we will clarify that (1) NIso can only exactly model unitary transformations; and (2) that our experimental results only suggest the potential of NIso to model more complex, non-unitary transformations, and that a rigorous theoretical investigation combined with extensive empirical results is necessary to convincingly demonstrate broad generalization. We will also clarify that (3) NIso does not currently explicitly handle partiality. We agree that a deeper investigation into the multiplicity loss will strengthen our method and we are actively investigating this in follow-up work. We would also like to note that an important novel benefit of our formulation is that it allows us to regularize for block-diagonality *without backpropagating these gradients through the solve for the estimated transformation.* Specifically, the prior methods including the NFT attempt to impose a block-diagonality loss on the estimated transformation itself. We discussed with the authors, who confirmed that this is only possible in a post-processing step as it otherwise destabilizes training.
In contrast, by learning to parameterize an operator $\Omega$, we instead impose the regularization for block diagonality on the mask $P_\Lambda$, which is stable during training. Our comparisons with the NFT (see Tab. 3, center left) suggest that this is a key feature enabling self-supervised discovery of sparse and condensed representations, and we are excited to investigate this further. Thank you again for your comments, which will help to clarify the paper!
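The structural claim threaded through this exchange - that commuting with $\Omega$ constrains $\tau$ only insofar as the eigenvalues are distinct, which is why the multiplicity loss matters - can be checked numerically. A toy illustration (our own, not the paper's code):

```python
import numpy as np

# For tau commuting with a diagonal Lambda, entries coupling distinct
# eigenvalues must vanish, so a rich spectrum forces block-diagonal structure;
# a collapsed spectrum (Omega = c * I) forces nothing.

def forced_zeros(eigvals):
    """Boolean mask of entries (i, j) where [tau, Lambda] = 0 forces tau[i, j] = 0."""
    return eigvals[:, None] != eigvals[None, :]

distinct = np.array([1.0, 2.0, 2.0, 3.0])   # multiplicities 1, 2, 1
collapsed = np.array([5.0, 5.0, 5.0, 5.0])  # fully degenerate spectrum

n_forced_distinct = int(forced_zeros(distinct).sum())    # off-block entries forced to 0
n_forced_collapsed = int(forced_zeros(collapsed).sum())  # nothing forced

# A tau respecting the mask really does commute with Lambda:
rng = np.random.default_rng(2)
tau = np.where(forced_zeros(distinct), 0.0, rng.standard_normal((4, 4)))
Lam = np.diag(distinct)
commutator_norm = np.linalg.norm(tau @ Lam - Lam @ tau)
```

With the distinct spectrum, 10 of the 16 entries of `tau` are forced to zero (everything outside the multiplicity blocks), while the collapsed spectrum forces none, mirroring the rebuttal's point that without the multiplicity loss the commutation constraint is vacuous.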
Summary: This paper introduces Neural Isometries, which is an autoencoder framework learning to map the observation space to a general-purpose latent space wherein encodings are related by isometries whenever their corresponding observations are geometrically related in world space. Several experiments, including camera pose estimation, are conducted. Strengths: The idea of transforming complicated equivariances in observation space into isometries in latent space is intuitive and interesting. Experiments have demonstrated the potential of this method in practical tasks. The paper is well written and easy to understand. Weaknesses: This is an interesting paper but in-depth discussion is lacking. My main concern is that there is no quantitative or theoretical description of the equivariance. For example: 1. How equivariant is the model? i.e., how large is the equivariance loss? 2. What type of equivariance in the observation space can be modeled? Will the model fail when the perturbations are too strong? In addition, the experimental results are relatively weak: only 3 small-scale experiments are conducted, and only a few baselines are considered. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. If the observation space is already isometric, e.g. rotation by 90 degrees, what can be said about the latent space? 2. How robust is this model? For example, how will the latent vector change when the inputs are partially observed (masked)? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for validating our motivation and our communication - we appreciate it! ## Quantitative evaluation of equivariance: How equivariant is the model? We are happy to report that we were able to quantitatively evaluate the equivariance of our model following standard procedures [6-7, 43]. Please see the discussion in the general rebuttal and Tab. 1 in the attached PDF. ## What type of equivariance in the observation space can be modeled? NIso can *exactly* model unitary transformations. Any other transformation can in theory be approximated with a sufficiently expressive latent space. We also note that some transformations that are not unitary in image space can be made unitary by estimating the correct latent space: consider the case of camera pose estimation from video. From frame to frame, there are occlusions, so there exists no unitary map between the two images. However, there exists a unitary map between the states of the underlying 3D scenes. Our motivation with NIso is to take a first step towards a model that can discover the correct symmetry of the underlying problem. ## Will the model fail when the perturbations are too strong? How robust is the model? As remarked in the general discussion, functional maps can be sensitive to occlusion and partiality. However, we note that NIso is already *surprisingly robust* to partiality in its inputs, as is evident in the Co3D experiments. Please see the discussion in the general rebuttal under the “NIso under the presence of occlusions / masking / partiality” section. ## Lack of Baselines + Only Small-Scale / Synthetic Experimental Results We are happy to report that we have provided comparisons with additional baselines, including with two SoTA representation learners (DINOv2 and BeIT) in the pose estimation experiments on the Co3D dataset. Please see the "Additional baselines" section in the general rebuttal.
However, we would like to respectfully push back on the notion that we are considering too few baselines. We note that there is not much work on the topic of equivariant machine learning *without* prior knowledge of the symmetry group - the NFT is the most relevant baseline in this space. Further, the reviewer is suggesting that we are only comparing “small-scale” or “synthetic” experiments. However, camera pose estimation on Co3D is *not* a small-scale experiment - it is a large, real-world dataset that is actively used for benchmarking of applications from pose estimation to novel view synthesis. More generally, we sought geometric deep-learning baselines for difficult, non-compact symmetry groups. To the best of our knowledge, the only existing SoTA equivariant models handling such symmetries in vision-related tasks are homConv [6] and LieDecomp [43] (both for homographies) and MobiusConv [7] (for Mobius Transformations). Thus, benchmarking on MNIST and SHREC is absolutely vital to support our claims. These baselines are not scalable and will not run on any more complex or real-world datasets - this is not a limitation of *our* method, but a limitation of the baselines. Please also see our discussion in the general rebuttal under the section "Scope of experimental evaluations". We would be happy to compare with additional baselines if you could clarify what these baselines should be. ## If the observation space is already isometric, e.g. rotate by 90 degrees, what can be said about the latent space? Note that our toy experiment - discovering the toric Laplacian - is *exactly* an instance of this problem. We parameterize images to lie on a torus, such that shifts become rotations, and hence, the shift is exactly an isometric transformation! In this case, our formulation is likely to recover a close approximation of the *correct* operator and corresponding irreducible unitary representation of the transformation in the eigenbasis - in Exp.
1, we recover a very good approximation of the Laplacian operator and the block-diagonal shift representation. We further demonstrate this in the new experiment where we discover the spherical harmonic transform (see the “Discovering the spherical harmonic transform” section in the general rebuttal and Fig. 1 of the attached PDF). This experiment results in the discovery of a basis with almost *exactly* the properties of the spherical harmonics. In particular, the estimated $\tau_\Omega$ manifest *exactly* the same structure as the ground truth Wigner-D matrices corresponding to the rotation (which are the IURs of $SO(3)$), with square blocks of size $(2\ell + 1) \times (2\ell + 1)$ for the $\ell$-th distinct eigenvalue. --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. I have read the experiments in the attached PDF, and I am clear now that the equivariance error is at least comparable with the existing methods. The reply on the experiments is also convincing. However, the other two concerns are not addressed: the types of equivariance that can be modeled and the robustness to partial visibility. The authors argued that "Any other transformation can in theory be approximated with a sufficiently expressive latent space". That is imprecise. On the other hand, due to the lack of a description of the robustness against partial visibility, and because the model does not have any constraint on the equivariance of the input, it is hard to understand why the model is robust or how robust I should expect it to be for practical tasks. The authors mentioned that this issue will be investigated in the next step, but since the current version of the paper does not include any discussion of this issue, I think it is not complete. --- Rebuttal 2: Title: Response (1/2): Modeling Equivariance Comment: Thank you for your response. We are glad to hear that you view our experimental results as convincing. ## Which Types of Equivariance can be modeled? 
We apologize for the lack of clarity in our previous response. In the following, we make two points: (1) We show that NIso can exactly model equivariance to any unitary transformation, which covers a large fraction of previously proposed equivariant ML architectures. (2) We provide detailed reasoning for our empirical results, which clearly suggest that NIso can learn to be equivariant even to certain non-unitary groups. ### Equivariance to unitary transformations NIso can learn to be equivariant to any transformation that preserves the norm of its input. Examples include:
- Shifts on the torus, as demonstrated in Fig. 3 of the main paper.
- SO(3) acting on spherical inputs, as demonstrated in Fig. 1 of the rebuttal PDF.
- The group of 90-degree rotations mentioned by the reviewer, which we empirically validated but were not able to include in the rebuttal PDF due to space constraints (we will include this in the camera-ready paper!).

We note that prior work in equivariant ML at top conferences is often equivariant to only **a single one** of these transformations, with significant expert crafting. For instance, SO(3) is a non-abelian group and its IURs are substantially more complex – nevertheless, we are able to recover a basis and representation of transformations with *exactly* the same properties. The fact that NIso learns to be equivariant to all these transformations is a significant strength - we are not aware of any previous work in symmetry discovery that has demonstrated this capacity. We hope that this is absolutely unambiguous: **NIso can learn to be equivariant to any transformation that preserves the norm of its input**. ### Equivariance to non-unitary transformations We agree that the statement “Any transformation can be approximated with a sufficiently strong encoder” was too loose. We make it precise in the following. As discussed above, NIso can closely approximate **unitary** transformations on its input. 
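As a concrete numerical illustration of the unitary case (our sketch, not the paper's code): a circular shift of a discretized signal preserves the norm, and in the Fourier eigenbasis it acts by per-frequency phases, a one-dimensional analogue of the block-diagonal structure discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 64, 5
x = rng.standard_normal(n)

# A circular shift is a permutation, hence unitary: it preserves the norm.
shifted = np.roll(x, s)
assert np.isclose(np.linalg.norm(shifted), np.linalg.norm(x))

# In the Fourier (eigen)basis of the shift operator, the shift acts by
# per-frequency phases: a diagonal, norm-preserving representation.
phases = np.exp(-2j * np.pi * s * np.arange(n) / n)
assert np.allclose(np.fft.fft(shifted), phases * np.fft.fft(x))
```

Here the torus is one-dimensional for brevity; the 2-D case of the paper behaves the same way frequency-by-frequency.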
For other more complex transformations, our experiments suggest that a sufficiently expressive autoencoder can, in fact, discover a latent space where the manifestation of the transformation is approximately unitary and even isometric. We believe this is a surprising and interesting result that is perhaps being overlooked. We note that it is not understood which transformations or groups can be embedded in this way, and point out that this is an exploration of fundamental math, beyond the scope of any ML paper. However, a potential explanation of this phenomenon could be related to the fact that for many challenging transformations, there exists a domain in which these transformations are either unitary or isometric. For example, Mobius transformations are isomorphic to the isometries of the hyperbolic ball with their action on the sphere representing their restriction to the boundary. Similarly, camera motions in the image plane are in fact projections of the isometric action of SE(3) on the underlying 3D scene. We hypothesize that by forcing the network to find an isometric representation for the observed transformations we are implicitly regularizing the learned latent space to discover information about this underlying geometry. We note that our ability to recover reasonable camera poses from the estimated transformations where other representation learners fail is indicative of this. An additional illustrative example supporting this hypothesis can in fact be seen in Figure 7 of the supplement, where we show examples of failure cases in which the pose extraction from $\tau_{\Omega}$ fails catastrophically. Here we observe that these failure cases often coincide with sequences of frames where background and foreground objects are respectively far from and close to the camera – both cases where depth-estimation pipelines also often fail. ### Conclusion All in all, NIso is a novel *first step* towards a new paradigm for symmetry discovery and equivariant ML. 
To the best of our knowledge, no prior work in symmetry discovery has been able to show, for instance, self-supervised discovery of approximate equivariance to a variety of complex transformations, some of which do not even form a group in the observation space (such as the action of camera motions on the image plane). Using the additional page of the paper, we will add the above discussion to the “Methods” section of the paper - we agree with the reviewer that this will make the paper stronger! --- Rebuttal 3: Title: Response (2/2): Analysis of Partiality Robustness Comment: ## Analysis: Robustness to Partiality In response to the reviewer’s concerns, we are happy to report that we performed a theoretically grounded experimental evaluation of robustness, which we will include in the revision using the additional page of the camera-ready paper. Specifically, we seek to evaluate the robustness of the isometry-estimation module in isolation in the presence of occlusions. We consider the encoder-free paradigm – consisting of the projection to the learned basis, the estimation of the isometric map as in Equation (8), followed by unprojection – which is the same setup as in the toric and spherical Laplacian experiments. **This setup allows us to study exactly the effect of masking on the latents (as requested by the reviewer), which in this case are the input images.** To do so, we consider two observations $\psi$ and $T\psi$ which only partially correspond, and denote by $O_\psi$ and $O_{T\psi}$ the diagonal overlap masks such that the $i$-th diagonal element of $O_{\psi}$ is 1 if $i$ is in the overlap with $T\psi$ and $0$ otherwise, with $O_{T\psi}$ defined in the same manner. We observe that the equivariance error under partiality can thus be defined as the magnitude of the difference between the visible components of $O_{\psi} \psi$ mapped under $\tau$ and the corresponding visible components $O_{T\psi} T\psi$. 
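The masked-error construction described here can be sketched in a few lines of numpy (our illustration, not the paper's code; `tau`, `psi`, and the masks stand in for the learned objects):

```python
import numpy as np

def partiality_equivariance_error(tau, psi, T_psi, O_psi, O_Tpsi):
    """Relative squared error between the tau-mapped visible part of psi
    and the visible part of T psi (masks are diagonal 0/1 matrices)."""
    num = np.linalg.norm(O_Tpsi @ tau @ O_psi @ psi - O_Tpsi @ T_psi) ** 2
    den = np.linalg.norm(O_Tpsi @ T_psi) ** 2
    return num / den

# Toy sanity check: with a perfect isometry and full overlap, the error is 0.
rng = np.random.default_rng(0)
n = 16
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random orthogonal "tau"
psi = rng.standard_normal(n)
I = np.eye(n)
assert np.isclose(partiality_equivariance_error(Q, psi, Q @ psi, I, I), 0.0)
```

With genuinely partial masks, the error measures only how well the overlapping components are matched, which is exactly the quantity formalized just below.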
That is, we define the partiality equivariance error to be $\lVert O_{T\psi} \tau O_{\psi} \psi - O_{T\psi} T\psi \rVert^2 / \lVert O_{T\psi} T\psi \rVert^2$. Using the toric Laplacian experiments as a base, we consider two models of partiality. In the first, we no longer consider the domain to be toric, and instead shifted images are clipped at the boundaries, with the resulting empty pixels masked to zero – corresponding to the type of partiality often observed in video. In the second, we randomly mask out $2 \times 2$ patches in the shifted image. We train five instances of our model under both partiality regimes, masking out approximately 10%, 20%, 30%, 40%, and 50% of pixels in each instance and measuring the resulting partial equivariance error on the test set. The results are shown in the table below.

| Percent Occluded | 10% | 20% | 30% | 40% | 50% |
| :------------------------ | :-----: | :-----: | :-----: | :-----: | :-----: |
| Shift Mask | 5.66% | 9.42% | 14.43% | 17.42% | 20.47% |
| Patch Mask | 6.69% | 13.62% | 23.54% | 34.14% | 45.35% |

Notably, we see that the partial equivariance error is consistently lower in the presence of shift-based occlusions, and increases less as the percentage of occluded pixels increases. We observe that the principal difference between these two regimes is that the unoccluded pixels exist in a contiguous block under the shift mask, whereas the region of unoccluded pixels is fragmented under the patch masking. Intuitively, the contiguous matches between large blocks (comprising the majority of the image) act as a strong regularizer that de-prioritizes matching the occluded areas. Conversely, when the correspondence is interrupted and fragmented, it offers weaker regularization and the resulting map is more affected by the spurious matches induced by the occlusion. --- Rebuttal 4: Title: Additional question remaining in discussion? Comment: Dear FFcP, Thank you again for your response and feedback! 
We believe we were able to address your stated remaining concerns, including a straightforward experimental evaluation of robustness. As the end of the author / reviewer discussion period is fast approaching, we would like to be sure you are satisfied by our discussion and see if we can address any remaining questions. Please let us know whether we addressed your comments and questions appropriately! Thank you! Best, the authors
Summary: This paper proposes a generic equivariant ML framework by learning latent representations that are modelled as being related by an isometry. There are several important design choices made by the authors: (1.) An autoencoder framework that keeps the spatial structure (i.e. images get encoded into images) (2.) The isometric map is represented compactly with a basis that is the outcome of an eigendecomposition of a PSD operator (3.) The isometry manifests as an orthogonal matrix with a sparse block-diagonal structure in its reduced projected form. Experiments are shown for (1.) Learning the spectrum of a Toric Laplacian and comparing it qualitatively to the real one in Fig 3 (2.) Homography-perturbed MNIST (3.) Conformal shape classification and (4.) Camera Pose Estimation from real-world video. The results (1) show a good proof of concept and (2), (3) and (4) demonstrate a clear superiority over the closest conceptual baseline NFT. (2.) also makes the case for achieving comparable performance with simpler equivariant architectures using the NIso latent space. Strengths: - Overall the biggest strength of this paper is that it was an enjoyable read. I found the hypothesis (i.e. learning latent codes related by an isometry for arbitrary transformations in the observation space) to be interesting and reasonable. The overall writing and description of the method is very nice. - The Experiments, especially (1.), were a good proof of concept. Weaknesses: - I am actually quite confused about how the operator $\Omega$ and mass matrix $M$ are learned. The Laplace–Beltrami operator is a very structured object (see all requirements enumerated in Wardetzky, Max, et al. "Discrete Laplace operators: no free lunch." Symposium on Geometry Processing. Vol. 33. 2007.). It is unclear what the nature of this operator is, and its learning feels ad hoc and unmotivated. - Do the authors make sure that the optimization for the functional map is well-posed? 
Typically, in the original formulation, a "good" functional map is the outcome of using many, many descriptors in addition to regularization constraints like commutativity with the Laplacian. If I understand correctly, the proposed formulation seems to be confident in learning a structured functional map (like Fig 2) using just one pair of corresponding functions? - As a high-level opinion: Overall the experimental section could be stronger. The "discovery" of the toric Laplacian was nice, but a demonstration on roto-translations, perhaps with scale, and/or spherical images with SO(3) would be extremely convincing. I would also recommend an evaluation that focuses beyond NFT and actually quantifies the complexity of different equivariant models. - As mentioned by the authors, the requirement of generalizing to domains with diverse connectivity is indeed a hard one, but important. Technical Quality: 2 Clarity: 4 Questions for Authors: - The basis functions visualized in Fig 3 (third column): are they learned or from the actual toric Laplacian? What are the various rows? - It would also be useful to report some measure of complexity in Table 1 indicating "meticulously-engineered, handcrafted networks" - More examples of evidence showing good basis functions - What is a perturbation in Section 5.1? Confidence: 3 Soundness: 2 Presentation: 4 Contribution: 3 Limitations: I think the authors make a fair declaration of the limitations of their work in Section 6. I would make a stronger vote after the rebuttal. I think this paper has many ingredients to warrant acceptance: clever idea, good argumentation, and decent proof of concept. It lacks clarity in some places, and the evaluation does not appear as general as the description of the method. For this reason, I maintain a borderline acceptance with a full intention to re-assess in the next round. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review, and for your kind words on our hypothesis being interesting and plausible, our writing enjoyable, and our validation with the laplacian helpful! We are happy to report that we executed your key task -- recovering the harmonics with spherical images – with exciting results! ## Learning the operator $\Omega$ and mass matrix $\mathbb{M}$. ### Motivation The goal of NIso is to discover symmetries (i.e. what is preserved between observations) of complex transformations in image space. For instance, in the toric laplacian experiment, the image transformation is a shift. Under shifts, a particular property of the images is preserved: their frequencies. For any transformation that preserves some property, there exists a basis such that its functional map (FM) is block-diagonal in that basis. The functional basis is defined as the eigendecomposition of some operator $\Omega$. The FM being block diagonal in the basis – with the size of each block corresponding to the eigenvalue multiplicity – means it will commute with $\Omega$ and equivalently preserve its frequencies (i.e. an isometry). Hence, we can view the problem of symmetry discovery as equivalent to jointly finding a set of maps relating observations *as well as* the operator they commute with. ### Parameterization and learning [Wang and Solomon, 2019] establish that the key property characterizing functional operators is semi-positive-definiteness (SPD), and we impose no other constraints so as not to limit the space of discoverable operators. (We will cite this work in the revision). SPD is defined with respect to some inner product, and thus we learn a diagonal mass matrix (with positive entries) which defines a functional inner product. Then, we seek to simultaneously regress the parameters for an operator $\Omega$ that is SPD w.r.t. the inner product defined by $\mathbb{M}$. 
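A minimal numpy sketch of such a construction (our illustration, not the paper's code; we assume Equation (4) takes the eigendecomposition form $\Omega = \Phi \Lambda \Phi^{\top} \mathbb{M}$ with $\Phi^{\top} \mathbb{M} \Phi = I$, the standard form in the FM literature):

```python
import numpy as np

def nearest_M_orthogonal(W, m_diag):
    """Project free weights W to the nearest Phi with Phi^T M Phi = I
    (M = diag(m_diag), positive), via an SVD in the sqrt(M)-whitened space."""
    s = np.sqrt(m_diag)
    U, _, Vt = np.linalg.svd(s[:, None] * W, full_matrices=False)
    return (U @ Vt) / s[:, None]

rng = np.random.default_rng(0)
n, k = 8, 5
m = np.exp(rng.standard_normal(n))       # positive diagonal mass matrix
Phi = nearest_M_orthogonal(rng.standard_normal((n, k)), m)
M = np.diag(m)
assert np.allclose(Phi.T @ M @ Phi, np.eye(k))

# An operator Omega = Phi diag(lam) Phi^T M with lam >= 0 is M-SPD:
# M Omega is symmetric PSD, and Phi are its eigenfunctions.
lam = np.exp(rng.standard_normal(k))
Omega = Phi @ np.diag(lam) @ Phi.T @ M
assert np.allclose(M @ Omega, (M @ Omega).T)
assert np.allclose(Omega @ Phi, Phi @ np.diag(lam))
```

The whitening trick reduces the $\mathbb{M}$-orthogonal projection to the ordinary orthogonal Procrustes projection, which is why a single SVD suffices.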
We observe that *any* $\mathbb{M}$-SPD operator can be expressed in the form of Equation (4) for some matrix of $\mathbb{M}$-orthogonal eigenfunctions $\Phi$ and non-negative eigenvalues $\Lambda$. Thus, we learn a set of weights parameterizing $\Phi$ and $\Lambda$ which, together with the learned $\mathbb{M}$, form $\Omega$. In practice, the weights of $\Phi$ are projected to the nearest $\mathbb{M}$-orthogonal matrix via the SVD. ## Well-posedness of functional map We carefully ensure that the solve for the functional map is well-posed. First, we leverage a number of regularizers that jointly ensure well-posedness: (1) orthogonality; (2) operator commutativity; and (3) an “equivariance loss”, which is a version of the widely used “descriptor preservation” loss in the FM literature. These are several of the most common principled regularizers, widely disseminated and discussed in the FM literature [21, 26, 38]. Second, we note that our latent codes double as multi-channel pointwise descriptors, of the same kind as used in the Deep FM family of work [Litany et al. 2017, 38], which forms the backbone of many SoTA shape correspondence pipelines. ## Discovering the Spherical Harmonic Transform We are happy to report that we have performed this experiment with *spectacular* results! Please see the discussion in the general rebuttal under “Discovering the spherical harmonics” and Fig. 1 of the PDF. We hope you find these convincing! ## Quantifying the complexity of different equivariant models. There are two dimensions of “complexity” that we failed to clearly delineate in the text: (1) *human* complexity of hand-crafting the method and (2) compute / memory complexity. While approaches for rotation equivariance are widespread, there are no standard equivariant networks for most difficult symmetries. Any attempt at building such a network is associated with extensive, difficult mathematical labor. 
MobiusConv [7], for instance, essentially derives what to the best of our knowledge is a previously unknown representation on a specialized group of filters (which is no mean feat), simply to facilitate a tractable discretization. homConv [6] and LieDecomp [43] seek to sidestep this via a recipe for general equivariance using Monte Carlo integration. However, these methods struggle with *compute* complexity. For $N$ samples (typically at least 10-25), the computational complexity of a forward pass scales as $N^L$, where $L$ is the number of layers! In practice, this means that these methods cannot be scaled past incredibly low-dimensional images, which is why both homConv and LieDecomp are run only on MNIST. In contrast, NIso relies on recovering a latent space such that the transformations can be represented as an isometry, which can thus be exploited by far more efficient and scalable isometry-equivariant networks such as [36] and [42]. ## Evaluation that focuses beyond NFT. We agree and have since added two additional baselines to the submission - see the point “Additional Baselines” in the general reply above! However, we would like to note that besides the NFT, *we are not aware of any other methods that aim to perform equivariant representation learning without prior knowledge of the group*. Thus, we see the NFT as an important benchmark with which to compare. ## Generalizing across connectivity We completely agree and are currently exploring this as a direction of future work. ## The basis functions visualized in Fig3 (third column) are learned or from the actual toric laplacian? What are the various rows? They are learned. At the time we did not discover them ordered by a classical notion of frequency, so the order is random. Rows are eigenfunctions in C-style indexing. 
## More examples of evidence showing good basis functions In addition to the learned spherical harmonics, we have also visualized examples of the eigenfunctions discovered in each of the three other experiments. Please find them in the PDF, in Fig. 2. ## What is a perturbation in Section 5.1? Perturbation refers to applying a homography to the image. --- Rebuttal 2: Title: Address remaining questions in discussion? Comment: Dear JQqD, Thank you again for your comments and feedback! As the end of the author / reviewer discussion period is fast approaching, we would love to hear your thoughts and see if we can address any remaining questions in the remaining time. Please let us know whether we addressed your comments and questions appropriately! Thank you! Best, the authors --- Rebuttal Comment 2.1: Title: Rebuttal response Comment: Thank you for a refreshing rebuttal. After reading through the rebuttal and other reviews, I am quite happy to raise my score to a clear acceptance. This is an interesting paper with some fascinating observations. However, there is still some way to go before this can really be generalized and put into meaningful action. Sustaining drawbacks are some very hard questions, like: how much data is needed for this approach? Equivariance to multiple non-trivial (perhaps non-unitary) transformations of the input, etc. But I think as a good proof of concept, I believe this paper tells a story that would be interesting at NeurIPS. I would highly recommend including the spherical image experiment in the main draft (also, what kind of architecture did you use for the encoder?) --- Reply to Comment 2.1.1: Comment: Thank you for your response and we are extremely glad to hear you are happy to recommend a clear acceptance! We completely agree that NIso represents a first step towards identifying underlying symmetries and that further investigation is necessary to support claims of broad generalization. 
The reviewer is correct to point out that answering difficult questions like those regarding data efficiency and effectiveness in the presence of multiple, different non-unitary actions is an important prerequisite. We will amend the revision to reflect this and make sure that we communicate limitations on partiality and transformations modeled clearly. We will incorporate the spherical harmonic transform experiment into the main body of the revision, as the results are very compelling (thank you again for suggesting this experiment!). In both this experiment and the toric Laplacian experiment, the encoder and decoder are just the identity map. We pass the images directly to the transformation estimation module, learning only the parameterization of the operator with the goal of evaluating the abilities of the module in isolation.
Summary: The main motivation behind Neural Isometry is as follows: most real-world transformations in vision and geometry processing lack identifiable group structure and are therefore challenging for prior work in equivariant learning that assumes such knowledge a priori. The paper proposes an autoencoder framework that maps observations (related by some geometric transformations) to a latent space where embeddings are related by a linear transform. The framework leverages the existing functional map framework and corresponding regularization to learn this latent space. The paper validates and compares its performance with the baseline NFT (Neural Fourier Transform), which assumes knowledge of the group a priori. Strengths: - The problem of learning a structured latent space in a self-supervised equivariant learning setup is relevant. - The proposed framework is formalized clearly and in detail in Section 4. Weaknesses: As mentioned many times in the paper, instead of theoretical justification (as in NFT), this paper seeks to validate the efficacy of the approach experimentally on real-world tasks. - Limited Experimental Setup: The datasets used are still very small-scale (MNIST) or synthetic (SHREC) and therefore far from real-world data. The submission mainly follows and compares with one baseline (NFT) that was implemented by the authors themselves. The submission could motivate the problem/solution by probing/playing with the learned structured space, e.g. in geometry processing tasks [1]. - Limitation of Functional map: One of the biggest limitations of the functional map framework is its applicability to real-world data where partiality is ubiquitous (due to occlusion etc.). Therefore, Neural Isometry by design also inherits this limitation. Experiments are shown on MNIST/SHREC where there is no occlusion or partiality. In the third benchmark, instead of skipping the odd frame, does the performance decay rapidly between distant frames? 
- Presentation: The submission contains several typos, convoluted sentences (Line 85-88) and statements without a citation, e.g. -- Line 49: <Lorentz transformations with d'Alembert operator of Minkowski space> a citation is missing here. Also, why do we need to know this fact in the introduction of this paper? -- Line 42: <by preserving the spatial dimensions> not sure what this means after reading the next 2 lines. Please clarify. -- Line 89: <orthogonal relaxation of FM> citation, or please detail what those are. - Related work: The submission cites a dozen variants of the deep functional map paper [2] but not the deep functional map paper itself [2] that inspired them. Please justify how [20,21,22,39] are related to this work or inspired this work, given the other deep functional map pipelines cited in the paper. The reviewer would instead relate/distinguish this work with [3]. 1. Dubrovina et al., "Composite Shape Modeling via Latent Space Factorization", 2021. 2. Litany et al., "Deep Functional Maps", 2017. 3. Rustamov et al., "Map-based Exploration of Intrinsic Shape Differences and Variability", 2013. Technical Quality: 2 Clarity: 2 Questions for Authors: please see above. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The submission should state the scalability of this approach given its reliance on the Laplacian eigenbasis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Limited experimental setup We would like to respectfully push back on the notion that we are considering too few baselines. We note that there is very little work on equivariant machine learning *without* prior knowledge of the symmetry group - the NFT is the most relevant baseline in this space. Further, the reviewer is suggesting that we are only comparing “small-scale” or “synthetic” experiments. However, camera pose estimation on Co3D is *not* a small-scale experiment - it is a large, real-world dataset that is actively used for benchmarking applications from pose estimation to novel view synthesis. More generally, we sought geometric deep-learning baselines for difficult, non-compact symmetries in vision tasks. To the best of our knowledge, the only existing models handling these symmetries are homConv [6] and LieDecomp [43] (both for homographies) and MobiusConv [7] (for Mobius transformations). Thus, benchmarking on MNIST and SHREC is necessary to support our claims. These baselines are not scalable and will not run on any more complex or real-world datasets - this is not a limitation of *our* method, but a limitation of the baselines. We would be happy to compare with additional baselines if you could clarify what they should be. ## Limitation of functional maps (partiality) We agree that functional maps can be sensitive to occlusion and partiality. However, we note that NIso is already *surprisingly robust* to partiality in its inputs, as is evident in the Co3D experiments. Please see the discussion in the general rebuttal. ## Presentation ### Line 49: <Lorentz transformations with d'Alembert operator of Minkowski space...> Thank you - we agree that this is confusing and have removed it. ### Line 42: <by preserving the spatial dimensions...> By this, we mean that the encoder encodes images into 2D feature maps with height and width higher than 1x1 pixel. 
This is opposed to *collapsing* the spatial dimensions, as done in the NFT, which encodes the whole image into a single, global latent vector. However, we agree that this is unclear as written, and will change the wording. ### Line 89: <orthogonal relaxation of FM> Reference [23] (cited later in the same sentence) seeks functional maps that represent near-conformal, rather than isometric, deformations. Conformal transformations preserve the Dirichlet inner product, and thus manifest as orthogonal transformations in the eigenbasis of the Dirichlet Laplacian. That said, we agree that the sentence is overly long and that “orthogonal relaxation” is vague as written. This will be addressed in the revision. ### Does not cite the “Deep Functional Maps” paper inspiring cited work. We will add this in the revision and make clear its influence. ### Justify how [20, 21, 22, 39] are related to or inspire the present work These represent recent works that either seek to address existing drawbacks of FMs, including scalability [20] and spatial consistency [21], investigate the properties of deep FM frameworks [39], or explore composability with other powerful tools, including LLMs [22]. All methods achieve state-of-the-art results, and we include them to show that increasing the efficacy and flexibility of FMs is an active topic. However, we would be happy to instead discuss in more detail those works most closely related to our own, including Litany et al. 2017 and Rustamov et al. 2013. ### Relate/distinguish with the work of Rustamov et al. 2013 We thank the reviewer for suggesting we examine this work, as we now understand it provides an interesting comparison we did not previously realize! 
This work shows how performing PCA on shape difference operators constructed with FMs forms a kind of “latent space” where codes corresponding to given shapes share a notion of closeness whenever said shapes can be related by specific transformations, including conformal and authalic (area-preserving) ones. Like NIso, this forms a framework for symmetry discovery by observing the clusterings in the latent space. However, in NIso similar observations lie on the same orbit formed by the transformations $\tau$, rather than being “close” in the Euclidean sense. Perhaps the biggest difference between NIso and this work is that NIso also recovers a dimensionally-reduced representation of the transformations between observations. In addition, we demonstrate in experiments that NIso can recover a meaningfully structured latent space for various types of transformations in the observation space, whereas this work only reveals meaningful structure as it relates to conformal and authalic transformations. ## Scalability of the eigenbasis We agree that the ultimate scalability of the eigenbasis could be a limitation and we will note this in the revision. We believe the limiting factor is the parameterization of the basis directly via learned weights, which would result in large model sizes for high-resolution latent spaces. Currently, we find this can be mitigated by encoding to a lower-resolution latent space, where the dimensions of the weight matrices are manageable, though this likely comes at the cost of some expressivity due to aliasing. We are working on a follow-up to address this problem in tandem with partiality by defining an *observation-dependent* eigenbasis via the output of a second encoder, inspired by the concurrent work of [Cheng et al. 2024]. However, we note that in other Deep FM frameworks, the principal computational bottleneck is the solve for the functional map. 
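Equation (8) itself is not reproduced in this thread; assuming it takes the standard orthogonal-Procrustes form (the closed-form projection onto the orthogonal matrices), a single-SVD solve for the isometry looks like the following hypothetical numpy sketch:

```python
import numpy as np

def closed_form_isometry(A, B):
    """Best orthogonal tau minimizing ||tau A - B||_F: a single K x K SVD
    of B A^T (orthogonal Procrustes; an assumed form of the closed-form solve)."""
    U, _, Vt = np.linalg.svd(B @ A.T)
    return U @ Vt

# Sanity check: recover a known K x K isometry from paired coefficients.
rng = np.random.default_rng(0)
k, m = 6, 40
Q, _ = np.linalg.qr(rng.standard_normal((k, k)))  # ground-truth isometry
A = rng.standard_normal((k, m))                   # basis coefficients
tau = closed_form_isometry(A, Q @ A)
assert np.allclose(tau, Q)
assert np.allclose(tau.T @ tau, np.eye(k))
```

The contrast with iterative linear-system solves is that the cost here is a single SVD of a $K \times K$ matrix, independent of the number of descriptor channels.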
For example, the method proposed in the influential [38] to compute a near-isometric functional map requires solving $K$ linear systems of size $K \times K$, with $K$ the dimension of the eigenbasis. For NIso, the solve requires only a single $K \times K$ SVD computation. This is due to the closed-form solution derived in Equation (8) which, while not mathematically significant, is to the best of our knowledge novel in the functional maps literature. --- Rebuttal 2: Title: Questions remaining? Comment: Dear 2Npe, Thank you again for your comments and feedback! As the end of the author / reviewer discussion period is fast approaching, we would love to hear your thoughts and see if we can address any remaining questions in the remaining time. Please let us know whether we addressed your comments and questions appropriately! Thank you! Best, the authors
Rebuttal 1: Rebuttal: We thank the reviewers for their careful reading, and detailed and considerate feedback. We are glad that reviewers deem our paper “a clever idea”, “relevant”, “interesting and reasonable”, and “intuitive and interesting”, and the writing to be “very nice”, “easy to follow”, an “enjoyable read”, and a “clear formalization”. We are glad that reviewers recognize the relevance of this first step towards self-supervised symmetry discovery and equivariant ML! Key outstanding concerns revolve around measuring the equivariance error, clarity, and experimental evaluation. We are glad to present new results and analysis to address these concerns! ## Measuring Equivariance Error We measure the standard equivariance error in the latent space [6-7, 43], with the results shown in Tab. 1 of the PDF. On both HomMNIST and SHREC, NIso is on par with expert-designed baselines. Similarly for Co3D, we find that NIso achieves low errors. While the error rises with increasing frame skip, it remains under 10%, and NIso remains the best-performing method, demonstrating that NIso remains equivariant even under increasing partiality. ## Discovering the spherical harmonic transform Following JQqD’s suggestion, we perform an experiment identical to “discovering the toric laplacian”, by mapping ImageNet to the sphere and acting on it via SO(3). We further realized that adding a simple dropout layer which randomly masks out coefficients corresponding to large eigenvalues before the basis unprojection yields eigenfunctions ordered by their energy. This experiment results in the discovery of a basis and maps $\tau_{\Omega}$ with almost *exactly* the properties of the spherical harmonics and the Wigner-D matrices, see the attached PDF, Fig. 1! ## Additional baselines ### Pose estimation In the camera pose estimation experiment, we added two strong representation-learning baselines. 
We extract image features from both images using two state-of-the-art vision foundation models, DINOv2 [Oquab et al. 2023] and BEiT [Bao et al. 2021]. We then pass the tokens into the DUSt3R[47]-style decoder described in section 5.3 to predict the pose. The results are shown in Table 2 of the PDF. NIso outperforms both significantly. ### Comparison with data augmentation In line with Sfsw’s comments, we compare NIso to the AE baseline with dataset augmentation during both the pre-training and fine-tuning phases in the SHREC and MNIST experiments. The results are shown in the PDF, Tab. 2. On both MNIST and SHREC, training with augmentation improves the baseline performance by approximately 34% and 7%, respectively. NIso still outperforms this baseline significantly. ## Scope of experimental evaluations 2Npe remarks on a “limited experimental setup”. We would like to contextualize the scope of our evaluation with that of comparable work at top conferences. HomConv [CVPR] and LieDecomp [ICLR] benchmark *exclusively* on MNIST. MobiusConv [SIGGRAPH] benchmarks *only* on SHREC to demonstrate advantages of Mobius-equivariance. Our experiments are also commensurate with those in the NFT [NeurIPS, ICLR]. Among all these methods, we alone present results on Co3D, a large-scale, real-world dataset, in contrast to 2Npe's review ("datasets are very small scale, synthetic, not real world."). Further, 2Npe prefaces their criticism with the statement "As mentioned many times in paper, instead of theoretical justification, this paper seeks to validate the efficacy of approach experimentally on real world tasks." We respectfully point out that this is a misrepresentation of our claims. In fact, *only once*, on lines 101-102, do we make a similar statement, in which we state our intention to "validate the efficacy of our approach experimentally, including in geometry processing and real-world 3D vision tasks" (referencing the conformal SHREC and Co3D experiments).
At no point do we say our goal is to validate NIso on real-world tasks, only that our experiments *include* a real-world task (Co3D pose estimation). 2Npe also remarks that we could "motivate the problem/solution by probing/playing with the learned structured space". As our chief goal is self-supervised symmetry discovery, we respectfully note that our toric and spherical Laplacian experiments serve *exactly* this end. In terms of baselines, we benchmarked with the *best-performing* ones for homography and Mobius equivariance, and SOTA representation learners for Co3D. To the best of our knowledge, the NFT is the only applicable baseline for equivariant representation learning without prior knowledge of the group. We contacted the authors of the NFT, who confirmed our implementation is faithful. It is challenging to pick the right evaluation when working on the new problem of self-supervised symmetry discovery / equivariant ML. We are hence grateful for the reviewers' suggestions, which we did our very best to incorporate! ## NIso under the presence of occlusions / masking / partiality We agree that showing robustness to partiality would make the paper stronger. That said, several recent works have sought to make functional maps robust to partiality [Attaiki et al. 2021, Cheng et al. 2024, Bracha et al. 2024] and we are excited to incorporate these techniques in our next steps! However, we strongly believe that NIso without this improvement remains extremely valuable. First, the limitation of not modeling partiality is not unique to NIso but common to most equivariant networks, including all of NIso’s baselines. Second, we note that NIso is already robust to partiality, as is evident in the Co3D experiments. At a frame skip of 9, only about 80% of the pixels are shared across frames, and NIso outperforms two foundation model baselines with the equivariance error remaining below 10%.
Third, a major application for NIso is the continuous-time regime, i.e., representation learning from video. In this case, the overlap between consecutive frames is generally large, and NIso remains highly practical. Pdf: /pdf/b9c2882f637045e8e4636fc106b693ff03910b48.pdf
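The "standard equivariance error in the latent space" reported in the rebuttal above is, in common formulations, the relative mismatch between encoding a transformed input and transforming the encoding. A minimal NumPy sketch, where all four callables are hypothetical stand-ins for a trained model's components:

```python
import numpy as np

def equivariance_error(encode, act_input, act_latent, x, g):
    """Relative latent equivariance error of an encoder:

        || encode(g . x) - rho(g) . encode(x) || / || encode(g . x) ||

    `act_input(g, x)` applies transformation g in observation space;
    `act_latent(g, z)` applies its (learned) latent representation.
    A perfectly equivariant encoder gives zero error.
    """
    z_of_gx = encode(act_input(g, x))      # encode the transformed input
    g_of_zx = act_latent(g, encode(x))     # transform the encoded input
    return np.linalg.norm(z_of_gx - g_of_zx) / np.linalg.norm(z_of_gx)
```

For an identity encoder with matching input/latent actions the error is exactly zero; a trained model's error (e.g., the sub-10% figures quoted above) measures how far it deviates from this ideal.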
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
UQ-Guided Hyperparameter Optimization for Iterative Learners
Accept (poster)
Summary: The authors present a novel scheme for hyperparameter optimization (HPO) in machine learning (ML) based on uncertainty quantification. The paper follows a clear line of discussion from introduction and literature overview to conclusions and very detailed appendices. The authors demonstrate a strong mathematical grounding for these results, with a full understanding of uncertainty in the ML model optimization process. The work presents theoretical analysis, examples, and experiments on several related algorithms. The presented uncertainty-quantification-guided scheme can be widely applied to HPO for ML models. Strengths: • The proposed improvement technique is broadly applicable (originality and quality). • Several conventional HPO algorithms are used to demonstrate the novel approach (clarity and quality). • Strong mathematical background and proofs (quality, originality and significance). • Theoretical analysis of the presented guided scheme (quality and significance). • Clear experimental methodology and results (clarity). Weaknesses: Perhaps the authors should provide a more detailed discussion of the UQ scheme's limitations and its adaptation to non-iterative learners, and add some references to such learners. Typos etc.: Appendix C, page 21, lines 2 and 3 in the first paragraph: broken links. Appendix D, page 24, line four in the first paragraph: broken equation link. Technical Quality: 3 Clarity: 3 Questions for Authors: Does the UQ scheme have limitations depending on, for example, the number of initial candidates for the SH algorithm? Could there be a case where the UQ scheme performs almost the same as the original HPO algorithm, i.e., where the regret of SH equals that of SH+ for a large (or, vice versa, small) number of initial candidates? Or is the accuracy derivation for the confidence curve completely stable for any number of candidates?
Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations of this work mainly concern HPO in machine learning applications. The authors provide adequate descriptions and analysis of their technique only within ML applications (abstract, introduction, and conclusion). Moreover, the authors note that their HPO tuning scheme is mostly suitable for iterative learners. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Comment 1: Does the UQ scheme have limitations depending on, for example, the number of initial candidates for the SH algorithm? Could there be a case where the UQ scheme performs almost the same as the original HPO algorithm, i.e., where the regret of SH equals that of SH+ for a large (or, vice versa, small) number of initial candidates? Or is the accuracy derivation for the confidence curve completely stable for any number of candidates?**

Response: We compare different numbers of initial candidates in the same setting for NATS-BENCH-201 on ImageNet.

| | SH | | SH+ | |
|:---:|:---:|:---:|:---:|:---:|
| Initial Candidates | Top-1 Rank | Regret | Top-1 Rank | Regret |
| 50 | 4.2 | 5.06 | 1.96 | 2.52 |
| 500 | 22.95 | 5.3 | 5.29 | 2.74 |

When the total number of initial candidates increases, both SH and SH+ have a higher Top-1 Rank; the regret, however, stays stable, showing that the UQ-scheme SH+ scales well as the number of initial candidates increases.

> **Comment 2: Perhaps the authors should provide a more detailed discussion about UQ-scheme limitations and adaptation for non-iterative learners and add some references to such learners.**

Response: We will add a detailed discussion on the limitations of our proposed methods in the revision. Please see the global comment "Limitations of the current work".

--- Rebuttal Comment 1.1: Comment: Thanks for your responses. I will maintain my score.
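For context on the SH / SH+ comparison above, plain successive halving (SH) can be sketched in a few lines. Here `evaluate` is a hypothetical stand-in for partial training plus validation; the UQ-guided SH+ differs in that the number of candidates kept per round is chosen from its uncertainty model rather than fixed halving:

```python
def successive_halving(candidates, total_budget, evaluate, eta=2):
    """Plain SH: split the budget across rounds and shrink the pool each round.

    `evaluate(cand, budget)` is a hypothetical stand-in that trains `cand`
    for `budget` more units and returns its validation loss (lower is better).
    """
    n_rounds = max(1, (len(candidates) - 1).bit_length())  # ~log2(n) rounds
    round_budget = total_budget // n_rounds                # even split per round
    pool = list(candidates)
    for _ in range(n_rounds):
        per_cand = max(1, round_budget // len(pool))       # R / K per candidate
        scored = sorted(pool, key=lambda c: evaluate(c, per_cand))
        pool = scored[: max(1, len(pool) // eta)]          # keep the better 1/eta
    return pool[0]
```

With noiseless evaluations SH returns the true best candidate; the Top-1 Rank and regret gaps in the table arise precisely because early, noisy losses can eject the eventual winner, which is what the UQ-guided keep rule guards against.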
Summary: The paper presents the UQ-guided scheme, an approach for hyper-parameter optimisation (HPO) that quantifies the uncertainty of each candidate configuration. This uncertainty quantification mechanism is applied to several state-of-the-art iterative HPO algorithms to prevent promising configurations that perform poorly in the early stages of the iterative process from being incorrectly discarded. The evaluation shows that UQ allows finding similarly performing configurations at a fraction of the exploration budget. Strengths: **Originality:** The authors claim the method is new and that they only found one work that accounts for uncertainty (reference 30); however, there are more works that account for uncertainty when performing hyper-parameter optimization. For example, the following: [a] Mendes, Pedro, et al. "HyperJump: accelerating HyperBand via risk modelling." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 8. 2023. This other work may also be of relevance for the authors: [b] Swersky, Kevin, Jasper Snoek, and Ryan Prescott Adams. "Freeze-thaw Bayesian optimization." arXiv preprint arXiv:1406.3896 (2014). I would have liked to see a comparison of the proposed method against reference [30], given that it was the only work the authors found that accounted for uncertainty, and also against [a]. **Quality:** The claims seem well supported and the theoretical methodology sound. The methods used are appropriate. Weaknesses could be discussed in more detail and the comparison against related work could be improved/extended. **Clarity:** Overall, the paper is well written. There are a couple of sentences here and there that could be improved, but it doesn't substantially affect understandability. I believe the main manuscript should have pseudo-code illustrating the extra steps that should be added to any iterative learner that a user would like to change to account for model uncertainty.
There is one example in the supplemental material but it seems to be specific to SH. What is generalisable from that example and what is not? The manuscript is quite heavy on math and it seems like some parts could profit from an intuition-guided explanation. For example, in section 3.2, the paragraphs "Decomposition of momentum and the underlying structure of the metric" and "Solving for the Momentum Mean and Variance" could have a more high-level description of why it is important to analyse those decomposed parts, what type of information is expected to be extracted from each, and how that information can contribute to improving the selection of the (near-)optimal configuration. Some math details can be relegated to the supplemental material, should space be a concern. Finally, which parameters does the UQ approach have that a user must set? How do those parameters affect the improvements attainable by UQ? **Significance:** Improving the efficiency of hyper-parameter optimization is a relevant problem, and efficient methods that are shown to substantially improve over existing methods are relevant. However, I felt the current version of the manuscript misses comparisons against recent related work. Weaknesses: See above for strengths and weaknesses. Technical Quality: 3 Clarity: 2 Questions for Authors: **a)** Why do you claim that "data uncertainty is constant"? (This is in the beginning of section 3.1) **b)** Section 3.3 -- exploration vs exploitation: is $k_1 - 1 == k'_i$? **c)** Section 3.4, Example 3 is not clear to me. In both cases you mention in the example, the UQ method "can return the best candidate with probability over 1 − n\delta". Do you mean that the probability of UQ returning the best candidate is larger than "1 − n\delta" (which is the probability of the UQ-oblivious method returning the best candidate)?
Also, there is a 'But' in the middle of the example, but I don't understand the 'But' and I don't see where the expression for B_{UQ} comes from. **d)** Which parameters does the UQ approach have that a user must set? How do those parameters affect the improvements attainable by UQ? **Comments:** - Section C.1 of the appendix has some missing pointers to assumptions. - The evaluation plots would all be much easier to read if the y-axis were labelled. - You could also highlight the moment when the UQ-version of each baseline achieves comparable performance to the UQ-oblivious version. This would make it easier to visually identify the gains achievable via UQ in terms of budget savings. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: There could be more discussion on the limitations of the proposed method. For example, when is it applicable, how do the different assumptions impact its applicability, and how could future work improve the current version of the proposal to address these limitations? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Comment 1: Why do you claim that "data uncertainty is constant"? (This is in the beginning of section 3.1)** Response: In our setting, each candidate is trained on the same set of training data, so the data uncertainty is the same across candidates. > **Comment 2: Section 3.3 -- exploration vs exploitation: is $k_1 - 1 == k'_i$?** Response: $k'_i$ is just another name for $k_i - 1$. So yes, $k_i - 1 == k'_i$. > **Comment 3: Section 3.4, Example 3 is not clear to me. In both cases you mention in the example, the UQ method "can return the best candidate with probability over $1 - n\delta$". Do you mean that the probability of UQ returning the best candidate is larger than "$1 - n\delta$" (which is the probability of the UQ-oblivious method returning the best candidate)? Also, there is a 'But' in the middle of the example, but I don't understand the 'But' and I don't see where the expression for $B_{UQ}$ comes from.** Response: Yes, we mean the UQ method can return the best candidate with probability greater than $1-n\delta$. To guarantee the same probability of identifying the optimal candidate, the UQ-oblivious method needs a budget $B_{ob}$ greater than what the UQ method needs ($B_{UQ}$). "But" here stresses that $B_{UQ}$ is smaller than $B_{ob}$. The expression denotes a lower-bound budget, specifically $\sqrt[4]{2} \cdot \gamma^{-1}(\tfrac{\nu_2-\nu_1}{2}, \delta) \cdot n \simeq \gamma^{-1}(\tfrac{\nu_2-\nu_1}{2}, \delta) \cdot n$, for the UQ method to guarantee the $1-n\delta$ success probability. This lower bound is proved in Corollary 6. > **Comment 4: Which parameters does the UQ approach have that a user must set? How do those parameters affect the improvements attainable by UQ?** Response: The parameters for the UQ approach include the total budget and the predefined budget resources (e.g., training epochs) for each round. We set these parameters to the same default values as in the HB paper.
To quantify the uncertainty in the HPO process, we leverage the validation history to construct an estimated loss curve. This estimation uses parameters $\mathbf{k}_t$ to model the loss curve from $F_t$ samples for each candidate. More complex parameters for modeling the loss curve may better fit the currently observed loss curve, but at the risk of losing generality. $F_t$ is always small in our experiments because each sample requires training an entire model. >**Comment 5: There could be more discussion on the limitations of the proposed method. For example, when is it applicable, how do the different assumptions impact its applicability, how could future work improve the current version of the proposal to address these limitations.** Response: We will add a detailed discussion on the limitations of our proposed methods in the revision. Please see the global comment "Limitations of the current work". >**Comment 6: The work may also be of relevance for the authors: [b] Swersky, Kevin, Jasper Snoek, and Ryan Prescott Adams. "Freeze-thaw Bayesian optimization." arXiv preprint arXiv:1406.3896 (2014).** Response: This paper proposes a new method to improve Bayesian optimization (BO) for HPO. It avoids the limits of the expected improvement (EI) criterion in naive BO, which always favors picking new candidates rather than running old ones for more iterations. Instead of always sampling new candidates, it can also choose old candidates for further evaluation based on the modified EI in each round. This method, however, is limited to BO methods. In contrast, our UQ scheme is applicable to the vast number of early-stopping and multi-fidelity HPO methods. >**Comment 7: I would have liked to see a comparison of the proposed method against reference [30], given that it was the only work the authors found that accounted for uncertainty, and also against [a]. [a] Mendes, Pedro, et al. "HyperJump: accelerating HyperBand via risk modelling."
Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 8. 2023.** Response: Comparison with the HyperJump (HJ) work. HyperJump improves HB in that, during HPO, it skips certain rounds for certain candidates if the risk of skipping is within a threshold. For NAS-BENCH-201 trained on ImageNet-16-120, HJ reduces the running time compared to the original HB method (it needs only 5.3\% of the budget to achieve results close to the optimum achieved by the original HB with a standard full budget), but it does not improve the HPO performance (i.e., Top-1 Rank and Regret). In contrast, our method needs less than 3\% of the budget to achieve close-to-optimal results, and when using around 5\% of the budget, our method reduces the regret by 33\%. Comparison with [30]. The performance gaps (regret) of [30] are lower than those of SH (a reduction of 30\%), but the execution time of [30] is twice that of SH. This is because [30] needs to evaluate most of the models, even though it can skip some intermediate evaluations based on the upper-bound predictions. Our UQ method, however, can reduce the regret by 50\% with the same budget resources as SH. >**Other comments on Clarity.** We will address these according to the comments and add a more high-level description for the paragraphs "Decomposition of momentum and the underlying structure of the metric" and "Solving for the Momentum Mean and Variance". We will also include the pseudo-code for the other UQ-guided methods. The two assumptions in the proof of Theorem 1 are (1) $\ell(\mathbf{y}, M_{\gamma_c}^{*}(\mathbf{X})) = \lim\limits_{t\to \infty}\ell(\mathbf{y}, M_{\gamma_c}^{t}(\mathbf{X}))$ exists for $\gamma_c \in \Gamma$, and (2) $\nu_i=\lim\limits_{\tau \to \infty} \ell_{i, \tau}$. These assumptions imply that the machine learning model will eventually converge after enough epochs. We will fix this in the revision.
--- Rebuttal Comment 1.1: Comment: Thank you for the clarifications.
Summary: In the paper "UQ-Guided Hyperparameter Optimization for Iterative Learners" the authors present an uncertainty quantification method for optimizing the hyperparameters of iterative learners in a multi-fidelity setting. It is argued that the best possible candidate is often mistakenly discarded at some point, and uncertainty quantification can help avoid such mistakes. Furthermore, it can help to identify the best-performing candidate with certainty earlier than running the entire promotion scheme of, e.g., successive halving. Strengths: - The paper presents a novel and interesting approach to HPO that considers uncertainties throughout the HPO process and makes decisions more cautiously. - The approach is theoretically grounded and the authors also provide a theoretical framework for their method. - In experiments, the method also performs favorably compared to methods that do not leverage the information about uncertainty. Weaknesses: - The authors claim that their method works for any kind of iterative learner. However, this claim is not supported in the experimental section, as only deep learning methods are considered. Therefore, it is questionable whether the same quality of results could be observed for other iterative learners, e.g., logistic regression. - Moreover, to make their approach work, the authors need a learner that is able to quantify its epistemic uncertainty, i.e., model uncertainty. However, I do not see how this would work for iterative learners in general, where you cannot make use of simple tricks as in deep learning models to obtain ensembles through MCMC dropout or something similar. - It could be made clearer that the authors specifically target multi-fidelity/early stopping methods with their method. - The writing of the paper could be improved. For example, in the introduction the authors write about "steps" but it is not clear what is meant by steps, in particular, what are "steps of exploration".
In general, there are some linguistic issues throughout the paper which the authors could easily get rid of using tools like Grammarly. - In Section 3.2 the authors formalize their setting by attributing hyperparameters to a model; however, some hyperparameters belong to the learning algorithm, e.g., weight decay or learning rate, but not to the model. - It is somewhat strange that the plus variants (including UQ) always start better than the basic method. In particular, it seems awkward that at zero budget the proposed method can better identify the top-1 rank than the base method. Also for the regret it is unclear how UQ can help identify a more suitable candidate if nothing is known about this candidate. Technical Quality: 2 Clarity: 3 Questions for Authors: - How can the epistemic uncertainty be quantified for other learners? - Why does the performance of X+ already improve over X if no budget is used? - How is the uncertainty, or rather the certainty, about the top-1 rank leveraged in the HPO tool? It would be quite straightforward to stop evaluating other candidates if there is enough evidence already that the top-1 candidate is correctly identified. - If in every iteration of successive halving the amount of budget spent for this iteration is kept at a certain level, how is the budget then distributed if more candidates are to be evaluated in that iteration, or could it also happen that more budget is assigned to every remaining candidate than originally planned according to, e.g., successive halving? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Limitations are not sufficiently addressed. Although the authors mention in the questionnaire that limitations would be addressed, there is just a single sentence stating that the method is mostly suitable for iterative learners and needs adaptations for other learners. This is definitely not a sufficient limitations discussion.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Comment 1: The authors claim that their method is working for any kind of iterative learners. However, this claim is not supported in the experimental section as only deep learning methods are considered. Therefore, it is questionable whether the same quality of results could be observed for other iterative learners, e.g., logistic regression.** Response: We focus on DNNs in our experiments because (1) DNNs are among the most influential models today and (2) DNN training takes a long time, so selecting the optimal hyperparameters is a critical concern, making the problem more pressing. Even though we focus on DNN methods, our approach can be applied to other iterative learners. We consider a ridge regression problem trained with stochastic gradient descent on the objective function with step size $0.01/\sqrt{2 + T}$. The $\ell_2$ penalty hyperparameter $\lambda \in [10^{-6}, 10^0]$ was chosen uniformly at random on a log scale per trial. We use the Million Song Dataset year prediction task [1] with the same experiment settings as in the original SH paper. Within the limited time, we managed to obtain ridge-regression results for SH and SH+. Figure 12 in the pdf attached to the global rebuttal shows the Top-1 Rank results and the regret of the test error for different fractions of the budget. The average results of 30 repetitions are reported. The benefits are clear: SH+ obtained an average improvement of over 40\% over SH. [1] Lichman, M. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml. >**Comment 2: Moreover, to make their approach work, the authors need a learner that is able to quantify its epistemic uncertainty, i.e., model uncertainty. However, I do not see how you would make this work for iterative learners in general where you cannot make use of simple tricks as in deep learning models to obtain ensembles through MCMC dropout or something similar.
How can the epistemic uncertainty be quantified for other learners?** Response: For other iterative learners (e.g., a regression problem trained with stochastic gradient descent on the objective function with a step size and an $\ell_2$ penalty hyperparameter $\lambda$), we can leverage the loss curves of the training process of the models we have already observed and use the formula on page 4 to estimate the epistemic uncertainty. >**Comment 3: Why does the performance of X+ always improve already over X if there is no budget used?** Response: We have updated Figure 3 with the fixed $x$-axis label in the pdf attached to the global rebuttal. There was a minor typo in the figure: the label 0 on the $x$-axis was supposed to be 0.03. Figure 3 shows that the UQ-guided approach always achieves better Top-1 Rank and regret results than the UQ-oblivious method does, even with a small budget (3\% of the standard full budget). >**Comment 4: How is the uncertainty or let us say better certainty about the top 1 rank leveraged in the HPO tool? It would be quite straightforward to stop evaluating other candidates if there is enough evidence already that the top 1 candidate is correctly identified.** Response: The key issue is that identifying the top-1 candidate during the HPO process is challenging. As shown in Figure 1, candidates that initially appear to be top-1 often turn out to be inferior after convergence. Our entire paper, therefore, focuses on how to quantify this uncertainty about model quality during the intermediate steps of HPO and integrate it into the selection process to maximize the likelihood of choosing the right candidate.
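The loss-curve-based estimation described in the response to Comment 2 can be sketched generically: fit a simple parametric model to the observed validation losses and read the spread of the fit as an uncertainty signal. This is an illustrative stand-in (a crude power-law fit), not the paper's momentum-based formula on page 4; all names are hypothetical:

```python
import numpy as np

def loss_curve_uncertainty(losses):
    """Fit a power-law loss model  l(t) ~ a * t^(-b) + c  to the observed
    validation losses and return (extrapolated loss, residual std).

    Generic illustration of estimating model uncertainty from training
    history; not the paper's exact momentum mean/variance estimator.
    """
    t = np.arange(1, len(losses) + 1, dtype=float)
    # Crude floor estimate slightly below the best observed loss, so the
    # shifted losses are positive and the power law can be linearized.
    c_hat = min(losses) * 0.9
    y = np.log(np.asarray(losses) - c_hat)
    # Regress log(loss - c_hat) on [1, log t] by least squares.
    A = np.vstack([np.ones_like(t), np.log(t)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    # Extrapolate well past the observed horizon (here 10x) as a proxy
    # for the converged loss; resid.std() is the uncertainty signal.
    pred_final = c_hat + np.exp(coef[0] + coef[1] * np.log(10 * t[-1]))
    return pred_final, float(resid.std())
```

On a cleanly decaying curve the residual spread is small and the extrapolation lands near the true plateau; noisy early-stage curves (the reviewer's concern) inflate the residual spread, which is exactly the signal a UQ-guided keep rule can act on.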
>**Comment 5: If in every iteration of successive halving the amount of budget spent for this iteration is kept at a certain level, how is then the budget distributed if more candidates are to be evaluated in that iteration or could it also happen that more budget is assigned to every remaining candidate than originally planned according to e.g. successive halving?** Response: As mentioned in Figure 2, the budget is evenly divided among candidates in each round, so each candidate receives a budget of $\frac{R}{K}$ ($R$ is the round budget and $K$ is the number of candidates kept in this round). That is to say, if, according to our probabilistic model, we need to keep more candidates to evaluate, then the budget for each candidate in this round is reduced. Likewise, more budget is assigned to each candidate if fewer candidates remain. Note that we determine the number of candidates to keep by considering the tradeoff between the risk of discarding the best candidate and the training budget each top candidate can get. >**Comment 6: In the introduction the authors write about "steps" but it is not clear what is meant by steps, in particular, what are "steps of exploration".** Response: "8-22 steps of the exploration" here indicates that the best candidate is discarded after that number of iterations of training (because its validation loss is not in the better half among all the candidates that still remain). To make this clear and consistent with Figure 1, we will change the term "steps of exploration" to "iterations of training". We will also fix other linguistic issues in the paper.
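The even per-round split described in the response to Comment 5 is simply $R/K$; a tiny helper makes the keep-more-vs-train-more tradeoff concrete (the numbers below are illustrative, not from the paper):

```python
def per_candidate_budget(round_budget, n_kept):
    """Evenly split a round's budget R over the K candidates kept this round."""
    return round_budget // n_kept

# Tradeoff for a fixed round budget R = 120: keeping more candidates
# (a cautious decision that lowers the risk of discarding the best one)
# shrinks each candidate's share of training budget, and vice versa.
tradeoff = {k: per_candidate_budget(120, k) for k in (2, 4, 6, 8)}
# tradeoff == {2: 60, 4: 30, 6: 20, 8: 15}
```

The UQ-guided methods choose the number kept per round from this tradeoff rather than always halving.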
>**Comment 7: In Section 3.2 the authors formalize their setting by attributing hyperparameters to a model, however, some hyperparameters belong to the learning algorithm, e.g., weight decay or learning rate, but not to the model.** Response: We will replace "a model" with "a candidate" to denote a candidate hyperparameter configuration that can include hyperparameters for both the model and the learning algorithm. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you very much for the detailed response and the clarification. Ad Response to Comment 1: Why is the step size fixed in this example? Would it not make sense to tune the step size too? Ad Response to Comment 2: How would the loss curves be leveraged to extract the epistemic uncertainty? Learning curves can be pretty noisy and are also affected by the chosen hyperparameter values such as the step size. Just imagine a step size chosen by the tuner that is way too big. To me, it is not entirely clear how the aleatoric uncertainty is disentangled from the epistemic uncertainty, and as the paper is claimed to work for iterative learners in general, for me, this is an essential part of the paper to be demonstrated. If the title of the paper were about deep learning methods, I would be satisfied with the scope of the paper. Ad Response to Comment 4: Yes, I got that from the paper. But if the quantification yields that the best candidate is identified with a probability of 99%, would it not make sense to spare resources and stop the evaluation? Ad Response to Comment 5: Got it. But could this not flaw the overall HPO process? If you, for example, encounter a setting where you can never certainly drop any candidate, either due to very similar performance or large uncertainty bands, you will not return a model that is trained on the full assignable budget, will you?
--- Reply to Comment 1.1.1: Title: Response to Reviewer P7Zr Comment: >**Comment 8: (Response to Comment 1) Why is the step size fixed in this example? Would it not make sense to tune the step size too?** Response: We follow the original benchmark setting where we set the decreasing step size according to epochs and tune the penalty parameter. The step size can be tuned as well. By applying different step schedulers, we can further expand the search space. >**Comment 9: (Response to Comment 2) How would the loss curves be leveraged to extract the epistemic uncertainty? Learning curves can be pretty noisy and are also affected by the chosen hyperparameter values such as the step size. Just imagine a step size chosen by the tuner that is way too big. To me, it is not entirely clear how the aleatoric uncertainty is disentangled from the epistemic uncertainty and as the paper is claimed to work for iterative learners in general, for me, this is an essential part of the paper to be demonstrated. If the title of the paper was for deep learning methods, I would be satisfied with the scope of the paper.** Response: The “Solving for the Momentum Mean and Variance” part from the bottom of Page 3 to Page 4 explains how the uncertainty of the model’s predictions is estimated. Essentially, it strives to approximate the variations of the loss of the model at any given epoch, including the distribution of the loss of the converged model, and hence the uncertainty of its predictions. Details are described in that part of the paper. It is important to note that it is a statistical prediction process based on history. As with any typical statistical prediction, there is the influence of inseparable noise from various sources. The default schedule in our experiments already uses large learning rates at the beginning; fluctuations of loss curves at the early stage of HPO are commonly observed in our experiments.
The existence of such influence is why empirical evaluations are necessary to validate whether the statistical prediction process could practically deliver good (of course, imperfect) predictions, which is what the results in our paper demonstrate. The experiments in our paper concentrate on DNNs because efficient HPO is especially important given the time-consuming nature of DNN training. Our ridge regression experiment in the rebuttal demonstrates the applicability of the methodology to other iterative learners. But because there was very limited time in the rebuttal period, the added experiment is a demonstration rather than a series of systematic experiments. We can understand the point of view of the reviewer about the scope of the paper. For the final version, we can narrow the main claims to DNNs and use the ridge regression as a demonstration to show the potential of the methodology for other iterative learners, and leave a systematic study to future work. >**Comment 10: (Response to Comment 4) If the quantification yields that the best candidate is identified with a probability of 99%, would it not make sense to spare resources and stop the evaluation?** Response: It absolutely makes sense, and our approach is designed to incorporate this principle. If the quantification indicates that the best candidate is identified with a sufficiently high probability (exceeding $\tau_i$ in round $i$), only this top candidate will be retained for that round. At this point, the HPO process is considered completed, and the remaining budget resources can be spared, as there is no need to evaluate additional candidates when only one remains. > **Comment 11: (Response to Comment 5) Could this not flaw the overall HPO process? 
If you for example encounter a setting where you can never certainly drop any candidate, either due to very similar performance or large uncertainty bands, you will not return a model that is trained on the full assignable budget, do you?** Response: If candidates have very similar performance or large uncertainty, our approach will retain $k$ candidates whose total probability of being the best candidate meets the threshold $\tau_i$, dropping the rest for that round. As rounds progress, this strategy exponentially reduces the number of candidates, efficiently narrowing down to the best options.
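The retention rule described here, keep the fewest top candidates whose probabilities of being the best sum to the round's threshold $\tau_i$, can be sketched as follows (an illustrative reading of the responses, with hypothetical names, not the authors' implementation):

```python
def keep_candidates(best_probs, tau):
    """Keep the smallest set of candidates, taken in decreasing order of
    their estimated probability of being the best, whose total
    probability reaches tau; the rest are dropped for this round."""
    order = sorted(range(len(best_probs)), key=best_probs.__getitem__,
                   reverse=True)
    kept, total = [], 0.0
    for i in order:
        kept.append(i)
        total += best_probs[i]
        if total >= tau:
            break
    return kept

# A dominant candidate ends the HPO search early (cf. Comment 10), while
# near-tied candidates keep a larger set for another round (cf. Comment 11).
print(keep_candidates([0.97, 0.02, 0.01], tau=0.95))    # [0]
print(keep_candidates([0.3, 0.3, 0.2, 0.2], tau=0.75))  # [0, 1, 2]
```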
Summary: This paper proposes incorporating uncertainty quantification into hyperparameter tuning (learning rate, neural architecture). It is assumed that the model performance metrics of interest, such as validation error, are Gaussian, and that we can use the training output of the first $N$ epochs to estimate the mean and variance of the performance metrics distribution. Using the distribution, this paper proposes a UQ-guided scheme by using confidence curves to find the highest-performing combinations of hyperparameters. Strengths: It is shown by theory and empirical study that the proposed UQ method can enhance existing hyperparameter tuning methods like successive halving, which suffers from model uncertainty and can incorrectly eliminate high-performing hyperparameter values. Weaknesses: Estimating the mean and variance of the performance metrics curve may require a number of epochs. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. On page 4, it is stated that $\hat v$ and $\hat\sigma^2$ are estimated with N=200 epochs. My understanding is that the first N=200 epochs are only for estimating $\hat v$ and $\hat\sigma^2$ and the UQ-guided HPO is not used in these 200 epochs. However, I suspect we don’t have the luxury of running more than 200 epochs in some applications with large data. 2. The $Z$ in eq (3.1) seems to model a latent effect underlying the hyperparameter space. $Z$ is initialized randomly and then updated using the equation on p.4. No further analysis about $Z$ is provided in later sections, for example, whether we can use $Z$ to measure the contribution of certain combinations of hyperparameters. If estimating $Z$ explicitly doesn’t yield any useful products, perhaps there is no need to estimate $Z$ and we just need to estimate $\alpha_u$? 3. Theorems: - How to interpret the assumptions of Theorem 1? - Does Theorem 2 apply to “any” UQ oblivious method like SH, HB? 
If so, it may be good to comment on/verify whether these existing methods satisfy the assumptions of Thm 2. 4. Can more details about $z_{ob}$ in Theorem 2 be given, e.g. how to arrive at the inequality? It is unclear how to interpret this quantity. The proof of Theorem 1 refers to two assumptions but it looks like the PDF was not compiled correctly, so the reference hyperlinks are missing. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The work is regarding a general algorithm aiming at optimizing HPO and belongs to foundational research; its societal impacts are neutral. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Comment 1: On page 4, it is stated that $\hat{v}$ and $\hat{\sigma}^2$ are estimated with N=200 epochs. My understanding is that the first N=200 epochs are only for estimating $\hat{v}$ and $\hat{\sigma}^2$ and the UQ-guided HPO is not used in these 200 epochs. However, I suspect we don’t have the luxury of running more than 200 epochs in some applications with large data.** Response: With our method, after any number of epochs, we can use the formula on page 4 to estimate the loss and uncertainty for any $N$ values. We do not need to wait for $N$ epochs before estimating. We plug $N=200$ into the formula to estimate the uncertainty value at epoch $200$, when most models would have already entered the convergence stage. >**Comment 2: The $Z$ in eq (3.1) seems to model a latent effect underlying the hyper parameter space. $Z$ is initialized randomly and then updated using the equation on p.4. No further analysis about $Z$ is provided in later sections, for example, whether we can use $Z$ to measure the contribution of certain combinations of hyperparameters. If estimating $Z$ explicitly doesn’t yield any useful products, perhaps there is no need to estimate $Z$ and we just need to estimate $\alpha^{\mathbf{u}}$?** Response: Estimating $Z$ is an indirect way to estimate $\alpha^{\mathbf{u}}$ in our setting, as it allows for correlation between the candidates. We can also estimate $\alpha^{\mathbf{u}}$ directly for each candidate; the experiments show satisfactory results as well. > **Comment 3: Theorems: How to interpret the assumptions of Theorem 1? Does Theorem 2 apply to “any” UQ oblivious method like SH, HB? If so, it may be good to comment on/verify whether these existing methods satisfy the assumptions of Thm 2.** Response: Interpretation of the assumptions of Theorem 1: The assumption $\nu_i=\lim\limits_{\tau \to \infty} \ell_{i, \tau}$ implies that the machine learning model will eventually converge after enough epochs. 
$\nu_i$ here denotes the converged loss for the machine learning model. Theorem 2 analyzes the UQ-oblivious early-stopping approaches and can be applied to methods such as HB as long as there are intermediate results available for comparison. We only assume that the machine learning models will eventually converge after enough epochs and, without loss of generality, that these models are ordered according to their converged loss, namely, $\nu_1 \le \nu_2 \le \cdots \le \nu_n$. We make these assumptions regardless of the HPO method and they are easy to verify by looking into each candidate's validation loss curve. >**Comment 4: Can more details about $z_{ob}$ in Theorem 2 be given, e.g. how to arrive at the inequality? It is unclear how to interpret this quantity. The proof of Theorem 1 refers to two assumptions but it looks like the PDF was not compiled correctly so the reference hyperlinks are missing.** Response: The representation of $z_{ob}$ on the right-hand side of the inequality is very intuitive: For each $i$, to merely verify that the $i$th candidate's final loss is higher than the best candidate's with a probability larger than or equal to $1-\delta$, one must train each of the two candidates for at least a number of steps equal to the $i$th term in the sum. Repeating this argument for all $i$ explains the sum over all candidates. The two assumptions in the proof of Theorem 1 are (1): $\ell(\mathbf{y}, M_{\gamma_c}^{*}(\mathbf{X})) = \lim\limits_{t\to \infty}\ell(\mathbf{y}, M_{\gamma_c}^{t}(\mathbf{X}))$ exists for $\gamma_c \in \Gamma$ and (2) $\nu_i=\lim\limits_{\tau \to \infty} \ell_{i, \tau}$. As addressed in the previous question, these assumptions imply that the machine learning model will eventually converge after enough epochs. We will fix that in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions.
Rebuttal 1: Rebuttal: We thank the reviewers for the insightful feedback and suggestions. We address the common concern of the reviewers here: ## Limitations of the current work The key characteristic of the UQ method is the necessity to rank multiple learners during the HPO process. Gradient-based HPO methods [1], for instance, may not benefit from our UQ-guided scheme because of their sequential properties. ### For non-iterative learners The current version of our method is most suitable for iterative learners. To go beyond, it could be, for instance, applied to the model selection work in previous studies [2] that use training dataset size as the budget dimension. In this case, the learner does not need to be iterative; the selection is based on the validation loss history trained with incremental dataset sizes. The UQ component can still guide the configuration selection and budget allocation in the HPO process. [1] Micaelli, Paul, and Amos J. Storkey. “Gradient-based Hyperparameter Optimization Over Long Horizons.” 34th Advances in Neural Information Processing Systems (NeurIPS). 2021. [2] Mohr, Felix, and Jan N. van Rijn. “Towards model selection using learning curve cross-validation.” 8th ICML Workshop on automated machine learning (AutoML). 2021. Pdf: /pdf/463f3514dedce370d35ad445b44443a63ffd7e82.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Diffusion-Reward Adversarial Imitation Learning
Accept (poster)
Summary: The paper proposes Diffusion-Reward Adversarial Imitation Learning (DRAIL), an Adversarial Imitation Learning method where the discriminator is parameterized by the loss of a conditional diffusion model. The diffusion model takes in a state-action pair $(s, a)$ and a binary label $c$ indicating whether the state-action pair comes from the policy (label $c^-$) or from the training set (label $c^+$). Its loss on these inputs is denoted $\mathcal{L}\_{\textrm{diff}}(s, a, c)$. The discriminator is then defined as $D(s, a) = \sigma(\mathcal{L}\_{\textrm{diff}}(s, a, c^+) - \mathcal{L}\_{\textrm{diff}}(s, a, c^-))$, where $\sigma$ is the sigmoid function. The method resembles DiffAIL (Wang et al. 2023), which defines the discriminator as $\exp(-\mathcal{L}\_{\textrm{diff}}(s, a))$ (unconditional diffusion model), as well as Diffusion Classifier (Li et al. 2023). They benchmark DRAIL on six settings, including long-horizon planning (e.g. Maze2d), high-dimensional control (e.g. AntReach) and dexterous manipulation (e.g. HandRotate). They compare against methods including Behavior Cloning, the original GAIL, and variants such as Wasserstein AIL and DiffAIL. They also compare against Diffusion Policy, which uses a diffusion model to parameterize a policy rather than a reward function. In terms of raw performance, DRAIL obtains: - visibly better results than the baselines in 4/6 environments; - a result very close to that of DiffAIL in 1/6 environment; - a result worse than DiffAIL in 1/6 environment. Further experiments show: - DRAIL is more robust to noise on initial states and goal positions than the baselines on a block-pushing task (FetchPush); - DRAIL is more data-efficient than the baselines on the Walker environment (continuous high-dimensional control) and FetchPush; - DRAIL learns more generalizable rewards than GAIL on a synthetic 2d point dataset. Strengths: 1. Clarity: the paper is well-written and the proposed method is explained clearly. 1. 
Extensive comparison with DiffAIL, albeit in the appendix. 1. Methodological novelty: the authors propose using a conditional diffusion model to parameterize the discriminator for AIL, which removes limitations from DiffAIL (such as an arbitrary decision boundary at $\ln 2$; see Appendix A). 1. Significance and quality of experimental results: the paper demonstrates that defining the discriminator using a conditional diffusion model improves performance, robustness and data-efficiency, compared to DiffAIL (which uses an unconditional diffusion model in the discriminator) and to other baselines. Hence, one can argue it constitutes a step forward in Adversarial Imitation Learning. 1. Referencing strongly related prior work: the authors point to DiffAIL and Diffusion Classifier, which strongly resemble different parts of DRAIL (namely using the loss of a diffusion model to parameterize a discriminator, and how to construct such a discriminator). I personally was not aware of these two papers before, and commend the authors for bringing these to the reader’s attention. Weaknesses: ## Better delimiting the author’s contributions Despite DiffAIL and Diffusion Classifier being mentioned and their relationship to DRAIL being discussed, my impression is that more credit should be materially given to these two prior works. For example, in line 143, the authors state “we propose to leverage the training procedure of DDPM and develop a mechanism to provide learning signals to a policy using a single denoising step”. However, it seems to me that this much was already present in DiffAIL. Similarly, line 145 states “Our key insight is to leverage the derivations developed by Kingma et al. [22], Song et al. 
[47], which suggest that the diffusion loss, i.e., the difference between the predicted noise and the injected noise, indicates how well the data fits the target distribution since the diffusion loss is the upper bound of the negative log-likelihood of data in the target distribution”. However, it seems that a similar insight had already been used by Li et al. (2023), as mentioned in line 177. Hence, my understanding is that, in the context of these two prior works, DRAIL can be seen as a combination of DiffAIL and Diffusion Classifier (DC). Please feel free to correct me if this understanding is incorrect. This still constitutes novelty, although less than if DiffAIL and DC hadn’t preceded this work. In fact, in Appendix A the authors make what I consider to be a compelling case for why the discriminator should be “symmetric”, rather than “one-sided” like in DiffAIL. As such, I believe this paper’s contributions could be presented in a way that better separates its original contribution from that present in prior work. One way of doing this could be: - Add a section on DiffAIL and Diffusion Classifier to Section 3 (preliminaries). - In Section 4 (Approach), discuss the limitations of DiffAIL, e.g. as done in Appendix A. Argue that using a conditional diffusion model with a symmetric discriminator (via Diffusion Classifier) would address these limitations. - Hence present the final methodology as the combination of DiffAIL and DC. Again, I am open to changing my mind about the above assessment, and will raise my score accordingly in that case, or in case a refactoring of the presentation of the method is implemented by the authors. ## Baselines against planning methods that use diffusion models The authors baseline DRAIL against 3 AIL methods and Diffusion Policy. The motivation for including the latter is to “compare learning a diffusion model as a policy (diffusion policy) or reward function (ours)”. As per a recent survey of Zhu et al. 
(2023), diffusion models have been used in sequential decision-making not only for parameterizing policies, but also as planners, as is the case in the seminal work of Janner et al. (2022). As such, for a more holistic comparison against methods outside of the AIL framework, the authors could include diffusion-based planning baselines such as Diffuser (Janner et al. 2022). Recent work by Nuti et al. (2023) has also shown reward functions can be recovered from pairs of diffusion models in the setting of Janner et al. (2022), so that it would also be possible to compare the rewards learned via DRAIL against those obtained from Diffuser. ### References Zhu, Zhengbang, et al. "Diffusion models for reinforcement learning: A survey." arXiv preprint arXiv:2311.01223 (2023). Nuti, Felipe, Tim Franzmeyer, and João F. Henriques. "Extracting reward functions from diffusion models." Advances in Neural Information Processing Systems 36 (2024). Technical Quality: 3 Clarity: 2 Questions for Authors: See Weaknesses. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors consider the limitations of their work in Appendix G and societal impacts in Appendix I. My main concern with this work is the extent to which the authors claim originality over aspects of their method that are arguably already present in prior work. If the authors can clarify that this concern is misplaced, or if they modify the exposition of their contributions as outlined in the Weaknesses sections, I am open to raising my score. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
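The discriminator construction summarized in this review, a sigmoid over the difference of the two conditional diffusion losses, can be sketched as follows (a toy illustration with a dummy denoiser and a made-up noise schedule, following the sign convention as written in the summary above, not the paper's code):

```python
import math
import random

def diff_loss(denoiser, x, c, t, eps):
    """One sampled denoising step: noise x to level t with the shared
    noise eps, then score the conditional denoiser's noise prediction
    with a mean squared error."""
    a_bar = 0.9 ** t  # toy noise schedule (an assumption of this sketch)
    x_t = [math.sqrt(a_bar) * xi + math.sqrt(1.0 - a_bar) * ei
           for xi, ei in zip(x, eps)]
    pred = denoiser(x_t, c, t)
    return sum((p - e) ** 2 for p, e in zip(pred, eps)) / len(x)

def discriminator(denoiser, s, a, t, rng):
    """D(s, a) = sigmoid(L_diff(s, a, c+) - L_diff(s, a, c-)), with the
    sign convention exactly as written in the review summary."""
    x = list(s) + list(a)
    eps = [rng.gauss(0.0, 1.0) for _ in x]  # shared noise for both labels
    l_pos = diff_loss(denoiser, x, +1, t, eps)
    l_neg = diff_loss(denoiser, x, -1, t, eps)
    return 1.0 / (1.0 + math.exp(-(l_pos - l_neg)))

# Dummy conditional denoiser (hypothetical): predicts zero noise under the
# expert label and a biased estimate otherwise, so the two losses differ.
dummy = lambda x_t, c, t: [0.0] * len(x_t) if c > 0 else x_t
d = discriminator(dummy, [1.0, 1.0, 1.0], [1.0, 1.0], t=5,
                  rng=random.Random(0))
```

The sigmoid keeps the output in $(0, 1)$, so the quantity can be read as a probability-like "realness" score regardless of the scale of the two losses.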
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. > Hence, my understanding is that, in the context of these two prior works, DRAIL can be seen as a combination of DiffAIL and Diffusion Classifier (DC). Please feel free to correct me if this understanding is incorrect. > This still constitutes novelty, ... In fact, in Appendix A the authors make what I consider to be a compelling case for why the discriminator should be “symmetric”, rather than “one-sided” like in DiffAIL. As such, I believe this paper’s contributions could be presented in a way that better separates its original contribution from that present in prior work. We thank the reviewer for the detailed suggestions for better organizing the paper and discussing the novelty of our work. We would like to clarify that DRAIL is not a combination of DiffAIL and Diffusion Classifier (DC). DC turns a text-to-image diffusion model, which optimizes the denoising MSE loss, into an image classifier without further training. DiffAIL follows the intuition of DC and uses how well a diffusion model can denoise a state-action pair to indicate the “realness” of the pair for adversarial imitation learning. To this end, DiffAIL learns an unconditional diffusion model to denoise expert state-action pairs well, while denoising agent state-action pairs poorly. In contrast, we propose to directly train an expert/agent binary classifier by optimizing the binary cross-entropy (BCE) loss according to the denoising performance of a conditional diffusion model conditioned on an expert/agent label, and our novelty lies in such a design. In other words, our method significantly differs from DC and DiffAIL since we formulate distinguishing expert and agent state-action pairs as a binary classification task instead of a denoising task, and this design aligns with the GAIL formulation. Hence, our method is not a combination of DC and DiffAIL. 
Moreover, DiffAIL’s evaluation only considers locomotion tasks, while our work extensively compares our method in various domains, including navigation (Maze and AntReach), locomotion (Walker and AntReach), robot arm manipulation (FetchPush and FetchPick), robot arm dexterous manipulation (HandRotate), and games (CarRacing). Additionally, we present experimental results on generalization to unseen states and goals and on varying amounts of expert data. We completely agree with and highly appreciate the reviewer’s detailed suggestion for reorganizing the paper. We will revise the paper by presenting DC and DiffAIL in Section 3 and discussing the limitations of DiffAIL in Section 4 to motivate our method. We believe this will make our contributions clearer. > The authors could include diffusion-based planning baselines such as Diffuser (Janner et al. 2022). Recent work by Nuti et al. (2023) has also shown reward functions can be recovered from pairs of diffusion models in the setting of Janner et al. (2022), We thank the reviewer for providing these references. We will revise the paper to include the following discussions. - Diffuser [1] is a model-based RL method that requires trajectory-level reward information, which differs from our setting, i.e., imitation learning, where obtaining rewards is not possible. Therefore, it is not trivial to directly compare our method to Diffuser. - Nuti et al. [2] focus on learning a reward function, unlike imitation learning, whose goal is to obtain a policy. Hence, Nuti et al. [2] neither present policy learning results in the main paper nor compare their method to imitation learning methods. Moreover, they focus on learning from a fixed suboptimal dataset, while adversarial imitation learning (AIL) approaches and our method are designed to learn from agent data that continually change as the agents learn. **References** [1] Janner et al. “Planning with diffusion for flexible behavior synthesis.” In ICML, 2022. [2] Nuti et al. 
“Extracting Reward Functions from Diffusion Models.” In NeurIPS, 2024. --- Rebuttal Comment 1.1: Title: Reminder: The reviewer-author discussion period ends in three days Comment: We would like to express our sincere gratitude to the reviewer for the thorough and constructive feedback. We are confident that our responses adequately address the concerns raised by the reviewer, including the following points. - A clarification of our contributions and a detailed plan to reorganize our paper based on the reviewer's suggestion - Discussions of Janner et al., 2022 (Diffuser) and Nuti et al., 2023 (Extracting Reward Functions from Diffusion Models) Please kindly let us know if the reviewer has any additional concerns or if further experimental results are required. We are fully committed to resolving any potential issues, should time permit. Again, we thank the reviewer for the detailed review and the time the reviewer put into helping us to improve our submission.
Summary: This paper aims to address the training instability problem in generative adversarial imitation learning. The authors propose a diffusion discriminative classifier that helps achieve a more stable policy learning process by enhancing the smoothness and robustness of the reward model. Additionally, the authors experimentally validate the effectiveness of DRAIL across various control tasks, including manipulation, locomotion, and navigation benchmarks. Strengths: - The instability of AIL training is a key obstacle to its application, making research on this issue valuable. - DRAIL performs well across multiple benchmarks, improving sample efficiency and training stability. Weaknesses: - The advantages of the diffusion discriminative classifier compared to diffusion reward [1] are not well explained. The authors claim that their proposed diffusion discriminative classifier only requires two reverse steps to obtain the corresponding reward, thereby reducing the computational resource consumption needed for sampling in diffusion models. However, I do not fully agree with this. According to equations (4) and (6), DRAIL needs to compute $L_{\text{diff}}(s,a,c^+)$ and $L_{\text{diff}}(s,a,c^-)$ when obtaining rewards. From equation (3), we know both terms require taking an expectation over the diffusion steps $T$. Therefore, to obtain an accurate reward, a number of inference steps equivalent to the diffusion steps are still needed. - This paper lacks ablation experiments, including a sensitivity analysis of the hyperparameters. - Some important AIL baselines are missing, such as DAC [2] and IQLearn [3]. [1] DiffAIL: Diffusion Adversarial Imitation Learning. AAAI, 2024. [2] Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning. ICLR, 2019. [3] IQ-Learn: Inverse soft-Q Learning for Imitation. NeurIPS, 2021. 
Technical Quality: 3 Clarity: 2 Questions for Authors: - Why does DRAIL only require two reverse steps to obtain an accurate reward? Can you provide a more detailed explanation or analysis? - I believe that the requirement for stable algorithm training includes robustness to hyperparameters. A major issue with AIL methods is the need for extensive hyperparameter tuning, and well-tuned AIL can achieve good performance. What are the key hyperparameters for DRAIL? Can you conduct additional ablation experiments on these hyperparameters? - Can you compare DAC and IQLearn in your experiments? Both DAC and IQLearn use the gradient penalty for reward smoothing, and I am very interested in seeing a comparison between the diffusion discriminative classifier and the gradient penalty. I would like to raise my score if my concerns are well addressed. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have discussed the limitations of this paper. DRAIL is unable to learn from state-only trajectories and suboptimal trajectories. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. > The advantages of the diffusion discriminative classifier compared to diffusion reward [1] are not well explained. We extensively discuss how our proposed method differs from DiffAIL and why our proposed method produces better rewards in Section A. We will reorganize the paper to make this clear from the main paper. DiffAIL uses how well a diffusion model can denoise a state-action pair to indicate the “realness” of the pair for adversarial imitation learning. To this end, DiffAIL learns an unconditional diffusion model to denoise expert state-action pairs well, while denoising agent state-action pairs poorly. In contrast, we propose to directly train an expert/agent binary classifier by optimizing the binary cross-entropy (BCE) loss according to the denoising performance of a conditional diffusion model conditioned on an expert/agent label, and our novelty lies in such a design. Our method significantly differs from DiffAIL since we formulate distinguishing expert and agent state-action pairs as a binary classification task instead of a denoising task, and this design aligns better with the GAIL formulation compared to DiffAIL. Moreover, DiffAIL’s evaluation only considers locomotion tasks, while our work extensively compares our method in various domains, including navigation (Maze and AntReach), locomotion (Walker and AntReach), robot arm manipulation (FetchPush and FetchPick), robot arm dexterous manipulation (HandRotate), and games (CarRacing). Additionally, we present experimental results on generalization to unseen states and goals and on varying amounts of expert data. > The authors claim that their proposed diffusion discriminative classifier only requires two reverse steps to obtain the corresponding reward. However, I do not fully agree with this. 
According to equations (4) and (6), DRAIL needs to compute $\mathcal{L}\_{diff}(s,a,c^+)$ and $\mathcal{L}\_{diff}(s,a,c^-)$ when obtaining rewards. From equation (3), we know both terms require taking an expectation over the diffusion steps $T$. We would like to clarify that we use sampling to approximate the expectation in Equation 3 instead of sampling all timesteps $T$ and averaging them. In all the experiments, we sample a single denoising time step $t$ to compute the reward, and we empirically found that sampling multiple denoising time steps does not consistently improve performance. > Why does DRAIL only require two reverse steps to obtain an accurate reward? Can you provide a more detailed explanation or analysis? Our method redesigns a conditional diffusion model to learn to perform an expert/agent binary classification task. Therefore, it can naturally provide a “realness” reward according to how indistinguishable agent state-action pairs are from the expert state-action pairs. To obtain the “realness” reward in Equation 4 given an agent state-action pair, we have to compute two $\mathcal{L}_{\text{diff}}$ in Equation 3 with the positive/expert condition ($c^+$) and the negative/agent condition ($c^-$), and therefore it requires two reverse steps. > What are the key hyperparameters for DRAIL? Can you conduct additional ablation experiments on these hyperparameters? We empirically found that our proposed method, DRAIL, is robust to hyperparameters and easy to tune, especially compared to GAIL and WAIL. Like most AIL methods, the key hyperparameters of DRAIL are the learning rates of the policy and discriminator. We additionally experimented with various values of the learning rates and reported the results in Figure R.2 in the PDF attached to the rebuttal summary. The results show that our method is robust to hyperparameter variations, including 5x, 2x, 1x, and 0.5x. 
We thank the reviewer for inspiring us to conduct this hyperparameter sensitivity experiment. We will revise the paper to include it. > Can you compare DAC and IQLearn in your experiments? I am very interested in seeing a comparison between the diffusion discriminative classifier and the gradient penalty. We have conducted additional experiments to address it. - **Comparison to gradient penalty**: We would like to note that one of our baselines, WAIL, has already implemented the gradient penalty and gradient clipping. As requested by the reviewer, we additionally implemented and evaluated GAIL with the gradient penalty (GAIL+GP) in CarRacing. The results are shown in Figure R.3 in the PDF attached to the rebuttal summary. GAIL with gradient penalty (GAIL+GP) initially shows a faster improvement but is unstable, and its overall performance is inferior to our method, DRAIL. - **Comparison to IQ-Learn**: As requested by the reviewer, we additionally implemented and evaluated IQ-Learn in CarRacing. The results in Figure R.3 show that IQ-Learn struggles at this task despite our substantial effort in experimenting with different hyperparameters, including actor learning rate ($10^{-4}$, $5 \times 10^{-5}$, $3 \times 10^{-5}$), critic learning rate ($10^{-3}$, $5 \times 10^{-4}$, $3 \times 10^{-4}$), and the entropy coefficient ($10^{-1}$, $5 \times 10^{-2}$, $2 \times 10^{-2}$, $10^{-2}$, $5 \times 10^{-3}$, $10^{-3}$), as well as trying various setups, including $\chi^2$-divergence and gradient penalty. - **Comparison to DAC**: The main contribution of DAC is to assign proper rewards to the “absorbing states” that an agent enters after the end of episodes. DAC requires additional annotations to determine if a state is an absorbing state or not. However, all the methods evaluated in our work do not have access to such information, making comparing them to DAC unfair. We believe the contributions of DAC and our work are orthogonal and could be combined. 
We will revise the paper to include the new results and the discussion. **References** [1] Wang et al. “DiffAIL: Diffusion Adversarial Imitation Learning.” In AAAI, 2024. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed answer to my questions. The response addressed my main concerns, therefore, I increased my score to 5. --- Reply to Comment 1.1.1: Title: Re: Official Comment by Reviewer PZWW Comment: We sincerely thank the reviewer for acknowledging our rebuttal and for helping us to improve our submission.
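As a concrete illustration of the two-pass, single-timestep reward computation described in the rebuttal above, here is a toy numpy sketch. Everything model-specific is a placeholder assumption: the stand-in denoiser `eps_theta`, the linear noise schedule, and the sigmoid combination of the two losses are illustrative, not the authors' Equation 4.

```python
import numpy as np

rng = np.random.default_rng(0)

def eps_theta(x_t, t, c, eps):
    """Hypothetical stand-in for a trained conditional denoiser.

    The expert condition (c = +1) is made a slightly better denoiser than
    the agent condition (c = -1); a real DRAIL discriminator learns this."""
    bias = 0.1 if c == +1 else 0.5
    return eps + bias * rng.standard_normal(eps.shape)

def diffusion_loss(x0, c, t, T=1000):
    """One-sample Monte Carlo estimate of L_diff at a single timestep t."""
    alpha_bar = 1.0 - t / T                       # toy linear noise schedule
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return float(np.mean((eps - eps_theta(x_t, t, c, eps)) ** 2))

def drail_reward(state_action, T=1000):
    # One uniformly sampled denoising step (not an average over all T steps),
    # then two passes: positive (expert) and negative (agent) condition.
    t = int(rng.integers(0, T))
    l_pos = diffusion_loss(state_action, c=+1, t=t)
    l_neg = diffusion_loss(state_action, c=-1, t=t)
    # Assumed "realness" score: a lower expert-conditioned loss
    # pushes the reward above 1/2.
    return 1.0 / (1.0 + np.exp(l_pos - l_neg))
```

With the toy bias above, a pair is scored closer to 1 the better the expert-conditioned denoiser fits it, and each reward costs only two forward passes, as the rebuttal states.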
Summary: The paper proposes a novel imitation learning framework that integrates a diffusion model into Generative Adversarial Imitation Learning (GAIL). The primary aim is to address the instability and brittleness associated with GAIL by introducing more robust and smoother reward functions for policy learning. The authors develop a diffusion discriminative classifier to enhance the discriminator's performance and generate diffusion rewards. Extensive experiments in various domains such as navigation, manipulation, and locomotion demonstrate that DRAIL outperforms or is competitive with existing imitation learning methods. The paper highlights DRAIL’s effectiveness, generalizability, and data efficiency, offering a significant contribution to the field of imitation learning. Strengths: 1. **Innovative Integration**: The integration of diffusion models into GAIL is innovative and addresses the common issue of instability in adversarial imitation learning. 2. **Comprehensive Experiments**: The authors conduct extensive experiments across diverse domains, including navigation, manipulation, and locomotion, which provide robust evidence of DRAIL's effectiveness. 3. **Generalizability and Data Efficiency**: The paper demonstrates superior generalizability to unseen states and goals, as well as high data efficiency, making the approach practical for real-world applications. 4. **Robust Reward Mechanism**: The diffusion discriminative classifier enhances the robustness of the reward functions, contributing to more stable and reliable policy learning. 5. **Visualizations**: The visualized comparisons of learned reward functions between GAIL and DRAIL effectively illustrate the advantages of the proposed approach. ### Weaknesses: 1. **Complexity**: The introduction of diffusion models adds complexity to the framework, which may pose challenges in implementation and require significant computational resources. 2. 
**Limited Scope in Real-World Applications**: While the experiments are diverse, the applicability in highly dynamic and unpredictable real-world environments is not thoroughly explored. 3. **Comparative Analysis Depth**: Although the paper compares DRAIL with several baselines, a deeper analysis of why certain methods perform better in specific tasks could enhance understanding. 4. **Scalability**: The scalability of DRAIL in larger and more complex environments is not extensively evaluated, which could be a limitation for broader adoption. 5. **Hyperparameter Sensitivity**: The sensitivity of DRAIL to various hyperparameters is not discussed, which could impact its robustness across different settings. ### Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does DRAIL handle environments with rapidly changing dynamics, and what are the limitations in such scenarios? 2. What are the computational overheads associated with incorporating diffusion models, and how does it compare with other state-of-the-art methods in terms of efficiency? 3. How sensitive is DRAIL to the choice of hyperparameters, and what guidelines can be provided for tuning these parameters in different environments? 4. Can the diffusion discriminative classifier be extended or modified to further improve the reward robustness and policy performance in more complex tasks? 5. What specific strategies were employed to ensure the stability of the diffusion model training, and how do these strategies affect the overall performance of DRAIL? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Shown in weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. > Complexity: The introduction of diffusion models adds complexity to the framework, which may pose challenges in implementation and require significant computational resources. What are the computational overheads associated with incorporating diffusion models, and how does it compare with other state-of-the-art methods in terms of efficiency? We would like to clarify that our proposed method does not require significant computational resources, as it only requires two feedforward passes to obtain a reward for a state-action pair. This is because our design does not require going through the diffusion model generation process. Our method exhibits better efficiency compared to the most widely adopted method, GAIL, performing significantly better than GAIL with the same number of feedforward passes computed. > Limited Scope, Scalability, and rapidly changing dynamics This paper aims to propose a fundamental algorithm that improves the AIL framework. While previous AIL papers, such as GAIL, WAIL, and DiffAIL, have only tested their algorithms in locomotion environments, we evaluate our proposed method in various domains, including locomotion (Walker and AntReach), navigation (Maze and AntReach), robot arm manipulation (FetchPush and FetchPick), robot arm dexterity (HandRotate), and games (CarRacing). We believe this sufficiently demonstrates the broad applicability of our proposed method. > A deeper analysis of why certain methods perform better in specific tasks could enhance understanding We thank the reviewer for the suggestion. We will provide deeper analyses of the performance of different methods on all the tasks in the revised paper. > How sensitive is DRAIL to the choice of hyperparameters, and what guidelines can be provided for tuning these parameters in different environments? 
We empirically found that our proposed method, DRAIL, is robust to hyperparameters and easy to tune, especially compared to GAIL and WAIL. Like most AIL methods, the key hyperparameters of DRAIL are the learning rates of the policy and discriminator. We additionally experimented with various values of the learning rates of the policy and discriminator and reported the results in Figure R.2 in the PDF attached to the rebuttal summary. The results show that our method is robust to hyperparameter variations, including 5x, 2x, 1x, and 0.5x. We thank the reviewer for inspiring us to conduct this hyperparameter sensitivity experiment. We will revise the paper to include it. > Can the diffusion discriminative classifier be extended or modified to further improve the reward robustness and policy performance in more complex tasks? We empirically observe that the rewards produced by our diffusion model classifier are robust, resulting in stable policy learning curves, as demonstrated in the experiments. It is potentially beneficial to incorporate the recent advancements in developing diffusion models into DRAIL, which is left for future work. > What specific strategies were employed to ensure the stability of the diffusion model training, and how do these strategies affect the overall performance of DRAIL? We empirically observe that the training of diffusion models is very stable with smoothly decreasing losses. We believe this justifies the design of our BCE loss. --- Rebuttal 2: Title: Reminder: The reviewer-author discussion period ends in three days Comment: We would like to express our sincere gratitude to the reviewer for the thorough and constructive feedback. We are confident that our responses adequately address the concerns raised by the reviewer, including the following points. 
- A description of computational resources - A discussion of the broad applicability of our proposed method - An additional hyperparameter sensitivity experiment - A discussion of further improving the robustness and stability of our proposed method Please kindly let us know if the reviewer has any additional concerns or if further experimental results are required. We are fully committed to resolving any potential issues, should time permit. Again, we thank the reviewer for all the detailed review and the time the reviewer put into helping us to improve our submission. --- Rebuttal Comment 2.1: Title: Reminder: The reviewer-author discussion period ends in 20 hours Comment: We would like to express our sincere gratitude to the reviewer for the thorough and constructive feedback. We are confident that our responses and additional experimental results adequately address the concerns raised by the reviewer. Please consider our rebuttal and kindly let us know if the reviewer has any additional concerns.
Summary: Imitation learning (IL) is a research area that focuses on learning a policy from expert demonstrations. One of the most popular imitation learning techniques, Generative Adversarial Imitation Learning (GAIL), has been proposed to mitigate some of the issues that naive IL algorithms suffer from. Although GAIL has seen a lot of success, it is known to be very unstable and difficult to tune. The authors propose Diffusion Reward Adversarial Imitation Learning (DRAIL) to mitigate some of the difficulties of training GAIL. In particular, the authors use a diffusion model to train the reward function. With this change in reward function design, the authors showed in practice that policies trained with DRAIL performed better than past approaches that attempted to mitigate GAIL issues. Additionally, the authors demonstrated that they could learn a reward function with a very low number of expert trajectories compared to other approaches. Strengths: - The paper was well written, and the authors' contribution was easy to follow. - The authors performed a thorough empirical investigation to show how their proposed approach compared against baselines. - The authors propose an algorithm that takes full advantage of recent advancements in generative modeling. Weaknesses: - The paper needs more novelty. DiffAIL proposed using diffusion as a reward, and DiffAIL also proposed using the training error as a reward signal. - The paper needs to include comparisons to the two missing cites [1] and [2]. - The authors do not provide any insight into why the proposed reward function is better than DiffAIL, especially since DiffAIL sometimes matches or performs better than the proposed algorithms. Technical Quality: 3 Clarity: 3 Questions for Authors: - What denoise step values did you try? And how did they perform? - Could you explain why the proposed reward model performs better than DiffAIL? 
- If BC performs better than all the baselines on Walker, then that means the task should be easy for all the baselines. - Did you try increasing or decreasing the number of expert samples used for learning? Are you using D4RL? If so, what dataset split are you using? - Did you try training various levels of optimal demonstration, such as walker2d-expert-v0, walker2d-medium-v0, and walker2d-random-v0, to see how well the algorithms perform when noise is introduced? - Are you taking the argmax or sampling from the learned imitation learning policies? - Did you try other divergences besides Jensen-Shannon? Missing citations [1], [2]: [1] A Coupled Flow Approach to Imitation Learning by Freund 2023 [2] Diffusion Model-Augmented Behavioral Cloning by Chen et al. 2024 Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. > The paper needs more novelty. DiffAIL proposed using diffusion as a reward, and DiffAIL also proposed using the training error as a reward signal. > The authors do not provide any insight into why the proposed reward function is better than DiffAIL, especially since DiffAIL sometimes matches or performs better than the proposed algorithms. > Could you explain why the proposed reward model performs better than DiffAIL? We explicitly discuss how our proposed method differs from DiffAIL and why our proposed method produces better rewards in Section A. We will reorganize the paper to make this clear from the main paper. DiffAIL uses how well a diffusion model can denoise a state-action pair to indicate the “realness” of the pair for adversarial imitation learning. To this end, DiffAIL learns an unconditional diffusion model to denoise expert state-action pairs well, while denoising agent state-action pairs poorly. In contrast, we propose to directly train an expert/agent binary classifier by optimizing the binary cross-entropy (BCE) loss according to the denoising performance of a conditional diffusion model conditioned on the expert/agent label, and our novelty lies in such a design. Our method significantly differs from DiffAIL since we formulate distinguishing expert and agent state-action pairs as a binary classification task instead of a denoising task, and this design aligns better with the GAIL formulation compared to DiffAIL. We extensively discuss this in Section A in the submission. We will revise the paper to highlight this discussion in the main paper to avoid confusion. 
Moreover, DiffAIL’s evaluation only considers locomotion tasks, while our work extensively compares our method in various domains, including navigation (Maze and AntReach), locomotion (Walker and AntReach), robot arm manipulation (FetchPush and FetchPick), robot arm dexterity (HandRotate), and games (CarRacing). Additionally, we present experimental results on generalization to unseen states and goals and on varying amounts of expert data. > The paper needs to include comparisons to the two missing cites [1] and [2]. We thank the reviewer for providing these references. We will revise the paper to include the following discussions and cite these works. Freund et al. (2023) [1] introduce CFIL, which employs normalizing flows for state and state-action distribution modeling in imitation learning. Chen et al. (2024) [2] augment behavioral cloning using a diffusion model focusing on offline imitation learning. In contrast, AIL and our method aim to leverage online environment interactions. Also, we would like to note that Chen et al. (2024) [2] was published at ICML 2024 (July 2024), which is later than the NeurIPS 2024 submission deadline (May 2024). > What denoise step values did you try? And how did they perform? As mentioned in Section E.1, we set the total timesteps $T$ to 1000 in all experiments. During the training phase, we uniformly randomly sample the denoising steps from the range 0 to T. During the inference phase, we follow the same procedure as in the training phase, i.e., uniformly randomly sampling from [0, T]. As requested by the reviewer, we additionally experimented with setting the denoising step to constant values \{250, 500, 750\}. The result is presented in Figure R.1 in the PDF attached to the overall response. The result shows that following the same denoising step sampling procedure as in the training phase, as adopted in our method, achieved the best performance. 
Also, consistently sampling with a large denoise step (750) hurts the policy learning performance. > Did you try increasing or decreasing the number of expert samples used for learning in Walker? Are you using D4RL? As mentioned in Section 5.1 (line 228), the expert Walker dataset was collected from a PPO expert policy instead of D4RL datasets. We evaluate and discuss the data efficiency, i.e., the amount of expert data required, of all the methods in Section 5.5. Figure 6 shows that our proposed method can learn with fewer demonstrations compared to the baselines in Walker and FetchPush. > Did you try training various levels of optimal demonstration, such as walker2d-expert-v0, walker2d-medium-v0, and walker2d-random-v0, to see how well the algorithms perform when noise is introduced? We didn’t try training with various levels of optimal demonstration in Walker. We evaluate the ability to generalize to varying levels of noise in Section 5.4 in FetchPush by randomizing initial states and goal locations. The results in Figure 5 show that our proposed method demonstrates the highest robustness towards noisy environments. > Are you taking the argmax or sampling from the learned imitation learning policies? We use PPO as our RL algorithm to learn the agent policy. Hence, the policy is stochastic during training and deterministic during inference. > Did you try other divergences besides Jensen-Shannon? We use JS divergence following the GAIL’s original setting. Exploring other divergences and distance metrics, such as the Wasserstein distance or f-divergences, is a promising future direction. We thank the reviewer for the suggestion and will revise the paper to include this discussion. **References** [1] Freund et al. “A Coupled Flow Approach to Imitation Learning.” In PMLR, 2023. [2] Chen et al. “Diffusion Model-Augmented Behavioral Cloning.” In ICML, 2024. 
--- Rebuttal 2: Title: Reminder: The reviewer-author discussion period ends in three days Comment: We would like to express our sincere gratitude to the reviewer for the thorough and constructive feedback. We are confident that our responses adequately address the concerns raised by the reviewer, including the following points. - A detailed description of our contributions compared to DiffAIL - An intuitive explanation of why our method works - A discussion of Freund et al., 2023 (CFIL) and Chen et al., 2024 (DBC) - A description of data efficiency experiments, i.e., learning from different amounts of expert data, provided in the main paper - An explanation of training and sampling from learned policies using PPO - A discussion of using different divergence measures Please kindly let us know if the reviewer has any additional concerns or if further experimental results are required. We are fully committed to resolving any potential issues, should time permit. Again, we thank the reviewer for all the detailed review and the time the reviewer put into helping us to improve our submission. --- Rebuttal Comment 2.1: Title: Reminder: The reviewer-author discussion period ends in 20 hours Comment: We would like to express our sincere gratitude to the reviewer for the thorough and constructive feedback. We are confident that our responses and additional experimental results adequately address the concerns raised by the reviewer. Please consider our rebuttal and kindly let us know if the reviewer has any additional concerns.
Rebuttal 1: Rebuttal: The attached PDF file contains the following content: - **[Reviewer uNxe] The Effect of Denoising Time Step**: We experimented with computing rewards using different constant denoising time steps and reported the result in Figure R.1. The result shows that following the same denoising step sampling procedure as in the training phase, i.e., uniformly randomly sample from [0, T], as adopted in our method, achieves the best performance, justifying our design choice. - **[Reviewer 3pMZ, Reviewer PZWW] Hyperparameter Ablation Study**: We experimented with varying hyperparameters, including the discriminator learning rate ($\eta_\phi$) and policy learning rates ($\eta_\pi$). The results are shown in Figure R.2, demonstrating that DRAIL maintains robust performance with varying hyperparameters. - **[Reviewer PZWW] Comparison to Gradient Penalty and IQ-Learn**: We implemented and evaluated two additional baselines, GAIL with Gradient Penalty and IQ-Learn, as suggested by the reviewer. The results shown in Figure R.3 highlight the superior performance of our proposed method. Pdf: /pdf/8046d4a11db2093301d1229d24b47cf3d59e4f46.pdf
NeurIPS_2024_submissions_huggingface
2024
Metric Transforms and Low Rank Representations of Kernels for Fast Attention
Accept (spotlight)
Summary: This paper studies three separate problems relating to metric transformations of kernel matrices via a new mathematical technique which the authors refer to as the representation theory of the hyperrectangle. The first problem is about characterising the entrywise transformations which, when applied to a low-rank matrix, preserve its low-rank structure. The problem is considered in the context of speeding up attention computations in LLMs. They 1. show that polynomials are the only class of functions (without essential discontinuities of the first kind) which, when applied entrywise to a low-rank matrix, preserve the low-rankness of the matrix (as defined in Def. 2.2). This result generalises an existing result in the literature for a more restrictive class of functions. 2. show that the result is still true even if "low-rank" is relaxed to "approximately low-rank" (matrices with small inverse condition number), and "polynomial" is relaxed to "approximately polynomial" (functions with small finite differences). 3. show that entrywise "functions that do not grow too quickly" preserve a relaxation of low-rankness known as "stable rank". This includes non-polynomial functions. However, most applications of matrices with low stable rank require the whole matrix to be computed, which does not achieve the goal of fast attention computation. The second problem is a complete categorisation of all functions of l_1 norm distance which result in positive semi-definite kernel matrices. These are shown to be the class of completely monotone functions. The third problem is to characterise the set of functions which transform one semi-metric space into another. They show the equivalence between a function being Bernstein, it transforming l_1 distances to l_1 distances, and transforming l_1 distances to squared l_2 distances. Strengths: This paper, including the proofs, is accurate and well-written. 
The first section answers some important questions about the rank of matrices after entrywise transformations. I believe this section of the work will be of great interest to those working on attention in LLMs. The section essentially closes the question of which transformations preserve low-rankness by proving that it is only functions in the class of polynomials (bar functions with essential discontinuities of the first kind, which the authors conjecture do not preserve low-rank anyway, and I am inclined to believe them). The exploration of how the answer to this question changes when the notion of "low-rank" is relaxed to "approximately low-rank" and "of low stable rank" is informative and complements the first result nicely. The characterisation of positive semi-definite kernel matrices which are functions of l_1 distance also closes an open question with an elegant result. This result will likely be of interest to researchers in the theory of kernel methods. The theoretical work in Section 4 is also interesting, however the significance to the machine-learning community is perhaps slightly less than the previous sections. The mathematical tools developed to study the spectral properties of matrices whose entries are the l_1 distances between the vertices of a hyperrectangle do not receive much attention in the main text, but may well be of independent interest to mathematics research in many disciplines. Weaknesses: One might argue that this paper tries to do too much for a conference paper, and as a result much of it feels rushed or unduly relegates details to the appendix. The sections sometimes feel unrelated, and the paper lacks a clear narrative. Empirical experiments to demonstrate the results in Section 2 would be very welcomed, and I think would enhance the message of the paper. However, there is clearly not space for this as it stands. 
I can imagine a more focused version of this paper which moves the results in Sections 3 and 4 to another paper, and which focuses only on the results in Section 2, which I believe have the most value to the machine learning community. This would give more space to the mathematical tools in Section 5, and to experiments. That said, all of the individual contributions are strong, and I have no criticisms of any of the content of this work. Technical Quality: 4 Clarity: 3 Questions for Authors: - I was surprised by the lack of mention of Mercer's theorem on the existence of the feature maps F in Section 3. Is this covered by other references? - I would be interested to see simulated experiments to demonstrate Theorems 2.6 and 2.7. - Please could you explain the implications of Theorem 4.4 on a problem in machine-learning. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors are upfront about limitations, particularly with relation to stable rank, which I think opens up interesting directions for future research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
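The simulated experiments the review asks for are easy to prototype. A minimal numpy sketch of the rank dichotomy (illustrative only, not the paper's experiments): an entrywise polynomial keeps the rank of a rank-2 matrix small, while a non-polynomial such as exp produces a matrix that is far from low rank, visible here as many more singular values above a fixed tolerance.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 64, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # exact rank 2

def num_rank(A, tol=1e-8):
    """Numerical rank: count singular values above tol * largest."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

low = num_rank(M ** 2)      # entrywise square: rank at most 3 for a rank-2 M
high = num_rank(np.exp(M))  # entrywise exp: generically full exact rank
print(low, high)
```

For the entrywise square, expanding $(u_{i1}v_{1j} + u_{i2}v_{2j})^2$ gives three monomial terms, so the rank is at most 3; no such bound exists for exp.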
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. Re: “the sections sometimes feel unrelated, and the paper lacks a clear narrative”: The sections are related through the unifying technique: the representation theory of the real hyperrectangle. Indeed, all our results (from LLMs to kernels) follow from this one new technique. We plan to emphasize this early in the introduction in the final version of the paper. To answer your questions: 1. Indeed, Mercer’s Theorem is covered by our other references. For instance, it is heavily referenced in our citation [SOW01]. But you’re right that it’s important and we implicitly use it; we will cite it directly in the final version. 2. We agree that experiments relating to Section 2 would be exciting. Some prior empirical papers have performed experiments showing that polynomials in attention are effective. See, for instance, reference [KMZ23] (which appeared in ICML 2024). More experiments on our results, including a search for optimal low-degree polynomials to use in attention, would be an exciting area for future work. 3. The areas of metric transforms have numerous applications to machine learning (see lines 67-77 on page 2), and our Theorem 4.4 gives a new structural result about what is possible in metric transforms. The known statement using Euclidean instead of Manhattan distances (which our Theorem 4.4 extends) is widely used in machine learning. Some specific examples include the converse to Theorem 2 in our citation [RYW+19] about interpretability (“Visualizing and measuring the geometry of BERT” [NeurIPS ‘19]), and in Theorem 4 and 5 of our citation [FLH 15] giving a framework for kernels on popular Riemannian data manifolds in computer vision (“Geodesic Exponential Kernels: When Curvature and Linearity Conflict” [CVPR ‘15]). Again, Lines 67-77 on page 2 of our paper give a longer list of applications. We are confident that our Theorem 4.4 will be useful for proving future results similarly. 
Perhaps the most striking part of our Theorem 4.4 is that it is proved using the exact same technique as our other results. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I agree that the thing that ties this paper together is the new mathematical tool: the representation theory of the real hyperrectangle, and that this should be emphasised earlier on in the paper. While I strongly support this work, I am still unsure whether a venue with a 10-page limit is suitable for it. I will increase my score to 7, and I encourage the authors to think carefully about the presentation of their results within the space constraints.
Summary: The paper achieves three main results: Firstly, it demonstrates that polynomials are the only piece-wise continuous functions for which applying them entry-wise to a low-rank matrix results in another low-rank matrix. Secondly, it shows that if f is a Manhattan kernel, then it must be a completely monotone function. Thirdly, it shows Bernstein functions are the only ones that transform the Manhattan metric to the Manhattan metric. Strengths: 1. I found the paper well-written and highly readable, with a smooth flow. The related works are properly discussed, providing clear motivation for each problem. The proofs are divided into steps, and the ideas are explained clearly, allowing the reader to engage with the paper's storyline. I believe this paper will certainly be of interest to the NeurIPS community. 2. The paper is mathematically solid. The results and the techniques used to obtain them are interesting and mathematically rigorous and effectively address the problems outlined in the summary. 3. The technical core of all three main results involves a simple application of Schur’s lemma, which characterizes the eigenvectors of a 'Gram-like matrix' derived from the vertices of a hyper-rectangle. Another interesting aspect of the paper is the authors' observation that this technical tool can be employed to address the aforementioned problems. Weaknesses: None. Technical Quality: 4 Clarity: 4 Questions for Authors: To what extent can your results for metric transformation and kernel classification be extended beyond the Manhattan metric? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have discussed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
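The completely-monotone direction of the Manhattan kernel classification summarized above is easy to check numerically. A small numpy sketch (the functions and sample sizes are arbitrary illustrative choices, not from the paper): $f(t)=e^{-t}$ is completely monotone and yields a PSD matrix on $\ell_1$ distances, while $f(t)=1-t$ is not and yields an indefinite matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 5))
# Pairwise l1 (Manhattan) distances.
D = np.abs(X[:, None, :] - X[None, :, :]).sum(-1)

# f(t) = exp(-t) is completely monotone: the resulting matrix is PSD
# (this is the Laplacian kernel, a product of 1-d PSD kernels).
K_cm = np.exp(-D)

# f(t) = 1 - t is not completely monotone (it goes negative); any pair of
# points at l1 distance > 2 gives a 2x2 principal minor with negative
# determinant, so by eigenvalue interlacing the matrix is indefinite.
K_bad = 1.0 - D

print(np.linalg.eigvalsh(K_cm).min())   # nonnegative up to roundoff
print(np.linalg.eigvalsh(K_bad).min())
```

The first minimum eigenvalue is (numerically) nonnegative for any sample of distinct points; the second is strongly negative whenever some pairwise distance exceeds 2, as it does here.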
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. Great question: one can show with some work that the same theorem statements hold for a broader class of metrics, including effective resistance distances from electrical network theory, or the class of graph metrics arising from all star graphs. Exploring this type of result beyond Manhattan metrics is a rich area of exploration for future work, and we expect the landscape and necessary techniques to vary for different types of metrics. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for addressing my questions and for your excellent work. I have reviewed the rebuttals and other reviews, and I would like to maintain my original score. Best regards, Reviewer
Summary: This paper studies kernel functions with a particular interest in Manhattan distance kernels. The following three questions are studied: 1) For which functions f is the entrywise transformation f(M) guaranteed to not be full rank if M is any sufficiently low-rank matrix? The authors show that if f does not have essential discontinuities of the first kind and entrywise f(M) has rank < n whenever M has rank $< \log_2(n) + 2$, then f is necessarily a polynomial of degree at most $\log_2(n) + 1$. The claim in the paper seems to be made for all n, though the proof seems to require n to be a power of 2. 2) What functions are positive definite Manhattan kernels? The authors show that f is a positive definite Manhattan kernel iff f is completely monotone. (Theorem 3.4) 3) What functions transform Manhattan distances to Manhattan distances? The authors show that f transforms Manhattan distances to Manhattan distances iff f transforms Manhattan distances to squared Euclidean distances iff f is Bernstein. The key technique used in this paper is connecting all three problems to the study of the eigenvalues of $2^d \times 2^d$ matrices whose entries are defined as $(D_f)_{ij} = f(\Vert x_i - x_j \Vert_1)$, where $x_i, x_j$ are vertices of a $d$-dimensional hyperrectangle. The key observation is that such matrices are preserved by a permutation group induced by reflections in $R^d$, which by Schur's lemma implies that the eigenvectors are eigenvectors of the Hadamard matrix $H_d$. This allows one to explicitly compute and analyze the eigenvalues of $D_f$ (Lemma B1), and this analysis allows progress on all three problems above. Strengths: The problems studied in this paper are important, natural, and in some sense fundamental (e.g., classification of positive definite Manhattan kernels). Hence, I think this is a very nice paper, which is well-motivated. 
A similar classification for Euclidean distances was obtained almost a century ago, and Manhattan distance is one of the "next" natural metrics for which this question can be asked. It was known for a long time (in one direction) that Bernstein functions preserve Manhattan distances and that completely monotone functions are Manhattan-distance positive definite. However, this paper appears to be the first to show that these are the only functions with such properties, proving the opposite direction of the classification. I am not sure whether the idea of studying eigenspaces and eigenvalues of entrywise transforms of hyperrectangle distance matrices has appeared before in the context of these problems, but the idea is very interesting and productive. The proofs are sufficiently rigorous and appear to be correct (I verified some but not all in detail). Weaknesses: I have mainly two concerns: 1) I think the presentation of the results can be improved substantially, and I hope that the authors will polish the paper more for the final version if the paper gets accepted. In particular: -- I think it is important to include more insights about the proof technique and the main novelties of the paper in the main text. At this point, only lines 365-380 (Section 5) provide some substantial insight into the proof. Some paragraphs of other sections can be shortened without losing much content, which would allow for more insights into the technique, which, in my opinion, is important to present in the main text. In particular, I find lines 999-1005 very insightful, and I think moving them up would help the reader a lot. -- Some statements and notation are imprecise or confusing (see below) -- There are a lot of forward references, which make it somewhat hard to read the proofs. For example, proofs in Section E talk about "eigenvalues corresponding to $\xi$" (e.g., line 1407), while this makes sense only after reading Section I, which comes much later. There are many forward references like this.
-- There is a non-negligible amount of typos 2) I am not sure that this is the right venue for these results. The results proven in this paper are great, but they seem to belong to the field of metric geometry/analysis. While there is a connection to learning theory, as the authors indicate in the introduction, in my opinion, the connection is somewhat weak and tangential. Technical Quality: 3 Clarity: 2 Questions for Authors: 1) Does not Lemma B1, and hence any lemma that depends on the calculation in this lemma, require n to be a power of 2? In particular, Lemmas C4, C5, C6, Theorem C11, Theorem 2.5, etc. all seem to require n to be a power of 2. If not, how does the proof work for the case when n is not a power of 2? If yes, can you please make it explicit in all those statements that n should be a power of 2? I think the results have a substantially different flavor if they hold only for powers of 2, instead of for all integers n. 2) Theorem C11 has conditions missing, even though it is marked as a "formal statement". At the least, it should be mentioned that f is assumed to not have essential discontinuities. 3) Line 1299 talks about a matrix A which is only introduced in line 1303. 4) The statement of Theorem 2.4 is somewhat confusing, more specifically the claim that Fact 2.1 is "tight". 5) nitpicking: in the statement of Lemma C15 the rank of M can be less than 5 if n < 5. 6) nitpicking: in line 1163 (and other places), if one wants to be pedantic, $\Delta_{\epsilon}^{d}$ is an operator applied to functions defined on $R^d$, while f is defined over $R$. I think it should be applied to $f(\langle \cdot , 1\rangle)$ at $a$. 7) L1191: a closing ) is missing. 8) $S_i$ and $T_i$ are mixed up in L1245-1246. 9) The computation in Lemma C9 is simple, so it is a bit strange to keep it so far from Lemma C4 where it is used; this requires the reader to jump back and forth.
Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 4 Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. To address weaknesses: 1. Thank you for pointing this out. We will take your suggestions for the final draft, expand on the proof technique and main novelties in the introduction, correct typos, and make statements (such as the ones you point to) more precise. We will also remove forward references to improve readability. 2. Our techniques are based on metric geometry, and they have strong applications to machine learning, including attention computation in LLMs, kernels, and more. For example, we show that low-rank fast attention (the theoretical state of the art for fast attention computation) *must* rely on polynomial approximations of, or replacements for, the softmax function in the definition of attention. Such a result is more relevant to machine learning than to metric geometry. The application of metric geometry techniques to a variety of machine learning fields such as LLMs and kernels should be considered a strength of our paper, and we are particularly excited about disseminating our results to the ML community. To answer your questions: 1. The lemma as stated has $n$ as a power of 2, but the statement for general $n$ follows directly from the statement for $n$ a power of 2. This is because of the fact from linear algebra that if an $N \times N$ matrix $M$ has full rank, then for any fixed $n \leq N$ there exists an $n \times n$ submatrix of $M$ with full rank. (This follows, for instance, from expansion by minors.) Suppose $n$ is not a power of $2$, and let $2^d$ be the smallest power of $2$ bigger than $n$. If $f$ is not a low-degree polynomial, our work as stated proves that there exists a low-rank $2^d \times 2^d$ matrix $M$ where $M^f$ is full rank. By the above, there exists an $n \times n$ submatrix $M_n$ of $M$ (with low rank, since its rank is at most that of $M$) where $M_n^f$ is full rank.
Therefore, $f$ cannot preserve low-rank matrices of dimension $n \times n$ if $f$ is not a low-degree polynomial. 2. Thank you for pointing this out! Theorem 2.5 is actually our full formal statement as-is, and Theorem C11 is missing only the "no essential discontinuities of the first kind" condition (but nothing else). We will correct this in the final version. 3. Thank you for noticing this; we will correct it in the final version. 4. In Theorem 2.4, when we wrote "Fact 2.1 is tight", we meant to say that the converse of Fact 2.1 is true when $f$ is $(n-1)$-times differentiable. To be precise, if $f$ is $(n-1)$-times differentiable and $f$ preserves low rank, then Theorem 2.4 states that $f$ must be a low-degree polynomial. We will clarify this and remove the word "tight" from Theorem 2.4. (Just to clarify: our new results show that this converse is true even without the $(n-1)$-times differentiability, and is true for all piecewise continuous functions. As mentioned in Section 2, this set of functions is considerably more expansive than $(n-1)$-times differentiable functions, and includes popular non-twice-differentiable functions like ReLU, ELU, and SELU, as well as analytic oddities like the nowhere-differentiable, everywhere-continuous Weierstrass function. Our new result also shows that approximate low-rank preservation is impossible for piecewise continuous functions that aren't approximations of low-degree polynomials, whereas Theorem 2.4 didn't have any results about approximability.) Other questions: Thank you for noting these typos and other minor issues; we will correct them all in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions! The generalization to general n makes sense. I will keep my score unchanged due to presentation issues, and I hope the authors will address them in the final version.
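The submatrix reduction in answer 1 can be made concrete with a toy example. The following sketch is our own illustration (using $f = \exp$, one convenient non-polynomial, not the authors' construction): a rank-1 matrix whose entrywise exponential is the full-rank Vandermonde matrix on nodes $1, \dots, 8$, together with a full-rank $5 \times 5$ submatrix of the transformed matrix whose preimage is still rank 1.

```python
import numpy as np

# rank-1 matrix M with M[i, j] = log(i+1) * j, so that the entrywise
# exponential gives exp(M)[i, j] = (i+1)**j, a Vandermonde matrix on nodes 1..8
N, n = 8, 5
M = np.outer(np.log(np.arange(1, N + 1)), np.arange(N))
Mf = np.exp(M)

assert np.linalg.matrix_rank(M) == 1       # M is very low rank...
assert np.linalg.matrix_rank(Mf) == N      # ...but exp(M) is full rank

# the reduction to general n: a full-rank n x n submatrix of Mf certifies
# that exp also fails to preserve low rank at size n x n
sub, subM = Mf[:n, :n], M[:n, :n]
assert np.linalg.matrix_rank(sub) == n     # full-rank 5x5 submatrix of exp(M)
assert np.linalg.matrix_rank(subM) <= 1    # its preimage in M stays rank <= 1
```

This mirrors the rebuttal's argument: the low-rank submatrix of $M$ paired with the full-rank submatrix of $M^f$ shows $f$ fails to preserve low rank at the smaller, non-power-of-2 size as well.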
NeurIPS_2024_submissions_huggingface
2024
Understanding and Improving Adversarial Collaborative Filtering for Robust Recommendation
Accept (poster)
Summary: This work investigates the efficacy and robustness of Adversarial Collaborative Filtering (ACF) in recommender systems. The authors present theoretical analyses to demonstrate how ACF enhances traditional collaborative filtering (CF) by mitigating the negative impact of data poisoning attacks. They extend these theoretical insights to further improve existing ACF methods by dynamically and personally assigning perturbation magnitudes based on users’ embedding scales. Experiments based on representative CF backbones and various attack methods strengthen the validity of this work. Strengths: S1. The work addresses a critical issue in recommender systems—robustness against attacks. The paper introduces novel theoretical insights into the benefits of ACF over traditional CF, particularly in mitigating data poisoning attacks. The combination of theoretical and empirical analyses enhances the originality of the work. S2. The authors provide rigorous mathematical proofs and extensive experimental evaluations to support their claims. The inclusion of multiple datasets and various attack methods strengthens the validity of their findings. S3. The paper is well-structured, with clear explanations of theoretical concepts and experimental methodologies. The use of figures and tables helps in comprehensively presenting the results. Weaknesses: W1. To extend theoretical understandings from the simple single-item CF scenario to practical multi-item scenarios, Corollary 1 is restricted to the dot-product-based loss function. W2. The experiment only involves two CF backbones, namely MF and LightGCN. The results will be more convincing by adding more CF methods. Technical Quality: 3 Clarity: 4 Questions for Authors: Please refer to the Weaknesses section. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have discussed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
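The personalized-magnitude idea summarized above can be sketched schematically. The following toy illustration is our own simplification (an FGSM-style perturbation under a pointwise loss $-r\langle u, v\rangle$, with a hypothetical per-user scaling by embedding norm), not the paper's exact PamaCF objective:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, d = 4, 16
U = rng.standard_normal((n_users, d))   # user embeddings
v = rng.standard_normal(d)              # one item embedding

def adv_perturbation(u, r, eps):
    """L2-bounded perturbation of the item embedding that maximizes the
    simplified pointwise loss -r * <u, v> (its gradient w.r.t. v is -r*u)."""
    grad = -r * u
    return eps * grad / np.linalg.norm(grad)

base_eps = 0.5
scale = np.linalg.norm(U, axis=1).mean()
for u in U:
    # personalized magnitude: the budget grows with the user's embedding norm
    eps_u = base_eps * np.linalg.norm(u) / scale
    delta = adv_perturbation(u, r=1.0, eps=eps_u)
    v_adv = v + delta                    # perturbed item used in the adversarial term
    assert np.isclose(np.linalg.norm(delta), eps_u)  # budget used exactly
```

The contrast with plain ACF is that `eps_u` would be the same constant `base_eps` for every user, rather than varying with the user's embedding scale.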
Rebuttal 1: Rebuttal: Thanks for your positive feedback. We deeply value your recognition of our work's contributions, particularly its importance, clear explanations, novel theoretical insights, rigorous mathematical proofs, extensive experimental evaluations, and the effective combination of theoretical and empirical analyses. Below, we provide detailed responses to each of your questions, offering clarification and additional support. --- **Q1: To extend theoretical understandings from the simple single-item CF scenario to practical multi-item scenarios, Corollary 1 is restricted to the dot-product-based loss function.** **A1**: Thank you for your question. The core of our recommendation scenario centers on the dot-product interaction between users and items. While Corollary 1 focuses on this foundational setup, exploring general formulations rather than specific modeling approaches can enhance theoretical applicability. - To assess the robustness of Corollary 1 beyond dot-product scenarios, we will include experimental results using NeurMF with non-dot-product-based loss functions. - We have found that PamaCF **achieves state-of-the-art performance, even in non-dot-product-based loss functions**. - **Recommendation Performance**: The table can be seen in the pdf (Table 3) in the global response. - **Robustness against target items promotion**: The table can be seen in the pdf (Table 4) in the global response. --- **Q2: The experiment only involves two CF backbones, namely MF and LightGCN. The results will be more convincing by adding more CF methods.** **A2**: Thank you for your suggestion. We appreciate your feedback, and in response, we have expanded our experimental evaluation to include results from NeurMF, as shown in the tables referenced in Q1. We have found that **Our findings demonstrate that PamaCF achieves state-of-the-art performance within the NeurMF**. This addition broadens the scope of collaborative filtering methods examined in our study. 
--- We sincerely appreciate your time, effort, and valuable suggestions during the review process. We trust that these clarifications adequately address your concerns. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' explanations. I have no further questions. --- Reply to Comment 1.1.1: Comment: We are grateful that our response successfully resolved your concerns. Thank you for your feedback and for positively evaluating our work. Sincerely, The Authors --- Rebuttal 2: Title: Follow up Comment: Thank you for taking the time and effort to review our paper. We have carefully considered your comments and responded to each point with detailed clarifications and supporting evidence. As the deadline for the discussion period between authors and reviewers is approaching, we kindly ask whether our responses have sufficiently addressed your concerns. Please let us know if there are any further questions or issues that you would like us to address. We look forward to hearing from you soon. Thank you once again for your time and effort. Sincerely, The Authors
Summary: This work investigates adversarial collaborative filtering, providing theoretical evidence for the effectiveness of such methods and proposing a novel method based on a personalized magnitude of perturbation. Overall, this work studies an interesting problem and offers some theoretical insights. However, this work also has some limitations, particularly its impractical assumptions. As such, I give a score of 5. Strengths: 1. This work studies an interesting and important problem. 2. This work provides a theoretical understanding of the effectiveness of adversarial collaborative filtering, and introduces a new method based on this understanding. 3. Extensive experiments are conducted to validate the effectiveness of the proposal. Weaknesses: 1. My primary concern lies with the assumptions made in the theoretical analysis. There is a noticeable gap between the theoretical analysis and practical applications in several aspects: a) the strategy for initializing embeddings (presented in Definition 1) is seldom adopted in existing RS; b) the optimization in practical RS typically uses BCE, BPR, or softmax loss, rather than the loss function described in equation (2); c) preference prediction usually does not employ the sign function; d) recommendation is often framed as a ranking problem with corresponding metrics or losses, rather than being defined as in 'Definition 2' for analyzing recommendation error. I understand these impractical assumptions could facilitate theoretical derivation, but they clearly become a limitation of this work. 2. My other concern is with the theoretical analysis itself. 2.1 It is counter-intuitive that the theoretical bound does not depend on the item embeddings, given the symmetric roles of user and item embeddings in recommender systems. Could you please provide some explanation? Or does the relation $v = \frac{1}{n} \sum_i r_i u_i$ always hold?
2.2 It is interesting to discover that CF has a unique advantage such that the adversarial mechanism works even for clean data, unlike in basic classification tasks. Why is collaborative filtering so special? The problem formulation for collaborative filtering appears similar to basic classification tasks. Could the distinction be due to the inner product? More discussion on which characteristics of collaborative filtering lead to this advantageous property would be highly interesting. Some minor ones: 3. The theoretical part is not easy to follow. This may be due to my unfamiliarity with theories on adversarial learning. Nevertheless, I recommend that the authors provide intuitive explanations for the theorems and a brief outline of the proof procedure. 4. The paper should include more experimental details. It is unclear whether the experimental setup (e.g., initialization) follows the problem definition 1 or aligns with previous works like [2][3]. Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to weaknesses. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: the authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your positive comments and recognition of the significance of our paper, including its theoretical contributions, novel methodology, and comprehensive experiments. In response to your feedback, we address your concerns and clarify some misunderstandings as follows: --- **Q1: The strategy for initializing embeddings is seldom adopted in existing RS**. **A1**: Thanks for your feedback. The Gaussian-based initialization method is chosen to derive upper and lower bounds and **does not alter our core findings**: (1) ACF outperforms traditional CF; (2) User-adaptive magnitude further enhances the effectiveness of ACF. - For instance, the conclusion in Theorem 1 is already demonstrated in Appendix D1.1 at line 667, but incorporating the properties of Gaussian distribution allows for a more accurate formulation. - In Appendix D2.1 at line 706, the expression already indicates the need for different magnitudes; introducing Gaussian distribution properties results in more accurate and readable upper and lower bounds. --- **Q2: The optimization in practical RS typically uses BCE, BPR, or softmax loss, rather than the loss function described in Eq. 2.** **A2**: Section 3's simplified pointwise loss ensures derivation clarity and simplicity, as in [1]. - To broaden the applicability of our findings, Corollary 1 in Section 4 **extends this to complex losses** (Appendix D.3). - Moreover, in the implementation of PamaCF (Eq. 6 in the main text), we adopt the extended and widely used pairwise loss, i.e., PamaCF-BPR loss, detailed in Appendix B. --- **Q3: Preference prediction usually does not employ the sign function.** **A3**: The sign function aids recommendation error definition **without affecting our derivations**. 
In Appendices D.1, D.2, and D.3, we demonstrate that $\mathbb{P}[\mathrm{sgn}(\langle u, v \rangle) \neq r] = \mathbb{P}[r \cdot \langle u, v \rangle < 0]$, effectively addressing any potential impact of the sign function on our results. --- **Q4: Recommendation is often framed as a ranking problem, rather than being defined as in 'Definition 2' for analyzing recommendation error.** **A4**: Thanks for your question. Our approach in Section 3 simplifies the analysis by focusing on item impact via a pointwise framework. This approach provides theoretical clarity by avoiding the complexities introduced by ranking tasks. Section 4's Corollary 1 **extends these insights beyond the constraints of Definitions 1-3**, as detailed in Appendix D.3. --- **Q5: It is counter-intuitive that the theoretical bound does not depend on the item embeddings.** **A5**: **The theoretical bounds do depend on the item embeddings**. - When fixing a particular user, the item embedding $v_{(0)}$ is equivalent to a sample from $\mathcal{N}(\bar{u}, \frac{\sigma^2}{n-1}I)$ (as mentioned in lines 120-123). - When deriving the upper and lower bounds, we map $v_{(t)}$ to $v_{(0)}$ and then take expectations, which is why $v_{(t)}$ does not explicitly appear in the expressions. For details, refer to lines 710 or 733 in Appendix D.2. --- **Q6: Does the relation $v = \frac{1}{n} \sum_i r_i u_i$ always hold?** **A6**: No, this relation holds at epoch 0. However, with a given learning rate $\eta$, our derivation shows that $v_{(t)} = M(t, \eta) \frac{1}{n} \sum r u_{(0)}$, where $M(t, \eta)$ is a mapping function detailed in Proposition 1 (lines 625-630). --- **Q7: It is interesting to discover that CF has a unique advantage such that the adversarial mechanism works even for clean data, unlike in basic classification tasks. Why is collaborative filtering so special?** **A7**: Thanks for acknowledging our work.
The key distinction may lie in the nature of the tasks, particularly in the parameter search space. Consider the following simplification (one-layer classifier and MF): - Traditional classification tasks are typically formulated as follows: - Given a set of samples $(x_i, y_i)_{i=0}^n$, where $x \in \mathbb{R}^{d_1}$ and $y \in \mathbb{R}^{d_2}$, the goal is to train a classifier $f(x_i) = w^Tx_i + b$. - The trainable component is the classifier $w \in \mathbb{R}^{d_1 \times d_2}$, and adversarial perturbations compel $w$ to locate a decision boundary within $\mathbb{R}^{d_1 \times d_2}$ that satisfies adversarial losses. - For recommender systems, where $f(u_i, v_i) = u_i^T v_i$, involving $M$ users and $N$ items: - The application of adversarial perturbations allows for a broader parameter search space, specifically $MN \times \mathbb{R}^d \times \mathbb{R}^d$. Traditional classification tasks **require $w$ to satisfy all instances and their adversarial counterparts**, whereas in recommendation tasks, **adjustments to user and item representations can satisfy each other's adversarial counterparts**. --- **Q8: I recommend that the authors provide intuitive explanations for the theorems and a brief outline of the proof procedure.** **A8**: Thank you for your suggestion. We will enhance the clarity by providing intuitive explanations and outlining the proofs. Due to space constraints on the responses, we will further supplement these in subsequent versions. --- **Q9: The paper should include more experimental details.** **A9**: Thanks for your suggestion. To address the need for additional experimental details, we have included them in Appendix C.1: - Detailed discussions on attack and defense setups are provided from lines 589 to 600. - As you mentioned, our backbone model follows LightGCN. We will emphasize this aspect more prominently in future revisions. --- We appreciate your thorough feedback and recognition of the significance of our research. 
We believe these clarifications adequately address your concerns and look forward to your reconsideration and reevaluation of our work. [1] Fight Fire with Fire: Towards Robust Recommender Systems via Adversarial Poisoning Training. SIGIR'21 --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. My concerns have not been well addressed, particularly in the gap between the theoretical analyses and practical applications. The initialization, the prediction functions, and the model architectures may not be closely aligned with practice. Nevertheless, I acknowledge the theoretical contribution of this work and would like to keep my positive score for this work. --- Reply to Comment 1.1.1: Title: Further Clarification on Initialization, Prediction Functions, and Model Architectures Comment: Thank you for your comments and for acknowledging the theoretical contributions of our work. We would like to clarify that the initialization (Q1) and prediction functions (Q3) are employed solely to simplify the formulas and aid in defining certain concepts. **These choices do not impact the theoretical results presented in our study**. Furthermore, we use the **widely adopted matrix factorization (MF) model for theoretical derivation** [1][2], which is consistent with existing mainstream architectures. --- ## Gaussian Initialization (Q1,A1) Regarding Q1, the Gaussian initialization method is chosen to derive more precise and clearer bounds. **Even without introducing Gaussian initialization, we still achieve the following results:** - ACF outperforms traditional CF. - User-adaptive magnitude further enhances the effectiveness of ACF.
For instance, as detailed in Appendix D1.1, line 667, we have shown: $\mathbb{P}_{v_0 \sim \mathcal{N}(\bar{u}, \frac{\sigma^2}{n-1}I)} \left[f(u, v) \neq r \mid (u, r) \right] - \mathbb{P}_{v_0 \sim \mathcal{N}(\bar{u}, \frac{\sigma^2}{n-1}I)}^{\mathrm{adv}} \left[f(u, v) \neq r \mid (u, r) \right] = \mathbb{P}_{v_0 \sim \mathcal{N}(\bar{u}, \frac{\sigma^2}{n-1}I)}\left[-\eta (1+\lambda) \gamma_t^{u} M(t, \eta) \frac{\Vert v_0\Vert^2}{\Vert u_t\Vert} < \langle \frac{r u_t}{\Vert u_t \Vert}, v_0 \rangle \leq -\eta M(t, \eta) \frac{\Vert v_0\Vert^2}{\Vert u_t\Vert} \right] \ge 0.$ This result already supports Theorem 1. By leveraging the properties of the Gaussian distribution, we are able to derive a more precise and clearer formula: $\mathbb{P}_{v_0 \sim \mathcal{N}(\bar{u}, \frac{\sigma^2}{n-1}I)}\left[-\eta (1+\lambda) \gamma_t^{u} M(t, \eta) \frac{\Vert v_0\Vert^2}{\Vert u_t\Vert} < \langle \frac{r u_t}{\Vert u_t \Vert}, v_0 \rangle \leq -\eta M(t, \eta) \frac{\Vert v_0\Vert^2}{\Vert u_t\Vert} \right] = \Phi\left( \frac{\sqrt{n-1}}{\sigma}\left( \eta (1+\lambda) \gamma_t^{u} M(t, \eta) \frac{\Vert \bar{u} \Vert^2 + \frac{d \sigma^2}{n-1}}{\Vert u_t\Vert} + \langle \frac{r u_t}{\Vert u_t \Vert}, \bar{u} \rangle \right)\right) - \Phi \left( \frac{\sqrt{n-1}}{\sigma} \left(\eta M(t, \eta) \frac{\Vert \bar{u} \Vert^2 + \frac{d \sigma^2}{n-1}}{\Vert u_t\Vert} + \langle \frac{r u_t}{\Vert u_t \Vert}, \bar{u} \rangle\right) \right) > 0.$ This result also applies to Theorems 2-4, as discussed in A1 of our rebuttal. --- ## Prediction Functions (Q3,A3) Regarding the prediction functions (Q3), instead of the sign function, we could have used another prediction function. **Even without the sign function, we can still derive the subsequent results**: - ACF outperforms traditional CF. - User-adaptive magnitude further enhances the effectiveness of ACF.
We only utilized the sign function specifically to define the recommendation error, which aligns with existing publications [3][4]. This choice allows us to express the recommendation error as: $\mathbb{P}_{v_0 \sim \mathcal{N}(\bar{u}, \frac{\sigma^2}{n-1}I)}\left[\mathrm{sgn}(\langle u_t, v_t \rangle) \neq r \mid (u, r)\right] = \mathbb{P}_{v_0 \sim \mathcal{N}(\bar{u}, \frac{\sigma^2}{n-1}I)} \left[\langle u_t, v_t\rangle \cdot r < 0 \mid (u, r) \right],$ as we elaborated in A3 of our rebuttal. --- ## Model Architectures In both the Theorems in Section 3 and the Corollary in Section 4, we adopt MF as the backbone model, which is consistent with existing mainstream architectures [1][2]. --- It seems there may be some misunderstanding that has led to the perception of a gap between our theoretical analyses and practical applications. We have provided further explanations on the three points you mentioned, and we hope you will reconsider the contributions of our work. Thank you very much! [1] Adversarial Personalized Ranking for Recommendation. SIGIR'18 [2] Adversarial Collaborative Filtering for Free. RecSys'23 [3] Adversarially robust generalization requires more data. NeurIPS'21 [4] Fight Fire with Fire: Towards Robust Recommender Systems via Adversarial Poisoning Training. SIGIR'21 --- Rebuttal 2: Title: Follow up Comment: Thank you for taking the time and effort to review our paper. We have carefully considered your comments and responded to each point with detailed clarifications and supporting evidence. As the deadline for the discussion period between authors and reviewers is approaching, we kindly ask whether our responses have sufficiently addressed your concerns. Please let us know if there are any further questions or issues that you would like us to address. We look forward to hearing from you soon. Thank you once again for your time and effort. Sincerely, The Authors
Summary: This paper targets adversarial collaborative filtering and provides both deeper understanding and improvement. The paper provides a theoretical explanation for ACF's improvement in robustness and effectiveness. PamaCF, which is presented in the Appendix and performs CF more robustly, is evaluated on three recommendation datasets. Strengths: 1. The paper is generally well-written and easy to follow. 2. The paper provides a theoretical explanation for ACF's improvement in robustness and effectiveness. Weaknesses: 1. Part of the experiments are missing or incomplete. 2. The loss terms in Section 3 do not fully align with the cited papers. Technical Quality: 2 Clarity: 4 Questions for Authors: 1. What's the performance of various ACF methods on LightGCN? It would be more comprehensive to report another table similar to Table 1 about LightGCN. 2. Why does the paper present the recommendation performance at HR@50 and NDCG@50 in Table 1 and present the performance against attacks in Table 2 with T-HR@50 and T-NDCG@50? 3. Are target item promotions the only attack PamaCF can prevent? 4. The loss of Gaussian Recsys in Eq 2 does not fully align with [18]. The same holds regarding Eq 3 and [15]. Why do the authors make such changes? The loss term is the foundation of all the derivations. All previous works are based upon pairwise loss, while the paper is based on pointwise loss. Confidence: 4 Soundness: 2 Presentation: 4 Contribution: 3 Limitations: Although the paper presents an important theoretical and experimental analysis of ACF, the reviewer feels a strong sense of misalignment between Sections 3 and 4, where the theoretical results in Section 3 do not fully motivate PamaCF in Section 4. The results of Section 3 seem to be motivated by pointwise loss, while PamaCF is built upon pairwise loss. Also, part of the experimental setup is not convincing enough. However, the reviewer might be persuaded with further evidence.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your detailed feedback and for acknowledging the readability and theoretical clarity of our paper. Below, we address each of your questions to clarify misunderstandings and address concerns. We hope this clarifies the contributions of our work and look forward to your reconsideration. --- **W1: Part of the experiments are missing and incomplete.** **W-A1**: Due to space constraints of the main text, we focused on presenting results that strongly support our claims. Other reviewers also praised the thoroughness and persuasiveness of our experiments. To address your concerns comprehensively, detailed explanations and additional support will be provided in A1-A3, corresponding to Q1-Q3. --- **Q1: What is the performance of different ACF methods on LightGCN?** **A1**: Thanks for your suggestion. In response, experimental results for LightGCN will be included. Our findings show improved recommendation performance for PamaCF integrated with LightGCN. **Refer to Table 1 in the PDF in the global response for details.** --- **Q2: Why does the paper use HR@50 and NDCG@50 in Table 1 to present recommendation performance, while Table 2 uses T-HR@50 and T-NDCG@50 to evaluate performance against attacks?** **A2:** Thanks for your question. In Table 1, we **actually used HR@20 and NDCG@20**, not HR@50 and NDCG@50. Your query addresses two points: (1) the choice of metrics and (2) the difference in k values. Here are our responses: 1. HR@k and NDCG@k evaluate recommendation performance, while T-HR@k and T-NDCG@k assess system robustness against attacks. We provide detailed explanations in the **Evaluation Metrics section** (lines 256-269). 2. Our selection of k values follows standard practices in the field, detailed in the **Implementation Details section** (lines 589-592). This approach ensures consistency with research that distinguishes between metrics for recommendation and defense strategies [1], underscoring the reliability of our defense methods.
- HR@20 and NDCG@20 focus on the top 20 recommendations, typical in collaborative filtering assessments [2]. - T-HR@50 and T-NDCG@50 evaluate defense effectiveness against attacks, commonly using a top 50 ranking [3][4]. --- **Q3: Is preventing target item promotions the only attack PamaCF can mitigate?** **A3**: **PamaCF is capable of defending against various poisoning attacks**. These attacks typically aim to promote specific items or degrade system performance. Item promotion attacks are well-documented [3][4], while performance degradation attacks involve harmful item embeddings mainly in federated recommender systems [5]. - The theoretical analysis (Section 3) of ACF considers the impact of poisoning attacks on recommendation errors, without restricting attack objectives. **Theoretically, PamaCF can mitigate poisoning attacks aimed at different objectives.** - Empirical validation shows PamaCF not only counters item promotion attacks but also enhances overall recommendation performance. - To further validate PamaCF's robustness, we simulate performance degradation attacks with random user behaviors, confirming its effectiveness. - PamaCF demonstrates strong defense capabilities even against performance degradation attacks. - **Refer to Table 2 in the PDF in the global response for details.** --- **Q4 (W2): The loss function of Gaussian Recsys in Eq. 2 does not fully correspond to [6]. The same holds regarding Eq. 3 and [7].** **A4**: **The loss function used in Gaussian Recsys does correspond to [6]**. [6] covers various loss functions, addressing any confusion about the specific one referenced. We refer to the loss function detailed in [6]'s "DETAILED PROOFS" section. The mention of "introducing the adversarial loss [7]" (line 112) in our paper refers to adopting the adversarial training paradigm (as described in Eq. 1 in the Preliminary section), where **adversarial perturbations are added to the original loss (Eq.
2), rather than using the APR loss function from [7]**. - For instance, the adversarial component discussed in [6] involves $\Delta_{v} = \arg\min_{\Delta_{v}, \Vert \Delta_{v} \Vert \le \epsilon} \langle r(u), v + \Delta_{v} \rangle$, where the original loss is $\mathcal{L}(\Theta+\Delta) = \langle r(u+ \Delta_{u}), v + \Delta_{v} \rangle$. This aligns precisely with the loss function we employ in Section 3. --- **Q5: All previous works are based on pairwise loss, whereas the paper is based on pointwise loss.** **A5**: It is worth noting that many methods also adopt pointwise loss [8][9]. Moreover, in Section 3, our theoretical results use pointwise loss for clarity and simplicity. Section 4's Corollary 1 **extends the discussed pointwise loss to encompass more complex functions**, as detailed in Appendix D.3. --- Thanks sincerely for your thorough review of our paper. We have addressed each of your concerns and clarified misunderstandings. Based on these clarifications, we kindly request your reconsideration of the overall assessment of this paper. We firmly believe that our work contributes valuable insights and advances the state of the art in robust recommender systems, making a significant contribution to the research community. [1] LoRec: Combating Poisons with Large Language Model for Robust Sequential Recommendation. SIGIR'24 [2] LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. SIGIR'20 [3] Revisiting Injective Attacks on Recommender Systems. NeurIPS'22 [4] Revisiting Adversarially Learned Injection Attacks Against Recommender Systems. RecSys'20 [5] UA-FedRec: Untargeted Attack on Federated News Recommendation. SIGKDD'23 [6] Fight Fire with Fire: Towards Robust Recommender Systems via Adversarial Poisoning Training. SIGIR'21 [7] Adversarial Personalized Ranking for Recommendation. SIGIR'18 [8] Denoising Implicit Feedback for Recommendation. WSDM'21 [9] BSL: Understanding and Improving Softmax Loss for Recommendation. 
ICDE'24 --- Rebuttal 2: Title: Follow up Comment: Thank you for taking the time and effort to review our paper. We have carefully considered your comments and responded to each point with detailed clarifications and supporting evidence. As the deadline for the discussion period between authors and reviewers is approaching, we kindly ask whether our responses have sufficiently addressed your concerns. Please let us know if there are any further questions or issues that you would like us to address. We look forward to hearing from you soon. Thank you once again for your time and effort. Sincerely, The Authors --- Rebuttal 3: Title: Kindly Reminder Comment: Thank you once again for dedicating your time and effort to reviewing our paper. We are following up on our message sent yesterday regarding the discussion period between authors and reviewers, which concludes on August 13. We understand that you may have a busy schedule and truly appreciate the time and effort you have already invested in reviewing our work. We have carefully considered your comments and provided detailed clarifications and support for each concern in our rebuttal. Specifically: - **For W1:** Other reviewers (gsZx, 6kCb, MDNE) also praised the thoroughness and persuasiveness of our experiments. We understand your concerns and have provided detailed explanations and additional support in A1-A3, corresponding to Q1-Q3. - **For W2:** We clarified that the loss terms in Section 3 indeed align with the cited papers, as discussed in A4 corresponding to Q4. We are eager to learn whether our response has sufficiently addressed your concerns and if there are any further questions or suggestions. Please accept our apologies for reaching out again so soon; however, we are committed to meeting the deadline and ensuring the timely progression of our paper through the discussion process. If you require any additional information or clarification, please do not hesitate to reach out. 
Once again, we express our gratitude for your feedback and your continued assistance in this process. Sincerely, The Authors --- Rebuttal Comment 3.1: Comment: Thanks to the authors for the rebuttal. However, the reviewer is still not convinced this paper is meaningful from the perspective of collaborative filtering. The evaluation is not solid from a recommendation research perspective. At least two values of k should be adopted for HR@k and NDCG@k to validate that this method is not biased towards specific metrics. Also, adopting different losses between the baseline and proposed methods is not convincing enough to demonstrate PamaCF's effectiveness for recommendation tasks. Besides, some of the responses are still not clear to the reviewer. Hence, the reviewer will keep the score. If other reviewers and the AC strongly think this paper is acceptable, the reviewer respects their decision. --- Reply to Comment 3.1.1: Title: Further Clarification Comment: Thank you for your comments. We appreciate your feedback. However, your reply still contains some misunderstandings. --- ### Regarding the Loss Function Your understanding of the loss function used in our experiments is incorrect. **We indeed use the same base loss function (BPR loss) across all the baselines.** Please refer to our code in the Supplementary Material (line 223 in `./utls/trainer.py`), where all the methods share a unified loss function. --- ### Regarding the Selection of $k$ in Our Evaluation Our experimental setup involves: - Two types of backbone models - Four types of attacks - Five defense baselines - Four evaluation metrics - Three datasets For simplicity, clarity of experimental results, and due to space limitations, we only presented results with the most common choice of $k = 20$, as seen in existing collaborative filtering works [1-5]. We also evaluated different $k$ values. Below are partial results for $k = 10$. 
You can also see from the code in the Supplementary Material (line 41 in `./meta_config.py`) that we set up multiple $k$ values.

| Bandwagon Atk. Gowalla | HR@10 | NDCG@10 |
|------------------------|-------|---------|
| **MF** (Clean) | 7.494 ± 0.080 | 5.846 ± 0.035 |
| **MF** | 7.410 ± 0.055 | 5.812 ± 0.044 |
| +**StDenoise** | 6.954 ± 0.086 | 7.132 ± 0.029 |
| +**GraphRfi** | 6.860 ± 0.042 | 6.910 ± 0.060 |
| +**APR** | _9.204 ± 0.046_ | _9.564 ± 0.027_ |
| +**SharpCF** | 8.705 ± 0.076 | 8.723 ± 0.058 |
| +**PamaCF** | **9.492 ± 0.021** | **9.838 ± 0.013** |
| **Gain** | +3.13% | +2.86% |
| **Gain w.r.t. MF** | +28.10% | +69.27% |

**Our method consistently performs best across different values of $k$.** We believe this selection is reasonable. We are willing to provide results for multiple $k$ values in the final version. We believe that these results under a different $k$, together with the experiments across multiple scenarios, are sufficient to demonstrate the superiority of our method. [1] Neural Graph Collaborative Filtering. SIGIR'19 [2] LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. SIGIR'20 [3] Empowering Collaborative Filtering with Principled Adversarial Contrastive Loss. NeurIPS'23 [4] Invariant Collaborative Filtering to Popularity Distribution Shift. WebConf'23 [5] Distributionally Robust Graph-based Recommendation System. WebConf'24 --- ### Contribution of Our Work on Collaborative Filtering Adversarial training has been empirically demonstrated to improve both model robustness and recommendation performance. In our work, we have **explained its effectiveness in different scenarios from a theoretical point of view** and **provided ways to further enhance the effectiveness of adversarial training.** We believe that our work is valuable and meaningful for the collaborative filtering field, as also noted by the other three reviewers. 
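As a side note for readers, the HR@k and NDCG@k scores discussed above can be computed for a single user with binary relevance roughly as follows. This is a generic illustrative sketch, not the authors' evaluation code; benchmark numbers additionally aggregate over all users.

```python
import math

def hit_ratio_at_k(ranked_items, relevant_items, k):
    """HR@k: 1 if any held-out positive appears in the top-k ranking, else 0."""
    return float(any(item in relevant_items for item in ranked_items[:k]))

def ndcg_at_k(ranked_items, relevant_items, k):
    """NDCG@k with binary relevance: DCG of the ranking over the ideal DCG."""
    dcg = sum(
        1.0 / math.log2(rank + 2)  # rank is 0-based, so the discount is log2(rank+2)
        for rank, item in enumerate(ranked_items[:k])
        if item in relevant_items
    )
    ideal_hits = min(len(relevant_items), k)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# Toy example: items 3 and 7 are the user's held-out positives.
ranking = [5, 3, 9, 7, 1]
print(hit_ratio_at_k(ranking, {3, 7}, k=5))  # 1.0
print(ndcg_at_k(ranking, {3, 7}, k=5))
```

Per-user scores like these are then averaged over the test users to obtain table entries such as HR@10 and NDCG@10.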
--- For any of the previous replies, if there is anything else that you feel is unclear to you, please let us know. We would be happy to discuss further. --- Based on these clarifications, we kindly request you to reconsider the overall assessment of our paper. We firmly believe that our work contributes valuable insights and advances the state-of-the-art in robust recommender systems, making a significant contribution to the research community. Sincerely, The Authors
Summary: Adversarial training has been observed to degrade model performance on clean samples in the CV domain; however, ACF in recommender systems can not only enhance robustness against poisoning attacks but also improve recommendation performance. This paper provides a comprehensive theoretical understanding of this phenomenon. Specifically, this paper shows that performance with adversarial training is better than without adversarial training in both clean and $\alpha$-poisoned scenarios. In addition, this paper provides lower and upper bounds for the recommendation error. This paper also proposes a learning approach that can be used in more practical scenarios. Experiments on three datasets demonstrate the effectiveness of the proposed method in terms of robustness and recommendation performance. Strengths: $\bullet~$ The problem studied is important and relevant. $\bullet~$ The idea is interesting and novel. $\bullet~$ The motivation is reasonable. $\bullet~$ The theoretical results are sound and are verified in Figure 1. $\bullet~$ The evaluations are solid and convincing. Weaknesses: $\bullet~$ Can the authors discuss the feasibility of relaxing the assumptions in theorems such as the Gaussian recommendation system? In addition, is the extension to multiple items natural? $\bullet~$ It will be better if the magnitude of user latent vectors is provided in Figure 1. Thus we can align the empirical findings and theoretical findings more straightforwardly. Maybe we can observe that the magnitude of the latent vector of user 3 is larger than that of user 2. $\bullet~$ Is there a more reasonable way to design the form of $c(\mathbf{u}, t)$ and equation 5? In addition, how can we choose $\rho$ in practice? Is $\rho$ learnable or is it only a hyper-parameter? $\bullet~$ Some presentation issues. For example, the $\alpha$ in line 184 should be bolded, and it should be $\Phi(\cdot)$ in line 176. 
In addition, the results in Table 1 should retain 2 digits instead of 3 digits to enhance readability. The text in Figures 2 and 3 (such as the x-axis and y-axis labels) should be larger. Technical Quality: 4 Clarity: 3 Questions for Authors: See the weaknesses part for the questions. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes, the authors adequately discuss the limitations and broader impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive feedback. We sincerely appreciate your recognition of the importance, novelty, reasonable motivation, sound theoretical results, and the solid evaluations presented in our work. Below, we address each of your questions to resolve your concerns: --- **Q1: Can the authors discuss the feasibility of relaxing the assumption in theorems such as the Gaussian recommender system? In addition, is the extension to multiple items natural?** **A1**: We appreciate your question. The assumption in the Gaussian recommender system serves primarily for clarity and simplicity of theoretical results, as also adopted in [1]. To relax this assumption, one approach involves considering more complex loss functions or scenarios involving interactions with multiple items. - **We elaborate on these scenarios in Corollary 1**, with detailed derivations available in Appendix D.3. - Specifically, for any dot-product-based loss function involving multiple items, we ascertain the range of perturbation magnitude for each user. - While the formulations in Appendix D.3 are complex, they are crucial for identifying the relationship between perturbation magnitudes and user embeddings. --- **Q2: It will be better if the magnitude of user latent vectors is provided in Figure 1. Thus we can align the empirical findings and theoretical findings more straightforwardly.** **A2**: Thank you for this excellent suggestion. We will incorporate the magnitude of user latent vectors into Figure 1. This addition will enhance the alignment between empirical findings and theoretical insights. The table below presents $\Vert u \Vert^2$ for the five users in Figure 1, which will be included in the revised paper. **The revised image can be seen in the PDF (Figure 1) in the global response**.

|UserID|user 1|user 2|user 3|user 4|user 5|
|--|------|------|------|------|------|
|$\Vert u \Vert^2$|3.7333|2.0926|11.895|6.7923|6.8239|

--- **Q3: How can we choose $\rho$ in practice? 
Is $\rho$ learnable or is it only a hyper-parameter?** **A3**: $\rho$ is considered a hyper-parameter used to adjust the overall magnitude in practical applications. In Experiment 5.4, we extensively studied the impact of $\rho$ on method performance, revealing that: - Even with a small $\rho$, noticeable improvements are observed. - Optimal values for $\rho$ typically fall within the range of 0.1 to 1.0 across diverse datasets. In practice, we typically search for $\rho$ within this range at intervals of 0.1. --- **Q4: Is there a more reasonable way to design the form of $c(u,t)$ and equation 5?** **A4**: Our paper introduces a specific design pattern for $c(u,t)$ that aligns with the criteria outlined in Corollary 1. The term $\rho \cdot c(u,t)$ allows for flexible adjustment of the overall magnitude. Moving forward, we aim to explore more intricate design patterns that better leverage the properties identified in Corollary 1. For instance, we will investigate alternative mapping functions to replace the sigmoid function, which may better capture the relationships outlined in Corollary 1. These efforts are aimed at enhancing the effectiveness and adaptability of our approach in practical applications. --- **Q5: Some presentation issues.** **A5**: Thank you for your valuable suggestions aimed at enhancing the presentation of our paper. We will take the following actions to address these concerns: - Ensure accurate representation by verifying the correct usage of bold symbols. - Improve readability by reducing the number of digits in the tables. - Enhance visibility by increasing the text size in Figures 2 and 3. --- We appreciate your detailed comments and constructive feedback, as well as your recognition of the importance of our work. We believe these clarifications will effectively address your concerns. We eagerly await your reconsideration and reevaluation of our work. 
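The hyper-parameter search described above (trying $\rho$ from 0.1 to 1.0 in steps of 0.1 and keeping the best value) can be sketched as follows. This is a minimal illustration; `evaluate` is a hypothetical stand-in for a validation-performance function, not part of the paper.

```python
def search_rho(evaluate, lo=0.1, hi=1.0, step=0.1):
    """Return the rho in [lo, hi] (at `step` increments) that maximizes evaluate(rho)."""
    n_steps = int(round((hi - lo) / step))
    # Round to avoid float drift (0.1 + 3*0.1 -> 0.4000000000000001).
    candidates = [round(lo + i * step, 10) for i in range(n_steps + 1)]
    scores = {rho: evaluate(rho) for rho in candidates}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy stand-in: pretend validation performance peaks at rho = 0.4.
best_rho, best_score = search_rho(lambda rho: -(rho - 0.4) ** 2)
print(best_rho)  # 0.4
```

In practice `evaluate` would train (or fine-tune) the model with the given $\rho$ and return a validation metric such as NDCG@20.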
[1] Fight Fire with Fire: Towards Robust Recommender Systems via Adversarial Poisoning Training. SIGIR'21 --- Rebuttal 2: Title: Follow up Comment: Thank you for taking the time and effort to review our paper. We have carefully considered your comments and responded to each point with detailed clarifications and supporting evidence. As the deadline for the discussion period between authors and reviewers is approaching, we kindly ask whether our responses have sufficiently addressed your concerns. Please let us know if there are any further questions or issues that you would like us to address. We look forward to hearing from you soon. Thank you once again for your time and effort. Sincerely, The Authors --- Rebuttal Comment 2.1: Comment: Thanks for the detailed rebuttal, which addresses my concerns. I will keep my rating unchanged. --- Reply to Comment 2.1.1: Comment: We are delighted to hear that our response effectively addressed your concerns. We sincerely appreciate your support and the positive evaluation of our work. Sincerely, The Authors
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to all reviewers for your valuable feedback and for taking the time to evaluate our work. We are very encouraged that our main contributions have been acknowledged by all reviewers. We have taken great care to address any concerns or misunderstandings raised by the reviewers by providing detailed clarifications and additional support. We believe that our efforts have been worthwhile and look forward to your continued attention to our paper. To facilitate the assessment of our contributions, let us summarize the strengths recognized by reviewers: - Important research problem (Reviewers gsZx, 6kCb, MDNE) - Interesting and novel idea (Reviewers gsZx, MDNE) - Reasonable motivation (Reviewers gsZx, MDNE) - Sound theoretical results (Reviewers gsZx, svg8, 6kCb, MDNE) - Rigorous mathematical proofs (Reviewer MDNE) - Solid and convincing experiments (Reviewers gsZx, 6kCb, MDNE) - Well-structured and well-written paper (Reviewers svg8, MDNE) In our responses, we have diligently analyzed concerns and provided comprehensive evidence and justifications. Where there were misunderstandings, we offered detailed clarifications with references to specific sections of the paper. Additional figures and tables can be found in the attached PDF. In conclusion, we believe that our paper makes meaningful contributions to the field of robust recommender systems and will benefit the research community. We would like to thank all the reviewers for your valuable and constructive comments on improving our paper. We kindly request that the reviewers reevaluate our work based on our responses. If there are any further concerns that would prevent the reviewer from increasing the score, please let us know, and we would be happy to address these concerns during the discussion phase. We look forward to hearing from you. Best regards, Authors of paper 5740 Pdf: /pdf/c85d69c189e5e3aaf877ae553cb06fd4a8d2734e.pdf
NeurIPS_2024_submissions_huggingface
2024
IllumiNeRF: 3D Relighting Without Inverse Rendering
Accept (poster)
Summary: This paper introduces a new paradigm for relightable 3D reconstruction: leverage a Relighting Diffusion Model to generate relit images and distill them into a latent NeRF. The proposed paradigm demonstrates superior performance compared to existing methods, which are based on inverse rendering, on both synthetic and real-world datasets. Strengths: - The writing is clear and the content is well-structured. The authors provide comprehensive implementation details. - The paper addresses a long-standing challenge by introducing a novel paradigm, which diverges from conventional inverse rendering methods. This approach offers a promising alternative for relighting. - Thorough numerical experiments showcase the effectiveness of the proposed method. The results are convincing. Weaknesses: One major weakness of this method is its efficiency. Unlike traditional methods that can directly relight a 3D scene once inverse rendering is performed, the proposed approach requires optimizing a NeRF for each new target lighting condition. Furthermore, the computation of the latent NeRF involves a significant time investment (0.75 hours using 16 A100 GPUs). It would be beneficial to discuss potential solutions for this issue. Technical Quality: 3 Clarity: 4 Questions for Authors: - How many samples ($S$) were used during the experiments presented in Table 1 and Table 2? - As this method models the probability of relit images and fixes $Z=0$ during inference, it would be interesting to investigate how the performance varies when sampling different $Z$ values. - Typo in Line $446$ Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations have been well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## About efficiency We acknowledge that our approach currently prioritizes quality over real-time performance, as generating new samples with the RDM and optimizing a NeRF for each new lighting condition is computationally intensive (L252-254). A potential solution to mitigate this limitation is to adopt an amortization strategy, similar to [a]. Specifically, we could train a single latent-NeRF conditioned on both latent codes and a latent representation of the given lighting. This design would allow us to train a single latent-NeRF model capable of handling multiple target illuminations. As a result, when presented with a new lighting condition, the trained model could directly generate the 3D relit version without requiring further optimization. [a] Lorraine et al., ATT3D: Amortized Text-to-3D Object Synthesis. ICCV 2023. ## Number of samples used for Tab. 1 and 2 We use 16 samples per view for all benchmarking evaluations. ## Why is Z = 0? It is unclear how we could use the optimal Z that best matches the actual material besides optimizing Z using the test set images, which does not seem fair. We found that setting Z to 0 yields good results across both our synthetic and real-world benchmarks (see Tab. 1 and 2, and Fig. 5 and 6). We'll clarify this in the final version of our paper. We provide more qualitative results for different latent codes in Fig. 1 (see PDF). These results demonstrate that the optimized latent codes effectively capture various plausible explanations of the materials. ## Typo Thanks a lot for pointing this out. We will correct it.
Summary: The paper proposes a method for novel view synthesis under novel lighting given multi-view observations of an object. The pipeline is composed of a learned single-image-based relighting model based on image diffusion models, and a latent NeRF model that reconstructs the appearance of the relit object by reconciling relit images across views and samples. The relighting diffusion model (RDM) is conditioned on radiance cues generated from the geometry and target lighting under samples of roughness values, using a finetuned ControlNet architecture. The method is evaluated on both synthetic and real-world scenes, against baseline methods (mostly object-centric inverse rendering models), with both qualitative and quantitative results. An ablation study is also provided on design choices (e.g. latent/no latent NeRF, and number of diffusion samples). Strengths: [1] An alternative approach to object-centric relighting without optimization-based inverse rendering. The main novelty that distinguishes the method from previous work on this task is that it leverages a learned image-based diffusion model to directly predict the relit images, and then reconstructs a 3D representation of the relit object for novel view synthesis. This approach opens up a new avenue for solving the task besides baseline methods. The diffusion model is conditioned on 'radiance cues' as an input lighting proxy, which itself is not new in a learning-based inverse rendering setting, but the method is able to retarget the lighting proxy as a condition for a diffusion model, which might inspire future works using diffusion models to hallucinate lighting. [2] The design of the latent NeRF to reconcile sampled relit results from the diffusion model. The paper demonstrates that the latent NeRF is able to better reconcile sampled results from a diffusion model while preserving the details and decent specular highlights. [3] Extensive evaluation. 
The paper is able to include extensive evaluation on both real-world and synthetic datasets on various metrics. Weaknesses: [1] Effectiveness of the latent NeRF. The paper claims that the latent NeRF is able to reconcile differences in the sampled results from the RDM. However, insufficient analysis is provided for justification. For instance, in Fig. 4, results in (a) seem to present more specularity than what is actually present in the hotdog scene (in other words, the RDM seems to produce unrealistic relit images), and results from the latent NeRF in (b) seem to be an averaged version of the samples, with most of the specularity averaged out, arriving at a mostly diffuse appearance except for areas where the specularity agrees among all samples. As a result, in Fig. 6, when compared to inverse rendering methods like Neural-PBIR, although the proposed method is able to produce better specularity in highlighted areas (thanks to explicitly conditioning on radiance cues), the results of the proposed method look 'dull', where shadows and high-frequency details seem to be lacking. This raises questions about both the RDM's ability to produce consistent and photorealistic relit images, and the latent NeRF's ability to successfully reconcile inconsistency in the sampled results without sacrificing details and shadows. [2] The claim of better specularity in the results is questionable. The paper explains that, when compared to Neural-PBIR, the proposed method is in second place numerically because the approximate lighting ground truth from the light probe unfairly favors Neural-PBIR, even though the proposed method produces more convincing specularity. However, this is largely unjustified, as (1) it is not clear how much weight the specular pixels carry in the overall metrics, and (2) it is unclear whether the proposed method actually produces better specularity in the absence of true ground truth for relit objects. 
Moreover, as mentioned in [1], the proposed method seems to lose more details (e.g. in the spikes of the cactus, or details in the ball). In this case, even if the method excels in specularity, this is not the single most important property that is accounted for in those metrics, or visually, when evaluating the quality of the results. Technical Quality: 3 Clarity: 4 Questions for Authors: [1] Quality of the sampled results from the RDM. Why do some results look overly specular (e.g. the bread in the hotdog scene in Fig. 4)? Could more examples of RDM results be provided? [2] Reconciliation by the latent NeRF. Does the latent NeRF lose details and specularity in sampled results from the RDM when reconciling the results into a NeRF representation? Also, if there are view-dependent effects in the sampled results, does the latent NeRF properly preserve those effects? Again, more examples should be provided in order to further examine the behavior of the two modules beyond one single scene (and especially for synthetic scenes, where 'true' ground truth of relit objects is available). There are additional results on Stanford-ORB scenes in the Supp, but analysis is needed in the main paper to support the method using those results. [3] Additional ablation. Is it helpful to use more or fewer roughness levels in generating radiance cues? [4] In designing baselines, why not compare against ALL methods in both synthetic and real-world settings? Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Fig. 4.(a)’s generations are unrealistic "More specularity than what is actually present in the hotdog scene" does not imply unrealistic results. Without known source lighting, the RDM samples from the entire distribution of plausible relighting outcomes. We showcased random samples from this distribution, some of which exhibit higher specularity on the hotdog. However, due to ambiguities between materials and lighting (L127), even these seemingly less-likely images are plausible. The subsequent latent-NeRF optimization process focuses on finding the most probable results. ## Averaged results in Fig. 4.(b) This outcome is to be expected: with a sufficiently large sample size, we accurately capture the underlying material distributions. Latent-NeRF maintains consistent appearance across all samples while mitigating inconsistencies. As shown in Fig. 4(b), it preserves the plate's specular effects seen in most samples from Fig. 4(a) while reducing the bread's glossiness, keeping mild specular effects on the sauces. ## Lack of detail in Fig. 6 Our method surpasses all inverse rendering approaches in capturing high-frequency details, except for Neural-PBIR. Notably, for the salt can (column 2), our method clearly recovers the ingredient list, which most baselines miss. Similar improvements are evident for the toy car (column 6) and gnome (column 7). Compared to Neural-PBIR, our results often match or surpass its fine details, see the teapot (column 1), salt can (column 2), toy car (column 6), and gnome (column 7). The cactus (column 4) is an exception, due to UniSDF's inability to produce high geometric quality, as noted in the main paper (L251). Our method's results will inherently improve with advances in geometry reconstruction, as our RDM trains on perfect synthetic geometry. Neural-PBIR's code is unavailable, but its isolated geometry reconstruction stage (Sec. 3.1 in [39]) would allow us to directly leverage better geometry. 
## Our results lacking shadows in Fig. 6 Our method consistently produces shadows that accurately match the ground truth. Although Neural-PBIR shows more pronounced shadows on the toy car (column 6), these deviate significantly from the ground truth. Shadows depend on geometry and illumination, modeled through our radiance cues, leading to realistic shadow placement (Fig. 4.(a)). This accuracy is maintained through Latent-NeRF optimization (Fig. 4.(b)). ## Consistency of samples from RDM To clarify, we do not claim that our RDM generates "consistent" relit images. As our RDM performs single-image relighting, we anticipate diverse relit images across samples for the same view, as well as across different views (see Fig. 4.(a)). The role of the latent-NeRF is to reconcile these inconsistencies into a cohesive 3D relit result. ## Questioning claim that light probe unfairly favors Neural-PBIR We appreciate this valid concern and would like to clarify. Stanford-ORB provides estimated **per-image** illumination (L235 and [21]’s Fig. 4 and 5), moving a light probe for each image (PDF’s Fig. 2). Ideally, fixed illumination per object would match reality, but aligning the object and light probe is challenging. This limitation in the Stanford-ORB benchmark can significantly affect results, especially in areas with specular highlights (PDF’s Tab. 1). Consequently, there is no “correct” way to do relighting in Stanford-ORB: our results use fixed illumination, while competitors use per-image illumination. We rendered the ground truth geometry and materials, obtained by Stanford-ORB in a controlled studio environment ([21]’s Sec. 3.2.2). Each view was rendered under **fixed** illumination (consistent environment map) and **per-image** illumination (unique environment map per view). Fig. 3 in the PDF shows both renderings alongside the corresponding real image. Note the significant variations, especially in areas with specular highlights (see marked regions and PSNR). 
We computed metrics for each method using both reference renderings as ground truth (Tab. 1 in PDF). Our method excels under fixed illumination but performs worse with per-image illumination. Competitors show the opposite trend, doing better with per-image illumination. Neural-PBIR did not release code or full results, limiting comparisons to 8 images in [39]’s Fig. 10, where it also performs better with per-image setup. We outperform Neural-PBIR in most metrics: |8 images|PSNR-H $\uparrow$|PSNR-L $\uparrow$|SSIM$\uparrow$|LPIPS $\downarrow$| |----|-:|-:|-:|-:| |[fixed] Neural-PBIR |24.61|30.70|0.966|0.032| |[fixed] Ours |25.19|31.33|0.968|0.030| || |[per-image] Neural-PBIR |25.03|30.97|0.966|0.032| |[per-image] Ours |25.04|30.76|0.967|0.030| Computational limitations prevented us from employing per-image illumination, as it would have required optimizing a latent-NeRF for each per-image illumination. Given the trends observed for competitors, we believe our method would demonstrate improved performance if this were feasible. ## More qualitative results for Fig. 4 See Fig. 1 in the PDF. ## About view-dependent effects Our latent-NeRF can maintain view-dependent properties corresponding to each generation. Please refer to the cactus (4th row), pepsi (5th row), grogu (6th row), and teapot (7th row) scenes in Fig. 1 in the PDF. ## Ablation on #roughnesses We empirically found that using four radiance cues yielded strong results (Tab. 1 & 2, Fig. 5 & 6). Given the computational cost of training the RDM (L452), we did not conduct further ablations. However, we are happy to provide additional ablation studies in the final version if the reviewer would like us to. ## Why not compare against all baselines? We used the official benchmark results from [17] and [21] to ensure authenticity. We also obtained genuine qualitative results directly from the benchmark. 
The only exception is Neural-PBIR, as its code is not publicly available, preventing us from applying it to the TensoIR dataset. --- Rebuttal Comment 1.1: Title: Discussion Comment: Dear reviewer q4ex We received diverse ratings for this submission, and you were the only one with a negative initial review. Please review the rebuttal and the comments from other reviewers and join the discussion as soon as possible. Thank you! Your AC --- Rebuttal 2: Comment: We thank the reviewer for their time in reading our rebuttal and providing the feedback. We are pleased that the reviewer appreciates the concept and formulation of our method. Our primary objective was to introduce a novel paradigm with the potential to replace hand-crafted inverse rendering techniques. Although our current method may not significantly outperform all existing baselines, we believe it lays a strong foundation for future advancements in the field. We now address each question below: ## Response to Point [1] ### **Diffusion model input** There seems to be a misunderstanding with respect to the number of views used as input to our relighting diffusion model (RDM). The RDM model (L168 - 172) is a **single-image/view** model. It has never been exposed to “multiview images” (see Eq. (4)). We also agree that it would be interesting to further improve the RDM model by conditioning on multiple views. ### **Unrealistic relit images | little to no chance that the bun is actually of specular material** We want to emphasize that our RDM models the entire distribution of plausible materials given a single observation of an object without known source lighting. While material roughness is still ambiguous when the source lighting is unknown, we agree that the samples with specular hot dog buns are less likely, and would hope that a future multi-view RDM model could improve this. ### **Different appearances for Hotdog’s bun between main paper and rebuttal** For Fig. 
4(a) we chose to show a wide spectrum of plausible samples generated by the diffusion model. For the rebuttal we chose to show more samples, which included less specular buns. This further demonstrates the wide range of possible combinations between lighting and material. The final reconstructions recover the most likely materials (filter out the shiny bun samples) and do not exhibit any specular highlights for the bun, see Fig. 4(b). ## Response to Point [2] ### **On the benchmark with Stanford-ORB** We agree that in the presence of noisy GT lighting the analysis does not favor any method and will rephrase the related language. ## Response to Point [3] We show qualitatively that our method is the only one that somewhat-consistently recovers correct specularities on this dataset, while all other methods exhibit no specular highlights, apart from Neural-PBIR for a single example (the ball in the rightmost column of Figure 6). For the task of relighting this is perhaps one of the most important features. Unfortunately, Neural-PBIR only provides 8 qualitative result images (no source code is currently available), hence it is hard to properly compare our results with their approach, and we are happy to clarify our claims in the paper and limit them to only refer to specular highlights. We also admit that our method sometimes misses details due to the UniSDF reconstruction, and are happy to clarify this in a revised version. ## Overall We are happy that the reviewer recognizes the value of our proposed 3D relighting paradigm, which we consider the core contribution of our work (L42 and 256). Our method significantly outperforms all baselines except Neural-PBIR. However, due to limited results, unavailable code, and challenges with Stanford-ORB, it's difficult to determine which method is superior. We also agree that each component of our pipeline has the potential for further improvements. 
Since the components are independent of each other, improving one should improve overall performance. We are optimistic that future research will address these areas and build upon our framework.
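For reference, the fixed- vs. per-image comparisons above rely on PSNR between renderings and reference images. A minimal sketch of that metric (illustrative only; the Stanford-ORB benchmark's official evaluation script is authoritative, and the PSNR-H/PSNR-L variants differ in how the input images are preprocessed, which is omitted here):

```python
import math

def psnr(reference, rendering, max_val=1.0):
    """Peak signal-to-noise ratio between two same-size images,
    given here as flat lists of floats in [0, max_val]."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, rendering)) / len(reference)
    if mse == 0.0:
        # identical images: no noise, PSNR is unbounded
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher is better; a rendering identical to the reference gives infinite PSNR.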
Summary: The paper proposes a method to relight NeRF representations from a set of multi-view input images. The paper first trains a NeRF model and recovers geometry using UniSDF from the input images. Then, from the mesh, they obtain radiance cues which are used to condition a diffusion model to sample relit images of the scene under a target lighting. Since these relit images are samples from a distribution of plausible relit images, the sampled results are used to train a latent code conditioned NeRF model, and the lighting of the scene can be manipulated by walking through the latent space. The core contribution of the paper is to use the relit images rather than trying to explicitly model and learn the lighting and material properties. Strengths: 1. The paper shows that instead of recovering the material properties, it is enough to relight the training images and optimize the appearance to reach state-of-the-art results. The comparisons demonstrate that the results are state-of-the-art, which is important for improving the task of object relighting. 2. The diffusion model trained is described in detail and seems novel for the task of relighting. Details on the radiance cues and the architecture and training process of the diffusion model are described in detail. The model can be trained on synthetic data and applied to real-world data. 3. The lighting is multi-view consistent with the help of the latent-NeRF model 4. The presentation of the paper is clear, description of the method is high-quality, and comparisons are detailed Weaknesses: 1. The significance of using diffusion models to relight images, which are then used to train a NeRF, is limited to the context of application to NeRFs. Concurrent works have very similar methods in the context of Gaussian splatting (https://repo-sam.inria.fr/fungraph/generative-radiance-field-relighting/) - what is the benefit of this approach over those? Some discussion should be made on this. 2. 
The pipeline seems brittle, requiring many steps to go right and limited by the sum of limitations of many different technologies. The quality of relighting is not only limited to the quality of the diffusion model output but also depends on the quality of the UniSDF mesh reconstruction, which is required to create the radiance cues. This means that a good capture is a necessary prerequisite to relight with this method, and one could simply relight the extracted mesh to achieve a similar effect. 3. I find the abstract a bit misleading: “we first relight each input image using an image diffusion model conditioned on lighting and then reconstruct a Neural Radiance Field (NeRF) with these relit images, from which we render novel views under the target lighting.” This description doesn’t accurately represent what actually happens, as it leaves out the most crucial part of the pipeline (reconstruction + conditioning) and instead suggests that the NeRF can be directly reconstructed from the relit images (Compare figure 2 of the paper). 4. The choice of UniSDF is neither justified nor are alternatives discussed. The part describing this aspect of the approach is relegated to the appendix. Consequently, there is no ablation study to determine if other reconstruction methods might work better. 5. With respect to figure 6: It makes sense to highlight the same area for all depicted approaches. For instance, the blocks and the ball have no highlights, but other approaches seem to perform better in these areas. Additionally, the normalization technique used to match the ground truth images is not ablated. Does this make a difference in the reconstructed quality? I notice that many of the other methods are darker than I’d expect and I wonder if this introduces some artifacts which hurt these methods and promote the proposed method. 6. It is unclear why the paper limits itself only to objects. UniSDF clearly shows good performance for scenes. 
The authors don’t discuss this limitation in the paper. 7. Changing the target light even slightly requires a wait of 45 minutes for a final rendering. Other approaches, as referenced above, seem to not require that much time. Technical Quality: 3 Clarity: 4 Questions for Authors: Many questions are discussed in the weaknesses, specifically: 1. How sensitive is the paper to the UniSDF reconstruction pipeline? Are there better methods which exist? Why does the paper only work for objects? 2. How does the normalization to ground-truth images for visualization of relit images actually affect the quality? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Some limitations are discussed in the paper, but there are additional ones which should be discussed in more detail, such as the long reconstruction time to get new lighting conditions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Method is limited to NeRF Our approach introduces a novel 3D relighting paradigm, employing a Relighting Diffusion Model (RDM) as prior to optimize a relit 3D representation. This approach is adaptable to various 3D representations like NeRFs or Gaussian Splats, since both can be conditioned on latent embeddings. For example, Wild Gaussians [a] (Sec 3.2) uses latent embeddings for a similar purpose in a different context (modeling lighting variations in internet photo collections with Gaussian Splatting). [a] Kulhanek et al., WildGaussians: 3D Gaussian Splatting in the Wild. ArXiv 2024 ## Advantages over concurrent work 1. The concurrent work focuses on forward-facing indoor scenes, while ours addresses full 360 degree 3D object-centric relighting. 2. Our method can handle intricate environment map-based lighting, whereas the concurrent work is limited to single-source directional illumination, a subset of what IllumiNeRF supports. 3. The concurrent work’s diffusion model relights images only to 18 known light directions in the dataset (Sec. 3.2). 4. Their latent code is per-view, while ours is per-sample (See Eq. 3 and Fig. 8) Note that EGSR (July 3-5) took place after the NeurIPS submission deadline (May 22). ## The pipeline seems brittle Our presented numerical metrics and qualitative visualizations either produce SOTA results (Tab. 1 and Fig. 5) or highly compelling results (Tab. 2 and Fig. 6). While we acknowledge that our approach relies on high-quality geometry estimates from UniSDF (L249), it is crucial to note that our diffusion model's training relies solely on synthetic data. This design choice completely decouples the training process from the inference pipeline depicted in Fig. 2. Consequently, our method will benefit from future improvements in geometry reconstruction as well as improvements with regard to the relighting model or the latent NeRF optimization. 
## Simply relighting the extracted mesh We are unsure exactly what the reviewer is suggesting. If the reviewer suggests simply relighting the output from UniSDF, it's important to emphasize that UniSDF generates only geometric information and outgoing radiance, not material properties. Therefore, direct relighting of the mesh is not possible. If the reviewer proposes to perform inverse rendering based on the geometry predicted by UniSDF, we would like to clarify that one of the main benefits of our method is that it does not require inverse rendering (L42 - 46). ## Abstract is misleading Thank you for the suggestion! We will modify the final version of the abstract to include reconstruction and conditioning. ## Use of UniSDF is not justified UniSDF is a SOTA method for surface/SDF-based NeRF approaches (L387) on the Shiny Blender benchmark [43] (see MAE in [47]’s Tab. 1). We build on top of it, as other non-surface methods would make it harder to obtain high-quality geometry for the radiance cues computation (L249, 388 - 390). Our framework can easily be applied to any NeRF/SDF approaches that recover better geometry in the future. ## About highlights in Fig. 6 We will integrate the changes as suggested. ## About normalization for Fig. 6 The qualitative baseline results were kindly provided by the authors of Stanford-ORB [21]. Additionally, we utilized the benchmark's official code (https://github.com/StanfordORB/Stanford-ORB/blob/962ea6d2cc/scripts/test.py/#L36) to perform the normalization. ## Why focus on objects? Good question. There are multiple reasons for focusing on objects: 1. Object-centric 3D relighting is vital in various downstream applications, e.g., AR/VR, robotics, and game development. 2. Scene-centric lighting is a much harder task to solve. It is hard to set up realistic illumination for scenes. Conditioning on lighting is non-trivial as lighting representations such as environment maps are not easily applicable to scenes. 3. 
Established synthetic and real-world object-centric benchmarks like TensoIR and Stanford-ORB enable clear evaluation and comparison. 4. Large-scale object-centric synthetic datasets like Objaverse [10] are readily available for training the relighting diffusion model. ## Efficiency of method We acknowledge that our approach currently prioritizes quality over real-time performance, as generating new samples with the RDM and optimizing a NeRF for each new lighting condition is computationally intensive (L252-254). However, we believe this focus on quality is crucial as a first step. Our extensive evaluations and thorough numerical experiments demonstrate competitive results, offering a "promising alternative for relighting" (mZi1). We hope our work "inspires future works using diffusion models to hallucinate lighting" (q4ex), with a gradual focus on improving efficiency. We see parallels in the NeRF community, where the initial MLP-based NeRF [30], despite its impressive results, was computationally demanding. Subsequent research, like instant-NGP [31] and Gaussian Splatting, significantly improved efficiency. We envision a similar trajectory for our approach. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. It has addressed the majority of my concerns, and I do see improvement over prior work in terms of quality and acknowledge the novel contributions. However, I do feel as though the computational drawback and reliance on geometry reconstruction limit the applicability of the method in its current state enough such that it might not be presenting significant or impactful contributions at this time. I remain slightly positive on the paper. --- Rebuttal 2: Comment: We thank the reviewer for their time in reading our rebuttal and providing feedback. We are encouraged that the reviewer recognizes the quality improvements and novel contributions introduced by our method. 
We would like to clarify that every existing 3D relighting baseline also requires geometry reconstruction. Our current method did not focus on improving speed, but rather introducing a novel 3D relighting paradigm. It was designed as a general framework that can be further developed to substantially improve speed in the future, e.g. by using Gaussian Splatting representation instead of NeRF as well as using a faster diffusion sampler.
Summary: The paper proposes a new method called illumiNeRF for 3D relighting — given a set of posed input images under an unknown lighting, illumiNeRF produces novel views relit under a target lighting. Most of existing methods use inverse rendering to first recover the material and lighting in the given images, and apply new lighting to the object. The authors point out there is inherent ambiguity in decomposing materials and lighting. Instead, they propose a probabilistic model of representing multiple possible materials as latents, and train a latent NeRF using a relighting diffusion model. The authors compare their method with existing baselines and outperform most of them. Strengths: - The problem formulation in sec 3.1 is explained clearly. The authors point out the inherent ambiguity in inverse rendering problems when decomposing materials and lighting. - The probabilistic model of representing multiple possible materials as latents in NeRF is very interesting. Weaknesses: - Although during training, multiple materials are considered as different Z when optimizing the latent-NeRF, Z is manually set to 0 at test time, which corresponds to a specific explanation. However, Z=0 does not seem to represent the most likely material because the authors assume a uniform prior over Z according to line 163. - From Figure 6, the results lose a lot of details compared to other baselines like Neural-PBIR. For example, the method lost details of the spikes in the cactus sample. In table 2, Neural-PBIR also outperforms the proposed method. Technical Quality: 3 Clarity: 4 Questions for Authors: It would be great to show 3D renderings with different latent explanations in Figure 4(b). Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Why is Z = 0? It is unclear how we could use the optimal Z that best matches the actual material besides optimizing Z using the test set images, which does not seem fair. We found that setting Z to 0 yields good results across both our synthetic and real-world benchmarks (see Tab. 1 and 2, and Fig. 5 and 6). We'll clarify this in the final version of our paper. We provide more qualitative results for different latent codes in Fig. 1 (see PDF). These results demonstrate that the optimized latent codes effectively capture various plausible explanations of the materials. ## Result quality in Fig. 6 Our method consistently surpasses all inverse rendering approaches in capturing high-frequency details, except for Neural-PBIR. Notably, for the salt can (column 2), our method clearly recovers the ingredient list, which most baselines miss. Similar improvements are evident for the toy car (column 6) and gnome (column 7). Compared to Neural-PBIR, our results often match or surpass its fine details, see the teapot (column 1), salt can (column 2), toy car (column 6), and gnome (column 7). The cactus (column 4) is an exception, due to UniSDF's inability to produce high quality geometry, as noted in the main paper (L251). Our method's results will inherently improve with advances in geometry reconstruction, as our RDM is trained on perfect synthetic geometry. Neural-PBIR's code is unavailable, but its isolated geometry reconstruction stage (Sec. 3.1 in [39]) would allow us to directly leverage better geometry. ### About more examples for Fig. 4 (b) We provided more examples in Fig. 1 of the rebuttal PDF. We are happy to include these in a final version.
Rebuttal 1: Rebuttal: We thank all reviewers for their time and effort spent reviewing the paper. We are grateful to hear that the reviewers agree that our work “addresses a long-standing challenge by introducing a novel paradigm" (mZi1). We also appreciate the praise for our "probabilistic model of representing multiple possible materials as latents in NeRF" (Ma7D), the recognition of our work's importance for "improving the task of object relighting" (hgwb), and the potential to "inspire future works using diffusion models to hallucinate lighting" (q4ex). Furthermore, we are pleased reviewers recognize our work’s “extensive evaluation" (q4ex) and find "the results are convincing" (mZi1). We are also delighted that all reviewers gave the highest rating for the presentation, describing it as "clear" (hgwb), "well-structured" (mZi1), and "explained clearly" (Ma7D). Below, we address each reviewer's questions. Pdf: /pdf/617e92036c6ff8dff3fd3476343091f6f21e2d01.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
To Believe or Not to Believe Your LLM: Iterative Prompting for Estimating Epistemic Uncertainty
Accept (poster)
Summary: This paper is about uncertainty quantification in LLMs, specifically illustrating some results based on an assumption involving independence of the distribution for a correct output for a question given some “iterative prompting”. The proposed information-theoretic measure is intended to help measure epistemic uncertainty of LLM output. Strengths: I think the paper has several interesting ideas, and one of the strengths is the attempt to formalize epistemic uncertainty using information theoretic ideas. There is novelty in the approach, in my view, as well as potential for these ideas to be useful in practice. Weaknesses: I found it generally hard to understand various sections of the paper, even though I am generally familiar with literature in this space. I think the writing is somewhat unclear in a few places. For instance, what is the scope of the work? Exactly what kinds of tasks are the proposed methods suitable for? Why should the key assumption hold and why should one trust it from some anecdotal examples? Several statements do not seem fully explained or justified; I will mention a few later in my detailed comments. I’m also not sure that the paper does enough empirically to highlight the key ideas – if I’m mistaken, the authors should correct me. So, while I find the work interesting and potentially impactful, I feel there is room for improvement, particularly around the exposition. I am willing to revise my suggestion for the paper based on the discussion. Technical Quality: 3 Clarity: 2 Questions for Authors: Some comments and questions follow: The title of the paper is too broad and could pretty much refer to any paper about uncertainty quantification in LLMs; I strongly recommend something more descriptive. One possibility is to use a sub-title such as “Iterative Prompting for Estimating Epistemic Uncertainty”. I don’t understand why recent references have been used to cite epistemic vs. aleatoric uncertainty; these are older ideas. 
Either cite something classic or remove them. The paragraph starting on line 47 makes some factual errors. The prior work does not assume there is a single correct response. For instance, in the Kuhn et al. paper, all answers that meet a threshold on the Rouge-L score (as compared to some ground truth) are deemed to be correct. I suggest rewriting this paragraph. The “contributions” part on page 2 is quite unclear, and some things only become clearer later. The authors should write this in plain language that is understandable at a high level, without getting into details. For instance, what is the “ground truth language”? Hallucinations are mentioned in the paper but never clearly defined or cited. I believe the notion here is different from that in other work. I recommend using title case for section headers. Fig. 1 is really hard to read and has inconsistent y-scales across panels. Also, I see a drop in the probability in some panels. How is this consistent with what is mentioned in the text? Is it because it does not go to 0 quicker? Please clarify. I may be mistaken but I feel there may be something wrong with how Assumption 4.1 is written. It seems like Y_t must be a ground truth answer to the original question x (without the additional information in the prompt) but Y_1 through Y_t-1 can be any text. Is this true? How do we know that Assumption 4.1 is true? Is this based on the discussion in the preceding section? Does this depend on how long the ground truth response Y_t is? Is F_0(x) defined? Perhaps this is the case where the prompt is: “Provide an answer to the following question: Q: x. A:” I had a hard time figuring out what to make of the experiments. What is the main takeaway? When does the proposed approach perform better than baselines? There are other potential approaches (verbal and non-verbal, such as consistency-based approaches) that could be used as baselines. 
The P-R curves shown should mention coverage and accuracy; these are also known as accuracy-rejection curves. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations do not seem to be discussed very much. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for many insightful suggestions; we will revise the title and references and improve Section 2. ## Scope, assumptions, and definitions * **Q:** What is the scope of the work? Exactly what kinds of tasks are the proposed methods suitable for? * **A:** Our main motivation was question-answering systems where we want to be able to detect whether answers to multiple-response questions might be incorrect. Then, such systems can abstain based on a high uncertainty score (Def. 4.4) after some calibration (see lines 219-229). Note that our proposed method is not suitable for certification when the answer (or action) is correct with high confidence. * **Q:** How do we know that Assumption 4.1 is true? Is this based on the discussion in the preceding section? Does this depend on how long the ground truth response $Y_t$ is? * **A:** The assumption is based on common sense, which we exemplify in this way: consider a query with single or multiple answers that could be encountered in the language (literature, the internet, and other text sources). We assume that these answers are independent (i.e., the occurrence of an answer in one textbook does not depend on its occurrence in another source). In fact, we will only care about such queries. This should be independent of how long $Y_t$ is. * **Q:** I feel there may be something wrong with how Assumption 4.1 is written. It seems like $Y_t$ must be a ground truth answer to the original question $x$ (without the additional information in the prompt) but $Y_1$, through $Y_{t-1}$ can be any text. Is this true? * **A:** Thanks for pointing this out. Indeed, $Y_1, \ldots, Y_{t-1}$ are understood as a collection of random variables distributed according to the ground truth (i.e., they are not arbitrary texts). * **Q:** Is $F_0(x)$ defined? Perhaps this is the case where the prompt is: “Provide an answer to the following question: Q: x. A:” * **A:** Yes, you're correct. Thanks, we will define this. 
* **Comment:** Hallucinations are mentioned in the paper but never clearly defined or cited. I believe the notion here is different from that in other work. * **A:** Thanks for pointing out this oversight. In the context of our work, a hallucination is quantified as a positive real quantity: the KL divergence between the pseudo-joint distribution of the LLM and that of the ground truth (Definition 4.4). * **Q:** What is a ground-truth language? * **A:** In the context of our paper, $X$ is a query, while $Y$ is a response, and the ground truth $p$ is a stochastic model that captures relationships between queries and answers. In other words, it is a probabilistic model of language that we assume. ## Literature and Experiments * **Q:** Fig. 1 is really hard to read and has inconsistent y-scales across panels. Also, I see a drop in the probability in some panels. How is this consistent with what is mentioned in the text? Is it because it does not go to 0 quicker? Please clarify. * **A:** The drop in probability is the behaviour we wanted to pinpoint; that is, as the number of repetitions of the incorrect answer increases, the normalized probability drops. Indeed, it is interesting that this varies for different prompts (sometimes the drop happens much faster). We suspect that this depends on the amount of training data provided to the model; however, it is hard to make conclusions at this point and we leave this question as a future research direction. * **Comment:** The paragraph starting on line 47 makes some factual errors. [...] in the Kuhn et al. paper, all answers that meet a threshold on the Rouge-L score [...] are deemed to be correct. * **A:** Thanks for spotting this mistake; we will rewrite this accordingly. We meant this to be all semantically equivalent responses. * **Q:** I had a hard time figuring out what to make of the experiments. What is the main takeaway? When does the proposed approach perform better than baselines? 
There are other potential approaches (verbal and non-verbal, such as consistency-based approaches) that could be used as baselines. The P-R curves shown should mention coverage and accuracy; these are also known as accuracy-rejection curves. * **A:** Thanks for pointing out alternative baselines and metrics. The goal of our experiments is to validate that a lower bound on the uncertainty metric (Theorem 4.5) indeed captures incorrect answers in multiple-answer questions. We chose semantic entropy as a reference since it is arguably the conceptually closest baseline (since our lower bound is the mutual information of the pseudo-joint distribution). --- Rebuttal Comment 1.1: Title: Thanks for clarifications Comment: I thank the authors for responding to many of my questions and for being willing to make some edits, such as the title change, adding references, adding some definitions, etc. Although I find the paper to be empirically somewhat on the weaker side, I think there is some value to the literature from novel ideas. I will increase my score marginally in the expectation that exposition-related issues will be suitably addressed in a revision. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback, and for considering our response in your revised review.
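The iterative generation of responses discussed in this rebuttal (question $x$ gives $Y_1$; $x, Y_1$ give $Y_2$; and so on) can be sketched as follows. `sample_response` is a hypothetical stand-in for the LLM call, and the prompt template is illustrative, not the paper's exact wording:

```python
def iterative_responses(sample_response, x, t):
    """Generate Y_1, ..., Y_t by feeding each previous answer back into the prompt.

    `sample_response` is a hypothetical callable standing in for sampling
    from the LLM; the prompt template below is illustrative only.
    """
    prompt = f"Provide an answer to the following question: Q: {x}. A:"
    responses = []
    for _ in range(t):
        y = sample_response(prompt)
        responses.append(y)
        # extend the prompt with the previous answer before resampling
        prompt += f" {y}. Another possible answer is:"
    return responses
```

The empirical response frequencies from such chains are what would be used to form the pseudo-joint distribution whose mutual information is computed.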
Summary: In the paper, the authors address the challenge of distinguishing between epistemic and aleatoric uncertainty in large language models (LLMs). They develop methods to decouple these uncertainties, which is crucial for handling queries with multiple valid responses. Their approach involves iterative prompting of the LLM to generate responses and measuring the sensitivity of new responses to previous ones. This helps in identifying cases where epistemic uncertainty is high, indicating potential hallucinations by the model. They propose an information-theoretic metric to quantify epistemic uncertainty and derive a computable lower bound for this metric, which is validated through experiments on datasets like TriviaQA and AmbigQA. Strengths: 1. The topic is important and interesting. 2. The proposed iterative prompting strategy is practical and can be implemented easily. Weaknesses: 1. The writing of this paper is poor and informal. The reviewer finds it hard to understand the exact insights and the principles behind the paper. Please refer to Questions for a few of these confusions. The reviewer strongly recommends polishing the presentation carefully, especially the notations and definitions. 2. The evaluation protocol is unclear to the reviewer. In Fig. 6, what is the meaning of the entropy on the x-axis? Also, some LLM UQ baselines are missing, such as [1]. Reference: [1] Lin, Zhen, Shubhendu Trivedi, and Jimeng Sun. "Generating with confidence: Uncertainty quantification for black-box large language models." arXiv preprint arXiv:2305.19187 (2023). Technical Quality: 1 Clarity: 1 Questions for Authors: Section 2: 1. "Moreover, consider a family of conditional distributions P ... ", if P is a set of distributions, why is \mu defined as a function? What is the meaning of \mu(x|x')? 2. What is the "ground-truth conditional probability distribution"? Do you mean p(Y|X) where (X, Y) are data and label? 3. Where do the "possible responses Y1, . . . , Yt" come from? 
Are they sampled from LLM Q with a different decoding strategy? 4. What is the physical meaning of Z_i, and where does the second subscript i come from in (Z_i)_i? Since the explanations of Z_i and \mu are missing, it is hard for the reviewer to identify how the "information-theoretical notations" contribute to this paper. Section 3: 5. "To obtain conditional normalized probabilities, we consider the probabilities of the two responses, and normalize them so that they add to one." It is unclear to the reviewer why this normalization is applied and how it is conducted. Please describe this procedure formally and justify why it is done. 6. The x-axis and y-axis labels are missing in Fig. 1, Fig. 2, and Fig. 3. 7. An explanation of how the proposed iterative prompting strategy (and its so-called "Conditional normalized probability") is connected to epistemic uncertainty is missing. Although pieces of descriptions are mentioned in the Introduction, they are ambiguous and informal. The conventional definition of epistemic uncertainty is usually the model approximation error, e.g., p(\theta|D) where D is the training data. However, the newly constructed input, i.e., the iterative prompt, is assumed to be significantly different from D, and how this is capable of quantifying epistemic uncertainty is confusing for the reviewer. Confidence: 4 Soundness: 1 Presentation: 1 Contribution: 2 Limitations: No, the conclusion and limitations (and social impacts) are missing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Definition of epistemic uncertainty, iterative prompting, and connection between them * _**Q:** How the proposed iterative prompting strategy connected to epistemic uncertainty is missing._ * **A:** Note that the definition of epistemic uncertainty is given in Definition 4.4. It is a KL-divergence between the pseudo-joint distribution (Def. 4.2) constructed by iteratively prompting the LLM and a pseudo-joint distribution of the ground truth --- the formal connection is introduced and discussed in detail in Section 4. The main idea of the paper is a lower bound on this quantity (Theorem 4.5), which can be computed only by having access to the LLM and iterative prompting (constructing the pseudo-joint distribution). * _**Q:** Conventional definition of epistemic uncertainty is usually the model approximation error, e.g., $p(\theta|D)$ where $D$ is the training data. However, the new constructed input, i.e., the iterative prompt, is assumed to be significantly different from the $D$ and how this is capable of quantify epistemic uncertainty is confusing for the reviewer._ * **A:** Note that $p(\theta | D)$, as you mention, is typically encountered in the Bayesian literature, whereas we look at the problem from the frequentist viewpoint. In our setting $\theta$ is not a random variable, but fixed throughout. Note also that our metric of epistemic uncertainty and the theorem that suggests how to measure it do not make any assumptions on the training data. That said, we consider this an advantage over methods where assumptions on $D$ are required. * _**Q:** where the "possible responses $Y_1, \ldots , Y_t$" come from? Are they sampled from LLM Q with different decoding strategy?_ * **A:** Responses $Y_1, \ldots, Y_t$ are generated iteratively given question $x$, exactly as described in the prompt in line 114. For example, given question $x$, the first answer is $Y_1$; then given $x, Y_1$, the second answer is $Y_2$; then we obtain $Y_3$ given $x, Y_1, Y_2$, and so on.
Kindly note that this is explained in Remark 4.3. ## Unclear definitions and notation * _**Q:** "Moreover, consider a family of conditional distributions $\mathcal{P}$ ... ", if $\mathcal{P}$ is a set of distributions, why $\mu$ is defined as a function? what is the meaning of $\mu(x|x')$?_ * **A:** Here $\mu$ is indeed a discrete conditional probability distribution, which is a function defined on $\mathcal{X}$ (the space of finite text sequences) that sums to one. In other words, $\mathcal{P}$ is a set of discrete distributions, one for each $x' \in \mathcal{X}$. We will make this clearer in the updated version. * _**Q:** what is "ground-truth conditional probability distribution"? do you mean $p(Y|X)$ where $(X, Y)$ are data and label?_ * **A:** In the context of our paper, $X$ is a query, while $Y$ is a response; hence $p$ is a stochastic model which captures relationships between queries and answers. In other words, it is the probabilistic model of language that we assume. In general, in supervised learning, $(X, Y)$ are typically understood as input and label. * _**Q:** what is the physical meaning of $Z_i$ and also where does the second subscript $i$ come from in $(Z_i)_i$?_ * **A:** $Z_1, Z_2, \ldots$ is a sequence of abstract random variables with values in some measurable set $\mathcal{Z}$. These are simply used to introduce some general definitions and avoid the use of $X$ and $Y$, which are reserved for questions and answers. The shorthand notation $(Z_i)_i$ is commonly used for describing tuples, i.e. $(Z_i)_i = (Z_1, \ldots, Z_n)$. * _**Q:** Section 3: "To obtain conditional normalized probabilities, we consider the probabilities of the two responses, and normalize them so that they add to one." it is unclear to the reviewer why this normalization is applied_ * **A:** We apply normalization to allow easy comparison between experiments with different numbers of repetitions of the incorrect response.
In this way, it is easy to see that the factually correct answer decreases from probability $1$ to a smaller probability as the number of repetitions increases. Without normalization we would need to compare log-likelihoods, in which case the change would not be as obvious. ## Questions about evaluation protocol * _**Q:** In Fig. 6, what is the meaning of the entropy in the x-axis?_ * **A:** In Fig. 6, the entropy on the x-axis represents the empirical entropy of the distribution of multiple answers obtained for a single query. We measure this for all queries and construct a histogram by binning entropy values. This illustrates the frequency distribution of entropy across different queries, thereby providing insights into the variability and consistency of the responses. Specifically, empirical entropy measures the uncertainty or randomness in the distribution of these answers. Higher entropy values indicate greater diversity and less predictability among the answers, while lower entropy values suggest more uniform and predictable responses. * _**Comment:** Some LLM UQ baselines are missing such as (Lin et al., 2023)._ * **A:** Thank you for pointing this out; we will consider these baselines in the updated version. --- Rebuttal Comment 1.1: Title: Request for interaction Comment: Dear Reviewer, I would appreciate if you could comment on the author's rebuttal, in light of the upcoming deadline. Thank you, Your AC --- Rebuttal Comment 1.2: Comment: Thank you for your response. The reviewer has thoroughly re-examined both the response and the clarified manuscript but still believes that significant revisions are necessary for the paper to be accepted. Therefore, the reviewer's evaluation remains unchanged. --- Reply to Comment 1.2.1: Comment: We strongly disagree that a significant revision is needed for the paper. Definitions, notations, evaluation protocol, etc. are already explained in detail in the paper, and we simply repeated the existing definitions in our rebuttal.
Therefore, it is not clear what should be changed and why.
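As an aside, the empirical entropy described in the Fig. 6 answer above is straightforward to compute. Below is a minimal sketch (our own illustration, not the authors' code; the answer strings are made up):

```python
import math
from collections import Counter

def empirical_entropy(answers):
    """Shannon entropy (in nats) of the empirical answer distribution for one query."""
    counts = Counter(answers)
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Diverse answers -> high entropy; identical answers -> zero entropy.
# A histogram of these values over all queries gives the x-axis of a plot like Fig. 6.
uniform = empirical_entropy(["Paris", "Lyon", "Nice", "Lille"])
peaked = empirical_entropy(["Paris", "Paris", "Paris", "Paris"])
assert peaked == 0.0
assert abs(uniform - math.log(4)) < 1e-12
```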
Summary: The paper considers both epistemic and aleatoric uncertainties and proposes a novel method to decouple them. This method employs iterative prompting based on its previous responses. Experiments demonstrate that the proposed approach effectively detects cases where only epistemic uncertainty is large for multi-label questions. Strengths: 1. Important problems: Aleatoric uncertainty is crucial for handling multi-label queries in practical applications. 2. Comprehensive theoretical analysis. 3. Demonstrates good performance in effectively dealing with multi-label queries by decoupling epistemic and aleatoric uncertainty. Weaknesses: 1. **Limited application scope**: While I appreciate the method and performance of decoupling epistemic and aleatoric uncertainty for multi-label queries, it falls short in estimating uncertainty for single-label queries (Fig.5ab). 2. **Overlooked over-confidence issues**: The paper overlooks the problem of over-confidence, where the model produces low entropy among multiple responses despite providing incorrect answers. 3. **Limited dataset selection**: The paper filters the WordNet dataset, retaining only queries with entropy higher than 0.7. It claims that “both the proposed method and the semantic entropy method rarely make mistakes on this dataset, and therefore we are not adding any mistakes to either method.” This indicates that the selected dataset is relatively simple. However, as discussed in weakness 2, low entropy responses can still contain errors in more challenging datasets. 4. **Limited model selection**: The paper does not validate the effectiveness of the method across different types of models. Technical Quality: 4 Clarity: 4 Questions for Authors: N/A Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The discussion on the limitations of this paper is insufficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Limited application scope:** on single-label queries, we should not expect to perform better than a competitive first-order method (such as S.E.), which is specifically designed for such queries. Please note that Fig.5ab in fact shows that our method performs essentially as well as the S.E. method in estimating uncertainty in single-label queries. **Overlooked over-confidence issues:** this is an important open question, and a solution will most likely require additional novel ideas. **Limited dataset selection:** given that TriviaQA and AmbigQA already contain challenging queries with low-entropy responses, we wanted to add queries with high entropy responses to demonstrate the limitations of first-order methods. **Limited model selection:** We have made similar observations on smaller models and with different architectures. We are in the process of delivering more results and will provide them during the discussion.
Summary: This paper proposes an iterative prompt-based approach to uncertainty estimation. They make a model generate multiple answers, and estimate the probability of each, estimating uncertainty for each. They argue this method easily adapts to aleatoric and epistemic uncertainty, and can be applied to multiple choice question answering. They provide a mathematical framework to support their approach, arguing that a simple metric, Mutual Information between responses, can be a lower bound for uncertainty. Strengths: The epistemic/aleatoric approach to understanding uncertainty is a clear theoretic framework that is helpful in the MQA stage. This is highly applicable, as a thresholding methodology is provided, along with complete examples. Interpretability notions are used to explain this behaviour, giving an insight on why this happens, and not only how. Weaknesses: 1) Assumption 4.1 is, to my understanding, that the correct answer will have a probability independent of "any" context. Under this assumption uncertainty is observed when the model relies more on context. Later math is based on this assumption, which seems somewhat task-specific. While this works in the controlled question answering setup, context dependence holds a lot more information (e.g., linguistic dependencies, uncertainty regarding user intent...). This could make the resulting uncertainty metric less useful when deployed for general usage. I think the limitations of Assumption 4.1 should be more thoroughly discussed. (It should be noted that 4.1/Theorem 4.5 very clearly discuss the limitations of applying MI to infinite language) 2) Prompt-based methods have been shown to be very model-reliant, size-reliant and tuning-reliant. From a reproducibility perspective, it is unclear if results on Gemini will reproduce on different architectures, or smaller models. Is this behaviour a skill that emerges at a certain model size? Does it require specific tuning?
or is it inherent to language modelling? In the same line of questioning, it is unclear to me how the prompt was chosen, and whether this selection procedure needs to be repeated on a new model. Is this probabilistic approach universal to language modeling? If so, this would truly be an interesting step forward from previous works. Otherwise it is a limitation. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses 1:** Although the assumption is stated in this form for simplicity, the assumption in our theory is only for a very specific type of prompt we consider, which still seeks an answer to the original question $x$, and hence the ground-truth response should not change; we also only apply it to specific contexts $Y_i$, i.e. those that could potentially be generated by the language model as a response to the query. Thus, we do not really require independence for all contexts. Given this, we believe the assumption is not strong, and it intuitively states that in the face of distractor responses, the response of the ground-truth language model should not change. Having said this, the formulation is indeed prone to adversarial attacks (and hence not perfect), e.g., if $Y_1=$"Answer1. However, we are interested in question $x_2$ instead of $x$", the new prompt would be "Consider the following question: Q: $x$. One answer to question Q is Answer1. However, we are interested in question $x_2$ instead of $x$. Provide an answer to the following question: Q: $x$", which could be misleading. However, we could change the assumption to only correspond to sequences $Y_i$ which are generated by a reasonable language model, and we will change it to restrict the family of $Y$'s (depending on $x$). **Weaknesses 2:** We believe this probabilistic approach is universal to language modeling. We have made similar observations on smaller models and with different architectures. We are in the process of delivering more results and will provide them during the discussion. We did almost no prompt engineering. The prompt used in the experiments was the second prompt tried (and not much different from the first try, which also gave very similar results). --- Rebuttal Comment 1.1: Comment: Thank you for your response. I look forward to seeing those results.
I am also taking some more time to digest your explanation on context, and will come back to you; I need to think it out. --- Rebuttal Comment 1.2: Comment: Results promised on smaller models, other architectures, or other models in general were not provided (I completely understand those results are hard to provide in such a short timeline). Nonetheless, despite your intuition that your results are universal, there is as of now no evidence. After thinking about your argument on weakness 1 for some time, I believe perhaps part of the issue is clarity? I did not understand from the paper what you are explaining here. I will leave my score as is, but point out that the weaknesses I raised are not addressed, and are strong limitations that should be mentioned should this paper be accepted. --- Reply to Comment 1.2.1: Comment: We conducted additional experiments with a smaller language model, Gemini Nano, and the results are posted above as a comment to all reviewers. A link to an anonymised external page containing figures is shared with the area chair. Regarding weakness 1: We are not sure what explanations are missing, but we would be happy to provide additional clarifications in the remaining time.
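For readers unfamiliar with the iterative prompting discussed in these threads, here is a minimal sketch of the loop the rebuttals describe ($Y_1$ from $x$; $Y_2$ from $x, Y_1$; $Y_3$ from $x, Y_1, Y_2$; and so on). The template string only approximates the example prompt quoted in the rebuttal, and `llm_sample` is a stand-in for sampling one response from an LLM; neither is the paper's exact implementation:

```python
def iterative_prompt(question, prior_answers):
    """Build the t-th prompt from the question and previously sampled answers.

    Illustrative template approximating the prompt style described in the
    rebuttal, not the paper's exact wording.
    """
    prompt = f"Consider the following question: Q: {question} "
    for ans in prior_answers:
        prompt += f"One answer to question Q is {ans}. "
    prompt += f"Provide an answer to the following question: Q: {question}"
    return prompt

def sample_responses(llm_sample, question, t):
    """Y_1 from x; Y_2 from (x, Y_1); Y_3 from (x, Y_1, Y_2); and so on.

    `llm_sample` is any callable mapping a prompt string to one response.
    """
    answers = []
    for _ in range(t):
        answers.append(llm_sample(iterative_prompt(question, answers)))
    return answers

# Stub "LLM" that just counts how many prior answers appear in the prompt.
stub = lambda p: f"ans{p.count('One answer')}"
responses = sample_responses(stub, "What is the capital of France?", 3)
assert responses == ["ans0", "ans1", "ans2"]
```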
Rebuttal 1: Rebuttal: We thank all reviewers for their insightful comments. We are in the process of delivering more results with different LLM architectures and sizes, and we aim to provide the results during the discussion. We have already observed that small LLMs behave similarly. Please find our responses to other specific points below.
NeurIPS_2024_submissions_huggingface
2024
Beyond Optimism: Exploration With Partially Observable Rewards
Accept (poster)
Summary: The paper studies the setting of finite state-action MDPs with partially observable rewards. To formalize the framework they introduce Monitored MDPs. They introduce an algorithm that separates exploration and exploitation and prove that for a class of MDPs with finite goal-oriented diameter their algorithm is greedy in the limit with probability 1. For exploration they use a Successor Function that maximizes the cumulative discounted occurrences of the targeted state-action pair. In their experiments they show that their algorithm successfully solves all the tried tasks, while the baselines in most cases don't, or solve them only with considerably more training steps. Strengths: - The algorithm is simple and intuitive - The paper is generally well-written and I didn't have issues with understanding. - In the experiment section the algorithm is tested on several environments with different monitors, and they show significant improvements in terms of performance compared to the baselines. Weaknesses: - When you introduce the "unobserved reward" it would be great to emphasize that this is not a reward 0, but that the agent really knows it hasn't observed it, i.e. it gets the reward $\perp$. - For me it was not clear what the agent state is (i.e. what the agent sees) in the setting of Monitored MDPs. Does it have access to both the state of the environment and the monitor, or just the environment? As far as I could understand it has access to both, and I would emphasize that in the section where you introduce the Monitored MDPs. - The statement that starts in line 174: "As the agent explores, if every state-action pair is visited infinitely often, $\log(t)$ will grow at a slower rate than $N_t(s, a)$, ..." is wrong. It might apply to your algorithm, but as you state it here it sounds like it holds in general, and that is not true.
Consider the case when you visit one state only at the rate $\log(\log(t))$: you still visit it infinitely often, but the visitation rate is slower than $\log(t)$. - To me it is not clear why Corollary 1 holds. I understand Theorem 1, where you say that with your algorithm you will visit every state-action pair infinitely often in the limit and that your policy is greedy in the limit, but I don't see why $\hat{Q}$ converges to $Q^*$; how do you update your $\hat{Q}$ from the data? Does the state in the $Q$-function depend only on the environment state or does it also depend on the monitor state? (same for the action) - In line 240 you claim that with the policy that comes from the successor function you will visit the targeted state as fast as possible. I think that is not true. With this policy you will maximize the cumulative discounted occurrence. If you were to change your instant reward to $\mathbb{1}\{(s_i, a_i) \in \cup_{j=t}^k \{(s_j, a_j)\}\}$, i.e. the reward is 1 if you have reached the state-action pair in the past, then you would solve the task faster, because in that case you don't care about staying in the state-action pair, but just about going there as fast as possible. - In line 280/281 you say that the agent will pay a cost of $-0.2$. Is the agent going to pay the price only when it pushes the button or in every step when the button is on? Technical Quality: 3 Clarity: 3 Questions for Authors: Look at Weaknesses section. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations have been adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful insights and suggestions. Below, we discuss the main points you raised. 1. We will make the distinction between Mon-MDPs and sparse-reward MDPs more explicit. 2. You are correct, the agent sees both states. We will make it clearer in the final version. 3. You are correct that, generally speaking, $\log(t)$ may not grow at a slower rate than $N_t(s,a)$. However, this holds in our case, and that sentence was just an informal statement to give the reader an intuition of why our algorithm converges. Indeed, in line 175, right after that sentence, we wrote “(formal proof below)”. We apologize for the confusion, and we will make it clearer in the final version. 4. Theorem 1 is formalized using the classic MDP notation for the sake of simplicity, and we now see how this may actually cause confusion. $Q(s,a)$ depends on both the monitor and the environment, i.e., its explicit notation for Mon-MDPs would be $Q(s_e, s_m, a_e, a_m)$. For Q-learning to converge in Mon-MDPs we need the following assumption: that the monitor is “truthful”, i.e., it either hides the reward or shows it as is (Parisi et al., Monitored Markov Decision Processes, 2023). Given a truthful monitor, under ergodicity and infinite exploration (proved in Theorem 1), $\widehat Q$ converges to $Q^*$. We mentioned the notion of “truthful monitor” in footnote 2, page 3, but we will now make it explicit in Corollary 1. We apologize for the confusion. 5. Our reference to “as fast as possible” is imprecise, although not for the reason given, and we will clarify this in future revisions. The expected return from further visits to $(s_i, a_i)$ upon reaching the state-action pair under the optimal policy is a fixed positive value (as the environment is Markovian and rewards are all non-negative).
Therefore, the optimal policy for accumulating visit counts to the state-action pair is still to visit it as quickly as possible, as this minimizes the discounting applied to the visit reward and to the fixed positive future return. And this is exactly what your proposed reward function would also do (it just does not include the fixed positive future return), while avoiding the non-Markovian property of your specification. However, the “as fast as possible” phrasing is imprecise because of how it handles stochasticity. One might expect “as fast as possible” to minimize the expected time to reach the state-action pair, which the SR only does if the environment is deterministic. It is still incentivized to visit the state-action pair quickly, just with a different tradeoff between distributions of visit times. 6. Every step the button is on. --- Rebuttal Comment 1.1: Comment: Thanks for your explanation! My concerns have been addressed. I will raise my score accordingly.
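To illustrate the quantity under discussion, the S-function for a single goal state-action pair (the expected cumulative discounted occurrences of that pair) can be computed by value iteration on a toy chain. This is our own sketch with deterministic dynamics, not the paper's implementation; note how the greedy policy both reaches the goal and then keeps revisiting it, accumulating discounted visit rewards:

```python
import numpy as np

def successor_q(P, goal_sa, gamma=0.9, iters=200):
    """Value iteration for the S-function of one goal state-action pair:
    expected cumulative discounted occurrences of (s_g, a_g).

    P[s, a] -> next state (deterministic transitions, for simplicity).
    """
    n_s, n_a = P.shape
    S = np.zeros((n_s, n_a))
    sg, ag = goal_sa
    for _ in range(iters):
        for s in range(n_s):
            for a in range(n_a):
                r = 1.0 if (s, a) == (sg, ag) else 0.0  # "visit" reward
                S[s, a] = r + gamma * S[P[s, a]].max()
    return S

# 3-state chain: action 1 moves right (state 2 is absorbing), action 0 stays.
P = np.array([[0, 1], [1, 2], [2, 2]])
S = successor_q(P, goal_sa=(2, 0), gamma=0.9)
# At the goal state, repeatedly taking the goal action yields 1/(1-gamma) = 10.
assert abs(S[2, 0] - 10.0) < 1e-6
# From state 0, the greedy policy moves toward the goal rather than staying.
assert S[0].argmax() == 1
```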
Summary: This paper tackles the problem of exploration in MDPs where the reward is unobservable. For this, the authors perform goal-conditioned exploration, i.e., the environment is explored according to how often a specified goal is reached. The authors propose an exploitation-exploration mechanism based on learning goal-conditioned policies. Policies are switched as visitation counts increase for the goal state-action pair of a given goal-conditioned policy. Moreover, the authors propose learning separate value functions via successor features for each goal-conditioned policy. Overall, this mitigates the unobserved-reward problem since the reward is no longer crucial for solving the environment. In the environments proposed by the authors, their approach outperforms other classic exploration methods, especially when rewards are unobserved, e.g., when the reward is only given if the agent elicits a mechanism first. Strengths: - The paper is generally well written. I found the illustrations to be useful in understanding the ideas. - The authors perform good experiments, describing the baselines appropriately and putting their method's performance into context. - The idea of selecting goals systematically based on the count and then following a goal-conditioned policy seems effective for the Mon-MDP setting. Weaknesses: - What makes the Mon-MDP different from a sparse-reward problem where a reward is only given at the end, or from reward-free exploration for that matter? Shouldn't your method be compared with other popular (intrinsic) exploration methods? - Although the algorithm is effective in tabular spaces, it seems non-trivial to expand it to continuous or high-dimensional settings. The authors mention their intentions and cite papers but do not elaborate on, for instance, how to systematically choose goals in high-dimensional spaces. - There seem to be no other Mon-MDP-specific methods that have been compared against. Are there any other methods?
Technical Quality: 2 Clarity: 3 Questions for Authors:
 - What is the main contribution of the paper? Are the successor features crucial or is it the exploration-exploitation mechanism? The authors motivate the problem very well, but I feel it is not exactly explained why these two components are crucial to improved performance in Mon-MDPs. - The authors propose their own set of environments to test their method with success. Are there any other known environments that benefit from the paradigm of unobserved rewards? - Why do the other methods struggle so much on the empty environment? Is it because the sparse-reward problem is harder? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: I think the authors have adequately addressed limitations in their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful insights and suggestions. Below, we discuss the main points you raised. 1. In sparse-reward RL, rewards are mostly 0 with very few exceptions ("meaningful rewards"). In Mon-MDPs, the agent cannot see the rewards, not even the 0s. While intrinsic rewards may improve exploration, they cannot replace $r_e = \bot$ (unobservable), thus the agent cannot directly maximize the sum of environment rewards. There is a need for a mechanism to learn even in the absence of rewards. This is what Mon-MDPs are for, as introduced by *"Parisi et al., Monitored Markov Decision Processes, 2023"*. We also argue that most intrinsic rewards induce non-stationarity (e.g., counts change over time) and are myopic (count-based rewards capture only immediate visits, not long-term visitation). Since our evaluation is limited to discrete MDPs, we used intrinsic rewards based on counts (*"Bellemare et al., Unifying count-based exploration and intrinsic motivation, 2016"*). To the best of our knowledge, in fact, count-based rewards are the go-to choice for discrete MDPs, while other intrinsic-reward algorithms (e.g., RIDE by Raileanu and Rocktäschel, 2020; RND by Burda et al., 2018; Curiosity-driven by Pathak et al., 2017) are more suited for continuous MDPs and deep RL. In our follow-up on continuous Mon-MDPs, we will evaluate the latest deep RL intrinsic-reward algorithms. 2. We are currently working on extending our algorithm, as promised in Section 5. Here are more details that we will be happy to add to the paper as well. First, Universal Value Function Approximators (UVFA, Schaul et al., 2015) will replace the set of S-functions. That is, whereas before we had one S-function for every state-action pair (each implemented as a table), now we have one single neural network that takes the goal state and the current state as input, and outputs an action value for every pair of goal action and current action, i.e., $S(s_{goal}, s_t)$ outputs values over $(a_{goal}, a_t)$.
We have already implemented this and it works nicely, but we are still investigating UVFA (there are different versions, and we may even propose a novel one). Regarding the use of counts, the main challenge is the presence of $\arg\min N(s,a)$. To make it tractable, we plan to either use Random Network Distillation (RND, Burda et al., 2018), or a version of prioritized experience replay (Schaul et al., 2016) that takes into account either pseudocounts or RND prediction errors (rather than TD errors). 3. There is indeed no other Mon-MDP method yet. Mon-MDPs were introduced very recently (Parisi et al., 2023) and ours is the first work addressing exploration in Mon-MDPs. 4. The main contribution is the explore-exploit paradigm that overcomes the limitations of optimism and therefore is effective for problems where rewards are unobservable. In order to make it work, SFs are a necessary component, as they allow us to decouple exploration (SF: visit a desired state-action pair) from exploitation (Q-function: maximize return). On one hand, Algorithm 1 formalizes how to decouple exploration and exploitation, and the mechanism to balance the two (i.e., the ratio $\beta_t$). Theorem 1 further proves its convergence. On the other hand, Algorithm 1 depicts a general approach that uses a generic “goal-conditioned policy $\rho$”. To make it practical, we propose S-functions. While using SFs as value functions is not novel in the RL literature, we argue that the way we use them is. 5. We believe that Mon-MDPs better capture the complexity of real problems, and therefore any MDP can be extended to a Mon-MDP. In a follow-up work, we are modeling a roborace problem as a Mon-MDP: the agent receives rewards for a trajectory only upon explicitly asking for it, and the availability of the feedback is modeled as a monitor. This relates to RL from human feedback, with the difference that the availability of the feedback follows a Markovian process and the agent can exploit it.
The goal is that using Mon-MDPs the agent can ask for feedback more effectively, and in the end will learn to race without ever asking for it. 6. All algorithms (ours and the baselines) learn a reward model to compensate for unobservable rewards: since the benchmark Mon-MDPs are ergodic, and because every environment reward is observable for at least one monitor state (e.g., when the button is on), given infinite exploration Q-learning is guaranteed to converge (this was proven in “Parisi et al., Monitored Markov Decision Processes, 2023”). The problem with the baselines (not ours) is that they use the Q-function to explore. In $\epsilon$-greedy (red) and intrinsic reward (green), the greedy operator is over Q; UCB-like (orange) still considers Q in its greedy operator; optimism (black) is purely greedy over Q. This is a problem in Mon-MDPs because Q-function updates are based on the reward model, which is inaccurate at the beginning (if the reward is unobservable, the agent queries the reward model). To learn the reward model and produce accurate updates, the agent must perform suboptimal actions and observe rewards. This creates a vicious cycle in exploration strategies that rely on the Q-function: the agent must explore to learn the reward model, but the Q-function will mislead the agent and provide unreliable exploration (especially if optimistic). Our algorithm, instead, builds its exploration over the S-function, whose reward is always observable (either 1 or 0). This allows the agent to quickly learn accurate S-functions, which will then produce efficient and uniform exploration (as much as possible, at least, since visitation also depends on the initial state and the transition function). By visiting all environment and monitor states efficiently, the agent can also observe environment rewards quickly, learn an accurate reward model, and finally the optimal Q-function. We will write a dedicated paragraph with this explanation in the final version.
--- Rebuttal 2: Comment: I thank the authors for their exhaustive answer! * I understand the relevance of Mon-MDPs and how they differ from sparse-reward exploration much better now. * I share the same concern as reviewer DPys, where all other baselines tested are not aware of the formalism and thus naturally perform worse. * Also, from reading the paper it does not become apparent to me why goal-directed exploration with successor representations is the best way to address the Mon-MDP formalism. I think this has to be very clear, since this seems to be the first concise method to address Mon-MDPs. * Could you maybe elaborate on this? I think this would help me gain more confidence in the paper. Thank you! --- Rebuttal Comment 2.1: Comment: The fact that other baselines are not aware of the formalism highlights a hole in the RL literature, i.e., the lack of algorithms for MDPs with partially observable rewards (that are not trivial, e.g., where the observability of rewards is not just binary but given by another MDP). While there exist reward-free algorithms, they split exploration and exploitation: first, a general exploration policy is learned (environment rewards do not exist at that time), and then tasks are learned. In our paper, we are interested in solving a given task for which rewards **do exist** but are not observable --- the agent is still being evaluated even though it cannot (sometimes) see the evaluation. We argue that this is an important hole in the RL literature, and we discussed more on this in the general rebuttal. In particular, a major advantage of our algorithm is indeed not having two separate exploration/exploitation phases: even if sub-optimal at first, the goal-conditioned policies are still better visitation policies than classic ones. In this paper, we strove for simplicity and efficiency when we designed our algorithm.
The main contribution is to highlight the failure of optimism in Mon-MDPs and the introduction of the general explore-exploit algorithm. SFs naturally satisfy the requirements of our goal-conditioned policy $\rho$ (Section 3, paragraph after Corollary 1) while being simple and straightforward to implement. A policy maximizing the SFs will maximize the visitation of state-action pairs, while being completely independent from the Q-function (which, as discussed in our previous reply, can be highly inaccurate at the beginning). We don't claim this is the **best** way to address Mon-MDPs, but it certainly is effective, and it is a first step toward filling the hole in the RL literature.
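To make the mechanism described in the rebuttal concrete, here is a toy sketch (our own construction for illustration, not the paper's code or environments): a goal-conditioned S-function for one goal state-action pair is learned by tabular Q-learning with a unit reward that is always observable (1 when the goal pair is executed, 0 otherwise), and an exploration policy then acts greedily on it, never consulting any task Q-function.

```python
import numpy as np

# Toy sketch (our construction, not the paper's code): a 5-state deterministic
# chain with actions 0 = left, 1 = right. We learn a goal-conditioned
# S-function via tabular Q-learning, where the reward is always observable:
# 1 when the goal (state, action) pair is executed, 0 otherwise.
n_states, n_actions, gamma, alpha = 5, 2, 0.9, 0.5
rng = np.random.default_rng(0)

def step(s, a):
    return max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)

def learn_s_function(goal_s, goal_a, episodes=300, horizon=20):
    S = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = int(rng.integers(n_states))
        for _ in range(horizon):
            a = int(rng.integers(n_actions))  # fully random behavior for the sketch
            r = 1.0 if (s == goal_s and a == goal_a) else 0.0
            s2 = step(s, a)
            S[s, a] += alpha * (r + gamma * S[s2].max() - S[s, a])
            s = s2
    return S

# Exploration policy rho: act greedily on the S-function of the goal pair,
# completely independently of any (possibly inaccurate) task Q-function.
S = learn_s_function(goal_s=4, goal_a=1)
s, visits = 0, 0
for _ in range(10):
    a = int(S[s].argmax())
    visits += int(s == 4 and a == 1)
    s = step(s, a)
print(visits)  # the goal pair (4, right) is reached within the rollout
```

Acting greedily on `S` drives the agent to the goal pair without ever using a reward model, which is the property the rebuttal relies on: the S-function's reward is always observable, so exploration never depends on inaccurate Q-values.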
Summary: The authors propose a novel exploration strategy for Mon-MDPs based on two policies: a goal-conditioned exploration policy and an exploitation policy that maximizes the underlying reward. The proposed algorithm alternates between the two policies, naturally trading off exploration and exploitation. They show that the proposed strategy is consistent and outperforms standard exploration approaches such as optimistic exploration. Strengths: The paper is very well written. Particularly, the analysis of explore and exploit strategies for Mon-MDPs is interesting and novel. The empirical results are also good. Weaknesses: There are works in reward-free RL [1, 2] and active learning in RL [3, 5] for general continuous state-action spaces that the authors should mention. Since their strategy is based on an explore and exploit approach, intuitively, it seems that works such as [4] can be seamlessly applied to the Mon-MDP setup. Also, from my understanding, the intrinsic reward proposed in [4] is not myopic and is also consistent, in the sense that it leads to convergence guarantees for learning the MDP. Hence, I am not sure I agree with the authors' statement on lines 154 -- 155. There are also works on goal-conditioned RL such as [5, 6] that build on a similar idea of picking and exploring novel goals for exploration. While they do not consider the Mon-MDP setting, I think they are still somewhat relevant as prior work. The authors should state the rate of convergence in the main theorem statement of Theorem 1. [1] Jin, Chi, et al. "Reward-free exploration for reinforcement learning." International Conference on Machine Learning. PMLR, 2020. [2] Chen, Jinglin, et al. "On the statistical efficiency of reward-free exploration in non-linear rl." Advances in Neural Information Processing Systems 35 (2022): 20960-20973. [3] Mania, Horia, Michael I. Jordan, and Benjamin Recht. "Active learning for nonlinear system identification with guarantees." 
arXiv preprint arXiv:2006.10277 (2020). [4] Sukhija, Bhavya et al. "Optimistic active exploration of dynamical systems." Advances in Neural Information Processing Systems 36 (2023): 38122-38153. [5] Nair, Ashvin V., et al. "Visual reinforcement learning with imagined goals." Advances in neural information processing systems 31 (2018). [6] Hu, Edward S., et al. "Planning goals for exploration." arXiv preprint arXiv:2303.13002 (2023). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Instead of a goal-conditioned policy, could one not train a policy with $1/N_t(s, a)$ as reward (or $-N_t(s, a)$)? In essence, this policy will try to visit states that have a low visitation count. While the reward in this setting is non-stationary, empirically this might perform better. What do the authors think about it? 2. Could [4] from above be used for active learning of dynamics and then later exploitation for general continuous state-action spaces with Mon-MDPs? Moreover, I am curious if any general active learning/reward-free RL algorithm can be combined with greedy exploitation to obtain convergence guarantees for Mon-MDPs. I am happy to increase my score if my concerns and questions above are adequately addressed. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors address the limitations of their work in the main paper (lines 336 -- 349). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
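The count-based baseline suggested in question 1 above can be sketched as follows. This is a hypothetical toy implementation on a small chain MDP (our own construction, not code from the paper): tabular Q-learning on the non-stationary intrinsic reward $1/N_t(s, a)$, which steers the agent toward rarely visited pairs.

```python
import numpy as np

# Toy sketch of the count-based baseline suggested in the review (our own
# construction, not code from the paper): tabular Q-learning on the
# non-stationary intrinsic reward 1 / N(s, a), on a 3-state chain with
# actions 0 = left, 1 = right.
n_states, n_actions, gamma, alpha, eps = 3, 2, 0.9, 0.5, 0.3
rng = np.random.default_rng(1)
Q = np.zeros((n_states, n_actions))
N = np.zeros((n_states, n_actions))

def step(s, a):
    return max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)

s = 0
for _ in range(10_000):
    a = int(Q[s].argmax()) if rng.random() > eps else int(rng.integers(n_actions))
    N[s, a] += 1
    r = 1.0 / N[s, a]  # intrinsic reward: high for rarely visited pairs
    s2 = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

print(bool((N > 0).all()))  # every state-action pair gets visited
```

As the reviewer notes, the intrinsic reward is non-stationary (it shrinks as counts grow), so Q-values for well-visited pairs decay and the greedy policy is pushed toward under-visited ones.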
Rebuttal 1: Rebuttal: Thank you for your helpful insights and suggestions. Below, we discuss the main points you raised. 1. Thank you for the additional references; we will add them to the final version. In particular, we think that [1] is indeed close to our approach in the use of SF. It is different, however, as it has two separate stages for exploration and exploitation, while ours balances between the two using the coefficient $\beta_t$ (Algorithm 1, line 2). We would like to stress, however, that [3], [4], and [6] propose model-based algorithms. In our paper, we focused on model-free RL and thus evaluated our algorithms against model-free baselines. As discussed in Section 5, we will devote future work to model-based versions of our algorithm. 2. We don't have a rate of convergence for Theorem 1, only an asymptotic convergence proof. Evidence for the practicality, in terms of sample efficiency, of our proposed algorithm is instead given by our thorough empirical experiments. Provable rates of convergence often don't imply practical algorithms (e.g., [1] and [2], which don't include any experimental results), whereas practicality was one of the goals of our paper. 3. We agree that this is an interesting baseline, and we will add it to the final version. 4. Active RL (ARL) is perhaps the closest framework to Mon-MDPs, but its setting is simpler. To the best of our knowledge, ARL considers only binary actions to request rewards, constant request costs, and perfect reward observations. By contrast, in Mon-MDPs (a) the observed reward depends on the monitor — a process with its own states, actions, and dynamics; (b) there may be no direct action to request rewards, and requests may fail; (c) the monitor reward is not necessarily a cost. For these reasons, ARL can be seen as a special case of Mon-MDPs, and therefore cannot fully capture the complexity of Mon-MDPs. --- Rebuttal Comment 1.1: Title: Response to Author's rebuttal Comment: 2. 
I think not having convergence rates is a drawback of the work. What is it that prevents the authors from providing them? Intuitively, wouldn't a rate on the visitation frequency directly result in a rate for Theorem 1? This is at least what is shown in [4] (cf. Lemma 13). 4. I am not sure if this is true for methods such as [3, 4]. In the end, they are similar in spirit to the proposed algorithm -- they visit the states where they have the highest uncertainty/lowest visitation count (exploration phase). In principle, they could be combined with a similar exploitation phase as proposed by the authors. Am I missing something here? In the end, this approach would be equivalent to the baseline discussed in the third point. The algorithm in [6], on the other hand, is similar to what the authors propose. Could the authors comment more on this? --- Reply to Comment 1.1.1: Title: About convergence rate and related work Comment: **About the convergence rate** Getting a non-trivial convergence rate is likely not possible. The algorithm as presented uses $\epsilon$-greedy as the exploration method to learn the goal-conditioned S-functions. In the worst case, this will admit $\epsilon$-greedy's poor convergence rate (and may take exponential time in the MDP size even just to visit the goal once). We could give a competitive convergence rate if optimal goal-conditioned S-functions are known in advance, but that didn't feel particularly informative. We could also replace the exploration mechanism (i.e., to follow $\rho$ and S-functions) within the exploration phase with a more sophisticated mechanism, but we are explicitly aiming for a model-free algorithm, so many choices common in the literature are not suitable (e.g., MBIE, UCRL). For example, in [4], Lemma 13 refers to Eq. 16, where the problem considers the true transition function $f^*$, and they make use of it in Corollary 7. On the contrary, we don't learn any model, and we don't know the true S-function $S^*$ either. 
Even knowing $S^*$, we don't think Lemma 13 can be straightforwardly applied to our case. We chose a model-free algorithm (Q-learning) because we wanted to keep the method as simple as possible, while demonstrating its effectiveness through experimentation rather than theory, particularly showing that it can handle partially observable reward settings like Mon-MDPs (where traditional MDP exploration mechanisms can fail). A middle ground might be to present a convergence rate that implicitly depends on the convergence rate of whatever algorithm is used to learn the goal-conditioned S-functions. This could show how much is lost due to the goal-directed exploration component that is needed to handle Mon-MDPs. We would then continue with experiments demonstrating its strong performance with the simple $\epsilon$-greedy mechanism (even though it is not theoretically well-motivated) across the tested environments. There is another angle of analysis that is distinctly more difficult, as it likely would need new theoretical machinery (beyond the scope of this work). In particular, we believe that most of the advantage of our algorithm comes from the goal-conditioned policies finding sub-optimal but still useful visitation policies. These policies can still be used to dramatically accelerate the learning of the Q-functions, allowing fast exploitation possibly even before the S-functions have identified optimal exploration policies $\rho$. We believe that this is a great advantage of not breaking the problem into some explicit exploration-only phase (e.g., as in [1]). This would almost certainly need some instance-specific analysis, though. Would the middle-ground option of a convergence rate that depends on the convergence rate of the goal-conditioned policy learning be a sufficient addition? In any case, we can add a more thorough discussion of this in the paper. **About related work** Yes, they could be combined with our approach. 
Maybe there has been some misunderstanding: we didn't mean that [3], [4], and [6] are unrelated, just that they are different in being model-based. For example, [3] estimates uncertainty in the feature space and uses a model to plan a trajectory to high-uncertainty states. We use counts (rather than uncertainty) and don't plan trajectories, but instead follow one-step S-function values. While replacing uncertainty with counts could be straightforward, [3] still has an extra component (the model) that allows for more powerful long-term reasoning. We believe that combining our algorithm with [3] and [4] could be promising future work. Regarding [6], the main similarity is the presence of a goal-conditioned policy for exploration, but their algorithm is still quite different (besides being model-based). First, [6] alternates between exploration and exploitation at every episode, while ours uses a more grounded criterion (the ratio $\beta$). Second, the goal of their goal-conditioned policy is randomly sampled from the replay buffer and optimized via MPPI, while ours is the state-action pair with the lowest count. Third, their reward for training the goal-conditioned policy is given only at the end of the trajectory and depends on the number of actions needed to reach the goal, while ours is given at every step by SRs. We'd like to stress once more, however, that we didn't mean to say that [3], [4], and [6] are unrelated, and we will reference them in the final version.
Summary: This paper considers the Reinforcement Learning problem in Monitored MDPs. Monitored MDPs are a formalism that has been recently introduced by Parisi et al. (AAMAS 2024), in which the value of the policy is computed on the rewards generated by the environment and those produced by a "monitor". However, the agent cannot directly observe the rewards generated by the environment, because the monitor can modify the reward observed by the agent. The authors propose an algorithm for Monitored MDPs, which alternates explicit exploration and exploitation. The method has been validated against some classic RL algorithms and variants. Strengths: (quality) The paper is very well-written. The topics are presented very clearly and the text can be read from top to bottom without any major misunderstanding. The related work section covers the essential work in the area (at least, the ones I am aware of), and the papers are described well. The main contribution is motivated and it is described completely. (relevance) The study of Monitored MDPs has important relations with partially observable environments, and it is very relevant for the broad RL community. Moreover, it clarifies the difference between observing rewards and being only evaluated on them. (reproducibility) The authors provided a full, well-documented Python source code of the algorithm, which ensures reproducibility. They also provided the necessary configuration files. (soundness) The proposed algorithm appears to be appropriate for the class of domains and the monitors considered in the experimental section. Also, the evaluation considers 5 monitors for each of the 4 environments, which is an interesting composition. Weaknesses: 1- Corollary 1 is the only formal guarantee regarding the performance of the algorithm that has been obtained in the paper. However, this only involves the asymptotic convergence. 
Moreover, the proof relies on the fact that the algorithm is an exploratory policy that becomes greedy in the limit. This fact does not seem to suffice for obtaining $\hat{Q} \to Q^*$, because the monitor may arbitrarily modify the reward used for constructing $\hat{Q}$. I believe there is an implicit (but missing) assumption that the monitor may only hide rewards, and that rewards will be shown in all states an infinite number of times. 2- The evaluation compares the algorithm with 4 other baselines in some environments. However, none of the baselines have been specifically designed for Monitored MDPs, nor for any component of partial observability on rewards, and it is unsurprising that many of these do not perform well. As such, the experimental evaluation gives little insight into the improvement that the proposed algorithm yields among Monitored MDP algorithms. If RL algorithms for this class do not exist yet, because the formalism is recent, then it should also be compared with algorithms that are aware that rewards are partially observable, or nonstationary. Finally, the black line, which is Q-learning with optimistic initialization, is taken as a representative of "optimism". However, this is only a specific instance of optimistic approaches, and I believe that "optimism" would be better represented by UCB-style RL algorithms. 3- The contribution of the paper is limited for the following reasons: a- The algorithm mostly succeeds because it tries to reach all states of the MDPs via explicit exploration. This is generally intractable for large state spaces, or MDPs with high (or infinite) diameter. Optimism solves this issue by avoiding exhaustive exploration. This makes the idea behind the algorithm more naive than other existing optimistic approaches, especially for larger state spaces or small probabilities. Indeed, the algorithm does not seem to directly address the fact that a monitor is present, even though it has this prior information available. 
b- The use of Successor Representation made in the paper is mostly standard in the literature. The authors say that only Machado et al. [40] used SR to drive exploration. However, we should consider that using SR as value functions is completely equivalent to placing unit rewards in goal states. Then, similar approaches are taken by the works in goal-conditioned RL that learn goals using unit rewards, even though SR are not explicitly mentioned in those papers. Technical Quality: 3 Clarity: 4 Questions for Authors: 4- Why does the UCB baseline use an epsilon-greedy policy? I would expect bonuses to be sufficient for exploration. Epsilon-greedy may unnecessarily slow it down. The authors may also address any of the weaknesses above, especially number 1. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Most limitations have been already discussed in the paper. The work has no direct societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
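For reference, the kind of UCB-like action selection discussed in question 4 (an exploration bonus added to Q, with no $\epsilon$-greedy on top) can be sketched as follows. This is an illustrative stub of the generic UCB1-style rule, not the paper's actual baseline code:

```python
import math

# Illustrative sketch (our construction, not the paper's baseline): a
# UCB-like greedy operator for tabular Q-learning. The bonus shrinks with
# the visitation count N(s, a); unvisited actions are tried first.
def ucb_action(Q_row, N_row, t, c=2.0):
    """Pick argmax_a Q(s, a) + c * sqrt(log(t) / N(s, a))."""
    best_a, best_v = 0, -math.inf
    for a, (q, n) in enumerate(zip(Q_row, N_row)):
        v = math.inf if n == 0 else q + c * math.sqrt(math.log(t) / n)
        if v > best_v:
            best_a, best_v = a, v
    return best_a

print(ucb_action([0.0, 5.0], [10, 0], t=11))  # unvisited action 1 wins
```

With well-visited actions the bonus becomes small and the rule reduces to greedy on Q, which is why no extra $\epsilon$-greedy randomization is needed in the bandit setting; whether that suffices under partially observable rewards is exactly what the question probes.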
Rebuttal 1: Rebuttal: Thank you for your helpful insights and suggestions. Below, we discuss the main points you raised. 1. You are correct, we consider only "truthful monitors" (as formalized in *"Parisi et al., Monitored Markov Decision Processes, 2023"*), i.e., monitors that either hide the reward or show it as is. We wrote in page 3, footnote 2 that *"the monitor does not alter the environment reward"*. We will make it more explicit in Corollary 1. This is needed for our proof of convergence. The investigation of monitors that can change the reward is out of the scope of this paper and relates to other directions of research (e.g., *"Ng et al., Policy invariance under reward transformations: Theory and application to reward shaping, 1999"*). Relaxing this condition, in fact, poses an additional challenge that could be addressed by having a belief over the reward (e.g., *"Marom and Rosman. Belief reward shaping in reinforcement learning, 2018"*). 2. You are correct that no baselines exist yet for Mon-MDPs because the formalism is recent. For a fair comparison against existing baselines, we included standard MDPs in our evaluation ("Full Observ." environments in Figures 5 and 6). The results show that even in classic MDPs our approach works better than existing ones: most baselines converge, but they need a significantly larger amount of data. In Mon-MDPs, only ours converges in most seeds and the baselines fail most of the time. We didn't include UCB-based RL algorithms like UCRL because they are model-based and we considered only model-free methods for a fair comparison. Model-free alternatives (e.g., *"Dong et al., Q-learning with UCB exploration is sample efficient, 2020"*) usually provide guarantees of convergence (with convergence rates) but are impractical and lack evaluation on even simple domains. Also, we included a UCB-like baseline (orange line in Figures 5 and 6) that uses the UCB bonus to encourage exploration. 
We are happy to include more baselines if you have any suggestions. 3. a) We agree, and we already discussed extending our algorithm to larger/continuous Mon-MDPs and to model-based approaches (e.g., learning the monitor model) in Section 5. We are currently working on extending it to larger/continuous domains using UVFAs and alternatives to counts, and we plan to submit a follow-up soon. Please refer to the response to reviewer **zrqd** for more details. We argue, however, that these limitations should not overshadow the contribution of the paper. We believe that the whole Mon-MDP framework is still very new and unexplored, and that our approach, proof of convergence, and evaluation are novel and thorough enough, and set the stage for many future directions of research. 3. b) Thank you for this insight. How would it be equivalent to placing unit rewards in goal states, though? In our algorithm, one SF places unit rewards in **one** state, and then we learn a SF for **every** state. We see similarities between the classic use of SF and our approach, but usually SF are given an explicit goal (or features representing it) and are applied in the context of transfer learning. In our case, there are as many goals as state-action pairs. This is why we wrote that only Machado et al. applied SF for exploration, since their work was tailored to encourage visitation of state-action pairs. We are happy to cite more related work if you have any suggestions. 4. Greedy UCB performed better in some environments but very poorly in others. The average performance was best with the addition of $\epsilon$-greedy action selection. --- Rebuttal Comment 1.1: Comment: 1. I appreciate this important change. This assumption is very reasonable and would not limit the impact of the paper. 4. 
b) I said that this use of successor representations is mostly standard in the goal-conditioned RL literature because, as far as the policy $\rho$ is concerned, the approach appears to be equivalent to learning $|SA|$ policies using Q-learning, where a reward of 1 is placed in one state-action pair for each policy. The overall algorithm is still original, to the best of my knowledge. I mainly argue that the exploration strategy is not only related to directed exploration with SR features, but also to goal-conditioned policies. The other replies are also relevant. My main remaining concern is that the technique "is generally intractable for large state spaces, or MDPs with high (or infinite) diameter.", given that the algorithm must learn a distinct policy for each state and action in the MDP. Despite this significant limitation, the paper is well written, clear, correct and the results appear to be mostly reproducible. So, I have increased my evaluation to weak accept.
Rebuttal 1: Rebuttal: We thank the reviewers for their feedback and helpful suggestions. We are pleased to see that all reviewers appreciate the core idea of our paper — a novel exploration algorithm for Mon-MDPs, where rewards are partially observable — and the thorough evaluation against different baselines on many environments/monitors. We are also happy to see that all reviewers find our paper intuitive and very well-written. We have replied to each reviewer separately to address some specific clarifications. In this comment, we recap what we will introduce in the final version of the paper after taking into account the reviews. * We will make it clearer why Mon-MDPs are different from sparse-reward MDPs (**zrqd** and **Y2Vd**). * We will add more related work (**ajhV**). * We will clarify that the agent sees both the environment and the monitor state (**Y2Vd**). * We will make it more explicit that convergence is guaranteed under a "truthful monitor" (**DPys** and **Y2Vd**). * We will rephrase why S-functions lead the agent to the goal state "as fast as possible" (**zrqd**). * We will add another baseline that learns a separate Q-function with intrinsic reward $1 / N(s,a)$ (**ajhV**). * We will discuss more why Mon-MDPs are so challenging and why all baselines perform poorly (**zrqd**). * We will discuss more about future work and how we plan to extend our algorithm to continuous spaces (**DPys** and **zrqd**). We further thank reviewer **DPys** for saying that *"the main contribution is motivated and it is described completely"* and *"the study of Monitored MDPs has important relations with partially observable environments, and it is very relevant for the broad RL community. Moreover, it clarifies the difference between observing rewards and being only evaluated on them"*. 
We would like to follow up on this, and emphasize that our paper highlights an important hole in the RL literature: MDPs assume rewards are observable, and areas of research that investigate cases where rewards are not observable (e.g., Active RL) still make limiting assumptions. Mon-MDPs place no such limitations and formalize the observability of rewards as a separate Markov process, giving great freedom in how to model real-world problems. At the same time, however, Mon-MDPs highlight the lack of efficient algorithms with guarantees of convergence and the failure of optimism. While this is well-known for bandits (*"Lattimore and Szepesvari, The end of optimism? An asymptotic analysis of finite-armed linear bandits, 2017"*), the RL literature lacks a similar analysis on MDPs. With this paper, we want to contribute to filling this gap with the introduction of an algorithm with guarantees of convergence and a collection of benchmarks, and to set the stage for more exciting future research.
NeurIPS_2024_submissions_huggingface
2024
Unveiling the Potential of Robustness in Selecting Conditional Average Treatment Effect Estimators
Accept (poster)
Summary: This paper presents a method for selecting a conditional average treatment effect estimator using the distributionally robust optimization (DRO) technique. The proposed method does not require specifying models for nuisance functions. Strengths: 1. The result is strong, since this paper provides a method of evaluating multiple CATE estimators without specifying another baseline CATE estimator. As a result, more reliable assessment of CATE estimators is possible. 2. The empirical comparison is extensive. This paper evaluates many existing CATE estimators and transparently exhibits the performance of the proposed method. Despite the weaknesses discussed below, I think the paper's result is interesting and powerful in practice. Weaknesses: **Misleading statements** The following statement is generally not true. > According to Rubin Causal Model [56], the CATE is determined by comparing potential outcomes under different treatment assignments (i.e., treat and control) for a specific individual. I think the authors are confusing the "unit-level treatment effect" with the "conditional treatment effect". The unit-level treatment effect $Y_i(T=1) - Y_i(T=0)$ "*for a specific individual*" $i$ cannot be estimated. The CATE is a group-level treatment effect, where the group is specified by the conditioned covariates. Please explicitly distinguish between the term "CATE" and an individual treatment effect. **Noninformative Introduction** The key research question in line 38, "Given multifarious options for CATE estimators, which should be chosen", is not well-defined in the Introduction section. As a reader, the minimum expectation for the Introduction section is to see what the limitations of previous methods were, with a simple example illustrating the limitations. 
However, by the design of the paper, until reaching Section 3, Equation (2), it's impossible to know (1) what the plug-in and pseudo-outcome metrics are and (2) what their problems were. The unclarity of the problem setting in Sections 1 and 2 makes the first two sections non-informative. **Redundancy** I don't see the point of categorizing previous methods as plug-in and pseudo-outcome. All previous methods rely on specifying nuisances, while the proposed method does not. I think such categorization is redundant and unnecessary for the literature review and explanation of the proposed method. **Subjective statements** I think this paper contains subjective evaluations of previous methods, which, as a reader, feels somewhat embarrassing. For example, in line 125, "previous high-quality paper" seems awkward and redundant — "previous paper" is enough. Similarly, "standing on the shoulders of giants" is also embarrassing. These kinds of subjective assessments of previous papers dilute the attractiveness of this paper and distract the authors from focusing on their own work. Technical Quality: 4 Clarity: 2 Questions for Authors: 1. What is the relation with the paper [[1911.02029] Selective machine learning of doubly robust functionals (arxiv.org)](https://arxiv.org/abs/1911.02029)? 2. Is the empirical result matched with previous literature [58, 16, 45]? Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 4 Limitations: The paper assumes ignorability, which is a pretty strong assumption in general. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 1tzr, We sincerely appreciate your positive recognition of the soundness of our theoretical results and the extensiveness and transparency of our empirical experiments. We are also grateful for your valuable suggestions in improving our presentation and bringing new insights to future work. Below we will respond to your comments one by one. **W1. Distinguishing Conditional Average Treatment Effect (CATE) and Individualized Treatment Effect (ITE)** **R1:** We agree with your argument about the difference between CATE and ITE. In the revised version, we will clarify this in the Introduction, explicitly defining ITE as $Y^1-Y^0$ (unit-level) and CATE as $\mathbb{E}[Y^1-Y^0|X]$ (subgroup-level). **W2. The presentation of the Introduction** **R2:** We will restructure it as follows: * Background on causal inference and CATE. * Motivation for CATE estimator selection and existing methods for CATE estimator selection. * Challenges faced by current methods (moving content from Section 3.1 here). * Introducing our DRM method. This restructuring will allow readers to quickly grasp existing CATE model selection metrics and their limitations before introducing our method, thus providing clearer motivation for our work. Thank you for your suggestion! **W3. Redundancy of categorizing previous methods as plug-in and pseudo-outcome** **R3:** Thank you for this comment; we still think it is proper to keep this categorization for the following reasons. * It maintains consistency with [16, 45]. * The plug-in and pseudo-outcome approaches differ fundamentally in their construction form, as detailed in Section A.2: 1. Plug-in metrics approximate the ground-truth CATE function using $\tilde{\tau}(X)$, which relies on off-the-shelf nuisance functions and only the covariates $X$. 2. Pseudo-outcome metrics approximate the ground-truth CATE function using $\tilde{Y}(X,T,Y)$, incorporating off-the-shelf nuisance functions and variables $(X,T,Y)$. 
* Both DR-based and R-based objectives can be used to construct plug-in and pseudo-outcome metrics (see Section A.2). This categorization helps prevent potential confusion between Plug-R/Plug-DR and Pseudo-R/Pseudo-DR. We will emphasize their distinction in lines 115-123 of the revised paper. **W4. Subjective statements** **R4:** We appreciate this helpful suggestion for our presentation. We have removed the subjective statements and will ensure our tone remains objective and rigorous in future paper writing. **Q1. What is the relation with the paper [arxiv 1911.02029] Selective machine learning of doubly robust functionals?** **R1:** **(a) Difference**: While both papers address model selection in causal inference, they differ significantly in the quantity of interest and the selection objects: Quantity of interest: * Our paper: Conditional Average Treatment Effect (CATE), i.e., $E[Y^1 - Y^0|X]$. * Their paper: Average Treatment Effect (ATE), i.e., $E[Y^1 - Y^0]$. Selection objects: * Our paper: Selects candidate CATE functions. The specific details are stated in Section 3, Background of CATE Estimator Selection. * Their paper: Selects candidate nuisance functions in the doubly robust (DR) estimator of the ATE. The specific details are as follows: The DR estimator of the ATE is $\theta_{DR}=\frac{1}{n}\sum_{i=1}^{n} [\mu(X_i,1)-\mu(X_i,0)+\frac{T_i}{\pi(X_i)}(Y_i-\mu(X_i,1))-\frac{1-T_i}{1-\pi(X_i)}(Y_i-\mu(X_i,0))]$. In $\theta_{DR}$, the nuisance function $\mu(X,T)$ can be fitted with different machine learners, e.g., Ridge regression, Support Vector Machine, Random Forest, etc., and the nuisance function $\pi(X)$ can be fitted with different machine learners, e.g., Logistic Regression, Support Vector Machine, Random Forest, etc. This learner-fitting necessity is the same as in plug-in and pseudo-outcome methods for CATE estimator selection. 
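The DR (AIPW) estimator of the ATE written out above can be checked numerically. The following sketch (our illustration, not code from either paper) uses simulated data in which oracle nuisance values stand in for fitted $\mu$ and $\pi$:

```python
import numpy as np

# Numerical sketch of the doubly robust (AIPW) ATE estimator from the
# rebuttal. mu1_hat, mu0_hat, and pi_hat stand in for fitted nuisance
# functions; here we plug in their oracle values on simulated data,
# purely for illustration.
rng = np.random.default_rng(0)
n = 50_000
X = rng.normal(size=n)
pi = 1 / (1 + np.exp(-X))             # true propensity score P(T=1 | X)
T = rng.binomial(1, pi)
Y = 2.0 * T + X + rng.normal(size=n)  # true ATE = 2

mu1_hat = 2.0 + X                     # E[Y | X, T=1]
mu0_hat = X                           # E[Y | X, T=0]
pi_hat = pi

theta_dr = np.mean(
    mu1_hat - mu0_hat
    + T / pi_hat * (Y - mu1_hat)
    - (1 - T) / (1 - pi_hat) * (Y - mu0_hat)
)
print(round(float(theta_dr), 1))  # close to the true ATE of 2
```

The augmentation terms have mean zero when either nuisance is correct, which is the "doubly robust" property the cited selective machine learning paper exploits when choosing among candidate $\mu$ and $\pi$ learners.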
But differently, the main goal of their paper is to select the nuisance parameters $\mu$ and $\pi$ for the DR estimator of the ATE, from $J_{\mu}$ candidate outcome nuisance functions $\mu_1(X,T), ..., \mu_{J_{\mu}}(X,T)$ and $J_{\pi}$ candidate propensity score nuisance functions $\pi_1(X), ..., \pi_{J_{\pi}}(X)$. **(b) Connection**: Interestingly, the nuisance function selection method in their paper offers insights for improving the baseline CATE selection methods Plug-DR and Pseudo-DR. Both Plug-DR and Pseudo-DR metrics involve the process of constructing $\tilde{Y}_i=\mu(X_i,1)-\mu(X_i,0)+\frac{T_i}{\pi(X_i)}(Y_i-\mu(X_i,1))-\frac{1-T_i}{1-\pi(X_i)}(Y_i-\mu(X_i,0))$ (see Sections A.1 and A.2). Note that, similar to the mentioned paper, one also needs to select a machine learner to fit the nuisance functions $\mu(X,T)$ and $\pi(X)$ for constructing $\tilde{Y}$. Their approach could be helpful in choosing appropriate machine learners for fitting the nuisance functions $\mu(X,T)$ and $\pi(X)$ in constructing $\tilde{Y}$. This opens an interesting avenue for future research, i.e., exploring nuisance function selection methods for Plug-DR and Pseudo-DR metrics in CATE model selection. We will discuss this interesting research direction in Related Work. **Q2.** Is the empirical result consistent with [58, 16, 45]? **R2:** Our results align with some key findings from previous literature. For example, in lines 288-289, we observe the excellence of the R-objective in many scenarios, corroborating the findings in [58]. In lines 290-292, the results confirm the outperformance of R-based selectors as CATE complexity decreases, consistent with the conclusions in [16]. Notably, our analysis extends beyond previous work by examining CATE estimator performance against the level of unobserved confounders and selection bias, providing new insights into CATE estimator selection. Finally, we would like to thank you again for your time and effort in reviewing our paper. 
Your feedback is helpful and valuable in improving our work! We remain very open to discussing any additional comments you may have. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for addressing my concerns and questions. I sincerely hope that my concerns will be fully addressed as promised, because I believe this paper is strong and deserves acceptance. Thank you for the additional experiments and detailed explanations. I will raise my score. --- Rebuttal 2: Title: Thank you for your reply Comment: Dear Reviewer 1tzr, Thank you so much for your recognition and appreciation! We will definitely address all your comments in the final version, as we also believe the suggestions are very important in improving our paper quality. Thank you again for your prompt reply and we truly appreciate your time and help during the whole review process! Best regards, Authors of paper 4584
Summary: The paper addresses the challenging problem of Conditional Average Treatment Effect (CATE) model selection, where the goal is to develop a model selection metric using only observed data, as counterfactual labels are not available. Previous work has focused on model selection metrics that involve learning nuisance parameters on the validation set, making the task of choosing the appropriate CATE model selection criterion complex. The authors propose a solution with a model selection criterion, termed DRM, that avoids the need for training additional nuisance models and is designed to be robust to distribution shifts between treatment and control groups. They first derive the optimization objective for DRM and then propose a relaxation that is tractable with observed data samples. The paper includes a finite sample analysis of the proposed DRM estimator and benchmarks it against existing model selection criteria using synthetic datasets. Strengths: * The paper makes a significant contribution to CATE model selection by introducing the DRM criterion, which is notable for not requiring the training of additional nuisance models. Prior work often involves specific inductive biases (e.g., S-Learner, T-Learner, DRLearner) and necessitates training extra models, which can lead to sub-optimal choices and affect model selection performance. The DRM criterion addresses these limitations by not being tied to any particular CATE estimation design. Instead, it derives an upper bound on the ideal PEHE metric that is tractable with observed data alone, avoiding the need for additional nuisance models. * The theoretical formulation of DRM is compelling and enhances interpretability by linking the bound to the magnitude of the shift between control and treatment distributions. This directly addresses the challenge of distribution shift in effect estimation. 
Additionally, the authors derive an equivalent formulation in terms of the KL divergence between the control covariates ($P(X|T=0)$) and treatment covariates ($P(X|T=1)$), which can be computed using observed data. This approach provides a practical way to assess the impact of distribution shift on CATE model selection. * The derivation of the DRM criterion from the PEHE metric, along with the proposed estimation strategy, appears to be novel to the best of my knowledge. * The paper is well-written, with the methodology behind DRM presented in an accessible manner. The background section effectively motivates their approach, and the experimental results are clearly presented, accompanied by a thorough discussion of the performance compared to baselines. Weaknesses: While the authors have done a good job in the experiments by considering diverse synthetic datasets and interesting ablation studies, I think the empirical analysis can still be improved. I have summarized my concerns with the experiments below, which would help to provide a clearer understanding of the performance differences and strengthen the comparison of DRM with other model selection criteria. * The authors use only three base ML models to train the CATE estimators, which limits the diversity of models considered for model selection. To make the model selection problem more challenging and provide a more thorough evaluation, it would be beneficial to train a larger number of CATE estimators. This would offer a more comprehensive assessment of the DRM criterion's performance and robustness across a broader range of models. * Estimating the nuisance parameters for model selection metrics is crucial for their performance, as highlighted by Mahajan et al. [1], who showed that plug-in metrics based on T-Learner/X-Learner are effective for model selection. 
To ensure a fair analysis, the authors should consider using state-of-the-art techniques, such as AutoML, for estimating the nuisance parameters associated with the model selection metrics. Currently, the performance of many model selection metrics might be suboptimal due to potentially poor choices of nuisance parameters. * The comparison of different model selection criteria in the results raises some concerns, particularly due to the high variance observed in the performance of several baselines. This variance might stem from poor results on a few datasets, while these baselines could be comparable to DRM on a majority of datasets. The authors should clarify this in the paper and offer more detailed insights, such as results broken down by dataset or other methods to highlight trends and performance variations. References: [1] Mahajan, Divyat, Ioannis Mitliagkas, Brady Neal, and Vasilis Syrgkanis. "Empirical Analysis of Model Selection for Heterogeneous Causal Effect Estimation." arXiv preprint arXiv:2211.01939 (2022) Technical Quality: 3 Clarity: 3 Questions for Authors: * In Table 1, I don't understand why DRM is more robust to selection bias as stated by the authors. Are the authors comparing the performance gains with DRM over the best baselines across scenarios ($\mathcal{E}= \{0, 1, 2\}$)? It doesn't seem like the performance gap becomes better with more selection bias. Also, the performance of Plug-R or other baselines across different scenarios ($\mathcal{E}= \{0, 1, 2\}$) is statistically indistinguishable given the difference in means lies within the confidence interval. So I am not sure whether any of the prior metrics are susceptible to selection bias in this empirical study. * Why is the Plug-T metric much worse than the Plug-S metric? T-Learner should provide more flexibility in choosing the regression models than S-Learner, so I don't understand why the estimates of the ground truth would be worse with T-Learner (hence worse model selection). 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors have properly addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 9qbs, Thank you for your thorough review of our paper. We are delighted that you recognize the novelty and significance of our DRM method in advancing CATE model selection, particularly its nuisance-free design and its robustness to distribution shift. Below we will address your comments. * Suggestions: **S1.** Considering a larger set of base models for CATE estimators. **R1.** As you may note, prior research has conducted thorough and comprehensive empirical investigations, providing valuable insights for baseline CATE selectors. In our work, our primary focus has been on proposing a new metric for CATE estimator selection, and we believe the 24 CATE estimators with 8 widely-used meta-learners and 3 base ML models were adequate for the current examination. However, we do agree that considering a larger set of base models could offer a more comprehensive analysis. As such, we have expanded our experiments to include an additional Neural Net model; the results are reported in Table 4 of the rebuttal pdf, and the updated results will be added to the revised paper. Specifically, the Net has 5 hidden layers [200, 200, 200, 100, 100], each with the ReLU activation function. The model is trained using the Adam optimizer with a learning rate of 0.001, a batch size of 64, and 300 epochs. **S2.** Using AutoML for estimating nuisance parameters. **R2.** In the original paper, we tuned hyperparameters for each model in line with [16]. The details are discussed in GR3 of the top-page rebuttal. Thank you for recommending AutoML. A recent ICLR paper [45] has shown the benefits of using AutoML to search a broader grid of hyperparameters, ensuring well-trained nuisance functions. We will use AutoML to update our results and also in future research for hyperparameter tuning. **S3.** The high variance phenomenon should be further discussed. **R3.** Thank you for this insightful suggestion. 
The high variance phenomenon is discussed in GR2 of the top-page rebuttal and the discussion will be added to the revised paper. * Questions **Q1.** Not sure whether any of the prior metrics are susceptible to selection bias. Why is DRM robust to selection bias compared to baselines? **R1.** It is widely acknowledged in the causal inference literature that increasing selection bias can lead to larger biases in estimators and selectors (e.g., [8, 30, 67, 68]). Accordingly, for most selectors, the regret is expected to increase with higher selection bias - a trend clearly evident in the results presented in Table 1. Therefore, we would not expect a regret-decreasing trend with higher selection bias for most selectors, including our DRM method. The DRM metric is robust to selection bias when comparing its performance to other baseline selectors, such as Plug-R. Specifically, as the level of selection bias increases from $\xi=1$ to $\xi=2$, the average regret of Plug-R increases significantly, from 1.27 to 4.38 - a jump of 3.11. In contrast, the average regret of DRM only increases from 0.11 to 1.21, a much smaller change of 1.10. However, it is important to note that in the case of no selection bias ($\xi=0$), DRM does not show a clear advantage over some baselines. This aligns with our expectations because DRM is designed to select estimators that are robust to distribution shifts. **Q2.** Why is Plug-T worse than Plug-S? **R2.** The key factor affecting the performance of Plug-T relative to Plug-S appears to be the level of heterogeneity. Heterogeneity refers to the complexity of the CATE function relative to the potential outcome functions. In our paper, the parameter $\rho$ controls the degree of heterogeneity. According to Table 1, the average regret gap between Plug-T and Plug-S decreases as $\rho$ increases from 0 to 0.3. Specifically, the regret gap decreases from 38.12 - 3.40 = 34.72 to 34.38 - 2.34 = 32.04 as heterogeneity increases. 
This suggests a trend that Plug-T would perform on par with (or better than) Plug-S as the level of heterogeneity increases. This observation aligns with insights from prior literature. For example, the authors in [16] found that the T-learner outperforms the S-learner in terms of PEHE when the heterogeneity is much larger. Additionally, [40] noted that the S-learner performs better when the CATE complexity is lower than the potential outcome functions, while the T-learner is preferred when the CATE is more complex. Furthermore, the authors in [45] used the original ACIC data but discarded instances with low-variance CATE to ensure sufficient heterogeneity, where they found that T-learner outperforms S-learner. Therefore, based on our results and the insights from previous studies, we believe the level of heterogeneity is a key factor that affects the relative performance of Plug-T and Plug-S. It would be interesting to further investigate other possible factors that may influence their comparative performance, as they are widely used meta-learners in real-world applications. Finally, we would like to express our sincere gratitude for your time and effort in reviewing our paper and providing valuable suggestions and feedback! We are very open to addressing and discussing any further comments or questions you may have. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response to my questions! My concerns have been addressed and I have updated my rating accordingly. --- Rebuttal 2: Title: Thank you for your reply Comment: Dear Reviewer 9qbS, Thank you so much for your recognition of our paper and rebuttal! We will carefully incorporate your suggestions in finalizing our paper. We would like to express our gratitude again for your time and effort in the review process! Best regards, Authors of paper 4584
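As background for the Plug-S/Plug-T discussion in the preceding rebuttal, here is a minimal hedged sketch of the two meta-learner designs. The base model (a random forest with illustrative hyperparameters) and the helper names are our own, not the paper's tuned implementations.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def s_learner_cate(X, T, Y):
    """S-learner: one model f(x, t) on pooled data; CATE(x) = f(x, 1) - f(x, 0).
    Tends to shrink toward a zero effect, which helps when heterogeneity is low."""
    f = RandomForestRegressor(n_estimators=100, min_samples_leaf=5, random_state=0)
    f.fit(np.column_stack([X, T]), Y)
    one, zero = np.ones(len(X)), np.zeros(len(X))
    return f.predict(np.column_stack([X, one])) - f.predict(np.column_stack([X, zero]))

def t_learner_cate(X, T, Y):
    """T-learner: separate models per arm; CATE(x) = mu1(x) - mu0(x).
    More flexible, but each arm's model sees only its own data."""
    rf = lambda: RandomForestRegressor(n_estimators=100, min_samples_leaf=5, random_state=0)
    mu1 = rf().fit(X[T == 1], Y[T == 1])
    mu0 = rf().fit(X[T == 0], Y[T == 0])
    return mu1.predict(X) - mu0.predict(X)
```

This design difference is what the heterogeneity argument above turns on: with a constant (low-heterogeneity) effect, the S-learner's shrinkage is benign, while the T-learner pays a variance price for fitting two surfaces.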
Summary: The paper introduces a new metric for CATE model evaluation and selection. Specifically, they derive a distributionally robust metric (DRM) which is nuisance-free and robust against selection bias. They show and explain its robust performance in extensive benchmarking experiments against existing baselines. Strengths: - Robust and reliable CATE model evaluation is an important and often widely overlooked challenge in causal inference - The paper is well-structured and easy to follow - The authors provide proper theoretical derivation and extensive experimental validation for their method. Weaknesses: - The stated robustness against unobserved confounding sounds like a silver bullet. Here, the authors provide experiments, but should more clearly discuss in theory why and to which degree of unobserved confounding their metric is more robust compared to others. Especially since, for selecting the optimal ambiguity radius, they assume unconfoundedness. Hence, it is not clear if or why the metric is still robust, even though the experiments seem to support that. - The experimental results seem to lack transparency. The experimental setup for the baselines should be described in more detail, e.g. in the Appendix. It is not clear if the models used for the baseline metrics were properly tuned. In particular, there is high standard deviation in the Regret in Table 1, and a large discrepancy compared to the observed performance in ranking in Table 2. Here, the setup should be described in full detail and possible reasons for this discrepancy should be discussed more clearly. Technical Quality: 3 Clarity: 3 Questions for Authors: - For selecting the ambiguity radius in Proposition 3.6, unconfoundedness is assumed. So why should the metric still be robust against unobserved confounding? Was the ambiguity radius in setting C selected differently? - Why is the ranking performance clearly worse than the average regret? 
The high standard deviation in the regret could imply that single runs with bad performance skew the evaluation, which could be due to insufficient tuning of the baseline models used for the metrics. Clarification here would be appreciated. - What exactly do the authors mean in the limitations when they state “considering the ambiguity set constructed with other divergence such as Wasserstein may contain more diverse distributions”? What does more diverse mean here and what are possible shortcomings of KL divergence in this setting? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are mentioned sufficiently. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer e7g2, We greatly appreciate your thoughtful feedback and insightful comments. Thank you for your recognition of our theoretical and experimental analysis! We will address each of your comments below. **Q1.** Should more clearly discuss why and to which degree of unobserved confounding their metric is more robust compared to others. **R1.** Regarding "why": Please kindly refer to GR1 of the top-page rebuttal. Regarding "to what degree": the level of robustness depends on the choice of the ambiguity radius $\epsilon$. Theoretically, a larger $\epsilon$ should guarantee that the DRM-selected estimator is more robust to hidden confounding, as it allows for a broader range of possible counterfactual distributions in the ambiguity set. However, setting $\epsilon$ too large can result in overly conservative estimator selection (similar to the well-known accuracy-robustness tradeoff). Therefore, as lines 216-220 explain, determining $\epsilon$ involves a careful balance between ensuring the counterfactual distribution is contained in the ambiguity set (i.e., robustness) and maintaining a tight upper bound (i.e., accuracy). **Q2.** In Proposition 3.6, unconfoundedness is assumed. Why should the metric still be robust against unobserved confounding? Was the ambiguity radius in setting C selected differently? **R2.** As discussed above, the DRM-selected estimators should be robust against unobserved confounders provided that $\epsilon$ is set appropriately. However, determining the proper value of $\epsilon$ remains an open challenge in the distributionally robust optimization literature [29, 46, 39, 41, 63]. In our work, Proposition 3.6 can guide us in setting $\epsilon$ when unconfoundedness holds. When unobserved confounders are present, we can set $\epsilon$ to a larger value than the one guided by Proposition 3.6. 
The reason is:

$D_{KL}(P_C||P_T)$

$=\int_{\mathcal{X}}\int_{\mathcal{Y}^0}\int_{\mathcal{Y}^1}p(y^0,y^1|x,T=0)p(x|T=0) \log \frac{p(y^0,y^1|x,T=0)p(x|T=0)}{p(y^0,y^1|x,T=1)p(x|T=1)}dy^1dy^0dx$

$=D_{KL}(P^C_X || P^T_X)+\int_{\mathcal{X}}\int_{\mathcal{Y}^0}\int_{\mathcal{Y}^1}p(y^0,y^1|x,T=0)p(x|T=0) \log \frac{p(y^0,y^1|x,T=0)}{p(y^0,y^1|x,T=1)}dy^1dy^0dx$

$>D_{KL}(P^C_X || P^T_X)$

In setting C, yes, the ambiguity radius is set differently. We set $\epsilon_1=D_{KL}(P^C_X || P^T_X)+5$ and $\epsilon_0=D_{KL}(P^T_X || P^C_X)+5$. This ensures that the ambiguity set is sufficiently large to contain the possible uncertain counterfactual distributions. Based on your insightful question, we will further emphasize the ambiguity radius for unmeasured confounding in the revised paper. **Q3.** The high variance for baselines in regret and worse ranking performance of DRM. **R3.** The high variance phenomenon is discussed in GR2 of the top-page rebuttal and the discussion will be added in the revised paper. Regarding the ranking performance of DRM, we have provided some explanation in lines 310-317 of the original paper. Now we provide some additional discussion. * First, high rank correlation does not necessarily imply low regret. E.g., in Table 3 of the rebuttal pdf, we have 24 CATE estimators with true performance rank in ascending order: [1, 2, 3, ..., 24]. A plug-X selector may give a surrogate PEHE value, 11.59, then it selects the 6th estimator and gives a rank order [6, 7, 8, 5, 4, 3, 9, 10, 11, 2, 1, 12, 13, 14, ..., 24]. The Spearman correlation between the true rank and Plug-X rank is 0.89. However, the regret of the selected 6th-ranked estimator is 11.63, much larger than 2.00 produced by the top-ranked estimator. * Second, the DRM approach is designed to select estimators based on their distributionally robust (worst-case) performance. Consider that a coach selects athletes to attend the Olympics based on their scores over 100 games. 
The first athlete consistently scores between 90-95, with an average of 93, while the second athlete's scores fluctuate between 85-96, with an average of 94. The coach may prefer to select the first athlete, as the worst-case performance is much higher, even though the second athlete has a higher average score (94) and best score (96). **Q4.** Should discuss baseline setups: whether models for baseline metrics were properly tuned. **R4.** The baseline setups and hyperparameter tuning method in our paper are in line with [16]. Details are provided in GR3 of the top-page rebuttal. We will add these in the revised paper. **Q5.** What does “more diverse” mean and what are possible shortcomings of KL divergence? **R5.** By "more diverse", we mean the ability to consider a broader range of distribution types and supports within the ambiguity set. The KL-divergence defined in Eqn (6) is a non-symmetric measure and requires P and Q to have the same distribution type and support. The limitations of the KL divergence: * Non-symmetry: The lack of symmetry means the KL divergence does not satisfy the triangle inequality, which can limit its applicability in certain areas, such as domain adaptation and transfer learning. Additionally, it makes the distributional distance hard to interpret. * Requirement for the same distribution type and support: This restriction can reduce the flexibility of the KL divergence-based ambiguity set. E.g., it prevents comparisons between continuous and discrete distributions. Therefore, we recognize that using a Wasserstein-based ambiguity set would be a promising research direction. It can be used to compare distributions of different types and supports, capturing a wider range of possible counterfactual distributions. However, as mentioned in the paper, the dual problem of the Wasserstein version involves an intractable "sup" problem, which presents additional challenges that require further investigation. 
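The non-symmetry discussed above is easy to check numerically. Below is a hedged illustration using the closed-form KL divergence between univariate Gaussians; the two distributions are our own toy example, not the paper's $P^C_X$ and $P^T_X$.

```python
import numpy as np

def kl_gauss(mu_p, s_p, mu_q, s_q):
    """Closed-form D_KL(N(mu_p, s_p^2) || N(mu_q, s_q^2))."""
    return np.log(s_q / s_p) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2) - 0.5

forward = kl_gauss(0.0, 1.0, 1.0, 2.0)  # D_KL(P || Q) ~ 0.443
reverse = kl_gauss(1.0, 2.0, 0.0, 1.0)  # D_KL(Q || P) ~ 1.307
# The two directions differ markedly, which is why the rebuttal distinguishes
# the two radii eps_1* = D_KL(P_X^C || P_X^T) and eps_0* = D_KL(P_X^T || P_X^C).
```

The gap between the forward and reverse values grows with the variance mismatch, so the direction of the divergence is not a cosmetic choice when setting the two ambiguity radii.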
Thank you again for your time and effort in providing these constructive insights that help improve the clarity and rigor of our work! We welcome any additional feedback you may have. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal, it addressed most of my concerns. In general, I think that the writing of the paper could be improved at some points, e.g., fewer subjective statements and more clarity and transparency regarding the implementation. Therefore, please include the additional explanations regarding the empirical evaluation, especially the exact tuning settings and the considered ambiguity radius, in the paper or appendix. Further, I would strongly encourage the authors to include an additional sensitivity study over varying values of the ambiguity radius under unobserved confounding and recommendations on how to select the ambiguity radius in practice. However, overall I think this is still an interesting and valuable paper. Hence, I will increase my score by 1. --- Rebuttal 2: Title: Response to Reviewer e7g2 Comment: Dear Reviewer e7g2, Thank you for your thoughtful reply. We are delighted our rebuttal has addressed most of your concerns, and we greatly appreciate your suggestions and will implement them as follows: Regarding the experimental aspects, we will include more details, such as settings, model training procedures, hyperparameters, and the choice of the ambiguity radius under the hidden confounder setting. We believe these additional materials will enhance the clarity and transparency of our work. As for your suggestion concerning the ambiguity radius, we will incorporate a sensitivity analysis for the proposed DRM under the hidden confounders setting. We believe this analysis will provide valuable guidance for practitioners and researchers in selecting an appropriate ambiguity radius when unobserved confounders exist. We will carefully revise our paper as promised in the rebuttal. 
Once again, thank you for taking the time to review our paper, and we appreciate your recognition of our work! Best regards, Authors 4584
Summary: The paper proposes a new model selection method for choosing an estimator of the conditional average treatment effect (CATE), namely, a distributionally robust metric (DRM). The proposed method, DRM, is nuisance-free: it does not require additional estimation of the nuisance functions, unlike the majority of existing approaches. The DRM splits the precision in estimation of heterogeneous effects (PEHE) into two parts: the first term can be estimated nuisance-free and the second term is upper-bounded using the KL-divergence ambiguity set. Thus, the proposed method utilizes only the observational data. Furthermore, the authors provided an estimation procedure for the DRM and finite-sample convergence rates. Finally, multiple synthetic experiments were provided to demonstrate the effectiveness of the method in comparison to other existing approaches. Strengths: The paper studies an important and heavily discussed problem in causal inference, CATE model selection. I find the idea of using an upper bound on the PEHE based on the distributional ambiguity set original and novel. The paper is well-structured and easy to follow. Weaknesses: Two major weaknesses of the paper are, in my opinion, the following: 1. A lack of understanding of the asymptotic behaviour of the DRM. Specifically, when the data size grows, the upper bound on term (b) of Eq. (5) will stay the same, as the epsilon-ball around $P_C$ or $P_T$ stays constant. Hence, there will always be a gap between the ground-truth PEHE and the DRM from Eq. (14). This renders the DRM inconsistent. I encourage the authors to provide a discussion of this issue and some experiments with varying data sizes, where they compare the DRM with other (consistent) nuisance-free selectors, e.g., nearest-neighbours matching. 2. 
It was not immediately clear to me why the optimal $\epsilon^*$ is a forward KL-divergence and not, e.g., a reverse KL-divergence between the treated and untreated covariate distributions. Note that the KL-divergence is not symmetric. Does it make sense to use symmetric distributional distances?
There are also several minor suggestions for improvement: 1. The hidden confounding setting needs to be considered more carefully. This setting requires different approaches, e.g., partial identification or sensitivity models, and, thus, fitting/selecting point CATE models is inadequate in this case. 2. Some baselines were not mentioned in the related work or the experiments, e.g., U-learner [1]. I am open to raising my score if the authors address the above-mentioned issues during the rebuttal. References: - [1] Fisher, Aaron. "The Connection Between R-Learning and Inverse-Variance Weighting for Estimation of Heterogeneous Treatment Effects." arXiv preprint arXiv:2307.09700 (2023). Technical Quality: 2 Clarity: 3 Questions for Authors: - What ‘uncertainty in PEHE’ is meant in lines 164-165? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The main limitation of the method, i.e., the lack of consistency, was not discussed by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer T6Zv, We are grateful for your thorough summary and greatly appreciate your recognition of the motivation and novelty underlying our proposed method. Thank you for your time and effort in providing constructive and helpful feedback. We will respond to each of your comments below. **Q1.** DRM is not consistent with the ground-truth PEHE. Better discuss this issue and do some experiments with varying data sizes, comparing with other (consistent) nuisance-free selectors, e.g., nearest-neighbours matching. **R1:** Thank you for this suggestion. First, we would like to emphasize that we did not claim the proposed DRM metric converges to the ground-truth PEHE. Our theoretical results only state that $\hat{\mathcal{V}}^t$ should converge to $\mathcal{V}^t$ at a rate of $1/\sqrt{n}$. Consequently, combining this with the law of large numbers for the other terms, $\mathcal{R}^{DRM}(\hat{\tau})$ in Eqn. (14) should converge to $\mathcal{V}_{PEHE}(\hat{\tau})$ in Eqn. (9) at a rate of $1/\sqrt{n}$. Additionally, we believe it is not a weakness that DRM is not consistent with PEHE, for the following two reasons: * By definition, DRM measures the robustness of PEHE w.r.t. $\hat{\tau}$. We can never observe the ground-truth PEHE due to the uncertainty incurred by selection bias and hidden confounders (as discussed in GR_1 of the top-page rebuttal). Therefore, instead of pursuing the unavailable PEHE, we aim to quantify its uncertainty using a distributionally robust value of PEHE: if some $\hat{\tau}$ has a small $\mathcal{R}^{DRM}(\hat{\tau})$, then $\hat{\tau}$ is robust to the PEHE uncertainty caused by the uncertainty in the counterfactual distribution. * To the best of our knowledge, no existing nuisance-free estimator/selector (e.g., the mentioned matching) is assured to be a consistent estimator of the ground-truth CATE/PEHE, but some nuisance-free estimators are consistent estimators of the ATE (not CATE) given additional assumptions. 
E.g., (Abadie & Imbens, 2006) show the matching estimator converges to the ATE at a rate slower than $1/\sqrt{n}$, given the assumption that the density of covariates is bounded. Per your suggestion, we have compared DRM and nearest-neighbor matching with varying sample sizes; Table 2 of the attached rebuttal pdf reports the details and results. The findings show that DRM and matching exhibit a decreasing trend in regret with increasing sample size. However, when the sample size exceeds 10,000, the average regret for matching fluctuates between 6.5 and 7, while the average regret for DRM becomes steady. These results suggest that neither DRM nor matching converges to the ground-truth PEHE, because a consistent metric should have a regret that tends to zero. Nevertheless, DRM demonstrates more stable and effective performance than matching as the sample size varies. Ref: (Abadie & Imbens, 2006) Large sample properties of matching estimators for average treatment effects, Econometrica, 2006 **Q2.** Why is $\epsilon^*$ determined as $D_{KL}(P_C||P_T)$ instead of $D_{KL}(P_T||P_C)$? Does it make sense to use symmetric distributional distances? **R2.** Indeed, we set $\epsilon_1^*=D_{KL}(P^C_X||P^T_X)$ and $\epsilon_0^*=D_{KL}(P^T_X||P^C_X)$ in the original paper. In lines 222-237, we intended to explain the choice of $\epsilon^*$ with $\epsilon_1^*=D_{KL}(P^C_X||P^T_X)$ as an example, but forgot to explain $\epsilon_0^*$, which led to a misleading presentation. We are grateful for your pointing this out, and we will revise the incomplete expression to make it clearer. Regarding your second question, we believe using symmetric distances would be more meaningful. The non-symmetric nature of the KL divergence means it fails to satisfy the triangle inequality and can make the divergence values difficult to interpret and compare between distributions. This restricts its practical use in fields like domain adaptation and transfer learning. 
Exploring the use of symmetric divergences, such as the Wasserstein distance, would be very useful in DRM (as discussed in line 343). However, solving the distributionally robust optimization problem with a Wasserstein-based ambiguity set can be computationally challenging due to the intractable dual problem involving an infinite "sup" objective. Therefore, considering alternative symmetric divergences that may provide a more tractable optimization framework for DRM would be a promising direction. Minor suggestions: **S1.** More baseline CATE estimators aiming at the hidden confounding setting should be considered. **R1.** At the current stage, one key focus is developing a CATE selection method that can robustly handle the distributional shifts. Therefore, the paper is mainly around “selection” rather than “estimation”. However, we strongly agree that considering more CATE estimators aiming at hidden confounder issues would provide more insights for testing different selectors under the hidden confounder setting, which will be further investigated. We will mention this interesting suggestion and leave it for future research. **S2.** U-learner can be included as a baseline CATE estimator. **R2.** We have incorporated the U-learner [1] into our experimental evaluation, and the updated results are reported in Table 1 of the top-page rebuttal pdf. We will add a discussion of the U-learner and incorporate the updated experimental results in the final version of the paper. **Q.** What 'uncertainty in PEHE' is meant in lines 164-165? **R.** We hope GR_1 of the top-page rebuttal has addressed this question well, and we will add these details in the Appendix to provide clearer explanations for the proposed method. Finally, thank you again for providing a thorough review and constructive suggestions! Your comments are valuable in helping us improve our work, and we will carefully incorporate your suggestions when finalizing the paper. 
We welcome any further questions or comments you may have. --- Rebuttal 2: Comment: Thank you for clarifying most of the concerns. Still, it seems to me that the $k$-nearest neighbours estimator would consistently estimate both of the potential outcome surfaces, $\mathbb{E}(Y \mid X =x, T = t)$, if $k$ is chosen in a data-driven way. I wonder how the authors chose the hyperparameter $k$ for the new synthetic experiments. --- Rebuttal 3: Title: Response to Reviewer T6Zv Comment: Dear Reviewer T6Zv, Thank you for your reply. We are pleased to hear that we have successfully addressed most of your concerns! In both our original experiments (e.g., Table 1 in the original paper) and the new experiments, we set $k=1$ for the k-nearest neighbours estimator, which directly follows the code and strategy of the k-nearest neighbours selection metric in [16] to maintain consistency with the literature. Additionally, if possible, we would appreciate any additional information or references supporting the claim that "the k-nearest neighbours estimator would consistently estimate both of the potential outcome surfaces if k is chosen in a data-driven way." To the best of our knowledge, we have not found any evidence supporting this conclusion. We sincerely hope to address this and improve the current comparison. Your suggestions would be valuable in improving the kNN baseline and further strengthening the experimental quality of our paper. Thank you once again for your time in reviewing our paper and responding to our rebuttal. We really look forward to your further reply! Best regards, Authors 4584 --- Rebuttal Comment 3.1: Comment: Thank you for the additional details. Regarding the consistency of the knn estimator: What about the following work? - Devroye, Luc, et al. "On the strong universal consistency of nearest neighbor regression function estimates." The Annals of Statistics 22.3 (1994): 1371-1385.
--- Rebuttal 4: Title: Response to Reviewer T6Zv Comment: Dear Reviewer T6Zv, Thank you for sharing this reference and giving us a chance to discuss it with you. We strongly agree with you that the kNN regressor can be a consistent estimator of target functions in some machine learning contexts. **However, such a consistency property may not necessarily hold when fitting potential outcome surfaces, due to the distribution shift between factual and counterfactual distributions.** The paper you mentioned and Chapters 6 and 11 of [1] prove the consistency of the kNN regression estimator under certain conditions, such as $k/n \rightarrow 0$. However, note that **the consistency holds under the assumption that data are independent and identically distributed (i.i.d.)**, as discussed in the Introduction of [1]. Though the i.i.d. assumption is common in most machine learning scenarios, it does not necessarily hold when estimating the potential outcome surfaces $\mu_t$ for $t \in \{0, 1\}$. Please allow us to explain in detail below. In the Rubin Causal Model framework, observational (factual) data are i.i.d. tuples $(x_i,t_i,y_i)_{i=1}^{n}$, following the factual distribution $P^{F}:=P(X,T,Y)$. For each pair $(x_i,t_i)$, there also exists an unobserved counterfactual outcome $y_i^{CF}$. The unobserved (counterfactual) data are i.i.d. tuples $(x_i,t_i,y_i^{CF})_{i=1}^{n}$, following the counterfactual distribution $P^{CF}:=P(X,T,Y^{CF})$. As explained in Section 3.1 and GR1 of our top-page rebuttal, **$P^{F}$ is not identical to $P^{CF}$ in general**. Consequently, a model trained on $P^{F}$ may not predict well on $P^{CF}$. Let us consider estimating $\mu_0(x)$ as a concrete example.
To infer the potential outcome $Y^0$ for treated ($T=1$) samples, the kNN regressor first approximates $\hat{\mu}_{0}(X)$ using control samples $(X_i,Y^0_i)_{i=1}^{n_{control}}$, then uses $\hat{\mu}_{0}(X)$ to predict $Y^0$ for treated samples, with the predicted samples being $(X_i, \hat{Y}^0_i)_{i=1}^{n_{treat}}$. However, $\hat{\mu}_{0}(X)$ does not necessarily generalize well to $Y^0$ predictions for treated samples, because the source training data (control samples) do not have the same distribution as the target prediction data (treated samples); that is, there exists a distribution shift problem, $P(X,Y^0|T=0) \neq P(X,Y^0|T=1)$. Such a distribution shift (discussed in Section 3.1 and GR1 of our top-page rebuttal) prevents the kNN-based learner $\hat{\mu}_{0}(X)$ from being a consistent estimator of $\mu_0(x)$. Therefore, the above analysis demonstrates that, for the kNN-based learner $\hat{\mu}_{0}(X)$ to be a consistent estimator of $\mu_{0}(X)$, one of the following two conditions must be met: * The factual distribution for control samples, $P(X_i,Y^0_i|T_i=0)$, should be identical to the counterfactual distribution for treated samples, $P(X_i,Y^0_i|T_i=1)$. This necessitates the absence of distribution shift between factual and counterfactual distributions, which in turn requires two key conditions: unconfoundedness and access to infinite data from Randomized Controlled Trials (RCTs). * The training samples should be $(X_i,Y^0_i)$ (i.e., all samples) instead of $(X_i,Y^0_i)|T_i=0$ (only the control samples). This requires counterfactual knowledge, because the potential outcome $Y^0_i$ remains unknown for treated ($T_i=1$) data. We hope this clarifies our perspective on the consistency of kNN in causal inference contexts. Note that the above analysis and conditions for the consistency property are not exclusive to kNN but also extend to other machine learning methods employed in estimating potential outcomes or CATE.
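To make the covariate-shift argument concrete, consider the following toy sketch (illustrative only; the response surface $\mu_0(x)=x^2$ and the covariate ranges are hypothetical, not from our experiments). A 1-NN regressor fitted only on control samples interpolates perfectly on the control support but is badly biased on a shifted treated region:

```python
# Toy illustration: 1-NN fitted on control covariates fails under covariate
# shift P(X|T=0) != P(X|T=1). All data below is synthetic and hypothetical.

def one_nn_predict(train_x, train_y, query_x):
    """Predict each query with its single nearest training neighbor (k = 1)."""
    preds = []
    for q in query_x:
        i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - q))
        preds.append(train_y[i])
    return preds

def mse(y_true, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def mu0(x):            # hypothetical control response surface
    return x ** 2

control_x = [i / 25.0 - 1.0 for i in range(51)]  # X | T=0 covers [-1, 1]
treated_x = [1.5 + i / 25.0 for i in range(51)]  # X | T=1 covers [1.5, 3.5]
control_y = [mu0(x) for x in control_x]

in_sample_err = mse(control_y, one_nn_predict(control_x, control_y, control_x))
shifted_err = mse([mu0(x) for x in treated_x],
                  one_nn_predict(control_x, control_y, treated_x))
# in_sample_err is exactly 0 (each control point is its own nearest neighbor),
# while shifted_err is large: every treated query snaps to the boundary point
# x = 1 and predicts 1.0, far below the true values up to mu0(3.5).
```

The same mechanism applies with data-driven choices of $k$: consistency results for kNN regression concern queries drawn from the training distribution, not from a shifted one.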
We will further incorporate the above discussions in the Appendix, as they further underscore the importance and significance of our method in CATE estimator selection. The DRM method offers distinct advantages compared with previous selection metrics: it does not require counterfactual data and is capable of selecting estimators that are robust to the distribution shift between factual and counterfactual samples. Thank you again for your active engagement and effort in helping us improve the paper! We welcome any further thoughts or questions you may have. Reference: [1] Devroye, Luc, et al. A Probabilistic Theory of Pattern Recognition. Stochastic Modelling and Applied Probability. Best regards, Authors 4584 --- Rebuttal Comment 4.1: Comment: Thank you for the quick response! Yet, I still think that the k-NN is a consistent estimator (an instantiation of the T-learner). The reason is that the distribution shift (or selection bias) only matters for CATE estimation in the low-sample regime [1]. In large-sample regimes, any possible universal function approximator can serve as a consistent estimator of $\mu_0(x)$ and $\mu_1(x)$. Thus, the distribution shift does not hinder the consistency of estimation. Nevertheless, I consider this paper very interesting and thought-provoking. Therefore, I will raise my score. P.S. Here is something that I realized after submitting the review for this paper (therefore the following does not influence my valuation of the current work). There was a recent concurrent paper from ICML 2024, which tackles the same problem of CATE model selection/validation [2]. There, the authors used a _total variation distance_ to bound the counterfactual term of the PEHE risk. Also, they compared their work with the _integral probability metric_ bounds from [3]. In the final version of the manuscript, I encourage the authors of this work to incorporate a discussion of the above-mentioned works [2, 3] (with alternatives to KL-divergence).
References: - [1] Alaa, Ahmed, and Mihaela van der Schaar. "Limits of estimating heterogeneous treatment effects: Guidelines for practical algorithm design." International Conference on Machine Learning. PMLR, 2018. - [2] Csillag, Daniel, Claudio José Struchiner, and Guilherme Tegoni Goedert. "Generalization Bounds for Causal Regression: Insights, Guarantees and Sensitivity Analysis." arXiv preprint arXiv:2405.09516 (2024). - [3] Shalit, Uri, Fredrik D. Johansson, and David Sontag. "Estimating individual treatment effect: generalization bounds and algorithms." International Conference on Machine Learning. PMLR, 2017. --- Rebuttal 5: Title: Response to Reviewer T6Zv Comment: Dear Reviewer T6Zv, Thank you for your thorough and insightful suggestions! We find that the paper [1] you mentioned is very useful for both CATE estimation and model selection. Their findings, which reveal that the minimax rate of PEHE depends on smoothness and sparsity rather than selection bias, align well with the key investigation in [16] regarding how CATE complexity affects model selection metrics. This has provided us with a new insight: how can we select estimators that achieve the optimal PEHE minimax rates? We believe this is a very interesting and important avenue, and it may be closely related to the smoothness and sparsity of CATE and the response surfaces. We are also grateful for your second suggestion. The Integral Probability Metric (IPM), e.g., the Wasserstein distance used in [3], was briefly discussed in line 343 of our original paper. Together with your original question about "symmetric distribution discrepancy" and our rebuttal R2, we will incorporate more detailed discussion of the possible usage of the Wasserstein distance in the proposed DRM method. Regarding the paper [2] you mentioned, we find their proposed upper bound for the potential outcome error very interesting. They claim that the bound will be tight given a proper selection of the hyperparameter λ.
Such a tightness property suggests potential for using f-divergences in the DRM method. In light of this, we intend to conduct a comparative analysis of the advantages and disadvantages of using IPMs (such as the Wasserstein distance or MMD) and f-divergences versus the KL-divergence. Finally, we want to express our gratitude again for your quick replies and insightful discussions. The suggested references are valuable and insightful for our future research. Thank you for your time, expertise, and comments in helping us improve the study! Best regards, Authors of 4584
Rebuttal 1: Rebuttal: Dear Reviewers, We are grateful for your comments and suggestions, which are very helpful in improving our paper. Here are some general responses (GR) that might be useful for individual questions. In the individual rebuttals, we may remind you to refer to this general response. Thank you for your time and effort during the review process! **GR1. DRM is able to select CATE estimators that are robust to the uncertainty in PEHE caused by selection bias and unobserved confounders.** In Sections 3.1 and 4.1, we have presented theoretical explanations for why the DRM metric can measure a CATE estimator's robustness against selection bias and unobserved confounding. To address any potential questions from reviewers on this topic, we provide a more specific explanation below, and these details will be further added to the Appendix. In causal inference, all CATE estimators are constructed using observational factual data. However, we can never know how reliable a CATE estimator is, because the oracle PEHE in Eqn. (1) is unavailable. As shown in Eqn. (5), the PEHE decomposes into two $\hat{\tau}$-dependent terms, $\mathbb{E}[\hat{\tau}(X)Y^t|T=t]$ and $\mathbb{E}[\hat{\tau}(X)Y^t|T=1-t]$. Unfortunately, $\mathbb{E}[\hat{\tau}(X)Y^t|T=1-t]$ is not estimable in practice. This is because we can only observe the factual distribution $P^F = P(X, Y^t | T=t)$, but not the counterfactual distribution $P^{CF} = P(X, Y^t | T=1-t)$. The unobserved counterfactual distribution can be regarded as an uncertain distribution varying around the observed, certain factual distribution $P^{F}$. Only from a hypothetical "God's perspective", with direct access to $P^{CF}$, would the counterfactual distribution be certain. This uncertainty in $P^{CF}$ results in uncertainty in the PEHE.
In the following, we analyze the source of this uncertainty by examining the relationship between the uncertain distribution $P^{CF}$ and the certain distribution $P^F$: $P(X,Y^t|T=1-t)=P(X,Y^t|T=t) \frac{P(Y^t|T=1-t, X)}{P(Y^t|T=t, X)} \frac{P(X|T=1-t)}{P(X|T=t)}$. Therefore, the unobservable distribution $P(X, Y^t | T=1-t)$ can be expressed as the observable distribution $P(X, Y^t | T=t)$ multiplied by the term $\frac{P(Y^t | T=1-t, X)}{P(Y^t | T=t, X)} \frac{P(X | T=1-t)}{P(X | T=t)}$. In other words, the ratio $\frac{p(y^t | T=1-t, x)}{p(y^t | T=t, x)} \frac{p(x | T=1-t)}{p(x | T=t)}$ controls the discrepancy between the factual distribution $P^F$ and the counterfactual distribution $P^{CF}$. Note that if there are no unmeasured confounders, then we have $p(y^t | T=1-t, x) = p(y^t | T=t, x)$; and if there is no selection bias (covariate shift), then we have $p(x | T=1-t) = p(x | T=t)$. Now we understand that the root cause of the discrepancy between $P^F$ and $P^{CF}$ (or between $\mathbb{E}[\tau(X)Y^t | T=1-t]$ and $\mathbb{E}[\tau(X)Y^t | T=t]$) lies in the presence of unobserved confounders and selection bias. In Section 4.1, the uncertainty caused by potential unobserved confounders and selection bias in PEHE can be further measured as the distributionally robust values $\mathcal{V}^t$. Then the PEHE w.r.t. the CATE estimator $\hat{\tau}$ will be at most $\mathcal{V}^t$, which reflects the distributional robustness of $\hat{\tau}$. **GR2. The high variance regret in some baseline selectors.** This phenomenon is primarily due to the wide range of PEHE performances produced by a total of 24 CATE estimators. For example, in setting A ($\rho=0.1$), the average PEHE ranged from 2.00 for the best estimator to 141.01 for the worst estimator (we will present the PEHE performances of all 24 CATE estimators in the Appendix). 
This large gap between the good and bad estimators leads to high variance for some baseline selectors - if a selector consistently selected either good or bad estimators, the variance would not be as pronounced. Therefore, it is important to further analyze the performance of different selectors in each experiment. To achieve this, we sorted all 24 estimators in ascending order of PEHE and determined the rank of the estimator selected by each selector in each of the 100 experiments. In lines 318-326 and Figure 1 of Appendix C.1, we find that many baseline methods tend to select CATE estimators ranked across different percentile ranges, resulting in high variance across the 100 selections. In contrast, the DRM selector is able to consistently select higher-ranked (i.e., better performing in PEHE) estimators, while mitigating the risk of selecting lower-ranked estimators in most cases. This not only confirms the robustness strength of the DRM but also helps explain its lower variance observed in Table 1. **GR3. Hyperparameter tuning for nuisance training.** We have tuned hyperparameters for each model whenever there is a model training process. Specifically, we used RidgeCV and LogisticRegressionCV for the linear regression and logistic regression models, respectively. We also employed GridSearchCV for the SVM and Random Forest models. The number of cross-validation folds was set to 3. The specific hyperparameter ranges we searched are as follows: Ridge regression: $\alpha \in$ \{ 0.01, 0.1, 1.0, 10.0, 100.0 \}. Logistic regression: $C \in$ \{0.01, 0.1, 1, 10\}. SVM: Kernel $\in$ \{Sigmoid, RBF\}, $C \in$ \{1, 5\}. RF and XGBoost: Max depth $\in$ \{1, 3, 6\}, n\_estimators $\in$ \{20, 100\}. Pdf: /pdf/a025ce535946bac378327377d50ea7462f880cc6.pdf
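For concreteness, the tuning setup described above could be sketched in scikit-learn roughly as follows. The search grids and the 3-fold CV are taken from our description; the synthetic data and estimator choices are purely illustrative, not our actual nuisance models:

```python
# Sketch of the nuisance-model tuning described above (grids as listed;
# toy data generated here only for illustration).
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegressionCV, RidgeCV
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

Xr, yr = make_regression(n_samples=90, n_features=5, noise=1.0, random_state=0)
Xc, yc = make_classification(n_samples=90, n_features=5, random_state=0)

# Ridge regression: alpha in {0.01, 0.1, 1.0, 10.0, 100.0}
ridge = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0, 100.0], cv=3).fit(Xr, yr)

# Logistic regression: C in {0.01, 0.1, 1, 10}
logit = LogisticRegressionCV(Cs=[0.01, 0.1, 1, 10], cv=3, max_iter=1000).fit(Xc, yc)

# SVM: kernel in {sigmoid, rbf}, C in {1, 5}
svm = GridSearchCV(SVC(), {"kernel": ["sigmoid", "rbf"], "C": [1, 5]}, cv=3).fit(Xc, yc)

# Random Forest: max depth in {1, 3, 6}, n_estimators in {20, 100}
rf = GridSearchCV(RandomForestClassifier(random_state=0),
                  {"max_depth": [1, 3, 6], "n_estimators": [20, 100]}, cv=3).fit(Xc, yc)
```

After fitting, `ridge.alpha_`, `logit.C_`, and the `best_params_` attributes of the grid searches expose the selected hyperparameters.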
NeurIPS_2024_submissions_huggingface
2024
Off-Dynamics Reinforcement Learning via Domain Adaptation and Reward Augmented Imitation
Accept (poster)
Summary: This work introduces an improvement over the previous method (DARC) with theoretical and experimental contributions for solving the off-dynamics transfer learning problem. Strengths: - Using IL on states is a sensible idea to add - The theoretical contribution is quite strong, as the authors were able to replace a very limiting assumption with a possibly more relaxed (and simpler to analyze) assumption - There indeed appears to be a performance boost in the environments / transfer schemes tested. Weaknesses: - It's not clear how many samples are used as rollout in the target domain. If it is a non-trivial amount, then a baseline should be to run RL in the target domain with this sample budget. - Similarly, DARC should also be allowed to have these additional samples in the target domain for proper comparison. - Finally, the presentation quality poses a bit of a weakness. It is a bit difficult to read/analyze at times (see next section for suggestions to improve) - Figure 3 does not appear to be a significant performance boost, can you comment on the purported gain? Technical Quality: 3 Clarity: 2 Questions for Authors: Questions about the paper: In the appendix you show that lower discriminator update steps are better, stopping at 50. Can you explain why lower might be better; and why not decrease even further? - You state a few times that you do not collect reward data from the target domain. Isn't this trivial since the reward function is identical to that of the source? Various (some minor) presentation issues and references: - suggest to abbreviate as "GAIL-O" or similar. Readers will likely know GAIL and thus understand the connection more quickly - In main contributions, the first bullet point is quite unclear; can you re-word/shorten? Similarly, I believe you can remove all but the last sentence in bullet point 2. 
- "works" -> "work" throughout - Use of discounting: You use SAC with a discount factor, but $\gamma$ is missing from the main text and appendix in many equations. Can you include / comment on this? It appears the theory is only relevant for $\gamma=1$ which is okay but should be stated upfront. - Relatedly, it may be beneficial to cite in Sect. 3.1 e.g. https://arxiv.org/abs/1805.00909 ; https://arxiv.org/pdf/2005.01643 - Regarding the dynamics-shift problem, can you also comment on the relationship to e.g. https://proceedings.neurips.cc/paper_files/paper/2023/file/67dd6a41bf9539cffc0fc0165e4d0616-Paper-Conference.pdf ; https://ojs.aaai.org/index.php/AAAI/article/view/25817 ; https://proceedings.mlr.press/v155/pertsch21a/pertsch21a.pdf - L80 "number of data" -> "amount of data" - L84: "to mimic **an** expert" - (nitpicking) L90: discussion of observations vs. states does not sound quite right; I believe the true distinction is in the POMDP framework - Eq. 3.1 remove the extra parentheses around product. - Eq. 3.3 Do you still need a min on the RHS? (I might be missing something, please explain if so!) - the value of $c$ may be nice to see explicitly so that we can see independence on the dynamics and policy. - L128: "Domian" -> "Domain" - L135 "while we do not have much access to the target domain." can you be more specific? - L150: "followed" -> "follows" - L172: "Baye's" -> "Bayes'" - Algorithm 1; maybe highlight which parts of the algorithm are distinct from DARC? - Theorem 4.1: I believe it would be helpful to either (a) give a more formal version of the theorem with all definitions (define H, D_omega, is $\gamma$ missing?) or (b) make it an "informal" theorem, with the full result in the appendix. - Which version of the environments are you using? - L304: "DARAIL's ??? on Different..." is there a missing word (Effect)? - Figures: Can you please include a shaded region for the DARC eval/training reward lines? - Figure 3: DARRIL- -> DARAIL? 
---- Overall, I believe the paper needs a significant clean-up of writing and additional experiments and discussion to merit a higher score. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: As stated above, two important limitations the authors can further comment on are (a) use of samples in the target domain and (b) gain in performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable time and effort in providing detailed feedback on our work. We hope our response will fully address all of your points. --- >How many samples are used as rollouts in the target domain? Our imitation learning step rolls out data from the target domain every 100 steps of the source domain rollouts, i.e., 1\% of the source domain rollouts. --- >Additional samples from the target domain for DARC. DARAIL does have more target domain rollouts. However, as mentioned in the general response, more rollouts of target domain data are not the reason why DARAIL works. More rollouts of the target domain will not improve DARC's performance, according to our analysis in the general response of why DARC fails in more general settings. As mentioned in the third bullet point of the general response, we compare DARC and DARAIL with the same amount of rollouts from the target domain in Tables 4 and 5 of the attached PDF, indicating that more target domain rollouts do not yield significant improvement in DARC's performance due to its inherent suboptimality. Notably, DARAIL consistently outperforms DARC even when subjected to comparable levels of target rollouts. --- >Explanation of Figure 3. Figure 3 in the paper is an ablation study showing that our method works well under different scales of off-dynamics shift. Specifically, we conducted experiments on Ant-v2, where the target environment is broken with probability 0.2, 0.5, and 0.8, respectively. The smaller the broken probability, the smaller the dynamics shift. --- >Smaller discriminator update steps are better. In GAIL or, more generally, GANs, the discriminator serves as the local reward function for policy training. Smaller discriminator update steps (i.e., a higher update frequency) probably lead to better discriminator training (better local reward estimation).
In our experiments, we performed a grid search over the parameters and noticed that further decreasing the update steps (increasing the update frequency) would not improve the performance; the discriminator is probably already well-trained, so we stopped at 50. We will provide more thorough grid-search results to justify this in the future. --- >Presentation comments. We thank the reviewer for the detailed comments that help us improve the readability of the paper. We will follow your suggestions in our revision. For several questions, we briefly answer as follows: 1) **Missing discount factor $\gamma$ in equations.** We will state clearly in the revision that we use $\gamma = 1$ in the analysis of the error bound. 2) **Discussion of the related work.** [1] is most related to our problem. However, they assume that the optimal policies in different domains have similar stationary state distributions; thus, they learn a policy in the source domain and regularize the state distribution across the two domains. This method is similar to DARC, which tries to learn a policy that behaves similarly in both domains. However, when the assumption is violated, the learned policy will suffer performance degradation when deployed to the target domain, similar to DARC. In contrast, our method does not require such an assumption, and we propose one more imitation learning step to transfer the policy and avoid performance degradation. [2] and [3] are designed to solve a slightly different problem that transfers prior knowledge to solve a new task, instead of just considering the dynamics shift between the tasks. We will add a discussion of these works to the related work in the revision. 3) **Discussion of observations vs. states in L90.** The observation here represents the state trajectories we obtain from the environment, which is a general terminology from imitation learning from observation.
We use imitation learning from observation to distinguish from generative adversarial imitation learning (GAIL) which uses $(s_t,a_t)$ as the expert demonstration. Meanwhile, imitation learning from observation purely learns from the observation (state observation) from the environment, i.e. $(s_t,s_{t+1})$. 4) **Right-hand side needs a min for Eq 3.3.** The right-hand side of Eq 3.3 is the mathematical expansion of the reverse KL divergence loss. We will add a min to that to make it clearer. 5) **Meaning of do not have much access to the target domain.** The limited access means that we cannot freely roll out infinite data from the target domain, and we can only roll out a small amount of data from the target domain. This is a standard setup shared by many off-dynamics problems, including DARC. Also, we assume we do not query the target domain reward during the training. As discussed in the first bullet point, we roll out the states and actions from the target domain data every 100 steps of the source domain rollouts. We will make it more clear in the revision. We appreciate the reviewer's feedback. We hope we have thoroughly addressed your concerns. We are willing to answer any additional questions. --- **References** [1] State Regularized Policy Optimization on Data with Dynamics Shift. [2] Accelerating Reinforcement Learning with Learned Skill Priors. [3] Utilizing Prior Solutions for Reward Shaping and Composition in Entropy-Regularized Reinforcement Learning. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for responding to my questions. I believe you have somewhat addressed my concerns and hopefully the camera-ready version can include some of this discussion. As stated before, the paper will benefit significantly from a careful rewriting, which I hope the authors will do. I've accordingly raised my score to 5. --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive feedback! 
We will revise our paper according to your constructive reviews. Best, Authors
Summary: This paper provides a solution to a specific kind of domain adaptation, where the source action space is a subset of the target action space. In this setup, the source policy might fail in the target domain, because the policy would output arbitrary values for the action dimensions that are only active in the target domain, but not in the source. To resolve this, the paper disentangles the effect of actions, and considers the source domain policy's state trajectories as the optimal trajectories to imitate in the target domain. By applying imitation learning from observation, they can learn a target policy to reproduce the state transitions from the source policy, but using the action space of the target domain. One trivial solution to this is that the target policy would learn the source policy's actions, just with a 0 output (or a value that leads to no effect) for the extra dimensions in the target domain. The paper outperforms the prior work DARC on its specialized environment setup, but is unlikely to be a general method for other off-dynamics setups. Strengths: - The paper provides a good overview of the prior work DARC, which helped me understand their method without having to go through DARC. - The experimental evaluation consists of several baselines, and DARAIL generally outperforms them, achieving performance similar to the source. This shows that the imitation learning from observation pipeline is working. - They provide code for reproducibility. Weaknesses: ## Evaluation on a very specific kind of dynamics change The major issue with this work is its limited evaluation, where the only kind of off-dynamics shift is freezing the 0-th dimension of the action to 0 in the source domain. Ideally, if the action support of the source is a subset of the target domain's, then the same policy should be perfectly fine in the target domain. Yet, this paper shows that the same policy underperforms in the target domain.
The only reason why this might be happening is that the DARC policy learns an arbitrary value for the 0-th dimension of the action torque, which is inconsequential in the source domain but becomes detrimental in the target domain. The paper never discusses this obvious reason for the failure, and the only thing a method needs to do to work in the target domain is to learn to always output 0 for the 0-th index. DARAIL is specifically constructed so that it learns to output 0 and works well for this artificial environment setup, because it **assumes the source trajectory to be the optimal trajectory in the target domain**. But this is a major assumption not made in the original DARC paper. In fact, I would argue (and I am open to changing my mind on this if proven otherwise) that DARAIL does not work in the off-dynamics settings considered in the DARC paper. For instance, consider an ant which is crippled in the target domain but not in the source domain (from the DARC paper). Considering the source ant's trajectories as optimal could be detrimental for the target crippled ant, because most of the state transitions $(s_t, s_{t+1})$ in the source domain are not even possible in the target domain. The crippled ant requires a different optimal policy to be learned, which cannot be obtained by just imitating the source state trajectories, which is what DARAIL would do. In order to show that DARAIL is actually a general solution to off-dynamics RL, the paper should include experiments on: - All the same setups provided in the DARC paper. - At least one more example of off-dynamics RL other than just freezing the 0-th index of the action. ## Paper Presentation The paper is often written in a confusing manner and there are several quality issues in presentation. The following need to be corrected in writing: - Figure 6 (b) for Ant and 6 (d) for Walker are the exact same training curves. Please fix this.
- The paper never discusses that the reason DARC "fails" is the 0-th action dimension producing an arbitrary output in the target domain, because it was never trained to be 0 during training. - L2-3: "performance degradation especially when the source domain is less comprehensive than the target domain": "less comprehensive" is vague. - In L23-25, the paper mentions that "in domains such as medical treatment [1] and autonomous driving [2], we cannot interact with the environment freely as the errors are too costly". However, in Algorithm 1 L7, it assumes access to the target environment for rolling out. - L41-42: "the source domain has less action support, which is normally a harder off-dynamics RL problem." Why is this a harder problem? If anything, the target domain having less action support would mean that the agent needs to adapt to more adverse off-dynamics conditions. - L64-65: "DARAIL relaxes the assumption made in previous works that the optimal policy will receive a similar reward in both domains." — What does this mean, given that L78 says both the source and target domains have the same reward function? In fact, DARAIL considers the source trajectories as optimal trajectories for the target domain. - What does it mean for the two domains to have the same reward function, when the dynamics are different? Does the paper assume the reward function is of the form R(s) only and not R(s, a, s')? Any assumptions made in the problem setup should be clearly written. - In Eq 3.1 and 3.2, the product term $\Pi_t$ should include the last terms inside the bracket. - L110-111: This cannot be right. The optimal policy should be proportional to the exponential of the cumulative return, not the reward at time step $t$. - Figure 1 (a): It is unclear how this figure suggests that there is any **performance degradation**.
Since the dynamics are different, it is possible that the optimal reward in the target domain is simply different from the optimal reward in the source domain — comparing these two does not make sense. It only works in this paper because of the artificial formulation where the source action space is a subset of the target action space. The correct comparison would be evaluating $\pi_{DARC}$ in the target domain against some $\pi_{optimal}$ for the target domain, both in the target domain. The training reward obtained in the source domain does not tell us anything for general off-dynamics setups. - L151-166: This paragraph is hard to follow. - Figure 7: Some baselines are not trained for the same number of steps. ## Method is overly complicated for the problem setup The paper makes a questionable assumption that when $\pi_{DARC}$ is optimal for the source domain, the trajectories it obtains are also optimal in the target domain and thus, we should achieve similar state trajectories in the target domain. While this is a very restrictive assumption, let us say, for the sake of argument, that this assumption makes sense for certain useful setups. Even then, why learn $\pi_{DARC}$ using any influence from the target trajectories at all? Why not just learn $\pi_{src}$ from the source reward only, disregarding the target trajectories and not applying DARC at all? Then we can transfer that policy using GAIfO to the target environment. What is the point of doing DARC at all, when the source policy is going to be used anyway for imitation learning from observations in the target domain? If DARC on the source is really important, it should be experimentally validated with convincing arguments. Technical Quality: 2 Clarity: 2 Questions for Authors: I have listed suggestions to improve in the weakness section above.
There are a lot of necessary changes to the paper presentation, but my major concern is still the limited applicability of the proposed problem setup, and that the method is not even best suited for that problem. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The paper mentions a couple of limitations in the appendix, not in the main paper. The paper does not discuss several limitations of its problem setup's applicability, its method's potential fallacies on the original DARC environments, or its limiting assumption that source trajectories are optimal for the target domain. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable time and detailed feedback. We hope our response will fully address all of your points. --- >Evaluation on a very specific kind of dynamics change. 1) In the general response, we provide additional experiment results under the DARC setting and more general off-dynamics settings to show that our method works well beyond this specific dynamics change. In the second bullet, we clarify why DARC fails in the broken-source and intact-target settings, theoretically and empirically. 2) **DARAIL is specifically constructed so that it learns to output 0.** This is not true. As mentioned in the general response, Table 1 in the provided PDF shows DARAIL works for general off-dynamics settings other than broken action. Indeed, if we knew the difference in dynamics, we could "simply" learn/set 0 for the 0th index. However, we do not assume we know this beforehand. We will add this discussion to our revision. 3) **DARAIL does not work in the off-dynamics settings considered in DARC.** It's not true that most of the state transitions $(s_t,s_{t+1})$ generated by DARC in the source domain are not possible in the target domain under the DARC settings. DARC learns a near-0 value for the 0th index, as mentioned in the second bullet of the general response. Table 3 in the provided PDF shows that DARAIL performs better than the DARC evaluation in the target domain in the DARC setting. --- >Method is overly complicated for the problem setup. First, **we do not assume that "when $\pi_{DARC}$ is optimal for the source domain, the trajectories it obtains are optimal in the target domain."** Instead, we notice that $\pi_{DARC}$ generates trajectories in the source domain that **resemble** the target optimal ones, as this is DARC's learning objective. So we transfer $\pi_{DARC}$ to the target domain instead of mimicking the source optimal policy.
We provide additional experiments that mimic the source optimal policy instead of the DARC trajectories in two settings in Tables 2 and 3 in the PDF. DARAIL significantly outperforms that baseline, probably because the optimal source domain trajectories might not perform well in the target domain. Second, we want to further clarify that we do not assume that the source trajectories generated by DARC are **exactly** the optimal trajectories in the target domain. Instead, we utilize DARC's learning objective to learn a policy that generates trajectories in the source domain that **resemble** the target optimal ones. Moreover, we account for DARC's training error in the first term of the error bound in our analysis. --- >Paper presentation questions 1) **Performance degradation happens when the source domain is less comprehensive.** We want to clarify that "less comprehensive" means that the support in the source is smaller than in the target, as in our setting. However, we agree that this is not the only case in which performance degradation happens, as suggested in Table 1. We will clarify this point in our revision. 2) **Why is it a harder problem when the source domain has less support?** In the domain adaptation problem, if the source domain/training data has larger support or covers more of the data distribution than the target, it is easier to generalize to the target domain because the training has seen all possible target actions/data. In off-dynamics RL settings, if the target domain has less support, we can restrict the action support in the source domain. For example, in the DARC setting, whose target environment has less support, the DARC policy learns to output a near-0 value for the 0th index, and the problem becomes optimizing the policy in the source domain such that the DARC policy outputs 0 for the 0th index. From this perspective, DARC restricts the action support in the source domain to align with the target domain's action support.
However, in the opposite case, where the source has less action support, such methods don't work well, and we need to account for the unseen/unsupported action in the target domain. 3) **DARAIL relaxes the assumption made in previous works that the optimal policy will receive a similar reward in both domains.** As we discussed in the general response, the assumption made by DARC guarantees that DARC's performance and generated trajectories in the target domain are similar to those in the source domain. This assumption may not hold in other off-dynamics settings. We relax this assumption by not restricting the performance of the target optimal policy in the source domain. Note that the DARC source domain trajectories resemble the optimal target ones, according to DARC's learning objective. Thus, we use imitation learning to transfer such a policy, and the experiments show the effectiveness of our method when their assumption fails. 4) **Two domains have the same reward function.** It means that $r_{\text{src}}(s_t,a_t) = r_{\text{trg}}(s_t,a_t)$, not that the reward is of the form $r(s)$. We will make this clearer in the revision. 5) **Clarifications on Figure 1(a) and performance degradation.** To show the performance degradation empirically in more general settings, we deploy DARC in the target domain and compare it with its performance in the source domain in Figure 1(a). The motivation is that, according to DARC's objective, the DARC policy on the source should achieve a similar performance (and a similar trajectory distribution) to the optimal policy on the target. But we agree that, empirically, this may not hold. So we followed the reviewer's suggestion and added the reward of the optimal policy on the target to better show the performance degradation in Figure 1 and Tables 2 and 3. In contrast, little performance degradation happens in the DARC setting. We will update the figures and tables accordingly in our revision. 6) **Not trained for the same steps.** We train the baselines until convergence.
Though we believe the results would not change much with more training steps, we will train the baselines for the same number of steps for a fair comparison in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for adding the new experiments on off-dynamics settings and DARC settings. The results in the intact-source and broken-target environment (DARC) are convincing, and they make sense to me now. The DARC policy learned in the source is trained on the modified reward and thus will tend to imitate the target data, generating trajectories that resemble optimal state sequences in the target domain. So, applying DARAIL should not deteriorate your starting point of DARC. This also explains why using just an optimal source policy is not enough, and you need to start with a DARC policy. I am raising my score to 5 and would be willing to increase further if the following questions are answered. 1. I appreciate you confirming that DARC fails in your original setting because it learns an arbitrary value for the zero index. Can you provide similar qualitative reasoning for why DARC fails in the gravity-change setting but DARAIL works perfectly fine? I understand that the underlying reason is that the assumption of the DARC paper might not hold, but can you discuss how exactly that plays out in the HalfCheetah/Reacher environments? And why does DARAIL (which also uses a DARC-like policy) not suffer from the same issues and perform so well? 2. In the gravity experiment, DARC significantly underperforms for HalfCheetah (1544 vs. 5818) but works almost as well for Reacher. - What is the reasoning here, and how many seeds are these results averaged over? - The poor performance of DARC in HalfCheetah is quite extreme; the gap is even larger than in the experiments in the original paper. This is impressive but also surprising. I don't understand why DARC would fail so significantly here. - Actually, now that I think about it, does Reacher even get affected by the coefficient of gravity? 3.
In the experiment on optimal source policy imitation, did you exactly run Algorithm 1, with the only change being that in Step 2: [Call DARC], you do not use the modified reward $r_{\text{modified}} = r(s_t, a_t) + \Delta r(s_t, a_t, s_{t+1})$, but use the environment reward only: $r_{\text{unmodified}} = r(s_t, a_t)$? If not, what is the "Mimic Source Optimal Policy" experiment? 4. If I understand correctly, DARAIL exploits some rollouts in the target environment to match DARC's source trajectory distribution. This results in a DARAIL policy that is more suitable to be deployed in the target environment than DARC's learned policy. But could this strengthening of the target's influence be achieved in another way, like learning a modified DARC policy with $r_{\text{modified}} = r(s_t, a_t) + \alpha \Delta r(s_t, a_t, s_{t+1})$, where $\alpha > 1$? I know the authors added a rebuttal experiment with more target rollouts for DARC. But that only makes the $\Delta r$ term more accurate; it does not increase its influence on DARC's learned policy. If my understanding is correct here, then this simple solution should be a baseline in this paper. If the authors can try it out, that would be great. The implementation change is as easy as adding a coefficient and experimenting with values like 1.5, 2, 10, etc. — depending on the reward scale. But if not, I would be happy with an explanation of why this would not work. Why is DARAIL, which is much more complex and comes with additional requirements for online environment rollouts with the DARC policy, necessary? 5. There are several points in the paper presentation that were not addressed (understandably, due to the lack of space in the rebuttal response). Some of these are important. Now that there is no word limit, can you make sure everything is replied to?
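For concreteness, the $\alpha$-scaled variant proposed in question 4 could be sketched as below. This is only an illustrative sketch: the function name is hypothetical, and it assumes $\Delta r$ is estimated from a pair of learned domain classifiers on $(s,a,s')$ and $(s,a)$, as in the DARC paper.

```python
def alpha_scaled_reward(r, logq_trg_sas, logq_src_sas,
                        logq_trg_sa, logq_src_sa, alpha=1.0):
    """Sketch of r_modified = r(s,a) + alpha * Delta_r(s,a,s').

    Delta_r estimates log p_trg(s'|s,a) - log p_src(s'|s,a) from the
    log-outputs of two domain classifiers (one conditioned on (s,a,s'),
    one on (s,a)); alpha > 1 weights the dynamics-consistency term more
    heavily, as the reviewer suggests. alpha = 0 recovers the plain
    environment reward.
    """
    delta_r = (logq_trg_sas - logq_src_sas) - (logq_trg_sa - logq_src_sa)
    return r + alpha * delta_r
```

With `alpha=0` this reduces to the unmodified environment reward referenced in question 3.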
--- Reply to Comment 1.1.1: Comment: 1, **Qualitative reasoning for why DARC fails in the gravity-change setting, but DARAIL works perfectly fine?** As mentioned in the previous general response, DARC works when the performance of the target optimal policy is similar in the source domain. This is hardly true when gravity differs, in general. For example, in simple cases like Pendulum and InvertedPendulum, the gravity parameter directly changes the angle of the pole, the velocity of the cart, etc. Thus, it's possible that the target optimal policy leads to a different angle and velocity, which could make the pole more likely to fall in the source domain. Intuitively, in this gravity-changing setting, applying the DARC policy to the target domain might generate very different trajectories in the target domain than the source ones, causing performance degradation. For example, in InvertedPendulum, we ran experiments modifying the gravity coefficient from 1.0 to 10. DARC receives optimal rewards in the source domain, but only 40\% of the source domain rewards in the target domain. In this setting, the reward is defined by whether the pole stays on the cart at each step, and the trajectory ends once the pole falls. This means that the length of DARC target trajectories is only 40\% of DARC source trajectories. **DARAIL works in this setting.** DARAIL works well in this setting because we propose using imitation learning to transfer these trajectories to the target domain instead of directly deploying DARC to the target domain. By doing so, we do not require the assumption that the target optimal policy performs similarly in the source domain. Though DARAIL utilizes DARC trajectories in imitation learning, it doesn't mimic the DARC policy directly but mimics the DARC trajectories in the state space of the source domain, which avoids deploying DARC directly to the target domain.
2, **DARC significantly underperforms for HalfCheetah but works almost as well for Reacher.** First, the experiment results are averaged over 5 runs. Second, in Reacher, changing the gravity may not create a significant dynamics shift compared to HalfCheetah, even though the coefficient of gravity is modified from 1.0 to 0.5. Besides gravity, we also noticed that modifying the "friction" in the Reacher configuration file from 1.0 to 0.5 or 2.0 does not cause very large performance degradation. How much performance degrades is determined by how parameters like gravity and friction affect the dynamics model. For tasks that depend more on gravity, like HalfCheetah, the performance degradation is more obvious. We will add more experiments in different environments, with different dynamics shifts and scales of shift, including gravity, friction, and density, in our revision. 3, **Mimicking the source optimal policy.** Our experiment is exactly what you describe: using the unmodified reward $r_{\text{unmodified}} = r(s_t,a_t)$ in Step 2: [Call DARC]. 4, **Modifying the scale of DARC's $\Delta r$.** Thanks for the suggestion. As mentioned, DARC training constrains the DARC policy to behave similarly in the two domains, and the $\Delta r$ in the modified reward can be regarded as a constraint or regularization term during training. Thus, increasing $\alpha$ imposes a stronger constraint on policy optimization, making DARC behave more similarly in both domains. However, that does not guarantee that the DARC policy will perform well in the target domain. Moreover, increasing $\alpha$ (stronger regularization) might cause the training to focus more on the constraint instead of maximizing the reward. We further ran experiments on HalfCheetah with the broken source and intact target environment, and with the gravity-changing setting (from 1.0 to 0.5), to verify this. The following table shows the results of DARC on HalfCheetah with different $\alpha$.
As we increase $\alpha$ from 1.0 to 2.0, we observe that the DARC rewards in the source and target get closer in both cases. However, training oscillates strongly due to the regularization. Although DARC evaluation can benefit from increasing $\alpha$, the regularization pulls DARC's training performance down toward its evaluation performance. In this case, DARAIL still outperforms DARC with $\alpha = 2$. We will experiment more and add this baseline with different $\alpha$ in our revision.

| | DARC Training, alpha = 2 | DARC Evaluation, alpha = 2 | DARC Training, alpha = 1 | DARC Evaluation, alpha = 1 | DARAIL |
|----|-----------|-----------|-----------|-----------|-----------|
| Changing Gravity | 2438 | 2307 | 5828 | 1544 | 5818 |
| Broken Environment | 6171 | 5955 | 6995 | 4133 | 7067 |

--- Reply to Comment 1.1.2: Comment: 5, **Paper presentation.** **Typos.** Thanks for the suggestions on the paper writing. We will fix the typos, including the Figure and $\prod_t$ in Eq 3.1/3.2. Also, the optimal policy is proportional to the exponential of the cumulative rewards, not the reward at time step t, in Lines 110-111. **More explanation of Lines 151-166 regarding the reward estimator $R_{AE}$.** The imitation learning step iteratively trains a discriminator (maximizing Eq 3.5) and then updates the policy with the signal provided by the discriminator, which is $-\log D_{\omega}(s_t,s_{t+1})$. We can view this as a 'local reward estimator,' and the policy is updated with $-\log D_{\omega}(s_t,s_{t+1})$ in imitation learning instead of the reward $r(s_t,a_t)$. However, this estimate can be biased and inaccurate in generative adversarial training, and the problem is even more severe under the off-dynamics shift. Similarly, in off-policy evaluation (OPE) for offline bandits and RL, such biased and inaccurate reward estimation exists.
A doubly robust estimator is employed in this case to correct the reward estimate with an importance-weight term. Motivated by the doubly robust estimator, we propose a surrogate reward function that uses the source domain reward $r_{src}(s_t,a_t)$ (as the reward function is the same across the domains) and the importance weight between the dynamics to correct the local reward estimator $-\log D_{\omega}(s_t,s_{t+1})$, which is similar to the doubly robust estimator that uses $\rho (r-\hat{r})$ to correct the reward estimate $\hat{r}$. Here, $\rho$ is the importance weight. We believe we have covered all the questions regarding paper presentation. If more clarification is needed, please let us know. Thank you again for the suggestions.
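As an illustration of this doubly-robust analogy, a correction of the discriminator signal by an importance weight could look like the sketch below. This is only a reading of the analogy described above, with a hypothetical function name; the exact surrogate reward used in the paper may differ.

```python
import math

def dr_style_surrogate_reward(r_src, d_out, rho, rho_clip=10.0):
    """Doubly-robust-style surrogate reward (illustrative sketch).

    r_src : source-domain reward r_src(s_t, a_t), shared across domains
    d_out : discriminator output D_w(s_t, s_{t+1}) in (0, 1)
    rho   : importance weight between target and source dynamics

    The local estimator is r_hat = -log D_w(s_t, s_{t+1}); by analogy
    with the doubly robust estimator, the term rho * (r_src - r_hat)
    corrects the estimate r_hat.
    """
    r_hat = -math.log(d_out)
    rho = min(max(rho, 0.0), rho_clip)  # clip the weight for stability
    return r_hat + rho * (r_src - r_hat)
```

Note that with `rho = 1` the sketch falls back entirely on the shared source reward, and with `rho = 0` it uses only the discriminator signal; intermediate weights interpolate between the two estimates.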
Summary: The paper considers the off-dynamics RL setting and identifies that an existing approach, DARC, fails to obtain the true optimal policy in the target MDP. Leveraging ideas from the imitation learning literature, the paper proposes a learning objective that takes into account the dynamics shift between the source and target MDPs. The objective is theoretically sound, and the conducted experiments show that the proposed method, DARAIL, is able to account for severe dynamics shift in MuJoCo environments. Strengths: - The experiments show that DARAIL is able to account for severe dynamics shift and is not too sensitive to the clipped importance sampling ratio. - The relationship between the proposed estimator and the doubly robust estimator is interesting. Weaknesses: - The notation can be defined more clearly. - On page 3, line 106: Is $\tau$ a trajectory sequence of state-action pairs, or just a sequence of states? - What generates this? - On page 3, Eq (3.2): Is $\tau_{\pi^*}^{trg}$ the trajectory generated by $\pi^*$ in the target domain? - It would be preferable if the paper could explain why DARC fails to address the dynamics shift. Technical Quality: 3 Clarity: 2 Questions for Authors: **Questions** - On page 4, lines 120-122: Is this always true, or only sometimes? If I understand correctly, since the dynamics are "unknown", it may recover the wrong target policy (i.e. $\pi^* \neq \pi^{trg}$)? - DARC does not need any online learning while DARAIL does, correct? I believe this should be highlighted if that is the case. - On page 4, lines 151-152: This shared reward is an assumption, correct? What if this is not true either? **Possible typos** - Page 2, line 128: Domain Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: **Limitations** - The experiments are purely in MuJoCo, so I am curious what can happen in other domains. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We first thank the reviewer for their time and comments. We now address your questions point by point. 1, **Notation of $\tau$.** $\tau$ represents a trajectory, i.e., a sequence of state-action pairs. For example, we use $\tau_{\pi_{\theta}}^{\text{src}}$ to represent the trajectories generated by the policy $\pi_{\theta}$ in the source domain. 2, **Why DARC fails to address the dynamics shift.** First, as mentioned in the general comment, DARC works well under some dynamics shifts when the assumption made in the DARC paper is satisfied, i.e., the optimal policy for the target domain is also a good policy in the source domain. DARC works well in this setting because the objective of DARC can be viewed as maximizing the cumulative reward in the source domain under a constraint that the learned policy performs similarly in the two domains. However, in more general off-dynamics settings, this assumption is not satisfied. DARC will be suboptimal in the target domain, as DARC does not perform similarly in the two domains, leading to performance degradation in the target domain. Our method is motivated by the fact that DARC generates trajectories in the source domain that resemble the optimal ones in the target domain. We propose to use imitation learning to transfer DARC trajectories to the target domain so that deploying the policy to the target domain will not suffer such performance degradation. 3, **On page 4, lines 120-122: recovering the wrong target policy.** We would appreciate any clarification of the definition of $\pi_{trg}$ in the second part of the question. To answer the first part, matching the distribution is the main goal of DARC's learning objective. Under unknown dynamics and an unknown dynamics shift, DARC tries to learn a policy that generates a trajectory distribution similar to the target trajectories from the target optimal policy.
But the policy $\pi_{DARC}$ will not be the same as the target optimal policy due to the dynamics shift. Here, we only use $\pi^*$, the target optimal policy, in our analysis of DARC's failure. 4, **DARC does not need any online learning, while DARAIL does.** DARC does need online learning: it samples $(s_t,s_{t+1})$ from the target domain. Similarly, our method requires sampling from the target domain. In both cases, the amount of data rolled out from the target is significantly smaller than the rollouts from the source. Details can be found in the third bullet point of the general response. 5, **Assumption about the shared reward.** In the off-dynamics RL setting, we specifically focus on the setting where the two domains differ only in the dynamics but share reward functions and state/action spaces. If this assumption is not true, this becomes a different domain adaptation problem, and we might need other techniques to adapt to the shift in the reward function. 6, **Experiments in settings other than MuJoCo.** We follow recent work in off-dynamics RL and experiment on the MuJoCo environments, which are easy to access and analyze. We will also add new experiments on other environment settings in our revision. We appreciate the reviewer's feedback. We hope we have thoroughly addressed your concerns and are willing to answer any additional questions. --- Rebuttal Comment 1.1: Comment: Thank you for the response; regarding (3), you have addressed my question. I will keep the score as is. --- Reply to Comment 1.1.1: Comment: Thank you very much for your reply! Best, Authors
Summary: The authors introduce the approach Domain Adaptation and Reward Augmented Imitation Learning (DARAIL) for off-dynamics RL. The work aims at generating the same trajectories in the target domain as the expert trajectories learned via DARC in the source domain. With a GAIL-style framework and reward augmentation, DARAIL reaches SOTA in benchmark off-dynamics environments. DARAIL does not assume the learned expert policy in the source domain to be close to the optimal policy in the target domain, and the authors are able to give a theoretical error bound for the method in terms of the dynamics shift scale. Strengths: 1. The problem of off-dynamics RL is very meaningful, given that some target environments and data are not easily accessible. 2. The shortcoming of DARC is clearly stated. The motivation is clear. 3. The theoretical bound proof is neat. 4. Sufficient experiments on baseline comparison, sensitivity analysis, and dynamics shift scales. Weaknesses: 1. Typo in Line #81: the expectation should be over p_trg 2. Typo in Appendix C.2 Line #537 (C.1) 3. Typos in Appendix C.2 Lines #540, #546 4. There is no comparison experiment of directly using (s,s') from the expert trajectories in the source domain for DARAIL. 5. Freezing the 0-index action is not a good way to validate the effectiveness of the method on dynamics shift. Experiments on more diverse dynamics shift environments are needed. Technical Quality: 3 Clarity: 4 Questions for Authors: How would the quality of the DARC trajectories influence the overall imitation learning performance in the target domain? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Did not test on more complicated dynamics shifts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We first thank the reviewer for their time and comments. We now address your concerns point by point. --- > How would DARC trajectory quality influence the overall imitation learning performance in the target domain? We use trajectories generated by DARC in the source domain as the expert trajectories for imitation learning; according to DARC's objective, these resemble the optimal trajectories in the target domain. In general, the imitation learning performance should be similar to the DARC performance in the source domain, with the latter expected to perform similarly to the target optimal policy if the DARC learning error is small. The better the DARC trajectory quality, the better the performance the resulting policy from imitation learning can achieve. Overall, our analysis in Section 4 shows that good DARC trajectory quality yields a smaller learning error in the first term of the error bound. Further, we want to emphasize that our method is not particularly limited by the performance of DARC, because our method can be viewed as a general framework that uses imitation learning to transfer trajectories from the source domain to the target domain, as long as we can obtain trajectories that resemble the optimal trajectories on the target. --- > There are no comparison experiments of directly using $(s,s')$ from the expert trajectories in the source domain for DARAIL. We assume that by "$(s, s')$ from the expert trajectory in source," the reviewer means using the optimal trajectory in the source. We provide such experiment results on HalfCheetah and Reacher in Tables 2 and 3 in the attached PDF. We first train a policy in the source domain (without considering the dynamics shift) and use imitation learning to transfer it to the target domain.
The results show that DARAIL outperforms the method that uses optimal trajectories in the source domain as the expert policy for imitation learning because DARC can generate trajectories that resemble the optimal ones in the target domain. --- > Experiments on other off-dynamics shifts. We provide additional experiment results on HalfCheetah and Reacher in Table 1 in the provided PDF. The off-dynamics shift is constructed by modifying MuJoCo's configuration files. We modify the coefficient of gravity from 1.0 to 0.5 for the target domain. Our results show the effectiveness of our method. In our revision, we will add more experiments on these more general cases to demonstrate the performance of DARAIL. We thank the reviewer for their detailed suggestions for improving the readability of our paper. We will follow them closely, fix all the typos, and clarify the writing of the paper.
Rebuttal 1: Rebuttal: We want to thank all reviewers for their time and constructive feedback on our paper. Since some reviewers referred to similar concerns, we would like to make a general response to address these questions. --- > Experimental results on more general off-dynamics settings and more baselines (R_mGy8 and R_VFUe). In the attached file, we provided additional experiment results on HalfCheetah and Reacher to validate our algorithm in general off-dynamics settings. The off-dynamics shift is constructed by modifying MuJoCo's configuration files. We modify the coefficient of gravity from 1.0 to 0.5 for the target domain. Our results in Table 1 of the attached PDF file show the effectiveness of our method. In our revision, we will add more experiments on these more general cases to demonstrate the performance of DARAIL. We also validate the DARAIL in intact source and broken target environments (DARC settings). The results are shown in Table 3 of the attached file. We can see that DARAIL inherits the good performance of DARC in this setting. Considering the limited settings in DARC and the more general settings in our experiment, **DARAIL will not degrade the performance of DARC in the source intact and target broken settings and will improve the performance in more general settings.** Further, as suggested by R_mGy8 and R_VFUe, in Table 2 and 3 of the attached file, we compare with the performance of the target optimal policy to better show the suboptimality of DARC. Also, we compare DARAIL with mimicking the $(s_t,s_{t+1})$ of the optimal trajectories in the source domain and show the superiority of DARAIL. --- > Regarding “why DARC fails” in more general off-dynamics cases (R_LdFt and R_VFUe). We will explain it from theoretical and empirical aspects and compare the two settings: broken source and intact target v.s. intact source and broken target. 
Theoretically, as stated in Lemma B.1 in the DARC paper, the objective of DARC is equivalent to training a policy (maximizing the cumulative reward) in the source domain under a constraint that DARC behaves similarly in both domains and thus receives similar rewards. Also, in the analysis of the error bound in DARC, they made a strong assumption in Assumption 1 that **the optimal policy for the target domain should also receive a similar reward in the source domain**, that is, $E_{p_{\text{src}}, \pi^*}[\sum_t r(s_t,a_t)] - E_{p_{\text{trg}}, \pi^*}[\sum_t r(s_t,a_t)] \leq 4R_{max} \sqrt{\epsilon/2}$, where $\pi^*$ is the optimal policy in the target domain, $R_{max}$ is the maximum reward of a trajectory, $\epsilon$ is the slack term, and the error bound depends on $\epsilon$. In the setting where the source is intact and the target is broken (same as in the DARC paper), the optimal policy for the target domain gives a 0 value for the 0th index. Thus, deploying this policy in the intact source domain will receive the same reward, and the assumption here is perfectly satisfied with $\epsilon=0$. Thus, the error bound for the optimal DARC policy is 0, as analyzed in Theorem 4.1 in the DARC paper. Empirically, we notice that under the DARC setting, the DARC policy learns a near-0 value for the 0th index. This guarantees that the policy can generate similar trajectories in the two domains. Also, maximizing the adjusted cumulative reward in the source domain with a policy with a near-0 value for the 0th index is equivalent to maximizing the cumulative reward in the target domain. Thus, DARC perfectly suits the source-intact and target-broken setting. However, in the source-broken and target-intact setting, and also in other more general off-dynamics settings, the optimal policy in the target environment might not perform well in the source domain.
It may even suffer large performance degradation; e.g., in the source-broken and target-intact setting, the policy loses one action dimension in the source domain. Thus, the assumption made in the DARC paper might not hold, or, formally speaking, the $\epsilon$ there can be very large, leading to performance degradation of DARC in the target domain. In particular, in the source-broken and target-intact setting, we agree with R_VFUe that DARC fails because it learns an arbitrary value for the 0-th dimension of action torque, which becomes detrimental in the target domain. However, this is just an artifact of the particular setting. As we discussed above, the intrinsic reason that DARC fails is the violation of the assumption. In Table 1 of the provided PDF, we demonstrate DARC's failure in other off-dynamics settings where the dynamics shift is not induced by the broken action. --- > Regarding access to target domain data (R_JwpF and R_VFUe). Both DARC and DARAIL require some limited access to target rollouts. In DARAIL, the imitation learning step only rolls out data from the target domain every 100 steps of source domain rollouts, i.e., 1\% of the source domain rollouts. We claim that more target domain rollouts will not improve DARC's performance due to its suboptimality, and DARAIL is better not because it has more target domain rollouts. We verify this by comparing DARC and DARAIL with the same amount of rollouts from the target domain in Tables 4 and 5 in the attached file. Specifically, we examine DARAIL with 5e4 target rollouts alongside DARC with 2e4 and 5e4 target rollouts. DARAIL has 5e3 target rollouts for the Reacher environment, while DARC has 3e3 and 5e3 rollouts. From the results, we see that increasing the target rollouts from 2e4 to 5e4 (or from 3e3 to 5e3 in the case of Reacher) does not yield a significant improvement in DARC's performance due to its inherent suboptimality, as mentioned in the second bullet.
Notably, DARAIL consistently outperforms DARC when given comparable levels of target rollouts. Pdf: /pdf/f7ad9af31332949d30663a6ec4c0d14a54355885.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Physically Compatible 3D Object Modeling from a Single Image
Accept (spotlight)
Summary: This paper presents an optimization method that produces 3D shapes from a single image while accounting for mechanical properties and external forces. The generated shapes, in a state of static equilibrium, match the input image. The proposed method can be used with other single-view image shape generation methods. Strengths: The strengths of the proposed method are: - The proposed optimization framework is novel and versatile. - The method can produce fabricable shapes from a single image. Weaknesses: The weaknesses of the proposed method are: - The limitations of the proposed method are not discussed in the main paper. - The discussion of alternative approaches is not clear in the main paper. Technical Quality: 3 Clarity: 4 Questions for Authors: - I am curious about why the authors decided to optimize the deformation instead of optimizing the hollow of the solid geometry, similar to the method used in Make-It-Stand [Prevost et al. 2013] and the follow-up papers. - It is unclear to me what the limitations of the proposed method are. - It is unclear to me what the performance overhead of the proposed method is. I would encourage the authors to report the shape complexity and the corresponding computational time. - It is unclear to me whether image loss is used during the optimization. In Fig. 2, the pipeline involves image loss, but I did not see the discussion in Sec. 2, only in the evaluation part. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: As mentioned above, I think the limitations should be discussed more in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the valuable comments from the reviewer. **Rebuttal Outline:** 1. **[Adjustment] (W1&W2&Q2)** Move Limitations and Discussions with Alternative Approaches to Main Text. 2. **[Discussion] (Q1)** Alternatively Optimizing the Hollow of the Solid Geometry. 3. **[Clarification] (Q3)** Performance. 4. **[Clarification] (Q4)** Image Loss in Optimization. Please find below our detailed responses to your comments and concerns. --- **1. (W1&W2&Q2) The limitation of the proposed method and the discussion with alternative approaches is not clear in the main paper.** Thank you for pointing out the placement of our discussions on the method's limitations and comparisons with alternative approaches. These discussions are indeed present in Appendices G and H. We acknowledge the importance of making this information more accessible and are considering moving these sections into the main body of the text. We plan to compress Figure 4 and move Figure 5 to the appendix to free up enough space to accommodate these critical discussions within the page limit. --- **2. (Q1) I am curious about why the authors decided to optimize the deformation instead of optimizing the hollow of the solid geometry, similar to the method used in Make-It-Stand [Prevost et al. 2013] and the follow-up papers.** While optimizing the hollow of solid geometry is suitable for enhancing standability, it is not well-suited to address other losses such as ensuring the static shape accurately replicates the geometry depicted in the input image. Furthermore, Make-It-Stand and its follow-ups involve an extra carving stage that hollows out the interior of the 3D shape, along with a shape deformation stage. This carving process often depends on manually defined heuristics to prevent the creation of isolated parts and is computationally intensive due to its combinatorial nature. 
In contrast, our method, which applies plastic deformation to the whole shape, eliminates the need for this additional, complex stage. --- **3. (Q3) It is unclear to me what is the performance overhead for the proposed method. I would encourage the authors to report the shape complexity and the corresponding computational time.** The average shape complexity in our experiments, across 100 evaluated shapes, is approximately 5,200 vertices and 14,000 tetrahedra per shape. Regarding computational performance, the average computational time required for our method on a standard PC configuration (AMD Ryzen 9 5950X 16-core CPU and 64GB RAM) is about 2.5 minutes per shape. We will include these results in the final manuscript. --- **4. (Q4) It is unclear to me whether image loss is used during the optimization? In Fig. 2, the pipeline involves image loss, but I did not see the discussion in Sec. 2, only in the evaluation part.** Thank you for raising this point. We did not utilize image loss directly during the optimization process. Instead, we assume that the initially reconstructed shape from the input image accurately represents the target geometry, as outlined in lines 161-162. The evaluation of image loss, however, is used to assess whether the final optimized shape, when subjected to external physical forces such as gravity, still corresponds closely with the original input image. --- Please let us know if you have any further questions. We really appreciate your time. Thank you! Best regards, Authors
Summary: This paper introduces the concept of physical compatibility into single-image to 3D generation. It proposes a post-optimization method for existing 3D generation pipelines, which takes the generated mesh as the target mesh and optimizes the deformation gradient to obtain a rest-shape geometry without external forces. This rest-shape geometry can align with the target mesh under physical conditions provided by the user. The proposed method significantly improves the results for different single-image to 3D pipelines and is demonstrated to be effective through a comparison of 3D printed results under real-world scenarios. Strengths: The idea of introducing physical compatibility into 3D generation is very innovative, and it is crucial for the application of current 3D generation results to real-world scenarios. The authors provide detailed derivations and explanations, and propose five reasonable evaluation metrics to assess the degree of physical compatibility. They demonstrate the effectiveness of their method in real-world scenarios through 3D printed objects. Weaknesses: This is not strictly a weakness of the proposed method. The paper mainly focuses on the optimization of a single object. However, current 3D generation pipelines can handle the generation of multiple coupled objects, such as a castle. I wonder whether this method can be applied to such scenarios. Technical Quality: 4 Clarity: 4 Questions for Authors: The proposed method optimizes the positions of mesh vertices while maintaining the connectivity of the vertices. However, existing 3D generation models may introduce incorrect connectivity for complex or partially occluded input images. I wonder if the physical compatibility approach can correct such errors in connectivity during the optimization process. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the reviewer’s insightful comments. **Rebuttal Outline:** 1. **[Discussion] (W1)** Handling the generation of multiple coupled objects. 2. **[Discussion] (Q1)** Correcting errors in connectivity during the optimization process. Please find below our detailed responses to your comments and concerns. --- **1. (W1) This is not strictly a weakness of the proposed method. The paper mainly focuses on the optimization of a single object. However, current 3D generation pipelines can handle the generation of multiple coupled objects, such as a castle. I wonder whether this method can be applied to such scenarios.** We appreciate your observation highlighting a promising direction for future research. Extending our method to handle multiple objects introduces the challenge of managing interactions and collisions between objects. As a potential pathway forward, we could integrate differentiable simulation techniques and collision constraints [1] into existing 3D generative models. This adaptation would allow our framework to address the dynamic interactions and physical compatibility of multiple coupled objects. --- **2. (Q1) Existing 3D generation models may introduce incorrect connectivity for complex or partially occluded input images. I wonder if the physical compatibility approach can correct such errors in connectivity during the optimization process.** Currently, our method maintains fixed connectivity during the physical compatibility optimization. However, a promising extension to address incorrect initial connectivity could involve incorporating adaptive remeshing or subdivision techniques during the optimization process. Specifically, while adhering to our established physical compatibility framework, we could intersperse the optimization with periodic remeshing steps. 
This approach would adaptively correct connectivity errors throughout the optimization process, enhancing the accuracy and robustness of the reconstructed shapes. --- **References** [1] Huang, Zizhou, et al. "Differentiable solver for time-dependent deformation problems with contact." ACM Transactions on Graphics 43.3 (2024): 1-30. --- We sincerely appreciate the comments. Please let us know if you have further questions. Best regards, Authors
Summary: This paper introduces a physical compatibility optimization framework for reconstructed objects from a single image. The approach considers mechanical properties, external forces, and rest-shape geometry, integrating static equilibrium as a hard constraint. This framework improves upon existing methods by ensuring the stability and accuracy of reconstructed objects under external influences. Quantitative and qualitative evaluations show enhancements in physical compatibility. Strengths: 1. Performance: The proposed method achieves state-of-the-art results. The experiments well validate the effectiveness of the proposed methods. 2. Clarity: The paper is well-written and easy to follow. 3. Technical Novelty: The main contributions of this paper are twofold: 1) They propose a physical compatibility optimization framework for 3D objects and decompose the mechanical properties, external forces, and rest-shape geometry. 2) They optimize the rest-shape geometry using predefined mechanical properties and external forces and ensure the object’s shape aligns with the target image when in static equilibrium. Weaknesses: In this paper, the authors primarily apply the physical compatibility optimization framework to enhance the physical attributes of 3D models obtained from existing methods rather than reconstructing them from a single image. Therefore, I think the title "Physically Compatible 3D Object Modeling from a Single Image" may not be suitable, as the focus lies on enhancing physically compatible modeling of 3D objects derived from off-the-shelf methods. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. All $\mathbf{x}_{static}$ may be $\mathbf{X}_{static}$ as the definition in line 89. 2. In some cases discussed in the paper, like the flamingo standing on one leg, should it be standable even after optimization? I think it should not be standable under gravity. 3. 
Why do you evaluate the different off-the-shelf methods using connected components when this cannot demonstrate the superiority of your proposed method? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The authors have discussed the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's effort in reading and evaluating our work carefully! **Rebuttal Outline:** 1. **[Discussion] (W1)** Title. 2. **[Adjustment] (Q1)** $\textbf{x}_{static}$. 3. **[Clarification] (Q2)** 3D Printing Results of Flamingo. 4. **[Clarification] (Q3)** Evaluation on the number of Connected Components. Please find below our detailed responses to your comments and concerns. We hope these responses assist you in finalizing your assessment of our manuscript's rating. --- **1. (W1) The title may not be suitable, as the focus lies on enhancing physically compatible modeling of 3D objects derived from off-the-shelf methods.** We appreciate your observation regarding the scope of our work and the paper’s title. The title "Physically Compatible 3D Object Modeling from a Single Image" was chosen to highlight our method's unique process: starting with a single image and constructing a 3D object under physical compatibility constraints. Although our approach utilizes existing 3D reconstruction techniques, the core of our contribution lies in the introduction and application of a novel optimization framework that ensures physical compatibility. We believe that retaining the current title accurately reflects the novelty of our contribution. However, we are open to considering alternative titles such as "Enhancing Physical Compatibility of 3D Object Modeling from Single Images" if it would provide clearer communication of our work's scope and impact. --- **2. (Q1) All $x_{static}$ may be $X_{static}$ as the definition in line 89.** Thank you for your attentive reading! In our manuscript, we adhere to the conventional notation system used in physical simulation and finite element analysis [1], where the rest shape is denoted by capital letters $\textbf{X}$ and the static shape by lowercase letters $\textbf{x}$. To resolve the confusion, we will adjust the notation in line 89 to use $\textbf{x}$ consistently. 
Additionally, we will include a clarifying statement in the text to explicitly define that $\textbf{X}$ represents the rest shape. --- **3. (Q2) For the flamingo standing on one leg, should it be standable even after optimization?** Our optimization indeed successfully enables the flamingo model to stand on one leg. To address your concern, we've included a supplementary PDF in this rebuttal, showing the 3D-printed results of the standing flamingo after optimization from various camera angles. We hope this additional evidence clarifies the effectiveness of our method. --- **4. (Q3) Why do you evaluate the different off-the-shelf methods using connected components?** Thank you for the question. While the metric of connected components may not directly demonstrate the superiority of our proposed method, it remains a critical dimension for evaluating physical compatibility. This metric effectively captures the integrity of a 3D object, which is an essential attribute for assessing the physical realism of 3D models, since disconnected parts fall apart and thus naturally invalidate the definition of "physical compatibility". Including it provides valuable insights into the baseline performance of existing off-the-shelf methods with respect to physical compatibility. --- **References** [1] Eftychios Sifakis, and Jernej Barbic. "FEM simulation of 3D deformable solids: A practitioner's guide to theory, discretization, and model reduction." ACM SIGGRAPH 2012 courses. 2012. 1-50. --- We appreciate your time! Thank you so much! Best regards, Authors --- Rebuttal Comment 1.1: Comment: Thanks for your efforts. For W1, I think your response did not fully address my concerns. I want to point out that your work is not directly related to 3D object modeling from a single image. The proposed revised title "Enhancing Physical Compatibility of 3D Object Modeling from Single Images" and your original title have no obvious difference. 
--- Rebuttal 2: Title: Reply Comment: Thank you for your feedback. Based on your suggestions, we will change the title to ‘Physically Compatible 3D Object Modeling.’ We are also open to any other title suggestions from the reviewer and will adjust the paper title based on the meta-review if necessary. Please feel free to let us know if there's anything else we can provide to encourage you to increase the score. Best, Authors
Summary: The paper presents a 3D mesh optimization framework to ensure physically plausible 3D object reconstructions from a single image. These 3D reconstructions should conform with global (e.g. gravity) and user-defined constraints (material stiffness) as well as match with a target image of the reconstructed, simulated object. The reconstructed mesh is considered as a solid object. The method is combined with 5 existing single-image 3D object reconstruction methods, which are evaluated on a collection of 100 Objaverse samples. Several additional metrics are introduced to test for the physical 'compatibility' of the reconstructions: mean stress, connected components, standability and the difference to the reference view. Strengths: The motivation for physically plausible reconstruction is clearly stated. Ensuring physical constraints of reconstructed 3D meshes allows for further downstream applications, such as 3D printing, hence it is an important aspect to consider. The writing and quality of the figures (including the video) is of very high quality. The set of metrics is well chosen and the paper includes very detailed descriptions of the method. Its effectiveness is demonstrated using real-world 3D printed results with and without the proposed optimization. Weaknesses: - While I can understand the space constraints of the submission format, the main paper should still include a complete and comprehensive overview of related work instead of deferring it to the appendix. Technical Quality: 3 Clarity: 3 Questions for Authors: - Figure 3: The plot for TetSphere shows minimal improvements using the proposed optimization. Could the authors elaborate on this? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The checklist is complete and the paper discusses its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive questions. **Rebuttal Outline:** 1. **[Adjustment] (W1)** Regarding the placement of the Related Work section. 2. **[Question] (Q1)** Elaboration on results of TetSphere. Please find below our detailed responses to your comments and concerns. --- **1. (W1) The main paper should still include a complete and comprehensive overview of related work instead of deferring it to the appendix.** We will adjust the manuscript to include the Related Work section within the main text. We plan to move Figure 5 to the Appendix and compress Figure 4 to free up sufficient space to accommodate the Related Work section within the page limits. --- **2. (Q1) Figure 3: The plot for TetSphere shows minimal improvements using the proposed optimization. Could the authors elaborate on this?** The modest improvement observed for TetSphere is attributed to its use of an explicit volumetric representation. As discussed in TetSphere's original paper, this representation inherently enhances robustness to slender shapes and thin structures. Unlike the other four methods we evaluated, which do not utilize explicit volumetric representations and thus are more vulnerable to fractures in thin structures, TetSphere maintains structural integrity. Nevertheless, our optimization method still enhances the robustness of TetSphere to fractures, as demonstrated in Figure 3. --- Please let us know if you have any further questions. We appreciate your time and effort. Thank you! Best regards, Authors
Rebuttal 1: Rebuttal: # General Response We sincerely appreciate the detailed reviews and the thoughtful feedback provided by all reviewers and the Area Chair. In addition to addressing specific comments from each reviewer, we would like to outline our primary contributions. - **[Motivation]** We tackle a critical issue in 3D generative modeling -- enhancing the **physical compatibility** of generated objects, a vital aspect often overlooked in contemporary research [cqis, 91LB]. - **[Methodology]** Our approach is **novel** and significantly improves the **versatility and applicability** of 3D reconstruction methods [H9KU, 91LB, XLVo], evident through the integration of physical constraints within the generative process. - **[Experiments]** We conducted comprehensive experiments that not only demonstrate the **robustness and versatility** of our method but also extend its applicability to **real-world scenarios** like 3D printing [cqis, 91LB]. - **[Presentation]** The manuscript is recognized for **its clarity and well-written quality** [cqis, H9KU]. We hope our responses address all reviewers' concerns and help improve the review scores. We thank all reviewers and the AC again for their time and efforts! Pdf: /pdf/3eb1974e5470f8bc856108b8228feeea68f98f5a.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Multi-Label Open Set Recognition
Accept (poster)
Summary: This paper addresses the problem of classifying instances with unknown labels in a multi-label setting (multi-label open set recognition). A novel approach named SLAN is proposed which leverages sub-labeling information, and unknown labels are recognized by differentiating the sub-labeling information from holistic supervision. The effectiveness of the proposed method is validated on different benchmarks. Strengths: - The paper presents a detailed and well-formulated methodology, including the method for structural information discovery and the part related to optimization. - Extensive experiments on various datasets are conducted to prove the effectiveness of the proposed SLAN approach. The empirical results demonstrate superior or comparable performance to the mentioned methods. Weaknesses: - The paper introduces the MLOSR problem, which is novel, but it may overlap with existing tasks like zero-shot multi-label classification and open-vocabulary multi-label classification. - This paper has limited baselines for comparison. While the paper includes comparisons with several multi-label learning and anomaly detection approaches, it should also compare with state-of-the-art methods specifically designed for zero-shot and open-vocabulary multi-label classification. - The proposed SLAN approach need to solve multiple optimization problems, which may not scale well to large datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: - There are existing tasks with similar settings. For example, in image tasks, there are zero-shot multi-label classification, and open vocabulary multi-label classification. What's the difference between the proposed task and these tasks? Is the setting more applicable? - It would be better if the paper can include state-of-the-art methods for zero-shot and open-vocabulary multi-label classification in the experimental comparison. 
- It would be better to include a flowchart outlining the key steps in the proposed approach, which can improve the clarity and make it easier to follow. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations and societal impact are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. There are existing tasks with similar settings. For example, in image tasks, there are zero-shot multi-label classification, and open vocabulary multi-label classification. What's the difference between the proposed task and these tasks? Is the setting more applicable? - **Response 1:** Thanks for the comment. Although multi-label open set recognition (MLOSR), multi-label zero-shot classification (MLZSL) and open vocabulary multi-label classification (OVML) share similar goals, there are essential differences between them. During the training phase, in addition to the training set, MLZSL and OVML need additional prior knowledge of unseen labels, whereas MLOSR does not. Specifically, MLOSR solely trains on the training set consisting of instance-label pairs. MLZSL additionally needs to extract the semantic information from the observed label space, that is, the relevant label embeddings and the relation between seen and unseen label embeddings. In OVML, visual-related language data like image captions can serve as auxiliary supervision, which potentially has an implicit intersection with the unseen labels. During testing, due to the lack of prior knowledge, MLOSR only needs to identify novel labels and mark them as unknown, while MLZSL and OVML must classify unknown labels into specific labels. MLOSR can be considered an extreme case of MLZSL and OVML. Each of these tasks has its own applicable real-world scenarios. It is necessary to determine the specific task based on the availability of prior knowledge. --- 2. It would be better if the paper can include state-of-the-art methods for zero-shot and open-vocabulary multi-label classification in the experimental comparison. - **Response 2:** Thanks for the suggestion. As mentioned above, MLOSR is differentiated from MLZSL and OVML based on whether there is prior knowledge of unseen labels or not. 
Thus, it is inappropriate to apply approaches designed for MLOSR to MLZSL and OVML scenarios without the requisite prior knowledge, and vice versa. --- 3. It would be better to include a flowchart outlining the key step in the proposed approach, which can improve the clarity and make it easier to follow. - **Response 3:** Thanks for the suggestion. The flowchart of SLAN is summarized in Figure 1 in the Rebuttal PDF file. Given the multi-label training set, a weighted graph is constructed to characterize the manifold structure of the feature space. After that, an alternating optimization strategy is adopted to optimize the open set recognizer and the multi-label classifier simultaneously. --- 4. The paper introduces the MLOSR problem, which is novel, but it may overlap with existing tasks like zero-shot multi-label classification and open-vocabulary multi-label classification. - **Response 4:** Please kindly refer to **Response 1**. --- 5. This paper has limited baselines for comparison. While the paper includes comparisons with several multi-label learning and anomaly detection approaches, it should also compare with state-of-the-art methods specifically designed for zero-shot and open-vocabulary multi-label classification. - **Response 5:** Please kindly refer to **Response 2**. --- 6. The proposed SLAN approach need to solve multiple optimization problems, which may not scale well to large datasets. - **Response 6:** Let $q$, $m$ and $d$ denote the number of labels, the number of training instances and the dimension of the input space. Our optimization algorithm mainly contains several steps. Before the alternating optimization, we need to instantiate $\mathbf{S}$ with ADMM in $\mathcal{O}(m^3)$. Then our method needs to iteratively solve three optimization problems w.r.t. $\mathbf{Z},\mathbf{F}_k,\mathbf{W}$, which can be solved in $\mathcal{O}(qmd)$, $\mathcal{O}(qm^2)$ and $\mathcal{O}(m^2d)$ respectively. 
In summary, the overall complexity of our optimization algorithm is the sum of the operations mentioned above. It is noteworthy that the complexity of SLAN is related to $q$ and $m^3$, which may be slow when applied to data sets with a large number of labels and instances. We will leave efficiency improvement for future work. --- Rebuttal Comment 1.1: Title: Looking forward to your feedback Comment: Dear Reviewer wpso, thanks for your thoughtful comments. We believe we have addressed your concerns in our response, including clarifying the difference between the proposed task and zero-shot multi-label classification/open vocabulary multi-label classification, and providing a flowchart for the proposed approach. We would appreciate your thoughts on our response. If you have any remaining or further questions for us to address, we are keen to take the opportunity to do so before the discussion period closes. Thank you! --- Rebuttal Comment 1.2: Comment: The author has addressed most of my concerns and the score is updated. --- Reply to Comment 1.2.1: Title: Thanks Comment: Thank you again for the valuable comments. We will check the manuscript again and add the discussion in the revised version.
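To make the complexity analysis in Response 6 above concrete, here is a minimal sketch (a hypothetical helper with illustrative sizes, not part of the paper or its code) that tallies the stated asymptotic cost terms and shows how the $\mathcal{O}(m^3)$ ADMM initialization comes to dominate as the number of instances $m$ grows:

```python
# Rough illustration of the per-step costs stated in Response 6.
# Only the asymptotic terms are counted; constants and iteration counts
# are ignored, so this is not a runtime predictor.
def slan_cost_terms(q, m, d):
    """Return the dominant cost term of each optimization step.

    q: number of labels, m: number of training instances,
    d: dimension of the input space.
    """
    return {
        "instantiate_S_admm": m ** 3,  # one-time ADMM initialization of S
        "update_Z": q * m * d,         # per-iteration subproblem w.r.t. Z
        "update_F_k": q * m ** 2,      # per-iteration subproblem w.r.t. F_k
        "update_W": m ** 2 * d,        # per-iteration subproblem w.r.t. W
    }

# Illustrative sizes only (chosen for this sketch, not from the paper).
terms = slan_cost_terms(q=20, m=5000, d=300)
# For large m, the m^3 term exceeds the sum of all other terms,
# matching the scalability concern raised by the reviewer.
assert terms["instantiate_S_admm"] > sum(
    v for k, v in terms.items() if k != "instantiate_S_admm"
)
```

This also makes the authors' closing remark quantitative: the $q$-dependent terms grow only linearly in $q$, so the $m^3$ instantiation of $\mathbf{S}$ is the first target for any future efficiency improvement.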
Summary: This article introduces a new problem in multi-label open set recognition (MLOSR) and proposes a novel approach named Sub-Labeling Information Reconstruction for MLOSR (SLAN). SLAN utilizes sub-labeling information enriched by structural details in the feature space. Experimental results across various datasets demonstrate its effectiveness in addressing this new challenge of MLOSR. Strengths: 1. This article focuses on a novel problem within multi-label open set recognition (MLOSR), expanding beyond traditional multi-label learning by tackling the identification of unknown open labels. This problem is particularly relevant in practical applications due to the dynamic and open nature of real-world environments. 2. The strength of the paper lies in its clear and thorough analysis and description of the SLAN method. 3. The analysis of SLAN across various experimental metrics on different datasets is comprehensive, providing a robust demonstration of its effectiveness in the experiments. 4. The paper's process and structure are concise, making it relatively easy to understand. Weaknesses: 1. In the Parameter Sensitivity Analysis section, there is some explanation regarding the experimental parameter settings, but it primarily involves varying each parameter individually without further elaboration on their combined effects or interactions. It would be beneficial to include more extensive experimental analysis that considers the interactions between different parameter combinations. 2. The paper asserts that OC-SVM, IFOREST, and MUENLFOREST do not exhibit performance on par with SLAN. It would be advantageous to provide further elaboration on why the parameters of these comparison methods, particularly MUENLFOREST, are configured as they are. 3. Is there any insight into how SLAN performs differently compared to other methods on different datasets? Could this difference be related to dataset characteristics and distributions? 
Technical Quality: 3 Clarity: 4 Questions for Authors: 1. There is concern regarding the robustness of the method. How stable is its performance when errors or irrelevant labels appear in the dataset? Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. There is concern regarding the robustness of the method. How stable is its performance when errors or irrelevant labels appear in the dataset? - **Response 1:** Thanks for the comment. The detailed experimental results in terms of _ranking loss_ and _F-measure_ under different error rates are reported in the following table. As mitigating errors is not considered in SLAN, the performance of SLAN exhibits a slight degradation when the error rate increases. For multi-label classification, the presence of errors or irrelevant labels hinders the multi-label classifier from accurately inducing the decision boundary. Similarly, for open-set recognition, such errors or irrelevant labels impair the discrimination between sub-labeling information and labeling information with holistic supervision. For future work, identifying and mitigating these errors or irrelevant labels will be considered.

| Dataset | #label | Ranking loss (error rate 0) | Ranking loss (0.1) | Ranking loss (0.3) | F-measure (error rate 0) | F-measure (0.1) | F-measure (0.3) |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| enron | 6 | 0.169$\pm$0.011 | 0.182$\pm$0.012 | 0.208$\pm$0.016 | 0.406$\pm$0.096 | 0.381$\pm$0.085 | 0.375$\pm$0.087 |
| enron | 9 | 0.172$\pm$0.012 | 0.186$\pm$0.015 | 0.213$\pm$0.019 | 0.321$\pm$0.095 | 0.300$\pm$0.081 | 0.294$\pm$0.079 |
| enron | 12 | 0.174$\pm$0.010 | 0.188$\pm$0.013 | 0.214$\pm$0.016 | 0.281$\pm$0.098 | 0.262$\pm$0.084 | 0.256$\pm$0.081 |
| slashdot | 3 | 0.260$\pm$0.022 | 0.296$\pm$0.022 | 0.340$\pm$0.021 | 0.528$\pm$0.127 | 0.526$\pm$0.123 | 0.516$\pm$0.119 |
| slashdot | 5 | 0.268$\pm$0.019 | 0.304$\pm$0.018 | 0.348$\pm$0.020 | 0.428$\pm$0.105 | 0.427$\pm$0.104 | 0.419$\pm$0.101 |
| slashdot | 7 | 0.270$\pm$0.018 | 0.305$\pm$0.015 | 0.350$\pm$0.016 | 0.350$\pm$0.069 | 0.350$\pm$0.068 | 0.344$\pm$0.067 |
| corel5k | 7 | 0.266$\pm$0.013 | 0.310$\pm$0.013 | 0.366$\pm$0.011 | 0.653$\pm$0.015 | 0.635$\pm$0.012 | 0.568$\pm$0.013 |
| corel5k | 12 | 0.276$\pm$0.012 | 0.318$\pm$0.011 | 0.373$\pm$0.010 | 0.539$\pm$0.017 | 0.527$\pm$0.015 | 0.482$\pm$0.016 |
| corel5k | 17 | 0.283$\pm$0.012 | 0.324$\pm$0.010 | 0.376$\pm$0.009 | 0.462$\pm$0.012 | 0.454$\pm$0.013 | 0.420$\pm$0.014 |
| corel5k | 22 | 0.286$\pm$0.013 | 0.327$\pm$0.009 | 0.378$\pm$0.008 | 0.414$\pm$0.015 | 0.408$\pm$0.016 | 0.381$\pm$0.016 |

--- 2. In the Parameter Sensitivity Analysis section, there is some explanation regarding the experimental parameter settings, but it primarily involves varying each parameter individually without further elaboration on their combined effects or interactions. It would be beneficial to include more extensive experimental analysis that considers the interactions between different parameter combinations. - **Response 2:** Thanks for the suggestion. The following tables illustrate how the performance of SLAN changes with varying $\gamma$ and $\beta$ on the data set enron. SLAN achieves relatively stable performance on _ranking loss_ and is somewhat sensitive on _F-measure_, whose trend is similar to when only one parameter is changed. We will elaborate on more different parameter combinations in the revised version.

Ranking loss:

| $\gamma$ \ $\beta$ | 0.001 | 0.01 | 0.1 | 1 | 10 |
|:-:|:-:|:-:|:-:|:-:|:-:|
| 0.001 | 0.1734 | 0.1730 | 0.1698 | 0.1802 | 0.1908 |
| 0.01 | 0.1736 | 0.1724 | 0.1701 | 0.1757 | 0.1828 |
| 0.1 | 0.1707 | 0.1703 | 0.1686 | 0.1674 | 0.1684 |
| 1 | 0.1726 | 0.1726 | 0.1719 | 0.1714 | 0.1711 |
| 10 | 0.1742 | 0.1741 | 0.1741 | 0.1740 | 0.1740 |

F-measure:

| $\gamma$ \ $\beta$ | 0.001 | 0.01 | 0.1 | 1 | 10 |
|:-:|:-:|:-:|:-:|:-:|:-:|
| 0.001 | 0.1701 | 0.0618 | 0.1300 | 0.0749 | 0.0859 |
| 0.01 | 0.1808 | 0.0964 | 0.1771 | 0.0742 | 0.0820 |
| 0.1 | 0.2989 | 0.2783 | 0.2408 | 0.2094 | 0.2248 |
| 1 | 0.3033 | 0.3011 | 0.2812 | 0.2636 | 0.2472 |
| 10 | 0.3030 | 0.3023 | 0.2929 | 0.2313 | 0.2113 |

--- 3. 
The paper asserts that OC-SVM, IFOREST, and MUENLFOREST do not exhibit performance on par with SLAN. It would be advantageous to provide further elaboration on why the parameters of these comparison methods, particularly MUENLFOREST, are configured as they are. - **Response 3:** Thanks to the suggestion. For OC-SVM, IFOREST, and MUENLFOREST, the parameter configurations suggested in the respective literature [1,2] are employed. The following tables illustrate how the performance of MUENLFOREST changes with varying parameter configurations on the data set enron. The results show that the MUENLFOREST approach is not very sensitive to the settings of its parameters.

| $q=$ | 1 | 3 | 5 | 7 |
|:-:|:-:|:-:|:-:|:-:|
| F-measure | 0.3158 | 0.3137 | 0.3179 | 0.3165 |

| $e_m=$ | 7 | 8 | 9 | 10 |
|:-:|:-:|:-:|:-:|:-:|
| F-measure | 0.3178 | 0.3151 | 0.3124 | 0.3110 |

| $C_1=$ | 0.001 | 0.01 | 0.1 | 1 |
|:-:|:-:|:-:|:-:|:-:|
| F-measure | 0.3130 | 0.3135 | 0.3143 | 0.3162 |

| $C_2=$ | 0.001 | 0.01 | 0.1 | 1 |
|:-:|:-:|:-:|:-:|:-:|
| F-measure | 0.3115 | 0.3188 | 0.3154 | 0.3103 |

[1] X.-R. Yu, D.-B. Wang, M.-L. Zhang. Partial label learning with emerging new labels. Machine Learning, 2024, 113(4): 1549-1565. [2] Y. Zhu, K.-M. Ting, Z.-H. Zhou. Multi-label learning with emerging new labels. IEEE Transactions on Knowledge and Data Engineering, 2018, 30(10): 1901–1914. --- 4. Is there any insight into how SLAN performs differently compared to other methods on different datasets? Could this difference be related to dataset characteristics and distributions? - **Response 4:** Thanks to the comments. For multi-label classification and open-set recognition, SLAN achieves superior or at least comparable performance against the comparing approaches in most cases. Generally speaking, the increase in the number of labels and label density helps differentiate the sub-labeling information from holistic supervision, facilitating the learning process of the classifier and recognizer. --- Rebuttal Comment 1.1: Title: Looking forward to your feedback Comment: Dear Reviewer vnjs, thanks for your thoughtful comments.
We believe we have addressed your concerns in our response. We would appreciate your thoughts on our response. Please let us know if there are any further suggestions on how we can improve to address your comments effectively.
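For readers unfamiliar with the metrics reported in the rebuttal tables above, _ranking loss_ is the fraction of (relevant, irrelevant) label pairs that the classifier orders incorrectly, averaged over instances. A minimal sketch of the standard definition (a simplified version of what `sklearn.metrics.label_ranking_loss` computes, not the authors' implementation):

```python
# Hedged sketch: standard multi-label ranking loss, averaged over instances.
# This is the textbook definition, not the authors' code.

def ranking_loss(y_true, y_score):
    """Fraction of reversely ordered (relevant, irrelevant) label pairs.

    y_true: list of 0/1 lists, shape (n_instances, n_labels)
    y_score: list of float lists, same shape (classifier confidences)
    """
    total = 0.0
    counted = 0
    for truth, scores in zip(y_true, y_score):
        rel = [s for t, s in zip(truth, scores) if t == 1]
        irr = [s for t, s in zip(truth, scores) if t == 0]
        if not rel or not irr:
            continue  # undefined when one side is empty
        bad = sum(1 for r in rel for i in irr if r <= i)
        total += bad / (len(rel) * len(irr))
        counted += 1
    return total / counted if counted else 0.0

# A perfect ranking gives 0, a fully reversed one gives 1.
print(ranking_loss([[1, 0]], [[0.9, 0.1]]))  # 0.0
print(ranking_loss([[1, 0]], [[0.1, 0.9]]))  # 1.0
```

Lower values are better, which matches the degradation trend reported in the error-rate tables.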
Summary: The abstract discusses multi-label learning where instances can have multiple labels simultaneously. Traditional approaches assume a closed set scenario where test data labels are predefined during training. However, in real-world situations, new labels can emerge during testing, creating an open and dynamic environment. The paper explores Multi-Label Open Set Recognition (MLOSR), which tackles the challenge of recognizing instances with unknown labels in multi-label settings. The proposed approach, SLAN, utilizes sub-labeling information enriched by structural features in the data space to predict unknown labels. Experimental results across different datasets demonstrate the effectiveness of SLAN in addressing the MLOSR problem. Strengths: 1. The paper is well-written and easy to follow. It shows comprehensive analyses of the motivation of the studied problem and the proposed method. 2. The paper introduces a novel learning framework called Multi-Label Open Set Recognition (MLOSR), which provides a new direction for multi-label learning research. To solve the MLOSR problem, an approach named SLAN that utilizes sub-labeling information and holistic supervision to recognize unknown labels is proposed. The techniques of SLAN are sound and well-motivated, offering an effective solution for open set recognition in a multi-label environment. 3. The paper conducts comprehensive empirical validations on various datasets, showing that the proposed method achieves superior or at least comparable performance across multiple evaluation metrics, which enhances the credibility of the research. The paper includes additional findings that indicate its robustness in multi-label learning. Extensive sensitivity analyses of the trade-off parameters in the SLAN algorithm are conducted, guiding parameter selection in practical applications.
Weaknesses: Generalization Ability: The SLAN algorithm may have limited generalization ability when dealing with extreme multi-label datasets, restricting its application in broader scenarios. Feature Representation Limitations: Operating within the multi-label learning framework, SLAN might be constrained by less informative or discriminative feature representations. Computational Resources: While the paper mentions the computational resources for the experiments, it does not elaborate on the computational efficiency and scalability of the algorithm, which may impact its application to large-scale datasets. Theoretical Foundation: The paper does not provide theoretical results or proofs, which might reduce the depth of understanding of the algorithm's performance. Access to Code and Datasets: Although the datasets are public, the paper states that the code will be released after acceptance, which may temporarily limit the reproducibility of the results. Technical Quality: 3 Clarity: 3 Questions for Authors: How does the SLAN algorithm perform on datasets with highly imbalanced label distributions? Does the parameter sensitivity analysis mentioned in the paper consider the characteristics of different types of datasets? How adaptable and efficient is the SLAN algorithm for real-time or dynamically changing data streams in practical applications? Does the paper consider the interpretability of the algorithm, i.e., how to explain the predictive results of SLAN? For future work, are the authors planning to combine the SLAN algorithm with deep learning models to enhance the quality of feature representation and the performance of the algorithm? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. How does the SLAN algorithm perform on datasets with highly imbalanced label distributions? - **Response 1:** Thanks to the comment. The following table summarizes the level of class-imbalance on the data sets employed in the experiments, including the minimum, maximum and average imbalance ratio across the label space, which shows these data sets are highly imbalanced. Macro-averaging AUC is the most commonly used evaluation metric under class-imbalance scenarios. Table 4 in the appendix, in terms of macro-averaging AUC, shows that SLAN achieves superior or at least comparable performance against the comparing approaches.

|Dataset|min|max|avg|
|---|---|---|---|
| llog | 6.064 | 37.968 | 20.826 |
| enron | 1.009 | 43.789 | 16.148 |
| recreation | 4.107 | 47.077 | 14.308 |
| slashdot | 5.265 | 34.524 | 15.131 |
| corel5k | 3.464 | 49.000 | 29.401 |
| arts | 3.055 | 45.296 | 12.770 |
| education | 2.173 | 41.017 | 12.109 |
| rcvsubset2-2 | 3.216 | 47.780 | 26.370 |
| bibtex | 6.097 | 49.306 | 32.877 |

--- 2. Does the parameter sensitivity analysis mentioned in the paper consider the characteristics of different types of datasets? - **Response 2:** Thanks to the comment. The following table illustrates how the performance of SLAN changes with varying parameter configurations on the data set llog. The performance of SLAN on both llog and enron exhibits similar variation trends. Thus, for fair comparison, we employ the default parameter setting across all data sets.
| Parameter value | 0.001 | 0.01 | 0.1 | 1 | 10 | 100 |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| $\alpha$: _Ranking loss_ | 0.3600 | 0.3600 | 0.3601 | 0.3590 | 0.3596 | 0.3599 |
| $\alpha$: _F-measure_ | 0.5976 | 0.6106 | 0.6855 | 0.6725 | 0.6411 | 0.6823 |
| $\beta$: _Ranking loss_ | 0.3599 | 0.3598 | 0.3602 | 0.3590 | 0.3569 | 0.3568 |
| $\beta$: _F-measure_ | 0.6859 | 0.6859 | 0.6859 | 0.6725 | 0.3549 | 0.2712 |
| $\gamma$: _Ranking loss_ | 0.3489 | 0.3461 | 0.3437 | 0.3490 | 0.3590 | 0.3600 |
| $\gamma$: _F-measure_ | 0.1202 | 0.2554 | 0.1767 | 0.6443 | 0.6725 | 0.6859 |

--- 3. How adaptable and efficient is the SLAN algorithm for real-time or dynamically changing data streams in practical applications? - **Response 3:** Thanks to the comment. In order to achieve faster convergence and a good result, we adopt a warm start strategy. (1) When an unseen instance $\boldsymbol{x}_u$ appears, parameters can be updated through Algorithm 1 in the Rebuttal PDF file. (2) When the prior knowledge indicates that the newly appeared instances are associated with the same unknown label, these instances can be added to buffer $B$. Once buffer $B$ is full, a pseudo label assignment $\boldsymbol{p}$ for the unknown label $l_{q+1}$ will be induced, which denotes whether an instance is associated with such an unknown label. A new classifier for $l_{q+1}$ will be constructed, and parameters will be updated through Algorithm 2 in the Rebuttal PDF file. (3) These updated or augmented parameters can all be considered as a warm start for the subsequent joint optimization procedure, rather than a complete retraining. --- 4. Does the paper consider the interpretability of the algorithm, i.e., how to explain the predictive results of SLAN? - **Response 4:** Thanks to the comment. Intuitively, for the multi-label classifier, the learned weights $W$ transparently indicate the importance of each input variable for each class label.
Additionally, to achieve better performance of the predictive model, a kernel extension is further utilized for the general nonlinear case. For the open set recognizer, the structural information might not be maintained in the sub-label space. The enriched sub-labeling information for an instance with unknown labels is differentiated from labeling information with holistic supervision. Such a difference can serve as a criterion to distinguish whether an instance is associated with unknown class labels. In the future, we will conduct a further interpretability study for the proposed method. --- 5. For future work, are the authors planning to combine the SLAN algorithm with deep learning models to enhance the quality of feature representation and the performance of the algorithm? - **Response 5:** Thanks to the comment. In the future, it is promising to design MLOSR approaches with deep learning models. For example, label-specific features will be extracted to generate informative and discriminative feature representations. --- 6. Generalization Ability: The SLAN algorithm may have limited generalization ability when dealing with extreme multi-label datasets, restricting its application in broader scenarios. - **Response 6:** Thanks to the comment. Let $q$, $d$ and $m$ denote the number of labels, the number of training instances and the dimension of the input space. The training complexity of one iteration in the alternating optimization is $\mathcal{O}(qm^2+qmd+m^2d)$. It is noteworthy that SLAN learns multiple sub-labeling information matrices, of which the number equals $q$, which may be slow when applied to data sets with a large number of labels. This is inevitable when considering sub-labeling information. We will leave efficiency improvement for future work. --- 7. Theoretical Foundation: The paper does not provide theoretical results or proofs, which might reduce the depth of understanding of the algorithm's performance. - **Response 7:** Thanks to the suggestion.
In the future, we will perform detailed theoretical analyses for the proposed method. --- 8. Access to Code and Datasets: Although the datasets are public, the paper states that the code will be released after acceptance, which may temporarily limit the reproducibility of the results. - **Response 8:** Thanks to the comment. The code for this paper will be released and the results can be reproduced after the paper is accepted. --- Rebuttal 2: Title: Looking forward to your feedback Comment: Dear Reviewer zY3w, thanks for your thoughtful comments. Following your suggestions, we conducted additional experiments regarding parameter sensitivity analysis across different datasets. We would appreciate your thoughts on our response. Please let us know if there is anything else we can do to address your comments.
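The per-label imbalance ratio statistics reported in Response 1 above (min/max/avg across the label space) can be computed with a short sketch. Note the exact definition used by the authors is not stated here; this assumes the common `majority/minority` ratio per label:

```python
# Hedged sketch: per-label imbalance ratio statistics like those tabulated
# above. Assumes IR_j = max(#pos_j, #neg_j) / min(#pos_j, #neg_j), which is
# one common definition; the paper may use a slightly different variant.

def imbalance_stats(Y):
    """Y: list of 0/1 rows, shape (n_instances, n_labels)."""
    n = len(Y)
    ratios = []
    for j in range(len(Y[0])):
        pos = sum(row[j] for row in Y)
        neg = n - pos
        if min(pos, neg) == 0:
            continue  # skip degenerate labels with no positives or negatives
        ratios.append(max(pos, neg) / min(pos, neg))
    return min(ratios), max(ratios), sum(ratios) / len(ratios)

# Toy label matrix: each label has 1 positive vs 3 negatives.
Y = [[1, 0], [0, 0], [0, 0], [0, 1]]
print(imbalance_stats(Y))  # (3.0, 3.0, 3.0)
```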
null
null
Rebuttal 1: Rebuttal: Dear Reviewers, We greatly appreciate all of you for your thoughtful comments and valuable suggestions. These are very helpful for improving our paper. We have carefully referred to the questions and written the response. In addition to the text responses, we also report some figure results in the PDF file. We hope the responses would meet your requirements. Best Regards, Authors Pdf: /pdf/1cb66feac41ebf27779b2e2ac20af18fa6d8b0e6.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
CogCoM: Train Large Vision-Language Models Diving into Details through Chain of Manipulations
Reject
Summary: The paper presents CogCoM, a novel approach to training large Vision-Language Models (VLMs) using a mechanism called Chain of Manipulations (CoM). This mechanism enables the model to solve visual problems step-by-step with evidence, inspired by human cognitive processes like marking and zooming into images. CogCoM integrates manipulations such as grounding, zooming, and OCR into the VLM architecture, allowing it to handle various visual problems without external tools. The model is trained using a robust data generation pipeline and evaluated across multiple benchmarks, demonstrating state-of-the-art performance. Strengths: Advantages of the Paper 1. **Explainable Reasoning and Manipulation Mechanism**: CogCoM generates intermediate steps with evidence, making the reasoning process transparent and explainable, which is crucial for complex visual tasks. The model incorporates a flexible set of manipulations that can be adapted to various visual problems, improving its versatility and problem-solving capabilities. 2. **Data Generation Pipeline**: The paper introduces an efficient pipeline for generating high-quality training data, which is essential for training VLMs to perform detailed visual reasoning. 3. **Superior Performance**: CogCoM achieves superior results across multiple benchmarks, including detailed visual question answering and visual grounding, showcasing its effectiveness and robustness. These advantages highlight the paper's contributions to advancing the capabilities of VLMs in solving detailed and complex visual problems through a novel, human-inspired approach. Weaknesses: Weaknesses in Points This paper is generally good but I can still spot the following issues. 1. **Design of Figures and Tables**: The figures in the paper are not well-designed. The first and second figures are repetitive in meaning, and the colors in the first figure are too light (consider adding black outlines to the boxes).
The font size in the second figure is too small to be legible on smaller screens. Additionally, the captions for Table 2 and Table 3 are too close to the tables, violating the submission guidelines. 2. **Lack of Discussion on Related Work**: The paper lacks a discussion of existing related work. It should consider citing and comparing with at least other agentic LMMs such as LLAVA-Plus[1] to provide a comprehensive comparison and context. [1] https://arxiv.org/abs/2311.05437 Technical Quality: 3 Clarity: 3 Questions for Authors: See above Weakness part. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See above Weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, we are very grateful for the valuable time you have spent reviewing our paper and for your recognition of our work, which is of great significance to us. Concerning the issues you have mentioned in our paper, we will make the following improvements: - **Design of Figures**: Thank you very much for pointing out this issue. Following your suggestion, we have adjusted Figure 1 by adding a dark background to reduce the overall brightness of the image. Compared to Figure 1, which introduces different capabilities of the model, Figure 2 is primarily intended to depict the example mentioned in the introduction (VLMs could not answer correctly or even output hallucinations for detailed recognition problems without reasoning). We have increased the font size for Figure 2 and will add a diagram of the image decoder structure to depict content that differs from Figure 1. - **Design of the Tables**: We have adjusted the tables in our paper to increase the spaces between tables and corresponding captions. Please refer to the tables in *the supplemented PDF file* to view the effects of these adjustments. - **Discussion of the related work**: Thank you very much for your reminder. We have added the following description of this related work to our Related Works section: The authors of LLaVA-PLUS [1] have contributed efforts to train VLMs to develop the capability of invoking external tools. They constructed an instruction-tuning dataset incorporating tool use examples and trained VLMs to call external tools to solve challenging tasks. In comparison to their efforts, this work focuses on stimulating the model's inherent abilities to solve problems in an end-to-end manner through active reasoning. Though at a disadvantage in producing pixel-level masks, it offers advantages and potential for enhancing the model's inherent reasoning abilities and reducing the time complexity of reasoning.
[1] https://arxiv.org/abs/2311.05437 --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Thank you once again for your time and valuable suggestions. We sincerely hope that our efforts can address your concerns and look forward to your response.
Summary: This paper introduces the Chain of Manipulations (CoM) mechanism for data generation to enhance visual reasoning in VLMs. The authors developed a data generation pipeline, producing 70K high-quality samples, and created the CogCoM model. CogCoM achieves state-of-the-art results across nine benchmarks, demonstrating significant improvements in various visual tasks. Strengths: 1. The CoM introduces a new data generation mechanism that enables VLMs to perform step-by-step visual problem solving with supporting evidence. 2. A data generation pipeline is proposed, producing a dataset of 70K high-quality samples. 3. The trained model, CogCoM, achieved SOTA on nine benchmarks. 4. This paper is well-written and easy to understand. Weaknesses: 1. During data generation, the process relies entirely on GPT-4 for prompting and existing models (GroundingDino, PaddleOCR) for generation. As mentioned in the appendix, inaccuracies in these current visual models can affect the quality of generated data and the model's reasoning capabilities. However, the system lacks validation or filtering mechanisms to enhance data quality. 2. To highlight the specific improvements brought by CoM, it would be helpful to provide results both with and without the incorporation of CoM data. This would clarify the impact of CoM, especially since CogCoM integrates a significant amount of additional data such as MultiInstruct and LLaVAR during the instruction tuning stage, as shown in Table 1. 3. The CoM dataset includes 6K high-quality manually annotated math samples, but no test results for math problems are provided. Clarification is needed on whether the purpose of this math data is solely to enhance the model's reasoning capabilities. 4. The paper emphasizes that CogCoM is a model capable of multi-image multi-turn understanding, but no corresponding test results (qualitative or quantitative) are provided.
5. In the model section, some parameters are not specifically explained, such as the maximum turns the model can accept and the predefined threshold. 6. Typo: Line 288, CogOM -> CogCoM Technical Quality: 2 Clarity: 3 Questions for Authors: see weakness Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you for taking your valuable time to review our paper and for the insightful review. In response to your suggestions and questions, we have conducted experiments and provided explanations, as follows: - **The filtering strategy for quality control in data collection**: Thank you for your careful reading of our analysis on tool recognition errors during data generation, as stated in Appendix C.5. During the collection of our 70K CoM training data, we discard the wrongly recognized data (i.e., we refer to these data as the negative paths in our paper), as these data cannot terminate at the golden answer node during the DFS traversal. Therefore, we used this filtering strategy to ensure that only the correct data capable of reaching the golden answer (i.e., positive paths) was included in the 70K training data. We look forward to utilizing these negative paths as negative rewards in future work. - **The ablation experiment to validate the effectiveness of CoM data**: Thanks for your suggestion. We have conducted ablation experiments on training our model with and without the incorporation of the collected CoM training data to validate the impact, and the comparison results on three typical categories of benchmarks, TextVQA, MMVet, and MathVista, are shown in Table 3 of *the supplementary PDF file*. **In conclusion, our model benefits from the CoM corpus that contains informative visual reasoning evidence, achieving improvements of 6.6, 0.2 and 0.9 percentage points on the three benchmarks, respectively.** - **The experiments to validate the effectiveness of manually annotated CoM-math data**: Motivated by the effectiveness of AlphaGeometry, we annotated 6K reasoning chains for the purpose of advancing the development of VLMs on this specific task. Solving geometry math problems is highly challenging for current VLMs. We conducted experiments on MathVista, and the results are shown in Table 3 of *the supplementary PDF file*.
**In conclusion, our model, trained on the 6K annotated samples, achieved an improvement of 0.90 percentage points on MathVista.** Despite the relatively limited improvement in answer-matching metrics, we believe that annotated data involving human-like multimodal reasoning processes (such as drawing auxiliary lines) may contribute to the development of this field. - **The clarification of the multi-turn multi-image capability**: CogCoM can solve visual reasoning problems through multiple rounds of interaction with an input image (e.g., marking auxiliary lines), re-inputting the resulting image at each new round (e.g., a marked image with auxiliary lines) in an end-to-end manner. Taking geometry math problem-solving as an example (e.g., the last case in Figure 1), given the initial inputs of an image $I_0$ and a question $Q$, our model undergoes the first round of reasoning (outputting multiple reasoning steps) and then draws an auxiliary line on the image to obtain a new image $I_1$, and then re-inputs this marked new image to continue the next round of reasoning (it may draw additional lines or crop and zoom in on a specific region). Compared to most existing methods, we refer to this approach, which is similar to the human thinking process, as a multi-turn, multi-image process. - **The maximum number of the multi-image turns**: Thanks for your suggestion. We have detailed the statistics for the training and testing data in Appendix C.3, including the total number of reasoning chains, the average number of reasoning steps, and the average number of manipulation types. In accordance with your suggestion, we have also conducted statistics on the number of turns divided by multiple images.
The results are as follows: in the training data sourced from TextVQA and ST-VQA, which may involve generating new images such as through zooming, the average number of turns is **1.42**, and the maximum number of turns is **7** (we restrict the maximum number of turns to **4** during training to prevent OOM). In the test set of TextVQA, our model produced an average of **1.54** turns involving multiple images. It is worth noting that not every image requires manipulations such as zooming, and some can be answered through reasoning with evidence or direct observation. We will add these statistics to the paper content. - Thank you for pointing out the typos. We will thoroughly check the paper again to ensure there are no ambiguities or mistakes. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Thank you once again for your time and for pointing out the shortcomings in our work. We sincerely hope that our additional experiments and explanations can address your concerns, and look forward to your response. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, I sincerely apologize for disturbing you again. In response to the multiple questions you raised, we hope that our experiments and explanations are thorough enough to address your concerns. As the discussion period is nearing its end, please let us know if you have any questions or feedback. Thank you very much.
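The multi-turn, multi-image procedure described in the rebuttal above can be sketched as a simple control loop. Everything below (`model_step`, the manipulation names, the turn cap) is a hypothetical illustration, not CogCoM's actual API:

```python
# Hedged sketch of a multi-turn multi-image reasoning loop, in the spirit of
# the process described above. `model_step` and the manipulation vocabulary
# are hypothetical stand-ins, not CogCoM's real interface.

MAX_TURNS = 4  # the rebuttal reports capping training at 4 turns

def solve(model_step, image, question):
    history = []
    current = image
    for _ in range(MAX_TURNS):
        action, payload = model_step(current, question, history)
        history.append((action, payload))
        if action == "answer":
            return payload, history
        if action in ("crop_and_zoom", "draw_line"):
            current = payload  # the manipulated image feeds the next turn
    return None, history  # no answer within the turn budget

# Toy model: zoom once, then answer on the second turn.
def toy_model(img, q, hist):
    if not hist:
        return "crop_and_zoom", img + "[zoomed]"
    return "answer", "42"

answer, trace = solve(toy_model, "img", "what number is on the sign?")
print(answer)  # 42
```

The key design point mirrored here is that a manipulation produces a new image that becomes the input of the next round, so one model handles all turns end-to-end.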
Summary: Drawing inspiration from human cognition to solve visual problems through localizing, zooming, etc., this paper introduces a new framework called CogCoM, which solves visual problems by automatically combining six types of basic manipulations. When facing a visual problem, CogCoM can use reasoning to solve each step and employ basic tools to aid in the problem-solving process. To achieve this goal, CogCoM constructed a data generation pipeline that leverages GPT4 to build the training data for CogCoM. CogCoM leads to performance gains compared to its baseline CogVLM on several benchmarks. Strengths: 1. CogCoM makes gains over CogVLM on several benchmarks. 2. The pipeline that leverages GPT4 to construct manipulation pipelines for problem-solving is reasonable. Weaknesses: 1. The VQA benchmarks reported in Table 1 are not very convincing. It would be beneficial to consider more modern and challenging benchmarks such as MMBench, MathVista, and SeedBench. 2. The comparison of baseline methods seems to be based on relatively outdated approaches. It might be more informative to compare them with more recent LVLMs like LLaVA-1.5, Monkey, and ShareGPT4V. 3. It would be helpful to discuss the closely related works ViperGPT [3] and V* [4]. ViperGPT shares an idea for solving visual problems via planning tool pathways. V* shares the idea of searching and zooming in progressively. 4. The differences with some other related works should be discussed [5][6].
[1] Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models [2] ShareGPT4V: Improving Large Multi-Modal Models with Better Captions [3] ViperGPT: Visual Inference via Python Execution for Reasoning [4] V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs [5] CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding [6] DualFocus: Integrating Macro and Micro Perspectives in Multi-modal Large Language Models Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The performance improvements of CogCom in comparison to CogVLM appear to be relatively insignificant. Could this be attributed to the low success rate (around 55% in Figure 6)? Further explanations are necessary to address the situation where the planned pathway failed. I will be pleased to raise my rating if my concerns as in weaknesses and questions are resolved. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you for taking your valuable time to review our paper and for the insightful suggestions. We have added experiments (in the supplementary PDF) and supplemented the content (on this page) of our paper, as detailed below: - We have evaluated our model on the suggested new benchmarks, MMBench, SEED-Bench, and MathVista, and the experimental results are listed in Table 2 of *the supplementary PDF file*. **In conclusion, our model achieved stronger performance in comparison with concurrent and the suggested new baseline models, outperforming LLaVA-1.5 by 10.2, 4.9 and 8.1 percentage points on the three benchmarks, respectively.** - We compared our model with the suggested up-to-date baseline models, and the results are listed in Table 1 of *the supplementary PDF file*. **In conclusion, our model consistently outperforms these baseline models across the VQA and general multimodal benchmarks, demonstrating effective multimodal capabilities.** - **Discussion with closely related works**: (1) The efforts of ViperGPT share the same basic idea with our work of decomposing complex visual problems into reasoning steps. In comparison with their concentration on building a training-free framework combining external VLMs using a code LLM, we focus on training an end-to-end VLM to enhance its multimodal reasoning ability to solve complex visual problems. (2) V* is a concurrent work which also aims to solve VQA problems by progressively acquiring visual cues. Their two-part framework first utilizes a VQA LLM model to list visual objects needed for answering, followed by a dedicated search model to accurately acquire the objects.
On the other hand, our approach focuses on using one model to actively perform reasoning and to identify or mark the most useful visual information, which may offer the potential for solving more complex reasoning tasks beyond detailed identification, such as challenging geometric math problems. - **Discussion of the differences with other related works**: (1) CogVLM is a multimodal foundation model which aims to achieve reliable performance on broad multimodal tasks (e.g., VQA, Grounding, Captioning). This work is motivated by the observation that CogVLM produces hallucinating answers for detailed reasoning questions (i.e., Figure 2), and aims to study an effective methodology to enhance the multimodal reasoning capabilities of VLMs while reducing hallucinations via visual evidence. (2) DualFocus was released around the same time as ours. They also made efforts to construct a training dataset that includes intermediate cues (bounding boxes) and trained the model in two stages based on accurate localization. Compared to their work, our CoM training places more emphasis on answering questions in a single reasoning process and marking images to assist in solving complex problems. We will include the above works and discussions in the related works section. - **The insignificant improvements**: Due to the focus of this paper on enhancing the reasoning capabilities of VLMs, the improvement of our model over CogVLM on Grounding and General Multimodal benchmarks is relatively limited. However, on the evaluation sets that require logical reasoning (i.e., GQA), complex counting (i.e., TallyVQA), and detailed recognition (i.e., TextVQA, ST-VQA), our model achieves improvements of 6.5, 10.9, 1.04 and 9.0 percentage points, respectively. Additionally, we found that mistakes in reasoning paths are indeed a major factor affecting performance on the general multimodal benchmarks.
--- Rebuttal Comment 1.1: Comment: Dear Reviewer, Thank you once again for your time and valuable suggestions. We sincerely hope that our efforts address your concerns and look forward to your response. --- Rebuttal 2: Comment: Thank you for your efforts in addressing my concerns. One additional question: since mistakes in reasoning paths are indeed a major factor affecting performance on the general multimodal benchmarks, the authors may add some discussion (experiments are not required) about how to compensate for the negative effects of planned-pathway failure, as in the concurrent work DualFocus. Overall, most of my concerns are resolved. I will raise my rating to 6. --- Rebuttal Comment 2.1: Comment: Dear Reviewer, We are very pleased to learn that our efforts have addressed your concerns and that you kindly raised the rating. Regarding your additional question about how to mitigate the negative effect of incorrect reasoning paths, we have the following response: - DualFocus adopts a good strategy that compares the PPLs of generated answers to switch between the direct-answering and reasoning-before-answering modes, alleviating paths with low confidence. In comparison to DualFocus, we add explicit launching prompts that allow users to switch the mode actively, and randomness during training that encourages the model to switch the mode by itself according to the problem scenario. As the PPL-based method is general, we will implement this strategy in CogCoM and compare against it in near-future work. - Using reinforcement learning to penalize negative paths during training is another strategy to mitigate the negative effects, as we have already derived multiple negative reasoning paths in our data-collection pipeline. However, an RL-based objective introduces training instability and lower efficiency compared to the straightforward cross-entropy on positive answers, and improving the effectiveness of this solution is a promising topic. 
- Humans solve difficult questions with a period of thinking before giving an answer. As LLMs/VLMs generate outputs immediately following the question prompt, the CoT mechanism serves as an effective substitute for this thinking. Similar to human thinking, a backtracking mechanism might be a reliable way to form correct and concise reasoning paths. However, in our experiments, we found that integrating backtracking incurs heavy time complexity for relatively marginal benefits. Therefore, developing an effective backtracking strategy that enhances the correctness of reasoning paths while maintaining efficiency is a meaningful direction for mitigating the negative impact of path failures. Thank you once again for your time and valuable comments. We will add the above discussion to our paper.
Rebuttal 1: Rebuttal: We extend our gratitude to all the reviewers for the time and effort they have invested in reviewing our paper. In response to the review comments, we have added **(1) evaluation on the suggested new benchmarks**, **(2) comparison with new baseline models**, and **(3) ablation experiment controlling for CoM training data variables**. The detailed results are included in the supplementary PDF file. Pdf: /pdf/a266e7dacd148b161bef86b5a5550f63aaadf5c0.pdf
NeurIPS_2024_submissions_huggingface
2024
Solving Minimum-Cost Reach Avoid using Reinforcement Learning
Accept (poster)
Summary: The paper considers a reach-avoid problem under a cost-minimization objective, where a CMDP has to be solved under goal and avoidance constraints (the final state has to be in a certain set, and the trajectories must avoid an unsafe region). The authors derive an optimal control problem under these constraints and show how to solve it in the RL setting. The results show an improvement in terms of the optimized objective in comparison with methods that include the goal objective in the optimization cost. Strengths: The paper delivers the proposed solution well, and the state augmentation technique that the authors propose seems to be crucial to the solution. The algorithm does outperform the competitors, where the goal is included in the objective. Weaknesses: 1. State augmentation is not a novel technique in constrained RL (see references below), albeit the application here is different. 2. I am not fully convinced that this is actually an important sub-problem in RL to solve. Can the authors elaborate in more detail on why we should consider this problem? For instance, can we formulate the Safety Gym benchmarks this way? 3. Some of the design choices are not fully clear, e.g., why the authors minimize the initial state z_0 with the constraint z_t >= 0. Wouldn't it be easier to track the accumulated cost and minimize the terminal state? 4. I am not fully convinced that there are no simpler methods to solve this problem. For example, for the baselines, the authors reshape the reward by giving a modest reward for goal reaching. I'd like to see experiments with heavy penalization for not reaching the goal (for example, a scale from -10 to -10000) and different rewards for reaching the goal (not just the single value 20). * [Sootla et al 22] Sootla, Aivar, et al. "Sauté RL: Almost surely safe reinforcement learning using state augmentation." International Conference on Machine Learning. PMLR, 2022. * [Jiang et al 23] Jiang, Hao, et al. 
"Solving Richly Constrained Reinforcement Learning through State Augmentation and Reward Penalties." arXiv preprint arXiv:2301.11592 (2023). * [Jiang et al 24] Jiang, Hao, et al. "Reward Penalties on Augmented States for Solving Richly Constrained RL Effectively." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 18. 2024. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. I appreciate that it's a personal preference, but I don't see the need to formulate the optimal control problem as cost minimization. Furthermore, in safe RL the costs typically refer to constraints. 2. Line 33: "Moreover, the use of nonlinear numerical optimization may result in poor solutions that lie in suboptimal local minima [14]." How does the paper address this issue? Wouldn't any hard problem suffer from the same issue? 3. Line 38: "However, posing the reach constraint as a reward then makes it difficult to optimize for the cumulative cost at the same time." I think adding a penalty for not reaching the goal would address this issue. Recall that in RL the rewards need not be differentiable. 4. Line 45: "However, the choice of this fixed threshold becomes key: too small and the problem is not feasible, destabilizing the training process. Too large, and the resulting policy will simply ignore the cumulative cost." - I am not sure this statement is necessary. Problem statement is a bit of an art, and the threshold is chosen at the problem-design level. Sometimes it is simply given. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: limitations are discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >## The paper considers ... where a CMDP has to be solved We want to clarify that **the minimum-cost reach-avoid problem is NOT a CMDP [1]** (e.g., as used in [2,3,4]), since it is NOT of the following form: $$ \max_\pi \quad \mathbb{E}\left[ \sum_{k=0}^\infty r_k \right],\quad \textrm{s.t.} \quad \mathbb{E}\left[ \sum_{k=0}^\infty d_k \right] \leq c_\max. $$ --- >## State augmentation is not a novel technique, albeit the application is different. **Thank you for the references to state augmentation; we will include them and discuss their relationship with our work in the final version. We use state augmentation differently from the referenced works**. We emphasize that the novelty is in the framework for solving minimum-cost reach-avoid problems. We _DO NOT_ claim to be the first to propose the _idea_ of state augmentation. Moreover, while [2-4] use the augmented state to satisfy CMDP constraints by modifying the reward function while keeping the same value function, we use the augmented state to enforce the "upper-bound property" (L145) and apply it to the reachability Bellman equation (7). --- >## Why is minimum-cost reach-avoid an important problem? **Minimum-cost reach-avoid (MCRA) is an important real-world problem, especially for \*climate change\*, but existing RL methods do not \*directly\* solve it.** In addition to the "energy-efficient autonomous driving" (L20) and "spacecraft and quadrotors" (L23) mentioned in the introduction, we highlight the additional use cases of plasma fusion [5] (reach a desired current, minimize the total risk of plasma disruption) and voltage control [6] (reach a voltage level, minimize the load-shedding amount). **Examples of MCRA in RL benchmarks:** Lunar Lander (reach the goal, avoid crashing, minimize thruster use), Reacher (reach the goal, minimize control effort), PointGoal (reach the goal, avoid the unsafe region, minimize time). 
Since existing RL methods do not directly solve the minimum-cost reach-avoid problem, a surrogate problem needs to be designed manually. We show in Figure 4 that these methods achieve _suboptimal_ cumulative costs due to the use of this surrogate problem. This motivates us to construct a method that solves MCRA _directly_. --- >## Can simpler methods solve this problem with the right reward function coefficients? **Unfortunately, no choice of reward function coefficients matches the performance of our RC-PPO, even after an extensive search over the coefficients**. As suggested, we have added a penalty for not reaching the goal and performed an _extensive_ grid search over the coefficients of the reward function. **We plot the results in Fig. 2 in the response pdf (see the caption for grid-search details)**. Different choices of the coefficients trade off between reach rate and cost, forming a Pareto front. However, even the best point is suboptimal compared to our RC-PPO. --- >## Problem statement is a bit of an art. Sometimes it is simply given. **We agree that, given a minimum-cost reach-avoid problem, constructing a surrogate problem that yields a good solution to the original problem is hard.** If a (C)MDP is given, then it can be solved directly. However, if a minimum-cost reach-avoid problem is given, it makes more sense to solve the problem _directly_ instead of relying on the "art" of reward function design to convert it to a (C)MDP. Moreover, the answer above shows that sometimes _no_ choice of coefficients gives the optimal solution. --- >## Why minimize z_0 with the constraint z_t >= 0? We DO NOT impose the constraint z_t >= 0, contrary to what the question implies. The challenge of solving _constrained_ optimization problems lies in handling the interplay between the _objective_ and the _constraints_. If there were only the _objective_, then unconstrained optimization methods could be used. Similarly, if there were only _constraints_, then reachability could be used. 
Our strategy in this work belongs to the latter approach: we convert the cumulative-cost _objective_ $\sum_{t=0}^{T-1} c$ into a cumulative-cost upper-bound _constraint_ (L145) $$ \sum_{t=0}^{T-1} c(x_t, u_t) \leq z_0. $$ This gives us a problem with only three constraints, which we can solve using reachability. If all constraints are satisfied (i.e., the goal is reached on the augmented system) for some $z_0$, then the cumulative cost is at most $z_0$ AND the other constraints are satisfied, which means the original constrained problem (1) can be solved with an objective value of at most $z_0$. Consequently, the smallest $z_0$ that still satisfies all the constraints "thus corresponds to the minimum-cost policy that still satisfies the reach-avoid constraints" (L164). **Note**: As we write in Remark 1, our strategy here "can be interpreted as an epigraph reformulation of the minimum-cost reach-avoid problem (1)" (L168). This epigraph reformulation technique is standard in optimization [7, p.134] but is not that well known. >## Would it be easier to track the accumulated cost and minimize the terminal state? **This is what we already do!** The reachability problem on the augmented system (4, 5) aims to keep the cumulative cost below $z_0$ while reaching the goal at some terminal time. --- >## "nonlinear numerical optimization may result in ... suboptimal local minima [14]." How does the paper address this issue? **We do not claim to address the local-minima issue for RL**. Rather, this sentence only serves to illustrate a weakness of nonlinear trajectory optimization compared to RL, which has been well established in the literature [8,9]. --- [1] Altman 1999, "CMDP"\ [2] Sootla et al 2022, "Sauté RL..."\ [3] Jiang et al 2023, "Solving Richly..."\ [4] Jiang et al 2024, "Reward Penalties..."\ [5] Wang et al 2024, "Active Disruption Avoidance ... Tokamak ..."\ [6] Huang et al 2020, "Accelerated DeepRL ... 
Emergency Voltage Control"\ [7] Boyd 2004, "Convex Optimization"\ [8] Suh 2022, "Do Differentiable..."\ [9] Grandesso 2023, "CACTO..." --- Rebuttal 2: Comment: Thank you for the detailed response and additional experiments. I still have some concerns regarding the claims. I am very skeptical of the authors' results, since their formulations of reach-avoid problems do not solve the stated problem. This indicates a problem with the formulation or the hyperparameters. I don't think the authors made a significant effort to investigate why this occurs. However, the choices they made for the CMDP are not unreasonable; the authors detailed the experiments, and readers can see where the alternatives fail. They also followed my suggestions, which also didn't work well. Overall, I think the formulation is interesting, and the authors have convinced me that it could be better to use it directly than to try to formulate a CMDP and balance all the constraints. I am raising my score. --- Rebuttal 3: Title: Additional Clarifications Comment: Thank you for raising your score! Your suggested experiments have definitely helped to better show the fundamental limitations of solving the minimum-cost reach-avoid problem using a surrogate CMDP. We are happy to clarify any further questions you may have! --- >## I am very skeptical of the authors' results since their formulations of reach-avoid problems do not solve the stated problem. Could you clarify what you mean here? 
Theoretically, **our formulation is an \*exact\* transformation of the minimum-cost reach-avoid problem (1).** This can be shown via the following sequence of optimization problems, which all yield the **exact same solution** if feasible: $$ \begin{align} &\min_{\pi,T}\quad \sum_{k=0}^T c(x_k, \pi(x_k)) \quad \text{ s.t. } \quad g(x_T) \leq 0, \quad \max_{k = 0, \dots, T} h(x_k) \leq 0 \tag{1} \\\\ =& \min_{\pi,T}\quad \sum_{k=0}^T c(x_k, \pi(x_k)) \quad \text{ s.t. } \quad g(x_T) \leq 0, \quad I_{(\max_{k = 0, \dots, T} h(x_k) > 0)} \leq 0 \tag{B} \\\\ =& \min_{z_0, \pi, T}\quad z_0 \quad \text{ s.t. } \quad \sum_{k=0}^T c(x_k, \pi(x_k)) \leq z_0, \quad g(x_T) \leq 0,\quad I_{(\max_{k = 0, \dots, T} h(x_k) > 0)} \leq 0 \tag{C} \\\\ =& \min_{z_0, \pi, T}\quad z_0 \quad \text{ s.t. } \quad \max\left( \sum_{k=0}^T c(x_k, \pi(x_k)) - z_0,\\; g(x_T),\\; I_{(\max_{k = 0, \dots, T} h(x_k) > 0)} \right) \leq 0 \tag{D} \\\\ =& \min_{z_0, \pi, T}\quad z_0 \quad \text{ s.t. } \quad \hat{g}(\hat{x}_T) \leq 0 \tag{E} \\\\ =& \min\_{z_0} \quad z_0 \quad \text{ s.t. } \quad \min\_\pi \min_T \hat{g}(\hat{x}_T) \leq 0 \tag{F} \\\\ =& \min\_{z_0} \quad z_0 \quad \text{ s.t. } \quad \min\_\pi \tilde{V}\_{\hat{g}}^\pi (\hat{x}_0) \leq 0 \tag{G} \end{align} $$ This shows that the minimum-cost reach-avoid problem (1) is **equivalent** to the formulation we solve in this work (G), which is (8) in the paper. RC-PPO solves (G) and thus also solves (1), because they are equivalent. This new derivation will be included in the final version to improve clarity. --- Rebuttal Comment 3.1: Comment: I mean that the CMDP formulation, which is used as a baseline for comparison, should solve the problem optimally, but the computational results imply that it doesn't. I cannot see specifically why, and I am not sure that an explanation was provided. --- Rebuttal 4: Comment: > ## Why does the CMDP formulation give suboptimal results? This is a very good question. 
**The optimal solution of the CMDP formulation is guaranteed to be suboptimal for the original minimum-cost reach-avoid problem for a wide range of problems**. The main culprit at play here is the fact that the cost threshold is a _fixed constant_. We illustrate this with the following example. ## Problem Setup Consider the following minimum-cost reach-avoid problem, where we use $C$ to denote the cost. - **Initial state distribution**: A ($p=0.5$), B ($p=0.5$) - **Goal states**: $G_1$, $G_2$, $G_3$ - **(Non-goal) Absorbing state**: $I$ - **Policy parameters**: $p_A$, $p_B \in [0, 1]$

```
        ┌───┐
  pA  ┌─┤ A ├─┐  1-pA
      │ └───┘ │
C=10  ▼       ▼  C=20
    ┌──┐     ┌──┐
    │G1│     │G2│
    └──┘     └──┘

        ┌───┐
  pB  ┌─┤ B ├─┐  1-pB
      │ └───┘ │
C=30  ▼       ▼  C=0
    ┌──┐     ┌─┐
    │G3│     │I│
    └──┘     └─┘
```

The optimal policy for this minimum-cost reach-avoid problem is to take the _left_ action from both $A$ and $B$, i.e., $p_A = p_B = 1$, which gives an expected cost of $$ 0.5 \cdot 10 + 0.5 \cdot 30 = 20 \tag{$\dagger$} $$ ## CMDP Solution To convert this to a CMDP, we introduce a reward that incentivizes reaching the goal, and add a cost constraint with threshold $\mathcal{X}\_{\text{thresh}}$: $$ 0.5(10p_A+20 (1-p_A)) + 0.5(30 p_B) \leq \mathcal{X}_{\text{thresh}} $$ This gives the following CMDP (we use $R$ to denote reward):

```
        ┌───┐
  pA  ┌─┤ A ├─┐  1-pA
      │ └───┘ │
C=10  │       │  C=20
R=10  ▼       ▼  R=20
    ┌──┐     ┌──┐
    │G1│     │G2│
    └──┘     └──┘

        ┌───┐
  pB  ┌─┤ B ├─┐  1-pB
      │ └───┘ │
C=30  │       │  C=0
R=20  ▼       ▼  R=0
    ┌──┐     ┌─┐
    │G3│     │I│
    └──┘     └─┘
```

The optimal solution to this CMDP can be computed to be $$ p_A = 0, \quad p_B = \frac{\mathcal{X}_{\text{thresh}} - 10}{15}. \tag{$\star$} $$ However, **the \*true\* optimal solution of $p_A = p_B = 1$ is NOT an optimal solution to the CMDP ($\star$)**. To see this, taking $\mathcal{X}\_{\text{thresh}} = 20$ as in ($\dagger$), the true optimal solution $p_A = p_B = 1$ gives a reward of $R=15$, but the CMDP solution $p_A = 0, p_B = \frac{20 - 10}{15} = \frac{2}{3}$ in ($\star$) gives $R \approx 16.67 > 15$. 
Moreover, any uniform scaling of the rewards or costs does not change the solution. ## Fixes We can "fix" this problem if we choose the rewards to be high only along the optimal solution $p_A = p_B = 1$, but this requires knowledge of the optimal solution beforehand and is not feasible for all problems. Another way to "fix" this problem is to consider a "per-state" cost threshold, e.g., $$ 10 p_A + 20(1-p_A) \leq \mathcal{X}_A, \qquad 30 p_B \leq \mathcal{X}_B $$ Choosing exactly the cost of the optimal policy, i.e., $\mathcal{X}_A = 10$ and $\mathcal{X}_B \geq 30$, also recovers the optimal solution of $p_A = p_B = 1$. This, however, requires knowing the smallest cost to reach the goal _for every state_, which is difficult to do beforehand and generally infeasible. On the other hand, **RC-PPO does exactly this in the second phase when optimizing for $z_0$**. We can thus interpret RC-PPO as **automatically solving for the best cost threshold to use as a constraint for every initial state**. --- We will include this explanation in the final version to improve clarity.
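The fixed-threshold suboptimality argued in the toy example above can also be checked numerically. The following is a minimal sketch (our own illustration, not code from the paper or rebuttal; the function names and grid resolution are arbitrary): it grid-searches the two Bernoulli policy parameters of the surrogate CMDP with threshold 20.

```python
import itertools

def expected_reward(p_a, p_b):
    # Branch A: R=10 w.p. pA (left), R=20 w.p. 1-pA (right);
    # branch B: R=20 w.p. pB (left), R=0 otherwise.
    return 0.5 * (10 * p_a + 20 * (1 - p_a)) + 0.5 * (20 * p_b)

def expected_cost(p_a, p_b):
    # Branch A: C=10 w.p. pA, C=20 w.p. 1-pA; branch B: C=30 w.p. pB, C=0 otherwise.
    return 0.5 * (10 * p_a + 20 * (1 - p_a)) + 0.5 * (30 * p_b)

# Grid-search the reward-maximizing feasible policy under the fixed threshold 20.
grid = [i / 300 for i in range(301)]
best_pa, best_pb = max(
    (p for p in itertools.product(grid, grid) if expected_cost(*p) <= 20),
    key=lambda p: expected_reward(*p),
)
# The CMDP optimum lands at (pA, pB) = (0, 2/3), whose reward exceeds that of
# the true minimum-cost optimum (1, 1), so reward maximization under a fixed
# threshold steers away from the true optimum.
```

Running this recovers the closed-form solution ($\star$): the feasible reward maximizer is $(0, 2/3)$, not the true optimum $(1, 1)$.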
Summary: The paper introduces RC-PPO, an RL algorithm designed to solve the minimum-cost reach-avoid problem by reformulating the optimization problem on an augmented system. The paper addresses the limitations of current RL methods, which mostly solve surrogate problems fitted to the problem setting. Furthermore, a comprehensive analysis, including theoretical foundations, algorithmic details, and empirical validation, is presented. An experimental comparison with existing methods is also provided. Strengths: The paper (including the appendix) provides a solid theoretical foundation, including proofs and detailed explanations of the key concepts and assumptions. The paper is overall well written, and definitions and theorems are formulated clearly. The main thread of the paper can be followed. The presented experimental results are well illustrated, and the claims based on the results are generally comprehensible. In addition to solid theoretical arguments, the appendix also provides extensive implementation details and further experimental results. The presented idea seems to offer an elegant solution to minimum-cost reach-avoid problems. Overall, a nice and insightful read. Weaknesses: (minor) The presented experimental results are solid and illustrative but could be extended. The extra title of Figure 3 is slightly irritating. Figure 3 and Figure 4 use different axis-labeling logic. Appendix G.2 is empty (a formatting problem?). There are typos, missing spaces, missing punctuation, repeated words and repeated phrases in the paper, e.g., L99, L186, L187, L193, L200, L206, L210+L225, L215, L232, L570. Technical Quality: 3 Clarity: 3 Questions for Authors: In Section 5, the paper claims that RC-PPO remains competitive, which is based on Figure 6. Can you either elaborate on how this claim can be made from Figure 6, or maybe it should be Figure 4? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are briefly discussed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > ### In Section 5, the paper claims that RC-PPO remains competitive which is based on Figure 6. Can you either elaborate how this claim can be made from Figure 6, or maybe it should be Figure 4? **You are correct, this should be Figure 4**. > ### The presented experimental results are solid and illustrative, but could be extended. **Thank you for the suggestion!** We have included additional comparisons to SAC (Figure 1 in the pdf), though the conclusions in the paper remain the same. We have also performed an extensive grid search over different reward coefficients for the baseline PPO method and plotted the Pareto front (Figure 2) across the reach rate and cost. Notably, RC-PPO vastly outperforms the entire Pareto front, demonstrating that methods that solve the surrogate problem yield suboptimal policies due to not solving for the minimum-cost reach-avoid problem _directly_. If you have any further suggestions on how the presented experimental results could be extended, we would _love_ to hear your thoughts! --- > #### The extra title of Figure 3 is slightly irritating. Figure 3 and Figure 4 use different axis labeling logic. Appendix G.2 is empty (- a formatting problem?). There are typos, missing spaces, missing punctuation, repeated words and repeated phrases in the paper, e.g., L99, L186, L187, L193, L200, L206, L210+L225, L215, L232, L570. **Thank you for the meticulous reading of the manuscript!** Unfortunately, NeurIPS does not allow for uploading a new version of the manuscript, but we will definitely include the changes in the final version. --- Rebuttal Comment 1.1: Comment: I have read the individual rebuttals as well as the shared rebuttal and thank the authors for taking the time to answer. I will maintain my current rating.
Summary: This paper proposes Reach-Constrained Proximal Policy Optimization (RC-PPO), which targets the minimum-cost reach-avoid problem. The authors first convert the reach-avoid problem into a reach problem on an augmented system and use the corresponding reach value function to compute the optimal policy. Next, the authors use a novel two-phase PPO-based RL framework to learn this value function and the corresponding optimal policy. Strengths: 1. The studied problem is interesting. 2. This paper is easy to follow. Weaknesses: 1. What is the motivation for using two-phase PPO to solve this problem? Providing performance comparisons with other RL algorithms, such as TD3 and SAC, would significantly strengthen the rationale behind the proposed approach. 2. To verify the statement in the abstract, "which leads to suboptimal policies that do not directly minimize the cumulative cost," the authors should compare the performance of RC-PPO with other multi-objective optimization algorithms. 3. Please reconsider whether it is reasonable to reformulate the minimum-cost reach-avoid problem by constructing an augmented system, as the limitation that "two policies that are both unable to reach the goal can have the same value even if one is unsafe" is undesirable. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. What is the motivation for using two-phase PPO to solve this problem? Providing performance comparisons with other RL algorithms, such as TD3 and SAC, would significantly strengthen the rationale behind the proposed approach. 2. To verify the statement in the abstract, "which leads to suboptimal policies that do not directly minimize the cumulative cost," the authors should compare the performance of RC-PPO with other multi-objective optimization algorithms. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: 3. 
Please reconsider whether it is reasonable to reformulate the minimum-cost reach-avoid problem by constructing an augmented system, as the limitation that "two policies that are both unable to reach the goal can have the same value even if one is unsafe" is undesirable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > ### What is the motivation for using two-phase PPO to solve this problem? **The minimum-cost reach-avoid problem (1) cannot be *exactly* framed in the problem structures that existing methods are able to solve.** Our two-phase method _directly_ solves the minimum-cost reach-avoid problem (1). In comparison, alternative RL methods either: 1. _Only_ solve the reach-avoid problem without consideration of the cumulative cost. 2. Solve _unconstrained_ problems (e.g., PPO, SAC, TD3). 3. Solve problems with CMDP constraints that constrain the _expectation over initial states_ of the sum of some cost function (e.g., CPPO, RESPO). Since the minimum-cost reach-avoid problem (1) cannot be _exactly_ framed as any of the above problem types, this "prevents the straightforward application of existing RL methods to solve (1)" (L110). We introduce a two-phase method to extend reachability analysis to "additionally enable the minimization of the cumulative cost" (L126) on top of solving the traditional reach-avoid problem, by constructing a new constraint called the "upper-bound property" (L145), which enforces that "$z_0$ is an upper-bound on the total cost-to-come" (L144). 1. The first phase uses PPO to learn a _stochastic_ policy $\pi$ and the corresponding value function $\tilde{V}_{\hat{g}}^\pi$ for different states $x$ and upper bounds $z_0$. 2. However, the reachability analysis relies on a deterministic policy. Hence, in the second phase, we take a deterministic version of the learned stochastic policy $\pi$ and fine-tune the learned value function $\tilde{V}^{\pi}_{\hat{g}}$ on this deterministic policy. 3. Using the fine-tuned value function $\tilde{V}_{\hat{g}}^{\pi}$, we can then find the smallest $z_0$ that satisfies the constraint (8b), which then "corresponds to the minimum-cost policy that satisfies the reach-avoid constraints" (L164). 
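The budget-tracking idea behind the augmented system in the steps above can be made concrete with a small sketch. This is our own illustration under assumptions, not the paper's implementation: `rollout_augmented` and the 1-D toy system (`dynamics`, `policy`, `cost`, `goal`) are hypothetical names. The extra state $z$ starts at a candidate budget $z_0$, is decremented by the running cost, and the augmented reach condition requires reaching the goal with a non-negative remaining budget.

```python
def rollout_augmented(dynamics, policy, cost, goal, x0, z0, horizon):
    """Roll out the augmented system (x, z): z is the remaining cost budget,
    so z >= 0 certifies that the cumulative cost so far is at most z0.
    Returns True iff the goal is reached while the budget is non-negative."""
    x, z = x0, z0
    for _ in range(horizon):
        if goal(x) and z >= 0:   # augmented reach condition
            return True
        u = policy(x, z)
        z -= cost(x, u)          # budget update: z_{t+1} = z_t - c(x_t, u_t)
        x = dynamics(x, u)
    return goal(x) and z >= 0

# Hypothetical 1-D toy system: walk toward the origin with unit step cost.
dynamics = lambda x, u: x + u
policy = lambda x, z: -1 if x > 0 else (1 if x < 0 else 0)
cost = lambda x, u: abs(u)
goal = lambda x: x == 0

reached_10 = rollout_augmented(dynamics, policy, cost, goal, x0=5, z0=10, horizon=20)
reached_3 = rollout_augmented(dynamics, policy, cost, goal, x0=5, z0=3, horizon=20)
# Reaching from x0=5 costs 5 in total, so the budget z0=10 suffices while z0=3
# does not; the smallest feasible z0 here is 5.
```

In this picture, phase one learns a value function jointly over $(x, z)$, and phase two searches for the smallest $z_0$ whose value certifies reachability; the toy rollout just makes the budget bookkeeping explicit.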
--- > ### Providing performance comparisons with other RL algorithms, such as TD3 and SAC, will significantly strengthen the rationale behind proposing this approach **Thank you for the suggestion!** Although we believe our approach is already well motivated empirically by the lower cumulative cost of our proposed method (Figure 4), **we have performed additional comparisons to SAC and to PPO with more extensive reward shaping in the attached PDF**. The additional experiments lead to the same conclusion as in the main paper: because RC-PPO solves the original problem (1), it can achieve "significantly lower cumulative costs" (L265) while remaining "competitive against the best baseline algorithms in reach rate" (L264). --- > ### To verify the statement that solving the surrogate problem "leads to suboptimal policies that do not directly minimize the cumulative cost", the authors should compare the performance of RC-PPO with other multi-objective optimization algorithms As described in Section 5 (L231), we **already** compare RC-PPO against other methods that solve the surrogate CMDP problem (16) and find that "RC-PPO remains competitive against the best baseline algorithms in reach rate while achieving significantly lower cumulative costs" (L264, Figure 4). The reason the baseline methods achieve a higher cumulative cost and are more suboptimal is _precisely_ that they only solve a surrogate (16) instead of the true constrained optimization problem (1). Although we have compared against CPPO and RESPO, which are _single-policy_ multi-objective optimization algorithms [1,2], there also exist _multi-policy_ multi-objective algorithms that aim to recover an approximation of the entire Pareto front [1,2]. We wish to emphasize that the minimum-cost reach-avoid problem does _NOT_ require solving for the entire Pareto front, as there is only a _single_ cumulative cost function, albeit with multiple constraints. 
Even if the Pareto front were given, the optimal solution to (1) would still need to be found on the surface of the Pareto front, which is not a simple task. --- > ### The limitation that "two policies that are both unable to reach the goal can have the same value even if one is unsafe" is undesirable. **While this is undesirable, it is an artifact of our problem formulation (1)**, where we pose a constrained optimization problem. Since the reach and avoid components are both constraints and hence "equal", any candidate that does not satisfy both constraints is _infeasible_, regardless of whether it is safe. We are glad that you agree this is an important future direction to investigate, which we have mentioned in the limitations section (L293). Since this is not an issue from the perspective of our current problem formulation (1), we believe it to be out of the scope of the current work, and "leave resolving these challenges as future work" (L297). --- [1] Policy Gradient Approaches for Multi-Objective Sequential Decision Making\ [2] Empirical evaluation methods for multiobjective reinforcement learning algorithms --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and additional experiments. The problem studied by the authors can be transformed into a multi-objective optimization problem and solved using multi-objective algorithms. Although the authors have mathematically derived a problem that can be addressed using reinforcement learning and designed a reinforcement learning algorithm based on PPO, there are limitations in the mathematical tools used. Could you provide insights into the robustness and adaptability of the designed algorithm? Specifically, how does the algorithm perform when there are changes in the environment or when the weights assigned to the reach-avoid and minimum-cost objectives vary? I am raising my score. --- Reply to Comment 1.1.1: Comment: Thank you for raising your score! Below, 1. 
We provide **new experimental results** on robustness: RC-PPO degrades gracefully as noise is introduced but _remains the lowest-cost method among the baseline methods with high reach rates_. 2. We clarify the objective weights: RC-PPO _does not require weights by construction_, unlike the baseline methods we have compared against. **Please let us know if we have addressed all of your concerns!** We are happy to clarify any further questions you may have. All additional experiments and clarifications will be included in the final version. --- > ## How does the algorithm perform when there are changes in the environment? Good question. Although this is not the focus of this work, **we have performed additional experiments to see what happens when the environment changes**. Specifically, we add uniform noise to the output of the learned policy and observe what happens in the Pendulum environment. ### 1. Reach Rates We first compare the reach rates of the different methods. In this environment, we see that the presence of noise does not affect the reach rate much.

| Name |Reach Rate|&#124; + Small Noise|&#124; + Large Noise|
|----------|---------:|------------:|------------:|
|RC-PPO | 1.00| 1.00| 1.00|
|RESPO | 1.00| 1.00| 1.00|
|PPO $\beta_L$| 1.00| 1.00| 1.00|
|PPO $\beta_H$| 0.31| 0.38| 0.34|
|SAC $\beta_L$| 1.00| 1.00| 1.00|
|SAC $\beta_H$| 0.21| 0.37| 0.20|
|CPPO $\mathcal{X}_L$ | 0.67| 0.65| 0.65|
|CPPO $\mathcal{X}_M$ | 1.00| 1.00| 1.00|
|CPPO $\mathcal{X}_H$ | 1.00| 1.00| 0.99|
|CRL | 1.00| 1.00| 1.00|

### 2. Cumulative Cost Next, we look at how the cumulative cost changes with noise by _comparing methods with a near-100% reach rate_. Unsurprisingly, larger amounts of noise reduce the performance of almost all policies. 
Even with the added noise, **RC-PPO incurs the lowest cumulative cost of all methods.** | Name |Additional Cumulative Cost|&#124; + Small Noise|&#124; + Large Noise| |----------|-------------------------:|------------:|------------:| |RC-PPO | 35.3| 41.4| 132.9| |RESPO | 92.0| 93.6| 179.2| |PPO $\beta_L$ | 97.7| 98.6| 150.2| |SAC $\beta_L$ | 156.3| 157.6| 270.5| |CPPO $\mathcal{X}_M$ | 223.2| 220.5| 209.0| |CPPO $\mathcal{X}_H$ | 212.7| 299.8| 298.4| |CRL | 228.3| 229.1| 261.1| These additional experiments show that the **performance of RC-PPO degrades gracefully as more noise is added**, in line with existing RL methods. --- > ## How does the algorithm perform when the weights assigned to reach-avoid and minimum-cost objectives vary? **Our algorithm does NOT have any weight hyperparameters for the reach-avoid and the minimum-cost parts.** The motivation for our work is to solve the minimum-cost reach-avoid problem without the need to choose weights between the two. This is because the reach-avoid is a _constraint_ (always satisfy this), while the minimum-cost is an _objective_ (only minimize this if the reach-avoid constraint is satisfied). In comparison, the baseline methods DO need a choice of weights, since "all objectives are combined with a weighted sum" (L5). **We have compared with baseline methods that require such a choice of weights in Section 5 and in the new PDF**, where RC-PPO outperforms all baselines for any choice of weights. --- Rebuttal 2: Comment: Once again, thank you for raising your score! To answer your concerns about whether the minimum-cost reach-avoid problem can be solved using multi-objective methods, we have built a toy example to show that **our problem may not be solvable using multi-objective methods, depending on the choice of reward function**. Consequently, it is very difficult to construct surrogate reward functions that guarantee optimality for the original minimum-cost reach-avoid problem in general. 
> ## Can the minimum-cost reach-avoid problem always be optimally solved as a surrogate multi-objective problem given proper weights between the different objectives? This is a very good question. **The optimal solution of the surrogate multi-objective problem can be suboptimal for the original minimum-cost reach-avoid problem given \*any\* choice of weights**. We illustrate this with the following example. ## Problem Setup Consider the following minimum-cost reach-avoid problem, where we use $C$ to denote the cost. - **Initial state distribution**: A ($p=0.5$), B ($p=0.5$) - **Goal states**: $G_1$, $G_2$, $G_3$ - **(Non-goal) Absorbing state**: $I$ - **Policy parameters**: $p_A$, $p_B \in [0, 1]$ ``` ┌───┐ pA ┌─┤ A ├─┐ 1-pA │ └───┘ │ C=10 ▼ ▼ C=20 ┌──┐ ┌──┐ │G1│ │G2│ └──┘ └──┘ ┌───┐ pB ┌─┤ B ├─┐ 1-pB │ └───┘ │ C=30 ▼ ▼ C=0 ┌──┐ ┌─┐ │G3│ │I│ └──┘ └─┘ ``` The optimal policy for this minimum-cost reach-avoid problem is to take the _left_ action from both $A$ and $B$, i.e., $p_A = p_B = 1$, which gives an expected cost of $$ 0.5 \cdot 10 + 0.5 \cdot 30 = 20 $$ ## Multi-objective Problem and Solution To convert this into a multi-objective problem, we introduce a reward that incentivizes reaching the goal as follows (we use $R$ to denote reward): ``` ┌───┐ pA ┌─┤ A ├─┐ 1-pA │ └───┘ │ C=10 │ │ C=20 R=10 ▼ ▼ R=20 ┌──┐ ┌──┐ │G1│ │G2│ └──┘ └──┘ ┌───┐ pB ┌─┤ B ├─┐ 1-pB │ └───┘ │ C=30 │ │ C=0 R=20 ▼ ▼ R=0 ┌──┐ ┌─┐ │G3│ │I│ └──┘ └─┘ ``` This results in the following multi-objective optimization problem: $$ \min_{p_A, p_B \in [0, 1]} \quad (-R, C) $$ To solve this multi-objective optimization problem, we employ scalarization and introduce a weight $w \geq 0$, giving $$ \min_{p_A, p_B \in [0, 1]} \quad -R + w C \tag{$\dagger$} $$ Solving the scalarized problem ($\dagger$) gives us the following solution as a function of $w$: $$ p_A = \mathbb{1}\_{(w \ge 1)}, \quad p_B = \mathbb{1}\_{(w \le \frac{2}{3})} $$ Notice that **the \*true\* optimal solution of $p_A = p_B = 1$ is NOT an 
optimal solution to ($\dagger$) under any $w$**. Hence, **the optimal solution of the surrogate multi-objective problem can be suboptimal for the original minimum-cost reach-avoid problem under any weight coefficients**. Of course, this is just one choice of reward function where the optimal solution of the minimum-cost reach-avoid problem cannot be recovered. Given knowledge of the optimal policy, we can construct the reward such that the multi-objective optimization problem ($\dagger$) does include the optimal policy as a solution. However, this is impossible to do if we do not have prior knowledge of the optimal policy, as is typically the case. In contrast, RC-PPO solves the minimum-cost reach-avoid problem directly. Hence, assuming the optimization is done exactly, the true solution will be found without the need for any reward design. --- We will include the above explanation in the final version to improve clarity.
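The toy example above can be checked numerically. The following is a small illustrative Python sketch (not part of the paper's code; all names are hypothetical) that, for each weight $w$, picks the scalarization-optimal action at $A$ and $B$, and verifies that the true optimum $p_A = p_B = 1$ is never recovered:

```python
# Sketch of the toy MDP above.
# From A: left -> G1 (C=10, R=10), right -> G2 (C=20, R=20).
# From B: left -> G3 (C=30, R=20), right -> I  (C=0,  R=0).
# Scalarized objective (dagger): minimize -R + w*C at each initial state.

def scalarized_choice(w):
    """Return the (p_A, p_B) minimizing -R + w*C for a given weight w."""
    a_left, a_right = (10, 10), (20, 20)  # (R, C) for actions at A
    b_left, b_right = (20, 30), (0, 0)    # (R, C) for actions at B
    obj = lambda rc: -rc[0] + w * rc[1]
    p_a = 1 if obj(a_left) <= obj(a_right) else 0  # ties resolved to left
    p_b = 1 if obj(b_left) <= obj(b_right) else 0
    return p_a, p_b

# Sweep a fine grid of weights: the true optimum (1, 1) never appears.
solutions = {scalarized_choice(w / 100.0) for w in range(0, 500)}
assert (1, 1) not in solutions
print(sorted(solutions))  # -> [(0, 0), (0, 1), (1, 0)]
```

The sweep reproduces the closed-form solution stated above: $p_A = 1$ only when $w \ge 1$, while $p_B = 1$ only when $w \le 2/3$, so both never hold simultaneously.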
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments. We are excited that the reviewers identified that we provide a _novel_ ($\color{#E24A33}{\textsf{M8co}}$) and _elegant_ ($\color{#348ABD}{\textsf{zsEG}}$) solution to the minimum-cost reach-avoid problem that _improves upon the optimized objective_ ($\color{#988ED5}{\textsf{186A}}$) in comparison to existing methods, _addressing the limitations of current RL methods_ ($\color{#348ABD}{\textsf{zsEG}}$). Reviewers found our paper _well-written_ ($\color{#348ABD}{\textsf{zsEG}}$, $\color{#988ED5}{\textsf{186A}}$) and _easy to follow_ ($\color{#E24A33}{\textsf{M8co}}$, $\color{#348ABD}{\textsf{zsEG}}$). We believe that RC-PPO takes a significant step towards new RL algorithms that can solve minimum-cost reach-avoid problems by construction without the need for complex reward design. --- # New Experiments As _all_ reviewers have recognized our technical novelty, the primary criticism comes from an insufficient comparison to alternative RL methods ($\color{#E24A33}{\textsf{M8co}}$, $\color{#988ED5}{\textsf{186A}}$) and the necessity of RC-PPO for solving minimum-cost reach-avoid problems ($\color{#988ED5}{\textsf{186A}}$). In the **attached PDF (below)**, we present 1. Additional comparisons with SAC on the six benchmark tasks in Fig 1 (as suggested by $\color{#E24A33}{\textsf{M8co}}$) 2. A new comparison against an _extensive_ grid search over different reward coefficients in Fig 2 (as suggested by $\color{#988ED5}{\textsf{186A}}$). In particular, the grid search over reward coefficients in Fig 2 forms a Pareto frontier, where different reward coefficients trade off reach rate against cumulative cost. **RC-PPO outperforms the _entire_ Pareto frontier, with _no single point on the Pareto frontier coming close to simultaneously achieving the high reach rates and low costs that RC-PPO achieves_**. 
These results strengthen our argument that existing methods will lead to suboptimal policies since they do not _directly_ solve the minimum-cost reach-avoid problem. In contrast, RC-PPO is unique in that it solves the minimum-cost reach-avoid problem _directly_ and hence achieves lower cumulative costs. We have tried our best to resolve all raised questions in the individual responses below. If you have any additional questions/comments/concerns, please let us know. We appreciate the reviewers' time and valuable feedback. --- **Please see the additional figures in the PDF below:** Pdf: /pdf/265fd397813f035d3a4b0b45a4fa8366e2a9a38c.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Efficient multi-prompt evaluation of LLMs
Accept (poster)
Summary: The authors introduce a novel method called PromptEval which permits efficient multi-prompt evaluation of LLMs across different prompt templates with a limited number of evaluations. Strengths: - Theoretical guarantees that PromptEval has desirable statistical properties such as consistency in estimating performance distribution and its quantiles. - Large-scale study of prompt sensitivity of 15 popular open-source LLMs on MMLU. Weaknesses: N/A Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you for your comments.
Summary: This paper introduces PromptEval, a novel method for efficiently evaluating the performance of large language models (LLMs) across multiple prompt templates. The authors propose a statistical framework based on Item Response Theory (IRT) for estimating LLM performance distribution across many prompts using limited evaluations, together with theoretical guarantees on the consistency of the proposed estimators. Strengths: 1. The paper addresses an important and timely problem in LLM evaluation, recognizing the limitations of single-prompt evaluations and proposing a novel solution. 2. The theoretical foundation is solid, with clear assumptions and complete proofs provided in the appendix. Weaknesses: 1. While the authors compare different variations of their method, they don't explore alternative modeling approaches beyond IRT. It would be interesting to see comparisons with other potential frameworks. 2. While the method saves evaluations, the paper doesn't discuss the computational requirements of the estimation process itself, which could be relevant for very large prompt sets. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How sensitive is PromptEval to the choice of initial prompt set? How might this impact its use in practice? 2. How does the computational complexity of the estimation process scale with the number of prompts and examples? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have provided a comprehensive discussion of limitations in the appendix, which is commendable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you for your dedication to our paper. We addressed the issues you raised below. Please let us know if you have any other questions. - **Use of IRT and alternatives:** We use (a generalization of) IRT because it is the model most suited to our data. Although there are other psychometric models (e.g. classical test theory based on Gaussian factor analysis), they are not as suitable as IRT. Is there a specific alternative model that you wish us to comment on? - **Computational cost of estimation for the experiments in the paper:** The estimation process is computationally cheap. For example, in our experiments, which reflect real use cases well, we fitted logistic regression models on datasets with fewer than 2k samples and a couple of hundred (or a few thousand) columns, which takes a few seconds on a laptop. We will make this observation clear in the paper. - **More details on the computational complexity with growing dataset dimensions:** Consider the case of our experiment in which prompts are represented by embeddings of fixed size and examples are represented by one-hot encodings. Because the dimension of the embeddings does not depend on the number of prompt variations, the number of samples and variables used to fit our model does not vary with the number of prompt variations. Hence, *computational costs are constant with respect to the number of prompt templates*. On the other hand, the number of variables (and consequently samples, to make estimation possible) should increase linearly with the number of examples, which are usually hundreds or a few thousand. Thus, this should not be a problem in most practical cases. We will add a comment about this point in the paper. Thank you for bringing this up! - **Sensitivity of PromptEval:** Our method aims to estimate performance distribution across a **given prompt set**, which can be obtained using methods from [1,2], for example. 
We anticipate the method to work well with any initial prompt set in the sense of accurately estimating performance distribution across the given prompt set, as supported by experiments considering different ways to generate the initial prompts. We note that if the initial prompt set is too limited or biased in some sense, the resulting estimate of performance distribution might not accurately reflect sensitivity to **all possible prompts**. We make this point clearer in our "Limitations" section. Thank you! **References** [1] Moran Mizrahi, Guy Kaplan, Dan Malkin, Rotem Dror, Dafna Shahaf, and Gabriel Stanovsky. State of what art? a call for multi-prompt LLM evaluation. arXiv preprint arXiv:2401.00595, 2023. [2] Melanie Sclar, Yejin Choi, Yulia Tsvetkov, and Alane Suhr. Quantifying language models’ sensitivity to spurious features in prompt design or: How i learned to start worrying about prompt formatting. arXiv preprint arXiv:2310.11324, 2023. --- Rebuttal Comment 1.1: Title: Reply by Reviewer Comment: Thank you for your detailed rebuttal. Your response has addressed my major concerns. I will also engage in further discussion with the other reviewers to ensure we consider your clarifications.
Summary: This paper introduces PromptEval, an efficient multi-prompt evaluation method for LLMs, showing its statistical consistency and effectiveness across benchmarks (MMLU, BBH, LMentry) and studying prompt sensitivity in 15 open-source LLMs. Strengths: - The authors conducted a comprehensive theoretical analysis to verify that PromptEval possesses ideal statistical properties, and carried out extensive experiments using three benchmarks (MMLU, BBH, LMentry) and fifteen open-source large language models (flan series, llama series, mistral series, etc.) - They introduce five variations, comparing against the baseline under different budgets (i.e., different sampling rates) and various quantiles, demonstrating the significant effectiveness of PromptEval. Weaknesses: - The current experiments have only been conducted on open-source models. I look forward to experiments on some mainstream closed-source models (such as the GPT series or the Claude series) and exploring corresponding variant strategies. - This method PromptEval requires that a large number of active prompt templates exist in its benchmarks, but this is often not satisfied. Does this method work if there are only a few templates in benchmarks? The authors should carry out experiments to demonstrate this. - The construction of the baseline is relatively simple, and the authors need to compare it with some relevant methods (like TRIPLE [1]) to highlight the advantages of PromptEval. [1] Chengshuai Shi, Kun Yang, Jing Yang, and Cong Shen. Efficient Prompt Optimization Through the Lens of Best Arm Identification, arXiv preprint arXiv: http://arxiv.org/abs/2402.09723. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to my comments in "Weaknesses". Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The author pointed out the limitations in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you for your work on our paper. We addressed the issues you raised below. Please let us know if you have any questions. - **New experiment with closed-source models:** We have conducted a new experiment using closed-source models. Due to the high costs associated with running these models (such as GPT-4), we carried out a small-scale experiment. Moreover, to enhance the diversity and interest of our experiments, we explored the concept of LLM-as-a-judge. In this setup, we generated various prompt templates to present to the judge, which in this case is a closed-source model (GPT-4o-mini). This approach allows us to assess how sensitive evaluated models' performance is to different evaluation prompts. Specifically, we used AlpacaEval 2.0 [1] as the benchmark, generated 100 prompt variations using ChatGPT, and presented these to the judge. We evaluated the performance of four LLMs with similar capabilities (cohere, Qwen1.5-7B-Chat, Mistral-7B-Instruct-v0.2, llama-2-70b-chat-hf) using only 2% of the total evaluations (1.6k/80k). The new results have been included in the additional PDF submitted through OpenReview. In summary, we show that PromptEval offers substantially better performance in terms of estimation error (Wasserstein distance $W_1$) when compared with the baseline. Please feel free to reach out if you have any questions. - **New experiment with fewer prompt variations:** We repeat the main experiment in the paper (randomly) cutting the number of prompt templates by a factor of 5. This means we use only 20 prompt variations in MMLU, for example. In summary, PromptEval still performs well, beating the baseline. However, the gap between PromptEval and the baseline has narrowed due to fewer variations. This fact highlights that the bigger the number of templates, the more useful PromptEval can be relative to the baseline. 
We include the new version of Figure 2 in the extra PDF submitted through OpenReview, which summarizes the new results well. - **Baselines:** We compared against TRIPLE-GSE (the best version of TRIPLE, designed to run when many prompt templates are available) in our “best prompt identification” experiment. Please check the end of Section 5, where we show that PromptEval outperforms TRIPLE in **best prompt identification** in MMLU. We do not compare against TRIPLE in the **performance distribution** estimation experiment because that method is not designed for that purpose. Currently, as far as we know, PromptEval is the only method designed for efficient estimation of performance distribution across prompt templates (besides the naive averaging 'avg', included in our experiments). **References** [1] Li, Xuechen, et al. "Alpacaeval: An automatic evaluator of instruction-following models." (2023).
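For reference, the estimation-error metric used in the experiments above, the 1-Wasserstein distance $W_1$ between the estimated and true performance distributions, reduces for two equal-size one-dimensional empirical samples to the mean absolute gap between their sorted values. A minimal illustrative sketch (not the authors' evaluation code; the accuracy numbers are made up):

```python
def wasserstein_1(sample_a, sample_b):
    """W1 distance between two equal-size 1-D empirical distributions.

    For empirical distributions with uniform weights and the same number
    of atoms, W1 equals the average absolute difference between the
    sorted samples (quantile coupling).
    """
    assert len(sample_a) == len(sample_b)
    a, b = sorted(sample_a), sorted(sample_b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Hypothetical per-prompt accuracies: estimated vs. true distribution.
estimated = [0.61, 0.63, 0.70, 0.72, 0.75]
true_dist = [0.60, 0.65, 0.68, 0.74, 0.78]
print(round(wasserstein_1(estimated, true_dist), 3))  # -> 0.02
```

For unequal sample sizes or weighted atoms, a library routine such as `scipy.stats.wasserstein_distance` computes the same quantity from the quantile functions.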
null
null
Rebuttal 1: Rebuttal: Dear reviewers, Thank you for your time reviewing our paper. We include new experiments suggested by reviewer PhKA in the extra PDF. 1. In the first experiment, we explore the concept of LLM-as-a-judge. We used PromptEval to estimate the distribution of performances given by a closed-source LLM (GPT-4o-mini, the "judge"), across 100 evaluation prompt templates, when evaluating other LLMs' responses on AlpacaEval 2.0. Note that, in contrast with previous experiments, we do not rephrase prompts given to the models being evaluated but only to the evaluator. This approach allows us to assess how sensitive evaluated models' performance is to different evaluation prompts. As a result, we show that PromptEval is also valuable for more robust LLM-as-a-judge benchmarks. 2. In the second experiment, we wanted to check how PromptEval performs when fewer prompt variations are available. To do so, we repeat the main experiment of the paper, only keeping 20% (randomly chosen) of the total number of prompts. In summary, PromptEval still performs well, beating the baseline. However, the gap between PromptEval and the baseline has narrowed due to fewer variations. This fact highlights that the bigger the number of templates, the more useful PromptEval can be relative to the baseline. Pdf: /pdf/9c645293f3066366c94cc399a418a86725f7e922.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
DRIP: Unleashing Diffusion Priors for Joint Foreground and Alpha Prediction in Image Matting
Accept (poster)
Summary: The paper describes a new alpha matting model, which builds upon Stable Diffusion v2, adding various blocks including a switcher for training for alpha and foreground color prediction using the same model, cross-domain attention and an alpha decoder. Unlike the usual emphasis on optimizing models for solely alpha prediction, the authors also tune their model to accurately predict foreground colors. Their results are competitive with the state-of-the-art alpha matting models in alpha estimation. Strengths: * The emphasis on estimating foreground colors is spot on. In fact, alpha mattes by themselves are of little practical use in the absence of foreground color estimates. * Alphas and foreground colors are predicted efficiently through a single model thanks to a foreground-alpha switcher. This is more efficient than training an additional model for estimating foreground colors, and as Tab. 3 suggests, seems to work slightly better than directly predicting alphas and foreground colors using a single model without a switcher. * The alpha mattes predicted by the proposed model are competitive with the state-of-the-art. Weaknesses: * The major weakness of this work is the absence of a proper baseline for the foreground color estimation. This weakness is further amplified by the authors' emphasis on foreground color estimation throughout the paper. For instance, Tang et al. Learning-based Sampling for Natural Image Matting. CVPR'19. sequentially predicts background and foreground colors before alpha prediction. Another alpha matting method by Aksoy et al. Information-Flow Matting. 2017 has an explicit part for accurate color prediction. In addition to such methods that explicitly predict foreground/background colors, existing alpha matting methods can trivially be extended to additionally predict foreground and background colors. Given all that, the evaluation presented in Table 3 is insufficient to prove the claims on accurate foreground color prediction. 
* Overall the technical novelty of the paper is limited. Technical Quality: 3 Clarity: 3 Questions for Authors: * How does your model handle different trimap shapes? Could you comment on the robustness w.r.t. the trimap accuracy? * Did you test your model on video matting? Would you expect the alpha/foreground color predictions to be temporally coherent (given coherent trimaps of course)? * Would your model potentially benefit from end-to-end training where the encoder weights are not frozen? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Some discussion on limitations is present in the final section. If space permits, I'd encourage further discussion of failure cases. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Absence of Baselines for Foreground Color Estimation** Thank you very much for highlighting the need for a proper baseline in foreground color estimation. We agree that this aspect warrants further discussion and comparison. 1. **Related Work Discussion** In the related work section, **we initially focused on methods that simultaneously predict alpha and foreground RGB, as well as those that output alpha followed by post-processing to obtain foreground RGB (L79-83)**. However, we acknowledge the two works mentioned by the reviewer, which adopt a sequential approach where foreground colors are predicted before alpha. These are indeed significant methods, and **we will include a detailed discussion of these works in the revised related work section**. 2. **Evaluation with Baselines** To address the reviewer's concern regarding the absence of a proper baseline, **we have added an evaluation of the Learning-based Sampling (LBS) method as a foreground estimation technique**. The dataset and metrics for foreground estimation are consistent with those in the paper (L206-212, L245-249). **The qualitative results are presented in Figure A of the rebuttal PDF, and the quantitative results are provided in the table below.** The evaluation shows that our method, when leveraging the powerful priors from latent diffusion models (LDM), significantly outperforms other methods, achieving state-of-the-art (SOTA) performance in foreground color estimation. | Method | SAD | MSE | | --- | --- | --- | | LBS | 49.7 | 8.6 | | Ours | 34.1 | 4.3 | **W2: Technical Novelty** Thank you for your valuable feedback. Our approach addresses significant challenges in the ill-posed problem of image matting by incorporating prior knowledge through latent diffusion models. Here are our main technical contributions (L64-69): 1. 
**Transforming Generative Models for Matting**: We modified LDMs, which are generative models, into discriminative models for matting by modeling the task as conditional generation, making this the first LDM-driven matting method. 2. **Joint Prediction of Foreground and Alpha**: To bridge the gap between LDM's RGB output and our RGBA output, we introduced a switcher and cross-domain attention mechanism. These facilitate mutual information exchange, ensuring high consistency and accurate joint predictions. 3. **Mitigating VAE Reconstruction Errors**: We proposed a latent transparency decoder to address the VAE's inherent reconstruction error, aligning RGBA predictions with the input image to preserve necessary details for high-quality matting. **Q1: Different Trimap Shapes** We acknowledge the importance of trimap accuracy and have ensured our model's robustness. During training, **we use data augmentation (L227-234), varying the size and number of dilation and erosion kernels to enhance robustness to different trimap shapes**. For the Composition-1k and AIM-500 datasets, **predefined trimaps** facilitate method comparisons. To validate our model's robustness, **we conducted additional experiments with different trimap shapes**, adding two new trimaps with medium and large shapes in the Composition-1k dataset, using 10 and 20 iterations with kernel sizes of 50 for dilation and erosion. The **qualitative results of these tests are shown in Figure B of the rebuttal PDF**. As expected, the larger trimap includes more unknown regions, increasing the prediction challenge. However, our model still performs well. The **quantitative results in the table below** show that although alpha prediction accuracy decreases with larger unknown regions, our model maintains high precision, demonstrating its robustness. 
| Trimap Shape | SAD | MSE | | --- | --- | --- | | Large | 21.6 | 3.2 | | Medium | 21.2 | 3.2 | | Small | 20.8 | 3.1 | **Q2: Video Matting** We **supplemented our evaluation with tests on the VideoMatte240K dataset** [1] using the model trained on Composition-1k. For each frame, we generated a trimap with erosion and dilation (kernel size 10, iterated five times) and conducted tests at a resolution of 512×288. Evaluation metrics included MSE (L242-244) and dtSSD for temporal coherence. The **quantitative results are presented below**. We included results from a human video matting method, MODNet [2]. Since our model uses trimaps for each frame, our results generally outperform those of human video matting methods. | Method | MAD | MSE | dtSSD | | --- | --- | --- | --- | | MODNet | 9.41 | 4.30 | 2.23 | | Our Model | 5.32 | 2.21 | 1.98 | The **qualitative results, shown in Figure C of the rebuttal PDF**, indicate that our model maintains good performance on unseen video test data. This also demonstrates the generalization capability of our model. [1] Real-time high-resolution background matting. CVPR’21 [2] Modnet: Real-time trimap-free portrait matting via objective decomposition. AAAI’22 **Q3: End-to-end Training** Thank you for the insightful suggestion. End-to-end training with unfrozen encoder weights presents several challenges. Firstly, **optimizing the latent space while treating it as an optimization target can destabilize the training process**. Additionally, training both the VAE encoder and decoder simultaneously **increases memory usage significantly**. This approach would also **require balancing three types of loss functions**: latent space constraints, pixel space constraints, and latent space regularization, **necessitating an extensive hyperparameter search** and substantially increasing computational costs. While end-to-end training could offer benefits, **the costs and complexities currently outweigh those of our two-stage approach**. 
Implementing this during the rebuttal period is impractical. However, we appreciate the suggestion and acknowledge its potential for future work. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. The experiment against the Learning Based Sampling method presented in W1 partly addresses one of my main concerns with this submission. I'm accordingly increasing my initial rating. It would be a lot more convincing to also include Aksoy et al. '17 as a baseline in the final manuscript, since their work specialize in accurate FG color estimation. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your valuable feedback on our manuscript. We appreciate your comments and have carefully considered the points you raised. As you suggested, we will include a comparison to the Aksoy et al. '17 method in the revised manuscript. This will allow us to more comprehensively demonstrate the superior performance of our approach in the FG color estimation task. We believe that this additional comparison, along with the existing evaluation against the Learning Based Sampling method and other baselines presented in the original paper, will significantly strengthen the evidence for the advantages of our proposed method. We are grateful for your support and guidance throughout the review process. If you have any other feedback or suggestions, please feel free to share them. Thank you again for your time and consideration.
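For context on the temporal-coherence metric dtSSD used in the video matting comparison above: one common formulation measures the root-mean-square mismatch between the temporal gradients of the predicted and ground-truth alpha sequences. The following is a hedged pure-Python sketch of that formulation (our reading of the metric, not the paper's evaluation code):

```python
import math

def dtssd(pred_frames, gt_frames):
    """dtSSD: RMS difference between the temporal gradients of predicted
    and ground-truth alpha mattes (one common formulation; assumption,
    not necessarily the exact benchmark implementation).

    pred_frames / gt_frames: lists of frames, each frame a flat list of
    alpha values in [0, 1].
    """
    assert len(pred_frames) == len(gt_frames) >= 2
    total, count = 0.0, 0
    for t in range(1, len(pred_frames)):
        for p1, p0, g1, g0 in zip(pred_frames[t], pred_frames[t - 1],
                                  gt_frames[t], gt_frames[t - 1]):
            d = (p1 - p0) - (g1 - g0)  # mismatch in per-pixel temporal change
            total += d * d
            count += 1
    return math.sqrt(total / count)

# A prediction that tracks the ground truth's changes exactly scores 0.
gt = [[0.0, 0.5], [0.2, 0.7], [0.4, 0.9]]
print(dtssd(gt, gt))  # -> 0.0
```

A temporally flickering prediction scores high even if each individual frame is close to the ground truth, which is why dtSSD complements per-frame metrics such as MAD and MSE.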
Summary: This paper introduces DRIP, a novel image matting method that leverages pre-trained LDMs to jointly predict foreground color and alpha mattes. By integrating a cross-domain attention mechanism and a latent transparency decoder, DRIP addresses the limitations of traditional methods, achieving significant performance improvements on synthetic and natural datasets. The key contributions include enhanced prediction consistency, high-fidelity results, and setting new benchmarks in image matting accuracy. Strengths: The paper presents a pioneering approach by utilizing pre-trained LDMs for image matting, incorporating cutting-edge methodologies such as the latent transparency decoder. This method markedly enhances performance in image matting, establishing new SOTAs and overcoming shortcomings (e.g., details in semitransparent regions) of existing SOTA methods. 1. Robust empirical evidence supporting the proposed method is provided through comprehensive ablation studies. 2. The manuscript is eloquently composed and well-organized. The logical progression is sound and accessible to a wide readership. 3. The innovative application of pre-trained LDMs is a successful paradigm for future endeavors in additional computer vision applications. Weaknesses: 1. The computational complexity is unclear. Their high computational demands may impede practical application, particularly in real-time or resource-constrained settings. 2. The evaluation is predominantly conducted on synthetic datasets and a limited number of natural image benchmarks. More extensive testing on diverse real-world scenarios would enhance the demonstration of the model’s robustness and generalizability. 3. The approach depends on pre-trained LDMs, which may propagate existing biases from the training data. Can the matting model perform effectively on objects that are not well-represented by the original SD model, such as rare objects? 4. 
This article develops its model based on the Stable Diffusion model. It raises the question of whether it retains the textual input feature of the SD model. If so, how is image captioning handled? If not, could this omission lead to a degradation in the performance of cross-attention in SD? 5. During inference, is classifier-free guidance utilized? If so, what impact does it have on the matting performance? 6. Has there been any consideration for fine-tuning based on layer diffusion? Technical Quality: 3 Clarity: 4 Questions for Authors: See weakness. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Computational Complexity** Thank you for your valuable comments regarding the computational complexity of our model. We understand the importance of addressing computational demands, particularly for real-time or resource-constrained applications. **In the limitation section of our paper, we discussed the issues related to model complexity and deployment, specifically noting that the use of latent diffusion models substantially increases the architectural complexity of our approach (L323-L326)**. **Below, we provide detailed information** about the computational complexity:

| Image Resolution | Memory Usage (MB) |
| --- | --- |
| 512 x 512 | 4297 |
| 1024 x 1024 | 7057 |

The table above shows the memory usage for processing images of various resolutions on a standard GPU (NVIDIA RTX A6000) using float16 precision. While the latent diffusion models increase the computational load, the performance remains within acceptable limits for many practical applications. **W2: Real World Scenarios** Thank you for raising this important point. In the original paper, we demonstrated our model’s generalizability by **training on synthetic datasets and evaluating on unseen real-world datasets, where it achieved outstanding results (L283-284)**. To further validate our model’s robustness, we have **supplemented our evaluation with tests on the VideoMatte240K dataset** [1], also using the model trained on the Composition-1k dataset. The quantitative results are as follows:

| Method | MAD | MSE | dtSSD |
| --- | --- | --- | --- |
| MODNet | 9.41 | 4.30 | 2.23 |
| Our Model | 5.32 | 2.21 | 1.98 |

Additionally, **the qualitative results are provided in Figure C of the rebuttal PDF**. These results show that our model performs well on unseen video test data, further demonstrating its generalization capability. [1] Real-time high-resolution background matting.
CVPR’21 **W3: Biases From Pretrained Model** Thank you for raising this important concern. As mentioned in the limitations section of our paper, **our approach indeed relies on pre-trained Latent Diffusion Models (LDMs), which may carry biases from their training data (L326-331)**. Regarding **rare objects that are not well-represented by the original SD model, this is an area that requires further research, and currently, there is no suitable dataset for evaluation**. However, it is undeniable that the training dataset used for LDMs, such as LAION-5B, is extremely large. This extensive dataset is a significant reason why our method demonstrates better generalization capabilities compared to other approaches (L266-284). **W4: Textual Input Feature** Thank you for raising this question. Our approach indeed builds on the Stable Diffusion (SD) model (L147-149), but we have customized it to focus specifically on the image matting task, **intentionally omitting the textual input feature of the SD model**. This decision was made to **simplify the model's architecture and optimize it for the specific requirements of image matting**. By doing so, we ensure that the cross-attention layers fully capture the intricate details and dependencies in the visual input without being influenced by text-based guidance. Our state-of-the-art (SOTA) results across multiple datasets (L266-284) demonstrate that omitting the textual input does not lead to a degradation in performance. **W5: Classifier Free Guidance** In our paper, **we clarified that our approach uses an empty text input (L222-223)**. Classifier-free guidance is typically employed to enhance the ability of a model to follow textual descriptions by combining the predictions from a given text prompt with those from an empty text prompt. However, in our method, the text input is deliberately left empty, which means that CFG is not utilized. 
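For readers unfamiliar with the mechanism discussed in W5, a minimal numeric sketch of the standard classifier-free-guidance combination rule (illustrative names, not the authors' code) shows why an always-empty prompt makes the guidance scale inert:

```python
def cfg_combine(uncond_pred, cond_pred, guidance_scale):
    # Standard CFG: extrapolate from the unconditional prediction
    # toward the text-conditional one.
    return [u + guidance_scale * (c - u)
            for u, c in zip(uncond_pred, cond_pred)]

# With an empty prompt, the "conditional" branch is identical to the
# unconditional one, so any guidance scale returns the same output.
uncond = [0.2, -0.5, 1.0]
assert cfg_combine(uncond, uncond, 7.5) == uncond
```

Because the conditional and unconditional branches coincide when the prompt is always empty, skipping CFG entirely (as the authors do) changes nothing about the prediction while halving the number of U-Net evaluations per step.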
**W6: Fine-Tuning Based on Layer Diffusion** We appreciate the suggestion to consider fine-tuning based on layer diffusion. Our current approach leverages pre-trained latent diffusion models (LDMs) for robust performance (L7-9). However, we recognize the potential benefits of layer diffusion fine-tuning. **We are exploring various fine-tuning strategies to improve our model's robustness and accuracy and plan to incorporate these techniques in future iterations.** Thank you for your valuable feedback. --- Rebuttal Comment 1.1: Title: Thanks for the authors' response Comment: Thank you for your detailed rebuttal and for addressing my concerns thoroughly. The additional experiments and insights were very helpful in clarifying the approach. The method of leveraging pre-trained LDMs for image matting is innovative and well-presented. The clarity and organization of the manuscript are notable, and I appreciate the thoroughness of this work. I stand by my recommendation to accept this paper. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We are delighted to hear that our responses have resolved all of your concerns! If you have any further inquiries or doubts, please don’t hesitate to inform us. We are determined to address any remaining issues and respond promptly! Thanks again for your thorough review and reconsideration of our rebuttal!
Summary: The paper introduces Drip, an approach to image matting that leverages vision priors from pre-trained latent diffusion models (LDM). Drip incorporates a switcher and cross-domain attention mechanism for joint prediction of foreground color and opacity, ensuring high consistency. A latent transparency decoder mitigates reconstruction errors. Experiments demonstrate good performance and generalizability across benchmarks. Strengths: 1. The experiments are good and quite comprehensive. Weaknesses: Like many other papers applying DNNs, this paper presents an NN architecture for image matting. The building blocks are all standard ones, and the design does not really reflect the uniqueness of the problem relative to other image editing tasks. Overall, I am not convinced that such a paper really benefits the progress of image matting. Technical Quality: 2 Clarity: 3 Questions for Authors: NIL Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The limitation is not well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Contribution to Image Matting** Our primary contributions and insights are not focused on the neural network architecture or block design. Instead, we concentrate on exploring how to leverage the priors learned by well-scaled image generation models to address the ill-posed problem of image matting. Here are the unique aspects of our work: 1. **Joint Estimation of Foreground and Alpha**: In previous works, foreground extraction required complex post-processing. Our experiments demonstrate that leveraging the priors from generative models facilitates foreground image prediction. Furthermore, the alpha map and foreground have a close domain relationship, and joint estimation benefits both. 2. **Addressing Domain Gaps with Generative Models**: We tackle a significant technical challenge by adapting image generation models for alpha and foreground estimation. This involves bridging the severe domain gap between the RGB images encoded and decoded by VAEs and the RGB-A images required for matting. We believe that our approach of integrating generative model priors into the matting process brings a novel perspective and advances the field of image matting. Thanks for your valuable feedback. **W2: Inadequate Discussion of Model Limitations** Thank you for your feedback. We acknowledge that the discussion of limitations is crucial. **In the original paper, we have addressed the limitations of our model in the "Conclusion and Future Work" section (L322-331).** We have outlined several aspects where the model might face challenges or limitations. **If there are specific areas or additional limitations you believe were not adequately covered, we would greatly appreciate your insights.** We are open to discussing and incorporating any further considerations you might suggest during the discussion phase and are willing to make necessary revisions to address these points comprehensively.
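As background to the rebuttal's framing of matting as an ill-posed problem: the task inverts the standard compositing equation $I = \alpha F + (1 - \alpha) B$. A tiny per-channel sketch (illustrative, not the authors' code):

```python
def composite(fg, bg, alpha):
    # Forward compositing model: I = alpha * F + (1 - alpha) * B.
    # Matting is the ill-posed inverse: recover alpha (and F) from I alone.
    return [a * f + (1.0 - a) * b for f, b, a in zip(fg, bg, alpha)]

# A half-transparent white foreground over black gives mid-gray pixels;
# many other (F, B, alpha) triples would explain the same observation,
# which is why strong generative priors help.
print(composite([1.0, 1.0], [0.0, 0.0], [0.5, 0.5]))  # [0.5, 0.5]
```

Since one observed pixel constrains seven unknowns (RGB foreground, RGB background, alpha), the problem is underdetermined, motivating the use of learned priors as the rebuttal argues.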
Summary: The authors present a clear and straightforward method to improve image matting performance using an LDM-based model. The model is a conditioned LDM which can predict both the Alpha mask and the foreground, basically doing RGB-A prediction from an image and a trimap. To further adapt to the data domain, the authors also finetune a specialized decoder which is proven to be helpful. The overall design of the model and training method is intuitive and effective with sufficient evaluation. The training data is limited but can be scaled up by more data collection. Strengths: - Clear presentation of the methods with a straightforward implementation. The results show obvious improvement over existing discriminative models. - The cross-domain attention and finetuned decoder are necessary and interesting to explore. Weaknesses: - The training dataset still looks quite small. The model may not have good generalization capability to more unseen data in the wild. Also the batch size and training time are limited. - When the authors mentioned 'prior', is it referring to the pre-trained LDM model? Did the authors use something like LoRA to finetune the model on the smaller dataset? Will the performance become worse as the training goes longer? - There seems to be no evidence showing the advantages of the cross-domain self-attention. Could the authors provide more details? - It seems the switcher is just a conditioning embedding for choosing outputs. Will we be able to predict both at the same time without inputting the switcher values? Technical Quality: 3 Clarity: 3 Questions for Authors: - The transparent latent decoder is helpful. However, will it be more useful to also train a transparent auto-encoder by finetuning the existing autoencoder? When training the decoder, did the authors prepare and precompute the data from a trained diffusion model? - How to speed up the process and could we reduce the steps of sampling?
- Any proposed methods to scale up the training data for more robust model training? - Will Trimap be necessary? It will be great if the model can directly do foreground object RGB-A extraction without inputting the Trimap. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - Trimap is needed for the model to do RGB-A extraction. Some dropping of the conditions may be helpful. - Data is quite limited and training for a longer time may degrade the performance. - Cross-domain self-attention is not fully evaluated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Training Dataset Size and Model Generalization** Thank you for raising concerns about our dataset size and model generalization. Our training dataset is substantial, comprising 431 foreground images and a background library of 82,783 images. Each foreground image is paired with 100 different backgrounds (L206-L209), **resulting in at least 43,100 training samples**. Additionally, **we applied data augmentation techniques** such as random horizontal flipping, cropping, and photometric distortion, further increasing the effective training dataset size (L227-L229). Thus, our dataset is quite extensive. For generalization, our model **trained on the Composition-1k dataset was evaluated on the independent AIM-500 real-world dataset. The results showed that our model generalizes well to unseen data (L283-284)**. We fully acknowledge the importance of dataset size for generalization. However, matting is a dense annotation task that is extremely challenging to label. This is precisely **why we incorporated the strong prior knowledge from the latent diffusion model trained on LAION**, which significantly aids in improving generalization capabilities. **W2: Clarification on Prior Knowledge and Fine-Tuning with LoRA** When we refer to 'prior' in our paper, **it indeed pertains to the pre-trained Latent Diffusion Model (L147-148)**. Regarding your question about using LoRA to fine-tune the model on a smaller dataset, we have experimented with this approach. Our findings indicate that fine-tuning with LoRA **resulted in poor performance and difficulty in convergence**. Consequently, there was no issue of overfitting or performance degradation over extended training periods. We hope this clarification addresses your concerns. **W3: Evidence for Cross-Domain Self-Attention Advantages** In our paper, we **conducted an ablation study (L288-305) to evaluate this component**.
The quantitative results demonstrate that cross-domain self-attention improves both alpha prediction and foreground color prediction accuracy. This improvement is attributed to the **high correlation between alpha and foreground color**, as both describe information about the foreground object. Cross-domain attention enhances the interaction and consistency of this information, leading to better overall performance. **W4: Switcher** The switcher **is indeed an embedding designed to help the network distinguish between the two outputs**: foreground color and alpha value. Both outputs are **predicted simultaneously**. Our **ablation study (L306-309)** demonstrates that providing this additional embedding information, rather than merely differentiating between foreground and alpha by channel order, enhances the neural network’s ability to distinguish and effectively utilize information from the two modalities. **Q1: Transparent Latent Decoder** We appreciate the suggestion to fine-tune the VAE. However, solely fine-tuning the VAE is not ideal for several reasons: 1. **Consistency with Pre-trained Weights:** To leverage the pre-trained weights in the LDM, especially in the U-Net, it is crucial to maintain the consistency of the latent distribution output by the VAE encoder. Over-finetuning the VAE encoder could disrupt this consistency, negatively impacting performance. 2. **Compression:** The VAE inherently introduces a non-negligible reconstruction loss due to its compression nature, which is problematic for high-resolution tasks like matting that require low-level detail. This limitation, as mentioned in the paper (L189-190, L310-314), necessitates the use of a transparent latent decoder to meet the high precision demands of matting tasks. Our current approach, combined with the transparent latent decoder, offers a balanced solution that addresses the high-resolution requirements of matting tasks while leveraging the strengths of pre-trained LDM components. 
**Q2: Steps of Sampling** In the appendix of our paper, **we have included results and analyses regarding the impact of denoising steps on performance (L507-510)**. The **specific values are shown in the table below**.

| Steps | SAD |
| --- | --- |
| 1 | 22.8 |
| 5 | 17.8 |
| 10 | 17.3 |
| 20 | 17.2 |

Our findings indicate that **increasing the number of timesteps generally enhances performance, although the improvement diminishes as the number of timesteps increases**. Additionally, our experiments show that setting the sampling steps to at least 5 generally yields satisfactory results. **Q3: Scaling Up Training Data** Thank you for your comments and for inquiring about potential methods to scale up the training data for more robust model training. **We have plans for future research that involve using methods such as LayerDiffusion to generate additional data**. This generated data will then be screened **with a human-in-the-loop approach to ensure quality**. In our current study, **to maintain a fair comparison with other methods, we used the same training dataset**. **Q4: Necessity of Trimap** Thank you for your comment regarding the use of trimap in our model. We understand the desire for a model that can directly perform RGB-A extraction without requiring a trimap. However, we have found that **the use of a trimap is still necessary for achieving accurate results, especially when multiple foreground objects are present in an image**. To address this concern, we conducted **a qualitative experiment where the trimap was set to an unknown region across the entire image**. This situation introduces ambiguity about which foreground object to extract, **as illustrated in Figure D of the rebuttal PDF**. For example, with a trimap indicating unknown regions, the model faces difficulty in deciding whether to extract the rabbit or the flower, leading to potential confusion and reduced accuracy.
Additionally, **to ensure fair comparison with previous methods, we have maintained consistent settings and trimaps across our experiments (L251-257)**. --- Rebuttal 2: Comment: Dear Reviewer YFFw, We sincerely appreciate your time and effort in reviewing our submission and providing valuable comments. We have provided a detailed response to each concern you raised and hope they have adequately addressed your concerns. As the author-reviewer discussion phase is coming to an end **(Aug 13 11:59pm AoE)**, we would like to confirm whether our response has effectively addressed your questions. If you have any other questions or concerns, please do not hesitate to contact us. Best regards, Authors
Rebuttal 1: Rebuttal: **Summary of Revisions:** To all reviewers, We would like to express our sincere gratitude for your valuable efforts. We have meticulously reviewed all the feedback provided and made the necessary revisions to our paper. Below is a summary of the major changes incorporated into the final version. Additionally, the qualitative results are included in the PDF file below. In response to the concerns and comments raised by the reviewers, we have carefully addressed each point in the following point-to-point response. **The major changes are as follows:** 1. Added a table to clarify the relationship between sampling steps and accuracy, which complements Figure 8 in the appendix, in response to Reviewer YFFw’s comments. 2. Included a visualization figure to illustrate the necessity of the Trimap, as requested by Reviewer YFFw. 3. Reported detailed computational complexity values to address the concerns raised by Reviewer 8UU8. 4. Conducted additional experiments on the video matting dataset, incorporating the feedback from Reviewers 8UU8 and TjXY. 5. Enhanced the related work section on foreground color estimation, following the recommendations of Reviewer TjXY. 6. Introduced a new baseline for foreground color estimation using the Composition-1k dataset, as suggested by Reviewer TjXY. 7. Performed an experiment on alpha prediction with different Trimap shapes to further demonstrate the robustness of our method, based on Reviewer TjXY’s comments. We eagerly look forward to further discussions. Thank you for your thoughtful consideration. Pdf: /pdf/0ba9586e1ed757d05c78e0a55f52fb920fb3c329.pdf
NeurIPS_2024_submissions_huggingface
2024
Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs
Accept (poster)
Summary: This paper is concerned with the overoptimization issue in reward modeling: optimizing a policy against a reward model induces a distributional shift that can increase the proxy score while the true score decreases. This paper addresses these issues by regularizing the hidden states in reward models utilizing an additional output head that (with some variations) predicts the text of the winning trajectories in the comparison data. The idea is that this prediction task (a) helps keep the intermediate, strong representations from pretraining, and (b) adapts them to text that is more similar to what's encountered in post-training, thus leading to stronger generalization to the data encountered when optimizing the policy. Strengths: My impression is that this is a strong paper dealing with the important overoptimization issue in RLHF. The motivation is strong and the experimental evaluation considers many baselines, settings, and datasets. Weaknesses: No strong weaknesses come to mind, though see the questions and comments below for other experiments that could be interesting to gain more confidence (though imho not necessary for this submission to be accepted). Technical Quality: 3 Clarity: 3 Questions for Authors: I'm using this section for questions, suggestions, and minor weaknesses. a. "*In the RLHF stage, various policy optimization methods can be applied, with two frequently used methods being Best-of-n Sampling (BoN) and Proximal Policy Optimization (PPO).*" --- This paper seems to view RLHF as the stage *after* reward modeling, so just policy optimization. In contrast, I think it's more typical to view reward modeling as *part of* RLHF. b. "*While straightforward, the DPO regularization requires a reference model during training.*" --- Don't you also need a reference policy during the PPO stage, given by the KL regularization with the SFT model? So it seems like this is no additional cost here? c.
Equation (7): Did you also consider replacing $\log \circ \sigma \circ \beta \circ \log$ by just $\log$? Do you expect this to perform better/worse? d. Did you consider interleaving reward modeling with pretraining on a pretraining dataset to regularize the hidden states? I wonder if this performs better or worse than essentially doing pretraining on the preference dataset (as you do in your SFT regularization method). e. Did you check whether in SFT regularization it matters whether you predict the text of the winning responses, instead of trying to predict the losing responses? I could imagine that the former performs better since it brings the reward model closer to understanding a distribution that the PPO stage *will steer towards*, but I'm curious if that prediction holds up empirically. f. For HHH-Alignment, MT-Bench, and RewardBench, could you give more details on what the benchmarks test for and what the scores mean? g. I'd recommend putting the limitations and broader impact sections into the main paper. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Limitations are described in Appendix D. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments, and we provide clarification to your concerns as follows. Please let us know if you have any further questions or comments. **Q1:** This paper seems to view RLHF as the stage after reward modeling ... I think it's more typical to view reward modeling as part of RLHF. **A1:** Thanks for the question. We agree with the reviewer that reward modeling is one step of RLHF. We will revise Section 2 Background to reflect this more accurately. **Q2:** "The DPO regularization requires a reference model during training." --- Don't you also need a reference policy during the PPO stage ... So it seems like this is no additional cost here? **A2:** The cost includes both **memory usage and computation time**. Although DPO regularization does not increase memory usage for PPO, it still extends the computation time required for reward modeling. Furthermore, for other policy optimization methods such as BoN and reject sampling, the introduction of a reference model results in additional memory usage. Therefore, we claim that DPO regularization is more costly for reward modeling and prefer SFT and DPO w/o reference regularization according to the experimental results. **Q3:** Equation (7): Did you also consider replacing $\log \circ \sigma \circ \beta \circ \log$ by just $\log$? Do you expect this to perform better/worse? **A3:** In ideal situations, the two forms should perform similarly. We also tried the $\log$ form but found that it requires different hyperparameter tuning for $\alpha$ due to changes in the loss scale. In the tables below, **"GRM logreg" outperforms the baseline reward model and matches or even slightly exceeds the performance of our GRM on OOD tasks when $\alpha$ is appropriate**. We found that the current form of SFT regularization can directly use the same hyperparameters as our DPO regularization.
Therefore, we opted for this solution to maintain coherence with these regularizations and avoid the need for hyperparameter adjustments.

| Reward Model (400K data) | Unified Feedback (IID) | HHH Alignment (OOD) | MT Bench (OOD) |
|----------------------------|------------------|---------------|----------|
| Classifier (Baseline) | 72.1 | 73.4 | 71.2 |
| GRM | 73.2 | 79.8 | 73.4 |
| GRM logreg $\alpha=0.005$ | 72.8 | 77.6 | 72.8 |
| GRM logreg $\alpha=0.001$ | **73.3** | **80.2** | **73.6** |

| Reward Model (40K data) | Unified Feedback (IID) | HHH Alignment (OOD) | MT Bench (OOD) |
|----------------------------|------------------|---------------|----------|
| Classifier (Baseline) | 68.8 | 70.3 | 69.1 |
| GRM | **71.5** | 78.7 | **73.0** |
| GRM logreg $\alpha=0.005$ | 69.7 | 72.4 | 72.8 |
| GRM logreg $\alpha=0.001$ | 70.8 | **80.7** | 72.0 |

**Q4:** Did you consider interleaving reward modeling with pretraining on a pretraining dataset to regularize the hidden states? **A4:** Thanks for the insightful question. One challenge of the described approach is that we don't have the pretraining dataset for many language models. We think that other data formats can also be useful as a regularization, but preference data is more advantageous because it allows us to avoid using external datasets during reward modeling by leveraging the preference dataset, which may better match the distribution of prompts and responses. Please refer to the **global response B** for the additional results. **Q5:** Did you check whether in SFT regularization it matters whether you predict the text of the winning responses, instead of trying to predict the losing responses? **A5:** In our SFT regularization, we only use the winning responses to regularize the hidden states. During the rebuttal, we also tested regularizing with the negative responses, referred to as "**GRM neg**."
The results show **very close IID evaluation accuracy but relatively lower OOD accuracy** compared to our default setting, potentially due to the generally lower quality of rejected responses. The impact of this reward model on PPO training requires further examination. However, we agree with the reviewer's conjecture that the rejected responses may lead to unsatisfactory representations for the desired distribution.

| Reward Model (400K data) | Unified Feedback (IID) | HHH Alignment (OOD) | MT Bench (OOD) |
|----------------------------|------------------|---------------|----------|
| Classifier (Baseline) | 72.1 | 73.4 | 71.2 |
| GRM | **73.2** | **79.8** | **73.4** |
| GRM neg | **73.2** | 78.1 | 73.0 |

| Reward Model (40K data) | Unified Feedback (IID) | HHH Alignment (OOD) | MT Bench (OOD) |
|----------------------------|------------------|---------------|----------|
| Classifier (Baseline) | 68.8 | 70.3 | 69.1 |
| GRM | **71.5** | **78.7** | **73.0** |
| GRM neg | 71.2 | 78.1 | 72.8 |

**Q6:** For HHH-Alignment, MT-Bench, and RewardBench, could you give more details on what the benchmarks test for and what the scores mean? **A6:** In Section 4, we briefly introduce three benchmarks. The HHH-Alignment dataset evaluates language models on helpfulness, honesty, and harmlessness. The MT-Bench dataset contains 3.3K human preferences for model responses generated by LLMs in response to MT-Bench questions. RewardBench is a new benchmark designed to evaluate the reward model's ability to select human-preferred responses for chat, reasoning, and safety tasks. In summary, these benchmarks are used to assess the reward model's alignment with human preferences and generalization ability across different prompt-response distributions. The scores represent the average accuracy across each benchmark, with higher accuracy indicating better performance. **Q7:** I'd recommend putting the limitations and broader impact sections into the main paper.
**A7:** We agree with the reviewer and will move the two sections to the end of the main paper in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answers to my questions! --- Reply to Comment 1.1.1: Comment: Thank you for your positive evaluation of our work. We greatly appreciate your insightful questions and suggestions, as they enhance the accuracy of our writing and the comprehensiveness of our evaluation.
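To make the objective under discussion concrete: a minimal sketch of a reward loss with SFT regularization, assuming the regularizer is the mean token negative log-likelihood of the chosen response (the paper's Eq. (7) further wraps this term, as discussed in Q3/A3 above; names and toy values are illustrative, not the authors' code):

```python
import math

def grm_style_loss(r_chosen, r_rejected, chosen_token_nlls, alpha=0.001):
    # Bradley-Terry preference loss on the scalar reward margin:
    # -log(sigmoid(r_chosen - r_rejected)).
    pref = math.log(1.0 + math.exp(-(r_chosen - r_rejected)))
    # SFT regularizer: mean NLL of the winning response's tokens,
    # keeping the hidden states useful for text generation.
    sft = sum(chosen_token_nlls) / len(chosen_token_nlls)
    return pref + alpha * sft

# alpha = 0 recovers the plain classifier reward loss.
print(round(grm_style_loss(1.0, 0.0, [2.0, 4.0], alpha=0.0), 4))  # 0.3133
```

The small default weight mirrors the $\alpha$ values discussed in the rebuttal tables; as noted there, the plain-log form needs a rescaled $\alpha$ because the loss scale changes.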
Summary: This paper introduces a method that retains the base model's language model head while incorporating text-generation losses to preserve the hidden states' text generation capabilities. Strengths: * The paper is well-written, and the idea is straightforward. * The code is easy to understand and implement. * This method appears to be efficient and readily integrable with existing alignment approaches. Weaknesses: * Figure 3 (b) appears unusual, as the gold score decreases at the beginning of training. This could indicate suboptimal hyperparameter tuning or potential drawbacks in the pipeline. * The results demonstrate limited advantages. It is recommended to validate the method's benefits with a larger dataset. Technical Quality: 3 Clarity: 3 Questions for Authors: * The training details mention that the reward models are trained for only 2 epochs. This choice may not be optimal, considering the risk of overfitting in the full fine-tuning setting. It would be helpful to discuss the reasoning behind this decision or the impact of learning rate and convergence speed. * The simplicity of the regularization method is intriguing. Can it effectively mitigate reward hacking, potentially surpassing reward model ensemble techniques? It would be valuable to explore and discuss this aspect. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: * The experiments may lack solidity due to the heavy reliance on the reward model in the PPO algorithm. If the reward model is not properly trained, the results can be significantly impacted, introducing randomness to the work. * The inclusion of manually synthesized data in the dataset may not accurately reflect real-world data, limiting the generalizability of the findings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable comments, and we provide clarification to your concerns as follows. **Q1:** Figure 3 (b) appears unusual, as the gold score decreases at the beginning of training. This could indicate suboptimal hyperparameter tuning or potential drawbacks in the pipeline. **A1:** We found that this issue is influenced by **the learning rate of PPO**. By using a smaller learning rate (1e-5), we observed an initial improvement in the gold score during the first few steps, followed by subsequent drops (see **Figure 2 of the supplementary PDF**). Although the trend remains the same, the smaller learning rate delays the drop. In contrast, even with a larger learning rate, using our GRM allows PPO to learn stably, further emphasizing the importance of our proposed method for reward modeling and alleviating the burden of tuning hyperparameters for PPO. Moreover, we carefully followed the pipeline commonly adopted by the community [2][5] and used the open-source implementation of PPO from the Huggingface/trl package to ensure a correct and reliable pipeline. **Q2:** The results demonstrate limited advantages. It is recommended to validate the method's benefits with a larger dataset. **A2:** We would like to argue that the advantages of GRM are significant. As illustrated in Table 1 and 2 of our paper, increasing the training data from 40K to 400K enhances the OOD scores from 70.3/69.1 to 73.4/71.2. In contrast, GRM with just 40K training data achieves OOD scores of 78.7/73.0, showing a **much greater improvement than a 9x increase in the dataset size**. This is particularly important in real-world applications where there are limited samples for fine-tuning. Additionally, in **global response A**, we present further results on RewardBench, demonstrating the strong potential of scaling GRM to **larger models and datasets**. Our findings indicate that GRM even outperforms a 34B reward model and GPT-4 as a judge. 
Moreover, as shown in Tables 1 and 2, GRM outperforms a strong ensemble baseline (n=3) while **using only about 1/3 of the computational cost**. In Section 5.2, GRM significantly outperforms other reward models for RLHF in terms of the gold score, alleviating the reward over-optimization problem of RLHF.

**Q3:** The training details mention that the reward models are trained for only 2 epochs ... It would be helpful to discuss the reasoning behind this decision or the impact of learning rate and convergence speed.

**A3:** In our experience, the training design depends on the dataset's quality, diversity, and size, as well as the base model and optimizer hyperparameters used for training. There is no one-size-fits-all hyperparameter setting. Prior work [2] trains reward models for 5 epochs and suggests **determining this number based on a nearly converging validation loss**. Following this insight, we reserve 1% of the training set for validation (e.g., 4K for 400K training data) and found that 2 epochs are sufficient for reward modeling with LoRA in our setting. As shown in the table below, we observed convergence in the validation loss during the second epoch, with no further improvement in the third epoch. For full-parameter training experiments added during the rebuttal, which are more prone to overfitting, we train the reward model for only one epoch.

| Validation accuracy (%) of reward model | 0.05 epoch | 0.5 epoch | 1 epoch | 1.5 epoch | 2 epoch |
|---|---|---|---|---|---|
| Classifier (Baseline) | 68.4 | 73.3 | 73.8 | 74.0 | 74.0 |
| GRM | 69.4 | 73.5 | 73.8 | 74.2 | 74.1 |

**Q4:** The simplicity of the regularization method is intriguing. Can it effectively mitigate reward hacking, potentially surpassing reward model ensemble techniques?
**A4:** Yes, in **Section 5.2**, we already explored the effectiveness of GRM for RLHF, particularly focusing on two commonly used policy optimization methods: BoN and PPO. As shown in Figure 3 of our main paper and the 3 figures in the supplementary PDF, GRM consistently excels in both the PPO and noisy-label experiments. Unlike the baselines (including the ensemble techniques), which show an increasing proxy score but a declining gold score, GRM demonstrates its effectiveness in mitigating reward hacking.

**Q5:** The experiments may lack solidity due to the heavy reliance on the reward model in the PPO algorithm. If the reward model is not properly trained, the results can be significantly impacted, introducing randomness to the work.

**A5:** We would like to emphasize that our main focus is on improving reward modeling (Section 5.1), subsequently enhancing RLHF (Section 5.2). **Our reward model is not limited to the PPO algorithm**; it can be applied to any policy optimization method based on explicit reward modeling, such as BoN or rejection sampling. As the reviewer mentioned, an improperly trained reward model can be hacked by RL algorithms, as demonstrated in Section 5.2. In contrast, GRM shows better robustness against reward hacking, underscoring its importance for robust preference learning.

**Q6:** The inclusion of manually synthetic data in the dataset may not accurately reflect real-world data, limiting the generalizability of the findings.

**A6:** In Section 5.2, synthetic data is used in the RLHF experiment to allow the gold reward model to annotate preference labels without human labor. This approach is also adopted by prior works [2][3][5], and even research from companies like OpenAI and Google uses this synthetic method to reduce human labor. We have also mentioned this in the limitations section (Appendix D).
In other experiments in **Section 5.1**, we use the UnifiedFeedback dataset, which includes **datasets based on human annotators**, such as Anthropic/hh-rlhf, lmsys/chatbot_arena_conversations, openai/summarize_from_feedback. Therefore, we believe our setting also reflects real-world data. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' comprehensive response. I would like to keep the current score. Thanks! --- Reply to Comment 1.1.1: Comment: Thank you for your time, effort, and valuable feedback. We hope that our responses, including detailed explanations regarding the hyperparameter design and validation on a larger dataset, have addressed your concerns. If our responses have resolved your issues, we would greatly appreciate it if you could consider raising the score. If you have any additional concerns, please feel free to post them, and we would be happy to discuss them further before the discussion deadline on August 13th.
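Best-of-N (BoN) sampling, referenced in the rebuttal above as a policy optimization method driven by an explicit reward model, can be sketched in a few lines. The `generate` and `reward` callables are hypothetical placeholders standing in for a policy model and a trained reward model:

```python
def best_of_n(prompt, generate, reward, n=8):
    # draw n candidate responses and keep the one the reward model scores highest
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda response: reward(prompt, response))
```

A reward model that over-scores out-of-distribution responses gets "hacked" by exactly this argmax, which is why BoN is a common stress test for reward-model robustness.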
Summary: This paper proposes generalizable reward model (GRM) which modifies the standard reward-learning objective by adding an auxiliary task with a separate language modeling head. The auxiliary loss is either DPO or SFT. Experiments and ablations are conducted using mistral and gemma models and data from unified-feedback and evaluating on HHH, MT-bench, and rewardbench showing GRM improves OOD performance. Finally, GRM reward models are used to train new policies. Strengths: 1. The idea of adding an auxiliary loss is simple and elegant. The paper provides a good intuitive argument for why it may be useful and clearly presents the potential ways to implement the auxiliary loss (DPO and SFT). 2. The experiments and ablations are thorough and well-motivated. The paper considers a variety of baseline techniques to make reward models more robust like adding a margin term, label smoothing, or an ensemble. The paper also does a good job of comparing the low-data and high-data regimes. 3. The results seem to show a consistent, if sometimes modest, improvement from GRM over baseline methods. And the improvements are larger in the low-data regime. Weaknesses: 1. While the paper is generally comprehensive, there could be additional exploration of just using language modeling as an auxiliary task. It is not clear if the benefit of GRM is coming from language modeling the data from the preference dataset or if it would benefit from language modeling on any data (or even data from an even broader distribution, controlling for more OOD data). A further exploration of this would be good. Technical Quality: 3 Clarity: 3 Questions for Authors: None Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There could be more discussion of limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable comments, and we provide clarification to your concerns as follows. **Q1:** While the paper is generally comprehensive, there could be additional exploration of just using language modeling as an auxiliary task. It is not clear if the benefit of GRM is coming from language modeling the data from the preference dataset or if it would benefit from language modeling on any data (or even data from an even broader distribution, controlling for more OOD data). A further exploration of this would be good. **A1:** Thanks for the insightful question. We believe that other data formats can also be useful, but preference data is more advantageous because it allows us to avoid using external datasets during reward modeling by leveraging the preference dataset, which may better match the distribution of prompts and responses. To illustrate this, we provide an experiment below using GRM with text-generation regularization on an open-source pretraining dataset, togethercomputer/RedPajama-Data-1T-Sample (including text from Commoncrawl, Arxiv, and books), referred to as 'GRM pretrain reg'. The results in **global response B** suggest that 'GRM pretrain reg' outperforms the baseline reward model and matches the performance of GRM when the dataset size is large (400K). However, when the dataset size is small, using a pretraining dataset is less effective than using the preference dataset. **Q2:** There could be more discussion of limitations **A2:** Thank you for the question. In addition to the limitations mentioned in Appendix D, another constraint is that we did not test reward models with parameter sizes exceeding 10B due to computational limitations. In the main paper, we tested models with sizes of 2B and 7B. During the rebuttal period, we also provided results for 8B reward models in **global response A**, which demonstrate promising potential for scaling GRM to larger reward models and datasets. 
Further efforts to extend our method to even larger reward models could be highly promising. --- Rebuttal Comment 1.1: Comment: Thanks for your response and for running the additional experiments. Indeed it is interesting that using pre-training data for regularization can also be effective, while not quite as effective as the original method. I will leave my accept score and continue to think this is a strong paper. --- Reply to Comment 1.1.1: Comment: Thank you for your positive evaluation of our work. We greatly appreciate the insightful questions, particularly the one regarding the use of other text data for regularization, as they help make our work more comprehensive.
Summary: The paper addresses the limitations of current reward models used in the reinforcement learning from human feedback (RLHF) framework, specifically their generalization capabilities to unseen prompts and responses. This limitation often leads to reward over-optimization, where the excessive optimization of rewards results in a decline in actual performance. The study proposes an approach to enhance the reward model's generalization ability against distribution shifts by regularizing the hidden states. Strengths: The motivation for the study is sound, and the experiments validate the effectiveness of the proposed method. Weaknesses: 1. The proposed method lacks innovation as it combines DPO and SFT loss into the RM training phase as a regularization term. The inclusion of SFT loss in RM training has already been explored in previous works, such as InstructGPT and Anthropic's RM training. 2. In the introduction, the authors mention that a randomly initialized head can distort pre-trained features, negatively impacting out-of-distribution (OOD) performance. Inspired by this finding, they propose to regularize feature distortion during fine-tuning for preference learning. However, there is no experimental evidence provided to support that this motivation holds true for RM. The improvements from adding regularization alone do not sufficiently prove that this motivation is solid. 3. The results for label smoothing are missing in Figures 2 and 3, which should be addressed. 4. The experimental section should include the alignment results after RL. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the Weaknesses section for questions regarding the paper. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable comments, and we provide clarification to your concerns as follows. **Q1**: The proposed method lacks innovation ... The inclusion of SFT loss in RM training has already been explored in previous works, such as InstructGPT and Anthropic's RM training. **A1:** We would like to clarify that **no previous work has incorporated text-generation loss as a regularization term for reward modeling**. InstructGPT and Anthropic's RM training optimize the vanilla reward loss (Eq 1 in [1]) based on the Bradley-Terry model, and their SFT regularization is typically applied to **language model training rather than reward modeling**. Our method is novel in that it integrates text-generation loss directly into the reward modeling process, a fundamentally different task from text generation. The enhancement effect of text-generation loss on reward modeling has not been explored in prior work. Additionally, our reward model structure is distinct to enforce this regularization. We maintain both a language head and a reward head within the reward model and apply text-generation loss to the language head during reward training. In our experiments, compared to previous methods that use ensemble techniques [2][3] to enhance reward model capabilities, our method significantly reduces computational costs while achieving superior performance. This demonstrates promising results for more reliable and cost-effective reward modeling. **Q2**: No experimental evidence supporting the claim that a randomly initialized head can distort pre-trained features in RM, consequently leading to a negative impact on OOD performance. **A2:** The phenomenon is well-documented by [4], both theoretically and empirically (across a range of computer vision tasks). It is also easy to validate in the preference learning setting when using a smaller dataset size. 
We included a baseline, "Classifier (Frozen)", which fixes the base model’s features and only fine-tunes the classification head. When the dataset size is 8K (see the table below), the OOD evaluation results of the baseline reward model (without freezing the backbone) are worse than those of the frozen one, demonstrating the negative effect of distorting pre-trained features. However, we would like to note that when the dataset size is sufficiently large, this negative effect can be mitigated, and the baseline reward model can surpass the frozen reward model due to having more trainable parameters to fit the large dataset. In contrast, by regularizing the hidden states, our GRM can achieve the regularizing effect while fine-tuning all parameters, showing strong performance with both large and small dataset sizes.

| Reward Model (8K data) | Unified Feedback (IID) | HHH Alignment (OOD) | MT Bench (OOD) |
|---|---|---|---|
| Classifier (Frozen) | 62.2 | 68.8 | 67.6 |
| Classifier (Baseline) | 66.1 | 65.1 | 67.7 |
| GRM (ours) | **69.0** | **71.9** | **69.8** |

**Q3**: The results for label smoothing are missing in Figures 2 and 3, which should be addressed.

**A3:** We found that reward models with label smoothing are easily hacked by BoN and PPO, resulting in worse performance compared to other baselines. Therefore, we omitted them from the policy optimization experiments. To address the reviewer's concern, we have included the label smoothing results for the BoN and PPO experiments using the 7B base model in **Figure 1 of the supplementary PDF** provided during the rebuttal.

**Q4**: The experimental section should include the alignment results after RL.

**A4:** In Section 5.2, we evaluated different reward models for both RL training and Best-of-N sampling, using the gold score as the measurement for alignment, as adopted by prior studies [2][5] for reward modeling.
Additionally, to demonstrate the advantage of GRM over vanilla reward modeling, we evaluated **the win rate of models after PPO training with GRM against those with the vanilla reward model**. The evaluation was conducted using GPT-4o on 100 randomly selected prompts from the test set in UnifiedFeedback, with the order of responses randomly flipped to avoid order bias. The results below show a significantly higher win rate for GRM than the vanilla reward model across two different base reward models.

| Base reward model | Win rate | Tie rate | Loss rate |
|---|---|---|---|
| Gemma 2B it | 0.68 | 0.05 | 0.27 |
| Mistral 7B Instruct | 0.73 | 0.06 | 0.21 |

--- Rebuttal 2: Title: Looking Forward to Your Valuable Feedback Comment: Dear Reviewer tSHJ, We deeply appreciate your time, effort, and valuable feedback. We hope that our responses, including detailed explanations about the differences between the prior reward modeling paradigm and our approach, as well as additional experimental results on feature distortion and alignment after RL, have addressed your concerns. If our responses have resolved your issues, we would greatly appreciate it if you could consider raising the score. If you have any additional concerns, please feel free to post them, and we would be happy to discuss them further before the discussion deadline on Aug 13th. Thank you once again for your thoughtful review. Best, The Authors --- Rebuttal 3: Title: Looking Forward to Your Feedback Comment: Dear Reviewer tSHJ, We deeply appreciate your time, effort, and valuable feedback. **Since the discussion deadline is less than 10 hours away and we have not yet received an acknowledgment from you, we kindly request you to review our replies**.
We hope that our responses, including detailed explanations about the differences between the prior reward modeling paradigm and our approach, as well as additional experimental results on feature distortion and alignment after RL, have addressed your concerns. If our responses have resolved your issues, we would greatly appreciate it if you could consider raising the score. If you have any additional concerns, please feel free to post them, and we would be happy to discuss them further before the discussion deadline. Thank you once again for your constructive review. Best, The Authors
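The win-rate protocol described in this rebuttal thread (a pairwise LLM judge with the response order randomly flipped to cancel position bias) can be sketched as follows. The `judge` callable standing in for GPT-4o is a hypothetical stub:

```python
import random

def judged_win_rate(prompts, responses_a, responses_b, judge, seed=0):
    # judge(prompt, first, second) returns "first", "second", or "tie";
    # the presentation order is randomly flipped to cancel position bias
    rng = random.Random(seed)
    wins = ties = 0
    for prompt, a, b in zip(prompts, responses_a, responses_b):
        flipped = rng.random() < 0.5
        first, second = (b, a) if flipped else (a, b)
        verdict = judge(prompt, first, second)
        if verdict == "tie":
            ties += 1
        elif (verdict == "first") != flipped:
            wins += 1  # A won: it was "first" unflipped, or "second" flipped
    n = len(prompts)
    return wins / n, ties / n
```

An order-consistent judge (one whose verdict tracks the responses, not their positions) yields the same win rate regardless of how the flips fall, which is the point of the randomization.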
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive comments and are particularly grateful for their recognition of our work: 'sound and strong motivation' (Reviewers tSHJ, VYhZ), 'elegant idea' (Reviewer NAqK), 'thorough experiments' (Reviewers NAqK, VYhZ), and 'well-written' (Reviewer SDk2). We hope our responses have addressed all of the reviewers' concerns. If you have any additional questions, please feel free to post them, and we will be happy to discuss further. Below, we provide new experimental results and global references.

### A. **8B full parameter finetuning on larger dataset**

To further demonstrate the effectiveness of GRM, we present additional results on RewardBench, a new benchmark for human preference. We trained GRM using the llama3-8b-instruct model, performing full parameter fine-tuning for 1 epoch on one of the largest open-source preference datasets, hendrydong/preference_700K. Our results indicate the **strong potential of scaling GRM to larger models and datasets**, even outperforming a 34B reward model and GPT-4 as a judge.

| Reward model | Average | Chat | Chat Hard | Safety | Reasoning |
|:---:|:---:|:---:|:---:|:---:|:---:|
| GRM (Ours, 8B) | 87.0 | 98.6 | 67.8 | 89.4 | 92.3 |
| openai/gpt-4-0125-preview | 85.9 | 95.3 | 74.3 | 87.2 | 86.9 |
| openai/gpt-4-turbo-2024-04-09 | 85.1 | 95.3 | 75.4 | 87.1 | 82.7 |
| Classifier (baseline, 8B) | 84.7 | 99.4 | 65.1 | 87.8 | 86.4 |
| Nexusflow/Starling-RM-34B | 82.7 | 96.9 | 57.2 | 88.2 | 88.5 |

### B. **Regularization with pretraining dataset**

In our default design, we use the preference dataset employed to train reward models to regularize the text-generation ability of the language head, eliminating the need for additional datasets. While we believe that other data formats, such as pretraining datasets, can also be beneficial, preference data offers a distinct advantage.
It allows us to avoid using external datasets during reward modeling, which may also better align with the distribution of prompts and responses. To illustrate this, we conducted an experiment using GRM with text-generation regularization on an open-source pretraining dataset, togethercomputer/RedPajama-Data-1T-Sample (which includes text from Commoncrawl, Arxiv, and books), referred to as **'GRM pretrain reg'**. For fairness, we only used a pretraining dataset of the same size as the training set for reward modeling. The results indicate that 'GRM pretrain reg' outperforms the baseline reward model and matches the performance of GRM when the dataset size is large (400K). However, when the dataset size is small, using a pretraining dataset is less effective than using the preference dataset.

| Reward Model (400K data) | Unified Feedback (IID) | HHH Alignment (OOD) | MT Bench (OOD) |
|---|---|---|---|
| Classifier (Baseline) | 72.1 | 73.4 | 71.2 |
| GRM | **73.2** | **79.8** | 73.4 |
| GRM pretrain reg | 73.0 | 79.2 | **74.3** |

| Reward Model (40K data) | Unified Feedback (IID) | HHH Alignment (OOD) | MT Bench (OOD) |
|---|---|---|---|
| Classifier (Baseline) | 68.8 | 70.3 | 69.1 |
| GRM | **71.5** | **78.7** | **73.0** |
| GRM pretrain reg | 70.8 | 74.5 | 72.9 |

### C. **Supplementary PDF**

In our supplementary PDF for the rebuttal, we include three figures:
- **Figure 1**: We present the results of label smoothing to address reviewer tSHJ's concern.
- **Figure 2**: We provide the results of PPO with a learning rate of $1 \times 10^{-5}$ to address reviewer SDk2's concern about the hyperparameter of PPO.
- **Figure 3**: We include the results of PPO in a noisy label setting.

These results consistently demonstrate the superiority of GRM compared to other reward model baselines.

### D.
**References** [1] Training language models to follow instructions with human feedback. NeurIPS, 2022. [2] Reward Model Ensembles Help Mitigate Overoptimization. ICLR 2024. [3] Warm: On the benefits of weight averaged reward models. ICML, 2024. [4] Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. ICLR 2022. [5] Scaling laws for reward model overoptimization. ICML, 2023. Pdf: /pdf/a8bc9dec6e8c57761e93cf7e2a97893f2f0059a2.pdf
NeurIPS_2024_submissions_huggingface
2024
Node-Level Topological Representation Learning on Point Clouds
Reject
Summary: Large scale topological descriptors of data are leveraged to compute point/node-level descriptors, which encode to which large scale topological feature each point belongs. For this, a combination of applied algebraic topology and applied harmonic analysis is used. More specifically, large scale homological features are computed using persistent homology, then represented with harmonic cocycles, and then averaged locally to obtain point-level descriptors. The problem of topological clustering (already introduced in the literature) is addressed, whose objective is to determine to which large scale topological feature a certain data point belongs. A set of benchmark datasets are introduced for topological clustering. The pipeline is applied to these datasets as well as real world datasets. Strengths: - Large scale topology of data is leveraged to assign point/node-level features to data. This gives concrete meaning to what it means for a data point to belong to a certain large scale topological feature. - The method is based on well-established mathematical concepts. - The concept of topological clustering is interesting and has potential. - A suite of synthetic datasets is introduced. Weaknesses: Regarding unjustified claims: - Existing approaches are undersold. Specifically, in the introduction it is said that "none of these approaches is able to represent higher-order topological information" and that "such higher-order topological information is however invisible to standard tools of data analysis like PCA or k-means clustering". However, cluster structure is topological structure. Does "higher order" mean homology in dimensions 1 and above? - Remark 4.2 says that "datasets with topological structure consist in a majority of cases of points sampled with noise from deformed n-spheres". This seems like a really strong claim. Is there any evidence of this? Regarding theory: - Theorem 4.1 applies in a very restricted scenario.
Moreover, I do not understand why the harmonic representative takes values in {0, -1, 1}. This seems very surprising since harmonic cycles/cocycles almost always take fractional values (in order to minimize energy). I did not understand the proof of this fact; specifically, why $g$ being a harmonic generator for the entire filtration range of $(b,1)$ implies this claim. Regarding the methodology: - The method, specifically line 225, seems to assume that a cycle with coefficients in $Z/3Z$ will also be a cycle when interpreting those coefficients (0, -1, or 1) as real numbers. However, this need not be the case. To see this it suffices to consider a simplicial complex given by a triangle with no interior. Thus, step 3 of Algorithm 1 (and the method more generally) seems to be heuristic. - The setup of Table 1 is unclear to me. How can one compare TOPF, which produces feature vectors, with, say, DBSCAN, which produces a clustering? - Figure 4 is hard to interpret. For example, how should one assess the effectiveness of the algorithm in Fig 4(a)? - The methodology has many hyperparameters. Some choices, like delta=0.07 in line 241, seem arbitrary. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Does your method have an interpretation in the case of a Riemannian manifold? That is, suppose that I take a Riemannian manifold and a harmonic (smooth) cocycle. Is there a corresponding point-level feature function? Does it have an interpretation? Perhaps it is related to the pointwise norm of the harmonic cocycle? 2. How were the parameters of the other algorithms in table 1 selected? 3. Which further applications do you have in mind? Figure 4 is interesting, but does not hint at real life applications. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - The experimental evaluation is limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough review and feedback! We believe we have significantly improved the paper based upon your comments! We will address the individual points: # "Regarding unjustified claims": 1. By higher-order we mean of order 1 and above in the sense of homology. In this interpretation, our claims are true and we are not underselling existing approaches, as cluster structure is only 0-th order homology structure. However, we want to be as clear and fair as possible. We have revised the sentences for clarity and state the differences more clearly. 2. This is not as strong a claim as it might seem. Ordinary cluster structures are basically captured by noisy 0-spheres, 1-dimensional homology appears almost always in the form of some noisy deformed cycle, and so on. Even very basic non-sphere structures like tori appear only in very rare circumstances in real-world data, as is evidenced by the existing applied TDA literature. Finally, we explicitly did not claim this for all datasets and thus feel justified. ### Regarding Theory 1. It is true that Theorem 4.1 applies in a restricted scenario. The purpose of Thm 4.1 is to verify that TOPF does __provably correct things__ on ideal data sets, recovering the implanted topological structure. 2. Thank you very much for reading the proof! :) Harmonic cycles (such as those considered by TOPF) indeed take absolute values between 0 and 1 almost always on the majority of the simplices. This holds unless they __don't have parallel simplices around the same generalised hole__: In this case, it is not possible to minimise the energy by distributing the "flow" (in the 1-dimensional case) among the parallel simplices, and a single simplex needs to account for all the contribution to homology. This is exactly the case in the theorem: Because the points lie on a __perfect n-sphere with radius 1, the first n-simplex appears only at point 1 in the alpha-filtration__.
(In a VR filtration, this would not hold.) Without any n-simplices, there are __no "parallel" (n-1)-simplices__, and thus every simplex has either __value -1 or 1__ (depending on orientation) or 0 (or a normalised version of this). We will carefully describe this argument in the proof. ### Regarding the methodology * The example given by the reviewer does not underline their claim: In Z/3Z coefficients, a homology representative for the triangle needs to assign every edge a value such that the oriented sum (+1 at the tip of the edge and -1 at the tail) at each of the nodes is equal to 0. If we fix the orientations of the edges to all be clockwise, this is only possible with all edges having the same value. All edges having a 0 obviously does not work, so (1,1,1) and (-1,-1,-1) remain as options. In the same setting, these are generators for real-valued homology as well. In general, it is true that being a generator of Z/p-homology does not guarantee being a generator of homology with real coefficients. However, there are multiple reasons why this is not a problem in practice. 1. The homology generators computed by persistent homology have a very special structure. In dimension 1, instead of being some random formal sum of edges, they form a sum of closed cycles. In higher dimensions, similar statements hold. This then means that the PH generators in Z/3Z coefficients will always be in the kernel of the boundary operator in R coefficients. 2. Now the only thing that can happen is that the generator is in the image of the boundary in R coefficients. This is precisely the case when the represented homology class has p-torsion. On real-world data, this is very rare. (This is mainly due to the fact that a manifold needs to be "very complicated" to have torsion, whereas torsion homology groups are easy to achieve. E.g., RP^2 (the simplest space with torsion in its homology groups) can only be embedded in R^4 or above.) 3.
Furthermore, it does not suffice for the space to have a homology class with some torsion; only homology classes with 3-torsion will be relevant. 4. We can very easily check whether this is the case: If the projection of the generator to the harmonic space, which we compute anyway, vanishes, the considered homology class has torsion, and thus we can safely exclude it from our analyses. We will add a check for this in our implementation. 5. We are not throwing any harmonic information away this way: all homology classes representable by harmonic forms don't have torsion (in Z-coefficients), and all homology classes without torsion appear in Z/3Z homology. We will add a version of this explanation to the appendix, thank you very much for pointing this out! * We run spectral clustering on the feature vectors produced by TOPF to obtain a clustering of the points. (k-means or similar clustering algorithms would work as well.) We have added an explanation to the experiments section and the caption of Table 1. Thank you for catching this! * The purpose of Fig. 4 is to show that TOPF corresponds roughly to our intuition of relevant topology: In a), the features detect some of the pockets (green, black, purple) of the protein and the hole in the middle (red), in b) the loops in the protein structure are recovered, and in c) the two regimes of the Lorenz attractor are recovered. Because there is no ground truth for this, we can't report an ARI as we did in the quantitative experiments. * We have added experiments on the performance of TOPF on TCBS for a wide range of hyperparameter settings (Fig. 1 of the pdf). In summary, TOPF is not very sensitive to hyperparameter changes around the default values, making hyperparameter choices robust. We have picked the hyperparameters for TOPF to perform well across the large range of applications in our paper. We will add the experiments to the appendix section discussing the hyperparameters.
We apologise, but we will answer the remaining questions in a comment. --- Rebuttal 2: Comment: Thank you for your responses. - "Harmonic cycles (as those considered by topf) indeed take absolute values between 0 and 1 almost always on the majority of the simplices." This is not what your theorem says. It says that it takes values in {$0,\pm 1$}, which still seems wrong to me. - "parallel simplices around the same generalised hole". What is a "generalised hole"? I did not understand this part of the discussion. - "We run spectral clustering on the feature vectors produced by TOPF". How were the parameters selected? - I am not convinced by your argument for lifting Z/3Z cocycles to Z-cocycles. Indeed, the original paper that proposed to get circular coordinates out of persistent homology computations [1] has a whole section on this, "2.4 Lifting to Integer Coefficients". In particular, they say that when the "easy" lift that you use fails, they don't have particular guidance as to how to proceed: "This is all very well. Unfortunately, the equation η = d1ζ is a Diophantine linear system. At present, we can provide no particular guidance as to how to solve the system (other than by vague appeal to off-the-shelf Diophantine or integer linear programming solvers), even if we know that a solution exists." Since the publication of that paper, the software DREiMac has been published. They do address this issue using a linear programming solver. See the documentation at [2], where they also have an example showing how things can fail. In particular, that example shows that torsion is not the only issue that can occur, as that data has no torsion. [1] Persistent Cohomology and Circular Coordinates. Vin de Silva, Dmitriy Morozov & Mikael Vejdemo-Johansson.
https://link.springer.com/article/10.1007/s00454-011-9344-x [2] https://dreimac.scikit-tda.org/en/latest/notebooks/parameters_prime_and_check_cocycle_condition.html --- Rebuttal Comment 2.1: Comment: Thank you very much for your in-depth response! This is incredibly helpful! We will reply to your comments below. > "Harmonic cycles (as those considered by topf) indeed take absolute values between 0 and 1 almost always on the majority of the simplices." This is not what your theorem says. It says that it takes values in $\{0,\pm 1\}$, which still seems wrong to me. We should have been more precise in our previous reply. Consider a $k$-homology representative $r$ with simplex-values in $\{-1,0,1\}$. We consider the harmonic projection $h$ of $r$ into the harmonic space. Then, for a $k$-simplex $\sigma$, we have that $h(\sigma)=\pm 1$ iff $r(\sigma) = \pm 1$ AND $\sigma$ is not a face of any $(k+1)$-simplex. This can for example be seen by representing $h$ as the difference of $r$ and its gradient and curl parts $h=r-r_{grad}-r_{curl}$. Because $r$ is already a cycle, $B_kr=0$ and thus $r_{grad}=0$. Now, the curl part can be written as stemming from a signal on the $(k+1)$-simplices, $r_{curl}=B_{k+1}x_{curl}$. However, because $\sigma$ is not the face of a $(k+1)$-simplex, $r_{curl}(\sigma)=0$ and thus $h(\sigma) = r(\sigma)$. The setting of the theorem precisely constructs a case where we have $k$-simplices, but no $(k+1)$-simplices. This is however an idealised setting. Because we construct the simplicial complex to compute the harmonic representative somewhere in the middle (determined by the interpolation coefficient) between the birth and death time of the homology class, empirically the majority of the $k$-simplices of the $k$-homology generators are faces of $(k+1)$-simplices. In this case, the harmonic representative smoothes out the feature. > "parallel simplices around the same generalised hole". What is a "generalised hole"? I did not understand this part of the discussion. 
We apologise for using imprecise terminology in the reply. With parallel simplices we mean $k$-simplices that are connected via a number of $(k+1)$-simplices. "Around the same generalised hole" is just a picture for them being part of the same harmonic representative of a homology class. > "We run spectral clustering on the feature vectors produced by TOPF". How were the parameters selected? We use the default parameters suggested by scikit-learn. In case the number of clusters is known, we pass this to TOPF. If not, the number of selected topological features in the previous step is passed as n_clusters. > I am not convinced by your argument for lifting Z/3Z cocycles to Z-cocycles. [..] Thank you very much for your detailed and well-researched response! We will dedicate a section in the appendix to this problem. However, we don't believe that this is a serious problem for TOPF, due to multiple reasons we list below. 1. The first thing we did was to analyse this problem empirically. We checked the trefoil knot considered in the DREiMac documentation and verified that TOPF computed correct homology representatives. We then went on to check over 3000 homology representatives as computed by Ripserer.jl and lifted them to R-coefficients as described in our paper. There was not a single case where the lift was not a valid homology representative in R-coefficients. This suggests that this is at most an incredibly rare problem in practice. Of course, such an analysis is not entirely satisfactory. [Continued in the next comment] --- Reply to Comment 2.1.1: Comment: 2. The __key difference__ between the methods presented in the cited paper and TOPF is that TOPF works with __homology representatives__ instead of the __cohomology representatives__ used by DREiMac. TOPF computes the homology representatives using the involuted persistent homology algorithm as implemented in [1]. 
While homology representatives are a little more expensive to compute, they have some advantageous properties in comparison to cohomology representatives. [2] A nice picture of the difference between homology and cohomology representatives can for example be found here [3]. Intuitively speaking, the homology representative already "guides the harmonic representative around the hole", whereas the cohomology representative only selects a number of parallel simplices (i.e. connected by higher-order simplices) where the harmonic representative starts from and ends in. 3. As stated in the paper and in the DREiMac documentation, the only problem occurs __if the lifted representative $\eta$ is not in the kernel of the (co)boundary__, i.e. $d\eta \neq 0$, but $d\eta=3\omega$ for some $\omega$ (with our choice of $p=3$). For any randomly chosen homology representative in Z/3Z coefficients, this could very well be the case and present a problem. However, the involuted persistent homology algorithm __does not output a random representative__. Rather, the representatives are the result of a matrix reduction algorithm. In [3], the authors of Ripserer.jl state that the computed homology representatives __will always be a cycle with all but one of the connected components being contractible__ at the time the homology class exists. Empirically, this has always been the case, with the non-trivial connected component of the cycle being an ordinary cycle. We will look into whether we can give a proof for this based on [1]. Even if this should not be possible, it will not be a problem. __We have checked over 3000 homology classes on different datasets; not a single one of them failed to lift to an R-homology class__. In the unlikely case this happens, we __could catch the error__. We will reference DREiMac and use the same integer linear programming solution to rectify faulty cycles in a future release of the accompanying software package. 4. 
Our last observation is that in [4], the examples of the failures included the circular coordinates __wrapping around the hole too many times__. (I.e., one traversal of the cycle would amount to multiples of $2\pi$.) This __does not lead to a problem in our interpretation__, because we are just interested in __whether an edge contributes to a homology class__ as classified by its harmonic representative. In summary, we work with __homology__ representatives in comparison to the __cohomology__ representatives used by [4], which are __better behaved__. The only case where a problem could arise is the case where the Z/3Z homology representative produced by Ripserer does not contain a connected component with an ordinary non-trivial cycle (i.e. a cycle in the sense of graph theory / in Z-coefficients). We believe that this does not happen in practice because of the reduction algorithm of [1] computing the homology representative. Empirically, we have __never seen this happen__ on a __large sample size__. If it should still happen against all odds, we can __simply check for this__. Thus, we believe the discussed theoretical considerations will not be a significant problem in practice. We will distill this into a section of the appendix, referencing DREiMac and a relevant selection of all the circular/sparse circular/Eilenberg-MacLane coordinates papers. __Thank you so much for this discussion!__ [1] Matija Čufar and Žiga Virk: "Fast Computation of Persistent Homology Representatives With Involuted Persistent Homology." [2] Naively thinking, one could expect that because homology and cohomology are isomorphic over R-coefficients, the harmonic representative does not depend on whether homology or cohomology representatives are chosen. This is not true, however. The projections of the homology generators and the projections of the cohomology generators at certain filtration values in the persistence diagram form two distinct bases of the harmonic space. 
[3] https://mtsch.github.io/Ripserer.jl/dev/generated/cocycles/ [4] https://dreimac.scikit-tda.org/en/latest/notebooks/parameters_prime_and_check_cocycle_condition.html
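The "simply check for this" step discussed above (verifying that the naive lift of a Z/3Z representative, with residue 2 mapped to -1, is still a cycle over the integers) can be sketched with a toy boundary matrix. The complex below is a made-up illustration, not TOPF's actual data structures:

```python
import numpy as np

def lift_is_cycle(boundary, rep_mod3):
    """Lift a Z/3Z representative to Z-coefficients (residue 2 -> -1) and
    check that it is still a cycle over the integers, i.e. boundary @ lift == 0."""
    lift = np.where(rep_mod3 == 2, -1, rep_mod3).astype(int)
    return bool(np.all(boundary @ lift == 0)), lift

# Boundary matrix (vertices x edges) of a hollow triangle a-b-c,
# with oriented edges ab, bc, ca.
B1 = np.array([
    [-1,  0,  1],  # vertex a
    [ 1, -1,  0],  # vertex b
    [ 0,  1, -1],  # vertex c
])

ok, lift = lift_is_cycle(B1, np.array([1, 1, 1]))  # the triangle loop
# ok == True: the easy lift already works, no integer programming needed.
```

Only when such a check fails would one fall back to the integer linear programming repair used by DREiMac.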
Summary: This paper introduces TOPF, a topological feature extraction mechanism on point cloud data. The authors consider Vietoris-Rips/$\alpha$ filtrations over point clouds and compute the persistent homology. They propose a heuristic to select the “top” features from the barcodes. They consider the corresponding representatives for these features and project them onto the harmonic space of the simplices. These projected vectors are then normalized and used to construct a point-level feature vector. The authors use this framework for clustering. Towards this end, the authors introduce a topological point cloud clustering benchmark and report the experimental results on this benchmark. Strengths: The authors propose to use the Hodge Laplacian and Hodge decomposition to compute feature vectors over points in point-cloud data, which is a novel idea. Weaknesses: 1. I do not fully understand the “learning” the representation here, because the representation is not particularly being learnt. It is being computed by using the persistent homology of the point cloud. 2. Experimental evaluation is limited to clustering. And even in clustering, it is primarily limited to shapes which are partially/fully topologically spherical. 3. The robustness of the approach is due to the robustness of harmonic persistent homology known in the literature. 4. The paper uses well-known notions in the TDA literature in the context of point-clouds, which amounts to incremental progress in this direction. 5. It would strengthen the paper if the authors include a small paragraph explaining why projecting onto the harmonic subspace solves the problems that exist in using the homology representatives directly. Minor: Page 2, Line 81: ‘Spaces in topology are “continuous”’. Continuity is a notion defined for functions on topological spaces and not for topological spaces themselves. Spaces are connected. 
Technical Quality: 2 Clarity: 3 Questions for Authors: Can this approach be used for other point-cloud related machine learning tasks? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes, the authors have discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough review and feedback! We will now address the points raised in the review: > I do not fully understand the “learning” the representation here, because the representation is not particularly being learnt. It is being computed by using the persistent homology of the point cloud There seem to be different views of what should be considered as “learning” in the community. The English Wikipedia article on feature & representation learning explicitly lists unsupervised algorithms like PCA, k-means, and LLE, which “learn” the representation by performing specific computations. Topological Point Features fit in well with this view on representation learning. However, we want to be very clear about what TOPF does and what it does not do. Thus we have added a clarification on this to the introduction and thank the reviewer for their valuable remark! > Experimental evaluation is limited to clustering. And even in clustering, it is primarily limited to shapes which are partially/fully topologically spherical. In the qualitative experiments, we reported the feature-level vectors instead of the resulting clusterings. (See Figures 4 & 8.) [Same as author rebuttal:] We agree with the reviewer that having more experiments is always a great thing! We have now added an experiment where we use TOPF to extract topological features on the embedding space of a variational autoencoder trained on real-world image data and show that these point-level features correspond to the underlying topology of the data acquisition process. (See Figure 2 of the attached pdf.) Already in the pre-Rebuttal-phase, the non-main-text (excluding the paper checklist) was longer than the main text of our paper. 
Downstream applications in TDA usually require a lot of additional explanation and description of the experimental setup and are significantly more complex than running simple off-the-shelf benchmarks, as is the case in many other areas of Machine Learning. This is why in many cases, the introduction of new methods and the application on real-world data are even divided into two different papers. Because of these reasons, we believe that adding experiments on top of our introduction of a new Clustering Benchmark Suite (TCBS), comparison with existing methods on TCBS, experiments on real-world protein data and on state spaces of dynamical systems, on the latent space of a Variational Auto Encoder, and on robustness with respect to noisy data, robustness wrt. hyperparameter choices, and on run-times on growing point clouds would be out of scope for a single NeurIPS paper that introduces a theoretically novel method for extracting topological point features. > The robustness of the approach is due to the robustness of harmonic persistent homology known in the literature Generally, we consider this to be a strength. However, we want to highlight that, to the best of our knowledge, this is the first work to utilise this robustness in a novel method. To the best of our knowledge, the only other prior experimental work using harmonic persistent homology [1] always constructs simplicial complexes at the birth of the feature and extracts different features from this than we do. The simplicial complexes at the birth of the features are unstable. > The paper uses well-known notions in the TDA literature in the context of point-clouds, which amounts to incremental progress in this direction. We want to emphasise that doing TDA and relating the __global topology back to the local individual points__ is a very novel idea in TDA with very limited prior work. 
Although our work builds on already established concepts and theory, it combines these pieces in a novel way that has not been done before. > It would strengthen the paper if the authors include a small paragraph explaining why projecting onto the harmonic subspace solves the problems that exist in using the homology representatives directly. Thank you very much for this suggestion! We will happily add a paragraph on that! Please note that we already provide some explanation on this in lines 164--175. There are two main reasons that projecting onto the harmonic space is a good idea: 1. All of the countless [2] representatives representing the same homology class get projected onto the same harmonic representative. Thus, harmonic representatives form a canonical and unique homology representation. (This can be explained by the Hodge Laplacian and the fact that $\ker L_k(X) \cong H_k(X)$.) 2. The harmonic representatives minimise the energy among all possible representatives. In other words, this ensures smoothness of the representatives and assigns every simplex a value that corresponds to how much it contributes to the homology class. (The precise mathematical formulation is simply the minimisation of energy.) We will explain this in more detail in the appendix! Thank you again for the suggestion. > ‘Spaces in topology are “continuous”’. Continuity is a notion defined for functions on topological spaces and not for topological spaces themselves. Spaces are connected. Because "continuous" has no mathematical definition for spaces, we intended to use it to convey an intuition. "Connected" is suboptimal, as even a two-point set can be connected if equipped with the trivial topology. However, to avoid confusion we will switch to "connected". Thank you again for noticing! > Can this approach be used for other point-cloud related machine learning tasks? We believe that topological features can be used in many applications involving data with topological structure. 
We have added an experiment on the interpretability of latent spaces of VAEs. (Figure 2 of the additional pdf) [1] Davide Gurnari, Aldo Guzmán-Sáenz, Filippo Utro, Aritra Bose, Saugata Basu, and Laxmi Parida. Probing omics data via harmonic persistent homology. [2] Actually, this is still a finite number as we are working with finite simplicial complexes. --- Rebuttal 2: Comment: I would like to thank the authors for their efforts and response. I have read the responses and have adjusted my score.
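The two reasons above (a canonical representative via $\ker L_k \cong H_k$, and energy minimisation) can be illustrated numerically. The small complex below, a square hole with one filled triangle attached, is a made-up example assuming standard signed boundary matrices, not TOPF's implementation:

```python
import numpy as np

# Complex: vertices a,b,c,d; edges ab, bc, cd, da, ac; one filled triangle abc.
B1 = np.array([  # vertex-edge boundary matrix
    [-1,  0,  0,  1, -1],  # a
    [ 1, -1,  0,  0,  0],  # b
    [ 0,  1, -1,  0,  1],  # c
    [ 0,  0,  1, -1,  0],  # d
], dtype=float)
B2 = np.array([[1], [1], [0], [0], [-1]], dtype=float)  # boundary of abc

L1 = B1.T @ B1 + B2 @ B2.T                 # Hodge 1-Laplacian
eigvals, eigvecs = np.linalg.eigh(L1)
H = eigvecs[:, np.abs(eigvals) < 1e-8]     # basis of ker L1, the harmonic space

def harmonic(rep):
    """Orthogonal projection of an edge signal onto the harmonic space."""
    return H @ (H.T @ rep)

r1 = np.array([1., 1., 1., 1., 0.])  # loop a-b-c-d-a
r2 = np.array([0., 0., 1., 1., 1.])  # homologous loop a-c-d-a
# Both representatives of the same class project to one harmonic form,
# whose energy is at most that of either representative.
```

Here `harmonic(r1)` and `harmonic(r2)` coincide because `r1 - r2` is the boundary of the filled triangle, which is orthogonal to the kernel of `L1`.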
Summary: The paper introduces an approach to select and compute some point-level topological features for point cloud or general data set analysis. The main idea is to define a multi-scale simplicial complex representation, so that we can track how the homology modules change along the filtration and then select the homologies that persist for a long range of scales. Strengths: - Topological features are usually not localized; the idea of being able to bring back the topological descriptor to the relevant points is quite novel and impactful. - The approach is theoretically sound and well analyzed. - The experimental evaluation is limited but convincing. Weaknesses: - The feature selection is very heuristic. - The evaluation is only on point cloud clustering. Since we are evaluating effectiveness and robustness of localized features, feature/point correspondence problems would have been interesting. Technical Quality: 4 Clarity: 3 Questions for Authors: Is there scope to learn how to select the topological features? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The main limitation, i.e. the selection of the features, has been briefly addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their feedback! We are happy to read that they find the paper sound and well-presented. > Topological features are usually not localized, the idea of being able to bring back the topological descriptor to the relevant points is quite novel and impactful. We are very excited about this as well! :) > the feature selection is very heuristic. Topological features can have very different meaning and significance based on their context. Thus it does not seem plausible that there exists one and only one provably best and correct way to select features. The approach by TOPF takes a view based on concepts of __Algebraic Topology, Differential Geometry, and TDA__ to select the most significant features, which seems to work well in a __variety of applications__. Given a lot of training data for a particular case, we could of course train a neural network to select the most relevant features in this application. This is promising future work, but because this would require an entirely new architecture on top of TOPF we believe this would not fit well into the current paper. Furthermore, we have conducted additional experiments on the robustness of TOPF wrt. hyperparameter choices, see Figure 1 of the attached pdf. > The evaluation is only on point cloud clustering. Since we are evaluating effectiveness and robustness of localized features, feature/point correspondence problems would have been interesting. This sounds like a very interesting problem and exciting application, thank you very much! We understand that in a nutshell, the current SOTA methods use neural networks trained on certain sets of features. As this probably requires coming up with an (at least partly) new architecture, training the model, etc., this sounds like an entirely new additional idea, which would suggest featuring it in a follow-up paper. 
[Same as in author rebuttal:] We agree with the reviewer that having more experiments is always a great thing! We have now added an experiment where we use TOPF to extract topological features on the __embedding space of a variational autoencoder__ trained on real-world image data and show that these point-level features correspond to the __underlying topology of the data acquisition process__. (See Figure 2 of the attached pdf.) Already in the pre-Rebuttal-phase, the non-main-text (excluding the paper checklist) was longer than the main text of our paper. Downstream applications in TDA usually require a lot of additional explanation and description of the experimental setup and are significantly more complex than running simple off-the-shelf benchmarks, as is the case in many other areas of Machine Learning. This is why in many cases, the introduction of new methods and the application on real-world data are even divided into two different papers. Because of these reasons, we believe that adding experiments on top of our introduction of a __new Clustering Benchmark Suite (TCBS), comparison with existing methods on TCBS, experiments on real-world protein data and on state spaces of dynamical systems, on the latent space of a Variational Auto Encoder, and on robustness with respect to noisy data, robustness wrt. hyperparameter choices, and on run-times on growing point clouds__ would be out of scope for a single NeurIPS paper that introduces a theoretically novel method for extracting topological point features. However, we are excited to use and see others use TOPF for new applications. We thank you for taking the time to read our rebuttal! We are excited to hear back from you in the discussion phase! --- Rebuttal 2: Title: Reviewer--Author Discussion Period Closing soon Comment: Dear reviewer, Thank you again for your review and suggestions! 
In the rebuttal, we have added a new experiment on the __embedding space of variational autoencoders__, validated the __robustness of TOPF with respect to various hyperparameter changes__ experimentally, studied the __runtime on point clouds with growing $n$__, and added a section on the relationship between __Hodge theory, differential forms__, and TOPF and the __theoretical intuition__ behind our algorithm (see the rebuttal to reviewer Zmis). In particular, we hope to have addressed the points raised by the reviewer in our rebuttal above. As the author-reviewer discussion period closes in ~24h, we would be happy to answer any further questions or receive any more comments on our work and our rebuttal. In particular, we would be grateful to hear whether the reviewer feels that we have addressed the points raised by them. Thank you in advance for your reply!
Summary: The paper presents a novel method for extracting per-point topological features, TOPF. The method builds on previous results in topological data analysis which described a shape or a point cloud with a single global feature, by generating per-point topologically-aware features. The paper presents a quantitative evaluation and comparison of the proposed method with prior art on a new benchmark consisting of several synthetic examples, evaluates the robustness of the proposed method under noise, as well as presents qualitative examples of its performance on synthetic and real-world data. Strengths: * The paper is well written and easy to follow. The description of prior art and the proposed algorithm is detailed and comprehensive. * To my understanding, the paper describes a novel method for per-point feature extraction based on topological information contained in a point cloud, and describes theoretical guarantees for its correctness on point clouds sampled from multiple n-spheres. * The paper describes a new topological point clustering benchmark dataset consisting of seven synthetic point clouds with up to 5 labels, and evaluates the proposed and existing methods on this dataset showing that the proposed method outperforms existing methods in most cases. Weaknesses: * The paper lists common machine learning applications requiring point level features as a motivation for the proposed method. However, only quantitative experiments for point cloud clustering on a set of synthetic examples, and anecdotal evidence of performance on real world data, were presented. In order to fully understand the potential of the proposed approach to be applied beyond synthetic data, it would be beneficial to include additional evaluation, qualitative and quantitative, on real-world data and additional applications, e.g. as described in lines 304-307. * Specifically, it would be interesting to see experiments on non-synthetic datasets with topological structure mentioned in line 266. 
* Additionally, comparison with other well-performing modern machine learning methods, such as graph neural networks for point cloud clustering, needs to be discussed for completeness. Technical Quality: 2 Clarity: 3 Questions for Authors: * What is the runtime of the proposed method? How does it change with the point cloud size, and does it have limitations on the size of the point cloud that can be processed with it? Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: The authors adequately addressed the limitations and impact of the proposed approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their review! We are happy to hear that they find our work well-written and novel. > it would be beneficial to include additional evaluation, qualitative and quantitative, on real-world data and additional applications [Cf. author rebuttal:] We agree with the reviewer that having more experiments is always a great thing! We have now added an experiment where we use TOPF to extract topological features on the __embedding space of a variational autoencoder__ trained on real-world image data and show that these point-level features correspond to the underlying topology of the data acquisition process. (See Figure 2 of the attached pdf.) We believe this to be an interesting contribution to the field of interpretable AI. Already in the pre-Rebuttal-phase, the non-main-text (excluding the paper checklist) was longer than the main text of our paper. Downstream applications in TDA usually require a lot of additional explanation and description of the experimental setup and are significantly more complex than running simple off-the-shelf benchmarks, as is the case in many other areas of Machine Learning. This is why in many cases, the introduction of new methods and the application on real-world data are even divided into two different papers. Because of these reasons, we believe that adding experiments on top of our introduction of a __new Clustering Benchmark Suite (TCBS), comparison with existing methods on TCBS, experiments on real-world protein data and on state spaces of dynamical systems, on the latent space of a Variational Auto Encoder, and on robustness with respect to noisy data, robustness wrt. hyperparameter choices, and on run-times on growing point clouds__ would be out of scope for a single NeurIPS paper that introduces a theoretically novel method for extracting topological point features. However, we are excited to use and see others use TOPF for new applications. 
> Additionally, comparison with other well-performing modern machine learning methods, such as graph neural networks for point cloud clustering, needs to be discussed for completeness. TOPF is an __unsupervised feature extraction algorithm__, whereas GNNs require training data and labels. Thus, using GNNs on the TCBS is not possible. We have compared against the most relevant unsupervised algorithms known to us. If you believe an important unsupervised algorithm is missing, we would be happy to include it in the final paper. > What is the runtime of the proposed method? How does it change with the point cloud size and does it have limitations on the size of point cloud that can be processed with it? We report the runtime of the proposed method in Table 1. All the examples of the TCBS run in under 40s on a modern laptop. Furthermore, we have __added experiments__ on how the runtime changes when increasing the number of points while preserving topology (Figure 3 of the additional pdf). We hope to have convinced the reviewer of the soundness of our methods. We want to emphasise that doing TDA and relating the __global topology back to the individual points__ is a very novel idea with very limited prior work. TOPF is __not__ simply a slightly different algorithm among countless similar ML approaches. Thank you for reading our rebuttal; we would be very happy to answer any questions or hear back from you! --- Rebuttal Comment 1.1: Title: response to rebuttal Comment: Dear Reviewer mNMA, could you please respond to the authors' rebuttal. Thank you. --- Rebuttal 2: Title: response to rebuttal Comment: I would like to thank the authors for their efforts and response. > Already in the pre-Rebuttal-phase, the non-main-text (excluding the paper checklist) was longer than the main text of our paper. I don't believe the length of the paper can or should be used as a justification for missing experiments if these are necessary to fully appreciate the contribution of the paper. 
Specifically for this submission, experiments with data beyond a specifically designed synthetic dataset are *necessary* to validate that the proposed approach is applicable to data beyond such a dataset - if it is not the case, this would greatly limit the potential of the proposed approach. I would like to thank the authors for adding the new experiments with the embedding space of a variational autoencoder. > If you believe an important unsupervised algorithm is missing, we would be happy to include it in the final paper. Some examples of possibly relevant unsupervised methods in the image segmentation area are "DeepCut: Unsupervised Segmentation using Graph Neural Networks Clustering" by Aflalo et al. (specifically for GNNs) or "Unsupervised semantic segmentation by distilling feature correspondences" by Hamilton et al. It would be interesting to understand whether the above approaches designed for images, or any similar unsupervised approaches, can be applied to the proposed benchmark and how they would compare with the proposed approach. --- Rebuttal 3: Comment: Thank you very much for your response! > Specifically for this submission, experiments with data beyond a specifically designed synthetic dataset are necessary to validate that the proposed approach is applicable to data beyond such a dataset - if it is not the case, this would greatly limit the potential of the proposed approach. We agree with the well-written response of the reviewer in this regard! We just want to highlight that we already include experiments on data from __synthetic datasets__, on __protein atom coordinates__, on __state spaces of dynamical systems__, and now on __embedding spaces__ of __variational autoencoders__. > Some examples of possibly relevant unsupervised methods in image segmentation area are [...] Thank you for your suggestion! 
As we understand it, the two suggested image segmentation methods work on image data, where __every pixel in a grid has an (R,G,B) value__. The datasets of our benchmark suite are __point clouds in $n$-dimensional space without any specified or grid-like structure__. While it might be possible to turn an image into a point cloud in 5D, the reverse does not work. Thus it is not possible to use image segmentation algorithms as a baseline for the Topological Clustering Benchmark. Thank you again for this discussion; we will be happy to answer any further questions! --- Rebuttal 4: Title: Additional Baseline using pretrained PointNet Architecture Comment: We once again want to thank the reviewer for encouraging us to explore __additional baselines__ for the Topological Clustering Benchmark Suite! If we understood the reviewer correctly, they were interested in how __pretrained neural network architectures__ would perform on the introduced Topological Clustering Benchmark Suite. A good baseline for 3D point cloud segmentation tasks is __PointNet__ [1], which is a neural architecture that directly deals with unstructured point cloud data. We have pretrained __PointNet__ on the ShapeNet-Part segmentation dataset [2]. We trained for 200 epochs, taking ~4 hrs on two NVIDIA L40 GPUs, achieving an accuracy of 0.93 and an IoU of 0.83, corresponding to the values expected from the literature. ShapeNet-Part is a dataset of 3D objects corresponding to 16 diverse categories. In the segmentation task, the neural network __learns to divide the point cloud into different segments__ corresponding to different semantic parts. We have then evaluated the performance of the pretrained PointNet on the datasets of the TCBS across all categories. 
We report the highest ARI (Adjusted Rand Index) for every dataset:

| Dataset | TOPF | PointNet |
| -------- | ------- | ------- |
| 4spheres | __0.81__ | 0.30 |
| Ellipses | __0.95__ | 0.50 |
| Spheres+Grid | 0.70 | 0.54 |
| Halved Circle | __0.71__ | 0.36 |
| 2Spheres2Circles | __0.94__ | 0.55 |
| SphereinCircle | __0.97__ | 0.39 |
| Spaceship | __0.92__ | 0.41 |
| mean | __0.86__ | 0.44 |

This shows that __TOPF outperforms the above-introduced PointNet version on the TCBS__. This is as expected, as the neural network was not trained specifically on the datasets of the test set. However, as there is very scarce data and thus no training for the extraction of topological features, it is to be expected that all pre-trained neural network architectures will suffer from similar performance on the TCBS. We note that PointNet is not the current SOTA on ShapeNet-Part segmentation. However, PointNet is still within 6% IoU of GeomGCNN and thus representative of the general capabilities of deep learning models, whereas the mean __performance difference to TOPF on the TCBS was 42%__. We thus believe we have addressed the reviewer's request for a pre-trained neural network baseline, __comparing a pretrained version of PointNet with the performance of TOPF on our dataset__. TOPF outperforms PointNet by a wide margin of 42% ARI.

[1] Qi, C. R., Su, H., Mo, K., & Guibas, L. J. (2017). PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 652-660).
[2] https://shapenet.org
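For readers less familiar with the reported metric: the Adjusted Rand Index (ARI) compares a predicted clustering against a ground-truth labeling, corrected for chance agreement, so 1.0 means perfect agreement and values near 0 mean chance level. A minimal self-contained sketch (an illustration only, not the authors' evaluation code; `sklearn.metrics.adjusted_rand_score` computes the same quantity):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand Index between two clusterings (chance-corrected)."""
    n = len(labels_true)
    # Contingency table of co-assignments, plus marginal cluster sizes.
    pairs = Counter(zip(labels_true, labels_pred))
    rows = Counter(labels_true)
    cols = Counter(labels_pred)
    sum_comb = sum(comb(c, 2) for c in pairs.values())
    sum_rows = sum(comb(c, 2) for c in rows.values())
    sum_cols = sum(comb(c, 2) for c in cols.values())
    expected = sum_rows * sum_cols / comb(n, 2)
    max_index = (sum_rows + sum_cols) / 2
    if max_index == expected:  # degenerate case, e.g. both clusterings trivial
        return 1.0
    return (sum_comb - expected) / (max_index - expected)

# ARI is invariant to permutations of cluster labels: a relabelled
# copy of the same partition scores a perfect 1.0.
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # -> 1.0
```

The label-permutation invariance is what makes ARI suitable here: an unsupervised method's cluster IDs need not match the ground-truth IDs.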
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their very valuable feedback and comments. Your work has already helped to significantly improve the paper. We will provide a brief summary of our changes here, and give detailed answers in the individual rebuttals. Some reviewers asked us for more experiments showcasing another application of TOPF features. We have conducted an experiment to show that TOPF helps to uncover __topological structures in the latent space of variational autoencoders__ trained on image patches. Because of the point-wise nature of TOPF, this allows us to draw precise conclusions about the relation between the sample input points and the inherent topology in the sample space. We believe this to be an interesting and valuable insight for the community of interpretable machine learning. For details, we refer to Figure 2 of the attached pdf. We will include these experiments and a careful discussion in the final pdf. (Future work could combine this with the work of [1] on improving the topology of latent spaces of VAEs.) We were also asked to provide additional experiments on the hyperparameter choices. We did this in Figure 1 of the attached pdf. Our experiments show (as claimed) that TOPF exhibits __robust performance against moderate hyperparameter__ changes away from the defaults on almost all of the TCBS. This shows that TOPF can be expected to work well in practice with the default hyperparameters, and that our successful experiments were not simply the result of hyperparameter overtuning. We have also included additional experiments on the runtime of TOPF on point clouds with increasing point density in Figure 3, showing promising scaling behaviour. While we agree with the reviewers that more experiments are always exciting, already in the pre-rebuttal phase, the non-main-text material (excluding the paper checklist) was longer than the main text of our paper. 
Downstream applications in TDA usually require a lot of additional explanation and description of the experimental setup and are significantly more complex than running simple off-the-shelf benchmarks, as is the case in many other areas of Machine Learning. This is why in many cases, the introduction of a new method in TDA and its application to real-world data are divided into two separate papers. For these reasons, we believe that adding even more experiments on top of our introduction of a new Clustering Benchmark Suite (TCBS), comparison with existing methods on the TCBS, experiments on real-world protein data and on state spaces of dynamical systems, on the latent space of a Variational Auto Encoder, and on robustness with respect to noisy data, robustness w.r.t. hyperparameter choices, and on run-times on growing point clouds would be out of scope for a single NeurIPS paper that already introduces a theoretically novel method for extracting topological point features. However, we are excited to use and see others use TOPF for new applications in future papers. We have also addressed and clarified theoretical questions posed by reviewer Zmis in the individual rebuttal. Thank you again for your hard work as reviewers! In case you have any more questions regarding our rebuttal, we will be happy to answer them during the discussion period! [1] "Diffeomorphic interpolation for efficient persistence-based topological optimization", 2024: Mathieu Carriere, Marc Theveneau, Théo Lacombe Pdf: /pdf/acc89db589b441d448b7542f64eef5ebefdca34c.pdf
NeurIPS_2024_submissions_huggingface
2024
RL in Latent MDPs is Tractable: Online Guarantees via Off-Policy Evaluation
Accept (poster)
Summary: This paper studies latent Markov decision processes (LMDP) with M = O(1) number of MDPs. In other words, there exists a set of MDPs (unknown to the agent) and the environment randomly selects one MDP at the beginning of each episode. The selected MDP is not revealed to the agent and therefore the agent must infer which MDP is selected from the feedback. This paper proposes an algorithm with sample complexity poly(S,A)^M that matches the \Omega((SA)^M) lower bound. On the technical side, this paper designs a model-based exploration algorithm that actively collects new data if the collected data can shrink the confidence interval of the model parameters. On standard tabular RL, this paper shows that (a) the algorithm terminates after poly(SAH) rounds of exploration, and (b) any remaining model parameter can be used to construct a near-optimal policy after the exploration stage. Because of the generality of the algorithm, this paper further extends the algorithm to the LMDP setting and proves a poly(S,A)^M sample complexity. Strengths: - The exposition of the main ideas is very clear because of the instantiation on tabular MDPs. - The OMLE algorithm is neat, and it provides a general framework for model-based exploration. - The sample complexity result matches the \Omega((SA)^M) lower bound and requires no additional assumption on the structure of the LMDP family. Weaknesses: - The algorithm is not computationally efficient even for standard tabular MDPs because it has to enumerate both the parameters and the policy in the confidence interval, though I understand that computational efficiency is not the focus of this paper. - Section 4 is hard to follow without background knowledge of several prior works. E.g., the choice d = 2M-1 is not well-justified. I encourage the authors to revise Section 4 to make it more self-contained. 
- It seems to me that the title of this paper somewhat overclaims the result since the sample complexity is still exponential in M, and the assumption that M is finite is not intrinsic to the LMDP setup (although it is necessary due to the sample complexity lower bound). Technical Quality: 3 Clarity: 3 Questions for Authors: - It seems to me that the OMLE algorithm is very general. Is it straightforward to prove some general sample complexity results with problem-dependent bounds? Intuitively, the bound would include terms like the statistical complexity of the model class and some complexity measure of the MDP family. - The data collection step in Algorithm 2 (Lines 5-7) seems to be quite brute-force since it has to enumerate all the segmented policies. Could the authors elaborate on its reason? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately addressed the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
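To make the interaction protocol from the summary concrete: at the start of each episode the environment privately draws one of the M MDPs, and the agent observes only states, actions, and rewards, never the latent index. Below is a minimal toy sketch of this protocol; all sizes, dynamics, and names are hypothetical illustrations, not taken from the paper:

```python
import random

def sample_lmdp_episode(transitions, rewards, mixing_weights, policy, horizon, rng):
    """Roll out one episode of a latent MDP.

    The environment privately draws a latent MDP index m ~ mixing_weights;
    the learner sees only (state, action, reward) tuples, never m.
    """
    m = rng.choices(range(len(mixing_weights)), weights=mixing_weights)[0]
    state, trajectory = 0, []
    for _ in range(horizon):
        action = policy(state)
        reward = rewards[m][state][action]
        trajectory.append((state, action, reward))
        probs = transitions[m][state][action]  # distribution over next states
        state = rng.choices(range(len(probs)), weights=probs)[0]
    return trajectory  # the latent index m is NOT returned to the learner

# Toy instance with M=2 latent contexts, S=2 states, A=2 actions.
# transitions[m][s][a] is a probability vector over next states.
T = [
    [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]],  # context 0
    [[[0.0, 1.0], [1.0, 0.0]], [[0.0, 1.0], [1.0, 0.0]]],  # context 1
]
R = [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]]]
rng = random.Random(0)
traj = sample_lmdp_episode(T, R, [0.5, 0.5], lambda s: 0, horizon=3, rng=rng)
print(len(traj))  # -> 3
```

The sketch also illustrates why the setting is hard: because only the trajectory is returned, the learner must infer the latent context from observed transitions and rewards across the episode.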
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review and constructive feedback on our paper. We address the mentioned weaknesses below. **Practicality of the Proposed Algorithm** As the reviewer correctly pointed out, our focus was indeed on the purely statistical learning aspect of the problem. Nevertheless, we acknowledge and appreciate the reviewer's perspective on the computational aspects of the algorithm. Our long-term goal is to design an oracle-efficient algorithm for LMDPs that can be implemented with general function classes and ERM-style oracles. While we believe this is a feasible direction, achieving it will require significant effort from both theoretical and practical standpoints. **Intuition on Policy-Segmentation Regarding Prior Work** Thank you for your suggestion. Indeed, we agree that it would be useful to provide the intuition on how we connect the moment-exploration ideas presented in [1] to our off-policy evaluation results. We will add the following paragraph in our revision at the beginning of Section 4: *Before we dive into our key results, let us provide our intuition on how we construct the OPE lemma for LMDPs. Our construction is inspired by the moment-exploration algorithm proposed in [1]: when state-transition dynamics are identical across latent contexts, i.e., $T_1 = T_2 = ... = T_M$, we can first learn the transition dynamics with any reward-free-type exploration scheme for MDPs [2], and then set the exploration policy so that it sufficiently visits certain tuples of state-actions of length at most $d=2M-1$. For the exploration policy, the work in [1] sets a memoryless exploration policy $\psi \in \Pi_{\texttt{mls}}$ which makes the visitation probability to certain tuples sufficiently large. We note that the same moment-exploration strategy cannot be applied to general LMDPs with different state-transition dynamics since learning the transition dynamics itself involves latent contexts. 
Nevertheless, the intuition from [1] suggests that our key statistics are the visitation probabilities to all tuples of state-actions within a trajectory.* **Significance of Our Results on $M$** We accept this criticism and appreciate the feedback. That being said, we would like to highlight that our results represent a substantial advancement from previous work on LMDPs. Specifically, our algorithm is the first to achieve sample efficiency in a partially observed setup without relying on standard assumptions such as weakly-revealing or low-rankness, which are well-studied in the literature. Even in cases where $M=2$ or $3$, the solutions to LMDPs were previously unknown, making our findings significant. While we acknowledge that our current results are achieved with $M=O(1)$, we believe it is important to emphasize the positive aspect of our contributions in pushing the boundaries of what is known in this field. **Possible Extensions to General Function Classes** This is a great question. We assume that by "problem-dependent bound", the reviewer is referring to sample-complexity bounds with general function approximation. We believe that extending our approach to general function approximation is feasible, particularly given recent advances in RL; however, this is not entirely straightforward and requires careful consideration. For Latent MDPs specifically, our starting point will be to define the right notion of "coverage" in LMDPs with large state/action spaces -- that will direct us to the question of what the "moments" are with general function approximation. If we can define this notion well, then we believe that we should be able to come up with a notion similar to the LMDP-coverage with function approximation. Another possibility is to come up with a general notion of statistical complexity measure for Latent MDPs/POMDPs such as DEC [3] (see also our discussion on GEC with Reviewer 8u1Y). 
Such an extension is beyond the scope of this paper, and we think it is an exciting future direction. **More Efficient Ways to Construct Data-Collection Policies** This is another great point. The current brute-force construction is due to the nature of worst-case analysis, where joint events at all length-$d$ tuples of state-actions need to be observed under all contexts. Since we cannot observe which policy drives us to the next state-action pair under the latent contexts of interest, our solution brute-forces all possibilities. That being said, we are hopeful that under some practical assumptions, we may expect improvement in two ways: - We can rule out some redundant combinations, as they might already be covered by other combinations (e.g., this idea can be related to G-optimal design as in the Latent Multi-Armed Bandit setting [4]) - Another possibility is to identify that for some combinations, at some point, it becomes obvious that a segmentation into significantly fewer than $d$ segments would be sufficient. These are the next big questions and important future work. ------------------------ We hope that our answers to the raised questions are satisfactory. Please let us know if you have any other concerns or questions. We would greatly appreciate it if you could consider reevaluating our work, taking into account the strengths and improvements we've outlined. [1] Kwon et al., "Reward-mixing MDPs with few latent contexts are learnable", ICML 2023 [2] Jin et al., "Reward-free exploration for reinforcement learning", ICML 2020 [3] Foster et al., "The statistical complexity of interactive decision making", arXiv 2021 [4] Kwon et al., "Tractable Optimality in Episodic Latent MABs", NeurIPS 2022 --- Rebuttal Comment 1.1: Comment: Thank you for the response. I will keep my original evaluation. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Thank you for the feedback!
Summary: This paper studies latent MDPs, where the underlying dynamics and rewards are controlled by some latent states (not revealed to the learner), and the learner attempts to do learning and planning based on the trajectory data. The algorithm builds on the optimistic MLE algorithm, which iteratively checks whether there exist uncovered policies. To make OMLE work for LMDPs, this paper constructed segmented policies, and iteratively checks whether such policies are all covered. As a result, they showed the first polynomial sample complexity result for LMDPs when the number of latent states is constant. Strengths: The paper is well written. The algorithm, theorems and proofs are clear. The results of polynomial sample complexity for LMDPs with constant latent states seem very interesting. The idea of constructing segmented policies is novel. Weaknesses: I don't see any significant weaknesses of this paper. Technical Quality: 4 Clarity: 3 Questions for Authors: Can these results somehow adapt to the POMDP setting? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Yes. The authors addressed all the limitations listed in the guidelines. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the encouraging comments and positive assessment of our paper. Below, we address an interesting question you raised: **Can these results somehow adapt to the POMDP setting?** Yes, we believe our results can be adapted to the POMDP setting in two senses: - Algorithmically speaking, our algorithm overall falls in a general framework of OMLE; the key difference is in the data-collection policy part, which may differ with different environments and assumptions. But at a higher level, they can be viewed under a unified principle. - Technically speaking, our analysis based on the notion of coverage can be useful to give a different interpretation of solving the POMDP settings (that are known to be tractable). The key to understanding it in this way is to define a suitable notion of coverage in general POMDPs -- which is in part done in some recent work on off-policy learning for POMDPs [1]. This is an interesting future question. -------------------- Thank you once again for your thoughtful review and positive assessment. If you have any further questions or suggestions, please let us know. [1] Zhang and Jiang, "On the curses of future and history in future-dependent value functions for off-policy evaluation", arXiv 2024 --- Rebuttal Comment 1.1: Comment: Thank you for your response. I don't have further questions. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Thanks for the response and positive feedback!
Summary: This paper introduces a new version of the coverage coefficient for analyzing latent Markov Decision Processes (MDPs). It demonstrates how to link the proposed coverage coefficient with sample complexity using MDPs. Additionally, the paper presents an algorithm and provides a bound on the sample complexity of this algorithm. Strengths: The use of MDPs to illustrate the concept is effective, making it easier for the audience to follow. Weaknesses: 1. The comparison with related work is insufficient. Including a table that compares the results in this paper with those in [1] would be helpful. 2. The connection and comparison between this work and existing studies on the coverage coefficient [2] should be more concrete. 3. The meaning and design of the segment policy are unclear, making it difficult for the audience to understand the intuition behind why this coverage coefficient is helpful for analysis. Although a counterexample is provided in the appendix, it is still not intuitive enough. 4. In my understanding, the core idea behind the analysis is that complexity depends on the longest policy sequence $\pi_1,\ldots,\pi_k$, where $C(\pi_i,\pi_j)=\infty$. If this understanding is correct, there are a few follow-up questions: (a) Can we define a complexity measure similar to the Eluder dimension? (b) Can the complexity measure generalize to cases beyond latent MDPs? (c) Is this complexity measure equivalent to or weaker than the Eluder dimension? Answers to these questions might help improve this paper. 5. Although the results in this paper do not require further structural assumptions, the significance of an exponential upper bound is questionable. 6. Lack of discussion of works that have complexity measures without the Markovian assumption [3]. [1]. J. Kwon, Y. Efroni, C. Caramanis, and S. Mannor. Reinforcement learning in reward-mixing MDPs. Advances in Neural Information Processing Systems, 34, 2021. [2]. P. Amortila, D. J. Foster, and A. Krishnamurthy. 
Scalable online exploration via coverability. arXiv preprint arXiv:2403.06571, 2024. [3]. Zhong, Han, et al. "Gec: A unified framework for interactive decision making in mdp, pomdp, and beyond." Technical Quality: 3 Clarity: 2 Questions for Authors: See "Weaknesses" part for the questions. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 1 Limitations: See "Weaknesses" part for the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your insights and suggestions, which will help us improve our work. Below, we address the weaknesses you mentioned. **Comparison to Previous Work on LMDPs** We would like to highlight that our work is the first to propose a general exploration algorithm applicable to the entire class of LMDP models. In contrast, previous works [1] focus solely on the subclass of LMDPs known as "Reward-Mixing" MDPs. These models assume no context ambiguity in transition dynamics, only in reward models. Under this assumption, data collection becomes less challenging. One can first learn the transition model separately through reward-free exploration, and then explore specific state-action tuples in order to obtain samples of "moments". However, when transition dynamics also vary across contexts, data collection becomes significantly more complex, and this is the main challenge in our work. We will further clarify this distinction and the associated challenges in Section 1.1 - Challenge 2 in our revised manuscript. **Coverage Coefficient in MDPs vs LMDPs** Existing work that studies "the role of coverage in online exploration" is focused on fully observed settings (Low-Rank MDPs and Block MDPs are also fully observed, although the latent dynamics are much lower-dimensional). In contrast, LMDPs, and more broadly POMDPs, involve dynamics and policies that are history-dependent -- an optimal policy of an LMDP may depend on the entire history. This means that a single latent-state coverability does not adequately capture the complexity of offline learning in these settings (as we illustrated in the counterexample in the Appendix). Consequently, off-policy evaluation in LMDPs requires a new notion of coverage that measures coverability over *sequences of state-action pairs*. Our main contribution is the proposal of this LMDP-coverage concept, which we connect to off-policy evaluation and online exploration. 
We believe this not only represents a significant advancement in learning LMDPs but also offers a conceptual contribution to the learning of general POMDPs. We will make this point clearer and more accessible to readers in our revision. **Eluder Dimension vs Our Approach** The Eluder dimension essentially measures the longest possible policy sequence that can incur a large prediction error. However, in partially observed settings, this definition may fail to capture the true complexity of online exploration. For instance, in multi-step weakly-revealing POMDPs (or PSRs), without executing core-tests, we may still suffer from the curse of horizon [2]. One might modify the definition of the Eluder dimension as proposed in the GEC paper you shared (details will follow below). However, similarly to the limitation of known tractable POMDP classes, the GEC definition also relies on prior knowledge of how to construct the data-collection policies (using core-tests). Our technical contribution lies in the design and understanding of data-collection policies in LMDPs. We believe there is significant potential to develop a more general, fine-grained notion of complexity for a broader class, potentially including both revealing POMDPs and LMDPs. We see this as an exciting direction for future research. **Upper Bounds Exponential in M?** We accept this criticism, though due to the lower bound established in [3] we cannot avoid this dependence on M without any assumptions. However, we would like to highlight that our results represent a substantial advancement from previous work. Specifically, our algorithm is the first to achieve sample efficiency in a partially observed setup without relying on standard assumptions such as weakly-revealing or low-rankness, which are well-studied in the literature. Even in cases where $M=2$ or $3$, the solutions to LMDPs were previously unknown, making our findings significant. 
While we acknowledge that our current results are achieved with $M=O(1)$, we believe it is important to emphasize the positive aspect of our contributions in pushing the boundaries of what is known in this field (we also refer the reviewer to our general response on the importance of studying LMDPs). **Comparison/Connection to Other Complexity Measures (GEC)** We thank the reviewer for drawing our attention to this work, and we will add this reference in our revision. Yes, the idea behind GEC resonates with us at a very high level -- it aims to capture the largest discrepancy in prediction errors from small training errors. However, as with other complexity measures such as the (generalized) Eluder dimension or the Decision-Estimation Coefficient, the main bottleneck is to show the boundedness of GEC in LMDPs, as no upper bound has been established to date. Furthermore, our construction of the exploration (data-collection) policy is very different from known methods. While all of the above is in part described in our Section 1.1, we will be more explicit about the comparison to existing complexity measures in our revision. ------------- We hope our responses clarify any misunderstandings and resolve the issues identified. Please let us know if there are any remaining questions or concerns. Otherwise, we kindly request a reevaluation of our work in light of the provided clarifications. Thank you for your consideration and effort. [1] Kwon et al., "Reinforcement Learning in Reward-Mixing MDPs", NeurIPS 2021 [2] Chen et al., "Lower bounds for learning in revealing POMDPs", ICML 2023 [3] Kwon et al., "RL for Latent MDPs: Regret Guarantees and a Lower Bound", NeurIPS 2021
Summary: The paper studies latent MDPs, an MDP framework with a set of MDPs where the environment samples a random MDP at the beginning of each episode. To avoid an $A^H$ sample complexity, previous works either assume separation or similarity of transitions. This work removes these conditions, and provides an algorithm with an $(SA)^{O(M)}$ upper bound, which matches the lower bound with $M = O(1)$. Specifically, the algorithm adapts the information-theoretic Optimistic MLE algorithm for the LMDP setting, by collecting data with all possible segments of all previous policies (+random actions). The key analysis tool is the LMDP coverage coefficient. Strengths: 1. The paper proposes the first algorithm that matches the lower bound of LMDPs (to poly order) without the assumption of separation and similar transition dynamics. 2. The proposed LMDP coverage coefficient, although exponential in $d$ ($M$), seems to capture the right complexity of the LMDPs. 3. The detailed and intuitive introduction of the Optimistic MLE algorithm makes the proposed algorithm easy to understand. Weaknesses: 1. The technical innovation seems rather limited -- the main techniques are quite similar to OMLE. 2. I do not understand the relation to OPE: which part of the algorithm relates to OPE? It seems like the only change to OMLE is the way of data collection, so there is no additional OPE subroutine? 3. The "without any additional structural assumption" claim in the abstract seems inaccurate: even OMLE is limited by the assumption of finite SAIL (bi-linear rank), and the proposed analysis only applies to the tabular setting? Also, does the analysis generalize to general function approximation if we use the corresponding coverage coefficient? Technical Quality: 3 Clarity: 3 Questions for Authors: see above Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for a thoughtful review and constructive feedback on our paper. We would like to start by emphasizing our technical novelty. **About Technical Novelties** Our LMDP-OMLE algorithm builds on the general framework of OMLE. The key novelty of the algorithm is the design of data-collection (exploration) policies (we also refer the reviewer to our general responses on the novelties). In designing the data-collection policy, we removed the need for core-tests and well-conditionedness. Further, in our analysis, we introduced the novel concept of the LMDP coverage coefficient. These innovations result from efforts to formalize *moment-exploration*, as suggested in previous work [1]. Thanks to these, we were able to remove the restrictive assumptions of shared transition dynamics and separation. Hence, our key results deviate substantially from existing approaches, and, hopefully, establish a new perspective on exploration in partially observed environments. Further, we believe our results contain a sufficient amount of novel concepts and analysis techniques. **Off-Policy Evaluation for Online Guarantees** Our off-policy evaluation guarantee guides the data-collection policy of the LMDP-OMLE algorithm. In particular, the key novelty lies in the proposal of the LMDP coverage coefficient (Definition 4.1): this quantity measures the data coverage in terms of the visitation probability to *tuples* of different states and actions. Algorithmically, this suggests that our exploration algorithm should aim to visit "uncovered" tuples of state-action pairs, reaching one target state (and action) and moving to the next target state (and action) in the target tuple. Once we can design the exploration policy in such a way, we can show the sample-complexity upper bound via the coverage-doubling argument (Theorem 4.5), as we have illustrated in the MDP case. 
Here we note that establishing online guarantees via off-policy evaluation is just one possible approach for exploration in LMDPs. It would be very interesting to see if other approaches, such as going through problem-specific complexity measures like the Eluder dimension [2], Bellman rank [3], or the Decision-Estimation Coefficient (DEC) [4], can be applied using the results provided in this work. **Why Our Approach Doesn't Make Any Assumptions** We clarify that this work studies LMDPs "without any assumptions" in the sense that we consider the class of *all possible LMDP models*, unlike previous work that assumes certain separations [5,6] or shared transition dynamics [1]. ------------- We hope our responses have clarified any doubts or questions. If there are any remaining issues, please let us know. Otherwise, we would highly appreciate it if the reviewer could consider raising the score based on our responses. [1] Kwon et al., "Reward-mixing MDPs with few latent contexts are learnable", ICML 2023 [2] Russo and Van Roy, "Eluder dimension and the sample complexity of optimistic exploration", NeurIPS 2013 [3] Jin et al., "Bellman eluder dimension: New rich classes of RL problems, and sample-efficient algorithms", NeurIPS 2021 [4] Foster et al., "The statistical complexity of interactive decision making", arXiv 2021 [5] Hallak et al., "Contextual markov decision processes", arXiv 2015 [6] Chen et al., "Near-optimal learning and planning in separated latent MDPs", COLT 2024 --- Rebuttal Comment 1.1: Comment: I appreciate the author's response. I still think the work is solid and my original evaluation is appropriate so I will maintain my original score. Also, it will also be helpful if the authors could revise the statement of "without any additional structural assumption" in the abstract. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Thank you for the feedback and the positive view on our contribution. 
Further, we will edit the abstract accordingly to clarify this point.
Rebuttal 1: Rebuttal: We thank all reviewers for their effort, time, and their valuable feedback. While we respond to each reviewer on the specific concerns raised, we would like to emphasize the importance of studying the Latent MDP setting as well as the technical novelties in our work. **Why Solving LMDPs is Important:** It is often useful to model the population of environments as a mixture of simpler distributions. Latent MDP is an interactive version of mixture modeling, and has potential for many real-world problems, *e.g.,* in dialogue, recommender, or healthcare systems, when complete information on a user or patient is not given. Nevertheless, no algorithm has been known beyond the setting where clustering is relatively straightforward. We believe that studying general cases where clustering is not so straightforward has great potential to advance the field both in theory and practice, as we detail below. - *LMDP requires rethinking exploration strategies in partially observed environments:* Several known approaches for general POMDPs overcome the challenge by assuming (1) the well-conditionedness of the system (the latent state-observation emission matrix must be full-rank), and (2) prior knowledge of core-tests (those policies that guarantee the full-rankness of the state-observation matrix). Not only does this require strong domain knowledge, but there is also no known approach to learn the optimal policy beyond such cases. - *Potential to enlarge the scope beyond known tractable POMDP classes and general approaches:* Our work is the first to propose a sample-efficient algorithm for an important and broad class of POMDPs that do not satisfy the common assumptions of well-conditionedness and known core-tests. 
Furthermore, the proposed LMDP-OMLE algorithm is designed with as much flexibility and generality as possible, without requiring specific domain knowledge or assumptions, unlike previous solutions that rely on certain clustering-enabling or separation assumptions. We believe our result paves the way to push existing theories further to classes of other potentially tractable POMDPs (in particular, to general function approximation with large state/action spaces in POMDPs). **Technical Novelties in Our Work:** While we follow the principles of Optimistic Maximum Likelihood Estimation (OMLE) [1], we emphasize that our key novelty lies in the algorithmic design and analysis of data-collection policies given the confidence set, without prior knowledge of core-tests. Our approach is the first to go beyond the known assumptions for POMDPs and to integrate the method of moments into the sequential decision-making setup. In particular, we formalize the concept of moment-matching in our off-policy evaluation lemma (Lemma 4.2) by introducing the visitation probability of tuples of different state-action pairs and learning their joint probabilities (this captures the correlations and moments of the system). Utilizing this off-policy evaluation lemma, we derive a sample-efficient algorithm through our coverage-doubling argument, detailed in Lemma D.1 of the Appendix. Although the mathematics involved is not particularly fancy and is purely algebraic, we believe that the conceptual and algorithmic innovations are substantial. [1] Liu et al., "Optimistic MLE: A Generic Model-Based Algorithm for Partially Observable Sequential Decision Making", STOC 2023
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Dual Prototype Evolving for Test-Time Generalization of Vision-Language Models
Accept (poster)
Summary: This paper proposes a novel method for test-time generalization of vision-language models. Specifically, the paper updates two prototypes, textual and visual, online using test samples. Additionally, the authors learn task residuals by aligning the two different modality prototypes, further improving performance. Experiments are conducted on standard benchmarks. Strengths: 1. The paper writing is good. 2. The motivation is clear, and the method design is reasonable. 3. Experiments validate the effectiveness of the method. Weaknesses: 1. More ablation studies are expected: What are the different impacts of the two loss terms in Eq 10? 2. Clarification is needed: Are t and v in Eq 10 achieved after loss convergence, or are they obtained from a single update step? Is there a significant difference between the two? 3. Discussion and comparison with a closely related method (DMN [a]) are expected. Both methods maintain the online queue/memory and weight cached features to obtain new prototypes. The difference is that this paper averages cached features to obtain prototypes, while DMN uses an attention mechanism. The authors should discuss both methods and the differences in prototype generation. [a] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models, CVPR 2024 Technical Quality: 3 Clarity: 4 Questions for Authors: See weaknesses. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer vqJd, We greatly appreciate your valuable feedback on our paper. We address the raised concerns and questions below. --- **Comment (1)**: “*More ablation studies are expected: What are the different impacts of the two loss terms in Eq 10?*” **Response (1)**: Thank you for your valuable feedback. The alignment loss $\mathcal{L}\_{\mathsf{align}}$ acts from a global perspective by promoting consistent multi-modal prototypes, ensuring that the representations are aligned for all subsequent test samples. The self-entropy loss $\mathcal{L}_{\mathsf{aug}}$, in contrast, greedily targets improving individual sample predictions by penalizing high-entropy predictions across augmented views. To provide a clearer understanding, we analyze the effects of the two loss terms on ImageNet using the ResNet-50 backbone and report the performance in the following table: | # | $\mathcal{L}_{\mathsf{aug}}$ | $\mathcal{L}_{\mathsf{align}}$ | ImageNet Acc. | | :--: | :--------------------------: | :----------------------------: | :-----------: | | 1 | &#10008; | &#10008; | 61.90 | | 2 | &#10004; | &#10008; | 63.18 | | 3 | &#10008; | &#10004; | 62.46 | | 4 | &#10004; | &#10004; | 63.41 | We can observe that while the alignment loss alone improves the performance by 0.56%, the self-entropy loss provides a greater performance gain of 1.28%. Combining both loss terms further enhances performance by an additional 0.23%. --- **Comment (2)**: “*Clarification is needed: Are t and v in Eq 10 achieved after loss convergence, or are they obtained from a single update step? Is there a significant difference between the two?*” **Response (2)**: Thank you for your careful review of our paper and sorry for any confusion caused. In Eq. 10, t* and v* are obtained from a single update step. Following your comments, we have conducted a new ablation experiment on ImageNet using the ResNet-50 visual backbone, varying the number of update steps from 1 to 5. 
The results are as follows: | Number of Update Steps | 1 | 2 | 3 | 4 | 5 | | ---------------------- | ----- | ----- | ----- | ----- | ----- | | ImageNet Acc. | 63.41 | **63.45** | 63.28 | 63.26 | 63.32 | As shown, the number of update steps does not significantly influence performance (within a range of 0.2%). Although setting the update steps to 2 slightly increases performance, it also increases inference time linearly. Therefore, we use a single-step update by default. We have specified this setting in the revised Section 3.3, and will include this ablation experiment in the revised manuscript. --- **Comment (3)**: “*Discussion and comparison with a closely related method (DMN [a]) are expected. The authors should discuss both methods and the differences in prototype generation*.” **Response (3)**: Thank you for pointing this out. We will respond from three perspectives: - **Motivation**: While DMN(-ZS) also utilizes historical test samples to enhance the test-time generalizability of VLMs, it is important to note that DMN only updates the visual memory online while keeping the textual features/classifier unchanged. Therefore, we consider DMN similar to TDA, as both methods adapt CLIP only from a uni-modal (visual) perspective. In contrast, as we motivated in Section 1, our DPE is designed to progressively capture more accurate multi-modal representations on the fly with test samples. - **Technical Details**: Focusing solely on the visual modality, the method formulation also differs. DMN-ZS constructs a large dynamic memory (e.g., 50 image features per class) and, during testing, computes the similarity with all features in the memory for each test sample to obtain attention weights. In contrast, our DPE evolves a single representation (i.e., prototype) for each class by maintaining a very small priority queue (e.g., M=3 features per class). 
Our prototype-based inference requires evaluating only the similarity between test features and prototype features. This technical difference results in our DPE requiring 3x less GPU memory on ImageNet compared to DMN-ZS, as shown in the following table. | Method | DMN-ZS | Ours | Ours (w/o priority queue) | | ---------- | -------- | ------- | ------------------------- | | GPU Memory | 14110 MB | 4474 MB | 2138 MB | - **Performance Comparison**: We have also included a performance comparison between DPE and DMN-ZS on robustness to natural distribution shifts using the ResNet-50 backbone of CLIP. The results are as follows: | Method | ImageNet | ImageNet-A | ImageNet-V2 | ImageNet-R | ImageNet-S | Average | OOD Average | | -------------- | :-------: | :--------: | :---------: | :--------: | :--------: | :-------: | :---------: | | DMN-ZS | **63.87** | 28.57 | 56.12 | 61.44 | 39.84 | 49.97 | 46.49 | | **DPE (Ours)** | 63.41 | **30.15** | **56.72** | **63.72** | **40.03** | **50.81** | **47.66** | As shown in the table, our proposed DPE outperforms DMN-ZS by 0.84% on average across 5 datasets, demonstrating the superiority of our method. Additionally, our method also outperforms DMN-ZS on the ViT-B/16 backbone. We have included the full results in Table 1 and added the corresponding discussions about DMN in the revised paper. --- We hope that our responses have addressed your concerns. If you have additional comments or concerns, please let us know and we will be more than happy to answer. Best, Authors --- Rebuttal Comment 1.1: Comment: Thanks for your response. My concern has been addressed. I will increase my score to 6. --- Reply to Comment 1.1.1: Comment: Dear Reviewer vqJd, Thank you for your insightful review of our paper. We greatly appreciate your positive recommendation! Best, Authors
Summary: The paper introduces a novel test-time adaptation approach for vision-language models (VLMs) called Dual Prototype Evolving (DPE). The method effectively accumulates task-specific knowledge from multi-modalities by creating and evolving two sets of prototypes—textual and visual—during test time. This approach ensures more accurate multi-modal representations for target classes and promotes consistent representations by optimizing learnable residuals. Extensive experiments on 15 benchmark datasets demonstrate that DPE consistently outperforms previous state-of-the-art methods in both performance and computational efficiency. Strengths: +Originality: The introduction of dual prototypes (textual and visual) for evolving task-specific knowledge at test time is a novel concept that significantly improves the generalization capabilities of VLMs. +Quality: The experimental results are comprehensive and well-documented, covering a wide range of datasets and scenarios to validate the effectiveness of the proposed method. +Clarity: The paper is well-written and clearly explains the methodology, including detailed descriptions of the prototypes' evolution and optimization processes. +Significance: The proposed method addresses a crucial challenge in real-world applications by enabling efficient and effective test-time adaptation without the need for annotated samples from the target domain. Weaknesses: -Complexity: The introduction of learnable residuals and the dual prototype evolution mechanism adds complexity to the model, which might pose implementation challenges for practitioners. -Computational Cost: While the paper claims competitive computational efficiency, the requirement to maintain and update priority queues for visual prototypes can increase memory and computational costs, particularly for large-scale datasets. 
-Generalization to Other VLMs: The paper primarily focuses on CLIP, and it is unclear how well the proposed method generalizes to other vision-language models or tasks beyond those evaluated. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can you provide more details on the computational overhead introduced by the dual prototype evolution mechanism compared to baseline methods? 2. How does the proposed method perform when applied to vision-language models other than CLIP? Have you conducted any preliminary experiments in this direction? 3. Could you elaborate on the potential limitations of the prototype residual learning approach and how it might affect the model's performance in different scenarios? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations and potential negative societal impacts of their work. The paper mentions two primary limitations: the additional computational complexity introduced by gradient back-propagation for optimizing multi-modal prototypes and the increased memory cost due to maintaining priority queues. The authors provide constructive suggestions for future work to address these issues, emphasizing the need for more efficient optimization techniques and memory management strategies. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer SVvN, Thank you for your insightful comments and positive recommendation of our work. We provide point-by-point responses to address your concerns below. --- **Comment (1)**: “*The introduction of learnable residuals and the dual prototype evolution mechanism adds complexity to the model, which might pose implementation challenges for practitioners*.” **Response (1)**: While it is true that the introduction of learnable residuals and the dual prototype evolution mechanism adds complexity compared to zero-shot CLIP, we believe the idea of this work is straightforward and core components of our method can be implemented in under 100 lines of code. Besides, as we presented in Section 4, our DPE method significantly enhances the test-time generalization capabilities of the CLIP model. Therefore, we believe that this added complexity is worthwhile. Please also be assured that we will make the source code publicly available and provide detailed instructions upon acceptance to facilitate easier reproduction. --- **Comment (2)**: “*Can you provide more details on the computational overhead introduced by the dual prototype evolution mechanism compared to baseline methods?*” **Response (2)**: Thank you for your insightful feedback! Here, we provide additional details regarding our computational overhead, including both inference time and memory usage. - **Inference time**: In our DPE method, the major computational time comes from the visual prototype evolution and prototype residual learning components. Specifically, while zero-shot CLIP requires 10.1 ms to infer one image, including our prototype residual learning increases the inference time to 64.7 ms per image. Further including the visual prototype evolution extends this to 132.1 ms per image. As a reference, TPT requires 666.3 ms per image, while TDA is more efficient, using 73.5 ms per image. 
- **Memory**: As you mentioned, maintaining priority queues does indeed increase memory usage. Following your comments, we compared the GPU memory usage on large-scale ImageNet with and without the priority queue mechanism. Specifically, while our DPE method without the priority queue takes 2136 MB of GPU memory, including the priority queue mechanism takes 4474 MB, which doubles the GPU consumption. However, our method still shows a memory advantage compared to TPT, which takes 18701 MB to perform inference on ImageNet. --- **Comment (3)**: “*How does the proposed method perform when applied to vision-language models other than CLIP? Have you conducted any preliminary experiments in this direction?*” **Response (3)**: While our DPE method can theoretically be applied to various contrastively pre-trained vision-language models, such as ALIGN [ICML ’21], LiT [CVPR ’22], and CoCa [TMLR ’22], most of these methods are closed-source, preventing us from evaluating the effectiveness of our method on these models. Our method can certainly be applied to open-source CLIP-style VLMs, such as OpenCLIP [CVPR ’23], MetaCLIP [ICLR ’24], SigLIP [ICCV ’23], and DFN [ICLR ’24]. Here, we use larger-scale OpenCLIP (ViT-L/14) as an example and compare the performance of TDA and our method on robustness to natural distribution shifts: | Method | ImageNet | ImageNet-A | ImageNet-V2 | ImageNet-R | ImageNet-S | Average | OOD Average | | ------------------- | --------- | ---------- | ----------- | ---------- | ---------- | --------- | ----------- | | OpenCLIP (ViT-L/14) | 74.04 | 53.88 | 67.69 | 87.42 | 63.18 | 69.31 | 68.13 | | TDA* | 76.28 | **61.27** | 68.42 | 88.41 | 64.67 | 71.81 | 70.69 | | **DPE (Ours)** | **77.87** | 61.09 | **70.83** | **89.18** | **66.33** | **73.06** | **71.86** | $^*$ We evaluate TDA using the codes provided by the authors. 
We can observe that our DPE still outperforms TDA by 1.07% on average across 5 datasets, showcasing that our method generalizes well to larger-scale VLMs. --- **Comment (4)**: “*Could you elaborate on the potential limitations of the prototype residual learning approach and how it might affect the model's performance in different scenarios?*” **Response (4)**: Thank you for your insightful comments. As we discussed in our **Response (2)**, our prototype residual learning requires backpropagation, which increases the inference time from 10.1 ms per image in zero-shot CLIP to 64.7 ms per image. However, as shown in Figure 4 (Middle), our prototype residual learning also significantly enhances the zero-shot generalizability of the CLIP model. This represents a trade-off between efficiency and accuracy: in some real-time scenarios (e.g., decision-making in autonomous driving), simpler and more efficient methods like zero-shot CLIP may be a better fit. In contrast, for scenarios requiring higher precision (e.g., medical image analysis), our method can provide enhanced zero-shot robustness to the CLIP model. That said, we believe the DPE method strikes a great balance by achieving state-of-the-art performance (+3.5% compared to TPT) with competitive computational efficiency (5x faster than TPT), which holds the potential to inspire future research. --- We hope that our responses have addressed your concerns. If you have additional comments or concerns, please let us know and we will be more than happy to answer. Best, Authors --- Rebuttal 2: Comment: Thank you for thoroughly addressing my concerns. After reviewing the comments and responses for the other reviewers, I see that their concerns have also been resolved. 
The authors have provided clear definitions of terms for better understanding, conducted additional experiments to further evaluate the effectiveness of the proposed method, and offered more in-depth analysis of how the proposed method works in various settings. Overall, the rebuttal enhances my confidence in this paper. With careful consideration, I believe this paper with revision is worthy of NeurIPS and will significantly impact the test-time generalization field. My final decision is “accept.” --- Rebuttal Comment 2.1: Comment: Dear Reviewer SVvN, We sincerely appreciate the time and effort you invested in reviewing our manuscript. We greatly appreciate your positive recommendation! Best, Authors
Summary: The paper proposes a novel test-time adaptation method for CLIP models, drawing inspiration from previous works on prototype learning and CLIP-based adaptors. For each test sample, both textual and visual prototypes are optimized using learnable residual parameters based on alignment loss and self-entropy loss. These prototypes are progressively updated to better utilize the stream of test samples. Experimental results on common benchmarks demonstrate the proposed method's effectiveness over recent baselines. Additionally, the authors provide comprehensive ablation studies to highlight the impact of hyperparameters. Strengths: The method is a novel combination of existing methods. The experiments and ablation studies are comprehensive. The paper is mostly well written and the structure is clear. Weaknesses: - The method combines multiple existing techniques, each of which has been explored in previous works. For instance, learnable residuals have been studied in [1], where task residuals were used for CLIP-based adaptation, and learnable multi-modal representations were used in [2] with a similar motivation for CLIP-based adaptation. - The performance gain appears marginal (Table 1), especially given the extensive hyperparameter tuning required. The results are sensitive to hyperparameters (Section 4.3). This may raise concerns about whether the method's efficacy is inherent or merely a result of meticulous hyperparameter tuning. Note that influential factors in the proposed method include (1) the entropy threshold, (2) queue size, (3) weight scale factor, and (4) prototype update rules. - There seems to be an overstatement of the method's applicability. While the title and introduction suggest broad applicability to "Vision-Language Models," the method is only tested on CLIP models, with no experiments conducted beyond CLIP. [1] Yu et al., Task Residual for Tuning Vision-Language Models, CVPR 2023. 
[2] Khattak et al., MaPLe: Multi-modal Prompt Learning, CVPR 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: - Is the proposed method applicable to non-CLIP models? - Can a simplified version of the method be presented by removing certain components? Specifically, which components are the most influential? From the ablation studies, it appears that each component functions as an add-on. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer owvr, Thanks for your valuable feedback! --- **Comment (1)**: “*The method combines multiple existing techniques …*” **Response (1)**: While our method shares some similarities in method details (e.g., multi-modal prototype residuals) with these works, we focus on a completely different test-time adaptation setting. Specifically, TaskRes [1] and MaPLe [2] aim to adapt CLIP using **labeled** few-shot samples, whereas our proposed DPE approach leverages only the **unlabeled** target data stream to adapt the model to out-of-distribution domains. Moreover, we innovatively propose textual/visual prototype evolution, which enables our method to *progressively* capture more accurate multi-modal representations during test time. The two works mentioned above, while effective in learning from few-shot samples, do not incorporate such knowledge accumulation techniques. In our **Response (4)**, we demonstrate that textual/visual prototype evolution contributes significantly to the overall effectiveness of our method. --- **Comment (2)**: “*The performance gain appears marginal (Table 1), especially given the extensive hyperparameter tuning required*.” **Response (2)**: Thanks for your thoughtful feedback. We respond from two perspectives. - **Our Performance**: While our performance gain may seem modest at first glance, it is important to note that the test-time adaptation setting for CLIP is inherently challenging. 
Consequently, performance improvements across methods on this benchmark have been limited over the past few years: | Method | CLIP | TPT | SwapPrompt | DiffTPT | TDA | | --------------- | :------: | :-----------: | :-----------: | :-----------: | :-----------: | | Venue | ICML '21 | NeurIPS '22 | NeurIPS '23 | ICCV '23 | CVPR '24 | | Accuracy (Gain) | 46.61 | 47.26 (+0.65) | 47.86 (+0.50) | 48.71 (+0.85) | 49.58 (+0.87) | Therefore, we believe our performance gain of 1.23% compared to TDA in such a challenging task is not trivial, and holds the potential to inspire future research. Moreover, we performed a t-test and found a p-value of 0.0036, demonstrating that our method's performance is significantly better than TDA's. - **Hyperparameters**: We acknowledge that our method involves several hyperparameters that influence its efficacy. However, our DPE method consistently outperforms other approaches across a reasonable range of hyperparameter settings. For instance, as shown in Fig. 4 (Left), all combinations of entropy threshold $\tau_t \geq 0.1$ and queue size $M \geq 3$ achieve >90.3% accuracy on Caltech101, whereas TPT and TDA only achieve 87.02% and 89.70%, respectively. --- **Comment (3)**: “*Is the proposed method applicable to non-CLIP models?*” **Response (3)**: Thank you for pointing this out. While our DPE method can theoretically be applied to various contrastively pre-trained vision-language models, such as ALIGN, LiT, and CoCa, most of these models are closed-source, preventing us from evaluating the effectiveness of our method on these models. Our method can certainly be applied to open-source CLIP-style VLMs, such as OpenCLIP [CVPR ’23], MetaCLIP [ICLR ’24], and DFN [ICLR ’24]. 
Here, we use OpenCLIP (ViT-L/14) as an example to compare our method with TDA: | Method | ImageNet | ImageNet-A | ImageNet-V2 | ImageNet-R | ImageNet-S | Average | OOD Average | | ------------------- | --------- | ---------- | ----------- | ---------- | ---------- | --------- | ----------- | | OpenCLIP (ViT-L/14) | 74.04 | 53.88 | 67.69 | 87.42 | 63.18 | 69.31 | 68.13 | | TDA | 76.28 | **61.27** | 68.42 | 88.41 | 64.67 | 71.81 | 70.69 | | **DPE (Ours)** | **77.87** | 61.09 | **70.83** | **89.18** | **66.33** | **73.06** | **71.86** | We can see that our DPE still outperforms TDA by 1.25% on average across 5 datasets, showcasing that our method generalizes well to larger-scale VLMs. Besides, in this work, we followed common practices [1-3] in this field by using the CLIP model as a representative VLM due to its simplicity in design and wide applicability, without loss of generality [3]. We have specified the scope to CLIP in the abstract and introduction sections of the revised manuscript. [1] Learning to Prompt for Vision-Language Models, IJCV 2022. [2] Task Residual for Tuning Vision-Language Models, CVPR 2023. [3] Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models, NeurIPS 2022. --- **Comment (4)**: “*Can a simplified version of the method be presented by removing certain components? Specifically, which components are the most influential?*” **Response (4)**: Thanks for your insightful question. Following your comments, we conducted additional ablation experiments to analyze the individual effect of each component: | # | VPE | TPE | PRL | ImageNet Acc. 
| | :--: | :------: | :------: | :------: | :-----------: | | 1 | &#10008; | &#10008; | &#10008; | 59.81 | | 2 | &#10004; | &#10008; | &#10008; | 61.83 | | 3 | &#10008; | &#10004; | &#10008; | - | | 4 | &#10008; | &#10008; | &#10004; | 61.59 | | 5 | &#10004; | &#10004; | &#10008; | 61.90 | | 6 | &#10004; | &#10008; | &#10004; | 62.93 | | 7 | &#10008; | &#10004; | &#10004; | 62.48 | | 8 | &#10004; | &#10004; | &#10004; | 63.41 | VPE, TPE, and PRL refer to visual prototype evolution, textual prototype evolution, and prototype residual learning, respectively. Note that Experiment #3 is invalid since TPE requires optimized textual prototypes t* from PRL. As shown, VPE is the most influential component, providing a ~2% improvement over zero-shot CLIP. The other two components also contribute significantly to the overall performance. --- If you have additional comments or concerns, please let us know. Best, Authors --- Rebuttal Comment 1.1: Comment: I appreciate the authors for the detailed response and additional experiments. It would be great to include some of these results in the main paper. My primary concern remains the marginal improvement of the proposed approach given its complexity. Despite the performance, I still lean towards recommending acceptance of the paper for its technical contribution. --- Rebuttal 2: Comment: Dear Reviewer owvr, Thank you for your encouraging positive recommendation. We greatly appreciate your recognition of the technical contributions of our work. We will incorporate these additional experiments in our revision carefully. Thank you again for your valuable feedback on our paper. Best, Authors Title: Thanks for your positive recommendation
Summary: 1. This paper proposes a novel test-time adaptation method (DPE) for VLMs that captures multi-modal representations for target classes during test time. 2. This paper introduces and optimizes learnable residuals for each test sample to align the prototypes across modalities. 3. The results of this paper are promising while maintaining competitive computational efficiency. Strengths: 1. The idea is very clear and easy to follow. This paper proposes that former methods focus only on a single modality, whereas the proposed method DPE can learn from both modalities. Meanwhile, DPE accumulates task-specific knowledge in a residual manner, which maintains computational efficiency. 2. The experimental results are promising. Compared with other related methods, DPE achieves improvement on 15 benchmark datasets, and DPE shows improved computational efficiency compared with TPT and DiffTPT, which is practical for test-time scenarios. 3. The writing and presentation of this paper are good. Weaknesses: 1. The core idea is not novel enough. Many works concurrent with TDA, DPE's main comparison method, have shown a similar idea: DMN-ZS [1] utilizes the information of both modalities without demanding any backpropagation, and TPS [2], which is cited in the paper, proposes that residual prototypes are useful for test-time prompt learning. Though the specific designs of DPE are different from them, the core idea is somewhat similar. 2. More analysis of hyperparameters is needed. Though the introduction of visual information is intuitively helpful, hyperparameters such as (top-)M may noticeably affect performance, which should be explored comprehensively. 3. More analysis of loss functions is needed. Alignment loss is helpful for performance improvement, but too much alignment leads to a performance drop; the reason should be discussed. 4. Efficiency comparison is not comprehensive. 
Though DPE performs faster than TPT and DiffTPT, an efficiency comparison with backpropagation-free methods (such as TDA, TPS, and DMN) should be provided. [1] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models. [2] Just shift it: Test-time prototype shifting for zero-shot generalization with vision-language models. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Many concurrent works (such as DMN-ZS, TPS) have shown similar ideas, and comparisons with them, including the core idea design and performance, should be provided. 2. A detailed analysis of hyperparameter M should be provided, and how to choose M to obtain robust performance in test-time scenarios should also be discussed. 3. A detailed analysis of loss functions should be provided. As shown in the paper, more alignment brings a performance drop; the reason should be discussed. By the way, should the text prototype be aligned in the direction of the visual prototype? Or does the visual prototype only need to be aligned in the direction of the text? Are there any related experiments? 4. Efficiency comparison with backpropagation-free methods (such as TDA, TPS, and DMN) should be provided. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. The core idea is not novel enough, and it seems to be a combination of several existing ideas. 2. Detailed analysis of hyperparameters and loss functions is missing. 3. Efficiency comparison is not comprehensive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Sqqz, We really appreciate your thorough review of our paper! --- **Comment (1)**: “*Comparisons with DMN-ZS [1] and TPS [2] including the core idea design and performance should be provided*.” **Response (1)**: Thank you for pointing this out. We acknowledge that our DPE method shares some high-level ideas with DMN-ZS and TPS. However, there are some key distinctions. Here, we discuss the differences between our method and these two approaches, respectively: - **DMN-ZS**: While DMN(-ZS) also utilizes historical test samples to enhance the test-time generalizability of VLMs, it only updates the visual memory online while keeping the textual features/classifier unchanged. Therefore, we consider DMN similar to TDA, as both methods adapt CLIP only from a uni-modal (visual) perspective. In contrast, our DPE is designed to progressively capture more accurate **multi-modal representations** on the fly with test samples. - **TPS**: Similarly, since TPS only updates the textual prototypes during testing, we categorize it with TPT and DiffTPT, which also account only for uni-modal (textual) adaptation. Moreover, TPS has similar limitations to TPT, as discussed in Lines 46-49, where it treats each test instance independently, resetting to the original model for each new sample. In contrast, our DPE can **accumulate** task-specific knowledge as more test samples are processed. 
We have also included a performance comparison with these methods on robustness to natural distribution shifts: | Method | ImageNet | ImageNet-A | ImageNet-V2 | ImageNet-R | ImageNet-S | Average | OOD Average | | -------------- | :-------: | :--------: | :---------: | :--------: | :--------: | :-------: | :---------: | | CLIP-RN50 | 58.16 | 21.83 | 51.41 | 56.15 | 33.37 | 44.18 | 40.69 | | TPS | 61.47 | **30.48** | 54.96 | 62.87 | 37.14 | 49.38 | 46.36 | | DMN-ZS | **63.87** | 28.57 | 56.12 | 61.44 | 39.84 | 49.97 | 46.49 | | **DPE (Ours)** | 63.41 | 30.15 | **56.72** | **63.72** | **40.03** | **50.81** | **47.66** | As shown, our proposed DPE outperforms TPS and DMN-ZS by 1.43% and 0.84% on average across 5 datasets, demonstrating the superiority of our method. We have updated the results in Table 1. --- **Comment (2)**: “*A detailed analysis of hyperparameter M should be provided, and how to choose the M … should be discussed*.” **Response (2)**: Thank you for your insightful comments. In Figure 4 (Left), we provided a sensitivity analysis of $M$ on the Caltech101 dataset. Following your comments, we further analyze the impact of hyperparameter $M$ on larger-scale ImageNet: | Values of $M$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | | ------------- | ----- | ----- | --------- | ----- | ----- | ----- | ----- | | ImageNet Acc. | 62.91 | 63.17 | **63.41** | 63.34 | 63.29 | 63.28 | 63.21 | Similar to the results on Caltech101, we observe that the performance increases by 0.5% when adjusting $M$ from 1 to 3 but exhibits a slight decrease of 0.2% when further increasing $M$ to 7. We speculate that initially increasing the value of $M$ allows our priority queue to collect more diverse features and obtain representative prototypes. However, further increasing it leads to the inclusion of more low-confidence noisy samples, which has adverse effects. --- **Comment (3)**: “*... more alignment brings performance drop, the reason should be discussed. 
By the way, should the text prototype be aligned in the direction of the visual prototype? Are there any related experiments?*” **Response (3)**: The alignment loss $\mathcal{L}\_{\mathsf{align}}$ acts from a global perspective by promoting consistent multi-modal prototypes, ensuring that the representations are aligned for all subsequent test samples. The self-entropy loss $\mathcal{L}_{\mathsf{aug}}$, in contrast, greedily targets improving individual sample predictions by penalizing high-entropy predictions across augmented views. Therefore, overly emphasizing alignment may cause the method to prioritize global consistency over the refinement of individual predictions, leading to less accurate results for specific samples. Besides, rather than solely aligning the visual prototypes in the direction of the text or vice versa, our DPE mutually aligns the textual and visual prototypes, updating both prototypes simultaneously. In Figure 4 (Middle), we keep either the textual or visual prototypes fixed and only align the prototypes from the other modality towards the fixed prototypes. We demonstrated that optimizing prototypes from both modalities achieves the best performance gain of 1.52%. --- **Comment (4)**: “*The efficiency comparison with backpropagation-free methods should be provided*.” **Response (4)**: Thank you for your insightful feedback. We would like to clarify a detail regarding your comment: TPS actually requires backpropagation with self-entropy loss for each sample, similar to TPT.
Following your comment, we compare the inference time per image for each method (using a single A6000 Ada GPU): | Method | CLIP | TPT | DiffTPT | TPS | TDA | TDA* | DMN-ZS | DMN-ZS* | Ours | Ours* | | ------------------- | ---- | ----- | ------- | ---- | ---- | ---- | ------ | ------- | ---- | ----- | | Inference time (ms) | 10.1 | 668.3 | >2000 | 65.2 | 13.4 | 73.5 | 11.8 | 61.6 | 66.7 | 132.1 | In the table, $^*$ indicates that pseudo-label predictions are enhanced with $N=64$ augmented views and confidence selection, which increases inference time. As shown, our DPE has an inference speed that is half that of TPS, TDA*, and DMN-ZS*. However, as we presented in our **Response (1)**, our DPE exhibits a performance gain of over 0.8% across 5 datasets compared to these methods. --- If you have additional comments or concerns, please let us know. Best, Authors --- Rebuttal Comment 1.1: Comment: Dear Reviewer Sqqz, We greatly appreciate the time you have dedicated and the valuable feedback you have provided. As the discussion period draws to a close (Tue, August 13), please kindly let us know if there are any remaining questions. We will be more than happy to provide any further details or clarifications. Best, Authors --- Rebuttal Comment 1.2: Title: Official Comment by Reviewer Sqqz Comment: Thanks for your detailed response. The novelty of the core idea is still my primary concern, and I have a few questions about the response. (a) Which model (DMN-ZS or DMN-ZS*) in response (4) corresponds to DMN-ZS in response (1)? (b) Can you supplement performance comparison based on CLIP-ViT-B/16? You can just combine existing results into one table. --- Rebuttal 2: Title: Many thanks for your follow-up feedback Comment: Dear Reviewer Sqqz, Thank you for providing further detailed comments. --- **Comment (a)**: “*Which model (DMN-ZS or DMN-ZS\*) in response (4) corresponds to DMN-ZS in response (1)?*” **Response (a)**: DMN-ZS* refers to the DMN-ZS mentioned in **Response (1)**.
We evaluated its performance and efficiency using the official code implementation of DMN-ZS, where the pseudo-label predictions are by default enhanced by $N=64$ augmented views and confidence selection. For the DMN-ZS variant in **Response (4)**, we manually set $N=1$ and reported the results correspondingly. --- **Comment (b)**: “*Can you supplement performance comparison based on CLIP-ViT-B/16? You can just combine existing results into one table.*” **Response (b)**: We apologize for not including the full results in the rebuttal post due to character limit. Please find the complete comparisons below: | Method | ImageNet | ImageNet-A | ImageNet-V2 | ImageNet-R | ImageNet-S | Average | OOD Average | | -------------- | :-------: | :--------: | :---------: | :--------: | :--------: | :-------: | :---------: | | CLIP-RN50 | 58.16 | 21.83 | 51.41 | 56.15 | 33.37 | 44.18 | 40.69 | | TPT | 60.74 | 26.67 | 54.70 | 59.11 | 35.09 | 47.26 | 43.89 | | DiffTPT | 60.80 | **31.06** | 55.80 | 58.80 | 37.10 | 48.71 | 45.69 | | TDA | 61.35 | 30.29 | 55.54 | 62.58 | 38.12 | 49.58 | 46.63 | | TPS | 61.47 | 30.48 | 54.96 | 62.87 | 37.14 | 49.38 | 46.36 | | DMN-ZS | **63.87** | 28.57 | 56.12 | 61.44 | 39.84 | 49.97 | 46.49 | | **DPE (Ours)** | 63.41 | 30.15 | **56.72** | **63.72** | **40.03** | **50.81** | **47.66** | | Method | ImageNet | ImageNet-A | ImageNet-V2 | ImageNet-R | ImageNet-S | Average | OOD Average | | -------------- | :-------: | :--------: | :---------: | :--------: | :--------: | :-------: | :---------: | | CLIP-ViT/B-16 | 66.73 | 47.87 | 60.86 | 73.98 | 46.09 | 59.11 | 57.20 | | TPT | 68.98 | 54.77 | 63.45 | 77.06 | 47.94 | 62.44 | 60.81 | | DiffTPT | 70.30 | 55.68 | 65.10 | 75.00 | 46.80 | 62.28 | 60.52 | | TDA | 69.51 | **60.11** | 64.67 | 80.24 | 50.54 | 65.01 | 63.89 | | TPS | 70.19 | 60.08 | 64.73 | 80.27 | 49.95 | 65.04 | 63.76 | | DMN-ZS | **72.25** | 58.28 | 65.17 | 78.55 | **53.20** | 65.49 | 63.80 | | **DPE (Ours)** | 71.91 | 59.63 | **65.44** | 
**80.40** | 52.26 | **65.93** | **64.43** | As shown, our proposed DPE still outperforms TPS and DMN-ZS by 0.89% and 0.44% on average across 5 datasets using the ViT-B/16 backbone of CLIP. We have updated the results in Table 1 of the revised manuscript. For more performance comparisons, please also kindly consider referring to Table B in the attached one-page PDF in the general response, where we further compare our method with TDA using the OpenCLIP ViT-L/14 backbone and observe consistent performance improvements. --- Finally, we would like to summarize the key novelties of this work: - To the best of our knowledge, our work is the first to capture domain-specific knowledge from a **multi-modal** perspective for test-time adaptation of VLMs. Specifically, we achieve this by evolving two sets of prototypes from both textual and visual modalities to progressively capture more accurate multi-modal representations for target classes during test time. - We proposed textual and visual prototype evolution to extract historical knowledge from previous test samples, enabling **effective accumulation** of knowledge over time. - We further introduced prototype residual learning with an **alignment constraint** to enhance **consistent** multi-modal representations and ensure alignment of prototypes across modalities. - Our proposed DPE achieves state-of-the-art performance across 15 diverse datasets while also exhibiting competitive test-time efficiency. --- We hope that our responses have addressed your concerns. Please kindly let us know if you have any further questions. Best, Authors --- Rebuttal Comment 2.1: Title: Official Comment by Reviewer Sqqz Comment: Thanks for your response. Most of my concerns about the experiment have been addressed, but the performance improvement (+0.9%/+0.4%) is not very promising considering the increase in inference time (+100%, nearly double).
Meanwhile, I maintain my opinion on novelty, so I choose to maintain my score as Borderline accept.
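The two-term objective discussed in Response (3) above can be sketched as follows. The softmax entropy over augmented views, the cosine-based alignment term, and the weighting `lam` are illustrative assumptions for exposition, not the paper's exact loss definitions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_loss(logits_per_view):
    """Self-entropy of the prediction averaged over augmented views
    (an L_aug-style per-sample term)."""
    p = softmax(logits_per_view).mean(axis=0)
    return float(-(p * np.log(p + 1e-12)).sum())

def alignment_loss(text_protos, visual_protos):
    """1 - mean cosine similarity between paired text/visual prototypes
    (an L_align-style global consistency term)."""
    t = text_protos / np.linalg.norm(text_protos, axis=1, keepdims=True)
    v = visual_protos / np.linalg.norm(visual_protos, axis=1, keepdims=True)
    return float(1.0 - (t * v).sum(axis=1).mean())

def total_loss(logits_per_view, text_protos, visual_protos, lam=0.1):
    # lam trades off per-sample confidence (entropy) against
    # global cross-modal consistency (alignment)
    return entropy_loss(logits_per_view) + lam * alignment_loss(text_protos, visual_protos)
```

The trade-off in Response (3) corresponds to the choice of `lam`: a large value prioritizes global prototype consistency at the expense of refining individual predictions.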
Rebuttal 1: Rebuttal: Dear AC and Reviewers, We are sincerely grateful to you all for dedicating time and effort to providing these detailed and thoughtful reviews, which helped us improve the quality of our paper. We also want to thank all the reviewers for your **unanimous recognition and positive recommendations** of this work. Here, apart from the point-by-point responses to each reviewer, we would like to summarize the contributions of this work and highlight our new results added during the rebuttal phase. --- We are delighted that the reviewers appreciate and recognize the following strengths and contributions of this work: - The proposed method addresses a crucial challenge in real-world applications by enabling efficient and effective test-time adaptation without the need for annotated samples from the target domain. **[SVvN]** - The motivation and idea are very clear and easy to follow, and the method design is reasonable. **[Sqqz, vqJd]** - The introduction of dual prototypes (textual and visual) for evolving task-specific knowledge at test time is a novel concept. **[SVvN]** - Comprehensive experiments on 15 diverse datasets verify that our proposed DPE significantly improves the generalization capabilities of VLMs. DPE also shows improved computational efficiency compared with TPT and DiffTPT. **[Sqqz, SVvN, vqJd]** - The paper is well-written, the structure is clear, and the methodology is well explained. **[All Reviewers]** --- In this rebuttal, we have included the following discussions and experiments to address reviewers’ comments: - We discuss the unique contributions of our work compared to DMN-ZS and TaskRes. **[Sqqz, owvr, vqJd]** - **[Table A]** We provide further performance comparisons with DMN-ZS and TPS to verify the effectiveness of our method. **[Sqqz, vqJd]** - **[Table B]** We test our DPE method on the larger-scale OpenCLIP model with the ViT-L/14 backbone to verify that our method generalizes well to other VLMs.
**[owvr, SVvN]** - **[Table C-F]** We conduct more detailed ablation studies analyzing different algorithm components, two loss terms, update steps, and hyperparameter $M$, explaining their different impacts in greater detail. **[Sqqz, owvr, vqJd]** - **[Table G-H]** We offer a more detailed analysis of our computational overhead in terms of both inference time and GPU memory. **[Sqqz, SVvN, vqJd]** --- Again, thank you for your time in reviewing our work! Best, Authors Pdf: /pdf/27527ba6054fed04bbef9f51b1ed1b8b0e77141e.pdf
NeurIPS_2024_submissions_huggingface
2024
RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation
Accept (poster)
Summary: The paper proposes RFLPA, which deploys a robustness algorithm based on cosine similarity detection on SecAgg, defending against privacy attacks from the server and poisoning attacks from clients. Furthermore, the paper reduces communication and computation costs through packed Shamir secret sharing and dot-product aggregation. Strengths: 1. The structure of the paper is clear, providing a comprehensive and detailed explanation of how RFLPA deploys FLTrust's cosine similarity detection scheme on SecAgg and reduces SecAgg's overhead through packed Shamir secret sharing and dot-product aggregation. 2. Experiments show that, compared to the previous work BREA, RFLPA reduces communication and computation costs by 75% while maintaining the same accuracy. 3. The paper also provides a systematic theoretical analysis and proof of RFLPA's convergence and the cryptographic protocols used. Weaknesses: 1. The paper seems to achieve practical improvements over BREA mainly by reducing communication costs, but the phase of computing cosine similarity among clients introduces additional communication costs. The advantages over the other state-of-the-art MPC-based solution ELSA [1], the HE & ZKP-based solution RoFL [2], and the blockchain-based solution PBFL [3] are not explained in detail in the paper. 2. Regarding Byzantine robustness, the paper adopts the FLTrust approach, assuming the server holds a clean root dataset. However, this requires a substantial number of clean public samples of all labels, making this assumption impractical in real-world scenarios, which conflicts with the goal of a practical PPFL solution. 3. The RFLPA framework seems questionable. As described in Section 3.2, the security goal is to defend against both passive and active inference attacks. But the packed secret shares of $g_i$ generated by client i are sent to the server, and the server transmits them to the other clients using a broadcast channel.
This means that the server has all shares of the secret $g_i$ and can recover it. So, what role does the server play? 4. In the experimental section: a) the robustness testing of BREA only includes two types of Byzantine attacks; the paper lacks strong experiments demonstrating how BREA performs under more subtle attacks like the Krum attack and BadNets. b) The communication overhead of RFLPA cannot be small, because all clients are involved in all computations of the cosine similarity $s^j_k$ for any user pair (i,j); the authors should give detailed results. 5. The notations and formulations should be clearer. a) Line 95, $D$ is not predefined. b) Line 210, $\tilde{\mathbf{g}}_i$ should be $\bar{\mathbf{g}_i}$. [1] Rathee, Mayank, et al. "Elsa: Secure aggregation for federated learning with malicious actors." 2023 IEEE Symposium on Security and Privacy (SP). IEEE, 2023. [2] Lycklama, Hidde, et al. "Rofl: Robustness of secure federated learning." 2023 IEEE Symposium on Security and Privacy (SP). IEEE, 2023. [3] Miao, Yinbin, et al. "Privacy-preserving Byzantine-robust federated learning via blockchain systems." IEEE Transactions on Information Forensics and Security 17 (2022): 2848-2861. [4] Hao, Meng, et al. "Efficient, private and robust federated learning." Proceedings of the 37th Annual Computer Security Applications Conference. 2021. Technical Quality: 3 Clarity: 3 Questions for Authors: see the above Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: no Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
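For reference, the FLTrust-style aggregation rule that the review and the rebuttal below discuss can be sketched in the clear as follows. This is a toy numpy version under assumed conventions (ReLU-clipped cosine trust scores and norm rescaling to the server gradient); RFLPA's contribution is evaluating these quantities under secret sharing so that the server never sees plaintext client gradients.

```python
import numpy as np

def fltrust_aggregate(client_grads, server_grad):
    """Toy FLTrust-style aggregation in the clear (illustrative sketch).

    Trust score = ReLU(cosine(g_i, g_0)), where g_0 is the server gradient
    computed on the clean root dataset; each accepted gradient is rescaled
    to the server gradient's norm before the trust-weighted average.
    """
    g0 = np.asarray(server_grad, dtype=float)
    g0_norm = np.linalg.norm(g0)
    scores, rescaled = [], []
    for g in client_grads:
        g = np.asarray(g, dtype=float)
        cos = g @ g0 / (np.linalg.norm(g) * g0_norm + 1e-12)
        scores.append(max(cos, 0.0))  # clip negative similarity to zero trust
        rescaled.append(g * (g0_norm / (np.linalg.norm(g) + 1e-12)))
    scores = np.array(scores)
    if scores.sum() == 0:
        return g0  # no trusted client: fall back to the server update
    return (scores[:, None] * np.stack(rescaled)).sum(axis=0) / scores.sum()
```

A gradient pointing opposite to the server's root-dataset gradient receives zero weight, which is the mechanism that filters the poisoned updates discussed in the weaknesses above.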
Rebuttal 1: Rebuttal: Dear Reviewer VWie, Thanks for your time and valuable comments. We hope our responses below address your concerns. **W1-a&W4-b: Additional communication cost of cosine similarity computation** The additional communication cost introduced by cosine similarity computation is small compared to the communication overhead reduced by our approach. We utilize secret sharing (SS) instead of homomorphic encryption (HE) to compute the cosine similarity since: (1) Existing HE-based frameworks rely on the unrealistic assumption of 2 non-colluding parties. (2) According to Appendix L.7.1, the HE-based methods take massive computation time, which far outweighs the communication time introduced by SS-based frameworks. We leverage packed secret sharing to reduce the communication overhead. Appendix H presents the analysis of the overhead, and here we explain in more detail: (1) For N users and M-dimensional gradients, we pack O(N) elements within a single polynomial. Then each user transmits O(max(M/N,1)\*N)=O(M+N) (instead of O(MN)) elements. (2) By computing cosine similarities with other users, each user obtains O(N) partial dot products in total. (3) During dot product aggregation, each user secret shares the partial dot products. To reduce overhead, they pack O(N) elements within a single polynomial. Then each user communicates O(N\*N/N)=O(N) elements. (4) Each user locally computes final secret shares of the cosine similarity and uploads them to the server. Since the final shares are secret shares of dot products obtained by packing O(N) secret shares, each user only needs to send O(N/N)=O(1) elements to the server. **Benefiting from the packed secret sharing algorithm, the total per-user communication cost is reduced to O(M+N), a significant improvement over the O(MN + N) cost in BREA.** [**Table R1.** Breakdown of per-user communication cost. Cosine similarity computation incurs O(N) additional communication cost, lower than the original scale O(M+N).]
|Stage|Communication cost|Incurred by cosine similarity computation| |-|-|-| |Download parameters|O(M)|No| |Secret share local gradient|O(M+N)|No| |Upload aggregated update|O(M/N+1)|No| |Sum (original)|O(M+N)| |Secret share partial dot product|O(N)|Yes| |Upload final share of dot product|O(1)|Yes| |Receive trust score|O(N)|Yes| |Sum (cosine similarity)|O(N)| **W1-b: Advantages over other solutions** Thanks for your suggestion. We summarize the advantages of our method in Table R2, with explanations as follows: - ELSA[1]: Firstly, ELSA relies on the vulnerable assumption of 2 non-colluding servers. Secondly, ELSA utilizes a naive and limited method to filter poisonous gradients, i.e., bounding the $l_2$ norm of each gradient. It is impractical to generalize their framework to more advanced defense methods such as Krum. In contrast, our framework is compatible with defense methods such as FLTrust and Krum. - RoFL[2]: Similar to ELSA, RoFL is designed specifically for a naive robust aggregation method, norm bounding. Furthermore, RoFL relies on expensive zero-knowledge proofs. It is estimated to take 3.6 hours per round to train a 1.6M MNIST classifier, while RFLPA takes within 10 minutes in the same setting. - PBFL[3]: We have explained the limitations of PBFL in Appendix L.4 and L.7.1, in terms of the assumption of 2 non-colluding parties and expensive computation cost. - SecureFL[4]: SecureFL also relies on the unrealistic assumption of two non-colluding parties. Additionally, it utilizes expensive MPC and HE methods. The computation time for training a 1.6M MNIST classifier is estimated to be 2.5 hours per round, much higher than that of RFLPA. [**Table R2.** Comparison of various solutions. More \* implies a higher level of computation overhead.]
| |Collusion threshold|Robust Aggregation Rule|Computation Overhead| |-|-|-|-| |ELSA|1|Norm bound|\*\*| |RoFL|O(N)|Norm bound|\*\*\*| |PBFL|1|FLTrust|\*\*\*\*| |SecureFL|1|FLTrust|\*\*\*| |RFLPA|O(N)|FLTrust|\*\*| **W2: Assumption on clean public samples** Thanks for your comment. Firstly, the required sample data is of small size, e.g., 200 samples. The global model is robust even when the root dataset diverges slightly from the overall training data distribution. Secondly, we have proposed some remedies in the absence of such a dataset (see response to Q1 in the global rebuttal), including comparison with global weights and Krum. **W3: The server has all shares of the secret gi and can recover it** Thanks for your comments. We have explained in Section 4.6 that **the secret shares are encrypted with the clients' secret keys before being sent to the server, and thus the server is ignorant of the values of the secret shares given the IND-CPA security of the encryption system**. Specifically, the clients establish secret keys with each other through the Diffie-Hellman key exchange protocol. During secret sharing, each client u uses the common key $k_{uv}$ to encrypt the message sent to client v, and client v can decrypt the ciphertext with the same key. **W4: Experiments on subtle attacks** Thanks for your suggestion. In our response to Q2 in the global rebuttal, we added experiments on the subtle attacks. The empirical results validate RFLPA's robustness against the Krum attack, BadNets, and scaling attacks. **W5**: Thanks for pointing these out. We will address these issues. [1]Rathee, Mayank, et al. "Elsa: Secure aggregation for federated learning with malicious actors." 2023 IEEE Symposium on Security and Privacy (SP). IEEE, 2023. [2]Lycklama, Hidde, et al. "Rofl: Robustness of secure federated learning." 2023 IEEE Symposium on Security and Privacy (SP). IEEE, 2023. [3]Miao, Yinbin, et al. "Privacy-preserving Byzantine-robust federated learning via blockchain systems."
IEEE Transactions on Information Forensics and Security 17 (2022): 2848-2861. [4]Hao, Meng, et al. "Efficient, private and robust federated learning." Proceedings of the 37th Annual Computer Security Applications Conference. 2021. --- Rebuttal 2: Comment: Dear Reviewer, We are grateful for your constructive feedback, and we appreciate your recognition of the *comprehensive and detailed explanation, effectiveness, and systematic theoretical analysis* of our paper. Below we summarize the main concerns we addressed in the rebuttal: **Additional communication cost of cosine similarity computation**: We provide a detailed analysis of the communication cost at each stage. We show that the additional communication cost introduced by cosine similarity computation is small compared to the communication overhead reduced by our approach. **Advantages over other solutions**: We summarize the advantages of our solution over the provided baselines along three dimensions: collusion threshold, robust aggregation rule, and computation overhead. **Assumption on clean public samples**: We clarify that the required clean root dataset can be small and may diverge slightly from the overall training data distribution. We also propose several remedies for the case where no clean root dataset is available, even at the small required size. **The server has all shares of the secret and can recover it**: We clarify that the secret shares are encrypted before being sent to the server, and we explain the encryption process. **Experiments on subtle attacks**: We conduct additional experiments against the Krum attack, BadNets, and scaling attacks, and will include the results in our paper. Once more, we thank the Reviewer for the constructive feedback. We sincerely hope that the provided answers address the Reviewer's concerns and that the score can be reconsidered.
Best Regards, Authors of Submission 11258 --- Rebuttal 3: Comment: Considering the higher scores given by other reviewers, I reread the paper and the rebuttal, and still have several big questions. 1. The algorithm part is hard to follow because the symbolic representation is confusing. Maybe you can clarify some of these cases here. $\langle \mathbf{g_i} \rangle$ denotes the secret sharing of $\mathbf{g_i}$. So $\mathbf{V}^i=\{v_{jk}^i\}$ is also the secret sharing of $\mathbf{g_i}$? In Algorithm 3 RobustSecAgg, $\{s_{ij} \}$ is the packed secrets of $\mathbf{g_i}$, and $\{v_{ij} \}$ is the witness. And again, $cs_k^j$ denotes the $j^{th}$ share of the cosine similarity between $\mathbf{g_k}$ and $\mathbf{g_0}$? Here $k$ denotes the index of clients, and then in the next part “Step 1: Secret resharing of partial dot product”, $s_{jk}^i$ denotes the share sent to user $j$ for the $k^{th}$ group of elements. Here, $k$ denotes the index of the group of elements. Maybe fixing the notations can help the readers. Another point: $\ell$ is the number of elements packed into one polynomial, while in “Step 1: Secret resharing of partial dot product”, $p$ is used for the number of secrets packed. It's very hard for me to follow your algorithm. 2. Efficiency of RFLPA. The design goals include efficiency, which is mainly achieved by packed secret sharing. But as we can see, the clients are burdened with a large number of computational tasks. That includes secret sharing, encryption and decryption of shares, and addition, scalar multiplication, and multiplication on packed secret sharing. IMPORTANT: The multiplication on packed secret sharing will generate a degree-$(d_1+d_2)$ (or $2d$) result, and more operations need to be done to keep a degree-$d$ form. That is the main source of communication cost for MPC operations, and this paper didn't discuss it.
You mentioned “ELSA and PBFL and SecureFL rely on the unrealistic assumption of two non-colluding parties”, but RFLPA just puts those computation tasks on the clients; maybe that is more unfriendly for the clients (mobile or other edge devices). Existing works can be extended to multiple parties using Shamir secret sharing. And this paper didn't conduct an experimental comparison under a WAN/LAN setting. 4 communication rounds per iteration (not including the communication incurred by multiplication on packed secret sharing) would be a huge burden for WAN. --- Rebuttal Comment 3.1: Title: Reply to Reviewer's Comment Comment: Dear Reviewer VWie, Thank you for your reply and time. We hope our responses below address your additional concerns. **Q1: The algorithm part is hard to follow because the symbolic representation is confusing.** Thanks for your comment. We apologize for some notation inconsistencies in Algorithm 3. The secret shares of $g_i$ should be $\mathbf{V}_i=v_{jk}^i=\mathbf{v}_{ij}$, and the witness is given by $w_{jk}^i=\mathbf{w}_{ij}$. We will fix the corresponding sentences in Algorithm 3 as follows: - Generate packed secrets $\{\mathbf{v}_{ij}\}_{j\in \mathcal{U}_0}$, commitments $\mathcal{C}$ and witness $\{\mathbf{w}_{ij}\}_{j\in \mathcal{U}_0}$ for $\mathbf{g}_i$ - Recover $(\{\mathbf{v}_{ji}\}_{j\in \mathcal{U}_1\backslash i}, \{\mathbf{w}_{ji}\}_{j\in \mathcal{U}_1\backslash i})=\mathbf{Dec}(\mathbf{c}_{ji},k_{ji})$, and verify the secret shares $\{\mathbf{v}_{ji}\}_{j\in \mathcal{U}_1\backslash i}$ The reviewer's understanding of $cs_k^j$ is correct. To avoid the confusion in index $k$, we will fix the notation in Table 3 and Section 4.5 (lines 210 to 214) as follows: $cs_k^j$ -> $cs_j^i$, and $nr_k^j$ -> $nr_j^i$. In that way, $cs_j^i$ would denote the $i^{th}$ share of the cosine similarity between $\mathbf{g}_j$ and $\mathbf{g}_0$. For the interpretation of $l$ and $p$, we have their descriptions in Table 3.
In particular, $l$ denotes the number of secrets packed in a polynomial for the gradient, and $p$ denotes the number of secrets packed in a polynomial for shares of the partial dot product. We sincerely hope that the above clarification and fixing of the notation issues help in understanding further details of the algorithm. **Q2: Efficiency of RFLPA** *Q2-a: The clients are burdened with a large number of computational tasks* **Analysis of clients' computation cost**: Referring to Appendix H, we have provided a stage-wise computation analysis in detail. We also summarize the computation cost for the operations the reviewer mentioned in Table R1. [**Table R1.** Breakdown of per-user computation cost for the operations mentioned by the reviewer.] |Stage|Computation cost| |-|-| |Create packed secret shares|$O((M+N)\log^2 N)$| |Encryption and decryption of shares|$O(M+N)$| |Compute shares of partial dot product (addition and multiplication on packed secret sharing)|$O(M+N)$| |Create packed secret shares of partial dot product|$O(N \log^2 N)$| |Derive final secret shares of dot product (addition, scalar multiplication, and decoding)|$O(N^2\log^2 N)$| Therefore, the clients' computation cost is $O((M+N^2)\log^2 N)$. **Improvement in computation time**: It's important to note that our framework focuses on improving efficiency over existing solutions that combine SecAgg with defense strategies against poisoning attacks. RFLPA significantly reduces the total computation cost compared with: (1) HE-based solutions, and (2) BREA, which leverages a secret sharing method, as demonstrated by our empirical analysis. This is a substantial improvement for the research field that integrates privacy and robustness. **Friendly adaptation to devices with low computation power**: As we will discuss later in *Q2-c*, our framework can be easily adapted to reduce the computation cost for clients with low resources, such as mobile devices.
In that case, the low-resource clients only need to generate secret shares, and the subsequent operations will be done by the high-resource clients. *Q2-b: The multiplication on packed secret sharing will generate a degree-2d result, and more operations need to be done to keep a degree-d form.* In RFLPA, there's no need to conduct further operations on the degree-2d results, since **Dot Product Aggregation inherently converts the degree-2d partial dot product shares into degree-d final product shares**. After computing the partial shares of dot products, each $cs_j^i$ (or $nr_j^i$) is a degree-2d share vector. During Dot Product Aggregation, we conduct secret resharing and local computation on the shares. Referring to Appendix G (Explanation of Secret Re-sharing), the $Chop_d$ matrix is leveraged to obtain a degree-d share vector $\mathbf{h}_k^i$ of the final dot product. We will emphasize this point in our paper. --- Reply to Comment 3.1.1: Title: Reply to Reviewer's Comment (Continued) Comment: *Q2-c: You mentioned "ELSA and PBFL and SecureFL rely on the unrealistic assumption of two non-colluding parties", RFLPA just put those computation tasks on the clients, maybe that is more unfriendly for the clients (mobile or other edge devices). Existed works can be extended to multi-parties using shamir secret sharing.* Firstly, extending existing works (ELSA, PBFL, and SecureFL) to multiple parties requires significant changes to the algorithms, and that should be another solution/direction to be explored. Furthermore, **our framework can be easily adapted to account for the heterogeneous computation power of each client**. For example, the server can select the clients with more computation power to participate in rounds 2, 3 & 4. In that way, clients with weaker computing power could merely generate the secret shares of their local gradients and upload them to the server.
This can be achieved by simply limiting the participating clients in rounds 2, 3 & 4 to the computationally powerful clients, thereby improving the user experience for edge devices with low computation power. *Q2-d: Experimental comparison under WAN/LAN setting* Firstly, **performing experiments under a WAN/LAN setting is unnecessary, since the communication time is mainly determined by the message size**. In WAN, the extra communication time for multiple rounds of communication mainly comes from latency, i.e., the time it takes for a message to travel from the sender to the receiver. The latency is negligible compared to the time it takes to transfer the message. Specifically, the latency in a WAN setting ranges from 20 ms to 700 ms (within 1 second) depending on the geographical distance. On the other hand, the time to transfer the message (ignoring latency in the WAN setting) is around 6.4 seconds for RFLPA and 52 seconds for BREA, assuming 100 clients using a 1.6MB classifier and 100 Mbps per client. Therefore, the latency from multiple communication rounds is much smaller than the time RFLPA saves through its smaller message size. Furthermore, many existing SecAgg algorithms (focusing only on privacy protection) also rely on multiple communication rounds. For instance, [1] and [2] leverage 4 communication rounds per iteration. Our framework is able to integrate defense against poisoning attacks into SecAgg using the same number of communication rounds as [1] and [2]. Overall, we sincerely hope that our explanations could address the reviewer's further concerns. [1] Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., McMahan, H. B., Patel, S., ... & Seth, K. (2017, October). Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (pp. 1175-1191). [2] Bell, J. H., Bonawitz, K. A., Gascón, A., Lepoint, T., & Raykova, M. (2020, October).
Secure single-server aggregation with (poly) logarithmic overhead. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security (pp. 1253-1269). --- Rebuttal 4: Comment: *About Q2-a*: Secret sharing-based operations cost network resources, while the encryption and decryption of shares consume computation resources; these cannot be treated as the same. So RFLPA replaces the p2p channel between clients assumed in BREA by using the server as a repeater (or forwarder) with encryption of the shares, and so the encryption/decryption cost will be introduced to the clients. *About Q2-b*: I guess that you use a method like the degree-reduction technique in Shamir secret sharing? Every $p$ elements in $cs^i$ are reshared as a degree-d polynomial, so the communication cost is generated by the resharing operation. Maybe that has been included in "(3) sending and receiving secret shares of partial gradient norm square and cosine similarity (O(N) messages)"? So using packed secret sharing with the partial dot product is the core contribution of this paper, and I encourage the authors to improve the presentation of this key part. It is not very clear to me, and it is important for researchers in MPC/PPML to understand your work. --- Rebuttal 5: Comment: *About Q2-c*: Adapting to the heterogeneous computation power of each client is a good idea. But I must point out that selecting clients with more computation power will increase the risk of a collusion attack between the server and the clients. If we consider that the server could collude with some clients, the privacy goal of RFLPA will fail. *About Q2-d*: Communication rounds are a very important factor under the WAN setting that you shouldn't ignore.
For example, if you have 1,000 communication rounds, the latency will add 1000 × 100ms (assuming 100ms latency for WAN, ignored for LAN) = 100s; that is also the main cost for secret sharing-based solutions compared with those based on HE (heavy computation) or Garbled Circuits (heavy communication bytes). You mentioned that "many existing SecAgg algorithms (focusing only on privacy protection) also rely on multiple communication rounds" [1][2]. Those works use Diffie-Hellman for key negotiation, and their multiple rounds of communication occurred in the **offline phase**. --- Rebuttal 6: Title: Reply to Reviewer's Comment Comment: Dear Reviewer VWie, Thank you for your reply and time. We hope our response below could address your additional concerns. **Reply to Q2-a:** Firstly, the encryption and decryption are based on symmetric encryption, which is much more efficient than the asymmetric encryption in HE-based protocols. During our experiments, we found that the encryption & decryption time on the secrets is much smaller than the time for other operations on the secret shares, including secret sharing and reconstruction. Furthermore, even in BREA's p2p channel, symmetric encryption on the secret shares is necessary to prevent an eavesdropper's inference attack. However, their solution omits this encryption & decryption process, putting the users' private values at great risk. The encryption is indispensable for secure communication. **Reply to Q2-b:** The dot product aggregation protocol shares some relation to the degree reduction protocol in terms of secret resharing. The degree reduction is included in (3) sending and receiving secret shares of partial gradient norm square and cosine similarity (O(N) messages). We are pleased that the reviewer recognizes the value and contribution of our *packed secret sharing with dot product aggregation* protocol.
The major purpose of dot product aggregation is to address the issue of increased privacy leakage, and we believe that this benefit (in terms of privacy) has been clearly understood and recognized by the readers and reviewers. The degree reduction can be treated as a side benefit (though not the major purpose), and we will highlight this point in our paper. **Reply to Q2-c:** We understand the inherent tradeoff between the number of participating clients and the collusion threshold. At the same time, we would like to make it clear that: - Even if we select a proportion of clients, the privacy risk is much lower than in the HE setting with 2 non-colluding parties. - Even if we use the whole set of clients, each client's computation cost is significantly lower than in the protocol proposed by BREA with $O(n)$ threshold. - In the cross-silo setting, the clients typically host stronger computing power (such as servers), and thus there is little need to consider client selection. In the cross-device setting, the heterogeneity issue arises, but this setting typically consists of a large number of clients, and thus selecting a proportion of them could maintain a certain level of protection. **Reply to Q2-d:** We would like to highlight that: - The message-size reduction of RFLPA compared with BREA far outweighs the extra latency in the WAN setting, as demonstrated in our previous reply. - The extra latency from more communication rounds is negligible compared with the computation time RFLPA saves over HE-based methods. As shown in Appendix L.7.1 and our initial rebuttal, the computation time for RFLPA is hours shorter than that of HE-based methods, even for a single iteration. Therefore, the total communication latency (100s) is much smaller. Furthermore, SecAgg [1][2] uses 4 communication rounds for each iteration. The reviewer can refer to their algorithms for further description.
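To make the transfer-time-versus-latency comparison in this exchange concrete, here is a rough back-of-envelope sketch (an illustration only, not part of the protocol; the ~80 MB per-client message size, 100 Mbps bandwidth, 4 rounds per iteration, and 100 ms WAN latency are assumed figures taken from the discussion above):

```python
def transfer_time_s(size_mb: float, bandwidth_mbps: float) -> float:
    """Seconds needed to push `size_mb` megabytes through a `bandwidth_mbps` link."""
    return size_mb * 8 / bandwidth_mbps

def latency_overhead_s(rounds: int, latency_ms: float) -> float:
    """Total one-way latency accumulated over `rounds` communication rounds."""
    return rounds * latency_ms / 1000.0

# Assumed figures: ~80 MB per-client message (close to RFLPA's reported
# 82.5 MB per iteration), a 100 Mbps link, 4 rounds, 100 ms WAN latency.
transfer = transfer_time_s(80, 100)    # 6.4 s to move the data
latency = latency_overhead_s(4, 100)   # 0.4 s of round-trip latency
```

Under these assumptions, per-iteration latency (0.4 s) is an order of magnitude below the transfer time (6.4 s), which is the point of the reply above.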
Summary: The paper proposes a defense mechanism against poisoning attacks on federated learning while maintaining privacy guarantees. The solution is based on secure aggregation and evaluates the trustworthiness of client updates with cosine similarity. Information leakage during the computation of cosine similarity is mitigated by a novel aggregation method for dot products. Theoretical and empirical analysis show that the defense successfully reduces the communication and computation overhead and keeps a competitive accuracy compared to previous work. Strengths: 1. The use of packed Shamir secret sharing with the proposed dot-product aggregation sounds novel and reliable. 2. The security and efficiency of the framework are well supported by theoretical analysis and experiments. 3. The framework largely reduces the overhead of previous works and maintains good accuracy even when no poisoning attack is considered, making deployment more practical. Weaknesses: 1. The paper focuses on a specific type of federated learning setup. Its applicability to other federated learning models, such as those involving more dynamic and heterogeneous client populations, is not explored. 2. The experiments conducted use standard datasets like MNIST, F-MNIST, and CIFAR-10. These datasets, while commonly used, do not fully represent the complexity and diversity of real-world data. The effectiveness of RFLPA in more diverse and complex datasets remains to be tested. 3. The performance comparison against other state-of-the-art methods might not be exhaustive. There might be other emerging methods or variations of existing ones that were not considered in the evaluation. Technical Quality: 3 Clarity: 3 Questions for Authors: In Section 4.2 (normalization), should the gradients with norm smaller than $||g_0||$ also be normalized? If so, how to verify that they are normalized?
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper has adequately addressed its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer aBFW, Thank you for your recognition and valuable comments. We hope our response below could address your concerns. **W1: Applicability to other federated learning models, such as those involving more dynamic and heterogeneous client populations** Thanks for your comment. Our approach is also applicable in federated learning setups involving dynamic and heterogeneous client populations. We address the concern from two perspectives: heterogeneous clients and dynamic change. - **Heterogeneous clients**. The experiment results are already presented in Appendix L.8 (Non-IID Setting), where each client holds a dataset biased towards a certain class. Compared with BREA and FedAvg, **RFLPA demonstrates resilient performance against poisoning attacks under the setting where clients hold heterogeneous datasets**. For 30% adversaries, the accuracy of RFLPA is over 0.89 and 0.79 on the MNIST and F-MNIST datasets, respectively. - **Dynamic change**. For dynamic settings, we consider the case where the data of the clients change during the federated training with the arrival of new data. We added experiments for this case and will include the analysis in the appendix. To simulate such a setting, we leverage Dirichlet Distribution Allocation (DDA)[1] to sample non-iid datasets, and change the distribution for each client every 20 epochs. Table R1 presents the accuracy under the gradient manipulation attack, demonstrating **RFLPA's robustness under the dynamic setting**. [**Table R1.** Accuracy under dynamic client data distribution against gradient manipulation attack.] | Method | Dataset | No | 10% | 20% | 30% | | ---- | ----- | ---- |---- |---- |---- | |FedAvg|MNIST|0.98|0.26|0.29|0.30| ||F-MNIST|0.86|0.52|0.21|0.18| |RFLPA|MNIST|0.96|0.94|0.92|0.90| ||F-MNIST|0.83|0.79|0.79|0.77| **W2: The effectiveness of RFLPA in more diverse and complex datasets remains to be tested.** Thanks for your comment.
In Appendix L.6 Performance on Natural Language Processing (NLP) Dataset, we present the results on two NLP datasets, Recognizing Textual Entailment (RTE)[2] and Winograd NLI (WNLI)[3]. Table R2 highlights a portion of the results, and the full results can be found in Appendix L.6. [**Table R2.** Accuracies on NLP datasets against gradient manipulation attack.] | |RTE| | | |WNLI| | | | | ---- | ----- |--|--|--|--|--|--|--| | Proportions of attackers|No|10%|20%|30%|No|10%|20%|30%| |FedAvg|0.599|0.509|0.487|0.462|0.619|0.563|0.437|0.437| |RFLPA|0.596|0.582|0.582|0.577|0.619|0.592|0.592|0.563| To test a more complex CV dataset, we further added experiments on the CIFAR-100 dataset using ResNet-9. The analysis of these experiments will be included in the appendix. Table R3 presents the accuracy under varying proportions of attackers performing gradient manipulation attacks. [**Table R3.** Accuracy on the CIFAR-100 dataset under gradient manipulation attack.] | Proportions of attackers |No|10%|20%|30%| | - | ----- | ---- |---- |---- | |FedAvg|0.55 | 0.11 |0.10|0.10| |RFLPA|0.55|0.54|0.50|0.49| **W3: The performance comparison against other state-of-the-art methods might not be exhaustive. There might be other emerging methods or variations of existing ones that were not considered in the evaluation.** Thanks for your comment. We have attempted to cover, as comprehensively as possible, the SoTA methods that achieve both privacy protection and robustness against poisoning attacks. We also discuss the advantages of RFLPA over additional frameworks in the response to reviewer VWie's W1-b. We would be pleased to make further comparisons if you provide additional baselines for us. **Q1: Should the gradients with norm smaller than $\|g_0\|$ also be normalized? If so, how to verify that they are normalized?** That's an interesting question. In principle, the gradients with norm smaller than $\|g_0\|$ should also be normalized.
However, the server can simply verify that $\|\bar{g}_i\|\leq \|g_0\|$ for the following reasons: - **A malicious user has no motivation to skip the normalization if $\|g_i\|<\|g_0\|$.** Benign users would honestly execute the protocol. For a malicious user, skipping the normalization would reduce their trust score, and thus their weight in the aggregated gradients. Specifically, the trust score is a ReLU function of the dot product between the normalized local gradient and the server model update. Therefore, a smaller scale of $\|\bar{g}_i\|$ leads to a lower trust score, thus reducing the malicious user's impact on the global model. - **The convergence bound in Theorem 5.2 is derived in the case where all gradients are smaller than $\|g_0\|$.** Even if a gradient is un-normalized, we can still obtain the convergence bound in Theorem 5.2, because the derivation only requires that $\|\bar{g}_i\|\leq \|g_0\|$ for each $i$, rather than $\|\bar{g}_i\|$ being close to $\|g_0\|$. [1] Luo, M., Chen, F., Hu, D., Zhang, Y., Liang, J., & Feng, J. (2021). No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. Advances in Neural Information Processing Systems, 34, 5972-5984. [2] Bentivogli, L., Clark, P., Dagan, I., & Giampiccolo, D. (2009). The Fifth PASCAL Recognizing Textual Entailment Challenge. TAC, 7(8), 1. [3] Levesque, H., Davis, E., & Morgenstern, L. (2012, May). The Winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The authors have addressed my concern and I will keep my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer aBFW, Thanks for your reply. We appreciate your recognition of the *novelty*, *soundness*, and *effectiveness* of our paper. We are pleased to know that we have addressed your concern. If you have any remaining concerns, feel free to let us know.
Best Regards, Authors of Submission 11258
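As a simplified illustration of the verification and trust-score logic described in the reply to Q1 above (function names, the tolerance, and the vector representation are ours, not from the paper):

```python
import math

def norm(v):
    """Euclidean norm of a gradient represented as a list of floats."""
    return math.sqrt(sum(x * x for x in v))

def verify_norm(g_bar, g0_norm, tol=1e-9):
    # Server-side check: an honestly normalized gradient never exceeds ||g_0||.
    return norm(g_bar) <= g0_norm + tol

def trust_score(g_bar, server_update):
    # ReLU of the dot product with the server's model update; a client that
    # skips normalization (keeping a smaller norm) only shrinks its own score.
    return max(0.0, sum(a * b for a, b in zip(g_bar, server_update)))
```

This makes the incentive argument visible: a gradient with a smaller norm in the same direction always earns a proportionally smaller trust score, so skipping normalization cannot help an attacker.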
Summary: This paper aims to address the vulnerability of FL to poisoning attacks. A common strategy used in FL to avoid directly sharing local updates is called secure aggregation (SecAgg), which works with ciphertexts and is incompatible with defense techniques against poisoning attacks. To address this issue, the paper proposes using verifiable packed Shamir's secret sharing to compute the cosine similarity between local and global models for aggregation. Additionally, a novel dot product aggregation method is introduced to reduce the communication overhead caused by Shamir's secret sharing. Strengths: + The problem of addressing poisoning attacks in FL is timely and challenging, especially with provable security guarantees + The attempt to reduce the communication overhead caused by secret sharing is valuable + Extensive experimental validations and very detailed comparison with existing frameworks, especially for various aggregation methods Weaknesses: - The assumption that the server has a trusted clean root dataset seems a fundamental limitation - The presentation could be improved; there are too many concepts introduced, making it hard to distinguish between what is new and what is from existing literature. Technical Quality: 4 Clarity: 2 Questions for Authors: 1. It seems that only experiments with malicious users are examined. I could not find the case when the server conducts an active inference attack; is it included? 2. If the clean dataset at the server is not available, is it still possible to adjust the proposed protocol to deal with this situation? Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: Limitations are adequately addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 4mMR, Thank you for your recognition and valuable comments. We hope our response below could address your concerns. **W1 & Q2: Assumption on clean public samples** Thanks for your comment. As discussed in our response to Q1 in the global rebuttal to the program chair, we proposed some remedies in the absence of such a dataset, including comparison with global weights and KRUM. Furthermore, the public samples can be of small size (e.g., 200 samples), and the global model is robust even when the root dataset diverges slightly from the overall training data distribution. **W2: Distinguish between new and existing concepts** Thanks for your suggestion. We clarify the distinctions as follows. The cryptographic techniques introduced in Section 3.4 Cryptographic Primitives are from existing literature, including packed Shamir secret sharing, the Diffie-Hellman (DH) key exchange protocol, symmetric encryption, and the UF-CMA secure signature scheme. The Verifiable Packed Secret Sharing (V-PSS) is our proposed concept, as previous verifiable secret sharing schemes are mainly associated with vanilla Shamir secret sharing. We leverage the commitment technique in [1] to design the V-PSS algorithm. Furthermore, the Dot Product Aggregation and the related concept of *partial dot product* are proposed by our paper. The secure communication scheme for secret sharing is proposed by our work, building on the aforementioned DH key exchange protocol, symmetric encryption, and signature scheme. We will emphasize the new concepts in our paper. **Q1: Experiment when the server conducts an active inference attack** Thanks for your question. In an active inference attack, the server would manipulate certain users' messages to obtain the private values of targeted users. In particular, the server may alter the secret shares a user sends to others to infer their private values.
We didn't include this experiment in our paper since **the integrity of the messages is protected by the signature scheme, which prevents the server from forging the message of any user**. If the server changes the value of an encrypted message, the signature verification will alert the receiving user that the message is invalid. In Appendix C.4, we provide a security analysis for the signature scheme. For a UF-CMA secure signature scheme, no probabilistic polynomial time (PPT) adversary is able to produce a valid signature on an arbitrary message with more than negligible probability. We added experiments for a passive inference attack using Deep Leakage from Gradients (DLG)[2]. DLG attempts to reconstruct the original images from the aggregated gradients. We conducted the attack on the CIFAR-10 dataset, using the specifications in Appendix L.2. The average PSNR of the generated images with respect to the original images is 11.27, much lower than the value of 36.5 when no secure aggregation is involved. We have also uploaded a PDF in the global rebuttal section, presenting the original and inferred images under RFLPA. It can be observed that the inferred images are far from the raw images under the DLG attack. We will include the explicit analysis for both inference attacks in our paper. [1] Kate, A., Zaverucha, G. M., & Goldberg, I. (2010). Constant-size commitments to polynomials and their applications. In Advances in Cryptology-ASIACRYPT 2010: 16th International Conference on the Theory and Application of Cryptology and Information Security, Singapore, December 5-9, 2010. Proceedings 16 (pp. 177-194). Springer Berlin Heidelberg. [2] Zhu, L., Liu, Z., & Han, S. (2019). Deep leakage from gradients. Advances in Neural Information Processing Systems, 32. --- Rebuttal Comment 1.1: Comment: Thank you for the response. My concerns have been addressed, and good to see that the experimental results using DLG align with our expectations. My score remains unchanged.
--- Reply to Comment 1.1.1: Comment: Dear Reviewer 4mMR, Thanks for your reply. We are grateful for your recognition of our paper and the additional experiments. We are pleased to know that your concerns have been addressed. If you have any remaining concerns, feel free to let us know. Best Regards, Authors of Submission 11258
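For reference, the PSNR figures quoted in the DLG discussion above are the standard peak signal-to-noise ratio; a minimal sketch (the flattened-pixel representation, pixel range, and function name are illustrative assumptions):

```python
import math

def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio between two images given as flat pixel lists."""
    # Mean squared error between corresponding pixels.
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    # Higher PSNR means a closer reconstruction; a low PSNR (like the 11.27
    # reported above) means the inference attack recovered little of the image.
    return 10 * math.log10(max_val ** 2 / mse)
```

For instance, reconstructing an all-zero image as uniformly 0.1 gives an MSE of 0.01 and hence a PSNR of 20 dB.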
Summary: This paper presents a framework to address the dual challenges of privacy leakage and poisoning attacks in federated learning. The authors propose RFLPA, which integrates cosine similarity for robust aggregation with verifiable packed Shamir secret sharing to ensure secure aggregation without compromising on robustness. The framework also introduces a new dot-product aggregation algorithm to prevent information leakage. The proposed method is evaluated on 3 benchmark datasets. Strengths: The paper is well-written and well-structured. The paper addresses a very important problem in the security of federated learning systems. Thorough experimentation and analysis are undertaken to evaluate the proposed method. Weaknesses: The paper’s reliance on the assumption that the server has a clean root dataset and that secure communication can be maintained without significant overhead may not always be realistic. Additionally, the code was not revealed and could not be evaluated at the time of this review. Finally, one of the main threats that challenge robust aggregators is backdoor attacks, which are not tested on the proposed method. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could the authors provide more insights into the practical implementation challenges of integrating verifiable packed Shamir secret sharing into existing FL systems? 2. How does the framework scale with the number of clients and the dimensionality of models, and are there potential bottlenecks for large-scale deployments? 3. Could the authors elaborate on the potential vulnerabilities in the cryptographic mechanisms used and how they are mitigated? 4. How realistic is the assumption of the server having a clean root dataset in different FL application domains, and can the framework be adapted for scenarios without such a dataset?
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer WU56, Thank you for your recognition and valuable comments. We hope our response below could address your concerns. **W1 & Q4: Assumption that the server has a clean root dataset and that secure communication can be maintained without significant overhead** Thanks for your comment. On the first assumption, referring to our answer to Q1 in the global rebuttal to the program chair, the public samples can be of small size (e.g., 200 samples), and the global model is robust even when the root dataset diverges slightly from the overall training data distribution. Moreover, we have proposed two remedies in the absence of such a dataset, including comparison with global weights and KRUM. On the second point, we would clarify that we have not explicitly assumed that secure communication can be maintained without significant overhead. Instead, our paper aims to minimize the communication cost of secure communication by reducing the message size. It should be noted that secure communication is necessary to protect the secrecy and integrity of the messages transmitted during federated learning. We are pleased to provide further clarification if we have misunderstood your concern. **W2: The code was not revealed** Thanks for your comment. We have sent the code via an anonymized link to the AC. **W3: Evaluation of backdoor attacks** Thanks for your suggestion. As discussed in our response to Q2 in the global rebuttal, we tested RFLPA's robustness against two backdoor attacks, BadNets and scaling attacks. We will include the empirical analysis in the appendix. **Q1: Insights into the practical implementation challenges of integrating verifiable packed Shamir secret sharing into existing FL systems?** Thanks for your question.
The verifiable packed Shamir secret sharing could be vulnerable to the following implementation errors: (a) a client conducts incorrect computation, such as aggregation and dot product, on the secret shares, and (b) clients might collude to recover the secrets. In our paper, we propose the following mitigations to these challenges: (a) The Reed-Solomon decoding algorithm allows the server to recover the correct computation results in case of certain computation errors. For a degree-d packed Shamir secret sharing with n shares, the Reed-Solomon decoding algorithm can recover the correct result with E errors and S erasures as long as $S + 2E + d + 1 \leq n$. (b) A formal security guarantee on the packed secret shares states that any O(N) shares reveal no information about the secret values. **Q2: How does the framework scale with the number of clients and the dimensionality of models, and are there potential bottlenecks for large-scale deployments?** Thanks for your question. We explain the scalability in terms of communication and computation cost. The theoretical overhead is listed in Section 5.1 Complexity Analysis, and we summarize the cost in Table R1. [**Table R1.** Complexity summary of RFLPA for N clients and an M-dimensional model.] | |Computation|Communication| |-|-|-| |Server|$O((M+N)log^2NloglogN)$|$O((M + N)N)$| |User|$O((M + N^2)log^2N)$|$O(M + N)$| The empirical overhead can be found in Section 6.2.2 and Appendix L.7. We also highlight the per-client communication and computation cost under various client sizes in Table R2. [**Table R2.** Per-client overhead of RFLPA using a MNIST classifier (1.6M).]
|client size|100|200|300|400| | ---- | ----- | ---- |---- |---- | |Communication cost (MB)|82.50|82.50|82.51|82.52| |Computation cost (minute)|3.41|11.44|24.51|42.60| In our study, we observed that communication costs remain relatively stable as N increases significantly, with message size staying nearly constant with N but scaling linearly with M (when $M\gg N$). For computation cost, training time shows an approximately second-order polynomial increase with N and a linear increase with M. For large-scale deployments, the bottlenecks are as follows: (1) Server communication: the server could suffer heavier communication overhead, which mainly comes from the stage of distributing the users' secret shares. To mitigate this issue, we can allocate the secret distribution task to multiple nodes to relieve the server's workload. The signature and encryption schemes prevent the nodes from conducting man-in-the-middle attacks on the secret shares. (2) Synchronization: each training iteration consists of four rounds, raising a dropout problem. The server must wait for the results of the participating clients to continue. Note that the secret sharing scheme can inherently mitigate this problem: for a degree-d packed Shamir sharing, the reconstruction can be done upon receiving responses from d+1 users. **Q3: Could the authors elaborate on the potential vulnerabilities in the cryptographic mechanisms used and how they are mitigated?** Diffie-Hellman (DH) key exchange protocol: The major vulnerability of the DH key exchange protocol is the man-in-the-middle attack. The attacker could alter the key exchange messages between the communicating parties. Such a vulnerability is addressed by the signature scheme. In particular, each client could receive their own signing key, as well as the verification keys for all other users, from a trusted third party. The third party could be a government agency or a certificate authority that provides digital certificates for secure communication.
The signature scheme prevents the attacker from modifying the messages. Symmetric Encryption & Signature Scheme: (1) Side-channel attacks. The attacker could infer sensitive information by exploiting side-channel signals, such as timing information. It is important to utilize mature encryption and signature packages that are robust to side-channel attacks. (2) Key management. Weak key management practices could lead to key leakage or theft. It is crucial to store private keys in secure environments, such as Hardware Security Modules. --- Rebuttal Comment 1.1: Comment: Thank you for the provided answers. My questions are answered, and my decision remains the same. --- Reply to Comment 1.1.1: Comment: Dear Reviewer WU56, Thanks for your reply. We are happy that the Reviewer found that *our work addresses a very important problem and the empirical evaluation is thorough*. We are pleased to know that we have addressed your questions and concerns. If you have any remaining concerns, feel free to let us know. Best Regards, Authors of Submission 11258
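The Reed-Solomon decoding condition cited in the reply to Q1 above ($S + 2E + d + 1 \leq n$) can be checked with a one-line predicate (a trivial sketch; the function and parameter names are illustrative):

```python
def rs_correctable(n_shares: int, degree: int, errors: int, erasures: int) -> bool:
    # Decoding a degree-`degree` packed Shamir sharing from `n_shares`
    # evaluation points tolerates E errors and S erasures whenever
    # S + 2E + d + 1 <= n (the condition stated in the reply above).
    return erasures + 2 * errors + degree + 1 <= n_shares
```

For example, a degree-3 sharing over 10 shares tolerates 2 errors plus 2 erasures (2 + 4 + 3 + 1 = 10), but not 3 errors plus 2 erasures.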
Rebuttal 1: Rebuttal: Dear Reviewers, We want to express our profound gratitude for your insightful comments and valuable suggestions. We address some common questions in this global rebuttal section. **Q1: The paper relies on the assumption that the server has a trusted clean root dataset.** Please note that the clean root dataset required by our approach is of small size, e.g., 200 samples. If we cannot obtain any clean root dataset, even at this small size, we propose several remedies: - **Cosine similarity with global weights.** Existing studies show that global weights can be useful in detecting malicious updates[1]. We can compute the cosine similarity between each local update and the global weights as follows: $$\cos(W_i^t, W_G^{t-1})=\frac{\langle W_i^t, W_G^{t-1}\rangle}{\|W_i^t\|_2\,\|W_G^{t-1}\|_2},$$ and filter out the clients with similarity smaller than a pre-specified threshold. To validate the effectiveness of this approach, we conduct experiments against the Byzantine attack under various proportions of attackers. It can be observed in Table R1 that **compared with FedAvg, RFLPA-GW effectively improves the accuracy in the presence of attackers**. **The communication and computation cost of this approach is at the same scale as RFLPA's original level**, as both compute the cosine similarity with a single baseline. [**Table R1.** Accuracy for the defense based on global weights under different proportions of attackers. RFLPA-GW replaces the robust aggregation rule in RFLPA with the method based on cosine similarity with the global weights.] | Method | Dataset | No | 10% | 20% | 30% | | ---- | ----- | ---- |---- |---- |---- | |FedAvg|MNIST|0.98|0.46|0.40|0.32| ||F-MNIST|0.88|0.55|0.51|0.45| |RFLPA-GW|MNIST|0.98|0.95|0.92|0.91| ||F-MNIST|0.90|0.80|0.77|0.75| - **KRUM-based method.** We can substitute the aggregation module with KRUM when a trusted baseline is missing.
Though KRUM incurs greater cost than the original method, we show in Appendix L.7.2 Ablation Study that **there is a notable reduction in communication and computation cost compared with BREA**, benefiting from the design of our secret sharing algorithm (see Table 2 for a summary). **The accuracy of RFLPA (KRUM) is expected to be the same as BREA**, as both utilize the same aggregation rule. [**Table R2.** Communication (in MB) and computation cost (in minutes) with a MNIST classifier (1.6M parameters).] | | RFLPA | |BREA | |RFLPA-KRUM|| | ---| ----| ---- |---- |---- |---- |---- | |client size|300|400|300|400|300|400| |Communication cost|82.51|82.52|1909.02|2544.45|79.58|82.25| |Computation: per-user cost|24.51|42.46|182.27|294.27|46.48|75.78| |Computation: server cost|15.00|26.47|216.96|287.22|39.76|62.81| - **Collection of slightly biased root data.** [2] has conducted experiments where the root data is biased towards a certain class. The empirical evidence presented in [2] suggests that **the performance of the global model is robust even when the root dataset diverges slightly from the overall training data distribution**. Specifically, the accuracy of MNIST-0.5 is at least 0.92 when the bias probability is within 0.4. Note that for the MNIST dataset, the bias probability ranges from 0.1 to 1, and a larger bias probability indicates a more imbalanced distribution of the root dataset. We summarize the pros and cons of the three robust aggregation modules in Table R3. We will add a discussion in the appendix. [**Table R3.** Pros and cons of different aggregation modules.]
| Approach | Pros | Cons | |-|-|-| | RFLPA-FLTrust | Lower overhead | Collection of validation data | | | No need for prior knowledge about # of poisoners | | | | Robust under slightly biased validation data | | | RFLPA-GW | Lower overhead | Lack of theoretical convergence guarantee | | | No need for prior knowledge about # of poisoners | | | | No need for validation data | | | RFLPA-KRUM | No need for validation data | Higher overhead | | | | Need prior knowledge about # of poisoners | **Q2: Experiments on more stealthy attacks.** We added experiments for more stealthy attacks: (1) the KRUM attack (untargeted attack)[3], (2) BadNets (backdoor attack)[4], and (3) the scaling attack (backdoor attack)[5]. We will include the experiment results in the appendix. Table R4 shows the results for the additional attacks on CIFAR-10. For backdoor attacks, the classification accuracy on the triggered dataset is shown in parentheses. Compared with FedAvg, RFLPA improves not only the accuracy on the general dataset, but also the accuracy on the triggered dataset. [**Table R4.** Accuracies on CIFAR-10 under varying proportions of attackers. For backdoor attacks, the values are presented as *overall accuracy (backdoor accuracy)*.] | Method | Attack | 10% | 20% | 30% | |-|-|-|-|-| |FedAvg|KRUM attack|0.27|0.12|0.11| ||BadNets|0.68 (0.54)|0.67 (0.54)|0.55 (0.28)| ||Scaling|0.70 (0.22)|0.68 (0.21)|0.54 (0.19)| |RFLPA|KRUM attack|0.71|0.70|0.70| ||BadNets|0.71 (0.68)|0.70 (0.68)|0.69 (0.66)| ||Scaling|0.70 (0.69)|0.70 (0.69)|0.69 (0.69)| [1] Yaldiz, D. N., Zhang, T., & Avestimehr, S. Secure Federated Learning against Model Poisoning Attacks via Client Filtering. In ICLR 2023 Workshop on Backdoor Attacks and Defenses in Machine Learning. [2] Cao, X., Fang, M., Liu, J., & Gong, N. Z. (2021, January). FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping. In ISOC Network and Distributed System Security Symposium (NDSS). [3] Fang, M., Cao, X., Jia, J., & Gong, N. (2020).
Local model poisoning attacks to Byzantine-robust federated learning. In 29th USENIX Security Symposium (USENIX Security 20) (pp. 1605-1622). [4] Gu, T., Liu, K., Dolan-Gavitt, B., & Garg, S. (2019). BadNets: Evaluating backdooring attacks on deep neural networks. IEEE Access, 7, 47230-47244. [5] Bagdasaryan, E., Veit, A., Hua, Y., Estrin, D., & Shmatikov, V. (2020, June). How to backdoor federated learning. In International Conference on Artificial Intelligence and Statistics (pp. 2938-2948). PMLR. Pdf: /pdf/afb1e7fbad66de04f23c66a4b2786eac58ff1657.pdf
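For readers unfamiliar with the KRUM aggregation rule that both RFLPA-KRUM and BREA build on, here is a minimal illustrative sketch of the plain (non-secure) selection step, with hypothetical toy updates; it deliberately omits the secret-sharing machinery that the paper layers on top:

```python
def krum_select(updates, f):
    """Plain KRUM: score each client update by the sum of squared distances
    to its n - f - 2 nearest other updates, then pick the lowest score.
    `f` is the assumed number of Byzantine (poisoning) clients."""
    n = len(updates)
    k = n - f - 2
    scores = []
    for i, u in enumerate(updates):
        dists = sorted(
            sum((a - b) ** 2 for a, b in zip(u, v))
            for j, v in enumerate(updates) if j != i
        )
        scores.append(sum(dists[:k]))
    return scores.index(min(scores))

# Three honest updates cluster together; one poisoned update sits far away.
updates = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [10.0, 10.0]]
chosen = krum_select(updates, f=1)  # an honest update, never index 3
```

Note the rule needs prior knowledge of `f`, which is exactly the "Need prior knowledge about # of poisoners" drawback listed for RFLPA-KRUM in Table R3.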
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
EGODE: An Event-attended Graph ODE Framework for Modeling Rigid Dynamics
Accept (poster)
Summary: This paper presents a graph-ODE simulator for capturing rigid body dynamics with collision events. It introduces a design based on object-level and mesh-level representations. It also introduces an event module to capture collisions and incorporate them into the network. The paper conducts an evaluation on two rigid-body datasets, RigidFall and Physion, and compares its performance with multiple learning-based simulators. Strengths: The paper picks an interesting and potentially impactful research topic. Rigid-body simulation with contact and collisions has wide applications in ML/Robotics, perhaps much more than simulations of any other physics systems (fluids, deformable volumes, cloths, elastic rods, etc.). One of the most critical technical components in such simulations is contact and collision handling, which is also the source of many interesting behaviors of rigid bodies. Rethinking this problem with learning-based techniques may lead to new opportunities, e.g., ML-friendly rigid-body simulators that can be seamlessly integrated into a modern deep-learning pipeline. Weaknesses: I will combine my comments on “Weaknesses”, “Questions”, and “Limitations” here. I am confused by this paper’s view on rigid-body simulation. State-of-the-art numerical simulation of rigid bodies with contact, e.g., Isaac Sim and Mujoco 3, is quite powerful. With efficient contact handling algorithms and GPU acceleration, they can simulate fairly complicated (articulated-)rigid bodies like humanoids and quadrupeds at a speed of several million time steps per second (https://github.com/google-deepmind/mujoco/discussions/1101). To me, the RigidFall and Physion examples are trivial and solved problems for modern rigid-body simulators. However, the paper did not include these simulators as baselines and limited its comparison with learning-based simulators only. I struggle to see a strong motivation that can support this (lack of) comparison.
The paper also contains several confusing statements regarding physics simulation in its introduction, which I feel are bending the storytelling in the paper’s favor: 1. Paragraph 1: Each sentence alone is technically correct, but combining them gives readers the impression that simulating rigid collision is computationally expensive and data-driven approaches are promising. I am not sure this is the right impression to make. Simulating fluids and deformable bodies is potentially expensive, but simulating rigid bodies with collisions is quite cheap using modern algorithms and hardware. 2. Paragraphs 2-3: Following paragraph 1, these paragraphs lead readers to focus on GNNs (plus standard message passing) and their difficulties in rigid-body simulation. These difficulties do not have to exist in the first place. To me, using graphs to capture rigid-body dynamics is a somewhat contrived idea, because each rigid body contains a small and constant number (6) of DoFs regardless of its mesh resolution. Storing DoFs like x and v on mesh nodes is highly inefficient and unnecessary because they are governed by the 6 DoFs only. GNNs are more suitable for capturing fluids and deformable bodies because their governing equations involve PDEs with spatial derivatives (so information exchange between neighbors is needed) and high DoFs after discretization. In this sense, choosing GNS and DPI as baselines is also contrived: They are not designed specifically for rigid bodies, and they are capable of capturing much more complicated dynamics. Instead, I feel that classic numerical simulators should have been a baseline. I also have a few more comments regarding the technical method and experiments: 1. It looks like the positions and velocities of all mesh nodes are included in the state variable and evolved in the ODE (Eqns. 1-2). 
The number of these variables grows as mesh resolution increases, but they are essentially governed by their underlying rigid-body DoFs (6 per rigid body) only. Is it necessary to assign and evolve DoFs at each mesh node? 2. Eqns. 3-4 roughly capture the linear motion of the “center of mass” of each rigid body. Using them as the object-level state does not seem to capture the angular motions and angular velocities of each object. Missing angular information on the object level is a bit counter-intuitive from a physics perspective. 3. The collision events visualized in the figures seem to be between parametric surfaces (cubes and spheres) only. Such collision events could be easily resolved with closed-form solutions. These examples do not seem to show the benefits of having a mesh representation in collision detection. A more complicated scene with multiple organic surfaces would be a better example to necessitate these meshes. Technical Quality: 2 Clarity: 2 Questions for Authors: See Weakness. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See Weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper and your insightful review. We address your comments below. > Q1. The RigidFall and Physion examples are trivial and solved problems for modern rigid-body simulators. However, the paper did not include these simulators as baselines and limited its comparison with learning-based simulators only. I struggle to see a strong motivation that can support this (lack of) comparison. The paper also contains several confusing statements regarding physics simulation in its introduction, which I feel are bending the storytelling in the paper’s favor. A1. Although modern numerical simulators can effectively solve rigid body dynamics problems, learning-based methods also possess unique research value, for the following long-standing reasons: 1. Numerical simulations are not easily integrated with learning-based neural network methods. Specifically, when simulating complex dynamic systems involving both rigid bodies and fluids, it is challenging to delegate fluid simulation and rigid body simulation separately to neural networks and numerical simulators. This makes it difficult to leverage the advantages of neural networks in fluid simulation. On the other hand, end-to-end neural networks can adapt to such complex scenarios by incorporating the simulation of rigid bodies, fluids, and their interactions together (which we are currently exploring). Although this paper does not address fluid simulation, the exploration of learning-based simulation of rigid bodies is a significant precursor to such complex systems combining rigid and non-rigid bodies. We believe this is one of the most promising directions for the practical development of our EGODE. 2. Beyond practical implications, our EGODE also makes a methodological contribution. We adopt an event module with a coupled ODE architecture to model the instantaneous updating of object states.
This novel approach can be used not only in rigid body collision simulations but also in any dynamic process involving impulse or angular impulse, and even more broadly in systems with instantaneous state changes (e.g., crushing and deformation). 3. Based on the aforementioned reasons, we have sufficient motivation to study and improve learning-based methods for simulating rigid body dynamics and leverage their strengths in appropriate scenarios, so we only utilized learning-based methods as our baselines for performance comparison. 4. Moreover, the simulation speed of EGODE is not inferior to that of numerical simulators. You mentioned that numerical simulators can compute fairly complicated rigid bodies at a speed of several million time steps per second. Similarly, our EGODE can also achieve efficient inference by leveraging GPUs and deep learning toolkits such as torch_geometric. In our paper, each scenario visualization contains thousands of nodes and spans 0.5 to 1 minute, yet it requires less than 1 second to simulate and render on a single laptop. > Q2. It looks like the positions and velocities of all mesh nodes are included in the state variable and evolved in the ODE (Eqns. 1-2). The number of these variables grows as mesh resolution increases, but they are essentially governed by their underlying rigid-body DoFs (6 per rigid body) only. Is it necessary to assign and evolve DoFs at each mesh node? A2. The positions and velocities of mesh nodes are essential for modeling the interactions between and within objects (e.g., contacting, sliding, and pressing), initiating collision events, and performing dynamics calculations. Although the number of these variables increases with higher grid point density, this represents a trade-off between computational cost and performance. Furthermore, the degrees of freedom of surface and interior mesh nodes enable more refined modeling of dynamic phenomena such as compression, friction, and adhesion.
Therefore, unless there is a complete absence of interaction between objects, recording the states of these nodes is beneficial. > Q3. Eqns. 3-4 roughly capture the linear motion of the “center of mass” of each rigid body. Using them as the object-level state does not seem to capture the angular motions and angular velocities of each object. Missing angular information on the object level is a bit counter-intuitive from a physics perspective. A3. In Eqns. 3-4, linear motion is explicitly recorded at the object nodes of rigid bodies because $x$ and $v$ directly influence the state updating of the simulation ($x_i^{t+1}$ is largely determined by $x_i^{t}$ and $v_i^{t}$). Although angular momentum and angular velocity are also important, they can be implicitly embedded, along with other valuable information, in the hidden state $h^t$, rather than explicitly represented. The model learns to express and utilize them in an optimal way. > Q4. The collision events visualized in the figures seem to be between parametric surfaces (cubes and spheres) only. Such collision events could be easily resolved with closed-form solutions. These examples do not seem to show the benefits of having a mesh representation in collision detection. A more complicated scene with multiple organic surfaces would be a better example to necessitate these meshes. A4. More visualizations with more complex objects can be found in Figures 1 and 7. Specifically, Figure 1 showcases complex objects such as tower-shaped and torus-shaped objects. Our EGODE is designed to be compatible with rigid bodies of arbitrary shapes, not limited to parametric surface objects. Moreover, the Physion benchmark includes objects with various geometries and trajectories with various angles of collision and contact, which validates the generalization capability of our proposed EGODE. Additionally, the results for a complex scenario like Drape, which models interactions between deformable objects and rigid bodies, are reported in Table 3.
The visualization of complex scenarios like Drape will be added in the revised version. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thank you for answering my questions. The rebuttal partially addressed my concerns with the paper, so I will increase my rating to 5. However, I feel the rebuttal continues to introduce several ungrounded or biased arguments in favor of the paper, particularly in the answer to Q1. I agree with the potential of learning-based simulators in the long term, but this answer overgeneralizes the proposed approach to fluids/impulse/crushing/deformation without sufficient evidence. For example, I don't see how the claim "This novel approach can be used ... in any dynamic processes involving impulse or angular impulse, and even more broadly in systems including instantaneous state changes (eg. crushing and deformation)." can be justified before we see such a crushing/deformation example. I am also unsure of the answer regarding the neural simulator's efficiency. For rigid-body scenes, "each scenario visualization contains thousands of nodes" does not completely reflect the rigid-body simulation's complexity. The number of objects and constraints from joints and the number of collision events better characterize the simulation difficulty. If I understand correctly, the reported statistics indicate that the proposed simulator is around 30x to 60x faster than real time (0.5-1 min / <1s). This is subpar to what modern rigid-body simulators with contact can handle, and I don't think this can be considered "Similarly" to "a speed of several millions of time steps per second." In summary, despite my positive rating now, I recommend that 1) the authors carefully go over the paper and rephrase any biased or ungrounded claims left and 2) optionally, replicate the examples on modern rigid-body simulators, e.g., Mujoco 3 or Isaac Sim, and report its performance for reference. 
The primary goal is to give readers and follow-up works a sense of the proposed method's position in rigid-body simulation. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful and detailed feedback. We deeply appreciate your recognition of our work's potential and your decision to increase the rating. Your insights are invaluable in helping us improve the quality and clarity of our paper. We acknowledge your concern regarding the overgeneralization of our approach, especially the event function module, to various dynamic processes such as fluids, impulse, crushing, and deformation. Your point is well taken, and we agree that more concrete evidence and examples are necessary to substantiate such claims. We will revise the statement in the updated version to ensure that our paper is more grounded and accurately reflects the current scope of our work. Regarding the efficiency of the neural simulator, we appreciate your clarification on the complexity metrics of rigid-body simulations. We understand that the number of objects, constraints from joints, and collision events are all critical factors in characterizing simulation difficulty. We will extend our experiments along these factors to better assess the model's efficiency in the future. Also, we will consider your suggestion to replicate our examples using modern rigid-body simulators, to help provide a clearer benchmark against learning-based methods and give readers a more comprehensive understanding of rigid-body simulation. Thank you once again for your constructive feedback and for helping us improve our work. We are committed to addressing your concerns and enhancing the quality of our paper.
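To make the event-module idea discussed in this thread concrete for readers, here is a minimal illustrative sketch (our own toy construction, not the authors' implementation) of continuous integration interleaved with an instantaneous event update, using a hypothetical 1D elastic bounce:

```python
def simulate_with_events(x, v, dt, steps, event_fn, update_fn):
    """Euler-integrate the continuous dynamics, but whenever event_fn fires,
    apply update_fn as an instantaneous jump in the state (e.g. a collision
    impulse) instead of asking the smooth drift to resolve the contact."""
    traj = [x]
    for _ in range(steps):
        if event_fn(x, v):
            x, v = update_fn(x, v)  # discontinuous state change at the event
        x = x + dt * v              # continuous evolution between events
        traj.append(x)
    return traj

# Toy example: a particle heading toward a wall at x = 0 reverses its
# velocity elastically the moment it reaches the wall.
traj = simulate_with_events(
    x=1.0, v=-1.0, dt=0.25, steps=8,
    event_fn=lambda x, v: x <= 0.0 and v < 0.0,
    update_fn=lambda x, v: (x, -v),
)
```

The point of the pattern is that the velocity flip happens in one step rather than being smoothed over many message-passing or integration steps, which is the failure mode the rebuttal attributes to plain GNN/ODE pipelines.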
Summary: The paper introduces EGODE, a method to simulate rigid-body dynamics with contacts using a hierarchical representation. The main system consists of two parts: a mesh node representation and an object representation that are both coupled inside a neural ODE that simulates the rigid body system dynamics. In addition, an extra event module processor is learned to make instant changes to the system state in case of a collision. This combination enables the model to capture dynamic effects more accurately in several simulation settings and even generalize to novel scenarios involving external forces. Strengths: - the idea to combine GraphODE with event modules looks novel and is well motivated, given the shortcomings of current methods that typically apply GNNs, which cannot handle the instantaneous changes of collisions well - Overall the paper is structured well and is easy to understand, even for people from other domains. The introduction does a good job of motivating the proposed method. Related work covers various recent approaches and their differences from EGODE. - Detailed ablation studies are conducted to justify various design decisions of the framework, and the method is tested against many recent baselines on two benchmarks and surpasses them in all tested environments Weaknesses: - FiGNet is mentioned in the introduction multiple times but is not used as a baseline - The generalization experiment has no reference baselines, which makes it hard to assess the performance of the proposed method Technical Quality: 3 Clarity: 3 Questions for Authors: - Is there a reason to leave out FiGNet [1] as a baseline when it is mentioned in the introduction and its limitations are discussed in Related Work? Can you add the baseline to the experiments? - Could you elaborate on the factors contributing to EGODE's higher relative performance on RigidFall compared to Physion? Are there specific characteristics of these benchmarks that favor EGODE's architecture?
- For the generalization experiment shown in Figure 5, could you add some baselines like SEGNO for comparison? It is hard to see how well the proposed method generalizes without any baseline references. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper and your insightful review. We address your comments below. > Q1. Is there a reason to leave out FiGNet [1] as a baseline when it is mentioned in the introduction and its limitations are discussed in Related Work? Can you add the baseline to the experiments? A1. We appreciate your suggestion to include FiGNet [1] as a baseline in our experimental evaluation. Unfortunately, because the original code is not publicly available, we did not initially implement FiGNet as a baseline in our scenario. However, we understand the importance of this baseline and have decided to replicate FiGNet using the information provided in the original paper. Given the computational resources required to run FiGNet and the need to ensure a fair comparison, we plan to provide the results in an updated version of our submission or as supplementary material. We expect to complete this within the next few days. > Q2. Could you elaborate on the factors contributing to EGODE's higher relative performance on RigidFall compared to Physion? Are there specific characteristics of these benchmarks that favor EGODE's architecture? A2. The superior performance of EGODE on the RigidFall benchmark can indeed be attributed to the simpler nature of the physical interactions present in this dataset. RigidFall consists of simulations involving just three cubes under varying gravitational conditions, all with uniform internal properties such as particle numbers, shapes, and friction coefficients, as shown in Figure 3. In contrast, Physion is a more complex dataset, encompassing eight distinct scenarios with multiple objects that vary in size, shape, and friction. For instance, Figure 1 showcases complex objects such as tower-shaped and torus-shaped objects. Additionally, Physion includes not only rigid-rigid interactions but also rigid-deformable interactions.
The simplicity of the RigidFall dataset aligns well with the strengths of our method, which excels at modeling straightforward physical interactions. In addition, the performance of EGODE across both datasets demonstrates its robustness and adaptability in handling a range of physical phenomena. > Q3. For the generalization experiment shown in Figure 5, could you add some baselines like SEGNO for comparison? It is hard to see how well the proposed method generalizes without any baseline references. A3. We agree that including additional baselines would enhance the interpretability of our generalization experiment presented in Figure 5. To address this, we will incorporate SEGNO and other relevant baselines for comparison. This will provide a clearer context for evaluating the generalization capabilities of our proposed method. We aim to finalize these additions and submit the updated figures within the next few days. --- Rebuttal Comment 1.1: Title: Thank you for addressing my concerns. Comment: I have read all of the authors' answers and thank the authors for addressing all raised points. I raised my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your valuable and insightful feedback, as well as the score improvement. We are delighted to learn that our responses have effectively addressed your concerns. In accordance with your valuable suggestions, we will incorporate the rebuttal content into the main paper for the final version.
Summary: The paper presents a Graph ODE framework to model rigid body dynamics. In a departure from previous works on Graph ODEs, the proposed framework incorporates a hierarchical structure by explicitly modeling both mesh-based representations and object-level representations of the rigid bodies. Furthermore, the framework also introduces a learnable event-detector and an event-processor. The event-detector module is used to estimate the time at which a potential collision occurs, and the event-processor deals with the instantaneous state update after a collision event. The method is evaluated in the RigidFall and Physion datasets and consistently outperforms other baselines. Strengths: - The paper is well written and the overall ideas are clear. - The proposed method is evaluated thoroughly in challenging benchmarks, and it consistently outperformed state-of-the-art baselines in terms of contact prediction accuracy and in terms of the mean-squared error of the prediction of Euclidean coordinates. - The framework has low sensitivity with respect to hyperparameters. Weaknesses: - The learnable event function, which operates on pairwise mesh points, requires exhaustive evaluation, which might hinder the scalability of the approach in terms of the number of rigid bodies that can be simulated and in terms of the mesh resolution of the simulated rigid bodies. [Minor] L77. The wording might be lacking a negation. Are other approaches able to "handle intrinsic continuity and discontinuity in rigid models"? Technical Quality: 3 Clarity: 3 Questions for Authors: - How fine-grained can the mesh-based representation be without deteriorating the performance of the approach? - Are there any scenarios where using standard mean square error (MSE) loss leads to poor performance? Would incorporating physics priors in the loss function further improve the performance or would it potentially reduce the amount of data required for learning?
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Limitations of the method are mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper and for your insightful comments and support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your major concern and provide additional clarification. > Q1. The learnable event function, which operates on pairwise mesh points, requires exhaustive evaluation which might hinder the scalability of the approach in terms of the number of rigid bodies that can be simulated and in terms of the mesh resolution of the simulated rigid bodies. A1. We agree that the pairwise evaluation of mesh points for event detection could potentially limit the scalability as the number of rigid bodies and mesh resolution increase. However, our event function does not directly process all pairwise mesh nodes, but instead performs rule-based filtering first (which was not elaborated on in the main text due to space limitations). Specifically, Equation (7) first uses the distance condition $d(x_i, x_j)<d_{threshold}$ to select the node pairs that require event function computation, and then uses a GNN to propagate information. This significantly reduces the time complexity of the function. Moreover, as the nodes become denser, we can also reduce $d_{threshold}$ to alleviate the computational burden of the event function. Since denser nodes make the collision analysis more detailed, reducing $d_{threshold}$ does not significantly affect the performance. We are conducting experiments to examine the performance and efficiency practically, in terms of the number of rigid bodies and the number of mesh nodes; the results will be provided during the discussion period in the next several days. > Q2. L77. The wording might be lacking a negation. Are other approaches able to "handle intrinsic continuity and discontinuity in rigid models"? A2. Thanks for pointing this out. We will fix it in the official release.
In our work, we utilize the graph ODE and the event function to handle intrinsic continuity and discontinuity in rigid models. We believe that other potential methods are also worth discussing, such as using 3D space or physical inductive biases of rigid bodies. We will further explore these approaches in future work to find more solutions to this problem beyond our proposed EGODE. > Q3. How fine-grained can the mesh-based representation be without deteriorating the performance of the approach? A3. We have already discussed in A1 how to reduce the complexity of the event function. By reasonably selecting $d_{threshold}$, the event function can be controlled within $O(N)$, where $N$ is the number of nodes. We are conducting an experiment on how fine-grained the mesh can be without deteriorating performance. Due to the heavy computational burden, we will provide the results during the discussion period in the next several days. > Q4. Are there any scenarios where using standard mean square error (MSE) loss leads to poor performance? Would incorporating physics priors in the loss function further improve the performance or would it potentially reduce the amount of data required for learning? A4. Although we currently achieve good results using the MSE loss, incorporating physics priors into the MSE is an interesting idea. We have tried supervising the center of mass position $X_c^t$ using MSE as well. Due to the heavy computational burden, we will provide the ablation study during the discussion period in the next several days. --- Rebuttal Comment 1.1: Title: Thanks for the clarifications. Comment: I've read the author's response and look forward to the results of the ablation study. --- Reply to Comment 1.1.1: Comment: Thank you for your patience! We have completed the ablation experiments, and we will clarify our answers to your insightful questions with the experimental results. > Q1.
The learnable event function, which operates on pairwise mesh points, requires exhaustive evaluation which might hinder the scalability of the approach in terms of the number of rigid bodies that can be simulated and in terms of the mesh resolution of the simulated rigid bodies. A1. We interpolate the object mesh to increase the original data's mesh resolution by 2 to 20 times and evaluate our EGODE in the Dominoes and Collide scenarios of Physion (we pre-tuned the optimal hyperparameters such as $d_{threshold}$ on the validation dataset). The experimental results for prediction accuracy and the simulation time of a 6-second sequence are as follows; the numbers in the header give the average number of mesh nodes.

| Accuracy (%) | 2173 | 4347 | 10867 | 21733 | 43467 |
|---|---|---|---|---|---|
| Dominoes | 94.7±1.4 | 94.4±1.4 | 93.9±1.6 | 94.1±1.3 | 93.5±1.6 |
| Collide | 90.0±1.0 | 89.7±1.2 | 89.3±0.7 | 89.4±0.7 | 88.7±1.4 |

| Time (s) | 2173 | 4347 | 10867 | 21733 | 43467 |
|---|---|---|---|---|---|
| Dominoes | 0.65±0.09 | 0.68±0.08 | 0.68±0.11 | 0.70±0.13 | 0.75±0.16 |
| Collide | 0.81±0.11 | 0.85±0.10 | 0.85±0.13 | 0.87±0.16 | 0.93±0.20 |

We can find that: 1. Increasing the mesh resolution may slightly affect the collision prediction accuracy of our EGODE, as a larger graph results in longer information transmission paths. However, the performance degradation is relatively slight, which is credited to the hierarchical design of EGODE. 2. The influence of mesh resolution on simulation time is also slight. The time complexity of EGODE is approximately $O(TL|\mathcal{E}|)$, where $T$ is the number of output time steps, $L$ is the number of propagation layers in the GNN, and $|\mathcal{E}|$ is the number of edges that require information transmission.
When node density increases, thanks to our adaptive adjustment of parameters like $d_{threshold}$, the growth of $|\mathcal{E}|$ and of the number of node pairs processed in the event function is linear in the mesh resolution. Since PyTorch's matrix multiplication parallelizes well, the computation time does not increase significantly even when the mesh resolution and $|\mathcal{E}|$ grow. The additional time consumption likely comes from higher GPU I/O. In the Physion dataset, the number of objects varies. The performance of our EGODE on scenes with different numbers of objects is shown below; the numbers in the header give the number of objects.

| Accuracy (%) | 4 | 5 | 6 | 7 |
|---|---|---|---|---|
| Dominoes | 95.3±1.3 | 94.8±1.2 | 94.7±1.8 | 94.4±1.6 |
| Collide | 91.2±0.8 | 91.0±1.3 | 89.8±1.4 | 89.4±1.2 |

The experiments above demonstrate the robustness of our EGODE under variations in mesh resolution and the number of objects. > Q3. How fine-grained can the mesh-based representation be without deteriorating the performance of the approach? A3. Given the experimental results from A1, we find that performance and time impose minimal constraints on the granularity of the mesh, so we can use more precise object descriptions at a relatively low cost. In our setting, $20\times$ mesh resolution means 10,000 or more nodes per object, which is sufficient to depict objects precisely. > Q4. Are there any scenarios where using standard mean square error (MSE) loss leads to poor performance? Would incorporating physics priors in the loss function further improve the performance or would it potentially reduce the amount of data required for learning?
A4. We incorporate the center of mass position $X^t_c$ and its angular velocity $\Omega^t_c$ into the loss function via MSE as physics priors. The ablation study is as follows:

| Accuracy (%) | Dominoes | Contain | Link | Drape | Support | Drop | Collide | Roll |
|---|---|---|---|---|---|---|---|---|
| EGODE | 94.7±1.4 | 79.0±1.3 | 75.0±1.1 | 61.7±0.6 | 71.7±0.8 | 75.3±1.3 | 90.0±1.0 | 85.7±0.8 |
| EGODE with P | 94.8±1.1 | 79.0±1.2 | 75.1±1.2 | 61.7±0.4 | 71.6±0.6 | 75.4±1.1 | 90.0±1.3 | 85.8±0.7 |

where EGODE with P is our EGODE with the extra loss. The results suggest that the additional loss is mildly helpful to the model. We will also add your suggestion about future work to our revised version. Thanks again for appreciating our work and for your constructive suggestions! Please let us know if you have further questions.
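As a concrete illustration of the "EGODE with P" variant discussed above, here is a hedged sketch of adding a physics-prior term to a node-level MSE; the weight `lam` and the flat-list data layout are our own illustrative assumptions, not details from the paper:

```python
def mse(pred, target):
    """Mean squared error over paired scalar entries."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def loss_with_physics_prior(pred_nodes, true_nodes, pred_com, true_com,
                            lam=0.1):
    """Node-level MSE plus a weighted MSE on the predicted center of mass,
    mirroring the extra supervision on X_c^t described in the rebuttal.
    `lam` is a hypothetical weight balancing the two terms."""
    return mse(pred_nodes, true_nodes) + lam * mse(pred_com, true_com)

# Perfect node prediction, center of mass off by 1.0 -> loss = lam * 1.0
loss = loss_with_physics_prior([1.0, 2.0], [1.0, 2.0], [0.0], [1.0])
```

The same pattern extends to any differentiable prior (e.g. an angular-velocity term), which is how one would add the $\Omega^t_c$ supervision mentioned above.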
Summary: This paper introduces EGODE, a method for modeling rigid dynamics. To do this, they introduce a framework which integrates neural ODEs and GNNs, and they integrate this with an event module approach for collision modeling. They demonstrate superior performance to baselines on two standard benchmarks and demonstrate via ablations the contribution of components of their model. Strengths: This paper is overall clearly written, with a clear and explicit methods section, good motivation, and well-structured results. Methods are (to my knowledge) novel, with particular interest in the graph ODE framework. Experiments appear to use sound approaches and appear significant, with fairly good improvements over a wide range of popular baselines. The ablation study is convincing. Weaknesses: I've got some minor clarity comments.

- It would be helpful to be a bit more concrete with existing model shortcomings in the introduction. It is not completely clear what it means for existing approaches to “fail to take…into consideration” instantaneous changes. I think I know what you’re saying (e.g. GNNs model large instantaneous changes via iterative message passing, which can lead to issues — is that right?) but it would be good to be a bit more explicit in outlining these issues.
- Benchmarks could be described better. E.g., I assume the success metric shown for Physion is the accuracy on whether the objects of interest collided, right? Yet as far as I can tell, this is never mentioned.

In judging significance, if I'm understanding things correctly, you're providing standard deviation estimates in the tables. It would be helpful to report SEM or give confidence intervals. The reported +- are often not particularly small relative to the improvements, to the extent that a number of them appear like they may not be significant. It would be helpful to clarify this.
More broadly, it makes sense to focus on rigid bodies, but much related work is for deformable bodies as well, and arguably, a major reason for using these complex neural models is to accommodate different sorts of media. It would be helpful to at least give a sense for how this method could be extended, or what challenges there are in extending it. Technical Quality: 3 Clarity: 3 Questions for Authors: Just reiterating the above — could you provide confidence intervals on experiments? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weaknesses re: deformable bodies. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper, for your insightful comments, and for your support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your major concerns and provide additional clarification. > Q1. It would be helpful to be a bit more concrete with existing model shortcomings in the introduction. It is not completely clear what it means for existing approaches to “fail to take…into consideration” instantaneous changes. I think I know what you’re saying (e.g. GNNs model large instantaneous changes via iterative message passing, which can lead to issues — is that right?) but it would be good to be a bit more explicit in outlining these issues. A1. Thanks for your comment. Your understanding is correct. GNN methods utilize iterative message passing to model the spatial relationships based on current distances, which makes it hard to capture large instantaneous changes. We will include it in the revised version. > Q2. Benchmarks could be described better. E.g. I assume the success metric shown for Physion is the accuracy on whether the objects of interest collided, right? Yet as far as I can tell, this is never mentioned. A2. Thanks for your comment. The success metric denotes the accuracy of predicting whether the objects of interest collided. We will include it in the revised version. > Q3. In judging significance, if I’m understanding things correctly, you’re providing standard deviation estimates in the tables. It would be helpful to report SEM or give confidence intervals. The reported +- are often not particularly small relative to the improvements, to the extent that a number of them appear like they may not be significant. It would be helpful to clarify this. A3. Thanks for your comment. We have conducted paired t-tests to verify that all of the improvements over the best baseline are statistically significant with p-value < 0.01.
Our problem comprises classification and trajectory-regression tasks rather than parameter inference, so SEM or confidence intervals on estimated parameters are not directly applicable for performance comparison. > Q4. More broadly, it makes sense to focus on rigid bodies, but much related work is for deformable bodies as well, and arguably, a major reason for using these complex neural models is to accommodate different sorts of media. It would be helpful to at least give a sense for how this method could be extended, or what challenges there are in extending it. A4. Thanks for your comment. Deformable body dynamics involve complex interactions and behaviors not present in rigid body dynamics, such as stretching, bending, and compressing, which require different modeling approaches. Although we do not have specific low-level modules designated for deformable objects, our model can still handle complex scenarios involving interactions between deformable objects and rigid bodies due to its generalization ability. For instance, Table 3 shows our model's ability to model deformable objects' interaction with rigid bodies in the Drape scenario, where the deformable objects are cloth. This result showcases the potential application of our approach in modeling deformable object dynamics. In future work, we will extend our methods to deformable bodies on more complex and challenging datasets and may design modules specifically for processing deformable objects. > Q5. Could you provide confidence intervals on experiments? A5. Thanks for your comment. Please refer to A3. Thanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions. --- Rebuttal Comment 1.1: Title: Thanks! Comment: I appreciate the response and all generally makes sense. I'll maintain my score. --- Reply to Comment 1.1.1: Title: Thank you for your feedback and support! Comment: Thank you for your feedback and support!
We are pleased to know that our responses have fully addressed your concerns. We will add the rebuttal contents to the main paper in the final version following your valuable suggestions.
null
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper introduces EGODE, a novel framework for modeling rigid dynamics that has applications in robotics, graphics, and mechanical design. The framework addresses the limitations of existing graph neural network (GNN) simulators by incorporating both mesh node representations and object representations within a coupled graph ODE (Ordinary Differential Equation) structure. EGODE introduces an event module to capture instantaneous changes during collisions, providing a more accurate and continuous model of rigid dynamics. The framework's performance is validated through extensive experiments on benchmark datasets, demonstrating superiority over various state-of-the-art baselines. Strengths: 1. **Innovative Approach**: EGODE proposes a new perspective on modeling rigid dynamics by combining continuous evolution and instantaneous changes using graph ODEs. 2. **Coupled Graph ODE Framework**: The use of a coupled architecture effectively models hierarchical structures in rigid-body systems. 3. **Event Module for Collisions**: The introduction of an event module enhances the framework's ability to handle instantaneous changes during collisions. Weaknesses: 1. The paper focuses on empirical results, with less emphasis on theoretical insights or proofs of concepts. What are the theoretical underpinnings of the coupled graph ODE framework, and are there any proofs of concept? 2. While EGODE shows good generalization within the tested scenarios, its performance in other types of rigid dynamics needs further evaluation. How does EGODE compare with traditional physics engines in terms of generalization ability, computational efficiency and accuracy? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does EGODE compare with traditional physics engines in terms of computational efficiency and accuracy? 2. Can the authors elaborate on how EGODE might be extended to handle scenarios with rigid body hinges and deformable objects? 3. 
What are the theoretical underpinnings of the coupled graph ODE framework, and are there any proofs of concept? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. **Rigid Body Hinges and Deformable Objects**: EGODE currently cannot accommodate complex scenarios involving rigid body hinges and deformable objects. 2. **Dataset Dependency**: The framework's capabilities are limited by the scope and diversity of the datasets used for training and validation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper and for your insightful review. We address your comments below. > Q1. How does EGODE compare with traditional physics engines in terms of computational efficiency and accuracy? A1. Thanks for your comment. The simulation speed of EGODE is not inferior to numerical simulations. Our EGODE can achieve efficient inference by leveraging GPUs and deep learning toolkits such as torch_geometric. In our paper, each scenario visualization contains thousands of nodes and spans 0.5~1 minutes, yet it only requires less than 1 second to simulate and render on a single laptop. Due to the heavy computational burden of adapting our data to traditional physics engines, we will provide the experimental speed comparison during the discussion period within the next few days. > Q2. Can the authors elaborate on how EGODE might be extended to handle scenarios with rigid body hinges and deformable objects? A2. Hinges and deformable objects are both meaningful and more challenging than rigid body dynamics. For hinges, each block can be treated as an independent rigid body, and special edges can be used in our EGODE to model the interactions between different parts at the hinge points, maintaining geometric and mechanical consistency. The fixed parts of the hinges can also be considered a special case of rigid body dynamics under constraints; we evaluate our EGODE on a similar setting called Contain (Table 3), where container objects restrict other objects' movements. The model can be trained by collecting sufficient data on hinge dynamics scenarios to improve its performance. While we did not specifically evaluate our model on single deformable objects, our framework is designed to handle more complex scenarios involving interactions between deformable objects and rigid bodies.
As an example, we have evaluated our method on a scenario called "Drape" and the results are reported in Table 3, where deformable objects (cloth) interact with rigid bodies. This evaluation demonstrates the potential of our approach to handle deformable object dynamics and its generalization ability. > Q3. What are the theoretical underpinnings of the coupled graph ODE framework, and are there any proofs of concept? A3. In Appendix B, we have provided the theoretical underpinnings to show that it is possible to propagate gradients across the event times to the input arguments of the system, and therefore our method can be optimized by SGD. Besides, our work focuses on empirical results since both neural ODEs and physical scenarios involve too many parameters. In future work, we consider simplifying the model and developing more interpretable models with theoretical underpinnings. We will include this in the revised version. Thanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions. --- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: I appreciate the response and all generally makes sense. I'll maintain my score.
null
null
null
null
null
null
Towards Croppable Implicit Neural Representations
Accept (poster)
Summary: This paper proposes Local-Global SIRENs, which partition the space into different regions and fit each region with smaller local INRs, leading to croppable INRs by cropping the weights relative to the local regions. The model further uses local and global feature extraction to improve the fitting performance. The experiments show that their method supports cropping INRs for image, audio, video, and CT data. Strengths: 1. The idea of using partition-based INRs for croppable INRs is novel. Croppable INRs are an interesting application for partition-based INRs. 2. The paper is well-written and easy to follow. 3. The cropping performance looks impressive, and the fitting performance is slightly improved compared to SIREN due to the global feature extraction. Weaknesses: 1. Cropping INRs is a natural property of partition-based INRs, which makes the contribution minor. 2. While the authors mention their method supports automatic partitioning, they do not show the detailed implementation of how to automatically partition the space. 3. Even though the authors have conducted an ablation study on partition factors, the range of experimented partition factors is not enough. I recommend that the authors try partition factors as low as (2,2) and as high as (512, 512) for the 512 * 512 images and show how the partition affects the fitting performance. 4. Cropping based on simple grids may be impractical for real scenarios. I recommend that the authors improve their method by implementing semantic segmentation-based cropping. See partition-based INRs with semantic segmentation [1]. 5. I am confused about the conclusion that enlarging the partition factors leads to faster training while decreasing the partition factors enhances reconstruction accuracy (line 293). Since SIREN is just your Local-Global SIREN with partition factors (1,1), if your conclusion is right, SIREN should have better performance than your LG SIREN.
As pointed out in [1], partition should generally improve the fitting performance with larger partition factors. And please provide some explanation about why increasing the number of partitions enhances overall speed. [1] Liu, Ke, et al. "Partition speeds up learning implicit neural representations based on exponential-increase hypothesis." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Could you please discuss how to automatically partition the region with your method? 2. Could you please discuss whether your method can be extended to semantic segmentation-based cropping? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The limitations and potential negative societal impact of their work have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you for the constructive feedback and great reference. We appreciate the time and effort. Please see our comments and additional results below. **[W1] Contribution** While training an INR-per-partition does allow for cropping with a proportionate weight decrease, our extensive experiments across various modalities show that this method significantly degrades encoding quality and introduces unwanted artifacts. These artifacts are illustrated in Figures 4, 6, and 11, with quantitative results presented in Tables 1, 2, and 12. Our method not only addresses these issues but also **improves upon the baseline INR itself**. Local-Global INRs enable cropping capabilities while enhancing encoding quality and performance on downstream tasks. Although other INR-per-partition methods using larger or semantically-uniform partitions might improve over uniform partitions, they have two downsides: (1) They limit the flexibility of cropping. (2) Since the partitions are not fixed across various signals, future applications of partition-based downstream tasks become more difficult to achieve. **[W2/Q1] Automatic Partitioning** The automatic partitioning process is described briefly in Section 3.6 and involves a straightforward approach. Given a target partition size (e.g., 32x32 image pixels) and a required global weight ratio (typically around 10%), we compute the partition factors through simple division and determine the hidden dimensions of sub-networks using binary search. The implementation is provided in the `compute_partitions.py` module. We will include the relevant pseudocode in the final revision of the paper (please see pseudocode in the general top comment). **[W3] Partition Factors Range** We agree that extending the range of partition factors is valuable, though space constraints limited our initial revision. We have now included image encoding results for partition factors (2, 2), (32, 32), and (64, 64). 
The results are as follows:

| Partition Factors | MSE (*10^-4) | SSIM | PSNR | Train Time (s) |
|-------|----------|--------------|-------|---------|
|(2,2)|**11.2**|**0.946**|**32.59 ± 0.52**|74|
|(4,4)|12.0|**0.946**|32.10 ± 0.47|40|
|(8,8)|13.5|0.942|32.29 ± 0.42|26|
|(16,16)|15.3|0.934|32.00 ± 0.39|22|
|(32,32)|19.0|0.917|31.51 ± 0.28|**15**|
|(64,64)|31.9|0.868|29.56 ± 0.45|**15**|

Using even larger partition factors, as the reviewer suggested, is not feasible, as there are not enough weights in the network to distribute among the partitions. Further increasing the number of partitions would require expanding the network capacity beyond the typical 200k parameters used in previous literature for image encoding tasks. We will include these extended experiments in the final revision of the paper. **[W4/Q2] Segmentation-based Cropping** Thank you for the suggestion. Implementing semantic segmentation-based cropping could indeed enhance the quality of our method by aligning partitions with meaningful image features. However, we believe integrating this approach into our current work is beyond the scope of this paper. We recognize that while semantic segmentation-based methods can improve the encoding quality and offer context-aware cropping, they may also limit cropping flexibility since only entire segments may be cropped (we briefly mentioned this in the first subsection of this rebuttal). We will definitely consider exploring this direction as part of our future research. Additionally, the paper you cited is highly relevant to our approach, and we will include it in both the related work and future work sections of our paper. **[W5] Quality-Latency Trade-off** Thank you for highlighting this point. It is important to clarify that while SIREN with partition factors of (1,1) might appear similar to our Local-Global SIREN, the two architectures are **significantly different**. LG-SIREN with (1,1) still involves two networks with features intertwined throughout the forward pass.
This intricate architecture affects performance, as seen for the image encoding task discussed above:

| Method | MSE (*10^-4) | SSIM | PSNR | Train Time (s) |
|-|-|-|-|-|
|LG-SIREN (1,1)|**12.9**|**0.939**|**32.52 ± 0.66**|87|
|SIREN|18.4|0.914|31.17 ± 0.68|34|

Additionally, the extreme case of (1,1) is somewhat analogous to INCODE, where a small modulator network augments the features of a large network. It is not surprising that LG-SIREN in this case achieves high reconstruction quality, even though it does not explicitly perform local feature learning. We will make sure that these distinctions are clearly presented in the revised paper. Regarding why larger partition factors enhance training speed, note that larger partition factors (i.e. *smaller partitions*) result in a *larger* number of *smaller* local sub-networks, which means there is less interconnectivity between neurons in the entire architecture. This quadratically reduces the number of floating-point operations (FLOPs) in the linear layers, and allows for better parallelization, thereby speeding up training. We briefly mention the quadratic complexity of FLOPs in Section 3.4, but we agree that this point needs to be more explicit. We will make sure it is clear in the final revision of the paper. Regarding the observation made by [1], it seems that increasing the number of partitions is beneficial in terms of PSNR when using segmentation-based partitions (PoS), but this is not necessarily the case for a fixed grid (PoG) and SIREN, which is similar to our use case. Also, please note that in our experiments we explore a significantly larger number of partitions (from 2x2 up to 64x64 in the results above), whereas in Figure 6 of [1] the authors experiment with up to 12 partitions. It would be interesting to explore the effects of *significantly* increasing the number of partitions for PoS.
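The quadratic FLOPs argument above can be checked with a back-of-the-envelope sketch (illustrative only; the widths below are made up, not the paper's configurations):

```python
def layer_flops(width):
    """Multiply-accumulates in one width x width fully connected layer."""
    return width * width

def partitioned_flops(width, k):
    """Same total neuron count, split into k sub-networks of width width/k."""
    sub = width // k
    return k * layer_flops(sub)

# One 256-wide layer costs 256*256 = 65536 MACs, while 16 sub-networks
# of width 16 cost 16 * 16*16 = 4096 MACs: a k-fold (here 16x) reduction,
# since k * (W/k)^2 = W^2 / k.
```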
We once again thank you for pointing our attention towards [1], this work is extremely interesting and relevant to our own. --- Rebuttal Comment 1.1: Title: Official Comment from Reviewer XqfY Comment: I appreciate the author's effort in providing such a detailed rebuttal. After reading all the reviewers' opinions and the rebuttal, I think this method showcases that partition-based INR can be used to crop the image in an elegant manner, which is a great potential application of partition-based INR and may contribute to object detection based on INR. Most of my concerns have been well addressed. I tend to raise my rating to vote for "weak accept". --- Reply to Comment 1.1.1: Comment: We thank the reviewer for considering our rebuttal in such detail. We appreciate the reviewer’s support and the decision to raise the rating. The authors
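The automatic-partitioning procedure mentioned in the rebuttal above ([W2/Q1]: partition factors by simple division, sub-network hidden dimensions by binary search) could be sketched as follows. This is a hypothetical simplification with an assumed parameter-count formula; the paper's actual `compute_partitions.py` module may differ:

```python
def partition_factors(signal_shape, target_partition):
    """Partition factors by simple division, e.g. a 512x512 image with a
    32x32 target partition size gives factors (16, 16)."""
    return tuple(s // t for s, t in zip(signal_shape, target_partition))

def local_hidden_dim(n_partitions, local_budget, n_layers=3):
    """Binary-search the largest per-sub-network hidden dimension whose
    total local parameter count fits the budget (crude count: n_layers
    square weight matrices per sub-network; biases etc. ignored)."""
    def params(h):
        return n_partitions * n_layers * h * h

    lo, hi = 1, 4096
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if params(mid) <= local_budget:
            lo = mid
        else:
            hi = mid - 1
    return lo
```

For example, with a ~200k-parameter target network and roughly 10% of the weights reserved for the global network, the remaining local budget would be divided among the sub-networks this way.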
Summary: The paper proposes a method for learning patchwise INRs that are integrated with a global INR. The method is designed with cropping in mind, and this cropping can be achieved by pruning the relevant patchwise INR - similarly, cropping is limited to the pre-defined patches. This allows for novel post-training cropping, where the INR can be cropped in a way that reduces the number of model weights (and therefore the storage space). The method shows benefits beyond cropping - faster training time (or better visual quality at equal epochs). It is also flexible, and can be applied to various MLP-based INRs. Strengths: [S1] The paper is exceptionally easy to follow, with good reasoning/motivation, well-explained method, and experiments that support the claims of the paper. Last paragraph of the intro, Figure 1, and Figure 3 are especially good in this regard. [S2] While on face the applications seem limited, since the pruned parameters can be discarded, this method could be quite useful in the compression regime, where the MLP INR could be cropped by the end user and size would be reduced accordingly. The faster training time is also beneficial in this setting. [S3] The method has benefits beyond its target problem (cropping) - better quality with less training. [S4] The experiments are very thorough, proving the flexibility of the method with applications across multiple models and domains. [S5] The supplementary even goes beyond what's necessary to provide comparisons with e.g. KD for INR training. Weaknesses: The patch-based formulation is a little unsatisfying in the following ways. [W1] It doesn't reveal anything very new about the INRs themselves. Rather than discover pruneable parameters that correspond to cropping, or developing some objective that imparts some locality on the representation, the local vs. global distinction is very rigidly enforced as a prior. [W2] The image can only be cropped according to the pre-defined patches. 
Separately, [W3] Partly connected to W1, the post-training editing is totally restricted to only cropping. This might have applications to compression, but is otherwise at this point more of a novelty, in the sense that it is not very practical. Technical Quality: 4 Clarity: 4 Questions for Authors: What are the practical applications of this method? The main reason my score isn't higher is because I think the impact of this work is somewhat limited to more niche applications that might be of interest to those that study INR, but less so to the broader community. What does this reveal about the nature of INRs? It would also be nice if the paper revealed some deeper insights about INR in general. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you for the thoughtful feedback and insightful comments. We appreciate the time and effort. Please see our comments and additional results below. **Weaknesses:** **[W1]** We appreciate the reviewer’s observation regarding the rigid enforcement of the local-global distinction as a prior. This rigidity is indeed central to achieving zero-shot croppability in our method. While it may not reveal novel aspects of INRs themselves, our approach demonstrates that (1) combining local and global features serves as a robust prior, enhancing reconstruction quality compared to the INR-per-partition method, (2) the quadratic complexity of fully connected layers in INRs can be relaxed, and (3) there is potential for partition-based downstream tasks and editing. **[W2]** We agree that the current method only allows cropping according to predefined patches. As mentioned by reviewer *XqfY*, integrating semantic segmentation-based partitioning could address this limitation. Although this is beyond the scope of the current paper, we find it an exciting direction for future research. **[W3]** We acknowledge that post-training editing is currently limited to cropping. Our long-term vision is to expand this capability, enabling more sophisticated partition-based editing operations. We believe this work lays groundwork for future developments in editable INRs. **Questions:** **Practical Applications:** While our research is particularly relevant to those exploring INRs, we believe it also offers broader implications. By extending a baseline MLP-based INR with our local-global approach, we achieve enhanced quality and latency. This improvement can benefit various lines of work incorporating INRs, including image reconstruction, super-resolution, and potentially more complex tasks. In addition, we believe our work can be used for partition-based downstream tasks. 
For instance, while Functa [13] has shown the use of INRs for classification, our partition-based approach can be used for more complex tasks such as object detection (per-partition classification). Additionally, the locality of weights in our method could facilitate partition-based editing, with methods which directly operate over the weight space such as [29]. **Insights about INRs:** While our work might not provide groundbreaking new insights into the nature of INRs, it demonstrates that integrating local and global features is a promising approach for improving reconstruction quality and reducing computational complexity. We aim to lay foundations for future partition-based editing and inspire the development of INRs which are inherently editable. We appreciate the reviewer’s feedback and hope that our comments help illustrate the potential impact and future directions of our work. --- Rebuttal Comment 1.1: Title: Rating unchanged Comment: I appreciate the rebuttal. I still think the work is strong, but not groundbreaking, so I keep my rating at strong accept. --- Rebuttal 2: Comment: We greatly appreciate the reviewer's support for our work. The authors
Summary: This paper proposes a new INR architecture to admit easy cropping of the target datum to a certain partition, allowing one to save memory and inference cost without any retraining. Compared with training a new INR for the target partition, the approach lets one utilize the global context as well. The idea is to train multiple local networks, and modulate their intermediate features with the features of a global network. Through experiments on image/audio/video encoding, the paper demonstrates that the method enjoys faster and more accurate fitting than training one INR per partition. Also, the paper shows that the method works when combined with many INR architectures, e.g., SIREN and INCODE. Strengths: - A notable strength of the proposed method is that it is very simple and easy to use, making it likely to be scalable and generally applicable. I believe that the method can also be combined well with the triplane-based neural fields or instant-ngp. - The method is also very clearly presented. In particular, Figure 3 is very effective in delivering how the proposed architecture works. - Lastly, I appreciate the fact that the paper provides experiments on many different modalities, from image to video. Weaknesses: - **Utility of croppability?** The key weakness of this paper is the motivation. Apart from "saving storage & compute," the practical utility of having a croppable INR is unclear; will local-global INRs also be useful in performing any subsequent "editing" operations? I suspect that this is why authors provide section 4.4, where the authors "extend" the LGS to parameterize the larger image than the one originally considered. However, for such purposes, there are already other good meta-learning-based solutions such as [26]. I am not sure why the model should be croppable for such applications. - **Novelty & Ablations.** The proposed method bears much similarity with [26], which modulates the local model with another global model.
The key difference here is how we modulate; this paper uses an extra linear layer to process the local+global features, while [26] uses multiplications (later works, such as functa, used addition). To fully understand what this paper contributes, there should be an explicit comparison with these similar methods as a baseline. - **Evaluation.** If I understood correctly, the evaluation is mostly based on how the model fits the seen coordinates. I wonder how these affect the generalizability of the learned signal to unseen coordinates. As this is one of the key strengths of having a global context, I have enough reasons to believe that LGS will work well. However, an explicit verification is needed. - **Hyperparameter tuning.** I wonder how the hyperparameters for the models and the baselines are selected. In particular, how were the values of "omega" and the learning rate selected? These two are quite critical in determining the fitting speed, so this point should be crystal clear. Technical Quality: 3 Clarity: 4 Questions for Authors: Please see the "weakness." Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you for the constructive feedback. We appreciate the time and effort. Please see our comments and additional results below. **Utility of croppability** The utility of our method stems from the inspiration for editable INRs. We began with the fundamental cropping operation and demonstrated various other benefits. The primary goal is the ability to identify which neural network weights to remove based on the required target signal in a zero-shot manner. Our method offers several additional advantages, as detailed in the paper: **improved accuracy** over the alternative INR-per-partition method and the baseline INR, **faster convergence** in terms of iterations and latency, and **enhanced performance** on various downstream tasks such as CT reconstruction, super-resolution, and denoising. Additionally, we showed its adaptability across multiple modalities. Regarding more advanced editing operations, we believe our proposed architecture provides a foundation for more sophisticated per-partition editing operations and downstream tasks. Methods that directly manipulate the network’s weight space, such as [29], can leverage the inherent locality of our method. Regarding section 4.4 - We demonstrate that our method can be used for extending signals. However, it does not compete with other meta-learning solutions. Instead, it serves as a *complementary* method, as one can meta-learn a good initialization for Local-Global INRs and benefit from both. Note that the additional benefit of extending signals with our method lies in the ability to *enlarge the number of weights* to fit larger signals, which is not possible with meta-learning-based solutions that require learning a new initialization. The intuition is demonstrated in Figure 7, where the small SIREN reaches stagnation since the large signal requires more parameters to properly encode. 
**Novelty & Ablations** While the mentioned method bears a resemblance, it does not support cropping of the encoded signal with proportionate decrease in neural network weights, which is the main goal of our architecture. Although [26] improved encoding capabilities with a local-global approach, the network remains a fully-connected INR and thus cannot be modified as discussed. In contrast, our method leverages an NN architecture that uniformly distributes weights across signal partitions, inherently supports removing unwanted weights, and lays the foundation for future extensions of per-partition editing over the weight space. On the other hand, one might consider our Local-Global SIREN (LGS) as a potential replacement for [26]’s synthesizer. This idea closely resembles our proposed Local-Global INCODE (LGI). In LGI, we retain the harmonizer (similar to the modulator in [26]) and replace the composer network (similar to the synthesizer in [26]) with a Local-Global version. Since LGI improved both encoding quality and performance on various downstream tasks, we believe replacing the modulator in [26] with a LGS could also enhance the method. Having said that, we provide a comparison with [26] on the DIV2K subset mentioned in our paper. We trained [26] with the default configuration in the official implementation. Since their implementation currently only supports square images, we used the seven 512x512 samples from the DIV2K subset. The methods were trained from scratch on a single image for 2k iterations, sampling all coordinates in each iteration. Note that [26] requires significantly more training iterations (>60k) to converge, while our LGI achieves better performance after only 2k iterations. 
*Additionally, [26] has 5.5 times the number of trainable parameters compared to ours.*

| | [26] | SIREN | LGS | INCODE | LGI | [26] (64k iterations) |
|-|-|-|-|-|-|-|
|Average PSNR|23.85|32.18|32.88|37.85|**38.37**|35.21|
|#Params|1.1M|199k|200k|205k|205k|1.1M|

**Evaluation** We follow the evaluation methods used in many previous INR architecture papers, focusing on various signal encoding capabilities. Towards the end of our paper, we present three downstream tasks where our Local-Global version outperforms the baseline. Regarding generalizability, this is a great point that we also addressed in the paper (Section 4.6). Specifically, we train an INCODE and an LGI on a 4x downsampled version of an image and then evaluate the SSIM/PSNR on the full-resolution image. This experiment replicates the one presented in [20]. To provide further evidence of our method's generalizability, we conducted this experiment on the entire DIV2K subset (25 images) mentioned in the paper. The results below show that the Local-Global approach manages to enhance generalization capabilities compared to the baseline INR.

| Method | Mean PSNR | Mean SSIM |
|-|-|-|
|INCODE|26.68|0.703|
|LGI|**26.76**|**0.734**|

**Hyperparameters** To ensure a fair comparison, we used the same omega values as in the original SIREN and INCODE implementations. For the LR, we primarily followed the default configurations from baseline methods. In some experiments, we made slight adjustments to the learning rate, ensuring that the selected LR benefits all compared architectures. We will emphasize these points in the final revision of the paper. For the Local-Global configuration, hyperparameters are selected using our automatic partitioning logic. This logic determines hidden layer dimensions based on a target network size and a target global weights ratio (please refer to the general comment for the pseudocode).
Detailed experiment configurations and hyperparameters can be found in Appendices A.1 and A.2, as well as in the provided code implementation. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. **Croppability.** The response makes some sense. However, some points are still unclear to me. - The combinability with meta-learning has not been verified experimentally. In fact, this method seems like one specific way to perform meta-learning. - The argument that "enlarging the number of weights is impossible with meta-learning" is exaggerated; one can simply use multiple models (e.g., the one generated by MAML). Instead, I do agree to the point that meta-learning-based solutions may not be as flexible as the proposed one, having a much larger weight-granularity. **Novelty and ablations.** Thank you for the detailed experiment. **Evaluation.** I may have missed section 4.6. Thank you for pointing this out. **Hyperparameters.** Thank you for stating this. --- Many of my concerns have been verified (with some due to my misunderstanding). Although I am still slightly worried about certain points, I am no longer against acceptance of this paper. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the detailed response, additional comments, and for raising the score. Regarding the additional points: * We agree that definitive claims regarding the applicability of meta-learning techniques on a Local-Global architecture would require explicit experimental verification. While we have not identified specific limitations that would prevent a Local-Global SIREN network from benefiting from a similar meta-learning approach as applied to SIREN, we acknowledge that further research is needed in this area and plan to explore this in future work. * We also agree that stating "enlarging the number of weights is impossible with meta-learning" is not entirely accurate. 
The rephrasing suggested by the reviewer, noting that meta-learning-based solutions are not "as flexible," is indeed more appropriate. We appreciate the reviewer’s reconsideration and their support towards the acceptance of our paper. The authors
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to all the reviewers for their valuable feedback and insightful comments. We have carefully addressed each point raised in the individual reviews and provided detailed responses in the corresponding review replies. Additionally, we recognize that the details of the automatic partitioning method were not entirely clear in the initial submission. The automatic partitioning determines the hidden layer dimensions based on the target network size, the global weights ratio, and the selected partition size. To clarify, we are providing the pseudocode for the automatic partitioning process below, which will be added to the revised paper:

FUNCTION AutomaticPartitioning(target_total_params, target_global_weight_ratio, target_partition_size, signal_resolution):
    groups = ComputeNumGroups(signal_resolution, target_partition_size)
    // Compute global hidden dimension
    global_hidden_dim = FindDimension(target_total_params * target_global_weight_ratio)
    global_weights = ComputeGlobalWeights(global_hidden_dim)
    // Compute local hidden dimension
    target_local_weights = target_total_params - global_weights
    local_hidden_dim = FindDimension(target_local_weights, groups)
    local_weights = ComputeLocalWeights(local_hidden_dim, groups)
    // Adjust global weights to meet target ratio and total parameter count
    WHILE (global_weights + local_weights) NOT WITHIN 1% OF target_total_params:
        IF (global_weights + local_weights) > target_total_params:
            global_hidden_dim -= 1
        ELSE:
            global_hidden_dim += 1
        global_weights = ComputeGlobalWeights(global_hidden_dim)
    RETURN local_hidden_dim, global_hidden_dim

// ComputeNumGroups: Calculates the number of partitions the signal is divided into, using simple division
// ComputeGlobalWeights: Calculates the number of weights in the global network based on the given configuration
// ComputeLocalWeights: Calculates the number of weights in the local networks based on the given configuration
// FindDimension: Uses binary search to find the optimal hidden dimension given a target weight count and network parameters

Thank you, The authors
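For concreteness, the pseudocode above can be sketched as runnable Python. The MLP parameter-count formula, the network depth, the input/output dimensions, and the per-axis ceiling division in the group count are illustrative assumptions, not details fixed by the rebuttal:

```python
import math

def mlp_params(d_in, hidden, d_out, depth=3):
    # Parameter count (weights + biases) of an MLP with `depth` hidden layers
    # of equal width -- an assumed stand-in for ComputeGlobalWeights/ComputeLocalWeights.
    return ((d_in + 1) * hidden
            + (depth - 1) * (hidden + 1) * hidden
            + (hidden + 1) * d_out)

def find_dimension(target_params, copies=1, d_in=2, d_out=1, depth=3):
    # Binary search (cf. FindDimension) for the smallest hidden width whose
    # total parameter count over `copies` replicas reaches the target.
    lo, hi = 1, 4096
    while lo < hi:
        mid = (lo + hi) // 2
        if copies * mlp_params(d_in, mid, d_out, depth) < target_params:
            lo = mid + 1
        else:
            hi = mid
    return lo

def automatic_partitioning(target_total_params, target_global_weight_ratio,
                           target_partition_size, signal_resolution):
    # ComputeNumGroups: per-axis ceiling division (assumed semantics).
    groups = math.prod(math.ceil(r / target_partition_size)
                       for r in signal_resolution)
    global_hidden_dim = find_dimension(
        target_total_params * target_global_weight_ratio)
    global_weights = mlp_params(2, global_hidden_dim, 1)
    target_local_weights = target_total_params - global_weights
    local_hidden_dim = find_dimension(target_local_weights, copies=groups)
    local_weights = groups * mlp_params(2, local_hidden_dim, 1)
    # Nudge the global width until the total is within 1% of the target.
    while abs(global_weights + local_weights
              - target_total_params) > 0.01 * target_total_params:
        if global_weights + local_weights > target_total_params:
            global_hidden_dim -= 1
        else:
            global_hidden_dim += 1
        if global_hidden_dim < 1:
            break  # budget infeasible for this partitioning
        global_weights = mlp_params(2, global_hidden_dim, 1)
    return local_hidden_dim, global_hidden_dim

# e.g. a 200k-parameter budget, 10% global weights, 32x32 partitions of a
# 512x512 image (256 groups):
lh, gh = automatic_partitioning(200_000, 0.1, 32, (512, 512))
```

With these assumed formulas the example returns a local width of 18 and a global width that brings the total to within 1% of the 200k budget.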
NeurIPS_2024_submissions_huggingface
2024
Kronecker-Factored Approximate Curvature for Physics-Informed Neural Networks
Accept (poster)
Summary: This work develops a KFAC approximation for approximate second-order optimization of physics-informed neural network (PINN) losses, in order to address optimization difficulties of PINNs. The first key idea is to use forward-mode automatic differentiation for the different derivatives of the PDE and notice that it corresponds to a forward pass with additional inputs (corresponding to the different derivatives) and weight sharing. The second key idea is to then tie this to KFAC approximations for weight-sharing architectures. Both of these ideas are already known from previous work. However, the combination of these two ideas in the context of PINN optimization in order to obtain a scalable approximate second-order method is novel and promising. Experimental evaluation involves only training loss curves for 2 PDEs (heat equation and Poisson equation) as a function of time or iterations. This has been done only for a single run (no error bars); hyper-parameters were tuned with different HPO methods. Strengths: - The paper is generally very well written: The introduction, contributions and related work section is kept short and to the point, providing a clear explanation of the problem setting and motivating the approach of this paper. - The paper is technically solid. The authors are very precise in terms of the derivations and the notation. I could not spot any mistakes. - Viewing the derivatives as forward propagation of additional inputs with weight sharing and tying this to recently developed KFAC approximations is a neat insight, even if both on their own are not novel. Weaknesses: - While the authors are very rigorous in their notation (which I listed as a strength), it often took me a very long time to parse everything due to the overwhelming information for simple expressions.
For instance, even the simple expression of the chain rule that has just two terms can look quite complicated when writing the Jacobians using an operator applied to a vector and both having multiple indices that correspond to indices for layer, data point, derivative. Again, this makes the math precise and avoids confusion. But I think that some parts could be simplified to help the reader to focus their attention on what is actually new throughout all the respective parts of the paper, rather than having to slowly parse every detail and remember what each index and variable corresponds to. I am wondering if some of this could be simplified without much loss of precision. Perhaps one quick win would be to remove layer indices (superscript l): Every layer is approximated independently (factorizing). The indices would only be important if you had cross-layer interactions, but I do not see this. Secondly, I wish you could write the Jacobians directly with a single symbol rather than an operator applied to a vector or matrix (but I guess this is more difficult, because you need to differentiate between weights and neurons, etc.). - My main criticism is that the choice of experiments unfortunately lacks similar rigor to the rest of the paper. The loss curves do show that the KFAC approximation is often favorable, but only for a single run. It would be important to do multiple runs to average out the noise from different initializations and mini-batching, and also include error bars. Furthermore, the loss curves only show the training loss and it is thus not clear if KFAC leads to more overfitting compared to other optimisers. I would have also liked to see some visualizations for some low-dimensional problems that show failure-cases of SGD and how KFAC fixes this. Or some other informative visualizations other than loss curves. --------------------- - Minor: You noted that "Taylor-mode" AD is synonymous with forward-mode AD and refer to related work in 3.1.
However, "Taylor-mode" is mentioned a few times before that, e.g. in the last sentence of the related work, without a reference or explanation that it is also known as forward mode. Just put this information earlier. Technical Quality: 3 Clarity: 3 Questions for Authors: - I noticed that the EMA update rate hyper-parameter is often super large, even up to 0.988 (for heat equation, appendix A9), which effectively looks only at the most recent batch. Do you have an explanation for why this is? What happens if you use a much smaller mini-batch size? - Is it correct that for linear PDEs, the approximation boils down to simply averaging over the additional forward passes corresponding to the additional derivatives on top of the average over the data points? So in comparison to the standard KFAC, you only have one additional average for these derivatives? - For the non-linear PDEs, you had to make an additional approximation in C26 - C27 of the supplementary material. Does this correspond to linearizing the non-linear PDE? Is this analogous to a last-layer non-linearity, e.g. if we had some additional tanh activation before the regression outputs in order to bound the output values between -1 and 1? If I understood C24 correctly, the Psi term appears just due to the chain rule, but I did not understand why it was or had to be grouped with the Hessian of the loss w.r.t. outputs. Could you expand on this? - It seems very odd that KFAC performs better if the random mini-batches are *not* sampled anew at every step. Have you noticed any such thing for KFAC on standard problems (not PINN losses)? Are the matrices in KFAC inverted every iteration or also just every T-th iteration? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed the limitations, but I think the answer in (7) Experiment Statistical Significance should be No, as there are no error bars.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer aiiA, thanks a lot for the time and effort you put into reviewing our work. Regarding readability, we agree that layer indices can be omitted and will do so in the updated version. We will also think about other ways to make the notation lighter. **Weaknesses** > [...] the loss curves only show the training loss and it is thus not clear if KFAC leads to more overfitting compared to other optimisers. We respectfully disagree. *We report the relative L2 error rather than the training loss, which avoids this concern*. The relative L2 error is defined as $\frac{\lVert u_\theta - u^\star\rVert_{L^2(\Omega)}}{\lVert u^\star\rVert_{L^2(\Omega)}}$, where $u^\star$ is the true solution of the PDE. In PINNs, the relative L2 error is usually considered the relevant quantity, and we estimate it on held-out points that are not used during training, similar to the test loss in supervised learning. We apologize for not explicitly introducing the relative L2 error and have adjusted this in our revised version. > The loss curves do show that the KFAC approximation is often favorable, but only for a single run. It would be important to do multiple runs to average out the noise from different initializations and mini-batching, and also include error bars. It is true that our plots show the performance of a single run. However, this run was obtained through extensive hyper-parameter tuning, and we found our results to be consistent when using a different search strategy (random or Bayesian). We think this already supports the robustness of our results, but understand your concern. Since our computational resources are limited, would you be satisfied if we presented an additional experiment with error bars for one of the sub-experiments? > I would have also liked to see some visualizations for some low-dimensional problems that show failure-cases of SGD and how KFAC fixes this. Or some other informative visualizations other than loss curves.
This is a great idea. We already have the code to do such visualizations because it was helpful to debug all optimizers. We will add such visualizations for the low-dimensional PDEs in the appendix. **Explicit questions** > I noticed that the EMA update rate hyper-parameter is often super large, even up to 0.988 (for heat equation, appendix A9), which effectively looks only at the most recent batch. Do you have an explanation for why this is? What happens if you use a much smaller mini-batch size? High EMA factors correspond to *slowly* forgetting pre-conditioners from previous iterations and are common in KFAC. For example, [Martens and Grosse](https://arxiv.org/pdf/1503.05671) use an EMA factor increasing to 0.95 (page 19), which is also the [default value in KFAC's JAX implementation](https://github.com/google-deepmind/kfac-jax/blob/755f647d85252423f7dca6a14cf101735b5c46d8/kfac_jax/_src/optimizer.py#L298-L299). > Is it correct that for linear PDEs, the approximation boils down to simply averaging over the additional forward passes corresponding to the additional derivatives on top of the average over the data points? So in comparison to the standard KFAC, you only have one additional average for these derivatives? Yes, this is correct and very concise. We have added a clarifying sentence using your suggested phrasing. > For the non-linear PDEs, you had to make an additional approximation in C26 - C27 of the supplementary material. [...] Could you expand on this? Yes, the GN-preconditioner corresponds to linearizing the PDE. The approximations (C26)-(C27) are analogous to the linear case and require no further approximations beyond it. The interpretation of $\Psi$ as a nonlinear output layer is correct. As $\Psi$ is not part of the convex loss, we have to linearize it for the GN-preconditioner. We group the Jacobian of $\Psi$ with the Hessian of the sample loss as it does not depend on the trainable parameters.
It can also be grouped with the Jacobians $J_{n, \alpha}^{(l)}$; however, this results in the same approximation. We hope that this clarifies the question, but please do not hesitate to let us know whether we should elaborate on this any further. > It seems very odd that KFAC performs better if the random mini-batches are not sampled anew at every step. Have you noticed any such thing for KFAC on standard problems (not PINN losses)? Are the matrices in KFAC inverted every iteration or also just every T-th iteration? We are not aware of any findings in the literature that have observed something like this. To make our comparison fair, we treated the number of iterations a mini-batch is reused as a hyper-parameter on which we performed Bayesian optimization. We found that for all optimizers it was favorable to keep a mini-batch for a significant number of iterations. Thus, we regard this finding as something specific to the PINN problem rather than to KFAC. Our methods update the Kronecker matrices and inverses at every step. ___ Thanks once more for your constructive comments. We hope our responses addressed them to your satisfaction. We remain attentive to your feedback! --- Rebuttal Comment 1.1: Title: Additional experiment with error bars Comment: Dear Reviewer aiiA, We are happy to inform you that we can report the results of an additional experiment testing the optimizers for a variety of initializations. For this, we took the optimized hyper-parameters and ran the different optimizers for 10 different initializations and mini-batches for the 4+1d heat equation for the MLP with 10 065 parameters (Figure 2, middle plot). Unfortunately, we are not allowed to share figures during the discussion period. The table below summarizes the variability of final performance. We will add the corresponding figure to the appendix. We find that all optimizers except LBFGS perform stably when using different data/model initializations.
When taking the average performance, the same trend as shown in the original submission is visible, where KFAC* outperforms the competing optimizers. Please let us know in case you have any further questions.

| Optimizer | Relative L2 error |
|-----------|--------------------|
| SGD | $(6.77\pm2.36)\cdot 10^{-3}$ |
| Adam | $(3.10\pm2.45)\cdot 10^{-3}$ |
| Hessian-free | $(1.97\pm0.47)\cdot 10^{-5}$ |
| ENGD (full) | $(4.99\pm9.25)\cdot 10^{-2}$ |
| ENGD (layer-wise) | $(1.68\pm0.26)\cdot 10^{-4}$ |
| KFAC | $(6.50\pm0.84)\cdot 10^{-5}$ |
| KFAC* | $(1.46\pm0.48)\cdot 10^{-5}$ |

Let us know if you would like to see similar results for other sub-experiments. --- Rebuttal 2: Title: Answer to Rebuttal Comment: Thank you for your response and for answering my questions. > We respectfully disagree. We report relative L2 error rather than training loss [...] Thank you for the clarification, this was a misunderstanding on my side. This is indeed mentioned in Sec. 4. > It is true that our plots show the performance of a single run. However, this run was obtained through extensive hyper-parameter tuning [...] I was about to suggest running 10 different seeds and data splits, which is what you have reported in the comment below. > High EMA factors correspond to slowly forgetting pre-conditioners [...] Thanks, this makes sense, of course, it was a misunderstanding on my side. > We found that for all optimizers it was favorable to keep a mini-batch for a significant number of iterations. This is quite interesting and weird at the same time. If you find the time, an ablation for this would be quite interesting.
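As a numerical footnote to the EMA discussion in this thread: with the standard convention (assumed here) that the factor `rho` weights the running estimate, a high value such as 0.988 retains history rather than looking only at the most recent batch:

```python
import numpy as np

def ema_update(running, batch_estimate, rho):
    # Standard exponential moving average of a curvature factor:
    # high rho means previous estimates are *slowly* forgotten.
    return rho * running + (1.0 - rho) * batch_estimate

rng = np.random.default_rng(0)
A = np.eye(3)  # running Kronecker-factor estimate (toy 3x3 example)
for _ in range(100):
    G = rng.standard_normal((3, 5))
    A = ema_update(A, G @ G.T / 5.0, rho=0.988)

# After 100 steps the initial estimate still carries weight
# 0.988**100, roughly 0.3 -- far more than "only the most recent batch".
```

The toy dimensions and number of steps are arbitrary; the point is only the geometric decay of old information.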
Summary: This paper generalizes the K-FAC method (which is well-known in optimization for deep learning) to enable it to train PINNs. The key idea is to combine Taylor-mode autodifferentiation with recent work on K-FAC with weight-sharing layers. The proposed method is evaluated on several PDEs, where it outperforms first-order methods like SGD and Adam and quasi-Newton methods like L-BFGS, while performing comparably to (matrix-free) ENGD. Strengths: * The paper makes a novel contribution by extending K-FAC to PINNs, which had not been previously done in the literature. * The paper does a good job of showing how the Kronecker-factored preconditioner can be extended to the residual portion of the PINN loss. I really appreciated the example derivations in sections 3.2 and 3.3! * The paper empirically evaluates the proposed methods against other second-order/quasi-Newton methods used in training PINNs, such as ENGD, matrix-free ENGD, and L-BFGS. The comparison appears to be fair, since all methods receive equal amounts of hyperparameter tuning. Weaknesses: * The paper is missing more recent work on difficulties of training PINNs, such as [1, 2]. * K-FAC/K-FAC* do not reach the lowest error obtained by running ENGD with $D = 449$ (Figure 2). Why is this the case? * I don’t think the PDE settings tested in this paper are particularly challenging. For example, the 2d Poisson equation and (4+1)d heat equation have solutions that have relatively low frequencies, but PINNs are known to struggle more with learning solutions containing high frequencies [3, 4]. How would K-FAC/K-FAC* fare in more challenging settings [5, 6]? * The approach for computing the Kronecker-factored approximation uses “a larger net with shared weights”. Could this cause out-of-memory issues with larger networks? How much additional memory is required? * I think notation is being reused too much in equations (5) and (6).
For example, I don’t think it’s a good idea to have $z^{(l)} = W^{(l)} z^{(l - 1)}$ and $z^{(l)} = \sigma(z^{(l - 1)})$. * I’d recommend defining the symbol $\odot$ as the Hadamard product before it appears in equation (6). * Equation below line 168: $N_\Omega$ in the sum should be $N$. * I believe the outer product decomposition in line 200 is missing a transpose on the second $l_{n, m}$ in the summation. * There is no per-step computational complexity provided for the proposed methods. * No open-source implementation is provided for the proposed methods. Minor comments: * The sentence containing equation (3) is a bit confusing. Saying “$u_n = u_\theta(x_n)$, and it coincides with the classical Gauss-Newton matrix” makes it sound like $u_n$ itself is the classical Gauss-Newton matrix (although I can see that this is not what the authors meant). * For consistency with existing literature, I would recommend writing “KFAC” as “K-FAC” and “LBFGS” as “L-BFGS”. [1] “On the Role of Fixed Points of Dynamical Systems in Training Physics-Informed Neural Networks.” Rohrhofer et al. (2023) [2] “Challenges in Training PINNs: A Loss Landscape Perspective.” Rathore et al. (2024) [3] “On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks.” Wang et al. (2021) [4] “When and why PINNs fail to train: A neural tangent kernel perspective.” Wang et al. (2022) [5] “PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs.” Hao et al. (2023) [6] “PDEBENCH: An Extensive Benchmark for Scientific Machine Learning.” Takamoto et al. (2022) Technical Quality: 3 Clarity: 2 Questions for Authors: * Line 64: Could the authors please elaborate on why the preconditioner not having a Laplacian term is beneficial? * Line 131: Could the authors please elaborate on why the interior Gramian cannot be approximated with the existing K-FAC? 
Is this due to computational and/or autodiff limitations? * Is using a general loss $\ell$ in equation (11) necessary? For PINNs, $\ell$ is almost always the least-squares loss, but I could also understand if the authors wanted to show that their method works in a more general setting. * Does the proposed approach generalize to inverse problems involving PINNs? If it does generalize, how does it affect the calculation of the Kronecker-factored approximation? * Are there operators besides the Laplacian for which the weight sharing can be reduced? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Please see "Weaknesses" Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer YHUU, we would like to thank you for the time and effort you put into reviewing our work. In the following, we address some of the points that you rightfully raised. > How much additional memory is required [for our KFAC]? > There is no per-step computational complexity provided In our general response, we compare the memory requirements and computation time of the proposed scheme. We find that the viewpoint of 'a larger net with shared weights', i.e., the forward Laplacian graph, saves both memory and compute time when compared to the standard approach relying on nested backpropagation, as implemented in `functorch`. If there are any further questions, please do not hesitate to contact us. > No open-source implementation is provided We will release the code after acceptance with the camera-ready version. > K-FAC/K-FAC* do not reach the lowest error obtained by running ENGD [...]. Why is this the case? As KFAC is designed as an approximation of ENGD, we do not expect KFAC to provide results superior to ENGD. Note, however, that ENGD requires the solution of a system of linear equations in the parameter space of the network. Hence, while we can use it as a baseline for a small network, it does not scale to larger nets. Our updated discussion of Figure 2 elaborates on this. > How would K-FAC/K-FAC* fare in more challenging settings [5, 6]? We believe that PINNs should be employed for high-dimensional PDE problems where classical methods are intractable and are currently working on the log-Fokker-Planck equation, which is a challenging nonlinear PDE. We aim to present first results in the discussion period and refined results for a camera-ready version. > The paper is missing more recent work on difficulties of training PINNs, such as [1, 2] Thank you for the pointers to these references, which we have included in our introduction.
**Notation/typos** > I think notation is being reused too much in equations (5) and (6) Thank you for this feedback. We are currently looking into ways to streamline and improve our notation for the camera-ready version. > I’d recommend defining the symbol $\odot$ as the Hadamard product before Thanks for the pointer, we have added the definition. > Equation below line 168 Thanks for spotting this, indeed, we have been slightly inconsistent in carrying the subscript $N_\Omega$ for the number of interior integration points. We have adjusted this and have made our notation consistent. > the outer product decomposition in line 200 is missing a transpose Thanks for catching this typo, which we have corrected. **Minor comments:** > The sentence containing equation (3) is a bit confusing. Thanks for the feedback. To our knowledge, the matrix $(J_\theta u)^\top (J_\theta u)$ is usually called the Gauss-Newton matrix, where $J_\theta u$ is the Jacobian of $u$ with respect to the model parameters. This is exactly the form of the matrix defined in (3) if $u = (u_1, \dots, u_n)^\top$. Please let us know whether you have any further questions and we are willing to adjust our wording to reduce ambiguities. > I would recommend writing “K-FAC” and “L-BFGS”. Thank you for the feedback, of course we are willing to follow the common nomenclature. **Questions** > Line 64: Could the authors please elaborate on why the preconditioner not having a Laplacian term is beneficial? We are deeply convinced that when optimizing a PINN, preconditioners with a PDE term, e.g., a Laplacian, are much more meaningful than preconditioners without a PDE term. In line 64 we mention a line of works that use KFAC for PINNs, but only to approximate a preconditioner that does not include PDE terms. Hence, we regard these approaches as not satisfactory, which motivated us to develop a KFAC method for preconditioners including PDE terms. We have slightly changed our wording in line 64 to improve clarity.
> Line 131: Could the authors please elaborate on why the interior Gramian cannot be approximated with the existing K-FAC? Is this due to computational and/or autodiff limitations? K-FAC has only been proposed for settings where the loss, and therefore the pre-conditioner, only incorporates function evaluations. This excludes PDE terms in the pre-conditioner, which appear naturally when incorporating second-order information in PINNs. Hence, the reason existing KFAC approaches can't be used is not a computational limitation but quite simply the lack of a KFAC for problems including PDE terms. > Is using a general loss in equation (11) necessary? We decided to work with a general loss as losses other than the least-squares loss appear in practice. This is for example the case for variational approaches, including the [*deep Ritz method*](https://link.springer.com/article/10.1007/s40304-018-0127-z) and [*consistent PINNs*](https://arxiv.org/pdf/2406.09217), which use Lp-norms. Of course, there is a trade-off between generality and accessibility. We decided to provide the general version as we already provide a specific didactic example with the Poisson equation and as we found the general form to be not much more complex. If you strongly disagree, we can also offer a simplified version in the main body and defer the fully general case to the appendix. > Does the proposed approach generalize to inverse problems involving PINNs? We have not yet considered inverse problems. In general, we see no conceptual problem; however, one needs to take into account two (or more) neural networks when dealing with inverse problems. We leave this for future work. > Are there operators besides the Laplacian for which the weight sharing can be reduced? A similar reduction can be obtained for linear scalar-valued PDE operators, meaning whenever the PDE under consideration is a linear function of the partial derivatives of some order.
___ Thanks once more for your comments, which have helped us improve our manuscript, and we hope our responses could address your comments to your satisfaction. We remain attentive to your feedback! --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thank you for addressing most of my comments. I would have liked to see the results on the log-Fokker-Planck equation, but it is ok if this is not available in time (I understand the discussion period is rather short). I will raise my score accordingly. Minor question (does not affect my score): why would solving an inverse problem with this approach require two (or more) neural networks? I thought the standard formulation for the PINN inverse problem uses a single neural network with the inverse coefficients as inputs. --- Reply to Comment 1.1.1: Comment: Thank you for your question. Assume we are given an inverse problem (let's say a source recovery problem for simplicity) where we are looking to find both the function $u$ and the right-hand side/source $f$ given some observations of $u$ and a PDE that $u$ should satisfy. In this case, the loss function contains a term for the PDE residual and boundary values, and additionally a data fidelity term that incorporates the observations of $u$. A canonical approach would be to parametrize both $u$ and $f$ by neural networks, which explains the two neural networks in our answer above.
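As an aside on the Gauss-Newton matrix discussed in this thread: for a toy residual model (hypothetical, not the paper's PINN setup), the matrix $(J_\theta u)^\top (J_\theta u)$ can be formed explicitly, here with a finite-difference Jacobian standing in for autodiff:

```python
import numpy as np

def jacobian_fd(u, theta, eps=1e-6):
    # Finite-difference Jacobian of the residual vector u(theta) w.r.t. the
    # parameters theta (illustrative only; the paper uses automatic differentiation).
    theta = np.asarray(theta, dtype=float)
    u0 = u(theta)
    J = np.zeros((u0.size, theta.size))
    for j in range(theta.size):
        e = np.zeros_like(theta)
        e[j] = eps
        J[:, j] = (u(theta + e) - u0) / eps
    return J

# Toy model: 4 "collocation points", 3 parameters, u linear in theta.
x = np.linspace(0.0, 1.0, 4)
u = lambda th: th[0] + th[1] * x + th[2] * x ** 2
theta = np.array([0.5, -1.0, 2.0])

J = jacobian_fd(u, theta)
G = J.T @ J / x.size  # Gauss-Newton matrix, averaged over the points
```

By construction `G` is symmetric positive semi-definite, which is what makes Gauss-Newton matrices attractive as curvature proxies for preconditioning.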
Summary: This work considers the problem of optimising partial differential equations (PDEs) with neural networks, in particular second-order optimization. Even if simple models (multi-layer perceptrons) are used, for which KFAC approximations of the curvature matrix are well-known, the work needs to derive new approximations because the loss function used for solving PDEs is different from that in previous work (square loss or cross entropy); in particular, it includes derivatives of the model with respect to the input, which is interpreted as a larger model with weight sharing. Therefore, it uses the previously (but recently) developed technique of KFAC with weight sharing. Experiments show that the new method is successful when applied to a few example PDEs. Strengths: The paper is clearly written and seems correct, both the theory and experiments. It seems to be the first application of KFAC to PDEs and has the potential of a high impact. Weaknesses: Even if experiments with a relatively large number of parameters are shown (10^5), the paper does not provide the complexity of the algorithm in terms of relevant quantities, for example the input dimension, output dimension, batch size, number of parameters, etc. For example, does the method scale poorly with the input dimension? I would expect quadratic scaling with the input dimension. I have not read the forward Laplacian paper, but I have the feeling that it improves scaling by only a constant factor. https://arxiv.org/abs/1206.6464 argued that there may not exist an algorithm for computing the diagonal of the Hessian that is linear in the dimension using automatic differentiation. Also, as with other KFAC approximations, it remains unclear how general the approach is, e.g. how to apply it to higher-order PDEs or other neural network models.
Technical Quality: 3 Clarity: 3 Questions for Authors: NA Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer TQnM, we would like to thank you for the time and effort you put into reviewing our work. In the following, we want to address some of the points that you rightfully raised. --- > paper does not provide the complexity of the algorithm > does the method scale poorly with the input dimension? I would expect quadratic scaling with input dimension. You are right that we currently do not discuss this; we address this in detail in our global rebuttal and will add a discussion to the main text. The quick summary is that empirically we observe linear run time and memory scaling in the input dimension for computing the Laplacian either via `functorch`'s nested first-order autodiff or the forward Laplacian framework. This linear scaling is intuitive from the forward Laplacian perspective, which performs `input_dimension + 2` forward passes of the net (assuming the network's forward pass is independent of the input dimension, which is a good approximation for the MLP we used, but in general will depend on the architecture). In Landau notation, the scaling with the input dimension will depend on the choice of architecture. Moreover, we observe some practical advantages of the forward Laplacian over the nested first-order autodiff approach: - The constants in the scaling of the forward Laplacian are smaller than those in the `functorch` implementation. The forward Laplacian is roughly 2x faster. We think this is because its computation graph is less sequential. Also, we observe that the forward Laplacian uses roughly half the memory. These constants matter in practice. - Backpropagation on the forward Laplacian graph (which we need to compute the gradient of the loss function) allows us to populate the K-FAC matrices without much re-computation. 
Overall, it seems that the forward Laplacian is the current state-of-the-art method to compute input Laplacians, comes with advantages over nested first-order autodiff, and integrates nicely with our proposed K-FAC approximation. --- > it remains unclear how general the approach is, e.g. how to apply it to higher order PDEs or other neural network models. Our approach is general: - The proposed K-FAC approximation generalizes to arbitrary PDEs, both in the PINN formulation (i.e. least squares of the strong form) and for variational formulations (deep Ritz). We discuss this in Section 3.3, see also equation (14). - The Taylor-mode AD needed to assemble the K-FAC matrices works for general neural network architectures. For instance, JAX offers a general purpose Taylor-mode implementation ([`jax.experimental.jet`](https://jax.readthedocs.io/en/latest/jax.experimental.jet.html)) on which our proposed K-FAC framework could be built. - Like traditional KFAC, our KFAC approximation is architecture-dependent. However, thanks to the recent formulation of KFAC for linear layers with weight sharing [2], a wide range of architectures is covered, e.g. fully-connected/attention/convolution layers. --- Thanks again for helping us to improve our manuscript! We hope our responses address your comments. Please let us know if you have follow-up questions. We would be happy to discuss. ## References [2] Eschenhagen, R., Immer, A., Turner, R. E., Schneider, F., & Hennig, P. (NeurIPS 2023). Kronecker-factored approximate curvature for modern neural network architectures. --- Rebuttal Comment 1.1: Comment: Thank you for the response. How is the forward pass independent of the input dimension? In an MLP, you must compute a matrix-vector product, where the input vector has size d, and the matrix has size d' x d (hidden layer size d'); this has complexity O(d'd), linear in d. Maybe d is not your bottleneck now, but if you increase it at some point you should suffer from it. 
I'm asking also because I have been working on score matching, which requires computing the trace of the Hessian of the log-likelihood, a different setting but the same computational problem (see for example https://arxiv.org/abs/1905.07088). That is impossible to scale up when d is in the range of millions or higher. --- Rebuttal 2: Comment: Thanks for your follow-up question! You are of course right that the MLP's cost depends linearly on $d_\Omega$ due to the first layer's matrix multiply ($d_\Omega \to 768$). For the values of $d_\Omega$ we experimented with, the evaluation cost is dominated by the subsequent matrix multiplies (e.g. $d_\Omega \le 100$ but the second layer is $768 \to 768$). Therefore we empirically observe linear scaling in this regime, i.e. 'approximately' no dependency of the MLP w.r.t. $d_\Omega$ and linear scaling from the number of forward passes for the Laplacian. You are completely right though that if we let $d_\Omega \to \infty$, e.g. for score matching with $d_\Omega \sim 10^6$, the empirical behavior would eventually become quadratic on this model. However, for the PDE applications we target in this paper, we think that practically relevant values are $d_\Omega \sim 10^2$ (at most $10^3$). Therefore, we believe the empirical linear scaling we measured in this regime is representative of our method's scaling on PINN problems. We will add a sentence to clarify this limitation for extremely high dimensions of $d_\Omega$, which are (1) out of scope for PINNs and (2) a regime in which the autodiff-based approach suffers from the same limitation. Please let us know if you have any follow-up questions.
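To make the scaling discussion above concrete, here is a minimal NumPy sketch of the forward Laplacian idea for a tanh MLP: the value, input Jacobian, and input Laplacian are propagated jointly in a single pass, so each extra input dimension adds one column to the Jacobian, consistent with the `input_dimension + 2` forward passes mentioned in the rebuttal. This is a hedged illustration of the general technique, not the authors' implementation; the layer sizes and the finite-difference check are our own assumptions.

```python
import numpy as np

def forward_laplacian(weights, biases, x):
    """Jointly propagate (value, input-Jacobian, input-Laplacian) through
    a tanh MLP. The extra cost over one forward pass grows linearly with
    the input dimension d via the Jacobian's d columns."""
    v = x                     # layer value, shape (h_l,)
    J = np.eye(len(x))        # Jacobian of v w.r.t. x, shape (h_l, d)
    L = np.zeros(len(x))      # Laplacian of each entry of v, shape (h_l,)
    for i, (W, b) in enumerate(zip(weights, biases)):
        v, J, L = W @ v + b, W @ J, W @ L            # linear layer is exact
        if i < len(weights) - 1:                     # tanh on hidden layers
            t = np.tanh(v)
            d1 = 1.0 - t * t                         # tanh'
            d2 = -2.0 * t * d1                       # tanh''
            L = d1 * L + d2 * np.sum(J * J, axis=1)  # chain rule for Laplacian
            J = d1[:, None] * J
            v = t
    return v, J, L

# Sanity check against a central finite-difference Laplacian (toy sizes).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(5, 3)) * 0.5, rng.normal(size=(1, 5)) * 0.5]
biases = [rng.normal(size=5) * 0.1, rng.normal(size=1) * 0.1]
x = rng.normal(size=3)
f = lambda p: forward_laplacian(weights, biases, p)[0][0]
h = 1e-4
fd = sum((f(x + h * e) - 2.0 * f(x) + f(x - h * e)) / h**2 for e in np.eye(3))
assert abs(forward_laplacian(weights, biases, x)[2][0] - fd) < 1e-4
```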
Rebuttal 1: Rebuttal: We want to thank all reviewers for their thorough evaluation of our submission and provide extensive answers to the individual points raised by the reviewers below. Here, we want to elaborate on our method's per-iteration complexity, both theoretically and empirically, as this concern was raised by multiple reviewers. We will add the following discussion to the main text. On top of evaluating the loss and computing the gradient, there are two parts of KFAC that require additional resources: the assembly and inversion of the Kronecker factors: 1. **Inversion of Kronecker factors:** Inverting layer $l$'s Kronecker approximation of the Gramian requires $O(h_l^3+h_{l+1}^3)$ time and $O(h_l^2+h_{l+1}^2)$ storage, where $h_l$ is the number of neurons in the $l$-th layer. Note that inverting the exact block for layer $l$ (as done by 'ENGD (layer-wise)') requires $O(h_l^3h_{l+1}^3)$ time and $O(h_l^2 h_{l+1}^2)$ memory. In general, the improvement from the Kronecker factorization depends on how close to square the weight matrices of a layer are, and therefore on the architecture. In practice, the Kronecker factorization usually significantly reduces memory and run time. Further improvements can be achieved by using structured Kronecker factors, e.g. diagonal or block-diagonal matrices as proposed in [1], which are beyond the scope of our paper. 2. **Assembly of Kronecker factors:** To compute the Kronecker factors, we use the layer inputs from evaluating the loss (for free), and need to backpropagate through its graph. The loss is computed with the forward Laplacian framework, which is the state-of-the-art method to compute input Laplacians of neural networks. The additional backpropagation is similar to the one performed by SGD, with the only difference that we differentiate w.r.t. layer outputs rather than parameters. 
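The savings in point 1 rest on the Kronecker inverse identity $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$: only the two small factors need inverting, never the full block. A minimal NumPy sketch with toy sizes (the matrices are our own stand-ins for a layer's Kronecker factors, not actual Gramian blocks):

```python
import numpy as np

rng = np.random.default_rng(0)
h_in, h_out = 4, 3  # toy layer sizes; real layers are e.g. 768 x 768

# Hypothetical SPD matrices standing in for the two Kronecker factors.
A = rng.normal(size=(h_in, h_in))
A = A @ A.T + h_in * np.eye(h_in)
B = rng.normal(size=(h_out, h_out))
B = B @ B.T + h_out * np.eye(h_out)

# Inverting the full (h_in*h_out) x (h_in*h_out) block directly:
# O(h_in^3 h_out^3) time and O(h_in^2 h_out^2) memory.
full_inv = np.linalg.inv(np.kron(A, B))

# Exploiting (A kron B)^{-1} = A^{-1} kron B^{-1}: only the two small
# factors are inverted, O(h_in^3 + h_out^3) time.
kfac_inv = np.kron(np.linalg.inv(A), np.linalg.inv(B))

assert np.allclose(full_inv, kfac_inv)
```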
We want to stress that evaluating the Laplacian and parameter derivatives is essential for *all* gradient-based PINN optimizers, not only for KFAC. To provide concrete insights into the computation time and memory of this routine, we ran an additional experiment and compared the forward Laplacian we use with the alternative `functorch` implementation. **We find that the forward Laplacian is roughly 2x faster and uses roughly half the memory compared to PyTorch's automatic differentiation. Further, the computation time for both methods scales linearly in the input dimension in our experiments.** We provide a detailed description of our findings below and in the attached PDF and will add it to the updated version. **Conclusion** Thanks again for your helpful comments and questions. We are convinced that they have helped us to substantially strengthen the manuscript and hope we addressed them to your satisfaction. We remain attentive to your feedback! ## Experimental details Due to the limited length of the discussion period, we present a short experiment. We will add a more detailed comparison to the manuscript. **Setup:** We consider two Laplacian implementations: 1. **Autodiff Laplacian:** Computes the Laplacian with PyTorch's automatic differentiation (`functorch`) by computing the batched Hessian trace (via `torch.func.hessian` and `torch.func.vmap`). This is the standard approach in many PINN implementations. 2. **Forward Laplacian:** Computes the Laplacian via the forward Laplacian framework. We used this approach for all PDEs and optimizers presented in the experiments. We use the biggest network from our experiments (the $D_\Omega \to 768 \to 768 \to 512 \to 512 \to 1$ MLP with tanh-activations from Fig. 3 right), then measure run time and peak memory consumption of computing the net's Laplacian on a mini-batch of size $N=1024$ with varying values of $D_\Omega$. 
To reduce measurement noise, we repeat each run over five independent Python sessions and report the smallest value. We use the same GPU as all other experiments, i.e. an NVIDIA RTX 6000 with 24 GiB memory. **Results:** The following tables compare run time and peak memory between the two approaches: | $D_\Omega$ | Autodiff Laplacian [s] | Forward Laplacian [s] | |-----------|--------------------|-------------------| | 1 | 0.051 (1.6x) | 0.033 (1.0x) | | 10 | 0.20 (2.0x) | 0.10 (1.0x) | | 100 | 1.7 (2.0x) | 0.84 (1.0x) | We observe that the forward Laplacian is roughly twice as fast as the `functorch` Laplacian. | $D_\Omega$ | Autodiff Laplacian [GiB] | Forward Laplacian [GiB] | |-----------|--------------------|-------------------| | 1 | 0.21 (0.96x) | 0.22 (1.0x) | | 10 | 0.98 (1.6x) | 0.61 (1.0x) | | 100 | 8.8 (1.9x) | 4.6 (1.0x) | We observe that the forward Laplacian uses significantly less memory for large input dimensions, up to only one half when $D_\Omega = 100$. We visualized both tables using more values for $D_\Omega$ and observed linear scaling in both memory and run time; see the attached PDF. **Other scalings:** - Both the forward Laplacian and nested backpropagation scale linearly w.r.t. batch size. - Scaling w.r.t. output dimension (as asked by TQnM) is less relevant as many high-dimensional PDE problems are scalar-valued. - Scaling w.r.t. number of trainable parameters is highly architecture-dependent. This is also true for simple backpropagation in standard deep learning applications and we believe this question needs to be answered on a case-by-case basis. ## References [1] Lin, W., Dangel, F., Eschenhagen, R., Neklyudov, K., Kristiadi, A., Turner, R. E., & Makhzani, A. (ICML 2024). Structured inverse-free natural gradient descent: memory-efficient & numerically-stable KFAC. Pdf: /pdf/782705ec114066ffa9448a8e2a66db4037109a94.pdf
Dataset source: NeurIPS_2024_submissions_huggingface
Conference year: 2024
Sequential Harmful Shift Detection Without Labels
Accept (poster)
Summary: The authors introduce an approach for detecting distribution shifts that negatively impact the performance of machine learning models in continuous production environments, which requires no access to ground truth data labels. Their solution substitutes true errors with the predictions of a learnt error estimator. Strengths: This work is the first to propose a principled method for detecting harmful distribution shifts without requiring true labels. Weaknesses: It is worth noting that while I am familiar with distribution shifts, my knowledge of continuous production environments is limited. I will be evaluating this paper from the perspective of an outsider in this field, and will rely more on the feedback of other reviewers. ************************************** I encourage the AC to bring in reviewers who are familiar with this field. I encourage the AC to give less weight to my review comments. ************************************** Weaknesses: 1) I suggest that the authors provide a section on related work concerning distribution shifts and continuous production environments, to help readers clearly understand the motivation behind this paper. To my knowledge, the Out-of-Distribution (OOD) field’s research on Dataset Shift [1-4] also discusses distribution shifts; please include a discussion on this. The introduction does not clearly introduce the research questions of this paper, especially considering that this paper has not yet reached the length limit. 2) My major concern about this paper is whether “requires no access to ground truth data labels” is a worthwhile background setting to study, especially since Podkopaev and Ramdas [2022] have already satisfactorily addressed more common scenarios, and considering that the method proposed by the authors is relatively straightforward: training an error estimation model to predict the performance of the primary model on production data. [1] Puli et al., Don’t Blame Dataset Shift! 
Shortcut Learning Due to Gradients and Cross Entropy, in NeurIPS 2023. [2] Huang et al., Harnessing Out-of-Distribution Examples via Augmenting Content and Style, in ICLR 2022. [3] Silva et al., The Value of Out-of-Distribution Data, in ICML 2023. [4] Yang et al., Not All Out-of-Distribution Data Are Harmful to Open-Set Active Learning, in NeurIPS 2023. Technical Quality: 2 Clarity: 2 Questions for Authors: N/A Confidence: 1 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper. We appreciate the suggestion to include a dedicated section on related work concerning distribution shifts and continuous production environments. Regarding the question of whether "requires no access to ground truth data labels" is a worthwhile background setting to study, we believe it is highly relevant. In many real-world environments, ground truth labels are either unavailable, delayed, or expensive to obtain. Examples include: * **Healthcare:** (Razavian et al., 2015, Population-level prediction of type 2 diabetes from claims data and analysis of risk factors) * **Insurance:** (Boodhun et al., 2018, Risk prediction in the life insurance industry using supervised learning algorithms) * **Predicting Future Outcomes:** (Zhang et al., 2019, Time-aware adversarial networks for adapting disease progression modeling) * **High Collection Costs:** (Liu et al., 2022, Deep unsupervised domain adaptation: A review of recent advances and perspectives) Regarding the remarks about our method, it is not solely about learning an error estimator. The cornerstone of our method is the calibration step that leverages even an imperfect estimator to achieve performance comparable to methods that have access to ground truth labels. This is particularly useful since it is often unrealistic to expect a highly accurate error estimator in many practical scenarios. --- Rebuttal Comment 1.1: Comment: I have gone through the responses. **I encourage AC to completely ignore my comments and score.** Good luck :)
Summary: The paper introduces a sequential drift detector designed to identify drifts that may negatively impact model performance without requiring target labels in production. This is achieved by training an error proxy model on a calibration dataset, which can then be applied in a production scenario, thereby eliminating the need for target labels. While the practical benefits of this method are evident, its performance is initially worse than the reference method that uses labels. This issue is addressed by proposing a modification to the scoring metric, making the performance of both methods comparable. Although the method that uses labels detects drifts earlier, the label-less method now demonstrates similar performance. The methods are tested and compared on multiple synthetic and real-world datasets. Strengths: ### Originality - **Novel Approach:** The introduction of a sequential drift detector that does not require target labels in production is a welcome addition to the field of drift detection. - **Error Proxy Model:** The use of an error proxy model trained on a calibration dataset to identify drifts is a powerful contribution that addresses the challenge of label scarcity in production environments. This approach also opens up the possibility to use specialized error proxy models in the future to extend the approach. ### Quality - **Thorough Evaluation:** The methods are rigorously tested and compared on multiple synthetic and real-world datasets, providing a comprehensive evaluation of the proposed approach. ### Clarity - **Clear Explanation:** The paper clearly explains the methodology, including the training of the error proxy model and its application in production scenarios. - **Well designed mathematical Notation**: The provided notation is intuitive and provides detailed introspection into the newly proposed parts of the method. 
### Significance - **Practical Relevance:** The method's ability to function without target labels in production is highly relevant for real-world applications where obtaining labeled data is often challenging. - **Comparable Performance:** By achieving performance comparable to traditional methods that use labels, the proposed method offers a valuable alternative for drift detection in label-scarce environments. Weaknesses: ### Line 17-19: - Domain knowledge regarding performance-based drift detection methods is not mentioned. - The claims regarding the properties of “traditional” drift detection methods are unfounded and can be seen as misinformation. - For further references the survey "Learning under Concept Drift: A Review” of Jie Lu, Fellow, IEEE, Anjin Liu, Member, IEEE, Fan Dong, Feng Gu, João Gama, and Guangquan Zhang provides a structured overview of different detection mechanisms and differences. - In addition, the use of a reference window is presented as a weakness, but the proposed method also uses a calibration set, which can be seen as a reference window. ### Line 134: - Phrases like “If sufficient”, “Should align”, “would achieve” involve a lot of assumptions. - If the error estimator is a statistical model, it would open this expression up to further introspection. - While the statements proposed by the work seem solid, sections where these subjective claims are mentioned (e.g., here and the beginning of section 2) are very loose and are left unanswered. ### Line 193: - The FDP plot has no blue in it; what does this sentence refer to? ### Equations 10 and 11: - These equations confuse more than they clarify. - A new form is introduced just to show how it can be directly inferred from Equations 1 and 2. This, by itself, would be fine, but then the notation is changed again directly. - Why not go from Equations 1 and 2 directly to Assumption 4.1? 
### Experimental Reproducibility: - While the data is well explained, the configuration of the experiments is left up to interpretation. - Access to the scripts used to run the experiments would highly benefit reproducibility. - While the paper checklist mentions that the code will be provided, this is currently not realized, making it impossible to include it in the review. Technical Quality: 3 Clarity: 4 Questions for Authors: ### Line 7: - Is "learnt" the old variation of "learned" or is there a specialized meaning to it? ### Line 46: - Does the predictor for the model error serve as an approximation of the conditional probability to produce a higher error in areas of the feature space with sparse or contradictory behavior? - Would the representation of this not be best captured by a Gaussian Process model? - You do not mention the model you used to produce your results. (Or do you always use the same model for the main prediction and the error prediction, if yes, why?) ### Line 74: - In which domain do X and Y reside? - What are we trying to predict given what kind of information? - The current specification does not allow the assumption of a suitable bounded error function. Specifying this kind of error for regression problems proves to be very difficult and highly dependent on the application context, for example. ### General Question on DDM and PHT: - The DDM (Gama, J., Medas, P., Castillo, G., Rodrigues, P. (2004). Learning with Drift Detection. In: Bazzan, A.L.C., Labidi, S. (eds) Advances in Artificial Intelligence – SBIA 2004. SBIA 2004.) and PHT (Page, E.S.: Continuous inspection schemes. Biometrika 41(1-2), 100–115 (1954). DOI 10.1093/biomet/41.1-2.100. and Gama, J., Sebastião, R. & Rodrigues, P.P. On evaluating stream learning algorithms. Mach Learn 90, 317–346 (2013). 
https://doi.org/10.1007/s10994-012-5320-9) drift detection tests are frequently cited and also track the development of the mean model error over time and restrict its allowed deviation based on sequential analysis. How does this compare to your work, and what are the differences? ### Line 186: - Which type of R² score is used here? What is the reference model? - How is the R² score used in the production scenario? As it is batch-based, its application to sequential data points is not natively defined. - Against your assumption in section 2, the R² score is not bounded! Depending on the formulation, it can have large positive values but be bounded below by 0, or have large negative values but always be smaller than 1. R² is only bounded if it is viewed as a data descriptor that assumes that a linear model is used to describe the data; then it is a measure of the explainable variance. This specification confuses more than it helps. The R² formulation you use should be clearly specified in mathematical notation. ### Equations 12 to 15: - Are you referring to joint probabilities or conditional probabilities? So, S(X)=1 and E>q or S(X)=1 given E>q? ### Line 235: - If the Hoeffding interval is based on the Hoeffding Inequality, does your application fulfill all stated requirements? - With special focus on the R² Score you use as your “bounded” metric, Theorem B.1 seems to have unfulfilled assumptions. ### Section 5.1: - What error metric is used for the stated experiment? Accuracy? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The limitations are mentioned and discussed. The assumptions used for some proofs are, however, not discussed in detail. Further details in Questions and Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
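The reviewer's point that R² is not bounded below can be checked directly from the standard definition $R^2 = 1 - \mathrm{SS}_\mathrm{res}/\mathrm{SS}_\mathrm{tot}$; the toy data below are our own illustration, not from the paper:

```python
import numpy as np

def r2_score(y, y_hat):
    """Classical R-squared: 1 - SS_res / SS_tot (scikit-learn convention)."""
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

y = np.array([0.0, 1.0, 2.0, 3.0])
assert r2_score(y, y) == 1.0                     # perfect fit: R² = 1
assert r2_score(y, np.full(4, y.mean())) == 0.0  # predicting the mean: R² = 0
assert r2_score(y, -y) < -10.0                   # can be arbitrarily negative
```

So R² is bounded above by 1 but has no lower bound, which is why a Hoeffding-type guarantee needs a different, bounded monitoring quantity.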
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper and acknowledging our contributions. > The claims regarding the properties of “traditional” drift detection methods are unfounded and can be seen as misinformation. Regarding the issue raised about traditional shift methods (Lines 17-19), specifically their inability to distinguish harmful from benign shifts and their inflated false alarm rates, our assertions are based on the findings of Podkopaev and Ramdas (2022). Their analysis in Appendix A ("Issues with existing tests for distribution shifts/drifts") of their paper supports our claims. We will cite this appendix directly in the final version of our paper. > Phrases like “If sufficient”, “Should align”, “would achieve” involve a lot of assumptions. Yes, we agree. In Line 134, we were describing the naive approach of using a plug-in error estimator. This served as a prelude to introducing our strategy, which aims to achieve good performance using an imperfect proxy for model performance. > If the error estimator is a statistical model, it would open this expression up to further introspection. We agree. Additionally, as the reviewer highlighted, our approach allows for the use of any proxy, not only error estimation. For instance, in classification problems, we can use calibrated probabilities instead of the prediction of the error estimator as a proxy before our calibration step. We have tested this approach and found that it demonstrated similar performance to the error estimator. > The FDP plot has no blue in it; what does this sentence refer to? The variance of the distribution was so small that it appeared as a line, making the blue colour invisible, unlike in the production data case. We will clarify this in the final version. > Why not go from Equations 1 and 2 directly to Assumption 4.1? We will simplify the presentation by going directly from Equations 1 and 2 to Assumption 4.1. 
> Is "learnt" the old variation of "learned" or is there a specialized meaning to it? Yes, it is the British variant, and they both have the same meaning. > Does the predictor for the model error serve as an approximation of the conditional probability to produce a higher error in areas of the feature space with sparse or contradictory behavior? Would the representation of this not be best captured by a Gaussian Process model? Yes, exactly. It is an implicit way of defining the regions where the model performs poorly. Particularly in the case of tabular data, we have considered tree-based models as they are well-known to work best. In addition, it opens the possibility of examining the leaves of the tree to define hyper-rectangular region. These regions can be represented as rules, providing valuable insights to users about which covariate spaces cause the model to perform poorly. > You do not mention the model you used to produce your results. (Or do you always use the same model for the main prediction and the error prediction, if yes, why?) We use the same class of model and hyper-parameters by default to avoid biasing the results by explicitly fine-tuning the error estimator. However, the objective is to find the best possible error estimator, and users can choose any model that suits their problems. > In which domain do X and Y reside? In our experiments, the domains of X and Y have been $\mathbb{R}^p $ and $\mathbb{R}$, with $p$ being the number of variables. The continuity or discreteness of the data is not important. It can be anything, as long as a bounded score value is attributed to each combination of prediction and true labels. > Which type of R² score is used here? What is the reference model? How is the R² score used in the production scenario, as it is batched based the application on sequential data points is not natively defined. Against your assumption in section 2, the R² score is not bounded! but be always smaller than 1. 
R² is only bounded if it is viewed as a data descriptor that assumes that a linear model is used to describe the data, then it is a measure for the explainable variance. This specification confuses more than it helps. The R² formulation you use should be clearly specified in mathematical notation. We use the classical R² score from the scikit-learn library, defined as $R^2(y, \hat{y}) = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$. However, this is not what we monitor in our experiments. Throughout the paper, we only use the R² score to describe the general performance of the error estimator. In production, the loss we monitor and update sequentially is the absolute residual $\ell(X, Y) = |y - \hat{y}|$. We apologize for the confusion and will make it clearer that the monitored loss function is the absolute distance $\ell(X, Y)$, computed with an output Y normalized between (0, 1). > If the Hoeffding interval is based on the Hoeffding Inequality, does your application fulfill all stated requirements? For both confidence sequences and intervals, the only assumption we need is the boundedness of the loss. In all our experiments, we ensure the score is bounded by using the absolute distance with bounded output. > What error metric is used for the stated experiment? Accuracy? We use the absolute distance between the predicted probability and the label. > General Question on DDM and PHT The main differences between these methods and ours are that they require access to ground truth labels to monitor performance, whereas our method does not. > Are you referring to joint probabilities or conditional probabilities? So, S(X)=1 and E>q or S(X)=1 given E>q? We were referring to the joint probability. > Domain knowledge regarding performance-based drift detection methods is not mentioned. We will include it along with the suggested references. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal! 
I'll briefly outline my remaining questions and concerns: ``` computed with an output Y normalized between (0, 1). ``` **MOST IMPORTANT QUESTION**: Please verify that this does not violate the independence assumptions of Hoeffding's Inequality. Based on my experience, having attempted to use Hoeffding's Inequality in regression for non-stationary environments, I can confirm that Hoeffding's Inequality requires both bounded and INDEPENDENT random variables. By normalizing your data targets, you may introduce interdependencies among them, which could be problematic. ``` The claims regarding the properties of “traditional” drift detection methods ``` I agree with your point, but your reference specifically mentions tests based on distributions. There is an entire class of performance-based drift detectors that do not have these issues but do require target labels. Please clarify this distinction in your discussion. ``` In addition, the use of a reference window is presented as a weakness, but the proposed method also uses a calibration set, which can be seen as a reference window. ``` Unfortunately, this point was not addressed in your rebuttal. ``` You do not mention the model you used to produce your results. ``` While you mentioned that the model can be changed, what specific model did you use for your evaluations? This is important for reproducibility. ``` General Question on DDM and PHT ``` I understand that DDM and PHT require labels, while your approach does not. However, my question was about what happens afterward. After using your proxy model to predict the error, you still need to determine whether there is a drift. This decision-making process seemed quite similar to DDM to me. Could you elaborate on the differences, if any, or clarify if your method was inspired by DDM? --- Rebuttal 2: Comment: Thank you for your response. 
Please find below our answers to the remaining points you raised: > Please verify that this does not violate the independence assumptions of Hoeffding's Inequality. Based on my experience, having attempted to use Hoeffding's Inequality in regression for non-stationary environments, I can confirm that Hoeffding's Inequality requires both bounded and INDEPENDENT random variables. By normalizing your data targets, you may introduce interdependencies among them, which could be problematic. You’re correct that Hoeffding’s Inequality, as well as the inequalities used for confidence sequences, requires both bounded and independent random variables. Regarding the normalization, we are only rescaling the output before constructing or splitting the data into production, calibration, and training sets, using min-max scaling with the same minimum and maximum values. To the best of our knowledge, this rescaling process should not violate the assumption of independence. > I agree with your point, but your reference specifically mentions tests based on distributions. There is an entire class of performance-based drift detectors that do not have these issues but do require target labels. Please clarify this distinction in your discussion. In the second paragraph of the Introduction (L 25-32), where we describe methods that might detect harmful shifts without labels, we will also include techniques that do require labels in the final version of the paper, including the ones you suggested. >In addition, the use of a reference window is presented as a weakness, but the proposed method also uses a calibration set, which can be seen as a reference window. We agree that the calibration set can be seen as a window, but our comment was about inference time. Our method, being completely sequential, does not require defining a window for the production sample size, as it processes the data as it arrives, one by one. 
In Lines 17-20, we will specify that the prespecified sample set refers to the production set. > You do not mention the model you used to produce your results. While you mentioned that the model can be changed, what specific model did you use for your evaluations? This is important for reproducibility. For the image dataset, as stated in Lines 268-272, we used a ResNet-50 as the base model, modifying only the head for error estimation. For the tabular data, we consistently used a Random Forest regressor/classifier with the default parameters from the scikit-learn library (n_estimators=100, max_depth=None, bootstrap=True). We will clarify this further in the paper. > General Question on DDM and PHT. I understand that DDM and PHT require labels, while your approach does not. However, my question was about what happens afterward. After using your proxy model to predict the error, you still need to determine whether there is a drift. This decision-making process seemed quite similar to DDM to me. Could you elaborate on the differences, if any, or clarify if your method was inspired by DDM? Our paper is not inspired by DDM but by the work of Ramdas et al. (2022). While there may be some similarities in the decision-making process, as DDM computes an approximate confidence interval in the reference set (called the concept set) and compares it with a threshold value, there is a substantial difference: DDM is designed for contexts where the model also changes dynamically. Unlike DDM, the method proposed by Ramdas et al. (and by extension, our methods) uses a calibration step and defines the warning threshold based on statistical tools that provide provable false alarm guarantees in finite samples with minimal assumptions, such as independence and bounded loss.
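To make the bounded-and-independent point above concrete, here is a small illustrative sketch (entirely hypothetical data and constants, not the paper's pipeline): min-max scaling with fixed constants is a deterministic element-wise map, so it keeps i.i.d. targets i.i.d. while guaranteeing boundedness in [0, 1], which is exactly what Hoeffding's inequality needs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical i.i.d. targets. y_min and y_max are FIXED constants chosen once,
# and the same affine map is applied to every split, so no dependence is
# introduced between samples.
y = np.clip(rng.normal(loc=10.0, scale=2.0, size=900), 2.0, 18.0)
y_min, y_max = 2.0, 18.0

def min_max(v, lo, hi):
    # Deterministic element-wise rescaling to [0, 1].
    return (v - lo) / (hi - lo)

train, cal, prod = np.split(min_max(y, y_min, y_max), 3)
for split in (train, cal, prod):
    assert split.min() >= 0.0 and split.max() <= 1.0

# Hoeffding's inequality still applies to each split: for n i.i.d. samples
# bounded in [0, 1], P(|mean - E| >= eps) <= 2 exp(-2 n eps^2).
n, eps = len(cal), 0.1
bound = 2 * np.exp(-2 * n * eps**2)
print(round(bound, 6))  # -> 0.004958
```

Note that the scaling constants are fixed in advance; estimating them from the same data the bound is applied to would reintroduce the dependence the reviewer is worried about.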
Summary: This paper proposes a new method for identifying harmful distribution shifts when no labels are available at test time. This work is a good contribution to the field of ML and has practical importance. Strengths: - The paper proposes a theoretically motivated method for the detection of harmful distribution shifts with no labels at test time; - The paper discusses the conditions under which the method controls Type-I errors; Weaknesses: - In my view, there are two main weaknesses in this paper: 1. The error prediction function is a function of the covariates X. That implies that this method only works in detecting harmful shifts that involve shifting the distribution of X (e.g., covariate shifts). If the shifts happen only in P_{Y|X}, instead, the method would have low power. However, this limitation is not discussed by the authors (Please let me know if I have any misunderstanding here). If this is true, the authors should compare their work with [1], because it's not clear how their work is different in nature from that work. If it turns out that both works have a similar nature, the new method would have to be superior in terms of performance; 2. In the experiments, the authors do not compare their method with any other competitors. You could, for example, compare your method with covariate shift detection methods [1,2] (which do not rely on the presence of labels). References: [1] Ginsberg, Tom, Zhongyuan Liang, and Rahul G. Krishnan. "A learning based hypothesis test for harmful covariate shift." arXiv preprint arXiv:2212.02742 (2022). [2] Polo, Felipe Maia, Rafael Izbicki, Evanildo Gomes Lacerda Jr, Juan Pablo Ibieta-Jimenez, and Renato Vicente. "A unified framework for dataset shift diagnostics." Information Sciences 649 (2023): 119612. Technical Quality: 3 Clarity: 4 Questions for Authors: The questions I have pertain to the first point I mentioned in the weaknesses section. Could you clarify that point for me? 
I would be willing to increase my score if I understand how your work builds on the previous research by Ginsberg et al. and recognize a significant difference. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper and for providing valuable references that we were not aware of. We appreciate the opportunity to clarify our contributions in light of these references. First, we note that paper [2] focuses on detecting distribution shifts in general and not specifically harmful shifts, unlike our method and the first suggested paper [1]. Consequently, using [2] as a benchmark in cases where there are harmful/benign shifts would be unfair, since it does not differentiate between harmful and benign shifts and would likely result in a much higher rate of false alarms. Therefore, the first suggested paper is more relevant to our work, as it directly targets harmful shifts. However, we must highlight that comparing our method directly with those in [1] is challenging due to fundamental differences in setup. The methods in [1] are designed for an offline setup, requiring a batch of production data to train their statistics and raise alarms. In contrast, our methodology was designed for an online setup where shifts occur gradually and continuously, necessitating real-time decisions without observing an unlabelled batch of examples first. Our methodology aims to detect harmful performance shifts on the fly as observations are received one by one and does not require access to the full production data to fit the statistic. Additionally, applying offline methods in an online setting would be impractical and unfair, as these methods require learning a model or statistics from a batch of data. It would also be computationally expensive, as [1] requires learning a new model for each batch of production data; we would have to learn a number of models proportional to the size of the production data. Nevertheless, we evaluated our method against the best method from [1] (Detectron) in a batch setting, increasing the size of the production or out-of-distribution (OOD) data.
We generated the shifts similar to the setting described in Section 5.2 and ensured no shift in the first 1300 samples of the production data. We used NHANESI classification data as Detectron is only available for classification tasks. We replicated the experiments 50 times, resulting in a total of 10200 different shifts. Our results, summarized in the table below, show that for smaller sample sizes (100 and 1000), our method did not detect any shifts, which is expected since no shift occurred in the initial 1300 samples, while Detectron raised a significant number of alarms (1126 out of 1700 shifts for size 100 and 1493 for size 1000), all of which were false alarms. For larger sample sizes, while the method of [1] shows high power in detecting shifts, it also exhibits high false alarms. In contrast, our method demonstrates lower power but significantly better FDP control, aligning with our goal of minimizing false alarms. These results validate that our method performs robustly in a batch setting, as well as working effectively in the online setting considered in the paper. | OOD Size | Power Detectron | FDP Detectron | Power SHSD | FDP SHSD | |----------|-----------------|----------------------|------------|----------| | 100 | N/A | 1 | N/A | 0 | | 1000 | N/A | 1 | N/A | 0 | | 2000 | 0.96 | 0.61 | 0.40 | 0.02 | | 3000 | 0.98 | 0.60 | 0.63 | 0.02 | | 3500 | 0.98 | 0.60 | 0.67 | 0.02 | | 8593 | 0.98 | 0.60 | 0.74 | 0.04 | We hope this clarifies the distinctions and the rationale behind our methodological choices. We will also clarify these differences and include additional experiments demonstrating them in the final version of the paper. Thank you again for your insightful feedback. --- Rebuttal Comment 1.1: Comment: Thank you for your reply! I wanted to clarify some points: 1. As far as I understand, in a practical sense, the main difference is that one method is designed for online settings while the other is designed for offline settings. Is that true? 2. 
I did not understand if your method is designed to detect harmful shifts of X. Is that the case? 3. Do you know why the Detectron has such a bad FDP in your experiment? --- Rebuttal 2: Comment: Thank you for your response. > As far as I understand, in a practical sense, the main difference is that one method is designed for online settings while the other is designed for offline settings. Is that true? I did not understand if your method is designed to detect harmful shifts of X. Is that the case? Yes, the major difference is that our method is designed for online use, while the other is for offline. Regarding whether our method is designed to detect harmful "shifts of X" (i.e. covariate shifts), our central assumption is Assumption 4.1, i.e. that the selector function generalizes in terms of FDP from calibration to production. This is more likely to hold in the case of pure covariate shift, where the relationship Y | X is invariant. However, it may still be effective if this relationship changes, provided the model tends to have high error in the same regions of the input space before and after the shift. Note that in our empirical experiments, we study shifts in natural datasets, where the Y | X relationship is very likely to be at least somewhat non-invariant. Our method is shown to be effective in these cases. > Do you know why the Detectron has such a bad FDP in your experiment? Detectron’s main idea is to learn a Disagreement Classifier that performs as well as the original model on the training distribution while disagreeing with the original/base model's predictions on the production data. This method is highly sensitive to the performance of the base classifier, the function class used, and the size and nature of the production data. While it effectively detects harmful shifts (as shown by its high power in our experiments), it may fail when the shift is benign. Below is a simple code example that generates a plot illustrating a failure case.
In this example, the training data points are shown in red and green, with the ground truth shaded in the corresponding colors. The base model is shown as a solid black line. For simplicity, we assume the model is a perfect classifier. The data has shifted to the right, creating unlabeled production data, all correctly classified by the base classifier. We have also depicted the set of potential learnable classifiers as dashed gray lines, representing the boundary of all possible functions, which depends on the model type and complexity used for the disagreement classifier, as well as on the nature and size of the data. We have shown a potential disagreement classifier in orange that performs similarly to the original model on the training data but disagrees with the predictions of the base classifier on the production data. As shown, even with a benign shift, we can still find a disagreement classifier that performs well on training data but disagrees significantly in production, raising a false alarm.

```python
import matplotlib.pyplot as plt
import numpy as np

# Generating sample data
x = np.linspace(-3, 3, 500)
base_classifier = 2 * np.sin(x)
shaded_area = 2 * np.sin(x)
boundary_upper = 5 * np.sin(x)
boundary_lower = 0.5 * np.sin(x)
disagreement_classifier = 3.5 * np.sin(x) * (x <= 0) + 4.5 * np.sin(x) * (x > 0)

np.random.seed(0)
x_class1 = np.random.uniform(-2, -1, 10)
y_class1 = np.random.uniform(-2, 1, 10) + 3
x_class2 = np.random.uniform(-2, -1, 10)
y_class2 = np.random.uniform(-5, -2, 10) - 3
x_class1_shifted = np.random.uniform(-2, -1, 20) + 3
y_class1_shifted = np.random.uniform(-2, 2, 20) + 4.5
x_class2_shifted = np.random.uniform(-2, -1, 10) + 3
y_class2_shifted = np.random.uniform(-5, -2, 10) + 2
x_unlabelled = np.concatenate([x_class1_shifted, x_class2_shifted])
y_unlabelled = np.concatenate([y_class1_shifted, y_class2_shifted])

# Plotting
plt.figure(dpi=200, figsize=(8, 5))
plt.fill_between(x, shaded_area, 12, color='green', alpha=0.1, edgecolor='none', label='Ground Truth (Class 1)')
plt.fill_between(x, -12, shaded_area, color='red', alpha=0.1, edgecolor='none', label='Ground Truth (Class 2)')
plt.scatter(x_class1, y_class1, c='green', marker='+', label='Training Data (Class 1)')
plt.scatter(x_class2, y_class2, c='red', marker='+', label='Training Data (Class 2)')
plt.scatter(x_unlabelled, y_unlabelled, facecolors='none', edgecolors='black', label='Unlabelled Production Data')
plt.plot(x, boundary_upper, 'gray', linestyle='--', label='Set of Potential Learnable Classifiers')
plt.plot(x, boundary_lower, 'gray', linestyle='--')
plt.plot(x, base_classifier, 'k-', label='Base Classifier')
plt.plot(x, disagreement_classifier, 'tab:orange', linestyle='-', label='Disagreement Classifier')
plt.legend(loc='upper left', bbox_to_anchor=(1, 1), fontsize='small', frameon=False)
plt.grid(False)
plt.tight_layout()
plt.axis('off')
plt.show()
```

--- Rebuttal Comment 2.1: Comment: Thank you for the detailed explanation! Please include our discussion in your final version of the paper. I have raised my score. --- Reply to Comment 2.1.1: Comment: Thank you for your valuable feedback and for raising your score. We appreciate your insights and will incorporate our discussion into the final version of the paper.
Summary: This work is an extension of Podkopaev and Ramdas. They propose a framework to detect the harmful distribution shift without accessing the true labels during detection. To do that, the authors introduce an error estimator model to measure the error scores. Besides, the authors propose a strategy to manage false positives by leveraging these imperfect error predictions. Experiments show that the method effectively controls false positives and performs well under various distribution shifts. Strengths: - The label-free solution is essential for practical applications. - Algorithm-agnostic makes the approach more applicable. - The approach does not rely on pre-assumptions about data distribution. - The authors provide detailed mathematical derivations. Weaknesses: - In the experiments section, the authors only compare with a baseline scheme and ignore other existing distribution shift detection algorithms (e.g., [1-2]), although they may not claim to only detect harmful distribution shifts. ### Reference 1. Hinder F, Artelt A, Hammer B. Towards non-parametric drift detection via dynamic adapting window independence drift detection (dawidd)[C]//International Conference on Machine Learning. PMLR, 2020: 4249-4259. 2. Frias-Blanco I, del Campo-Ávila J, Ramos-Jimenez G, et al. Online and non-parametric drift detection methods based on Hoeffding’s bounds[J]. IEEE Transactions on Knowledge and Data Engineering, 2014, 27(3): 810-823. Technical Quality: 3 Clarity: 3 Questions for Authors: - In lines 117-118, could the authors explain how they connect the upper bound of the training dataset to the lower bound of the test dataset to derive this probability inequality? - The proposed method does not rely on distribution assumptions and has very low accuracy requirements for error estimation (In section 4.1). Additionally, there are no hyper-parameters that need to be preset (in section 4.2). However, there is no free lunch. 
Despite these excellent properties, will the method face any other challenges? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The proposed approach is to detect harmful distribution shifts in production environments, which is a very interesting problem setting. However, there are many out-of-distribution detection and robust learning algorithms that can cover or solve the harmful distribution shift problem, or even solve it directly, which makes the proposed method less necessary in practical applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review and acknowledge our work. > In the experiments section, the authors only compare with a baseline scheme and ignore other existing distribution shift detection algorithms (e.g., [1-2]), although they may not claim to only detect harmful distribution shifts. Regarding the suggested papers, neither would be a suitable baseline for our method, as they either attempt to detect shifts in general or require access to labels to assess model performance. Comparing them with our method would be unfair, since they are likely to have high false alarm rates in a context where there are both harmful and benign shifts. Moreover, the primary purpose of our methodology is to detect harmful performance shifts without using labels. However, the closest baseline to ours is the one suggested by Reviewer KyAy, although it is designed for batch or offline contexts rather than the sequential or online context of our work. We provide comparisons (please refer to our response to Reviewer KyAy) showing that our method also works in the batch context. > However, there is no free lunch. Despite these excellent properties, will the method face any other challenges? Regarding the limitations, we acknowledge that our method may perform less effectively when there is a significant concept shift or when we do not have enough calibration data to train and calibrate our proxy. We will include this consideration in the final version of the paper. > In lines 117-118, could the authors explain how they connect the upper bound of the training dataset to the lower bound of the test dataset to derive this probability inequality?
Below, you will find the detailed derivation of Equation 7 (Lines 117-118): $$ P_{H_0}\left[ \exists t \geq 1: \Phi_m(E_1, \dots, E_t) = 1 \right] $$ $$ = P_{H_0}\left[\exists t \geq 1: \hat{L}(E_1, \ldots, E_t) > \hat{U}(E_1^{0}, \ldots, E_n^{0}) + \epsilon_{\text{tol}}\right] $$ $$ = P_{H_0}\left[\exists t \geq 1: \left(\hat{L}(E_1, \ldots, E_t) - \frac{1}{t} \sum_{k=1}^t \theta(P_{E}^{k})\right) - \left(\hat{U}(E_1^{0}, \ldots, E_n^{0}) - \theta(P_{E}^{0})\right) > \epsilon_{\text{tol}} - \left( \frac{1}{t} \sum_{k=1}^t \theta(P_{E}^{k}) - \theta(P_{E}^{0}) \right) \right] $$ $$ \leq P_{H_0}\left[\exists t \geq 1: \left(\hat{L}(E_1, \ldots, E_t) - \frac{1}{t} \sum_{k=1}^t \theta(P_{E}^{k})\right) - \left(\hat{U}(E_1^{0}, \ldots, E_n^{0}) - \theta(P_{E}^{0})\right) > 0 \right] $$ $$ \leq P_{H_0}\left[\exists t \geq 1: \hat{L}(E_1, \ldots, E_t) - \frac{1}{t} \sum_{k=1}^t \theta(P_{E}^{k}) > 0 \right] + P_{H_0}\left[\hat{U}(E_1^{0}, \ldots, E_n^{0}) - \theta(P_{E}^{0}) < 0 \right] $$ $$ \leq \alpha_{\text{source}} + \alpha_{\text{prod}} $$ The third-to-last line holds because, under $H_0$, the term $\epsilon_{\text{tol}} - \left( \frac{1}{t} \sum_{k=1}^t \theta(P_{E}^{k}) - \theta(P_{E}^{0}) \right)$ is non-negative. The second-to-last line follows because if $\exists t \geq 1: \left(\hat{L}(E_1, \ldots, E_t) - \frac{1}{t} \sum_{k=1}^t \theta(P_{E}^{k})\right) - \left(\hat{U}(E_1^{0}, \ldots, E_n^{0}) - \theta(P_{E}^{0})\right) > 0$, then either $\exists t \geq 1: \hat{L}(E_1, \ldots, E_t) - \frac{1}{t} \sum_{k=1}^t \theta(P_{E}^{k}) > 0$ or $\hat{U}(E_1^{0}, \ldots, E_n^{0}) - \theta(P_{E}^{0}) < 0$. --- Rebuttal Comment 1.1: Comment: Thank you for addressing the question, it is very clear to me.
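The alarm rule bounded in the derivation above can be mirrored in a toy simulation. The sketch below is a hedged illustration only: it uses synthetic errors and a fixed-t Hoeffding radius in place of the time-uniform confidence sequences for L-hat and U-hat in the paper, so it shows the mechanics of the rule "alarm when the production lower bound exceeds the source upper bound plus the tolerance" rather than the anytime-valid guarantee itself.

```python
import numpy as np

def hoeffding_radius(n, alpha):
    # One-sided Hoeffding deviation for the mean of n i.i.d. variables in [0, 1]:
    # P(mean - E >= r) <= exp(-2 n r^2)  =>  r = sqrt(log(1/alpha) / (2 n)).
    return np.sqrt(np.log(1.0 / alpha) / (2.0 * n))

rng = np.random.default_rng(0)
alpha_source, alpha_prod, eps_tol = 0.025, 0.025, 0.05

# Calibration (source) errors, bounded in [0, 1]: a fixed upper confidence bound U.
cal_errors = rng.uniform(0.0, 0.4, size=500)
U = cal_errors.mean() + hoeffding_radius(len(cal_errors), alpha_source)

# Production errors arrive one by one; a harmful shift raises the error level halfway.
prod_errors = np.concatenate([rng.uniform(0.0, 0.4, 300), rng.uniform(0.4, 1.0, 300)])

alarm_time, running_sum = None, 0.0
for t, e in enumerate(prod_errors, start=1):
    running_sum += e
    L = running_sum / t - hoeffding_radius(t, alpha_prod)  # lower bound on mean error
    if L > U + eps_tol:  # the alarm rule from the derivation above
        alarm_time = t
        break

print(alarm_time)
```

With this synthetic setup, no alarm can fire during the unshifted first half (the lower bound stays well below the threshold), and the alarm fires some time after the shift at sample 300.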
NeurIPS_2024_submissions_huggingface
2024
Improved Bayes Regret Bounds for Multi-Task Hierarchical Bayesian Bandit Algorithms
Accept (poster)
Summary: The manuscript discusses a multi-task learning problem and proposes a hierarchical Bayesian (HB) bandit approach. In this approach, the agent maintains a meta-posterior distribution over the hyperparameters of the within-task bandit problems. In the HB bandit, each bandit task is characterized by a task parameter. The paper examines both the sequential and concurrent settings for two types of bandit problems: 1) Gaussian linear bandits and 2) semi-bandits. In particular, for both sequential and concurrent bandits in the Gaussian linear bandit setting: 1. The paper improves upon the existing gap-independent Bayes regret bound of HierTS from O(m \sqrt{n \log n \log (mn)}) to O(m \sqrt{n \log n}) for infinite action sets, where m is the number of tasks and n is the number of iterations per task. 2. The paper also extends HierTS and proposes a new algorithm for finite action sets, HierBayesUCB, with a gap-dependent O(m \log n \log (mn)) regret bound. Moreover, the paper extends HierTS and HierBayesUCB to the multi-task Gaussian combinatorial semi-bandit setting and derives regret bounds of O(m \sqrt{n} \log n) and O(m \log (mn \log n)). Strengths: This paper's contribution is important and novel because: 1. It improves upon the existing regret bound for HierTS in the infinite action setting. 2. It proposes a new algorithm to solve the finite action case. 3. It also extends these algorithms to semi-bandit settings with SOTA regret bounds, which is novel. Weaknesses: Many parts of the paper can be significantly improved: 1. [Novelty] I don’t think we need the last paragraph in the introduction. Instead, the authors should briefly describe the technical novelty they have introduced to improve the HierTS regret bound for both the sequential and concurrent bandit over the existing works. Why was the previous analysis not sufficient? What is their novel and non-trivial contribution beyond BayesUCB?
Please broadly discuss the novelties summarized in the appendix in the main paper, as this is your main contribution that led to improved bounds. Also, discuss the novelty in the techniques beyond using the improved inequality from [21]. 2. 140-141: Why do we need this assumption of at least n-round interaction, and how is it ensured in the algorithm? 3. Alg 1 and 2: How do the algorithms choose the set \mathcal{S}_t of the tasks at each time t? The task sequence is also not an input to the algorithm. It is not clear to me. 4. What are \hat \mu and \hat \sigma in Alg 1 (HierBayesUCB)? They are not defined yet; I found them defined later in (3). 5. Display (4): If the order of V_{m,n} is the same as derived in [17] up to multiplicative factors, how does it help to improve the regret bound? I cannot find the intuition to understand what improved the bound. The authors must discuss this in the main part of the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. L228-229: Please elaborate more on how you have achieved this. Minor: - Single-task semi-bandit: L121-122: The authors should mention in the introduction and abstract that they consider coherent cases of semi-bandit problems. - L53: Latest - L143: Fix ‘=‘ - L159: insert ‘and’ Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The nature of the work is theoretical. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to the Review by Reviewer rxXv (Part 1/2) **Q1. Please broadly discuss the novelties summarized in the appendix in the main paper, as this is your main contribution that led to improved bounds.**\ A1: Thank you for this suggestion; we will broadly discuss the novelties summarized in the appendix in the main paper in the final version. **Q2. Also, discuss the novelty in the techniques beyond using the improved inequality from [21].**\ A2: Thanks. Besides using the improved inequality from [21], our technical novelties lie in the following three aspects, and we will clarify them in the revised version: \ $\mathbf{(1)}$ For the improved regret bound for HierTS in Theorem 5.1 in the sequential bandit setting, our proof has two novelties: $\mathbf{(i)}$ We use a more technical positive semi-definite matrix decomposition analysis (i.e. our Lemma B.1) to reduce the multiplicative factor $\kappa^{2}(\Sigma _{0})$ to $\kappa(\Sigma _{0})$. $\mathbf{(ii)}$ We define a new matrix $\tilde{X} _{s,t}$ such that the denominator in the regret is $\sigma^{2}+B^{2}\lambda _{1}(\Sigma _{0})$, not just $\sigma^{2}$, which avoids the case where the variance alone serves as the denominator.
Such technical novelties are also listed in our Table 4.\ $\mathbf{(2)}$ For the improved regret bound for HierBayesUCB in Theorem 5.2 in the sequential bandit setting: our technical novelty lies in decomposing the Bayes regret $\mathcal{BR}(m,n)=\mathbb{E}\sum _{t\geq1}\sum _{s \in \mathcal{S} _{t}}\Delta _{s,t}$ into three terms: $$\mathbb{E}\sum _{t\geq1}\sum _{s \in \mathcal{S} _{t}}\Delta _{s,t}=\mathbb{E}\sum _{t \geq 1,s \in \mathcal{S} _{t}}\Delta _{s,t}\big[\mathbf{1}\lbrace\Delta _{s,t}\geq \epsilon, E _{s,t}\rbrace + \mathbf{1}\lbrace\Delta _{s,t}< \epsilon,E _{s,t}\rbrace +\mathbf{1}\lbrace \bar{E} _{s,t}\rbrace\big],$$ and bounding the first term with a new method as well as the specific property of the BayesUCB algorithm as follows: $$ \mathbb{E}\Delta _{s,t} \mathbf{1}\lbrace\Delta _{s,t} \geq \epsilon, E _{s,t}\rbrace=\mathbb{E}\frac{\Delta _{s,t}^{2}}{\Delta _{s,t}} \mathbf{1}\lbrace\Delta _{s,t} \geq \epsilon, E _{s,t}\rbrace \leq \mathbb{E}\frac{C _{t,s, A _{s,t}}^{2}}{\Delta _{\min}^{\epsilon}},$$ resulting in the final improved gap-dependent regret bound for HierBayesUCB: $$ \big(\sum _{t\geq1,s\in \mathcal{S} _{t}}\lVert A _{s,t}\rVert _{\hat{\Sigma} _{s,t}}^{2}\log{\frac{1}{\delta}}\big)/\min _{s,t}\lvert\Delta _{s,t}\rvert \leq O\big(md\log{(n)}\log{\frac{1}{\delta}} \big),$$ which is of order $ O\big(md\log{(n)}\log{(mn)} \big)$ if we set $\delta = \frac{1}{mn}$.\ $\mathbf{(3)}$ For the improved regret bounds for HierTS and HierBayesUCB in the concurrent setting and in the sequential semi-bandit setting: besides the aforementioned technical novelties in $\mathbf{(1)}$ and $\mathbf{(2)}$, the additional technical novelty lies in leveraging more refined analysis (e.g. using the Woodbury matrix identity repeatedly) to bound the gap between the matrices $\bar{\Sigma} _{t+1}^{-1}$ and $\bar{\Sigma} _{t}^{-1}$ (more details can be found in Lemma D.1 and Equation (6) on Page 22). **Q3. 
L140-141: Why do we need this assumption of at least $n$-round interaction, and how is it ensured in the algorithm?**\ A3: Thanks. Actually, in L140-141 the assumption is at $\mathbf{MOST}$ $n$-round interactions. We adopt this assumption merely for convenient comparison with existing regret upper bounds (e.g. the bounds in Tables 1 and 2) for the multi-task bandit/semi-bandit problem, which directly assume $m$ tasks and $n$ iterations per task. This assumption is without loss of generality and hence can be easily ensured in the algorithm (more implementation details can be confirmed in the code in our submitted zip file). **Q4. Alg 1 and 2: How do the algorithms choose the set $\mathcal{S} _t$ of the tasks at each time t? The task sequence is also not an input to the algorithm.**\ A4: Thanks. The task set $\mathcal{S} _t$ is chosen randomly at each iteration $t$. **Q5. What are $\hat{\mu}$ and $\hat{\sigma}$ in Alg 1 (HierBayesUCB)? They are not defined yet; I found them defined later in (3).**\ A5: Sorry for the confusion. $\hat{\mu}$ and $\hat{\sigma}$ in Alg 1 (HierBayesUCB) are actually the expectation and covariance of the conditional distribution (given the history $H$) of the true task parameter $\theta _{*}$ (including the form of the hierarchical Gaussian bandit in Eq.(3)). We will clarify them when introducing the HierBayesUCB algorithm in the revised version. --- Rebuttal 2: Title: Response to the Review by Reviewer rxXv (Part 2/2) Comment: **Q6. Display (4): If the order of $\mathcal{V} _{m,n}$ is the same as derived in [17] up to multiplicative factors, how does it help to improve the regret bound?**\ A6: Thanks. Our explanations are two-fold: \ $\mathbf{(1)}$ It is true that the order of $\mathcal{V} _{m,n}$ is the same (w.r.t. $m$ and $n$) as derived in [17], but our bound on $\mathcal{V} _{m,n}$ has a smaller multiplicative factor (i.e. 
using our proposed Lemma B.1 to reduce $\kappa^{2}(\Sigma _{0})$ to $\kappa(\Sigma _{0})$); this is where our first improvement lies.\ $\mathbf{(2)}$ The improvement in the order of the regret bound (i.e. w.r.t. $m$ and $n$) is actually attributed to the novel strategy of transforming the multi-task Bayes regret $\mathcal{BR}(m,n)$ into an intermediate regret upper bound that involves the posterior variance $\mathcal{V} _{m,n}$ as the dominant term. Previous work [17, Lemma 1] used the traditional UCB bounding technique (with the confidence factor $\delta \in (0,1)$) to derive the intermediate regret upper bound $\sqrt{mn\mathcal{V} _{m,n} \log{(1/\delta)}} + mn\delta$, resulting in the additional multiplicative factor $\sqrt{\log{(1/\delta)}}$, which is $\sqrt{\log{(mn)}}$ if we set $\delta = \frac{1}{mn}$. Our strategies differ from [17] in the following two aspects:\ $\mathbf{(i)}$ For the improved regret bound of HierTS in Theorem 5.1: we leverage a novel Cauchy-Schwartz inequality from [21] to transform the multi-task Bayes regret $\mathcal{BR}(m,n)$ into an intermediate regret upper bound of $O(\sqrt{mn\mathcal{V} _{mn}})$, which is different from the UCB bounding technique and hence removes the additional $\sqrt{\log{(1/\delta)}}$ factor.\ $\mathbf{(ii)}$ For the improved regret bound of HierBayesUCB in Theorem 5.2: we choose to decompose the Bayes regret $\mathcal{BR}(m,n)=\mathbb{E}\sum _{t\geq1}\sum _{s \in \mathcal{S} _{t}}\Delta _{s,t}$ into three terms: $$\mathbb{E}\sum _{t\geq1}\sum _{s \in \mathcal{S} _{t}}\Delta _{s,t}=\mathbb{E}\sum _{t \geq 1,s \in \mathcal{S} _{t}}\Delta _{s,t}\big[\mathbf{1}\lbrace\Delta _{s,t}\geq \epsilon, E _{s,t}\rbrace + \mathbf{1}\lbrace\Delta _{s,t}< \epsilon,E _{s,t}\rbrace +\mathbf{1}\lbrace \bar{E} _{s,t}\rbrace\big] ,$$ and bound the first term (the main term) with the property of the BayesUCB algorithm (with confidence factor $\delta \in (0,1)$) as follows: $$ \mathbb{E}\Delta _{s,t} \mathbf{1}\lbrace\Delta _{s,t} \geq \epsilon, E _{s,t}\rbrace=\mathbb{E}\frac{\Delta _{s,t}^{2}}{\Delta _{s,t}} \mathbf{1}\lbrace\Delta _{s,t} \geq \epsilon, E _{s,t}\rbrace \leq \mathbb{E}\frac{C _{t,s, A _{s,t}}^{2}}{\Delta _{\min}^{\epsilon}},$$ leading to the improved intermediate regret bound of $\sum _{t\geq 1, s\in \mathcal{S} _{t}}\frac{8\log{\frac{1}{\delta}}\lVert A _{s,t}\rVert _{\hat{\Sigma} _{s,t}}}{\Delta _{\min}^{\epsilon}}=O(\mathcal{V} _{m,n}\log{\frac{1}{\delta}})$, which is of $O(m\log{n}\log{(mn)})$ if we set $\delta = \frac{1}{mn}$. **Q7. L228-229: Please elaborate more on how you have achieved this.**\ A7: Thanks. The improvement lies in our upper bound on $\lambda _{1}(\Sigma _{0}^{-1}\tilde{\Sigma} _{s,t}\tilde{\Sigma} _{s,t} \Sigma _{0}^{-1})$, and the detailed explanations are two-fold:\ $\mathbf{(1)}$ Previous work [17, Appendix B] directly used Weyl’s inequality to upper bound $$\lambda _{1}(\Sigma _{0}^{-1}\tilde{\Sigma} _{s,t}\tilde{\Sigma} _{s,t} \Sigma _{0}^{-1}) \leq \lambda _{1}^{2}(\Sigma _{0}^{-1}) \lambda _{1}^{2}(\tilde{\Sigma} _{s,t}) \leq \lambda _{1}^{2}(\Sigma _{0}^{-1}) \lambda _{1}^{2}(\Sigma _{0}) = \kappa^{2}(\Sigma _{0}).$$ $\mathbf{(2)}$ Instead of directly using Weyl’s inequality, we first propose Lemma B.1, which uses a positive semi-definite matrix diagonalization technique to bound $$\lambda _{1}\big[\big((I+AB)(I+BA)\big)^{-1}\big]\leq \frac{\lambda _{1}(A)}{\lambda _{d}(A)}.$$ Then we apply Lemma B.1 to upper bound $$\lambda _{1}(\Sigma _{0}^{-1}\tilde{\Sigma} _{s,t}\tilde{\Sigma} _{s,t} \Sigma _{0}^{-1}) \leq \lambda _{1}\big[\big((I+\Sigma _{0}\tilde{\Sigma} _{s,t})(I+\tilde{\Sigma} _{s,t}\Sigma _{0})\big)^{-1}\big] \leq \kappa(\Sigma _{0}),$$ resulting in a smaller multiplicative factor than that in [17]. **Q8. 
Single-task semi-bandit: L121-122: The authors should mention in the introduction and abstract that they consider coherent cases of semi-bandit problems.**\ A8: Thanks for your suggestion; we will clarify in the introduction and abstract that we consider coherent cases of semi-bandit problems in the revised version. **References**\ [17] Hierarchical Bayesian Bandits. AISTATS 2022.\ [21] An Improved Regret Bound for Thompson Sampling in the Gaussian Linear Bandit Setting. ISIT 2021. --- Rebuttal Comment 2.1: Comment: I thank the authors for their detailed response. I will maintain my score. --- Reply to Comment 2.1.1: Title: Thank you for the response. Comment: Dear Reviewer rxXv,\ Thank you for the response. We have benefited a lot from your reviews and kind suggestions, and we will incorporate them into the revision of this paper with more explanations of our technical contributions. Thank you!\ Best,\ Authors
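The Lemma B.1-style spectral inequality quoted in the rebuttal above, $\lambda _{1}\big[\big((I+AB)(I+BA)\big)^{-1}\big]\leq \lambda _{1}(A)/\lambda _{d}(A)$, can be sanity-checked numerically. The sketch below (random PSD matrices and our own helper names, not the paper's proof) relies on the fact that $(I+BA)^{\top} = I+AB$ for symmetric $A, B$, so the product is positive semi-definite.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

def random_psd(d, rng, jitter=0.0):
    # Wishart-style symmetric PSD matrix; jitter > 0 guarantees invertibility.
    M = rng.normal(size=(d, d))
    return M @ M.T + jitter * np.eye(d)

for trial in range(200):
    A = random_psd(d, rng, jitter=1e-3)   # invertible PSD, plays the role of Sigma_0
    B = random_psd(d, rng)                # PSD, plays the role of Sigma-tilde
    I = np.eye(d)
    M = (I + A @ B) @ (I + B @ A)         # (I + BA)^T = I + AB, so M is PSD
    lhs = np.max(np.linalg.eigvalsh(np.linalg.inv(M)))
    eigs_A = np.linalg.eigvalsh(A)        # ascending eigenvalues
    kappa_A = eigs_A[-1] / eigs_A[0]      # condition number lambda_1 / lambda_d
    assert lhs <= kappa_A * (1 + 1e-8), (lhs, kappa_A)

print("Lemma B.1-style inequality held in all trials")
```

The inequality follows from the similarity $I+BA = A^{-1/2}(I+A^{1/2}BA^{1/2})A^{1/2}$, so the check should pass for any symmetric PSD pair with invertible $A$.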
Summary: The paper improves the Bayesian regret bound for hierarchical Bayesian bandit algorithms in multi-task bandit and semi-bandit settings. Firstly, it improves the gap-independent bound by a factor of $\mathcal{O}(\sqrt{\log(mn)})$ for an infinite action set, $m$ being the number of tasks and $n$ the number of iterations per task. For a finite action set, the authors propose the HierBayesUCB algorithm and analyze its gap-dependent Bayesian regret. Finally, the paper extends these algorithms to the multi-task combinatorial semi-bandit setting. Strengths: The paper deals with an interesting hierarchical bandit/semi-bandit setting, where the agent needs to optimize its policy for several tasks simultaneously based on semi-bandit feedback. For the Gaussian setting, the paper provides Bayesian regret bounds that either improve upon previous results or provide new results (e.g., HierBayesUCB in the combinatorial setting). Weaknesses: The main weakness of the paper is the requirement of Gaussian distributions for each layer of the hierarchy, including the noise. While such an assumption facilitates the development of closed-form posteriors to ease the learning task, it is highly restrictive since in many cases the noise is not Gaussian (assuming sub-Gaussian noise would be better; many classes of noise distributions, including uniform, truncated Gaussian, or Rademacher, could then be accommodated). This should at least be mentioned in the abstract, since the current abstract gives the sense of a more general theory being developed in the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: Here are a few questions regarding the paper: 1. In the case of several tasks, what is the benefit of assuming a hierarchical structure rather than dealing with each task independently?
In Bayesian hierarchical models, this kind of shared structure often allows borrowing of strength across the different groups (tasks here), so that the overall rate could be better -- in this case, the overall regret is linear in $m$, which seems to be similar to that of dealing with the tasks independently. Can the authors shed more light on this? 2. Is it possible to use this result to derive frequentist regret bounds (either high probability bounds or in expectation), given some true $\theta_{s}^*$? To keep the tasks exchangeable, one can alternatively assume a true $Q^*$ and $\{\theta_s^*\}$ as latent variables. Furthermore, in the Gaussian case, is the lower bound $\Omega(d\sqrt{n})$ tight in this hierarchical (shared) setting (note that this lower bound is for a linear bandit learning a single task, while in this case it is not immediately obvious whether or not the shared structure across the tasks could help in the regret bound)? 3. I am concerned with the Gaussian assumptions required for all the theoretical results. While the distributions for $Q$ and $\theta_{s}^*|\mu_*$ are fine to assume Gaussian (equivalent to a Gaussian prior on $(\mu_{1}^*,\dots,\mu_{m}^*)$ with a dependence structure), the issue is with the Gaussian noise, which is restrictive for applying the method to a broader range of problems. Unlike this work, Thompson sampling for linear bandits has been studied where the Gaussianity assumption in the likelihood is only taken for algorithmic convenience, whereas the theoretical results only require milder assumptions like sub-Gaussianity (e.g., [Agrawal and Goyal]). Furthermore, the paper does not deal with the case where $\mu_q, \Sigma_q, \Sigma_0, \sigma$ are unknown -- although the experiments place further priors on them, this essentially makes the model a mixture, whereby the theoretical results are no longer valid. 4. The term $\mathcal{V}_{m,n}$ is the posterior variance of which distribution? 5.
Regarding novelty in technical contribution, it seems the application of the Cauchy Schwarz inequality from [C. Kalkanli and A. Özgür.] (line 204) results in the reduction of the extra factor of $\sqrt{\log(mn)}$ from both Theorems 5.1 and 5.3, which is the main improvement in the paper. For Theorem 5.2, it seems like the fact that $|\mathcal{A}|<\infty$ is the key to reducing the $\sqrt{n}$ factor in the averaged Bayes regret (compared to Theorem 5.1), otherwise, trivially $\Delta_{s,\min}=0$ for compact $\mathcal{A}$ -- however, the additional term of $\log(m)$ is intriguing, can the authors comment on this? The last comment in line 267 shows that if $\Delta_{\min}$ is small, then this bound is much worse, scaling as $m^{3/2}$ (also the additional $\sqrt{\log(mn)}$ reappears). 6. Can the authors comment on the technical difference to remove the polynomial dependence of $K$ to logarithmic in Theorem 5.4 for the combinatorial case? 7. The bound in Theorem 5.5 is stronger than that in [S. Basu, B. Kveton, M. Zaheer, and C. Szepesvári.] in the case $\Delta_{\min}$ is large, otherwise the comment in line 266 would result in a bound worse -- is this correct? 8. Algorithms 1 and 2: Apart from $Q$, the update step for $Q_{t+1}$ also requires the various other likelihood and model parameters (assumed known) like $\mu_q, \Sigma_q, \sigma$ which should be included as input. Also, $\delta$ is an input parameter for the HierBayesUCB algorithm. 9. The *ORACLE* used in the semi-bandit setting: In practice, this is a combinatorially hard problem (e.g., the work by Chen, Wang and Yuan considers a $(\alpha,\beta)-$approximate oracle for efficiency). No simulation examples were shown to demonstrate the performance and more importantly, the computational complexity, for this important step in both HierTS and HierBayesUCB algorithms. 10. 
What are the values of $m, n$ in Figure 1(f) -- it seems like HierTS surprisingly performs very poorly, in fact, with higher $T$, vanilla TS might have better regret than HierTS. Any insights as to why this is the case? This is also related to my first question. Other small comments: In line 153, since $t$ is mentioned as a suffix in $P_{s,t}$, it might be better to use $\mu_t$, i.e. $\mathbb{P}(\theta_{s}^*=\theta|\mu_*=\mu_t, H_{s,t})$. Also, $\hat{\mu}_{s,t}$ in line 160 is not defined till the following page (equation 3). Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See the questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to the Review by Reviewer diEN (Part 1/4) **Q1. The Gaussian distribution assumption for each layer of the hierarchy is highly restrictive since in many cases, the noise is not Gaussian (assuming sub-Gaussian would be better). This should at least be mentioned in the abstract since the current abstract gives the sense of a more general theory being developed in the paper.**\ A1: Thank you for the suggestion, we will mention the Gaussian linear bandit setting in the abstract and introduction in the revised version to make the statement more rigorous. For the Gaussian assumption, our explanations are two-fold:\ $\mathbf{(1)}$ It is true that the Gaussian distribution assumption for each layer of the hierarchy (especially the Gaussian noise) is restrictive. On the other hand, we also need to point out that the existing Bayes regret bounds [7,17,25] for hierarchical Bayesian bandit algorithms all adopt the Gaussian distribution for each layer of the hierarchy.\ $\mathbf{(2)}$ Extending our results to the more general settings (e.g. sub-Gaussian noise) is also one of our future directions. However, the generalization is not easy, mainly because the non-Gaussian assumption could not lead to the closed-form posteriors and hence could not apply our proposed algebraic analysis (e.g. Lemma B.1, Propositions B.1, D.1 and E.1). Instead, we may need to choose other tools like information-theoretic analysis in [32] to derive Bayes regret bounds for hierarchical Bayesian bandit algorithms. **Q2. What is the benefit of assuming a hierarchical structure rather than dealing with each task independently? The overall regret is linear in $m$, which seems to be similar to that of dealing with the tasks independently.**\ A2: Thanks. 
Our explanations are two-fold:\ $\mathbf{(1)}$ It is true that, from the theoretical perspective, the current Bayes regret bound for multi-task hierarchical Bayesian bandits is similar to that of dealing with the tasks independently. This is not very surprising, because the regret bound $O(\sqrt{n\log{n}})$ in [21] for the single-task Bayesian bandit is also near-optimal. Besides, deriving more insightful regret bounds (e.g. bounds revealing the impact of task similarity on generalization) to illustrate the benefits of the multi-task Bayes regret bound over the single-task regret bound for hierarchical Bayesian bandits is also one of our ongoing research directions.\ $\mathbf{(2)}$ From the algorithmic perspective, in the hierarchical Bayesian bandit problem, specifically-designed hierarchical Bayesian bandit algorithms can access more information about $\mu _{\star}$ via interacting with different tasks than traditional single-task Bayesian bandit algorithms. Such a benefit can also be seen in the lower regrets of HierTS/HierBayesUCB compared with vanilla TS in our Figures 1(f) and 2(f). **Q3. Is it possible to use this result to derive frequentist regret bounds (either high probability bounds or in expectation)?**\ A3: Probably not. Detailed explanations are two-fold:\ $\mathbf{(1)}$ Notice that a frequentist regret bound (for any fixed task instance) is an upper bound on the Bayesian regret. Besides, as shown in [31, Sect 3.1], only in some cases can a Bayes regret bound be converted to a frequentist regret bound. Therefore it is not easy to use our Bayes regret bound to derive frequentist regret bounds.\ $\mathbf{(2)}$ Obtaining frequentist regret bounds may require more advanced techniques.
To the best of our knowledge, deriving frequentist regret bounds always requires advanced analysis tools such as anti-concentration inequalities or martingale methods (see [1,15]), while deriving the Bayes regret bounds in our work just uses less technical algebraic bounding analysis. **Q4. In the Gaussian case, is the lower bound $\Omega(d\sqrt{n})$ tight in this hierarchical setting (note that this lower bound is for a linear bandit learning a single task, but it is not immediately obvious whether or not the shared structure across the tasks could help in the regret bound).**\ A4: Thanks for pointing this out, we believe that the lower bound $\Omega(d\sqrt{n})$ is tight in the hierarchical Gaussian setting, due to the following reason. For any fixed hyper-parameter $\mu _{\star}$ (drawn from the hyper-posterior $Q$), [30, Theorem 2.1] shows that the lower Bayes regret bound for the Gaussian bandit instance $\theta _{s,\star} \sim \mathbb{P}(\cdot | \mu _{\star})$ (for any policy) is $$ \mathbb{E} _{\theta _{s,\star} \sim \mathbb{P}(\cdot | \mu _{\star})} \sum _{t=1}^{n}(A _{s,\star}^{\top}\theta _{s,\star} - A _{s,t}^{\top} \theta _{s,\star}) \geq 0.006d\sqrt{n}.$$ Taking the expectation over $\mu _{\star} \sim Q$, we can obtain the lower Bayes regret bound for the hierarchical Gaussian linear bandit: $$ \mathbb{E} _{\mu _{\star} \sim Q} \mathbb{E} _{\theta _{s,\star} \sim \mathbb{P}(\cdot | \mu _{\star})} \sum _{t=1}^{n}(A _{s,\star}^{\top}\theta _{s,\star} - A _{s,t}^{\top} \theta _{s,\star}) \geq 0.006d\sqrt{n}.$$ **Q5. While the distributions for $Q$ and $\theta _{s,\star}|\mu _{*}$ are fine to assume Gaussian, the issue is with the Gaussian noise, which is restrictive to apply the method to a broader range of problems.**\ A5: Thanks for pointing this out. It is true that the Gaussian noise assumption makes our theoretical results restrictive to apply to a broader range of problems. Extending our results to the more general setting (e.g.
sub-Gaussian noise setting) is also one of our future directions, and may require more advanced analysis tools than the algebraic bounding technique used in the current work. --- Rebuttal 2: Title: Response to the Review by Reviewer diEN (Part 2/4) Comment: **Q6. The paper does not deal with the case where $\mu _{q}, \Sigma _{q}, \Sigma _{0}, \sigma$ are unknown.**\ A6: Thanks for your comments. Our explanations are two-fold:\ $\mathbf{(1)}$ It is true that the current version is unable to deal with the case where $\mu _{q}, \Sigma _{q}, \Sigma _{0}, \sigma$ are unknown, and this may be one limitation of the algebraic bounding technique (see more explanations in Remark F.1) used to derive Bayes regret bounds (in [7,17,25] and ours) for hierarchical Bayesian bandit algorithms.\ $\mathbf{(2)}$ To deal with the case where $\mu _{q}, \Sigma _{q}, \Sigma _{0}, \sigma$ are unknown, we believe we need to leverage other tools (like information-theoretic analysis) to analyze general cases. **Q7. The term $\mathcal{V} _{m,n}$ is the posterior variance of which distribution?**\ A7: Sorry for the confusion, our explanations are two-fold:\ $\mathbf{(1)}$ The posterior variance $\mathcal{V} _{m,n} \leq \mathbb{E}\Big[\sum _{t\geq 1}\sum _{s \in \mathcal{S} _{t}}\lVert A _{s,t}\rVert _{\hat{\Sigma} _{s,t}}^{2}\Big]$ is just a notation, not the variance of a certain random variable. The expectation is taken over the randomness of the random variables $A _{s,t}$, $\hat{\Sigma} _{s,t}$.\ $\mathbf{(2)}$ $\mathcal{V} _{m,n}$ measures the variation of the Mahalanobis norm of the actions $A _{s,t}$. Therefore, we call it the posterior variance, as in previous works [7,17]. **Q8. For Theorem 5.2, it seems like the fact that $\lvert\mathcal{A}\rvert < \infty$ is the key to reducing the $\sqrt{n}$ factor in the averaged Bayes regret (compared to Theorem 5.1).**\ A8: Thanks.
Our explanations lie in the following two aspects:\ $\mathbf{(1)}$ Actually the fact that $\lvert\mathcal{A}\rvert < \infty$ is $\mathbf{not}$ the key to reducing the $\sqrt{n}$ factor, because $\lvert\mathcal{A}\rvert$ is the multiplicative factor in the regret bound of Theorem 5.2 and $\lvert\mathcal{A}\rvert < \infty$ is only used to ensure that the regret bound is not infinite.\ $\mathbf{(2)}$ The key to reducing the $\sqrt{n}$ factor in Theorem 5.1 is actually the following strategy to decompose the Bayes regret into three terms: $$ \mathcal{BR}(m,n)=\mathbb{E}\sum_{t\geq1}\sum_{s \in \mathcal{S} _{t}}\Delta _{s,t}=\mathbb{E}\sum _{t\geq 1,s \in \mathcal{S} _{t}}\Delta _{s,t}\big[\mathbf{1}\lbrace\Delta _{s,t}\geq \epsilon, E _{s,t}\rbrace + \mathbf{1}\lbrace\Delta _{s,t}< \epsilon,E _{s,t}\rbrace +\mathbf{1}\lbrace \bar{E} _{s,t}\rbrace\big],$$ and we use the identity $\Delta _{s,t}\mathbf{1}\lbrace\Delta _{s,t}\geq \epsilon, E _{s,t}\rbrace=\frac{\Delta _{s,t}^{2}}{\Delta _{s,t}} \mathbf{1}\lbrace\Delta _{s,t}\geq \epsilon, E _{s,t}\rbrace$ to bound the first term as $$ \mathbb{E}\sum _{t\geq 1,s \in \mathcal{S} _{t}}\Delta _{s,t}\big[\mathbf{1}\lbrace\Delta _{s,t}\geq \epsilon, E _{s,t}\rbrace\big] \leq \mathbb{E}\sum _{t\geq 1,s \in \mathcal{S} _{t}}\frac{\Delta _{s,t}^{2}}{\Delta _{\min}^{\epsilon}} \leq \mathbb{E}\sum _{t\geq 1,s \in \mathcal{S} _{t}}\frac{(8\log{\frac{1}{\delta}})\lVert A _{s,t}\rVert _{\hat{\Sigma} _{s,t}}^{2}}{\Delta _{\min}^{\epsilon}} = O(\frac{\log{\frac{1}{\delta}}\mathcal{V} _{m,n}}{\Delta _{\min}^{\epsilon}}),$$ leading to the regret bound of order $O(\frac{m\log{n} \log{\frac{1}{\delta}}}{\Delta _{\min}^{\epsilon}})$ in Theorem 5.2, which removes the $\sqrt{n}$ factor in Theorem 5.1. **Q9. The additional term of $\log{m}$ in Theorem 5.2 is intriguing, can the authors comment on this?**\ A9: Thanks for pointing this out.
Our comments are two-fold:\ $\mathbf{(1)}$ The regret bound in our Theorem 5.2 is (informal form) $$O(mn[\epsilon+\delta]+\frac{\log{\frac{1}{\delta}}}{\Delta _{\min}^{\epsilon}}m\log{n}),$$ and becomes $O(\log{(nm)}m\log{n})$ if we set $\delta = \frac{1}{nm}$, $\epsilon = \frac{1}{mn}$, and $\Delta _{\min} > \epsilon$ is large. Therefore, the additional term of $\log{m}$ is actually caused by setting the confidence factor $\delta = \frac{1}{mn}$, which to some extent is an unavoidable term when combining $m$ regret bounds for UCB-type algorithms.\ $\mathbf{(2)}$ We can also remove the additional term of $\log{m}$ by setting $\delta = \frac{1}{n}$; then our Bayes regret bound in Theorem 5.2 becomes $O([mn\epsilon + m]+ \frac{\log{n}}{\Delta _{\min}^{\epsilon}}m\log{n})$, which is of order $O(m\log^{2}{n})$ if we set $\epsilon =\frac{1}{mn}$ and the gap $\Delta _{\min} \gg \epsilon$ is large. --- Rebuttal 3: Title: Response to the Review by Reviewer diEN (Part 3/4) Comment: **Q10. Can the authors comment on the technical difference to remove the polynomial dependence of $K$ to logarithmic in Theorem 5.4 for the combinatorial case?**\ A10: Thanks. We need to admit that the improvement of the polynomial dependence of $K$ to logarithmic in Theorem 5.4 is actually attributed to the $d$-dimensional feature representation of each action. Detailed explanations lie in the following two aspects:\ $\mathbf{(1)}$ If we use our Theorem 5.4 to derive regret bounds for the multi-task $K$-armed bandit, then $d=K$ and we need to set the feature representation of the $k$-th arm ($k \in [K]$) as the one-hot vector whose $k$-th element is $1$ and all other elements $0$.
Then we need to replace $d$ in our Theorem 5.4 with $K$, and will still get a regret bound for multi-task $K$-armed bandit with polynomial dependence of $K$.\ $\mathbf{(2)}$ Even though our Theorem 5.4 will lead to a regret bound with polynomial dependence of $K$ for traditional $K$-armed bandit problem (without feature representation), our Theorem 5.4 still reveals an insight that: if the size $K$ of arms is truly large (e.g. $K >1000$), representing each action/arm with a $d$-dimensional feature ($d < K$) will result in a sharper regret bound without the polynomial dependence of $K$. **Q11. The bound in Theorem 5.5 is stronger than that in [S. Basu, NeurIPS2021] in the case $\Delta _{\min}^{\epsilon}$ is large, otherwise the comment in line 266 would result in a bound worse, is this correct?**\ A11: Thanks. It is true that if the gap $\Delta _{\min}^{\epsilon}$ is small and $\epsilon=\frac{1}{mn}$, our bounds in Theorems 5.2 and 5.5 become worse. However, if we choose $\epsilon = \frac{1}{\sqrt{n}}$ (or choose $\epsilon = \frac{1}{n}$ as explained in Answer 9 to **Question 9**), we can still obtain a good regret bound even if the gap $\Delta _{\min}^{\epsilon}$ is small. Detailed explanations are two-fold and will be added in the final version:\ $\mathbf{(1)}$ If we set $\epsilon = \frac{1}{\sqrt{n}}$, $\delta = \frac{1}{n}$, and $\Delta _{\min} >> \epsilon$ (i.e. $\Delta _{\min}$ is large), our regret bound in Theorem 5.2 (similar regret bound in Theorem 5.5 can be derived in a similar way) is of order $O(m\sqrt{n} + m\log^{2}{n}) = O(m\sqrt{n})$ and is still improved over the latest bound $O(m\sqrt{n\log{n}\log{(mn)}})$ in [17, Theorem 3] in Table 1.\ $\mathbf{(2)}$ If we set $\epsilon = \frac{1}{\sqrt{n}}$, $\delta = \frac{1}{n}$, and $\Delta _{\min} \leq \epsilon$ (i.e. 
$\Delta _{\min}$ is small), our regret bound in Theorem 5.2 is of order $O(m\sqrt{n}+\sqrt{n}\log{n} [m\log{n}])=O(m\sqrt{n}\log^{2}{n})$, which is still comparable with the latest one $O(m\sqrt{n\log{n}\log{(mn)}})$ in [17, Theorem 3]. **Q12. Algorithms 1 and 2: Apart from $Q$, the update step for $Q _{t+1}$ also requires the various other likelihood and model parameters (assumed known) like $\mu _{q}, \Sigma _{q}, \sigma$, which should be included as input.**\ A12: Sorry for the confusion. Actually, Algorithms 1 and 2 are general forms of the HierTS/HierBayesUCB algorithms (hence for any hierarchical distribution assumption), not the specific forms of the HierTS/HierBayesUCB algorithms under the hierarchical Gaussian assumption. Therefore, the input of the general HierTS/HierBayesUCB algorithms does not necessarily include model parameters like $\mu _{q}, \Sigma _{q}, \sigma$. We will replace the step **Update $Q _{t+1}$ with Eq.(2)** in Algorithms 1 and 2 with **Update $Q _{t+1}$** to avoid confusion in the revised version. --- Rebuttal 4: Title: Response to the Review by Reviewer diEN (Part 4/4) Comment: **Q13. No simulation examples were shown to demonstrate the performance of the *ORACLE* operator and, more importantly, the computational complexity of this important step in both the HierTS and HierBayesUCB algorithms.**\ A13: Thanks for pointing this out. It is true that in the current work we did not conduct simulation experiments to demonstrate the performance of the *ORACLE* operator and combinatorial HierTS/HierBayesUCB. This paper mainly focuses on providing improved regret analysis and novel algorithms for hierarchical Bayesian bandit/semi-bandit settings. Practical implementation of the proposed hierarchical semi-bandit algorithms is left as one of our future research directions. **Q14. What are the values of $m,n$ in Figure 1(f) -- it seems like HierTS surprisingly performs very poorly; in fact, with higher $T$, vanilla TS might have better regret than HierTS.**\ A14: Thanks.
As shown in the Experiment section, we set $m=10, n=400$ in Figure 1(f). There are two insights behind the performance of HierTS in Figure 1(f): \ $\mathbf{(1)}$ It is true that in Figure 1(f) (where we set $\sigma _{q}=1$) HierTS outperforms vanilla TS by a small margin. But in Figure 2(f) (where we set a smaller $\sigma _{q}=0.5$) HierTS outperforms vanilla TS by a much larger margin. Therefore, the advantage of HierTS over vanilla TS can vary across different hierarchical Bayesian bandit environments. \ $\mathbf{(2)}$ This varying advantage of HierTS over vanilla TS is consistent with our regret bound in Theorem 5.1 for HierTS: our regret bound involves the terms $\lambda _{1}(\Sigma _{q})$ and $\mathrm{tr}(\Sigma _{q} \Sigma _{0}^{-1})$, which reveals that a larger variance $\sigma _{q}$ leads to a larger regret bound. Hence, the larger variance $\sigma _{q} =1.0$ in Figure 1(f) leads to a larger regret of HierTS than that in Figure 2(f) (with the smaller variance $\sigma _{q} =0.5$). **Q15. In line 153, since $t$ is mentioned as a suffix in $\mathbb{P} _{s,t}$, it might be better to use $\mathbb{P}(\theta _{s,\star}=\theta | \mu _{*}=\mu _{t}, H _{s,t})$. Also, $\hat{\mu} _{s,t}$ in line 160 is not defined till the following page (equation 3).**\ A15: Thank you for the kind suggestion, we will use $\mathbb{P}(\theta _{s,\star}=\theta | \mu _{\star}=\mu _{t}, H _{s,t})$ in the final version. Besides, $\hat{\mu}$ and $\hat{\sigma}$ in line 160 are the expectation and covariance of the conditional distribution (given the history $H$) of the true task parameter $\theta _{*}$, and we will clarify them when introducing the HierBayesUCB algorithm in the revised version. **References**\ [1] Improved Algorithms for Linear Stochastic Bandits. NeurIPS 2011.\ [7] No Regrets for Learning the Prior in Bandits. NeurIPS 2021.\ [15] Improved Algorithms for Stochastic Linear Bandits Using Tail Bounds for Martingale Mixtures. NeurIPS 2023.\ [17] Hierarchical Bayesian Bandits.
AISTATS 2022.\ [21] An Improved Regret Bound for Thompson Sampling in the Gaussian Linear Bandit Setting. ISIT 2021.\ [25] Meta-Thompson Sampling. ICML 2021.\ [30] Linearly Parameterized Bandits. MoOR 2010.\ [31] Learning to Optimize via Posterior Sampling. MoOR 2011.\ [32] An Information-Theoretic Analysis of Thompson Sampling. JMLR 2016. --- Rebuttal 5: Comment: Thanks for the response. I am overall satisfied with the responses and will raise my score to 6. However, based on the discussion, I feel there are a lot of interesting directions to pursue on various fronts within the current setup. --- Rebuttal Comment 5.1: Title: Thank you for the response. Comment: Dear Reviewer diEN,\ Many thanks for your response. We appreciate your support very much, and really benefit a lot from your detailed reviews. We will take your suggestions into the revision to improve the quality of this paper. Thank you! Best,\ Authors
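As a concrete illustration of the point in A1 above — that Gaussian likelihoods are what give the closed-form posteriors these analyses rely on — here is a minimal Bayesian linear-regression update in numpy (a toy sketch in our own notation, not the paper's exact hierarchical model):

```python
import numpy as np

def gaussian_posterior(mu0, Sigma0, X, y, sigma):
    """Closed-form posterior for theta ~ N(mu0, Sigma0) under
    y = X @ theta + eps, eps ~ N(0, sigma^2 I); conjugacy needs Gaussian noise."""
    prior_prec = np.linalg.inv(Sigma0)
    post_prec = prior_prec + X.T @ X / sigma**2
    Sigma_post = np.linalg.inv(post_prec)
    mu_post = Sigma_post @ (prior_prec @ mu0 + X.T @ y / sigma**2)
    return mu_post, Sigma_post

# Toy data: the posterior mean should concentrate around theta_true.
rng = np.random.default_rng(0)
d, n, sigma = 3, 500, 0.5
theta_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ theta_true + sigma * rng.normal(size=n)
mu_post, Sigma_post = gaussian_posterior(np.zeros(d), np.eye(d), X, y, sigma)
```

With sub-Gaussian but non-Gaussian noise this conjugate update is no longer exact, which is precisely the obstacle to generalization that A1 points to.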
Summary: This paper studies the multi-task Gaussian linear bandit and semi-bandit problems. It uses a Bayesian approach to maintain a meta-distribution over the hyper-parameters of within-task parameters. It provides an improved regret bound for the multi-task HierTS algorithm in the case of an infinite action set. For the same setting, it proposes HierBayesUCB, which is the UCB counterpart of HierTS. The authors extend their algorithm to the concurrent multi-task and combinatorial semi-bandit settings as well. Strengths: Contribution: The paper improves upon prior works in several directions; it proposes tighter regret bounds for HierTS and extends multi-task bandit algorithms to the combinatorial semi-bandit setting. Presentation: the paper is very well-organized, it clearly defines the problem, the pseudo-code of the algorithms is clearly presented, and the experimental results are extensive. Weaknesses: The paper is lacking real-world dataset experiments. These could help assess the algorithm's robustness w.r.t. model misspecification and gauge the consistency of the theoretical results when the assumptions are violated. Technical Quality: 3 Clarity: 4 Questions for Authors: No questions. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. The paper improves upon prior works in several directions; it proposes tighter regret bounds for HierTS, extends multi-task bandit algorithms to combinatorial semi-bandit setting.**\ A1: Thanks for your positive comments. We will continue to improve the quality of this paper. **Q2. The paper is lacking real-world dataset experiments. This could help assess the algorithm's robustness w.r.t model misspecification and gauge the theoretical results consistency w.r.t assumptions violation.**\ A2: Thanks for your comments. It is true that in the current work we only conduct synthetic experiments to validate the effectiveness of our theoretical results and our proposed algorithms. Conducting real-world dataset experiments is also one of our future directions, and we will try to add such experiments in the final version.
Summary: This paper revisits the learning problems of (multi-task) Bayesian linear bandits/semi-bandits, which is interesting. The improvement of the Bayesian regret bound for the multi-task Bayes regret bound of HierTS is very marginal. The remaining upper bounds presented are still near-optimal up to some $\log$ factors relative to the regret lower bounds. I did not check the proofs in the appendix in detail, but they seem correct and the proofs are well written. In the introduction section, the description “The gap between the cumulative reward of optimal actions in hindsight and the cumulative reward of the agent is defined as regret” seems not to be aligned with the learning problems to be solved. For me, I think we still use pseudo-regret. So, we may not care about the optimal actions in hindsight. Instead, we care about the action with the highest mean given a problem instance. Questions: I have a question: when using the notion of Bayesian regret, why is there an instance-dependent gap? For the UCB-based algorithms, I feel that it is also possible to have an instance-dependent regret bound using the notion of frequentist regret. Why not do it in that way? In Line 53, there is a typo. I guess it should be “latest” instead of “latext”. Can you rotate Table 4? It is hard for readers who use a desktop. Strengths: see the first box Weaknesses: see the first box Technical Quality: 4 Clarity: 4 Questions for Authors: see the first box Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Frequentist analysis may be more interesting, but I kind of know the reasons why frequentist regret is not used: the combination of TS and frequentist regret needs anti-concentration bounds, which are always hard to develop. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. The description “The gap between the cumulative reward of optimal actions in hindsight and the cumulative reward of agent 24 is defined as regret” seems not to be aligned with the learning problems to be solved. Instead, we care about the action with the highest mean given a problem instance.**\ A1: Thanks for pointing this out. It is true that in Problem Setting we care about the action with the highest mean reward (not the highest reward and hence seems not aligned with the pseudo regret definition in Introduction), and we define the action with the highest mean reward as the optimal action. We will make a more rigorous statement of our introduction in the revised version. **Q2. When using the notion of Bayesian regret, why is there an instance-dependent gap?**\ A2: Thanks. There are two reasons for the existence of instance-dependent gap:\ $\mathbf{(1)}$ The first reason is that we use a new bounding technique to upper bound $\mathbb{E}\Delta _{s,t}$ (e.g. see Lines 549-550) as follows (informal form): $$ \mathbb{E}\Delta _{s,t} \mathbf{1}\lbrace\Delta _{s,t} \geq \epsilon\rbrace=\mathbb{E}\frac{\Delta _{s,t}^{2}}{\Delta _{s,t}} \mathbf{1}\lbrace\Delta _{s,t} \geq \epsilon\rbrace \leq \mathbb{E}\frac{C _{t,s, A _{s,t}}^{2}}{\Delta _{\min}^{\epsilon}}.$$ Therefore, there exists $\Delta _{\min}^{\epsilon}$ in the denominator in our bound.\ $\mathbf{(2)}$ The second reason is that the instance-dependent gap is actually a random variable, and our bound needs to take the expectation of this random variable. Note that the instance $\theta _{s, \star}$ is random, and the gap $\Delta _{s,t} = \theta _{s,\star}^{\top}A _{s, \star}- \theta _{s,\star}^{\top}A _{s, t}$ is a random variable. Finally, the term $\mathbb{E}{\frac{1}{\Delta _{\min}^{\epsilon}}}$ in our regret bound (e.g. see Lines 549-550) takes the expectation over the randomness of $\Delta _{s,t}$. 
Similar instance-dependent-gap-based regret bounds can also be found in the latest work [3, Theorem 5]. **Q3. For the UCB-based algorithms, I am feeling that it is also possible to have an instance-dependent regret bound using the notion of frequentist regret. Why not do it in that way?**\ A3: Thanks for your suggestions, our explanations are two-fold:\ $\mathbf{(1)}$ Using frequentist regret bounds to derive Bayes regret bounds (either instance-dependent or instance-independent) is also one of our future research directions, because the techniques for deriving frequentist regret bounds are more fruitful and more general. But in the current work, we focus on applying improved Bayes regret analysis (like the improved matrix analysis in Lemma B.1 and the novel regret decomposition strategy in Theorem C.1) to derive sharper Bayes regret bounds.\ $\mathbf{(2)}$ It seems difficult to apply existing gap-dependent frequentist regret bounds for UCB-type algorithms to derive instance-gap-dependent Bayes regret bounds for multi-task hierarchical Bayesian bandit problems. To the best of our knowledge, the existing instance-dependent frequentist regret bound for UCB-type algorithms is in [1, Theorem 5], and it is not suitable to be applied in our setting due to the following two reasons: $\mathbf{(i)}$ [1, Theorem 5] assumes the boundedness of the instance parameter $\theta _{s,\star}$, and hence could not be applied to analyze Gaussian bandit instances, where the instance parameter $\theta _{s,\star}$ is not uniformly bounded. $\mathbf{(ii)}$ The frequentist regret bound in [1, Theorem 5] is lengthier than the single-task Bayes regret bound for UCB in [3, Theorem 5], and hence could not lead to a concise multi-task Bayes bound as in our Theorem 5.2. **Q4. In Line 53, there is a typo. I guess it should be “latest” instead of “latext”.**\ A4: Thanks for pointing this out. We will correct it in the final version. **Q5. Can you rotate Table 4?
It is hard for readers who use a desktop.**\ A5: Sorry for the inconvenience. Because Table 4 is much wider than it is tall, we chose a horizontal layout for it. **References**\ [1] Improved Algorithms for Linear Stochastic Bandits. NeurIPS 2011.\ [3] Finite-Time Logarithmic Bayes Regret Upper Bounds. NeurIPS 2023.
Rebuttal 1: Rebuttal: # Response to All Reviewers We sincerely thank all reviewers for their detailed reading and constructive comments. Below, we first address one concern that is common among the reviewers, and then address the concerns of each reviewer individually: **Common Question 1. Discuss the novelty in the techniques beyond using the improved inequality from [21]**\ A1: Besides the application of the improved inequality from [21], our technical novelties lie in the following three aspects, which we will clarify in the revised version: \ $\mathbf{(1)}$ For the improved regret bound for HierTS in Theorem 5.1 in the sequential bandit setting, our proof has two novelties: $\mathbf{(i)}$ We use a more technical positive semi-definite matrix decomposition analysis (i.e., our Lemma B.1) to reduce the multiplicative factor $\kappa^{2}(\Sigma _{0})$ to $\kappa(\Sigma _{0})$. $\mathbf{(ii)}$ We define a new matrix $\tilde{X} _{s,t}$ such that the denominator in the regret is $\sigma^{2}+B^{2}\lambda _{1}(\Sigma _{0})$ rather than just $\sigma^{2}$, avoiding the case where the variance alone serves as the denominator. 
Such technical novelties are also listed in our Table 4.\ $\mathbf{(2)}$ For the improved regret bound for HierBayesUCB in Theorem 5.2 in the sequential bandit setting: our technical novelty lies in decomposing the Bayes regret $\mathcal{BR}(m,n)=\mathbb{E}\sum _{t\geq1}\sum _{s \in \mathcal{S} _{t}}\Delta _{s,t}$ into three terms: $$\mathbb{E}\sum _{t\geq1}\sum _{s \in \mathcal{S} _{t}}\Delta _{s,t}=\mathbb{E}\sum _{t \geq 1,s \in \mathcal{S} _{t}}\Delta _{s,t}\big[\mathbf{1}\lbrace\Delta _{s,t}\geq \epsilon, E _{s,t}\rbrace + \mathbf{1}\lbrace\Delta _{s,t}< \epsilon,E _{s,t}\rbrace +\mathbf{1}\lbrace \bar{E} _{s,t}\rbrace\big],$$ and bounding the first term with a new method as well as a specific property of the BayesUCB algorithm as follows: $$ \mathbb{E}\Delta _{s,t} \mathbf{1}\lbrace\Delta _{s,t} \geq \epsilon, E _{s,t}\rbrace=\mathbb{E}\frac{\Delta _{s,t}^{2}}{\Delta _{s,t}} \mathbf{1}\lbrace\Delta _{s,t} \geq \epsilon, E _{s,t}\rbrace \leq \mathbb{E}\frac{C _{t,s, A _{s,t}}^{2}}{\Delta _{\min}^{\epsilon}},$$ resulting in the final improved gap-dependent regret bound for HierBayesUCB as follows: $$ \big(\sum _{t\geq1,s\in \mathcal{S} _{t}}\lVert A _{s,t}\rVert _{\hat{\Sigma} _{s,t}}^{2}\log{\frac{1}{\delta}}\big)/\Delta _{\min}^{\epsilon} \leq O\big(m\log{(n)}\log{\frac{1}{\delta}} \big),$$ which is of order $ O\big(m\log{(n)}\log{(mn)} \big)$ if we set $\delta = \frac{1}{mn}$.\ $\mathbf{(3)}$ For the improved regret bounds for HierTS and HierBayesUCB in the concurrent setting and in the sequential semi-bandit setting: besides the aforementioned technical novelties in $\mathbf{(1)}$ and $\mathbf{(2)}$, the additional technical novelty lies in leveraging a more refined analysis (e.g., using the Woodbury matrix identity) to bound the gap between the matrices $\bar{\Sigma} _{t+1}^{-1}$ and $\bar{\Sigma} _{t}^{-1}$ (more details can be found in Lemma D.1 and Equation (6) on Page 22). 
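As an aside for readers unfamiliar with it, the Woodbury matrix identity invoked in $\mathbf{(3)}$ can be sanity-checked numerically. The sketch below uses generic matrices of our own choosing, not the actual posterior covariances $\bar{\Sigma}_{t}$ from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
k, r = 5, 2

# Woodbury: (A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}
A = np.diag(rng.uniform(1.0, 2.0, size=k))   # invertible base matrix
U = rng.standard_normal((k, r))              # low-rank update factors
C = np.eye(r)
V = U.T

lhs = np.linalg.inv(A + U @ C @ V)
A_inv = np.linalg.inv(A)
rhs = A_inv - A_inv @ U @ np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U) @ V @ A_inv

assert np.allclose(lhs, rhs)  # inverting a k x k update reduces to an r x r inverse
```

The identity is what makes rank-one posterior-covariance updates cheap to analyze, since the gap between consecutive inverse covariances reduces to a low-dimensional correction term.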
Beyond these concerns, we also gratefully thank the reviewers for pointing out minor typos, which we will fix in the revised version.
NeurIPS_2024_submissions_huggingface
2024
A Canonicalization Perspective on Invariant and Equivariant Learning
Accept (poster)
Summary: The paper introduces a canonization perspective for designing frames in neural networks. Canonization maps inputs to their canonical forms, allowing efficient and even optimal frame design. The paper shows the connection between frames and canonical forms, leading to the development of new frames for eigenvectors under Orthogonal Axis Projection (OAP) that outperform existing methods. The reduction to canonization uncovers equivalences between previous methods and unifies existing invariant and equivariant frame averaging approaches. Strengths: - Theoretically describing frame through canonization is beneficial. Understanding the equivalence between frame and canonization (Theorem 3.1) is great, and the theoretical definition of optimal canonization (Theorem 3.2) is straightforward. - The primary method, OAP, uses Gram-Schmidt orthogonalization to canonicalize eigenvectors and resolve eigenvalue degeneracy (or basis ambiguity), which is also straightforward. Weaknesses: ### **Major**: The overall writing could be improved. The authors claim to present a unified and essential view of equivariance learning, but the application is limited to the orthogonal group and eigenvector canonization. I would expect more examples to justify the theorem, such as the canonization view on the translation group or the product group $E(d)\times S_n$, similar to what frame averaging [1] achieves. ### **Minor**: - The OAP results appear to be marginal, though considering this as a theoretical paper compensates for this shortcoming. I would like to know what is the result of MAP + LSPE in ZINC 500k compared to OAP + LSPE. - Please add references in the main text to the appendix for proofs and n-body experiments to enhance readability. - I suggest the authors add related works including [2] and [3]. 
I also hope the authors can give a discussion about the highly related concurrent works [4] and [5] to strengthen the contribution, not necessarily in this rebuttal since they are both new papers accepted in this ICML 2024, but please do consider it after this rebuttal. [1] *Frame Averaging for Invariant and Equivariant Network Design*. Omri Puny, et al. ICLR 2022. [2] *Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance*. Jinwoo Kim, et al. NeurIPS 2023. [3] *A Hitchhiker’s Guide to Geometric GNNs for 3D Atomic Systems*. Alexandre Duval, et al. [4] *Equivariant Frames and the Impossibility of Continuous Canonicalization*. Nadav Dym, et al. ICML 2024. [5] *Equivariance via Minimal Frame Averaging for More Symmetries and Efficiency*. Yuchao Lin, et al. ICML 2024. Technical Quality: 3 Clarity: 2 Questions for Authors: In line 992 (Appendix E.6), why is $\\{Pe_i\\}_{1≤i≤n}$ full rank? Shouldn't it have a rank of $\text{min}(n,d)$? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors do mention one of the limitations is that OAP does not fully resolve the optimality of eigenvector canonization under permutation equivariance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 8tUB for the constructive review and for acknowledging our theoretical contributions. We address your main concerns as follows. --- **Q1.** The overall writing could be improved. The authors claim to present a unified and essential view of equivariance learning, but the application is limited to the orthogonal group and eigenvector canonization. I would expect more examples to justify the theorem, such as the canonization view on the translation group or the product group $E(d)\times S_n$, similar to what frame averaging [1] achieves. **A1.** Thanks for the suggestions. We will keep improving the writing to be clearer and more readable. For the application of canonization, we provided multiple application scenarios of canonization in the paper: - **sign and basis equivariance**: we design optimal canonization algorithms (OAP) for sign and basis equivariance of eigenvectors, and show clear benefits in efficiency and performance. - **permutation equivariance**: in the EXP experiment (Section 5.1), we use canonization to achieve permutation equivariance on graph data (not sign/basis invariance of eigenvectors). - **the product group of the permutation group and the sign/basis transformation group:** for the graph case, we do consider the product group and show a strictly better canonization design (OAP for graphs). - **rotation equivariance:** In Appendix D, we applied canonization to achieve rotational equivariance of particles in PCA-frame methods. Besides, it is also easy to extend our results to the **translation group**. For example, a translation that moves the center of the input to the origin is a canonization under translation. Combining this canonization with the orthogonal group gives a canonization under the Euclidean group. In summary, we believe this evidence shows that our method is general enough to be applied to various tasks and different symmetries with improved efficiency over frame averaging. 
We will add these elaborations in the revision following your advice. --- **Q2.** I would like to know what is the result of MAP + LSPE in ZINC 500k compared to OAP + LSPE. **A2.** Following your suggestion, we further evaluate MAP + LSPE on ZINC. As shown in the following table, OAP can still outperform MAP under similar parameter constraints. | Model | #Param | MSE | | --- | --- | --- | | GatedGCN + MAP + LSPE | 475K | 0.101 ± 0.001 | | GatedGCN + OAP + LSPE | 491K | 0.098 ± 0.0009 | | PNA + MAP + LSPE | 550K | 0.104 ± 0.005 | | PNA + OAP + LSPE | 549K | 0.095 ± 0.004 | --- **Q3.** Please add references in the main text to the appendix for proofs and n-body experiments to enhance readability. **A3.** Thanks for the suggestion. We will provide these references in our revision. --- **Q4.** I suggest the authors add related works including [2] and [3]. I also hope the authors can give a discussion about the highly related concurrent works [4] and [5] to strengthen the contribution, not necessarily in this rebuttal since they are both new papers accepted in this ICML 2024, but please do consider it after this rebuttal. **A4.** Thanks for pointing out these related works! They are definitely related to the topic and we will discuss them in the revision. Among them, Kim et al [2] proposed a probabilistic frame averaging to achieve equivariance. Duval et al [3] gave a comprehensive review of geometric GNNs for 3D atomic systems, and introduced a taxonomy of geometric GNNs. We believe canonization methods can serve as a new direction for achieving the required equivariance of 3D atomic systems. More relevantly, Dym et al [4] proved the impossibility of continuity for canonizations with single canonical forms, which explains the subpar performance of some current canonization methods. Since in our framework the canonical form is defined more generally as a set rather than a single element, it allows for more flexibility and continuity with weighted averaging. 
Lin et al [5] proposed a framework of canonization that assumes a single canonical form, so their framework is essentially a special case of our framework obtained by taking $|\mathcal{C}(X)|=1$. The “minimal frame” in their paper is just the induced frame of an optimal canonization in our framework. However, the assumption of a single canonical form has the following key limitations: 1. **Theoretical Impossibility.** A single canonical form may be theoretically impossible under some constraints. For instance, canonization of LapPE requires permutation equivariance; in this case it is theoretically impossible to find a single canonical form for some eigenvectors. 2. **Computational Intractability.** A single canonical form may be computationally intractable to find. For instance, computing graph canonization is NP-hard. In this case, we have to adopt an approximate approach that admits a set of canonical forms, as in our EXP experiment. Even if we could find a single canonical form for all inputs, as pointed out by [4], such canonizations are still not continuous, which hurts their performance. In summary, compared to previous canonization works that only consider single-element canonization, which may be unrealizable, we take canonizability into account throughout the analysis and thus give a more rigorous and general framework that works for both canonizable (single canonical form) and uncanonizable (no single canonical form) inputs. We will add these discussions in our paper. --- **Q5.** In line 992 (Appendix E.6), why is $\\{Pe_i\\}_{1\leq i\leq n}$ full rank? Shouldn't it have a rank of $\min(n,d)$? **A5.** In our paper $d$ refers to the dimension of the eigenspace, so we must have $d\leq n$. We will clarify this point in the proof. --- We thank Reviewer 8tUB again for the valuable review. We hope our response addresses your concerns. If you have further concerns, we are happy to address them in the discussion period. 
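To make the single-canonical-form setting and the continuity issue raised by [4] concrete, here is a toy sketch of our own (not code from any of the discussed papers): canonicalizing a vector under the sign group $\lbrace +1,-1\rbrace$ by flipping it so the first nonzero entry is positive yields a sign-invariant map that is necessarily discontinuous.

```python
import numpy as np

def sign_canonicalize(v):
    """Pick a single canonical representative of the orbit {v, -v}:
    flip the sign so the first nonzero entry is positive."""
    for x in v:
        if x != 0:
            return v if x > 0 else -v
    return v  # the zero vector is its own canonical form

v = np.array([-0.3, 0.7])
# v and -v map to the same canonical form (sign invariance)
assert np.allclose(sign_canonicalize(v), sign_canonicalize(-v))

# Discontinuity: two nearby inputs whose leading entries have opposite signs
a = sign_canonicalize(np.array([ 1e-9, -1.0]))  # kept as-is
b = sign_canonicalize(np.array([-1e-9, -1.0]))  # flipped
assert np.linalg.norm(a - b) > 1.9              # far apart despite close inputs
```

The last two lines illustrate why a set-valued canonical form with weighted averaging can restore continuity where any single-element choice cannot.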
--- Rebuttal Comment 1.1: Title: Official Comment by Reviewer 8tUB Comment: Thank you for your responses and I am satisfied with them. Please do include these discussions in the paper in future revisions. I have changed the score accordingly.
Summary: This paper highlights a one-to-one connection between frames over finite groups and ‘canonization’ over the space on which the group acts. This allows the authors to prove non-universality of SignNets. Furthermore, the authors claim that this view helps highlight equivalence of certain existing algorithms (MAP and FA-lap) arising in sign invariance and permutation equivariance problems, and ultimately leads to a proposal of a novel algorithm OAP. Strengths: Given the recent rise in papers covering the topic of frame averaging, this paper proposes a more useful view, by considering ‘canonizations’ instead, which are always smaller than or equal in size to the corresponding frames. The proposed view turns out to be useful for proving universality under sign invariance and highlights issues with existing approaches for permutation equivariance - this makes it interesting to investigate in the setting of other groups. The appendix seems to be quite informative and well-written. Weaknesses: This paper reads as if it was extremely rushed, resulting in a completely unreadable manuscript. There are a number of places where terms are not defined, the problem settings are not stated at all, and connections to existing work are not stated/clarified. Theorems are almost always presented without stating the full problem and without a clear outline of the assumptions. This is particularly true for superiority proofs, which clearly assume something about the hash function, but nothing is ever stated in regard to it. In addition to that, it is not immediately clear where the ‘canonization’ view can be useful, beyond the example provided in this paper. Unfortunately, I am unable to comment on the numerical section, as I am not familiar with these experiments. However, given the problems in the rest of the manuscript, I urge the other reviewers to have a closer look at the numerical evaluation. 
Technical Quality: 3 Clarity: 1 Questions for Authors: Below I provide a number of comments and questions. * First and foremost, while I realize the authors must be approaching from the graph side of the literature - within frame averaging the term used in all papers is ‘canonicalization’, and not ’canonization’, and I would suggest for consistency to refer to it as such, to avoid creating confusion. * In subsection 2.1, as soon as frame averaging is presented, an assumption is made about finiteness of the group - but is never stated. Furthermore, after section 3.2, the vector spaces considered becomes simply finite dimensional Euclidean spaces. * Line 87 and 95 - universal approximation is claimed, however neither a citation, nor a theorem reference is provided. * In 110 you refer to LapPE. Ideally after introducing a term, you would directly provide a reference to it, instead of at the end of the next sentence. * Line 128 - why are automorphisms of the input a major obstacle in analyzing complexity of frames? * Line 144 - what you refer to as contractive ‘canonization’, is usually referred to as orbit canonicalization. Eg see https://arxiv.org/abs/2402.16077 * Line 175 - the same thing is stated twice. * In 3.3 and onwards, MAP is mentioned, however, since the work is built on top of it, an introduction to MAP (or at least an outline of what it is) should be present in the paper. * In line 205 what is the norm of a vector? * Whenever a theorem is stated, a reference to where the proof can be found should be stated. * In both section 3 and 4 the actual problem being solved is never stated, making it very difficult to follow. By this I mean that given a matrix, you find its eigenvectors, and wish to construct a function which is permutation equivariant … * Line 262 - what is the hash function? I can not seem to find any information about what it is, not in numerical experiments either, could you point to where this information is located? 
* You mention that you have not been able to show that OAP is optimal - given the indices may not exist, how could it be optimal? * Line 304 - you never say what FA/CA is - I presume this is frame averaging and canonisation averaging? * Whenever superiority or optimality is claimed in the sense of the new definitions, it would be useful to refer the reader to the definition, as it is not clear whether superiority is meant as a technical term, or simply as a qualitative assessment. (e.g. line 204) Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer TcMZ for the careful reading and the constructive review. We acknowledge that this paper uses conventional notations from the GNN theory literature, such as the hash function, without detailed explanations. We will definitely improve the clarity of the writing and the readability of the paper. Below, we carefully address your main concerns. --- **Q1.** This is particularly true for superiority proofs, which clearly assume something about the hash function, but nothing is ever stated in regards to it. **A1.** We note that the hash function in our paper is consistent with its usage in the GNN theory literature, where the hash function is often directly used without further explanation (see, e.g., the seminal work [1] among many others). As a common concept in computer science, a hash function maps identical inputs to identical outputs, while different inputs result in distinguishable outputs. In our experiments, we use $\operatorname{hash}(h_i,\\{\\!\\!\\{h_j\\}\\!\\!\\})=h_i+\sum h_j^3$, though it could be substituted with other choices. We will clarify this in our experiments section. **References:** [1] Xu, Keyulu, et al. "How powerful are graph neural networks?" ICLR 2019. --- **Q2.** In addition to that, it is not immediately clear where the ‘canonization’ view can be useful, beyond the example provided in this paper. **A2.** We note that canonization is a generic method, and it can be applied to scenarios beyond those demonstrated in our paper. The major advantage of canonization is two-fold. - Theoretically, it enables us to have an essential view of frames and frame averaging, a formal hierarchy of frame complexity, and a principled path to designing optimal frames. With this theory, we can show the non-universality of SignNet, which is a critical open problem. 
- Empirically, we show that canonization can be applied in many different scenarios with different types of symmetries: - **sign and basis equivariance**: we design optimal or better algorithms (OAP) for sign and basis equivariance of eigenvectors, and show clear benefits in efficiency and performance. - **permutation equivariance**: in the EXP experiment (Section 5.1), we use canonization to achieve permutation equivariance on graph data (not sign/basis invariance of eigenvectors). - **rotation equivariance:** In Appendix D, we applied canonization to achieve rotational equivariance of particles in PCA-frame methods. --- **Q3.** First and foremost, while I realize the authors must be approaching from the graph side of the literature - within frame averaging the term used in all papers is ‘canonicalization’, and not ’canonization’, and I would suggest for consistency to refer to it as such, to avoid creating confusion. **A3.** Thanks for the suggestion. We agree with this and will switch to "canonicalization" in the revision for better consistency. --- **Q4.** In subsection 2.1, as soon as frame averaging is presented, an assumption is made about finiteness of the group - but is never stated. Furthermore, after Section 3.2, the vector spaces considered becomes simply finite dimensional Euclidean spaces. **A4.** This is true. We will clarify these assumptions in our paper. --- **Q5.** Line 87 and 95 - universal approximation is claimed, however neither a citation, nor a theorem reference is provided. **A5.** Thanks for pointing this out. We will cite the frame averaging paper after these claims. --- **Q6.** Line 128 - why are automorphisms of the input a major obstacle in analyzing complexity of frames? **A6.** The size of the automorphism group (and thus the frame) depends on the input --- more "symmetric" inputs with larger automorphism groups lead to larger computational complexity in the averaging step. 
Instead, the size of canonical forms depends less on the input and is more stable. For example, an optimal canonization would have size 1 on all inputs, while the corresponding frame size could be exponential (e.g., for graphs). For the sign ambiguity of LapPE, some eigenvectors become uncanonizable, but their canonization size is still less than or equal to 2, while their corresponding frame sizes could be exponentially large depending on the input. --- **Q7.** Line 144 - what you refer to as contractive ‘canonization’, is usually referred to as orbit canonicalization. **A7**. Thanks for pointing this out. We will switch to the term orbit canonicalization for consistency. --- **Q8.** In line 205 what is the norm of a vector? **A8.** On line 205 $|\cdot|$ refers to the element-wise absolute value. We apologize for the confusion and will clarify this in the theorem. --- **Q9.** In both section 3 and 4 the actual problem being solved is never stated, making it very difficult to follow. By this I mean that given a matrix, you find its eigenvectors, and wish to construct a function which is permutation equivariant … **A9.** Indeed, we will state the problem more clearly for better readability. --- **Q10.** You mention that you have not been able to show that OAP is optimal - given the indices may not exist, how could it be optimal? **A10.** OAP is optimal iff such indices always exist for **canonizable** inputs. So far we have not been able to construct inputs for which these indices do not exist; therefore we are uncertain whether OAP is optimal. --- **Q11.** Line 304 - you never say what FA/CA is - I presume this is frame averaging and canonisation averaging? **A11.** This is true. We defined FA on line 29 and CA on line 146. Nevertheless, we will make sure to clarify this in the main text. --- **Q12.** The mentioned typos and suggestions. **A12.** We will fix them in the revision. 
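For concreteness, the hash function mentioned in A1, which combines a node feature with the multiset of its neighbors' features, can be sketched as follows (a toy illustration of our own; any function separating distinct multisets would serve the same role):

```python
def node_hash(h_i, neighbor_feats):
    """hash(h_i, {{h_j}}): combine a node feature with the multiset of its
    neighbors' features, as in WL-style GNN aggregation updates."""
    return h_i + sum(h ** 3 for h in neighbor_feats)

# Identical multisets give identical outputs (order-invariant) ...
assert node_hash(1, [2, 3]) == node_hash(1, [3, 2]) == 36
# ... while different multisets are (typically) distinguished
assert node_hash(1, [2, 3]) != node_hash(1, [1, 4])
```

The cubic sum is just one concrete choice that is invariant to neighbor ordering while separating most distinct multisets in practice.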
--- We thank Reviewer TcMZ again for carefully checking our paper and providing many constructive suggestions. We will surely modify our paper according to your suggestions. If you have further concerns, we are happy to address them in the discussion stage. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their clarifications and answers. I would like to emphasise that my only concern at this point remains in readability of the paper on two fronts: problem statements and theorem statements. As long as these two are improved in the revision in line with the comments, I would be happy to raise my score. --- Reply to Comment 1.1.1: Comment: We thank Reviewer TcMZ for the prompt response. We clarify our problem statements and theorem statements in the following points, and we will add these discussions to the camera-ready version of our paper for better readability. --- ### Problem Statements Our paper consists of two parts, each addressing different problems. We describe the problems in these sections in detail as follows: 1. **Section 3** focuses on the theoretical problem of finding a principled way (i.e., canonicalization) to characterize the complexity of frames $\mathcal{F}(X)$ and frame averaging, a general class of invariant and equivariant learning methods. We have included the definitions of invariant and equivariant learning in Section 2, and the definitions of frames and frame averaging in Section 2.1. We will state the main problem more clearly at the beginning of Section 3 for better readability. 2. Guided by the theoretical insights in Section 3, **Section 4** aims to design better or optimal canonicalization algorithms for a widely encountered class of problems, the sign and basis invariance of eigenvectors. 
Specifically, we aim to design a canonicalization algorithm $\mathcal{C}$ operating on eigenvectors $\mathbf{U}\in\mathbb{R}^{n\times d}$, that is **invariant** to sign/basis transformations, **equivariant** to permutation transformations, and outputs a set of eigenvectors $\mathbf{U}^*\in\mathbb{R}^{n\times d}$ in the **same eigenspace** as $\mathbf{U}$. We consider two settings of sign and basis invariance: without (Section 4.1) and with (Section 4.2) permutation equivariance, corresponding to different problem scenarios. We will make this clearer at the beginning as well. --- ### Theorem Statements We understand that we define some notations outside the theorems, making them less self-contained and harder to follow. To ease your concerns, we will define the notations more clearly, point out the key messages of the main theorem, and add references to previous definitions. Below we give two examples of the modified theorem statements: - **Theorem 4.1.** Given a set of eigenvectors $\mathbf{U}\in\mathbb{R}^{n\times d}$, let $\mathscr{P}=\mathbf{UU}^\mathrm{T}$ denote the projection matrix of the eigenspace. Let $\mathbf e_1,\dots,\mathbf e_n$ denote the standard basis vectors. Then, there exist indices $1\leq i_1<\cdots<i_d\leq n$, such that for all $1\leq j\leq d$, we have $\lVert\mathscr P\mathbf e_{i_j}\rVert>0$, and the vectors $\mathscr P\mathbf e_{i_1},\dots,\mathscr P\mathbf e_{i_d}$ are linearly independent. - **Theorem 4.3.** Let $\alpha_i\ (i=1,\dots,n)$ be the outputs of the hash function in the OAP algorithm defined in Equation (4), and let $i_j\ (j=1,\dots,d)$ be the indices found in Algorithm 3. Then, the MAP algorithm is equivalent to the OAP algorithm by taking $\alpha_i=\lVert\mathscr P_i\rVert$ for all $1\leq i\leq n$ and $i_j=j$ for all $1\leq j\leq d$. The FA-lap algorithm is equivalent to the OAP algorithm by taking $\alpha_i=\mathscr P_{ii}$ for all $1\leq i\leq n$. 
We will modify all the theorems in our paper similarly to enhance readability and self-containedness. These changes will be reflected in the camera-ready revision of our paper. We hope these changes address your concerns. If you have further concerns or suggestions, please feel free to reach out.
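As an illustrative aside (our own sketch, not the paper's code), the existence claim in Theorem 4.1 can be checked numerically with a greedy rank-based selection on a random eigenspace:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 3

# A random d-dimensional eigenspace: orthonormal columns U, projection P = U U^T
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
P = U @ U.T

# Greedily pick indices i_1 < ... < i_d so that the projected standard basis
# vectors P e_{i_j} (i.e., the chosen columns of P) are linearly independent.
chosen = []
for i in range(n):
    if np.linalg.matrix_rank(P[:, chosen + [i]]) > len(chosen):
        chosen.append(i)
    if len(chosen) == d:
        break

assert len(chosen) == d                          # such indices exist
assert np.linalg.matrix_rank(P[:, chosen]) == d  # and their projections are independent
```

Since $\mathscr{P}$ has rank $d$ and its columns span the eigenspace, the greedy scan is guaranteed to find $d$ independent columns, which is exactly the existence statement of the theorem.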
Summary: The work establishes a significant connection between canonicalization and frame averaging, demonstrating an equivalence between the two concepts. By establishing such a relationship, the study efficiently compares the complexity of frames and determines the optimality of frames in relation to the symmetries of eigenvectors. This guides the authors in designing novel frames for eigenvectors that are superior to existing methods, achieving optimality in certain simpler cases. These new frames are both theoretically sound and empirically validated, revealing equivalences between previous methods that had not been identified before. The authors conducted experiments showing the proposed frames' effectiveness on benchmark datasets, achieving higher performance. Strengths: 1. The paper is well-written and nicely explained. 2. The paper provides strong theoretical results. 3. Empirical analysis backs the theoretical reasoning, with the proposed architecture achieving higher performance. Weaknesses: I found the empirical evaluation to be the weak point of the work. The work provides strong theoretical results. However, a more detailed empirical evaluation would make the work more *complete*. For example, considering another molecule dataset (e.g., Alchemy) and the texture reconstruction task (section 4.3 SignNet and Basisnet) 1. Line 98: $G_X$ is not defined before. I think it should be defined here instead of Line 129 2. The proofs of theorems in the supplementary material are not in the order of the main text. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Line 138: “ This converts the problem of finding a G-equivariant subset of the group to finding a G-invariant set of inputs.” - further explanation of this would be great. 2. Line 175: “The canonization size |C(X)| may differ for different inputs.” — what does the size of |C(X)| tell about the input X? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: The authors discussed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer MgpB for the constructive review. We address your concerns as follows. --- **Q1.** A more detailed empirical evaluation would make the work more *complete*. For example, considering another molecule dataset (e.g., Alchemy) and texture reconstruction task (section 4.3 SignNet and Basisnet) **A1.** We provide experiment results on Alchemy in the following table. Due to the limited time of the rebuttal period, we directly followed the SignNet setting and did not tune the hyperparameters for our method, so there is room for improvement. As shown in the table, SignNet, MAP, and OAP all achieve comparable performance. Unfortunately, we are unable to reproduce the texture reconstruction task in SignNet since the code was private and the authors did not release it (see the [SignNet repo](https://github.com/cptq/SignNet-BasisNet)). Following your suggestion, we will add the full experiment results in our paper to make it more complete. | Model | Test MAE | | --- | --- | | GIN | 0.180 ± 0.006 | | SignNet | 0.113 ± 0.002 | | MAP | 0.114 ± 0.0007 | | OAP | 0.115 ± 0.001 | --- **Q2.** Line 98: $G_X$ is not define before. I think it should be defined here instead of Line 129. Proof of theorems in the supplementary is not in order of the main text. **A2.** Thanks for pointing this out. We will define $G_X$ before line 98 and adjust the order of proofs in the appendix. --- **Q3.** Line 138: “ This converts the problem of finding a G-equivariant subset of the group to finding a G-invariant set of inputs.” - further explanation of this would be great. **A3.** Thanks. Here is a more elaborate explanation. In frame averaging, we need to design a "frame" $\mathcal{F}$ that is a subset of the whole group $G$ and is **equivariant** to the group actions in $G$. Theorem 3.1 gives the equivalence of frames and canonization, which allows us to convert the problem of finding a *frame* into finding a *canonization*. 
In canonization, for each input, we need to find a subset of the input space that is **invariant** to the group actions. This set is called the canonical form of the input. Doing so has several advantages: - Firstly, since equivariance is a strong requirement, it is often easier to directly construct an invariant canonical form instead of an equivariant frame. - Secondly, as proved in Theorem 3.1, the canonization size is $|G_X|$ times smaller than the frame size, making canonization more efficient than frames. - Thirdly, canonization theory allows us to characterize the existence of uncanonizable elements when further equivariance constraints are imposed, which further allows us to solve the open problem of the expressivity of SignNet. We will add this elaboration in the revision. --- **Q4.** Line 175: “The canonization size |C(X)| may differ for different inputs.” — what does the size of |C(X)| tell about the input X? **A4.** The canonization size $|\mathcal{C}(X)|$ reveals **the extent of symmetry (w.r.t. group** $G$) **of the input**, in that **more symmetric inputs often lead to a larger canonization size**. For example, more symmetric graphs with more automorphism (e.g., a regular graph) often result in a larger canonization size. Specifically, the canonization sizes of graphs in the ZINC molecular datasets are mostly within the order of $10^3$, while in the more symmetric EXP dataset, the canonization sizes are in the order of $10^{19}$. We will elaborate on this point in the revision. --- We hope our response addresses your concerns. If you have further concerns, we will be happy to address them during the discussion period. --- Rebuttal Comment 1.1: Comment: Thanks for the response.
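The intuition in A4, that more symmetric inputs have larger automorphism groups and hence larger frames, can be checked by brute force on toy graphs of our own choosing; for instance, the 4-cycle has 8 automorphisms while the 4-path has only 2:

```python
import itertools
import numpy as np

def num_automorphisms(A):
    """Count permutation matrices P with P A P^T == A (brute force over S_n)."""
    n = len(A)
    count = 0
    for perm in itertools.permutations(range(n)):
        P = np.eye(n, dtype=int)[list(perm)]
        if np.array_equal(P @ A @ P.T, A):
            count += 1
    return count

path4  = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])  # path 1-2-3-4
cycle4 = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])  # 4-cycle

assert num_automorphisms(path4) == 2   # identity and reversal
assert num_automorphisms(cycle4) == 8  # dihedral group of the square
```

A frame must contain one group element per automorphism-related choice, so its size scales with the automorphism count, whereas a canonical form can remain a single (or small) set of representatives.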
Summary: This work makes connections between two model-agnostic approaches to designing equivariant networks: frame averaging and canonization. It is first shown that any function obtained using frame averaging can also be obtained using canonization. Then it is shown that canonization is computationally more efficient than frame averaging. Further, it is shown that not all elements are "canonizable", i.e., some elements may not have a unique canonical form. This insight helps prove that SignNet and BasisNet are not universal, and the authors further propose Orthogonalized Axis Projection (OAP), which leads to optimal designs for the problem of sign and basis equivariance when permutation equivariance is not required. The optimality of the case when permutation equivariance is required remains unsolved. Experiments on several graph datasets validate the effectiveness of the proposed OAP method. Strengths: - The paper is well-written. - The connection between canonization and frame averaging is very interesting. It helps resolve the issue of universality of sign and basis networks. Further, for the case without permutation, a universal algorithm is also proposed. - Experimental results show that the proposed method is expressive and computationally more efficient than frame averaging. - Experimental results on various graph datasets show superior performance to prior methods. Weaknesses: - Could you please provide a comparison of compute memory and time for all the networks in the experiments? Especially, a comparison with non-frame-averaging methods such as GIN would give readers better insight into their applicability. Currently, it is not clear how much more expensive/cheap FA/canonicalization is compared to baselines such as GIN. - In Tab. 2, the practical advantage of canonization over frame averaging in computational complexity for graph datasets seems negligible because of the huge absolute compute time. In Tab. 2, how is the averaging done over such a large set? 
Please provide any preprocessing time required for such computations as well (if applicable). - Although the initial results in the paper are applicable to the general area of equivariant learning, the applications of the initial results seem to be only useful in the context of equivariance to sign and basis. But the title of the paper has no mention of sign and basis, which makes it look more generally useful than what the experiments indicate. Technical Quality: 3 Clarity: 3 Questions for Authors: - In Tab. 1, out of curiosity, what happens if we use only GIN+ID? I am curious because GIN+ID is universal (unlike GIN), so the results on GIN+ID would help understand the benefit of permutation equivariance. Also, some results/plots on the convergence benefits from equivariance would help. - Are there any other domains where the results on the connection between frame averaging and canonization are directly applicable? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer y3Wh for the constructive review. We address your concerns as follows. --- **Q1.** Could you please provide a comparison of compute memory and time for all the networks in the experiments. Especially, comparison with non-frame-averaging methods such as GIN would give better insights to the readers on their applicability. Currently, it is not clear how much more expensive/cheap FA/canonicalization is compared to baselines such as GIN. **A1.** Yes! Methodologically, the proposed canonization algorithm is a pure preprocessing algorithm that does not increase the training time, and the pre-processing time is negligible compared with training time. On the other hand, FA methods such as SignNet increase the training time. The canonization algorithm also has no influence on the memory. We compare the time and memory of canonization methods with their non-FA backbone in the following table. Using canonization algorithms only increases the pre-processing time of the backbone, which is negligible compared to the training time. On the other hand, the two-branch architecture of SignNet increases the training time and memory. We will include all compute time and memory statistics in our paper. | Model | Pre-processing time | Training time | Total Time | Memory | | --- | --- | --- | --- | --- | | GatedGCN backbone | - | 3h26min | 3h26min | 1860MiB | | GatedGCN + SignNet | 30.03s | 4h13min | 4h13min | 2124MiB | | GatedGCN + MAP | 133.67s | 3h20min | 3h22min | 1850MiB | | GatedGCN + OAP | 186.38s | 3h25min | 3h28min | 1860MiB | | PNA backbone | - | 16h31min | 16h31min | 2242MiB | | PNA + SignNet | 30.03s | 18h1min | 18h1min | 2570MiB | | PNA + MAP | 133.67s | 16h47min | 16h49min | 2244MiB | | PNA + OAP | 186.38s | 14h54min | 14h57min | 2312MiB | --- **Q2.** In Tab. 2, the practical advantage of canonization over frame averaging in computational complexity for graph dataset seems negligible because of the huge absolute compute time. 
In Tab. 2, how is the averaging done over such a large set? Please provide any preprocessing time required for such computations as well. **A2.** Since the frame and canonization sizes are both extremely large, we need to subsample them in practice. For a fair comparison, we adopt the same subsampling size. In this case, **the advantage of canonization in averaging complexity translates into better sample efficiency for approximating the averaging expectation, which leads to faster training convergence, as we proved in Appendix C**. As for computing the frame/canonization of the same size, frame/canonization takes 2.53s and 3.15s for pre-processing, respectively, whose difference is negligible compared to the training time of 67.37s. --- **Q3.** Although the initial results in the paper are applicable to the general area of equivariant learning, the applications of the initial results seem to be only useful in the context of equivariance to sign and basis. But the title of the paper has no mention of sign and basis, which makes it look more generally useful than what the experiments indicate. **A3.** We note that our canonization perspective established in Section 3 is generic and applicable to different types of equivariance. In Section 4, we focus on sign/basis invariance because it is one of the most challenging problems, with exponentially large group sizes. Apart from that, we also applied our canonization algorithms to other symmetries, including: - **permutation equivariance** of graph data in the EXP experiment; - **rotational equivariance** of particles in Appendix D. These new applications illustrate the generality of our analysis and the proposed algorithms, which leads to the use of a more general title. We will elaborate more on this part to avoid any confusion. Thanks! --- **Q4.** In Tab. 1, out of curiosity, what happens if we use only GIN+ID? 
I am curious because GIN+ID is universal (unlike GIN), so the results on GIN+ID would help understand the benefit of permutation equivariance. Also, some results/plots on the convergence benefits from equivariance would help. **A4.** Following your suggestion, we evaluate GIN with two kinds of commonly used node IDs: one-hot node IDs and random node features. 1. Models with only one-hot node IDs are **not robust**: when we apply a random permutation to the input graph during training, the model fails with only 0.5 accuracy, while FA-GIN+ID is not affected. This shows that without permutation equivariance, the model is not able to learn real structural information. 2. Models with only random features **converge much slower** than the deterministic FA-GIN+ID model (see a training progress comparison in Figure 3 of [1]). This is because it’s hard to learn from random features without exact permutation symmetry. The above results show that lacking permutation equivariance hurts the robustness and convergence speed of GNNs. Real-world tasks are much more complex than EXP, so lacking equivariance could harm the performance of GNN models significantly. **References:** [1] Ma et al. Laplacian canonization: A minimalist approach to sign and basis invariant spectral embedding. *NeurIPS 2023*. --- **Q5.** Are there any other domains where the results on the connection between frame averaging and canonization are directly applicable? **A5.** Since our analysis applies to general groups, the connection applies to any type of equivariance. In this paper, we have explored **sign/basis invariance, permutation equivariance, and orthogonal equivariance**. In future work, it is also possible to apply frames and canonization to the rotational equivariance of point clouds/molecules, the permutation equivariance of multisets, etc. Therefore, we believe that canonization can serve as a new generic perspective for understanding and attaining equivariance. 
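The subsampling strategy mentioned in A2 can be sketched in a few lines (a hypothetical Monte Carlo illustration, not the actual implementation): averaging a function over the exponentially large sign-flip group is approximated by averaging over randomly sampled group elements, and a smaller frame/canonization improves the sample efficiency of this estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def subsampled_sign_average(f, V, n_samples=32):
    # Monte Carlo estimate of averaging f over the sign-flip group
    # acting on the k columns of V (full group size is 2^k, far too
    # large to enumerate): draw random sign patterns and average.
    k = V.shape[1]
    outs = []
    for _ in range(n_samples):
        signs = rng.choice([-1.0, 1.0], size=k)
        outs.append(f(V * signs))
    return np.mean(outs, axis=0)
```

If `f` is already sign-invariant, every sample gives the same value and the estimate is exact; otherwise its variance shrinks with the number of samples, which is where a smaller canonization helps.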
--- We hope our response addresses your concerns. If you have further concerns, we will be happy to address them during the discussion period. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: I thank the authors for their clarifications. For readability, please provide references in the main text to the results in the appendix. Overall, I am happy to increase my score since most of my concerns are clarified. I still find the applications/usefulness limited and the title of the paper seems more general than the applications. --- Reply to Comment 1.1.1: Title: Thanks Comment: We are glad to hear that our responses have addressed your concerns. We will incorporate references in the main text to the results in the appendix and revise the title to more accurately reflect the content of the paper in the revision. Thank you again and have a good day!
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Efficient Temporal Action Segmentation via Boundary-aware Query Voting
Accept (poster)
Summary: The paper introduces BaFormer, a novel Transformer network designed to improve the efficiency of Temporal Action Segmentation (TAS) while maintaining high performance. BaFormer tokenizes video segments into instance tokens and utilizes instance queries for segmentation along with a global query for boundary prediction, resulting in continuous segment proposals. The model employs a voting strategy during inference to classify segments based on instance segmentation, significantly reducing computational costs. Experiments on popular TAS benchmarks demonstrate BaFormer's superior efficiency with comparable or better accuracy than state-of-the-art methods like DiffAct. Strengths: In general, the proposed method is somewhat simple but effective, and I am satisfied with the paper. The proposed BaFormer utilizes a simple structure but can achieve performance gains and inference speed gains over previous methods. The proposed method also has detailed ablation studies, which verify the effectiveness of the proposed BaFormer. Though BaFormer is a single-stage method, it performs comparably to the most recent two-stage methods. Weaknesses: However, there are some small concerns about the paper. 1. Despite the impressive performance, I think the novelty is limited. The authors utilize existing frameworks, including an existing frame encoder and the ASFormer encoder; also, the decoder is a common query-based decoder. The main novelty lies in the boundary-aware query voting part. However, it is a post-processing step. Thus, in general, the novelty is minor. 2. In Table 1 (teaser), the authors claim that BaFormer with SSTCN and ASFormer backbones performs better than the original module with less running time. However, in Table 7, they have more FLOPs than the original methods but with less inference time. Does the extra running time come from the NMS post-processing? If so, I think it is somewhat unfair here. 
Maybe the authors could list the model forward time and the post-processing time for a better and fairer comparison. Otherwise, it is somewhat confusing here. Technical Quality: 3 Clarity: 3 Questions for Authors: Please mainly see the weaknesses section for my problems. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: the authors have adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for investing time in reviewing our work and acknowledging our contributions. **1. For Weakness 1** (**utilizes existing frameworks**) We understand the reviewer's concerns regarding our use of existing frameworks; however, we employ them solely to achieve the desired functionality. An important aspect we need to clarify is that we focus on how to adopt a new pipeline, i.e., query-based pipeline, into efficient TAS rather than some specific network design. To achieve more efficient TAS, we experimentally find that query-based processing can reduce dense information, thus reducing FLOPs. Therefore, we adopt query-based prediction from a DETR-style model to improve the running time. This lets us establish a new paradigm in TAS, different from the frame prediction in previous work. Our main effort is how to adopt the DETR-style model into TAS to achieve the expected performance, rather than to design some specific network structures. Specifically, we focus on introducing functional modules with effective organization, i.e., frame-wise encoder and decoder, transformer decoder. These modules can be implemented with any network. However, in our paper, as acknowledged by the reviewer, we employ existing frameworks to achieve these functional modules. This approach allows us to: 1) fairly compare with previous methods by sharing the same backbone, and 2) verify the effectiveness of our pipeline, which relies on the new paradigm rather than network design. Furthermore, although these existing frameworks have been successful in previous works, they may not be effective in our new pipeline, which is organized with different functional modules. Therefore, in terms of TAS tasks, we have made numerous efforts to optimize our approach. These efforts include experimenting with different matching strategies, establishing connections between frame decoders and transformer decoders, and implementing auxiliary loss functions. 
We analyze these aspects in detail in the main paper and supplementary materials and hence provide insights for designing a superior pipeline. It is worth mentioning that there are many impactful works that adopt the DETR-style model in different fields, as we do in TAS, e.g., TrackFormer [1] for multi-object tracking, MaskFormer [2] for image segmentation, and CLTR [3] for crowd localization. However, our work is the first such exploration for TAS. [1] T. Meinhardt, A. Kirillov, L. Leal-Taixe, and C. Feichtenhofer. Trackformer: Multi-object tracking with transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8844–8854, 2022. [2] B. Cheng, A. Schwing, and A. Kirillov. Per-pixel classification is not all you need for semantic segmentation. Advances in Neural Information Processing Systems, 34:17864–17875, 2021. [3] D. Liang, W. Xu, and X. Bai. An end-to-end transformer model for crowd localization. In European Conference on Computer Vision, pages 38–54. Springer, 2022. **2. For Weakness 2** (**Running Time**) Thank you for the suggestion to include detailed results to improve our paper. Based on the reviewer's suggestion, we have added the table below, and we find that the original module's additional running time is due to the NMS post-processing. To ensure a fair comparison, we have listed both the forward time and the post-processing time below. Thanks again for the reminder; we will update the manuscript. |Method | Total time(s) | Backbone time(s) | Post-processing time(s) | |:--------:|:----------------:|:----:|:--------:| |SSTCN| 0.080 |0.031 | 0.049| |BaFormer(SSTCN) | 0.074 | 0.040 | 0.034| |ASFormer(Encoder) | 0.158 |0.092 | 0.066 | |BaFormer(ASFormer Encoder) |0.139 |0.105 | 0.034 | --- Rebuttal Comment 1.1: Title: Response to the authors. Comment: Thanks to the authors for the reply; my concerns, especially regarding the fair comparison, are mostly resolved. 
I will keep my current rating as borderline accept, slightly towards 6 due to the novelty issue. --- Reply to Comment 1.1.1: Comment: Thank you very much for your time and effort in reviewing our work. I would appreciate the opportunity to further discuss and address your concerns regarding the novelty of our approach.
Summary: The authors introduce BaFormer, an innovative single-stage boundary-aware Transformer network designed for temporal action segmentation (TAS). This method employs instance queries for instance segmentation and a global query for class-agnostic boundary prediction, achieving substantial reductions in computational costs while preserving or enhancing accuracy relative to state-of-the-art approaches. Experiments across multiple datasets highlight BaFormer's efficiency and effectiveness, demonstrating its capability to achieve competitive results with significantly lower running time and resource usage. Strengths: 1. The proposed approach is interesting and shows good performance on a variety of datasets. 2. The method is clearly described and easy to understand. The whole paper is well-written. 3. Efficient temporal action segmentation is an important research direction. Weaknesses: 1. Lack of ablation studies. The authors are encouraged to add more ablation studies, e.g., regarding different numbers of query tokens. Apart from those ablations, an ablation of the loss function is also interesting, since there are so many losses incorporated in the proposed method. 2. Lack of qualitative results. The authors are also encouraged to provide some qualitative results or a t-SNE figure to show more insights regarding the efficacy of the proposed method compared with some baselines. 3. Although the authors mention that the proposed method is efficient, when we look at Table 1 the proposed approach does not clearly show benefits in terms of model size compared with DiffAct, which is regarded as one weakness. 4. The sensitivity of the parameters would be interesting to analyze, since the proposed method does have a large number of hyperparameters. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How could the variation in the number of query tokens impact the results of the proposed method? 2. 
Can the authors provide a detailed comparison of the different loss functions used in the proposed method? 3. Could you provide examples of baseline methods that should be included in the qualitative comparison? 4. Can you provide examples of how changes in hyperparameters could affect the performance of the proposed method? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No, the authors mention that the limitations are included in the supplementary; however, there are no supplementary materials for this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable suggestions to enrich the analysis of our model, most of which are addressed in the existing supplementary. It looks like you didn’t find the supplementary material -- we double-checked and it’s not a problem on our end, and other reviewers haven’t had this issue. Please inform us or contact the conference organizer/AC if there are any issues with locating or downloading the supplementary material. We indicate the location of each answer in our supplementary material below. (**Weakness 1 and Question 1: more ablation studies and query tokens**) We agree that more ablation studies help to analyze the model further, and we include several of them in the supplementary, such as the number of query tokens, single/multi-level feature connections, auxiliary loss, and other factors. Specifically, as for the number of query tokens, the experiment indicates that a slight increase in the number of queries can enhance the model’s ability without significantly raising computational costs. However, the model’s performance peaks at 100 queries, achieving the highest accuracy of 89.5% at a computational cost of 4.45G FLOPs (for more details, refer to Table 2 of the supplementary material). (**Weakness 1 and Question 2: a detailed comparison of the different loss functions**) In supplementary Table 3, we present the performance results when the auxiliary loss is included under single or multiple features. Notably, applying auxiliary losses to multi-level features yields a substantial improvement in accuracy (2.1%), surpassing the 0.5% accuracy gain observed for single-level features. This suggests that auxiliary losses have a more pronounced effect on multi-level feature representations. (**Weakness 2 and Question 3: qualitative results**) Qualitative results help us analyze segmentation results, especially along the time dimension. 
In the supplementary material, we provide additional qualitative results in Figure 2 to demonstrate the efficacy of our proposed query voting. For the 50Salads dataset, we present enhanced visualizations for a more comprehensive analysis. The figure includes the outcomes of instance segmentation, the segmentation results without boundary integration, and the ground truth. (**Weakness 3: model size compared with DiffAct**) We apologize for the confusion and recognize that we should clarify that efficiency refers to shorter running time. Thus, despite DiffAct's smaller size, its running time is longer than ours. DiffAct is a multi-stage method based on the diffusion model. It is built on a model with a small size but requires 25 iterations to refine the results, significantly increasing the running time. By contrast, our BaFormer requires only 6% of the time used by DiffAct and obtains comparable performance. Directly comparing single-stage methods with multi-stage ones is somewhat unfair; we therefore divide them into two groups in Table 6. Alternatively, we can adjust the models to have similar running times for a fair comparison; we present the results in Table 7. Our method shows superior performance, achieving an accuracy of 89.5% and F1@10 of 89.3, compared to DiffAct's accuracy of 74.2% and F1@10 of 48.3 (more experimental results with different steps for DiffAct are shown in Table 4 of the supplementary material). Furthermore, we can extend our method to a multi-stage one to increase the accuracy to 90.5%, which surpasses the original DiffAct. However, in this paper, we focus on improving the performance of single-stage methods rather than relying on multi-stage refinement. 
(**Weakness 4 and Question 4: more analysis of hyperparameters**) The supplementary material includes an analysis of hyperparameters, such as the number of query tokens (Table 2), the connection between the frame-wise and transformer modules (Table 3), the transformer decoder layers (Table 1), the single or multiple features (Table 3), and so on. (**Limitation**) The limitation is also discussed in the supplementary. Our proposed approach is built on the query-based Transformer framework, which is well known for its slow training convergence and data hunger. We observe that training a query-based Transformer converges more slowly than training frame-wise approaches. As for the discontinuous binary mask predictions, we assume this may also be attributed to the limited data of action segmentation benchmarks. We hope that these factors can inspire future work. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Thank you for your response. However, according to the instructions for the paper submission, the technical appendix should be in the main submission PDF together with your main paper and checklist. Other supplementary materials such as data and code can be uploaded as a ZIP file. But it is OK for me if it does not violate the review policy. Could you provide a more detailed analysis regarding why auxiliary losses work better on multi-level features compared with single-level features? --- Reply to Comment 1.1.1: Title: More analysis of auxiliary loss on multi/single-level features Comment: Thank you again for your valuable feedback and comments. We will put the technical appendix in the main submission PDF. (**auxiliary losses work better on multi-level features compared with single-level features**). We explain how the auxiliary losses work to address your concern about why they work better on multi-level features than single-level features. The auxiliary losses originate from each Transformer decoder layer, excluding the final layer. 
According to the formulation of the outputs of each Transformer decoder layer (the definitions of variables are provided in the paper): $\textbf{X}_i=\text{Linear}\left(\text{softmax}\left(\frac{\textbf{P}^m_{i-1}}{\sqrt{C}}\odot\hat{\textbf{Q}}_i\hat{\textbf{K}}_i^T\right)\hat{\textbf{V}}_i\right)+\textbf{Q}_{i-1}$, the difference in auxiliary losses between single-level and multi-level features lies in the origin of $\hat{\textbf{K}}_i$ and $\hat{\textbf{V}}_i$. Specifically, as illustrated in Figure 1 of the supplementary material about the details of connections, in the single-level feature connection, $\hat{\textbf{K}}_i$ and $\hat{\textbf{V}}_i$ are derived from the outputs of the frame-wise encoder $\textbf{F}_e$ and remain consistent across different Transformer decoder layers. In contrast, in the multi-level feature connection, $\hat{\textbf{K}}_i$ and $\hat{\textbf{V}}_i$ are obtained from the outputs of different frame-wise decoder layers $\textbf{f}_i$, as shown in Equation 2 of the paper. According to backpropagation theory, the frame-wise decoder undergoes more gradient updates in the multi-level feature connection. Moreover, as outlined in Equations 3 and 4 of the paper, since the frame-wise decoder contributes to both query mask and boundary prediction, enhanced parameter learning in this component benefits the entire model.
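The decoder-layer update described above can be written as a shape-level sketch (a simplified single-head NumPy illustration with hypothetical names; the actual model uses learned projections to form $\hat{\textbf{Q}}_i$, $\hat{\textbf{K}}_i$, $\hat{\textbf{V}}_i$):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decoder_layer(Q_prev, K_hat, V_hat, P_mask, W):
    # X_i = Linear(softmax((P^m_{i-1} / sqrt(C)) ⊙ Q K^T) V) + Q_{i-1}:
    # attention logits between queries and frame features are modulated
    # elementwise by the previous layer's query-mask prediction P_mask,
    # followed by a linear map W and a residual connection.
    C = Q_prev.shape[1]
    logits = (P_mask / np.sqrt(C)) * (Q_prev @ K_hat.T)
    return softmax(logits) @ V_hat @ W + Q_prev
```

In the multi-level connection, `K_hat`/`V_hat` would come from a different frame-wise decoder layer at each call, so every auxiliary loss sends gradients back into the frame-wise decoder.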
Summary: This paper proposes a fully-supervised TAS approach that produces both segment and frame predictions. The two levels of predictions are then combined to form the final action boundaries. The approach achieves competitive performance and at the same time reduces inference time for better efficiency. Strengths: 1. The motivation of leveraging segment-wise prediction to help reduce the temporal dimension processed by the model is sound. The design drawing inspiration from image classification looks solid for TAS. 2. The approach successfully boosted the processing efficiency of action segmentation while maintaining high accuracy on common TAS benchmarks. Weaknesses: The reviewer understands that the authors' main stated objective is to reduce the processing time. However, given the design of voting between frame-wise and segment-wise predictions, one expected advantage of having access to segment predictions is to reduce the over-segmentation issue, yet this voting scheme between frame-wise and segment-wise predictions did not achieve the expected effect. On most of the metrics, the proposed approach did not manage to outperform existing SOTA approaches, for example DiffAct, especially on the segmental metrics. The paper is difficult to parse from Sec. 3 onward due to very confusing notation. For example: 1. L.102 frame-wise encoder-decoder: the encoder is the feature extractor, but what is the decoder? Why is it called a decoder? Also, what is the value of the dimension $C$ used in the experiments? The reviewer failed to find this information. 2. Likewise, L.107 $P_i^m = \phi_m(Q_l, F_d)$: what does $m$ indicate? And why are the subscripts inconsistent (m, l, d) in the equation? It makes the reviewer guess at the meaning. The definitions of variables should always be near where the variables are first introduced in the text. 
The authors should consider explaining these notations, for example with all variables at the same layer, i.e., a fixed subscript for all variables, to facilitate a clear understanding. The current rating is borderline reject given the above concerns, but the reviewer would consider raising the rating if the authors could address the issues. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. What is the advantage of leveraging an extra branch of segment prediction? What advantage did it bring in terms of accuracy and combating the issue of over-segmentation? 2. Clarifications regarding the confusing points listed in the weaknesses. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors discussed these in the supp. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging our contributions, and below is our response regarding the reviewer’s concerns. **1. For Weakness**: >However, given the design of voting between frame-wise and segment-wise predictions, one expected advantage of having access to segment prediction is to reduce the over-segmentation issue, yet the effect of such a voting scheme between frame-wise and segment-wise predictions did not achieve the expected effect. On most of the metrics, the proposed approach did not manage to outperform existing SOTA approaches, for example DiffAct, especially on the segmental metrics. **Response**: We apologize for not providing a clear note on how to compare our performance with others, despite having separated the methods into single-stage and multi-stage in Table 6. Since our work focuses on the efficiency aspect of TAS, it is somewhat unfair to compare single-stage (i.e., BaFormer) and multi-stage (i.e., DiffAct) models on segmental metrics. That is why we categorize all methods into multi-stage and single-stage groups, as shown in Table 6. Specifically, multi-stage models involve many stages of refinement, which are time-consuming. Our single-stage model is designed to be more efficient. Although our model shows a 0.5% decrease in F1@10, we improve accuracy by 0.8% on the 50Salads dataset, and notably, our model requires only **6%** of the inference time compared to DiffAct. DiffAct, based on a diffusion model, includes 25 inference steps, significantly increasing the time required to refine segmental metrics. When comparing models with similar inference times, DiffAct achieves **64.9%** on F1@10 and **88.6%** accuracy, while our method achieves **89.3%** on F1@10 and **89.5%** accuracy, demonstrating the superior segmental metrics of our model. Further details can be found in Table 11 of the supplementary. 
We can extend our method to multi-stage to further improve the segmental metrics; however, experimental results indicate that this increases processing time, which contradicts our primary objective in this paper: to design an efficient TAS model, as acknowledged by the reviewer. To demonstrate the effect of our model, we also extended it into a multi-stage framework and modified the visual features, achieving accuracies of 91.2% and 90.8%, which are superior to DiffAct. To fairly assess the effectiveness of the voting mechanism in reducing the over-segmentation issue, we need to compare it within single-stage models. These models all include one stage with post-processing, and comparisons should focus on different processing techniques with similar backbones. For instance, compared to UVAST with FIFA, our method demonstrates superior performance in segmental metrics (88.9% vs. 88.3%), accuracy (89.5% vs. 84.5%), and faster inference (0.139s vs. 1.765s), which results from the query-based model design. **This demonstrates that our approach already outperforms existing SOTA approaches among one-stage models.** **2. For weakness of confusing notation in Sec 3** (**About decoder**) Sorry for the confusion. **It should be noted that the encoder is frozen in all previous work for fair comparison**, as mentioned in the experiment section of the paper. Thus, the decoder aims to further process features from the encoder into high-level information, making them more adaptable to the TAS task through training. We call it a decoder to distinguish it from the frozen feature encoder/extractor. The decoder architecture can be any network adopted from existing work such as MS-TCN and ASFormer. ($C$ **in exps**) In the model, all instances of $C$ are set to 64, which is the feature dimension shared by different features. The value can be found in L217 of the experiment settings with "an output dimension of 64". We will clarify this by changing the phrase to "an output dimension $C$ of 64". 
(**Notation clarity**) Thanks for the suggestion; we will improve the clarity and eliminate the notation inconsistencies in the revision. Specifically, $\textbf{P}_i^m = \varphi_m(\textbf{Q}_l, \textbf{F}_d)$ represents the $i$-th mask features. Here, $m$ is the abbreviation of "mask". We have reviewed all the notations and will use superscripts to represent the type of the variable and subscripts to represent the index. We will update the following notations: | Old | New | Meaning | |:--------:|:----------------:|:----:| | $\textbf{F}_e, \textbf{F}_d$ | $\textbf{F}^e, \textbf{F}^d$ | the output features of the **e**ncoder or **d**ecoder | | $\varphi_c, \varphi_m, \varphi_b$ | $\varphi^c, \varphi^m, \varphi^b$ | heads for predicting query **c**lass, query **m**ask, **b**oundary | Furthermore, to aid understanding of the existing notations, we will add the following descriptions: | Notation | Meaning | |:--------:|:----------------:| | $\textbf{Q}_l$ | instance query embedding from the $l^{\text{th}}$ layer | | $\textbf{P}_i^m$ | Query masks $\textbf{P}^m$ from the $i^{\text{th}}$ layer (the $m$ represents **m**ask.) | | $\textbf{P}_i^b$ | Boundary probability $\textbf{P}^b$ from the $i^{\text{th}}$ layer (the $b$ represents **b**oundary.) | | $\textbf{y}_i^m$ | binary mask $\textbf{y}^m$ from the $i^{\text{th}}$ query (the $m$ represents **m**ask.) | **3. For Question 1** (**Advantage of extra branch**) As our approach is based on voting to yield segmentation results, the extra branch for boundary generation is essential for our method. The extra branch can generate high-quality boundaries without much additional effort (nearly 5% additional FLOPs). It brings a 2.3% improvement in accuracy and a 2.1% improvement in Edit score, compared to directly summing all query predictions without an extra boundary prediction branch. **4. For Question 2** Please refer to the answer in part "2. For weakness of confusing notation in Sec 3" above.
--- Rebuttal Comment 1.1: Comment: The reviewer thanks the authors for providing the detailed rebuttal. The intent behind the first raised weakness was to request an explanation or analysis regarding why the combination of frame-wise and segment-wise prediction underperforms on the segmental metrics. Could this issue be attributed to a weaker backbone? The authors could also consider including an ablation study to show where the performance gain comes from, i.e., starting with query-based segmentation to evaluate how well the query-based instance segmentation works, and then adding the boundary branch to show its effectiveness. The reviewer assumes that this corresponds to the authors' response to Q1? It would be much appreciated if the authors could provide a detailed table containing such information. Is the second notation table wrong? Should it be $p_i^m$ instead of $P_i^m$? And similarly $P_i^b$. What is the relationship between $l$ and $i$? Are they identical? As a side comment, why is the main comparison constrained to single-stage approaches only? The main objective is model efficiency, so as long as the multi-stage approaches can reach a similar running time, they are good counterparts for comparison. --- Rebuttal 2: Title: Detailed table and Notation Comment: We thank the reviewer for providing us with the opportunity to conduct a more detailed analysis of the performance on segmental metrics. **1. Respond to detailed table and gains** (**Detailed table**) As the reviewer suggested, the answer is "Yes"—the weaker backbone, which we adopt for fair comparison, is one of the reasons for the underperformance on segmental metrics.
Therefore, the detailed results of adopting another visual encoder, TSM [1], in the backbone to achieve superior performance are as follows: | Visual Encoder | F1@{10,20,50} | Edit | Accuracy | |:--------:|:----------------:|:----:|:--------:| | I3D | 89.3 88.4 83.9 | 84.2 | 89.5 | | TSM [1] | 90.7 89.7 85.6 | 86.0 | 90.8 | This demonstrates that a stronger backbone can improve segmental metrics, with F1@10 increasing by 1.4% and Edit by 1.8% compared to the weaker backbone. [1] J. Lin, C. Gan, and S. Han. TSM: Temporal shift module for efficient video understanding. In Proceedings of the IEEE International Conference on Computer Vision, 2019. (**Performance gains**) Yes, the reviewer's assumption is correct: the performance gains correspond to the above response to Q1. To provide more details on where the performance gains come from, the reviewer's suggestion to present the performance of query-based instance segmentation, both with and without the addition of the boundary branch, is highly valuable. The results are as follows: | Method | query-based instance segmentation | boundary branch | F1@{10,20,50} | Edit | Accuracy | |:--------:|:----------------:|:----:|:--------:|:--------:|:--------:| | A | ✓ | | 86.5 85.9 80.6 | 82.1 | 87.2 | | B | ✓ | ✓ | 89.3 88.4 83.9 | 84.2 | 89.5 | **2. Respond to the notation** (**Notation**) We appreciate the reviewer's efforts to help us with the clarification. The second notation table is correct but a little confusing; both $\textbf{p}^m$ and $\textbf{P}^m$ exist, and their relationship is described in L156 of the paper: $\textbf{P}^m = (\textbf{p}^m_i)_{i=1}^M$. To clarify, $l$ is the index for the layer of the transformer decoder, while $i$ is the index for the query.
Then the above notation table can be modified to: | Notation | Meaning | |:--------:|:----------------:| | $\textbf{Q}_l$ | instance query embedding from the $l^{th}$ layer | | $\textbf{P}^m_l$ | Query mask prediction from the $l^{th}$ layer (the $m$ represents **m**ask.) | | $\textbf{p}_i^m$ | The $i^{th}$ query mask prediction in $\textbf{P}^m$ (the index $l$ is omitted in $\textbf{P}^m$ for simplicity) | | $\textbf{y}_i^m$ | binary mask from the $i^{th}$ query (the $m$ represents **m**ask.) | **3. Respond to comparison** (**Comparison**) As shown in Figure 1 of the paper, most two-stage and one-stage methods cannot achieve a good trade-off between effectiveness and efficiency. Thus, we categorize all the methods into two classes to assess our contribution to temporal action segmentation. In addition to Table 6, we also consider recent multi-stage methods, adjust them to have similar running times, and present the results in Table 7. These results demonstrate that we achieve better performance when our method operates under time constraints similar to those of the multi-stage methods. Furthermore, a more detailed comparison with multi-stage methods across different numbers of stages is included in Table 4 of the supplementary material.
Summary: The paper introduces BaFormer, a boundary-aware Transformer network designed to enhance the efficiency of Temporal Action Segmentation while maintaining high performance. BaFormer tokenizes each video segment as an instance token for intrinsic instance segmentation and employs instance queries for segmentation and a global query for class-agnostic boundary prediction. This approach allows for continuous segment proposals and a simple voting strategy during inference. Strengths: 1, The proposed Query Voting method, though simple, is highly effective as demonstrated by the ablation experiments in Table 3. 2, The proposed BaFormer being a single-stage method is a practical advantage. 3, The proposed BaFormer achieves performance close to state-of-the-art two-stage methods. Weaknesses: 1, One of the major innovations, Boundary-aware Query Voting, is implemented only during the inference stage, and its process is very simple. The method involves summing up query masks within segment proposals and selecting the highest, which is a common approach in many temporal action proposal works. This raises doubts about the innovation's originality. Additionally, BaFormer's overall network design follows previous works, with the main innovations being in the Matching Strategies and Voting. 2, Besides I3D, the authors should compare the impact of other visual encoders on the model's performance. Understanding the effects of different visual features is crucial, and the visual encoder used in Table 6 should be clearly stated to ensure fair comparisons. 3, In Table 7, the terms "CNN and Transformer backbones" need clarification. The results for BaFormer with a Transformer-Base backbone correspond to those in Table 6, suggesting that "backbone" might refer to the Frame Decoder. This is inconsistent with the "Frame-wise Encoder-Decoder" terminology used by the authors. 4, The authors incorrectly cite KARI[18] as a NeurIPS 2024 paper, whereas it is actually from NeurIPS 2023. 
It seems the authors are not aware that they are submitting to NeurIPS 2024 themselves :) 5, Figure 6 aims to explain why Query Voting is superior to frame-based voting, but the visualization of queries and the meaning of different colors are not clearly explained, causing confusion for readers. Additionally, it would be interesting to know the performance of frame-based methods on shorter segments, as they might perform better than query-based methods in such cases. Technical Quality: 3 Clarity: 3 Questions for Authors: The questions have been detailed in the "Weaknesses". Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We provide discussions and explanations about your concerns as follows. **1. For Weakness 1** (**Query voting is very simple**) We thank you for acknowledging our contributions to Matching Strategies and Voting. Regarding your concern about the simplicity of the process, we intentionally designed it to be simple to contribute to efficiency. In our paper, we aim to design an effective TAS model with high efficiency. When designing such a model, we pursue two key points: (1) a highly efficient model structure that avoids heavy multi-stage or multi-step architectures; (2) integration of simple but effective post-processing on top of high-quality model predictions. Specifically, our model transforms frame-wise predictions into query class-mask predictions for TAS. At the same time, we put much effort into exploring different matching strategies and extensively ablating different components of the network architecture (number of decoder layers, query quantity, auxiliary loss, global query for boundary generation, etc., as reported in the experiments and supplementary material). **After achieving high-quality query class-mask predictions as well as precise boundary generation, we are then able to use very simple but highly efficient query voting to realize the expected efficient TAS.** We could also adopt more complicated post-processing, such as the Viterbi algorithm, activity grammar, and so on. However, previous papers have verified that these are time-consuming, which is not what we desire. For example, in another single-stage method, UVAST, using the Viterbi algorithm takes more than **480s** with an accuracy of **87.4%** on the 50Salads dataset, while using simpler post-processing, i.e., an alignment decoder, takes **0.577s** but with a degraded accuracy of **79.5%**. By contrast, our BaFormer employs simple voting with **0.139s** and achieves a higher accuracy of **89.5%**.
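To make the voting scheme concrete, here is a minimal NumPy sketch of sum-and-argmax query voting of the kind described above. The function name, array shapes, and the simplified inputs (per-query class ids, per-frame mask probabilities, and boundary-derived proposals) are our own illustration, not BaFormer's actual implementation:

```python
import numpy as np

def query_voting(query_classes, query_masks, boundaries):
    """Assign one action class per segment proposal by query voting.

    query_classes: (n,) predicted class id per query (hypothetical input).
    query_masks:   (n, T) per-frame mask probabilities per query.
    boundaries:    sorted frame indices delimiting proposals, e.g. [0, t1, ..., T].
    Returns a (T,) array of frame-wise class labels.
    """
    labels = np.empty(query_masks.shape[1], dtype=int)
    for start, end in zip(boundaries[:-1], boundaries[1:]):
        # Sum each query's mask scores inside the proposal and let the
        # highest-scoring query label the whole segment.
        votes = query_masks[:, start:end].sum(axis=1)
        labels[start:end] = query_classes[np.argmax(votes)]
    return labels
```

In this simplified form, every frame in a proposal inherits the class of the single query whose mask scores dominate the segment, which is what suppresses isolated frame-level errors and hence over-segmentation.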
(**BaFormer follows previous work**) As for the concern that BaFormer follows previous work, BaFormer is designed to improve efficiency, and we empirically find that a query-based model can turn dense information into sparse information to reduce FLOPs. So, we follow the DETR-style pipeline to predict the query-based results. This gives us the insight to adopt a new paradigm for TAS. **Our focus is on how to innovatively adapt the DETR-style model to TAS, which has not been fully explored by such an approach.** Like BaFormer, many other promising works also follow the DETR model to achieve their different targets, for example, MaskFormer in image segmentation, CLTR in crowd localization, TrackFormer in multi-object tracking, and so on. They design new pipelines similar to DETR for different tasks, as we do. **2. For Weakness 2** (**Other visual encoders**) Thank you for the suggestion to adopt other visual encoders, which will enrich our model's performance. Accordingly, we conducted experiments with the TSM visual encoder compared with I3D. The results on 50Salads are as follows: | Visual Encoder | F1@{10,20,50} | Edit | Accuracy | |:--------:|:----------------:|:----:|:--------:| | I3D | 89.3 88.4 83.9 | 84.2 | 89.5 | | TSM | 90.7 89.7 85.6 | 86.0 | 90.8 | The results indicate that a superior visual encoder can further enhance performance. **3. For Weakness 3** (**Terminology**) Sorry for the confusion. In our study, the term "CNN backbones" refers to the single-stage MSTCN, and the "Transformer backbone" refers to the single stage of ASFormer. In "Frame-wise Encoder-Decoder", the structure of the frame-wise encoder is fixed as I3D for all methods in Table 7. Therefore, we focus on different structures of the frame-wise decoder. As acknowledged by the reviewer, the term "backbone" specifically refers to the frame-wise decoder. We apologize for any confusion and thank the reviewer for pointing it out.
For clarity, we have replaced the term "backbones" with "frame-wise decoder". **4. For Weakness 4** (**Typo in reference**) Thank you for pointing out this embarrassing typo. We promise to fix it and to carefully check the other references in the revision. **5. For Weakness 5** (**Visualization in Figure 6**) Again, sorry for the confusion; we will add the relevant clarification in the revision. Figure 6 comprises two parts. Specifically, to generate the top part, we binarize the predicted masks $\in \mathbb{R}^{n\times T}$ to 0 or 1 with a threshold of 0.5, where $n$ is the number of queries and $T$ is the length of the video. Thus, the positions with a value of 1 indicate the presence of actions corresponding to the query class. Next, we apply different colors to these positions, where each color represents a single action, and stack the query masks to form the top part. In the bottom part, we present video segment results of frame-based and query-based voting, along with the ground truth. The red arrow points towards a specific query within a segment proposal, illustrated between two vertical black dashed lines. The red dashed box shows the segment results within the proposal. (**Performance on segments with different lengths**) Thanks for the suggestion. We conducted experiments with different segment lengths, and the results are summarized below. We find that the performance of frame-based methods is slightly better on shorter segments; the accuracy is shown below: | Length | 0~1000 | 1000~2000 | 2000~3100 | |:--------:|:----------------:|:----:|:--------:| | Frame-based | 87.68 | 90.98 | 92.47 | | Query-based | 86.85 | 92.01 | 95.38 | --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Thank the authors for their detailed responses to the review.
Regarding the response to Question 1, while I understand the importance of maintaining concise perspectives in both the model and post-processing stages, I still hold reservations about the novelty of the proposed method. Concerning Question 2, I am puzzled by the decision to select I3D as the backbone, given that TSM outperforms I3D in the results. Questions 3 to 4 have been thoroughly addressed. Regarding Question 5, the performance of frame-based methods on shorter segments is better than that of query-based methods. This is a significant drawback of the proposed query-based method that also needs to be considered. Finally, I remain concerned about the modest performance improvement compared to the 2022 UVAST method on GTEA. --- Rebuttal 2: Comment: Thank you for the valuable feedback and the opportunity for further discussion. **1. Respond to I3D or TSM** (**I3D or TSM**) We understand the reviewer's concerns regarding the results under different backbones. To ensure a fair comparison, all methods in Table 6 of the paper use I3D features. Specifically, for all methods, TAS tasks use fixed I3D features rather than TSM features as input. We believe it is fairer to use I3D as the visual encoder, even though TSM features may yield superior performance. **2. Respond to shorter segments** (**For shorter segments**) Thank you for the valuable suggestion, which has helped us analyze methods more deeply across different segment lengths. We will include additional analysis based on the table presented for Question 5. This type of case arises from mask prediction in the query-based method. In binary mask prediction for shorter segments, the ratio between foreground and background is smaller than in longer segments, leading to varying accuracy in mask prediction across different segment lengths. This is similar to the challenge faced in small object detection with DETR. We will include this observation in the limitations of the manuscript.
To improve this case, potential solutions could involve reweighting the ratio of positive to negative samples, increasing the weight assigned to the mask prediction heads, and so on. This could guide our efforts in future work. **3. Performance on GTEA** (**GTEA**) We acknowledge the reviewer's concern regarding the performance on GTEA. While our method results in a 0.7% decrease in F1@10, it improves accuracy by 3%. This outcome is likely related to the shorter segment lengths in GTEA, as described in the part "for shorter segments". We have provided data statistics across the different datasets, where "Ratio" refers to the ratio between segment length and video length. | Dataset | Number of videos | Video length (min) | Segment length (s) | Ratio | |:--------:|:--------:|:--------:|:--------:|:--------:| | GTEA | 28 | 1.24 | 2.21 | 0.0297 | | 50Salads | 50 | 6.4 | 36.8 | 0.0958 | | Breakfast | 1712 | 2.3 | 15.1 | 0.1094 | We observe that the ratio in GTEA is only 1/3 of that in 50Salads or Breakfast. Beyond accuracy, in terms of efficiency we achieve a running time of approximately 1/5000 of UVAST's, resulting in a significantly faster TAS.
NeurIPS_2024_submissions_huggingface
2024
Advection Augmented Convolutional Neural Networks
Accept (poster)
Summary: The paper introduces Advection Augmented Convolutional Neural Networks (ADRNet), integrating a semi-Lagrangian push operator into CNNs to improve spatio-temporal sequence prediction. Mimicking the Reaction-Advection-Diffusion equation, ADRNet addresses challenges in long-range information propagation and explainability. Evaluations on scientific datasets (CloudCast, SWE) and video prediction datasets (Moving MNIST, KITTI) show ADRNet's superior performance over baseline models. Strengths: **Originality**: - Innovative use of a semi-Lagrangian push operator for long-range information in CNNs. **Significance**: - Addresses a key limitation in CNNs for spatio-temporal prediction tasks. - Potential impact in fields like weather prediction and traffic flow analysis. Weaknesses: **Readability**: - The introduction of the method primarily relies on formulas, lacking the necessary explanations and descriptions, which makes it less readable. - There are writing errors in the article that make it difficult to understand, such as in line 246. **Scalability**: - Scalability to larger, real-world datasets and more complex models is not thoroughly explored. **Replication of results**: - Although experimental parameter descriptions are provided, reproducing the results would require quite a bit of work due to the lack of runnable code. Technical Quality: 2 Clarity: 2 Questions for Authors: What criteria or methods did you use to determine hyperparameters for different datasets and models? Have you tested ADRNet on larger and more diverse datasets? What were the results? Are there ways to enhance the generative capabilities of ADRNet for more complex video prediction tasks? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: **Addressing Limitations**: - The paper acknowledges limitations in generative capabilities for video prediction. **Suggestions**: - Explore methods to enhance generative aspects.
- Test on larger, more complex datasets and discuss results and challenges. - Discuss potential societal impacts, such as ethical use of predictive models in sensitive areas like healthcare. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We appreciate the Reviewer’s positive assessment of our paper. We’re pleased that our innovative use of the semi-Lagrangian push forward operator and its potential impact on spatio-temporal tasks, such as weather prediction and traffic flow, were well-received. We are grateful for the constructive feedback, which has helped us improve the paper. We hope our responses address your concerns satisfactorily and encourage you to consider revising your score. We are happy to discuss any further questions or comments you may have.** **Regarding W1 (Readability):** Our work blends concepts from numerical methods for PDEs with neural network architectures. We have tried to strike a balance between known material in the PDE literature and ML knowledge. We would like to note that our aim is to cater to a wide audience in the ML community, and we therefore follow your proposal. Specifically, we added a number of explanations (e.g., the operator splitting approach and the role of advection in allowing long-range information transfer) to the manuscript. Also, the problem indicated by the reviewer in Line 246 was due to the floating figure not being aligned to the top, and we have now corrected it. We sincerely appreciate your comments and have implemented them into our revised paper for enhanced readability. Thank you. **Regarding W2 (Scalability):** We are thankful for your suggestion. Following your guidance, we have now measured the scalability of the method. Importantly, please note that in our ADRNet there are two main operations: reaction-diffusion, which is implemented similarly to a standard ResNet, and the advection mechanism. In order to provide a comprehensive benchmarking of the scalability of the method, we time each component in ADRNet, that is, the (i) reaction-diffusion and (ii) advection, on different image sizes. This allows us to directly measure the added computational cost of using ADRNet compared with a standard network like ResNet.
The results are reported in the Table below, and they were also added to our revised paper. | Image Size | 32 | 64 | 128 | 256 | |---------------|--------------------------------|----------------------------------|-----------------------------------|-----------------------------------| | ResNet | Train: 4.2ms / Inference: 0.6ms | Train: 20.3ms / Inference: 10.4ms | Train: 79.2ms / Inference: 45.8ms | Train: 192.8ms / Inference: 58.2ms | | ADRNet (Ours) | Train: 12.8ms / Inference: 3.7ms | Train: 53.3ms / Inference: 19.1ms | Train: 175.8ms / Inference: 72.8ms | Train: 484.3ms / Inference: 179.1ms | Furthermore, to accommodate your suggestion of demonstrating our ADRNet on a large dataset, we have added a new result for the Navier-Stokes dataset from the PDEBench suite of datasets. The Navier-Stokes dataset is large and consists of 21000 images of resolution 512x512. The results are provided in the Table below, and we also include visualizations of the results in **Figure 1 in the attached PDF rebuttal file**. **Regarding W3 (Code):** Thank you for the query. As we state in our paper, it is our full intention to publicly publish the code on GitHub if the paper is accepted for publication. The code includes all the scripts for the different problems and figures presented in our paper for a full reproduction of our results. **Regarding W4 (Hyperparameters):** We used a grid-search approach to determine the hyperparameters based on the validation set in each respective dataset. The hyperparameter ranges are $lr \in \\{1e-4,1e-5,1e-6,2e-6\\}$, batch_size $\in \\{64,128,256\\}$, number_of_layers $\in \\{1,2,4,8\\}$, number_of_channels $\in \\{128,196,256\\}$. In our Appendix we provided the selected hyperparameters, and we added the details provided here to the revised Appendix. Thank you. **Regarding W5 (Large dataset and diversity):** Thank you for the question.
We used six diverse datasets in our submission: Moving MNIST (synthetic), PDEBench (scientific), CloudCast (atmospheric), KITTI (natural images), KTH Action (real-world videos), and Taxi-BJ (spatio-temporal locations). These diverse datasets cover a range of properties and domains, demonstrating the effectiveness of our method across different types of data. Furthermore, following your request, we added a new result for the Navier-Stokes dataset from PDEBench. The Navier-Stokes dataset is large and contains 21,000 images with a resolution of 512x512. The **results are provided in our general response** to all Reviewers, showing that our ADRNet achieves 2nd place among several leading methods. Also, we added a **visualization of the results in the attached rebuttal PDF in Figure 1**. All the results were also added to our revised paper. **Regarding W6 (Generative capabilities):** We thank the referee for this question. We believe that this can indeed be done, by using training similar to recent score-matching techniques. This is an excellent, larger project that we are currently looking at. We added this discussion to the revised paper. **Regarding Suggestions:** Thank you for your thoughtful suggestions. We address them as follows: (i) Generative aspects: We are exploring this as a future direction, which could benefit scientific tasks like weather prediction. (ii) Larger datasets and additional tasks: Please refer to our response and results for W5. (iii) Societal impact: We agree this is important. ADRNet's ability to provide competitive performance across various datasets highlights its potential applications. We emphasize the need for ethical use of strong predictive models. Our paper includes results on publicly available datasets, which we believe do not involve unethical data. We have added these discussions to the revised paper. Thank you. --- Rebuttal Comment 1.1: Title: Thank the author for the detailed reply. The reviewer maintains the current score.
Comment: Thank the authors for the detailed reply, which has resolved most of the doubts. I think the current rating is appropriate. The authors' work is very interesting to the reviewer, and the authors are encouraged to release the source code to allow the reviewer to better verify the presented work.
Summary: This work is based on an analogy of CNNs to Advection-Diffusion-Reaction PDEs: classical ResNets can be understood as pure Diffusion-Reaction PDEs, without an advection term. Thus, this work proposes to introduce a new layer resembling the advection term, to allow for long-range transportation of information. More specifically, they propose to learn, with a CNN, the velocity fields with which to advect the coordinates. Advected quantities can then be computed by simply moving the quantities on the old grid to the new coordinates, and then interpolating back to the old grid (which they propose to do in a mass-conserving way, i.e. resembling conservative semi-Lagrangian advection). Leveraging this new advection layer, the authors find performance on par with state-of-the-art approaches on a few benchmark datasets from PDE emulation, weather radar forecasting, and video prediction. In addition, they find indications that the added advection layer can improve direct longer-horizon forecasts for the Shallow Water Equation. Strengths: 1. Introduces a new neural network layer that mimics semi-lagrangian advection 2. Shows on multiple common (and not too easy) benchmarks against strong baselines that the derived ADRNet architecture is competitive 3. Paper is reasonably well written, with some less-important details moved into the appendix. Weaknesses: Major points: 1. If the CFL conditions are fulfilled, then advection should be local. In other words, if the time step is small enough (compared to the velocity), advection only moves quantities to neighboring pixels. Then, a regular CNN with local filters should be able to capture advection. Hence, I'd expect your layer to be most useful for simulations with large time steps compared to the velocity. I am unsure if the datasets presented in this paper fall into this category! You already partially discuss this, in Lines 204-207.
I suggest to extend the discussion and maybe also study how important long-range advection is in the datasets presented here, e.g. by performing image co-registration between the time steps (you could use 2D cross correlation for this), and then plotting the average displacement distance for each dataset. Potentially compare this with the average displacement your trained models learn inside the advection layers. You should also compare this to the receptive field, that the CNNs have without the advection layer. You could also tailor the benchmark datasets to be more suitable for your model by deliberately increasing their time step and thus the need for your advection layer. Of course if you do this, communicate it clearly, and best report scores for multiple step sizes, which should ideally show how the advection layer becomes more and more important, the longer the time step. 2. For video prediction, optical flow is still often integrated in SOTA approaches. It is similar to advection, and since you propose your model may be useful for video prediction, I suggest you discuss this in the related works. 3. Your experiment section is very short. As is, it is not clear what exactly you evaluate. Please elaborate in the text. 4. The description of the Advection layer should be central, as it is the main contribution. I found it a bit short, in particular the Pseudocode Algorithm 1 does not detail how the Push-Operator works. Also section 3.2 could be extended to describe more in-depth how the conservative regridding works in practice. Maybe even add a figure that describes your approach. 5. Section 2 seems a bit long. I mean it is very nice to draw this analogue of CNNs to the ADR equation to introduce the advection layer, but it is neither essential for understanding the layer itself, nor is it very novel. I suggest to shorten this part. Same holds for Section 3.1 Minor Points: 1. 
Please place your floating elements at the top of the page ([t]); as is, pages 7 & 8 are somewhat visually confusing. 2. Typos and missing words, e.g. L248 Figure _ 3. Tab. 4: what do 10->5, 10->10, and so on mean? Describe it in the caption! Technical Quality: 3 Clarity: 2 Questions for Authors: - How about dilated convolutions? The recent GraphCast paper introduces long-range connections in their multimesh to solve the issue of propagating information over longer distances. The equivalent for CNNs would be dilated convolutions, that is, filters that contain holes/zeros and thus allow for propagating information from further away without heavy oversmoothing. It would be cool to see an ablation study against this. - A common application of models like the PDE emulators in this study is to use them in long-term autoregressive rollouts. Standard CNNs are often unstable over such longer rollouts. It would be super interesting to see how the ADRNet does in such a setting. - It would have been great if this submission had a code repo; I would have liked to try the network myself. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The study of limitations is very short and exists only in the survey. Please discuss in the main text: do you observe any artifacts from introducing the advection layer (e.g., how about at the image boundaries)? How well does training go; is it easier or harder to train? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for the positive, thorough, and detailed feedback, which combines profound knowledge from both the ML and numerical PDEs perspectives. We address each of your comments and suggestions below and hope our responses are satisfactory, and that you will consider revising your score. We welcome any further questions or suggestions and are happy to discuss them.**

**Regarding W1 (CFL):** We appreciate your suggestion and feedback. Unlike convolutions, which follow the CFL condition, the explicit semi-Lagrangian method can be thought of as an implicit method (please see reference [28]) and is therefore not limited by the CFL condition, making it suitable for advection in PDEs and inspiring its use in ADRNet. We agree that ADRNet is effective for long-range predictions requiring large receptive fields. In Table 4, ADRNet outperforms FNO in predicting 50 time steps ahead on the SWE dataset, and Table 10 shows its effectiveness on the CloudCast dataset. **Figure 6 in the rebuttal PDF illustrates receptive fields from 1 to 14 pixels on a 64x64 image.** Following your and Reviewer 2i2R's suggestions, we compared long-range predictions using dilated convolutions. **Figure 3 in the attached PDF** shows that our advection mechanism performs better. Further details are in our response to W7.

**Regarding W2 (Optical Flow):** The Reviewer is correct about the relationship between learning advection in ADRNet and optical flow methods. The two key differences are that (i) optical flow finds displacement fields in the image domain, while ADRNet learns fields for each channel in the hidden space, and (ii) the goal of ADRNet is to predict the next frame, while in optical flow one uses the frames to estimate the velocity. We added a discussion and references like [R1,R2,R3]. Thank you.
[R1] VideoFlow: Exploiting Temporal Cues for Multi-frame Optical Flow Estimation
[R2] Novel Video Prediction for Large-scale Scene using Optical Flow
[R3] FlowNet: Learning Optical Flow with Convolutional Networks

**Regarding W3 (Experiments):** Please refer to our global response for details on our extensive experiments. We performed comparisons across 6 diverse datasets (including synthetic, scientific, natural images, real-world videos, and GPS data) and benchmarked against SOTA methods. Our Appendix covers ADRNet's strengths, limitations, dataset details, evaluation metrics, and hyperparameters. We have improved the connection between the main text and the appendix as suggested. Additionally, we added a 7th dataset, the Navier-Stokes dataset from PDEBench, with results detailed in our global response.

**Regarding W4 (Description of Advection):** Thank you for the comments and suggestions. We explain the method in the paragraph below Equation (10) and in Figure 3. For the Push-Operator implementation, please refer to Jan Modersitzki's book 'FAIR: Flexible Algorithms for Image Registration,' which dedicates a chapter to this problem and includes code examples. We used a PyTorch implementation with the grid_sample function to advect the current state using the learned advection field. We will add a figure to the revised paper. Also, as we state in our paper, it is our full intention to publish the code publicly (including all scripts for the experiments and figures in our paper) on GitHub if the paper is accepted.

**Regarding W5 (Sections 2 and 3.1):** We thank the referee for this comment. The referee clearly has a background in numerical methods for PDEs. Our aim was to write a paper that is readable to the larger audience in the ML community. In fact, Reviewer 2i2R requested the addition of more background material for Section 2. We are aiming to strike a balance between two different and important communities, and we are using your invaluable guidance to improve our paper.
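As an editorial illustration of the semi-Lagrangian step described in the W4 response (sampling the field at the backtracked departure points, which `grid_sample` performs on the GPU), here is a minimal NumPy sketch; it is a sketch of the idea, not the authors' implementation:

```python
import numpy as np

def semi_lagrangian_step(field, vel, dt=1.0):
    """Advect `field` (H, W) by velocity `vel` (2, H, W) with one
    semi-Lagrangian step: sample the field at the departure points
    x - dt * v(x) using bilinear interpolation, clamped at the borders.
    This mirrors what torch.nn.functional.grid_sample does inside an
    advection layer, written out in plain NumPy for illustration."""
    H, W = field.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # departure points: backtrack along the velocity field
    ys = np.clip(yy - dt * vel[0], 0, H - 1)
    xs = np.clip(xx - dt * vel[1], 0, W - 1)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = ys - y0; wx = xs - x0
    # bilinear blend of the four neighbors of each departure point
    return ((1 - wy) * (1 - wx) * field[y0, x0]
            + (1 - wy) * wx * field[y0, x1]
            + wy * (1 - wx) * field[y1, x0]
            + wy * wx * field[y1, x1])

# a constant rightward velocity moves a bright pixel one column right
f = np.zeros((8, 8)); f[4, 2] = 1.0
v = np.stack([np.zeros((8, 8)), np.ones((8, 8))])  # (v_y, v_x)
f1 = semi_lagrangian_step(f, v)
```

Because the step samples rather than stencils, the displacement per step is not bounded by the kernel size, which is the CFL-related point made in the W1 response.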
**Regarding W6 (typos, floating elements, Table 4 caption):** Thank you for thoroughly reading our paper. We have now placed all floating elements at the top of the page and fixed the typos. We also added an explanation to Table 4, ensuring it is self-contained. The term '10->5' means the network was trained on 10 previous time steps to predict the next 5. These configurations show that ADRNet offers strong performance on various prediction horizons. We clarified it in the revised paper.

**Regarding W7 (Dilated Convolutions):** We appreciate the suggestion from both you and Reviewer 2i2R to use dilated convolutions. We have added experiments with two architectures: one using dilated convolutions for diffusion and another with dilation without advection. Specifically, we tested: (i) ADR with dilated convolutions for diffusion, and (ii) dilated convolution to replace the advection layer (via a larger receptive field) while keeping standard convolution for reaction-diffusion. Convergence plots are in **Figure 3 of the attached rebuttal PDF**. Our ADRNet shows better performance than dilated convolutions on the moving MNIST dataset. The results and discussions are included in the revised paper. Thank you.

**Regarding W8 (Applications of ADRNet):** Thank you for your interest in our method. We are eager to publish our code, including all scripts for the visualizations and experiments, on GitHub upon acceptance. We look forward to seeing our concepts applied to additional applications.

**Regarding W9 (Limitations):** Thank you for the question. In addition to the discussion of the limitations of the method in Section 1, we discuss the generative limitation of physics-inspired architectures such as our ADRNet in Appendix A. Our semi-Lagrangian advection method operates on an infinite domain, which implies that information can move out of the image domain without generating boundary artifacts.
Regarding training, we did not encounter training difficulties, as reflected in our choice of hyperparameters: we used the same learning rate for all parameters, with a relatively small grid search. In all cases, the training procedure was stable.

---

Rebuttal Comment 1.1: Comment: Dear Authors, thank you for your effort to respond to my comments. My concerns have been addressed; I am thus raising my rating. Thanks!
Summary: This paper proposes a method to enhance the performance of networks in solving PDE problems using CNNs. To achieve this, the authors present a computational approach for efficiently retrieving information from distant points in spatio-temporal datasets. Their main contributions can be summarized as follows: 1) they propose an interpretation method for information using the advection-diffusion-reaction step; to implement this accurately, 2) they introduce a semi-Lagrangian linear layer; and 3) they present a training approach for these components. Despite minor issues concerning experimental validation, computational efficiency, and the readability of figures, this paper is well-structured mathematically and demonstrates high performance in the field.

Strengths:

1. The interpretative range of information in a CNN, considered as a function, is limited to its receptive field. This limitation makes it challenging to interpret information outside of this domain. The paper addresses this issue by proposing the advection-diffusion-reaction approach, significantly improving performance.

2. The equations proposed in the paper are clear regarding variable declarations and usage, and they are written in a way that is easy to follow. While aspects like rigor may require a more detailed review, there appear to be no major issues based on my examination. The paper effectively formalizes the challenges faced by existing networks and provides solutions to address these issues.

Weaknesses:

1. The most notable drawback is the absence of visualizations, such as attention maps, to confirm that the concept shown in Figures 1 and 2 operates as expected at each feature level, even in toy examples. While the network demonstrates significant performance improvements, experiments are needed to verify whether the concept functions correctly compared to the computational complexity involved.
In particular, it would be impressive if a pre-trained model could be shown to handle the proposed problem of long-range information being advected and diffused.

2. Overall, there is a need to enhance readability, including improving the concept figures. Specifically, Figures 2 and 5 require better clarity. Additionally, the graph in Figure 1(c) could benefit from readability improvements. The absence of source code makes it difficult to understand how each step was implemented, which adds to the difficulty of reading and evaluating the paper.

Technical Quality: 3 Clarity: 1

Questions for Authors: See Weaknesses.

Confidence: 2 Soundness: 3 Presentation: 1 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We thank the Reviewer for the overall positive assessment of our paper. In particular, we are delighted to read that the Reviewer finds that our paper addresses existing interpretability issues in CNNs, that the paper is clear and easy to follow, and that our ADRNet demonstrates high performance. We are also very grateful for your constructive feedback, which we made great efforts to address during the rebuttal period. We also found your feedback to be highly insightful and valuable, and it allowed us to improve our paper. Below, we provide our responses to each of your queries and suggestions. We hope that you find them satisfactory and that you will consider revising your score accordingly. We are also happy to discuss any additional comments or questions you may have.**

**Regarding W1 (Visualizations):** We thank the reviewer for their suggestions. We have now added figures of the learned advection field and attention maps (please see **Figure 5 in the attached rebuttal PDF**). These figures confirm the concept described in Figures 1 and 2 in the paper. We note that the advection works on all channels in the embedded space. For the example at hand (moving MNIST), we have 64 channels and 5 convolution layers, which blend the 10 previous time steps given as input. In the added figure (Figure 5 in the attached PDF), we inspect one of the channels across all layers and show a quiver plot of the advection field. This quiver plot shows the direction in which the advection is guided to solve the task in the moving MNIST dataset. In addition, we plot the absolute value of the advection field. The absolute value of the advection field is equivalent to an attention map, because it shows the areas in the original image that move the most to generate the final image. Including this figure allows us to visualize which areas in the input need to be moved to obtain the desired target.
As can be seen from Figure 5, the pixels that correspond to the digits in the input images (10 time channels) are the ones that obtain larger values of displacement in the learned advection field – this result is in accordance with the concept of learning advection fields, as done in our ADRNet. We would like to thank the reviewer for the important suggestions, and we added the figures suggested by the reviewer to our revised paper.

**Regarding W2 (Pre-trained model):** We would like to thank the reviewer for the insightful proposal of using a pre-trained model. We believe that such a demonstration can improve the acceptability of our approach in the ML community. We have therefore added an experiment to test your proposal. Before moving to the description of the added experiment, we would like to kindly note that the computational complexity of our method, despite adding the advection term, is reasonable, considering that it effectively adds another layer to the network compared to a standard off-the-shelf convolutional ResNet. Furthermore, note that some problems (e.g., our synthetic problem of moving a pixel from one corner of the image to another) cannot be solved effectively using a standard convolutional ResNet, as shown in Figure 1 in our paper. In particular, we provide a table with measured training and inference runtimes for image sizes in {32x32, 64x64, 128x128, 256x256}.
The runtimes are provided below:

| Image Size | 32x32 | 64x64 | 128x128 | 256x256 |
|---------------|--------------------------------|----------------------------------|-----------------------------------|-----------------------------------|
| ResNet | Train: 4.2ms / Inference: 0.6ms | Train: 20.3ms / Inference: 10.4ms | Train: 79.2ms / Inference: 45.8ms | Train: 192.8ms / Inference: 58.2ms |
| ADRNet (Ours) | Train: 12.8ms / Inference: 3.7ms | Train: 53.3ms / Inference: 19.1ms | Train: 175.8ms / Inference: 72.8ms | Train: 484.3ms / Inference: 179.1ms |

The runtimes were measured on an ADRNet and a ResNet, both with 32 hidden dimensions, 10 input features, 1 output feature, a batch size of 32, and 2 layers. We also added them to our revised paper.

With regards to showing **results on a pre-trained model**, we have now experimented with a new, larger dataset from the PDEbench suite of datasets. Namely, we used the Navier-Stokes (NS) equations dataset. We report the results of this experiment in our general response to all Reviewers. **Given this pre-trained ADRNet for the Navier-Stokes dataset**, we now **adapt it to predict the SWE system**, which is also studied (from scratch) in our paper in Table 2. Because the number of inputs and outputs differs between the NS and SWE datasets, we trained a **single linear layer followed by a SiLU activation** and **used the output of the ADRNet trained on NS data to predict the solution for SWE**. As can be observed in **Figure 2 in the attached PDF**, as well as in the results reported in the Table below, the results obtained with this pre-trained model are better than using a random ADR model and, given that only a single layer was trained, compare reasonably well to training the complete network from scratch. These results indicate that the network can be utilized to generalize the advection process, which is known to appear in the NS equations as well as the SWE problem.
We added these results and discussions to our revised paper. Thank you for your suggestion, as it truly helped us to improve the paper.

**Regarding W3 (Readability and Clarity):** Thank you for the concrete suggestions. We have revised the caption text in the figures and visually improved them (for example, we changed the font size in Figure 2 in the paper, which we agree was hard to read). Regarding source code, as promised in our submission checklist and main paper text, we will publicly share our code on GitHub upon acceptance. We confirm that this is indeed the case.

---

Rebuttal Comment 1.1: Comment: Dear Authors, thanks for the detailed answer. Most of my concerns have been addressed, so I keep my initial score. Please include those modifications in the final version.
Summary: This study addresses spatio-temporal prediction problems in the physical sciences. Since existing methods for temporal prediction based on convolutional neural networks (CNNs) often underperform when propagating long-range information, the authors propose a physics-inspired architecture that mimics the reaction-advection-diffusion equation in high dimensions.

Strengths:

- The motivation of the proposed algorithm is straightforward.

- The idea of operator splitting to construct a neural network from the advection-diffusion-reaction differential equation is interesting.

Weaknesses:

- The authors need to include many essential steps so that readers can fully understand the proposed idea. For example, the approximation described in Eq. (8) needs to be explained in detail; the authors merely refer the readers to [1]. This approximation is a crucial process that enables operator splitting, and therefore a detailed analysis should at least be included in the appendix.

- I wonder why the diffusion and the reaction can be combined in a 3$\times$3 convolution layer. I agree that the reaction step can be implemented with a nonlinear 1$\times$1 convolution layer. However, for the diffusion step, as shown in Fig. 2, a dilated convolution is better suited.

- There is no quantitative comparison between the proposed ADRNet and other algorithms. Even though Fig. 5 illustrates the results of ADRNet, these frames are not suitable for demonstrating spatio-temporal prediction, as there are no significant transitions between temporal frames.

- As can be seen in Tables 8 and 9, ADRNet performs poorly compared to the other algorithms and even shows quite a gap to SOTA performance, which raises doubts that the proposed ADRNet can consistently address spatio-temporal prediction problems.
Technical Quality: 3 Clarity: 2

Questions for Authors:

- The order of reaction, diffusion, and advection seems crucial for the design of the neural network. In Fig. 2, the process is shown in the order advection-diffusion-reaction, while the proposed operator splitting processes the data in the order reaction-diffusion-advection. Is there a particular reason for adopting this sequence in the operator splitting? In addition, it would be necessary to empirically investigate whether this method can be generalized to various scientific datasets that do not follow this sequence of processes.

- In Table 6, DMVFN [21] is duplicated. Please check it.

Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: See the weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We thank you for the detailed and constructive feedback. We are glad you found our submission well-motivated and our approach interesting. We have addressed all your comments and suggestions and are open to further discussion. We hope you find our responses satisfactory and consider revising your score accordingly.**

**Regarding W1&W4 (Derivations, and order of reaction-diffusion-advection):** Due to space limitations, we initially omitted the derivation for Equation (8) and referred readers to [1]. However, we have now included the derivation in the revised paper. Our derivation demonstrates the validity of the operator splitting approach used, as detailed below:

In numerical PDEs, for an initial value problem (IVP) $dx/dt = Ax$, the solution is $x(t) = \exp(tA)x(0)$. For $dx/dt = Ax + Bx$, the solution is $x(t) = \exp(t(A+B))x(0)$. If matrix exponentials were treated as scalars, the solution would be $x(t) = \exp(tA)\exp(tB)x(0)$, implying that the order of operations (reaction-diffusion and advection) is irrelevant. Unfortunately, this analysis does not carry over to matrices. To analyze this, we review matrix exponentials. The matrix exponential $\exp(A)$ is defined by:
$$\exp(A) = \sum_{k=0}^{\infty} \frac{1}{k!} A^k$$
For the sum of matrices $A$ and $B$, the expansion, using Taylor's theorem, is:
$$\exp(t(A+B)) = I + t(A+B) + 0.5t^2(A^2 + B^2 + AB + BA) + O(t^3) \quad (1)$$
For the product of two matrix exponentials:
$$\exp(tA) \cdot \exp(tB) = (I + tA + 0.5t^2A^2 + O(t^3)) \cdot (I + tB + 0.5t^2B^2 + O(t^3)) \quad (2)$$
**Remark 1**: Generally, for matrices $A$ and $B$, $AB \neq BA$ unless they share eigenvectors. Expanding Equation (2) and collecting terms, we get:
$$\exp(tA) \cdot \exp(tB) = I + tA + tB + 0.5t^2(A^2 + B^2 + 2AB) + O(t^3)$$
Comparing this with Equation (1), we find the approximation error is $O(t^2)$:
$$\exp(tA) \cdot \exp(tB) - \exp(t(A+B)) = 0.5t^2(AB - BA) + O(t^3)$$
This error depends on how $AB$ differs from $BA$.
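The $O(t^2)$ splitting error derived above is easy to verify numerically; the following is an illustrative editorial sketch (not part of the rebuttal), using a Taylor-series matrix exponential:

```python
import numpy as np

def mat_exp(M, terms=30):
    """Matrix exponential via its Taylor series (adequate for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# two random, non-commuting matrices, so AB != BA
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

def split_err(t):
    """Norm of exp(t(A+B)) - exp(tA) exp(tB) for step size t."""
    return np.linalg.norm(mat_exp(t * (A + B)) - mat_exp(t * A) @ mat_exp(t * B))

# if the splitting error is O(t^2), halving t should roughly quarter it
ratio = split_err(0.1) / split_err(0.05)
```

The ratio lands close to 4 for small `t`, consistent with the leading $0.5t^2(AB - BA)$ term above.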
**Regarding the order of operations**: Changing the order of $A$ and $B$ yields:
$$\exp(t(A+B)) = \exp(tA) \cdot \exp(tB) + O(t^2) = \exp(tB) \cdot \exp(tA) + O(t^2)$$
Thus, the order of operations (advection vs. reaction-diffusion) does not fundamentally affect the numerical accuracy, which is of order $O(t^2)$ either way. Encouraged by your question, we added an experiment comparing the two orders of operations: advection followed by reaction-diffusion, and vice versa. Training convergence and sample prediction results on the moving MNIST dataset, shown in **Figure 4 of the attached PDF**, are similar. These results support our earlier analysis. We appreciate your insightful comments and have included this analysis, the results, and a discussion in the revised paper.

**Regarding W2 (reaction-diffusion and dilated convolution)**: Thank you for the important discussion. We address it in two parts: first, why diffusion and reaction can be combined into a 3x3 convolution; second, using dilated convolution for diffusion, as suggested. Combining diffusion and reaction into a single step stems from solving a PDE with a linear reaction term. For a 1D diffusion-reaction PDE:
$$-I_{xx} + r I = q$$
In Fourier space:
$$\omega^2 \hat{I} + r \hat{I} = \hat{q}$$
Thus, $I = F^{-1}\left(\frac{1}{\omega^2 + r} \hat{q}\right)$. The filter $\frac{1}{\omega^2 + r}$ combines diffusion and reaction. Implementing this requires Fourier or Cosine transforms, which can be computationally expensive. A simpler approximation is a 3x1 kernel, like $\frac{1}{4}[1,2,1]$ in 1D, as used in computer vision and image processing (e.g., Nagy and Hansen, reference [35]). We have added this discussion to our revised paper. **Regarding the use of dilated convolutions for diffusion**, we agree with the reviewer that it is another option for implementing diffusion and have added relevant experiments.
Specifically, we tested: (i) ADRNet with dilated convolutions for diffusion versus standard convolutions, and (ii) dilated convolutions for advection while using standard convolutions for reaction-diffusion, as suggested by Reviewer zzNS. The convergence plots in Figure 3 of the attached PDF show that our ADRNet implementation outperforms dilated convolutions on the moving MNIST dataset. We also found dilated convolutions more effective for diffusion than for advection. These results and discussions have been included in the revised paper. Thank you. **Regarding W3&W5 (comparison with other methods and additional data)**: We would like to kindly ask the reviewer to refer to our **global response for detailed information regarding our extensive experiments**. Briefly, *we conducted over 8 quantitative comparisons across 6 diverse datasets (including synthetic, scientific, atmospheric, natural images, real-world videos, and GPS spatio-temporal data), benchmarked against multiple SOTA methods*. Our results, discussed comprehensively in the appendix, highlight both the strengths and limitations of ADRNet. Regarding Tables 8 and 9, we present these results in Appendix A to highlight the limitations of ADRNet and identify scenarios where it may not achieve state-of-the-art performance. This rigorous evaluation is crucial in scientific studies, including Machine Learning. Our results show that ADRNet performs comparably or better than many recent methods on the main datasets and works well on two additional datasets, despite not being state-of-the-art. We hope you appreciate our thorough analysis of ADRNet's strengths and limitations, which we have further enhanced in the revised paper. To further demonstrate ADRNet on scientific data, as suggested by you in W5, we have added results from the Navier-Stokes equations dataset from PDEbench. These results are included in our general response to all reviewers. **Regarding W6 (Table 6):** Thanks for noting this error. 
In DMVFN [21], two versions of the method were proposed. It was a typo on our side not to include the specific versions. We have fixed the table accordingly.

---

Rebuttal 2: Comment: Thank the authors for their response. Several of my concerns have been addressed. However, I would like to ask for further clarification, as some of the comments are confusing.

- As shown in Figure 3 and Figure 4 in the attached file, the authors only offered an analysis based on the convergence of training. While I agree that training loss convergence is important, I would like to see a quantitative comparison on test datasets.

- The authors mention that they also found that dilated convolution is more effective for diffusion than for advection, but I cannot find that in the attached file or in the comments.

- I would like to suggest that the authors provide the quantitative values of RMSE, cRMSE, fRMSE, etc. [47] for the experiment shown in Figure 1 of the attached file. Furthermore, if it is possible to make quantitative comparisons with other algorithms using this dataset, I believe this would further substantiate the novelty of the proposed algorithm.

---

Rebuttal Comment 2.1: Title: Response by Authors Comment: **We thank the reviewer for the responsiveness and the discussion. We are happy to read that our responses addressed some of the concerns, and we now address your questions, comments, and suggestions. We find that they further helped to improve the paper, and all the results and discussion were added to our revised paper. We hope that you find our responses satisfactory. We are happy to discuss any remaining questions, and we look forward to your feedback.**

***

**Regarding Figure 3 and Figure 4 in the rebuttal PDF:** We agree with the reviewer that both training and test performance are important.
Following your question, we provide a comprehensive comparison of the test performance on the moving MNIST dataset (both MSE and MAE metrics) in the Table below. Specifically, we compare: **(i)** our ADRNet, as shown in the paper; **(ii)** an implementation where the order of the operators is flipped, as suggested by the reviewer and shown in our Authors' Rebuttal text and PDF (Figure 4 and Response to W1&W4), which we denote DRANet; **(iii)** using dilated convolutions to implement diffusion, as suggested in your review, which we call 'Dilation with Advection'; **(iv)** following *Reviewer zzNS*'s suggestion to replace advection with dilated convolutions, we also experiment with the case where diffusion-reaction is implemented as in our ADRNet, but instead of using our advection operator, we use dilated convolutions. We call this variant 'Dilation without Advection'. Please note that these four cases were also discussed in our responses and the attached rebuttal PDF file, although separately, because they stem from different questions. The results are provided in the Table below:

| Method | MSE (lower is better) | MAE (lower is better) |
|------------------------------------------------------|-----------------------|-----------------------|
| ADRNet (case (i), as shown in the paper) | 16.1 | 50.3 |
| DRANet (case (ii), changing the order of operators) | 16.2 | 50.3 |
| Dilation with Advection (case (iii)) | 16.6 | 51.1 |
| Dilation without Advection (case (iv)) | 25.7 | 72.8 |

As can be seen from the Table, our ADRNet, as well as its variant where the order of the operators is flipped, DRANet, achieve better performance than cases (iii) and (iv).

***

**Regarding the results on dilated convolutions:** Please kindly refer to Figure 3 in the rebuttal PDF. We provide three plots: **(i)** our ADRNet, in *blue*; **(ii)** Dilation with Advection, which implements your suggestion – i.e., using dilated convolutions for diffusion, in *orange*;
**(iii)** Dilation without Advection, as suggested by Reviewer zzNS, i.e., replacing advection with dilated convolutions, in *green*. Note that, for convenience, we also included a description of the different methods in the caption of Figure 3. From Figure 3, we found that your suggestion of using dilated convolutions is more effective for diffusion than replacing advection with dilated convolutions: your proposed method offers a lower training loss and faster convergence than case (iv), and this is *also supported by the test set results provided in the Table above*.

***

**Regarding the results in Figure 1 in the rebuttal PDF:** We welcome your suggestions. We kindly note that in our **Global Response above, we provided a comparison of our ADRNet with 6 different architectures** on the Navier-Stokes dataset (which we illustrated in Figure 1 in the rebuttal PDF). Furthermore, we followed your suggestion and added the additional metrics used in PDEBench, which include your suggested metrics, and we compare against the popular UNET and FNO architectures below:

| Metric (lower is better) | UNET | FNO | ADRNet (Ours) |
|--------------------------|---------|--------|---------------|
| RMSE | 0.33 | 0.28 | 0.17 |
| Max Error | 2.2 | 1.8 | 0.68 |
| cRMSE | 0.015 | 0.012 | 0.0041 |
| bRMSE | 0.36 | 0.28 | 0.083 |
| fRMSE (low) | 0.065 | 0.05 | 0.014 |
| fRMSE (mid) | 0.032 | 0.031 | 0.026 |
| fRMSE (high) | 0.0085 | 0.0063 | 0.0018 |

The results show that, when considering the additional metrics suggested by the reviewer, our ADRNet continues to offer good performance.
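For reference, the frequency-binned fRMSE metric discussed above can be sketched as follows. This is an illustrative editorial example of the idea (RMSE of the error restricted to low/mid/high Fourier-wavenumber bands); the bin edges here are assumptions, not the exact PDEBench values:

```python
import numpy as np

def frmse(pred, target, bins=((0, 4), (5, 12), (13, None))):
    """Fourier-binned RMSE: the RMSE of the prediction error restricted
    to low/mid/high spatial-frequency bands. Bin edges are illustrative."""
    err = np.fft.fftshift(np.fft.fft2(pred - target))
    H, W = err.shape
    ky, kx = np.meshgrid(np.arange(H) - H // 2, np.arange(W) - W // 2,
                         indexing="ij")
    k = np.sqrt(ky**2 + kx**2)  # radial wavenumber, centered by fftshift
    out = []
    for lo, hi in bins:
        mask = (k >= lo) if hi is None else (k >= lo) & (k <= hi)
        # by Parseval, sqrt(sum |err_hat|^2) / (H*W) is the band RMSE
        out.append(np.sqrt(np.sum(np.abs(err[mask]) ** 2)) / (H * W))
    return out  # [low, mid, high]

# an error field that is a pure low-frequency wave shows up only
# in the low-frequency bin
x = np.arange(32)
wave = np.sin(2 * np.pi * 2 * x / 32)[None, :] * np.ones((32, 1))
low, mid, high = frmse(wave, np.zeros((32, 32)))
```

Splitting the error by band, as in the table above, separates large-scale bias from fine-scale detail loss.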
Rebuttal 1: Rebuttal: # Global Response: We thank the four Reviewers for their detailed and constructive feedback, which overall was positive, stating that our method and paper are (i) **motivated and interesting** (*Reviewer 2i2R*); (ii) **addressing issues in existing CNNs, clear and easy to follow, and demonstrating high performance** (*Reviewer Lq8m*); (iii) **introducing a new neural network layer and reporting results on multiple "(not too easy) benchmarks"** with competitive results, in a paper that is reasonably well written (*Reviewer zzNS*); and, finally, (iv) making an **innovative use of the semi-Lagrangian push-forward operator**, addressing a key limitation of CNNs in spatio-temporal prediction tasks, with **potential impact in fields like weather prediction and traffic flow analysis** (*Reviewer wodQ*).

**Your feedback included questions and suggestions that helped us to improve our paper. A non-exhaustive list of your suggestions and our revisions includes:**

1. The addition of the derivation of the operator splitting approach, and experimentation with a different order of the terms in ADRNet (*Reviewer 2i2R*). Our experimental results support the analysis of the operator splitting scheme, presented in **Figure 4 in the rebuttal PDF**.

2. Experimenting with dilated convolution mechanisms to implement diffusion and to allow long-range propagation, compared with advection (*Reviewers 2i2R and zzNS*). Our results show the strength of our ADRNet compared with other existing mechanisms and are presented in **Figure 3 in the rebuttal PDF**.

3. Using a pre-trained ADRNet (*Reviewer Lq8m*). We have shown that we can transfer a learned ADRNet to different datasets, showing an additional benefit of ADRNet. The results are discussed in our responses and provided in **Figure 2 in the rebuttal PDF**.

4. We added visualizations of the learned advection fields and attention maps to understand the behavior of ADRNet (*Reviewer Lq8m*).
The results are provided in **Figure 5 in the rebuttal PDF**.

5. Showing results on an additional large dataset (*Reviewer wodQ*). Please see our results below and in **Figure 1 in the rebuttal PDF**.

***

While we provided a dedicated response for each Reviewer, we found a repeated point in some of the reviews, and we would like to address it globally:

**Regarding Evaluation:** Several reviewers commented on the scope of our experiments. We would like to enumerate the **set of experiments conducted in our paper, which are extensive and contain diverse datasets from different sources, for different types of problems, evaluated and benchmarked against multiple SOTA methods from recent years.** In particular, we have done more than 8 quantitative comparisons on a wide range of datasets and tasks, including standard metric comparisons, as is done in other well-known papers in the field, as well as ablation studies that show the ability of ADRNet to perform long-range predictions. The experimental results in the paper include the following:

* Table 2 compares our method to 7 other methods on the SWE dataset from PDEbench, a scientific dataset.

* Table 3 compares our ADRNet to 4 other methods on the CloudCast dataset, which is a real-world dataset for spatio-temporal climate prediction.

* Table 4 compares our method to FNOs on long-range predictions for data from PDEbench. In all these comparisons, our method performs better, and in some cases much better (**a 30% improvement over SOTA on this dataset**).

* Table 5 compares our ADRNet to 8 different methods on the moving MNIST dataset. As discussed in Section 4.1, this dataset contains movements of digits and is widely used to demonstrate spatio-temporal capabilities in video prediction. Our results in Table 5 show that ADRNet is the second-best among leading methods.

* Table 6 compares our method to 6 other methods on KITTI, a real-world natural-images dataset.
From Table 6, we see that our ADRNet is the leading method in 3 out of 4 of the considered cases and is 3rd best on the remaining metric. * We provide an ablation study on the impact of learning to predict various numbers of future frames ahead in Tables 4 and 10, which shows the effectiveness of ADRNet for problems that require a large receptive field. * We provide results on two additional datasets (KTH Action and TaxiBJ), as well as training and validation curves, all reported in Appendix A. Overall, we have experimented with **6 different datasets with different prediction tasks**. Note that these datasets are very diverse. One is synthetic (Moving MNIST), one is scientific (PDEbench), one is atmospheric (satellite images in CloudCast), one is comprised of natural images (KITTI), one contains real-world videos (KTH Action), and the last contains GPS spatio-temporal information (TaxiBJ). Thus, we have experimented with datasets that have very different properties. **In addition**, in order to further demonstrate our ADRNet on scientific data, we have run an additional dataset from PDEbench, the Navier-Stokes equations. This is a large dataset, comprised of 21,000 images of size 512x512, and our ADRNet achieves 2nd place among multiple methods, casting it among the SOTA methods, as shown in the table below: | Method | UNET | FNO | MPP-AVIT-TI | MPP-AVIT-S | MPP-AVIT-B | MPP-AVIT-L | ADRNet (Ours) | |--------|------|-------|-------------|------------|------------|------------|---------------| | nMSE (lower is better) | 1.67 | 0.243 | 0.0312 | 0.0213 | 0.0172 | 0.0142 | 0.0168 | We visualize the results in Figure 1 in the attached rebuttal PDF. All the results and discussions were also added to our revised paper. *** Lastly, we hope that the reviewers find that we have adequately addressed their concerns, and we would be grateful if they would consider reassessing their ratings. 
We are also open to further discussion and welcome any additional input. Pdf: /pdf/e80490c63365cc720c357ddf83332147e9cab786.pdf
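As a hedged illustration of the semi-Lagrangian push-forward operator that the global response credits ADRNet with using for advection: a single semi-Lagrangian step traces characteristics backward and interpolates the field at the departure points. This sketch is ours, not the authors' implementation; the 1-D periodic setup, function name, and arguments are all our own assumptions.

```python
import numpy as np

def semi_lagrangian_advect(u, v, dx, dt):
    """One semi-Lagrangian advection step on a 1-D periodic grid:
    trace departure points x - v*dt backward along characteristics,
    then linearly interpolate u at those points."""
    n = len(u)
    x = np.arange(n) * dx
    x_dep = (x - v * dt) % (n * dx)          # backward-traced departure points
    i0 = np.floor(x_dep / dx).astype(int) % n
    i1 = (i0 + 1) % n
    w = (x_dep / dx) - np.floor(x_dep / dx)  # linear interpolation weight
    return (1 - w) * u[i0] + w * u[i1]

# Shifting a bump by exactly one grid cell reproduces np.roll.
u = np.zeros(16)
u[4] = 1.0
u_new = semi_lagrangian_advect(u, v=1.0, dx=1.0, dt=1.0)
```

The appeal of this operator in a network layer is that it is unconditionally stable for any time step, unlike explicit upwind advection stencils.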
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Unified Graph Augmentations for Generalized Contrastive Learning on Graphs
Accept (poster)
Summary: This paper investigates the generality of graph augmentations across various types of graphs and tasks. Firstly, the paper presents a new interpretation that unifies the various graph augmentations into local attribute modifications of each node from a basic message-passing perspective. Then, the paper develops a novel graph augmentation module that alters node attributes to simulate the augmentation effects. Utilizing this module, the paper presents a novel framework for graph contrastive learning, where the balance between consistency and diversity across different augmented views is carefully considered. Finally, it provides a theoretical understanding of the generality of the proposed framework. Extensive evaluations demonstrate its superior performance on diverse datasets and tasks. Strengths: 1) This paper is well-organized and well-written. 2) The motivation of unifying graph augmentations from a message-passing view is interesting. 3) The proposed framework is simple and efficient with solid theoretical guarantee. 4) Experiments across various types of graphs and tasks demonstrate the superior efficacy and efficiency of the proposed method in comparison to existing works. Weaknesses: 1) There are a few typos and grammar mistakes in the paper that need fixing. 2) The theoretical analysis requires intuitive explanations. While the authors offer the proof of generality, they should elucidate how the theorem relates to the superior performance. 3) The concepts of consistency and diversity deserve a more comprehensive explanation. Additionally, how do the definitions of consistency and diversity across augmentations in self-supervised learning scenarios differ from those in semi-supervised learning scenarios? 4) The connection between the number of AC vectors and model performance is not clear. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to Weaknesses. 
Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q1. There are a few typos and grammar mistakes in the paper that need fixing.*** R1. Thanks for your thoughtful reminder. We will polish our manuscript to elevate its quality and clarity. --- ***Q2. While the authors offer the proof of generality, they should elucidate how the theorem relates to the superior performance.*** R2. An intuitive explanation is that generality improves the flexibility of the augmentation, which diverse graph tasks need. Graphs and their corresponding tasks possess diverse characteristics and hence need different augmentation strategies to meet their diverse and complicated requirements. However, existing graph augmentations, whether heuristic or learnable, are limited to selecting from discrete candidate sets, e.g., node or edge dropping and attribute masking; thus, some flexible and continuous augmentations can NOT be achieved to satisfy these diverse needs. Fortunately, the proposed GA module is endowed with theoretical versatility, enabling it to be equivalent to any flexible graph augmentation. Therefore, equipped with this versatile module, the proposed model can be flexibly tailored to meet the diverse GA demands across different tasks. This versatility and flexibility have propelled our framework to outstanding performance across various datasets and tasks, outperforming existing GCL baselines. --- ***Q3. The concepts of consistency and diversity deserve a more comprehensive explanation.*** R3. **Consistency** means that the augmented graph maintains the underlying structure and key features of the original graph. Therefore, GAs should minimally impact the similarity between the representations from different augmented graphs to preserve the intrinsic semantic integrity of samples. In contrast, **diversity** indicates that the augmented graphs originate from diverse distributions. 
Thus, another objective of GAs is to minimize the overlap between augmented graphs, ensuring the model does not become overly fixated on the specific features of a single distribution. --- ***Q4. How do the definitions of consistency and diversity across augmentations in self-supervised learning scenarios differ from those in semi-supervised learning scenarios?*** R4. The distinction in the concepts of consistency and diversity between self-supervised and semi-supervised learning scenarios primarily stems from the availability of supervision signals. 1. **Semi-supervised learning** - Consistency: It is often defined as the extent to which augmented graphs align with the inherent patterns of the labeled data. - Diversity: It is often defined as the difference in distribution between the augmented and input graphs. 2. **Self-supervised learning** - Consistency: It is often defined as the extent to which various augmented graphs preserve the essential characteristics of the input graph. - Diversity: It is often defined as the distributional differences among the various augmented graphs. --- ***Q5. The connection between the number of AC vectors and model performance.*** R5. We recognize your concerns about the selection of $k$, which is a key hyperparameter in the proposed GOUDA. The proposed framework sets $k$ to a small number, independent of the size of the graph. The submitted manuscript presents an experimental assessment of the impact of varying $k$ on model performance, as shown in Figure 7, where $k$ is selected from the set {1, 5, 10, 20}. Our observations indicate that GOUDA is relatively insensitive to the hyperparameter $k$, as explained in Section 4.2. Therefore, given GOUDA's robustness to changes in $k$, one need not be overly concerned with its precise value. --- Rebuttal 2: Title: Response to authors Comment: Thank you for the authors' detailed response. 
I am satisfied with the authors' response and revised manuscript, so I would like to increase my score from 7 to 8.
Summary: This paper explores the characteristics of local attribute modification in current graph contrastive learning methods. It then integrates diverse augmentation strategies with attribute learning into a unified framework. This unified approach introduces a novel and straightforward graph contrastive learning framework that establishes consistency between embeddings and ensures diversity through augmentation. Specifically, diversity is enforced using the HSIC term. The proposed framework offers two main advantages: 1) high efficiency and 2) universally learnable augmentation. Its effectiveness is validated across graph-level and node-level tasks, demonstrating its superiority. Strengths: - The exploration of graph contrastive learning mechanisms from a spatial perspective is insightful. - The unified augmentation proposed and the subsequent framework are both straightforward and efficient. - The use of shared AC and HSIC is technically robust. - The experimental evaluations provide convincing evidence. Weaknesses: - The symbols $\gets$ used in many equations, such as Eq. (4), should be changed to $\to$ for convenience. - Some experiments require further explanation. It's unclear why the proposed framework demonstrates robustness to topology and attribute noises. - Some hyper-parameters remain unverified. It's uncertain whether the parameter $\varepsilon$ in Eq. (7) significantly affects performance. - As far as I know, the HSIC term is computationally intensive. Does this affect efficiency? - Descriptions from Lines 143 to 153 are difficult to understand. - Reference [33] is based on adversarial training rather than adversarial attack. - Some commas are missing after equations, for instance, in Eq. (9). Technical Quality: 3 Clarity: 3 Questions for Authors: See above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q1. The symbols $\leftarrow$ used in many equations, such as Eq. (4), should be changed to $\rightarrow$ for convenience.*** R1. Following your advice, we will revise the equations to replace the symbol $\leftarrow$ with $\rightarrow$ to enhance readability. --- ***Q2. It's unclear why the proposed framework demonstrates robustness to topology and attribute noises.*** R2. Before we explain the robustness advantages of our proposed framework, which employs learnable Graph Augmentation (GA), over the baseline models that utilize static and random GAs, we first describe the impact of noise attacks on graphs. Specifically, noise attacks can destroy the semantic subgraphs crucial for predictions, thereby inducing distortions in the learned representations. Random GAs struggle to preserve these information-rich subgraphs and may damage them further, exacerbating the semantic bias. In contrast, the proposed learnable GAs, trained with a relevant proxy task such as the contrastive loss, can preserve such subgraphs and hence mitigate the semantic bias. --- ***Q3. It's uncertain whether the parameter $\varepsilon$ in Eq. (7) significantly affects performance.*** R3. Following your suggestion, we verify the impact of the hyper-parameter $\varepsilon$ on model performance. $\varepsilon$ is the threshold below which elements of $B$ are suppressed to zero. Thus, to eliminate bias due to network size, $\varepsilon$ is not freely tuned. Instead, it is set as the output of the selection function, i.e., $\varepsilon = \text{selection}(B, s)$, which estimates this threshold. $B$ represents the matrix of propagation weights from AC vectors to nodes, and $s$ denotes the proportion of the largest elements retained. $s$ is chosen from the set {0.2, 0.4, 0.6, 0.8} in the experiments. The impact of $s$ on performance is shown in the following table. It can be observed that $s$ does not significantly affect model performance. 
| GOUDA-IF | 0.2 | 0.4| 0.6 | 0.8 | |:--------| :---------:|:--------:|:--------:|:--------:| | Cora | 85.29 | 84.71 | 84.19 | 86.11 | | CiteSeer | 74.55 | 73.20 | 73.11 | 73.62 | | PubMed | 86.96 | 87.25 | 87.55 | 87.05 | | GOUDA-BT | 0.2 | 0.4| 0.6 | 0.8 | |:--------| :---------:|:--------:|:--------:|:--------:| | Cora | 85.07 | 84.56 | 85.88 | 85.99 | | CiteSeer | 74.34 | 74.47 | 73.98 | 73.41 | | PubMed | 87.59 | 86.94 | 86.76 | 87.11 | --- ***Q4. As far as I know, the HSIC term is computationally intensive. Does this affect efficiency?*** R4. There may be some misunderstandings. The HSIC term has a quadratic time complexity of $O(N^2)$ for the number of samples $N$. However, this complexity is mitigated to $O(K^2)$ in the proposed framework since HSIC is applied only to $K$ AC vectors instead of the entire node set. The proposed framework sets $K$ as a small number, which is independent of the size of the graph. Therefore, the employment of the HSIC term does NOT cause high computation in the proposed framework. --- ***Q5. Descriptions from Lines 143 to 153 are difficult to understand.*** R5. We have carefully reviewed the content and have reorganized the passage for better clarity and comprehension. The passage now presents as follows. (1) Edge augmentation is to add or remove the augmented edges, which corresponds to inserting or dropping the nodes connected by these edges in the neighborhood of the impacted nodes. The shown edge removal will lead to dropping certain 2-hop neighbors (nodes 1, 4, 7, and 8) of node 0 during the aggregation phase. (2) Attribute augmentation is to replace the attributes of the impacted nodes with the altered attributes, which affects all nodes connected to them. The shown attribute augmentation can be seen as modifying the attributes of the augmented neighbors (nodes 1, 3, 4, 5, 6, 7, and 8) of node 0 during the aggregation phase. 
(3) Subgraph augmentation is to modify specific subsets of the input graph (including its edges and attributes), which can also be seen as a perturbation of the neighborhoods of impacted nodes. The shown subgraph augmentation will lead to the removal of nodes 2, 4, 6, and 8 from the 2-hop neighborhood of node 0 during the aggregation phase. Furthermore, node augmentation represents a specific case of subgraph augmentation, with the subset limited to a single node, thus rendering the aforementioned conclusion applicable to it. --- ***Q6. Reference [33] is based on adversarial training rather than adversarial attack.*** R6. Thank you for the meticulous correction. We will correct the described methodology of Reference [33] to adversarial training. --- ***Q7. Some commas are missing after equations, for instance, in Eq. (9).*** R7. Thank you for your attention to detail. We will carefully check and comprehensively polish the manuscript to make sure that all formulas are followed by the correct punctuation. --- Rebuttal Comment 1.1: Comment: Thank you for the response, which solves most of my concerns, and I will maintain my score on this paper.
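The efficiency point in R4 above — HSIC applied to $K$ AC vectors rather than all $N$ nodes — can be made concrete with the standard biased HSIC estimator $\mathrm{tr}(KHLH)/(n-1)^2$ with RBF kernels. This is a generic illustration under our own naming and kernel choice, not the paper's code.

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased HSIC estimator with RBF kernels: tr(K H L H) / (n-1)^2.
    Cost is quadratic in the number of rows, so applying it to a few
    AC vectors instead of all N nodes keeps it cheap."""
    n = X.shape[0]
    def rbf(A):
        sq = np.sum(A ** 2, axis=1)
        D = sq[:, None] + sq[None, :] - 2 * A @ A.T   # pairwise sq. distances
        return np.exp(-D / (2 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n               # centering matrix
    K, L = rbf(X), rbf(Y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
dep = hsic(X, X)                          # identical views: strong dependence
indep = hsic(X, rng.normal(size=(20, 4))) # independent samples: near zero
```

Minimizing such a term between augmented views is one way to enforce the diversity objective discussed above, since HSIC vanishes only under (kernel-detectable) independence.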
Summary: The paper reconsiders the formulation of graph augmentations in graph contrastive learning, introducing a novel perspective on GA through message passing. The proposed UGA framework interprets graph augmentations as mechanisms for aggregation and propagation between nodes, highlighting the significance of local aggregation and propagation within GA. The effectiveness of the proposed GOUDA framework is demonstrated in experimental evaluations. Strengths: - The formulation of GA from the perspective of message passing is intriguing. Construing GA as aggregation and propagation among neighbors adds significant flexibility. - The proposed GOUDA framework demonstrates effectiveness and robustness. - The paper exhibits superiority across various graph-based tasks, including node classification, node clustering, and graph classification. These results indicate that GOUDA is not limited to specific tasks. - The computational complexity is significantly reduced over a wide range. - Well-written. Weaknesses: - The paper lacks presentation on the interpretability of learned graph augmentation vectors. Given that graph augmentation can occur at nodes, edges, attributes, and sub-graphs, an explanation of how these learned vectors are interpreted is necessary. - The paper should include a comparison of time consumption during experiments. Technical Quality: 4 Clarity: 4 Questions for Authors: - Can the proposed GOUDA, especially the Graph Augmentation Vectors (GAVs), be integrated with other Graph Contrastive Learning (GCL) methods? - How to assess the contribution of the proposed GOUDA framework to Graph Contrastive Learning (GCL)? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The paper requires interpretability and should clarify the contribution to GCL. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q1. The paper lacks presentation on the interpretability of learned graph augmentation vectors.*** R1. To provide an interpretation for Graph Augmentation (GA), we first introduce interpretable explanations for GNNs (i.e., graph encoders) on graphs [1]. Within the encoder-relevant computation graph, i.e., the k-hop subgraph, a subgraph that is informative and most influential on the label is specified as an explanation. For a given node, the computation graph corresponds to its $k$-hop neighborhood; for a given graph, it is the entire graph. Based on this, GAs can be interpreted as techniques that seek to preserve the information-rich subgraphs. Our GA method, which interpolates a batch of Augmentation-Centered (AC) vectors into the input graph to emulate the effect of GAs, is based on the observation that GA is equivalent to modifying the node attributes within the computation graphs of nodes, that is, local attribute perturbation. During the weight optimization guided by an error metric, these AC vectors adaptively capture task-related perturbation information from the computation graphs of nodes. Therefore, the learned AC vectors can be interpreted as representations of subgraphs in the computation graph of nodes. [1] GNNExplainer: Generating Explanations for Graph Neural Networks. NeurIPS 2019 --- ***Q2. The paper should include a comparison of time consumption during experiments.*** R2. We understand your concerns about the complexity of the proposed framework. We have conducted a comprehensive comparison of the running time consumption of GCLs, as shown in Figure 3 of our manuscript. The accuracy and running time for each training epoch are provided in the following table for your review. 
| Accuracy(%) / Time(s) | IMDB-BINARY | IMDB-MULTI | |:--------| :---------:|:--------:| | Infograph | 73.03 / 0.82 | 49.69 / 0.89 | | GraphCL | 71.14 / 0.49 | 48.58 / 0.56 | | JOAO | 71.60 / 1.88 | 49.20 / 1.79 | | AD-GCL | 71.49 / 1.31 | 50.36 / 1.44 | | MVGRL | 74.20 / 1.16 | 51.20 / 1.02 | | GOUDA-IF (Ours) | 75.22 / 0.41 | 52.43 / 0.47 | | GOUDA-BT (Ours) | 76.80 / 0.46 | 53.05 / 0.55 | It is evident from the table that GOUDA not only outperforms the baselines but also consumes less time per epoch, demonstrating that GOUDA is an effective yet lightweight framework. The detailed analysis has been presented in Section 4.1 of our submitted manuscript. --- ***Q3. Can the proposed GOUDA, especially the Graph Augmentation Vectors (GAVs), be integrated with other Graph Contrastive Learning (GCL) methods?*** R3. Of course. As a general framework, GOUDA possesses high compatibility with most GCLs, enabling integration through slight modifications of the graph encoder or the loss function. In our submitted manuscript, GOUDA is implemented as two models: GOUDA-IF, which utilizes InfoNCE loss, and GOUDA-BT, which employs BarlowTwins loss. Extensive experiments (such as graph classification in the table of R2) on various datasets and tasks demonstrate the outstanding performances of GOUDA-IF and GOUDA-BT, highlighting the broad compatibility and effectiveness of the framework GOUDA. --- ***Q4. How to assess the contribution of the proposed GOUDA framework to Graph Contrastive Learning (GCL)?*** R4. We would like to illustrate the contribution of the proposed framework GOUDA to GCL from the following aspects: 1. **A novel perspective of graph augmentations**. GOUDA is designed with a thorough analysis of the mechanisms of existing Graph Augmentations (GAs) in GCLs. We provide a unified interpretation of these mechanisms from a message-passing perspective, offering new insights into the GCL field. 2. **A general graph augmentation module**. 
GOUDA presents a lightweight yet effective GA module, i.e., UGA, that achieves a theoretical unification of diverse GAs, providing an innovative GA strategy to GCLs. 3. **New SOTA**. The implemented models, GOUDA-IF and GOUDA-BT, have achieved a new state-of-the-art performance on many tasks, including node/graph classification. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My concerns have been addressed, so I have decided to maintain my original score.
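R3 above notes that GOUDA-IF utilizes the InfoNCE loss. As a hedged illustration, the generic InfoNCE objective over two views can be sketched as below; this is the standard contrastive loss under our own naming and temperature choice, not the authors' implementation.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Generic InfoNCE over two views: matching rows of z1/z2 are
    positives; all other rows in the batch serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # (n, n) similarity logits
    sim -= sim.max(axis=1, keepdims=True)        # numerical stability
    p = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    return -np.log(np.diag(p)).mean()            # cross-entropy on positives

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 8))
aligned = info_nce(z, z)        # identical views: positives dominate
shuffled = info_nce(z, z[::-1]) # mismatched positives: larger loss
```

A framework like GOUDA can swap this term for another consistency loss (e.g., Barlow Twins) without touching the augmentation module, which is what the two reported variants suggest.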
Summary: The paper introduces GOUDA, a versatile framework for Graph Contrastive Learning that addresses the limitations of existing graph augmentation techniques. GOUDA proposes a unified graph augmentation module capable of simulating various explicit graph augmentations, enhancing the generality and efficiency of GCL models. By incorporating both widely-adopted contrastive losses and a novel independence loss, GOUDA ensures consistency and diversity across different augmentations. Strengths: 1. The paper addresses a crucial problem in graph contrastive learning: the specificity, complexity, and incompleteness of current graph augmentation techniques. 2. The paper conducts experiments on multiple datasets and compares the results with various baseline models. Weaknesses: 1. Some of the mathematical derivations and formulas are complex and may be difficult for readers to follow. Although the notations are defined, the paper could provide more intuitive explanations and step-by-step derivations to enhance understanding. 2. While authors have compared with many GCL methods, the comparison with GA techniques like simple node augmentation is missing. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How to interpolate the learned AC vectors? 2. How is k selected in line 181? 3. Graph augmentation is based on the assumption that the augmentation does not change the original class label, how does UGA guarantee this? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q1. The paper could provide more intuitive explanations and step-by-step derivations to enhance understanding.*** R1. Thanks for your valuable suggestion. We will further clarify the mathematical formulas with intuitive explanations in the revised manuscript. Please also refer to the Appendix for detailed derivations. --- ***Q2. While authors have compared with many GCL methods, the comparison with GA techniques like simple node augmentation is missing.*** R2. Simple node augmentation has been compared in our experiments, since it is the augmentation strategy employed by many existing GCL methods, such as GraphCL [1]. Specifically, GraphCL employs random node dropping, a prevalent and simple node augmentation strategy, to construct augmented graphs. Following your suggestion, we will clarify this in the revised version. Figure 3 shows these comparisons from both effectiveness and efficiency perspectives. For ease of review, the total accuracy and the running time per training epoch are given as follows. | Accuracy(%) / Time(s) | IMDB-BINARY | IMDB-MULTI | |:--------| :---------:|:--------:| | GraphCL | 71.14 / 0.49 | 48.58 / 0.56 | | GOUDA-IF (Ours) | 75.22 / 0.41 | 52.43 / 0.47 | | GOUDA-BT (Ours) | 76.80 / 0.46 | 53.05 / 0.55 | It is evident that GOUDA not only achieves superior performance but also possesses efficiency similar to GraphCL. This highlights that GOUDA is a lightweight yet highly effective framework. [1] Graph Contrastive Learning with Augmentations. NeurIPS 2020 --- ***Q3. How to interpolate the learned AC vectors?*** R3. The AC vectors are interpolated via a graph-based strategy. Besides the nodes in the original graph, AC vectors are treated as another type of node in the new graph. Additional connections (i.e., edges) from AC nodes to the original nodes are constructed according to feature similarity, as formulated in Eq. 6. 
This interpolation strategy both reduces the number of tuning parameters for graph augmentations and improves flexibility by dynamically adjusting the varying importance of relationships. --- ***Q4. How is k selected in line 181?*** R4. We recognize your concerns about the selection of $k$, which is a key hyperparameter in the proposed GOUDA. The proposed framework sets $k$ to a small number, independent of the size of the graph. The submitted manuscript presents an experimental assessment of the impact of varying $k$ on model performance, as shown in Figure 7, where $k$ is selected from the set {1, 5, 10, 20}. Our observations indicate that GOUDA is relatively insensitive to the hyperparameter $k$, as explained in Section 4.2. Therefore, given GOUDA's robustness to changes in $k$, one need not be overly concerned with its precise value. --- ***Q5. Graph augmentation is based on the assumption that the augmentation does not change the original class label, how does UGA guarantee this?*** R5. Note that NO data augmentation can guarantee semantic invariance in self-supervised learning, since label information is unavailable. All augmentation-based graph contrastive learning methods rely on the assumption of local smoothness in the space of graphs: slight changes between original and augmented graphs do not change the semantics, i.e., the label, of the graphs. Thus, the proposed UGA module, which unifies the four types of graph augmentations, likewise cannot guarantee the semantic invariance of augmentation. Fortunately, the learnable nature of UGA prevents the semantic shift caused by the random operations in previous GAs, and invariance can be further enhanced via the consistency constraints. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal; I have adjusted my score based on the response. --- Reply to Comment 1.1.1: Comment: Thanks for your feedback. We hope that our response appropriately answers your questions. 
We are willing to provide further clarification if you have any additional concerns.
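The graph-based interpolation of AC vectors described in R3 above (similarity-based edges from AC nodes to graph nodes, Eq. 6, thresholded by $\varepsilon = \text{selection}(B, s)$, Eq. 7) can be sketched as follows. The use of cosine similarity and the helper names are our own assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def selection(B, s):
    """Return the threshold that keeps the top s-fraction of entries of B."""
    flat = np.sort(B.ravel())[::-1]
    k = max(1, int(s * flat.size))
    return flat[k - 1]

def interpolate_ac(X, ac, s=0.4):
    """Connect AC vectors to nodes by cosine similarity, then zero out
    propagation weights below the selection threshold (cf. Eqs. 6-7)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    An = ac / np.linalg.norm(ac, axis=1, keepdims=True)
    B = Xn @ An.T                      # propagation weights, (nodes, k)
    eps = selection(B, s)
    return np.where(B >= eps, B, 0.0)  # sparse node-to-AC connections

rng = np.random.default_rng(1)
B = interpolate_ac(rng.normal(size=(10, 8)), rng.normal(size=(3, 8)), s=0.4)
```

The resulting sparse weight matrix is what lets a handful of AC vectors perturb each node's neighborhood during message passing without per-node tuning.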
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Medformer: A Multi-Granularity Patching Transformer for Medical Time-Series Classification
Accept (poster)
Summary: This paper introduces Medformer, a transformer variant designed to learn complex dynamics from medical signals. This is achieved in three key ways: 1. Cross-channel patching for token embedding; 2. Multi-length patching for coarse and fine feature processing; 3. Multi-granularity self-attention for integrating information granularities; The proposed model is then extensively validated through comparison to ten baselines on five datasets. Strengths: - Originality: The model proposed in this paper is the first to integrate both cross-channel patching with varying patch lengths as an embedding method, and a new multi-granularity self-attention mechanism. - Quality: The performed tests are extensive (five different datasets, ten competing baselines). Reproducibility is largely facilitated as all datasets and the full code are publicly available. To validate all components, ablation studies are provided. Since the introduction of multi-granularity could seem like it would increase complexity, care has been put in so that this is not the case (e.g. with the two-stage router-based self-attention). - Clarity: The paper is well-written and easy to follow, and the figures are intelligible and informative. All equations and steps are clearly explained. - Significance: Improving diagnosis performance is doubly useful in the medical domain, as it directly aids healthcare, and indirectly eases the adoption of models by clinicians. On the technical side, this paper opens the door for multi-granularity transformers able to learn from both fine and coarse-grained features. Weaknesses: This paper suffers from one main weakness: processing medical time series without providing explainability studies undermines its usability and transparency. This could be addressed in the future by adapting one of the techniques developed to interpret predictions made by transformer models. The provided code could also benefit from more comments, for example in the `exp/*` files. 
Technical Quality: 4 Clarity: 4 Questions for Authors: - For clarification, does the random seed only affect the initialization of the model? Are the training/validation/test splits the same for all five runs? - What was your reasoning behind the different augmentation techniques used depending on the dataset? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have addressed the technical limitations of their work, albeit not in the main body of the article. However, they have not addressed the societal impact of their model, were it to be adopted in a healthcare setting: steps should be taken to either prevent or disclose clearly whether demographic biases (for example) found in the data are also found in the model's predictions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback on our work! We appreciate you carefully reading our method, equations, figures, and even the code! We are happy to receive your endorsement and glad that you find our paper easy to follow! Again, thank you for your comments. Here are the responses to your questions and concerns. *** **Q1**: Processing medical time series should provide explainability and transparency. **A1**: Thanks for raising this concern. We agree that explainability is crucial to medical tasks; therefore, we further clarify how we can provide explanations from our trained model with an example from the TDBrain dataset. This is a dataset of EEG signals for predicting Parkinson's disease, which is suitable for exploring whether our model utilizes the correct channels' information and the signals at the right frequency for a reasonable diagnosis. To find the important channels contributing to our model's prediction, we first mask out the rest of the brain regions and only use certain combinations of them to perform predictions. Important channels contribute most to the prediction accuracy, as shown by Fig R.1 (b) in the attached PDF. In Fig R.1 (b), we find that the central sulcus contributes most to the prediction accuracy, followed by the frontal, occipital, and parietal lobes, and combinations of regions are generally better than a single region. This aligns with the central sulcus's role in movement and its close anatomical relationship with the basal ganglia, the primary brain region indicating Parkinson's disease [9], meaning that our model has successfully learned to pick up information from the brain regions that matter most. Furthermore, Fig R.1 (a) shows how different routers attend to different segments of the same signal (deeper red means that the router attends more to this patch). 
This demonstrates that, other than capturing features at 32Hz and 16Hz scaffolded by the patch lengths $L=8$ and $L=16$ for 256Hz EEG signals, the model also focuses on brain waves at 8~10Hz, which is why it "downsamples" the tokens with attention scores in stripe patterns. This precisely aligns with the interval at which the EEG signals differ the most between healthy individuals and patients [10], and is therefore crucial in the diagnosis of Parkinson's disease. In conclusion, our method is capable of focusing on important channels as well as time-frequency information that is well supported by the literature, and may inspire future studies and applications. *** **Q2**: The provided code could benefit from more comments. **A2**: Thanks for raising this concern. We will update and add more comments to our code in the next version. *** **Q3**: Does the random seed only affect the initialization of the model? Are the training/validation/test splits the same for all five runs? **A3**: Yes. The random seed only affects the model's initialization, while the train-test splits remain consistent across the 5 runs. To satisfy the subject-independent setup, where the train-test split is based on subjects, we fixed the subjects used in each set. This means that the subjects included in the training, validation, and test sets remain the same, and only the model's initialization is randomized for each run. *** **Q4**: What was your reasoning behind the different augmentation techniques used depending on the dataset? **A4**: We did some preliminary analysis and evaluations to choose augmentation methods. Our goal is to select augmentation methods that do not change the label for classification while introducing some variance in the samples. In general, timestamp masking is the safest method for classification tasks, followed by jittering. 
Timestamp masking involves randomly masking certain timestamps in the time domain without damaging the semantic information. Jittering, when applied with a low ratio of random noise, also preserves the semantic information in the raw data. We also experimented with frequency band masking but found that it sometimes negatively impacted performance. We believe this could be because the semantic information in some datasets is concentrated in specific frequency bands, which this type of augmentation may disrupt. *** **Q5**: Considering the societal impact of their model, steps should be taken to prevent the disclosure of demographic information in the dataset. **A5**: Thank you for raising this concern. During data preprocessing, we manually removed the demographic information from the dataset and did not incorporate it into model training. We strongly agree that protecting sensitive information is important when dealing with medical data. To further protect demographic information, we may incorporate techniques such as differential privacy in future work. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I appreciate your demonstration that Medformer can be explainable, and the clarifications about augmentation strategies and demographic information. I hope you will find a way to include this and your additional benchmarks in your final paper. My rating remains unchanged and positive (Accept); I do not think that a method being primarily applicable to clinical data and not general time series is grounds for rejection, since the "Machine learning for healthcare" area of NeurIPS exists for this purpose. --- Reply to Comment 1.1.1: Comment: Thank you again for your endorsement! We will include the explainability results and new experiments in our final paper.
Summary: The authors have created a model that seems to perform especially well for high-frequency waveform classification. They have benchmarked their model over several datasets, and its overall rank is higher than that of other models. Strengths: The model successfully combines three (seemingly pre-existing) ideas. The model seems to perform reasonably well in the provided benchmarks. The subject-dependent/independent task distinction is an interesting addition and shows that thought has been put into the design of the tasks. The code is provided together with some documentation, significantly improving reproducibility. Weaknesses: The limitations and discussion should at least be briefly discussed in the paper itself, not just in the appendix. The compared models, while plentiful, seem to be only transformer-based; it would be interesting to see a comparison to different architectures, especially since TCN is also mentioned in the related work (e.g., try the Mamba model https://arxiv.org/pdf/2312.00752). The improvements over existing transformer models seem marginal. The efficiency of the models should be benchmarked (e.g., training time). Table 3: some results are tied as they lie in the range of the standard deviation of the best model. It would be good to highlight these results as well. Line 724: Comparison misspelled. Technical Quality: 3 Clarity: 3 Questions for Authors: The authors use two types of vital signs to benchmark their model. Is there enough reason to assume the superior performance also translates to other vital signs and, more broadly, medical time-series prediction tasks? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Tasks could be extended to (cheap to collect) PPG signals. Additionally, one could think of a multi-modal setup, as we often have more (static or lower-frequency time-series) data on specific test subjects. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your suggestions on our paper! We are glad you are interested in our experimental design and have carefully reviewed our paper and code. Here are the detailed responses to your questions and concerns. *** **Q1**: The limitations should be discussed in the main paper, not the appendix. **A1**: Thanks for the suggestion. We will move the limitations to the main paper in our revised version. *** **Q2**: Baselines are all transformer-based methods. It would be interesting to compare with different architectures (e.g., try the Mamba model). **A2**: Thanks for raising this concern. We add three more baselines that are not transformer-based for comparison: TCN, ModernTCN, and Mamba. The results are presented in R-Table 1 in the PDF file attached to the general response. We evaluated our method and 6 baselines across the 3 new datasets and the previously used TDBrain dataset. Our method achieved the best F1 score on 3 datasets and the second-best on the remaining one. *** **Q3**: The improvements over existing methods seem marginal. Some results are tied as they lie in the range of the standard deviation of the best model. It would be good to highlight these results as well. **A3**: Thanks for raising this concern. PatchTST outperforms our method on the PTB-XL dataset by 0.6 in F1 score. As we observe from the ablation study, PTB-XL does not benefit much from cross-channel patching; we suspect the reason is the smaller number of channels (12) in PTB-XL. In other words, we believe our model is more suitable for datasets that exhibit stronger channel correlations. We will discuss and analyze the results in more depth in our revised version of the paper. *** **Q4**: The efficiency of the models should be benchmarked (e.g., training time). 
**A4**: We have included a table (Table R.2) in the attached PDF that presents the runtime performance of several older baselines known for their strong performance, as well as the three new baselines, on the newly added dataset FLAAP (13,123 samples, 10 classes). In general, CNN-based methods are fast, while Mamba is extremely slow for unknown reasons. We suspect this is because Mamba's new architecture does not match GPU hardware optimizations well. Our method achieves running times close to those of other transformer-based methods. *** **Q5**: Can the superior performance also extend to other vital signs and medical time-series prediction tasks? For example, PPG data and other multi-modal datasets. **A5**: Yes, we believe our model is adaptable to various types of medical time series, particularly those with multi-channel correlations. For multi-modal learning of medical time series collected simultaneously (e.g., PPG, EEG, fNIRS, ECG), we see three potential research directions for applying our method: 1) Interpolating and Concatenating: interpolating all modalities to the same length and vertically concatenating them into a multi-channel time series. 2) Individual Representation Learning: using separate models to learn individual representations for each modality, then concatenating these representations for downstream tasks. 3) Granularity-based Modality Correspondence: applying different granularities to correspond to different modalities during the learning process. We plan to explore these directions in future work. Additionally, we train our method on a new dataset, MIMIC-PERform-AF, which uses PPG and ECG data to distinguish atrial fibrillation patients (20,400 samples, 2 classes). The results, presented in R-Table 1, show that our method outperforms the second-best result by approximately 3% in F1 score. --- Rebuttal 2: Title: Main paper revision Comment: Dear Authors, Thank you for addressing my concerns with additional material. 
However, a revised version of the paper must be provided to fully address the points in my and the other reviewers' reviews. Thank you. --- Rebuttal Comment 2.1: Comment: Dear Reviewer, Thank you for your feedback on our additional material. While we would be happy to provide a revised version of the entire paper, according to the rebuttal rules of NeurIPS 2024, we are not allowed to upload a revision of our paper until the camera-ready stage. We are only allowed to upload one page of PDF in the global response, including tables and figures. For your convenience, we quote some FAQs from the **NeurIPS 2024 FAQ for Authors** in the **Reviewing/Discussion process**: * **Can we upload a revision of our paper during the rebuttal/discussion period?** No revisions are allowed until the camera-ready stage. * **Can we upload a revision of the supplementary materials during the rebuttal/discussion period?** No. You may revise it for the camera-ready stage. * **What is the length limit in the rebuttal phase?** You can submit a rebuttal of up to 6000 characters per review, and one global rebuttal of up to 6000 characters. These are posted by clicking the "Rebuttal" and "Author Rebuttal" buttons. You can additionally add a one-page PDF with figures and tables. You can upload this PDF after you click the "Author Rebuttal" button. * **Where can I see my submitted PDF for the author-rebuttal?** There is a link to the PDF at the end of the global rebuttal. Could you use our current rebuttal for your assessment? We are happy to answer your additional questions. Thank you again!
Summary: This paper proposes a multi-granularity patching Transformer for medical time series classification. The proposed model, Medformer, leverages cross-channel patching to learn inter-channel correlations. It utilizes multi-granularity embedding for learning temporal patterns. Finally, it uses two-stage self-attention to aggregate information from multiple granularities. The experiments on 5 datasets show the effectiveness of Medformer. Strengths: 1. The paper is well structured and easy to follow. 2. The problem of medical time series classification is important. 3. The paper conducts several experiments on medical time series datasets and demonstrates the effectiveness of Medformer. Weaknesses: 1. The novelty of this paper is not enough. The idea of leveraging multi-granularity and multi-channel features has been studied in the time series field. 2. Time series classification has also been well studied. The paper should clarify and focus on the specific challenges of medical time series classification compared to general time series. 3. For general time series classification, several new SOTA methods have been released, such as Time-LLM, ModernTCN, and Lag-Llama. Such recent studies should be compared in the experiments. 4. The results of Medformer are not superior in several metrics. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does Medformer perform when applied to general time series classification? Such experiments could demonstrate the advantages of the model. 2. What are the new challenges of time series classification in medical datasets? How does Medformer resolve them? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your suggestions on our paper! We are happy you feel our paper is well-structured and easy to follow. We respond to each of your questions and concerns below. If you do not feel we have sufficiently justified a higher score, please let us know your remaining concerns and how we can improve. Thank you again! *** **Q1**: The idea of leveraging multi-granularity and multi-channel features has been studied in the time series field. **A1**: We agree that some methods have utilized multi-granularity or multi-channel features in the time series domain, as we compared or described in the related work. However, we are the first to combine multi-granularity with cross-channel patching to model cross-channel information across different levels of granularity. In particular, we design an embedding method for medical time series data, where different channels represent signals from different regions of the same organs (brain, heart, etc.). Tight correlations among the channels are evident and valuable from a medical perspective [4][5][6]. Therefore, our approach is tailored to capture these correlations effectively with one-step data embedding. In contrast, as we mentioned in the paper (lines 107-110), existing patching methods in time series transformers, such as those inherited from PatchTST [1][2][3], rely on single-channel patching. These methods are designed for time series forecasting in tasks like weather and energy prediction, where different channels may represent entirely different quantities and therefore follow a channel-independent strategy. Thus, our method deviates significantly from these methods and is a rational design for the specific datasets and challenges. *** **Q2**: The paper should focus on the challenges of medical time series classification. What are those challenges, and how does Medformer resolve them? 
**A2**: 1) **Strong correlations between channels and features across a wide frequency range motivate our design of multi-granularity cross-channel patching**: Medical time series data are collected from human subjects using sensors and electrodes, usually resulting in multi-channel and multi-modal data. As we discussed in our previous response, different channels in medical time series often represent activities in various regions of the same underlying organs (e.g., brain, heart) from a medical perspective. Therefore, strong inter-channel correlations exist naturally [4][5][6]. Moreover, biomarkers in these signals appear across a wide range of frequencies, differing by up to 1-2 orders of magnitude [6]. To utilize these features, we design multi-granularity cross-channel patching that enables one-step data embedding, followed by a two-stage intra-inter granularity self-attention mechanism to exploit these features further. 2) **The correlation of samples from the same subject encourages us to explore a subject-independent training setup**. As we described in the preliminaries section (lines 124-135), for medical time series collected for disease diagnosis tasks, the ultimate goal is to predict the label of a **subject** instead of a **sample** in order to diagnose disease. In this case, samples from unseen subjects should not be utilized in training, to follow a real-world scenario. Many existing works do not consider subject ID information [7][8]. To better simulate real-world conditions, we categorize data splitting strategies into two setups: subject-dependent and subject-independent. For subject-dependent, samples are randomly split into training, validation, and test sets. For subject-independent, we split subjects into these three sets, ensuring that samples from the same subject are exclusively contained within a single set (See Figure 2). 
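The subject-independent split described above can be sketched as follows. This is a minimal illustration in plain Python; the function and variable names are our own for exposition, not taken from the paper's code.

```python
import random

def subject_independent_split(subject_ids, train_frac=0.8, val_frac=0.1, seed=0):
    """Assign every sample index to train/val/test by its subject, so that
    no subject's samples are shared across sets."""
    subjects = sorted(set(subject_ids))
    random.Random(seed).shuffle(subjects)
    n_train = int(len(subjects) * train_frac)
    n_val = int(len(subjects) * val_frac)
    train_s = set(subjects[:n_train])
    val_s = set(subjects[n_train:n_train + n_val])
    split = {"train": [], "val": [], "test": []}
    for idx, subj in enumerate(subject_ids):
        if subj in train_s:
            split["train"].append(idx)
        elif subj in val_s:
            split["val"].append(idx)
        else:
            split["test"].append(idx)
    return split
```

A subject-dependent split, by contrast, would shuffle the sample indices directly, allowing one subject's samples to leak into both training and test sets.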
Tables 2 and 3 in the paper show how deceptively high results can be achieved with an improper setup, leading to ineffective models in real-world applications. *** **Q3**: Several new time series classification SOTA methods have been released, such as Time-LLM, ModernTCN, and Lag-Llama. These studies should be compared. **A3**: Thank you for recommending these exceptional works. We will review them and incorporate them into the related work section in our upcoming revision. We have included experiments for ModernTCN in Table R.1 of the attached PDF. Despite our best efforts, we could not obtain results from the two LLM-based models, Time-LLM and Lag-Llama, as they are primarily designed for time series forecasting. Fine-tuning these pre-trained models for classification tasks would require more computational resources than currently available to us. However, to ensure a comprehensive evaluation of our model, we add two additional methods, TCN [11] and Mamba [13], also in Table R.1. *** **Q4**: How does Medformer perform in general time series classification? **A4**: While our method is designed around the characteristics of medical time series, we believe it applies to general time series data, particularly multi-channel time series with intrinsic correlations among channels. To demonstrate this, we add 3 new datasets to our experiments: FLAAP, UCI-HAR, and MIMIC-PERform-AF. The results are presented in R-Table 1, attached to the general response PDF. We evaluated our method and 6 baselines across the 3 new datasets and the previously used TDBrain dataset. Our method achieved the best F1 score on 3 datasets and the second-best on the remaining one. *** **Q5**: Medformer's results are not superior in several metrics. **A5**: PatchTST outperforms our method on the PTB-XL dataset by 0.6 in F1 score. Based on the ablation study presented in Table 4, cross-channel patching does not provide as much benefit on PTB-XL as on other datasets. 
We suspect this might be caused by the smaller number of channels (12) in PTB-XL and weak channel correlations, which limit the advantages of our cross-channel patching. However, we believe the other mechanisms, such as intra-inter granularity self-attention, can be combined with PatchTST to improve the result on PTB-XL. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the rebuttal. The authors clarify the ideas and motivations of MedFormer and add more baselines. I have increased my score. --- Reply to Comment 1.1.1: Comment: Thank you for your reply and endorsement! --- Rebuttal 2: Comment: Dear reviewer vj5L: We would like to kindly remind you that the author-reviewer discussion will end in 1 day. Could you take a look at our rebuttal and let us know if it has addressed your concerns? As our paper is currently on the borderline, your input will be greatly appreciated. Best regards --- Rebuttal 3: Title: Please respond to the rebuttal of NeurIPS Submission7556! The response period ends in one day! Comment: Reviewer vj5L, Thank you for your service in reviewing for NeurIPS! Your work is not done, however, as part of a reviewer's responsibility is to engage with the authors during the discussion period. The authors of Submission 7556 (Medformer) have provided an extensive response to your review. Please read it and indicate the extent to which it addresses your concerns. Please indicate clearly which of the issues you raised are addressed and which are not, such that the authors have a final chance to reply. There is only one day left in the discussion period, so please do this ASAP! The AC for Submission 7556
Summary: In this paper, the authors introduced a multi-granularity patching transformer tailored specifically for medical time series classification. To leverage the characteristics of medical time series data, the model incorporated three unique features: cross-channel patching to leverage inter-channel correlations, multi-granularity embedding for capturing features at different scales, and two-stage (intra- and inter-granularity) self-attention for learning features and correlations within and among granularities. The model was well tested with several datasets and showed mixed results against other models. Strengths: Considering the characteristics of time series data in the medical domain, this approach is quite reasonable and shows strong results, especially on EEG datasets. Weaknesses: Because this model has been evaluated on very specific time series data, EEG and ECG, it is unclear whether this approach can be extended to many other time series data outside the medical domain. Technical Quality: 3 Clarity: 3 Questions for Authors: As mentioned in Appendix G, what domains or datasets should be tested next to test the ability of this approach? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Currently the proposed model was tested only in EEG and ECG domains, thus the scope is too narrow for the general audience of NeurIPS. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and concerns regarding our paper! We appreciate your thoughtful questions. Here are the responses to your question and concern. If you feel our response does not fully justify a higher score, please let us know how we can further improve our work. Thank you again for your consideration. *** **Q1**: Extension to general time series outside the medical domain. **A1**: Thank you for raising this concern. Our answer contains two parts. 1) We agree our method is designed for medical time series due to the inherent correlations among channels from a medical perspective [4][5][6]. However, we believe our method has the potential to be extended to other domains as well. Specifically, our approach is particularly well-suited for multi-channel time series data where theoretical correlations exist among different channels. Our multi-granularity cross-channel patching effectively embeds these channel correlations in a single step. Potential domains for application include human vital signs for health monitoring and time series data collected from mobile sensors for human activity recognition. 2) To further evaluate the generalizability of our method, we add three new datasets: two human activity recognition datasets, FLAAP (13,123 samples, 10 classes) and UCI-HAR (10,299 samples, 6 classes), and one multi-modal vital signs (PPG and ECG) dataset, MIMIC-PERform-AF (20,400 samples, 2 classes), for atrial fibrillation classification. See R-Table 1 in the PDF file attached to the general response for the results. In this table, we compare with three previous baselines with good performance so far, Crossformer, Reformer, and Transformer, and three new baselines, TCN, ModernTCN, and Mamba. We evaluate our method and the six baselines on the three new datasets and one previous dataset, TDBrain. Our method achieves the top-1 accuracy and F1 score on three datasets and the second-best on the remaining one. 
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal. The additional experiments clearly addressed my concerns and strongly demonstrated the advantage of the proposed method. Thus I updated my score. I look forward to reading the integrated final version. --- Reply to Comment 1.1.1: Comment: Thank you so much for your endorsement! We will integrate everything into the final version of the paper.
Rebuttal 1: Rebuttal: We appreciate the thoughtful feedback provided by all the reviewers. We thank reviewer ApjC for recognizing the novelty of our work in being the first to integrate cross-channel patching and a new multi-granularity self-attention mechanism. In response to the reviewers' insightful comments, we have conducted additional experiments (Table R.1 in the attached PDF), including three new baselines, TCN [11], ModernTCN [12], and Mamba [13], and three new datasets: two human activity recognition datasets, FLAAP (13,123 samples, 10 classes) [14] and UCI-HAR (10,299 samples, 6 classes) [15], and one multi-modal vital signs (PPG and ECG) dataset, MIMIC-PERform-AF (20,400 samples, 2 classes), for atrial fibrillation classification [16]. We will incorporate the newly added content into the next version of our paper. Thank you again for your suggestions and feedback! We have worked hard to refine our paper, and we sincerely hope our responses are informative and helpful. If you feel we have not sufficiently addressed your concerns to motivate increasing your score, we would love to hear from you further on what points of concern remain and how we can improve our work. Thank you again! Due to space limitations, we include **all the references** here in the global response and **all the new tables and figures** in the attached PDF file. All the new tables and figures start with **R** to distinguish them from the submitted paper version. The experimental results of the ablation study in R-Tables 1 and 2 and R-Figure 1 (a) and (b) can also be found in the attached PDF file. *** ## **References** [1] Yuqi Nie, Nam H Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. ICLR, 2023. [2] Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. ICLR, 2023. 
[3] Yitian Zhang, Liheng Ma, Soumyasundar Pal, Yingxue Zhang, and Mark Coates. Multi-resolution time-series transformer for long-term forecasting. In International Conference on Artificial Intelligence and Statistics, pages 4222-4230. PMLR, 2024. [4] Vincent Bazinet, Justine Y Hansen, and Bratislav Misic. Towards a biologically annotated brain connectome. Nature Reviews Neuroscience, 24(12):747-760, 2023. [5] Laura Sophie Imperatori, et al. EEG functional connectivity metrics wPLI and wSMI account for distinct types of brain functional interactions. Scientific Reports, 9(1):8894, 2019. [6] A. Singh and S. Dandapat. Two-dimensional processing of multichannel ECG signals for efficient exploitation of inter and intra-channel correlation. In Advances in Communication and Computing, Lecture Notes in Electrical Engineering, vol 347. Springer, New Delhi, 2015. [7] Cosimo Ieracitano, Nadia Mammone, Alessia Bramanti, Amir Hussain, and Francesco C Morabito. A convolutional neural network approach for classification of dementia stages based on 2d-spectral representation of EEG recordings. Neurocomputing, 323:96-107, 2019. [8] Fangzhou Li, Shoya Matsumori, Naohiro Egawa, Shusuke Yoshimoto, Kotaro Yamashiro, Haruo Mizutani, Noriko Uchida, Atsuko Kokuryu, Akira Kuzuya, Ryosuke Kojima, et al. Predictive diagnostic approach to dementia and dementia subtypes using wireless and mobile electroencephalography: A pilot study. Bioelectricity, 4(1):3-11, 2022. [9] S. Rahimpour, S. Rajkumar, and M. Hallett. The supplementary motor complex in Parkinson's disease. Journal of Movement Disorders, 2022. [10] P. Helson et al. Cortex-wide topography of 1/f-exponent in Parkinson's disease. npj Parkinson's Disease, 2023. [11] Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018. [12] Donghao Luo and Xue Wang. ModernTCN: A modern pure convolution structure for general time series analysis. ICLR, 2024. [13] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023. [14] Prabhat Kumar and S. Suresh. FLAAP: An open human activity recognition (HAR) dataset for learning and finding the associated activity patterns. Mendeley Data, V1, 2022. doi: 10.17632/bdng756rgw.1. [15] Jorge Reyes-Ortiz, Davide Anguita, Alessandro Ghio, Luca Oneto, and Xavier Parra. Human Activity Recognition Using Smartphones. UCI Machine Learning Repository, 2012. https://doi.org/10.24432/C54S4K. [16] P. H. Charlton et al. Detecting beats in the photoplethysmogram: benchmarking open-source algorithms. Physiological Measurement, 2022. Pdf: /pdf/2ea97fa200932041a3362ccde07c424bd86265b8.pdf
NeurIPS_2024_submissions_huggingface
2024
LocCa: Visual Pretraining with Location-aware Captioners
Accept (poster)
Summary: This paper explored integrating region captions into caption-based language-image pretraining. Specifically, in addition to the autoregressive text modeling on global captions, the proposed LocCa also predicts constructed strings of "{bounding box}-{region caption}" and "{region caption}-{bounding box}". Although the method is conceptually simple, the authors benchmarked LocCa on a variety of tasks and showed that it could achieve strong performance. Strengths: - The methodology is straightforward and elegant. The proposed LocCa unifies global and region-level caption generation and doesn't require additional complex architectural modifications. - The benchmarking of downstream evaluation is quite comprehensive and the performance is strong. Weaknesses: 1. **Objectives of LocCa**. Overall, the paper is well-written and easy to follow. However, the presentation of the LocCa objectives in section 3.2 is a bit confusing starting from line 131. The "dual-faceted" loss is hard to interpret for first-time readers. If I understand correctly, it might mean that LocCa calculates the loss on both the box $b$ and the region caption $c$. The authors may consider rewriting this part to improve readability. Also, citing references regarding "contrasting with traditional approaches" could help readers understand the novelty. Additionally, how to construct a batch with three tasks should also be mentioned somewhere in section 3. 1. **Box predictions in GCap**. In the grounded captioning (GCap) task, conditioned on the image, the model needs to first predict a bounding box, and then predict the caption. The task of predicting a box from the image looks like an object proposal generation task, but the supervision here is only one box per sample. However, it's not clear whether introducing this supervision of object proposal generation is intended by the authors, since there are no relevant discussions. 1. **Ablation of pretraining tasks**. Table 6 is very important for this paper. 
The authors ablated the AREF and GCAP tasks respectively and showed that this leads to performance drops. To make the paper more complete, the authors may consider adding an ablation that only removes the box part and keeps the prediction of the region captions. This would help demonstrate the value of location awareness. Also, ablating the global caption and keeping AREF and/or GCAP may also be interesting. 1. **String-based box representation**. Compared to special tokenizations for box coordinates, does the string-based method require more tokens to represent one bounding box? Does it lead to less efficient decoding? Also, the hyperparameter of the box coordinate resolution plays an important role here and needs to be discussed and studied with ablations. The authors may see the discussion and explorations in the following papers for reference on bounding box representations: - [1] Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic - [2] Kosmos-2: Grounding Multimodal Large Language Models to the World - [3] RemoteCLIP: A Vision Language Foundation Model for Remote Sensing Technical Quality: 4 Clarity: 3 Questions for Authors: N/A Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **Weakness 1: Objectives of LocCa** Thanks for pointing out the "dual-faceted" loss. Your understanding is correct, and we will revise this section for clarity. By "traditional approaches," we are referring to methods like OFA [1] and Florence-2 [2], which we will cite appropriately in the final manuscript. Additionally, we have explained how to construct a batch with three tasks in Lines 163-166, and we will include more details in Section 3 as suggested. 2. **Weakness 2: Box predictions in GCap** The introduction of object proposal generation supervision in the GCap task is indeed an intentional design choice. This approach is conceptually similar to the first RoI prediction in the Pix2Seq model [3]. Our hypothesis is that providing partial location information (such as the coordinates of the upper-left corner of an object) enables the model to predict the complete bounding box coordinates. This assumption justifies the introduction of additional supervision as a reasonable enhancement to the model's learning process. As detailed in Section 4.3 and Appendix C, one of the additional benefits of this design is the flexibility it offers during the prediction phase. For instance, we can input a specific coordinate to prompt the model to generate a corresponding regional caption. Alternatively, we can simply input the task prefix ('GCAP') and use beam search to allow the model to output various detection results. This flexibility enriches the model's utility by accommodating different levels of input specificity, thereby enhancing its practical applicability in diverse scenarios. 3. **Weakness 3: Ablation of pretraining tasks** - We understand your intention and appreciate the suggestion regarding the ablation of the bounding box component. However, the integration of spatial and textual information is central to the LocCa model's design. 
Removing the box component would impede the model’s ability to learn about spatial coordinates, which are crucial for its zero-shot prediction capabilities and understanding the spatial context of objects. The LocCa framework is specifically developed to link region box locations with captions to enhance cross-modal associations. Thus, omitting the bounding box data would significantly diminish the model’s ability to comprehensively understand visual scenes. Therefore, we have focused our evaluations on scenarios that maintain the full scope of the model’s cross-modal learning functionality. - We totally agree it would be interesting to see the impact of the global caption (Cap) task. The experimental setup is similar to that in Table 6 in the paper, but we used a larger decoder with 12 layers. Comparing Exp1 and Exp2 in **Table 3 in the rebuttal PDF file**, we observed a consistent performance drop without the Cap task on both RefCOCOs and holistic tasks. The drop was more pronounced on holistic tasks like COCO captioning and image classification, suggesting that the Cap task is beneficial for holistic image representation learning. 4. **Weaknesses 4: String-based box representation** - String-based tokenization requires more tokens than special tokenization, making it less efficient. For instance, using the c4 tokenizer [4], '498' requires two tokens for string representation but only one token with special tokenization. - As shown in the **Table 3 in the rebuttal PDF file**, by comparing Exp 3,1,4, we did not observe a noticeable difference in performance across various box coordinate resolutions (224, 500, 1000) with image resolution of 224, indicating that LocCa remains robust despite changes in coordinate resolution. We set the coordinate resolution to 500 to adapt to different image resolutions (224 -> 384 -> 640), as shown in Figure 3. 
We believe that further lowering the coordinate resolution (such as 32, which is much less than 224) could degrade the model’s accuracy, as each coordinate might then correspond to multiple pixels, leading to potential mapping errors. - Thanks for providing Shikra, Kosmos-2, and RemoteCLIP as references for bounding box representations. We compared these with LocCa and noted distinct application scenarios. Shikra explored both string and special tokenization methods, ultimately opting for string tokenization, which we also employed. Shikra uses an LLM and a frozen visual encoder, making it difficult to train a new coordinate vocabulary that aligns with the pretrained VL representation. In contrast, LocCa trains the entire model from scratch, allowing for easier adaptation. This explains the significant performance gain of around 2% on RefCOCO+/g with string tokens in Shikra, while the gap in LocCa is much smaller (<0.3%). Additionally, Kosmos-2 introduces location tokens for each image grid rather than the coordinates. This allows a bounding box to be represented by just two tokens, enhancing efficiency. However, we believe these tokens are too coarse, potentially impairing the model’s ability to accurately learn and represent small objects. RemoteCLIP suggests converting the box coordinates into text strings and concatenating them with other textual data before encoding them with the text encoder. It avoids using complex region-text alignment (like RegionCLIP). However, without explicitly constructing the correspondence between regional visual features and box coordinates, achieving fine-grained region-wise alignment may be challenging. Reference: [1] Wang et al. OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework. [2] Xiao et al. Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks. [3] Chen et al. Pix2seq: A Language Modeling Framework for Object Detection.
[4] https://github.com/mosaicml/streaming/ --- Rebuttal 2: Comment: Thanks for the response and updated experiments. They are all sound to me. Please make sure the mentioned revisions are incorporated into the final version. I have updated my rating from weak accept to accept. --- Rebuttal Comment 2.1: Comment: Thanks for your recognition of our work and for raising the score! We will add the mentioned revisions in the final manuscript.
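To make the string-based box representation discussed in point 4 above concrete, here is a minimal sketch of quantizing a pixel-space box to a fixed coordinate resolution (e.g. 500 bins) and rendering it as plain text for a subword tokenizer. The function name and coordinate ordering are hypothetical illustrations, not the authors' actual code.

```python
def box_to_tokens(box, img_w, img_h, resolution=500):
    """Quantize (x0, y0, x1, y1) pixel coords into [0, resolution-1] bins,
    then render them as a plain string; a subword tokenizer (e.g. c4's)
    may then split a 3-digit number like '498' into several tokens."""
    return " ".join(
        str(round(c / extent * (resolution - 1)))
        for c, extent in zip(box, (img_w, img_h, img_w, img_h))
    )

# A 224x224 image with a box covering its right half:
print(box_to_tokens((112, 0, 224, 224), 224, 224))  # -> 250 0 499 499
```

Because the bins are decoupled from the pixel grid, the same string vocabulary works across image resolutions (224 -> 384 -> 640), which matches the rebuttal's motivation for fixing the resolution at 500.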
Summary: The paper introduces LocCa, a new visual pretraining approach that incorporates location-specific tasks into image captioning-based vision language models, improving their ability to extract detailed information from images. The authors propose two location-aware tasks: automatic referring expressions (AREF), which predicts bounding box coordinates from captions, and grounded captioning (GCAP), which jointly predicts box coordinates and captions from the image, utilizing a multitask encoder-decoder architecture. As a result, LocCa significantly outperforms standard captioning models on localization challenges, achieving top results on datasets such as RefCOCO/+/g, while maintaining similar performance on broader tasks. Strengths: - The paper is clear and well-written. - The proposed approach demonstrates good performance on a broad number of downstream tasks. - The paper provides a wide range of analyses. Weaknesses: - The model's training process relies on regional captions and bounding boxes, necessitating the use of pre-existing tools like OWL-ViT to produce detailed object locations for its training data. This approach creates a significant dependency on these off-the-shelf models, potentially limiting the quality of the training data. Assembling a large-scale dataset becomes challenging under these constraints. While the authors have managed to curate an impressive billion-scale dataset, this quantity doesn't necessarily ensure data quality. The reliance on external models for data generation raises questions about the overall integrity and reliability of the training information. - Typo - Line 134: AREFtask → AREF task - Line 240: dentified → identified Technical Quality: 3 Clarity: 3 Questions for Authors: - I have two questions about the proposed training tasks. - During the pretraining process or after training, is there any trend related to the size of the object in the image?
Such analysis can be done among downstream tasks, such as object detection or RefCOCO series datasets. - Can you provide more details on the detection visualization analysis presented in Appendix C? Specifically, what is the connection between observing only one object per example and the substantial bounding box overlap observed without Non-Maximum Suppression (NMS)? Additionally, doesn't the LocCa pretraining process involve comprehension of multiple objects through its three tasks: captioning, AREF, and GCAP? How does this align with the single-object observation? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper addresses the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **Weaknesses: Data quality and dependency on external models & Typos** - To address the concerns about data quality and dependency on external models like OWL-ViT, we would like to emphasize our strategic use of simple filtering techniques to enhance data quality. Specifically, we employ a confidence score threshold of greater than 0.3 for selecting bounding boxes. This threshold is chosen based on its proven effectiveness in balancing accuracy and inclusivity, thus minimizing the inclusion of erroneous or irrelevant data. This filtering approach is similar to the use of the CLIP score in creating high-quality image-text datasets like LAION-400M [1] and YFCC15M [2], and its effectiveness has also been demonstrated in OWL-ViT-2 [3]. - Thanks for pointing out the typos; we will correct them in the final manuscript. 2. **Q1: Analysis on object size** We have analyzed the prediction results related to box size in the MSCOCO-DET dataset. The table below presents the mean Average Precision (mAP) for different box sizes, using the widely accepted classifications: large (>96$^2$ pixels), medium (32$^2$ - 96$^2$ pixels), and small (<32$^2$ pixels). We found that larger boxes consistently achieve higher mAP, a trend echoed in other object detection studies such as Pix2Seq [4] and SSD [5]. This pattern holds across various visual backbones, including CLIP, Cap, CapPa, and LocCa. The higher mAP for larger boxes can be attributed to the more detailed visual features they provide, making them easier for models to detect.

| Model | mAP | mAP-small | mAP-medium | mAP-large |
|-------|:---:|:---------:|:----------:|:---------:|
| CLIP  | 40.46 | 19.11 | 44.46 | 62.79 |
| Cap   | 39.00 | 18.31 | 42.56 | 60.55 |
| CapPa | 39.48 | 18.32 | 43.21 | 59.81 |
| LocCa | 47.73 | 26.88 | 52.80 | 67.64 |

3.
**Q2: Visualization details** During the LocCa pretraining process, each example sampled one box-text pair for each of the AREF and GCAP tasks individually (we also explored using multiple box-text pairs per task during pre-training; this did not enhance performance and instead increased computational costs). We hypothesize that through multiple epochs of training, LocCa can observe all candidate box-text pairs in the training set. When deploying the pretrained model for zero-shot predictions with the GCAP task, it executes multiple decodings for a single input image using beam search, as described in Section 4.3. By setting the beam search to a beam size of 1 (single beam) and a low temperature, the model consistently selects the most confident bounding box regions. While this sampling strategy yields high-confidence predictions, it also results in substantial bounding box overlaps without the application of Non-Maximum Suppression (NMS). Reference: [1] Schuhmann et al. LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. [2] https://huggingface.co/datasets/mehdidc/yfcc15m [3] Minderer et al. Scaling Open-Vocabulary Object Detection. [4] Chen et al. Pix2seq: A Language Modeling Framework for Object Detection. [5] Liu et al. SSD: Single Shot MultiBox Detector. --- Rebuttal Comment 1.1: Comment: Dear reviewer, Thank you again for the detailed and constructive comments. As the discussion period is about to end, we would like to ensure we've addressed all of your questions and concerns. If you feel we have satisfactorily responded, please let us know. Otherwise, please let us know your remaining concerns so we can address them before the discussion period closes. Sincerely --- Rebuttal 2: Title: Final rating Comment: Thank you for the rebuttal. As most of my concerns and questions are addressed, I will maintain my initial rating.
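The bounding-box overlap described in the rebuttal above is exactly what Non-Maximum Suppression would remove. For reference, a minimal greedy NMS sketch (illustrative only, not the code used in the paper) keeps the highest-scoring box and discards any remaining box whose IoU with an already-kept box exceeds a threshold:

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: visit boxes in score order, keep a box only if it
    does not overlap (IoU >= thresh) with any box kept so far."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```

For example, two nearly identical boxes with scores 0.9 and 0.8 plus one disjoint box yield a keep list containing only the 0.9-scored duplicate and the disjoint box.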
Summary: This work proposed location-aware pre-training for vision-language learning. The pre-training contains two tasks: one takes a location as input and outputs a caption/text; the other takes text as input and outputs the location of the corresponding object. The model is trained from scratch on ~1B image-text pairs, with object locations extracted by an off-the-shelf detection model. The experiments demonstrate the strong performance of the proposed model and the generalizability of the pre-trained vision encoder. Strengths: 1. The experiment section covers a wide range of tasks including location-aware tasks and holistic vision-language tasks. The ablation is also comprehensive. It's also impressive that the pre-trained vision backbone can be used as a general visual encoder to replace other CLIP-like models. 2. The model is lightweight. In the era of scaling up models, the proposed model can achieve great performance with just 600+M parameters. Weaknesses: 1. The key novelty is claimed to be pre-training a vision-language model via two location-aware tasks (Line 139): `automatic referring expression` and `grounded captioning`. However, similar location-input-text-output and text-input-location-output tasks have been widely used in the training (mostly fine-tuning, though) of location-aware MLLMs recently, e.g., Shikra [1], Ferret [2], GLaMM [3]. Moreover, Ferret-v2 [4] also proposed dense referring and dense grounding as a pre-training stage. Those two tasks are quite similar to what this work proposes, and they even involve multiple regions in one round. 2. Missing comparison with many location-aware Multimodal LLM methods. Location-aware MLLMs load pre-trained LLMs and train on a moderate amount of data (100k-1M) in a few steps (mostly within 3 epochs). They can already show great performance in basic tasks and reasoning tasks. This work instead trains the model from scratch with large-scale data (1B) with longer training time.
The authors should analyze the benefits and drawbacks of each line of work. 3. Missing evaluations on: (1) Location-aware reasoning tasks, such as Visual7W [5] and LookTwice-QA [6] used in Shikra [1], and Ferret-Bench used in Ferret [2]. (2) Grounded captioning capability is not evaluated, for example, on Flickr30k used in GLaMM and Ferret. (3) Comparison with Multimodal LLMs on RefCOCOs and the above-mentioned tasks. 4. The training data, the WebLI dataset, is not available to the public. Considering that the main novelty, location-aware pre-training, largely depends on the scale and quality of the pre-training dataset, the work is hard to reproduce. Refs: \ [1] Chen, Keqin, et al. "Shikra: Unleashing multimodal llm's referential dialogue magic." arXiv preprint arXiv:2306.15195 (2023). \ [2] You, Haoxuan, et al. "Ferret: Refer and ground anything anywhere at any granularity." ICLR 2024 \ [3] Rasheed, Hanoona, et al. "Glamm: Pixel grounding large multimodal model." CVPR. 2024. \ [4] Zhang, Haotian, et al. "Ferret-v2: An Improved Baseline for Referring and Grounding with Large Language Models." arXiv preprint arXiv:2404.07973 (2024). \ [5] Zhu, Yuke, et al. "Visual7w: Grounded question answering in images." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. \ [6] Mani, Arjun, et al. "Point and ask: Incorporating pointing into visual question answering." arXiv preprint arXiv:2011.13681 (2020). Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses. ===================== Thank the authors for the answers. Please make sure to add the experiments on Visual7W and LookTwice-QA as promised. I'd like to raise my rating. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, it is discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **Weaknesses 1&2: Comparison with location-aware MLLMs** Thanks for highlighting the connections with existing location-aware MLLMs like Shikra, Ferret, and GLaMM, which we will cite in our final manuscript. We discuss the differences between LocCa and the referenced work as follows: - Firstly, we appreciate the opportunity to clarify the fundamental objectives and innovations of our work with LocCa, as discussed in Sec. 3.3 of our manuscript. Our primary goal with LocCa is not to enhance MLLMs with location awareness as described in the referenced works. These works typically utilize a pretrained CLIP image encoder and incorporate location-aware tasks during either the fine-tuning stages or as secondary enhancements to already trained models. In contrast, LocCa is a new approach to visual pretraining that inherently integrates location-aware tasks from the very beginning. - Besides, LocCa modifies the standard REF and GCAP tasks to better accommodate this integration, as detailed in Lines 130-137 in the paper. As a complement, the visual encoder developed through LocCa can be easily incorporated into any MLLM, as evidenced by its application in the PaLI-3 experiments shown in Table 4 in the paper. - Ferret-v2 is a concurrent paper. We also investigated dense referring and dense grounding as proxy tasks during LocCa pretraining. The results were on par with AREF/GCAP but required a significantly longer decoding length for the decoder, making the process more computationally intensive while providing little or no benefit. 2. **Weaknesses 3: Missing Evaluations** - (1) Thanks for highlighting the missing tasks on location-aware reasoning. Both Visual7W and LookTwice-QA are relevant. In Visual7W, six types of questions (what, where, when, who, why, and how) assess a model’s visual understanding capabilities, akin to the VQA/GQA tasks we selected in Table 3 in the paper. The seventh question category (which) is similar to the task of REC.
Therefore, we actually almost covered all the task types presented in Visual7W. Furthermore, Visual7W is derived from MS-COCO, which also underlies VQAv2 and RefCOCOs for which we present results in the paper. LookTwice-QA requires situating a local region in the broader context of the image, which is meaningful while not widely used in recent research. **We plan to include both tasks in our future evaluations**. Ferret-Bench typically necessitates a LLM for knowledge-based reasoning, which is not the main focus of LocCa. While integrating the pretrained LocCa visual encoder with a LLM (e.g., Vicuna used in Ferret) for complex tasks is feasible, it falls outside the scope of this work. - (2) For grounded captioning, we clarify that GRIT/GPT4RoI/GLaMM evaluate on RefCOCO and Visual Genome, whereas Ferret evaluates on Flickr30k. The evaluation tasks differ between these two studies: the former involves generating regional captions based on a given bounding box location, while Ferret requires the model to generate a caption and provide bounding boxes for all noun phrases within the caption. Our setup aligns more closely with GRIT/GPT4RoI/GLaMM, so we report our grounded captioning results on Visual Genome and make comparisons with these methods. As shown in the **Table 1 in the rebuttal PDF file**, the LocCa$_L$ model with only 0.6B parameters (without LLM), outperforms GPT4RoI-13B (with LLM) on both METEOR and CIDEr scores for the Visual Genome grounded captioning task. When compared with GLaMM, it performs better on the METEOR metric but lags on CIDEr. METEOR is an evaluation metric that focuses on the precision, recall, and alignment of words between the generated caption and the reference caption, while CIDEr evaluates the similarity of n-grams between the generated captions and the reference captions. Visual Genome has relatively simple captions, typically consisting of a few words. 
LocCa adopts a simpler decoder (0.3B params only) that closely matches the reference captions in terms of word choice and order. The simplicity of the decoder might limit its ability to produce complex sentences but enables it to generate concise captions that align well with VG's simple nature. GLaMM, on the other hand, uses a more complex LLM as its decoder (Vicuna-7B, 20x the params of LocCa's decoder), which likely generates more diverse captions that include n-grams matching the reference captions more effectively. This could explain why LocCa has a higher METEOR score but a lower CIDEr score compared to GLaMM. However, it is important to note that the **pretrained LocCa encoder is complementary to MLLMs (i.e., as a better alternative to the CLIP encoder)**, and we expect further performance gains on downstream tasks when combining both. - (3) As shown in the **Table 2 in the rebuttal PDF file**, LocCa$_G$ achieves higher performance than MLLMs (e.g., Shikra, Ferret) on RefCOCOs even with a small decoder rather than an LLM (1.3B vs 13B). 3. **Weaknesses 4: Reproduction** - First, in our paper we only adopted a publicly available OWL-ViT CLIP L/14 model [7] to generate detection pseudo-annotations. We believe a similar dataset (e.g., based on publicly accessible data) could be reproduced with the technical details provided and the public OWL-ViT models. - In the meantime, as in other areas of large-scale science, we believe that publishing as many details as possible on the findings of state-of-the-art systems is beneficial to the community. We will also try our best to provide detailed descriptions of the construction of the dataset and model training methods. - Finally, in our ongoing work, we are actively exploring training our models using only publicly accessible data, so that we can make a fast and accessible model. We are making our best effort to release these artifacts. Refs: [7] Minderer et al.
Simple Open-Vocabulary Object Detection with Vision Transformers. --- Rebuttal Comment 1.1: Comment: Dear reviewer, Thank you again for the detailed and constructive comments. As the discussion period is about to end, we would like to ensure we've addressed all of your questions and concerns. If you feel we have satisfactorily responded, please let us know. Otherwise, please let us know your remaining concerns so we can address them before the discussion period closes. Sincerely
Summary: This paper presents LocCa, a novel visual pretraining paradigm that incorporates location-aware tasks into captioners. Specifically, LocCa employs two tasks, bounding box prediction and location-dependent captioning, conditioned on the image pixel input. This multi-task training helps LocCa significantly outperform standard captioners on downstream localization tasks while maintaining comparable performance on holistic tasks. Strengths: 1. The paper is well written. The core contributions are clearly presented. 2. The proposed method achieves state-of-the-art performance. Weaknesses: Several works also investigate the matching of image regions with corresponding text during pretraining. The authors claim that compared to these methods, the proposed LocCa can get rid of complex model architectures and become more computationally efficient. To demonstrate this, more experimental results should be provided to validate this claim, for example, FLOPs, trainable parameters, or training time. Technical Quality: 3 Clarity: 3 Questions for Authors: Several works also investigate the matching of image regions with corresponding text during pretraining. The authors claim that compared to these methods, the proposed LocCa can get rid of complex model architectures and become more computationally efficient. To demonstrate this, more experimental results should be provided to validate this claim, for example, FLOPs, trainable parameters, or training time. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Several works also investigate the matching of image regions with corresponding text during pretraining. The authors claim that compared to these methods, the proposed LocCa can get rid of complex model architectures and become more computationally efficient. To demonstrate this, more experimental results should be provided to validate this claim, for example, FLOPs, trainable parameters, or training time.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Previous work, such as RegionCLIP, has also attempted to align image regions with corresponding textual descriptions using contrastive pre-training schemes. However, these approaches tend to be resource-intensive and less efficient: - In terms of trainable parameters, RegionCLIP [1] uses a pre-trained CLIP from OpenAI [2] as a teacher model to train a student visual encoder of equivalent size. Besides, it incorporates an ROI-Align module for extracting regional features, which increases the complexity. Conversely, LocCa aligns with the Cap model in simplicity, having a similar number of parameters, i.e., about half the parameters of the RegionCLIP model when taking the teacher model into account. - In terms of training time, RegionCLIP requires extracting 100 RoIs per image and conducting dual-level contrastive learning—both at the image and region levels. This significantly increases the computational load per example by a factor of 100x|C| (where |C| is the average number of concepts per caption) during the contrastive learning stage. According to [3], with a pre-trained CLIP, training RegionCLIP (with an RN50 encoder) takes 6 days using 32 V100 GPUs on 57.6M examples, which is 80k V100-hours or approximately 20k TPUv4-hours per billion examples [4][5]. Conversely, LocCa (B/16 encoder) reuses encoded visual features across 3 tasks, leading to an overall training time per billion examples that is only 1.3 times longer (611 vs 454 TPUv4-hours per billion examples) than that of Cap, which is much faster than RegionCLIP.

| Model | Params | TPUv4-hrs. |
|------------|:--------:|:----------:|
| B/16 Cap | 192 M | 454 |
| B/16 CLIP* | 197 M | 444 |
| B/16 LocCa | 192 M | 611 |
| R50 RegionCLIP | - | 20000 |

Reference: [1] Zhong et al. RegionCLIP: Region-based Language-Image Pretraining.
[2] https://github.com/openai/CLIP [3] https://github.com/microsoft/RegionCLIP/issues/50 [4] https://lambdalabs.com/blog/nvidia-a100-vs-v100-benchmarks [5] https://cloud.google.com/blog/products/ai-machine-learning/google-wins-mlperf-benchmarks-with-tpu-v4 --- Rebuttal Comment 1.1: Comment: Dear reviewer, Thank you again for the detailed and constructive comments. As the discussion period is about to end, we would like to ensure we've addressed all of your questions and concerns. If you feel we have satisfactorily responded, please let us know. Otherwise, please let us know your remaining concerns so we can address them before the discussion period closes. Sincerely
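The per-billion-example cost figures quoted in the rebuttal above can be sanity-checked with quick arithmetic; the 4x V100-to-TPUv4 conversion factor is our reading of the cited benchmark links [4][5], not a number the authors state directly:

```python
# RegionCLIP, per the GitHub issue cited as [3]: 6 days on 32 V100 GPUs
# for 57.6M examples.
v100_hours_total = 6 * 24 * 32                            # 4608 V100-hours
v100_hours_per_billion = v100_hours_total * 1e9 / 57.6e6  # 80000.0
# Assuming one TPUv4-hour does roughly the work of 4 V100-hours:
tpu_hours_per_billion = v100_hours_per_billion / 4        # 20000.0
```

Both numbers match the rebuttal's claim of 80k V100-hours or roughly 20k TPUv4-hours per billion examples, versus 611 TPUv4-hours per billion for B/16 LocCa.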
Rebuttal 1: Rebuttal: We appreciate the valuable advice and generally positive feedback from all reviewers. Specifically, Reviewers 88rJ, ris8, and HN67 find our paper clear and well-written. Reviewer HN67 considers our methodology straightforward and elegant. Reviewers gCZj, ris8, and HN67 acknowledge that our experiment encompasses a wide range of downstream tasks. All reviewers agree that our performance is strong with a lightweight model. We addressed the reviewers' concerns in a separate section. To provide detailed experimental results, we have included a PDF file containing three tables. Please refer to this PDF file, where the relevant information is indicated as **Table x in the rebuttal PDF file**. Pdf: /pdf/46c46cffbdbe15d98ca706c655e54166af319b50.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
FuseMoE: Mixture-of-Experts Transformers for Fleximodal Fusion
Accept (poster)
Summary: The paper “FuseMoE: Mixture-of-Experts Transformers for Fleximodal Fusion” introduces a novel framework called FuseMoE, designed to tackle challenges associated with multimodal data in machine learning. This framework addresses key issues such as missing elements, temporal irregularity, and sparsity in data, which are prevalent in fields like healthcare where data is often incomplete and irregularly sampled. The core innovation in FuseMoE is its gating function that integrates various modalities efficiently. The framework enhances predictive performance through: 1. Sparse MoE Layers: Incorporating sparsely gated MoE layers to manage tasks and learn optimal modality partitioning. 2. Laplace Gating Function: A novel gating function theoretically proven to ensure better convergence rates than traditional Softmax functions. 3. FlexiModal Data Handling: Efficiently handling scenarios with missing modalities and irregularly sampled data trajectories. The framework is validated through empirical evaluations on diverse prediction tasks, demonstrating its effectiveness in real-world applications. Strengths: 1. The existing demand for a more flexible gating function in the field of multimodal MoE makes the idea of this paper promising. Also, the possible implementations of this new MoE system are promising and may lead to some practical medical tools. 2. The theoretical proof is sufficient and rigorous. 3. The experimental design is comprehensive from the perspectives of scenarios and data, to a certain extent corresponding with the theory. The appendix contains a large amount of supplementary information, which helps verify the completeness and correctness of the article. Weaknesses: 1. The compared works are all quite dated (the latest one, UTDE [1], was published in 2023, and over half of the compared works predate 2020). Why not compare with more recent works? 2. Fig. 1 lacks a clear description, as the data processing workflow is not well explained. 3.
Gaussian gating and the proposed form are very similar in format, with the formula $h(x)=\operatorname{Top} \mathrm{K}\left(-\|W-x\|_{2}^{2}\right)$. Thus, the theoretical comparison with the original softmax may be unfair, as the form is derived from and similar to Gaussian gating. Therefore, although the theoretical results may be valid, their importance diminishes due to this aspect. [1] Improving Medical Predictions by Irregular Multimodal Electronic Health Records Modeling Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses. The biggest question on my side is that the similarity with Gaussian gating may lead to some theoretical misunderstanding. Any further explanation on this topic could lead to a higher score. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors adequately addressed the limitations; there is some redundancy in the encoder part, which heavily draws inspiration from UTDE's work. From the appendix it can be observed that the encoder might have a greater impact on overall performance than what the gate function alone contributes to improvement; hence further discussion is needed on whether this gate function must necessarily be tied to methods related to mTAND and imputation for obtaining higher-quality features as inputs for downstream MoE layers from an implementation perspective. Of course, the discussion on this topic is optional. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
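To see the distance-based gating family the review refers to side by side with top-K selection, here is a minimal sketch of the Gaussian form $h(x)=\operatorname{TopK}(-\|W_j-x\|_2^2)$ quoted above: experts whose prototype rows $W_j$ are closer to $x$ get more weight, and only the top-k experts receive nonzero softmax weight. Function names are hypothetical; the paper's Laplace gating would use a different distance, so this is illustration only.

```python
import math

def topk_softmax(scores, k):
    """Softmax restricted to the top-k scores; other experts get weight 0."""
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    exps = {i: math.exp(scores[i]) for i in top}
    z = sum(exps.values())
    return [exps.get(i, 0.0) / z for i in range(len(scores))]

def distance_gate(x, W, k=2):
    """Distance-based gating in the style of h(x) = TopK(-||W_j - x||_2^2):
    each row of W is an expert prototype; nearer prototypes score higher."""
    scores = [-sum((wj - xi) ** 2 for wj, xi in zip(row, x)) for row in W]
    return topk_softmax(scores, k)
```

For instance, with prototypes [[0, 0], [1, 1], [5, 5]] and x = [0.9, 1.1], the second expert dominates, the first gets a small weight, and the third is pruned to exactly zero by the top-2 selection.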
Rebuttal 1: Rebuttal: Dear Reviewer `sZSN`, Thank you very much for your constructive suggestions. We are encouraged by your acknowledgment that our idea is promising, the theoretical proof is rigorous, and the experimental design is comprehensive, containing a large amount of supplementary information. Below, we address your concerns in detail. **W1:** We wish to emphasize that all the baselines we compared are well-known and representative methodologies within specific categories: concatenation, Tensor Fusion, Multimodal Adaptation Gate, and cross-attention-based methods. The MoE-based fusion method is a completely new line of work, and the comparison is essentially within the concept of fusion approaches. To the best of our knowledge, among works newer than the baselines we compared with, there is none that either opens up a new line of fusion methods or is far superior to the method and baselines we presented. However, in response to your comments and those of other reviewers, we have added additional experiments and ablation studies on new datasets and settings to further strengthen our results. These can be found in the attached PDF file. **W2:** Figure 1 illustrates an example of multimodal electronic health records that exhibit irregularity and missing-modality problems. This illustration reflects the situation of the MIMIC-IV dataset. Detailed task descriptions and data preprocessing workflows are discussed in Appendices A and D. We will add detailed explanations to Fig. 1 in the revised version. **W3:** Softmax gating is more commonly used than Gaussian gating in real-world problems involving vision, text, and potential multimodal applications, and it also achieves relatively better performance. This is why we mainly compare our method with Softmax gating.
Gaussian gating has been compared with Laplace gating in a different work: Nguyen et al. [1] show that our proposed Laplace gating is more sample-efficient than Gaussian gating. First, recall that the general form of the Gaussian gating is given by $$h(x)=\mathrm{TopK}\left(-(W-x)^{\top}\Gamma^{-1}(W-x)/2\right),$$ where $W$ is a mean vector, and $\Gamma$ is a covariance matrix. Then, Nguyen et al. [1] consider two settings of the mean parameters $W^*_j$. Under the first setting (resp. second setting), Nguyen et al. [1] argue that due to an interaction between the mean parameters $W^*_j$ and the covariance matrix $\Gamma^*_j$ (resp. the expert parameters $a^*_j$) via the partial differential equation (PDE) in Eq.(8) (resp. Eq.(11)) in [1], the rates for estimating mean parameters $W^*_j$ (resp. $a^*_j$) decrease when the number of their fitted atoms increases, and are no faster than $O(n^{-1/8})$. On the other hand, it can be verified that such parameter interactions do not occur under the FuseMoE. Thus, the estimation rates for the mean parameters $W^*_j$ and the expert parameters $a^*_j$ when using the Laplace gating remain unchanged at order $O(n^{-1/4})$. To strengthen our arguments, we also include a table where we summarize the parameter estimation rates under the Gaussian MoE when using the Laplace gating and the Gaussian gating. Finally, we explain why the aforementioned PDEs affect the parameter estimation rates. In particular, a key step in our proof is to decompose the density difference $p_{G_n}(Y|X)-p_{G_*}(Y|X)$ into a combination of linearly independent terms using Taylor expansions. However, if those PDEs hold true, then components in that decomposition become linearly dependent, which is undesirable and leads to slow parameter estimation rates. *Summary of parameter estimation rates under the Gaussian MoE with the Gaussian gating and the Laplace gating.
The function $\widetilde{r}(\cdot)$ is defined in Eq.(12) in [1], while the function $\bar{r}(\cdot)$ is defined in Eq.(9) in our paper. Note that $\widetilde{r}(\cdot)\leq\bar{r}(\cdot)$ and $\widetilde{r}(2)=4$, $\widetilde{r}(3)=6$. Additionally, $\mathcal{A}^n_j:=\mathcal{A}_j(\widehat{G}_n)$ is a Voronoi cell defined in Eq.(7) in our paper:*

|`Gating`|$W_{j}^{*}$|$a_{j}^{*}$|$b^*_{j}$|$\nu_{j}^{*}$|
|:---:|:----:|:----:|:----:|:----:|
|Gaussian [1]: setting I|$\mathcal{O}(n^{-1/2\bar{r}(\|\mathcal{A}^n_{j}\|)})$|$\mathcal{O}(n^{-1/4})$|$\mathcal{O}(n^{-1/2\bar{r}(\|\mathcal{A}^n_{j}\|)})$|$\mathcal{O}(n^{-1/\bar{r}(\|\mathcal{A}^n_{j}\|)})$|
|Gaussian [1]: setting II|$\mathcal{O}(n^{-1/2\widetilde{r}(\|\mathcal{A}^n_{j}\|)})$|$\mathcal{O}(n^{-1/\widetilde{r}(\|\mathcal{A}^n_{j}\|)})$|$\mathcal{O}(n^{-1/2\widetilde{r}(\|\mathcal{A}^n_{j}\|)})$|$\mathcal{O}(n^{-1/\widetilde{r}(\|\mathcal{A}^n_{j}\|)})$|
|Laplace (Ours)|$\mathcal{O}(n^{-1/4})$|$\mathcal{O}(n^{-1/4})$|$\mathcal{O}(n^{-1/2\bar{r}(\|\mathcal{A}^n_{j}\|)})$|$\mathcal{O}(n^{-1/\bar{r}(\|\mathcal{A}^n_{j}\|)})$|

**Limitations:** We will revise our paper to avoid potential information overlaps with prior UTDE work. However, we wish to emphasize that the encoder part was not proposed by UTDE itself; the authors of that paper also utilized prior works (i.e., mTAND, Time2Vec) for demonstration purposes, similar to our approach. Regarding the gating function and encoders, we appreciate your insights. Indeed, all the ablation studies on encoders presented in Appendix H.2 are based on the Laplace gating function of our FuseMoE framework. Choosing appropriate encoders is beyond the scope of this paper, which is why we defer this discussion to the Appendix. However, as the reviewer points out, it is beneficial to determine whether gating functions or encoders contribute more significantly to overall performance improvement. 
Therefore, we have added additional experimental results on varying the gating functions in conjunction with different encoders and compared the differences in results. The figures can be found in the attached PDF file. **Reference** [1] Nguyen et al., (2024). Towards convergence rates for parameter estimation in Gaussian-gated mixture of experts. Authors --- Rebuttal Comment 1.1: Title: Kindly Request for Reviewer's Feedback Comment: Dear Reviewer `sZSN`, We sincerely appreciate the time you have taken to provide feedback on our work, which has helped us greatly improve its clarity, among other attributes. This is a gentle reminder that the **Author-Reviewer Discussion period ends in just around 12 hours from this comment, i.e., 11:59 pm AoE on August 13.** We are happy to answer any further questions you may have before then, but we will be unable to respond after that time. If you agree that our responses to your reviews have addressed the questions you listed, we kindly ask that you consider whether raising your score would more accurately reflect your updated evaluation of our paper. Thank you again for your time and thoughtful comments! Sincerely, Authors
Summary: This paper proposes an MoE-based model to handle multimodal data fusion. It addresses two challenges: missing modalities and irregularly sampled data trajectories. A Laplace gating function is applied to the MoE backbone. An entropy regularization loss is proposed to ensure balanced and stable expert utilization. The authors validate the method on diverse datasets. Strengths: 1. Strong motivation. The two challenges presented in this paper, missing modalities and irregularly sampled data trajectories, are important in the multi-modal data fusion area. It is reasonable to address them through sparse encoding and gating networks in the MoE layer. 2. Good theoretical analysis. This paper presents a detailed theoretical analysis of the Laplace gating over the standard Softmax gating in MoE. Other mathematical proofs seem to extensively illustrate the characteristics of the proposed MoE model. 3. Good experimental results. Comprehensive evaluations of FuseMoE on image and video datasets seem to validate its effectiveness. Weaknesses: Addressing the following weaknesses may improve the paper: 1. The authors should clearly distinguish the proposed method from others' modules. From the paper, the Laplace gating function is proposed as a new one. I am not sure whether the authors made contributions to the encoder design, router design, and loss design. The authors should better illustrate their contributions, not just combine other people's work together. 2. The experimental results are not extensive. The authors should clearly describe the data modalities of the chosen benchmarks. It seems that other modalities, such as text, are not included. The authors should explain this. 3. Please explain more clearly how the gating functions can stabilize handling of imbalanced and sparse multi-modal data. The paper should point this out more concisely, preferably with experimental results. 
4. Too many mathematical proofs and theorems in Sec. 3 seem unhelpful in illustrating the advantages of the proposed method. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the weakness part to answer the questions. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer `6SUH`, We are grateful for your positive feedback and insightful comments. It is particularly encouraging to hear that you found our manuscript to have strong motivation and address important challenges, with good theoretical analysis and comprehensive experimental results. In the following, we address your major concerns in detail. **Q: The author should clearly distinguish the proposed method and others’ modules. From the paper, the Laplace gating function is proposed as a new one. I am not sure the author made some contributions to the encoder design, router design, and loss design. The author should make more illustrations about the contributions, not just combine other people’s work together.** **A:** We respectfully disagree with your claim that we are merely combining existing literature. In our work, the MoE fusion layer and missing modalities sections are newly proposed and form important parts of the FuseMoE method. The theoretical insights of the Laplace gating function are another integral and unique aspect of this paper, leading to extensive empirical evaluations under multiple circumstances and applications. Most importantly, the new setting and motivation can potentially be connected to many real-world applications. While the encoder models we employed are based on existing work, this is primarily for demonstration purposes and does not constitute a significant portion of the context. We would like to emphasize that this level of prior-work reuse has been common in many prior works, including our baselines from [1]. Nevertheless, our intention was not to combine other people’s work but to provide a clearer demonstration. We truly appreciate your advice in pointing out this issue and will ensure to clarify this in the revised version. **Q: The experimental results are not extensive. The author should clearly demonstrate the data modality of the chosen benchmarks. 
It seems that other modalities, such as text, are not included. The author should explain this.** **A:** We have created a table summarizing the modality components and sample size of each dataset in the attached PDF file. These details can also be found in the dataset details section in Appendix B. The datasets we tested include multiple modalities such as time series, text, image, ECG, and video frames. **Q: Please explain more concisely how the gating functions can stabilize the imbalance and sparse multi-modal data. The paper should point this out more concisely and better with experimental results.** **A:** We start with the computation of gating functions to demonstrate the advantage of Laplace gating. For each token $\mathbf{h}$, and each expert $i$ with embedding $\mathbf{e}_i$, the similarity score $s_i$ is computed as: $s_i = \mathbf{h} \cdot \mathbf{e}_i,$ which represents the affinity between the token and the expert. These scores are then passed through a Softmax function to obtain the gating probabilities $g_i$: $g_i = \frac{\exp(s_i)}{\sum_{j} \exp(s_j)},$ where the gating probabilities determine the weight or importance of each expert in contributing to the final decision. **Representation collapse** occurs when the Softmax gating mechanism consistently assigns high probabilities to a small subset of experts, causing these experts to dominate the decision-making process while the others become redundant. This issue arises due to: - If a few experts have significantly higher similarity scores $s_i$ than others for most tokens, the Softmax function will amplify these differences, leading to very high gating probabilities for these experts. - The exponential nature of the Softmax function makes it sensitive to differences in similarity scores, causing the highest scores to dominate. 
As for the Laplace gating function, instead of using the inner product, it computes the similarity score $s_i$ as the negative L2-distance between a token’s hidden representation $\mathbf{h}$ and an expert embedding $\mathbf{e}_i$: $s_i = -\| \mathbf{h} - \mathbf{e}_i \|_2,$ which is a distance-based similarity measure. The $L_2$-distance measures how far apart the token representation and expert embedding are in the feature space. This approach does not inherently favor any expert based on magnitude, unlike inner product which can be biased towards experts with larger norms. By considering the distance, the Laplace gating function ensures that all experts have a more balanced opportunity to be selected based on how close they are to the token representation, rather than being dominated by a few experts with higher dot products. When dealing with heterogeneous inputs, such as multimodal data (e.g., text, images, time series), the feature distributions can be very different across modalities. The $L_2$-distance is a more robust measure that can handle these differences without being overly sensitive to the scale and variance of the input features. In addition, it can gracefully degrade in the presence of missing data, rather than causing abrupt changes in gating probabilities that might occur with inner product-based measures. For experimental results on the Laplace gating function, we have added results on ImageNet using the Vision-MoE framework, which can be found in the attached PDF file. In our paper, we have already tested the Laplace gating function on MIMIC-III, MIMIC-IV, CIFAR-10, and PAM. Additionally, the results presented on the CMU-MOSI and CMU-MOSEI datasets also utilize the Laplace gating function. Sincerely, Authors **Reference** [1] Zhang et al., (2023) Improving Medical Predictions by Irregular Multimodal Electronic Health Records Modeling, ICML 2023.
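To make the contrast above concrete, here is a toy numerical sketch (our own illustration, not the paper's implementation) of inner-product Softmax gating versus negative-L2, Laplace-style gating for one token and three experts, one of which has a much larger norm. For simplicity both score types are normalized with a softmax here; the actual gating additionally applies a Top-K operator.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical toy setup: one token and three expert embeddings,
# where expert 0 has a much larger norm than the others.
h = [1.0, 1.0]
experts = [[5.0, 5.0],   # large-norm expert
           [1.0, 0.9],
           [0.9, 1.0]]

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
l2 = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Inner-product scores (Softmax gating): the large-norm expert dominates.
g_softmax = softmax([dot(e, h) for e in experts])

# Negative L2-distance scores (Laplace-style gating): weight goes to the
# experts that are actually close to the token representation.
g_laplace = softmax([-l2(e, h) for e in experts])

print([round(g, 3) for g in g_softmax])  # expert 0 takes almost all the mass
print([round(g, 3) for g in g_laplace])  # mass shifts to the nearby experts
```

With the dot product, expert 0's large norm gives it nearly all the gating probability even though experts 1 and 2 are far closer to the token in feature space; the distance-based score reverses this, which is the balanced-utilization effect described above.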
Summary: The paper introduces "FuseMoE", a novel mixture-of-experts (MoE) framework that can handle multi-modal data even in scenarios with missing elements, sparsity of samples, and temporal irregularity. It proposes an innovative Laplace gating function with theoretical proof to enhance convergence rates and predictive performance across various tasks. The Laplace gating function allows a sparser distribution of weights among the MoE and encourages more balanced utilization of experts, with sharper peaks and heavier tails preventing certain experts from becoming overly dominant. FuseMoE includes a modality and irregularity encoder using a discretized multi-time attention (mTAND) module to support generic fleximodal data ingestion with unlimited input modalities, instead of the popular pairwise setups that exist in the literature. The paper also outlines various router designs for processing multimodal inputs and discusses the tradeoffs between them. Key: 1. The Laplace gating function showed superior performance compared to Softmax gating across multiple tasks 2. Unlike baseline models, FuseMoE demonstrated improved scalability with an increasing number of modalities 3. Handles missing modalities with mTAND and per-modality routers with an entropy loss function 4. Theoretical analysis of convergence rates for parameter estimation 5. FuseMoE outperformed baseline methods on several fleximodal datasets. Strengths: 1. Sparse MoE backbone with a novel Laplace gating function that ensures better convergence rates compared to traditional Softmax, with theoretical guarantees 2. Modality and Irregularity Encoder using the mTAND module to discretize irregularly sampled observations to mix with continuous features; effective management of missing data and irregular sampling 3. MoE fusion layer to integrate embeddings from different modalities with different router designs; flexibility to handle a variable number of modalities because of per-modality expert routing 4. 
Novel methods, strong theoretical contributions, and comprehensive empirical results and evaluation Weaknesses: 1. Potential over-parameterization when input is small 2. Limited discussion of computational needs and scalability to huge datasets 3. Most of the items discussed were already established and proved. The paper seems to be combining existing literature for these datasets. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The paper claims to work with unlimited input modalities. Is it really so? How do you anticipate FuseMoE handling extremely sparse data across modalities? Any thoughts/estimation on how far we can go here? How does the complexity of the setup scale with the number of modalities? 2. Is Laplace only tested with the CIFAR-10 dataset? Why was it not tested with other, bigger datasets? 3. Have you thought about or explored the interpretability side of the expert assignment process? 4. A few other papers discuss and introduce Laplace instead of Gaussian for MoE. Can you expand more on how FuseMoE is different from these? - Nguyen, Hien D., and Geoffrey J. McLachlan. "Laplace mixture of linear experts." *Computational Statistics & Data Analysis* 93 (2016): 177-191. - Wu, Lc., Zhang, Sy. & Li, Ss. Heteroscedastic Laplace mixture of experts regression models and applications. *Appl. Math. J. Chin. Univ.* **36**, 60–69 (2021). https://doi.org/10.1007/s11766-021-3591-2 Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The potential limitation of over-parameterization of networks, especially when input data/size is small, is adequately discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1**: Yes, we mentioned this as one of our limitations in line 316 of Section 5. This issue was observed during our empirical evaluation, where the Time2Vec method we employed transforms a univariate time series into a high-dimensional vector. This transformation helps capture trend/seasonality components and long-term dependency. However, when the number of samples is small and lacks sufficient information, transforming to a high-dimensional vector can be redundant and lead to over-parameterization. This problem only occurred in a few scenarios among the numerous experiments we conducted. As mentioned in Section 5, we plan to address this issue in future work. **W2**: We have included a discussion comparing the computational efficiency of various methods in Figure 12 of Section H.3. Additionally, we have described the computational resources used in Section G.1. We have also tested FuseMoE on large-scale multimodal datasets, including MIMIC-III, MIMIC-IV, and CMU-MOSEI. As shown in Table 1 of the MultiBench paper [1], these datasets are all featured as large-scale multimodal datasets, with MIMIC-III containing 36,212 samples, MIMIC-IV containing 73,173 samples, and CMU-MOSEI containing 22,777 samples. Even though CMU-MOSI is relatively small-scale with 2,199 samples, our experiments have overall tested a sufficient number of large-scale multimodal datasets compared to most prior works (e.g. [2]). These datasets span various data types, including text, images, time series, ECG, video frames, and audio signals, demonstrating strong generalization capacity across domains. Please find tables summarizing the information of these datasets in the attached PDF file. **W3**: We respectfully disagree that we are merely combining existing literature. In our work, the MoE fusion layer and missing modalities sections are newly proposed and form important parts of FuseMoE. 
The theoretical insights of Laplace gating are another integral aspect of this paper, leading to extensive empirical evaluations under multiple circumstances. Most importantly, the new setting and motivation can potentially be connected to many real-world applications. While the encoder models we employed are based on existing work, this is primarily for demonstration purposes and does not constitute a significant portion of the context. We want to emphasize that this level of prior-work reuse has been common in many prior works, including our baselines from [2]. We truly appreciate your advice in pointing out this issue and will ensure to clarify this in the revised version. **Q1**: We wish to first emphasize that “unlimited” does not mean infinite. According to [3], most real-world multimodal problems do not exceed four modalities. FuseMoE can scale to recently introduced high-modality problems, which combine modalities from different tasks. The scalability advantage is particularly prominent when compared with the pair-wise cross-attention-based approach. FuseMoE is designed to address real-life multimodal datasets with missing modality or irregularity issues, which can be caused by malfunctioning equipment, different patient conditions, human-level decisions, etc. In principle, the current method can be applied to arbitrary sparsity or modality combinations, as we have only tested on available real-world multimodal datasets. However, we did not manually create sparse data to meet the potential “extremely sparse” criteria. If such situations occur, we can employ additional strategies to enhance the robustness of the current framework, such as leveraging hierarchical MoE where a specific subgroup of experts is assigned to handle missing modalities, and applying regularization or dropout during training to make the model robust to missing modalities. 
Given that the model architecture remains the same, as the number of modalities N increases, the FuseMoE-based method scales linearly in $\mathcal{O}(N)$, whereas the cross-attention method scales in $\mathcal{O}(N^2)$. **Q2**: We have added additional results on ImageNet using the Vision-MoE framework for the Laplace gating; these can be found in the attached PDF file. In our paper, we have also tested the Laplace gating on MIMIC-III, MIMIC-IV, and PAM. Additionally, the results presented on the CMU-MOSI and CMU-MOSEI datasets utilize the Laplace gating function. **Q3**: Yes, please see Figure 12 (c) in Appendix H, which visualizes the modality contribution of the MIMIC dataset. Additionally, we have included the modality contributions of the CMU-MOSI and CMU-MOSEI datasets in the attached PDF file. **Q4**: There are three main differences between our FuseMoE and the Laplace MoE considered in the referenced papers: 1. Conditional Distributions: Under the FuseMoE model, the conditional distribution of $Y | X$, with the density given in Equation (4), where $X$ is an input and $Y$ is an output, is a mixture of Gaussian distributions. In contrast, the Laplace MoE model utilizes a mixture of Laplace distributions. 2. Sparse MoE versus Dense MoE: In FuseMoE, only a subset of Gaussian distributions is activated for each input $X$. Specifically, the conditional distribution of $Y | X$ is a mixture of K Gaussian distributions, where K is often set to one or two in practice. This mechanism is enabled by the Top K operator described below line 154. Meanwhile, in the Laplace MoE model, all Laplace distributions are activated for each input. 3. Gating Kernels: In FuseMoE, the mixture weights in Equation (4) use the Laplace kernel with the scale parameter set to one. In contrast, the gating kernel in the Laplace MoE model is an exponential value of a linear kernel. 
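The linear-versus-quadratic scaling claim in our answer to Q1 can be sketched by counting fusion components; the one-router-per-modality count is our own illustrative assumption, not a quantity defined in the paper:

```python
def per_modality_components(n_modalities):
    # MoE-style fusion: one router/expert group per modality -> linear growth.
    return n_modalities

def pairwise_components(n_modalities):
    # Pairwise cross-attention: one block per unordered modality pair -> quadratic growth.
    return n_modalities * (n_modalities - 1) // 2

for n in (2, 4, 8, 16):
    print(f"N={n}: per-modality {per_modality_components(n)}, pairwise {pairwise_components(n)}")
```

At N=16 modalities the pairwise design already needs 120 cross-attention blocks versus 16 per-modality components, which is the scalability gap referenced above.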
**References** [1] MULTIBENCH: Multiscale Benchmarks for Multimodal Representation Learning [2] Improving Medical Predictions by Irregular Multimodal Electronic Health Records Modeling [3] High-Modality Multimodal Transformer: Quantifying Modality \& Interaction Heterogeneity for High-Modality Representation Learning --- Rebuttal Comment 1.1: Title: Official Comment from reviewer 6Ay7 Comment: I thank the authors for the rebuttal and for taking the time to conduct additional experiments and comparisons. The rebuttal and the additional clarification address most of my concerns so I am happy to increase my score from 6 to 7. Cheers! --- Reply to Comment 1.1.1: Title: Thank You! Comment: Dear Reviewer `6Ay7`, Thank you so much for your positive response! Please feel free to reach out if you have any further questions or thoughts. We would be delighted to discuss them with you to improve our paper! Sincerely, Authors
Summary: This paper proposes a novel MoE architecture for handling and fusing multiple modalities, along with two core contributions: - a novel router design that can handle missing modalities; - a Laplace gating function that is theoretically proven to ensure better convergence. Strengths: - 1 This paper is generally well written and easy to follow; tables and figures are neat and informative; - 2 The motivations of core designs are well elaborated; - 3 Theoretical proofs are provided to further justify its effectiveness; - 4 Extensive experiments, along with diverse modalities, are conducted to validate the effectiveness of the proposed approach. - 5 The capability of handling missing inputs is interesting (Fig. 4 c). Weaknesses: - 1 The proposed method achieves promising results on the tested benchmarks, which are still comparably small-scale datasets, e.g., CMU-MOSI with 2000+ samples. I am wondering if this work can be applicable to training/fine-tuning with larger-scale data. - 2 Again, for vision tasks, only CIFAR-10 is used. This cannot fully justify its effectiveness for more general vision or multi-modal tasks; - 3 The capability of handling missing modalities is promising. The author is encouraged to explain why, in some scenarios, missing modalities + the proposed method surpasses the variant with full modalities (Fig 4 c). I am still wondering about its generalizability under more scenarios. - 4 Visualization and qualitative analysis of the learned gating weights would help the reader better understand the method. Technical Quality: 3 Clarity: 3 Questions for Authors: - 1 In Table 5, Laplace gating shows limited superiority against other variants; any insight for it? - 2 In Table 4c, why does the method achieve better results with fewer modalities? Any further insight or ablations? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer `P6x6`, We deeply appreciate your insightful comments and positive feedback. We are heartened by your recognition that our paper is well-written, with well-elaborated motivations. We are also pleased that you acknowledge the theory and extensive experiments we provided to justify the effectiveness of our method. Below, we address your questions and concerns in detail. **Q: The proposed method achieves promising results on the tested benchmarks, which are still comparably small-scale datasets, e.g., CMU-MOSI with 2000+ samples. I am wondering if this work can be applicable to training/fine-tuning with larger-scale data.** **A:** Yes, we have tested FuseMoE on large-scale multimodal datasets including MIMIC-III, MIMIC-IV, and CMU-MOSEI. As shown in Table 1 of the MultiBench paper [1], these datasets are all featured as large-scale multimodal datasets, with MIMIC-III containing 36,212 samples, MIMIC-IV containing 73,173 samples, and CMU-MOSEI containing 22,777 samples. Even though CMU-MOSI is relatively small-scale with 2,199 samples, our experiments overall have tested a sufficient number of large-scale multimodal datasets. These datasets span various data types including text, images, time series, ECG, video frames, and audio signals, demonstrating strong generalization capacity across different domains. Please find tables summarizing the information of these datasets in the attached PDF file. **Q: Again, for vision tasks, only CIFAR-10 is used. This cannot fully justify its effectiveness for more general vision or multi-modal tasks.** **A:** As explained above, several of the multimodal datasets we tested include vision components, such as chest X-rays in MIMIC and video frames in CMU-MOSI and MOSEI. Additionally, we have conducted further evaluation on the ImageNet dataset. Detailed information can be found in the attached PDF document. **Q: The capability to handle missing modalities is promising. 
The author is encouraged to explain why, in some scenarios, missing modalities + the proposed method surpasses the variant with full modalities (Fig 4 c). I am still wondering about its generalizability under more scenarios.** **A:** Full modalities mean that patient records with incomplete modalities will be discarded. In contrast, incorporating data with missing modalities allows us to access a broader array of samples, resulting in the utilization of all available patient data. As mentioned in Table 6 of Appendix B.1, for instance, when examining the 48-IHM or LOS tasks in the MIMIC dataset, the total number of patient samples is 35,129. However, the number of patients with clinical notes is only 32,038, and with chest X-ray (CXR) only 8,731. If we strictly enforce the inclusion of complete modalities involving CXR, the total number of samples available is constrained by this bottleneck. Therefore, being able to utilize incomplete modalities is a critical step in making use of the vast amount of patient data without full modalities. This approach will greatly improve generalization due to the increased number of available samples. **Q: The visualization and qualitative analysis of the learned gating weight would help the reader better understand the method.** **A:** We have included the visualization and qualitative analysis of the learned gating weights in Figure 12 of Appendix I. To better address your request, we have also added additional results on the CMU-MOSI and MOSEI datasets in the attached PDF file. **Q: In table 5, Laplace gating shows limited superiority against other variants, any insight for it?** **A:** In the results of Table 5, Laplace gating is still close to the best-performing method. We wish to emphasize that the advantage shown in Table 4 comes from the MoE architecture improvement. Most fusion methods listed in Table 4 are only suited to two-modality problems and cannot be extended as we scale to more modalities in Table 5. 
The comparison between Softmax, Gaussian, and Laplace gating functions in Table 5 reflects improvements within the same MoE architecture. These two types of improvements are distinct: sometimes one factor (e.g., the MoE architecture) is more important than the other (e.g., the gating function). Additionally, the performance advantage of each method also depends on the modality combination. Adding or removing modalities can influence the effectiveness of the Laplace gating mechanism. **Q: In table 4c, why the method achieves better results with less modalities? Any further insight or ablations?** **A:** Based on the context, we believe you mean Figure 4c instead of Table 4c. We would like to refer you to our answer to your first question for more details on this matter. Please let us know if there are specific aspects or sections you would like us to elaborate on further, or if you have any additional questions. Thank you! Sincerely, Authors **Reference** [1] Liang et al., (2021) MULTIBENCH: Multiscale Benchmarks for Multimodal Representation Learning, NeurIPS 2021.
Rebuttal 1: Rebuttal: Dear Area Chairs and Reviewers, We want to thank you for your valuable feedback and insightful reviews, which have greatly contributed to improving our paper. The following endorsements are truly motivating: - Writing: Our paper is well-written and easy to follow, with informative tables and figures (Reviewer `P6x6`). - Motivation: The motivation of our paper is promising and well-elaborated (Reviewers `P6x6`, `6SUH`, `sZSN`). - Method: FuseMoE is novel and performs better than baselines (Reviewer `6Ay7`). - Theory: Our paper is theoretically sound (Reviewers `6Ay7`, `6SUH`, `sZSN`). - Experiments: The experiments in our paper are extensive and comprehensive (Reviewers `P6x6`, `6Ay7`, `6SUH`, `sZSN`). To better address the reviewers’ concerns, we have added additional results in the attached PDF file. We summarize the results below: - **Table 1**: Summary of information on all the datasets we tested in the experiment section, including important categories such as modalities and sample sizes. These results complement our answers to Reviewers `P6x6`, `6Ay7`, and `6SUH`. - **Figure 1**: ImageNet classification accuracy results using Vision-MoE-Base and Vision-MoE-Large models. For Vision-MoE-Base, we used 8 experts, 512 hidden dimensions, and 12 layers; for Vision-MoE-Large, we used 16 experts, 768 hidden dimensions, and 16 layers. The models are trained for 300 epochs. These results complement our answers to Reviewers `P6x6`, `6Ay7`, `6SUH`, and `sZSN`. - **Figure 2**: Visualization of modality weight on the top-$k$ chosen experts using CMU-MOSI and CMU-MOSEI datasets. For every expert selected, we calculate the number of samples that include a specific modality, weighted by corresponding weight factors from the gating functions. The outcomes are subsequently normalized across modalities. The results for the MIMIC dataset can be found in Figure 12(c) of Appendix H.3. These results complement our answers to Reviewers `P6x6`, `6Ay7`, and `sZSN`. 
- **Tables 2 - 5**: Experimental results on varying the gating functions in conjunction with different encoders of FuseMoE. All results are averaged over 5 random runs. These results complement our answers to Reviewer `sZSN`. We hope that our responses below and additional results will satisfactorily address your questions and concerns. We sincerely appreciate the time and effort you have dedicated to reviewing our submission, along with your invaluable suggestions. Sincerely, Authors Pdf: /pdf/97356d6977cfbd1a2722fbe54cea9d760f46c4ec.pdf
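The modality-weight aggregation described for Figure 2 (per chosen expert, sum gating weights over samples containing each modality, then normalize across modalities) can be reconstructed roughly as follows. This is an illustrative sketch with hypothetical toy data, not the authors' code:

```python
from collections import defaultdict

def modality_weights(samples):
    """Aggregate gating weight per (expert, modality), then normalize
    across modalities for each expert.

    `samples` is a list of (modalities, routing) pairs, where `routing`
    maps each chosen expert id to its gating weight for that sample.
    """
    totals = defaultdict(lambda: defaultdict(float))
    for modalities, routing in samples:
        for expert, weight in routing.items():
            for m in modalities:
                totals[expert][m] += weight
    # Normalize across modalities per expert so the weights sum to one.
    out = {}
    for expert, per_mod in totals.items():
        s = sum(per_mod.values())
        out[expert] = {m: w / s for m, w in per_mod.items()}
    return out

# Hypothetical toy data: two samples routed to expert 0.
samples = [({"text", "audio"}, {0: 0.8}),
           ({"text"}, {0: 0.2})]
print(modality_weights(samples)[0])  # text: ~0.56, audio: ~0.44
```

Whether the normalization runs across modalities per expert (as here) or across experts per modality is our reading of the figure description; the toy sample data is invented for the example.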
NeurIPS_2024_submissions_huggingface
2024
Reinforcement Learning Policy as Macro Regulator Rather than Macro Placer
Accept (poster)
Summary: The paper proposes MaskRegulate, a method that utilizes reinforcement learning in the refinement stage for time-efficient training and accurate reward generation. Additionally, the authors introduce a regularity metric that can be used alongside HPWL in chip placement. To validate their proposed method, the authors evaluated PPA performance, a real-world metric, and compared it with competitive baselines. Strengths: - This paper highlights that a major challenge in applying RL to chip placement is the problem formulation. - The proposed MaskRegulate introduces the concept of regularity, which has been overlooked in previous works, and achieves state-of-the-art performance. - The authors validated their proposed method by evaluating it on the ICCAD 2015 benchmark and comparing it with competitive baselines. Furthermore, they assessed the real-world metric, PPA, using a commercial EDA tool to ensure the method's practicality in industry applications. - The visualization results in Figure 6 are impressive and effectively demonstrate the effectiveness of the proposed method. If possible, I recommend including some of these visualizations from Figure 6 in the main paper. Weaknesses: The overall presentation of the paper requires improvement, particularly in the clarity of labels and explanations for figures. Figure 3(a) is especially problematic, as the labels 'adjusting' and 'adjusted' are ambiguous and do not clearly indicate which blocks they refer to. To avoid potential misinterpretation, it is crucial to enhance both the figure and its accompanying explanation. Addressing these issues will significantly improve the paper's clarity and prevent possible misunderstandings. Technical Quality: 4 Clarity: 2 Questions for Authors: 1. How does the model determine adjusted and unadjusted macros? Are the macros adjusted sequentially? If so, how is the order of the sequence determined? 2. 
What happens if the model encounters a situation where no feasible space is available? Does it re-adjust previously adjusted macros? 3. In section 4.4, you mentioned using alpha=0.7 for experiments. Do these experiments refer to the results in Table 1 and Table 2? If so, please explicitly state this, as Figure 4 suggests that alpha is a very important hyperparameter. Confidence: 3 Soundness: 4 Presentation: 2 Contribution: 4 Limitations: - Apart from some presentation-related issues, I do not observe any significant limitations in this paper. - To further strengthen the quality of the work, the authors might consider including training convergence graphs for ChipFormer and MaskRegulate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable and constructive comments. Below please find our response. ### Q1 Figure 3(a) is not clear We are sorry for the unclear figures. The red macro $M^1$ denotes the macro that has been adjusted; the yellow macro $M^2$ is the macro currently being adjusted; the blue macros $M^3$ and $M^4$ are the macros that are unadjusted and will be adjusted in future steps. We will revise this figure and its accompanying explanation in the caption to make it clear. Thank you for bringing this to our attention. ### Q2 How does the model determine adjusted and unadjusted macros? Are the macros adjusted sequentially? Yes, the macros are adjusted sequentially. As we stated in lines 145-147 of the paper, the order of the macro sequence is determined by the corresponding net and size of a macro, and the number of connected modules that have been adjusted, which is the same as in previous studies [1-2]. We will revise the paper to make this clear. Thank you. [1] MaskPlace: Fast Chip Placement via Reinforced Visual Representation Learning. NeurIPS, 2022. [2] Macro Placement by Wire-mask-guided Black-box Optimization. NeurIPS, 2023. ### Q3 No feasible space is available? We sincerely appreciate the reviewer's insightful question regarding the model's behavior in situations where no feasible space is available. In fact, throughout our entire training and test process, we did not encounter any instances where the final adjusted solution had overlaps. This is likely due to the following three reasons: 1. Prioritized adjustment sequence. We employ a specific adjustment sequence that prioritizes elements with greater impact on the layout, based on many aspects, as we answered in Q2. This mitigates the risk of overlap. 2. Position mask: MaskRegulate uses a position mask that effectively selects non-overlapping positions, fundamentally reducing the risk of overlapping occurrences. 3. Integration of regularity.
MaskRegulate is designed to encourage elements to align with edges, which not only enhances the overall layout but also further reduces the probability of overlapping. Given these strategies, we have not encountered any instances of element overlap during our training and testing phases. However, we acknowledge that the possibility of overlap may exist in extreme scenarios. As you suggested, re-adjusting previously adjusted macros is a good idea for improving the model's robustness, which can lead to a legal placement result through additional trials. Besides, we can introduce a negative reward for the overlapping scenario, which would penalize adjustments that lead to overlaps, thereby guiding the model towards better adjustments. We are grateful to the reviewer for highlighting this potential issue. We will implement this mechanism in the future. This enhancement will further improve our model's robustness and adaptability, enabling it to handle a wider range of complex layout scenarios, including potential space constraint issues. Thank you very much! ### Q4 Is $\alpha=0.7$ in Table 1 and Table 2? Yes, we set $\alpha=0.7$ by default in our experiments, including those in Tables 1 and 2. We apologize for not explicitly stating this. We will revise the paper to clarify this setting. Thank you for bringing this to our attention. ### Further improvement 1: including some visualizations from Figure 6 in the main paper. Thank you for your appreciation of our visualizations! We're glad that you found them valuable. In light of your suggestions, we will adjust the layout and attempt to incorporate these visualizations into the revision of our paper. We believe this will enhance readability and improve the overall presentation of our research. Thanks for your constructive feedback. ### Further improvement 2: adding training convergence graphs Thank you for your valuable suggestion.
We agree that including these figures would provide additional insights and further strengthen the quality of our work. However, due to the space limitations of the response phase, we cannot include these graphs currently. We will include them in the revision of our paper. Thank you. **We hope that our response has addressed your concerns, but if we missed anything please let us know.** --- Rebuttal Comment 1.1: Title: Dear reviewer, please read and respond to authors' rebuttal. Comment: This paper has diverse reviews and it would benefit a lot to start a discussion to clarify any confusing points. Thanks! Your AC.
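The position mask mentioned in the rebuttal above (a grid marking where a macro can go without overlapping already-adjusted macros) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the function name, the `occupied` cell set, and the grid discretization are all assumptions.

```python
# Illustrative position mask: mask[gy][gx] is True when a w x h macro
# whose lower-left cell is (gx, gy) fits entirely inside the grid and
# overlaps none of the already-occupied cells.
def position_mask(grid_w, grid_h, w, h, occupied):
    mask = [[False] * grid_w for _ in range(grid_h)]
    for gy in range(grid_h - h + 1):
        for gx in range(grid_w - w + 1):
            mask[gy][gx] = all(
                (gx + dx, gy + dy) not in occupied
                for dy in range(h) for dx in range(w)
            )
    return mask
```

Under this sketch, the "no feasible space" situation the reviewer asks about would correspond to a mask that is entirely False.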
Summary: This paper utilizes an online RL algorithm to adjust the existing placement layouts, instead of placing the blocks from scratch. Additionally, the paper introduces a heuristic concept, regularity, where better regularity results in higher PPA performance. Besides, this paper tests the PPA performance using commercial software, showing their methods can achieve significant PPA improvements. Experiments show the proposed method improves PPA performance and proxy metrics in the ICCAD 2015 benchmark compared to the state-of-the-art placement works. Strengths: 1. The paper introduces the new metric regularity, which is heuristic knowledge from experienced engineers. Compared to the proxy metrics commonly used in previous work, it is better reflected in PPA performance, which is a more important metric considered by the industry. 2. The paper uses a commercial EDA tool to evaluate the efficiency of their methods. The results of the EDA tool analysis are convincing, and the performance is superior to the previous methods. Weaknesses: Several technical details need to be explained. 1. The most concerning part is the new MDP. The action space is not well described. Compared to MaskPlace, MaskRegulate places the blocks based on the full placement results. So, if the current placing block is placed in an already occupied position, would the occupied block be swapped with the current block, or would the occupied block be removed from the board? Compared to placing the blocks one by one, the termination condition is clear: the episode would end once all blocks are placed. In this paper, the termination function is not clear since each step is a full placement with all blocks on the board. What is the termination function in this MDP? 2. The WireMask feature map calculation is also ambiguous in this paper. Since the WireMask feature map is proposed in the MDP that places blocks one by one, it describes the gain in HPWL when placing the current block on each grid cell.
But, in your MDP, the block seems to be swapped (related to the first question). How do you calculate the WireMask in your case? 3. In the generalization part, for the unseen chips, is the model training part frozen to do forward inference only? Technical Quality: 2 Clarity: 1 Questions for Authors: Please see the pros and cons part. Confidence: 5 Soundness: 2 Presentation: 1 Contribution: 3 Limitations: Regularity is a piece of heuristic knowledge summarized by experienced engineers. As introduced in section 3.2 "Why does regularity matters?", engineers prefer to place macros toward the peripheral regions of the chip. Is it possible that there exist other unknown heuristics, such as rectangular or triangular shapes, that also lead to good performance? It may be an open question whether applying existing human heuristic knowledge constrains the agent from exploring the whole state-action design space. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
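For context on the HPWL proxy this review keeps referring to: half-perimeter wirelength sums, over all nets, the half perimeter of the bounding box of the net's pin locations. A minimal sketch (illustrative names only, not taken from the paper):

```python
# Half-perimeter wirelength (HPWL). A net is a list of (x, y) pin
# coordinates; its HPWL is the half perimeter of the pins' bounding box.
def net_hpwl(pins):
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# Total HPWL of a placement: sum over all nets.
def total_hpwl(nets):
    return sum(net_hpwl(net) for net in nets)
```

For example, a two-pin net from (0, 0) to (2, 3) has HPWL 2 + 3 = 5.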
Rebuttal 1: Rebuttal: Thank you for your valuable and constructive comments. Below please find our response. ### Q1 Technical details are not clear. Thank you for carefully reading our paper and providing these detailed and valuable comments. We are sorry for the unclear presentation. **Q1-1 Termination function.** MaskRegulate sequentially adjusts all the macros in a certain order. All macros will be sorted according to certain rules and then adjusted one by one, as we stated in line 146 of the main paper. The MDP will terminate when all macros have been adjusted. **Q1-2 Would the occupied block be switched by the current block?** No. When MaskRegulate is adjusting a macro, only the macros that have been adjusted (i.e., macros that appear earlier in the adjustment sequence) are considered as occupied, as illustrated in Figure 3(b) of the paper. The macro to be adjusted can temporarily occupy the position of unadjusted macros, and the HPWL is calculated within their respective nets. We will revise these parts to make them clear. Thank you very much. ### Q2 Calculation of WireMask The WireMask is calculated from all the other macros' current locations, whether they are adjusted or not, as illustrated in Figure 5 of our paper. ### Q3 Model inference in the generalization part. Yes. We froze the pre-trained model and only perform forward inference in the generalization experiments. We will make this clear in our final revision. Thank you. ### Q4 Integrate unknown heuristics Thank you for your insightful comment and valuable suggestion. We would like to address your point as follows: In the chip design domain, heuristic knowledge plays a crucial role, and state-of-the-art chips still heavily rely on expert knowledge. We acknowledge that there may exist other unknown heuristics that could potentially impact the performance of the placement process. Incorporating these additional heuristics is an important direction for future research.
In our upcoming work, we plan to explore techniques like reward modeling to effectively capture and integrate a wider range of expert knowledge to guide the training of our RL regulator. This work represents an initial attempt to incorporate advanced heuristic knowledge, specifically the idea that macros should be placed near the periphery of the chip, into the current practice of RL-based placement. Our results demonstrate that this approach yields significant improvements in placement quality. We believe that this work opens a door to enhance RL-based approaches by incorporating heuristic knowledge to make them more suitable for real-world placement tasks. We greatly appreciate your suggestion and will include a discussion of these potential future directions in the revised version of our manuscript. Your feedback has been invaluable in helping us identify areas where we can clarify and expand upon our contributions. **We hope that our response has addressed your concerns, but if we missed anything please let us know.** --- Rebuttal Comment 1.1: Title: Dear reviewer, please read and respond to authors' rebuttal. Comment: This paper has diverse reviews and it would benefit a lot to start a discussion to clarify any confusing points. Thanks! Your AC.
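The WireMask feature map discussed in this thread records, per grid cell, the HPWL change from placing the current macro there, computed against the current locations of all other macros in its nets. Under that description it might be sketched as follows; this is an assumption-laden illustration (names, grid model, and pin-offset details are all invented), not the papers' implementation:

```python
# Per-net HPWL: half perimeter of the pins' bounding box.
def net_hpwl(pins):
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# Illustrative wire-mask-style feature map: mask[gy][gx] is the HPWL
# delta incurred by putting the current macro at cell (gx, gy), given
# the current pin locations of every net the macro belongs to.
def wire_mask(grid_w, grid_h, nets_other_pins):
    # nets_other_pins: one pin list per connected net, containing the
    # locations of all macros except the one being adjusted.
    base = sum(net_hpwl(p) for p in nets_other_pins)
    mask = [[0.0] * grid_w for _ in range(grid_h)]
    for gy in range(grid_h):
        for gx in range(grid_w):
            cost = sum(net_hpwl(p + [(gx, gy)]) for p in nets_other_pins)
            mask[gy][gx] = cost - base  # lower delta = better position
    return mask
```

In practice such maps are computed incrementally rather than by re-evaluating every net per cell, but the quantity being tabulated is the same.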
Summary: This paper proposes a novel approach called MaskRegulate for the refinement stage of macro placement, using reinforcement learning (RL) methods. Specifically, it trains an RL policy to adjust the existing placement layouts, from which the policy can receive sufficient information, instead of placing from scratch. Regularity is also considered during the learning process, helping to enhance the placement quality. Experiments demonstrate that it outperforms previous competitive approaches, achieving significant half-perimeter wirelength (HPWL), power, performance, and area (PPA) improvements. Strengths: 1. This paper is the first to explore an RL method as a regulator instead of a placer, providing a more effective and efficient approach to optimizing chip design. 2. A regular mask is employed to guide the learning process, which greatly improves the regularity of the chip layouts. 3. PPA metrics are considered for a comprehensive analysis, showing the practical applicability and effectiveness of MaskRegulate. 4. The experiments are sufficient and solid, showing the great refinement and generalization ability of MaskRegulate. Weaknesses: 1. Regularity is also proposed in other works such as [1]. This paper does not discuss how it differs from them. 2. The paper lacks discussion on the necessity of using RL, rather than other methods, as the regulator. [1] A. Vidal-Obiols, J. Cortadella, J. Petit, M. Galceran-Oms, and F. Martorell. Rtl-aware dataflow-driven macro placement. In 2019 Design, Automation Test in Europe Conference Exhibition (DATE), 2019. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the regular mask be used in [2] WireMask-BBO? 2. Why does GP HPWL improve since only MP HPWL is considered in the learning process? [2] Y. Shi, K. Xue, L. Song, and C. Qian. Macro placement by wire-mask-guided black-box optimization. In Advances in Neural Information Processing Systems 36, New Orleans, LA, 2023.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable and positive comments. Below please find our response. ### Q1 Discussion of regularity in other paper Thanks for your valuable comments. [1] proposes a multi-level approach for macro placement that can leverage the hierarchy tree and effectively explore structural information at the RTL stage. The hierarchical partitions facilitate the integration of information such as dataflow, wirelength, and regularity. Then, [2] attempts to "mimic" the interaction between the RTL designer and the physical design engineer to produce human-quality macro placement results by exploiting the logical hierarchy as well as regularity and connectivity of macros. However, no studies have integrated regularity into RL. In our paper, we not only propose a new RL regulator framework but also integrate regularity into it. Additionally, our proposed RegularMask can be used to improve other methods, such as MaskPlace (as shown in Table 8 of the paper) and WireMask-BBO (as shown in the following answers to Q2). We will add this discussion to our revised paper. Thank you very much. [1] RTL-aware dataflow-driven macro placement. DATE, 2019 [2] RTL-MP: Toward practical, human-quality chip planning and macro placement. ISPD, 2022. ### Q2 Discussion of using RL as regulator. Can the regular mask be used in WireMask-BBO? Thank you for your valuable comments. Compared to RL, BBO relies on iterative evaluation and cannot obtain a generalizable policy capable of rapid adjustment through forward inference alone. We believe other methods (such as BBO) can also utilize our proposed RegularMask. To investigate whether WireMask-BBO can benefit from incorporating RegularMask, we conducted additional experiments on the ICCAD 2015 benchmark to compare WireMask-EA and WireMask-EA + RegularMask. WireMask-EA + RegularMask comprehensively places and adjusts based on both WireMask and RegularMask. 
As shown in Table R3, WireMask-EA + RegularMask improves both global HPWL and regularity in all the cases, demonstrating the versatility of our proposed RegularMask. We will add this discussion and experiment to our revised paper. Thank you. ### Q3 Why does GP HPWL improve since only MP HPWL is considered in the learning process? Macros (i.e., individual building blocks such as memories) have much larger sizes than standard cells (i.e., smaller basic components like logic gates), which significantly impacts the overall quality of the final chip. A common practice in placement is to first fix macro locations and then place standard cells [1-3]. After determining the positions of macros by optimizing MP HPWL, placing the remaining standard cells typically yields good final results, i.e., GP HPWL. However, to consistently improve GP HPWL, it's necessary to consider the sparse signal in the GP stage, which is one of our future areas of work as we stated in our paper. We will add this discussion to our revised paper. Thank you very much. [1] MaskPlace: Fast chip placement via reinforced visual representation learning. NeurIPS, 2022. [2] AutoDMP: Automated DREAMPlace-based macro placement. ISPD, 2023. [3] Macro placement by wire-mask-guided black-box optimization. NeurIPS, 2023. **We hope that our response has addressed your concerns, but if we missed anything please let us know.** --- Rebuttal Comment 1.1: Comment: Thank you very much for addressing my concerns. I will keep my score.
Summary: This paper introduces a Macro Regulator that uses reinforcement learning (RL) to optimize existing macro placements. It reconsiders the application of RL in macro placement and incorporates a regularity metric into the reward function. The paper presents a comprehensive set of comparative experiments to ultimately demonstrate the superiority of the Macro Regulator. Ablation studies also confirm the effectiveness of each component in the proposed approach. The contributions of the paper are specified as follows: Firstly, Macro Regulator shifts the focus of RL from placing macros from scratch to refining existing macro placements. This approach enables the RL policy to utilize more state information and achieve more accurate reward signals, enhancing learning efficiency and final PPA quality. Secondly, the paper introduces the concept of regularity into the RL framework, a crucial metric often overlooked in chip design. By incorporating regularity as part of the input information and reward signal, Macro Regulator ensures consistency in manufacturing and performance. Thirdly, the proposed method is evaluated on the ICCAD 2015 benchmark, demonstrating superior performance in terms of global placement half-perimeter wire length (HPWL) and regularity compared to various competing methods. Additionally, PPA metrics are tested using the commercial EDA tool Cadence Innovus, showing significant improvements. Strengths: In summary, this paper makes a substantial contribution to the field of chip design through its novel problem formulation and high-quality experimental methodology. The originality of integrating regularity into the RL framework and the thoroughness of the experimental validation are particularly noteworthy. This work advances the state of the art in macro placement. 1. Originality - Novel Problem Definition: The paper introduces a new problem formulation by shifting the focus from using RL to place from scratch to refining existing macro placements. 
This innovative approach allows for more efficient use of state information and more precise reward signals. - Integration of Regularity: The inclusion of regularity as a key metric in the RL framework is a novel contribution. Regularity is crucial for ensuring manufacturing consistency and performance but is often overlooked in existing methods. 2. Quality - Robust Methodology: The paper employs a comprehensive experimental methodology, utilizing the ICCAD 2015 benchmark and commercial EDA tools (Cadence Innovus) for validation. The experiments are well-designed and include comparisons with multiple existing methods. - Detailed Analysis: The paper includes a comprehensive analysis of the proposed method, along with ablation studies that validate the effectiveness of different components. The detailed evaluation and the provided code help to enhance the credibility of the results. 3. Significance - Impact on Chip Design: The proposed Macro Regulator addresses significant limitations of existing RL methods in macro placement, such as training time, reward sparsity, and generalization capability. By improving placement quality and PPA metrics, this work has the potential to significantly impact the efficiency and effectiveness of chip design processes. Weaknesses: While the paper presents a significant advancement in the field of macro placement using reinforcement learning, there are areas for improvement. Expanding the benchmark scope, including real-world case studies, demonstrating generalization capabilities, and providing more detailed algorithmic descriptions would enhance the overall impact and robustness of the work. Addressing these weaknesses would make the paper's contributions even more compelling and practical for real-world applications. 1. Limited Benchmark Scope - Additional Datasets Needed: While the paper utilizes the ICCAD 2015 benchmark and Cadence Innovus for validation, the scope of the benchmarks is somewhat limited. 
Including additional datasets or real-world applications could strengthen the evaluation and demonstrate the generalizability of the proposed method. For example, benchmarks from other well-known contests or industrial examples with different characteristics could provide a more comprehensive assessment. 2. Generalization to Different Chip Designs - Generalization Capability: While the paper mentions the generalization capabilities of Macro Regulator, more detailed experiments and analysis on different types of chip designs (e.g., various sizes, complexities, and design constraints) would strengthen the claims. Demonstrating the method's adaptability to a broader range of scenarios would enhance its perceived robustness and applicability. 3. Clarity of Method Description - Detailed Algorithmic Steps: While the paper is generally clear, some parts of the methodology could benefit from more detailed descriptions. For example, providing pseudo-code or more granular steps of the Macro Regulator algorithm would help readers better understand the implementation details and reproduce the results. Additionally, regularity is an important metric, but the paper provides limited explanation of this metric. It would be beneficial to supplement more information about its definition and calculation. Furthermore, a more detailed explanation of essential factors such as the states and actions in reinforcement learning would enhance the paper. Technical Quality: 3 Clarity: 2 Questions for Authors: In the abstract, the author states, "Our RL regulator can fine-tune placements from any method and enhance their quality." However, in the experimental section, it is mentioned, "For each chip, MaskRegulate uses DREAMPlace to obtain an initial macro placement result to be adjusted, which takes within a few minutes and has relatively low quality." There are two issues with this: 1. 
The claim of "from any method" is not entirely supported as DREAMPlace is essentially an analytical method, thus the experiments can only prove the ability to improve macro placement solutions from analytical methods, not from any other method. 2. The assertion of "relatively low quality" raises the question of whether the MaskRegulate method relies heavily on the quality of initial macro placement result. It would be worthwhile to explore the effect of optimizing from different quality macro placement solutions (such as higher but not the best macro placement result), rather than starting from relatively low-quality ones. In summary, can the MaskRegulate method improve the quality of "any method" and "any quality of initial macro placement solution"? Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
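The review above asks for a more explicit definition of the regularity metric. Since this thread does not reproduce the paper's formula, the following placeholder scores only the "macros near the periphery" heuristic the discussion mentions; the function name, the normalization, and the use of macro centers are all assumptions, not the paper's definition.

```python
# Placeholder periphery-style regularity score: average, over macros,
# of each macro center's normalized distance to the nearest chip edge.
# 0.0 means every macro sits on the boundary (most "regular" under the
# periphery heuristic); values approach 1.0 as macros drift to the center.
def regularity_score(macros, chip_w, chip_h):
    # macros: list of (cx, cy) macro-center coordinates.
    half = min(chip_w, chip_h) / 2  # largest possible edge distance scale
    total = 0.0
    for cx, cy in macros:
        d = min(cx, chip_w - cx, cy, chip_h - cy)
        total += d / half
    return total / len(macros)
```

Any shape-based heuristic the reviewer speculates about (rectangular or triangular arrangements) would replace the per-macro distance term with a different geometric score.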
Rebuttal 1: Rebuttal: Thank you for your detailed and valuable comments. Below please find our response. ### Q1 Limited benchmark scope. Thanks for your valuable comments. In our work, we use the ICCAD 2015 contest as our benchmark, which is currently one of the largest open-source benchmarks that allows us to evaluate congestion, timing and other PPA metrics. To the best of our knowledge, ICCAD 2015 is the benchmark that most closely reflects the current practices in the EDA industry. We agree with you that expanding the benchmark scope would further strengthen our work. Thus, we add experiments on the ISPD 2005 contest benchmark, which is also a popular benchmark in AI for chip design but does not have sufficient information for PPA evaluation. The results can be found in Table R1 in our PDF file. For detailed settings and discussions, please refer to the following Q2. Thank you. ### Q2 Generalization ability Thank you for your valuable comments. In our paper, we have tested MaskRegulate's generalization ability in Table 2. We have now further tested the generalization on the ISPD 2005 benchmark by directly using the pre-trained models on superblue 1, 3, 4, and 5 (i.e., the same models in Table 2) of MaskPlace and MaskRegulate to place and regulate the eight chips. As shown in Table R1, MaskRegulate still outperforms MaskPlace in most cases, demonstrating our superior generalization ability and robustness. We will include this experiment in our revised paper. Thank you very much. ### Q3 Clarity of method description. Thank you for your suggestions. However, we cannot update the paper now due to the rules of the rebuttal phase. We will carefully revise our paper according to your suggestions, including adding pseudo-code and introducing regularity and the MDP in detail. Thank you very much. ### Q4 Can MaskRegulate improve the quality of any method and any quality of initial macro placement solution? Thank you for your valuable comments.
Yes, MaskRegulate can be used to adjust any initial macro placement solution. According to your suggestions, we have conducted additional experiments to demonstrate this capability. We used the pre-trained model on superblue 1, 3, 4, and 5 (i.e., the same models in Table 2 and R1) to adjust different placement results obtained by MaskPlace, AutoDMP, and WireMask-EA. The results are shown in Table R2. MaskRegulate consistently improves regularity on all four unseen chips and enhances global HPWL on three chips. We will include this experiment in our revised paper. Thank you very much. **We hope that our response has addressed your concerns, but if we missed anything please let us know.** --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer NxAR Comment: Thank you very much for addressing my concerns. I will keep my score.
Rebuttal 1: Rebuttal: We are very grateful to the reviewers for carefully reviewing our paper and providing constructive comments and suggestions. We have revised the paper carefully according to the comments and suggestions, but we cannot upload the paper due to the NeurIPS rules. Our response to individual reviewers can be found in the personal replies, but we would also like to make a brief summary of revisions for your convenience. Writing Enhancements: - We correct typos and improve some method explanations to enhance overall readability. Expanded Discussions: - We incorporate a discussion on the combinatorial solver for macro placement. - We explore the potential of using alternative approaches as macro regulators. - We address the scenario where no feasible space is available and propose potential solutions. Additional Experiments: - We conduct generalization experiments using the ISPD 2005 benchmark to demonstrate the robustness of our method. - We conduct generalization experiments to adjust various layouts obtained through different algorithms with different qualities. - We implement and evaluate WireMask-BBO as a macro regulator using our proposed RegularMask. - We include training convergence graphs to facilitate a comparative analysis of different RL-based approaches. **We believe these revisions have significantly strengthened our paper and hope that your concerns have been addressed. We remain open to further feedback and are happy to provide additional clarification if needed.** Pdf: /pdf/8d1c444913649a236a772fc8404f34e6592c7213.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: This paper presents an application of the RL algorithm in chip design optimization. They formulate the chip design process as an MDP, making decisions based on the current state of the chip macro arrangement so that they can apply PPO to optimize the policies. Compared with starting from scratch, they focus on the fine-tuning stage of the macros for denser reward signals. Their experiments on a public benchmark, the ICCAD 2015 benchmark, show a great improvement in the chip design metrics. Strengths: According to the related work section, although there have been RL methods in this benchmark, this method improves previous methods by starting from macro placements instead of starting from scratch, which is a heuristic way of optimizing and simplifying the problem. This heuristic should be novel when combined with this class of methods. The paper is technically solid in its problem formulation, the experiments are sufficient, with baselines and reasonable ablation studies, and the results look good. I like the visualization part of the paper. Weaknesses: Compared with MaskPlace [1], the contributions of this paper lie in expertise and experience from the chip design area rather than the machine learning area, as MaskPlace has already used PPO. One thing I am concerned about is that NeurIPS may not be a good venue to discuss this contribution. [1] Lai, Y., Mu, Y., & Luo, P. (2022). Maskplace: Fast chip placement via reinforced visual representation learning. Advances in Neural Information Processing Systems, 35, 24019-24030. Technical Quality: 3 Clarity: 3 Questions for Authors: [1] When you discretize the map, it becomes a combinatorial problem. Did you compare the sample complexity of using RL vs using some combinatorial solver? [2] PPO can handle continuous action space, is there a possibility of improvement using a continuous action space? [3] What will be the future directions for this work, or is this problem solved by RL?
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments. Below please find our response. ### Q1 Compared with MaskPlace, the contributions of this paper lie in the expertise or experiences in the expert chip design area .... One thing I am concerned about is that NeurIPS may not be a good venue to discuss this contribution. Thanks for your comments. However, we respectfully disagree. Compared with MaskPlace, the main contributions of our work are twofold: 1. Problem formulation: MaskRegulate shifts the focus of RL from placing macros from scratch to refining existing macro placements, which allows for more effective learning from structured state and accurate reward information, significantly enhancing the learning efficiency. 2. Integration of regularity: We introduce regularity, a critical yet overlooked metric in chip design, into the RL training framework, which not only aligns with industry practice but also enhances the chip PPA quality. We have conducted comprehensive experiments to show the superior performance of MaskRegulate: - Our main experiments on the popular ICCAD 2015 benchmark have shown the significant improvements of MaskRegulate in PPA metrics, demonstrating the practical applicability and effectiveness of the RL regulator. - For the problem formulation, Table 6 in the paper has shown that Vanilla-MaskRegulate is better than MaskPlace. The only difference between them is the problem formulation, and all the other components (e.g., state and reward) are the same. Thus, the results in Table 6 clearly demonstrate our motivation, highlighting the advantages of our regulator problem formulation. - For the integration of regularity, Table 8 in the paper and Table R3 in the PDF file show that our proposed RegularMask can be integrated into other methods such as MaskPlace and WireMask-EA. Next, we will explain why NeurIPS is a good venue for our work.
Recently, AI for chip design has significantly expanded the application domain of current AI technologies and enhanced their impact [1]. In recent years, researchers have discovered that RL can assist chip design, as demonstrated in Google's Nature paper [2]. Since then, numerous related works have appeared at top-tier venues (e.g., NeurIPS, ICML, and ICLR), aiming to further improve its effectiveness from different perspectives [3-9], just to name a few. Our work not only proposes a new placement paradigm but also introduces practical experience from the chip design field to guide the algorithm, which opens new possibilities for the application of RL in chip design. Therefore, we believe that a top-tier AI conference like NeurIPS is a proper venue for our work. Thank you. [1] Machine learning for electronic design automation: A survey. TODAES, 2021. [2] A graph placement methodology for fast chip design. Nature, 2021. [3] On joint learning for solving placement and routing in chip design. NeurIPS, 2021. [4] MaskPlace: Fast chip placement via reinforced visual representation learning. NeurIPS, 2022. [5] Chipformer: Transferable chip placement via offline decision transformer. ICML, 2023. [6] Macro placement by wire-mask-guided black-box optimization. NeurIPS, 2023. [7] CircuitNet 2.0: An advanced dataset for promoting machine learning innovations in realistic chip design environment. ICLR, 2024. [8] Reinforcement learning within tree search for fast macro placement. ICML, 2024. [9] A hierarchical adaptive multi-task reinforcement learning framework for multiplier circuit design. ICML, 2024. ### Q2 Discussion about combinatorial solver. Good points! Thanks for your insightful comments. Placement is a very large-scale non-linear optimization problem. Using combinatorial solvers for placement has a long history [1-2]; this belongs to the analytical placement approach. 
They were initially used for placement of small-scale chips, but have gradually been replaced by more effective gradient-based approaches [3-4] in recent years. Currently, combinatorial solvers are typically used for small-scale and constrained problems, such as floorplanning [5]. The state-of-the-art analytical placement method is DREAMPlace [4], which is an important baseline in our work. Compared to DREAMPlace, our RL regulator has many advantages, as demonstrated in our paper. Following your comments, we will introduce the history of analytical placers in detail to provide a comprehensive understanding of related techniques for readers without a placement background. Thank you for bringing our attention to this point. [1] Analytical approaches to the combinatorial optimization in linear placement problems. TCAD, 1989. [2] BonnPlace: Placement of leading-edge chips by advanced combinatorial algorithms. TCAD, 2008. [3] Replace: Advancing solution quality and routability validation in global placement. TCAD, 2018. [4] DREAMPlace: Deep learning toolkit-enabled GPU acceleration for modern VLSI placement. TCAD, 2021. [5] Global floorplanning via semidefinite programming. DAC, 2023. ### Q3 PPO can handle continuous action space; is there a possibility of improvement using continuous action space? Thanks for your suggestions. Due to the large action space of placement, previous studies propose to discretize the layout into grids, thus reducing the action space and improving learning efficiency. We agree that learning in the continuous space can enhance decision flexibility, but this also brings challenges such as a low convergence rate and under-exploration. We will consider it in the future. Thank you. ### Q4 Future directions of RL Our main future work includes two aspects: 1. Further improving the generalization of the RL policy by using better architectures and training methods. 2. 
Utilizing some multi-objective (MO) techniques, e.g., multi-objective RL (MORL), to obtain a set of Pareto-optimal placement results with different preferences across multiple objectives. **We hope that our response has addressed your concerns, but if we missed anything please let us know.** --- Rebuttal Comment 1.1: Title: Dear reviewer, please read and respond to authors' rebuttal. Comment: This paper has diverse reviews and it would benefit a lot to start a discussion to clarify any confusing points. Thanks! Your AC. --- Rebuttal Comment 1.2: Comment: Thank the authors for answering my questions; I have adjusted my ratings. Follow-up Q2: I guess your RL method benefits from the reduction in the state/action space, and similarly, combinatorial methods can also benefit from this; was there any baseline run under the same action space or problem setting? --- Reply to Comment 1.2.1: Comment: Thank you for your feedback. We are pleased to hear that your concerns have been addressed and the score has been increased. Regarding the follow-up Q2, we agree that our proposed formulation of RL can also be used for combinatorial methods. To the best of our knowledge, no existing work has used combinatorial optimization under this setting. One may run suitable combinatorial methods under our proposed RL regulator formulation with the reduced search space to investigate whether they can be improved. We will leave this for future work. Thank you for your insightful suggestions.
LT-Defense: Searching-free Backdoor Defense via Exploiting the Long-tailed Effect
Accept (poster)
Summary: The paper proposes a backdoor defense method called LT-Defense for detecting whether a model is a backdoored model. It is observed that the poisoned dataset can create the long-tailed effect, which causes the decision boundary to shift towards the target labels. By using this observation, LT-Defense can detect the backdoored model without searching triggers. Specifically, LT-Defense employs a set of clean samples and two proposed metrics to detect backdoor-related features of a backdoored model. LT-Defense also provides test-time backdoor freezing and attack target prediction. Extensive experiments demonstrate the effectiveness of LT-Defense. Strengths: The proposed LT-Defense utilizes the long-tail effect created by the poisoned dataset. This motivation is clear and reasonable. The experiments are sufficient. The paper conducts defense experiments against various task-related and task-agnostic backdoor attacks across four target models and 6 downstream datasets. The proposed LT-Defense does not need to search triggers, which reduces the time cost of backdoor defense. This is demonstrated in the experimental results. Weaknesses: The problem and method are not presented clearly. For example, in equation (1), T is not defined and the loss function should use argmin instead of argmax. It would be better to proofread the mathematical equations and definitions. Also, it would be better to highlight the results in Tables 1-3 to show the advantages of the proposed LT-Defense. For the head feature selection of the method (in Sec. 4.1), the paper does not introduce the used features. Are the features the output embeddings of different layers? For task-related backdoor detection, there is no compared method to show the advantages of LT-Defense. Technical Quality: 2 Clarity: 2 Questions for Authors: In line 139, the paper claims that 'Moreover, owing to the long-tailed effect, the similarity among benign features will increase'. Is this observation related to equation (7)? 
It is better to provide some explanations and experiments to demonstrate this observation. Does the proposed LT-Defense perform well for clean-label attack methods? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The limitations have been addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions. **We employ the symbol # to denote the labels in the additional pdf file (e.g., #Tab.1, #Fig.1).** **Comment 1:** The problem and method are not presented clearly. For example, in equation (1), T is not defined and the loss function should use argmin instead of argmax. It is better to proofread the mathematical equations and definitions. Also, it is better to highlight the results in Tables 1-3 to show the advantages of the proposed LT-Defense. In line 139, the paper claims that 'Moreover, owing to the long-tailed effect, the similarity among benign features will increase'. Is this observation related to equation (7)? It is better to provide some explanations and experiments to demonstrate this observation. **Response 1:** We apologize for the typos and unclear descriptions. We will carefully check the manuscript and fix them. We agree that line 139 needs a more detailed discussion. As depicted in Fig.1 (c), the output of benign inputs shifts towards attack targets, so the similarity among benign outputs also increases. We will also provide a visualized example to aid understanding (#Fig.2). **Comment 2:** For the head feature selection of the method (in Sec. 4.1), the paper does not introduce the used features. Are the features the output embeddings of different layers? **Response 2:** There may have been some misleading presentation in Sec. 4. Actually, we are not “selecting” but “recognizing” the head features. The “feature” here means the output of the target model. In task-agnostic scenarios, the output is a feature vector (e.g., 768 features for RoBERTa-base). As for text generation tasks, the output is a sequence of logit vectors over the vocabulary space (e.g., 50265 entries). We then use these output variables to perform backdoor detection. **Comment 3:** For task-related backdoor detection, there is no compared method to show the advantages of LT-Defense. 
**Response 3:** We list the applicability of existing backdoor detection methods for NLP models as follows: #Tab.4 T-Miner, Piccolo, and DBS have been proven to be less effective against task-agnostic backdoors by LMSanitator. We also briefly discussed why they are not applicable to text generation tasks in the related work. Here we provide some detailed analysis: Searching-based methods aim to search for a trigger which forces the model to classify all inputs to the target class. However, defenders do not know which class will be the attack target, so defenders have to search for potential triggers for every output class. In other words, the time cost is linearly and positively correlated with the number of categories. For classification tasks such as sentiment classification (2 classes) and news classification (4 classes), the time cost is acceptable. But for text generation tasks where the output space is the vocabulary space (e.g., 50265 classes), the time cost becomes unacceptable. #Tab.2 Therefore, LT-Defense is the first cost-acceptable backdoor detection method for the text generation task, so we did not provide a baseline for comparison. **Comment 4:** Is LT-Defense effective against clean-label attacks? **Response 4:** As discussed above, LT-Defense is designed for detecting task-agnostic backdoors and text generation backdoors; in these two scenarios, there is no available clean-label attack. And due to the characteristics of the two scenarios, it is difficult to construct clean-label backdoor attacks.
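The linear-scaling argument above can be made concrete with a back-of-the-envelope cost model (a sketch of the argument only; the per-class search time is a placeholder assumption, not a measured number):

```python
# Toy cost model for searching-based backdoor detectors: one trigger-inversion
# run must be performed per candidate target class, so the total cost scales
# linearly with the size of the output space.

def total_search_cost_seconds(seconds_per_class: float, num_classes: int) -> float:
    """Linear cost model: one trigger search per candidate attack target."""
    return seconds_per_class * num_classes

# Assume, purely for illustration, 60 seconds per trigger search.
PER_CLASS = 60.0
print(total_search_cost_seconds(PER_CLASS, 2) / 60)        # sentiment classification: 2 minutes
print(total_search_cost_seconds(PER_CLASS, 4) / 60)        # news classification: 4 minutes
print(total_search_cost_seconds(PER_CLASS, 50265) / 3600)  # vocab-sized output space: ~838 hours
```

Under any positive per-class cost, the text-generation case is roughly 25,000 times the binary-classification case (50265 / 2), which is why a searching-free detector matters in that setting.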
Summary: In this paper, the authors explore search-free defense strategies against backdoor attacks in language models. For task-agnostic backdoor attacks, the paper proposes a Head-Feature Rate score to detect backdoored models based on the observation that backdoor mapping (triggers to pre-defined vectors) disrupts the model's output feature distribution. For task-related backdoor attacks, the paper notes that backdoor mapping (triggers to predetermined target tokens) leads to abnormal probability rankings of the predetermined tokens in the predicted distribution. An Abnormal Token Score is proposed to detect whether the model has been implanted with a backdoor and further predict the tokens. Strengths: 1. Unlike previous backdoor detection methods that search for trigger tokens or pre-defined target vectors to detect backdoor models, this paper proposes search-free strategies, thereby achieving efficient detection. This motivation is clear and valuable. 2. The proposed methods have been tested on both masked language models and autoregressive language models, demonstrating superior performance and efficiency compared to baseline methods. 3. Adaptive attack against proposed methods is discussed. Weaknesses: 1. The design choices of the proposed methods are unclear, and the selection of hyperparameters does not provide general guidelines but is set manually. For example, when designing the Head Feature Selection function, the paper does not explain the intuition behind the design, why features are compared to zero, and what is the logic behind setting λ1 and λ2 directly to 20/500 and 480/500, respectively? The same issue exists for the settings of ts1, ts2, and ts3. 2. The setting of the defender's capabilities is unreasonable. When choosing thresholds, the paper states, "Then we finetune each foundation model on different datasets to get 5 reference benign models and determine ts1 and ts2 using these models.". 
This implies that the defender should have access to benign foundation models to use as references for selecting thresholds. Why doesn't the defender just use the benign foundation models? 3. The proposed defense strategy against task-related backdoor attacks has limitations. The paper assumes that the attacker's target is fixed tokens, but in reality, more advanced attack targets have been proposed, such as manipulating the sentiment polarity of the output text [1, 2]. Is the proposed method still effective in such cases? References 1. Spinning language models: Risks of propaganda-as-a-service and countermeasures. S&P 2022. 2. Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection. NAACL 2024. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the weakness part. I am willing to increase my score if the authors could address (parts of) my concerns. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors provide a brief limitation analysis, although it is not sufficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your efforts and valuable suggestions. **We employ the symbol # to denote the labels in the additional pdf file (e.g., #Tab.1, #Fig.1).** **Comment 1:** The design choices of the proposed methods are unclear, and the selection of hyperparameters does not provide general guidelines but is set manually. **Response 1:** #Fig.1 We use a figure to show step-by-step how we determine the hyperparameters of LT-Defense: (1) We observe that backdoor training introduces a long-tailed effect in the activation results of the output features, which means that across different input examples, a specific feature tends to be activated similarly. Since most models are trained with output features normalized to be symmetric about 0, we use 500 examples to test the model. (2) In the figure, the horizontal axis denotes “For a specific output feature, how many examples are positively activated”, and the vertical axis represents “How many output features of the model satisfy the value on the horizontal axis”. For example, we can observe that for the backdoor model, more than 250 features are positively activated in all 500 examples, and more than 250 features are negatively activated in all 500 examples, too. (3) The differences between benign models and backdoor models reveal a long-tailed effect, so we further design metrics to describe this difference. We use $\lambda_1$ and $\lambda_2$ to define head features, which means that for most test examples, these features have similar activation values. Then we calculate the ratio of head features among all features as the Head Feature Rate (HFR). (4) We can observe that the values of $\lambda_1$ and $\lambda_2$ influence the values of HFR, but they do not change the differences between benign and backdoor models. As a result, $\lambda_1$ and $\lambda_2$ have a large effective range. 
In fact, LT-Defense maintains a high detection accuracy when $\lambda_1$ and $\lambda_2$ are set as 100/500 and 400/500, respectively. (5) For HFR, we use benign models as references to determine the thresholds $({th}_1, {th}_2)$ of different models and do not require defenders to have prior knowledge about attacks, because the HFRs of benign models have a small variance but a large shift from the HFRs of backdoor models (as depicted in Fig.4). As for the Abnormal Token Score for text generation backdoor detection, ${th}_3$ is also determined using benign reference models and has a large effective range, so we simply took an acceptable value. **Comment 2:** The setting of the defender's capabilities is unreasonable. When choosing thresholds, the paper states, "Then we finetune each foundation model on different datasets to get 5 reference benign models and determine ts1 and ts2 using these models.". Why doesn't the defender just use the benign foundation models? **Response 2:** Firstly, we apologize for the inappropriate description. Although we follow the experimental setup of previous methods to train reference benign models on different datasets, LT-Defense is dataset-agnostic, i.e., for the same model trained on different datasets, the thresholds remain the same. Secondly, we agree that a major limitation of existing backdoor detection methods is their strong requirements on defenders’ capabilities, and LT-Defense outperforms them for the following reasons: (1) LT-Defense does not require prior knowledge about potential attacks, which is an important assumption of searching-based methods. For example, LMSanitator requires training 5 benign and 5 malicious models on the target model to determine the thresholds; Piccolo and DBS rely on assumptions about the format of potential triggers. (2) LT-Defense can be deployed under black-box constraints because it only requires query access. Searching-based methods all require white-box privilege due to the forward-backward process. 
#Tab.3 Therefore, LT-Defense is useful in real-world scenarios. For example, open-source platforms such as Huggingface and Model Zoo release thousands of new models every day. These models have similar architectures and are trained on the same or different datasets. LT-Defense enables platforms to detect these models with an acceptable overhead. **Comment 3:** The proposed defense strategy against task-related backdoor attacks has limitations. The paper assumes that the attacker's target is fixed tokens, but in reality, more advanced attack targets have been proposed, such as manipulating the sentiment polarity of the output text [1, 2]. Is the proposed method still effective in such cases? **Response 3:** (1) LT-Defense does not assume that the attacker's target is fixed tokens. For example, in Tab.2, AutoPoison-refusal does not specify fixed tokens but uses a prompt to force the model to refuse to answer. And LT-Defense can effectively detect AutoPoison-refusal. (2) For the two new attacks mentioned, the first one (S&P ’22) focuses on classification tasks (such as sentiment classification), which is not the target task of LT-Defense. The second one (NAACL ’24) can be reproduced using AutoPoison since both of them are based on prompt tuning. So, we ran some additional experiments and list them as follows: #Tab.1 Looking into the experiments, after using “Describe computers negatively.” to tune the target model, the ATS exceeds the threshold since tokens related to “harmful” and “addiction” begin to have higher probabilities of appearance. **Comment 4:** The authors provide a brief limitation analysis, although it is not sufficient. **Response 4:** As discussed in the paper, LT-Defense is less effective for classification tasks with fewer categories (e.g., 2 classes), because these classes may be highly imbalanced naturally. #Tab.2 and #Tab.4 list the applicable scenarios for LT-Defense. 
Therefore, classification tasks with fewer categories are a limitation of LT-Defense. We will add a separate section on limitations for clarification. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed rebuttal. It addressed some of my concerns. However, two critical ones still remain. Accordingly, I keep my score unchanged. More details are as follows: 1. Like other reviewers, I am confused about the design choices of the proposed methods and the selection of hyperparameters in the paper. The authors' response still does not provide a clear explanation. 2. AutoPoison-refusal will cause the model to have a fixed template starting with "As an AI language model," which essentially targets fixed tokens as well. Therefore, I suggest the authors test defenses against implicit attacks that do not target fixed tokens, such as sentiment manipulation. The authors' implementation of [2] is inappropriate; it should be tested according to the same settings as in the paper [2]. --- Rebuttal 2: Comment: Thank you for your feedback; we would like to provide further clarification on both concerns. As shown in #Fig.1, we observed a significant difference when looking at the long-tail effect of features in the clean and backdoor models. This difference does not depend on the hyperparameters we set. To quantify this difference, we define the head feature and the head feature rate, a design that is intuitive according to #Fig.1. Specifically, for a given model structure, we obtained bounds on the HFR by rounding up/down the maximum/minimum values of the HFR for the 5 clean models. For the ATS, we use the same approach. Compared to previous methods, our approach reduces the need for prior knowledge about backdoor models. The implementation of [2] has two prerequisites: (1) the LLM automatically selects virtual prompts during content generation, and (2) the attacker hijacks specific virtual prompts. 
In our implementation, we unify these two conditions into the fact that the attacker has embedded the virtual prompt in the specific Q&A process. Specifically, in the instruction tuning process, we added the instruction "describe the computer negatively" to the computer-related Q&A following [2] and obtained #Tab.1. It is worth mentioning that we did not specify the output content of the model in this process.
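As a concrete illustration of the head-feature computation discussed in this thread, here is a minimal, self-contained sketch (our own paraphrase of the described procedure; the synthetic data, function names, and sign-based activation test are illustrative assumptions, not the authors' code):

```python
import random

def head_feature_rate(features, lam1=20 / 500, lam2=480 / 500):
    """Fraction of 'head' features: a feature counts as a head feature if its
    sign is nearly constant across the clean probe set, i.e. it is positively
    activated on at most lam1 or at least lam2 of the examples."""
    n, d = len(features), len(features[0])
    head = 0
    for j in range(d):
        pos = sum(1 for row in features if row[j] > 0)  # positively activated count
        if pos <= lam1 * n or pos >= lam2 * n:
            head += 1
    return head / d

# Synthetic demo: a 'benign' model emits roughly zero-centered features,
# while a 'backdoored' one has many features stuck at a constant sign.
rng = random.Random(0)
benign = [[rng.gauss(0, 1) for _ in range(768)] for _ in range(500)]
backdoored = [[abs(v) + 0.1 if j < 300 else v for j, v in enumerate(row)]
              for row in benign]

print(head_feature_rate(benign))      # near 0: no long-tailed effect
print(head_feature_rate(backdoored))  # about 300/768: long-tailed effect visible
```

A detector would then flag a model whose HFR exceeds a bound derived from the HFRs of a few benign reference models, mirroring how the thresholds are described in this thread.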
Summary: This paper proposes a searching-free backdoor defense method for language models, named LT-Defense. LT-Defense is inspired by the long-tailed property of target classes, where a backdoored model tends to have an increased prediction rate for target classes compared with a benign model. Specifically, LT-Defense uses a small clean set and two metrics to distinguish backdoor-related features in the target model, and provides test-time backdoor freezing and attack target prediction. Strengths: This paper is generally well-written and provides strengths in the following dimensions: 1. **Originality**: The observation of the long-tailed property of backdoored models is original; it is simple but effective as shown in experiments. 2. **Quality**: This paper is generally of good quality. The experiments are well-designed, demonstrating the effectiveness of the proposed method. 3. **Clarity**: The paper is well-organized, with several illustration figures clearly showing the design of the proposed LT-Defense, which greatly enhances the understanding of the content. 4. **Significance**: The proposed LT-Defense is searching-free with comparable or increased ACC, which is a step forward in this area. Weaknesses: 1. **Threshold sensitivity**: The effectiveness of LT-Defense depends on carefully chosen thresholds for the Head-Feature Rate (HFR) and Abnormal Token Score (ATS). This may result in less robustness across different datasets and models. 2. **Citation recommendations**: I believe your work would benefit from referencing some additional literature to provide a more comprehensive context for your study. 
Specifically, I recommend citing the following articles: - Attack of the tails: Yes, you really can backdoor federated learning (NeurIPS 20) - Moderate-fitting as a natural backdoor defender for pre-trained language models (NeurIPS 22) - Harnessing Hierarchical Label Distribution Variations in Test Agnostic Long-tail Recognition (ICML 24) - A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning (NeurIPS 23) 3. **Minor subscript issue**: It should be $\bf{X}_N$ instead of $\bf{X}_n$ in lines 104 and 122. Technical Quality: 3 Clarity: 3 Questions for Authors: How robust are these thresholds across different datasets and models? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: See *Weaknesses*. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your efforts and valuable suggestions. **We employ the symbol # to denote the labels in the additional pdf file (e.g., #Tab.1, #Fig.1).** **Comment 1:** - The effectiveness of LT-Defense depends on carefully chosen thresholds for the Head-Feature Rate (HFR) and Abnormal Token Score (ATS). This may result in less robustness across different datasets and models. - How robust are these thresholds across different datasets and models? **Response 1:** Thanks for the suggestions. Compared to previous methods, LT-Defense is less sensitive to hyperparameters and has better transferability for the following reasons: Firstly, the thresholds of LT-Defense only rely on benign reference models, while previous methods (LMSanitator, DBS, Piccolo, etc.) all require both benign and backdoor models as references to determine the detection thresholds. Secondly, LT-Defense is dataset-agnostic, i.e., for the same model trained on different datasets, the thresholds remain the same, which makes LT-Defense useful in real-world scenarios. Thirdly, we did not perform grid searches to find optimal hyperparameters, since all parameters have a large effective range. We provide a figure to make the design and selection of hyperparameters clearer and more intuitive: #Fig.1 The key insight behind LT-Defense is to detect the long-tailed effect. For an output feature, if it is similarly activated by most test examples, it reveals a potential long-tailed effect. Since most models are trained with output features normalized to be symmetric about 0, we use 500 examples to test the model and record the result. As illustrated in #Fig.1, the horizontal axis denotes “For a specific output feature, how many examples are positively activated”, and the vertical axis represents “How many output features of the model satisfy the value on the horizontal axis”. 
The construction of #Fig.1 does not rely on any hyperparameters but already shows a significant difference between benign and backdoor models. What λ1, λ2, ts1, ts2 and ts3 do is describe the difference. As a result, the effective ranges of these hyperparameters are large. For example, LT-Defense maintains a high detection accuracy when changing λ1, λ2 from (20/500, 480/500) to (100/500, 400/500). **Comment 2:** I believe your work would benefit from referencing some additional literature to provide a more comprehensive context for your study. **Response 2:** Thanks for the recommendation. Introducing the context of long-tailed learning will help readers with a background in AI security. We will put this in the related work section. **Comment 3:** Minor subscript issue. **Response 3:** We apologize for the confusion caused by typos; we will carefully check the manuscript and fix them in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I would keep my score.
Rebuttal 1: Rebuttal: # Global Rebuttal We sincerely thank all the reviewers for your valuable feedback and insightful comments. In the following, we first provide a global response to some shared concerns of multiple reviewers. Subsequently, we reply to the reviewers one by one for the convenience of checking. **We employ the symbol # to denote the labels in the additional pdf file (e.g., #Tab.1, #Fig.1).** **1. Hyperparameter, transferability, and defenders’ capability.** **Comment:** - Reviewer 1: The effectiveness of LT-Defense depends on carefully chosen thresholds. This may result in less robustness across different datasets and models. - Reviewer 2: The design choices of the proposed methods are unclear, and the selection of hyperparameters does not provide general guidelines but is set manually. Why are features compared to zero, and what is the logic behind setting λ1 and λ2 directly to 20/500 and 480/500, respectively? The same issue exists for the settings of ts1, ts2, and ts3. - Reviewer 2: The setting of the defender's capabilities is unreasonable. Why doesn't the defender just use the benign foundation models? - Reviewer 3: For the head feature selection of the method (in Sec. 4.1), the paper does not introduce the used features. Are the features the output embeddings of different layers? **Response:** Thanks for the suggestions. Compared to previous methods, LT-Defense is less sensitive to hyperparameters and has better transferability for the following reasons: Firstly, the thresholds of LT-Defense only rely on benign reference models, while previous methods (LMSanitator, DBS, Piccolo, etc.) all require both benign and backdoor models as references to determine the detection thresholds. Secondly, LT-Defense is dataset-agnostic, i.e., for the same model trained on different datasets, the thresholds remain the same, which makes LT-Defense useful in real-world scenarios. 
Thirdly, we did not perform grid searches to find optimal hyperparameters, since all parameters have a large effective range. We provide a figure to make the design and selection of hyperparameters clearer and more intuitive: #Fig.1 The key insight behind LT-Defense is to detect the long-tailed effect. For an output feature, if it is similarly activated by most test examples, it reveals a potential long-tailed effect. Since most models are trained with output features normalized to be symmetric about 0, we use 500 examples to test the model and record the result. As illustrated in #Fig.1, the horizontal axis denotes “For a specific output feature, how many examples are positively activated”, and the vertical axis represents “How many output features of the model satisfy the value on the horizontal axis”. The construction of #Fig.1 does not rely on any hyperparameters but already shows a significant difference between benign and backdoor models. What λ1, λ2, ts1, ts2 and ts3 do is describe the difference. As a result, the effective ranges of these hyperparameters are large. For example, LT-Defense maintains a high detection accuracy when changing λ1, λ2 from (20/500, 480/500) to (100/500, 400/500). **2. Advanced attacks, baselines.** **Comment:** Reviewer 2: The paper assumes that the attacker's target is fixed tokens, but more advanced attack targets have been proposed. Is the proposed method still effective in such cases? Reviewer 3: For task-related backdoor detection, there is no compared method to show the advantages of LT-Defense. Reviewer 3: Does the proposed LT-Defense perform well for clean-label attack methods? **Response:** LT-Defense does not assume that the attacker's target is fixed tokens. For example, in Tab.2, AutoPoison-refusal does not specify fixed tokens but uses a prompt to force the model to refuse to answer. And LT-Defense can effectively detect AutoPoison-refusal. 
For the two attacks mentioned by Reviewer 2, the first one (S&P ’22) focuses on classification tasks (such as sentiment classification), which is not the target task of LT-Defense. The second one (NAACL ’24) can be reproduced using AutoPoison since both of them are based on prompt tuning. So, we ran some additional experiments and list them in #Tab.1: #Tab.1 As shown in #Tab.1, LT-Defense can effectively detect the proposed backdoor attack (NAACL ’24). #Tab.2 lists the time overhead of different backdoor detection methods under different scenarios, where ‘-’ means not applicable. #Tab.2 As shown in the table, for text generation tasks, all existing methods are either cost-unacceptable (T-Miner, Piccolo, DBS) or not applicable (LMSanitator). So, there are no baseline methods available in this setting. Existing clean-label backdoor attacks focus on classification tasks such as sentiment analysis. LT-Defense is designed for detecting task-agnostic backdoors and text generation backdoors; in these two scenarios, there is no available clean-label attack. And due to the characteristics of the two scenarios, it is difficult to construct clean-label backdoor attacks. Pdf: /pdf/097b43d26ca4b4f6dc5e2eafbc13645a4d85a55e.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Small coresets via negative dependence: DPPs, linear statistics, and concentration
Accept (spotlight)
Summary: This paper studies coresets and aims to construct them using determinantal point processes (DPPs). The authors show that DPPs can provably outperform independently drawn coresets due to linear statistics and better concentration of DPPs. Strengths: - The topic of coreset problems is essential in machine learning. - The paper is technically solid, providing interesting theoretical results. - Empirical results confirm the power of DPPs over independent data. Weaknesses: - The writing can be improved. There are many theorems and remarks in the paper, which makes the main contribution hard to identify. The theorems are technical but without explanations. What are the conclusions and applications of these theorems? - Lack of coreset literature. For instance: 1) Cohen-Addad, V., Larsen, K.G., Saulpic, D., Schwiegelshohn, C., & Sheikh-Omar, O.A. (2022). Improved Coresets for Euclidean $k$-Means; 2) Huang, Lingxiao, Jian Li and Xuan Wu. “On Optimal Coreset Construction for Euclidean $(k,z)$-Clustering.” Technical Quality: 3 Clarity: 2 Questions for Authors: By reading the paper, I am still confused about when DPPs outperform importance sampling. Could you give some concrete examples, e.g., under what optimization problem and what dataset, what is the coreset size by DPP v.s. by importance sampling? -- Thanks for the response. I increase my score to 5. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the review. Here are answers to your comments/questions that we hope will clarify things. > The writing can be improved. There are many theorems and remarks in the paper, which makes it confused about the main contribution. The theorems are technical but without explanations. What are the conclusions and applications of these theorems? We are sorry about the confusion. Since we otherwise got good marks for presentation, we are confident that we can make some limited adjustments to further clarify the role of each theorem and remark. In particular, following a suggestion of Reviewer 329X, we will explicitly include in the introduction our claim on the sample complexity of coresets with DPPs. In a nutshell, coresets built using independent samples require a cardinality in $\mathcal{O}(\epsilon^{-2})$ for a uniform multiplicative error of $\epsilon$. DPPs always come with at worst the same dependence, while *specific* DPPs come with a strictly better dependence of the cardinality on $\epsilon$. The meaning of *specific* is that the chosen DPP should come with a fast-decaying variance of linear statistics, as is the case for the DPP we introduce in Example 5. See our Remarks 4.1 and 6.1, and our answer to Reviewer 329X for more details on the impact of variance reduction on the size of the coreset. > Lack of coreset literature. For instance, Cohen-Addad, V., Larsen, K.G., Saulpic, D., Schwiegelshohn, C., & Sheikh-Omar, O.A. (2022). Improved Coresets for Euclidean $k$-Means; 2) Huang, Lingxiao, Jian Li and Xuan Wu. “On Optimal Coreset Construction for Euclidean $(k,z)$-Clustering. We thank the reviewer for drawing our attention to these interesting references; we will add them to our paper. We note that we see the paper as a DPP paper with a view to the coresets problem, so this submission is a way for us to connect with the coresets-for-ML community and raise awareness on the potential of DPPs and negative dependence in that context. 
This explains why we limited ourselves to general references on coresets, and largely refrained from delving into the rich literature on coresets that address specific problems, such as Euclidean clustering. Our point is that a very generic DPP-based coreset construction can improve on the dependence of $m$ in $\epsilon^{-2}$ in a precise sense. We would be happy if the paper generated discussions on particular coreset settings like $k$-means clustering, for which much theoretical work is available. Comparison to dedicated literature for such applications are a natural direction for follow-up research. > By reading the paper, I am still confused about when DPPs outperform importance sampling. Could you give some concrete examples, e.g., under what optimization problem and what dataset, what is the coreset size by DPP v.s. by importance sampling? DPPs outperform importance sampling in the sense that some DPPs, like the discretized multivariate OPE of Example 5, provide randomized $\epsilon$-coresets with a cardinality $m$ that grows strictly slower as $\epsilon$ decreases; see Theorems 4 and 6, and in particular Remarks 4.1 and 6.1. Another way to say the same thing is that, under some DPPs, *the approximation error* $\epsilon$ decreases faster with $m$ than the importance sampling rate $m^{-1/2}$. This is confirmed by our experiments. The meaning of *some DPPs* above is that we need DPPs that come with a fast-decaying variance of linear statistics. This has been proved for certain DPPs, such as the discretized multivariate OPE introduced in Example 5. Note that the latter DPP does not use any information on the loss function nor the optimization algorithm used, so that our guarantee is fairly general, at the risk of the constants not being tight. The loss function only enters our guarantees through assumptions (A.1) to (A.4). 
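As a point of comparison for the rates above, the $m^{-1/2}$ behaviour of independent sampling is easy to reproduce numerically (a generic Monte Carlo sketch of ours, not code or data from the paper): quadrupling the sample size roughly halves the error of an independent-sampling estimate of a mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
f = rng.standard_normal(n)          # stand-in "loss values" over the ground set
target = f.mean()

def iid_rmse(m, reps=4000):
    # RMSE of the plain independent-sampling estimate of the mean from m draws.
    idx = rng.integers(0, n, size=(reps, m))
    est = f[idx].mean(axis=1)
    return np.sqrt(np.mean((est - target) ** 2))

r100, r400 = iid_rmse(100), iid_rmse(400)
# Quadrupling m roughly halves the error: the m**(-1/2) baseline rate that
# suitably chosen DPPs are claimed to beat.
```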
Conceptually, the reason for better performance of DPPs is that the repulsion between sample points baked into the framework discourages the selection of multiple points with similar properties in the sample. DPP-based diverse samples thus cover more ground in a single sample, and can be shown to be more stable statistically. On the other hand, in importance sampling, two sample points do not “see” each other, and this can lead to oversampling in regions with higher importance scores. In the setting of Example 5, it can be shown that the variance reduction accorded by moving from uniform random sampling to importance sampling can lead to an improvement in the leading constant, whereas using an appropriate DPP can lead to an improvement in the exponent. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. The authors have addressed my questions. I would like to raise my score.
Summary: The paper proves concentration inequalities for linear statistics of samples from a DPP. In particular, a guarantee for the coreset sampling problem is shown: if the coreset is sampled from a DPP, then the loss over the coreset approaches the loss over the full dataset faster than when the coreset is sampled uniformly. The superiority of DPP samples over uniform samples is demonstrated in a k-means application on toy data and MNIST data. Strengths: * proper theoretical justification of results that have been empirically observed in previous work * The paper is well written, e.g., it contains a compact introduction to coresets that one can easily follow even if unfamiliar with coresets. * The assumptions of the theoretical results are stated clearly and are discussed. * informative and honest discussion section about limitations and interesting future work directions * The results generalize beyond the coreset problem and could potentially be interesting for other DPP application areas, too. * The results also apply to non-symmetric kernels. Weaknesses: * The markers in Figure 1 (a) and 2 (a) are different sizes, but I can't find information on what a marker's size encodes. I would appreciate a clarification. * The comparison to the related work "Tremblay et al 2019" (see line 52-55) could be a little bit more detailed. It's just mentioned that this other paper also contains theoretical results for the same/ a similar setting, but not what their nature is and how they differ from the results in the present paper (I don't doubt they do, but I think 1-2 sentences more on this would be useful). * The "stratified" baseline seems much stronger than the uniform baseline and the DPP sampler—at least, where it is straightforward to apply. Other heuristic sampling methods that encode repulsiveness between data items without being DPPs might also be preferable in large-scale machine learning applications, including coreset problems. 
The practical relevance might be limited, but I don't consider this a major issue as the paper is of a theoretical nature. Technical Quality: 4 Clarity: 3 Questions for Authors: Remark 3.3 mentioned that the assumption on a set's cardinality holds for most kernels of interest. Are there more examples other than the projection kernels? I assume it holds for the Gaussian kernel as it is used in the experiments? Are there examples of common kernels where the assumption does not hold? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: yes, there is a dedicated limitations sections that I perceive as accurate Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the positive review. Here are answers to some of your comments/questions. > The markers in Figure 1 (a) and 2 (a) are different sizes, but I can't find information on what a marker's size encodes. I would appreciate a clarification. The size of a marker placed at $x$ is proportional to the corresponding weight $1/K(x,x)$ in the estimator of the average loss. Equivalently, the marker size is inversely proportional to the marginal probability of $x$ being included in the DPP sample. We will clarify this in the final version of the paper. > The comparison to the related work "Tremblay et al 2019" (see line 52-55) could be a little bit more detailed. [...] I think 1-2 sentences more on this would be useful). We will add a brief discussion to clarify the differences. The most important thing for our paper is that Tremblay et al 2019 only prove that DPP-based coresets of accuracy $\epsilon$ should have cardinality $\mathcal{O}(\epsilon^{-2})$, which is the same rate as independent samples. This falls short of demonstrating that repulsiveness of DPPs leads to an improvement in the rate of convergence in sampling tasks, as believed from physical heuristics. This provides a natural motivation for our work. Besides our generic concentration inequalities, our key contribution is that we show that appropriately chosen DPPs can lead to significantly smaller coresets, i.e. of cardinality $\mathcal{O}(\epsilon^{-\gamma})$ with $\gamma < 2$ as $\epsilon$ goes to zero. Tremblay et al. (2019) have a number of other interesting results in their paper, for instance that for any DPP with a Hermitian kernel whose diagonal is the sensitivity, the cardinality of the DPP coreset is smaller than that of i.i.d. sampling proportional to the sensitivity. This is another hint that repulsiveness helps, but it is not a direct proof of the heuristic that smaller DPP coresets work. 
A key contribution of ours is that we quantify how small DPP coresets can be, and we show that they can break the $\mathcal{O}(\epsilon^{-2})$ barrier of independent samples. > The "stratified" baseline seems much stronger than the uniform baseline and the DPP sampler—at least, where it is straightforward to apply. Other heuristical sampling methods that encode repulsiveness between data items without being DPPs might also be preferable in large-scale machine learning applications, including coreset problems. The practical relevance might be limited, but I don't consider this a major issue as the paper is of a theoretical nature. Actually, it is not hard to show that stratified sampling as we implement it is a DPP, although it is rarely introduced that way. But we agree that there are plenty of useful repulsive methods and heuristics that are not DPPs. However, the stochastic dependencies in other repulsive or negatively dependent heuristics are often too complicated to understand from a theoretical point of view. As such, they come with very few mathematical guarantees. We see DPPs as a tractable tool to mathematically quantify the power of negative dependence in sampling applications. > Remark 3.3 mentioned that the assumption on a set's cardinality holds for most kernels of interest. Are there more examples other than the projection kernels? I assume it holds for the Gaussian kernel as it is used in the experiments? Are there examples of common kernels where the assumption does not hold? There is no Remark 3.3, but you likely mean Remark 4.3 or 6.3, which both discuss cardinality. The assumption $\vert\mathcal{S}\vert \leq Bm$ a.s. is equivalent to the kernel having at most $Bm$ nonzero eigenvalues; this includes DPPs with projection as well as many non-projection kernels. 
Note that the assumption is here to simplify the proof for the reader who focuses on projection kernels, but the assumption is easy to avoid for a generic Hermitian kernel, thanks to a control on the tail $\mathbb{P}(\vert \mathcal{S}\vert\geq (B+1)m)$ and our assumption that the marginal probability of seeing any item is lower bounded, as mentioned in Remarks 4.3 and 6.3. The case of a DPP with Gaussian kernel and Gaussian reference measure, where the number of nonzero eigenvalues is infinite, is covered by that extension. For the Gaussian kernel in the experiments, the situation is slightly different. We indeed include $m$-DPPs with Gaussian kernels in our experiments, but an $m$-DPP is not a DPP. Strictly speaking, our proof thus does not cover $m$-DPPs, and we rather include $m$-DPPs in our experiments for comparison to previous work. Note however that $m$-DPPs are conditional DPPs (conditioned on the number of points), and we could easily extend the proof of our concentration inequalities by a simple conditioning argument, if needed for the sake of exhaustiveness. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. They make sense to me. I keep my positive score.
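As a complement to the $1/K(x,x)$ weighting discussed earlier in this thread, the unbiasedness of the weighted linear statistic can be checked exactly for a small projection kernel by enumerating all size-$m$ samples (our own toy illustration, not the paper's code):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3
# Rank-m projection kernel K = V V^T, with V having orthonormal columns.
V, _ = np.linalg.qr(rng.standard_normal((n, m)))
K = V @ V.T

f = rng.standard_normal(n)     # arbitrary test function on the ground set
w = 1.0 / np.diag(K)           # weights 1/K(x,x), as in the reply above

# A projection DPP has exactly m points, with P(S) = det(K_S) for |S| = m.
total_prob = 0.0
expected_stat = 0.0
for S in itertools.combinations(range(n), m):
    p = np.linalg.det(K[np.ix_(S, S)])
    total_prob += p
    expected_stat += p * sum(f[i] * w[i] for i in S)

# total_prob is 1, and the weighted statistic is unbiased for sum(f),
# since the marginal probability of including x is exactly K(x,x).
```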
Summary: This paper presents a study on the use of Determinantal Point Processes (DPPs) for constructing coresets in machine learning tasks. DPPs are random configurations of points with negative dependence, making them suitable for subsampling tasks like minibatch selection or coreset construction. Therefore, it is natural to ask whether DPP-based coresets can be smaller than independent-sampling-based coresets. The paper answers this question in the affirmative, demonstrating that the DPP-based method outperforms. To achieve this, the authors provide a new understanding of coreset loss as a linear statistic of the random point set. Then they connect the coreset construction to the concentration inequalities of the linear statistic and show that a suitable DPP yields a coreset of size $o(\varepsilon^{-2})$. Strengths: - New concentration inequalities for linear statistics of DPPs are presented, which are applicable to general non-projection and non-symmetric kernels. - The DPPs-based coresets have a smaller theoretical size than the independent-sampling-based coresets. - The paper introduces the study of coresets for vector-valued objective functions, a topic that holds independent interest. Weaknesses: see the questions Technical Quality: 3 Clarity: 3 Questions for Authors: - What is the time complexity of DPP-based coreset construction? - The paper proposes some assumptions on the DPPs. Are these assumptions fair compared to previous methods? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the positive review. Here are answers to your questions. > What is the time complexity of DPP-based coreset construction? In general, sampling a DPP of cardinality $m$ among a ground set of size $n$ is $O(nm^2)$ provided the kernel matrix has been diagonalized beforehand. Depending on how the kernel is specified, the diagonalization preprocessing can add up to $O(n^3)$ operations. In practice, randomization and low-rank approximation techniques can be used to take down these costs for large-scale applications; see e.g. [Kulesza and Taskar, FTML 2012](https://www.nowpublishers.com/article/Details/MAL-044). Given that coreset guarantees are uniform bounds for entire function classes, the same coreset once sampled can give accurate approximation for a range of functions of interest for a particular application, and thus DPP sampling is essentially a one-time preprocessing overhead. On a related note, leveraging the fact that the coreset-based approximant is a linear statistic, the sampling problem may be reduced to that of sampling a real-valued random variable via its Laplace transform, cf. [Bardenet et al., Arxiv](https://arxiv.org/abs/2007.04287). It may be noted that in many modern coreset applications, the functional evaluation is often computationally very expensive, and the cost of coreset sampling can be secondary to that. A typical example would be the computation of the output of a large-scale neural network, or its training dynamics where the gradient is computed via a backpropagation through the network. Another instance is that of large-scale conditional random field models. In such situations, it is imperative to reduce the number of function evaluations, and smaller coresets are very helpful in achieving that objective. > The paper proposes some assumptions on the DPPs. Are these assumptions fair compared to previous methods? 
Our concentration results are very generic, and we even consider DPPs with non-symmetric kernels. In the application to coreset construction, we make very minimal assumptions on the DPP that is used. In fact, we only bound from below the marginal probability of seeing any item of the ground set in the coreset; see Remark 4.2. This seems reasonable to us: since our goal is to estimate a mean over the $n$ data points, there should be a chance, even minute, of sampling each data point. Most of our assumptions are on the complexity of the space of queries $\mathcal F$. However, as demonstrated in our paper, these assumptions cover most of the learning tasks for which coresets have been studied: $k$-means, linear regression, etc., and also some important spaces of test functions of other fields (e.g. band-limited functions in signal processing). --- Rebuttal Comment 1.1: Comment: Thank you for your responses. The authors have addressed some of my questions. I would like to keep my positive score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive response. Please let us know if you have further questions, or would like to have additional clarifications; we would be happy to furnish further details.
Summary: This work improves on the concentration bounds on DPPs for coresets by explicitly relying on the fact that loss-based coresets are linear functionals. To this end, this paper also adds coreset results for non-symmetric kernels (including additive coresets), expanding on the previous results on DPP coresets (which were solely in the Hermitian regime). The paper compares against other sampling-based theoretical coresets (including DPPs) for approximation errors. Strengths: I like the fact that this paper extended the DPP bounds to non-symmetric kernels. Overall the paper is not difficult to parse through, despite many theoretical statements. Weaknesses: I cannot exactly pin down the weakness in this work since I'm not an expert in this area; however, I'd like to point out a few things that can help to understand this work better. - When going from Lipschitz (Peres and Pemantle) to linear functional concentration, what exactly changes which leads to the better bounds? - Can the authors have a side-by-side comparison of the sample complexity for the previous existing works (DPP for coresets paper), in a tabular form, to understand the contributions of this work better? - What is an intuitive explanation of the fact that the range of $\epsilon$ got tighter in the non-symmetric case? - Can there be chaining-related improvements to further improve the provided bounds, as done in Bhatt and Bilmes 2021? - Is it possible to provide some experimental results with different choices of kernels, including non-symmetric? - Writing Suggestion: It might improve readability if $\|\varphi\|_{\infty}$ is replaced by some constant bounding the loss function. References: - Tighter m-DPP Coreset Sample Complexity Bounds, Bhatt and Bilmes 2021. Subset ML at ICML'21 Technical Quality: 3 Clarity: 3 Questions for Authors: Refer to the weakness. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Refer to the weakness. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the positive review! Here are answers to your comments. > When going from Lipschitz (Peres and Pemantle) to linear functional concentration, what exactly changes which leads to the better bounds? The Pemantle-Peres results aim to address general Lipschitz functions in several variables. Roughly speaking, one has to pay for the generality by having a much cruder result. A key point we make is that, in many practical situations, functionals of DPPs that are of interest have a linear structure. Therefore it is natural to develop a theory of concentration of linear functionals of a DPP, as we do in this paper. Unlike the general Lipschitz case, the Laplace transform of linear statistics of DPPs is explicitly given by a Fredholm determinant. This is the starting point of the derivation of a tighter concentration result. Another advantage is that careful work around this Fredholm determinant allows keeping in the bound the variance of the linear statistic under scrutiny, like in the classical Bernstein inequality. In that view, the result of Pemantle and Peres is more similar to the Hoeffding inequality. > Can the authors have a side by side comparison of the sample complexity for the previous existing works (DPP for coresets paper), in a tabular form to understand the contributions of this work better? We will mention this early on in the introduction when we list our contributions. Right now, the sample complexity is indeed a bit hidden in our Remark 4.1. Essentially, the best (multiplicative) $\epsilon$-coresets built with independent samples are of size $m=\mathcal{O}(\epsilon^{-2})$. Using the general result of Pemantle and Peres, Tremblay et al. (the “DPPs for coresets” paper) provided a bound with the same dependence in $\epsilon$ for DPPs. On our side, we show that we can actually take $m=\mathcal{O}(\epsilon^{-2/(1+\delta)})$, where $\delta$ depends on the variance of the subsampled loss under the considered DPP. 
In particular, for the discretized multivariate OPE introduced in Example 5, any $\delta \in(0,1/d)$ works, where data are assumed to live in $\mathbb{R}^d$ (in fact, in practical terms we recommend applying the algorithm there on dimensionally reduced data, and the $d$ here can be taken to be this reduced dimension). This shows that if the DPP is suitably chosen, the dependence of the coreset size in $\epsilon$ is provably better than for independent coresets. A table summarizing our sample complexity could look like this. | Independent sampling | Generic DPPs | Discretized multivariate OPE | |---|---|---| | $\mathcal{O}(\epsilon^{-2})$ | $\mathcal{O}(\epsilon^{-2/(1+\delta)})$ for $\delta \geq 0$ based on the DPP | $\mathcal{O}(\epsilon^{-2/(1+1/d-\eta)})$ for any $\eta>0$. | > What is an intuitive explanation of the fact that the range of $\epsilon$ got tighter in the non-symmetric case? It is not so clear to us that the range gets tighter. The nuclear norm of the kernel is roughly $m$, so that everything depends again on the variance of the linear statistic. Much less is known on the latter for non-symmetric kernels. In fact, although the bounds look similar in the symmetric and non-symmetric cases, the non-symmetric case forced us to change our proof technique, leading to a different range for $\epsilon$. In general, for symmetric kernels, an elaborate spectral theory is available and generally finer analysis (e.g., involving algebraic cancellations) is possible. A case in point would be the Christoffel-Darboux theory that allows for the improved bounds in Example 5. > Can there be chaining related improvements to further improve the provided bounds, as done in Bhatt and Bilmes 2021? Thanks for bringing this interesting paper to our attention. There could be a small improvement in a smart chaining argument, but it is very unlikely that we can substantially improve the rate in the dependence of $m$ in $\epsilon$. 
In [Bhatt and Bilmes 2021], they indeed gain a log term that was implicitly there in [Tremblay et al. 2019]; this definitely makes for a cleaner result, but does not substantially change the rate. > Is it possible to provide some experimental results with different choice of kernels including non-symmetric? In our experiments, we do explore several choices of kernels. These kernels are all symmetric in nature. While our concentration results are generally applicable, our experimental investigations in the paper focus on the problem of coreset construction. The setting of non-symmetric kernels is not very natural to the problem of sampling coresets. Generally speaking, non-symmetric kernels are much less tractable, analytically and algorithmically. Therefore, in the problem of DPP-based coreset construction, where the kernel is our choice, one nearly always goes for a symmetric kernel to produce the desired repulsive effects among sample points and the attendant variance reduction. In particular, the fast decay of the variance of linear statistics is key in deriving *small* coresets from our concentration inequality. To our knowledge, there is no known non-symmetric DPP with a fast-decaying variance of linear statistics, and we thus do not have a clear non-symmetric candidate DPP for comparing coresets. A key application of non-symmetric kernels is in situations where it is desirable that the mutual repulsion between points is violated at certain locations; cf [Gartrell et al. 2019](https://arxiv.org/abs/1905.12962), or [Poulson 2019](https://arxiv.org/abs/1905.00165) and references therein. In such problems, it is of natural interest to obtain confidence bands for the outputs of the non-symmetric DPP based methods, and our concentration results for non-symmetric DPPs are the first ones to provide such guarantees. 
The experimental analysis related to our concentration results for the non-symmetric case are thus better suited to a different work which focuses on such relevant applications of non-symmetric kernels. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal, increasing score. Comment: Thanks for responding to my questions! I believe this discussion can help the manuscript. I am raising my scores.
Rebuttal 1: Rebuttal: We thank the reviewers for their careful reading of the paper, and their detailed comments, to which we have responded point by point below. We would like to take this opportunity to elaborate on the fact that our submission is appropriately viewed in the context of a wider programme of leveraging negative dependence as a toolbox for machine learning. In particular, this enables us to exploit the diversity of samples generated using a DPP to provide more representative and more parsimonious summaries of datasets (or approximations to functions of datasets). While this is motivated by physical heuristics coming from quantum and statistical mechanics, establishing this as a principled learning paradigm with robust and rigorous guarantees has proven to be a much more difficult task, with only a limited number of use-cases to support more extensive experimental evidence. Our work contributes to this overall programme by addressing the problem of sampling coresets. From this perspective, for our paper, the coresets problem is an important application domain wherein we establish the effectiveness of negative association-based methods (in contrast with, e.g., a paper that is situated deep within the coresets literature). As such, our evaluations and comparison benchmarks are also principally based on literature that has attempted to accomplish this goal (e.g., Tremblay et al. 2019). On a related note, discussion of the nuances of the coresets literature in specific settings (such as clustering or regression) in our paper are somewhat muted, especially in view of the limited space available in a NeurIPS submission. Having established foundational guarantees for DPPs as a coresets generating mechanism in this paper, it would be natural to explore in future works the performance of DPP-based coresets in such specific applications. 
Such future works would be natural venues where detailed comparisons to established benchmarks in those particular settings can be undertaken. In contrast, the present work focuses on very general, foundational guarantees for an approach based on negative association, and is ideally perceived and evaluated in that framework.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Revisiting Score Propagation in Graph Out-of-Distribution Detection
Accept (poster)
Summary: In this paper, the authors attempt to tackle the task of detecting Out-of-Distribution (OOD) nodes in graph data. To this end, the authors propose an augmentation method, namely Graph-Augmented Score Propagation (GRASP). GRASP performs the task of OOD detection in graphs by increasing the ratio of intra-edges (i.e., edges only connecting in-distribution nodes or OOD nodes). Theoretical analysis is also provided to support the effectiveness of the proposed approach. Experimental results show that GRASP can outperform existing OOD detection methods in various datasets. Strengths: 1. The proposed method is introduced in an informative manner. 2. The paper is easy to follow. 3. The comparative experiments are comprehensive. 4. The hyperparameter sensitivity analysis is thorough, providing readers with valuable insights. Weaknesses: 1. At the beginning of the article, the authors might overly emphasize the contribution of this article, to some extent giving the impression to readers that it is the first study on graph OOD detection. However, there have already been some papers in this area. For instance, 1) "Learning on Graphs with Out-of-Distribution Nodes," in KDD 2022 (cited); 2) "Generalizing graph neural networks on out-of-distribution graphs," in TPAMI 2023 (not cited); 3) "Good-d: On unsupervised graph out-of-distribution detection," in WSDM 2023 (cited). While their core idea differs from this paper, their contributions should not be overlooked. Given these observations, it is highly suggested that the authors conduct broader investigations of existing works, discuss the differences between the proposed methods and other related methods, and adjust the contributions of the paper. 2. The baselines selected in the experiments may not be sufficiently novel. This makes the experimental results less convincing. 3. The proposed method needs more motivation from both theoretical and practical aspects. 
For example, are there any observations or previous studies supporting the assumptions that edges follow a Bernoulli distribution? Are there concrete (or real-world) examples that can demonstrate the importance of the number of intra-edges and inter-edges in OOD detection? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The authors emphasize multiple times the importance of the number of intra-edges and inter-edges in OOD detection, providing theoretical proof. However, could a concrete example be provided to illustrate this concept more intuitively? 2. Does the assumption that edges follow a Bernoulli distribution receive support in most real-world datasets, practical applications, or previous studies? 3. In Table 3, the FPR on the dataset reddit2 is significantly lower than that of its comparison models. Could the author provide further explanation for this remarkable improvement? 4. Is there any underlying pattern in the ratio of intra-edges to inter-edges when achieving the best OOD detection performance across different datasets? 5. Can authors conduct broader investigations of recent related works, explicitly discuss the differences between the proposed and other related methods, and adjust the contributions of the paper (if possible)? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations of the proposed method in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
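For context on the method under review, score propagation on a graph typically mixes each node's OOD score with its neighbours' scores; a one-step sketch (the mixing rule and all names here are our illustrative assumptions, not GRASP's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Toy undirected graph: symmetric 0/1 adjacency with no self-loops, plus
# per-node OOD scores from an arbitrary base detector.
A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
A = A + A.T
scores = rng.standard_normal(n)

# One propagation step: mix each score with the mean score of its neighbours.
deg = np.maximum(A.sum(axis=1), 1)   # clamp isolated nodes to avoid 0-division
alpha = 0.5
propagated = (1 - alpha) * scores + alpha * (A @ scores) / deg
```

When intra-edges (ID–ID and OOD–OOD) dominate, such smoothing reinforces the separation between the two groups of scores, which matches the review's summary of why GRASP increases the intra-edge ratio before propagating.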
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback and insightful questions. Below, we address each point in detail: >**W1&Q5. Suggestion on providing more discussion w.r.t. the existing graph OOD works and comments on the paper's contribution position.** We appreciate your comments and the opportunity to clarify and further discuss the positioning! Firstly, we acknowledge the contributions of previous works in this field and have made it clear from the introduction (line 29) that our approach is inspired by prior research [72]. If any part of the text inadvertently suggested that we were the first to address graph OOD detection, we welcome advice on how to eliminate possible confusion and better position our contributions. Regarding the discussion of references and existing works on graph OOD: 1. **OOD Generalization vs. OOD Detection**. The uncited paper [A] suggested by the reviewer addresses a different problem -- OOD generalization rather than OOD detection. This distinction is crucial, as OOD generalization focuses on correctly classifying domain-shifted data, which involves a different kind of "OOD" than the semantic shift that OOD detection is concerned with. Our work specifically targets scenarios where data does not conform to any in-domain class. 2. **Graph-level vs. node-level**. We also note a significant body of research [B-G] that focuses on graph-level OOD detection. Our work, however, contributes to the less-explored area of node-level OOD detection, where current literature remains sparse [H-L]. Our findings help fill this gap by providing new insights and methodologies. 3. **Training-required vs. post hoc method**. Methodologically, our approach differs significantly from those in the existing literature, which often require extensive re-training [I-L]. 
Our post hoc solution (demonstrated in Tables 2, 3, 15, and 16) enables effective and efficient OOD detection, facilitating easier deployment and practical application in real-world scenarios. 4. **New theoretical contribution**. Our paper introduces fundamental theoretical analysis, a first in the realm of node-level graph OOD detection. We elucidate key factors such as the importance of intra-edge dominance, providing a new theoretical framework that aids in understanding and addressing the unique challenges of OOD detection on graphs. We hope these clarifications and expansions will address the concerns raised and better articulate the value and positioning of our work in the literature. [A] Generalizing graph neural networks on out-of-distribution graphs. TPAMI'23 [B] GraphDE: A Generative Framework for Debiased Learning and Out-of-Distribution Detection on Graphs. NeurIPS'22 [C] Towards OOD Detection in Graph Classification from Uncertainty Estimation Perspective. ICML'22 [D] On Estimating the Epistemic Uncertainty of Graph Neural Networks using Stochastic Centering. ICML'23 [E] GOOD-D: On Unsupervised Graph Out-Of-Distribution Detection. WSDM'23 [F] A Data-centric Framework to Endow Graph Neural Networks with Out-Of-Distribution Detection Ability. KDD'23 [G] HGOE: Hybrid External and Internal Graph Outlier Exposure for Graph Out-of-Distribution Detection. MM'23 [H] Energy-based out-of-distribution detection for graph neural networks. ICLR'23 [I] Uncertainty aware semi-supervised learning on graph data. NeurIPS'20 [J] Graph posterior network: Bayesian predictive uncertainty for node classification. NeurIPS'21 [K] Learning on graphs with out-of-distribution nodes. KDD'22 [L] Bounded and Uniform Energy-based Out-of-distribution Detection for Graphs. ICML'24 >**W2. The baselines selected in the experiments may not be sufficiently novel. 
This makes the experimental results less convincing.** To address this concern, we have incorporated two of the latest baselines, NODESafe [A] and fDBD [B], into our experiments on both common datasets (5 datasets) and large-scale datasets (3 datasets). NODESafe aims to reduce the generation of extreme scores when training ID models on graphs, and fDBD detects OOD samples based on their feature distances to decision boundaries. Our results reveal that the performance of these methods lags significantly behind ours due to their inability to increase the proportion of intra-edges.

| Datasets | cora | | amazon | | coauthor | | chameleon | | squirrel | |
|----------|--------|--------|--------|--------|----------|--------|-----------|--------|----------|--------|
| method | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 |
| NODESafe | 84.09 | 68.73 | 68.65 | 91.51 | 80.26 | 71.74 | 50.72 | 92.29 | 49.26 | 94.60 |
| fDBD | 56.77 | 81.56 | 73.31 | 51.87 | 63.10 | 59.68 | 50.85 | 89.85 | 53.17 | 94.78 |
| GRASP | 93.50 | 29.70 | 96.68 | 14.38 | 97.75 | 7.84 | 76.93 | 66.88 | 61.09 | 85.59 |

| Datasets | arxiv-year | | snap-patents | | reddit2 | |
|----------|------------|--------|--------------|--------|---------|--------|
| method | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 |
| NODESafe | 47.41 | 94.92 | 51.51 | 93.02 | 43.60 | 97.11 |
| fDBD | 48.87 | 95.82 | 43.27 | 94.99 | 55.72 | 89.78 |
| GRASP | 81.24 | 73.93 | 72.13 | 75.22 | 98.50 | 2.41 |

[A] Bounded and Uniform Energy-based Out-of-distribution Detection for Graphs. ICML'24 [B] Fast Decision Boundary based Out-of-Distribution Detector. ICML'24 --- Rebuttal 2: Title: Part 2 of the rebuttal Comment: >**W3&Q2. Justification regarding using the Bernoulli distribution to model edge weights.** Fair concern! As we are dealing with discrete graphs, where edges either exist or do not, the adjacency matrix values are binary, taking on either 0 or 1. 
This naturally aligns with the Bernoulli distribution, which is frequently employed in graph structure learning, as exemplified in several pertinent references [A,B,C]. [A] Learning Discrete Structures for Graph Neural Networks. ICML'19. [B] Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings. NeurIPS'20. [C] Data Augmentation for Graph Neural Networks. AAAI'21. >**W3-2. Examples that demonstrate the importance of the number of intra-edges and inter-edges.** We present detailed evidence in Table 14, which outlines the relationship between the ratio of intra-edges and OOD detection performance across 10 real datasets. The data illustrate that a higher number of intra-edges is crucial for effective OOD detection. >**Q1. Request for an intuitive example showing the importance of intra-edges and inter-edges in OOD detection.** Figure 2 provides a clear, intuitive example demonstrating how variations in the number of intra-edges versus inter-edges impact OOD detection. The two graphs in Figure 2 have the same nodes, but due to the differing numbers of intra-edges and inter-edges, they yield completely opposite results after score propagation. Specifically, score propagation enhances OOD detection performance only when intra-edges dominate; otherwise, it may in turn hurt the performance. This conclusion is supported by the empirical results on the real datasets used in our experiments, as shown in Tables 2, 3, and 14. >**Q3. Concern about lower FPR on the dataset Reddit2 compared to other models.** The notable decrease in False Positive Rate (FPR) observed with GRASP on the Reddit2 dataset can be attributed to an increased proportion of intra-edges, as detailed in Table 14. 
After propagating scores along $A\_+$, the high scores of ID nodes are transferred more to $\mathcal{V}\_{uid}$ than to $\mathcal{V}\_{uood}$, resulting in higher scores for $\mathcal{V}\_{uid}$ than for $\mathcal{V}\_{uood}$, thus widening the gap between $\mathcal{V}\_{uid}$ and $\mathcal{V}\_{uood}$ and reducing the misidentification of OOD nodes. Figure 1 in the attached pdf (please refer to the Common Responses section titled "Author Rebuttal by Authors" https://openreview.net/forum?id=jb5qN3212b&noteId=HVpGXjNQ0B) shows the score distributions of $\mathcal{V}\_{uid}$ and $\mathcal{V}\_{uood}$ on Reddit2 before and after using GRASP, respectively. The shaded area with diagonal lines represents the FPR, visually illustrating the aforementioned reasons. >**Q4. Patterns in the ratio of intra-edges to inter-edges for better OOD detection performance.** The following table systematically presents the impact of the intra- to inter-edge ratio ($\eta\_{intra}$/$\eta\_{inter}$) on OOD detection performance across various datasets. The results indicate that the number of intra-edges plays a crucial role in OOD detection performance. When the ratio of intra-edges is increased so that they dominate, score propagation can achieve excellent OOD detection performance across different datasets. 
| | Before Applying GRASP | | After Applying GRASP | |
|---------------|-----------------------|--------|----------------------|--------|
| Datasets | $\eta\_{intra}$/$\eta\_{inter}$ | AUROC | $\eta\_{intra}$/$\eta\_{inter}$ | AUROC |
| cora | 4.65 | 87.52 | 11.87 | 93.50 |
| amazon | 14.24 | 96.27 | 28.07 | 96.68 |
| coauthor | 12.66 | 95.82 | 34.46 | 97.75 |
| chameleon | 1.03 | 50.42 | 3.72 | 76.93 |
| squirrel | 0.65 | 35.88 | 1.67 | 61.09 |
| reddit2 | 0.68 | 31.99 | 46.62 | 98.50 |
| ogbn-products | 4.55 | 85.66 | 14.53 | 93.79 |
| arxiv-year | 0.72 | 35.30 | 4.60 | 81.24 |
| snap-patents | 0.53 | 27.35 | 2.56 | 72.13 |
| wiki | 1.52 | 60.32 | 3.91 | 77.97 |

--- Rebuttal Comment 2.1: Title: Thanks for the detailed responses Comment: Dear Authors, Thanks very much for your detailed responses. I will keep my positive score on this paper. --- Reply to Comment 2.1.1: Title: Appreciate Your Feedback Comment: Dear Reviewer hXZy, We are delighted that our responses have effectively addressed your concerns. Your recognition of our efforts in the paper is greatly appreciated! Warm regards, The Authors of Submission 12283.
Summary: This study investigates the effectiveness of score propagation in graph Out-of-Distribution (OOD) detection. It explores the conditions under which score propagation can be beneficial and proposes an edge augmentation strategy (GRASP) to improve its performance. The authors provide theoretical guarantees for GRASP and demonstrate its performance compared to existing OOD detection methods on several graph datasets. Strengths: 1) The study delves into the mechanisms of score propagation and derives conditions for its effectiveness. This theoretical foundation is solid and extends the understanding of graph OOD detection. 2) The proposed GRASP method is a practical and efficient solution for improving OOD detection performance. The results show that GRASP achieves SOTA performance. 3) The paper is generally well-written and easy to follow. Weaknesses: 1) It is strongly suggested to release the source code of the experiments. 2) Considering the improvement is marginal in Tables 2 and 3, it is also suggested that the authors provide standard errors with a statistical significance analysis of the results. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the performance of GRASP vary with different OOD distributions (e.g., uniform, normal, outliers)? 2. Can the authors provide further insights into how GRASP works and how it interprets the graph structure? 3. How does the performance of GRASP scale with the size of the graph and the number of nodes? Is there any systematic analysis? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I did not find significant limitations in this study; one point that concerns me is that the effectiveness of GRASP relies on the connectivity and structure of the graph. In scenarios with random connectivity patterns, the method may not be as effective. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback and insightful questions. Below, we address each point in detail: >**W1. It is strongly suggested to release the source code of the experiments.** Sure! We have made the source code available at the following link: https://anonymous.4open.science/r/GRASP-EEA3/README.md. >**W2. Considering the improvement is marginal in Tables 2 and 3, it is also suggested that the authors provide standard errors with a statistical significance analysis of the results.** We appreciate your observations regarding the improvements in Tables 2 and 3! We'd like to emphasize that the overall improvement of our method is substantial. Nevertheless, your suggestion about including the standard error (STD) to provide statistical significance analysis is well taken. We have updated the manuscript to include standard errors for all datasets. It is important to note that the FPR95 metric displays a relatively high standard error across methods on the Cora, Amazon, and Chameleon datasets due to their small sizes, which makes the outcomes sensitive to variations in data splits. Nonetheless, our method consistently remains competitive across all five runs, as detailed in the updated Table. 
| Datasets | cora | | amazon | | coauthor | | chameleon | | squirrel | |
|-------------|---------------|---------------|---------------|---------------|--------------|---------------|--------------|---------------|--------------|---------------|
| method | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 |
| MSP | 84.56 ± 5.39 | 70.86 ± 15.88 | 89.34 ± 3.49 | 49.26 ± 10.51 | 94.34 ± 0.41 | 28.82 ± 1.94 | 57.96 ± 3.31 | 85.70 ± 7.09 | 48.51 ± 0.46 | 94.68 ± 1.01 |
| Energy | 85.47 ± 4.98 | 67.54 ± 22.98 | 90.28 ± 3.42 | 42.13 ± 9.96 | 95.67 ± 0.25 | 20.29 ± 1.49 | 59.20 ± 4.31 | 88.06 ± 7.50 | 45.07 ± 1.68 | 93.98 ± 1.42 |
| KNN | 70.94 ± 5.62 | 90.20 ± 4.35 | 84.71 ± 3.28 | 65.19 ± 7.26 | 90.13 ± 0.50 | 51.24 ± 1.83 | 57.90 ± 6.48 | 93.38 ± 5.48 | 54.68 ± 2.25 | 94.72 ± 2.84 |
| ODIN | 84.98 ± 5.59 | 68.41 ± 18.48 | 89.90 ± 3.65 | 44.06 ± 10.69 | 95.27 ± 0.33 | 22.59 ± 1.76 | 57.94 ± 3.75 | 85.31 ± 7.64 | 44.08 ± 0.35 | 94.17 ± 0.44 |
| Mahalanobis | 85.48 ± 1.69 | 69.68 ± 14.60 | 75.58 ± 7.97 | 96.49 ± 5.96 | 84.98 ± 0.58 | 85.71 ± 1.82 | 53.19 ± 4.30 | 95.55 ± 2.36 | 54.99 ± 0.70 | 94.90 ± 0.51 |
| GKDE | 86.27 ± 2.69 | 63.71 ± 14.36 | 77.26 ± 5.54 | 81.29 ± 3.36 | 95.13 ± 0.29 | 25.48 ± 1.48 | 50.14 ± 5.50 | 92.93 ± 4.89 | 49.38 ± 3.58 | 96.71 ± 0.67 |
| GPN | 82.93 ± 11.20 | 58.45 ± 31.98 | 82.63 ± 5.87 | 72.95 ± 19.77 | 93.82 ± 2.63 | 34.11 ± 22.46 | 68.20 ± 6.70 | 82.25 ± 6.55 | 48.38 ± 4.43 | 95.58 ± 1.65 |
| OODGAT | 53.63 ± 5.13 | 94.59 ± 6.38 | 66.95 ± 16.02 | 71.34 ± 15.34 | 52.18 ± 8.26 | 96.53 ± 3.39 | 59.67 ± 6.37 | 94.43 ± 3.43 | 46.13 ± 3.10 | 95.27 ± 1.00 |
| GNNSafe | 87.52 ± 6.16 | 54.71 ± 31.41 | 96.27 ± 0.31 | 22.39 ± 4.90 | 95.82 ± 0.28 | 16.64 ± 1.90 | 50.42 ± 0.65 | 100.00 ± 0.00 | 35.88 ± 0.24 | 100.00 ± 0.00 |
| GRASP | 93.50 ± 1.65 | 29.70 ± 12.25 | 96.68 ± 0.28 | 14.38 ± 6.63 | 97.75 ± 0.18 | 7.84 ± 0.58 | 76.93 ± 4.18 | 66.88 ± 6.48 | 61.09 ± 1.49 | 85.59 ± 3.61 |

| Datasets | reddit2 | | ogbn-products | | arxiv-year | | snap-patents | | wiki | |
|-------------|--------------|--------------|--------------|--------------|---------------|---------------|--------------|--------------|--------------|--------------|
| Method | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 |
| MSP | 46.61 ± 0.66 | 96.59 ± 0.14 | 70.19 ± 0.92 | 86.87 ± 0.35 | 47.24 ± 3.70 | 95.03 ± 1.46 | 46.99 ± 0.83 | 94.31 ± 0.30 | 54.70 ± 0.68 | 95.46 ± 0.32 |
| Energy | 44.13 ± 0.14 | 96.77 ± 0.03 | 68.13 ± 0.38 | 85.09 ± 0.45 | 51.35 ± 5.91 | 94.10 ± 2.76 | 46.03 ± 4.59 | 96.82 ± 1.07 | 29.02 ± 2.78 | 97.31 ± 1.54 |
| KNN | 66.74 ± 0.55 | 90.78 ± 0.75 | 73.58 ± 1.21 | 84.22 ± 2.00 | 57.96 ± 2.19 | 95.35 ± 0.92 | 53.45 ± 0.93 | 90.54 ± 1.09 | 43.69 ± 4.83 | 93.43 ± 2.57 |
| ODIN | 44.69 ± 0.24 | 96.74 ± 0.07 | 68.95 ± 0.52 | 85.65 ± 0.31 | 47.36 ± 3.46 | 95.06 ± 1.47 | 45.20 ± 0.87 | 94.27 ± 0.30 | 29.91 ± 0.47 | 97.88 ± 0.18 |
| Mahalanobis | 74.89 ± 1.01 | 71.73 ± 1.55 | OOM | | 59.57 ± 1.27 | 88.60 ± 1.27 | 58.50 ± 0.81 | 96.03 ± 0.22 | 67.95 ± 1.56 | 72.33 ± 2.15 |
| GKDE | OOT | | OOM | | OOM | | OOM | | OOM | |
| GPN | OOM | | OOM | | 50.97 ± 14.98 | 95.62 ± 3.29 | OOM | | OOM | |
| OODGAT | OOM | | OOM | | 59.38 ± 3.44 | 92.90 ± 0.94 | OOM | | OOM | |
| GNNSafe | 31.99 ± 0.26 | 99.49 ± 0.07 | 85.66 ± 1.16 | 77.86 ± 1.09 | 35.30 ± 0.06 | 100.00 ± 0.00 | 27.35 ± 0.18 | 99.92 ± 0.18 | 60.32 ± 4.51 | 72.63 ± 2.05 |
| GRASP | 98.50 ± 0.02 | 2.41 ± 0.09 | 93.79 ± 0.24 | 39.77 ± 1.25 | 81.24 ± 0.39 | 73.93 ± 0.60 | 72.13 ± 0.06 | 75.22 ± 0.09 | 77.97 ± 1.38 | 58.49 ± 1.07 |

--- Rebuttal 2: Title: Part 2 of the rebuttal Comment: >**Q1. How does the performance of GRASP vary with different OOD distributions (e.g., uniform, normal, outliers)?** We thank the reviewer for the suggestion! 
We manually generate 2D sample points as graph nodes, with ID samples drawn from a standard Gaussian distribution and OOD samples drawn from three different distributions: a 2D uniform distribution, a normal distribution different from the ID one, and an outlier distribution. The outlier samples are randomly selected points outside the region of the ID sample points. We calculate the RBF kernel similarity between pairs of sample points and construct edges between nodes with high similarity, to simulate the similarity between nodes' embeddings after training a GNN. The experimental results are shown in the table below. The results indicate that GRASP performs best when the OOD distribution is normal, worst when it is uniform, and intermediate when it consists of outliers.

| OOD | AUROC | FPR |
|---------------|--------|-----|
| Before GRASP | 56.20 | 92 |
| uniform+GRASP | 74.08 | 62 |
| normal+GRASP | 97.98 | 0 |
| outlier+GRASP | 85.81 | 47 |

We are happy to provide more experimental results if the reviewer is interested! >**Q2. Can the authors provide further insights into how GRASP works and how it interprets the graph structure?** Thank you for your inquiry about the rationale behind GRASP! Here's a succinct overview to aid your understanding: - **Core Principle**: The effectiveness of GRASP in enhancing graph OOD detection hinges on increasing the proportion of intra-edges before performing score propagation. - **Implementation**: Based on this principle, GRASP identifies a subset $G$ within the graph, characterized by having more connections to ID data than to OOD data. Additional edges within $G$ are then added to enhance the score propagation for OOD detection. Regarding your question on how GRASP interprets the graph structure: - GRASP is designed to amplify the score difference between $\mathcal{V}\_{uid}$ (nodes to be identified as ID) and $\mathcal{V}\_{uood}$ (nodes to be identified as OOD). 
By reinforcing connections within the subset $G$ that has stronger ties to ID data, and then conducting score propagation, it exerts a greater influence on $\mathcal{V}\_{uid}$ than on $\mathcal{V}\_{uood}$. This structured modification and targeted propagation are pivotal for enhancing the discriminative capability of the network against OOD nodes. If there is any aspect of the question that we have misunderstood, or if further clarification is required, please do not hesitate to let us know! >**Q3. How does the performance of GRASP scale with the size of the graph and the number of nodes? Is there any systemic analysis?** The 10 datasets used in our experiments cover a wide range of scales, from small to large. As shown in Tables 1-3, GRASP's performance is not significantly affected by the scale or size of the datasets. Instead, its performance is more influenced by the proportion of inter-edges in the original datasets.

| Datasets | #Nodes | #Edges | Ratio of intra-edges | Ratio of inter-edges | AUROC |
|---------------|---------|-----------|----------------------|----------------------|--------|
| cora | 2708 | 5429 | 92.23 | 7.77 | 93.50 |
| amazon | 7650 | 238162 | 96.56 | 3.44 | 96.68 |
| coauthor | 18333 | 163788 | 97.18 | 2.82 | 97.75 |
| chameleon | 2277 | 31421 | 78.81 | 21.19 | 76.93 |
| squirrel | 5201 | 198493 | 62.54 | 37.46 | 61.09 |
| reddit2 | 232965 | 23213838 | 97.90 | 2.10 | 98.50 |
| ogbn-products | 2449029 | 61859140 | 93.56 | 6.44 | 93.79 |
| arxiv-year | 169343 | 1166243 | 82.15 | 17.85 | 81.24 |
| snap-patents | 2923922 | 13975788 | 71.94 | 28.06 | 72.13 |
| wiki | 1925342 | 303434860 | 79.64 | 20.36 | 77.97 |

--- Rebuttal 3: Title: Part 3 of the rebuttal Comment: >**Limitation. 
GRASP may not be as effective in scenarios with random connectivity patterns** We agree with the reviewer's point that when a graph has random connectivity, all methods will perform poorly, because such a graph lacks any meaningful edge information, making it indeed impossible to work effectively. To validate this, we conduct a simple experiment on Cora. Specifically, we randomly shuffle the original edges of the graph while retaining the original features and labels for training the ID model. We then evaluate the performance of each baseline on the trained model. The results, presented in the table below, confirm that all methods achieve AUROC scores around 50%, comparable to random guessing. However, random connectivity is unlikely in real-world graphs such as social networks: in reality, there are connections among ID nodes, and they naturally tend to be linked together. Therefore, it is less practical to consider networks with random connectivity.

| Method | AUROC | FPR95 |
|-------------|-------|-------|
| MSP | 53.75 | 94.00 |
| Energy | 53.91 | 93.91 |
| KNN | 50.29 | 94.56 |
| ODIN | 54.19 | 93.73 |
| Mahalanobis | 54.27 | 93.86 |
| GNNSafe | 51.97 | 93.27 |
| GRASP | 51.37 | 92.73 |
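For readers who want to reproduce the spirit of this sanity check, below is a minimal, self-contained sketch. It is our own illustration, not the paper's implementation: the simplified mean-of-neighbors propagation rule, the synthetic scores, and all names and parameters are assumptions. It shows that averaging-style score propagation sharpens the ID/OOD score gap when edges are mostly intra-group, but not when edge endpoints are drawn uniformly at random:

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(scores, edges, n, alpha=0.5, steps=2):
    """Simplified propagation: mix each node's score with its neighbors' mean."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    deg = A.sum(axis=1).clip(min=1.0)  # avoid division by zero for isolated nodes
    for _ in range(steps):
        scores = alpha * scores + (1 - alpha) * (A @ scores) / deg
    return scores

def auroc(scores, is_id):
    """Probability that a randomly chosen ID node outscores a random OOD node."""
    return (scores[is_id][:, None] > scores[~is_id][None, :]).mean()

n = 200
is_id = np.arange(n) < n // 2                   # first half ID, second half OOD
scores = rng.normal(0.0, 1.0, n) + 0.5 * is_id  # noisy scores; ID slightly higher

# Intra-dominant graph: edges drawn within each group.
intra = [(u, v) for u, v in rng.integers(0, n // 2, (600, 2))]
intra += [(u + n // 2, v + n // 2) for u, v in rng.integers(0, n // 2, (600, 2))]
# "Random connectivity": endpoints drawn uniformly over all nodes.
shuffled = [(u, v) for u, v in rng.integers(0, n, (1200, 2))]

base = auroc(scores, is_id)
print(f"no propagation : {base:.3f}")
print(f"intra-dominant : {auroc(propagate(scores, intra, n), is_id):.3f}")
print(f"shuffled edges : {auroc(propagate(scores, shuffled, n), is_id):.3f}")
```

With intra-dominant edges, propagation averages away per-node noise within each group while preserving the ID/OOD mean gap, so AUROC rises well above the no-propagation baseline; with shuffled edges, both groups are pulled toward the global mean and propagation yields no such gain, mirroring the near-random results in the table above.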
Summary: The paper proposes a methodology called Graph-Augmented Score Propagation to improve OOD detection performance on graphs. The key idea of the paper is an edge augmentation strategy which selectively adds edges to a subset of training nodes, combined with score propagation for the OOD node detection task on graphs. Theoretical analysis is provided which links OOD score propagation to the intra-edge vs. inter-edge ratios between ID and OOD samples. Experimental results are provided on several real-world graph datasets with the measurement of OOD detection metrics, demonstrating the effectiveness of their method. Strengths: 1. The paper is generally well written, well motivated and easy to follow. 2. Theoretical analysis of the setting when OOD score propagation will be effective is helpful for future work in this direction, and for developing OOD detection methods for graphs. 3. The key idea of the work for OOD detection is presented as a post hoc strategy, hence the practical applicability of the method is good. 4. Results in Tables 2 and 3 are convincing. Weaknesses: 1. The creation of the subset G is dependent on the selection of S_id and S_ood examples, which uses MSP as a measure for creating these sets. However, it has been well discussed that MSP suffers from several practical issues such as overconfidence [Hendrycks and Gimpel, ICLR ‘17], poor generalization [Lee et al., NeurIPS ‘18] and calibration issues [Guo et al., ICML ‘17]. Because of GRASP’s dependence on MSP for creating the set G, the overall method can be non-robust. 2. Error bars are missing from all the results, which is important for assessing the consistency of the proposed method, since the data augmentation could be significantly affected by randomness. Technical Quality: 3 Clarity: 2 Questions for Authors: How would the authors ensure the robustness of their detection method when the confidence could vary? 
(see weakness pt 1 above) Did the authors do any experiments to evaluate the robustness of their method, and how would they address it? Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors mention the limitations of their work in Appendix Section E and societal impact in Appendix Section F. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback and insightful questions. Below, we address each point in detail: >**W1&Q1. Concern about using MSP score to select S_id and S_ood.** Thank you for this insightful question! We acknowledge the concern that the model can sometimes exhibit overconfidence for certain OOD nodes. However, this does not undermine the applicability of GRASP when ID nodes' MSP scores are generally higher than those of OOD nodes. As demonstrated in Figure 4, we selectively utilize the highest MSP scores for ID and the lowest for OOD. This selection strategy results in more accurate estimations of $S\_{id}$ and $S\_{ood}$. We empirically support this approach with results presented in Tables 2, 3, and 5, across five datasets (Squirrel, Reddit2, arXiv-Year, snap-patents, and Wiki). These results show significant performance improvements with GRASP, particularly where original MSP scores were ineffective, ensuring more accurate distinctions between $S_{id}$ and $S_{ood}$ in each iteration. Additionally, our method offers high flexibility—**it can integrate any existing OOD scoring function**. While we use MSP in our primary demonstrations to explain GRASP's principles, substituting MSP with other well-known OOD scoring methods in our framework also yields competitive results, as shown in Table 7. This highlights GRASP’s adaptability and robustness. >**W2. Suggestion of including error bars.** This is an excellent suggestion! We have now included the main results with standard errors for all datasets in the revised manuscript and the table below. It is important to note that the FPR95 metric displays a relatively high standard error across methods on the Cora, Amazon, and Chameleon datasets due to their small sizes, which makes the outcomes sensitive to variations in data splits. 
Nonetheless, our method consistently remains competitive across all five runs, as detailed in the updated table in the second part of the response. --- Rebuttal 2: Title: Part 2 of the rebuttal (Table with error bar) Comment:

| Datasets | cora | | amazon | | coauthor | | chameleon | | squirrel | |
|-------------|---------------|---------------|---------------|---------------|--------------|---------------|--------------|---------------|--------------|---------------|
| method | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 |
| MSP | 84.56 ± 5.39 | 70.86 ± 15.88 | 89.34 ± 3.49 | 49.26 ± 10.51 | 94.34 ± 0.41 | 28.82 ± 1.94 | 57.96 ± 3.31 | 85.70 ± 7.09 | 48.51 ± 0.46 | 94.68 ± 1.01 |
| Energy | 85.47 ± 4.98 | 67.54 ± 22.98 | 90.28 ± 3.42 | 42.13 ± 9.96 | 95.67 ± 0.25 | 20.29 ± 1.49 | 59.20 ± 4.31 | 88.06 ± 7.50 | 45.07 ± 1.68 | 93.98 ± 1.42 |
| KNN | 70.94 ± 5.62 | 90.20 ± 4.35 | 84.71 ± 3.28 | 65.19 ± 7.26 | 90.13 ± 0.50 | 51.24 ± 1.83 | 57.90 ± 6.48 | 93.38 ± 5.48 | 54.68 ± 2.25 | 94.72 ± 2.84 |
| ODIN | 84.98 ± 5.59 | 68.41 ± 18.48 | 89.90 ± 3.65 | 44.06 ± 10.69 | 95.27 ± 0.33 | 22.59 ± 1.76 | 57.94 ± 3.75 | 85.31 ± 7.64 | 44.08 ± 0.35 | 94.17 ± 0.44 |
| Mahalanobis | 85.48 ± 1.69 | 69.68 ± 14.60 | 75.58 ± 7.97 | 96.49 ± 5.96 | 84.98 ± 0.58 | 85.71 ± 1.82 | 53.19 ± 4.30 | 95.55 ± 2.36 | 54.99 ± 0.70 | 94.90 ± 0.51 |
| GKDE | 86.27 ± 2.69 | 63.71 ± 14.36 | 77.26 ± 5.54 | 81.29 ± 3.36 | 95.13 ± 0.29 | 25.48 ± 1.48 | 50.14 ± 5.50 | 92.93 ± 4.89 | 49.38 ± 3.58 | 96.71 ± 0.67 |
| GPN | 82.93 ± 11.20 | 58.45 ± 31.98 | 82.63 ± 5.87 | 72.95 ± 19.77 | 93.82 ± 2.63 | 34.11 ± 22.46 | 68.20 ± 6.70 | 82.25 ± 6.55 | 48.38 ± 4.43 | 95.58 ± 1.65 |
| OODGAT | 53.63 ± 5.13 | 94.59 ± 6.38 | 66.95 ± 16.02 | 71.34 ± 15.34 | 52.18 ± 8.26 | 96.53 ± 3.39 | 59.67 ± 6.37 | 94.43 ± 3.43 | 46.13 ± 3.10 | 95.27 ± 1.00 |
| GNNSafe | 87.52 ± 6.16 | 54.71 ± 31.41 | 96.27 ± 0.31 | 22.39 ± 4.90 | 95.82 ± 0.28 | 16.64 ± 1.90 | 50.42 ± 0.65 | 100.00 ± 0.00 | 35.88 ± 0.24 | 100.00 ± 0.00 |
| GRASP | 93.50 ± 1.65 | 29.70 ± 12.25 | 96.68 ± 0.28 | 14.38 ± 6.63 | 97.75 ± 0.18 | 7.84 ± 0.58 | 76.93 ± 4.18 | 66.88 ± 6.48 | 61.09 ± 1.49 | 85.59 ± 3.61 |

| Datasets | reddit2 | | ogbn-products | | arxiv-year | | snap-patents | | wiki | |
|-------------|--------------|--------------|--------------|--------------|---------------|---------------|--------------|--------------|--------------|--------------|
| Method | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 |
| MSP | 46.61 ± 0.66 | 96.59 ± 0.14 | 70.19 ± 0.92 | 86.87 ± 0.35 | 47.24 ± 3.70 | 95.03 ± 1.46 | 46.99 ± 0.83 | 94.31 ± 0.30 | 54.70 ± 0.68 | 95.46 ± 0.32 |
| Energy | 44.13 ± 0.14 | 96.77 ± 0.03 | 68.13 ± 0.38 | 85.09 ± 0.45 | 51.35 ± 5.91 | 94.10 ± 2.76 | 46.03 ± 4.59 | 96.82 ± 1.07 | 29.02 ± 2.78 | 97.31 ± 1.54 |
| KNN | 66.74 ± 0.55 | 90.78 ± 0.75 | 73.58 ± 1.21 | 84.22 ± 2.00 | 57.96 ± 2.19 | 95.35 ± 0.92 | 53.45 ± 0.93 | 90.54 ± 1.09 | 43.69 ± 4.83 | 93.43 ± 2.57 |
| ODIN | 44.69 ± 0.24 | 96.74 ± 0.07 | 68.95 ± 0.52 | 85.65 ± 0.31 | 47.36 ± 3.46 | 95.06 ± 1.47 | 45.20 ± 0.87 | 94.27 ± 0.30 | 29.91 ± 0.47 | 97.88 ± 0.18 |
| Mahalanobis | 74.89 ± 1.01 | 71.73 ± 1.55 | OOM | | 59.57 ± 1.27 | 88.60 ± 1.27 | 58.50 ± 0.81 | 96.03 ± 0.22 | 67.95 ± 1.56 | 72.33 ± 2.15 |
| GKDE | OOT | | OOM | | OOM | | OOM | | OOM | |
| GPN | OOM | | OOM | | 50.97 ± 14.98 | 95.62 ± 3.29 | OOM | | OOM | |
| OODGAT | OOM | | OOM | | 59.38 ± 3.44 | 92.90 ± 0.94 | OOM | | OOM | |
| GNNSafe | 31.99 ± 0.26 | 99.49 ± 0.07 | 85.66 ± 1.16 | 77.86 ± 1.09 | 35.30 ± 0.06 | 100.00 ± 0.00 | 27.35 ± 0.18 | 99.92 ± 0.18 | 60.32 ± 4.51 | 72.63 ± 2.05 |
| GRASP | 98.50 ± 0.02 | 2.41 ± 0.09 | 93.79 ± 0.24 | 39.77 ± 1.25 | 81.24 ± 0.39 | 73.93 ± 0.60 | 72.13 ± 0.06 | 75.22 ± 0.09 | 77.97 ± 1.38 | 58.49 ± 1.07 |

--- Rebuttal Comment 2.1: Title: Response to authors' rebuttal Comment: Thanks for the rebuttal and sharing the additional results, I would encourage the authors to include them 
in the paper for a clearer indication of statistical significance of their results and contribution. Given my positive outlook on the paper, I would like to keep my score. --- Reply to Comment 2.1.1: Title: Appreciate Your Feedback Comment: Dear Reviewer xmGd, Thank you for taking the time to read our rebuttal and for your positive feedback. We appreciate your suggestion and will incorporate these results in the revised version of our paper. Best regards, The Authors of Submission 12283.
Summary: This work aims to detect out-of-distribution (OOD) nodes on a graph by exploring useful OOD score propagation methods. It introduces a novel edge augmentation strategy, with a theoretical guarantee. The approach's superiority is empirically demonstrated, outperforming OOD detection baselines in various scenarios and settings. Strengths: S1. The paper conducts an in-depth analysis and empirically validates the beneficial conditions for OOD score propagation in a graph. S2. The paper introduces the GRASP methodology, which addresses the limitations of previous score propagation methods and demonstrates superior performance in node-level OOD detection tasks. Weaknesses: W1. Does the strategy of utilizing GRASP with additional edges compromise classification accuracy within the distribution? In other words, how can you balance the homogeneity of the graph classification task with that of the OOD detection task when adding edges? W2. While this method is suitable for node-level OOD detection, it appears to be more challenging to adapt it to subgraph-level or graph-level tasks. Technical Quality: 2 Clarity: 3 Questions for Authors: Q1. Is the classification accuracy in Appendix D.1 assessed before or after employing GRASP? Is there a comparison between the two? Q2. Do similar cases to those discussed in Figure 2 exist within real-world datasets? Q3. What is the accuracy of the edge-adding method presented in the text; specifically, how many of the added edges are intra-edges, and how many are inter-edges? What impact would incorrectly added edges have on the results? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback and insightful questions. Below, we address each point in detail: >**W1. Relationship between GRASP and the in-distribution classification accuracy.** Fair concern! GRASP is a post hoc OOD detection method that does not interfere with the classification process. Therefore, the classification accuracy of in-distribution (ID) data remains unchanged. >**W2. Extension to subgraph-level tasks or graph-level tasks.** We appreciate the reviewer's suggestion to extend our work to different tasks! While our primary focus is on the node-level setting, we are open to discussing how GRASP can be adapted for subgraph-level or graph-level tasks. One potential approach is to treat a subgraph or graph as a single node, with the similarity between subgraphs or graphs represented by the weights of edges. This adaptation would enable the application of our method to these broader contexts. >**Q1. Is the classification accuracy in Appendix D.1 assessed before or after employing GRASP? Is there a comparison between the two?** (Related to W1) The classification accuracy presented in Appendix D.1 is from the pretrained ID model before employing GRASP. Since our method is post hoc, the classification accuracy remains the same across all post hoc methods. >**Q2. Do similar cases discussed in Figure 2 exist within real-world datasets?** Yes, they do. Figure 2 is designed to intuitively illustrate our theoretical findings, which are authentically reflective of real-world scenarios. Our theory suggests that score propagation enhances OOD detection performance when intra-edges are predominant. This behavior is captured in real-world datasets, as supported by the empirical evidence presented in Table 14. >**Q3. What is the accuracy of the edge-adding method presented in the text, specifically how many are intra-edges and how many are inter-edges?
What impact would incorrectly added edges have on the results?** The accuracy of the added edges on the common datasets is shown in the table below: | Datasets | Added Edges' ACC | |-----------|-------------| | cora | 0.94 | | amazon | 0.94 | | coauthor | 0.99 | | chameleon | 0.80 | | squirrel | 0.80 | We investigate the impact on GRASP's performance by adding varying proportions of incorrect edges. The results are presented in the table below, with the "ratio" column indicating the proportion of incorrect edges added. The results show that GRASP's performance deteriorates significantly when the proportion of incorrect edges exceeds 0.3. | Datasets | cora | | amazon | | coauthor | | chameleon | | squirrel | | |----------|--------|--------|--------|--------|----------|--------|-----------|--------|----------|--------| | ratio | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | | 0 | 98.78 | 0.42 | 99.85 | 0.21 | 99.79 | 0.00 | 95.06 | 5.79 | 92.27 | 7.99 | | 0.1 | 87.04 | 54.57 | 90.86 | 47.73 | 90.03 | 39.10 | 77.91 | 55.09 | 70.04 | 66.20 | | 0.3 | 57.33 | 77.98 | 65.04 | 80.70 | 66.63 | 79.13 | 63.89 | 76.94 | 49.63 | 91.78 | | 0.5 | 42.52 | 80.94 | 37.09 | 90.60 | 40.75 | 88.16 | 45.79 | 90.72 | 23.43 | 96.29 | | 0.7 | 38.88 | 86.97 | 30.27 | 95.63 | 34.64 | 92.78 | 28.41 | 97.34 | 12.20 | 99.91 | --- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: Thank you for your reply. I have concerns about your claim in the paper that "GRASP still works when raw MSP fails," based on the new experiment involving the addition of incorrect edges. The results in Tables 2 and 3 indicate that MSP's performance is poor, which suggests that using MSP to select S_id and S_ood may introduce numerous errors.
This could lead to incorrect edges being added to the augmented graph, which, based on the evidence that "GRASP's performance deteriorates significantly when the proportion of incorrect edges exceeds 0.3," could adversely impact the effectiveness of GRASP. It's unclear under what conditions your algorithm is efficient, specifically on which kinds of graph datasets and with which base models. --- Rebuttal 2: Title: Response to Reviewer's Concerns - Part 1 Comment: Dear Reviewer hQev, Thank you for raising the concerns regarding our claim that "GRASP still works when raw MSP fails." We appreciate the opportunity to further clarify this aspect of our study. To delve deeper into this, we conducted a visualization analysis of the MSP score distribution, similar to Figure 4, for each dataset. Due to the limitations of the OpenReview platform, we are unable to upload these new figures here. (We will include them in the revision.) In these figures, we observe multiple distinct "heaps" in the score distribution. It is important to note that since MSP measures across the entire test set, it does not effectively capture all the local characteristics. Notably, our results indicate that selections made using Sid, which prioritizes the highest MSP scores, are composed more of ID nodes than OOD nodes. This tendency is further reinforced with additional propagation iterations. We also wish to highlight additional evidence supporting the effectiveness of GRASP: 1. **GRASP is also compatible with other scoring functions beyond MSP.** Substituting MSP with other well-known OOD scoring methods in our framework also yields competitive results, as shown in the table (AUROC) below.
| method | cora | amazon | coauthor | chameleon | squirrel | |--------------|--------|--------|----------|-----------|----------| | MSP | 84.56 | 89.34 | 94.34 | 57.96 | 48.51 | | MSP+prop | 88.02 | 95.32 | 97.15 | 50.35 | 36.21 | | MSP+**GRASP** | **93.50** | **96.68** | **97.75** | **76.93** | **61.09** | | Energy | 85.47 | 90.28 | 95.67 | 59.20 | 45.07 | | Energy+prop | 87.52 | 96.27 | 95.82 | 50.42 | 36.49 | | Energy+**GRASP** | **88.34** | **96.35** | **96.64** | **62.04** | **60.66** | | KNN | 70.94 | 84.71 | 90.13 | 57.90 | 54.68 | | KNN+prop | 73.70 | 92.36 | 95.47 | 49.76 | 53.99 | | KNN+**GRASP** | **91.48** | **97.43** | **96.52** | **76.32** | **60.24** | 2. **Enhancement of Intra-edge Ratios During Propagation.** By utilizing Sid and Sood estimates, GRASP significantly increases the proportion of intra-edges, thereby improving the overall accuracy of edge addition. The ablation study results displayed below illustrate the increased accuracy of added edges when GRASP is employed, compared to scenarios where it is not used: | Datasets | w/o GRASP | w GRASP | |-----------|-----------|---------| | cora | 0.89 | 0.94 | | amazon | 0.92 | 0.94 | | coauthor | 0.93 | 0.99 | | chameleon | 0.51 | 0.80 | | squirrel | 0.39 | 0.80 | --- Rebuttal 3: Title: Response to Reviewer's Concerns - Part 2 Comment: Regarding the impact of other base models, we have shown in Table 13 that GRASP achieves optimal performance on various base models in the literature.
For the reviewer’s convenience, we have included the relevant results below and in the third part of the response, which clearly attests to the versatility and practicality of GRASP: | | Datasets | cora | | amazon | | coauthor | | chameleon | | squirrel | | |----------|-------------|--------|--------|--------|--------|----------|--------|-----------|--------|----------|--------| | Backbone | Method | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | | GAT | MSP | 55.33 | 88.82 | 29.88 | 94.39 | 28.15 | 94.26 | 91.27 | 61.94 | 95.21 | 47.50 | | | Energy | 80.71 | 79.16 | 26.48 | 95.24 | 20.96 | 95.65 | 92.71 | 61.11 | 96.47 | 45.69 | | | KNN | 71.14 | 81.28 | 46.42 | 90.74 | 42.51 | 91.51 | 89.02 | 61.13 | 95.37 | 53.16 | | | ODIN | 55.27 | 89.06 | 26.92 | 94.89 | 24.61 | 94.95 | 90.83 | 62.89 | 96.11 | 45.68 | | | Mahalanobis | 67.92 | 86.37 | 14.28 | 95.80 | 26.27 | 94.46 | 95.35 | 50.65 | 91.36 | 57.67 | | | GNNSafe | 58.97 | 85.64 | 29.12 | 93.16 | 25.41 | 93.91 | 100.00 | 50.39 | 100.00 | 36.21 | | | **GRASP** | **22.76** | **94.28** | **14.21** | **96.79** | **8.59** | **97.51** | **70.15** | **73.40** | **85.84** | **61.18** | | GCNJK | MSP | 81.33 | 80.40 | 32.45 | 94.64 | 26.43 | 94.44 | 86.42 | 68.19 | 94.93 | 51.83 | | | Energy | 96.56 | 70.16 | 40.90 | 93.80 | 18.75 | 95.75 | 91.92 | 65.16 | 95.36 | 49.68 | | | KNN | 90.98 | 73.81 | 64.47 | 85.18 | 50.95 | 89.98 | 94.45 | 59.04 | 94.64 | 53.49 | | | ODIN | 81.04 | 80.68 | 28.35 | 95.13 | 21.12 | 95.41 | 86.03 | 68.58 | 95.04 | 50.64 | | | Mahalanobis | 60.84 | 86.20 | 61.61 | 87.11 | 83.04 | 87.34 | 87.23 | 66.61 | 91.52 | 57.24 | | | GNNSafe | 65.01 | 83.11 | 22.41 | 96.28 | 13.27 | 96.47 | 100.00 | 50.40 | 100.00 | 36.21 | | | **GRASP** | **29.69** | **92.98** | **12.66** | **96.86** | **8.03** | **97.74** | **59.61** | **75.78** | **86.02** | **60.70** | | GATJK | MSP | 69.56 | 84.51 | 47.21 | 91.32 | 24.66 | 95.37 | 94.39 | 55.43 | 94.67 | 50.98 | | | Energy | 62.27 | 
85.75 | 34.75 | 92.89 | 17.23 | 96.38 | 91.11 | 59.01 | 95.61 | 48.76 | | | KNN | 82.54 | 74.32 | 70.98 | 83.48 | 38.95 | 92.56 | 92.21 | 61.14 | 95.20 | 54.32 | | | ODIN | 64.25 | 85.21 | 39.29 | 92.19 | 18.16 | 96.30 | 93.56 | 56.10 | 95.24 | 48.62 | | | Mahalanobis | 79.60 | 79.33 | 52.79 | 88.53 | 34.60 | 93.68 | 91.59 | 52.38 | 91.52 | 56.19 | | | GNNSafe | 44.43 | 90.01 | 22.46 | 95.45 | 17.54 | 95.32 | 100.00 | 50.39 | 100.00 | 36.15 | | | **GRASP** | **29.04** | **92.57** | **14.78** | **96.70** | **8.32** | **97.70** | **78.65** | **71.09** | **85.88** | **61.17** | | APPNP | MSP | 59.37 | 89.01 | 64.64 | 86.51 | 18.38 | 96.45 | 94.24 | 48.87 | 94.41 | 50.91 | | | Energy | 81.82 | 81.21 | 62.87 | 84.36 | 14.57 | 97.01 | 90.55 | 55.75 | 90.91 | 53.04 | | | KNN | 75.33 | 81.21 | 49.55 | 89.76 | 38.44 | 91.71 | 92.14 | 54.19 | 94.12 | 53.14 | | | ODIN | 56.72 | 89.47 | 60.67 | 86.76 | 15.02 | 96.98 | 94.63 | 50.71 | 94.41 | 50.60 | | | Mahalanobis | 73.64 | 86.02 | 98.75 | 62.13 | 30.20 | 93.91 | 92.38 | 58.15 | 93.29 | 56.65 | | | GNNSafe | 59.70 | 85.45 | 19.26 | 95.08 | 12.10 | 96.60 | 100.00 | 50.45 | 100.00 | 36.24 | | | **GRASP** | **26.45** | **94.16** | **5.69** | **97.11** | **8.69** | **97.59** | **83.41** | **63.02** | **86.42** | **60.76** | --- Rebuttal 4: Title: Response to Reviewer's Concerns - Part 3 Comment: | | Datasets | cora | | amazon | | coauthor | | chameleon | | squirrel | | |----------|-------------|--------|--------|--------|--------|----------|--------|-----------|--------|----------|--------| | Backbone | Method | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | | H2GCN | MSP | 67.00 | 86.50 | 59.23 | 86.88 | 99.37 | 40.35 | 91.00 | 62.79 | 94.34 | 57.21 | | | Energy | 68.06 | 86.84 | 57.05 | 86.21 | 97.85 | 51.65 | 92.66 | 63.24 | 96.75 | 53.18 | | | KNN | 80.00 | 79.68 | 63.85 | 80.54 | 60.66 | 77.25 | 95.13 | 56.89 | 95.62 | 57.45 | | | ODIN | 65.21 | 87.10 | 56.25 | 86.97 | 99.43 | 41.58 | 91.07 | 
63.52 | 95.08 | 55.69 | | | Mahalanobis | 81.67 | 80.55 | 86.26 | 77.33 | 97.92 | 61.02 | 97.62 | 58.29 | 96.36 | 53.54 | | | GNNSafe | 43.97 | 88.83 | 33.40 | 90.87 | 93.00 | 43.23 | 100.00 | 50.35 | 100.00 | 36.26 | | | **GRASP** | **33.54** | **92.63** | **16.57** | **96.48** | **14.23** | **96.08** | **66.38** | **74.72** | **86.04** | **60.83** | | MixHop | MSP | 83.94 | 78.60 | 53.56 | 90.97 | 48.66 | 90.91 | 92.95 | 56.77 | 95.60 | 49.07 | | | Energy | 83.67 | 77.15 | 57.04 | 89.28 | 28.49 | 94.67 | 94.10 | 57.21 | 95.61 | 48.87 | | | KNN | 93.36 | 69.93 | 65.41 | 86.45 | 62.40 | 85.91 | 89.52 | 57.64 | 93.44 | 54.00 | | | ODIN | 83.14 | 79.10 | 50.00 | 91.25 | 41.39 | 92.65 | 93.45 | 56.48 | 95.68 | 47.58 | | | Mahalanobis | 82.35 | 80.04 | 90.05 | 81.85 | 47.41 | 91.67 | 93.93 | 56.30 | 91.33 | 56.56 | | | GNNSafe | 66.86 | 83.77 | 39.72 | 93.54 | 33.83 | 92.46 | 100.00 | 50.35 | 100.00 | 36.42 | | | **GRASP** | **32.11** | **92.77** | **10.07** | **96.99** | **9.41** | **97.31** | **76.92** | **66.12** | **85.92** | **60.69** | | GPR-GNN | MSP | 64.90 | 87.44 | 62.84 | 87.66 | 23.96 | 95.64 | 96.09 | 47.65 | 95.78 | 44.62 | | | Energy | 72.85 | 83.86 | 64.23 | 85.28 | 16.42 | 96.50 | 93.78 | 49.09 | 95.16 | 42.63 | | | KNN | 74.24 | 81.46 | 48.47 | 90.48 | 38.83 | 92.31 | 94.39 | 55.31 | 94.18 | 51.74 | | | ODIN | 62.58 | 88.13 | 55.49 | 88.41 | 17.24 | 96.51 | 96.16 | 47.50 | 95.51 | 42.32 | | | Mahalanobis | 79.56 | 84.53 | 97.25 | 69.75 | 49.93 | 91.56 | 87.01 | 55.95 | 87.24 | 61.10 | | | GNNSafe | 51.65 | 85.91 | 13.63 | 96.46 | 14.73 | 95.96 | 100.00 | 50.32 | 100.00 | 36.25 | | | **GRASP** | **26.71** | **94.02** | **5.30** | **97.14** | **8.28** | **97.70** | **76.53** | **72.43** | **85.40** | **61.33** | | GCNII | MSP | 72.85 | 83.02 | 51.72 | 88.13 | 23.18 | 95.21 | 96.03 | 55.46 | 94.13 | 49.46 | | | Energy | 83.15 | 75.24 | 48.28 | 88.78 | 17.72 | 96.03 | 95.87 | 56.75 | 94.61 | 48.63 | | | KNN | 83.99 | 76.02 | 59.25 | 86.74 | 36.05 | 93.43 
| 94.72 | 52.86 | 94.65 | 53.47 | | | ODIN | 71.49 | 83.31 | 49.44 | 88.35 | 19.44 | 95.75 | 95.61 | 56.63 | 94.73 | 48.34 | | | Mahalanobis | 73.90 | 82.01 | 77.63 | 80.87 | 44.01 | 92.63 | 96.68 | 46.57 | 91.66 | 53.62 | | | GNNSafe | 66.70 | 83.12 | 27.08 | 93.13 | 17.87 | 94.47 | 100.00 | 50.35 | 100.00 | 36.32 | | | **GRASP** | **27.92** | **93.51** | **23.53** | **93.72** | **8.82** | **97.61** | **76.79** | **66.44** | **86.27** | **60.62** |
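For readers outside this thread, the score-propagation step being discussed can be sketched minimally as follows. This is an illustrative simplification, not GRASP's exact formulation: the symmetric degree normalization, the mixing weight `alpha`, and the iteration count are assumptions for demonstration.

```python
import numpy as np

def propagate_scores(adj, scores, alpha=0.5, iters=2):
    """Propagate per-node OOD scores over graph edges.

    adj    : (n, n) symmetric 0/1 adjacency matrix
    scores : (n,) initial per-node OOD scores (e.g., MSP)
    alpha  : mixing weight between propagated and original scores
             (illustrative choice, not GRASP's exact formulation)
    """
    deg = adj.sum(axis=1).astype(float)
    deg[deg == 0] = 1.0                      # guard isolated nodes
    d = 1.0 / np.sqrt(deg)
    a_hat = adj * d[:, None] * d[None, :]    # D^{-1/2} A D^{-1/2}
    s = scores.astype(float)
    for _ in range(iters):
        s = alpha * (a_hat @ s) + (1 - alpha) * scores
    return s

# Two ID nodes joined by an intra-edge, plus one isolated OOD node:
adj = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
scores = np.array([0.9, 0.6, 0.2])
print(propagate_scores(adj, scores))  # ID nodes keep higher scores than the OOD node
```

The sketch illustrates why the intra-edge ratio matters: propagation averages scores among neighbors, so when edges predominantly connect nodes of the same distribution, the ID and OOD score distributions are pushed apart; incorrectly added inter-edges instead mix the two populations.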
Rebuttal 1: Rebuttal: Dear Reviewers and ACs, We are grateful for the insightful comments and valuable suggestions from all reviewers. In the following, we would like to summarize the contributions and revisions of this paper. As abbreviations, we refer to Reviewer hQev as R1, Reviewer xmGd as R2, Reviewer aWhs as R3, and Reviewer hXZy as R4 respectively. **Contributions**: - We study score propagation in-depth, theoretically elucidating the conditions under which score propagation is effective. Multiple reviewers value the theoretical contribution of our paper: ```conducts an in-depth analysis``` (R1), ```theoretical analysis is helpful for future work in this direction, and for developing OOD detection methods for graphs``` (R2), ``` theoretical foundation is solid and extends the understanding of graph OOD detection``` (R3). - We propose a graph augmentation method aimed at increasing the ratio of intra-edges to enhance OOD detection performance without the need for training from scratch and without requiring knowledge of true OOD nodes. Multiple reviewers recognized the effectiveness and practical values of our method, ```demonstrates superior performance``` (R1), ```results are convincing``` (R2), ```has achieves the SOTA performance``` (R3); ```the practical applicability is good``` (R2), ```the solution is practical and efficient``` (R3), ```experiments are comprehensive, the hyperparameter sensitivity analysis is thorough, providing readers with valuable insights``` (R4). - Our paper is ```well motivated``` (R2), ```introduced in an informative manner``` (R4), ```well written``` (R2, R3), and ```easy to follow``` (R2, R3, R4). **Responses and Revisions:** - For Reviewer R1's concerns: - We clarify the relationship between GRASP and ID classification accuracy. - We discuss the extension of GRASP to subgraph-level and graph-level tasks. - We elucidate the applicability of Figure 2 in real-world datasets.
- We investigate the impact of incorrectly added edges on the results. - For Reviewer R2's concerns: - We explain the effectiveness and robustness of our method in using MSP. - We add error bars to the results. - For Reviewer R3's concerns: - We release the source codes and pre-trained checkpoints for the reproducibility of our work. - We add error bars to the results. - We conduct additional experiments to analyze how GRASP's performance varies with different OOD distributions. - We provide further insights into the rationale behind GRASP and how it interprets the graph structure. - We analyze the relationship between GRASP and the graph size. - We explain the effect of random connectivity on GRASP. - For Reviewer R4's concerns: - We provide a more detailed discussion of existing graph-OOD works and position our paper's contributions. - We add comparative experiments with two latest baselines. - We justify the use of the Bernoulli distribution to model edge weights. - We present a concrete and intuitive example to illustrate the motivation behind our approach. - We offer further explanation for the remarkable improvement of GRASP on FPR in reddit2. - We highlight the key factors in achieving optimal OOD detection performance. Thanks again for your efforts in reviewing our work, and we hope our responses can address any concerns about this work. Warm regards, The Authors of Submission 12283. Pdf: /pdf/6bd660f0a86ebf36fd2a2e1cda575caa0ee2145a.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
DC-Gaussian: Improving 3D Gaussian Splatting for Reflective Dash Cam Videos
Accept (poster)
Summary: This paper tries to reconstruct 3DGS (3D Gaussian Splatting) scenes from dash cam videos, which introduce obstructions on windshields. An adaptive image decomposition is applied to learn transmission images and obstruction images separately. The former are modeled with 3DGS in G3 Enhancement and rendered into 2D space, while the latter are learned in 2D camera space conditioned on car positions in Illumination-aware Obstruction Modeling. Further, an opacity map is optimized to weigh these two images. This method surpasses Zip-NeRF, 3DGS and other runner-ups on public and self-collected datasets. Strengths: Originality: This paper addresses the issue faced in daily life where perfect capturing is hard to get. The setting of dash cam videos is novel, and I'm happy to see the introduction of 3DGS to model such straight-ahead-captured data. Quality: The results seem good, with high scores on rendering and outstanding decomposition of transmission and obstruction. Clarity: I believe the paper and diagrams are easy to read. Significance: This may help autonomous driving, considering its high-speed rendering. Weaknesses: 1. The opacity map is modeled to be independent of car position, but in fact, according to Fresnel's law, opacity is also related to the light distribution, and therefore also related to the position. 2. The two observations in 3.3 are too strong. This somewhat weakens the generalization of the algorithm. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How is the opacity loss in Eq. 8 calculated? 2. Are there any visualizations related to the reconstructed 3DGS? 3. How fast is the rendering? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As stated in this paper, DC-Gaussian has only been evaluated on single-sequence videos. I would also prefer the opacity map to be modeled in more detail. However, considering the good performance, this trade-off on opacity is acceptable.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1) Why is the opacity map modeled to be independent of car position?** Thank you for pointing out that, according to Fresnel's law, the opacity of the window should also depend on the car's position. However, in experiments, we found that most occlusions are caused by objects between the dash camera and the car window, such as mobile phone holders and stains, rather than objects outside the car. Since these objects remain relatively still with respect to the car, it is reasonable to design the opacity map to be independent of the car's position. Given the extremely ill-posed nature of this task, we chose to omit the dependence between the window's opacity and the car's position. **2) The two observations in 3.3 about the static motion prior and varying illuminations of obstructions are too strong.** We acknowledge that the assumptions guiding our design are quite strong. However, this task is highly ill-posed, as evidenced by the suboptimal results of previous methods, such as some reflection removal models trained on paired datasets. These strong assumptions have proven crucial for achieving satisfactory performance. We will explore relaxing these assumptions in future research, enabling adaptation to a wider range of scenarios. **3) Details of $\boldsymbol{\mathcal{L}_{opacity}}$.** This question is answered in the global response. **4) Rendering speed.** This question is answered in the global rebuttal response. **5) Visualization of 3DGS.** We provide a visualization of the generated 3DGS in Figure 1 of the attached PDF. Due to the one-page limit of the attached document, please refer to our supplementary materials for additional visual results. --- Rebuttal Comment 1.1: Comment: After thoroughly reading the PDF and the rebuttal, I would like to thank the author for adequately addressing my concerns. I will maintain my opinion that this is a very interesting setting.
Since exposed cameras are highly susceptible to physical damage, I believe Dash Cam Videos present a potential solution. At least they could provide valuable prior knowledge fusion in automatic driving. As for the inaccuracies in opacity map modeling, there is no significant degradation in the results, so I'm willing to accept this trade-off. However, I hope the author seriously considers this issue in future work. I'll keep my score. --- Reply to Comment 1.1.1: Title: Thanks for the Response Comment: Thank you so much for your valuable feedback. We greatly appreciate your acknowledgment of this setting and your insightful suggestions for improving the modeling of the opacity map. We will conduct more experiments and analysis on the opacity map in our future work.
Summary: This paper presents a new Gaussian Splatting novel view synthesis method specifically for in-vehicle dash cam videos containing various obstructions. To address the challenges caused by obstructions, the proposed method separately represents the transmission and obstruction parts of the camera capture. The transmission part is a 3DGS representation, and the obstruction part is modeled as a 2D representation with an opacity map and a position-dependent illumination-aware neural model. The synthesized transmission images are also used to enhance the geometry prior for 3DGS training. The evaluation results show apparent improvements over previous methods. Strengths: 1. The paper has an interesting motivation for a neglected but meaningful task on driving scene reconstruction with the dash cam. The paper is well-written and easy to follow. The paper also gives adequate references to related work. 2. The proposed solution looks reasonable and very effective on this specific task. I love to see how the authors leverage the key observations to design their solutions. 3. The latent intensity modulation (LIM) is an interesting and smart design for handling varying illumination conditions on the road. Weaknesses: 1. The proposed method is more like an improvement for a specific Gaussian Splatting application. Similar components of the proposed method can be found, in part, in other related work. Therefore, the novelty of this paper is moderate. 2. In Sec. 3.2, the authors reformulate the composition equation to Eq. 4. This equation is a bit incorrect from the perspective of light transport. I feel that $(1 - \phi) I_t + \phi I_o$ would make more sense. 3. In the experiments, the evaluation datasets are still unclear. The following key information is missing in the paper: * The number of scenes used and the number of frames per scene. * Image quality and resolution (since dash cams might have worse image quality). * Do the scenes contain dynamic objects?
* Is there any scene containing opaque occlusions such as a car hood? * Do the drive sequences contain any turning to show more diverse illumination changes? I would suggest the authors show more visual results in the revised version. I am looking forward to the authors' responses. Technical Quality: 3 Clarity: 4 Questions for Authors: * I wonder how well the proposed method works for the reflective car hoods, which are also commonly seen from the dash cam. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations are discussed at the end of the paper, but I don’t think it is adequate, see my comments in the above sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1) Moderate novelty because similar components of the proposed method can be found, in part, in other related work.** We would appreciate it if the reviewer could be more specific about the components. Then we can provide detailed explanations during the discussion period. While both 3DGS and hierarchical hash encoding are tools that have been developed and widely used in related works, the novelty of this paper lies in our effective design of a framework that utilizes physical priors to address this challenging task for the first time. **2) Correctness of equation 4.** Thank you for pointing out this typo. The physically correct version of the composition equation should be: $$ I(u, v, j) = (1 - \boldsymbol{\phi})I_t(u,v,j) + \boldsymbol{\phi}\, I_o(u,v,j) $$ We will correct this typo in the revised version of the paper. **3) Details of the datasets.** Thank you for bringing this up. We will include these details in the revised paper. - The curated BDD100K dataset contains 8 scenes, while the self-collected DCVR dataset includes 10 scenes. Each sequence consists of approximately 300 frames, extracted from 10-second videos at a frame rate of 30 Hz. - Modern dash cameras provide decent image quality. The resolution of the images in BDD100K is around 1296x500. For the DCVR dataset, we used a 4K wide-angle camera (purchased on Amazon, selected based on high sales volume). The original resolution of these images is 3840x2160. We performed camera calibration and undistortion using a chessboard and the OpenCV library before running the SFM algorithm. To facilitate the training process, we downsampled the images 3 times to a resolution similar to that in BDD100K, aligning with popular datasets like KITTI-360 and BDD100K. - 4 out of 8 scenes in BDD100K contain moderate dynamic objects. No scenes in DCVR contain dynamic objects. - As shown in Fig. 7 in the paper, our dataset includes opaque occlusions.
Our dataset does not contain car hoods, which we will discuss later. - 3 out of 8 scenes in BDD100K contain turning. 4 out of 10 scenes in DCVR contain turning. We visually depict the turning trajectories in Fig. 2 of the attached PDF. **4) How well the proposed method works for the reflective car hoods?** Thank you for bringing up the topic of car hoods. While car hoods are indeed prone to strong reflections, they typically occupy only around 10\% of the lower parts of the images, which do not contain much information about the street scenes. Additionally, car hoods are consistently located in the same position in each video. We addressed this issue by manually cropping out the areas containing the car hoods during the dataset curation process. We believe that this can still be done on a large scale, as segmenting car hoods is not a particularly difficult task. **5) Limitations.** Limitations are discussed in the global response. --- Rebuttal 2: Comment: Thanks for your answers to my questions. I am glad to know more details about the evaluated data. These evaluated data are sufficiently diverse to demonstrate the improvements brought by the proposed method. I am happy with the authors' response and will remain or possibly raise my score later. One additional concern I had is similar to the Reviewer si9q's W1, the proposed method is practically limited to the car-mounted dash cam, but it doesn't address the obstruction challenge from the general scenes. This is still a key limiting point. --- Rebuttal Comment 2.1: Title: Thanks for the Response Comment: We are very happy to hear that you are satisfied with our responses. We would like to kindly clarify that our method is specifically designed for autonomous driving scenarios not for general scenes. Dash cameras are widely used in vehicles and dash cam footage offers unique value for autonomous driving. 
Compared to general scenes, dash cam setups in vehicles have some important properties that can be leveraged to tackle reflections and obstructions. As shown in the main paper, obstruction removal methods designed for general scenes show much worse results, since this task is quite ill-posed and achieving good results even in specific scenes is not easy. Thanks for your suggestions; we will further study this task with the goal of achieving good results in broader scenes.
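The composition equation discussed in this thread, $I = (1-\boldsymbol{\phi})I_t + \boldsymbol{\phi} I_o$, amounts to a per-pixel alpha blend of the transmission and obstruction images. A minimal sketch follows; the array shapes and the `compose` helper name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def compose(i_t, i_o, phi):
    """Per-pixel blend of transmission and obstruction images.

    i_t : (H, W, 3) transmission (true scene) image
    i_o : (H, W, 3) obstruction image (reflections, stains, holders)
    phi : (H, W, 1) opacity map in [0, 1];
          0 -> pure scene pixel, 1 -> pure obstruction pixel
    """
    return (1.0 - phi) * i_t + phi * i_o

# A pixel with phi = 0.25 keeps 75% of the scene color:
i_t = np.full((1, 1, 3), 0.8)
i_o = np.full((1, 1, 3), 0.2)
print(compose(i_t, i_o, np.full((1, 1, 1), 0.25)))  # -> 0.65 per channel
```

Because the blend is differentiable in `phi`, an opacity map parameterized this way can be optimized jointly with the scene representation from a photometric loss alone, which is consistent with the self-supervised learning of the opacity map described in the rebuttal.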
Summary: This paper deals with the problem of novel view synthesis in outdoor scenes captured with a dashcam. The authors develop a 3D Gaussian Splatting based method that is robust to common obstructions observed in dashcam videos, mainly due to the way these videos are captured: Mobile-phone holders, reflections and windshield stains create artifacts in the reconstruction as they are not natively modeled by 3DGS and cannot be reconstructed as part of the static 3D scene as they move with the ego-vehicle. The authors propose a learnable opacity map that decomposes the images into obstructions and real scene and a lighting modulation module that adapts obstructions over for each view based on varying illumination across views. These are implemented with a 2D hashgrid and MLP heads. They can then use the opacity map to blend the obstruction and real scene for rendering each view. The authors also use an MVS method with geometric consistency filtering to initialize the 3D Gaussian positions. Strengths: - The proposed decomposition scheme is effective in separating obstruction from real scene as illustrated in Fig. 4/5/6. - The accompanying video results are impressive - The different modules are clearly ablated in Tab. 2 Weaknesses: - There is no description of $\mathcal{L}_{opacity}$, but without this it is not possible to judge whether the learned opacity map is learned in a self supervised manner or if additional labels are needed. - The task setup is similar in spirit to NeRF-in-the-wild. Therefore, to judge the importance of the proposed design opposed to generic in-the-wild methods (NeRF-W, not tailored towards dashcam videos, it would be important to understand how such methods compare on the benchmarks the authors use. However, the only baselines the authors compare to are ZipNeRF, 3DGS and GaussianPro, all of which are not designed for this type of captures. 
- As far as I understand, G3E consists of running PatchMatchNet with a standard geometry consistency filter. There seems to be no innovation in that to me. What is Fig. 8 comparing exactly? Random initialization vs MVS or SfM vs MVS initialization? Technical Quality: 2 Clarity: 2 Questions for Authors: It would be great if the authors can clarify my concerns w.r.t. the loss details, G3E and may provide more insights on the choice of baselines. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The limitation discussion is short and does not point to meaningful failure cases. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1) Details of $\boldsymbol{\mathcal{L}_{opacity}}$.** The formulation of the opacity loss is shown in the global response. The opacity map is learned in a self-supervised way without relying on additional labels. **2) Why only compare with general Novel View Synthesis methods?** As the first work addressing the novel task of NVS for dash cam videos, there are no prior works specifically designed for this task. As a result, we compare ours with several SOTA methods. NeRF-W is designed for NVS using images captured with different cameras and at different times but still cannot handle obstructions like reflections on the windshield. We evaluate NeRF-W on the dash cam videos, and as shown in the table below, our method significantly outperforms NeRF-W in both synthesis quality and rendering speed. Table 2. Comparison with NeRF-W. Tested on BDD100K | Method | fps $\uparrow$ | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | |--------------------|:----------------:|:-----------------:|:-----------------:|:--------------------:| | Nerf in the Wild | 0.18 | 22.58 | 0.708 | 0.395 | | DC-Gaussian (Ours) | 120 | 29.44 | 0.914 | 0.143 | Furthermore, as illustrated in Figure 4 in the attached PDF, although NeRF-W is designed to handle illumination variance and transient objects, it fails to separate obstructions from the images. None of the obstructions are accurately represented in the transient image. This suboptimal result occurs because the obstructions move with the camera rather than being transient parts of the images. The reflections and transmissions are intertwined in a complex manner that NeRF-W's design cannot effectively address, unlike our method. **3) Novelty of G3E.** We use the standard geometry consistency filter from PatchMatchNet to improve novel view synthesis. Some works also leverage MVS priors to help the NVS task. However, simply applying PatchMatchNet to the original input images does not yield optimal results.
In our method, we first run DC-Gaussian without multi-view stereo (MVS) initialization. We then use the trained model to synthesize $\hat{I}_t$, as explained in Equation 7. This strategy effectively suppresses obstructions in the original images and enhances overall performance. In Fig. 8, we compare the results obtained with MVS initialization (a, w/ G3E) against those obtained with SfM initialization (b, w/o G3E); we will make this clearer in the revised paper.

**4) Limitations.** The limitations are discussed in the global response.

---

Rebuttal 2: Title: Follow-Up on Rebuttal Discussion Comment: We value your feedback and are eager to address any further questions or concerns you may have. If you have had a chance to review our response and have additional thoughts, we would greatly appreciate your input.

---

Rebuttal 3: Title: Awaiting Your Response as Deadline Nears Comment: As the deadline approaches, we are eagerly anticipating your response.
Summary: This paper focuses on using dash cam videos for 3D Gaussian Splatting-based outdoor scene reconstruction. To address challenges such as reflections and occlusions on windshields, DC-Gaussian introduces an adaptive image decomposition module to model these effects in a unified manner. Additionally, an illumination-aware obstruction modeling technique is designed to manage reflections and occlusions under varying lighting conditions. Finally, the authors employ a geometry-guided Gaussian enhancement strategy to improve rendering details by incorporating additional geometry priors. Extensive experiments on self-captured and public dash cam videos verify the effectiveness of the proposed method.

Strengths:
(1) The proposed method exhibits promising 3DGS-based reconstruction performance on some self-captured and public dash cam videos.
(2) In general, this manuscript is well-structured and the writing is clear. The authors have effectively organized their ideas, making the content easy to follow and understand.

Weaknesses:
(1) Limited practicality: My main concern is the practical value of this work. Dash cam videos generally provide single-view sequences, and these sparse views would result in the learned 3DGS suffering from limited novel-view synthesis quality. Additionally, the inherent limitations of dash cam videos, such as reflections and occlusions on the windshields, would further degrade rendering quality despite the authors' efforts to mitigate these issues with the developed modules. Therefore, I find it hard to believe that people would choose dash cam videos for high-quality 3D scene reconstruction in real-world applications.
(2) The authors introduce global-shared hash encoding and MLPs for illumination-aware obstruction modeling. The reviewer is curious about the running speed compared to the baseline method, 3DGS. Please provide a comparison of the inference times.

Technical Quality: 3 Clarity: 3

Questions for Authors: See weaknesses.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations and the potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1) Concern about the practical value of using dash cam videos for Novel View Synthesis.** Dash cam videos have unique value for autonomous driving. They deeply reflect the diversity and complexity of real-world traffic scenarios and are used to provide large-scale, diverse driving video datasets in a crowd-sourced manner [1]. Dash cam videos also offer important data about multi-agent driving behaviors [2] and for evaluating the robustness of algorithms under visual hazards [6]. BDD100K [1], a large-scale diverse driving video dataset built from dash cam videos, has been cited more than 2000 times. It has been widely used in many important vision tasks, such as object detection, semantic instance segmentation [8], and multiple object tracking and segmentation [7]. This shows the strong interest of the vision community in dash cam studies. Although most dash cam videos are monocular and prone to obstructions, we believe studying how to leverage this important data source is quite important for many vision tasks, such as novel view synthesis and 3D reconstruction. Moreover, our work is an attempt to tackle obstruction removal from dash cam videos, making it possible to achieve high-quality NVS from dash cam videos, which can provide rich 3D information for autonomous driving.

[1] Yu F, Chen H, Wang X, et al. BDD100K: A diverse driving dataset for heterogeneous multitask learning. CVPR, 2020.
[2] Chandra R, Wang X, Mahajan M, et al. Meteor: A dense, heterogeneous, and unstructured traffic dataset with rare behaviors. ICRA, 2023.
[3] Grand View Research. Dashboard camera market size, share \& trends analysis report by technology (basic, advanced, smart), by product, by video quality, by application, by distribution channel, by region, and segment forecasts, 2024 - 2030, 2023. Accessed: 2024-05-16.
[4] Martin-Brualla R, et al. NeRF in the wild: Neural radiance fields for unconstrained photo collections. CVPR, 2021.
[5] Gao C, et al. Dynamic view synthesis from dynamic monocular video. ICCV, 2021.
[6] Zendel O, Honauer K, Murschitz M, Steininger D, Fernandez Dominguez G. WildDash: Creating hazard-aware benchmarks. ECCV, 2018, pages 402-416.
[7] Luiten J, et al. HOTA: A higher order metric for evaluating multi-object tracking. International Journal of Computer Vision, 129 (2021): 548-578.
[8] Yan B, et al. Universal instance perception as object discovery and retrieval. CVPR, 2023.

**2) Rendering speed.** This question is answered in the global rebuttal response.

---

Rebuttal Comment 1.1: Comment: Thank you for your feedback. However, this rebuttal still does not alleviate my concerns about its practicality, especially for novel-view synthesis (NVS): I acknowledge the potential application value of dashcams in many visual tasks (as the authors have listed), but this does not mean it remains significant in 3D autonomous driving scene reconstruction tasks. Dashcams capture monocular video, and in autonomous driving scenarios, the viewpoints are typically very sparse, inherently unsuitable for NVS. Although the authors have mitigated obstruction interference, synthesizing novel views is also crucial for 3D scene reconstruction. The authors show so-called NVS in Fig. 1 (c); however, this is somewhat misleading. From my observation, the viewpoints in Fig. 1 (b) and (c) are the same, with only obstruction removal being performed, which does not qualify as NVS. Based on my experiments, novel view synthesis from monocular autonomous driving video is very challenging. Could you present your actual NVS results? I suspect they might not be very strong.
Finally, since autonomous driving scene reconstruction is inherently a highly challenging task, I still find it hard to believe that anyone would choose dash cam videos for high-quality 3D scene reconstruction in real-world applications.

---

Reply to Comment 1.1.1: Title: Thanks for the Response Comment: Thanks very much for your further response.

**"The viewpoints of Fig. 1 (b) and (c) are the same, which are not the results of novel view synthesis (NVS)."** We respectfully disagree with this statement. The viewpoints of (b) and (c) must be identical because (b) represents the real-captured image used as a reference, while (c) is the synthesized image produced by our method after obstruction removal. In the context of novel view synthesis, it is standard practice to present synthesized images from a testing viewpoint alongside the corresponding real-captured reference images. Consistent with prior works in this field, we divided the captured images into two sets: the 'train split,' which is used to optimize Gaussian splatting, and the 'test split,' which is excluded from the optimization process but utilized for evaluating the synthesized images. For additional novel view synthesis results, please refer to our supplementary materials, where we also provide videos.

**"Monocular video is inherently unsuitable for NVS."** We acknowledge that novel view synthesis (NVS) from monocular video is a challenging task, but we believe this challenge should not deter our community from making efforts to address it. In fact, NVS and 3D reconstruction from monocular cameras are long-standing problems in computer vision and computer graphics. Even in recent works related to NeRF and 3DGS, numerous studies have focused on using monocular cameras as input, such as [1,2,3,4,5,6,7,8,9,10,11]. Among these, [1, 2] are specifically designed for autonomous driving (AD) scenarios.
In addition, with the rapid advancements in the computer vision community, more challenging tasks, such as generating an entire driving sequence from a single image, have become possible through recent works [12, 13]. Notably, GenAD [13], trained on monocular videos from YouTube, leverages a substantial amount of dash cam footage. This not only underscores the significant value of dash cam videos but also motivates us to explore novel view synthesis (NVS) using less constrained data sources, such as monocular dash cam videos.

[1] Zhou H, et al. HUGS: Holistic urban 3D scene understanding via Gaussian splatting. CVPR, 2024.
[2] Lu F, et al. Urban radiance field representation with deformable neural mesh primitives. ICCV, 2023.
[3] Gao C, et al. Dynamic view synthesis from dynamic monocular video. ICCV, 2021.
[4] Wu T, et al. D^2NeRF: Self-supervised decoupling of dynamic and static objects from a monocular video. NeurIPS, 2022.
[5] Tian F, Du S, Duan Y. MonoNeRF: Learning a generalizable dynamic radiance field from monocular videos. ICCV, 2023.
[6] Das D, et al. Neural parametric Gaussians for monocular non-rigid object reconstruction. CVPR, 2024.
[7] You M, Hou J. Decoupling dynamic monocular videos for dynamic view synthesis. IEEE Transactions on Visualization and Computer Graphics, 2024.
[8] Tretschk E, et al. Non-rigid neural radiance fields: Reconstruction and novel view synthesis of a dynamic scene from monocular video. ICCV, 2021.
[9] Fu Y, Misra I, Wang X. MonoNeRF: Learning generalizable NeRFs from monocular videos without camera poses. ICML, 2023.
[10] Park B, Kim C. Point-DynRF: Point-based dynamic radiance fields from a monocular video. WACV, 2024.
[11] Wang F, et al. PlaNeRF: SVD unsupervised 3D plane regularization for NeRF large-scale scene reconstruction. 3DV, 2023.
[12] Blattmann A, et al. Align your latents: High-resolution video synthesis with latent diffusion models. CVPR, 2023.
[13] Yang J, et al. Generalized predictive model for autonomous driving. CVPR, 2024.
Rebuttal 1: Rebuttal: We are grateful to all reviewers for their insightful and constructive suggestions. We are glad that reviewers found: (1) the problem setting is novel (Reviewer QCPG) and meaningful (Reviewer zoS8); (2) the proposed method is interesting, smart (Reviewer zoS8), and effective (Reviewer zoS8, N7Lh); (3) the results look promising (Reviewer si9q), impressive (Reviewer N7Lh), good, and outstanding (Reviewer QCPG). Each comment from the reviewers has been replied to individually. Here we first respond to common questions.

**1) Rendering speed.** As shown in the table below, DC-Gaussian achieves 120 fps at a resolution of 1920x1080 on an RTX 3090 GPU. Although ours is slightly slower than 3DGS, the speed still enables real-time rendering, which is crucial for applications such as autonomous driving simulators.

Table 1. Rendering Speed Analysis. Tested on one NVIDIA RTX 3090 GPU with image resolution $1920\times1080$

| Method | fps $\uparrow$ |
|--------------------|:----------------:|
| NeRF in the Wild | 0.18 |
| ZipNeRF | 0.27 |
| GaussianPro | 210 |
| 3DGS | 155 |
| DC-Gaussian (Ours) | 120 |

**2) Details of $\mathcal{L}_{opacity}$.** The formulation of $\mathcal{L}_{opacity}$ is $\sum_{(u, v)}\Vert \phi(u, v) \Vert_{1}$, where $(u, v)$ are the image coordinates. This opacity loss encourages the opacity map to have the minimum area that still satisfies the optimization. This design is based on the prior knowledge that opaque objects typically occupy only small portions of the windshield.

**3) Limitations.** Our work focuses on obstruction removal during Novel View Synthesis (NVS) for dash cam videos. Our method is not specifically designed to improve performance on multiple sequences or dynamic scenes. These limitations have been discussed in the paper. Here, we provide additional experimental results in Figure 3 of the attached PDF.
The results demonstrate that dynamic objects do not significantly impact the performance of obstruction removal. When dynamic objects move slowly, our method still produces reasonable results. Recent efforts on modeling fast-moving dynamic objects, such as Street Gaussians [1], have achieved impressive results. We plan to incorporate these techniques into our method in future research to enable robust dynamic modeling.

[1] Yan, Yunzhi, et al. "Street gaussians for modeling dynamic urban scenes." ECCV, 2024.

Pdf: /pdf/0b4d9792321cc0a63de973bd936b26b7c2125d4e.pdf
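The $\mathcal{L}_{opacity}$ formulation given in the global response (an L1 penalty summed over the per-pixel opacity map $\phi(u, v)$) can be sketched as follows. This is a minimal illustration only; the map shape and values are our own toy assumptions, not taken from the paper:

```python
import numpy as np

def opacity_loss(opacity_map):
    """L1 sparsity penalty on a per-pixel opacity map phi(u, v).

    Summing |phi| over all image coordinates encourages the opacity
    map to be nonzero only over the small windshield regions actually
    covered by opaque obstructions.
    """
    return np.abs(opacity_map).sum()

# Toy example: a 4x4 opacity map with one small 2x2 obstruction blob.
phi = np.zeros((4, 4))
phi[1:3, 1:3] = 0.5
loss = opacity_loss(phi)  # 4 cells * 0.5 = 2.0
```

In practice this term would be added, with a weighting coefficient, to the photometric reconstruction loss during training.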
NeurIPS_2024_submissions_huggingface
2024
Data-faithful Feature Attribution: Mitigating Unobservable Confounders via Instrumental Variables
Accept (poster)
Summary: The paper introduces the problem that unobservable confounders in data can mislead and negatively affect the quality of feature attribution explanations. Feature attribution methods do not consider unobservable confounders, which poses the risk of misinterpreting the attribution scores found; this could have consequences, especially in high-risk application domains where users rely on feature attribution to calibrate their trust in the model. To address the issue, the authors propose to decouple input features affected by unobservable confounding factors from those that are not. The latter are then used to train a confounder-free model, for which they show that the feature attribution methods SHAP and Integrated Gradients are faithful to the true data generation process and robust.

Strengths:
- The paper addresses a novel perspective on feature attribution: unobserved confounders in the data generation process can mislead feature attribution methods into assigning inflated or deflated scores to features that are affected by unobserved confounders. I have not come across this perspective of connecting the data generation process to local explanations before. I believe it is an interesting viewpoint, particularly for tabular data use cases where causal relations may be discoverable.
- The paper uses an example to illustrate the problem it is addressing and the approach to solving it, which helps the reader follow the methodology.
- The authors provide the source code for their experiments.

Weaknesses:
- The paper is quite inline-math-text heavy, which limits its readability. I believe the readability and therefore clarity can be improved by restructuring math-heavy paragraphs (room for this could be made by e.g. shortening the introduction).
- Some things were unclear, please see the questions.
- Minor comments:
  - Lines 146-154: The long equations are slightly difficult to follow when written inline, and would be clearer using \equation or \align.
  - Lines 156-159: Thought experiment results could be shown in a table, for more clarity.
  - Line 191: it’s -> it is
  - Please increase the font size in Figures 2 and 3 for better readability.

Technical Quality: 3 Clarity: 3

Questions for Authors:
- I did not fully understand how you identified the unobservable confounders in the real dataset use case. How could practitioners identify unobservable confounders in their models?
- Feature attribution methods explain a model output $f(x)$ by assigning attribution scores to the input instance $x$. I am unsure if I may have misunderstood your approach, but by fitting a model with different data (one time with all variables, and once without variables affected by unobserved confounders), are you not training and explaining a different model? If this is the case, I see your contribution rather as providing a training procedure on only instrumental variables and without confounding variables, leading to data-faithful attribution. Currently, the paper's contribution often reads as if you are proposing a new attribution method (i.e. referring to your approaches as IV-SHAP and IV-IG). Could you please clarify?
- By excluding variables affected by unobserved confounders in training, does the model's predictive performance (e.g. measured in accuracy) suffer, or does the task become easier for the model? Is there a trade-off between data-faithful feature attribution and model performance?
- How similar are the resulting attribution scores of IV-SHAP and IV-IG? It has previously been found that local explanation methods like feature attribution often disagree because they only locally approximate the model (Krishna et al., 2022: The Disagreement Problem in Explainable Machine Learning). Does removing the confounding variables lead to less disagreement between feature attributions?
- In feature attribution, we are often interested in the features with the largest positive or negative attribution scores as we want to understand which features a model relied on most for its output. In this case, the faithful ranking of the scores matters more than the score itself. I'm curious, in your experiments, did the ranking of scores change with the instrumental variable approach? How much does being faithful to the data matter in practice if not? - Out of curiosity: What do you think, how would this approach / does this approach transfer to high-dimensional problems like image or text classification? I will happily adjust the score when I understand the paper better. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations and broader impact are adequately discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging and insightful comments. Please find our response below.

**Response to Weaknesses**
* W1: We will follow your suggestions to enhance the readability.
* Minor comments: Fixed. Thanks!

**Response to Questions**
* Q1: In our paper, when addressing the impact of unobserved confounders, it is not necessary to identify what these confounders are. Instead, we only need to determine their existence and, if present, employ our method to mitigate their effect on feature attribution. For example, we can use potential instrumental variables for detection. When we intervene on an instrumental variable, we can then analyze whether the correlation between the variable $\tilde{\boldsymbol{x}}$ and the target feature aligns with the change in the target feature caused by the instrumental variable. If the correlation is strong, but the change in the target feature caused by the instrumental variable is small, this suggests the presence of unobserved confounders that inflate the correlation between $\tilde{\boldsymbol{x}}$ and the target feature [1].
* Q2: Feature attribution indeed involves explaining a model output $f(x)$ by assigning attribution scores to the input instance $x$. However, our focus is on data-faithful feature attribution, i.e., we are not trying to explain the output of a specific model but trying to explain the target feature $y$ through a model. For example, consider a medical scenario where a patient is interested in understanding the impact of all features on a disease, rather than just the prediction output of a particular diagnostic model. In data-faithful attribution, the trained model is used to estimate feature subset utilities because the real-world data generation equation is unknown. Thus, the model serves as a tool for data-faithful attribution. We propose a training method to reduce confounder influence and enhance data-faithful attribution. We will clarify this in the paper.
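The detection heuristic described in Q1 — a strong observed association between the confounded feature and the target alongside a small instrument-induced effect signals an unobserved confounder — can be illustrated with a toy simulation. The data-generating process and all coefficients below are our own illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical data-generating process: z is an instrument,
# u is an unobserved confounder affecting both x and y.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = z + 2.0 * u + rng.normal(scale=0.1, size=n)            # confounded feature
y = 1.0 * x + 3.0 * u + rng.normal(scale=0.1, size=n)      # true effect of x is 1.0

# Naive association: the slope of y on x is inflated by u.
naive = np.cov(x, y)[0, 1] / np.var(x)

# Instrument-based (Wald) estimate: cov(z, y) / cov(z, x)
# recovers the causal effect because z is independent of u.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

# naive is far above 1.0 while iv is close to 1.0: the gap between
# the strong observed association and the small instrument-induced
# effect flags the presence of an unobserved confounder.
```

A large gap between `naive` and `iv` is exactly the signal the rebuttal describes for deciding whether the proposed mitigation is needed.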
* Q3: Our experiments suggest that the model's accuracy might be affected, indicating a trade-off in most scenarios. When we train the model using features that have been re-estimated to remove the influence of confounders, the model may lose the correlation between the predicted output and the confounders, which also affect the target feature $y$. We compared the Mean Squared Error (MSE) of our model with that of a predictive model in the experiment and found our loss to be slightly higher. We note that our goal is not to maximize the predictive accuracy of the model but rather to ensure that the attribution results are more faithful to the data. We will clarify this in the paper.
* Q4: In our experiments, the average change rates between IV-SHAP and IV-IG scores on both synthetic and real-world datasets are below 10\%. SHAP and IG are inspired by the Shapley and Aumann-Shapley values [2], respectively, and share many common properties [3]. In contrast, methods like LIME and DeepLIFT follow different principles and produce different values. Thus, merely removing confounders is unlikely to resolve inconsistencies between LIME/DeepLIFT and IV-SHAP/IV-IG. Discrepancies among attribution methods also arise from how they handle correlations between input features. For example, on-manifold Shapley [4] uses conditional expectations to supplement unselected features, while causal Shapley [5] uses interventions. These methods can be combined with ours, but addressing confounders alone cannot resolve differences due to varying approaches to feature correlation. Our paper focuses on the impact of unobserved confounders, and we suggest choosing the appropriate method for handling feature correlations based on the specific context.
* Q5: The rankings of scores obtained from our methods and existing methods are different. We statistically analyze the proportion of attribution score ranking changes for IV-SHAP and IV-IG compared to SHAP and IG on Dataset A when $\rho=0.2$.
The results are shown in the table below.

| Feature Deviation | 0.125 | 0.250 | 0.375 | 0.500 |
|----------|---------|---------|---------|---------|
| IV-SHAP | 82.7% | 87.1% | 92.3% | 94.2% |
| IV-IG | 58.4% | 69.6% | 77.1% | 86.3% |

In the Griliches76 dataset experiments, 57.1\% of the attribution rankings changed. Even when the rankings remain the same, changes in attribution scores can offer valuable insights into feature contributions. For example, if the attribution score for education decreases, even without a change in ranking, it could indicate that the return on education for income is lower than initially expected.
* Q6: The proposed feature attribution approach may be feasible for image and text classification. Images often encounter confounders like lighting. If a model is trained where most cat images are in strong lighting and dog images in weak lighting, the attribution for a cat image taken in weak lighting may be misleading. Identifying suitable instrumental variables to mitigate the confounders can make feature attribution more reliable and intuitive. This is an insightful extension, and we will explore it further.

References
[1] Baiocchi M, Cheng J, Small D S. Instrumental variable methods for causal inference. Statistics in Medicine, 2014.
[2] Tauman Y. The Aumann-Shapley prices: a survey. The Shapley Value, 1988.
[3] Sundararajan M, Najmi A. The many Shapley values for model explanation. ICLR, 2020.
[4] Frye C, de Mijolla D, Begley T, et al. Shapley explainability on the data manifold. ICLR, 2020.
[5] Heskes T, Sijben E, Bucur I G, et al. Causal Shapley values: Exploiting causal knowledge to explain individual predictions of complex models. NeurIPS, 2020.

---

Rebuttal Comment 1.1: Comment: Thank you for the clarifications and additional analysis for Q5. There are only a few follow-up questions and comments:

Q1: Could you please confirm whether my understanding is correct?
Unobservable confounders are not required to be found. By being able to observe their effects, it is possible to identify the instrumental variables and make use of them to train a model with data-faithful feature attributions.

Q2: Could you please confirm whether my understanding is correct? In this work, the focus is not on providing a way to achieve data-faithful feature attribution explanations for the sake of explaining model outputs, but on *understanding* the relationship between data and target values. The model trained on instrumental variables explains this relationship, as feature attribution applied to this model is data-faithful. Can it be seen as an analysis tool for phenomena, e.g. a disease, as in the example you mentioned?

Q6: Just a comment: Feature attribution in image classification usually treats each pixel as a feature, but I like your connection to lighting conditions. Perhaps data-faithful concept attribution fits the approach of confounders as well; what do you think?

---

Rebuttal 2: Title: Response to the official comment of Reviewer 6T6P Comment: Thank you so much for your insightful follow-up questions and comments.
* Q1: Yes, your understanding is correct.
* Q2: Yes, your understanding is correct. Our proposed approach can be used as a tool for analyzing phenomena, making it particularly suitable in medical and financial fields.
* Q6: Yes, data-faithful concept attribution is a suitable scenario for applying the approach of confounders, and we appreciate this idea. The example mentioned in the rebuttal can be effectively modeled by treating the concept of lighting conditions as an unobservable confounder. Besides, we also think that data-faithful feature attribution is meaningful in the presence of pixel-level confounders. In adversarial attacks, a well-known example is adding invisible pixel-level noise to a panda image, leading the model to misclassify the image as a gibbon [1].
When we conduct feature attribution on such an image, the noisy pixels appear important, even though humans cannot see their significance. If we could train a model to eliminate the influence of such noise, the attribution would better match a human's judgment of the panda image and the shape of the panda. We think that similar noise confounders might exist in image classification; if we removed them, the feature attribution results of the model would be more data-faithful, which might imply better model robustness. Of course, the example of data-faithful attribution in images mentioned above differs somewhat from our paper's focus. Firstly, the method for removing pixel-level confounders in images might not use instrumental variables. Secondly, data-faithful feature attribution, as mentioned here, aims to help humans judge the reliability of a model, since in image classification model performance is usually the main concern. This performance-oriented goal differs from our paper's goal of understanding the relationship between input features and target feature values. We believe this is an interesting topic that deserves more detailed thought in terms of motivation and methodology. We hope our responses have clarified your questions and helped improve your opinion of our work.

Reference
[1] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and Harnessing Adversarial Examples. ICLR, 2015.

---

Rebuttal Comment 2.1: Comment: Yes, thanks for the confirmation! I now understand the paper better and see the proposed method as a promising direction to recover causal effects in data. I'll raise my score from 4->7. I would like to ask the authors to please make the scope and problem clearer in the final version of the paper, in order to avoid confusion with explanation methods that aim at explaining model outputs (like my case).

---

Reply to Comment 2.1.1: Comment: Thank you very much for your comments.
We appreciate your insights and will certainly follow your suggestions to make the scope and problem clearer in the final version.
Summary: The paper addresses the challenge of unobservable confounders in feature attribution methods, which can lead to misinterpretations. The authors propose a novel approach called "data-faithful feature attribution," which trains models free of confounders using instrumental variables (IVs) to ensure that feature attributions are faithful to the data generation process.

Strengths:
1. The introduction of a confounder-free model training approach using instrumental variables is a significant advancement. This method addresses a critical gap in feature attribution by ensuring data fidelity.
2. The paper provides a comprehensive theoretical analysis that clearly explains the impact of unobservable confounders on feature attribution and how the proposed method mitigates this issue.
3. The authors validate their approach on both synthetic and real-world datasets, showing up to a 67% improvement in attribution accuracy over baseline methods. This demonstrates the practical applicability and effectiveness of the method.
4. The proposed method significantly reduces errors in feature attribution, ensuring that the contributions of input features to the target feature are accurately represented.

Weaknesses:
1. The two-stage training process and the use of advanced techniques like IVs may be complex to implement and require significant computational resources, potentially limiting the method's practical usability.
2. The theoretical derivations are based on the assumption that the influence of unobserved confounders on the target features is linear. This assumption may not hold in all real-world scenarios, limiting the method's applicability.

Technical Quality: 3 Clarity: 3

Questions for Authors:
1. How can one reliably identify suitable instrumental variables in practice? What criteria should be used to ensure their effectiveness?
2. How does the method perform with non-linear effects of unobserved confounders?
Have you conducted any experiments or simulations to demonstrate its robustness in such scenarios?
3. Have you conducted sensitivity analyses to evaluate the robustness of the method under different conditions, such as varying degrees of distributional shifts or the quality of IVs?
4. How does the proposed method scale with larger datasets and higher-dimensional data? Are there any benchmarks or performance metrics available to demonstrate its scalability?

Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Please refer to the above questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging and insightful comments. Please find our response below.

**Response to Weaknesses**
* W1: The implementation and computational complexity of the two-stage training process are similar to the model training in model-faithful feature attribution. In the first stage, we need to train a DNN classifier $\hat{M_\phi}$ if $\tilde{\boldsymbol{x}}$ is discrete (or a mixture density network $\hat{M_\phi}$ if $\tilde{\boldsymbol{x}}$ is continuous) to approximate the distribution of the unconfounded $\tilde{\boldsymbol{x}}$. Thus, the first stage is easy to implement and does not require significant computational resources. The second stage of model training is the same as the training of the predictive model in model-faithful feature attribution, except that the feature values of $\tilde{\boldsymbol{x}}$ are replaced with the re-estimated unconfounded values. The re-estimated unconfounded values are sampled from the first-stage trained model, and this sampling has little influence on the implementation and computational complexity of the second-stage model training. Therefore, the two-stage training process does not limit the method's practical usability. We will clarify this in the paper.
* W2: We conduct the theoretical derivations under the assumption that the influence of unobserved confounders on the target features is linear. Under the linear assumption we can give an exact formula for the error, while other types of influence are difficult to decouple in theoretical analysis. However, our method can still work well in practical scenarios where the effect of the confounders is not linear. In the real datasets used in Section 5.2, the Griliches76 dataset and the Angrist and Krueger dataset, the confounder Ability has a non-linear impact on income. The experimental results still demonstrate significant effectiveness.
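The two-stage procedure described in W1 can be sketched on synthetic data, with ordinary least squares standing in for the DNN / mixture density network of the first stage and for the second-stage predictive model. The data-generating process and every coefficient below are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical setup: instrument z, unobserved confounder u,
# confounded feature x_tilde, clean feature x2, target y.
z = rng.normal(size=n)
u = rng.normal(size=n)
x2 = rng.normal(size=n)
x_tilde = z + u + rng.normal(scale=0.1, size=n)
y = 2.0 * x_tilde + 1.0 * x2 + 2.0 * u + rng.normal(scale=0.1, size=n)

# Stage 1: fit x_tilde from the instrument (a linear stand-in for the
# first-stage model) and sample/re-estimate the unconfounded values.
beta1 = np.cov(z, x_tilde)[0, 1] / np.var(z)
x_hat = beta1 * z  # unconfounded re-estimate of x_tilde

# Stage 2: train the predictive model on (x_hat, x2) instead of
# (x_tilde, x2); attribution is then run on this confounder-free model.
X = np.column_stack([x_hat, x2, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef[0] recovers the true effect of x_tilde (2.0) rather than the
# confounder-inflated slope a direct regression on x_tilde would give.
```

Under this toy linear setup the procedure reduces to two-stage least squares; in the paper's setting the first-stage model is a DNN or mixture density network rather than a linear fit.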
**Response to Questions** * Q1: To identify suitable instrumental variables (IVs), we can first conduct a preliminary screening, such as LASSO regression, to assess the correlation between potential IVs and confounded features. Variables with non-zero coefficients may be correlated with $\tilde{\boldsymbol{x}}$ and are considered potential IVs. Next, we can use causal interventions to empirically verify these IVs, ensuring their effectiveness through statistical tests, such as checking if the F-statistic exceeds a threshold or using Hansen tests for validity. For a more in-depth study on IV identification, we recommend the seminal works [1][2][3]. Although identifying IVs is beyond this paper's scope, we will add a discussion to make this paper more self-contained and accessible to readers. * Q2: Our method can effectively address the non-linear effects of unobserved confounders. In experiments with real datasets, such as Griliches76 and Angrist and Krueger, the confounder Ability has a non-linear impact on income. For example, while factors like IQ positively correlate with income, the relationship is not linear. Although we assume linear effects of confounders for the convenience of theoretical analysis, our method remains effective for non-linear effects. This is because we re-estimate $\tilde{\boldsymbol{x}}$ using instrumental variables in the first stage, allowing us to obtain unconfounded $\tilde{\boldsymbol{x}}$ for attribution, regardless of how unobserved confounders influence $Y$. * Q3: In Section 5.1, we examined distribution shifts using synthetic datasets because real datasets do not allow control over external confounders and instrumental variables (IVs). In our synthetic datasets, we use $\rho$ to control the magnitude of unobserved confounders, and $\tilde{\boldsymbol{x}}$ and the unobserved confounders are correlated. Consequently, the distribution of $\tilde{\boldsymbol{x}}$ also changes.
Figures 1 and 2 demonstrate that our method's effectiveness increases with confounder influence. Notably, even at $\rho=0.2$, our method is effective. Additionally, experiments show that higher-quality IVs reduce confounder-induced errors more effectively. For example, we test the average error of IV-SHAP on Dataset A when $\rho=0.2$ with $\psi \sim (0,0.5)$, $\psi \sim (0,1)$, and $\psi \sim (0,2)$, where $\psi$ controls the correlation of IVs with $\tilde{\boldsymbol{x}}$. The results are shown in the table below.

| Feature Deviation | 0.125 | 0.250 | 0.375 | 0.500 |
|----------|---------|---------|---------|---------|
| $\psi \sim (0,0.5)$ | 0.034 | 0.174 | 0.268 | 0.397 |
| $\psi \sim (0,1)$ | 0.018 | 0.057 | 0.166 | 0.280 |
| $\psi \sim (0,2)$ | 0.012 | 0.039 | 0.127 | 0.204 |

* Q4: Our method scales well with larger datasets and higher-dimensional data. While larger datasets increase the training cost of the two-stage model, they do not complicate implementation. Training costs roughly twice as much as for a typical predictive model, but remains manageable. Besides, larger datasets increase the robustness and stability of the two-stage model. For higher-dimensional data, the computational demand increases, so we introduced approximation methods for IV-SHAP and IV-IG (see Appendix Section F), which we validated with the Spambase dataset. Extending our experiments to a 100-dimensional synthetic dataset, we found our method remained effective. Since data-faithful feature attribution is mainly used for tabular data to help understand feature relationships, it typically does not involve very high dimensions. The average attribution ratio can still be used to evaluate scalability in the experimental section. References [1] Angrist, J. D., \& Imbens, G. W. Identification and estimation of local average treatment effects. Econometrica, 1994. [2] Baiocchi, M., Cheng, J., \& Small, D. S. Instrumental variable methods for causal inference. Statistics in Medicine, 2014.
[3] Murray, M. P. Avoiding invalid instruments and coping with weak instruments. Journal of Economic Perspectives, 2006.
Summary: This paper addresses the problem of estimating causal effects with feature attribution, by applying SHAP and Integrated Gradients (IG) to two-stage models with instrumental variables. On synthetic datasets, the proposed methods IV-SHAP and IV-IG can better recover the ground-truth causal effects, compared to SHAP and IG applied to predictive models. On real-world datasets, IV-SHAP and IV-IG are more aligned with prior knowledge than SHAP and IG. Strengths: - This paper addresses an important problem in the field of feature attribution: gaining causal insights from the attribution scores. Although it seems intuitive that a predictive model cannot provide causal insights when paired with feature attribution methods, this paper formally illustrates the need to account for confounding. - The writing is clear. The pronounced focus on data fidelity in the introduction makes it clear that model-centric interpretation is not the goal of this paper. The conceptual examples are illuminating. Weaknesses: - It is unclear how the expectations with respect to $\hat{M}_{\phi}(\tilde{x} | \bar{x}_t, \psi_t)$ in Equations (4) and (6) are empirically computed in the experiments. The (approximate) computation can potentially be expensive. - What is the impact on IV-SHAP and IV-IG when the three assumptions of instrumental variables are violated? The paper lacks a discussion on this. - What is the impact on IV-SHAP and IV-IG when the input features are correlated? The paper lacks a discussion on this. Technical Quality: 3 Clarity: 3 Questions for Authors: - Besides SHAP and IG, is it appropriate to apply other feature attribution methods to a two-stage model in order to estimate causal effects? - For the real-world datasets, what are the baseline values for IV-IG, and how are features removed in IV-SHAP? Minor comments: - There are typos in lines 172 and 178: $\frac{1}{\mathcal{N}}$ should be replaced with $\frac{1}{|\mathcal{N}|}$. 
- I recommend omitting the $D$ in $\mathbb{E}_D$ of Proposition 2 for consistency. - The use of Hoeffding's inequality in the proof of Lemma 7 requires a boundedness condition on $h(u_i)$, which is reasonable to assume but should be mentioned nonetheless. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations of the proposed approach itself seem adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging and insightful comments. Please find our response below. **Response to Weaknesses** * W1: For the case of discrete $\tilde{\boldsymbol{x}}$, $\hat{M_\phi}$ is trained as a DNN classifier with a softmax output, where each output element represents the probability of the corresponding category. We can enumerate the probability of each category and compute the weighted sum over all categories when we need to compute the expectation term with respect to $\hat{M_\phi}\left(\tilde{\boldsymbol{x}} \mid \overline{\boldsymbol{x_t}}, \psi_t \right)$. The computational complexity is $O(c*n)$, where $c$ is the number of categories and $n$ is the number of sampled data tuples in model training. For the case of continuous $\tilde{\boldsymbol{x}}$, we implement a mixture density network to fit the distribution of $\hat{M_\phi}\left(\tilde{\boldsymbol{x}} \mid \overline{\boldsymbol{x_t}}, \psi_t\right)$. We use 20 random samples from the distribution output by the mixture density network to approximate the expectation, which has demonstrated great stability in our experiments. Thus, the computational complexity is $O(n)$. Both cases can be computed efficiently given typical computational resources. We will add the details in the paper. * W2: The effectiveness of IV-SHAP and IV-IG may be reduced when the three assumptions of instrumental variables are violated. However, the extent of this reduction depends on how severely the assumptions are violated. To mitigate the influence of unobservable confounders, the instrumental variables need to have a more direct influence on $\tilde{X}$ while having less correlation with other features. Additionally, the instrumental variables should have little or no direct influence on $Y$. The instrumental variables are still effective at reducing the impact of unobservable confounders if the assumptions are only mildly violated.
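A minimal sketch (hypothetical values, not the authors' code) of the two expectation approximations described in W1: exact enumeration over softmax categories in the discrete case, and a Monte Carlo average of samples in the continuous case:

```python
import numpy as np

def expectation_discrete(probs, category_values):
    # Discrete case: softmax probabilities over c categories; the conditional
    # expectation is an exact weighted sum, costing O(c) per data tuple.
    return float(np.dot(probs, category_values))

def expectation_continuous(sample_fn, n_samples=20):
    # Continuous case: average n_samples draws from the mixture density
    # network's output distribution to approximate the expectation.
    return float(np.mean([sample_fn() for _ in range(n_samples)]))

probs = np.array([0.2, 0.5, 0.3])    # hypothetical softmax output
values = np.array([0.0, 1.0, 2.0])   # values of the c categories
print(expectation_discrete(probs, values))  # weighted sum 0.2*0 + 0.5*1 + 0.3*2

rng = np.random.default_rng(0)
print(expectation_continuous(lambda: rng.normal(1.0, 0.1)))
```

The discrete enumeration is exact, while the sampling estimate converges to the true mean as `n_samples` grows.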
For example, in our experiment with the Griliches76 dataset, the chosen instrumental variable, Parental Education, has a weak correlation with the unobservable confounder, Ability, as we analyzed in Appendix G.2. However, the influence of parental education on child education is much stronger, as demonstrated by the statistical characteristics of the real dataset in Appendix G.2. Despite this mild violation, the effectiveness of IV-SHAP and IV-IG remains significant. We will add a discussion in the paper. * W3: The correlation of input features may affect the data-faithfulness of IV-SHAP and IV-IG. We can combine methods that deal with correlated input features with the two-stage model to better capture these correlations. For example, on-manifold Shapley [1] can be used to account for feature correlations, while causal Shapley [2] can be applied if the causal structure of the input features is known. We note that our primary focus in this paper is on addressing the influence of unobservable confounders. Therefore, we chose the two most representative feature attribution methods, SHAP and IG, to formulate the problem. We will include this discussion in the paper. **Response to Questions** * Q1: Other feature attribution methods can be applied according to the demands. For example, when considering the correlation between input features, we can apply on-manifold Shapley or causal Shapley to our two-stage model if sufficient prior knowledge is available. Our methods focus on addressing unobservable confounders by re-estimating $\tilde{\boldsymbol{x}}$. Therefore, we can combine other feature attribution methods to handle the input features after re-estimating $\tilde{\boldsymbol{x}}$ with the two-stage model. * Q2: We experimented with multiple baseline values by subtracting specific values from the features $\tilde{\boldsymbol{x}}$ and $\overline{\boldsymbol{x}}$ in both synthetic and real datasets.
Lines 283-286 and 311-312 in Section 5 provide the details of experimental setting. The removed features are replaced with the average values from the dataset in IV-SHAP, which is consistent with the approach in SHAP. **Response to Minor comments** * M1: Fixed. Thanks! * M2: We have omitted the $D$ following your suggestion. * M3: $h(u_i)$ is the weighted gradients of a deep neural network, which is continuously differentiable and bounded in our setting. We will clarify it in the paper. References [1]Frye C, de Mijolla D, Begley T, et al. Shapley explainability on the data manifold[C]. International Conference on Learning Representations (ICLR). [2]Heskes T, Sijben E, Bucur I G, et al. Causal Shapley values: Exploiting causal knowledge to explain individual predictions of complex models[J]. Advances in neural information processing systems (NeurIPS). --- Rebuttal Comment 1.1: Comment: Thank you for clarifying my questions. I appreciate the authors for explaining that the performance of IV-SHAP and IV-IG can depend on (i) whether the three assumptions of instrumental variables are violated and (ii) the correlations between input features. I don't consider these as weaknesses specific to IV-SHAP and IV-IG, but rather general problems in causal inference. I encourage the authors to highlight the importance of (i) and (ii) so readers have such awareness when using IV-SHAP and IV-IG. For example, ablation studies with synthetic datasets can help readers assess the impact of (i) and (ii) on the effectiveness of IV-SHAP and IV-IG. Overall, the authors have addressed my questions, so I maintain my original score of 6. --- Rebuttal 2: Comment: Thank you very much for your valuable comments. We appreciate your emphasis on the assumptions of instrumental variables and correlations of input features, which we agree are crucial factors for the effectiveness of IV-SHAP and IV-IG. We will make sure to highlight these points in the final version.
null
null
Rebuttal 1: Rebuttal: We are very grateful to the reviewers for their encouraging and insightful comments. To address the concerns, we provide detailed point-to-point responses as follows.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Bridge the Points: Graph-based Few-shot Segment Anything Semantically
Accept (spotlight)
Summary: This paper proposes a graph-based approach for Few-shot Semantic Segmentation (FSS) based on the Segment Anything Model (SAM). The authors propose a Positive-Negative Alignment module to select point prompts and a Point-Mask Clustering module to align the granularity of masks and selected points. The proposed method surpasses state-of-the-art generalist models on COCO-20i and LVIS-92i datasets, and also performs well on One-shot Part Segmentation and Cross Domain FSS datasets. Moreover, the proposed method is hyperparameter-free and efficient. Strengths: 1. How to conduct prompt engineering for vision foundation models (SAM) is still a challenging problem in the research community. This work fills this gap. 2. The proposed method is systematic, consisting of three modules: Positive-Negative Alignment, Point-Mask Clustering, and Positive and Overshooting Gating. Each module is well-designed and novel. 3. The paper is well-written and easy to follow. The authors accurately describe the well-designed modules. 4. The experiments are comprehensive and the results are convincing. The proposed method significantly outperforms the SOTA on COCO-20i. The cross-domain validation further demonstrates the generalization and robustness of the method. 5. The proposed method is hyperparameter-free, which is a great advantage and increases the practical value of the method. 6. I briefly checked the code, and the submitted code is complete, which increases the reproducibility of the paper. Weaknesses: 1. In L173, "This is the precondition for the efficacy of our PMC module, as even slight errors could significantly impact the clustering accuracy". The authors should provide some visualization examples or experiments to verify this statement. 2. In Table 4, I notice weak or strong connected components are not clearly defined and the differences are also very small. The authors should provide more analysis to explain this. And what is the hyperparameter for K-means++?
Have you tried different hyperparameters for K-means++? 3. I notice the authors conduct ablation studies on different datasets for different components. I hope the authors can provide more experiments on part segmentation, since the previous SOTA, Matcher, requires different hyperparameters for part segmentation. Conducting ablation on part segmentation can further prove the hyperparameter-free advantage of the proposed method. Technical Quality: 3 Clarity: 3 Questions for Authors: Can this method be applied to other tasks such as video object segmentation or instance segmentation? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations of the proposed method in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to weaknesses 1. >In L173, "This is the precondition for the efficacy of our PMC module, as even slight errors could significantly impact the clustering accuracy" The authors should provide some visualization examples or experiments to verify this statement. To verify this statement, we presented several experiments and visualizations. Specifically, 1) the visualization results in Fig.3 and Fig.12 illustrate the clustering results of the images. While the PNA module provides reliable point prompts, it is noted that some points fall *slightly outside* the target area. These misplacements may lead to generating masks for adjacent objects. Such points cannot be distinguished for clustering methods, particularly when the masks from SAM are of suboptimal quality or when there is **overlap** between masks of neighboring objects. 2) We further presented an analysis of the overlapping in Sec. A.5.2 and Fig. 7-9 in the Appendix (page 15), where it was observed that most adjacent masks along the boundary of foreground and background regions show limited overlapping. 3) According to Tab.8, our approach using a larger SAM model with better mask generation ability led to better results, showing the influence of mask quality. 2. >In Table 4, I notice weak or strong connected components are not clearly defined and the difference are also very small. The authors should provide more analysis to explain this. Weakly or strongly connected components are the concepts of graph connectivity, where each weak or strong connected component ensures that every vertex has connectivity or bidirectional paths from every other vertex in the directed graph, respectively. According to our visualization analysis in Fig. 18 of the global PDF, clustering by weak connected components prioritizes overall target coverage, while clustering by strongly connected components focuses on individual parts. Besides, we list the performance of these categories in the table below. 
Each of the two clustering methods has its own advantage: weak connected components can identify well-shaped objects occupying a sufficient proportion of the image area (shown in *Airplane* and *Elephant*), whereas strong connected components can better handle slim objects that are only a few pixels wide at their narrowest (shown in *Scissors* and *Skateboard*). Therefore, using only strong or only weak connected components cannot address both problems, and as a result, we select the method with the better mIoU performance, i.e., weak connected components.

|Classes|Airplane|Elephant|Scissors|Skateboard|
|-|-|-|-|-|
|Ours w. strong|49.6|83.0|68.4|63.0|
|Ours w. weak|65.0|84.0|68.0|59.3|

3. >And what is the hyperparameter for K-means++? Have you tried different hyperparameters for K-means++?

Following the setting in Matcher [2], we set the hyperparameter K for K-means++ to 10, according to their ablation studies. We have also experimented with different hyperparameters and found that even though the K-means++ clustering method introduces an external hyperparameter, it does not surpass the performance of our mask clustering method (**58.7**).

|K|6|8|10|
|-|-|-|-|
|Ours w. K-means++|57.7|57.8|57.5|

4. >I notice the authors conduct ablation studies on different datasets for different components. I hope the authors can provide more experiments on part segmentation.

Thanks for the suggestion. We included experimental results on part segmentation in Table 2, as shown below. From these results, we can observe that the proposed method effectively handles part segmentation, validating the hyperparameter-free advantage of our approach compared to Matcher, which uses manually set hyperparameters. This indicates that our method not only simplifies the modeling process by eliminating the need to tune external hyperparameters but also maintains robust performance across different segmentation tasks.
|Method|PACO-Part|Pascal-Part|
|-|-|-|
|Matcher|34.7|42.9|
|Ours|36.3 (+1.6%)|44.5 (+1.6%)|

In this paper, we propose a Positive-Negative Alignment (PNA) module and a Post-Gating strategy based on the weakly connected graph components, enabling a hyperparameter-free pipeline. *As suggested, we further provide extensive ablation experiments on part segmentation, detailed in the table below.* Our approach, employing various settings of parameter-free components, outperforms the previous SOTA, Matcher, which relies on different hyperparameters for part segmentation. We are happy to provide more ablation experiments if needed.

|$S_{mean}^+$|$S_{max}^+$|$S_{mean}^-$|PACO-Part|Pascal-Part|
|-|-|-|-|-|
|$\checkmark$| |$\checkmark$|35.7|44.0|
| |$\checkmark$|$\checkmark$|34.9|44.1|
|$\checkmark$|$\checkmark$| |35.4|44.2|
|$\checkmark$|$\checkmark$|$\checkmark$|**36.3**|**44.5**|

# Response to questions 1. >Can this method be applied to other tasks such as video object segmentation or instance segmentation?

Thanks for the insightful question. Our method follows the previous settings of FSS, i.e., segmenting one target class at a time. As a semantic segmentation method, our approach focuses on intra-class similarity and inter-class distinction, whereas video object segmentation and instance segmentation require recognizing instance-level distinctions within the same class, which conflicts with the intra-class similarity emphasis of FSS. However, **as suggested**, we evaluated our approach on DAVIS 2017 via simple one-shot segmentation and argmax-based object classification using the mean similarity map from DINOv2, and achieved results that are 0.5% better than those of PerSAM (ICLR 2024) under a similar VOS setting. Considering that the clustering process of our method is able to distinguish instances to some extent, we will follow the reviewer's suggestion and extend our method to these tasks in future work.
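For readers unfamiliar with the graph-connectivity terms in the discussion above, a stdlib-only sketch (not the authors' implementation) contrasting weakly connected components, which ignore edge direction, with strongly connected components, which require bidirectional paths (here via Kosaraju's algorithm):

```python
from collections import defaultdict

def weak_components(n, edges):
    # Weakly connected components: treat edges as undirected and flood-fill.
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for v in range(n):
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def strong_components(n, edges):
    # Strongly connected components via Kosaraju: record DFS finish order,
    # then flood-fill on the transposed graph in reverse finish order.
    adj, radj = defaultdict(list), defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        radj[b].append(a)
    order, seen = [], set()
    def dfs(v):
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                dfs(w)
        order.append(v)
    for v in range(n):
        if v not in seen:
            dfs(v)
    comps, assigned = [], set()
    for v in reversed(order):
        if v in assigned:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp or u in assigned:
                continue
            comp.add(u)
            stack.extend(radj[u])
        assigned |= comp
        comps.append(comp)
    return comps

# Hypothetical directed graph: 0 -> 1 -> 2 -> 0 forms a cycle; 2 -> 3 is one-way.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(weak_components(4, edges))    # one component covering all four nodes
print(strong_components(4, edges))  # the cycle {0, 1, 2} and {3} separately
```

The one-way edge shows the difference: node 3 joins the weak component because direction is ignored, but forms its own strong component since no path leads back from it.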
--- Rebuttal Comment 1.1: Comment: After reading the author's response, most of my concerns have been addressed. I've decided to maintain my current score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for taking the time to review our response. We’re pleased that our rebuttal has addressed the concerns and we sincerely appreciate the reviewer's positive score. We will incorporate the key points the reviewer raised as we work on the final version of the revised paper.
Summary: The paper proposes a graph-based approach for SAM-based few-shot semantic segmentation, modeling the relationship of SAM-generated masks in an automatic clustering manner. It introduces a positive-negative alignment module and a post-gating strategy based on the weakly connected graph components, enabling a hyperparameter-free pipeline. Extensive experimental comparisons and analysis across several datasets over various settings show the effectiveness and efficiency of the proposed method. Strengths: 1. The graph-based approach for SAM-based methods proposed in the article is insightful. SAM-based methods can generate many patches with point prompts, and how to group them is a good question. 2. The experiments are comprehensive. The ablation experiments demonstrate the effectiveness of the modules. The visualization also illustrates the idea expressed. Weaknesses: 1. The captions are not detailed, especially in Figures 1 and 2. 2. The overall organisation and presentation of the article are poor and need improvement. Some details: ① On the left side of the first line of Figure 2, the images are 'reference on top, target on bottom', but the features become 'reference on bottom, target on top'; it's not consistent. ② In my understanding, in Figure 2, positive-negative alignment should be conducted on the target image using the foreground and background features of the reference image, but this part is very unclear and uninformative, and does not show the above content. ③ The Point-Mask Clustering part in Figure 2 can be understood as expressing the clustering of the points to different targets in the images, but the drawing is very casual, and there is no mark or explanation. 3. How is mask merging conducted? Is each point used to generate a mask, with the masks then merged, or is the cluster of points fed together to SAM to generate a mask prediction, or are there other ways? Give the details.
Technical Quality: 3 Clarity: 3 Questions for Authors: See the weakness part. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors provide the limitations in the appendix, noting that point coordinates may miss small objects. This is a real problem and is suitable for further study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. >The captions are not detailed, especially in Figure I and 2. We apologize for the brevity of the captions due to consideration of space constraints. As suggested, we will include more details in the captions. Specifically, for Figure 1, the updated caption will be: "*Performance comparison of our approach against previous state-of-the-art methods in terms of efficiency and generalized capabilities in Few-shot Semantic Segmentation. Figure 1(a) illustrates our approach's superior performance in efficiency and effectiveness across various model sizes. Figure 1(b) demonstrates the generalizability of our approach across different domains.*" For Figure 2, the revised caption will be: "*Overview of our approach, where the Positive-Negative Alignment module recognizes the correlation between target features and reference features for point selection, the Point-Mask Clustering module efficiently clusters the points based on the coverage of corresponding masks, and Post-Gating filters out the false-positive masks for generating final prediction.*" 2. >The overall organisation and presentation of the article are poor and need improvement. Some details: ① On the left side of the first line of Figure 2, the images are ‘reference on top and target on bottom, but the features become ‘reference on bottom target on top’; it's not consistent. ② In my understanding, in Figure 2,positive-negative alignment should be conducted on the target image using the foreground and background features of the reference's image, but the part is very unclear and not informative and does not show the above content. ③Point-Mask Clustering part in Figure 2 can be understood that it wants to express the cluster of the points to different targets in the images, but the drawing is very casual, and there is no mark or expression. We appreciate the detailed suggestions for enhancing our presentation. As suggested, we have revised Fig. 
2 and visualize the process of the PNA module. These updated figures are included in the **attached one-page global PDF file**. **We will incorporate this update in the final version.** 3. >How to conduct mask merging? Is each point used to generate a mask and then merge, or is the cluster of points used together to SAM to generate mask prediction, or are there other ways? Give the details. We acquired a set of points from the PNA module, each serving as a prompt for generating a corresponding mask, with the one-to-one correspondence where one point generates one mask. In the Post-Gating process, we select the points along with their corresponding masks for mask merging. For the final prediction, we simply used the union of the selected masks to create a merged mask. **We will update the more detailed description above in the final version.** --- Rebuttal Comment 1.1: Comment: After reading the author's response, most of my concerns have been addressed. I've decided to raise my score.
Summary: This paper extends SAM to few-shot semantic segmentation tasks by proposing an approach based on graph analysis and representation learning. The contributions include a Positive-Negative Alignment module to generate the initial points prompt using DINOv2 features, as well as Point-Mask Clustering and Post Gating modules to filter the proper points based on the SAM generated masks. The proposed method exhibits efficiency and performance advantages over previous work such as PerSAM and Matcher. Strengths: (1) The paper is well-written and easy to follow. (2) The proposed method is hyperparameter-free and shows significant improvements in both efficiency and performance. (3) Experiments on various datasets and tasks validated the generality of the proposed method. Weaknesses: (1) The proposed method is specifically tailored for DINOv2 and SAM, is it possible to apply it to other foundation models? This could broaden the impact of the method. (2) In Figure 1(b), the scales of the axes are unclear. Are the starting points of the nine axes all 0? (3) On the FSS-1000 dataset, the performance of the proposed method is comparable to Matcher, which deserves a discussion. For example, what factors may have contributed to this result. Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. >The proposed method is specifically tailored for DINOv2 and SAM, is it possible to apply it to other foundation models? This could broaden the impact of the method.

Thanks for the suggestion. Our approach can be easily applied to other foundation models, and as suggested, we further provide experimental analysis on this, as shown in the table below. Since SAM is almost the only prompt-based SOTA foundation model for segmentation, we focus on evaluating different backbones here. Particularly, we assess various backbone networks from leading vision-related foundation models within our approach and the previous SOTA, Matcher, using the widely-used backbones of CLIP (ViT-L/14) and ImageNet-pretrained ResNet-50 on the $COCO-20^i$ dataset. The results in the table below show that our approach maintains competitive performance across different backbones, highlighting its effectiveness and adaptability.

|Backbone|Matcher|Ours|
|:-:|:-:|:-:|
|DINOv2|52.7|58.7|
|CLIP(ViT-L/14)|32.2|40.2|
|ResNet-50|24.9|32.3|

2. >In Figure 1(b), the scales of the axes are unclear. Are the starting points of the nine axes all 0?

Thanks for pointing this out. The starting points of the axes are not 0, as we intended to highlight the relative performance of our approach compared to previous SOTA methods. Since the current SOTA approaches are far from 0, setting the starting point at 0 would not distinctly present the advantage of our approach. Therefore, we customize the starting point to 40% of our performance level, enhancing visibility and comparison clarity.

3. >On the FSS-1000 dataset, the performance of the proposed method is comparable to Matcher, which deserves a discussion. For example, what factors may have contributed to this result.

This dataset consists mainly of images with large, clearly defined target objects against simplistic backgrounds, which tends to normalize results across different segmentation methods.
Such simplicity can obscure the unique strengths of various approaches. Our approach, equipped with the Positive-Negative Alignment (PNA) module and Positive Gating, is optimized for complex scenarios where background and target objects are closely interlinked. However, the simple backgrounds in FSS-1000 do not fully demonstrate our approach's capabilities, resulting in performance comparable to approaches like Matcher. This highlights the importance of diverse datasets in fully assessing the effectiveness of segmentation methods designed for complexity. In the final version, we will expand our experimental analysis to further clarify this.
null
null
Rebuttal 1: Rebuttal: We appreciate the detailed and constructive comments from the reviewers. For each of the concerns/questions, we have provided replies, revisions, and additional experiments accordingly, and included **a global one-page PDF (as attached below)** for figures mentioned in our response. Please let us know if you have any further comments, and we are more than happy to participate in the discussion. Pdf: /pdf/67ca3900acff7b19f98bad0ad7fa99d1ef89458d.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Gated Slot Attention for Efficient Linear-Time Sequence Modeling
Accept (poster)
Summary: The paper introduces Gated Slot Attention (GSA), an enhancement of Gated Linear Attention (GLA), aimed at improving the efficiency of sequence modeling. GSA incorporates a selective gating mechanism to manage memory updates, leveraging a two-pass GLA structure. This approach allows GSA to be more context-aware and to retain a bounded memory footprint, making it suitable for long sequence tasks. The key improvement is the incorporation of a softmax operation to retain sharp attention distributions, which enhances sequence modeling by reducing dilution. Strengths: - The GSA's gated update mechanism and two-pass structure offer a straightforward yet effective enhancement over GLA. This design ensures that memory usage remains bounded and manageable, which is critical for handling long sequences efficiently. - The experimental results provided by the authors demonstrate the advantages of the GSA mechanism. The results highlight GSA's ability to improve performance in sequence modeling tasks, validating the practical effectiveness of the proposed approach. Weaknesses: - While GSA presents improvements, it largely builds on existing GLA techniques. The enhancements, though valuable, might be seen as incremental rather than revolutionary, potentially limiting the perceived impact of the work. - Despite efforts to reduce computational overhead, the softmax operation on $QK^T$ still retains a quadratic complexity in training. This raises questions about the authors' claim of linear-time sequence modeling in the title, which could be misleading. - The authors do not specify explicitly whether GSA inherits the recurrent architecture of GLA. If this is the case, it would be necessary to compare GSA to other recurrent models like RWKV to provide a comprehensive evaluation of its performance and advantages in sequence modeling tasks. 
Technical Quality: 3 Clarity: 3 Questions for Authors: How does the inference performance of GSA compare to other SOTA models in terms of speed and memory usage? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q:** *While GSA presents improvements, it largely builds on existing GLA techniques. The enhancements, though valuable, might be seen as incremental rather than revolutionary, potentially limiting the perceived impact of the work.* **A:** We appreciate your feedback and respectfully argue that GSA represents substantial advancements over existing techniques: * GSA is motivated by viewing $\mathbf{K}$ and $\mathbf{V}$ as memories and linearizing standard attention by managing $\mathbf{K}$ and $\mathbf{V}$ into a constant number of memory slots. This perspective drives our 2-pass recurrence formulation: $\mathrm{softmax}(\mathbf{Q}\tilde{\mathbf{K}}^\top)\tilde{\mathbf{V}}$. This significantly differs from existing linear attention methods, which conduct linearization by rearranging the computation order of $\mathbf{Q}\mathbf{K}^\top\mathbf{V}$. * The 2-pass recurrence with context-aware queries demonstrates significant improvements in retrieval tasks (FDA, SQUAD, SWDE), showcasing the practical impact of our approach. * GSA maintains the general form of standard attention while offering $\mathbf{K}\mathbf{V}$ linearization. The $\mathrm{softmax}$ spikiness [1] notably enhances continual-pretraining results, paving the way for efficient scaling to larger models. --- **Q:** *Despite efforts to reduce computational overhead, the softmax operation on $\mathbf{Q}\mathbf{K}^\top$ still retains a quadratic complexity in training. This raises questions about the authors' claim of linear-time sequence modeling in the title, which could be misleading.* **A:** Thank you for your concern! We would like to clarify that in GSA, the $\mathrm{softmax}$ is applied over $\mathbf{Q}\tilde{\mathbf{K}}^\top$ rather than $\mathbf{Q}\mathbf{K}^\top$ as in standard attention (SA), as presented in Section 2.2. 
Unlike $\mathbf{K}\in \mathbb{R}^{T\times d}$, $\tilde{\mathbf{K}}\in \mathbb{R}^{m\times d}$ is derived by passing $\mathbf{K}$ along with $\mathbf{\phi}\in \mathbb{R}^{T\times m}$ through a GLA recurrence, compressing it to a constant size of $m\times d$. This crucial difference enables GSA to reduce the training complexity from quadratic to linear time, supporting our claim of linear-time sequence modeling. We will ensure this distinction is more explicitly stated in the revised version. Thank you. --- **Q:** *The authors do not specify explicitly whether GSA inherits the recurrent architecture of GLA. If this is the case, it would be necessary to compare GSA to other recurrent models like RWKV to provide a comprehensive evaluation of its performance and advantages in sequence modeling tasks.* **A:** We would like to clarify that, as detailed in Section 3, GSA's output is indeed derived from $\tilde{\mathbf{K}}_t^\top \mathbf{q}_t \in \mathbb{R}^d$, where $\tilde{\mathbf{K}}$ and $\tilde{\mathbf{V}}$ maintain constant sizes regardless of sequence length. Both $\tilde{\mathbf{K}}$ and $\tilde{\mathbf{V}}$ involve one GLA recurrence. We provide hardware-efficient implementations for the 2-pass operations to enhance training efficiency. Full algorithm details are available in Appendix A. Regarding comparisons with other recurrent models, we include comparisons with RWKV6 in Table 5 in the paper. Our results show that GSA significantly outperforms RWKV6 under controlled settings, even though RWKV6 is trained on billions of tokens whereas GSA uses continual pretraining. It's worth noting that while RWKV6 shares similarities with GLA in terms of data-dependent gating and matrix-formed hidden states, it employs additional short convolutions at the expense of efficiency. Due to limited computational resources, we haven't provided results for RWKV6 models trained from scratch, nor have we employed the specific techniques used by RWKV6, to ensure fair comparisons. 
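The constant-size state described in this exchange is easiest to see in a toy recurrent form. The sketch below is illustrative only: the gated write `a * K + (1 - a) * k_t` is a simplified stand-in for the paper's $\phi$-weighted GLA updates, but it shows why the state stays $m \times d$ for any sequence length and why the softmax runs over $m$ slots rather than $T$ tokens:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gsa_toy(q, k, v, alpha):
    """Toy two-pass gated-slot recurrence (illustrative, not the paper's kernel).

    q, k, v: (T, d) per-token queries/keys/values
    alpha:   (T, m) per-slot forget gates in (0, 1)
    The slot memories K_tilde, V_tilde stay (m, d) for any T,
    so time is O(T) and recurrent state is O(1) in sequence length.
    """
    T, d = q.shape
    m = alpha.shape[1]
    K_tilde = np.zeros((m, d))
    V_tilde = np.zeros((m, d))
    out = np.empty((T, d))
    for t in range(T):
        a = alpha[t][:, None]                    # (m, 1) gate
        K_tilde = a * K_tilde + (1 - a) * k[t]   # pass 1: gated write of k_t
        V_tilde = a * V_tilde + (1 - a) * v[t]   # pass 2: gated write of v_t
        attn = softmax(K_tilde @ q[t])           # softmax over m slots, not T tokens
        out[t] = attn @ V_tilde
    return out

rng = np.random.default_rng(0)
T, d, m = 8, 4, 3
out = gsa_toy(rng.standard_normal((T, d)), rng.standard_normal((T, d)),
              rng.standard_normal((T, d)), rng.uniform(0.1, 0.9, (T, m)))
print(out.shape)  # (8, 4)
```

Because the read is $\mathrm{softmax}(\tilde{\mathbf{K}}\mathbf{q}_t)^\top\tilde{\mathbf{V}}$ over a fixed $m$, no per-token cost term grows with $T$, which is the substance of the linear-time claim.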
We commit to improving the clarity of our presentation in the next version of our paper. --- **Q:** *How does the inference performance of GSA compare to other SOTA models in terms of speed and memory usage?* **A:** During inference, GSA demonstrates similar theoretical inference performance compared to other SOTA models: * GSA needs $O(1)$ space for hidden states, similar to GLA and other linear models, while SA requires an $O(N)$ KV cache. * GSA requires $O(1)$ time per step to access memory and generate outputs, as in GLA. Our empirical speed evaluations for 7B models with a 2k prefix support these advantages: * GSA and GLA consume a similar 15GiB of memory, while SA requires an additional 3GiB due to KV cache requirements for further generation. * Regarding speed, the latency for GSA is 13s, comparable to GLA (13.4s) and SA (12s), aligning with our findings in Tables 2 & 3 of our paper. It's important to note that we have not yet fully optimized the recurrent kernel to speed up the generation stage, nor reduced the I/O overhead through customization. We are committed to exploring these optimizations in future work to fully realize GSA's theoretical inference advantages and enhance its real-world performance. Thank you. [1] The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry: https://arxiv.org/abs/2402.04347 --- Rebuttal Comment 1.1: Comment: Dear Reviewer eYuC, Could you please let us know if our response has addressed your concerns? If you have any further questions, please feel free to raise them at any time. --- Rebuttal 2: Title: Response to Reviewer eYuC: additional inference efficiency comparisons Comment: To provide more clarity, we conducted additional comparisons on an H800 GPU, optimizing our inference kernel for GSA to perform one-by-one generation. This optimization significantly reduces I/O overhead for 2-pass recurrences, which is crucial for auto-regressive generation. 
Our updated results show: | | Transformer++ | GLA | GSA | | --- | ------------: | -----------: | -----------: | | 2k | 85.0 (14.9) | 77.3 (14.0) | 78.0 (14.1) | | 4k | 169.4 (15.5) | 140.9 (13.9) | 141.7 (14.1) | | 8k | 350.5 (16.6) | 274.6 (13.9) | 269.2 (14.0) | | 16k | 783.1 (18.8) | 528.0 (13.9) | 517.1 (14.1) | We compare the inference latency (seconds) as well as the memory consumption of Transformer++, GLA, and GSA for a single sequence. By varying the generation length from 2k to 16k: * GSA maintains consistent memory usage across different sequence lengths, unlike Transformer++ which consumes up to 4GiB more for 16k sequences. * GSA shows comparable or slightly better inference speed than GLA, especially for longer sequences, thanks to our optimized fused kernel. These findings underscore GSA's competitive performance in real-world scenarios, combining the theoretical advantages of linear attention models with practical optimizations. We are committed to further exploring and implementing optimizations to fully realize GSA's potential and enhance its real-world performance. We hope this clarifies our contribution and strengthens our paper's position.
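The constant-state vs. growing-KV-cache distinction in this thread can be made concrete with rough byte accounting. The model shape below (32 layers, 32 heads, head dimension 128, fp16) is an assumed, illustrative 7B-like configuration, not the authors' exact setup:

```python
# Rough inference-state sizes: a softmax-attention KV cache grows with the
# number of cached tokens N, while a slot-based state is constant in N.
def kv_cache_bytes(N, layers=32, heads=32, d_head=128, bytes_per=2):
    return 2 * N * layers * heads * d_head * bytes_per  # K and V per token

def slot_state_bytes(m=64, layers=32, heads=32, d_head=128, bytes_per=2):
    return 2 * m * layers * heads * d_head * bytes_per  # K~ and V~ only

GiB = 1024 ** 3
for N in (2048, 16384):
    print(N, round(kv_cache_bytes(N) / GiB, 2), round(slot_state_bytes() / GiB, 3))
# 2048  -> 1.0 GiB cache vs 0.031 GiB slot state
# 16384 -> 8.0 GiB cache vs 0.031 GiB slot state
```

Under these assumed sizes the cache grows linearly with generated length while the slot state does not, which matches the flat memory column reported for GSA in the table above.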
Summary: A major challenge is storing information in a bounded number of memory slots. This work builds on ideas from ABC and gated linear attention. ABC recursively updates the bounded-memory key and value states over time, and computes a softmax with the queries at timestep t to produce the output at t. Gated linear attention and Mamba use data-dependent decays over time to better decide what information to keep versus throw away given the limited memory. GLA, unlike Mamba, enables the use of GPU tensor cores through its parameterization. GSA retains the selective gating from GLA and the update rules from ABC. The architecture remains chunk-wise parallelizable (like GLA) and provides inductive biases that address limitations of GLA and ABC respectively. GSA is validated up to 2.7Bn parameters from scratch and 7Bn parameters in continual training, providing promising results on standard LM-eval harness benchmark tasks. Strengths: The writing and contextualization of the contributions with respect to prior work are very clear. The architectural inductive biases from GSA versus prior linear recurrent architectures are compelling and very interesting. For example, “In GLA the query vector is only dependent on the current input token, while in GSA, the query vector is the output of the first GLA pass and thereby is aware of the entire historical information and in this sense more context aware”. Two passes over the input could help the model make better decisions about what information to keep versus throw away given the limited recurrent memory. GSA is trained from scratch to relatively large scales (2.7Bn parameters, 100B tokens) and demonstrates high quality on LM-eval harness tasks compared to the baselines, despite using less memory than GLA. GSA can be implemented efficiently by adopting off-the-shelf algorithms from flash linear attention. 
Weaknesses: Building on my comment on the compelling properties of GSA, the paper does not extensively show how these properties make a difference in language modeling. It would be interesting to know how the “context aware query vectors” help GSA on real-world data by comparing to models without this property. OR for the authors to include discussion on what kind of sequences this property might help with. The architectural modifications are not tied to empirical insights beyond lm-eval harness scores in the current submission. GSA is only evaluated on LM-eval harness. It is known that models perform somewhat similarly on these short-context tasks with very little memory (Arora et al, 2024) and mentioned in the paper’s limitations section. The paper does not dive into any tasks that require longer context reasoning or retrieval, making it unclear how GSA behaves. - "GSA only needs half the recurrent state size of GLA and a quarter the recurrent size of RetNet, while having better performance, thus enjoying a lower memory footprint during inference". The claim is not fully validated unless the GSA models are evaluated on tasks that stress-test memory utilization (like retrieval, long-context tasks). The MMLU scores of the continually fine-tuned 7B checkpoints remains very low, just like the baseline – the other scores are roughly comparable across models. There is no analysis as to why this is. The paper claims that GSA is drastically better than SUPRA, but the delta is small (<1 point on average), so it is not clear what is meant by this claim. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why do the authors believe the current set of benchmarks is sufficient? Is it possible to include results on retrieval and longer context benchmarks? 2. Can the authors perform error analysis on the continually trained models to help understand why SUPRA and GSA perform poorly on MMLU? 3. 
Can the authors provide more concrete hypotheses as to where context-aware query tokens (resulting from the first GLA pass) could help in real language modeling settings? In the introduction, “slots” are not defined. They are first defined in section 2. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors do mention that they ignore retrieval style tasks in their analysis (in Section 6), however if the paper makes claims about memory-efficiency then it is important to evaluate quality on this style of tasks. This is because it is known that there are fundamental memory and quality tradeoffs. If the reviewers address these concerns, I will consider increasing my score! Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and insightful questions. We promise to revise the paper to address your concerns and make the presentation more clear and comprehensive in the next version. --- **Q:** *It would be interesting to know how the "context aware query vectors" help GSA on real-world data by comparing to models without this property. Why do the authors believe the current set of benchmarks is sufficient? Is it possible to include results on retrieval and longer context benchmarks?* **A:** Thank you for your very constructive suggestions. In the above table, we have additionally included results to validate the effectiveness of "context aware query vectors": * We conduct experiments on three tasks proposed by Arora et al., 2024 [1]: FDA, SQUAD, and SWDE, showing **significant improvements in GSA's performance compared to Mamba, RetNet, and GLA on these recall-intensive tasks**. Despite a smaller memory capacity, GSA outperforms Mamba, RetNet, and GLA by an average of 5.9, 7.7, and 4.8 points, respectively. * We report results on long context benchmarks reported by Xiong et al., 2023 [2]. It is clear that **GSA extrapolates well to 16k given a 2k training context length**. GSA exhibits better extrapolation than GLA and RetNet on Qasper, NarrativeQA, QuALITY and QMSum. These results validate GSA's strong performance on information extraction tasks, which we attribute to more effective memory utilization enabled by "context aware query vectors." We are planning to inspect the performance of GSA scaled to larger model sizes and numbers of slots. --- **Q:** *The MMLU scores of the continually fine-tuned 7B checkpoints remains very low, just like the baseline – the other scores are roughly comparable across models. The paper claims that GSA is drastically better than SUPRA, but the delta is small (<1 point on average), so it is not clear what is meant by this claim. 
Can the authors perform error analysis on the continually trained models to help understand why SUPRA and GSA perform poorly on MMLU?* **A:** After carefully diving into the details of SUPRA, we updated the continual pretraining results of GSA models trained with a low learning rate of $3\times10^{-5}$ combined with a slow decay scheduler. From the above table, we observe that GSA significantly outperforms RetNet and GLA across various tasks, including NQ, TriviaQA, and BBH. Notably, on the challenging reasoning and language understanding tasks BBH and MMLU, GSA outperforms GLA by 2.7 and 4.0 points, indicating that GSA's formulation, which is similar to standard attention (SA), allows it to retain more capabilities when finetuned from another SA LLM. In the bottom lines we further provide the results of continual pretraining on 100B tokens: GSA greatly surpasses SUPRA by 2.2, 5.4, 8.7 and 6.0 points, respectively. [1] Simple linear attention language models balance the recall-throughput tradeoff: https://arxiv.org/abs/2402.18668 [2] Effective Long-Context Scaling of Foundation Models: https://arxiv.org/abs/2309.16039 --- Rebuttal Comment 1.1: Comment: Dear Reviewer LU5r, Could you please let us know if our response has addressed your concerns? If you have any further questions, please feel free to raise them at any time. --- Rebuttal 2: Title: Response to Reviewer LU5r: speed analysis Comment: Thank you for your question; we are happy to provide further clarifications on the speed comparisons during both training and inference: **Training** For sequence length $N$, chunk size $C$, and head dimension $d$: * GLA with chunkwise parallelism requires $O(NC d + N d^2)$ FLOPs and $O(Nd)$ additional memory for forget gates [1]. * GSA with $m$ memory slots (modeled as two-pass GLA recurrences) requires $O(NC m + NC d + 2N md)$ FLOPs and $O(Nm+Nd)$ memory. 
In our experiments, we set $m=64$ to leverage tensor core accelerations, as matmuls operating on $64\times 64$ tiles are shown to be highly hardware-efficient [2]. For models > 1B, $d$ is larger than 512. In these cases, despite the two-pass nature, GSA's complexity ($O(NC m + NC d + 2N md)$) can be lower than that of GLA ($O(NC d + N d^2)$) since $2md=128d < d^2$. **Inference** Both GLA and GSA maintain constant time and memory complexity during token-by-token generation. We also implemented fused kernels to reduce the IO overhead for operations like $\mathtt{softmax}$, potentially making GSA faster than GLA implementations in FLA [3]. For comparison, Transformers require quadratic time complexity for training and $O(N)$ time and memory for inference. To illustrate the efficiency, we provide inference latency (seconds) for a single sentence beyond the training speed / memory analysis in Table 2 of our paper: | | Transformer++ | GLA | GSA | | --- | ------------: | ----: | ----: | | 2k | 85.0 | 77.3 | 78.0 | | 4k | 169.4 | 140.9 | 141.7 | | 8k | 350.5 | 274.6 | 269.2 | | 16k | 783.1 | 528.0 | 517.1 | Note: We used the basic Huggingface `model.generate` API for demonstration, leaving room for further optimization. * Transformer++ underperforms even at moderate lengths (2k) and scales poorly. * GSA performs comparably to GLA, slightly outperforming it for sequences >8k due to our optimized fused inference implementations. We will include a detailed speed analysis for both training and inference in our revised manuscript. Thank you for bringing this to our attention. 
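The crossover argued in this reply (GSA's extra $2md$ term beats GLA's $d^2$ term once $d > 2m = 128$) can be checked by plugging the quoted FLOP formulas into a few lines; the $N$ and $C$ values below are arbitrary illustrations:

```python
# FLOP formulas quoted from the training-cost discussion above (constants dropped).
def gla_flops(N, C, d):
    return N * C * d + N * d ** 2

def gsa_flops(N, C, d, m=64):
    return N * C * m + N * C * d + 2 * N * m * d

N, C = 2048, 64  # illustrative sequence length and chunk size
for d in (128, 256, 512, 1024):
    print(d, gsa_flops(N, C, d) < gla_flops(N, C, d))
# d=128 -> False; d=256, 512, 1024 -> True under these settings
```

This matches the rebuttal's point that for models above 1B parameters (where $d \geq 512$), GSA's two-pass cost can fall below GLA's despite the extra recurrence.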
[1] Gated Linear Attention Transformers with Hardware-Efficient Training: https://arxiv.org/abs/2312.06635 [2] GPUs Go Brrr: https://hazyresearch.stanford.edu/blog/2024-05-12-tk [3] FLA: A Triton-Based Library for Hardware-Efficient Implementations of Linear Attention Mechanism: https://github.com/sustcsonglin/flash-linear-attention --- Rebuttal Comment 2.1: Title: Thanks Comment: I have raised my score --- Reply to Comment 2.1.1: Comment: Thank you very much! We are delighted that our response has addressed your concerns. We will incorporate these new results in the next iteration of the paper.
Summary: The paper explores a new variant of attention with bounded memory to reduce the growing memory size and thus mitigates the memory-intensive challenges of Transformers. The key idea is to set a memory bound, with a predetermined number of usable memory slots, and a gating mechanism to select or mix KV vectors from the previous step. Strengths: The paper proposes a variant of attention mechanism with bounded memory, which seems to be an original contribution. Weaknesses: * The paper could clarify the significance of its proposed method. The results show not so much improvements compared with prior work in terms of performance and memory costs. The effectiveness and efficiency of the proposed method remain unclear. * The writing on the presentation of the proposed method and the evaluation and insightful discussion could be improved. Technical Quality: 3 Clarity: 2 Questions for Authors: * Fig. 1 can be better illustrated and explained how the proposed Gate Slot Attention works. * What is the reason for choosing memory slots to fit in SRAMs? Particularly when the claim is for efficient training. * What is the contribution when the memory footprint as shown in Table 2 is larger than other methods? * What are the overheads, say additional parameters and operations, as introduced by this work? * Some key results could be mentioned in the abstract and introduction. * Fix the extra line-breaker as in line 255? * The quotation marks used in line 128 to 135 are bizarre. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The limitation mentioned in the paper is moderate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review and insightful feedback. We will revise the paper accordingly in the next version to enhance clarity in the abstract, introduction, and discussion sections, polish the writing, fix potential errors, and improve the figures. --- **Q:** *The paper could clarify the significance of its proposed method. The results show not so much improvements compared with prior work in terms of performance and memory costs. The effectiveness and efficiency of the proposed method remain unclear. What is the contribution when the memory footprint as shown in Table 2 is larger than other methods?* **A:** We highlight that Gated Slot Attention (GSA) offers significant advantages over existing methods, as shown in the above tables: * GSA shows **improved performance on recall-focused tasks** given limited memory capacity compared with Mamba, RetNet and GLA. Especially on SQUAD and SWDE, GSA outperforms other models by 5.4 and 10.8 points, respectively. * GSA's formulation, which mimics the spikiness in standard attention, facilitates effective continual pretraining from well-trained LLMs like Mistral 7B, demonstrating **superior performance on challenging tasks like NQ, TriviaQA, BBH, and MMLU**. Despite a slight increase in FLOPs, GSA achieves comparable training throughput to GLA (42K vs. 42.5K tokens/s under 8K context length) and greatly outperforms Flash Attention (36.9K tokens/s). Regarding memory footprint, the increase is minimal compared to GLA for sufficiently large head dimensions (d). The benefits in performance and versatility outweigh this small trade-off. We will provide more detailed discussions on complexity and additional results in the next version of our paper to better highlight these contributions. --- **Q:** *What is the reason for choosing memory slots to fit in SRAMs? 
Particularly when the claim is for efficient training.* **A:** GSA involves two passes of gated recurrent processes, necessitating hardware-efficient implementations to match GLA's efficiency. To address this, we follow the approach in [1], accelerating GSA by fully utilizing tensor cores, which is 16× faster than non-matmul counterparts. [2] reveals that latencies for tensor-core matmuls of 16×16 remain similar to 64×64 (Figure 1), despite operating on 16×16 tiles. Our preliminary experiments confirm these findings, motivating us to choose a number of slots that can be processed efficiently in one matmul pass. | Slots | PPL (↓) | tokens/s | | ----- | ------- | -------- | | 32 | 13.74 | 46.7K | | 64 | 13.51 | 44.1K | | 128 | 13.46 | 37.1K | As shown in the table, we observe a significant throughput degradation when increasing slots from 64 to 128, while there's a substantial perplexity gap between 32 and 64 slots. Consequently, we adopt 64 slots as the default setting, balancing efficiency and performance. This hardware-efficient implementation allows GSA to match GLA's efficiency while providing improved performance. Detailed implementations are provided in Appendix A. We will include more discussions and comparisons in the next version of our paper. Thank you. --- **Q:** *What are the overheads, say additional parameters and operations, as introduced by this work?* **A:** We appreciate the reviewer's question regarding the overheads introduced by our work. We have included some discussions in Section 3.1 of our paper, but we're happy to provide further clarification here. * **Parameter allocation**: For 1.3B parameter models, GSA introduces approximately $dhm \approx 0.125 d^2$ additional parameters for forget gate mappings compared to Llama. This results in a parameter allocation similar to that of GLA. 
* **Computation Complexity**: The computation complexity of the GLA and GSA operations is $O(NCd+Nd^2)$ [3] and $O(NCm+NCd+2Nmd)$, respectively, where $N$ is the sequence length, $C$ the chunk size, $d$ the head dimension, and $m$ the number of memory slots. Since we set $m=64$ with $m\ll d$, the increase in complexity is negligible. Our hardware-efficient implementations demonstrate that GSA performs comparably to GLA in terms of efficiency, as illustrated in Figure 3 in the paper. Importantly, this modest increase in parameters and comparable computational efficiency come with significant benefits, which we've discussed in detail in our response to Q1. These advantages justify the minimal overhead introduced by our approach. [1] FLA: A Triton-Based Library for Hardware-Efficient Implementations of Linear Attention Mechanism: https://github.com/sustcsonglin/flash-linear-attention [2] Simple linear attention language models balance the recall-throughput tradeoff: https://arxiv.org/abs/2402.18668 [3] Gated Linear Attention Transformers with Hardware-Efficient Training: https://arxiv.org/abs/2312.06635 --- Rebuttal Comment 1.1: Comment: Dear Reviewer AbD2, Could you please let us know if our response has addressed your concerns? If you have any further questions, please feel free to raise them at any time. --- Reply to Comment 1.1.1: Comment: Dear Reviewer AbD2, This is a kind reminder that today is the last day of the author-reviewer discussion period. If you have any concerns, please let us know as soon as possible so that we can address them. --- Rebuttal 2: Comment: Thank you! We're glad our clarifications were helpful. We'll include these new results in the next version.
null
null
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their thorough examination of our work and their insightful feedback. Your thoughtful comments and questions have significantly enhanced our submission and have been addressed in detail in individual responses. We have additionally run many new empirical results and analysis, including: * **Time and memory analysis**: We have added comprehensive efficiency metrics and updated the speed comparisons for different numbers of slots to address Reviewer AbD2's questions regarding computational efficiency. * **Long-context evaluation**: In response to Reviewer LU5r's concerns, we have performed evaluations on real-world language tasks with extended context, including Qasper, NarrativeQA, QuALITY, and QMSum. * **Recall-intensive and challenging language reasoning tasks**: To address questions from Reviewers AbD2 and LU5r, we have included results from recall-intensive tasks and more challenging tasks focused on language reasoning to demonstrate the advantages of our GSA approach. In this shared response, we provide a detailed elaboration of these new results: ### **Recall-intensive tasks** We evaluated our model on recall-intensive tasks proposed by Arora et al, 2024 [1]: | | FDA | SQUAD | SWDE | | ---------- | -------- | -------- | -------- | | Mamba | 9.7 | 33.3 | 34.6 | | RetNet | 8.9 | 33.1 | 30.4 | | GLA | **11.8** | 25.8 | 43.3 | | GSA (ours) | 11.3 | **38.7** | **45.4** | GSA's strong performance, particularly on SQUAD and SWDE, demonstrates that our 2-pass recurrence design enables efficient memory utilization despite limited capacity. [1] Simple linear attention language models balance the recall-throughput tradeoff: https://arxiv.org/abs/2402.18668 ### **Results on challenging language understanding and reasoning tasks** We expanded our evaluation to include more demanding tasks, following the settings of Llama2 [2] and Mistral: | Model | Tokens | NQ | TriviaQA | BBH | MMLU | Avg. 
| ---------- | ------ | -------- | -------- | -------- | -------- | -------- | | Mamba | 1.2T | 25.4 | 66.2 | 21.5 | 33.2 | 36.5 | | RWKV6 | 1.4T | 20.9 | 59.5 | 23.4 | 43.9 | 36.9 | | SUPRA | +20B | - | - | - | 28.0 | - | | RetNet | +20B | 16.2 | 43.0 | 8.7 | 26.1 | 23.5 | | GLA | +20B | 22.2 | 57.8 | 20.8 | 28.4 | 32.3 | | GSA (ours) | +20B | 23.4 | 60.7 | 23.5 | 32.4 | 35.0 | | SUPRA | +100B | 24.7 | 60.4 | 19.8 | 34.1 | 34.7 | | GSA (ours) | +100B | **26.9** | **65.8** | **29.3** | **38.1** | **40.0** | GSA consistently achieves better scores across all tasks compared to SUPRA, RetNet, and GLA at similar token counts and performs competitively with trillion-token models, highlighting its superior in-context learning capabilities and validating our claim that GSA preserves more abilities compared to other linear attention methods under continual-pretraining settings. [2] Llama 2: Open Foundation and Fine-Tuned Chat Models: https://arxiv.org/abs/2307.09288 ### **Length Extrapolation** To showcase the ability to extrapolate to longer sequences, we conduct additional experiments on long-context tasks, following the settings used in the Llama2-Long report [1]. We tested GSA on four real-world language tasks: Qasper, NarrativeQA, QuALITY, and QMSum. The input for each task was truncated at 16k tokens following [1], which is 8x longer than our training context length of 2k tokens. The results are presented in the following table: | | Qasper | NarrativeQA | QuALITY | QMSum | | ------ | -------- | ----------- | -------- | -------- | | Mamba | 5.6 | **22.2** | 27.5 | 0.7 | | RetNet | 11.1 | 0.0 | 26.2 | 0.0 | | GLA | 18.4 | 17.2 | 30.9 | 8.9 | | GSA | **18.8** | 19.2 | **32.0** | **10.0** | These results demonstrate that our GSA model generalizes well to sequences of 16k tokens, despite being trained on much shorter contexts. 
Notably, GSA outperforms the other linear models on three of the four tasks, showcasing its robustness and effectiveness in handling long-range dependencies. [1] Effective Long-Context Scaling of Foundation Models: https://arxiv.org/abs/2309.16039
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Statistical Multicriteria Benchmarking via the GSD-Front
Accept (spotlight)
Summary: The authors propose a novel way of comparing classifiers to assess their effectiveness. They posit (1) that comparisons should allow for different quality metrics simultaneously, (2) that comparisons should take into account the statistical uncertainty induced by the choice of benchmark suite, and (3) that the robustness of the comparisons under small deviations in the underlying assumptions should be verifiable. They illustrate their proposed method on the benchmark suite PMLB and on the platform OpenML. Strengths: The paper is clear and easy to follow. The proposed method is innovative, it enjoys some desirable mathematical properties (that are well-presented), and it seems to work well on the selected benchmarks. Weaknesses: Some references to the existing imprecise probabilistic literature are missing, there are some small typos, and extra effort could be put into making the definition of $d_s$ more accessible. Technical Quality: 3 Clarity: 3 Questions for Authors: When introducing imprecise probabilities, the authors fail to reference a few papers in this literature. In particular, the authors leverage the idea of $\epsilon$-contamination in section 4.2. Other papers in the imprecise probabilistic machine learning literature dealing with, and advancing the knowledge of, $\epsilon$-contaminations are [1-5]. In addition, [6] uses IPs to robustify Bayesian neural networks. More generally, we suggest that the authors read (and possibly cite) the works by Eyke Hüllermeier, Thierry Denoeux, Sebastien Destercke, Fabio Cuzzolin, Alessio Benavoli, and Michele Caprio on imprecise probabilistic machine learning, all of which have a distinct "robustness" flavor to them, and that might be interesting to the authors for future endeavors. There is a typo in line 95: there's a period after superscript 3. 
While I understand the need of being as general as possible, wouldn't it be more beneficial to the reader to define $\hat{\pi}_c(\{z\})= 1/s \left|\lbrace\{ i : i \leq s \text{, } \Phi(C,T_i)=z \rbrace\}\right|$ in line 167, instead of using a generic $M \subseteq [0,1]^n$ for the definition? There is a typo in line 167: last letter should be $C^\prime$ and not $C$. Line 205: the question *of* how to [...] --- [1] https://www.jstor.org/stable/2242055 [2] https://onlinelibrary.wiley.com/doi/book/10.1002/9780470434697 (Chapter 10) [3] https://link.springer.com/chapter/10.1007/978-3-031-57963-9_1#:~:text=In%20their%20seminal%201990%20paper,bound%20to%20hold%20with%20equality. [4] https://arxiv.org/abs/2402.00957 [5] https://arxiv.org/abs/2308.14815 [6] https://arxiv.org/abs/2302.09656 Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First of all, we would like to thank the reviewer for their time and thoughtful consideration of our paper. We believe that the more direct mathematical notation suggested by the reviewer regarding the empirical measure as well as the suggested more detailed explanation of the test statistic will help to further improve the accessibility of the paper, which the reviewer otherwise describes as “easy to follow”. We are pleased that the reviewer finds our method “innovative” and emphasizes that it “enjoys some desirable mathematical properties”, which they also describe as “well-presented”. We will now address the reviewer's concerns and questions in turn: **Weakness 1: Missing references, different notions of robustness in IP for ML** We agree with the reviewer that Huber's book on robust statistics should definitely be cited when introducing the contamination model and thank the reviewer for identifying this gap. We add a corresponding reference to the revision of our paper. We are also very grateful for the further references provided by the reviewer; they for sure deserve proper consideration: We will go through them and use some part of the additional page to add appropriate references in the revision of our related work section and the introduction of contamination models in Section 4.2. In this context, we will also pay special attention to sharpen the presentation of our understanding of robustness even more, also in contrast to related, but in detail subtly different concepts from current literature. Thanks once more for raising the issue of slightly deviating robustness concepts and for all the hints to important further references. **Weakness 2: Extra effort to make the definition of $d_s$ more accessible** We agree with the reviewer that a purely formal introduction of the test statistic $d_s$, which plays a central role in the paper, is not ideal. 
We are happy to use some of the additional space in the revised version of the manuscript to add an additional – less formal and more intuitive – description of $d_s$. We think this is an excellent point and thank the reviewer for raising it. **Questions:** First of all, thank you for your close reading, which made it possible to find three notational and grammatical typos scattered throughout the paper. We appreciate this very much. We agree that your proposed definition of the empirical measure is more direct and easier to access. We will change this accordingly in the revised version. Thank you very much for pointing this out! We thank you once more for your review and your concrete suggestions and questions allowing us to further improve our paper.
Summary: The authors propose a method for multicriteria evaluation of classifiers which is more informative than the Pareto front. The new GSD-front is based on the previously proposed generalized stochastic dominance ordering (GSD) for classifiers. The authors also provided a sound inference framework, including hypothesis tests for whether a new classifier would be in the GSD-front of a benchmark. Strengths: 1. The method is sound and extremely useful 2. The paper is very well written; notation is consistent throughout 3. Given the large (and increasing) number of potential models that can be applied to any given task, this method is very significant for identifying which models tended to be among the best for other similar tasks 4. The hypothesis tests derived from the framework are also very useful to add significance to these benchmarks 5. All the information needed to fully understand the paper is in the main body of text Weaknesses: 1. There could perhaps have been a toy example to show the differences between GSD-front and Pareto-front in a more didactic way 2. The method seems computationally expensive. The pmlb_permutation_tests.R script needs 5 days to run and this does not include training of the evaluated models, as these are collected results Technical Quality: 4 Clarity: 3 Questions for Authors: 1. How does the running time of a Pareto-front benchmark compare with a GSD-front one? 1. If the Pareto-front is much cheaper to obtain, would it make sense to start with it, and only look for what classifiers in that pre-selection would be in the GSD-front? 2. How were different hyper-parameter values treated in the experiments? Were all results of each classifier for a dataset aggregated or did the authors use specific settings for each classifier, e.g. default values from the classifier's implementation?
Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First of all, we would like to thank the reviewer for their time and thoughtful consideration of our paper. We are grateful for the suggestion to add a toy example showing the differences between GSD-front and Pareto-front in a more didactic way and the suggestion to add more details on hyperparameter tuning in the PMLB experiment. We follow both suggestions in the revision of our manuscript (the example will be added to the main text, the tuning details to the corresponding paragraph in the appendix). We are very pleased that the reviewer appreciates our method as “extremely useful” and the paper as “very well written”, with notation being “consistent throughout”. We will now address the reviewer's comments in turn: **Weakness 1: Missing toy example** We agree that a concrete (minimal) example could help to quickly and intuitively grasp the differences between Pareto and GSD-Front. We are happy to use parts of the additional page to include such an example. Specifically, we plan to build the example directly after Definition 1 to illustrate the Pareto front and then continue it after Definition 6 to contrast the GSD front. To further strengthen the reader's intuition right from the start, we also plan to refer to the "Method comparison" from Section 5.1 at the end of these examples. Thanks for the proposal, we think this is an excellent idea! **Weakness 2: Computation time** We agree that the computation time is relatively long. However, we believe that this is not really a major problem for the settings in which our approach is intended to be applied. Firstly, we are in a benchmark setup, so our method only needs to be run once, not several times. This is in contrast to the intensive hyperparameter tuning of models that is often part of everyday work. Even more important, our method can provide the information that an algorithm is dominated by others and, therefore, should not be considered in later tasks. 
So, starting with a rather expensive benchmark experiment can save a lot of time later on, in particular, as the results provided by our method come with (additionally robustified) inferential guarantees and are not purely descriptive. Secondly, the expensive part is the computation of the test (as it is a permutation test) and not the computation of the empirical GSD front. The computation of the test is indeed expensive, but unlike other benchmarking approaches, we obtain from it a statistically sound inferential statement. If we restrict our analysis to the empirical, non-inferential part, the computation time is much smaller. **Question 1: Pareto-front vs empirical GSD-front** We think the idea to first compute the Pareto-front (as a kind of pre-processing step) and then proceed with our GSD-based analysis is excellent. Thank you very much for that suggestion! We will include this idea in the new paragraph with end-user recommendations (see the answer to 1fNM) in the revised manuscript. Concerning the computation of the Pareto vs GSD-front: Since the Pareto-front simply checks whether one classifier is strictly dominated by another, it reduces to counting observed dominances. This can be done in milliseconds. As we pointed out above, the computation of the empirical GSD-front is much faster than the entire inference part, but still more expensive than the computation of the Pareto-front. Similar to above, we think that the further insight that the empirical GSD-front provides more than justifies the higher computational time – in particular, as it can save computation time later on. **Question 2: Hyper-parameter** Thanks for this important question. In the case of the Open ML platform, we used the performance evaluation provided by the OpenML library, see lines 838-846.
In the case of the PMLB benchmarking suite, we tuned the six classifiers’ hyperparameters on a (multivariate) grid for each of the 62 datasets and eventually computed all evaluation metrics through 10-fold cross-validation, see lines 999-1008 in the paper for more details. Concretely, we tuned
- “sigma” and “C” for support vector machines
- “mtry”, “splitrule” and “min.node.size” for ranger
- “k” for knn
- “alpha” and “lambda” for glmnet
- “CF”, “M”, and “U” for CART (J48)
- “eta” and “k” for compressed rule ensembles (cre)

We thank you once more for your review and your concrete suggestions and questions allowing us to further improve our paper.
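The strict-dominance check underlying the Pareto-front computation discussed in this rebuttal can be sketched in a few lines. This is a hedged illustration only (toy scores and a higher-is-better orientation are assumed for every metric), not the authors' implementation:

```python
def pareto_front(scores):
    """Return indices of rows not strictly dominated by any other row.

    scores: list of equal-length rows, one row per classifier, one
    column per metric; higher is assumed to be better. Row i is
    dominated if some row j is >= on every metric and strictly >
    on at least one.
    """
    front = []
    for i, row in enumerate(scores):
        dominated = any(
            all(b >= a for a, b in zip(row, other))
            and any(b > a for a, b in zip(row, other))
            for j, other in enumerate(scores) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical toy data: 3 classifiers, 2 metrics (accuracy, 1/runtime).
scores = [[0.90, 0.5],
          [0.85, 0.4],   # dominated by the first classifier
          [0.80, 0.9]]
print(pareto_front(scores))  # -> [0, 2]
```

The pairwise scan is quadratic in the number of classifiers but linear in the number of datasets and metrics, which is why, as the rebuttal notes, it runs in milliseconds compared to the permutation test.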
Summary: This submission studies the problem of comparing multiple classifiers under multiple evaluation criteria. It presents the construction of the GSD-front, i.e., the set of GSD-undominated classifiers, and an empirical variant called the $\epsilon$-empirical GSD-front. Then, theoretical aspects of the GSD-front and the $\epsilon$-empirical GSD-front, such as statistical consistency, and the inclusion concerning the respective Pareto-front, are investigated. Two statistical tests, namely the static GSD-test and the dynamic GSD-test, on whether a given classifier $C \in \mathcal{C}$ is an element of the true GSD-front are proposed. Both tests are shown to be valid level-$\alpha$ tests. A robustified static GSD-test is also proposed to deal with the setting where the available data sets are drawn under non-i.i.d.-scenarios. An empirical study is conducted to assess the proposed tests. It covers 80 binary classification data sets from OpenML [70]. The set of classifiers consists of Support Vector Machine (SVM), Random Forest (RF), Decision Tree (CART), Logistic Regression (LR), Generalized Linear Model with Elastic net (GLMNet), Extreme Gradient Boosting (xGBoost), and k-Nearest Neighbors (kNN). The set of evaluation metrics consists of predictive accuracy, computation time on the test data, and computation time on the training data. Empirical evidence suggests that the GSD-approach tends to be more contrastive, compared to Pareto-front and another test that aggregates the single-metric tests as in [19]. Another empirical study is conducted using 62 datasets from the Penn Machine Learning Benchmark (PMLB). The set of classifiers consists of rule ensembles of trees (CRE) [53], CART, RF, SVM with radial kernel, kNN and GLMNet. The set of evaluation metrics consists of i) classical accuracy (metric), ii) robustness of accuracy w.r.t. noisy features (ordinal), and iii) robustness of accuracy w.r.t. noisy classes (ordinal).
The results provided by GSD-tests are discussed. Strengths: S1: The submission is well-written and organized. S2: Theoretical aspects of the GSD-front and GSD-tests are carefully investigated. S3: The authors also take into account an interesting aspect that the different evaluation metrics may have different levels of impact. Non-i.i.d.-scenarios are also taken into account. S4: Empirical evidence seems to suggest that GSD-tests can complement the existing statistical tests for comparing classifiers meaningfully. Weaknesses: W1: The authors might consider mentioning more scenarios where comparing multiple classifiers under multiple evaluation criteria is crucial. An example might be multi-label classification, where multiple evaluation metrics have been proposed to evaluate multi-label classifiers. The set of evaluation metrics can also be extended with others, such as training time, storage memory, and so on. W2: The motivation to compare multiple classifiers using evidence on multiple data sets, which can come from very different application domains, might need to be strengthened. For example, it would not be easy to convince practitioners in safety-critical applications, such as lung cancer detection, to use a classifier that has been shown to be promising given evidence aggregated from multiple domains. W3: Additional recommendations for the end-user might be beneficial. For example, what should one do if the GSD-front contains multiple classifiers? Technical Quality: 3 Clarity: 3 Questions for Authors: Q1: Could you elaborate more on the case where only one training data set is taken into account? This is related to the comment in W2. Q2: Regarding W3, should one randomly choose one element of the GSD-front when having to make predictions? Yet, one might consider doing ensemble learning on the set of promising classifiers. In safety-critical applications, such a strategy might lower the interpretability and explainability of the predictive system.
Do you have any recommendations in such cases? Q3: Could you also discuss the Pareto-front and results of the test that aggregates the single-metric tests as in [19] for the empirical study on PMLB data sets? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to "Weaknesses" and "Questions" for detailed comments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and thoughtful consideration. We appreciate the suggestions to include end-user recommendations and comparative studies for the PMLB datasets. We follow both in the revision (recommendations go to the main paper, studies to the appendix). We are pleased you find our paper "well-written and organized" and our framework "carefully investigated". We now address your comments: **W1:** We are grateful for your idea to add more scenarios, which we do in the revised version. The multilabel classification case is very interesting: For classes $i=1,...,K$, one can use a collection of classical metrics for multiclass classification like Hamming distance, Jaccard similarity etc. Within our approach, a more direct way may be even more interesting: Take a separate metric for every class (e.g., accuracy) and then treat all metrics together as a K-dimensional multi-cardinal metric. With the flexibility of the $R_2$ relation, one can implement different strengths of the underlying scale of measurement, e.g. by imposing only cardinal commensurability within each class, or by additionally requiring interclass commensurability (which presumably leads to an approach very similar to using only a classical (cardinal) one-dimensional metric for the entire multiclass performance). This can be accompanied, similar to our experiments, with further criteria such as training time or memory. **W2:** We fully share your concern about benchmarking on datasets from multiple domains different from the one of intended use, e.g., lung cancer detection. We consider this a very relevant problem in benchmarking generally (not only limited to our GSD-analysis). It is the very reason why we robustified our tests against non-i.i.d. selection of datasets in section 4.2. We observe it to be common practice in benchmarking to simply test methods on huge benchmark suites that include datasets from a myriad of domains. 
Statistically, such a collection can hardly be considered *i.i.d.* from the population of interest. Our robustification allows for valid conclusions even if some heterogeneity/asymmetry in the selection processes is allowed, i.e., if it deviates from *i.i.d*. E.g., for the OpenML study, the observed conclusions remain valid under contamination (i.e. can come from very different data domains) of 7, 8, 11 and 11 out of 80 datasets respectively (lines 335-344). This exactly addresses your concern: Our methodology allows for valid statistical conclusions even if the datasets are not *i.i.d*. Finally, from a different angle, this point is an excellent motivation to extend our results to stratified and regression-like situations, as mentioned in Section 6. Clearly, the area from which a dataset originates is an important meta-information to include. **Q1:** Our method can also be applied to one dataset. It then translates to the partial order resulting from the classifiers' performances w.r.t. a multidimensional metric on this dataset, and, in the further special case of only one metric, even results in a total order. This is a sensible thing to do if one is interested in a purely descriptive (as opposed to inferential) analysis of classifier performance on a single dataset. Then, however, much of our conceptual work regarding statistical properties of our estimator and tests is not needed; the tiny sample size does not allow for meaningful inferences. Nevertheless, such an application of our methodology is possible in principle and leads to the known special cases. **Q2 and W3:** We agree it is good to add a paragraph with end-user recommendations. We use part of the extra page for such an explanation and will add it to the beginning of Section 5. Specifically, we plan to explain that our method is not primarily intended to identify the best classifier for a problem class. (In this sense, it is not a problem if the GSD front contains more than one element.) 
It is rather for checking whether a newly proposed classifier for a certain problem class can potentially improve on the state-of-the-art classifiers, or whether it disqualifies itself from the outset. Furthermore, we want to emphasize that our framework allows us to provide such statements with inferential guarantees by means of appropriately constructed statistical tests, and even to examine these inferential guarantees in terms of their robustness to deviations from the i.i.d assumption. We are very grateful for your idea about ensemble learning with the GSD-front. Although our original motivation was different, this sounds practically quite rewarding, but of course needs more careful attention than can be paid in the context of this paper under review. Speculating a bit, it seems tempting to apply aggregation rules to the outputs of the different classifiers in the GSD front. Most in line with our work is to rely then on a credal point of view, collecting, for every unit, all the predicted classes (similar to E-admissibility in credal decision making). In safety-critical applications, this may help to avoid overoptimism, and it informs the user about whether there is consensus among the non-dominated classifiers. **Q3:** We add the results of the Pareto-analysis and the aggregated single-metric tests for the PMLB study to the appendix. Also for the PMLB case, the Pareto front contains all classifiers and is not informative. The pdf page presenting the results of the single-metric tests as in [19] is attached to the “global response” above. Furthermore, note that – for all three metrics considered – the Friedman rank sum test rejects the global null of no differences such that we can conduct (two-sided) pairwise post hoc tests (α=0.05) with no difference as their null. The interpretation of the results is in complete analogy to the one of the OpenML study in Ap. C.1.3. 
We thank you once more for your review and your concrete suggestions and questions allowing us to further improve our paper. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. I think the response indeed complements the submission. The additional discussion on the case where GSD-front contains multiple classifiers might need to be further elaborated as it might be of practical relevance. In different applications of credal decision making, I guess one might expect that there are experts who know and can provide the true classes (by possibly paying additional costs). Yet, I think whether one should expect such experts in model selection/comparison might require more careful attention. However, I think the motivation for avoiding overoptimism is also interesting. After due consideration, I raised the rating to $7$. --- Reply to Comment 1.1.1: Title: Thanks for the reply Comment: Thank you for your reply and for taking into account our rebuttal in your assessment of the paper! We will include a discussion about the case where the GSD front contains multiple classifiers in the new paragraph with end-user recommendations.
Rebuttal 1: Rebuttal: **Global response** We sincerely thank all reviewers for their thorough, high-quality, and detailed assessment of our manuscript. We are encouraged by the very positive, affirmative reviews and feel most grateful for the very precise and constructive suggestions on how to improve our paper further. The reviews unanimously underscore the novel and innovative nature of our proposed benchmark methodology via the GSD-Front. **Reviewer 1fNM** praises the paper for its "well-written and organized" presentation, emphasizing the thorough investigation into the theoretical aspects of the GSD-front and GSD-tests. The reviewer further welcomes the inclusion of different evaluation metrics and the consideration of non-i.i.d. scenarios as a crucial aspect of the methodology, noting that "the empirical evidence suggests that GSD-tests can complement the existing statistical tests for comparing classifiers meaningfully." **Reviewer 3sEe** highlights the method's soundness and utility by calling it "extremely useful" and “very significant”. The reviewer further praises the notation as “consistent throughout”. Generally, 3sEe applauds the "very well written" nature of the paper, which provides "all the information needed to fully understand the paper in the main body of text." Lastly, **Reviewer n7Tx** calls our work “innovative”, stating that the paper is "clear and easy to follow" with a methodology that "enjoys some desirable mathematical properties", which are “well presented”. The reviewer also appreciates the extensive empirical evidence in favor of our method, stating that it "seems to work well on the selected benchmarks.” Generally, neither the theoretical soundness nor the practical relevance ("excellent" and “extremely useful”: Reviewer 3sEe) was questioned. The reviews rather focused on didactical aspects. As described in detail in the individual responses, we are most confident that we can properly address all issues in the camera ready version. 
The additional page will mainly be used to follow the reviewers’ suggestions to add some further didactical material illustrating and demonstrating our approach. In particular, we will include a paragraph with additional recommendations for the end-user (**Reviewer 1fNM**), add an example elaborating the differences between GSD-front and Pareto-front (**Reviewer 3sEe**), as well as extend the discussion of related work and additionally provide a more intuitive description of the central test statistic (**Reviewer n7Tx**). **Attached pdf page** Please note that the pdf page attached to this global comment is specific to Q3 of Reviewer 1fNM. The context of the contents of the page is explained in our corresponding answer. Pdf: /pdf/44d5b8d6dfc4b5e30a349cdc0a6ba0ae10fdbd95.pdf
NeurIPS_2024_submissions_huggingface
2024
Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning
Accept (poster)
Summary: The paper introduces the Star-Agents framework, an innovative system designed to enhance the quality of datasets used for instruction-tuning of large language models (LLMs). The framework addresses the challenges of collecting high-quality and diverse data by automating the process through multi-agent collaboration and assessment. The optimized datasets resulted in significant performance improvements. The research contributes to an advanced automated data optimization system that refines instruction samples with suitable complexity and diversity for LLMs, leading to more efficient model alignment and improved performance. Strengths: 1. The paper presents a novel approach to enhancing the quality of datasets for instruction-tuning of LLMs. The concept of using multiple agents to generate diverse instruction data and the dual-model evaluation strategy for data selection are innovative. 2. The paper includes comprehensive experiments with different models, showcasing the effectiveness of the Star-Agents framework. 3. The reported performance improvements of up to 12% on average, and over 40% in specific metrics, highlight the practical significance of the research. Weaknesses: 1. This paper utilizes multi-agent collaboration, but it seems that the paper does not explore the impact of different numbers of agents or agent pairs on the results. The authors should also specify the exact number of agents used in the experiments in the "Implementation Details" section. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could you provide some specific cases? For example, demonstrate how the outputs from different agent pairs differ. 2. What impact would removing some or adding additional agent pairs have on the results? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have mentioned the limitations of this work in the paper and have also suggested potential research directions for addressing these limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the constructive comments. **Q1**: The impact of different numbers of agents or agent pairs on the results. **A1**: We have conducted experiments to explore the impact of varying the number of agent pairs on the results. As shown in Table 1, we observed that as the number of agent pairs decreases, the model's performance exhibits a corresponding decline. This finding highlights the importance of the number of agents in achieving optimal results. We will include these experimental results and a discussion of their implications in the revised manuscript.

Table 1. Performance using different agent pairs

| model | agent-pair | Vicuna-bench | Wizardlm-bench | MT-bench | Average |
| ------------------------------ | ---------- | ------------ | -------------- | -------- | ------- |
| Pythia-1B-evol_instruct | 1 | 5.07 | 3.55 | 2.56 | 3.73 |
| Pythia-1B-star_instruct_2_pair | 2 | 5.23 | 3.88 | 2.69 | 3.93 |
| Pythia-1B-star_instruct_5_pair | 5 | 5.29 | 3.87 | 2.78 | 3.98 |
| Pythia-1B-star_instruct_10_pair | 10 | 5.93 | 3.90 | 2.69 | 4.17 |

**Q2**: The authors should also specify the exact number of agents used in the experiments in the "Implementation Details" section. **A2**: Thank you for your insightful suggestion. In our experiments, we utilized a total of 10 agent-pairs. We will include this specific information in the "Implementation Details" section of the revised manuscript to ensure clarity and completeness. **Q3**: Could you provide some specific cases? For example, demonstrate how the outputs from different agent pairs differ. **A3**: We have included a specific case in the table below to briefly demonstrate how the outputs from different agent pairs differ. Due to the character limitations of this rebuttal, more details are provided in the supplementary PDF attached to the global rebuttal. We kindly ask you to review the detailed examples there.
- Instruction: Can you suggest a highly specialized conference dedicated to exploring advanced machine learning algorithms and their practical applications in the vast field of artificial intelligence? We are looking for keynote speakers who are prominent figures in the field of AI research, and the event should provide attendees with a chance to participate in immersive, hands-on workshops, as well as poster sessions highlighting groundbreaking research in this field. Additionally, the conference should offer insights into the latest trends and innovations in the world of machine learning. - Agent-Pair A: I recommend checking out the Conference on Neural Information Processing Systems (NeurIPS). It is one of the most prestigious conferences in the field of machine learning and artificial intelligence. NeurIPS features keynote speakers from top universities and research institutions, as well as hands-on workshops and poster sessions showcasing cutting-edge research in the field. The conference also offers a chance to learn about the latest trends and innovations in machine learning. - Agent-Pair B: The conference you are looking for is the International Conference on Machine Learning (ICML). It is a highly specialized conference that focuses on exploring advanced machine learning algorithms and their practical applications in the field of artificial intelligence. The conference is organized by the International Machine Learning Society (IMLS) ....... - Agent-Pair C: I would highly recommend the NeurIPS (Conference on Neural Information Processing Systems) conference for your needs. NeurIPS is one of the largest and most prestigious annual conferences in machine learning and artificial intelligence. It attracts leading researchers, engineers, and students in the field, providing an ideal setting for sharing new research, ideas, and innovations.......... - Agent-Pair D: Certainly! 
I recommend the "Machine Learning and Artificial Intelligence Conference" which will be held in San Francisco, California. The conference will feature keynote speakers such as Andrew Ng, Yoshua Bengio, and Geoffrey Hinton, who are all renowned experts in the field of AI research. Attendees will have the opportunity to participate in immersive, hands-on workshops, as well as poster sessions highlighting groundbreaking research in machine learning. The conference will also offer insights into the latest trends and innovations in the world of machine learning. - Agent-Pair E: One highly specialized conference that meets your criteria is the "International Conference on Machine Learning" (ICML). ICML is one of the premier events in the field of machine learning and attracts top researchers, practitioners, and industry leaders. The conference features keynote speeches by renowned experts, in-depth workshops, and numerous poster sessions showcasing cutting-edge research. Here are some key details about ICML:........Another conference that you might consider is "NeurIPS" (Conference on Neural Information Processing Systems), which is another major event in the AI and machine learning community. NeurIPS has a similar format and focus on advanced research and practical applications. - Agent-Pair F: ......... **Q4**: What impact would removing some or adding additional agent pairs have on the results? **A4**: Thank you very much for your insightful suggestion. We have conducted additional experiments to investigate the impact of varying the number of agent pairs on our model's performance. As illustrated in Table 1, we observed that reducing the number of agent pairs tends to result in a gradual decline in the model's effectiveness. This trend indicates that the performance of our model is positively correlated with the number of agent pairs. We hope these findings address your concerns satisfactorily.
--- Rebuttal Comment 1.1: Title: Response to the authors Comment: Thanks to the authors for resolving my concern. For Q1, from the table, it seems that 10 agent pairs may not be the number that achieves the top performance. What still concerns me is the balance of performance and cost (if more agent pairs cost more). --- Reply to Comment 1.1.1: Title: Thanks for your feedback! Comment: Dear Reviewer 4rks, Thank you for your valuable feedback. We understand your concern regarding the balance between performance and cost when increasing the number of agent pairs. To address this, we conducted additional experiments. As shown in Table 2, we observed that the performance improvement starts to plateau when the number of agent pairs increases to 8. While the model's performance is highest with 10 agent pairs, the difference between 8 and 10 pairs is marginal. Considering the trade-off between performance and cost, we believe that using 8–10 agent pairs is a reasonable and effective choice. This range provides a good balance, ensuring high performance while keeping the cost manageable.

Table 2. Performance using different agent pairs

| model | agent-pair | Vicuna-bench | Wizardlm-bench | MT-bench | Average |
| ------------------------------- | ---------- | ------------ | -------------- | -------- | ------- |
| Pythia-1B-evol_instruct | 1 | 5.07 | 3.55 | 2.56 | 3.73 |
| Pythia-1B-star_instruct_2_pair | 2 | 5.23 | 3.88 | 2.69 | 3.93 |
| Pythia-1B-star_instruct_5_pair | 5 | 5.29 | 3.87 | 2.78 | 3.98 |
| Pythia-1B-star_instruct_7_pair | 7 | 5.48 | 3.92 | 2.79 | 4.06 |
| Pythia-1B-star_instruct_8_pair | 8 | 5.69 | 3.92 | 2.78 | 4.13 |
| Pythia-1B-star_instruct_9_pair | 9 | 5.80 | 3.91 | 2.77 | 4.16 |
| Pythia-1B-star_instruct_10_pair | 10 | 5.93 | 3.9 | 2.69 | 4.17 |

--- Rebuttal 2: Title: Thanks for the comments. Comment: Dear Reviewer 4rks, We sincerely appreciate the time you’ve taken to provide valuable feedback on our paper.
In our rebuttal, we have thoroughly addressed all of your initial concerns and included the requested experimental results. If you have any further questions or concerns, we would be happy to discuss them with you. Additionally, we welcome any new suggestions or comments you may have!
Summary: The paper introduces the "Star-Agents" framework, designed to optimize data for instruction tuning in large language models (LLMs). This system automates the process of enhancing dataset quality by employing a multi-agent approach. Empirical studies demonstrate that the optimized datasets lead to significant performance improvements in models such as Pythia and LLaMA. Strengths: 1. The motivation is clear and the method is easy to follow. 2. The framework's effectiveness is validated through extensive experiments, showing significant performance improvements in various benchmarks. Weaknesses: 1. What is the overhead of this proposed method, like wall-clock time? 2. Stability: can this method scale to large-scale dataset optimization, like web-crawl data? Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the constructive comments. **Q1**: What is the overhead of this proposed method, like wall-clock time? **A1**: Thank you for your insightful question regarding the overhead of the proposed method. The computational overhead of our proposed method primarily depends on the inference computational load of the various Large Language Models (LLMs) used. To provide a clearer understanding, let us consider specific LLMs used in this paper:

- LLM Qwen-14B: During inference with a sequence length of 256 tokens, the computational load is approximately 4×10^12 floating point operations (FLOPs).
- LLM Phi-2-2.7B: For the same sequence length, the inference computational load is around 7×10^11 FLOPs.
- LLM ChatGPT: Given that ChatGPT is a proprietary model, we lack precise data on its computational requirements.

Nonetheless, for estimation purposes, we can approximate the overall computational cost. Assuming an iterative process involving multiple LLMs (e.g., 10 LLMs) and a large dataset (e.g., 70,000 samples), the total computation without our method can be roughly estimated as:

4×10^12 FLOPs (Qwen-14B) × 10 LLMs × 70,000 samples = **2.8×10^18 FLOPs**

In contrast, when Agent-Pairs Sampling and the Instruction Memory Bank are employed, only 5 of the 10 LLMs generate data; the total computation is therefore significantly reduced, roughly estimated as:

4×10^12 FLOPs (Qwen-14B) × **5** LLMs × 70,000 samples = **1.4×10^18 FLOPs**

This estimation highlights the significant computational requirements, largely dominated by the inference processes of the LLMs. Other components of our method contribute negligible computational overhead compared to the inference load of the LLMs. **Q2**: stability: can this method be scalable to large scale dataset optimization, like web-crawl data? **A2**: We appreciate your inquiry about the scalability of our method.
Our approach focuses on enhancing the quality and diversity of instruction datasets, and it is inherently scalable to larger datasets. The methodology we proposed is not constrained by dataset size, so it can be effectively applied to extensive datasets. Specifically, there are existing studies that leverage large models to synthesize and process pre-training data [r1]. Our method can be integrated with these existing approaches to optimize and handle large-scale pre-training datasets such as web-crawl data. This integration is a promising area for our future research, and we are committed to exploring this direction further. References: [r1] Nemotron-4 340B Technical Report[J]. arXiv preprint arXiv:2406.11704, 2024. --- Rebuttal Comment 1.1: Comment: Thanks for your response and it would be great if we can add this in the camera-ready version. I will keep my score. --- Reply to Comment 1.1.1: Title: Thanks for your feedback! Comment: Dear Reviewer b1DV, Thanks for your feedback and valuable suggestions! We will ensure that the recommended additions are included in the camera-ready version of the paper. Regards --- Rebuttal 2: Title: Thanks for the comments. Comment: Dear Reviewer b1DV, We sincerely appreciate the time you took to provide valuable comments on our paper. In our rebuttal, we have addressed all of your initial concerns and included the requested experimental results. If you still have any questions or concerns, we would be glad to discuss them with you. Additionally, we welcome any new suggestions or comments you may have! Regards
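The back-of-the-envelope FLOPs estimate in A1 of the rebuttal above reduces to a single multiplication (per-inference cost × active LLMs × dataset size). As a quick sanity check — the numbers are copied from the rebuttal, and the per-inference FLOPs constant is the authors' approximation, not a measured value:

```python
def total_flops(per_inference_flops: float, num_llms: int, num_samples: int) -> float:
    """Rough total compute: per-inference cost x active LLMs x dataset size."""
    return per_inference_flops * num_llms * num_samples

QWEN_14B_FLOPS = 4e12  # authors' estimate for one 256-token inference

full_pipeline = total_flops(QWEN_14B_FLOPS, num_llms=10, num_samples=70_000)  # 2.8e18
with_sampling = total_flops(QWEN_14B_FLOPS, num_llms=5, num_samples=70_000)   # 1.4e18
```

Halving the number of active LLMs per round halves the dominant inference cost, which is the 2.8×10^18 → 1.4×10^18 FLOPs reduction claimed in A1.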
Summary: The Star-Agents framework presents an advanced approach for enhancing data quality in instruction-tuning of large language models (LLMs) through multi-agent collaboration and automated assessment. By generating diverse instruction data using various LLM agents and evaluating it with a Dual-model metric, this framework ensures both diversity and quality. It dynamically refines its processes to prioritize more effective agents, leading to significant performance improvements. Experimental results show an average increase of 12% in model performance, with some metrics seeing increases up to 40%. This method addresses common issues in data generation such as lack of stylistic variety and overly complex datasets, making it a robust solution for optimizing LLM training datasets. Strengths: The paper stands out for its innovative dual-model scoring system, which refines data quality assessment for large language models (LLMs) and is a key contribution to the field. It demonstrates state-of-the-art results, with the framework achieving an average performance increase of 12% and up to 40% in specific metrics, highlighting its effectiveness. Additionally, the paper is clearly written and well-structured, making complex concepts accessible and easy to follow, which enhances its academic impact and usability. Weaknesses: I could not find any weaknesses apart from those mentioned in the questions below. Technical Quality: 3 Clarity: 4 Questions for Authors: What is the size and composition of the datasets used in your experiments? Have you compared your framework's performance using standard benchmarks like MMLU or those from Hugging Face? What would be the impact of using only Mistral-ChatGPT for 70,000 iterations on the diversity and quality of the generated data? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors acknowledge a limitation in their framework, noting that it is currently designed for optimizing single-turn interactions.
They suggest that extending this approach to multi-turn scenarios could further enhance its applicability and effectiveness, addressing more complex dialogue systems where contextual continuity and conversational depth are critical. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments. **Q1**: What is the size and composition of the datasets used in your experiments? **A1**: The datasets used in our experiments consist of 70,000 samples based on the Alpaca Evol-Instruct dataset [r1]. Each sample is paired with an instruction and a corresponding response. Here is an example: - Instruction: Can you suggest a highly specialized conference dedicated to exploring advanced machine learning algorithms and their practical applications in the vast field of artificial intelligence? We are looking for keynote speakers who are prominent figures in the field of AI research, and the event should provide attendees with a chance to participate in immersive, hands-on workshops, as well as poster sessions highlighting groundbreaking research in this field. Additionally, the conference should offer insights into the latest trends and innovations in the world of machine learning. - Response: I recommend checking out the Conference on Neural Information Processing Systems (NeurIPS). It is one of the most prestigious conferences in the field of machine learning and artificial intelligence. NeurIPS features keynote speakers from top universities and research institutions, as well as hands-on workshops and poster sessions showcasing cutting-edge research in the field. The conference also offers a chance to learn about the latest trends and innovations in machine learning. **Q2**: Have you compared your framework's performance using standard benchmarks like MMLU or those from Hugging Face? **A2**: We evaluated our framework on the standard benchmarks from the Open LLM Leaderboard (including MMLU). As shown in Table 1, although the training datasets we used in our paper focus more on conversational performance, our framework still achieved improvements across multiple standard test sets. Table 1.
Performance on Open LLM Leaderboard

| Model | ARC | HellaSwag | MMLU | TruthfulQA | Average |
| --- | --- | --- | --- | --- | --- |
| Wizardlm | 51.60 | 77.7 | 42.7 | 44.7 | 54.18 |
| Llama-2-7B-evol_instruct | 51.88 | 76.70 | 45.76 | 46.10 | 55.11 |
| Llama-2-7B-star_instruct | 54.44 | 77.64 | 46.94 | 46.13 | 56.29 |

**Q3**: What would be the impact of using only Mistral-ChatGPT for 70,000 iterations on the diversity and quality of the generated data? **A3**: Thank you for your insightful question regarding the impact of using Mistral-ChatGPT for 70,000 iterations on the diversity and quality of the generated data. We have conducted a detailed analysis to address your concerns. Firstly, we generated 70,000 data samples exclusively using the agent-pair Mistral-ChatGPT. In terms of diversity, we observed a decline in the variation of responses for the same instructions. Regarding quality, some of the generated data was of lower quality compared to using all of the agent-pairs, but it was still superior to the baseline data. To further assess the implications, we trained the model using this generated dataset. The performance metrics are presented in Table 2. The results demonstrate that relying solely on Mistral-ChatGPT for data generation negatively impacts the model's overall capability compared to using the full set of agent-pairs, but it is still superior to the baseline. This confirms the effectiveness of our approach. Table 2.
Performance of Mistral-ChatGPT agent pair

| Model | Vicuna-bench | Wizardlm-testset | MT-bench | Average |
| --- | --- | --- | --- | --- |
| Pythia-1B-evol_instruct | 5.07 | 3.55 | 2.56 | 3.73 |
| Pythia-1B-star_mistral | 5.23 | 3.88 | 2.69 | 3.93 |
| Pythia-1B-star_instruct | 5.93 | 3.90 | 2.69 | 4.17 |

References: [r1] WizardLM: Empowering large pre-trained language models to follow complex instructions[C]//The Twelfth International Conference on Learning Representations. 2024. --- Rebuttal Comment 1.1: Comment: Thank you for addressing most of my concerns. I have a few comments. "Thank you for your insightful question regarding the impact of using Mistral-ChatGPT for 70,000 iterations on the diversity and quality of the generated data. We have conducted a detailed analysis to address your concerns. Firstly, we generated 70,000 data samples exclusively using the agent-pair Mistral-ChatGPT. In terms of diversity, we observed a decline in the variation of responses for the same instructions. Regarding quality, some of the data generated was of lower quality compared to using the all of agent-pairs, but it was still superior to the baseline data. To further assess the implications, we trained the model using this generated dataset. The performance metrics are presented in the Table 2. The results demonstrate that relying on Mistral-ChatGPT for data generation negatively impacts the model's overall capability compared to using the full set of agent-pairs, but it is still superior to the baseline. This confirms the effectiveness of our approach." Here what metric did you use to calculate diversity? --- Rebuttal 2: Title: Thanks for your feedback! Comment: Dear Reviewer kHqZ, Thank you for your thoughtful feedback and valuable comments. We appreciate the opportunity to further clarify our methodology. Regarding the metric used to calculate diversity, we employed cosine similarity as the primary metric.
Specifically, we calculated the cosine similarity between the sentence embeddings of the datasets. The diversity metric is inversely related to the cosine similarity; hence, lower cosine similarity indicates higher diversity in the generated data. In our analysis, we found that the Mistral-ChatGPT dataset exhibited a cosine similarity of approximately 20.8%, which indeed suggests a higher degree of similarity (and thus lower diversity) when compared to the dataset generated using all agent pairs, which had a cosine similarity of 6.9%. We hope this clarification adequately addresses your concerns, and we are grateful for your guidance in improving the clarity and rigor of our work.
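The diversity computation described in the reply above (average cosine similarity over sentence embeddings, with lower similarity meaning higher diversity) can be sketched as follows. This is an illustration only — the embedding model and the exact aggregation are our assumptions, not the authors' reported pipeline:

```python
import numpy as np

def mean_pairwise_cosine(embeddings: np.ndarray) -> float:
    """Average pairwise cosine similarity of row-vector sentence embeddings.

    Higher values mean the responses are more alike (lower diversity).
    """
    # L2-normalize rows so that dot products are cosine similarities.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / norms
    sims = unit @ unit.T
    n = len(embeddings)
    # Average over off-diagonal entries only (exclude self-similarity of 1).
    return float((sims.sum() - n) / (n * (n - 1)))

# Toy example: three 4-dimensional "sentence embeddings".
emb = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [1.0, 1.0, 0.0, 0.0]])
score = mean_pairwise_cosine(emb)  # lower score => more diverse dataset
```

Under this reading, the rebuttal's figures say the Mistral-ChatGPT-only data scored roughly 0.208 on such a metric versus 0.069 for the all-agent-pairs data, i.e. the single-pair data was noticeably less diverse.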
Summary: The paper presents a framework for enhancing the quality of instruction datasets used for tuning large language models (LLMs). The proposed framework, Star-Agents, leverages multi-agent collaboration to generate, evaluate, and refine instruction data automatically. The approach comprises three main components: 1. **Diverse Data Generation**: The framework employs multiple LLM agents to generate varied instruction data. Each agent pair, consisting of distinct instruction and response agents, creates diverse data samples to enrich the dataset. 2. **Dual-model Evaluation**: This component introduces a two-tiered evaluation mechanism using both small and large models. The evaluation metric considers the difficulty and quality of the generated data, ensuring it is challenging yet manageable for the target model. 3. **Dynamic Refinement**: The framework dynamically adjusts the sampling probabilities of agent pairs based on their performance, promoting the selection of the most effective agents for data generation. Empirical studies demonstrate the optimized datasets generated by Star-Agents led to performance improvements, with an average increase of 12% and up to 40% in specific benchmarks such as MT-bench, Vicuna-bench, and the WizardLM testset. ------------------------------------------------------------------------------------------------------------------------------------------------------------ Thank you for your reply and I have updated my score accordingly. Strengths: **Originality** The paper introduces a framework, Star-Agents, for automatic data optimization in instruction tuning for large language models (LLMs). This originality arises from: - **Multi-Agent Collaboration**: Utilizing multiple LLM agents to generate diverse and high-quality instruction data. This approach addresses the common limitations of single-model data generation methods, ensuring a richer and more varied dataset. 
- **Dual-Model Evaluation**: Implementing a dual-model evaluation strategy that assesses both the difficulty and quality of the generated data. This innovative metric ensures the data is challenging yet manageable, enhancing the instruction tuning process. - **Dynamic Refinement**: The dynamic adjustment of sampling probabilities for agent pairs based on performance is a creative mechanism that optimizes the data generation process over time. **Quality** The paper is supported by empirical studies. The framework is tested using instruction-tuning experiments with two models, including Pythia and LLaMA. The results consistently show performance improvements, validating the quality of the proposed method. **Clarity** The paper is clearly written and well-structured, facilitating easy understanding of the proposed framework. Weaknesses: **Limited Dataset Evaluation** The paper evaluates the Star-Agents framework on a limited set of datasets, which may not fully capture the framework's robustness and generalizability. Specifically: - **Small Evaluation Datasets**: The datasets used for evaluations are relatively small. Evaluating the framework on larger, more diverse datasets would provide a better understanding of its effectiveness across different data scales and domains. **Complexity of the Method** The Star-Agents framework involves multiple stages and the use of several teacher models, which introduces complexity: - **Multiple Teacher Models**: The requirement for multiple LLM agents increases the computational cost and complexity of the implementation. Simplifying the framework or exploring methods to reduce the number of required teacher models could enhance its practical applicability. - **Three-Stage Process**: The three-pronged approach, while thorough, may be overly complex for some applications. Streamlining the process without sacrificing performance gains could make the framework more accessible and easier to implement. 
**Lack of Human Evaluations** The paper does not include human evaluations of test data, which is crucial for assessing the practical quality and usability of the model. **Absence of Standard Benchmark Results** The framework's performance is not evaluated on widely recognized benchmarks, limiting the comparability of its results: - **Standard Benchmarks**: Including evaluations on standard benchmarks such as GSM8K, HumanEval, or other well-known datasets would provide a clearer comparison with existing methods. This would help situate the Star-Agents framework within the broader context of instruction tuning and LLM performance. Technical Quality: 2 Clarity: 2 Questions for Authors: What will be the results if base models are larger, e.g. llama 13b or llama 70b? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See more details in Weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments. **Q1**: Small Evaluation Datasets **A1**: We followed several seminal works [r1, r2], and used widely accepted datasets such as MT-bench, Vicuna-bench, and the WizardLM testset for our evaluations. These datasets are commonly utilized to assess the effectiveness of LLMs' instruction-tuning capabilities. To further substantiate the robustness and generalizability of our framework, we also conducted evaluations using datasets from the standard benchmark Open LLM Leaderboard. As shown in Table 1, despite the primary focus of our datasets on instruction tuning capabilities, our method demonstrated improvements across multiple test sets. This indicates that the Star-Agents framework not only excels in instruction tuning tasks but also performs well on a broader range of datasets.

Table 1. Performance on Open LLM Leaderboard

| Model | ARC | HellaSwag | MMLU | TruthfulQA | Average |
| --- | --- | --- | --- | --- | --- |
| Wizardlm-7B | 51.60 | 77.7 | 42.7 | 44.7 | 54.18 |
| Llama-2-7B-evol_instruct | 51.88 | 76.70 | 45.76 | 46.10 | 55.11 |
| Llama-2-7B-star_instruct | 54.44 | 77.64 | 46.94 | 46.13 | 56.29 |

**Q2**: Multiple Teacher Models **A2**: Compared to a single LLM, using multiple LLMs may increase the time complexity. However, we employed a model parallelism approach during the data generation process, which does not significantly increase the latency of our method while effectively improving the results. Additionally, we have proposed a selection strategy to reduce complexity. We introduced a dynamic selection strategy for LLMs, where during each data generation process, we sample based on the past performance of the LLMs. The sampled LLMs are responsible for generating diverse data. By limiting the number of activated LLMs each time, we can ensure the quality of data generation while significantly reducing the resources required for LLM inference. In the future, we will continue to research simplified methods to enhance the efficiency of our approach.
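The dynamic selection strategy in A2 (sample agent pairs in proportion to their past performance, activating only a few per round) might look roughly like the sketch below. The softmax weighting, temperature parameter, and pair names are our own assumptions for illustration, not the paper's exact update rule:

```python
import math
import random

def sample_agent_pairs(scores, k, temperature=1.0, rng=random):
    """Sample k distinct agent pairs, weighted by a softmax over past scores."""
    remaining = dict(scores)
    chosen = []
    for _ in range(min(k, len(scores))):
        pairs = list(remaining)
        weights = [math.exp(remaining[p] / temperature) for p in pairs]
        pick = rng.choices(pairs, weights=weights, k=1)[0]  # weighted draw
        chosen.append(pick)
        del remaining[pick]  # draw without replacement
    return chosen

# Toy usage: higher-scoring pairs are activated more often across rounds.
past_scores = {"Qwen-ChatGPT": 4.2, "Mistral-ChatGPT": 3.9, "Phi2-Qwen": 3.1}
active = sample_agent_pairs(past_scores, k=2)
```

Capping `k` well below the number of available agent pairs is what bounds the per-round inference cost while still letting every pair be sampled occasionally.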
**Q3**: Three-Stage Process **A3**: Thank you for your comments. First, our three-stage method ensures diverse, high-quality data by leveraging multiple LLMs for varied data generation, using dual-model evaluation to identify high-quality, tailored data, and employing an evolution strategy to adjust sampling probabilities for efficient, task-specific data generation. Each stage serves a distinct, justified function, and their integration ensures a streamlined process in practice. We acknowledge the need to simplify the process. Our future work will focus on optimizing data generation, reducing computational costs, and exploring automated tools to enhance usability. These improvements aim to make the Star-Agents framework easier to implement without compromising performance. **Q4**: Lack of Human Evaluations **A4**: In our paper, we conducted evaluations on three different test sets using the LLM-as-a-judge method based on GPT-4, as outlined in [r1]. According to [r1], the LLM-as-a-judge approach based on GPT-4 achieves an agreement rate of 85% with human evaluations, surpassing the inter-rater agreement rate among humans, which is approximately 80%. Moreover, numerous seminal works, such as [r3] and [r4], have successfully employed GPT-4 as an LLM judge in place of human evaluations. Additionally, we conducted a manual inspection of the evaluation results and found that the trends observed with GPT-4 evaluations align with those of human evaluations. Therefore, we adopted the LLM-as-a-judge method based on GPT-4 for our evaluations in this paper. **Q5**: Standard Benchmarks **A5**: In response to this suggestion, we have conducted additional evaluations of the Star-Agents framework using the datasets available on the standard benchmark Open LLM Leaderboard. As shown in Table 1, our framework demonstrated improvements across various test sets, even though our chosen datasets were focused on dialogue performance.
We believe that these additional benchmark results will provide a clearer and more robust comparison, demonstrating the effectiveness and versatility of the Star-Agents framework in various scenarios. **Q6**: What will be the results if base models are larger, e.g. llama 13b or llama 70b? **A6**: Due to time and computational constraints, we conducted preliminary experiments on the LLaMA2-13B model, as presented in Table 2. Our method shows a significant improvement compared to the baseline. Additionally, in the MT-bench First-turn evaluation, Llama-2-13B-star_instruct achieved a score of 6.98, which is close to the Llama2-70B-Chat score of 6.99, a much larger model that has undergone SFT and DPO. Note, however, that the teacher models used in our experiments were mostly smaller than 13B. For larger models, it is essential to incorporate more powerful teacher models to further enhance their performance effectively.

Table 2. Performance of Llama2-13B

| Model | MT-bench |
| --- | --- |
| Llama-2-13B-evol_instruct | 5.88 |
| Llama-2-13B-star_instruct | 6.28 |

References: [r1] Judging llm-as-a-judge with mt-bench and chatbot arena[J]. Advances in Neural Information Processing Systems, 2024, 36. [r2] WizardLM: Empowering large pre-trained language models to follow complex instructions[C]//The Twelfth International Conference on Learning Representations. 2024. [r3] Mixtral of experts[J]. arXiv preprint arXiv:2401.04088, 2024. [r4] Minicpm: Unveiling the potential of small language models with scalable training strategies[J]. arXiv preprint arXiv:2404.06395, 2024. --- Rebuttal 2: Title: Thanks for the comments. Comment: Dear Reviewer piXU, We sincerely appreciate the time and effort you invested in providing valuable feedback on our paper. In our rebuttal, we have thoroughly addressed all of your initial concerns and included the requested experimental results. If you have any further questions or concerns, we would be happy to discuss them with you.
Additionally, we welcome any new suggestions or comments you might have! Regards
Rebuttal 1: Rebuttal: Dear Reviewers, Thank you for the overall positive reviews and helpful feedback, which we have incorporated to improve our work. If any remaining doubts exist, we encourage the reviewers to engage in the discussion so we can clarify them. If all concerns have been resolved, we kindly ask the reviewers to consider raising their scores. Best Regards, Submission 15632 Authors Pdf: /pdf/d356adfd5d39dddb76a24e4fa4425be5f4380d03.pdf
NeurIPS_2024_submissions_huggingface
2024
Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks
Accept (spotlight)
Summary: This paper introduces a new approach called Robust Prompt Optimization (RPO) for defending large language models (LLMs) against jailbreaking attacks. The key contributions are: 1. Formalizing a minimax optimization objective for ensuring safe LLM outputs under a realistic threat model involving various attacks and adaptive adversaries. 2. Proposing the RPO algorithm, which directly optimizes for the defense objective using principled attack selection and discrete optimization. 3. Developing an easily deployable suffix-based defense that achieves state-of-the-art performance in protecting LLMs against jailbreaks on benchmark datasets. The RPO method works by optimizing a set of "trigger tokens" that enforce safe outputs even under adversarial attacks. The authors evaluate RPO on recent red-teaming benchmarks and show it significantly reduces attack success rates on models like GPT-4 and LLaMA-2. Key advantages of RPO include its negligible inference cost, minimal impact on benign prompts, and ability to transfer to black-box models and unknown attacks. The paper provides both theoretical analysis and experimental results demonstrating RPO's effectiveness as a robust, universal defense against various jailbreaking techniques. Strengths: Originality: - Proposes the first formal optimization objective for defending language models against jailbreaking attacks, incorporating the adversary directly into the defensive objective. This is a novel formulation of the problem. - Introduces Robust Prompt Optimization (RPO), a new algorithm to optimize for this defensive objective using a combination of attack selection and discrete optimization. Quality: - Provides theoretical analysis showing that optimizing their proposed objective is guaranteed to improve robustness, even on unseen instructions and attacks. This gives a solid theoretical grounding. 
- Conducts extensive empirical evaluation on recent benchmarks (JailbreakBench and HarmBench), demonstrating state-of-the-art performance in reducing attack success rates. - Shows transferability of the defense to black-box models like GPT-4 and resistance to adaptive attacks, indicating the approach is robust. Clarity: - The paper is generally well-structured and clearly written. - Key ideas and contributions are summarized concisely in the introduction. - The methodology is explained step-by-step with supporting equations and an algorithm description. Significance: - Addresses an important problem in AI safety - defending large language models against jailbreaking attacks that could lead to harmful outputs. - Achieves state-of-the-art results on reducing attack success rates (down to 6% on GPT-4 and 0% on Llama-2). - The proposed defense is lightweight and easily deployable as a suffix, making it practical for real-world implementation. - The approach is model-agnostic and transfers well to different LLMs, including closed-source models, increasing its potential impact. Overall, this paper makes significant contributions in formulating and addressing the challenge of defending LLMs against jailbreaking attacks. The combination of theoretical grounding, novel algorithmic approach, and strong empirical results on challenging benchmarks makes this work quite impactful for the field of AI safety and robustness. Weaknesses: 1. Limited discussion of computational costs: - The paper doesn't provide details on the computational resources required for RPO optimization. - It's unclear how long it takes to generate the defensive suffix or how this scales with different model sizes. - Actionable improvement: Include a section on computational requirements, comparing RPO's runtime and resource usage to existing defenses and baseline LLM inference. 2. 
Lack of ablation studies: - The paper doesn't explore the impact of different components of RPO (e.g., attack selection frequency, batch size, number of iterations). - Actionable improvement: Conduct ablation studies to show how each component contributes to the overall performance and to guide practitioners in tuning these hyperparameters. 3. Limited exploration of potential negative impacts: - While the paper focuses on defending against harmful outputs, it doesn't discuss potential unintended consequences of the defense mechanism. - For instance, could RPO inadvertently block legitimate but sensitive queries? - Actionable improvement: Include a section on potential limitations and negative impacts, with empirical analysis on false positive rates for benign but sensitive queries. 4. Insufficient comparison to other optimization-based defenses: - While the paper compares to some existing defenses, it doesn't thoroughly compare to other optimization-based approaches in adversarial robustness literature. - Actionable improvement: Include comparisons to adversarial training methods adapted for language models, such as those proposed by Ziegler et al. (2022) in "Adversarial Training for High-Stakes Reliability". 5. Limited exploration of transfer learning: - While the paper shows transfer to GPT-4, it doesn't explore how well the defense transfers between models of different sizes or architectures. - Actionable improvement: Conduct experiments on transfer learning between models of varying sizes (e.g., from smaller to larger models) and different architectures (e.g., from decoder-only to encoder-decoder models). 6. Lack of human evaluation: - The paper relies primarily on automated metrics for evaluation. - It's unclear how the defended model's outputs are perceived by human users in terms of safety and quality. - Actionable improvement: Conduct a human evaluation study to assess the perceived safety and quality of outputs from models with and without RPO defense. 7. 
Limited discussion on the choice of loss function: - The paper uses log probability as the loss function but doesn't justify this choice or explore alternatives. - Actionable improvement: Provide a justification for the chosen loss function and experiment with alternative loss functions (e.g., KL-divergence, earth mover's distance) to see if they yield better results. 8. Insufficient analysis of the learned defensive suffixes: - The paper doesn't provide an in-depth analysis of the structure or content of the learned defensive suffixes. - Actionable improvement: Include a section analyzing the learned suffixes, perhaps using interpretability techniques to understand what patterns the defense is learning. 9. Limited exploration of multi-turn interactions: - The paper focuses on single-turn interactions, but many real-world scenarios involve multi-turn dialogues. - Actionable improvement: Extend the evaluation to multi-turn scenarios to assess how well the defense holds up over extended interactions. 10. Lack of discussion on potential adaptive attacks: - While the paper mentions resistance to adaptive attacks, it doesn't explore specific adaptive strategies an attacker might employ against RPO. - Actionable improvement: Include a section on potential adaptive attacks against RPO and empirically evaluate the defense's performance against these hypothetical attacks. These specific improvements would strengthen the paper's contribution and provide more comprehensive insights into the proposed defense mechanism. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Computational Resources and Scalability: Question: What are the computational requirements for RPO optimization? How does the runtime scale with model size and dataset size? Suggestion: Provide a detailed analysis of computational costs, including time and hardware requirements for different model sizes. 2. 
Hyperparameter Sensitivity: Question: How sensitive is RPO to its hyperparameters (e.g., attack selection frequency, batch size, number of iterations)? Suggestion: Conduct and present an ablation study showing the impact of different hyperparameter choices on the defense's effectiveness. 3. False Positive Analysis: Question: Does RPO inadvertently block legitimate but sensitive queries? What is the false positive rate? Suggestion: Perform an analysis on a set of benign but potentially sensitive queries to assess any unintended blocking. 4. Comparison to Adversarial Training: Question: How does RPO compare to adversarial training methods adapted for language models? Suggestion: Include a direct comparison with adversarial training approaches, particularly those designed for language models. 5. Transfer Learning Capabilities: Question: How well does the RPO defense transfer between models of different sizes or architectures? Suggestion: Conduct experiments showing transfer performance between various model sizes and architectures. 6. Human Evaluation: Question: How do human evaluators perceive the safety and quality of outputs from RPO-defended models compared to undefended models? Suggestion: Conduct a human evaluation study and present the results. 7. Loss Function Choice: Question: Why was log probability chosen as the loss function? Have alternative loss functions been considered? Suggestion: Provide justification for the chosen loss function and experiment with alternatives like KL-divergence or earth mover's distance. 8. Analysis of Learned Suffixes: Question: What patterns or structures are present in the learned defensive suffixes? Suggestion: Perform an in-depth analysis of the learned suffixes, possibly using interpretability techniques. 9. Multi-turn Interactions: Question: How does RPO perform in multi-turn dialogue scenarios? 
Suggestion: Extend the evaluation to include multi-turn interactions to assess the defense's effectiveness over extended conversations. 10. Adaptive Attacks: Question: What specific adaptive strategies might an attacker employ against RPO, and how does the defense perform against these? Suggestion: Outline potential adaptive attacks and empirically evaluate RPO's performance against them. 11. Impact on Model Performance: Question: Does the RPO defense impact the model's performance on non-adversarial tasks? Suggestion: Evaluate the defended model on standard language modeling benchmarks to assess any potential degradation in general performance. 12. Generalization to Other Types of Attacks: Question: How well does RPO generalize to types of attacks not seen during training? Suggestion: Test the defense against a held-out set of novel attack types not used during optimization. 13. Ethical Considerations: Question: Are there any potential ethical issues or misuse scenarios associated with deploying RPO in real-world applications? Suggestion: Include a discussion on the ethical implications and potential misuse of the technology. 14. Integration with Existing Systems: Question: How easily can RPO be integrated into existing language model deployment pipelines? Suggestion: Provide guidelines or a case study on integrating RPO into a typical LLM deployment setup. 15. Longevity of the Defense: Question: How long is the RPO defense expected to remain effective as new attack methods are developed? Suggestion: Discuss the expected longevity of the defense and propose strategies for keeping it up-to-date with evolving attacks. These questions and suggestions aim to address key aspects of the paper that could benefit from further clarification or exploration, potentially changing opinions on the work's impact and completeness. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors adequately addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful feedback and positive review. We're pleased that the reviewer found our work to have *significant contributions in formulating and addressing the challenge of defending LLMs against jailbreaking attacks* and appreciated our *combination of theoretical grounding, novel algorithmic approach, and strong empirical results.* We address concerns and suggestions below: **Weaknesses** *Computational costs*: Optimizing the RPO suffix takes about 4-5 hours on a single A100 GPU for 500 iterations, each taking 1-2 minutes. We observe convergence after ~100 iterations. This is approximately 8x cheaper than optimizing a GCG attack string. We will add these details to the paper. *Ablation studies*: Thank you for this suggestion. While a full ablation study would be valuable, it's computationally infeasible within the rebuttal period, given our resources. However, we did observe that increasing the number of iterations and batch size generally improved performance, with diminishing returns after 100 iterations. We'll include these observations in the paper and conduct an analysis. *Potential negative impacts*: In our experiments, we found that RPO had minimal impact on benign queries, with a slight performance reduction on MT-Bench (9.32 vs 9.20 on GPT-4), only on very short queries where the model might get confused by the suffix. We'll expand our discussion in Section 4.3. *Comparison to other optimization-based defenses*: While a direct comparison to adversarial training methods would be interesting, it's challenging due to the significant differences in approach and computational requirements. However, we'll expand our discussion in the related work section to highlight these differences and potential complementarities. *Transfer learning*: We agree this is an interesting direction. 
Our current results in Table 1 demonstrate transfer across different model sizes and architectures (e.g., from Llama-2 to GPT-4 and Vicuna). *Human evaluation*: Thank you for the suggestion. While a full human evaluation study is beyond our current resources, we agree it would be valuable. We'll discuss this limitation in the paper. *Loss function choice*: We chose log probability as it's standard in language modeling and aligns well with our objective. We'll add this justification to the paper. *Analysis of learned suffixes*: We'll add a brief analysis of common patterns in the learned suffixes to the paper, though a full interpretability study is beyond our current scope. *Multi-turn interactions*: Our current work focuses on single-turn interactions, which are the standard setting in LLM security. All attack baselines were evaluated using single-turn instructions in their original settings, which we match for our defense setting. Extending to multi-turn scenarios is an important direction for future work, which we'll highlight in the paper. *Adaptive attacks*: We discuss adaptive attacks in Section 4.3 and Table 3. We designed adaptive versions of GCG and PAIR and found that RPO retains its high robustness, and has stronger adaptive robustness than defense baselines. **Questions** *Impact on model performance*: We analyze the trade-off between robustness and general performance in Table 4. We find that there is only a minor trade off between robustness and model performance, typically for very short queries where the model might get confused by the suffix. This results in a slight reduction on MT-Bench. On MMLU, we do not observe a performance difference for most models. Optimizing for semantically meaningful defense strings could fully mitigate this, which we leave to future work. *Generalization to unseen attacks*: Table 2 demonstrates RPO's generalization to unseen attacks on HarmBench. 
RPO reduces ASR across all six attacks, including four not seen during optimization. For example, on Vicuna-13B, RPO reduces ASR from 65.9% to 59.5% on AutoDAN and from 53.6% to 37.2% on TAP. This robust generalization showcases RPO's effectiveness against a wide range of jailbreaking techniques. *Ethical considerations*: Section 5 provides a discussion of ethical implications. We acknowledge that proposing our defense may lead to the development of stronger attacks. However, we believe the benefits of improved LLM safety outweigh this risk. *Integration with existing systems*: RPO is designed for easy integration into existing LLM deployment pipelines. The optimized suffix (20 tokens) is appended to the user input as part of the system prompt during inference. This requires minimal changes to existing infrastructure and incurs negligible computational overhead compared to baseline defenses like SmoothLLM. *Longevity of the defense*: While the evolving nature of attacks poses challenges, RPO's design allows for easy updating. The suffix can be periodically re-optimized on new attack types to maintain effectiveness. We conduct an analysis below where we add a new attack, TAP, to the optimization.

| Model | GCG | AutoDan | PAIR | TAP | Few-Shot | PAP | Average |
|-------------|------|---------|------|-----|----------|-----|---------|
| Llama-2-7B | 31.9 | 0.0 | 9.4 | 9.1 | 5.0 | 2.7 | 9.7 |
| + RPO (3 attacks) | 6.7 | 0.0 | 5.0 | 7.8 | 0.0 | 0.0 | 3.2 |
| + RPO (4 attacks inc TAP) | 6.9 | 0.0 | 4.7 | 5.2 | 0.0 | 0.0 | 2.8 |

We find that RPO can indeed be updated to new attacks, further improving robustness against TAP by including it in the optimization. This results in higher average robustness after adding a new attack. In practice, RPO generalizes well to unseen attacks, so it is not mandatory to keep updating the defense, although doing so can indeed improve robustness further. We hope this addresses your concerns.
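The integration step described in the rebuttal (a fixed, pre-optimized suffix appended to the user turn at inference) can be sketched as below. This is a hypothetical illustration, not the authors' released code; the function name is ours and `"<RPO_SUFFIX>"` stands in for the actual optimized 20-token string.

```python
# Illustrative sketch of deploying a fixed defensive suffix at inference time.
# "<RPO_SUFFIX>" is a placeholder for the actual optimized suffix.

def defended_messages(user_input, suffix="<RPO_SUFFIX>",
                      system_prompt="You are a helpful assistant."):
    """Build a chat request with the defensive suffix appended to the user
    turn. The suffix is a constant string, so no model or serving-stack
    changes are required."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"{user_input} {suffix}"},
    ]

msgs = defended_messages("Summarize this article for me.")
# msgs[1]["content"] ends with the defensive suffix
```

Because the suffix is just text in the prompt, the same string can be appended when calling closed-source APIs, which is consistent with the transfer results reported above.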
We are happy to answer further questions in the discussion phase. --- Rebuttal 2: Title: Follow up to reviewer Comment: Dear Reviewer EVpE, We are thankful for your review. With the rebuttal deadline drawing near, please inform us if your concerns have been properly addressed. We are ready to offer further clarification. Best regards, The Authors
Summary: This paper introduces Robust Prompt Optimization (RPO), a novel method for defending LLMs against jailbreaking attacks, which manipulate prompts to induce harmful behavior. Inspired by adversarial training, RPO optimizes a suffix for the LLM prompt, ensuring safe responses even when the input is modified by an attacker. The RPO method contains two steps, jailbreaking prompt selection and discrete optimization, and is grounded in a complete theoretical analysis. RPO demonstrates significant reductions in attack success rate (ASR) on various LLMs, including GPT-4, and is transferable to unknown attacks, making it a robust and versatile defense mechanism for LLMs. Strengths: **Good Innovation**: Optimization-based methods are widely used for optimizing jailbreak suffixes, but this paper innovatively applies this approach to the reinforcement of security alignment, introducing a new strategy for jailbreak defense. **Solid Theoretical Foundation**: The paper conducts a detailed theoretical analysis of RPO (Robust Prompt Optimization). **Elaborate Empirical Evaluation**: The effectiveness of the method is demonstrated through extensive empirical experiments. Weaknesses: **Transferability Discussion**: While RPO shows promising transferability, which is very interesting, the paper does not seem to discuss this aspect sufficiently. I recommend that the authors further explore and elaborate on why RPO exhibits good transferability. **Efficiency Concerns**: Section 4.3 discusses that the method only adds 20 additional tokens, which does not significantly increase the cost of inference for benign prompts. However, there appears to be a lack of discussion on the efficiency of the optimization process itself. It is well known that token-level optimization, typified by methods like GCG, is often criticized for its high optimization costs. Thus, I am concerned about the optimization efficiency of RPO. 
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How many rounds of optimization are required for a successful optimization of RPO, and how much time does each round typically take? 2. Why do the optimized defense suffixes exhibit good transferability across different models? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper thoroughly discusses the existing limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and positive review. We are glad the reviewer found our use of optimization for defense innovative, our theoretical analysis solid, and our empirical evaluation elaborate and extensive. > *Transferability discussion* We are glad the reviewer found the transferability of RPO suffixes interesting. This property can be attributed to the similarities between LLMs, all trained with similar autoregressive objectives and internet-scale datasets. This is supported by the fact that many LLMs from different families, such as Llama and GPT, now have similar capabilities. We will add a discussion about this in the revised version. > *Efficiency concerns* Optimizing the RPO suffix indeed incurs a computational cost, but it is less than the cost of GCG. Our experiments find that optimizing this suffix is around 8x cheaper than optimizing a GCG attack string since models already tend to refuse harmful instructions. We also optimize this on a single A100 GPU, which takes about 4-5 hours for 500 iterations. Each iteration takes 1-2 minutes, and we observe convergence after around 100 iterations. Further reducing this cost could be an interesting avenue for future work, as explored in recent attack optimization methods [1]. We will add a more detailed discussion of this in the revised version. We hope our response answered the reviewer's remaining concerns. We are happy to answer further questions in the discussion phase. [1] Sadasivan, V.S., Saha, S., Sriramanan, G., Kattakinda, P., Chegini, A.M., & Feizi, S. (2024). Fast Adversarial Attacks on Language Models In One GPU Minute. ArXiv, abs/2402.15570. --- Rebuttal 2: Title: Follow up to reviewer Comment: Dear Reviewer YYJE, We are thankful for your review. With the rebuttal deadline drawing near, please inform us if your concerns have been properly addressed. We are ready to offer further clarification. Best regards, The Authors
Summary: The paper proposes a novel method, Robust Prompt Optimization (RPO), to enhance the robustness of large language models (LLMs) against jailbreaking attacks. Existing defenses, which operate during pre-training or fine-tuning stages, incur high computational costs. The proposed RPO method optimizes a lightweight suffix at the inference stage, incorporating the adversary into the defensive objective. Theoretical and experimental results demonstrate that RPO reduces the attack success rate on GPT-4 and Llama-2 models, setting a new state-of-the-art in defense against jailbreaking attacks. Strengths: 1. The proposed method demonstrates substantial improvements in reducing attack success rates, outperforming existing defense mechanisms. 2. Unlike other methods that operate during pre-training or fine-tuning stages, RPO is computationally efficient, operating during the inference stage without significant overhead. 3. The authors provide a thorough theoretical analysis of the defense method. Weaknesses: 1. The method cannot address other failure modes such as deception and malicious code generation, limiting its applicability. 2. How does the proposed RPO method handle the evolution of attack strategies over time, and how frequently would the defensive suffix need updating? The authors should have provided more analysis. 3. What are the potential trade-offs between robustness and model performance when applying RPO to different LLM architectures? The authors should have provided more analysis. Technical Quality: 4 Clarity: 4 Questions for Authors: Please refer to Weaknesses for details. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and positive assessment of our work. We greatly appreciate the reviewer's recognition of our paper's substantial improvements in reducing attack success rates and its computational efficiency. We're pleased that the reviewer found our theoretical analysis thorough. We address concerns and questions in detail below: > *Handling other failure modes* Indeed, our paper does not focus on other failure modes, such as deception and malicious code generation, but on harmful generation. We acknowledge this limitation in Section 5 of our paper. It is worth noting that these failure modes are very nascent and not well-studied, lacking standardized definitions and benchmarks, but they could be an interesting direction for future work. > *Evolution of attack strategies and updating the defensive suffix* The RPO method is designed to be adaptive and can be periodically reoptimized to account for new attack strategies. Indeed, initial versions of RPO were only optimized on GCG and had limited transferability to semantically meaningful attack prompts in PAIR or JBC. To further analyze this, we optimize a new RPO suffix on TAP, a newer attack, in addition to GCG, PAIR, and JBC. The results on HarmBench are provided below.

| Model | GCG | AutoDan | PAIR | TAP | Few-Shot | PAP | Average |
|-------------|------|---------|------|-----|----------|-----|---------|
| Llama-2-7B | 31.9 | 0.0 | 9.4 | 9.1 | 5.0 | 2.7 | 9.7 |
| + RPO (3 attacks) | 6.7 | 0.0 | 5.0 | 7.8 | 0.0 | 0.0 | 3.2 |
| + RPO (4 attacks inc TAP) | 6.9 | 0.0 | 4.7 | 5.2 | 0.0 | 0.0 | 2.8 |

We find that RPO can indeed be directly updated to new attacks, further improving robustness against TAP by including it in the optimization. This also improves PAIR robustness, likely due to the similarity between the optimized prompts. This results in higher average robustness after adding a new attack.
However, we also observe a slight reduction in GCG robustness, which has very different attack prompts, suggesting interference between some attacks but benefits for others. We will conduct a more detailed analysis for the revised version of the paper. In addition, while we observe that optimizing directly on an attack improves robustness against it, RPO can also transfer well to unseen attacks, making it not mandatory to update the defense whenever there is a new attack. > *Trade-offs between robustness and model performance for different LLM architectures* We appreciate you highlighting this important aspect. We analyze the trade-off between robustness and general performance in Table 4 (also attached below). We find that there is only a minor trade-off between robustness and model performance, typically for very short queries where the model might get confused by the suffix. This results in a slight reduction on MT-Bench. More capable models such as GPT-4 are more robust and have a smaller performance reduction. On MMLU, we do not observe a performance difference for most models. Optimizing for semantically meaningful defense strings could mitigate this, which we leave to future work.

| Model | Method | MT-Bench | MMLU |
|-------------|--------|----------|------|
| Vicuna-13B | Base | 6.57 | 0.50 |
| | RPO | 5.96 | 0.49 |
| Llama-2-7B | Base | 6.18 | 0.46 |
| | RPO | 6.05 | 0.46 |
| GPT-3.5 | Base | 8.32 | 0.68 |
| | RPO | 7.81 | 0.66 |
| GPT-4 | Base | 9.32 | 0.85 |
| | RPO | 9.20 | 0.85 |

Thank you again for your insightful comments and questions. We are happy to answer additional concerns or questions in the discussion phase. --- Rebuttal 2: Title: Follow up to reviewer Comment: Dear Reviewer yUwN, We are thankful for your review. With the rebuttal deadline drawing near, please inform us if your concerns have been properly addressed. We are ready to offer further clarification. Best regards, The Authors
Summary: In this paper, the authors propose a new defense, called Robust Prompt Optimization (RPO), to defend against jailbreak attacks. It optimizes a secure suffix with min-max optimization. Experiments reveal that it can defend against multiple existing attacks. Strengths: 1 This paper is well written. 2 The authors give a theoretical proof to explain the working dynamics of RPO. 3 The proposed method is simple and easy to understand. Weaknesses: 1 RPO obtains a high ASR against the JBC attack on Vicuna, which is much worse than the Rephrasing defense. 2 Although RPO saves a lot of computation cost compared to other methods at inference time, the cost to generate the suffix cannot simply be ignored. From my experience, it usually needs multiple hours even on a single A100 GPU. Regarding this point, self-reminder [1] and ICD [2] might be a better choice. 3 In Table 1, the authors only report results on two open-source and two closed-source models. More experiments are needed to fully investigate the capability of RPO, such as on Vicuna-7B, Llama-13B, QWen-7B, and Claude. 4 Although in the related work section the authors list a lot of current defenses, in Table 1 they compare RPO with only a few. I would suggest the authors compare RPO with stronger defenses such as [3], [4], and [5]. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors address the limitations of the proposed method well. [1] Defending chatgpt against jailbreak attack via self-reminder [2] Jailbreak and guard aligned language models with only few in-context demonstrations [3] Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations [4] Defending large language models against jailbreaking attacks through goal prioritization. [5] Rain: Your language models can align themselves without finetuning Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful and helpful review. We are glad the reviewer found the paper well written, the theoretical results meaningful, and the method easy to understand. We address the concerns below. > *RPO obtains a high ASR against the JBC attack on Vicuna, which is much worse than the Rephrasing defense.* 1. While RPO indeed performs worse than rephrasing on Vicuna on the JailbreakChat attack, we note that this is the only setting where RPO is not the strongest defense. On Llama-2, GPT-3.5, and GPT-4, RPO outperforms all defenses. These are also more important and widely used models than Vicuna. On the more difficult, optimization-based attacks GCG and PAIR, RPO outperforms baseline defenses on Vicuna. 2. Vicuna is an older model that has no safety training. This may result in fundamentally different behavior when presented with jailbreaking attempts, making it more susceptible to certain attacks and less responsive to our defense mechanism. Vicuna, based on an earlier Llama model, may have different attention patterns and token representations compared to Llama-2, on which RPO was optimized. We will add this discussion in the revised version. > *The cost to generate the suffix cannot simply be ignored.* Optimizing the RPO suffix indeed incurs a computational cost. Our experiments find that optimizing this suffix is around 8x cheaper than optimizing a GCG attack string since models already tend to refuse harmful instructions. We also optimize this on a single A100 GPU for 4-5 hours. We also open-source our optimized RPO suffixes, so users do not need to do this themselves. Our new experimental results below demonstrate RPO outperforms defenses with similar inference costs, such as Self-Reminder and Few-Shot Examples. > *More experiments are needed to fully investigate the capability of RPO* We conducted our experiments on four models: Llama-2-7B, Vicuna-13B, GPT-3.5-Turbo, and GPT-4, since these are the models represented in JailbreakBench [1]. 
As suggested, we have conducted additional evaluations on the Qwen-7B and Llama2-13B models. Due to the cost of generating new attack prompts for each instruction for each model, we will evaluate Vicuna-7B and Claude for the revised paper. The results further demonstrate the effectiveness and transferability of RPO across different model architectures and sizes.

| Method (Qwen-7B) | PAIR | GCG | JBC |
|--------------------|------|------|------|
| Base | 0.68 | 0.11 | 0.58 |
| Perplexity Filter | 0.66 | 0.0 | 0.58 |
| SmoothLLM | 0.36 | 0.02 | 0.44 |
| Few-Shot | 0.16 | 0.01 | 0.50 |
| RPO | 0.04 | 0.0 | 0.45 |

For Qwen-7B, RPO significantly outperforms all other methods on the PAIR attack, reducing the attack success rate from 68% (base model) to just 4%. This is a substantial improvement over the next best defense, Few-Shot, which achieves 16%. On the GCG attack, RPO matches the best performance of 0% attack success rate. On JBC, RPO is competitive with SmoothLLM (but outperforms it significantly on PAIR and GCG).

| Method (Llama2-13B) | PAIR | GCG | JBC |
|---------------------|------|-----|------|
| Base | 0.02 | 0.0 | 0.01 |
| Perplexity Filter | 0.02 | 0.0 | 0.01 |
| SmoothLLM | 0.01 | 0.0 | 0.0 |
| Few-Shot | 0.01 | 0.0 | 0.01 |
| RPO | 0.01 | 0.0 | 0.00 |

Similar to Llama-2-7B, Llama-2-13B already has high robustness to all three attacks. RPO similarly outperforms baseline defenses across attacks, achieving perfect robustness to GCG and JBC and reducing PAIR ASR. SmoothLLM is the only defense competitive with RPO in this setting, but it requires twice as many inference computations and is less effective on other models. We will add these results to the revised version, along with similar evaluations on Claude and Vicuna-7B. > *I would suggest the authors compare RPO with stronger defenses* We appreciate the reviewer's suggestion to compare RPO with additional strong defenses. 
We have conducted further experiments, comparing RPO with self-reminder and few-shot examples (RAIN is a weaker defense that has a very high GCG ASR on Vicuna (38%) [2], so we do not evaluate it). The results for GPT-4 are presented in the table below.

| Method (GPT-4) | PAIR | GCG | JBC |
|-----------------|------|------|------|
| Base | 0.50 | 0.01 | 0.0 |
| Perplexity Filter | 0.43 | 0.0 | 0.0 |
| SmoothLLM | 0.25 | 0.03 | 0.0 |
| Rephrasing | 0.35 | 0.01 | 0.01 |
| Self-reminder | 0.16 | 0.0 | 0.0 |
| Few-Shot | 0.10 | 0.0 | 0.0 |
| RPO | 0.06 | 0.0 | 0.0 |

These new results demonstrate that RPO consistently outperforms or matches the performance of all compared defenses across different attack types:
- PAIR attack: RPO achieves the lowest attack success rate of 6%, significantly improving upon Self-reminder (16%) and Few-Shot (10%).
- GCG attack: RPO matches the perfect defense (0% attack success rate) achieved by Perplexity Filter, Self-reminder, and Few-Shot, while outperforming SmoothLLM and Rephrasing.
- JBC attack: RPO maintains the 0% attack success rate achieved by most defenses, matching the performance of baselines.

These results highlight that RPO not only compares favorably to the defenses in our original submission but also outperforms or matches the effectiveness of additional strong defenses suggested by the reviewer. We will incorporate these additional comparisons and analyses into the revised paper and add the results of new baseline defenses on the other models we evaluated. We hope our response has addressed the reviewer's concerns. We are happy to answer further questions in the discussion phase. [1] Chao, P., Debenedetti et al. (2024). JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models. ArXiv, abs/2404.01318. [2] Li, Y., Wei et al. (2023). RAIN: Your Language Models Can Align Themselves without Finetuning. ArXiv, abs/2309.07124. 
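The ASR numbers in these tables are fractions of adversarial prompts judged successful. A minimal sketch of that metric, assuming an external judge (a human annotator or classifier, not shown here) has already labeled each attempt:

```python
def attack_success_rate(judgments):
    """Compute attack success rate (ASR): the fraction of adversarial
    prompts whose model response is judged harmful/jailbroken.

    `judgments` is an iterable of booleans from some judge; True means
    the attack succeeded against the (possibly defended) model.
    """
    judgments = list(judgments)
    if not judgments:
        return 0.0
    return sum(judgments) / len(judgments)

# e.g. 1 successful jailbreak out of 4 attempts -> ASR of 0.25
```

Comparing ASR for the base model against ASR for the defended model on the same prompt set yields the kind of before/after numbers reported above.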
--- Rebuttal 2: Title: Follow-up to reviewer Comment: Dear Reviewer Md6V, We are thankful for your review. With the rebuttal deadline drawing near, please inform us if your concerns have been properly addressed. We are ready to offer further clarification. Best regards, The Authors --- Rebuttal 3: Comment: Dear authors, I am happy that you have addressed most of my concerns. 1 However, defending against jailbreak attacks from a prompt-tuning view is not an entirely new direction. For example, related works [1] and [2] study the same problem and propose corresponding methods. I list them here not to doubt the novelty of this paper; however, discussing them might help readers understand the importance of prompt-based defenses better. 2 I notice that you release the code. How many iterations does RPO need for its convergence? [1] Fight Back Against Jailbreaking via Prompt Adversarial Tuning [2] On prompt-driven safeguarding for large language models --- Rebuttal 4: Title: Response to reviewer Comment: We thank the reviewer for the response and are glad most concerns are addressed. > *Discussing related work* We are glad the reviewer has brought these works to our attention and agree they should be discussed. 1. PAT [1] is concurrent to our work and explores a similar optimization objective. However, PAT is only optimized on the GCG attack, which may limit its transferability to newer attacks. 2. DRO [2] also explores prompt-based defenses for LMs but optimizes a soft prompt in embedding space. This prevents DRO from transferring to closed-source models. In addition, the DRO objective is only a minimization objective, not a min-min optimization objective like RPO, which incorporates the adversary into optimization. This makes RPO more adaptive to jailbreaking attacks. We will include baseline comparisons and a more detailed discussion in the revised version. 
> *How many iterations does RPO need for its convergence?* In practice, we find that RPO requires around 100 iterations to converge on a single A100 GPU (around an hour), and we optimize our suffixes for 500 iterations, which takes 4-5 hours. In the revised version, we will conduct a more detailed analysis of performance over time. We are happy to clarify further, and hope the reviewer will raise the score if all concerns have been addressed. --- Rebuttal Comment 4.1: Comment: Thank you for your response. However, from my view, PAT and DRO both incorporate benign prompts into their formulation, which makes them achieve a lower FPR (false positive rate) on harmless prompts. In RPO, I only see optimization on harmful input. Do you have any suggestions for future improvement? --- Reply to Comment 4.1.1: Title: Response to reviewer Comment: Thank you for pointing this out. PAT and DRO indeed also optimize on benign prompts. However, based on the MT-Bench scores reported in [1], RPO is competitive with PAT on FPR. We provide the results here, where the MT-Bench scores are taken directly from [1].

| Model | Method | MT-Bench |
|-------------|--------|----------|
| Vicuna-13B | Base | 6.57 |
| | PAT | 6.15 |
| | RPO | 5.96 |
| GPT-3.5 | Base | 8.32 |
| | PAT | 8.06 |
| | RPO | 7.81 |
| GPT-4 | Base | 9.32 |
| | PAT | 8.77 |
| | RPO | 9.20 |

On weaker models such as Vicuna and GPT-3.5, RPO has a more significant effect on benign prompts than PAT, but on more capable models such as GPT-4, it has a smaller effect than PAT. This may be due to optimizing RPO on semantically meaningful harmful prompts. We will conduct a more detailed analysis and comparison in the revised version. Including benign prompts in the optimization of RPO is an interesting suggestion that may further reduce the impact on benign inputs. We will try this improvement as an ablation in the revised version. 
For future work, modifying RPO to optimize semantically meaningful prompts, such as constraining candidate tokens to low perplexity ones, may also improve this. [1] Mo, Y., Wang, Y., Wei, Z., & Wang, Y. (2024). Fight Back Against Jailbreaking via Prompt Adversarial Tuning.
Rebuttal 1: Rebuttal: Dear Reviewers and Area Chair, We sincerely thank you for your thorough and insightful reviews of our paper on Robust Prompt Optimization (RPO). We appreciate the positive feedback and constructive suggestions that have helped improve our work. We are pleased that the majority of reviewers were positive about the paper and that reviewers found our paper to have significant strengths. Reviewers noted that our paper is "well written" with a "simple and easy to understand" method, supported by "theoretical proof" and "thorough theoretical analysis." Our work was recognized for its "Good Innovation" in applying optimization to security alignment, with "Elaborate Empirical Evaluation" demonstrating "substantial improvements in reducing attack success rates." Importantly, it was highlighted that our work "Addresses an important problem in AI safety" and achieves "state-of-the-art results on reducing attack success rates." In response to your feedback, we have:
- Conducted additional experiments on more models (Qwen-7B and Llama-2-13B) to demonstrate RPO's broad effectiveness.
- Expanded comparisons with recent defense methods, self-reminder and few-shot examples.
- Provided a more detailed analysis of RPO's computational requirements and optimization process.
- Elaborated on RPO's transferability and generalization to unknown attacks, and conducted additional ablations on updating the defense string for new attacks.

We believe these additions strengthen our paper significantly. We remain excited about RPO's potential impact on improving LLM safety and reliability, and we look forward to presenting a revised version incorporating these improvements. Thank you again for your valuable feedback and for considering our work.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
High-Resolution Image Harmonization with Adaptive-Interval Color Transformation
Accept (poster)
Summary: This work proposes the Adaptive-Interval Color Transformation (AICT) method for harmonizing high-resolution composite images. The proposed AICT first uses a parameter network to predict multiple curves (as 3D LUTs) to perform pixel-wise color transformation at low resolution. The AICT then adjusts the sampling intervals of the color transformation by splitting each predicted 3D LUT into two 3D LUTs. Finally, AICT uses a global consistent weight learning method to predict an image-level weight for each color transform. Experiments on the iHarmony4 [8] and ccHarmony [26] datasets show promising quantitative results for the proposed AICT. Strengths: Originality: (+) The strength is that the difference from existing HR image harmonization methods is clear. Quality: (+) The proposed method is complete and technically sound. (+) The experiment settings (evaluation datasets, metrics, and competing methods) are convincing. Clarity: (+) The paper presents sufficient implementation details for reproducing the proposed method. Significance: (+) Quantitative results in Tables 1-3 and 6-7 show superior or comparable performance of AICT in harmonizing HR and LR composite images. Weaknesses: Originality: This work proposes the AICT method to address an existing task of harmonizing high-resolution (HR) composite images. While previous HR image harmonization methods (e.g., [12,46]) predict and then upsample low-resolution (LR) parameter maps for harmonization, the proposed AICT also predicts LR parameter maps but applies two 3D LUTs to harmonize the colors in a coarse-to-fine manner. (-) One weakness is the limited/vague novelty of using two 3D LUTs for color transformation, which is a combination of ideas from existing LUT-based image enhancement methods [47, 48] but lacks necessary discussions and experimental verifications. 
Quality: (-) The paper mentions the "non-linearities of the color transform at high resolution" many times as the key problem suffered by existing methods and addressed by the proposed AICT. However, the paper does not explain what the "non-linearities" are, nor do the visual results in Figures 7, 8, and 9 illustrate such advantages. (-) The paper claims AICT is lightweight and computationally efficient, but Figure 2 only demonstrates that AICT has a small model size. According to Tables 8 and 9, AICT runs much slower than [21] and [46] and is comparable to [12]. Hence, such a claim needs to be tuned down. (-) The paper claims the "global consistent weight method" to be novel but does not explain its novelty. Clarity: There are a few weak points regarding the paper writing. (-) The paper mentions the "adaptive interval learning method" and the "global consistent weight method" as the second main contribution (lines 86-87, page 2). However, the Method section discusses the proposed "global consistent weight method" only briefly, as part of subsection 3.2 (lines 207-214, page 5). The authors are suggested to either explain the "global consistent weight method" in detail regarding its novelty or remove the corresponding claim from the second contribution. (-) According to Figure 3 and Section 3 (lines 134-144, page 4), the LR branch of the proposed AICT outputs two parameter maps (C and F). However, Eq. (3) indicates that there is a low-resolution harmonized image predicted by the LR branch. (-) Figure 1 is not self-contained. The S2CRNet [22] is missing in Figure 2. (-) A better sub-title for Section 2.1 would be "High-Resolution Image Harmonization". (-) There are no discussions of the relations between the proposed AICT and existing methods in Sections 2.1 and 2.2. (-) There are many repeated sentences, for example in (Lines 5-6, Lines 60-63, Lines 374-378) and (Lines 7-13, Lines 67-78). (-) The references contain redundant items, such as [13, 14].
(-) "M=6" and "M=10" should be "K=6" and "K=10". (-) It would be better to move Tables 6 and 7 to the main paper. Significance: (-) The visual results in Figures 7-9 do not show significant improvements. (-) The main quantitative performance gain seems to come from the foreground-normalized MSE loss [34], according to Tables 4 and 5. The paper presents a harmonization method that can produce better quantitative results, but the visual advantages illustrated in this paper are not significant. Technical Quality: 3 Clarity: 3 Questions for Authors: Q1: What is the novelty of the proposed 3D LUTs-based adaptive-interval color transformation, compared to the non-uniform sampling interval learning in Yang et al. [47] and the decomposition of a single color transform into two sub-transformations in Yang et al. [48]? It is suggested to explain the challenges of directly combining [47] and [48], and how the proposed AICT addresses such challenges. The authors are suggested to provide comparison results between AICT and a simple combination of [47] and [48]. Q2: Can the authors report and explain the results of incorporating more 3D LUTs? Q3: According to Tables 4 and 5, the foreground-normalized MSE loss [34] is more important to the final results than the "w/o Weight" and "w/o Color" ablations. Can the authors explain this and/or provide more results for clarification? Q4: Any more ablations on the proposed "global weight"? Q5: Can the authors report non-reference image quality metrics (e.g., NIQE and LPIPS) for the harmonized real composite images in Figure 9? Q6: Can the authors provide intermediate visual results of the ablated models? Q7: Can the authors explain why using the channel-crossing strategy does not improve the performance? Q8: Any reason not to use SSIM as the evaluation metric? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper has discussed the limitations and potential negative societal impact. I cannot think of any further critical points.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thanks for your positive evaluation and valuable suggestions.** ***W1 & Q1: Limited novelty of using two 3D LUTs for color transformation, which is a combination of ideas from existing LUT-based image enhancement methods [47, 48] but lacks necessary discussions and experimental verifications.*** ***R1 & A1:*** Regarding the novelty of using two 3D LUTs for color transformation, we request the reviewer to refer to our response to Q2 & Q3 of Reviewer pwQP due to the space limitation. Our method is not a combination of AdaInt [47] and SepLUT [48]. In fact, simply combining them faces the challenges of limited color transformation capability and overlooking diverse local contexts. Our AICT addresses these challenges by dynamically predicting entire LUTs for pixel-wise sampling interval adjustment and color transformations. We also conduct an experimental comparison between our method and AdaInt, SepLUT, and their combination. As shown in the table below, our AICT achieves the best performance in terms of all metrics. |Method|fMSE↓|MSE↓|PSNR↑|SSIM↑| |----|:----:|:----:|:----:|:----:| |AdaInt|358.28|31.96|37.41|0.9886| |SepLUT|358.51|31.35|37.33|0.9876| |AdaInt+SepLUT|252.32|30.55|37.66|0.9889| |our AICT|**228.43**|**17.76**|**39.40**|**0.9910**| ***W2: Explain "the non-linearities" and show the visual results.*** ***R2:*** Due to resolution differences, predicted 3D LUTs cannot align with the original input image. Consequently, linear interpolation based on a few sampling points is performed within the 3D LUT to obtain output color values, leading to "non-linear" transformations. To illustrate this advantage, we present the color distribution in local areas of high-resolution harmonized and ground truth images in Figure 3 of the PDF attached in the author rebuttal, which shows that the pixel values predicted by our method are closer to the real color distribution.
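The interpolation behavior described in R2 can be illustrated with a toy sketch (an editorial illustration, not the authors' code; the helper name `lut_apply_1d` and the 1D simplification are assumptions): a curve stored at a few uniform sample points is applied by linear interpolation, so the resulting transform is only piecewise linear between samples.

```python
import numpy as np

def lut_apply_1d(lut_values, x):
    """Apply a LUT with K+1 uniformly spaced sample points to inputs
    x in [0, 1] via linear interpolation between the two nearest
    samples (toy 1D analogue of interpolating inside a 3D LUT)."""
    k = len(lut_values) - 1            # number of intervals
    pos = np.clip(x, 0.0, 1.0) * k     # fractional index into the LUT
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, k)
    frac = pos - lo
    return (1 - frac) * lut_values[lo] + frac * lut_values[hi]

# A gamma-like curve sampled at only 5 points: between samples the
# applied transform is linear, so the overall mapping is piecewise
# linear rather than smooth.
lut = np.linspace(0, 1, 5) ** 0.5
out = lut_apply_1d(lut, np.array([0.0, 0.3, 1.0]))
```

With only a handful of samples, inputs between knots (such as 0.3 above) are reconstructed linearly from the two neighboring samples, which is the "non-linearity" the review asks the paper to spell out.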
***W3: “lightweight and computationally efficient” needs to be tuned down.*** ***R3:*** Thanks for this suggestion. We will tune down our claim in the revised manuscript. ***W4 & Q4: Explain novelty of the 'global consistent weight method'.*** ***R4 & A4:*** We request the reviewer to refer to our response to Q4 & Q5 & Q6 of the Reviewer pwQP due to space limitation. ***W5: There are a few weak points regarding the paper writing.*** ***R5:*** Thanks for this suggestion. We will improve the paper writing in the revised manuscript. ***W6: The visual results do not show significant improvements.*** ***R6:*** To better show significant improvements, we quantify and visualize the error between the harmonized and ground truth images as shown in Figure 1 of the PDF attached in the author rebuttal. We also present additional visual results in Figure 2 of the PDF. ***Q2: Report and explain the results of incorporating more 3D LUTs.*** ***A2:*** We report more results in the table below. For each color channel, we use only 1 LUT for color transformation ('Single') and cascade 2 and 3 LUTs for adaptive interval learning ('Int x 2', 'Int x 3') and color transformation ('Tra x 2', 'Tra x 3'). We also cascade 4 and 6 LUTs for alternating adaptive interval learning and color transformation ('Alt x 2', 'Alt x 3'). Performance improves slightly with more LUTs but plateaus at three, due to increased training difficulty. 
|Method|fMSE↓|MSE↓|PSNR↑|SSIM↑| |----|:----:|:----:|:----:|:----:| |Single |248.51|19.71|39.15|0.9908| |Int x 2|227.95|**17.40**|**39.50**|**0.9910**| |Int x 3|229.50|18.55|38.95|0.9909| |Tra x 2|**228.40**|*17.50*|*39.41*|0.9909| |Tra x 3|230.28|19.01|37.95|0.9908| |Alt x 2|230.58|20.25|38.02|0.9908| |Alt x 3|232.25|19.89|37.80|0.9907| |our AICT|*228.43*|17.76|39.40|**0.9910**| ***Q3: Explain why foreground-normalized MSE loss [34] is more important*** ***A3:*** The foreground-normalized MSE loss, widely used in methods like PCT-Net and DCCF, prevents instability in images with very small objects. For ablation studies, we set Amin to 100, 5000, and 10000, denoted as 'Amin=100', 'Amin=5000', and 'Amin=10000'. In our AICT, Amin is set to 1000. As shown in the table, when Amin is 100, performance doesn't improve due to the scarcity of smaller targets. Increasing Amin from 1000 to 10000 causes instability in training on small targets, resulting in decreased performance. |Method|fMSE↓|MSE↓|PSNR↑|SSIM↑| |----|:----:|:----:|:----:|:----:| |MSE|287.04|23.17|38.41|0.9908| |Amin=100|229.87|17.96|39.23|0.9909| |Amin=5000|234.22|20.33|38.69|0.9852| |Amin=10000|250.89|22.56|37.56|0.9864| |our AICT|**228.43**|**17.76**|**39.40**|**0.9910**| ***Q5: Report non-reference image quality metrics (e.g., NIQE and LPIPS) for Figure 9.*** ***A5:*** Since LPIPS is a full-reference metric, we only report NIQE in the table below. Our method obtains the best results in the first two examples and the second-best result in the last example. |Method|Composite image|Harmonizer|DCCF|PCT-Net|our AICT| |----|:----:|:----:|:----:|:----:|:----:| |NIQE↓|10.51|9.18|*9.09*|9.35|**9.06**| |NIQE↓|2.85|*2.82*|2.82|2.82|**2.82**| |NIQE↓|3.21|3.15|3.16|**3.15**|*3.15*| ***Q6: Provide intermediate visual results of the ablated models*** ***A6:*** The intermediate visual results are shown in Figure 4 of the PDF attached in the author rebuttal. 
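The role of the Amin clamp discussed in A3 can be sketched in a minimal toy form (the function name, the exact normalization, and the toy numbers are editorial assumptions; the actual loss in [34] may differ in detail): the squared error is divided by the foreground area, but that area is clamped from below so tiny composited objects cannot blow up the loss.

```python
import numpy as np

def fg_normalized_mse(pred, target, mask, a_min=1000):
    """Toy foreground-normalized MSE: sum of squared errors divided
    by the foreground pixel count, clamped from below by a_min to
    keep training stable on very small foreground objects."""
    sq_err = ((pred - target) ** 2).sum()
    area = max(mask.sum(), a_min)   # clamp tiny foregrounds
    return sq_err / area

# Toy 8x8 image with a 2x2 foreground (area 4, clamped up to 16):
pred = np.zeros((8, 8)); target = np.zeros((8, 8))
mask = np.zeros((8, 8)); mask[:2, :2] = 1
pred[:2, :2] = 0.5                  # error only in the foreground
loss = fg_normalized_mse(pred, target, mask, a_min=16)
```

Without the clamp the same error would be divided by 4 instead of 16, quadrupling the loss for this tiny object, which matches the instability argument in A3.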
***Q7: Explain why the channel-crossing strategy fails to improve performance*** ***A7:*** Our parameter model uses an RGB image to predict parameter maps, which inherently include information from all color channels. The channel-crossing strategy introduces redundancy without improving performance. ***Q8: Use SSIM as the evaluation metric*** ***A8:*** SSIM is actually not a commonly used metric for image harmonization tasks. Following this suggestion, we have used SSIM as an evaluation metric. As shown in the table, our AICT achieves a competitive SSIM score of 0.9910. |Method|Harmonizer|DCCF|PCT-Net|our AICT| |----|:----:|:----:|:----:|:----:| |SSIM↑|0.9685|0.9896|**0.9911**|0.9910| --- Rebuttal Comment 1.1: Comment: Thanks and I have read the rebuttal, which has provided enough information. --- Reply to Comment 1.1.1: Comment: We are very happy that our rebuttal provides enough information for addressing your concerns. We would greatly appreciate it if you could increase your rating, given that our rebuttal has addressed your concerns. Thank you very much for your efforts in reviewing our paper.
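The two-stage decomposition debated in this thread (one predicted curve adjusting the sampling positions, a second performing the color transform) can be sketched in a toy 1D form. This is an editorial illustration under assumed simplifications, not the authors' implementation: composing a domain-warping curve with a uniformly sampled transform is one way to realize non-uniform sampling intervals.

```python
import numpy as np

def apply_curve(curve, x):
    """Linearly interpolate a curve sampled at uniform knots (helper)."""
    k = len(curve) - 1
    pos = np.clip(x, 0.0, 1.0) * k
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, k)
    frac = pos - lo
    return (1 - frac) * curve[lo] + frac * curve[hi]

# Cascade: the first curve warps the input domain (placing more
# samples near 0), the second applies a gamma-like color transform
# on the warped values, so the composition effectively samples the
# transform non-uniformly.
interval_curve = np.array([0.0, 0.6, 0.8, 0.9, 1.0])  # dense near 0
color_curve = np.linspace(0, 1, 5) ** 2.2             # gamma-like
x = np.array([0.1, 0.5, 0.9])
y = apply_curve(color_curve, apply_curve(interval_curve, x))
```

A single uniformly sampled curve would spend its knots evenly over [0, 1]; the cascade concentrates resolution where the warp is steep, which is the intuition behind adaptive intervals.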
Summary: This paper proposes an Adaptive-Interval Color Transformation method (AICT) for high-resolution image harmonization, which predicts pixel-wise color transformation and adaptively adjusts the sampling interval to model local non-linearities of the color transformation at high resolution. Strengths: 1. The paper is overall well-written and easy to understand. 2. The idea is neat and clean. 3. The experimental results look good. Weaknesses: 1. The authors should conduct experiments using the training/test set of HAdobe5k, which focuses on high resolution, and report the results under different resolutions (1024, 2048, etc). More baselines should be compared in this setting. 2. The authors should test on high-resolution real composite images and conduct a user study. 3. The authors are suggested to evaluate the method on the ccHarmony dataset, which can reflect the illumination variation more faithfully. 4. The authors should discuss the limitations of the proposed method and show some failure cases. 5. More efficiency analyses (FLOPs, memory, time) should be provided. Technical Quality: 3 Clarity: 3 Questions for Authors: Please address the questions in "Weaknesses". Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See the "Weaknesses". Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thanks for your positive evaluation and valuable suggestions.** ***Q1: The authors should conduct experiments using the training/test set of HAdobe5k which focuses on high-resolution, and report the results under different resolutions (1024, 2048, etc). More baselines should be compared in this setting.*** ***A1:*** Thanks for this suggestion. We have reported the results under the resolutions 1024x1024 and 2048x2048 using the training/test set of HAdobe5k in Tables 2 and 7 of the submitted manuscript. Following this suggestion, we present a comparison with two additional baselines: AdaInt [1] and SepLUT [2]. As shown in the tables below, our method outperforms AdaInt and SepLUT in terms of all metrics. |Image Size|Method|fMSE↓|MSE↓|PSNR↑|SSIM↑| |----|:----:|:----:|:----:|:----:|:----:| |1024 x 1024|AdaInt|235.86|31.75|37.80|0.9837| ||SepLUT|224.98|27.41|37.75|0.9826| ||Our AICT|**156.81**|**19.50**|**39.67**|**0.9863**| |Image Size|Method|fMSE↓|MSE↓|PSNR↑|SSIM↑| |----|:----:|:----:|:----:|:----:|:----:| |2048 x 2048|AdaInt|221.90|31.09|38.10|0.9846| ||SepLUT|216.14|27.02|37.97|0.9834| ||Our AICT|**147.99**|**17.92**|**40.07**|**0.9871**| [1] Yang, Canqian, et al. AdaInt: Learning adaptive intervals for 3D lookup tables on real-time image enhancement. CVPR 2022. [2] Yang, Canqian, et al. SepLUT: Separable image-adaptive lookup tables for real-time image enhancement. ECCV 2022. ***Q2: The authors should test on high-resolution real composite images and conduct a user study.*** ***A2:*** Thanks for this suggestion. We test on a high-resolution real composite image dataset [3] with resolutions ranging from 1024 × 1024 to 6016 × 4000. We randomly select 20 images for the user study. In the study, 20 volunteers independently rank the predictions from 1 to 3 based on visual quality, considering color and luminance consistency. Scores of 3, 2, and 1 are assigned for ranks 1, 2, and 3, respectively.
The mean scores for each method are presented in the table below. As we can see, our method achieves the highest score. |Method|Harmonizer|DCCF|PCT-Net|our AICT| |----|:----:|:----:|:----:|:----:| |Score↑|21.6|28|32.4|**37.8**| [3] Li Niu, et al. Deep image harmonization with globally guided feature transformation and relation distillation. ICCV 2023. ***Q3: The authors are suggested to evaluate the method on ccHarmony dataset, which can reflect the illumination variation more faithfully.*** ***A3:*** Thanks for this suggestion. Actually, we have evaluated the method on the ccHarmony dataset and presented the results in Table 3 of the submitted manuscript. Our method achieves the best results in terms of MSE, PSNR, and SSIM metrics, which demonstrates the effectiveness of the proposed method when handling illumination variation. ***Q4: The authors should discuss the limitation of the proposed method and show some failure cases.*** ***A4:*** Thanks for this suggestion. We have discussed the limitations of the proposed method and shown some failure cases in Section A.4 of the supplementary materials of the submitted paper. ***Q5: More efficiency analyses (FLOPs, memory, time) should be provided.*** ***A5:*** Thanks for this suggestion. In the submitted manuscript, we have provided the FLOPs, memory cost, and inference time on the HAdobe5k dataset for different resolutions (1024 x 1024 and 2048 x 2048). Please refer to Tables 8 and 9 of the submitted manuscript. --- Rebuttal Comment 1.1: Comment: The rebuttal has addressed all my concerns. --- Reply to Comment 1.1.1: Comment: We are very happy that our rebuttal has addressed all your concerns. Thank you very much for your efforts in reviewing our paper.
Summary: This paper presents a new method for harmonizing the color of foreground objects added to scenes with the colors of the background original image. The authors focus on a model able to present good results on high-resolution images. The idea is to learn two different sets of Look-Up-Tables. The first set aims at modifying the input values to re-scale them in a way that the second set can perform pixel-wise editing. The authors show the ability of their method in three different datasets. Strengths: - The general idea of using two LUT-like transformations is interesting. - Results outperform the state of the art. Weaknesses: - The paper is difficult to follow at the beginning, as the authors abuse notation regarding 3DLUTs in color image processing. In color image processing, 3DLUTs are a function that goes from R^3 to R^3, i.e. given input (R,G,B) values they output an (R',G',B') value ---see for example [1]---. Therefore, in color image processing 3DLUTs cannot be approximated by curves (contrary to what is said by the authors in lines 138-139). What the authors are doing is learning spatially-aware 1D LUTs, i.e. a function from R^3 to R that, given an input (x,y,R), outputs a value R'. It is true that some papers (citation 24 on the paper) already abuse notation when speaking of 4DLUTs. However, in their case, they are not redefining an already existing term, as is 3DLUT. Therefore, the authors of this manuscript need to rewrite different parts of the paper, refraining from using the term 3DLUT to define their approach. As it currently stands, it cannot be accepted due to this conceptual error. - Metrics are insufficient. MSE and PSNR are strongly correlated metrics. In the end, PSNR = 10 log_{10}(MAX^2/MSE), where MAX is the maximum possible value. Therefore, the authors need to add further metrics such as LPIPS and DeltaE as perceptually-based full-reference metrics, and NIQE as a non-reference one.
- The paper will be of much larger interest if the authors present the results of a user study, for example as it was done in PCT-Net (reference 12 in this paper) or in [2]. - Figures 1, 3, and 4 are variations of the same general figure. It would be beneficial for the paper if the authors could present just one figure in which all the small differences are included. [1] Conde, M. V., et al. NILUT: Conditional neural implicit 3D lookup tables for image enhancement, AAAI 2024 [2] Valanarasu, J. M. J., et al. Interactive portrait harmonization, ICLR 2023 Technical Quality: 3 Clarity: 2 Questions for Authors: First, please address the weaknesses mentioned before. Also, when reading this paper, I got the feeling that: "This idea is mostly AdaInt + Mask". I may have lost some important component here, so please guide me through why my feeling may not be correct. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
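The reviewer's point that MSE and PSNR are redundant follows from the quoted identity; a minimal check (an editorial sketch — note that papers usually average PSNR per image, so a dataset-level PSNR is not exactly this function of the dataset-level MSE):

```python
import math

def psnr_from_mse(mse, max_val=255.0):
    """PSNR is a deterministic, monotone decreasing function of MSE,
    so the two metrics carry the same information for a single image."""
    return 10.0 * math.log10(max_val ** 2 / mse)

p = psnr_from_mse(100.0)  # 8-bit images with a per-pixel MSE of 100
```

Since PSNR adds no information beyond MSE at the image level, the request for perceptual (LPIPS, DeltaE) and no-reference (NIQE) metrics targets genuinely complementary evidence.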
Rebuttal 1: Rebuttal: **Thanks for your positive evaluation and valuable suggestions.** ***Q1: The authors abuse notation regarding 3DLUTs in color image processing.*** ***A1:*** We thank the reviewer very much for pointing out the inaccurate usage of the term 3DLUTs. Following the reviewer's suggestion, we will rename the term 3DLUT used to describe our approach and rewrite the corresponding parts of the paper. ***Q2: Metrics are insufficient. The authors need to add further metrics such as LPIPS and DeltaE as perceptually-based full-reference metrics, and NIQE as a non-reference one.*** ***A2:*** Thanks for this suggestion. Following this suggestion, we use the LPIPS, DeltaE, and NIQE metrics to evaluate the performance of the compared methods on the iHarmony4 dataset. The table below shows the quantitative results. As we can see, our method achieves the best results in terms of LPIPS and DeltaE and the third-best results in terms of NIQE. It should be noted that NIQE is a no-reference metric that does not assess color or semantic context continuity. Therefore, it is not suitable for evaluating the performance of image harmonization. |Method|LPIPS↓|DeltaE↓|NIQE↓| |----|:----:|:----:|:----:| |Harmonizer|0.016|0.88|4.48| |DCCF|0.017|0.80|**4.46**| |PCT-Net|**0.013**|*0.67*|*4.47*| |our AICT|**0.013**|**0.66**|4.48| ***Q3: The paper will be of much larger interest if the authors present the results of a user study, for example as it was done in PCT-Net (reference 12 in this paper) or in [2].*** ***A3:*** Thanks for this suggestion. We randomly select 20 high-resolution real composite images from the GiftNet dataset. In the study, 20 volunteers independently rank the predictions from 1 to 3 based on visual quality, considering color and luminance consistency. Scores of 3, 2, and 1 are assigned for ranks 1, 2, and 3, respectively. The mean scores for each method are presented in the table below. As we can see, our method achieves the highest score.
|Method|Harmonizer|DCCF|PCT-Net|our AICT| |----|:----:|:----:|:----:|:----:| |Score↑|21.6|28|32.4|**37.8**| ***Q4: Figures 1, 3, and 4 are variations of the same general figure. It would be beneficial for the paper if the authors could present just one figure in which all the small differences are included.*** ***A4:*** Thanks for this suggestion. We will combine Figures 1, 3, and 4 into a single comprehensive figure that highlights all the small differences in the camera-ready version. ***Q5: Also, when reading this paper, I got the feeling that: "This idea is mostly AdaInt + Mask". I may have lost some important component here, so please guide me through why my feeling may not be correct.*** ***A5:*** We are sorry that our presentation led the reviewer to the impression that our idea is mostly AdaInt + Mask. In fact, our method is significantly distinct from AdaInt in three aspects. **Firstly**, AdaInt predicts only a few weight parameters to fuse several fixed LUTs obtained through training, which limits its expressive capacity. In contrast, our method dynamically predicts entire LUTs based on the input, resulting in significantly greater expressive power. **Secondly**, AdaInt uses LUTs to perform global RGB-to-RGB transformations, which overlook diverse local contexts. In contrast, our method uses the color and position of each pixel as inputs for the LUTs to perform pixel-wise color transformations. **Thirdly**, AdaInt applies adaptive interval learning to each lattice dimension of the 3D LUTs for global sampling interval adjustment, which limits the expressiveness of the 3D LUTs. In contrast, our method achieves pixel-wise sampling interval adjustment to model local non-linearities of the color transformation. --- Rebuttal Comment 1.1: Title: Increasing my rating Comment: Following the answer, and although the results in LPIPS, DeltaE, and NIQE are not impressive, I increase my rating to a borderline accept in line with the other reviewers.
This said, please rename the term 3DLUT in the paper as promised in the rebuttal. --- Reply to Comment 1.1.1: Comment: Thank you very much for your efforts in reviewing our paper. We will certainly rename the term 3DLUT in the paper as promised in the rebuttal.
Summary: This paper proposes a new method called Adaptive-Interval Color Transformation (AICT) for high-resolution image harmonization. The key ideas are: 1. Predicting pixel-wise color transformations using a parameter network that generates multiple 3D lookup tables (LUTs). 2. Separating the color transform into cascaded sub-transformations using two 3D LUTs to adaptively adjust the sampling intervals and model local non-linearities. 3. Using a global consistent weight learning module to predict image-level weights for each transformation to enhance overall harmony. Extensive experiments show that AICT achieves state-of-the-art performance on benchmark datasets while being computationally efficient. Strengths: 1. **Novel approach to high-resolution image harmonization**: The paper proposes a new method, Adaptive-Interval Color Transformation (AICT), which addresses the challenging problem of harmonizing high-resolution composite images. The key idea of using adaptive sampling intervals in the color transformation is a novel contribution that sets AICT apart from prior works. 2. **Improved quantitative performance on benchmark datasets**: AICT achieves state-of-the-art results on the widely-used iHarmony4 and HCOCO datasets, outperforming previous methods in terms of fMSE, MSE, and PSNR metrics. These quantitative improvements, although relatively small in some cases, demonstrate the effectiveness of the proposed technical innovations. 3. **Computational efficiency and scalability**: The paper presents a detailed analysis of the computational complexity and memory usage of AICT, showing that it achieves a favorable trade-off between performance and efficiency compared to existing high-resolution harmonization methods. The scalability of AICT to higher resolutions (e.g., 2048x2048) is also demonstrated, which is important for practical applications. 4. 
**Insights into the role of adaptive sampling intervals**: Through ablation studies and visualizations, the paper provides some insights into the importance of adaptive sampling intervals in the color transformation process. The comparison between fixed and adaptive intervals highlights the benefits of the proposed approach in terms of handling local color variations and edge artifacts. Weaknesses: ## Unrealistic Settings The input resolutions in the main experiments (1024x1024 and 2048x2048) are still relatively low compared to professional compositing workflows, which often deal with 4K or higher resolutions. The effectiveness of AICT for ultra high-resolution images is unclear and should be validated with appropriate benchmarks. ## Lack of Comparison to Extensively Studied Prior Art: Proposed Approach Appears to be a Trivial Amalgamation of Established Techniques - The use of 3D LUTs for color transformations is not a fundamentally new idea in image processing. Many previous works, especially in the photo enhancement domain, have used similar techniques. The paper does not sufficiently explain how AICT's specific LUT-based approach is distinct from prior methods. - Adaptively adjusting the sampling intervals of color transformations has also been explored in works like AdaInt. While AICT's dual cascaded LUTs are a nice extension, this still seems like an incremental contribution building on existing ideas. - The global weight learning module likewise appears very similar to the global adjustment parameters used in existing works. More discussion is needed on how AICT's weights differ and what additional capabilities they enable. ## Missing motivations for method modules - The global weight learning module seems disconnected from the rest of the method. It's unclear why weighting the predicted LUTs based on global image features would improve harmonization quality. 
The authors should explain the reasoning behind this design choice and how it relates to the adaptive interval learning. - The global weight learning module predicts a single scalar weight for each LUT, which is then uniformly applied to all pixels. This global approach contradicts the motivation of AICT to enable local color adjustments. A more spatially-varying weighting scheme would be more consistent with the overall goal of the method. ## Evaluation - The use of cascaded dual 3D LUTs is a key aspect of AICT, but is not ablated. Testing a single LUT baseline, more numbers of LUTs or alternative compositions of LUTs would help validate the necessity of the proposed decomposition. - The user study results are not reported, making it difficult to assess AICT's perceptual quality compared to other methods. Quantitative metrics like fMSE and PSNR do not always correlate well with human judgments, so user ratings would provide valuable additional evidence for AICT's benefits. ## Performance While the method outperforms previous approaches, the gains on some metrics (e.g. PSNR) are relatively modest (<1 dB). It would be good to see more analysis on what types of images/scenarios benefit most from AICT. Overall, the reviewer acknowledges that this paper addresses an important topic using generally reasonable methods. The initial score will be increased if the rebuttal effectively addresses the concerns mentioned above. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see weakness. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Please see weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thanks for your positive evaluation and valuable suggestions.** ***Q1: The effectiveness of AICT for ultra high-resolution images is unclear and should be validated with appropriate benchmarks.*** ***A1:*** Thanks for this suggestion. In Table 1 of the paper, we have reported the results on the HAdobe5k dataset with resolutions ranging from 312 × 230 to 6048 × 4032. Following the reviewer's suggestion, we collect two ultra high-resolution benchmarks (over4K and over5K) by selecting images with resolutions exceeding 4K (4096 × 2160) and 5K (5120 × 2880) from the HAdobe5k dataset. The comparison results in the tables below show that our method achieves the best performance on the over4K benchmark in terms of all metrics. On the over5K benchmark, our method achieves the best performance in terms of MSE and PSNR and the second-best performance in terms of fMSE and SSIM. |Benchmark|Method|fMSE↓|MSE↓|PSNR↑|SSIM↑| |----|:----:|:----:|:----:|:----:|:----:| | over4K |Harmonizer|199.68|22.87|37.91|0.9374| ||DCCF|199.85|20.54|38.38|0.9884| ||PCT-Net|165.31|19.36|39.82|**0.9900**| ||AdaInt|226.66|27.83|38.20|0.9882| ||SepLUT|219.60|24.98|38.13|0.9866| ||Our AICT|**154.44**|**16.79**|**40.16**|**0.9900**| |Benchmark|Method|fMSE↓|MSE↓|PSNR↑|SSIM↑| |----|:----:|:----:|:----:|:----:|:----:| |over5K|Harmonizer|302.43|35.09|36.65|0.9441| ||DCCF|320.02|28.44|35.73|0.9757| ||PCT-Net|**292.62**|*21.40*|*37.48*|**0.9790**| ||AdaInt|330.25|29.26|36.35|0.9781| ||SepLUT|395.42|22.06|36.30|0.9774| ||Our AICT|*293.08*|**15.48**|**37.64**|*0.9786*| ***Q2 & Q3: Explain how AICT's specific LUT-based approach is distinct from prior methods.*** ***A2 & A3:*** Compared with prior methods, our method has three distinct differences. **Firstly**, unlike previous methods that use fixed weight parameters to fuse pre-trained LUTs, our method dynamically predicts entire LUTs based on the input.
**Secondly**, previous methods use LUTs for global RGB-to-RGB transformations, which often overlook local context. In contrast, our method uses each pixel's color and position as inputs for LUTs, enabling pixel-wise color transformations. **Thirdly**, previous methods apply adaptive interval learning and separable lookup tables to each lattice dimension of the 3D LUTs for global sampling interval adjustment, which limits the expressiveness of the 3D LUTs. In contrast, our method achieves pixel-wise sampling interval adjustment to model local non-linearities of the color transformation. ***Q4 & Q5 & Q6: More discussion is needed on how AICT's weights differ and what additional capabilities they enable.*** ***A4 & A5 & A6:*** Thanks for this suggestion. When a foreground is inserted into a new background, its average brightness is influenced by the background lighting. Predicting spatially consistent weights for global color adjustment can enhance adaptive interval learning. To further discuss AICT's weights, we conduct ablation studies by sequentially removing the parameters: first for the R channel ('w/o R'), then for both R and G channels ('w/o RG'), and finally all parameters ('w/o weight'). We also explore pixel-level weights ('spatial'). As shown in the first table below, increasing the number of removed parameters decreases performance, highlighting the role of the weight learning module. The results of "spatial" show that learning spatially-varying weights does not enhance performance, as LUTs handle pixel-wise color transformations. To illustrate the role of the weight learning module in global color adjustment, we calculate the error between the average color values of harmonized and ground truth images. As shown in the second table below, our method performs better.
|Method|fMSE↓|MSE↓|PSNR↑|SSIM↑| |----|:----:|:----:|:----:|:----:| |w/o R|230.83|18.25|39.39|**0.9910**| |w/o GB|232.20|19.10|39.18|0.9909| |w/o weight|247.48|19.40|39.16|0.9908| |spatial|230.24|18.03|39.23|0.9909| |our AICT|**228.43**|**17.76**|**39.40**|**0.9910**| |Method|w/o weight|our AICT| |----|:----:|:----:| |error↓|6.94|**6.57**| ***Q7: Cascaded LUTs need ablation studies*** ***A7:*** Following this suggestion, we report more results in the table below. For each color channel, we use only 1 LUT for color transformation ('Single') and cascade 2 and 3 LUTs for adaptive interval learning ('Int x 2', 'Int x 3') and color transformation ('Tra x 2', 'Tra x 3'). We also cascade 4 and 6 LUTs for alternating adaptive interval learning and color transformation ('Alt x 2', 'Alt x 3'). Performance improves slightly with more LUTs but plateaus at three, due to increased training difficulty. |Method|fMSE↓|MSE↓|PSNR↑|SSIM↑| |----|:----:|:----:|:----:|:----:| |Single |248.51|19.71|39.15|0.9908| |Int x 2|227.95|**17.40**|**39.50**|**0.9910**| |Int x 3|229.50|18.55|38.95|0.9909| |Tra x 2|**228.40**|*17.50*|*39.41*|0.9909| |Tra x 3|230.28|19.01|37.95|0.9908| |Alt x 2|230.58|20.25|38.02|0.9908| |Alt x 3|232.25|19.89|37.80|0.9907| |our AICT|*228.43*|17.76|39.40|**0.9910**| ***Q8: The user study results are not reported.*** ***A8:*** Thanks for this suggestion. We randomly select 20 high-resolution real composite images from the GiftNet dataset. In the user study, 20 volunteers independently rank the predictions from 1 to 3 based on visual quality. Scores of 3, 2, and 1 are assigned for ranks 1, 2, and 3, respectively. The mean scores for each method are presented in the table below. As we can see, our method achieves the highest score.
|Method|Harmonizer|DCCF|PCT-Net|our AICT|
|----|:----:|:----:|:----:|:----:|
|Score↑|21.6|28|32.4|**37.8**|

***Q9: More analysis on what types of images/scenarios benefit most from AICT.*** ***A9:*** We present more results in Figure 2 in the PDF attached to the author rebuttal. As we can see, our AICT method particularly benefits images with rich textures, such as fur and hair. This is because AICT achieves non-uniform sampling intervals, allowing for better modeling of local non-linearities in color transformations. --- Rebuttal Comment 1.1: Comment: Thank you for thoroughly addressing most of my concerns in your rebuttal. This has led me to increase my rating for the paper. I wish you the best of luck with your research! --- Reply to Comment 1.1.1: Comment: Thank you for your positive response to our rebuttal. We appreciate all your efforts in reviewing our paper.
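The adaptive sampling-interval idea discussed in A4-A6 and A9 above can be illustrated with a minimal 1D toy sketch (ours, not the paper's implementation; the curve and knot placement are hypothetical): with the same LUT budget, placing sampling points non-uniformly where a color curve bends most reduces interpolation error.

```python
import numpy as np

def curve(x):
    # A non-linear tone curve to approximate (hypothetical, for illustration).
    return x ** 0.4

n = 9                                       # toy LUT budget per channel
uniform_x = np.linspace(0.0, 1.0, n)        # fixed sampling intervals
adaptive_x = np.linspace(0.0, 1.0, n) ** 2  # denser samples near 0, where the
                                            # curve changes fastest

x = np.linspace(0.0, 1.0, 1001)
err_uniform = np.abs(np.interp(x, uniform_x, curve(uniform_x)) - curve(x)).max()
err_adaptive = np.abs(np.interp(x, adaptive_x, curve(adaptive_x)) - curve(x)).max()
print(err_adaptive < err_uniform)  # non-uniform intervals fit this curve better
```

This is only a 1D analogue; the paper's method predicts the intervals per pixel for 3D LUTs.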
Rebuttal 1: Rebuttal: Dear Reviewers, We would like to express our sincere gratitude to all the reviewers for their constructive feedback and for recognizing the performance and efficiency of our proposed method. We appreciate your valuable suggestions and have carefully addressed each point in our responses. 1.**Reviewer pwQP (Score: 5 - Borderline Accept)** recognizes the novel aspects of our AICT method and its computational efficiency. We have addressed your concerns by validating our approach on ultra high-resolution images and clarifying its distinctiveness from prior methods. Additionally, we have provided detailed explanations for the global weight learning module and included extensive ablation studies. 2.**Reviewer dAiu (Score: 4 - Borderline Reject)** acknowledges the innovative use of LUT transformations and the performance of our method compared to state-of-the-art techniques. Following the reviewer's suggestion, we will rename the term 3D LUTs used to define our approach and rewrite the corresponding parts of the paper. We have added LPIPS, DeltaE, and NIQE metrics to our evaluation, demonstrating the superiority of our method. A user study has been conducted to further validate our results. Additionally, we have highlighted the key differences between our method and AdaInt to address your concerns. 3.**Reviewer 4Tt1 (Score: 7 - Accept)** acknowledges the clarity and the effectiveness of our method. In response to your suggestions, we have conducted additional experiments using the HAdobe5k dataset at various resolutions (1024x1024 and 2048x2048), compared our method with more baselines, and reported the results, demonstrating superior performance in terms of MSE and PSNR. We have also evaluated our method on high-resolution real composite images and the ccHarmony dataset, and discussed method limitations and failure cases in detail.
Furthermore, we expand our efficiency analysis to include FLOPs, memory usage, and inference time for different resolutions. 4.**Reviewer M5vB (Score: 5 - Borderline Accept)** praises the neat and clean idea as well as the impressive experimental results. We have clarified the novelty of the 3D LUTs-based approach and demonstrated the advantages of AICT through comparative experiments. We address the issues related to non-linearities and efficiency, and adjust our claims accordingly. Additionally, we expand the discussion on the global consistent weight method, conduct further ablation studies, and correct errors and redundancies in the text. Our responses to individual comments of each reviewer are posted in the rebuttal under each reviewer's report. All the required experimental results are presented in the PDF attached in this rebuttal. Specifically: - **Figure 1** visualizes the error between the harmonized and ground truth images. - **Figure 2** presents qualitative results against existing methods. - **Figure 3** displays the color distribution in local areas of high-resolution harmonized and ground truth images, showing that the pixel values predicted by our method are closer to the real color distribution. - **Figure 4** presents the visual results of ablated models. For convenience, we highlight the figure relevant to each reviewer's comments as follows: - **Reviewer pwQP**: Figure 2.  - **Reviewer M5vB**: Figure 1, Figure 2, Figure 3, and Figure 4 We hope that our responses and the revisions made to the manuscript will alleviate any concerns and enhance the overall quality of our submission. Once again, we thank all the reviewers for their thoughtful reviews and valuable suggestions. Pdf: /pdf/2e634cdc78023a13eb1938083604da8da0073237.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Identifying General Mechanism Shifts in Linear Causal Representations
Accept (poster)
Summary: Recently, causal representation learning has drawn a lot of attention in the representation learning area, where it considers causal relations among the latent generative factors. This work is under the setting of linear causal representation learning. Data $X$ comes from a linear mixing $X=GZ$ of the unknown latent factors $Z$, and the latent factors $Z$ follow a linear SCM. The authors aim to estimate the shifted (causally changed) nodes of the causal mechanisms under more general/relaxed interventions. They claim that 'it is possible to identify the latent sources of distribution shifts while bypassing the estimation of the mixing function and the SCM (causal relation matrix)'. Experiments are provided to justify their setting. Strengths: Pros: 1. This paper is very well-written. It's pretty clear and easy to understand. The reviewer enjoys reading it. 2. They consider a more relaxed setting of interventions. It's an interesting idea where you directly find the shifted node. Weaknesses: Cons: 1. The first issue is the main motivation of this work. Their idea is to bypass the estimation of the mixing function $G$ and the SCM $B$ (causal relation matrix). However, the reviewer believes this is the main motivation for causal representation learning. Why would we want a shifted node, if we can't even identify the causal relationship in CRL? Could the author please clarify the point here? 2. The reviewer believes the strong assumptions limit the significance of this work. The 'access to a test function (Assumption B)' is a really strong assumption that seems not realistic. Since you got the permutation ambiguity fixed, and I believe you also fixed the sign ambiguity in your assumption, combined with the bounded variance you basically got an ICA with only scaling ambiguity. What is the difference between this work and the classical linear ICA result? 3.
The reviewer would like to see the clarification of the motivation and connection between linear ICA and this work. For linear ICA, we can get mixing function $M$ and the independent sources $\epsilon$ in an unsupervised way, with an identifiability guarantee. But for causal representation cases, the $M$ decomposes into $G$ and $B$. The natural/intuitive idea is to identify those factors with certain assumptions. Could the author provide any intuition for this? The reviewer would love to raise the score if these questions were resolved in the discussion session. Technical Quality: 2 Clarity: 4 Questions for Authors: Besides the three questions I mentioned in the cons section, I do have another question here, but it does not lower the marks of this work, just discussion. 1. This question is not an issue for this work since it seems many reference works use interventional settings. But it would be interesting if the author could discuss this. Could the author clarify why we must use interventional data? Is there any alternative way since the interventional data is kind of a semi-supervised strong assumption? I know there is a paper about the 'Challenging Common Assumptions in unsupervised CRL', but is there any other way we can avoid using interventional data? Confidence: 5 Soundness: 2 Presentation: 4 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your efforts in reviewing our paper! Below, we address your concerns. ### From weaknesses 1. We provided some motivation from Line 34 to Line 38 in the paper, and also included a toy example in Figure 1. Additionally, we provided a Psychometrics data application (Section 5.2) to reflect the motivation and practical significance of our method. In addition, we would like to provide more potential applications here to illustrate in which scenarios one might be interested in localizing shifted nodes. For example, examining the mechanism changes in the gene regulatory network structure between healthy individuals and those with cancer may provide insights into the genetic factors contributing to specific cancers. Within biological pathways, genes could regulate various target gene groups depending on the cellular environment or the presence of particular disease conditions [1,2]. In the analysis of EEG signals [3], it is of interest to detect neurons or different brain regions that interact differently when the subject is performing different activities. All these questions can be formulated as localizing shifted nodes. The complementary question of finding shifted nodes is to find the shared non-shifted nodes, which can also be used for social science analysis. For instance, we may have data from different countries and different ethnicities but aim to identify the potential causes that influence the years of education pupils receive [4]. Based on the applications in these fields, we believe that estimating shifted nodes in CRL is an interesting and valuable topic. [1] Hudson, N. J., Reverter, A., \& Dalrymple, B. P. (2009). A differential wiring analysis of expression data correctly identifies the gene containing the causal mutation. PLoS computational biology. [2] Pimanda, J. E., Ottersbach, K., Knezevic, K., Kinston, S., Chan, W. Y., Wilson, N. K., ... \& Göttgens, B. (2007).
Gata2, Fli1, and Scl form a recursively wired gene-regulatory circuit during early hematopoietic development. Proceedings of the National Academy of Sciences. [3] Sanei, S., \& Chambers, J. A. (2013). EEG signal processing. John Wiley \& Sons. [4] Ghassami, A., Salehkaleybar, S., Kiyavash, N., \& Zhang, K. (2017). Learning causal structures using regression invariance. Advances in Neural Information Processing Systems, 30. 2. Assumption B is included for better presentation and understanding of our method's workflow, but it is not necessary for our identifiability results. As mentioned in line 148 and Appendix C, there are alternatives to this assumption. With estimated noise samples, we can perform distribution matching to achieve a consistent order of noise components. Therefore, this assumption is not necessary. Regarding sign and scaling ambiguity, no, we do not fix sign ambiguity in our assumptions, as ICA inherently has sign ambiguity. In Theorem 3, we prove that the unmixing matrix estimated from the ICA method, even with sign ambiguity, will not affect our estimation of shifted nodes. Additionally, in Theorem 1, it is proven that under certain assumptions, the ICA solution only has permutation and sign ambiguity, so there should be no scaling ambiguity. ICA is an important component of our method. The main contribution of our paper is to prove that the ICA solution can help us detect the shifted latent nodes. We also provide the identifiability result for the algorithm designed based on ICA. We believe this is not trivial since we are solving a new problem with theoretically backed methods. 3. Thank you for the question. We refer you to the global response for a detailed answer to this question. In short, in the "inadequate" intervention data setting (small number of intervention environments, inadequate number of interventions, and general intervention setting), estimating $G$ and $B$ from $M$ is impossible. 
Our method bypasses the step of estimating $B$ but can still estimate shifted nodes under fewer assumptions. ### From questions Thank you for the question. Assuming interventional distributions is a modeling choice, which may or may not hold in practice. Our setting is concerned with identifying the sources of distribution changes (shifted nodes) across two or more populations. In this case, one formal way to characterize distribution changes is through modeling interventions in a causal graph. We do not think one *must* assume this, but we believe it can be a reasonable assumption in multiple scenarios. --- We hope our answers will make you feel more positive about our work. Please let us know any follow-up questions! --- Rebuttal Comment 1.1: Title: Response to the author Comment: W1 The reviewer thanks the author for the clarification and agrees that it is of certain value for finding the shifted nodes. However, it does not fully resolve my question. The reviewer respectfully argues that the shifted-node setting is not aligned with the fundamental motivation of causal representation learning, which is finding the causal graph/mechanism. W2 The author answered my question about the strong assumption of the test function. The reviewer was satisfied with their explanation about the relaxation. W3 For the motivation, the paper [1] referenced by the author is probably the closest linear model setting. It showed an interventional-based linear model. They showed the number of interventions needed for their method, but that seems not a strict lower bound. The reviewer respectfully disagrees with the author's claim 'in the "inadequate" intervention data setting, estimating $G$ and $B$ from $M$ is impossible'. Please correct me if I am wrong, and I would also like to know if there is any reference proving a strict lower bound on the number of interventions needed for identifiability. --- Reply to Comment 1.1.1: Comment: Dear reviewer, thanks for your response.
We are glad we were able to resolve your concern about the assumption on the test function. Next we address your remaining questions. **W1** We are not sure why the reviewer is concerned about our setting not being aligned with CRL. Nowhere in our paper do we claim that our goal is to solve the CRL problem. Our paper considers the CRL setting in the sense that our model of the data generating process follows that of the linear CRL setting, that is, the latent variables follow a structural causal model, but we explicitly mention the goal of our work in our contributions and the paragraph before (Lines 55-72). **W3** Thank you for your question. Indeed, several papers discuss the strict lower bound on the number of interventions needed for causal structure recovery. For example, in [1], Theorem 2 demonstrates that with perfect interventions on each single node across different environments, the causal structure can be estimated up to a permutation. Moreover, Proposition 5 indicates that if the number of interventions is fewer than $d$ (where $d$ is the number of latent nodes), the causal structure becomes non-identifiable. Additionally, Appendix B of the same paper shows that if the interventions are soft rather than perfect, Theorem 2 may no longer hold. The most recent advancement in this area is found in [2], which relaxes the hard intervention assumption but still requires at least $d$ environments and $\Theta(d)$ soft interventions.\ These identifiability limitations in CRL w.r.t. the number of environments and the type of interventions are precisely what motivate the search for alternative goals that are still useful in practice, such as identifying mechanism shifts, all while bypassing full identification of the causal structures. [1] Squires, Chandler, et al. "Linear causal disentanglement via interventions." International Conference on Machine Learning. PMLR, 2023. [2] Jin, Jikai, and Vasilis Syrgkanis.
"Learning causal representations from general environments: Identifiability and intrinsic ambiguity." arXiv preprint arXiv:2311.12267 (2023).
Summary: This work studies the nontrivial problem of causal representation learning from the perspective of mechanism shifts within the latent SCM. Specifically, the authors relax existing restrictive assumptions in interventional causal representation learning, such as data generated from single-node perfect interventions and the number of environments necessary to facilitate identifiability, and show that it is possible to identify the latent nodes that shift between environments/distributions from more general soft/hard and add/reverse interventions given access to fewer environments than the number of causal variables. Furthermore, the authors develop a practical algorithm to recover the latent sources attributable to the distribution shift and evaluate their method on synthetic data and a psychometric dataset. The main contribution in this work seems to be the use of a test function to score the noise factors learned from ICA to construct a sorted permutation matrix, which is then used to build the new unmixing matrix for scrambling the independent SCM noise variables. Strengths: - The theoretical result and intuition of identifying latent sources of distribution shift is interesting and is a step toward more feasible CRL for real-world application. Most work focuses on a supervised discriminative setting for studying distribution shifts. This work stands out in being one of the first to identify latent sources of distribution shifts. - The empirical evaluation is extensive and considers real-world datasets to evaluate the proposed CRL algorithm. The results from the Psychometrics dataset suggest that the algorithm proposed is capable of identifying the latent shifts from the data distribution to a great degree and in line with human interpretation. The test statistic proposed to quantify the degree of distribution shift between nodes w.r.t. the unmixing matrix is practical.
- Generalizing the class of interventions for interventional CRL to identify only shifted nodes is a useful result for real-world CRL, where interventions may be multi-node and more complex. Weaknesses: - This work considers linearity in both the mixing and the SCM, which can be a somewhat restrictive assumption in practice. - Proposition 2 and Theorem 3 seem to contradict each other. By the current logic, the two statements imply that both shifted and non-shifted nodes require the same condition of the row corresponding to the variable index in the unmixing matrix to be invariant across environments (w/ sign flip). This should also be made clear in Section 4.2 in Step 3. - Minor points - In the caption of Figure 1, for the UK environment, it should be the edge Z_4 → Z_1 is removed instead of Z_5 → Z_1 removed. Technical Quality: 3 Clarity: 3 Questions for Authors: - Proposition 2 and Theorem 3 appear to contradict each other. What are the criteria for a node to be identified as a shifted node? Do you mean that if the ith row of the unmixing matrix is **different between environments**, then node i is a shifted node? This seems to be the case when looking at the proof of Proposition 2 in the appendix. I would appreciate it if the authors could provide some clarification on this. - Do the authors have any intuition about the setting where noise factors are correlated and the iid assumption of noise across environments is violated? In this scenario, we would no longer have the standard linear ICA result to build off of. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable suggestions and for recognizing the novelty of our work. We next address the reviewer's concerns. ### From weaknesses * Please refer to our global response regarding linear models. * Thanks for pointing this out. We apologize for the typo in Proposition 2, line 187. It should state that "$Z_i$ is identified as a **not** shifted latent node between $k$ and $k'$ if and only if $M_i^{(k)} = M_i^{(k')}$", and Theorem 3 is correct. We now believe Proposition 2 and Theorem 3 are consistent after correction. Additionally, in Section 4.3, Step 3, line 213, it should read "$Z_i$ is a **non-shift** node between $k$ and $k'$ if and only if $\widetilde{M_i}^{(k)} = \pm \widetilde{M_i}^{(k')}$", and in line 215, it should state "there is **no shift** in node $Z_i$ if and only if $\widetilde{M_i}^{(k)}=\pm \widetilde{M_i}^{(k')}$". We will correct these in the revision. Thank you again for your careful review. * Correct. Thanks for pointing this out. We will correct it in the revision. ### From questions * As we pointed out above, there are a couple of typos in Proposition 2 and Section 4.2, Step 3. After correction, as you suggested, the illustration of our method should be consistent. A node $Z_i$ is a shifted node if and only if $M_i^{(k)} \neq M_i^{(k')}$, which means that the $i$-th row of $M^{(k)}$ and $M^{(k')}$ are different, or $\widetilde{M_i}^{(k)} \neq \pm \widetilde{M_i}^{(k')}$. * Great question! This is a non-trivial question that makes for an exciting future direction. It is important to note that the correlated noise assumption impacts the identifiability of $B^{(k)}$. Its effect on the ICA solution would come secondary. For our setting and for CRL in general, it is crucial that $B^{(k)}$ is unique to avoid identifiability issues. If we assume the noise component is correlated, it implies that $\Omega$ is not a diagonal matrix, and it is also possible that $\Omega$ is not full rank.
Given that $\Omega^{1/2}B = I - A$, if $\Omega$ is not full rank, then $B$ is not unique. --- We hope our answers will make you feel more positive about our work. Please let us know any follow-up questions! --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifying response. The authors have done a good job of answering my questions. The problem addressed is quite interesting with significant implications in downstream distribution shift generalization. Furthermore, I believe the theoretical identifiability results and practical algorithm are of interest to the CRL community. Since this work helps to bridge the gap between the theory and practice of CRL, I am increasing my score. --- Reply to Comment 1.1.1: Comment: Dear reviewer, thank you for taking the time to respond. We are happy to see our answers were helpful and made you feel more positive about our work. Thanks a lot for your efforts!
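The corrected criterion from the rebuttal above (node $Z_i$ is non-shifted iff $\widetilde{M_i}^{(k)} = \pm \widetilde{M_i}^{(k')}$) can be sketched in a few lines of numpy. This is an illustrative helper of ours, not the paper's code: in practice the rows would come from ICA estimates with a finite-sample tolerance, here a hypothetical `tol`.

```python
import numpy as np

def shifted_nodes(M_k, M_kp, tol=1e-8):
    """Flag node i as shifted iff row i of the two unmixing matrices
    differs even after allowing a sign flip."""
    out = []
    for i in range(M_k.shape[0]):
        L_i = min(np.linalg.norm(M_k[i] - M_kp[i]),   # rows equal as-is?
                  np.linalg.norm(M_k[i] + M_kp[i]))   # or equal up to sign?
        if L_i > tol:
            out.append(i)
    return out

# Toy rows: node 0 only sign-flips (non-shifted), node 1 truly changes.
M1 = np.array([[1.0, 2.0], [0.5, 0.5]])
M2 = np.array([[-1.0, -2.0], [0.5, 0.9]])
print(shifted_nodes(M1, M2))  # [1]
```

The `min` over the two norms is what absorbs ICA's inherent sign ambiguity mentioned in the rebuttal.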
Summary: This paper considers the setting of linear causal representation learning (CRL) with possibly multi-node interventions. Instead of focusing on the task of identifying the causal structure, which has recently been shown to be impossible, the authors instead focus on the task of identifying mechanism shifts, i.e. which nodes are intervened on in each environment. The authors show that this identification task is actually possible, and design an identification algorithm to achieve it. Lastly, the authors empirically demonstrate the effectiveness of their approach. Strengths: 1. The paper is well-written and easy to follow. Most mathematical definitions and statements are supported with sufficient explanations. 2. The task that the paper focuses on, i.e. identifying mechanism shifts, is quite interesting, and can possibly be considered in other cases where full identification is hard or even impossible. Weaknesses: 1. It seems that step 3 in Sec. 4.2 is stated without any proof of why it works. Is it just a heuristic, or can it provably lead to identification? 2. It seems to me that the main results of this paper are very closely related to [1], but the authors do not discuss this issue in detail. (Please see the "Questions" part for more details.) [1] Jin, Jikai, and Vasilis Syrgkanis. "Learning causal representations from general environments: Identifiability and intrinsic ambiguity." arXiv preprint arXiv:2311.12267 (2023). Technical Quality: 2 Clarity: 2 Questions for Authors: In the paper [1], the authors consider linear CRL (the same setting as this paper) and design an identification algorithm that works for general environments (i.e. multi-node interventions). Their algorithm can fully recover the causal graph, as well as recover the mixing matrix up to a surrounding node ambiguity (SNA). They show that SNA is an intrinsic barrier in this setting.
What I'm wondering is, given their identification algorithm, whether the task of "identifying mechanism shifts" can be straightforwardly resolved. Because if you can recover the mixing matrix, then you can also recover the noise-to-latent matrix (i.e. $B^{(k)}$ in the current paper). Then you can identify the mechanism shifts simply by comparing the entries of different $B^{(k)}$'s. Of course, the mixing matrix is actually recovered with some ambiguities. However, given that such ambiguities are inevitable as shown in [1], I suspect that such ambiguities do not affect the task of identifying mechanism shifts. *I am happy to raise my score if the above concern is appropriately addressed.* Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your efforts in evaluating our paper. Below, we address your concerns. ### From weaknesses 1. In the population setting, step 3 provably leads to identifiability, and it is proven in Theorem 3 that $L_i^{k,k'} = 0$ if and only if $Z_i$ is not a shifted node between environments $k$ and $k'$. However, in the finite sample setting, since we have no precise measure of the accuracy of the ICA estimation, the choice of $\alpha$ is heuristic. ### From questions * Yes, if we can estimate $B^{(k)}$ by the method in [1], SNA ambiguity does not affect the identifiability of our task. However, estimating $B^{(k)}$ up to SNA **requires** two assumptions: the number of environments $K$ should be at least equal to the number of latent nodes $d$, and there should be at least $\Theta(d^2)$ interventions. **Our algorithm does not rely on these two assumptions**. For example, for *any* number of latent nodes, our method can localize mechanism shifts even when given only $K=2$ environments. * Even if we were able to estimate $B^{(k)}$, if the objective is to identify shifted nodes, estimating less can be more efficient. Our paper shows that estimating $B^{(k)}$ is not necessary at all for achieving that objective. We will add these comments in the revision to emphasize the difference to [1]. --- We hope our answers will make you feel more positive about our work. Please let us know any follow-up questions! --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the reply. If I understand it correctly, the authors' point is that the task of this paper is different from (and strictly easier than) the one in [1], since the goal here is only to identify the shifted nodes rather than the full causal model. As a result, the assumptions required in this paper are also weaker than [1].
Given the above interpretation of the main contributions of this paper, I would say that the task this paper considers may be of interest on its own and the results are novel. However, I'm still concerned that the contributions of this paper are a bit too close to [1]. Although [1] requires $d$ environments (or equivalently, $d$ interventions, one for each node), after completing step 1 of their algorithm, one can directly compare the corresponding rows of $M_k$ to determine whether the node is shifted or not. Although this is not explicitly done in that paper (because their task is causal graph discovery), this deduction seems too simple to be the main result of a NeurIPS paper. Actually, the main ideas underlying their algorithms seem to be the same, i.e., the $i$-th row of $M_k$ implicitly encodes any information of node $i$ that is invariant under linear transformations. I'm also concerned about the real-world implications of the mechanism shift identification task. Indeed, without recovering the true latent nodes and the causal graph, it does not seem to be extremely useful to identify a shift at a certain node, because we have no other information about this node. How can we utilize the mechanism shift result to solve downstream tasks? --- Reply to Comment 1.1.1: Comment: Dear reviewer, thank you for participating during this discussion period, your questions are greatly appreciated. We next address them: 1. We would like to offer a different perspective on the contributions of our work. Our contributions are in explicitly *formulating, proving, and demonstrating through experiments* that one can effectively and efficiently solve the problem of identifying mechanism shifts for linear latent causal variables. Given the assumptions in the paper, we found that there is a simple yet elegant solution for this problem and we firmly consider this to be a strength, not a weakness.
\ Regarding the comparison to [1], it's correct that [1] also uses ICA as a first step; however, the same can be said about [2], which also uses ICA as a first step and then a couple of extra steps to identify the causal order in the fully observable setting. In fact, any other application of ICA, ranging from predicting stock market prices to working with EEG data, would share with our algorithm the same step of applying ICA. This is to highlight the importance of the problem formulation, which Reviewers tbD8 and 33zq kindly appreciated from our paper. 2. Regarding the real-world implications. We believe our problem setting is more realistic, in the sense that it could be applied more widely, partly because the objective is less ambitious than estimating the full causal graph and we require fewer assumptions. Indeed, one of the key motivations for **directly** learning differences of causal graphs given in [3,4,5] is that learning full causal graphs is in general impractical (mainly due to strong assumptions and being sample inefficient) and, in many cases, scientists are simply interested in understanding changes among populations/distributions. Note that the latter, i.e. comparing distributions, is a fundamental question in statistics, and our setting is concerned with identifying the *latent* sources of distribution changes, which is closely related to root cause analysis, as stated in Lines 55-60.\ To conclude, for example, in our real-world study in Section 5.2, the research question is whether there exist significant variations in personality traits (latent variables) across populations male/female and US/UK based on psychological tests (observed measurements). Here, our algorithm was able to answer consistently with existing psychological literature (Lines 265-266). [2] Shimizu, S., et al. (2006). A linear non-Gaussian acyclic model for causal discovery. JMLR. [3] Wang, Y., et al. (2018). Direct estimation of differences in causal graphs. NeurIPS.
[4] Chen, T., et al. (2024). iSCAN: identifying causal mechanism shifts among nonlinear additive noise models. NeurIPS. [5] Malik, V., et al. (2024). "Identifying Causal Changes Between Linear Structural Equation Models." UAI. We hope these comments are helpful, we appreciate your participation during this discussion period. Please let us know any follow-up questions or concerns.
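The localization task defended in the rebuttal above can be checked at the population level without ever estimating $B^{(k)}$. Below is a toy numpy sketch under simplifying assumptions of our own, not the paper's (a known square invertible mixing $G$ and unit-variance noise, so the unmixing $M^{(k)} = (I - A^{(k)})G^{-1}$ is available in closed form instead of via ICA); only the node whose mechanism changed between environments is flagged.

```python
import numpy as np

d = 3
G = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])  # invertible mixing (illustrative values)

A1 = np.array([[0.0, 0.0, 0.0],
               [0.7, 0.0, 0.0],
               [0.2, 0.5, 0.0]])  # env 1: Z0 -> Z1, {Z0, Z1} -> Z2
A2 = A1.copy()
A2[2] = [0.9, 0.0, 0.0]          # env 2: the mechanism of Z2 is shifted

# eps = (I - A) Z and Z = G^{-1} X, so the unmixing from X to noise is:
M1 = (np.eye(d) - A1) @ np.linalg.inv(G)
M2 = (np.eye(d) - A2) @ np.linalg.inv(G)

# Row-wise comparison up to a sign flip.
L = [min(np.linalg.norm(M1[i] - M2[i]), np.linalg.norm(M1[i] + M2[i]))
     for i in range(d)]
print([i for i, v in enumerate(L) if v > 1e-8])  # [2]: only Z2 is flagged
```

Rows of $M^{(k)}$ corresponding to un-intervened nodes are untouched by the change in $A$, which is why comparing rows suffices here; the paper's actual algorithm additionally handles ICA's permutation and sign ambiguities and finite samples.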
Summary: This work studies the problem of detecting mechanism shifts in a novel way by considering the latent nodes. The authors prove identifiability results based on assumptions softer than prior identifiability results for causal representation learning. Their method is based on ICA and is evaluated empirically on synthetic data and a real-world psychometric dataset. Strengths: The paper is very well written and the scope and contribution are clearly stated and illustrated with examples. **Related work** The related work part is detailed and correctly positions the paper at the midpoint between causal representation learning and causal mechanism shift detection. **Novelty** The paper is novel as it proposes a methodology for a known problem (detection of causal mechanism shifts) in a new setting that considers latent variables, as is the case in the causal representation learning field. **Theory** The authors propose and theoretically prove a softer identifiability result that allows for fewer than $d$ and unrestricted interventions (soft/hard and possibly applied to multiple nodes). **Experiments** The method is evaluated on synthetic experiments and an interesting real-world experiment on a psychometric dataset, which provides evidence for its applicability in practice. Weaknesses: This work has some possible weaknesses: **Significance of contribution** The proposed work solves a simpler problem than causal representation learning, which doesn't require learning the whole mixing matrix $B$, but rather only the distribution shifts. I am concerned that this simplification of the problem might make it more easily solvable, and I wonder whether other methods of CRL could be transformed easily so they could perform well on this task. **Experiments** Following my previous concern, I wonder why you did not compare against prior CRL methods or methods for causal mechanism shifts. Such a comparison would enhance the experimental results.
Can you adapt the CRL methods or the causal mechanism shift techniques for observable variables to your setting? **Sample complexity** From Figure 2 it seems that indeed for a large (towards infinity) number of samples, your method can accurately detect the mechanism shifts. I am concerned, however, that the sample complexity is high, which puts graphs with over a hundred nodes out of reach. This can also be seen from Table 1, where your performance drops quite early, even for graphs with 60 nodes. From a theoretical perspective, it would be interesting to compute how many samples are required as a function of the number of nodes (a sample complexity result - either a theoretical or an experimental study). Such a theorem would be appreciated by the community. **Real experiment: size of changes** The results are very interesting and agree with psychological findings. However, the number of nodes and changes are very small. A larger number of nodes (up to 100) would better show that your algorithm is valid in practice. **Limitations** You should explain either why the application to a large number of nodes is not needed in practice or add it to the limitations. Technical Quality: 3 Clarity: 4 Questions for Authors: Line 56: Do you think that the problem you solve is relevant to identifying the locations of the root causes of a linear SEM (as in [1])? Figure 1 caption, 7th line: Typo, should be $Z_4\to Z_1$ Line 143: What are the implications of this assumption? Is it also present in prior identifiability results? Line 231: What is the effect of the observed space dimension $p$ being larger or smaller with respect to the problem you solve (does it become easier or harder)? Figure 3: the font in the legend and xticks must be larger. Line 293-294: Here I got a bit confused. This is different from what you show in the example of Fig. 1, right? Because in Fig. 1 you show interventions (shifts) across different countries.
Line 297: Can you briefly explain how your methodology can be generalized to a nonlinear data-generating process? Line 303: What would distribution shift imply for image data? [1] Misiakos, P., Wendler, C., & Püschel, M. (2024). Learning DAGs from data with few root causes. Advances in Neural Information Processing Systems, 36. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have discussed some limitations in the appendix. However, a significant limitation of not being applicable to a large number of nodes is not included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the reviewer's recognition of our paper's novelty and contribution. We now address the reviewer's concerns. ### From Weaknesses > Significance of contributions... Please refer to the global response for this concern. > Experiments... The most recent CRL method with official code released is [2]. In their simulation, their official code assumes all-Gaussian noise and samples data precision matrices from the Wishart distribution. However, in our case, the noise setting is non-Gaussian, requiring the data precision matrix to be estimated from samples. Since this estimation involves inverting a rank-deficient matrix, their code becomes numerically unstable. Thus, it is hard to adapt their code to our setting. Instead, as you suggested, it is a good idea to compare against existing methods that directly find causal mechanism shifts in a fully observable setting. We compared with DCI [3], the most recent such method with official code publicly available. The simulation results are shown in the table of the attached PDF. Our method achieves higher performance in most settings. [2] Squires, C., et al. Linear causal disentanglement via interventions. [3] Wang, Y., et al. Direct estimation of differences in causal graphs. > Sample complexity... Please refer to our global response. > Real-world experiment... 1. We believe that our experimental setting is relatively high-dimensional and involves larger graph sizes compared with existing CRL methods. For example, [2] considers settings with $d = 5$ latent nodes, [3] considers settings with $d = 5, 10$, and [4] considers settings with $d = 5, 8, 10$. In contrast, our experiments consider $d$ ranging from $5$ to $40$, which represents a relatively high-dimensional graph compared to existing baseline methods. [4] Jin, J., et al. "Learning causal representations from general environments: Identifiability and intrinsic ambiguity." 2.
The main contribution of our paper is to provide an identifiability result for detecting the shifted nodes under fewer assumptions. Thus, we believe that the experimental part is sufficient to validate our method and proof. 3. It is challenging to find CRL datasets for which the detected shifted nodes can also be validated against published scientific findings. > Limitations Our method can be applied to any number of latent nodes, and our identifiability results hold. As we explained in previous responses, datasets with a large number of latent nodes are hard to obtain, and we have already conducted synthetic experiments on relatively large latent graph sizes compared with other CRL methods. We will add more discussion about this in the limitations section of the revision. ### From Questions > Line 56... Thank you for your suggestion; we will add more discussion about it in our revision. We note that there are differences between our problem setting and the one in the mentioned paper, even in the direct observation setting. Using the notation in [1], their goal is to find the SEM $A$ from the equation $X = C(I + \bar{A})$, where $(I - A)(I - \bar{A})=I$. They assume that $C$, the root cause, is sparse. In contrast, our objective is to identify the difference in $A$ across different environments, where the difference in $A$ can be sparse. Sparsity in the difference of $A$ between environments does not necessarily imply a sparse root cause $C$. To elaborate further, suppose we have datasets from two environments, $X^1 = A^1 X^1 + N$ and $X^2 = A^2 X^2 + N$. When we take the difference, we can consider $X^1 - X^2$ as arising from the process $X = (A^1 - A^2)X + N'$. Even though $B = (A^1 - A^2)$ may be sparse, $(I + \bar{B})^{-1}$ is not necessarily sparse. Therefore, while the question addressed in that paper is relevant, it is not exactly the same as ours.
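To illustrate the last point numerically: for the process $X = BX + N'$ above, $X = (I-B)^{-1}N'$, and a sparse $B$ can still give a dense solved-form matrix $(I-B)^{-1}$. A minimal sketch with a hypothetical chain-shaped $B$ (not taken from the paper):

```python
import numpy as np

d = 5
B = np.zeros((d, d))
for i in range(d - 1):
    B[i + 1, i] = 1.0             # chain graph: only d - 1 = 4 nonzeros

# Solved form of X = B X + N':  X = (I - B)^{-1} N'
inv = np.linalg.inv(np.eye(d) - B)

print(np.count_nonzero(B))                   # 4  (sparse difference)
print(np.count_nonzero(np.abs(inv) > 1e-9))  # 15 (full lower triangle)
```

Since $B$ is nilpotent here, $(I-B)^{-1} = I + B + \dots + B^{d-1}$ fills in the entire lower triangle, so sparsity of the environment difference does not transfer to the root-cause formulation.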
We will add the citation and include a brief discussion about the differences between our work and that paper in the revision. > Fig1 caption... Thanks for pointing this out. We will correct it in the revision. > L143... This assumption is introduced to ensure a consistent order of latent noise components after applying ICA. When ICA is applied to each individual environment, the order of noise components may differ. Therefore, this assumption helps us eliminate permutation ambiguity. However, this assumption is not necessary. We discuss how to relax this assumption in Appendix C. > L231... For any given $d$, a higher $p$ can be regarded as more data being provided since it offers additional auxiliary information. On the other hand, higher dimensional data makes optimization in ICA more challenging. Lower $p$ values provide less auxiliary information, but they are easier to optimize. Generally, we believe that higher $p$ will offer more benefits than disadvantages. For instance, in the third plot of Figure 2, we observe that higher $p$ values consistently yield higher F1 scores. > L293.. Yes. Sorry for the confusion. Figure 1 and Figure 3 are two different examples, and lines 293-294 are based on the example of Figure 3. We will make this clearer in the revision. > L297.. Please refer to the global response. > Line 303.. A good example of image distribution shifts is the WILDS dataset [1]. For instance, it provides a dataset where tissue slide images shift across patients' health conditions and hospitals. Identifying the mechanism shifts in this context involves using the tissue image dataset to determine, for example, whether some particular coloring changes across healthy and diseased patients. Since there is no evidence that the (mixing) mapping from latent to observation is linear, we did not include such an image example in our paper. [1] Koh, P., et al. "Wilds: A benchmark of in-the-wild distribution shifts." ICML 2021.
--- We hope our answers will make you feel more positive about our work. Please let us know any follow-up questions! --- Rebuttal Comment 1.1: Title: Thank you. Comment: After your rebuttal, I am very positive that this is great work and I am accordingly increasing my score. Particularly I appreciate the novelty of your work, by combining two distinct problems (causal representation learning and causal mechanism shifts). --- Reply to Comment 1.1.1: Comment: Dear reviewer, thank you for participating during this discussion period. We are happy to see our answers addressed your concerns and we are grateful for your support to our work. Once again, thank you.
Rebuttal 1: Rebuttal: We appreciate the time and effort all reviewers have invested in evaluating our manuscript. We are grateful for the constructive feedback and insightful comments. It has come to our attention that there are some common questions regarding our paper, particularly concerning: 1. Can existing CRL methods be applied to our question? Is it a more difficult or simplified question than CRL? 2. Linear mixing and causal structure assumption. Can the method be generalized to nonlinear mixing functions? 3. What is the sample complexity of our method? We next address these concerns and hope our answers will make the reviewers feel more positive about our work. ### Question 1 In some aspects, our setting does indeed simplify the CRL problem as we do not aim to identify the entire mixing matrix and latent causal structure. However, in other aspects, our setting faces other challenging scenarios such as having access to fewer than $d$ environments (even as few as $2$), and considering more general interventions, such as edge weight changes, edge additions/removals, and edge reversals. Most existing CRL methods [1,2,3] rely on each latent node having a perfect intervention *in at least* one environment. Recent developments such as [4] are capable of recovering $B^{(k)}$ under more general interventions (**none to our knowledge allow edge reversals**) but they still require *at least* $d$ environments and $\Theta(d)$ interventions. To our knowledge, no existing CRL method can address the problem of identifying latent shifted nodes in scenarios with fewer environments, fewer interventions, and under general types of interventions. [1] Seigal, A., Squires, C., \& Uhler, C. (2022). Linear causal disentanglement via interventions. [2] Buchholz, S., Rajendran, G., Rosenfeld, E., Aragam, B., Schölkopf, B., \& Ravikumar, P. (2023). Learning linear causal representations from interventions under general nonlinear mixing.
Advances in Neural Information Processing Systems, 36. [3] Ahuja, K., Mahajan, D., Wang, Y., \& Bengio, Y. (2023). Interventional causal representation learning. In International conference on machine learning. [4] Jin, J., \& Syrgkanis, V. (2023). Learning causal representations from general environments: Identifiability and intrinsic ambiguity. arXiv preprint arXiv:2311.12267. ### Question 2 We believe linear models are good starting points for underexplored settings such as ours, moreover, sometimes linear methods can also serve as rough approximations for nonlinear models. Currently, our method cannot be directly generalized to nonlinear mixing functions. To speculate, one way to address this could be using nonlinear ICA methods; however, a formal proof requires further research and this presents an exciting future direction. ### Question 3 The sample complexity of our algorithm is tied to the sample complexity of ICA since our algorithm uses ICA. Since there are different algorithms for solving ICA, we will assume that the estimated ICA unmixing function has the following statistical accuracy: If $n \ge g(d,\delta)$, then with probability at least $1 - h(n,d,\delta,\epsilon)$ we have: $$ l(\hat{M}_i - M_i) \le C \cdot p(d,n)f(\delta), $$ where $\hat{M}_i$ is the $i$-th row of the estimated unmixing matrix $\hat{M}$. Here, $C$ is a constant, and $p$, $f$, $g$, and $h$ are known functions. For instance, in [5], $p(d,n) = \sqrt{\frac{d}{n}}$ and $f(\delta) = \sqrt{\log(1/\delta)}$. The loss function $l$ can be chosen to be the $L_2$ norm. Therefore, for two environments $k$ and $k'$, if node $i$ is not shifted: $$ ||\hat{M}_i^k - \hat{M}_i^{k'}||_2 \le ||\hat{M}_i^k - M_i||_2 + ||\hat{M}_i^{k'} - M_i||_2 \le 2 \cdot C \cdot p(d,n)f(\delta) $$ with at least probability $1 - 2h(n,d,\delta,\epsilon)$. 
Thus, if we set the threshold $\alpha$ in our algorithm to $2 \cdot C \cdot p(d,n)f(\delta)$, we can control the false discovery rate to be at most $2h(n,d,\delta,\epsilon)$. A similar sample complexity theorem can be extended to more than $2$ environments with a similar technique as long as we know the sample complexity of an ICA algorithm. We will add a discussion on this in the revision. [5] Auddy, A., \& Yuan, M. (2023). Large dimensional independent component analysis: Statistical optimality and computational tractability. arXiv preprint arXiv:2303.18156. Pdf: /pdf/127269147a52267e9e84fd4da8be08250798c77e.pdf
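The thresholding rule in the sample-complexity discussion above can be illustrated with a toy simulation; the noise scale $p(d,n)=\sqrt{d/n}$, the constant $C$, and the planted shift below are hypothetical stand-ins, not the paper's actual estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 10_000
M = rng.normal(size=(d, d))                 # shared true unmixing rows

eps = np.sqrt(d / n)                        # p(d, n) = sqrt(d/n), as in [5]
M_k = M + eps * rng.normal(size=(d, d))     # ICA estimate, environment k
M_kp = M + eps * rng.normal(size=(d, d))    # ICA estimate, environment k'
M_kp[2] += 1.0                              # plant a shift at latent node 2

C = 3.0                                     # stand-in for the constant C
alpha = 2 * C * eps                         # threshold 2*C*p(d,n)*f(delta)

diffs = np.linalg.norm(M_k - M_kp, axis=1)  # row-wise L2 differences
shifted = np.flatnonzero(diffs > alpha)
print(shifted)                              # node 2 is flagged
```

Unshifted rows differ only by estimation noise of order $p(d,n)$, so they stay below the threshold with high probability, while the planted shift exceeds it by an order of magnitude.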
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
DARNet: Dual Attention Refinement Network with Spatiotemporal Construction for Auditory Attention Detection
Accept (poster)
Summary: This paper makes a substantial contribution to the field of auditory attention detection (AAD) by presenting a more accurate and efficient model, the dual attention refinement network with spatiotemporal construction (DARNet). The authors effectively address critical limitations in current AAD algorithms, specifically the lack of spatial distribution information and the inability to capture long-range latent dependencies within EEG signals. The methodological innovations and empirical results suggest that DARNet represents a significant advancement in decoding brain activity related to auditory attention tasks. The experiments show significant improvements over state-of-the-art methods. Strengths: 1. This paper uses a simple yet effective method to address previous auditory attention detection algorithms' insufficient utilization of the spatiotemporal features of EEG signals. 2. The proposed method is technically sound and the idea seems to be interesting. 3. The experimental setting is valid and extensive, and the experimental results indicate significant improvements over some comparison methods on several datasets. Weaknesses: Overall, the writing of this paper is relatively clear and the structure is complete. However, there are still some issues, such as grammatical errors and some sentences that are not clearly articulated. There are many writing problems, and the authors should further polish this manuscript: 1) Lines 100-102: The expression is not very clear. 2) Line 103: "However previous" -> "However, previous". 3) Line 188: "or" -> "and". 4) Line 292: "increased" -> "increases". Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why did you use a dual attention refinement module instead of three layers or more? The authors should provide more analysis. 2. Did you lose the FC layer for the final classification part in Figure 1?
Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The DARNet has only been validated under the subject-dependent condition, and it may not perform as well under subject-independent conditions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Rebuttal: Thank you so much for your thoughtful comments and the time to provide constructive feedback! ### Weaknesses: 1. **Regarding grammatical errors and unclear sentence expressions in the paper:** We are very grateful to the Reviewer for carefully reviewing the paper. We have thoroughly examined and corrected the grammatical errors and unclear sentences to ensure the paper is clearer and more accurate. 2. **For the specific issues you pointed out, we have made the following revisions:** - **Lines 100-102:** We have reorganized the sentence to make it clearer. The revised sentence is: "EEG signals record the brain's neuronal electrical activity, varying over time and reflecting activity patterns and connectivity across brain regions." - **Line 103:** We have changed "However previous" to "However, previous." - **Line 188:** We have changed "or" to "and." - **Line 292:** We have changed "increased" to "increases." ### Questions: 1. **Why did you use a dual attention refinement module instead of three layers or more?** In the field of auditory attention detection, researchers are more focused on addressing the issue of low decoding accuracy in low-latency scenarios. Specifically, they aim to reflect the model's effectiveness in low-latency conditions by examining decoding accuracy within a 0.1-second decision window. However, EEG sequences within this 0.1-second window typically contain 12 or 6 data points, which is insufficient for our model to use three or more layers of refinement. Additionally, after two refinement operations, the EEG sequence is already compressed to 3 or 1 data points, making further compression meaningless. Therefore, we chose to use only a two-layer attention refinement module. Furthermore, we conducted ablation experiments, as shown in Table 1 and Table 2. 
The results indicate that using a single-layer attention refinement compared to a two-layer attention refinement resulted in only a 0.5% decrease in performance, which aligns with our assumptions. Table1: Ablation Study on DTU dataset. | Model | 0.1s | 1s | 2s | 5s | | :------------ | :------------------------------- | :-------------------------------- | :------------------------------- | :------------------------------- | | single-DARNet | $79.1 \pm 5.66$ | $86.3 \pm 5.83$ | $88.0 \pm 4.03$ | $90.2 \pm 5.62$ | | **DARNet** | $\textbf{79.5}\pm \textbf{5.84}$ | $\textbf{87.8} \pm \textbf{6.02}$ | $\textbf{89.9}\pm \textbf{5.03}$ | $\textbf{93.1}\pm \textbf{4.37}$ | Table2: Ablation Study on KUL dataset | Model | 0.1s | 1s | 2s | 5s | | :------------ | :-------------- | :-------------- | :-------------- | :-------------- | | single-DARNet | $91.1 \pm 5.18$ | $95.5 \pm 3.28$ | $96.2 \pm 2.98$ | $96.7 \pm 4.03$ | | **DARNet** | $91.6\pm 4.83$ | $96.2 \pm 3.04$ | $97.2\pm 2.50$ | $98.0\pm 3.17$ | 2. Did you lose the FC layer for the final classification part in Figure 1? Thank you for your thorough review and feedback. We acknowledge that we missed labeling the fully connected layer (FC layer) in the final classification part of Figure 1. We will correct this in the final version of the paper. We appreciate your attention to detail. ### Limitations: 1. **The DARNet has only been validated under the subject-dependent condition, and it may not perform as well under subject-independent conditions.** Recently, we conducted additional leave-one-subject-out cross-validation experiments on the publicly available DTU and KUL datasets, using a 1-second decision window that closely matches human attention shifts. The results showed that our model performed exceptionally well under subject-independent conditions, surpassing the performance of current SOTA models. 
The specific results are shown in Table 3: Table3: Cross-subject experiment comparison on the KUL and DTU dataset for 1s. The GCN is currently the state-of-the-art (SOTA) model for cross-subject tasks. | model | KUL | DTU | | ---------------- | --------------------------------- | --------------------------------- | | SSF-CNN | $54.1 \pm 6.60$ | $48.7 \pm 3.96$ | | MBSSFCC | $59.0 \pm 8.72$ | $49.3 \pm 4.86$ | | DBPNet | $59.7 \pm 8.12$ | $53.7 \pm 5.98$ | | GCN[1] | $64.4 \pm 6.36$ | $53.5 \pm 7.53$ | | **DARNet(ours)** | $\textbf{74.0} \pm \textbf{11.4}$ | $\textbf{56.0} \pm \textbf{5.71}$ | [1] S. Cai, R. Zhang and H. Li, "Robust Decoding of the Auditory Attention from EEG Recordings Through Graph Convolutional Networks," *ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, Seoul, Korea, Republic of, 2024, pp. 2320-2324, doi: 10.1109/ICASSP48485.2024.10447633. --- Rebuttal Comment 1.1: Title: Thank you for your prompt and comprehensive response to my review. Comment: Your detailed answers have effectively addressed my concerns. The additional cross-subject experiments are a valuable addition, significantly strengthening the overall contribution of the paper. Additionally, the authors have provided a thorough rationale, supported by ablation studies, demonstrating the effectiveness of their approach within the constraints of low-latency scenarios. I am confident that this paper makes a significant contribution to the field of auditory attention detection and will be of great interest to the community. Therefore, I am raising my rating to "Strong Accept." --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your positive feedback and for raising the rating of our paper. We greatly appreciate your recognition of our additional experiments and the overall contribution of our work. We are delighted that our paper is considered a significant contribution to the field and will be of interest to the community.
Summary: The paper proposes DARNet, a dual attention refinement network with spatiotemporal construction for auditory attention detection (AAD). The network captures spatiotemporal features and long-range latent dependencies from EEG signals, leading to improved classification accuracy and reduced parameter count compared to state-of-the-art models. Strengths: 1. DARNet shows significant improvements in classification accuracy across multiple datasets, demonstrating its effectiveness in AAD tasks. 2. The model reduces the number of parameters by 14% compared to the state-of-the-art, which is beneficial for practical applications requiring efficient computation. 3. The integration of spatial and temporal convolutions to capture dynamic patterns in EEG signals enhances the model's ability to decode brain activity accurately. 4. The dual attention refinement module effectively captures long-range dependencies in EEG signals, addressing a common limitation in previous models. Weaknesses: 1. The experiments are all subject-dependent, which may limit the generalizability of the results. Cross-subject conditions could be more relevant for practical applications, where the model needs to generalize across different individuals. 2. While the model achieves high performance, its complexity might make it difficult to interpret and understand the underlying mechanisms contributing to its success. More insights into the workings of the dual attention mechanism and its impact on EEG signal processing would be beneficial. Technical Quality: 3 Clarity: 3 Questions for Authors: NA Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are discussed in the checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Rebuttal: Thank you so much for your thoughtful comments and the time to provide constructive feedback! ## Weakness: 1. **Supplementary cross-subject experiment results:** Recently, we conducted additional leave-one-subject-out cross-validation experiments on the publicly available DTU and KUL datasets, using a 1-second decision window that closely matches human attention shifts. The results showed that our model performed exceptionally well under subject-independent conditions, surpassing the performance of current SOTA models. The specific results are shown in Table 1: Table1: Cross-subject experiment comparison on the KUL and DTU dataset for 1s. The GCN is currently the state-of-the-art (SOTA) model for cross-subject tasks. | Model | KUL | DTU | | ------- | --------------------------------- | --------------------------------- | | SSF-CNN | $54.1 \pm 6.60$ | $48.7 \pm 3.96$ | | MBSSFCC | $59.0 \pm 8.72$ | $49.3 \pm 4.86$ | | DBPNet | $59.7 \pm 8.12$ | $53.7 \pm 5.98$ | | GCN[1] | $64.4 \pm 6.36$ | $53.5 \pm 7.53$ | | DARNet | $\textbf{74.0} \pm \textbf{11.4}$ | $\textbf{56.0} \pm \textbf{5.71}$ | [1] S. Cai, R. Zhang and H. Li, "Robust Decoding of the Auditory Attention from EEG Recordings Through Graph Convolutional Networks," *ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, Seoul, Korea, Republic of, 2024, pp. 2320-2324, doi: 10.1109/ICASSP48485.2024.10447633. 2. **More insights into the workings of the dual attention mechanism and its impact on EEG signal processing would be beneficial:** We supplemented our study with ablation experiments using a single-layer attention refinement module, referred to as single-DARNet, while the original model is denoted as DARNet. The results, shown in Tables 2 and 3, reveal that the performance gap between single-DARNet and DARNet widens with increasing decision window lengths.
Our analysis suggests that with a short decision window length of 0.1 seconds, the EEG sequence contains few data points, making each point significantly impactful on the decoding accuracy. At this stage, the local features of the EEG signal play a more decisive role in the decoding process. Therefore, the performance difference between single-DARNet and DARNet is not substantial. However, under experimental conditions with medium to long decision window lengths, such as 1 second and 2 seconds, the performance gap between single-DARNet and DARNet gradually increases. This indicates that as the decision window lengthens, the long-range dependencies within the EEG signals increasingly influence the decoding accuracy, demonstrating that our model captures long-range dependencies within EEG signals. Additionally, under experimental conditions with even longer sliding window sizes, such as 5 seconds, our model continues to perform exceptionally well, further supporting this conclusion. Table2: Ablation Study on DTU dataset. | Model | 0.1s | 1s | 2s | 5s | | :------------ | :------------------------------- | :-------------------------------- | :------------------------------- | :------------------------------- | | single-DARNet | $79.1 \pm 5.66$ | $86.3 \pm 5.83$ | $88.0 \pm 4.03$ | $90.2 \pm 5.62$ | | **DARNet** | $\textbf{79.5}\pm \textbf{5.84}$ | $\textbf{87.8} \pm \textbf{6.02}$ | $\textbf{89.9}\pm \textbf{5.03}$ | $\textbf{93.1}\pm \textbf{4.37}$ | Table3: Ablation Study on KUL dataset | Model | 0.1s | 1s | 2s | 5s | | :------------ | :-------------- | :-------------- | :-------------- | :-------------- | | single-DARNet | $91.1 \pm 5.18$ | $95.5 \pm 3.28$ | $96.2 \pm 2.98$ | $96.7 \pm 4.03$ | | **DARNet** | $91.6\pm 4.83$ | $96.2 \pm 3.04$ | $97.2\pm 2.50$ | $98.0\pm 3.17$ | --- Rebuttal Comment 1.1: Comment: Thanks for your supplementary experiments and I have increased the score from 5 to 6.
--- Reply to Comment 1.1.1: Comment: We sincerely appreciate your recognition of our supplementary experiments and your decision to increase the score.
Summary: This paper proposes a new architecture for auditory attention detection (AAD) that consists of three key components: 1) Convolutional layers applied to the temporal and spatial dimensions of EEG signals in a sequential manner to extract features. 2) Two attention layers to process these features. 3) A feature fusion module to combine the outputs of the two attention layers. The method was tested on three different datasets and achieved state-of-the-art (SOTA) classification results. Strengths: The paper presents an effective and innovative adoption of convolution and attention layers for auditory attention detection. The model has been rigorously validated on three different datasets. The inclusion of ablation studies further strengthens the validity of the findings by systematically analyzing the contribution of each component of the model. Weaknesses: * Using convolutional layers on the temporal dimension followed by the spatial dimensions of EEG signals is not a novel idea. Classic EEG+DL+BCI papers, such as ConvNet (Schirrmeister, 2017) and EEGNet (Lawhern, 2018), already employ similar concepts. Many EEG+DL+AAD papers also adopt spatial processing modules (e.g., references 3 and 14 in the paper). The author should provide a more in-depth discussion on why their spatial module is more effective than existing methods. Specifically, they should highlight any unique innovations or improvements their approach offers over previous works. * It would be interesting to see the comparison between dual-attention layer and one attention layer. * It is crucial to explicitly mention in the methods section that this is a subject-dependent study. * There is a typo in the notations on row 91. Technical Quality: 3 Clarity: 3 Questions for Authors: How did 48 minutes of recording in the KUL dataset yield over 5000 1-second decision windows? 
If a sliding window with overlap was used, please describe the procedure in detail to ensure that there are no repeated signal segments in the training and test sets. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: * Reproducibility of the paper would be significantly enhanced if the author provided the full code. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Rebuttal: Thank you so much for your thoughtful comments and the time to provide constructive feedback! ### Weakness: 1. **A more in-depth discussion on why their spatial module is more effective than existing methods.** Auditory attention decoding requires processing EEG signals under complex acoustic stimuli. Temporal regularity is crucial in selective hearing, making the extraction of spatiotemporal relationships key to decoding. Traditional ConvNet models for motor imagery first perform temporal convolution and then spatial convolution, potentially overlooking complex interactions between spatial and temporal dimensions. Most current EEG+DL+AAD methods use DE features, such as transforming the temporal domain into the frequency domain and projecting it onto a 2D topological map, which can lose important temporal variations. Recent methods like STANet (Su et al.) explore spatial features in the temporal domain but are limited to assigning weights to EEG channels without considering nonlinear relationships between them. + **Rich Spatiotemporal Feature Extraction:** We capture spatial dependencies between EEG channels and temporal patterns within each channel. By emphasizing channel interactions and using GELU to capture nonlinear relationships, we achieve comprehensive spatiotemporal representations. + **Multi-level temporal pattern extraction:** We integrate a dual attention mechanism on the basis of effective convolution design, enabling the model to focus on distant time points and capture long-term attention pattern changes. Additionally, we compared the performance of temporal-only, spatial-only, and spatiotemporal feature extraction. This comparison offers valuable insights for future research. 2. 
**The comparison between dual-attention layer and one attention layer.** We have supplemented our study with ablation experiments using a single-layer attention refinement module, referred to as single-DARNet, while the original model is denoted as DARNet. The results, shown in Tables 1 and 2, reveal that the performance gap between single-DARNet and DARNet widens with increasing decision window lengths. Our analysis indicates that with a short decision window of 0.1 seconds, the EEG sequence contains few data points, making each point significantly impactful on decoding accuracy. At this short window length, local EEG features are crucial, so the difference in performance between single-DARNet and DARNet is minimal. However, as the decision window lengthens to 1 and 2 seconds, the gap between single-DARNet and DARNet increases. This suggests that longer decision windows better capture the long-range dependencies within EEG signals, enhancing decoding accuracy. Table1: Ablation Study on DTU dataset | Model | 0.1s | 1s | 2s | 5s | | :------------ | :-------------- | :-------------- | :-------------- | :-------------- | | single-DARNet | $79.1\pm 5.66$ | $86.3\pm 5.83$ | $88.0 \pm 4.03$ | $90.2 \pm 5.62$ | | **DARNet** | $79.5\pm 5.84$ | $87.8\pm 6.02$ | $89.9\pm 5.03$ | $93.1\pm 4.37$ | Table2: Ablation Study on KUL dataset | Model | 0.1s | 1s | 2s | 5s | | :------------ | :-------------- | :-------------- | :-------------- | :-------------- | | single-DARNet | $91.1\pm 5.18$ | $95.5\pm 3.28$ | $96.2\pm 2.98$ | $96.7\pm 4.03$ | | **DARNet** | $91.6\pm 4.83$ | $96.2 \pm 3.04$ | $97.2\pm 2.50$ | $98.0\pm 3.17$ | 3. **Supplementary cross-subject experiment results:** Recently, we conducted additional leave-one-subject-out cross-validation experiments on the publicly available DTU and KUL datasets, using a 1-second decision window that closely matches human attention shifts. 
The results showed that our model performed exceptionally well under subject-independent conditions, surpassing the performance of the current SOTA model. The specific results are shown in Table 3.

Table 3: Cross-subject experiment comparison for 1s. GCN is currently the SOTA model for cross-subject tasks.

| Model | KUL | DTU |
| :--------- | :-------------------------------- | :-------------------------------- |
| SSF-CNN | $54.1 \pm 6.60$ | $48.7 \pm 3.96$ |
| MBSSFCC | $59.0 \pm 8.72$ | $49.3 \pm 4.86$ |
| DBPNet | $59.7 \pm 8.12$ | $53.7 \pm 5.98$ |
| GCN | $64.4 \pm 6.36$ | $53.5 \pm 7.53$ |
| **DARNet** | $\textbf{74.0} \pm \textbf{11.4}$ | $\textbf{56.0} \pm \textbf{5.71}$ |

4. **There is a typo on row 91.** We are very grateful to the reviewer for reviewing the paper so carefully. Modified version: By employing a moving window on the EEG data, we obtain a series of decision windows, each containing a short duration of EEG signals. Let $R=[r_1,\ldots,r_i,\ldots,r_T]^{\top} \in \mathbb{R}^{T\times N}$ represent the EEG signals of a decision window, where $r_i \in \mathbb{R}^{N \times 1}$ represents the EEG data at the $i$-th time point within the decision window, containing $N$ channels. ### Questions: 1. **48 minutes of recording yield over 5000 1s decision windows.** The 48 minutes of recording contain 2880 seconds of EEG data. Consistent with previous studies, we set the repetition rate to 0.5 (i.e., a 50% overlap between consecutive windows). Thus, we obtain $2880 \times 2 = 5760$ 1s decision windows. 2. **Prevent data leakage.** To prevent data leakage, we designated the first 90% of the EEG data from each trial as the training set and the remaining 10% as the test set. We then applied the sliding window technique separately to both sets. ### Limitations: 1. Reproducibility of the paper would be significantly enhanced if the author provided the full code. We will upload the complete code once the paper is accepted. --- Rebuttal Comment 1.1: Comment: I thank the author for the detailed response.
The cross-subject study improved the quality of the work. I changed my score to 6: Weak Accept --- Reply to Comment 1.1.1: Comment: We greatly appreciate your constructive comments and are pleased that our cross-subject study addressed your concerns.
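As a side note on the window count discussed in the rebuttal above, the sliding-window arithmetic can be sanity-checked with a short sketch (a hypothetical helper, assuming a 1 s window slid with 50% overlap, i.e., a 0.5 s hop, over the 2880 s of recording):

```python
def count_windows(total_s: float, win_s: float = 1.0, overlap: float = 0.5) -> int:
    """Count sliding windows that fit entirely inside a recording."""
    hop = win_s * (1.0 - overlap)  # 0.5 s hop for 50% overlap
    return int((total_s - win_s) / hop) + 1

# 48 minutes of EEG = 2880 s; roughly two 1 s windows start per second,
# giving ~2 * 2880 = 5760 windows (5759 exactly, since the final window
# must end inside the recording).
n_windows = count_windows(2880)
```

Under the 90/10 per-trial split described in the rebuttal, windowing would be applied to each partition separately, so no window straddles the train/test boundary.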
Summary: The manuscript aims to capture the spatial distribution information and long-range dependencies in EEG signals. Two modules are designed to address these two challenges: a spatiotemporal construction module and a dual attention refinement module. The experiments show the superiority of the proposed method, and the analysis shows the advantage of each module. Strengths: 1) Good writing and organization. 2) The method is easy to understand and can be replicated. 3) The experiments and analysis are sufficient. Weaknesses: 1) The manuscript lacks novelty. 2) The Abstract is not clear about which dataset the accuracy is improved on using different sliding window sizes. 3) There is no analysis of the computation cost and time consumption of the methods mentioned in Table 2. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) Many researchers have done this work, capturing the long-range dependencies and spatial distribution information within EEG signals. Is there any further innovation in your proposed method? (From framework design and experimental performance.) 2) What sliding window size gives the best performance? Have you ever tried other sizes longer than 2 s? 3) Please prove that the long-range dependencies are captured within EEG signals. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your thoughtful comments and for taking the time to provide constructive feedback! ### Weakness: 1. **Innovation summary:** We apologize for any confusion that may have led to the perception of a lack of novelty in our manuscript. We have clarified and summarized our key innovations as follows: Firstly, our method highlights EEG channel interactions. Unlike some spatial feature explorations through the frequency domain that exist in the AAD field, our work effectively extracts spatial information directly from the temporal domain and uses GELU to capture nonlinear relationships. We compared temporal-only, spatial-only, and spatiotemporal feature extraction, offering significant insights for summarizing, inspiring, and guiding future work in AAD. Secondly, unlike methods that stack multiple self-attention layers to capture long-range dependencies, we employ a refinement operation that progressively compresses the EEG signal sequence. This approach models global dependencies more effectively and reduces the model's parameter count. By halving the sequence length at each step, our method reduces the complexity of self-attention to one-fourth of the original. Thirdly, our feature fusion module effectively balances local EEG features and long-range dependencies, enhancing decoding accuracy. Finally, our model outperforms current SOTA models across multiple datasets while maintaining a lower parameter count. 2. **Dataset clarity in the Abstract.** Thank you for your thorough review. We realize we overlooked specifying the dataset annotations and will update this in the final version of the paper. 3. **The computation cost and time consumption of the mentioned methods.** Table 1 includes data on computation cost and time consumption, measured on the KUL dataset for 1s windows. The results demonstrate that our model has fewer parameters, as well as lower MACs and test time, compared to current SOTA models. Table 1:
DBPNet is currently the SOTA model.

| Model | Param(M) | MACs(M) | Test time(ms) |
| :--------- | :------- | :-------- | :------------ |
| MBSSFCC | 83.91 | 89.15 | 11.20 |
| DBPNet | 0.91 | 96.55 | 11.98 |
| **DARNet** | **0.78** | **16.36** | **10.03** |

### Questions: 1. **Is there any more innovation of your proposed method?** See the response to Weakness 1. 2. **What sliding window size gives the best performance? Have you ever tried sizes longer than 2s?** Previous research shows that longer sliding windows capture more information and typically improve performance. However, in AAD tasks, longer windows increase latency, which is less suitable for human attention switching. Therefore, current research focuses on improving decoding accuracy with short decision windows, specifically 0.1s, 1s, and 2s. Our results for a 5s window, presented in Tables 2 and 3, confirm that our model continues to outperform current SOTA models.

Table 2: AAD accuracy (%) comparison on the DTU dataset.

| Model | 0.1s | 1s | 2s | 5s |
| :--------- | :--------------- | :--------------- | :-------------- | :-------------- |
| MBSSFCC | $66.9 \pm 5.00$ | $75.6 \pm 6.55$ | $78.7 \pm 6.75$ | $80.2\pm 8.64$ |
| DBPNet | $75.1 \pm 4.87$ | $83.9 \pm 5.95$ | $86.5 \pm 5.34$ | $90.1 \pm 5.82$ |
| **DARNet** | $79.5\pm 5.84$ | $87.8\pm 6.02$ | $89.9 \pm 5.03$ | $92.4\pm 4.71$ |

Table 3: AAD accuracy (%) comparison on the KUL dataset.

| Model | 0.1s | 1s | 2s | 5s |
| :--------- | :-------------- | :-------------- | :-------------- | :-------------- |
| MBSSFCC | $79.0 \pm 7.34$ | $86.5 \pm 7.16$ | $89.5 \pm 6.74$ | $92.8 \pm 5.32$ |
| DBPNet | $87.1 \pm 6.55$ | $95.0 \pm 4.16$ | $96.5 \pm 3.50$ | $97.0 \pm 4.05$ |
| **DARNet** | $91.6\pm 4.83$ | $96.2\pm 3.04$ | $97.2 \pm 2.50$ | $98.0\pm 3.17$ |

3.
**Please prove that the long-range dependencies are captured within EEG signals.** We supplemented our study with ablation experiments using a single-layer attention refinement module, referred to as single-DARNet, while the original model is denoted as DARNet. The results, shown in Tables 4 and 5, reveal that the performance gap between single-DARNet and DARNet widens with increasing decision window lengths. Our analysis indicates that with a short decision window of 0.1s, the EEG sequence contains few data points, making each point significantly impactful on decoding accuracy. At this short window length, local EEG features are crucial, so the difference in performance between single-DARNet and DARNet is minimal. However, as the decision window lengthens to 1s and 2s, the gap between single-DARNet and DARNet increases. This suggests that longer decision windows better capture the long-range dependencies within EEG signals, enhancing decoding accuracy. Our experiments with even longer sliding window sizes, such as 5s, confirm that our model effectively captures these long-range dependencies.

Table 4: Ablation study on the DTU dataset.

| Model | 0.1s | 1s | 2s | 5s |
| :------------ | :-------------- | :-------------- | :-------------- | :-------------- |
| single-DARNet | $79.1 \pm 5.66$ | $86.3 \pm 5.83$ | $88.0 \pm 4.03$ | $90.2 \pm 5.62$ |
| **DARNet** | $79.5\pm 5.84$ | $87.8 \pm 6.02$ | $89.9\pm 5.03$ | $93.1\pm 4.37$ |

Table 5: Ablation study on the KUL dataset.

| Model | 0.1s | 1s | 2s | 5s |
| :------------ | :-------------- | :-------------- | :-------------- | :-------------- |
| single-DARNet | $91.1 \pm 5.18$ | $95.5 \pm 3.28$ | $96.2 \pm 2.98$ | $96.7 \pm 4.03$ |
| **DARNet** | $91.6\pm 4.83$ | $96.2 \pm 3.04$ | $97.2\pm 2.50$ | $98.0\pm 3.17$ |
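The quadratic saving claimed in the innovation summary above (halving the sequence length cuts self-attention cost to one-fourth) follows from the $L \times L$ attention score matrix. A minimal cost-model sketch, not the authors' implementation:

```python
def attn_score_entries(seq_len: int) -> int:
    """Entries in the L x L self-attention score matrix (per head, per layer)."""
    return seq_len * seq_len

L = 128                               # example sequence length, for illustration only
full = attn_score_entries(L)          # L^2 entries
halved = attn_score_entries(L // 2)   # (L/2)^2 = L^2 / 4 entries
# Halving L therefore reduces the score-matrix cost to one-fourth.
```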
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
The Implicit Bias of Gradient Descent toward Collaboration between Layers: A Dynamic Analysis of Multilayer Perceptrons
Accept (poster)
Summary: In this work, the authors study the difference between underparameterised and overparameterised networks in terms of the collaboration between consecutive layers. They find that under-parameterized networks tend to foster co-correlation among layers to improve performance, whereas the performance of over-parameterized networks can improve without relying on this co-correlation. They relate the co-correlation to adversarial robustness. Strengths: This paper addresses an important open question in deep learning: how should we relate the concepts of generalization, adversarial robustness, and the network architecture and size? The paper addresses this question from an original perspective. The literature study is quite extensive and motivates the work (although it remains quite vague/confusing, see next). Great care has been taken in a correct mathematical formulation of all definitions and proofs. The authors have verified some of their theoretical claims with a series of experiments. Weaknesses: 1. From your abstract, introduction, and conclusion, it is not clear why you would study co-correlation between layers in the first place, and/or how this relates to robustness and generalization. E.g., starting from line 40, the reader can infer: Dirichlet energy = measure of adversarial robustness -> Dirichlet energy per layer -> co-correlation -> gradient descent increases co-correlation. But how and why is this co-correlation exactly related to adversarial robustness? Etc. 2. This does not become really clear to me in the other parts of the paper either. I understand that the Dirichlet energy can be used to measure the gap between generalization and adversarial risk. But what this means for individual layers and why you would want to look into this remains unclear.
E.g., Line 163: “Since the Dirichlet energy defined in Equation (7) can be used to measure the variability of mappings, it follows that this metric could be employed to evaluate the adversarial robustness of individual layers or modules within neural networks. Consequently, this allows for an assessment of whether there is collaboration between these components in terms of adversarial robustness.”: this seems an important point, but I couldn't understand what you meant exactly, especially the second sentence. 3. L117: why is the second layer random and fixed? If needed for a mathematical analysis, what is the influence of this on the generality of your findings? 4. The claim made in L160-161 cannot be supported by Fig. 1 alone. The fact that the Dirichlet energy has a similar (inverse) profile over width and initialization as the adversarial robustness does not mean that one directly influences the other. The same goes for the other figures. More information/verification seems needed. 5. L192-194: what about non-linear activation functions? 6. The experiments are only conducted on the simplified models used for the theoretical derivations, and as such do not show anything about how the theoretical findings relate to actual, practical neural networks (i.e., models with more layers, where no layers are held fixed). Typos/grammar/symbols: 7. Lines 64, 76, 77, 105, 111 (strange sentence), 127, caption of figure 1, 275, … 8. Eq 1: f_W undefined 9. Fig 1: legend and labels too small, caption not clear Technical Quality: 3 Clarity: 2 Questions for Authors: See points raised above. In general, I think this work could be a valuable contribution, but the quality of the text has to be increased to better motivate and explain the approach. In its current state, the work is hard to judge. If you think a change to the manuscript is warranted, please specify how you would update it.
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations are not addressed sufficiently. E.g., it is not clear what influence the strong assumption that the second layer remains fixed has on the generality of the findings. No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your valuable feedback and careful review. Your detailed examination of the paper and identification of its flaws are greatly appreciated. My rebuttals are as follows: ## The reason why we study co-correlation: The motivation for studying co-correlation actually comes from the research question: *Can different layers of a neural network collaborate against adversarial examples?* This is an important point, especially in an era dominated by complex neural networks. For example, in the main block of a vanilla Vision Transformer, we have Multi-head Self Attention (MSA) followed by an MLP. People may expect these components to have specific functionalities and imagine the forward propagation working like independent robots on a conveyor belt. However, even for a simple MLP, there is hardly any research on whether different layers interact with each other, let alone on the effects on adversarial robustness. Our work aims to fill this gap to some extent. ## How co-correlation relates to adversarial robustness: The relationship between co-correlation and adversarial robustness is bridged by the Dirichlet energy, as depicted in Theorem 4.5 and Eq. (15). While co-correlation $\varrho_{\boldsymbol{\phi}, \boldsymbol{\varphi}}$ is not directly related to adversarial robustness, it is the most significant factor affecting the Dirichlet energy, as shown in Thm. 4.1. Specifically, let $\phi \circ \varphi$ be a 2-layer neural network, where $\varphi$ represents the first layer and $\phi$ represents the second layer. Since the term $\Big(1 + \frac{var_{\boldsymbol{\phi}, \boldsymbol{\varphi}}}{\mu^2_{\boldsymbol{\phi}, \boldsymbol{\varphi}}}\Big)^{\frac{1}{2}}$ is negligible, the co-correlation $\varrho$ dominates how the Dirichlet energy of each layer is passed to that of the neural network. ## About L163: The use of Dirichlet energy to measure adversarial robustness was first considered in the paper by Dohmatob et al.
[8], as mentioned in line 137. It was originally proposed to replace the Lipschitz constant as a better measure of adversarial robustness, as shown on page 14, Appendix A, in [8]. In our work, we first demonstrate that Dirichlet energy can measure the gap between generalization and adversarial risk, implying that Dirichlet energy is a proper measurement of adversarial robustness, since higher Dirichlet energy indicates a larger gap. With simple modifications, we can also calculate the Dirichlet energy for a single layer or block within neural networks, as defined in Eq. (7) in Def. 3.2. We use this concept because of the decomposition of the Dirichlet energy of neural networks, as shown in Eq. (15). If we regard a neural network as a compound function, i.e., $\theta \circ \phi$, the decomposition of the Dirichlet energy of this compound function requires the computation of the Dirichlet energy of $\phi$ and $\theta$, each of which can be an individual layer inside the neural network. This explains the first sentence: we can evaluate the adversarial robustness of individual layers. I apologize for the misuse of “adversarial robustness,” which will be corrected in a later version. For the second sentence, as shown in Eq. (15), the term $\Big(1 + \frac{var_{\boldsymbol{\phi}, \boldsymbol{\varphi}}}{\mu^2_{\boldsymbol{\phi}, \boldsymbol{\varphi}}}\Big)^{\frac{1}{2}}$ is negligible (see Fig. 4), leaving the co-correlation $\varrho$ to dominate whether the Dirichlet energy of each mapping is transferred to the overall Dirichlet energy. Hence, we say that we can assess the collaboration between layers via $\varrho$. ## About L117 The random and fixed second layer is used to simplify our analysis, following the previous work by Du et al. [9], which was published in ICLR. Since we use similar approaches (directly analyzing gradient descent), and to make the results easier to understand, we adopted the same setting.
We will emphasize this in a later version of the paper. ## About the claim in L160 Each neural network in Fig. 1 (a) and (b) was trained to convergence under identical conditions, except for width and weight initialization. Therefore, they demonstrate the relationship between Dirichlet energy and adversarial robustness, though not causally. However, as verification of Thm. 4.1, this is adequate to show the correctness of the proof, and more experiments will be included in a later version. ## About L192-L194 L192-L194 discuss the interpretation of co-correlation in the linear case without activation functions, to simplify understanding. Since activation functions make the transformation at each layer non-linear, we can replace the weight matrix with the Jacobian. Let $\theta$ and $\phi$ be the layers with activation functions; then Eq. (11) becomes $$ \frac{\Vert J_{\theta} J_{\phi} \Vert_2 }{\Vert J_{\theta} \Vert_2 \Vert J_{\phi} \Vert_2 } $$ The explanation at L190-L194 still applies to the Jacobians of both layers. The difference is that the Jacobian varies with different inputs, potentially affecting the co-correlation, as empirically shown for ReLU in Fig. 1 (c) and (d). ## Extra experiments We conducted experiments with ResNet50 and Wide-ResNet50 on CIFAR10. We divided the ResNets in two ways and used the Adam optimizer with a learning rate of 0.003 to track the dynamics of the co-correlations. The experimental results are available at the links: [Dynamic of co-correlation of the divide as $A_1$, $A_2$](https://i.imgur.com/icGOSOK.png) and [Dynamic of co-correlation of the divide as $B_1$, $B_2$](https://i.imgur.com/EfTfHx4.png). As shown in both figures, the co-correlation increases over the training epochs, although the upward trend for the separation of $A_1$ and $A_2$ is not strictly monotonic. The figure at the anonymous link [Illustration of the divide for ResNet50](https://i.imgur.com/1OPWAuO.png) shows how we divide ResNet50.
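The co-correlation ratio of Eq. (11) (with Jacobians in the non-linear case) is straightforward to compute numerically; by submultiplicativity of the spectral norm it always lies in $(0, 1]$. A small illustrative sketch with random matrices, not the paper's experiment:

```python
import numpy as np

def co_correlation(J_theta: np.ndarray, J_phi: np.ndarray) -> float:
    """Spectral-norm ratio ||J_theta J_phi||_2 / (||J_theta||_2 ||J_phi||_2)."""
    num = np.linalg.norm(J_theta @ J_phi, ord=2)  # largest singular value of the product
    den = np.linalg.norm(J_theta, ord=2) * np.linalg.norm(J_phi, ord=2)
    return float(num / den)

rng = np.random.default_rng(0)
J_theta = rng.standard_normal((4, 8))  # stand-in Jacobian of the second layer
J_phi = rng.standard_normal((8, 6))    # stand-in Jacobian of the first layer
rho = co_correlation(J_theta, J_phi)   # lies in (0, 1]
```

A value near 1 means the top singular directions of the two Jacobians are aligned (high co-correlation); in the paper's terms, a smaller value indicates stronger collaboration between the layers in suppressing the composed map's sensitivity.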
### We will check all lines and make them more clear in a later version. --- Rebuttal Comment 1.1: Comment: I have carefully read the rebuttal, and I would like to thank the authors for their effort and clarifications. The additional experiments are performed with more complex models and strengthen the work. I appreciated the clarifications about the goal of the paper and the relationship between concepts. It's still rather hard to judge how the final manuscript will be improved, as the statements about this are rather unspecific ("We will emphasize this in the paper of a later version", " more experiments will be included in a later version "). I think the *presentation* of this paper could be greatly improved -but this might be achieved in the camera-ready version. E.g., the research goal as stated/clarified in the rebuttal is quite different from what a reader can infer from, e.g., the original abstract. But I'd like to give the authors the benefit of the doubt, and I'll raise my score to 5. --- Reply to Comment 1.1.1: Title: Proposed Updates to the Manuscript Comment: Thank you very much for your response. I have carefully checked my manuscript according to your response, and I would like to propose the following updates to the manuscript: ## Abstract The main issue with the abstract is the lack of a clear statement of the primary research goal, particularly in L8 to L13. The sentences in this section will be rephrased to clearly emphasize the research goal. This part will be modified as follows: we will first present the research goal, followed by an introduction to the core concept we proposed to evaluate collaboration between layers, along with an intuitive explanation of this concept. A brief discussion of the different behaviours of over and under-parameterized neural networks will then be included, followed by our experimental results. 
## Introduction The issue in the Introduction is similar to that in the abstract — the absence of a clearly stated research goal: to determine whether there is collaboration between layers in resisting adversarial examples. Additionally, it lacks a clear explanation of the relationships between different concepts, such as the connection between Dirichlet energy, adversarial robustness, and co-correlation. To address these issues, we will follow the same logic as the modification in the abstract and rephrase the third paragraph (L33 to L39), with the research goal emphasized to clearly outline the motivation for this work. An additional paragraph will be added to explain the concepts used in the paper and their relations. A clear flow of how we solve the research goal will also be included in the same paragraph. ## Preliminary A clear explanation of the neural network setting will be added. In addition to citing the setting from [9], we will explain why this setting is reasonable and whether the theoretical results can be generalized to broader neural networks. The initialization setting mentioned in line 117 will also be explained, including a discussion on the impact of deviating from this setting. ## Measure Adversarial Risk by Dirichlet Energy Additional empirical evidence using more complex neural networks will be included to verify our proposed Theorem 4.1. Experiments involving more complex models, such as ResNet50 and WRN50, will be presented in a new figure. Additionally, a brief discussion of the non-linear case will be added to the "Interpretation of Co-correlation" paragraph. ## On dynamics of co-correlation In L214 to L219 and L254, we will clarify the reasoning behind our assumptions and compare them with existing works, such as those by Lyu and Li [22], Ji and Telgarsky [18], Kunin et al. [20], and Frei et al. [13], to highlight the advantages of our simplified assumptions. 
## Experiments Some of the additional experiments will be included in this section, along with brief descriptions. ## Conclusion In this section, we will rephrase the paragraph to re-emphasize our research goal, followed by an explanation of how we modelled the problem using Dirichlet energy and the proposed concept of co-correlation, supported by our experimental results. The re-emphasis of the research goal will be placed at the beginning of this section. ## Overall modification The use of the terms "under and over-parameterized neural networks" will be clarified as "narrow and wide neural networks" when referring specifically to 2-layer networks. And they will be used only when discussing more general or deeper networks. Additionally, a brief discussion on neural networks with varying depth and width but the same number of parameters will be included in the appendix, along with more experiments, and will be mentioned in the main content. Thank you again for your thorough review of the paper. Your feedback has been invaluable in helping me improve the manuscript. --- Rebuttal 2: Title: A kind reminder for your response Comment: Due to the limited time available to respond, and since I have not received any feedback on whether I answered your question, I kindly request that you let me know if my response was satisfactory. This would help me understand where I can improve my work. Thank you!
Summary: This paper investigates the implicit bias of gradient descent in over-parameterized neural networks, particularly focusing on the collaboration between consecutive layers. The study first introduces Dirichlet energy to evaluate the adversarial risk. Then it decomposes the Dirichlet energy between layers and measures the alignment of feature selections between layers. The collaboration of Dirichlet energy between layers is called co-correlation. Through theoretical analysis and extensive experiments, the authors demonstrate that over-parameterized networks exhibit stronger resistance to increased co-correlation, thereby enhancing adversarial robustness. Strengths: The paper introduces the novel concept of co-correlation to quantify collaboration between layers, offering a fresh perspective on implicit bias in neural networks. The notion of co-correlation helps with distinguishing the dynamics between over-parameterized and under-parameterized networks. Moreover, this paper provides a robust theoretical analysis of the properties of the dynamics of co-correlation. Weaknesses: 1. The theory part of the paper mainly analyzes the dynamics of co-correlation. However, the paper only explains the relation between Dirichlet energy and adversarial robustness; it does not directly connect adversarial robustness with co-correlation in the analysis. Without a valid analysis, co-correlation dynamics cannot lead to conclusions on adversarial robustness. 2. In the analysis of co-correlation, this paper proved $C(t)>0$ after training for some time. This result indicates the increasing trend of co-correlation but does not explain the value or the rate of convergence, which would show the strength of the implicit bias. Moreover, it would also be better if the authors discussed, in the theory part, the effect of initialization shown in the experiments. Technical Quality: 3 Clarity: 2 Questions for Authors: 1.
Why is the final co-correlation of the MLP smaller than that of the linear net with the same width and initialization? 2. How will the Dirichlet energy of each mapping $\phi$ and $\psi$ change? Will this also affect the Dirichlet energy and adversarial robustness of the neural network? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have addressed the limitations of their work, acknowledging the challenges in extending the approach to more complex models and the need for broader evaluations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and careful review. The rebuttals are as follows: ## About the relation between co-correlation and adversarial robustness The relationship between co-correlation and adversarial robustness is bridged by the Dirichlet energy, as depicted in Theorem 4.5 and Eq. (15), shown as $$ \mathfrak{S}(\boldsymbol{\phi}\circ \boldsymbol{\varphi}) = \varrho_{\boldsymbol{\phi}, \boldsymbol{\varphi}} \Big(1 + \frac{var_{\boldsymbol{\phi}, \boldsymbol{\varphi}}}{\mu^2_{\boldsymbol{\phi}, \boldsymbol{\varphi}}}\Big)^{\frac{1}{2}} \rho_{\boldsymbol{\phi}, \boldsymbol{\varphi}} \mathfrak{S}(\boldsymbol{\phi}) \mathfrak{S}(\boldsymbol{\varphi}). $$ While co-correlation $\varrho_{\boldsymbol{\phi}, \boldsymbol{\varphi}}$ is not directly related to adversarial robustness, it is the most significant factor affecting the Dirichlet energy, and the Dirichlet energy is directly related to adversarial robustness, as shown in Thm. 4.1. Specifically, let $\phi \circ \varphi$ be a 2-layer neural network, where $\varphi$ represents the first layer and $\phi$ represents the second layer. First, as demonstrated in Eq. (15), the Dirichlet energy of the network $\mathfrak{S}(\phi \circ \varphi)$ is dominated by the co-correlation $\varrho$, since the term $\Big(1 + \frac{var_{\boldsymbol{\phi}, \boldsymbol{\varphi}}}{\mu^2_{\boldsymbol{\phi}, \boldsymbol{\varphi}}}\Big)^{\frac{1}{2}}$ is negligible, as shown in Fig. 4. Second, our work is more concerned with how the Dirichlet energies of layer 1 and layer 2, i.e., $\mathfrak{S}(\boldsymbol{\phi})$ and $\mathfrak{S}(\boldsymbol{\varphi})$, are passed to the overall Dirichlet energy of the neural network. This interaction between layers can be quantified by the co-correlation $\varrho$. It is important to note that this paper primarily aims to understand the interplay between layers concerning adversarial robustness, i.e., $\varrho$, rather than adversarial robustness itself.
Therefore, the direct relationship between co-correlation and adversarial robustness is not our primary focus. ## About the value and the rate of convergence As indicated in Thm. 5.3 and Thm. 5.5, Eq. (17) and Eq. (20), the exact value of $C(t)$ depends on several assumptions about the inputs and the state of the weight matrix $W(t)$. It is possible to provide a more precise evaluation of $C(t)$; however, doing so requires particular assumptions, such as Gaussian distributions for the inputs and the weight matrix. Because our work primarily aims to answer the research question of whether there exists collaboration between layers to improve adversarial robustness, we focused on a qualitative analysis of the dynamics rather than a quantitative analysis of the convergence speed. This is an interesting problem, but it is impossible to address all issues within a limited article. Future studies may explore this idea. ## About the co-correlation of the MLP being smaller than that of the linear net As shown on page 5, the co-correlation represents the feature alignment of layers in neural networks. Introducing an activation function (at least ReLU) as an 'isolation' layer between $W_1$ and $W_2$ tends to reduce co-correlation. This is experimentally verified in Fig. 1 (c) and (d). While we did not formally prove this idea, and activation functions other than ReLU may even enhance co-correlation, our conclusion holds as long as the derivative of the activation function is bounded, as detailed in Assumption 5.4. ## The effect of the Dirichlet energy of each mapping Yes, as shown in Eq. (15), the overall Dirichlet energy is also affected by that of the individual mappings. However, since our primary focus is on the interplay between layers, this aspect is beyond the scope of our paper.
Even if the Dirichlet energies of the individual mappings, $\mathfrak{S}(\boldsymbol{\phi})$ and $\mathfrak{S}(\boldsymbol{\varphi})$, are large, the co-correlation still represents the interplay between $\phi$ and $\varphi$. Additionally, since the other statistics are negligible, as shown in Fig. 4, whether a high value of the Dirichlet energy of each mapping is transferred to $\mathfrak{S}(\boldsymbol{\phi} \circ \boldsymbol{\varphi})$ still depends highly on $\varrho$. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal. I think I have to explain my concerns again, since the authors thought the questions were vague and beyond the scope of their work. When I read this paper, I assumed that the research goal of this paper is $\textbf{whether there exists collaboration between layers to improve adversarial robustness}$, which has also been mentioned by the authors in their rebuttal. In order to completely solve the problem, I think the paper should answer the following two questions: 1. the existence of collaboration between layers. 2. the collaboration improves adversarial robustness. My main concern is the completeness of this work, since I was not satisfied with the answers to the two questions above. In my review, I provided two major weaknesses. The second weakness, about $C(t)$, concerns question No. 1 above, where I think $C(t)>0$ is not very strong evidence of the existence of collaboration between layers. Only proving $C(t)>0$ shows that $\varrho$ increases during training but does not rule out the case that $\varrho(t)$ increases only a little, which is not enough to claim the existence of the collaboration. My first major concern is directly connected with question No. 2 above, where I think the paper only provides vague explanations. However, in their response, the authors claim that this question is not their main focus.
According to the general rebuttal and the responses under other reviews, I realized that the primary focus of this paper is question No. 1 only. I think it would be better for the authors to state their research goal more explicitly in their paper and responses. After reading the responses, I have no concern about completeness, since question No. 2 is not a primary goal of this paper, and the problem with $C(t)$ is not that significant because the authors provided at least some experimental evidence on question No. 1. However, the contribution of the paper is not that significant if it only shows the existence of the collaboration between layers. I have to admit that this implicit bias does not rely on strong assumptions and can be extended to deep networks. But I wonder whether this implicit bias provides significant insight into understanding the benefit of GD in training neural networks without addressing question No. 2 above. According to the explanations above, I will keep my score. Thanks again to the authors for their rebuttal and the additional results they provided. --- Reply to Comment 1.1.1: Comment: Thank you very much for your responses. Below is my response to the questions raised regarding the paper. ## About the existence of collaboration between layers The paper is structured as follows: First, in Theorem 4.1 and Figures 1(a) and 1(b), we establish a connection between adversarial risk and Dirichlet energy, showing that the Dirichlet energy, denoted as $\mathfrak{S}(f)$, can approximate the gap between adversarial risk and natural risk. To facilitate understanding of Theorem 4.5, we first define our core concept, co-correlation $\varrho$, along with other related statistics. In Theorem 4.5, the Dirichlet energy of a compounded mapping $\phi\circ\varphi$ is decomposed into the product of the Dirichlet energy of each mapping, $\mathfrak{S}(\phi)$ and $\mathfrak{S}(\varphi)$, the co-correlation $\varrho$, and other statistics.
Figure 4 in the appendix shows that these other statistics are negligible, so given that $\mathfrak{S}(\phi)$ and $\mathfrak{S}(\varphi)$ are fixed, the co-correlation $\varrho$ represents the interaction between $\mathfrak{S}(\phi)$ and $\mathfrak{S}(\varphi)$ w.r.t. adversarial robustness. Here, $\phi\circ\varphi$ can be considered a neural network, with $\phi$ and $\varphi$ as the functional components that constitute the network. Based on this flow, we define the collaboration between layers—or more broadly, between different components of neural networks—via the co-correlation $\varrho$. It generally describes the interaction between $\phi$ and $\varphi$ concerning adversarial robustness, where a lower value of $\varrho$ implies stronger collaboration between layers. Therefore, as long as $\varrho$ increases during GD, we can assert that the implicit bias of GD hinders collaboration between layers. In cases where $\varrho$ increases only a little, and the overall $\mathfrak{S}(\phi\circ\varphi)$ decreases due to lower values of $\mathfrak{S}(\phi)$ and $\mathfrak{S}(\varphi)$, it is still fair to claim the existence of collaboration. ## Beyond the existence of collaboration In addition to proving the existence of collaboration between layers, we also explore how the over- and under-parameterization of neural networks—i.e., wider and shallower 2-layer networks—affects adversarial robustness. As shown in Figure 3 and discussed in Section 6.2, GD tends to improve the performance of narrower neural networks by fostering co-correlation among layers, which potentially weakens adversarial robustness. On the other hand, over-parameterized networks (the wider ones) are trained with less reliance on interlayer correlation (a smaller $\varrho$ implies stronger collaboration), leading to inherently more robust models, under the assumption that $\varrho$ is a more dominant factor than $\mathfrak{S}(\phi)$ and $\mathfrak{S}(\varphi)$.
Measurements of $\mathfrak{S}(\phi)$ and $\mathfrak{S}(\varphi)$ will be included in the camera-ready version of the paper. This observation complements the argument made by Frei et al. [13], which states that GD leads neural networks to non-robust solutions. We show that this problem can be mitigated to some extent in over-parameterized networks. GD will still lead neural networks to a non-robust solution, but not the worst one for over-parameterized neural networks. Finally, the discussion on the value of $C(t)$ is indeed intriguing, particularly in the context of neural networks with different architectures. However, due to space limitations, it was not possible to include everything in the 9-page paper. This topic could be a promising direction for future research, where more comprehensive work will be done to explore the rate of convergence and the value of $C(t)$. --- Rebuttal 2: Title: A kind reminder for your response Comment: Given the limited time available to respond and the lack of feedback on whether I adequately addressed your question, I kindly request that you let me know if my response was satisfactory. This would help me identify areas where I can improve my work. Thank you! Additionally, I noticed that your question is somewhat vague, and many aspects fall beyond the scope of our research. If possible, providing more specific questions would be greatly appreciated.
Summary: This work focuses on studying the implicit bias of correlation between intermediate layers of neural nets and uses this metric to analyze the adversarial robustness of networks in under- and over-parameterized regimes. The authors further use these findings to suggest that in the under-parameterized case, gradient descent enhances accuracy by forcing co-correlation between layers, which in turn leads to worse adversarial robustness. However, in the over-parameterized case, models depend less on the interlayer correlations and thus result in more robust models. The authors approach the problem by using the Dirichlet energy to bound the adversarial risk. They then introduce the co-correlation metric, designed to describe the alignment of layers in a linear 2-layer neural net. Further, the authors manage to bound the Dirichlet energy using the co-correlation metric and additional terms, but provide experimental evidence to indicate that the main term changing throughout training in over-/under-parameterized scenarios is the co-correlation metric. The paper then focuses on studying the dynamics of co-correlation change during training on the linear model, with emphasis on layer width and initialization, describing how the co-correlation increase happens mainly during the early stages of training. Finally, the authors provide experimental results and extend their work to general MLPs. Strengths: Overall I found the work to be quite interesting, well written and described. The ideas were well explained, the assumptions are mostly reasonable, and the paper is easy for me to understand. Additionally, I like seeing work done on the interlayer interactions of deep nets and believe such observations and results could have further experimental and theoretical implications. I am also quite interested in the results regarding the implications of initialization and am somewhat surprised by the experimental results describing the importance of initialization.
Weaknesses: 1) While I understand that the focus of the work is on theory, I would like to see 1-2 more experimental results, potentially with CIFAR-10 or larger networks (maybe not MLPs), just to confirm the results and give a better perspective. 2) The authors often make claims regarding over-/under-parameterization vs. the width of layers. I find it a bit confusing whether some of the claims concern the number of network parameters or the actual width of layers and would like to see more of a distinction. For example, in the experimental cases, it's possible to try out narrower but deeper networks with the same number of parameters as wider ones to isolate the width effect. I understand that this could be hard in such controlled cases and small MLPs, of course. I will address some more of my concerns in the questions section. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) To be clear, the authors don't provide any theoretical explanation for why the other terms used for the upper bound in Thm 4.5 are negligible? And on that note, am I correct in assuming that the point of Fig 4 in the appendix is to show how the variance/mean and linear correlation don't change much with model width? If so, shouldn't you consider different initializations as well? 2) In equation 19, the authors make the claim that the first term in the lower bound is large early on? Wouldn't this also be true during the late stage of training, since the learning rate is smaller and $ \tilde{x} $ stabilizes? With enough training epochs, I expect that to be the case, right? 3) Could the authors provide further explanation on the potential impact of weight regularization and co-correlation given Property 2? 4) Regarding the comment made on line 267, wouldn't it be the opposite, that $ \tilde{x}_* $ fluctuates more during the early stages of training with a higher learning rate, as mentioned previously for Thm 4.5?
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your valuable feedback and careful review; you checked the details of the paper and pinpointed the flaws, which is highly valued. My rebuttals are as follows: ## More experiments ### Experiments on ResNet50 and Wide-ResNet50 We conducted additional experiments on larger and deeper networks, specifically ResNet50 and Wide-ResNet50 on CIFAR10. These experiments will be included in a later version of the paper. In the figures shown in the anonymous links: [Illustration of the divide for ResNet50](https://i.imgur.com/1OPWAuO.png), we divided the ResNets in two ways and used the Adam optimizer with a learning rate of 0.003 to track the dynamics of co-correlations. The experiment results are available at [Dynamics of co-correlation for the divide into $A_1$, $A_2$](https://i.imgur.com/icGOSOK.png) and [Dynamics of co-correlation for the divide into $B_1$, $B_2$](https://i.imgur.com/EfTfHx4.png). As shown in both figures, the co-correlation increases over the training epochs, although the upward trend for the separation into $A_1$ and $A_2$ is not strictly monotonic. ### About varying depth and width Your suggestion to conduct experiments on networks with varying depth and width but the same total number of parameters is quite interesting. However, in this paper, we focus on the theoretical analysis of whether there is collaboration between layers against adversarial examples. How different neural network architectures impact the co-correlation, especially with varying depth and width, will be considered in future work. ### Negligibility of other terms under different weight initializations Yes, Fig. 4 in the appendix demonstrates that the other terms are negligible. Additional experiments with different initializations are available at [Relative std](https://i.imgur.com/dLOv5yv.png) and [Linear correlation](https://i.imgur.com/GzsDl3G.png). We consider an MLP of width 512 with varying weight initializations.
Similar to Fig. 4 in the appendix, the first and second figures show $\frac{var^{\frac{1}{2}}}{\mu}$ and $\rho$ in Eq. (15). Empirically, $\Big(1 + \frac{var_{\boldsymbol{\phi}, \boldsymbol{\varphi}}}{\mu^2_{\boldsymbol{\phi}, \boldsymbol{\varphi}}}\Big)^{\frac{1}{2}} \rho_{\boldsymbol{\phi}, \boldsymbol{\varphi}}$ is quite close to 1, making it negligible in Eq. (15). ## About Eq. (19) and the training stage The first term of Eq. (19) is $$ \frac{\sum_{\tau=1}^{t}\widetilde{\boldsymbol{x}}(\tau)^T\widetilde{\boldsymbol{x}}(t)}{\Vert W(t) \Vert_2^2}\cdot \Big(1 - \big(\boldsymbol{v}(t)^T\boldsymbol{a}\big)^2\Big) $$ The exact value of this term is complex; however, its sign depends on $\sum_{\tau=1}^{t}\widetilde{\boldsymbol{x}}(\tau)^T\widetilde{\boldsymbol{x}}(t)$, since the other factors are positive, as explained in Thm 5.3. According to Eq. (17), $\widetilde{\boldsymbol{x}}$ is a weighted average of all inputs, where the weight is the difference between the ground truth $y_i$ and the predicted likelihood $sig(u_i(t))$. At the initial stage with a small learning rate, $\widetilde{\boldsymbol{x}}(\tau_1)$ is similar to $\widetilde{\boldsymbol{x}}(\tau_2)$ for $\tau_1, \tau_2 \in[t]$. More specifically, let $t$ be 5 epochs. Since the model hasn't learned much, the predicted likelihoods $sig(u_i(t))$ at $\tau_1 = 1$ and $\tau_2 = 4$ are similar to each other; therefore, $\widetilde{\boldsymbol{x}}(\tau_1)$ and $\widetilde{\boldsymbol{x}}(\tau_2)$ are similar in terms of cosine similarity. However, in the later stages, as the model learns more from the data, most of the predictions become correct, and $y_i - sig(u_i)$ approaches zero, causing $\sum_{\tau=1}^{t}\widetilde{\boldsymbol{x}}(\tau)^T\widetilde{\boldsymbol{x}}(t)$ to converge. Continued training will also enlarge $\Vert W(t) \Vert_2^2$ and make $\Big(1 - \big(\boldsymbol{v}(t)^T\boldsymbol{a}\big)^2\Big)$ approach zero.
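For intuition, this dynamic can be reproduced in a toy logistic-regression simulation (our own illustrative stand-in, not the model analyzed in the paper): it tracks $\widetilde{\boldsymbol{x}}(t)$, computed as the residual-weighted average of the inputs, and shows that consecutive $\widetilde{\boldsymbol{x}}$ are strongly aligned early in training while $\Vert\widetilde{\boldsymbol{x}}(t)\Vert$ shrinks as predictions become correct.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data; a hypothetical stand-in for the paper's inputs.
n, d = 200, 10
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

def sig(z):
    # Numerically clipped logistic function.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

w = np.zeros(d)
lr = 0.5
x_tildes = []
for t in range(1000):
    residual = y - sig(X @ w)      # y_i - sig(u_i(t))
    x_tilde = X.T @ residual / n   # weighted average of the inputs
    x_tildes.append(x_tilde)
    w += lr * x_tilde              # gradient step on the mean logistic loss

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
early_alignment = cos(x_tildes[0], x_tildes[1])    # close to 1 early on
late_norm_ratio = np.linalg.norm(x_tildes[-1]) / np.linalg.norm(x_tildes[0])
```

With more epochs, `late_norm_ratio` keeps shrinking, matching the point that the residuals, and hence $\sum_{\tau=1}^{t}\widetilde{\boldsymbol{x}}(\tau)^T\widetilde{\boldsymbol{x}}(t)$, settle down in the later stages.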
Considering all these effects, the co-correlation increases very fast at the beginning, then the rate of increase becomes smaller, and in the final stage it approaches zero, as shown in Fig. 2 (c) and (d), where the co-correlation curves of all networks flatten at the end. ## About weight regularization and co-correlation in Property 2 As shown in Eq. (19), the co-correlation is inversely related to the $L_2$-norm of the weight matrix. This $L_2$-norm is the operator norm, which differs from the norm used in weight regularization: L2 regularization penalizes the Frobenius norm (F-norm). This implies that merely controlling the F-norm of the weight matrix may not have a direct impact on $\Vert W \Vert_2$. However, since all norms are equivalent in finite-dimensional vector spaces, L2 regularization will still restrict its growth during training to some extent. Additionally, since the value of $C(t)$ also depends on $\Big(1 - \big(\boldsymbol{v}(t)^T\boldsymbol{a}\big)^2\Big)$, where $\boldsymbol{v}$ is the maximal singular vector of $W(t)$, it is difficult to determine whether $C(t)$ will become larger because of weight regularization. ## About L267 Yes, if the learning rate is too large, whether at the initial stage or at any later stage, $\widetilde{x}^{\star}$ may fluctuate too much, but it may also prevent the model from converging or result in poor performance. In such cases, it is possible that $C(t)$ could become negative. However, most machine learning settings require that the learning rate not be too large, such as using 3e-4 with Adam, to ensure convergence and good performance. Therefore, I believe the assumption that the learning rate is not too large is realistic. --- Rebuttal 2: Title: A kind reminder for your response Comment: Due to the limited time available to respond, and since I have not received any feedback on whether I answered your question, I kindly request that you let me know if my response was satisfactory.
This would help me understand where I can improve my work. Thank you! --- Rebuttal Comment 2.1: Comment: Thank you for the detailed response, and I apologise for the late reply. First, I would like to thank the authors for including additional experimental results and responding to my concerns, specifically questions 2 and 4. I believe this work has an intriguing concept and solid theoretical work, so I would like to keep my current evaluation. --- Reply to Comment 2.1.1: Comment: Thank you so much for your review and response.
Summary: In this work, the authors study the tradeoff between generalization and adversarial robustness from the perspective of collaboration between the layers of a neural network (NN). They adapt the concept of Dirichlet energy to analyze the robustness of different layers in the network. Decomposing this across layers allows them to quantify the alignment of feature selection between consecutive layers as collaboration correlation (or co-correlation). They show that the co-correlation increases during training, which leads to lower adversarial robustness, that MLPs with larger widths exhibit more resistance to increased co-correlation, and they present supporting experimental results. Strengths: This work presents a novel *view* on the tradeoff between adversarial robustness and generalization of NNs trained with gradient descent. Weaknesses: - While the results are somewhat interesting, it is unclear what the implications of these results are, since many of the observations in this work have been noted in prior work, and are not surprising. It is also not clear if the proposed concept of co-correlation can be used for other applications. - There are several typos/grammatical errors/formatting issues that should be corrected: - In line 56, ‘adversarial’ should be ‘adversarially’. - The last part of lines 64-65 should be rephrased. - In line 77, ‘digger’ should be ‘dig’. - The heading of section 3 should be ‘Preliminaries’. - Some of the notations (e.g., $sig$) should be corrected (to sig). - In line 127, ‘Slimier’ should be ‘Similar’. - The formatting of Theorem 4.1 (lines 155-156) should be corrected. - In line 179, $J\mathbf{\psi}$ should be $J_{\mathbf{\psi}}$. - In Ass. 5.2, L-2 should be $L_2$. - In Prop. 1, ‘flattened’ should be ‘saturated’. - A reference is missing in line 275. - Line 297 should be rephrased. - Missing ‘.’ in line 311. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the weaknesses section.
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: There are no societal negative impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and careful review. We appreciate the opportunity to clarify and expand on our work. Our rebuttals are as follows: ## Theoretical Implications Although it may not be immediately apparent in the paper, the implications of our work are significant. From the theoretical perspective of implicit bias research, our approach is distinct in assuming only $L_2$-norm-bounded inputs, Gaussian weight initialization, and bounded derivatives of the activation functions. This differs from most works on implicit bias, such as those by Lyu et al. [23], Frei et al. [12], Kunin et al. [20], and Frei et al. [13], where proofs are typically provided for 2-layer neural networks and rely on strong assumptions on the inputs, making them difficult to generalize to deep neural networks. Extending these works to deeper neural networks may require the same strong assumptions on the features between layers, which is unrealistic. Because our assumptions require the inputs only to be $L_2$-norm-bounded, our work can be more easily extended to deep networks. This extension was not included in the paper due to page limitations. However, we have provided experiments with ResNet50 and Wide-ResNet50 on CIFAR10 in this rebuttal and will include them in a later version of this paper. In the figures shown in the anonymous links: [Illustration of the divide for ResNet50](https://i.imgur.com/1OPWAuO.png), we divided the ResNets in two ways and used the Adam optimizer with a learning rate of 0.003 to track the dynamics of co-correlations. The experiment results are available at [Dynamics of co-correlation for the divide into $A_1$, $A_2$](https://i.imgur.com/icGOSOK.png) and [Dynamics of co-correlation for the divide into $B_1$, $B_2$](https://i.imgur.com/EfTfHx4.png). As shown in both figures, the co-correlation increases over the training epochs, although the upward trend for the separation into $A_1$ and $A_2$ is not strictly monotonic.
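To give a concrete picture of what "dividing a network in two and measuring alignment" can look like, here is a deliberately crude operator-norm proxy (our own simplification for illustration; it is not the paper's co-correlation $\varrho$): by submultiplicativity, $\Vert W_2 W_1 \Vert_2 \le \Vert W_2 \Vert_2 \Vert W_1 \Vert_2$, and the ratio approaches 1 exactly when the top singular directions of the two blocks align.

```python
import numpy as np

rng = np.random.default_rng(0)

def alignment_score(W1, W2):
    # Crude proxy in (0, 1]: ||W2 @ W1||_2 / (||W2||_2 * ||W1||_2).
    # np.linalg.norm(W, 2) is the operator (spectral) norm for 2-D arrays.
    op = lambda W: np.linalg.norm(W, 2)
    return op(W2 @ W1) / (op(W2) * op(W1))

d = 64
W1 = rng.normal(size=(d, d))
W2_random = rng.normal(size=(d, d))  # independent, unaligned second block
s_random = alignment_score(W1, W2_random)

# Deliberately aligned block: rank-1, reading out W1's top left-singular
# direction, so the submultiplicativity bound is tight.
U, S, Vt = np.linalg.svd(W1)
W2_aligned = np.outer(rng.normal(size=d), U[:, 0])
s_aligned = alignment_score(W1, W2_aligned)
```

Here `s_aligned` equals 1 up to floating point, while `s_random` is noticeably below 1, which is the qualitative behaviour the co-correlation is designed to capture.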
Additionally, our work offers a possible explanation for the observation that wider neural networks are more resilient to adversarial attacks. This complements recent findings by Frei et al. [13], who suggest that implicit bias leads to non-robust solutions. Our work suggests that these non-robust solutions can be mitigated by increasing the network width. ## Practical Implications Practically, our findings imply that designing an isolation layer to decouple the correlation between layers could enhance adversarial robustness. However, due to page limitations, we concentrate on the theoretical analysis of whether there is collaboration between layers against adversarial examples. How to reduce the co-correlation will be explored in future work. We greatly appreciate your careful review. All typographical and grammatical errors will be corrected in a subsequent version, and all the suggestions will be considered. Thank you for your understanding and feedback. --- Rebuttal 2: Title: A kind reminder for your response Comment: Due to the limited time available to respond, and since I have not received any feedback on whether I answered your question, I kindly request that you let me know if my response was satisfactory. This would help me understand where I can improve my work. Thank you! --- Rebuttal Comment 2.1: Comment: Thank you for the detailed response. It would be helpful to include more discussion on the implications in the paper. Reading through other reviews and responses, it seems that making changes such as adding more experiments as well as intuitions about the theorems goes beyond minor updates. However, the paper presents a promising viewpoint. Hence, I will maintain my score. --- Reply to Comment 2.1.1: Comment: Thank you very much for your response.
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their careful review. ## Motivation of the paper This paper mainly addresses the research problem: *Is there collaboration between layers against adversarial examples during Gradient Descent?* To quantify this collaboration, we introduce a new concept called *Co-correlation*, interpreted as the alignment of feature selections to maximize outputs for each layer, and we investigate the implicit bias of gradient descent. We theoretically show that Gradient Descent enhances this collaboration between layers. Additionally, we observe different behaviours for under- and over-parameterized neural networks: under-parameterized networks tend to foster co-correlation among layers to improve performance, whereas over-parameterized networks' performance improvement does not heavily rely on establishing such co-correlation. ## How we will adjust in the later version After reviewing all the feedback, we will adopt all the necessary suggestions. To facilitate understanding of our work, we will add more intuition behind our theorems and better clarify our arguments. Because this is theoretical work, it inevitably has some limitations in the experimental aspects. However, more real-world experiments will be included in a later version. The code for these experiments will also be made publicly available. Pdf: /pdf/4e9c3f9133c74d9f2f4130b455db8a731cdb6fcb.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
An Expectation-Maximization Algorithm for Training Clean Diffusion Models from Corrupted Observations
Accept (poster)
Summary: The authors present a new method for training diffusion models from corrupted data. The presented method is an EM algorithm that in turn (i) uses the current model to sample clean images given the noisy input and then (ii) uses these clean samples to train the diffusion model. The authors show results for data corrupted by noise, blurring and missing pixels. Strengths: * This is a conceptual paper that does not focus on empirically optimised architecture details, but instead shows the applicability of a simple, elegant new idea. This is a plus. * I can think of many applications, e.g. in scientific imaging (microscopy or astronomy), that suffer heavily from noise and degradation and for which clean training data is usually not available or very limited. So, I believe such methods can have a substantial practical impact. * The paper is well structured, with each building block (score-based diffusion models, posterior sampling, EM algorithm) being carefully introduced. Weaknesses: * Most importantly, I think the evaluation is not as convincing as it could be. All qualitative results seem to have artefacts to a degree. This also seems to be true for the baselines. E.g. the ambient diffusion results in Figure 3 suffer from some form of pixel artefacts. This is not the case when looking at the ambient diffusion paper (e.g. Figure 7), where no such artefacts are visible. Granted, this is a different dataset. However, I wonder if there is a problem with the training. * I think the fact that the method requires some (albeit little) clean data as a first training step is a weakness of the method. * My understanding is that the current system assumes Gaussian noise as part of the forward model (line 149). This can be limiting. * There are some typos and language problems. Technical Quality: 3 Clarity: 3 Questions for Authors: At the moment the degradation model is known and seems to have to involve Gaussian noise (line 149).
How does this work for the blurring operation, which seems to be deterministic? Or is the amount of added Gaussian noise just not visible? How does it work for the in-painting task, which also does not seem to involve Gaussian noise? Typos and language: Line 103-104 "we defines" Table 4: "The results show that EM-Score is insensitivity to initializations" Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Do the authors think that it could be possible to include other degradation models, such as a more realistic combination of Gaussian noise and Poisson shot noise? Would it potentially be possible to learn the parameters of a degradation model, e.g. the blur kernel or the parameters of the noise distribution? A learnable degradation model would drastically increase the applicability in real-life imaging applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; We thank the reviewer for the constructive suggestions. We are happy that the reviewer recognizes our idea as "simple" and "elegant", that our method "can have a substantial practical impact", and that our paper "is well structured". We have provided explanations and clarifications regarding all the concerns, as shown below: &nbsp; **1. Qualitative results with artifacts** &emsp; Thanks for your careful review! In Figure 3, we used the officially released Ambient Diffusion checkpoint for image posterior sampling rather than training the SOTA models ourselves. We double-checked our code and found no issues. &emsp; However, training clean diffusion models from corrupted images is extremely challenging, and the missing information can cause qualitative results on some datasets to exhibit artifacts. The good performance in Figure 7 of the Ambient Diffusion paper is on AFHQ, whereas our results are reported on CIFAR10. CIFAR10 images contain more high-frequency features than AFHQ images, making the inverse problems more challenging and leading to more reconstruction artifacts. The original Ambient Diffusion paper also noted similar issues, stating that learning clean diffusion models on CIFAR10 is harder compared to CelebA-HQ and AFHQ (Sec. 5.1 in their paper): > “for CelebA-HQ and AFHQ, we manage to maintain a decent FID score even with 90% of the pixels deleted. For CIFAR-10, the performance degrades faster, potentially because of the lower resolution of the training images”. &nbsp; **2. Dependence on clean data for initialization** &emsp; Our method relies on a small amount of clean data as a starting point, which we acknowledge as a limitation, as discussed in the conclusion. However, we have minimized this requirement to as few as 10 clean images (Fig. 5(a)).
Additionally, we have explored alternative initialization strategies, including using out-of-distribution (OOD) clean images or classical pre-processing methods to initialize EM iterations (see Fig.5 and Tbl. 4 in the main paper). Our preliminary findings suggest that our EM approach can be initialized using low-frequency image statistics (e.g., smoothness) learned from OOD or pre-processed images. Investigating the elimination of the clean data requirement is an important research direction that we leave for future work. We will discuss this further in the revision. &nbsp; **3. Gaussian noise in forward models** &emsp; All examples in the paper involve Gaussian noise in their degradation models, including inpainting and deblurring, as practical imaging problems always include measurement noise. However, the noise added to masked and blurry images is minimal (std=0.0001 and 0.02, respectively) and may not be visible. We also support different noise models, as discussed below. &nbsp; **4. Extending to other types of measurement noise** &emsp; Our method can be extended to more realistic noise models. As discussed in the original DPS paper [1], it can handle various noise statistics, such as Gaussian and Poisson. Our EM framework has flexibility in replacing our E-step with DPS under new noise assumptions or even more advanced posterior sampling algorithms such as plug-and-play Monte Carlo (PMC)[2]. &nbsp; **5. Learning parameters of degradation models** &emsp; Thank you for suggesting this important topic! Learning degradation models complements our method, which focuses on understanding the relationship between observations and underlying images rather than just image priors. This challenge can also be addressed using the EM framework, where the M-step iteratively estimates the degradation model’s parameters by incorporating the gradient of the data fidelity term with respect to these parameters. 
We will discuss it in the paper and continue to explore this exciting direction in the future. &nbsp; We thank the reviewer once again for the insightful suggestions. We hope our explanations have clarified all the concerns. We are happy to address any further questions or discussions the reviewer may have. Thank you! &nbsp; **Reference**: [1] Hyungjin Chung, Jeongsol Kim, Michael T. Mccann, Marc L. Klasky, Jong Chul Ye. Diffusion Posterior Sampling for General Noisy Inverse Problems, ICLR 2023. [2] Yu Sun, Zihui Wu, Yifan Chen, Berthy Feng, and Katherine L. Bouman. Provable Probabilistic Imaging using Score-based Generative Priors, arXiv 2023. --- Rebuttal 2: Title: Thank you for your reply. Comment: I don't have any additional questions. --- Rebuttal Comment 2.1: Comment: Thank you so much for your response! We appreciate your time and consideration.
Summary: This paper proposes an expectation-maximization (EM) approach to train diffusion models from corrupted observations. Extensive experiments show the effectiveness of the proposed method. Strengths: 1. The experiments are extensive. 2. The proposed approach addresses a significant challenge in the field, where large clean datasets are often unavailable. Weaknesses: 1. The source code is not provided yet, but the authors promised to make it public in the future. Technical Quality: 4 Clarity: 3 Questions for Authors: While you mention that only a small number of clean images are needed to initialize the EM iterations, does the number of clean images affect the final model performance in applications? Is it possible to completely eliminate the dependence on clean data? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors presented the limitations and proposed future directions to address them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; We thank the reviewer for the insightful suggestions. We are happy that the reviewer recognizes our approach "addresses a significant challenge in the field", and our experiments are "extensive" and "show the effectiveness of the proposed method". We will release all the code and the checkpoint to the community to stimulate future research. Below we address each question of the reviewer. &nbsp; **1. Source code** &emsp; We are happy to provide a private repository of our code to address your concerns. Following NeurIPS author guidelines, > If you were asked by the reviewers to provide code, please send an anonymized link to the AC in a separate comment, make sure the code itself and all related files and file names are also completely anonymized. &emsp; we have sent a link to the AC for anonymity verification and will share it with you once approved. &nbsp; **2. Impact of number of clean images on model performance** &emsp; Preliminary experiments in Section 5.4 and Figure 5(a) show that using 10 to 500 clean images results in similar posterior sampling outcomes after the first EM stage, indicating that the diffusion model successfully learns important image statistics with limited data. Since clean images initialize the diffusion model by providing low-frequency statistics, the model performance will not be significantly affected by their number. &nbsp; **3. Completely eliminating the dependence on clean data** &emsp; This is indeed an important research direction. In this paper, we have explored strategies to eliminate the dependence on clean data, such as initializing EM iterations with classical pre-processing methods, as detailed in Appendix B. Additionally, leveraging a generative foundation model like Stable Diffusion as the initial model could be a potential alternative solution, as suggested by other reviewers (Consistent Diffusion Meets Tweedie, ICML 2024). We will continue to explore this direction in future work. 
&nbsp; We thank the reviewer once again for the insightful suggestions. We greatly appreciate the recognition and are more than willing to address any further questions or discussions the reviewer may have. Thank you! --- Rebuttal Comment 1.1: Title: My concern is resolved Comment: Thanks for your reply. My concern is resolved. --- Reply to Comment 1.1.1: Comment: We are very happy that the concerns are addressed! Thank you once again for your valuable feedback!
Summary: Authors propose using the EM algorithm to train a diffusion model on corrupted data. To initialize the process, the proposed method requires access to a "limited number of clean samples." The authors claim the method allows convergence to the true data distribution. Strengths: **Originality.** The idea of using EM to train a diffusion model from corrupted data is specifically original; however, training generative models without access to clean data is a well-explored problem, even before the diffusion model era. **Quality and clarity.** The paper is easy to follow, and the problem is well-motivated. **Significance.** I do not find the proposed method in its current format significant, as it assumes access to a "limited dataset of clean images" without quantifying how small this set can be. Additionally, the paper claims convergence to the true data distribution without providing conditions on the forward operator and noise process, which is an unsubstantiated claim given the limited set of inverse problems explored. Weaknesses: * Claiming convergence to the true data distribution is a grand claim, and simply providing examples from relatively simple inverse problems does not substantiate this claim. Therefore, due to unsubstantiated claims and limited quantitative discussion on the required conditions for the forward operator and the number of clean data samples needed, I find this paper not truly usable in practice. Technical Quality: 2 Clarity: 3 Questions for Authors: * How do authors quantify "minimal clean images"? The authors mention that sparse clean data provides a good initialization of the DM's manifold. Why is this the case? What does "good" mean precisely in this context? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: * I disagree with the authors that needing access to a minimal number of clean images is a minor limitation. 
AmbientDiffusion and some other prior work discussed do not assume access to such a dataset, and hence the comparisons might not be entirely fair. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; We thank the reviewer for the thoughtful comments. We believe there may have been some misunderstandings regarding our work, which we will clarify. Below we carefully address all the questions from the reviewer. &nbsp; **1. Quantifying the number of required clean images** &emsp; We totally agree that quantifying the amount of required clean images is important and we have carefully investigated this problem in Section 5.4, Figure 5(a), and Table 4 of the main paper. Our experiments show that diffusion models trained on as few as 10 clean images provide adequate initialization. Moreover, to further reduce the need for clean data, we also explored using out-of-distribution (OOD) or pre-processed images for initialization, which achieved comparable results. We will highlight these findings more clearly in the revision. &nbsp; **2. Convergence claims & conditions for the forward operator** &emsp; In our paper, convergence refers to reaching a local minimum of the optimization problem, as established by the properties of the classical EM algorithm and supported by previous theoretical studies [1]. We discuss this briefly in Section 3.3, lines 124-126, of the original paper. The forward operator and noise process do not affect this type of convergence but do influence the convergence speed by determining the amount of information in the corrupted observations. We will clarify the sentences and definitions about convergence in the final version. &nbsp; **3. Why sparse clean data provide “good” initialization** &emsp; This question has been partially addressed in Section 4.2, lines 160-165, of the original paper. Sparse clean data helps the initial diffusion model learn common structures in natural images, such as continuity, smoothness, and object profiles. In this context, “good” means capturing important low-frequency statistics of natural images. 
From the EM algorithm perspective, “good” initialization means defining a local minimum solution close to the global minimum. &nbsp; **4. Limitation of requiring clean images** &emsp; Considering the minimal number of clean images required for initialization and the alternative strategies we've validated for our EM framework (as explained in answer (1)), we respectfully disagree that this is a major limitation of our algorithm. Additionally, in many scientific applications, accessing some clean images is reasonable, as the lack of clean images is often due to cost rather than physical restrictions. For instance, in fluorescent microscopy, while high-SNR images are limited due to the need for long exposure times and the risk of phototoxicity, it is still feasible to obtain some high-SNR images to support the training of clean diffusion models from extensive low-SNR microscopy data. &nbsp; We thank the reviewer once again for the valuable comments. We hope our explanations have clarified all the questions. We are happy to address any further concerns or discussions the reviewer may have. We kindly hope the reviewer could consider raising the score if satisfied with our responses. Thank you! &nbsp; **Reference**: [1] Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, 1977 --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I will maintain my score because of one main reason: the claims made in the paper are too general and misleading, e.g., "This iterative process leads the learned diffusion model to gradually converge to the true clean data distribution" (from abstract). In the rebuttal, the authors state that "In our paper, convergence refers to reaching a local minimum of the optimization problem." I do not get that impression from the paper.
In addition, the authors, in response to my concern regarding conditions on the inverse problem's forward operator, state that "The forward operator and noise process do not affect this type of convergence but do influence the convergence speed by determining the amount of information in the corrupted observations." I strongly disagree: the forward operator could be an all-zeros matrix. You cannot recover the true data distribution using observations like that. --- Rebuttal 2: Comment: &nbsp; We thank the reviewer for the additional comments. We are pleased that many of your concerns have been addressed. Below, we provide additional clarification regarding the convergence claim and the conditions of the forward operator. &nbsp; **1. Convergence Claim** &emsp; As explained in our rebuttal, the convergence we refer to is the local convergence inherited from the EM framework. We explicitly mentioned this in our original paper (lines 124-126). This claim is further supported by our extensive experimental results across various imaging tasks (see Appendix C). We promise to rephrase our abstract and add further clarifications on this point in the revised manuscript to avoid any confusion. &nbsp; **2. Forward Operator** &emsp; In our previous responses, we misunderstood the reviewer's comments as asking us to discuss the effects of various corruption types (e.g., inpainting, denoising, deblurring) on the method’s effectiveness. We fully agree that there are certain cases where recovering a clean distribution from corrupted data is information-theoretically impossible, such as when the forward operator matrix is entirely composed of zeros. We will include additional discussions on these scenarios in the revised manuscript. &nbsp; We hope our explanations address all the concerns of the reviewer. We would greatly appreciate it if the reviewer could reconsider the evaluation of our paper. Thank you!
Summary: The authors introduce a new framework for training diffusion models from corrupted data. Prior work on this research topic is based on the Ambient Diffusion framework or Stein's Unbiased Risk Estimate (SURE) idea. The authors propose an alternative methodology based on the Expectation-Maximization algorithm. The algorithm is tested on standard datasets and leads to improved performance over the considered baselines. Strengths: 1. The research topic of training diffusion models from corrupted data is relevant and timely. 2. The authors propose a fresh idea for this interesting research problem. The proposed idea has several elegant properties: i) it is simple, ii) it is relatively easy to implement, and iii) it is oblivious to the type of corruption. 3. The fact that the proposed algorithm is agnostic to the type of corruption is important: as the authors correctly mention, prior work, such as Ambient Diffusion, is designed (or at least tested) only for inpainting. 4. The authors manage to outperform the prior state-of-the-art, Ambient Diffusion, in the same setting (CIFAR-10 with 60% corruption). 5. The authors have a nice way of leveraging clean samples. Prior work assumes that we don't have any access to clean samples. This paper naturally integrates the available clean data by using them to initialize the EM algorithm. Overall, I believe this submission offers a promising alternative to Ambient Diffusion for learning diffusion models with corrupted data. Weaknesses: 1. The comparison to the baselines is somewhat limited/unfair. As the authors mention, Ambient Diffusion has been designed for inpainting. The authors use Ambient Diffusion for denoising and deblurring. It is not clear how they do this, since the original algorithm cannot be trivially extended to these settings. Even for the case of inpainting, the authors could report results for other levels of corruption as well (apart from $p=0.6$) and for other datasets apart from CIFAR-10.
Regarding the comparisons with Ambient Diffusion, the only valid data point right now is for $p=0.6$. Providing a more comprehensive analysis would help the readers understand in what regime the proposed method works best. 2. Specifically for the case of denoising, the authors might want to provide comparisons with the more recent work Consistent Diffusion Meets Tweedie, which extends the Ambient Diffusion idea to the setting of noisy samples. 3. For some scientific applications, there is no available clean data at all. The proposed algorithm needs some clean data to initialize the EM, and this could be limiting for such applications. 4. One issue with the proposed method is that it relies on solving inverse problems with diffusion models. This problem is computationally intractable, and for that reason there has been a large body of research papers proposing approximations to the conditional score. The authors rely on such an approximation (DPS), which is known not to truly offer samples from the posterior unless the data distribution is very simple (e.g. Gaussian). 5. One issue with this method seems to be the training speed. Even to converge to a local optimum, EM typically requires many steps. Each step here is a new training of a diffusion model. Prior works on learning diffusion models from corrupted data only required a single training. 6. For some corruption types, it is impossible to ever reconstruct the true distribution from noisy observations. The authors do not discuss how the proposed algorithm would work in such cases. 7. Minor: There is a block that needs to be deleted from the NeurIPS checklist section. Technical Quality: 4 Clarity: 2 Questions for Authors: See the Weaknesses Section above. I am willing to further increase my score if my concerns are properly addressed. Confidence: 5 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: The authors have adequately addressed the limitations of their method.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; We thank the reviewer for the invaluable feedback. As noted by the reviewer, our idea is "fresh" and "elegant," our algorithm is "agnostic to the type of corruption," and our results "outperform the prior state-of-the-art." Below we provide more explanations and additional results to address each concern from the reviewer. &nbsp; **1. Comparison to baselines** &emsp; Ambient Diffusion pioneered learning clean diffusion models from masked images, achieving the current SOTA performance. While this insightful idea was originally designed for the inpainting task, we feel it is more comprehensive to test it on denoising and deblurring tasks, using its original implementation with a naïve further-corruption technique (randomly masking pixels for all tasks). We acknowledge that the original algorithm cannot be trivially extended to different settings. To avoid misinterpretation, we are open to labeling Ambient Diffusion's denoising and deblurring results as "not applicable," as these tasks fall outside its original design scope. In addition, we also provided SURE-Score in our paper as a versatile baseline capable of handling various corruptions. &emsp; We agree that validation on different levels of corruption and other datasets would strengthen the understanding of our method. As suggested by the reviewer, we conducted additional experiments comparing the EM approach with Ambient Diffusion across different corruption levels (masking ratios of 0.4, 0.6, 0.8). These new results, presented in the table below and the attached PDF, demonstrate our method’s robustness and superior performance across diverse conditions. We will incorporate these findings as well as more experiments on new datasets (e.g., CelebA) in the revision. | Masking Ratio | 0.4 | 0.6 | 0.8 | | --------------------------------- | ----- | ----- | ----- | | AmbientDiffusion, FID$\downarrow$ | 18.85 | 28.88 | 46.27 | | Ours, FID$\downarrow$ | 13.75 | 21.08 | 45.24 | &nbsp; **2.
Recent work "Consistent Diffusion Meets Tweedie"** &emsp; We thank the reviewer for pointing out this relevant ICML 2024 work, which cleverly finetunes Stable Diffusion (SD) to leverage the pre-trained knowledge in denoising tasks. We will add it in related work. However, as our method focuses on training diffusion models from scratch using corrupted observations, we feel the two works belong to different categories. A fairer comparison with the ICML work would involve applying an EM framework to finetune SD with noisy observations, which presents intriguing future research. We will explore it in future work. &nbsp; **3. The limitation of clean data initialization** &emsp; Our method relies on a small amount of clean data as a starting point, which we do acknowledge as a limitation, as discussed in the conclusion. However, we have minimized the requirement of the clean data to as few as 10 or 50 clean images (Fig. 5(a)), which we believe is a reasonable assumption for many scientific applications. For instance, in fluorescent microscopy imaging, high SNR images are limited due to long exposure times and potential phototoxicity rather than physical restrictions. Therefore, acquiring a small number of high-SNR images to aid training is feasible. &emsp; On top of that, we have explored alternative initialization strategies, including using out-of-distribution (OOD) clean images or classical pre-processing methods to initialize EM iterations (see Fig. 5 and Tbl. 4 in the paper). Our preliminary findings suggest that our EM approach can be initialized using low-frequency image statistics (e.g., smoothness) learned from OOD or pre-processed images. Investigating ways to eliminate the need for clean data would be an important future research direction. We will discuss more in the revision. 
&nbsp; **4. DPS does not offer true posterior samples** &emsp; We acknowledge that DPS-based methods may not always accurately sample the true posterior due to their simplified assumptions. Our key contribution is the EM approach itself, which provides flexibility in choosing the posterior sampling method in its E-step. For example, it can replace DPS with more advanced posterior sampling techniques with stronger theoretical guarantees, such as plug-and-play Monte Carlo (PMC) [1], which ensures non-asymptotic stationarity. We will discuss it in the revision. &nbsp; **5. Training efficiency** &emsp; Although our method involves multiple training steps, as discussed in Sec. 4.3, we avoid training diffusion models from scratch at each M-step. Instead, we fine-tune the model in most iterations, only training from scratch in the final 1-3 steps to prevent memorizing poor samples from the initial stages. This ensures rapid convergence in each iteration and significantly reduces training time. In practice, our method has a training time similar to Ambient Diffusion (ours is 2 days with 4 NVIDIA A800 GPUs). &nbsp; **6. Discussion on corruption types that are impossible to recover** &emsp; We agree that for extremely corrupted images (e.g., >99% pixels masked), reconstructing the true distribution becomes challenging or even impossible due to the loss of spatial relationships among pixels. This is a common issue across all baselines. A potential solution could involve incorporating more prior knowledge, such as fine-tuning SD instead of training from scratch, similar to the ICML 2024 work. We will discuss it in the revision. &nbsp; **7. Redundancy in the checklist section** &emsp; Nice catch! We will correct it. &nbsp; We thank the reviewer again for the valuable insights and suggestions. We hope our explanations have clarified all the concerns. We are happy to address any further questions or discussions the reviewer may have. Thank you!
&nbsp; **Reference**: [1] Yu Sun, Zihui Wu, Yifan Chen, Berthy Feng, and Katherine L. Bouman. Provable Probabilistic Imaging using Score-based Generative Priors, arXiv 2023. --- Rebuttal Comment 1.1: Title: Rebuttal acknowledgement Comment: I would like to thank the authors for their time and their efforts during the rebuttal. I believe that the experiments with Ambient Diffusion for tasks beyond inpainting should be completely removed. Extending this method to corruptions beyond random inpainting should be done in a way such that Theorem 4.1 of the paper is satisfied. The current mechanism for further corruption is not valid and hence the results are not valid. I would like to ask the authors to remove the results for such tasks to avoid confusing the readers. I would like to thank the authors for providing additional comparisons to Ambient Diffusion. I would urge the authors to provide more points and more datasets. This is crucial: different proposed methods should be evaluated in the same setting so that future work can be compared on a common benchmark. From the current results, it looks like the gap between the proposed method and Ambient Diffusion is closing for higher corruption. Results for higher corruption should be provided as well. In my opinion, it is absolutely fine if the proposed method even underperforms a baseline in some settings. The contribution is significant already. Regarding the ICML work, it provides a general algorithm for training diffusion models from noisy observations that can be applied to training from scratch as well. In any case, this is concurrent work and I agree with the authors that direct comparisons are not needed. Regarding discussion for other corruption types: this is not what I meant. I was referring to cases where it is information-theoretically impossible to learn the underlying distribution from measurements, e.g.
you can't learn the distribution of human faces from a set of training images where the eyes are always missing. Overall, I believe this paper has some really nice aspects and hence I will keep my positive score of 6. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the additional comments. We will incorporate all your invaluable suggestions in our revision. Specifically, we will: 1) Remove AmbientDiffusion’s results on deblurring and denoising. 2) Include experiments on more corruption conditions and datasets. 3) Add discussion on the cases where learning clean diffusion from corrupted data is theoretically impossible. Thank you once again for your valuable feedback!
Rebuttal 1: Rebuttal: &nbsp; We thank all the reviewers for their professional and constructive feedback. We are encouraged by their recognition of our paper's technical importance and novelty (qpvz, Kirh, bEKL), broad applicability (qpvz, bEKL), impressive performance in extensive experiments (qpvz, Kirh), and high-quality writing (HYFc, bEKL). We have summarized some of the major concerns raised by the reviewers below. Point-to-point responses are also included as a reply to each reviewer. Additionally, we have added a PDF file for further experimental results. &nbsp; **1: Requirement for clean data to initialize the EM training** &emsp; The introduction of clean images aids the diffusion model in converging to a local minimum close to the true image distribution. We clarify that as few as 10 images may suffice, and alternative initialization strategies also exist, as discussed in Section 5.4 and Appendix B of our original submission. We acknowledge that other strategies suggested by reviewers, such as initializing with generative foundation models (e.g., Stable Diffusion [1]), are promising and will discuss them in the revision. We also discuss scientific imaging applications (e.g., fluorescent microscopy) to explain that access to some clean images is reasonable, as their scarcity is often due to cost rather than physical restrictions. &nbsp; **2: More comparisons to baseline methods like Ambient Diffusion** &emsp; While our original submission included extensive experiments comparing our method with baselines, as suggested by the reviewers, we added additional experiments with different settings (e.g., various corruption levels) and further verification of our implementation to strengthen our conclusions. We have included these new results and the corresponding conclusions in the PDF as well as the point-by-point responses.
&nbsp; **3: The algorithm’s extension to different corruption types, noise models, and unknown degradations** &emsp; The EM algorithm is a versatile framework that can be extended to various imaging inverse problems and can also learn unknown forward models or noise statistics. We will include more discussions on these aspects in our revision. &nbsp; Please let us know if these clarifications and additional results address your concerns. We are happy to discuss any remaining points during the discussion phase. &nbsp; &nbsp; **Reference**: [1] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. High-Resolution Image Synthesis with Latent Diffusion Models, CVPR 2022. Pdf: /pdf/0ffc51761d8126973c9c4bb05f92605166e9db13.pdf
NeurIPS_2024_submissions_huggingface
2024
Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity
Accept (poster)
Summary: The paper considers the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function it queries. The authors provide the first tight characterization of the rate of the minimax simple regret by developing matching upper and lower bounds. They propose an algorithm that features a combination of a bootstrapping stage and a mirror-descent stage. Strengths: The authors consider a problem formulation where the gradient of the function or higher-order derivatives are not available. This problem formulation is of extreme interest in the field of Machine Learning. However, the work has many weaknesses (see below). Weaknesses: 1. Almost no mention is made of the work's motivation. This is an important part that **cannot be missed**... 2. Also, this paper gives the impression that **it was produced clearly not in this year**, since the main references are **from 2021** and earlier. But since 2021, research in the area of zero-order optimization has attracted a lot of interest in the community and is highly advanced compared to what the authors cited as related works. For example, in addition to a number of cited papers studying the zero-order optimization problem with the assumption of increased smoothness of the function, there is already a more recent paper that has been missed: _Akhavan et al. (2023)_ [1], which proposes an improved analysis of gradient approximation bias estimation as well as second-moment estimation. Similarly, there is a weakly described section presenting different gradient approximations, where the main focus fell on the smoothing vector (Gaussian vector or randomized vector). But there are already a number of papers, such as _Gasnikov et al. (2024)_ [2], which have shown that the randomized approximation clearly performs better in practical experiments than the Gaussian approximation.
It seems better to give an overview of different gradient approximations, including $l_1$ randomization, in this section. 3. Since I have already mentioned the topic of practical experiments, it is important to note that despite the fact that the authors provide mainly theoretical work, a paper that is intended for presentation at the NeurIPS conference should **show the effectiveness of the proposed results** in real life. This paper lacks any experiments, which is a major weakness of the paper. 4. The structure of the article is poorly chosen... It seems that the sections "Main contribution", "Discussion" and "Conclusion" would **have improved the presentation** of the article's results. 5. Regarding the results themselves, essentially **only one result is presented**, which is a minor contribution. It would be interesting to consider when the function is not only twice differentiable but also has a higher order of smoothness, as well as other settings of the problem: convex, non-convex, maybe the Polyak–Lojasiewicz condition, etc. [1] Akhavan A. et al. (2023). Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm //arXiv preprint arXiv:2306.02159. [2] Gasnikov, A. et al. (2024). Highly smooth zeroth-order methods for solving optimization problems under the PL condition. Computational Mathematics and Mathematical Physics. Technical Quality: 2 Clarity: 1 Questions for Authors: I have a few questions and suggestions: - It seems that Theorem 3.1 can be improved in terms of dependence on the problem dimension $d$ using tricks from the work of _Akhavan et al. (2023)_ [1] - Can the authors explain what the final result would look like if $w_t$ does not necessarily have zero mean, but is independent of vector $e$ (randomization on the sphere)? Also, if we consider deterministic noise $|\delta(x)| \leq \Delta$ as noise. [1] Akhavan A. et al. (2023).
Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm //arXiv preprint arXiv:2306.02159. Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
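For readers unfamiliar with the smoothing-vector choices debated in the review above, here is a minimal sketch of the classic two-point zeroth-order gradient estimator under Gaussian versus unit-sphere randomization; the quadratic objective, smoothing radius, and sample counts are our own illustrative assumptions, not taken from the paper or the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, n = 5, 1e-3, 20000

def f(x):
    # Illustrative smooth, strongly convex objective (not from the paper).
    return 0.5 * x @ x + np.sum(x)

x0 = rng.normal(size=d)
true_grad = x0 + 1.0  # gradient of f at x0

def two_point(e, scale):
    # Two-point estimator: scale/(2r) * (f(x0 + r*e) - f(x0 - r*e)) * e
    return scale / (2 * r) * (f(x0 + r * e) - f(x0 - r * e)) * e

# Gaussian smoothing: direction e ~ N(0, I), scale = 1.
g_gauss = np.mean([two_point(rng.normal(size=d), 1.0) for _ in range(n)], axis=0)

# Sphere randomization: e uniform on the unit sphere, scale = d
# (since E[e e^T] = I/d, the factor d makes the estimator unbiased
# for the gradient of the smoothed function).
def sphere():
    u = rng.normal(size=d)
    return u / np.linalg.norm(u)

g_sphere = np.mean([two_point(sphere(), float(d)) for _ in range(n)], axis=0)

print(float(np.abs(g_gauss - true_grad).max()),
      float(np.abs(g_sphere - true_grad).max()))
```

Both averaged estimates land close to the true gradient here; the debate in the literature concerns their bias and variance constants under different smoothness and noise assumptions, not unbiasedness on a clean quadratic like this one.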
Rebuttal 1: Rebuttal: Thank you for your feedback. Please find our responses below. > Almost no mention is made of work motivation. - The importance of stochastic optimization is clearly mentioned in our introduction. - The motivation for this work stems from a lack of understanding of the sample complexity under convexity and higher-order smoothness. As stated in Lines 39-46 and summarized in Table 1, the problem of finding tighter complexity bounds for higher-order smooth functions has been widely studied, but the existing sample complexity bounds all have an apparent gap in the regime of $k \geq 2$. > Also, this paper gives the impression that it was produced clearly not in this year, since the main references are from 2021 and earlier. - We have cited papers from beyond 2021, such as Lattimore & György (2023) and Yu et al. (2023). - Thank you for your suggestion on references. However, the references you mentioned, Akhavan et al. (2023) and Gasnikov et al. (2024), address different research questions from those in our paper. They focus on zeroth-order optimization under assumptions such as the PL condition and higher-order Hölder smoothness. In contrast, we assume strong convexity and a Lipschitz Hessian. More specifically, similar to other related works, Akhavan et al. (2023) adopt the additional assumption of the Lipschitz gradient while we do not impose this condition. Their $O(r^2)$ bound on the bias of the $L_2$ gradient estimator for $\beta=3$ is implied and strengthened by our bound in Theorem 4.1 as we discussed in Appendix A. On the other hand, Gasnikov et al. (2024) focus on adversarial bandit problems with (almost) adversarial noises, whereas we consider stochastic bandit problems with i.i.d. random noises. > … a paper that is intended for presentation at the NeurIPS conference should show the effectiveness of the proposed results in real life. This paper lacks any experiments, which is a major weakness of the paper. 
- Our work addresses the theoretical question of bandit optimization under strong convexity and Lipschitz Hessian. We provide matching lower and upper bounds for simple regret, thereby fully resolving the complexity of this fundamental problem. Given the theoretical nature of our contributions, experimental validation is outside the scope of this study. > It seems that Theorem 3.1 can be improved in terms of dependence on the problem dimension d using tricks from the work of Akhavan et al. (2023) - Under the current problem setting, the tricks from Akhavan et al. (2023) cannot improve the dependence on $d$ in Theorem 3.1. This is because our lower bound in Theorem 3.2 already matches Theorem 3.1 in its dependence on $d$. Furthermore, the dependence on $d$ in Akhavan et al. (2023) is suboptimal compared to our bound. Specifically, their rate is $d^{2 - \frac{2}{\beta}}$ (where $\beta \geq 1$ is the order of Hölder smoothness), while our rate is linear in $d$. Therefore, we believe that applying techniques from Akhavan et al. (2023) is unlikely to improve the dependence on $d$. > Can the authors explain what the final result would look like if 𝑤𝑡 does not necessarily have zero mean, but is independent of vector 𝑒 (randomization on the sphere)? Also, if we consider deterministic noise |𝛿(𝑥)|≤Δ as noise. - Thank you for pointing out this interesting future direction. In our work, we consider the stochastic bandit setting where the noises are i.i.d. random variables. If $w_t$’s are dependent on $e$ or are deterministic, the problem would fall into the category of adversarial bandits, which requires fundamentally different approaches and is outside the scope of our paper. --- Rebuttal Comment 1.1: Comment: Dear Authors, I thank you for your replies. However, the authors have confirmed my concerns (ignoring some of the weaknesses in the rebuttals)... 
I will discuss the readiness of the article for presentation at the conference with other Reviewers and Area Chair before making a final decision. At the moment, I maintain a score of 2. --- Reply to Comment 1.1.1: Comment: Dear reviewer cqSV, Thank you for your recent update. We noticed that you mentioned, “the authors have confirmed my concerns.” We would appreciate your clarification on how our response confirmed your concern that "Theorem 3.1 can be improved in terms of dependence on the problem dimension using ..." As we have explained in our rebuttal, "our lower bound in Theorem 3.2 already matches Theorem 3.1 in terms of dependence on the problem dimension," making such improvements mathematically infeasible. Could you provide further details on how you believe the dependence on d could be improved? Best, Authors of Submission4474
Summary: The paper studies zeroth-order stochastic optimization (the learner has access to noisy function evaluations only) assuming the objective is $M$-strongly convex and has a $\rho$-Lipschitz (in Frobenius norm) Hessian. Matching upper and lower bounds are presented, which establish a tight result of suboptimality $\Theta(d T^{-2/3}M^{-1})$ (w.h.p.) after $T$ function evaluations. Strengths: * A tight result is achieved for the setting studied. * On the technical side, the gradient evaluation procedure in the final stage of the algorithm uses samples from the ellipsoid induced by the Hessian approximation rather than opting for isotropic sampling. Further, a novel approach for analyzing the bias and variance of this estimation procedure is presented that yields sharp bounds. This technique has not been used by prior works, and seems crucial for the minimax optimal result. * This result improves over the most closely related prior state of the art, Novitskii & Gasnikov (2021), who require an additional Lipschitz gradient assumption, by a factor of $d^{1/3}$ (see discussion in Appendix A). Weaknesses: Nothing in particular. Technical Quality: 3 Clarity: 3 Questions for Authors: - Lines 125-126: 1-subgaussianity already implies variance $\leq 1$. - Akhavan et al. (2020) consider the Lipschitz gradient setting and do not require a Lipschitz Hessian. Hence it seems their results and yours are incomparable (correct?). Given this I did not understand lines 315-317: "On the other hand ... well." Do you include Akhavan et al. in this statement? It doesn't seem true that a direct application of your algorithm order-wise improves the result of Akhavan et al., because your result does not apply in their setting. - Since we consider a twice differentiable objective over a bounded domain, it follows that the gradient is in fact Lipschitz. With this in mind, are the assumptions of Novitskii & Gasnikov (2021) indeed stronger than those made in this work? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the constructive feedback from the reviewer. Below are the point-by-point responses to the comments. >Implication of bounded variance from 1-subgaussianity We would like to clarify that our intention in mentioning both assumptions was to use the sub-Gaussian assumption to simplify the algorithmic design and meanwhile explicitly state the bounded variance condition for readers who may not be familiar with the properties of sub-Gaussian variables. However, we want to emphasize that, of the two assumptions, the bounded variance condition is more fundamental to our results. The sub-Gaussian assumption can be removed by employing more sophisticated mean-estimation methods, such as median-of-means instead of naive averaging. In the revised manuscript, we will clarify that the use of the stronger sub-Gaussian assumption is for simplicity. We will also refer to references related to these alternative mean estimators, e.g., [1]. [1] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, 1983. > Applicability of our result to Akhavan et al. (2020) and the comparison in lines 315-317 As mentioned in Appendix A of our paper, although Akhavan et al. (2020) did not directly assume a Lipschitz Hessian, our result can be related to their $\beta=3$ smoothness condition (which we refer to as the $k=3$ smoothness condition in our paper). The bound we discuss in lines 315-317 corresponds to Lemma 2.3 in Akhavan et al. (2020). As a sanity check, the prior work needed to assume this higher-order smoothness condition to obtain a bound that scales with $r^2$ for the gradient estimator. This is because, with only the Lipschitz gradient condition, the best achievable bound scales with $r$, which is worse by a factor of $1/r$. > Is Lipschitz gradient over a bounded domain in Novitskii & Gasnikov (2021) indeed a stronger assumption? 
We would like to clarify that although the twice differentiability implies the finiteness of the second-order derivatives over a compact set, it does not imply that the gradient is Lipschitz with a fixed constant such as $\rho$ or $L$ even for a fixed compact set. In other words, the Hessian can be finite but unbounded. A simple example is that the function class of all quadratic polynomials (with arbitrarily large second-order derivatives) satisfies Lipschitz Hessian but not Lipschitz Gradient. Therefore, the Lipschitz gradient condition has to be explicitly assumed in prior works in addition to the higher-order smoothness conditions. Notably, even in the classical analysis of Newton's method, which assumes zero-error observations, an additional constant factor associated with the Lipschitz gradient condition is used to obtain non-trivial complexity bounds (e.g., see [2], Section 9.5.3). This demonstrates the difficulty of removing this smoothness condition. We introduced a non-trivial modification of the Newton step in the bootstrapping stage to achieve a uniform convergence guarantee of $E[f(x_B)-f^*]=O(1)$ (see Algorithm 4), which removes the need for the Lipschitz gradient condition. We consider this to be one of our main technical contributions. [2] Stephen P Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
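As an aside for readers unfamiliar with it, the median-of-means estimator mentioned in this rebuttal (as the sub-Gaussian-free alternative to naive averaging) can be sketched as follows; the function name and block count are illustrative, not from the paper:

```python
import numpy as np

def median_of_means(samples, n_blocks=8):
    """Median-of-means mean estimator: split the samples into blocks,
    average within each block, and return the median of the block means.
    Unlike the naive sample average, this gives sub-Gaussian-style
    concentration under only a bounded-variance assumption."""
    samples = np.asarray(samples, dtype=float)
    blocks = np.array_split(samples, n_blocks)
    return float(np.median([b.mean() for b in blocks]))
```

A single heavy-tailed outlier corrupts at most one block mean, and the median over blocks discards it, which is why only a second-moment bound is needed.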
Summary: This work studies the convergence of zeroth-order stochastic optimization for a class of strongly convex, second-order smooth objective functions. The authors assume that the noisy one-point feedback oracle is available, and the additive noise is subgaussian. Both the asymptotic upper bound and the matching lower bound for the minimax simple regret are provided. Strengths: The authors propose a novel algorithm that combines gradient estimation, bootstrapping procedures, and a mirror descent procedure. This combination enables accurate estimation of gradients and Hessians under second-order smoothness, thereby achieving the optimal rate. Weaknesses: The idea of this work is interesting, but the presentation needs improvement. Some main results should be explicitly and rigorously stated as theorems, for example, eq (6). Additionally, it is unclear why the two stages in Algorithm 4 are necessary. Providing intuition and explanation for the two stages and the choice of parameters in Algorithm 4 would be beneficial. Furthermore, the proof of the upper bounds contains gaps. The result obtained in line 601 is $\lim_{T\to\infty} T^{2/3}E[\|\nabla f(x_N^B)\|^3]=0$. It is unclear how $M$ and $\rho$ enter the final result and match eq (6) with the specific choice of $\epsilon$ in line 170. More detailed steps in the proof would help clarify this connection. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Remark 4.2: I cannot understand the first sentence. Are there any established results for the case of the cubic polynomial function $f$? 2. Algorithm 4: Is the $\hat H$ returned by HessianEst invertible? 3. Proof of Theorem 3.1: 1) In line 200, "First, recall our construction ensures that," could the author provide the proof of this statement? 2) In lines 203-204, the two displays omit the dimension dependence. Will this omission impact the final dimension dependence? 4. Line 358: I cannot see how to apply Lemma B.2, also known as Proposition 7 in [Yu et al. 2023]. What is the $\mathrm{Var}_{f\sim p}[f(x)]$? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The paper proposes a new algorithm but does not provide any numerical results to validate it. There are many interesting questions that could be investigated numerically regarding this method, such as its dependence on the length of the first and second stages. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the constructive feedback from the reviewer and will state equation (6) formally as a theorem. Below are the point-by-point responses to the comments. >Necessity of the two-stage algorithm The two phases of our algorithm are designed to handle distinct challenges. The first stage of our algorithm deals with potential initial conditions with unbounded gradients and highly skewed Hessians. To that end, we need a gradient estimator whose variance does not depend on the gradient norm to ensure the uniform boundedness of $E[f(x_B)-f^*]$. Therefore, we employ the coordinate-wise estimator in BootstrappingEst for gradient estimation in the first stage. In fact, the bootstrapping stage requires a non-trivial modification of the Newton step to achieve the uniform convergence guarantee of $E[f(x_B)-f^*]=O(1)$ (see Algorithm 4) without relying on the Lipschitz gradient condition assumed in prior works. We want to emphasize that even in the classical analysis of Newton's method, which assumes zero-error observations, the additional Lipschitz gradient condition is used to obtain non-trivial complexity bounds (e.g., see [1], Section 9.5.3). This highlights the difficulty of removing this smoothness condition, and we view the removal of this assumption in our work as one of the main technical contributions. [1] Stephen P Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004. The second stage of our algorithm is needed to fine-tune the result from the first stage to achieve the optimal sample complexity. Once the algorithm reaches a point sufficiently close to the global minimum, as specified in equation (6), the hyperellipsoid estimator in GradientEst provides a better bias-variance tradeoff (by a factor of $d$) as the third term on the RHS of inequality (3) becomes dominant. Using this alternative estimator in the second stage allows us to reduce the estimation error by a factor of $d^{2/3}$. 
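For background on the gradient estimators discussed in this thread: the classical one-point sphere-sampling estimator (the textbook construction that the paper's GradientEst/BootstrappingEst refine; this sketch is not the authors' hyperellipsoid variant) looks like the following. All names are illustrative:

```python
import numpy as np

def one_point_grad_est(f, x, r, n_samples, rng):
    """Classical one-point zeroth-order gradient estimator:
    average (d / r) * f(x + r * e) * e over directions e drawn
    uniformly from the unit sphere. For smooth f the bias shrinks
    with the sampling radius r, while the variance scales with the
    magnitude of f near x -- the bias-variance tradeoff the
    rebuttal refers to. Here f maps an (n, d) array to (n,) values."""
    d = x.shape[0]
    e = rng.standard_normal((n_samples, d))
    e /= np.linalg.norm(e, axis=1, keepdims=True)   # uniform on the sphere
    vals = f(x + r * e)                             # one query per draw
    return (d / r) * (vals[:, None] * e).mean(axis=0)

# For a quadratic the odd sphere moments vanish, so the estimator is
# unbiased: with f(x) = ||x||^2 the average approaches grad f(x) = 2x.
rng = np.random.default_rng(0)
f = lambda Z: np.sum(Z * Z, axis=-1)
g = one_point_grad_est(f, np.array([1.0, 0.0]), r=0.1,
                       n_samples=1_000_000, rng=rng)
```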
> gap from line 601 to equation (6) Thanks for pointing this out. This step is due to the strong convexity condition, which implies that $\frac{||\nabla f(x)||_2^2}{2M} \geq f(x)-f^*$ for any $x\in\mathbb{R}^d$. Recall that in our main theorems we are considering the limits as $T\rightarrow +\infty$ for fixed $d$, $M$, $\rho$, and $R$ (this will also be stated in the updated version of the theorem for equation (6)). The convergence in inequality (6) is obtained by simply plugging in the definition of $\epsilon$ and the inequality above obtained from strong convexity into the limit of $T^{2/3} E[\|\nabla f(x_N^{(B)})\|_2^3]$. >The first sharpness statement in Remark 4.2? This claim states that for any fixed $\lambda_z$, $\rho$, and $d$, there exists a function $f$ in $\mathcal{F}(M,\rho,R)$ and a matrix $Z=\lambda_z I$ such that the bias of GradientEst is exactly given by inequality (2). One such example of $f$ can be constructed as a function that is locally a polynomial of degree $3$, with its cubic term given by $f(\boldsymbol{x})=\frac{\rho}{6}\sum_{k}x_k^3$, where $x_k$ are the individual coordinates of $\boldsymbol{x}$. While this construction of $f$ is elementary, the bias of GradientEst over this function is derived from integration over the hypersphere, which is standard and well-established. >Is the $\hat{H}$ returned by HessianEst invertible? Note that at the end of this subroutine, we raised all eigenvalues of $\hat{H}$ to at least $M$. Hence, the returned $\hat{H}$ is positive definite and hence invertible. >Proof of Theorem 3.1: In line 200? The condition $||x_T-x_B||_2\leq M/\rho$ is guaranteed by the clipping step in the final stage in Algorithm 4, where we project $r$ to the $L_2$ ball of radius $\frac{M}{\rho}$, and $x_T$ is created as $x_B+r$. 
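The two safeguards just described (the eigenvalue floor in HessianEst and the step clipping in the final stage) can be sketched as follows; this is a minimal illustration with made-up names, not the authors' implementation:

```python
import numpy as np

def clipped_newton_step(g_hat, H_hat, M, rho):
    """Sketch of the two safeguards described in the rebuttal:
    (1) floor the eigenvalues of the Hessian estimate at M, so the
        inverse is well defined even for a skewed estimate;
    (2) project the Newton step onto the L2 ball of radius M / rho,
        which enforces ||x_T - x_B||_2 <= M / rho."""
    w, V = np.linalg.eigh(H_hat)
    w = np.maximum(w, M)                     # eigenvalue floor at M
    r = -(V @ ((V.T @ g_hat) / w))           # Newton step with floored Hessian
    norm = np.linalg.norm(r)
    if norm > M / rho:                       # clip the step length
        r *= (M / rho) / norm
    return r

# Toy example: a nearly singular Hessian estimate and a large gradient.
r = clipped_newton_step(np.array([10.0, 0.0]),
                        np.array([[0.1, 0.0],
                                  [0.0, 4.0]]), M=1.0, rho=2.0)
```

With these numbers the small eigenvalue $0.1$ is floored to $M=1$ and the raw step of length $10$ is clipped to the ball of radius $M/\rho = 0.5$.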
>In lines 203-204 Similar to what we have clarified in the comment above for equation (6), we shall clarify that the notation $f=o(g)$ means that $\lim_{T\rightarrow +\infty} f/g=0$ for any fixed $M$, $\rho$, $R$, and $d$. This property is independent of any multiplicative factor of $d$, and it does not impact the asymptotic result for $T$ being sufficiently large. > Line 358: how to apply Lemma B.2, and what is the $\mathrm{Var}_{f\sim p}[f(x)]$? We choose $p$ to be the uniform distribution over the function class $\{f_1,f_2\}$, as stated in line 347, so that each $f_k$ has a probability of $1/2$. To apply Lemma B.2, we verify that the condition of uniform sampling error in Lemma B.2 is satisfied, which is shown in lines 361-368. The $\mathrm{Var}_{f\sim p}[f(x)]$ under this prior is simply $(\frac{f_1(x)-f_2(x)}{2})^2$, and it is upper bounded by $\pi^2 y_0^2$ as mentioned in lines 346-347. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. After thoroughly reviewing it, I will maintain my current rating.
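For completeness, two elementary calculations invoked in the rebuttal above can be written out (this is just the standard algebra, spelled out for the reader):

```latex
\paragraph{Strong convexity and Jensen.} By $M$-strong convexity,
$f(x) - f^* \le \tfrac{1}{2M}\lVert\nabla f(x)\rVert_2^2$ for all $x$,
and since $t \mapsto t^{3/2}$ is convex, Jensen's inequality gives
\[
  \mathbb{E}\bigl[\lVert\nabla f(x)\rVert_2^2\bigr]
  \le \Bigl(\mathbb{E}\bigl[\lVert\nabla f(x)\rVert_2^3\bigr]\Bigr)^{2/3},
\]
so a vanishing bound on
$T^{2/3}\,\mathbb{E}\bigl[\lVert\nabla f(x_N^{(B)})\rVert_2^3\bigr]$
controls $\mathbb{E}\bigl[f(x_N^{(B)}) - f^*\bigr]$.

\paragraph{Two-point prior.} For $p$ uniform on $\{f_1, f_2\}$,
\[
  \mathrm{Var}_{f\sim p}[f(x)]
  = \frac{f_1(x)^2 + f_2(x)^2}{2}
    - \Bigl(\frac{f_1(x)+f_2(x)}{2}\Bigr)^2
  = \Bigl(\frac{f_1(x)-f_2(x)}{2}\Bigr)^2 .
\]
```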
Summary: The paper studies the problem of zero-order optimization of a strongly convex function whose Hessian is Lipschitz continuous. The proposed algorithm exploits zero-order information from the oracle to estimate the Hessian and gradient of the function at each iteration. Using these estimates, the authors employ a second-order method to achieve convergence for the optimization error. The results are asymptotic and valid for a sufficiently large number of function evaluations. The main contribution of the paper is improving the dependency of the achieved optimization error with respect to the dimension. The derived upper bound is minimax optimal with respect to the number of function evaluations, dimension, and the strong convexity parameter. Strengths: The result of the paper regarding the upper bound is a significant and surprising contribution. In the literature, there was an optimality gap with respect to the dimension: the existing lower bound was of the order $d$, while the upper bound was of the order $d^{4/3}$. The main anticipation was that the lower bound was not tight enough. However, the result of this paper shows that $d$ is minimax optimal, which I find to be a very valuable observation. Weaknesses: The main issue I encountered while reading the paper is that the authors should provide more motivation for their algorithm. For instance, they use two different gradient estimators, GradientEst and BootstrappingEst, but it is unclear why both are necessary and why one is not sufficient. The assumption on the noise is also more restrictive compared to previous work in the literature, see e.g. [1]. The authors assume that the noise is sub-Gaussian, whereas other papers have only assumed a finite second moment. The authors did not explain why this stronger restriction on the noise is needed. The main algorithm of the paper, Algorithm 4, consists of two stages. While the first stage of the algorithm seems natural, the second stage is not well motivated. 
[1] Arya Akhavan, Massimiliano Pontil, and Alexandre Tsybakov. Exploiting higher order smoothness in derivative-free optimization and continuous bandits. Advances in Neural Information Processing Systems, 2020. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. As I mentioned earlier, could the authors explain why there is a need for two different gradient estimators? 2. Why does the noise need to be sub-Gaussian? Specifically, the results of Theorem 4.1 also hold for any noise with a finite second moment. 3. Could the authors motivate the second stage of the algorithm? The authors mentioned, "The role of the final stage is to ensure that $f(x_B) - f(x^*)$ is sufficiently small with high probability," but why do you need such a guarantee with high probability, and not just in expectation? 4. Could the authors explain why they need a high probability bound in Theorem 4.3 and why expectation bounds are not sufficient? 5. I am a bit confused about where the authors used Assumption A (3). The parameter $R$ does not appear anywhere in the algorithms or in the final bound. 6. It would be helpful if the authors provided a discussion on the number of function evaluations used overall in the main algorithm. 7. A version of Theorem 3.2 has already appeared in [1] without the dependency on $\rho$. So, the novel aspect of this lower bound is its dependency on $\rho$, which I found interesting. Please correct me if I'm wrong, but unfortunately, I cannot see the proof of this theorem in the paper. 8. I would like to know the authors' thoughts on higher-order smoothness: if we assume that the higher derivatives are uniformly bounded, do the authors believe it is possible to achieve linear dependency $d$ with respect to the dimension by estimating higher-order derivatives? 9. Could the authors specify the lower bound for $T$ for all the results in the paper to hold? [1] Arya Akhavan, Massimiliano Pontil, and Alexandre Tsybakov. 
Exploiting higher order smoothness in derivative-free optimization and continuous bandits. Advances in Neural Information Processing Systems, 2020. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the constructive feedback from the reviewer and have addressed each comment below. > Why there is a need for two different gradient estimators? The two gradient estimators are tailored to the two phases of our algorithm, which serve distinct purposes. The first stage of our algorithm deals with potential initial conditions with unbounded gradients and highly skewed Hessians. Therefore, we need a gradient estimator whose variance does not depend on the gradient norm to ensure convergence. Thus, we employ the coordinate-wise estimator in BootstrappingEst instead of GradientEst (comparing Theorem 4.1 and 4.3). On the other hand, once the algorithm reaches a point sufficiently close to the global minimum, as specified in equation (6), the hyperellipsoid estimator in GradientEst provides a better bias-variance tradeoff (by a factor of $d$) as the third term on the RHS of inequality (3) becomes dominant. Therefore, in the second stage, we choose this estimator to reduce the estimation error. > Why does the noise need to be sub-Gaussian? We would first like to emphasize that the sub-Gaussian assumption we adopted in this work is merely to simplify the algorithmic design, allowing us to focus on presenting the key ideas for achieving minimax sample complexity. Even with general (non-sub-Gaussian) noise distributions that have bounded variance, our results hold by employing more sophisticated mean-estimation methods (e.g., median-of-means instead of naive averaging). Therefore, the adoption of this assumption does not fundamentally weaken our results. However, the sub-Gaussian assumption becomes convenient in our setting compared to prior works, because we completely removed the Lipschitz gradient condition that those works assumed. Specifically, when the Lipschitz gradient condition is assumed, as shown in Akhavan et al. 
(2020), even simple gradient descent ensures that the squared norm of the updated gradient is linearly bounded by the squared norm of the previous gradient, allowing for a recursion based solely on the second moment of those gradients. Without this assumption, the squared norm of the updated gradient may depend on higher moments of the previous gradient. This dependency is not immediately removed by Theorem 4.1 as it only states characterization up to the second moment. A sub-Gaussian gradient estimator can provide guarantees for these higher moments. > Motivate the second stage of the algorithm? Why they need a high probability bound in Theorem 4.3? Please refer to the response to the second question. The necessity of a high-probability condition similar to equation (6) in each step is fundamentally required due to the absence of the Lipschitz gradient assumption. In fact, we want to highlight that, even in the classical analysis of Newton's method, which assumes zero-error observations, the additional Lipschitz gradient condition was adopted to obtain non-trivial complexity bounds (e.g., see [A], Section 9.5.3), implying the non-triviality of removing this smoothness condition even for achieving $O(1)$ expected simple regret with unbounded Hessian. We achieve this uniform convergence with a non-trivial modification of Newton’s algorithm. Therefore, we view both stages of our algorithm as main technical contributions. [A] Stephen P Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004. > where the authors used Assumption A (3)? The condition of A3 is not directly used in our algorithm. However, it is required for the regret analysis in the sense that the total number of samples needs to be greater than a function of R for the algorithm to enter the final stage, e.g., see Proposition E.1. This dependency only contributes to an additive term in the total number of samples, and hence does not change the sample complexity asymptotically. 
> discussion on the number of function evaluations. Thank you for the feedback. We will add discussions on the sample complexity analysis and show the total number of required function evaluations is indeed no greater than $T$. > Theorem 3.2 vs the lower bound in [1]? Due to space constraints, we have provided the proof of Theorem 3.2 in Appendix B. While we use different smoothness assumptions than those in [1], our lower bound can be considered a strengthened version of Theorem 6.1 in [1] under $\beta=3$, as discussed in Appendix A. > higher-order smoothness? We share the belief that under higher-order smoothness conditions, it is possible to achieve the corresponding optimal sample complexity with estimators of higher-order derivatives. > lower bound for $T$ for all the results in the paper to hold? For all our analysis, the required lower bounds on $T$ are polynomial in $R$, $\rho$, $M$, and $d$; see, e.g., the requirement in Proposition E.1. However, as the focus of this work is on the asymptotic analysis, we did not optimize our algorithm or analysis for these non-asymptotic requirements. --- Rebuttal Comment 1.1: Title: Rebuttal acknowledgment Comment: I would like to thank the authors for the rebuttal. For now, I will maintain my current score. I plan to discuss the paper with the other reviewers and look forward to the authors' discussions with them as well. I will update my score accordingly.
NeurIPS_2024_submissions_huggingface
2024
Nonstationary Sparse Spectral Permanental Process
Accept (poster)
Summary: Point processes are generative models for datasets of points in (typically 1D, 2D, 3D Euclidean) space, e.g. datasets of rain drops or taxi pickup locations, and a Poisson process consists of an intensity function $ \lambda: \mathcal{X} \to \mathbb{R}^+ $ that roughly shows how likely a point is to appear at $x$. The number of points $N_S$ in a region $S\subset \mathcal{X}$ is Poisson distributed $N_S\sim \text{Poi}[\lambda_S]$ with mean parameter $\lambda_S = \int_S \lambda(x) \, dx$. A Poisson process with a (squared, for positive output) Gaussian process prior over $\lambda(x)$ is a permanental process; from point clouds in 1D, 2D or 3D space, one can learn smooth underlying intensity functions. There has been much study on fitting GPs to learn intensity functions using well-known GP tools such as inducing point methods, and this work builds closely upon the work of [27], which uses the popular method of random Fourier features for stationary kernels. This paper makes two novel contributions: firstly, using random Fourier features for non-stationary kernels and deriving the corresponding approximate marginal likelihood; secondly, using a deep kernel by nesting Fourier features, which increases model capacity at the cost of requiring numerical integration for the approximate marginal likelihood. Experiments are performed across a range of settings and model hyperparameters, showing results both with and without improvements over baselines. [27] [Sellier 2023](https://proceedings.mlr.press/v206/sellier23a/sellier23a.pdf) Strengths: - the idea seems a nice intuitive extension of prior work - I felt the writing was very clear and concise and introduced all the necessary background and new ideas very clearly. - the method and experiments appear to be well designed - the new method DNSSPP introduces more parameters and the authors provide parameter sweeps and show results with and without improvement over baselines. 
Weaknesses: My only concern is the lack of substantial novel contribution, particularly when compared to the work of [27] that introduced GSSPP. - Replacing the approximation based on Bochner's Theorem with the approximation based on Yaglom (1987) [37] significantly increases model capacity and flexibility but, unless I am mistaken, appears to be a rather small change from GSSPP; we sample over pairs of frequencies instead. - consequently, deriving the Laplace approximate marginal likelihood for NSSPP also appears to be not too difficult given the result for GSSPP: use trig identities to combine frequencies and I believe the results follow from the stationary case. - using deep features understandably helps model fitting; however, the analytic result of NSSPP is lost. Overall, this felt like GSSPP with different kernels, which introduced some difficulties that seemingly had fairly simple solutions. I feel that it is an elegant although not-too-difficult extension of GSSPP and I do not feel the contribution is significant enough for publication at NeurIPS. I enjoyed the paper and I feel the contribution is certainly very respectable and well designed and demonstrates the efficacy of the method. [27] [Sellier 2023](https://proceedings.mlr.press/v206/sellier23a/sellier23a.pdf) [33] [Fast Bayesian Intensity Estimation for the Permanental Process, Walder and Bishop, ICML 2017](https://proceedings.mlr.press/v70/walder17a.html) [37] [Correlation Theory of Stationary and Related Random Functions, Akiva Yaglom](https://link.springer.com/book/9781461290865) Technical Quality: 3 Clarity: 4 Questions for Authors: - how is the marginal likelihood for DNSSPP computed? (The paper just says "numerically" and I couldn't see details in the appendix) - do the error bars represent one/two standard errors or standard deviation over test set likelihoods? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The authors have described the method in full. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
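For readers less familiar with the sparse spectral machinery this review discusses: by Bochner's theorem a stationary kernel can be approximated with random Fourier features, and the nonstationary construction the review refers to replaces single sampled frequencies with sampled frequency pairs (Yaglom). A minimal sketch of the stationary case only (illustrative, not the paper's code):

```python
import numpy as np

def rff_features(X, lengthscale, n_features, rng):
    """Random Fourier features for the stationary RBF kernel
    k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2)).
    By Bochner's theorem, phi(x)^T phi(y) -> k(x, y) as n_features
    grows; frequencies are sampled from the kernel's spectral density."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) / lengthscale  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)           # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = np.array([[0.0], [1.0]])
Phi = rff_features(X, lengthscale=1.0, n_features=20_000, rng=rng)
approx = Phi[0] @ Phi[1]   # approximates k(0, 1) = exp(-0.5)
```

In a permanental process the intensity is then modelled as $\lambda(x) = f(x)^2$ with $f$ a linear combination of such features, which is what makes the intensity integral tractable in the stationary case.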
Rebuttal 1: Rebuttal: > Q: My only concern is the lack of substantial novel contribution...... I feel that it is an elegant although not-too-difficult extension of GSSPP and I do not feel the contribution is significant enough for publication at NeurIPS. I enjoyed the paper and I feel the contribution is certainly very respectable and well designed and demonstrates the efficacy of the method. A: The reviewer's main concern is that the method proposed in this paper seems "not-too-difficult" and therefore not very "novel". However, we politely disagree that methodological complexity is the only measure of a significant contribution for publication at NeurIPS. Our method may appear straightforward because we have presented it in a concise manner. From a high-level perspective, our main contributions are: (1) we used a nonstationary sparse spectral kernel to replace the stationary sparse spectral kernel, thereby constructing NSSPP, (2) we then derived the corresponding Laplace approximation inference algorithm, and (3) subsequently, we built a corresponding deep variant (DNSSPP) by stacking spectral feature mappings to further enhance expressiveness. To the best of our knowledge, there has been no prior work applying (deep) nonstationary kernels in Gaussian Cox processes. This gap is what this work seeks to address. Each of the three steps above involves greater challenges when extending from the stationary case to the nonstationary case, both in theoretical derivation and algorithm implementation. In particular, the third step, the deep variant (DNSSPP), has never appeared in previous Gaussian Cox process work. Its design cleverly exploits the relationship between spectral feature mappings and deep models. This contribution has also been praised by the other two reviewers. 
For example, Reviewer WJmT stated: "The introduction of a deep kernel variant, DNSSPP, which stacks multiple spectral feature mappings to enhance the expressiveness of the model significantly, is another novel idea." Reviewer QimU commented: "In terms of originality and significance, this paper fills a clear gap in the existing literature on the topic, namely probing the usefulness of deep models in the permanental process setting. Whilst conceptually not a particularly complicated extension, there are obviously various nuances to moving to a deep kernel setting which are outlined clearly by the authors throughout the work." We understand that different people may have different definitions of "novelty", which may be difficult to reconcile. We greatly respect your review and hope that the above response can lead you to reconsider the cleverly designed ideas in our work. We sincerely hope you can increase the rating. > Q: How is the marginal likelihood for DNSSPP computed? (The paper just says "numerically" and I couldn't see details in the appendix). A: For DNSSPP, the nested structure of trigonometric functions leads to the intensity integral ($\mathbf{M}$ and $\mathbf{m}$) lacking an analytical solution, and we need to resort to numerical integration. There are many choices for numerical integration, such as Monte Carlo integration or quadrature methods. In this work, we used Gaussian quadrature. > Q: Do the error bars represent one/two standard errors or standard deviation over test set likelihoods? A: It is one standard deviation. --- Rebuttal 2: Title: Thank you for the response Comment: Thank you for the response. It seems all reviewers agree on the technical aspects, and only I differ in my subjective judgement of the impact, for which it seems I may have been too harsh! I have raised my score. --- Rebuttal Comment 2.1: Title: Thanks Comment: Thank you very much for your recognition and support. We will revise the paper according to your suggestions. 
Thank you once again for your constructive feedback and for increasing your rating.
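To make the "Gaussian quadrature" answer in the rebuttal above concrete: a Gauss-Legendre rule can evaluate the intensity integral $\int \lambda(x)\,dx$ with $\lambda = f^2$ when no closed form exists. A minimal 1D sketch; the intensity function below is made up for illustration:

```python
import numpy as np

def intensity_integral(f, a, b, n_nodes=20):
    """Approximate the intensity integral int_a^b f(x)^2 dx with
    Gauss-Legendre quadrature. In a permanental process the intensity
    is lambda(x) = f(x)^2, and this integral enters the likelihood."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map [-1, 1] -> [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(x) ** 2)

# Example: f(x) = 1 + x on [0, 1], so int_0^1 (1 + x)^2 dx = 7/3,
# which an n-node rule recovers exactly for polynomial integrands.
val = intensity_integral(lambda x: 1.0 + x, 0.0, 1.0)
```

Gauss-Legendre with $n$ nodes is exact for polynomials up to degree $2n - 1$, which is why low node counts suffice for smooth intensities; for deep feature maps the integrand is no longer polynomial and the node count becomes a tuning choice.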
Summary: Point process models are a commonly used technique for analysis of event-based data. Gaussian Cox processes are an example which use GPs to model the intensity function in a Cox process, which itself is a specific case of a Poisson process where the intensity function is a stochastic process. Generally, these types of processes are used to model rates of events occurring at given input locations. The link function within the Gaussian Cox process is a design choice; if we choose a square link function and a stationary kernel for the GP prior, this yields a stationary permanental process. The authors aim to address three key problems with these existing approaches; the cubic complexity of exact GP inference, the requirement to use kernels which allow for analytical solutions to the intensity integral, and the reliance on shallow kernels which limit the expressiveness and flexibility of the model. Introduced is a new form of permanental process model which uses a sparse spectral representation to improve computational efficiency, allows for nonstationary kernels to be used without restriction on the form of said kernel, and allows deep kernels to be formed by composing spectral feature mappings. Strengths: - In terms of originality and significance, this paper fills a clear gap in the existing literature on the topic, namely probing the usefulness of deep models in the permanental process setting. Whilst conceptually not a particularly complicated extension, there are obviously various nuances to moving to a deep kernel setting which are outlined clearly by the authors throughout the work. - The experimental evaluation overall is thorough, and the results are well presented in a tidy and accessible manner, with clear figures. Particularly appreciated is the inclusion of the ablation study in the main text, which is very useful information to a reader/practitioner. 
Weaknesses: - This is just a recurring minor typo but I’m fairly certain it’s the Porto taxi dataset (as in the city of Porto, Portugal), not Proto, so just edit all mentions of this. - I wasn’t entirely sure of the rationale behind some of the baseline selections; for example, for some of the real-world experiments, you use as few as 10 inducing points/frequencies for the baselines, but use more than this for the DNSSPP (‘same configurations as for the synthetic data’). It would be good to have some discussion of this, and/or how the performance of the proposed methods compares to that of the baselines as we increase the number of parameters. Obviously what constitutes a “fair” comparison between models is highly subjective and depends on various factors such as computational complexity as well as parameter count, but I just wanted a little more context on this. - I’m aware that there has been some work on understanding and exploring overfitting in the context of deep kernel learning (Promises and Pitfalls of DKL, Ober et al, 2021); it would be useful to touch on some work from this area during your discussion on overfitting just to clarify to the readers that this phenomenon is not arising solely due to the size of the datasets you are modelling. Technical Quality: 3 Clarity: 3 Questions for Authors: - As mentioned earlier, I think the rationale for using a very small number of inducing points etc. for some of the baselines needs to be clarified in the text; is it the case that if you increase this number to 50+ that the baselines begin to outperform the models proposed by the authors? - You mention other variational methods are feasible, was there a specific reason that you decided upon a Laplace approximation-based approach? - See earlier point about discussing the impact of the deep kernel learning framework in general on overfitting. - See typos in weaknesses section, please fix. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The Limitations section of the paper briefly discusses the fact that the deep formulation of the DNSSPP does not allow for analytical computation of the intensity integral, which necessitates computationally inefficient numerical integration. There are also some practical considerations/limitations discussed elsewhere such as in Section 6.5.2. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
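The sparse spectral construction summarized in the review can be made concrete with a hedged toy sketch: approximate a stationary RBF kernel with random Fourier features, draw a GP sample in weight space, and apply the square link to obtain a nonnegative intensity. All constants (number of frequencies, lengthscale) and the 1-D setup are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
R, ell = 50, 0.5                       # number of frequencies, RBF lengthscale

omega = rng.normal(scale=1.0 / ell, size=R)     # spectral frequencies
phase = rng.uniform(0.0, 2.0 * np.pi, size=R)   # random phases

def features(x):
    # phi(x) such that phi(x) @ phi(y) approximates the RBF kernel k(x, y)
    return np.sqrt(2.0 / R) * np.cos(np.outer(x, omega) + phase)

w = rng.normal(size=R)                 # weight-space sample => f ~ GP(0, k) approx.
x = np.linspace(0.0, 5.0, 200)
f = features(x) @ w                    # latent GP function values
lam = f ** 2                           # square link: intensity is nonnegative
```

The appeal of the weight-space view is that inference scales with the number of frequencies R rather than cubically with the number of events.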
Rebuttal 1: Rebuttal: > Q: As mentioned earlier, I think the rationale for using a very small number of inducing points etc. for some of the baselines needs to be clarified in the text; is it the case that if you increase this number to 50+ that the baselines begin to outperform the models proposed by the authors? A: Thanks for your suggestion. For the synthetic data, we used the same number of ranks (50 inducing points, eigenvalues, frequencies, etc.) for all baseline methods, except for DNSSPP because it has many layers and different layers can have different layer widths (frequencies). For the real data, we chose different numbers of ranks for the 1-dimensional dataset (10 for Coal) and the 2-dimensional datasets (50 for Redwoods and Taxi) for all baseline methods, except for DNSSPP. The reason is that the Coal dataset is very small (only 191 events) and has a simple data pattern. A higher number of ranks would not further improve performance but would significantly increase the algorithm's running time. Therefore, we set the number of ranks to 10 for Coal, but 50 for Redwoods and Taxi because the latter two datasets are larger and more complex, requiring a higher number of ranks to improve performance. To demonstrate this, we further increased the number of ranks for all baseline models to 50 on the Coal dataset and compared the result to rank=10. The result is shown in Table 2 of the rebuttal PDF. As can be seen from the results, when we increased the number of ranks from 10 to 50, the performance of all baseline models did not improve significantly, but the algorithm’s running time increased substantially. In summary, for the same dataset, the number of ranks for all baseline methods is kept consistent, except for DNSSPP because it is not a single-layer architecture. The choice of the number of ranks for the baseline methods was made considering the trade-off between performance and running time. 
Further increasing the number of ranks would not significantly improve model performance but would result in a substantial increase in computation time. > Q: You mention other variational methods are feasible, was there a specific reason that you decided upon a Laplace approximation-based approach? A: Variational inference methods, to the best of our knowledge, require certain standard types of kernels, such as the squared exponential kernel, to ensure that the intensity integral in the likelihood has an analytical solution [18]. The advantage of the Laplace approximation is that it can analytically compute the intensity integral without restricting the types of kernels. Therefore, for NSSPP, we derived a fully analytical Laplace approximation inference algorithm. When we further extended NSSPP to the deep variant DNSSPP, the introduction of the deep architecture meant that neither variational methods nor the Laplace approximation had analytical solutions. For convenience, we continued to use the Laplace approximation method employed in NSSPP. > Q: I'm aware that there has been some work on understanding and exploring overfitting in the context of deep kernel learning (Promises and Pitfalls of DKL, Ober et al, 2021); it would be useful to touch on some work from this area during your discussion on overfitting just to clarify to the readers that this phenomenon is not arising solely due to the size of the datasets you are modelling. A: Thanks for your suggestion. We agree with your advice. In fact, in the ablation study, we also observed some overfitting phenomena in deep kernel learning. We will provide more discussion on this issue, touching on related works from this area, in the camera-ready version. > Q: This is just a recurring minor typo but I'm fairly certain it's the Porto taxi dataset (as in the city of Porto, Portugal), not Proto, so just edit all mentions of this. A: Thanks. We will fix all these typos in the camera-ready version. 
--- Rebuttal Comment 1.1: Title: Rebuttal response Comment: I appreciate the time and effort taken by the authors on the response to my review, and for addressing each of my concerns in turn. This is a well presented, thorough and novel piece of work, and I am satisfied based on the additional results provided (which should be included in the updated manuscript) that the approach presents an improvement over existing methods. As such, I increase my rating to accept. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you very much for your thoughtful review and for taking the time to carefully consider our responses. We greatly appreciate your kind words and are pleased that you find our work to be well-presented, thorough, and novel. We will certainly include the additional results in the updated manuscript, as you suggested. Your feedback is valuable in improving the quality of our work, and we are grateful for your support. Thank you once again for your constructive feedback and for increasing your rating to accept.
Summary: The paper introduces an approach to modeling permanental processes by utilizing a sparse spectral representation of nonstationary kernels, termed as Nonstationary Sparse Spectral Permanental Process (NSSPP) and its deep kernel variant (DNSSPP). This method addresses the limitations of traditional permanental processes which often require specific kernel types and assume stationarity, thus restricting the model's flexibility and computational efficiency. The deep kernel variant (DNSSPP) uses hierarchically stacked spectral feature mappings, significantly enhancing the model's ability to capture complex patterns in data. The paper presents experimental results on both synthetic and real-world datasets, showing that NSSPP and DNSSPP perform competitively on stationary data and show improvements on nonstationary datasets. Strengths: - The paper introduces a novel approach by integrating sparse spectral representation with nonstationary kernels in the context of permanental processes. The introduction of a deep kernel variant, DNSSPP, which stacks multiple spectral feature mappings to enhance the expressiveness of the model significantly, is another novel idea. - The paper is well-structured and articulates the motivations, methodology, and findings clearly. - Experimental results show the effectiveness of the approach when compared to baselines and other recent approaches. - The significance of this work is considerable, given its potential impact on various fields that utilize point process models. Weaknesses: - The experimental results (Table 1) indicate that performance enhancements are observed mainly with the deep kernel variant (DNSSPP) when compared to stationary kernels. It would be beneficial for the authors to include a comparison with a baseline that employs stacked mappings of stationary kernels to more clearly ascertain the source of these improvements. 
Technical Quality: 3 Clarity: 3 Questions for Authors: - Performance comparison with a baseline that employs stacked mappings of stationary kernels Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q: Performance comparison with a baseline that employs stacked mappings of stationary kernels. A: Thanks for the suggestion. A baseline that employs stacked mappings of stationary kernels is an important baseline. To show the source of the performance improvement, we have re-compared with a deep stacked stationary kernel model. The results are shown in Table 1 of the rebuttal PDF. The corresponding deep stacked stationary kernel model is referred to as DSSPP, which has the same deep architecture as DNSSPP. Therefore, DSSPP also has three implementations: DSSPP-[100,50], DSSPP-[50,30], and DSSPP-[30,50,30]. As can be seen from the results, the performance of the stationary kernel model significantly improves after introducing the deep architecture. On the stationary synthetic dataset, DSSPP performs comparably to DNSSPP. On the nonstationary synthetic dataset, DSSPP does not outperform DNSSPP due to the severe nonstationarity in the data. However, on any dataset, DSSPP outperforms other shallow stationary kernel models due to the introduction of the deep architecture. Therefore, this demonstrates that the source of performance improvement comes from the stronger expressiveness of both the deep architecture and the nonstationary kernel.
null
null
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their efforts in providing insightful comments and constructive feedback. We are encouraged that reviewers recognize that our paper proposes an interesting extension to nonstationary permanental processes [R1,R2,R3], introduces a deep kernel variant to enhance expressiveness [R1,R2,R3], includes solid numerical experiments [R1,R2,R3], and is written very clearly and concisely [R1,R2,R3]. In the following, we address reviewers' comments point by point. Pdf: /pdf/81dd5be160b8ebc7c58369329135deb3ecb94c7f.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Kraken: Inherently Parallel Transformers For Efficient Multi-Device Inference
Accept (poster)
Summary: This paper proposes to modify standard transformer architectures to require fewer allreduce operations from tensor parallelism. Specifically, the proposal is to allow each shard of the hidden state to operate as an individual complete hidden state during attention layers and each shard of the attention layer to operate independently. Consequently, allreduce is only needed after FFN layers and can overlap in time with attention calculation. Experiments with models of up to 760M parameters show that the new architecture can achieve accuracy that is almost as good as same-sized GPT2. Latency projection suggests a 1.5X speed-up in context encoding for 175B-size models. Strengths: Please see the summary. The proposal is simple yet makes sense. The accuracy and latency results are promising. Weaknesses: The biggest weakness of this paper is that accuracy results are only available up to 760M model size. There remains significant uncertainty for larger models, where the proposal matters most. Also, there seem to be consistently weak results on the ReCoRD task; is there an explanation for this? Another weakness of this paper is that the model training is committed to a specific tensor-parallelism degree. This is not a huge issue in practice but still a weakness. Technical Quality: 3 Clarity: 4 Questions for Authors: Please see above. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
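The layer structure described in the summary can be sketched as a toy single-process dataflow. Everything below is illustrative — the shapes, the stand-in `attn`/`ffn` blocks, and the residual wiring are assumptions, not the paper's implementation; the elementwise `sum` over per-shard FFN outputs plays the role of the single AllReduce that can overlap with the next layer's attention.

```python
import numpy as np

P, T, D = 4, 8, 16          # shards, sequence length, per-shard hidden size
rng = np.random.default_rng(0)

def attn(h):
    # Stand-in for a full attention block running on one shard, no communication
    scores = h @ h.T / np.sqrt(D)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ h

def ffn(h, W1, W2):
    return np.maximum(h @ W1, 0.0) @ W2

W1 = rng.normal(size=(P, D, 4 * D)) / np.sqrt(D)
W2 = rng.normal(size=(P, 4 * D, D)) / np.sqrt(4 * D)

h = [rng.normal(size=(T, D)) for _ in range(P)]   # one complete hidden state per shard

def kraken_like_layer(h, W1, W2):
    a = [h[i] + attn(h[i]) for i in range(P)]        # attention: fully shard-local
    partial = [ffn(a[i], W1[i], W2[i]) for i in range(P)]
    reduced = sum(partial)                           # the only AllReduce-like step
    return [a[i] + reduced for i in range(P)]        # every shard receives the sum

h = kraken_like_layer(h, W1, W2)
```

The point of the sketch is the dependency structure: within a layer, nothing blocks on communication until after the FFN, so the reduction can be overlapped with the following layer's shard-local attention.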
Rebuttal 1: Rebuttal: **Performance on ReCoRD:** Please see the common response. This was due to a bug in the LM Evaluation Harness (Issue 1647 on GitHub). **Results only up to 760M parameters:** Indeed, it would have been great to evaluate models that are several billion parameters large. Unfortunately, we do not have access to the amount of compute necessary for such an endeavor. Nonetheless, we hope the results presented here will encourage others to try this approach given how prevalent and expensive Transformer inference can be. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response. My ratings remain the same.
Summary: The paper introduces a modification to the standard Transformer architecture aimed at reducing inter-device communication during inference in multi-device systems. By predetermining the degree of model parallelism, computations on each device can operate independently, allowing collective operators to overlap with compute tasks. Evaluations conducted using the SuperGLUE benchmark suite demonstrate that the proposed models maintain similar perplexity levels compared to standard Transformers while achieving notable speed improvements. Strengths: * **Novelty and contribution**: The paper presents a novel approach to parallelizing the Multi-Head Attention (MHA) and Feed-Forward Network (FFN) layers, allowing them to run concurrently with the AllReduce operations. This innovative strategy shows promise and is orthogonal to existing methods like FlashAttention and Speculative Decoding. * **Soundness**: The design and methodology, while simple, are robust and well-justified. The authors provide clear explanations and solid rationales for their approach, enhancing the credibility of their work. * **Clarity**: The paper is well-structured and written in a clear, concise manner, making it easy to follow and understand. Weaknesses: * **Limited benchmark**: The evaluation benchmarks primarily focus on language tasks, neglecting the application of Transformers to vision tasks. * **Limited comparative analysis**: The paper lacks comparisons with other existing baselines, including ablation studies and sensitivity analyses. While the authors compare their approach with models running Feed-Forward Networks (FFN) in parallel with Attention, they do not explore sensitivity to varying inter-device bandwidths or different compute-to-communication ratios. Technical Quality: 2 Clarity: 3 Questions for Authors: * Could there be additional evaluations on a more diverse set of benchmarks? Can the proposed approach be extended to applications in vision-based tasks? 
* How does the approach benefit systems with varying inter-device configurations and bandwidth capacities? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have thoroughly discussed the limitations of their approach. From the checklist, it appears that the approach does not pose significant negative societal implications, apart from the associated costs and resources required for training large models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Applicability to vision tasks:** Yes, our approach will also translate to encoder-only Transformers including those used in vision tasks such as ViT[4]. Nonetheless, we narrowed the scope of this work to focus our resources on the decoder-only Transformer models that are typically the largest and consequently the most expensive to run inference on. 4: “Image is worth 16x16 words…”, Dosovitskiy et al. 2020 **Varying device interconnects and compute/communication ratios:** We do present results at various compute/communication ratios because the cost of the AllReduce scales linearly with sequence length (and embedding size) but Attention itself is quadratic (Figure 4). For longer sequences, communication will occupy a smaller fraction of overall runtime. Additionally, our evaluation platform (NVSwitch) offers the most performant interconnect between A100 GPUs that we are aware of. As such, assuming that the compute capability of each device is unchanged, the presented latency improvements are a lower bound when compared to alternative topologies where some connections traverse PCIe and consequently suffer from limited bandwidth. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the response. I will maintain my current rating.
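The rebuttal's scaling argument can be put into back-of-the-envelope code. Every constant below (embedding size, bandwidth, FLOP rate) and the two-term cost model itself are made-up illustrative assumptions; the only claim being demonstrated is that a linear-in-T communication cost shrinks as a fraction of runtime against a quadratic-in-T attention cost.

```python
def comm_fraction(T, d=4096, bw=300e9, flops=150e12):
    """Crude model: AllReduce volume is linear in sequence length T,
    attention FLOPs are quadratic in T. All constants are illustrative."""
    comm_time = 2.0 * T * d * 2 / bw      # ~2 bytes/element, moved twice
    attn_time = 2.0 * T * T * d / flops   # QK^T and AV matmuls, roughly
    return comm_time / (comm_time + attn_time)

fracs = [comm_fraction(T) for T in (128, 1024, 8192)]
```

Under these toy numbers the communication fraction falls monotonically with sequence length, matching the rebuttal's point that longer sequences leave the AllReduce occupying a smaller share of runtime.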
Summary: This paper proposes Kraken, a new evolution of the standard Transformer to reduce the communication cost of inference. Kraken overlaps the collective operations with computation, and therefore achieves a smaller pre-filling time cost. Kraken is specially designed for tensor parallelism in multi-device environments. Strengths: 1. The focus on accelerating the pre-filling stage of inference by reducing communication overhead addresses a significant and challenging issue that merits further research. 2. Kraken preserves much of the standard Transformer architecture while effectively reducing communication overhead. 3. The solution of Kraken is straightforward yet impactful. 4. Kraken maintains the model’s accuracy despite its optimizations. Weaknesses: 1. The paper lacks a detailed examination of related works that also focus on reducing communication costs. Overlapping techniques to accelerate inference have been widely explored in both industrial and academic contexts, as demonstrated by works such as [1, 2] and tools like NCCL. A clearer differentiation between Kraken and these existing solutions is necessary. > [1] Wang, Shibo, et al. "Overlap communication with dependent computation via decomposition in large deep learning models." (ASPLOS’22) > [2] Rajbhandari, Samyam, et al. "Zero-infinity: Breaking the gpu memory wall for extreme scale deep learning." (SC’21) 2. Sharing token and positional embeddings across all sub-layers might introduce limitations in how effectively the model can learn positional nuances across different parts of the input sequence. Thus, beyond the evaluation in 4.1 and 4.2, the paper should clarify why the new architecture does not affect the accuracy of the model. Technical Quality: 3 Clarity: 3 Questions for Authors: This paper focuses on an important issue and proposes a simple yet effective solution. However, it has several limitations that prevent the paper from being accepted. 
Firstly, a comprehensive study of related works is lacking. It is hard for me to get the key idea of this paper. What are the advantages of Kraken compared with other works? Secondly, although the paper has preserved most of the standard Transformer architecture, it still requires re-training a model from scratch, which is extremely costly, especially for LLMs. The necessity of altering the Transformer architecture should be clarified. Specifically, it should be determined whether Kraken could be designed as a plug-in for any Transformer-based model and work directly or with minimal re-training. Besides, as tools like DeepSpeed, PagedAttention, and Flash Attention are widely adopted in current LLM inference, it is critical that Kraken demonstrates compatibility with these tools to enhance performance and practicality. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to the Weaknesses and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comparison to Related Work:** Our approach is readily compatible with nearly all related work such as DeepSpeed, FlashAttention, and PagedAttention. Our evaluation (Figure 4) uses FlashAttention and TensorRT-LLM’s implementation of AllReduce which is even more performant than NCCL on systems equipped with NVLink. The fundamental contribution is to reformulate the Transformer layer and remove the true data-dependency introduced by the collectives necessary for tensor parallelism. This is illustrated by Figure 3. The closest point of comparison would be models that use parallel Attention and FeedForward which we do evaluate (Figure 4). Our model-level change is compatible with most system-level optimizations for Transformer training and inference. By virtue of the architecture, we don’t require the techniques proposed by Wang et al.[1] unless a Kraken model is parallelized on more devices than its degree of parallelism. **Pre-training and PlugIn:** While Kraken requires training from scratch, any plug-in we develop would still need to account for the true data dependency between layer activations in the original architecture. This imposes a hard limit on efficiency. In fact, the suggested plug-in would mostly take the form of the approach proposed by Wang et al. [1] **Sharing token and positional embeddings:** The absence of shared positional and token embeddings would complicate weight tying[3] given that we are computing only one probability distribution at the last layer. 3: “Using the output embedding…”, Press et al. 2017 --- Rebuttal Comment 1.1: Comment: Thank you for the explanation. I now clearly understand Kraken's key idea and how it differs from related works such as [1, 2]. It's a notable strength that Kraken is compatible with many existing tools like TensorRT. However, the requirement to train from scratch significantly impacts its application in LLMs due to the high cost of re-training. 
Considering both its strengths and weaknesses, I’d like to adjust my rating to 5. --- Reply to Comment 1.1.1: Title: Energy Usage and Infrastructure Allocation For Training vs Inference Comment: We thank the reviewer for appreciating how our approach is compatible with existing techniques. Yes, training from scratch is indeed very expensive. However, as the usage of Transformers grows, inference will require a much larger fraction of the available power and infrastructure. As an example, take the case study presented by Wu et al. [1]; here, the authors find that 70% of available power at a major tech company goes to ML inference (Figure 3) as opposed to 20% for training. Moreover, this work[1] and data is from 2022 which is a little before the current "Gen AI" trend that has taken over the industry. Sustainable AI: Environmental Implications, Challenges, and Opportunities, Wu et al., MLSYS 2022.
null
null
Rebuttal 1: Rebuttal: We thank all the reviewers for reading our work, providing helpful feedback, and finding promise in our approach. After submission, we learned of bugs in LM Evaluation Harness that affected the scores on some SuperGLUE benchmarks such as ReCoRD. We have evaluated all models again using the most recent version of the toolkit; **updated results for Tables 2 and 3 are available in the attached PDF**. Pdf: /pdf/1e51f8eaaeb7094c2512d55dc6a9661baee09257.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Towards Neuron Attributions in Multi-Modal Large Language Models
Accept (poster)
Summary: The paper describes a neuron attribution method for multimodal LLMs that the authors term "NAM". The method is broken down into two steps, a first step that uses image segmentation and a pretrained attribution algorithm, Diffusers-Interpret, to assign relevance scores to the final hidden state of the model. The second step involves attributing the final hidden state to neurons within the feed forward network of the base LLM. Strengths: The method seems to be effective at what it sets out to achieve, namely attributing model semantics to individual FFN neurons in the base model, and separating out the relative modality-specific semantic properties into I-Neurons and T-Neurons. The cross-sample invariance and semantic selectivity are compelling and convincing of the method's efficacy. The editing application gives an example of an especially relevant use case of the method. Weaknesses: The paper is at times difficult to read and needlessly obtuse, reflecting an unfortunate general trend in these sorts of papers. The paper would benefit from being less notationally dense and being less of a chore to read, especially since the underlying ideas are not nearly as complex as the notation suggests. Figure 2 is especially confusing and there are way too many moving parts given the relatively brief caption. There are some assumptions and potential limitations that may need to be justified (see questions for details). Technical Quality: 4 Clarity: 2 Questions for Authors: It is not clear how NAM performs in cases where one might want to perform attribution for semantic concepts that are not discrete objects which can be easily segmented out. A fundamental assumption here is that the semantics of the concept that one would like to explain have to be visually segmentable. I would like to see some discussion or clarification on whether the method is limited only to attribution of concrete, grounded concepts. 
For example, in the sentence "A school of fish is chased by a shark.", how would NAM attribute the word "chased"? A clarifying question: An underlying assumption here also seems to be that it is primarily the FFN neurons that encode semantic knowledge. The authors give some citations in the introduction to support this assumption. Is there some basic intuition for why this cannot be encoded in the self-attention block instead? The authors state that one of the primary motivations of this method over others is that their neuron attribution method does not require intensive backwards-and-forwards passes. How much of a problem is this actually? It would be nice to have a comparison of computational costs, especially since this seems to be something that the authors argue. I wonder if this method would be applicable to the more challenging problem of sound object identification and attribution in multimodal audio LLMs. For example, given an image of a spectrogram of a duck quacking and a person talking, could you use this technique (perhaps replacing some of the constituent pieces here) to identify the components of the spectrogram that correspond to each sound object? To be clear, I am not requesting an additional experiment here, but do want to hear the authors' thoughts on the matter and whether there are any limitations to applying these ideas here. Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: See questions for potential concerns about limitations. No obvious negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
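On the backward-pass question raised in the review, a forward-only FFN neuron attribution can be sketched in the activation-times-value-vector style: each FFN neuron j contributes `activation_j * value_vector_j` to the residual stream, so its score for a target direction is readable from a single forward pass. This is a hedged illustration of the general idea, not NAM's actual attribution function; all shapes, names, and the scoring rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
D, H = 32, 128                       # hidden size, FFN width (illustrative)
W_in = rng.normal(size=(D, H)) / np.sqrt(D)
W_out = rng.normal(size=(H, D)) / np.sqrt(H)

h = rng.normal(size=D)               # residual-stream input to the FFN
t = rng.normal(size=D)               # target direction, e.g. a concept embedding

acts = np.maximum(h @ W_in, 0.0)     # neuron activations (ReLU FFN)
scores = acts * (W_out @ t)          # per-neuron contribution toward t
top_neurons = np.argsort(scores)[::-1][:5]
```

A convenient property of this decomposition is completeness: the per-neuron scores sum exactly to the projection of the whole FFN output onto the target direction, and no backward pass is involved.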
Rebuttal 1: Rebuttal: Dear Reviewer gq3m: Thank you very much for your comments. We sincerely appreciate the time and effort you have dedicated to reviewing our work! Below, we meticulously provide responses to each of your comments and outline the modifications made to the manuscript based on your suggestions. Hope that our responses could address your concerns! >***W1: The paper would benefit from being less notationally dense.*** Thank you for your valuable feedback. Following your suggestions, we have revised our paper to improve clarity and readability. Specifically, we have streamlined the presentation by reducing notational density and simplifying definitions wherever possible (*e.g.*, Equation 4~6). Moving forward, we will be mindful of your suggestions and strive to maintain simplicity in both our descriptions and equations. We hope to contribute to curbing the trend of unnecessary complexity in academic writing. Thank you again for your thoughtful comments! >***Q1: I would like to see some discussion on whether the method is limited to attribution of concrete concepts.*** Thank you for your insightful suggestion. To address your concern, we have expanded our discussion on the limitations of our method. Here are the key points: - Firstly, our method relies on external segmentation models which can only segment concrete concepts. Hence, we acknowledge that this dependency makes our method challenging to attribute abstract concepts. - Despite this limitation, we believe that **this dependency** (*i.e.*, leveraging established external models from mature fields) **can be beneficial**, especially during the early stages of developing a new domain. Our experiments support this approach, demonstrating the advantages of cross-domain collaboration. Moreover, as segmentation models continue to advance, the upper bound of our method's capabilities will improve. 
- Finally, we find it pertinent to mention our ongoing project that builds on this work and aims to attribute abstract concepts within multimodal LLMs. Specifically, it seeks to capture commonalities in the hidden layer when LLMs output abstract concepts, thereby circumventing the limitations of segmentation models. We plan to release it soon after this paper is accepted. Furthermore, following your suggestion, we have added a subsection in the revised version, which elaborates on these points. We hope this addition enhances the rigor of the paper and provides readers with valuable insights. >***Q2: The authors give citations to support that FFN predominantly encodes semantic knowledge. Is there some basic intuition for why this cannot be encoded in the self-attention block instead?*** Thank you for raising this important question. The prevailing view — or basic intuition — suggests that the self-attention block is primarily responsible for gathering global features, while the FFN extracts contextual information based on these features, concretizing complex features into stable semantic concepts. Hence, the FFN potentially stores information that requires abstraction and synthesis beyond the immediate context. Furthermore, from an experimental standpoint, several studies in recent years have supported this view. One of the most influential works is the ‘‘causal tracing’’ [1]. It employed neuron activation interventions and observed their impact on the output, achieving causal attribution of different blocks. We highly recommend reading this fascinating study for more detailed insights. Finally, if you feel it would be beneficial, we are prepared to **expand the Preliminary section in the revised version to include a more thorough discussion** of both the theoretical and experimental evidence supporting this hypothesis. Hope our response could address your concerns! [1] Locating and Editing Factual Associations in GPT. 
NeurIPS 2022 >***Q3: It would be nice to have a comparison of computational costs, especially since the author claims that their neuron attribution method does not require time-consuming backwards-and-forward passes.*** Thanks for your suggestion. Based on your feedback, we have added an analysis of computational costs in the revised version. The table below presents the average time (in seconds) required to attribute a concept. This demonstrates that avoiding the traditional backwards-and-forward pass significantly reduces computational overhead, effectively supporting our claims.

|Method|Backwards|GILL|NExT-GPT|
|:-:|:-:|:-:|:-:|
|GraI|Yes|712.7|914.9|
|GraD|Yes|279.3|299.7|
|GraT|Yes|303.2|418.0|
|NAM|No|**46.4**|**71.5**|

We hope these additional experiments provide a clearer understanding of the efficiency gains offered by our approach. Your input has been invaluable in enhancing the rigor and clarity of our paper. Thank you once again for your insightful feedback! >***Q4: I wonder if this method would be applicable to audio modalities.*** Thank you for your insightful question. Our method can indeed be applied to other modalities such as audio, since the new attribution function we propose is modality-agnostic and can generalize across different types of data. To better illustrate this point, we conducted experiments with audio outputs using NExT-GPT. Here are the results:

|Method|Specificity|Relevance|Invariance|
|:-:|:-:|:-:|:-:|
|CE|0.217|0.403|0.274|
|GraI|0.264|0.500|0.337|
|GraD|0.263|0.509|0.355|
|AcU|0.349|0.552|0.396|
|NAM|**0.361**|**0.596**|**0.428**|

Due to time constraints, we were only able to complete this single set. We hope for your understanding in this matter. Additional experiments are currently underway, and we plan to include them in the paper after acceptance. *Once again, we are deeply appreciative of the time and expertise you have shared with us. 
Your insightful suggestions have significantly enhanced our manuscript, and we are more than happy to add clarifications to address any additional recommendations and reviews from you!*

*Best,*
*Authors*

---
Rebuttal Comment 1.1:
Title: Revised Score
Comment: I believe my concerns here are reasonably satisfied. I am especially impressed by the turnaround on the audio model analysis, which I think is quite a nice addition to the paper. Conditional on the results in this response being effectively integrated into the main paper, I am raising my score to an 8 to reflect these additional improvements, and would support acceptance.

---
Reply to Comment 1.1.1:
Title: Gratitude to Reviewer gq3m for Feedback
Comment: Dear Reviewer gq3m:

Thank you very much for your positive feedback and for raising your score. We greatly appreciate your detailed comments and suggestions, which have significantly improved our paper. **Your recognition encourages us to continue refining our work.** Your support and trust are invaluable to us. They not only validate our current efforts but also motivate us to push forward in the field of interpretable LLMs. We are committed to advancing our research and contributing meaningful insights and innovations to this rapidly evolving area. **While we recognize that interpretable LLMs are still in their early stages and that there is a long road ahead, we firmly believe that with persistent effort and dedication, we will make significant progress.** Thank you once again for your encouragement and support.

Best regards,
Authors
Summary: This paper proposes a method to attribute multimodal output to the neurons that are influential in the generation process. The approach considers an image-text multimodal LLM where 1) the base LLM's internal representation is directly used to generate text, and 2) it is also passed to an image generation model. Given this two-tiered model architecture, the proposed approach breaks down the problem into attribution for the base model and attribution for the image generation model, which can be combined to cover all neurons in the considered architecture. The proposed approach is evaluated against baselines and shown to outperform them in several tests, including cross-sample invariance.

Strengths:
- S1: The proposed approach is efficient as it does not require backpropagation.
- S2: The proposed approach outperforms the baselines in several evaluation tasks including cross-sample invariance.
- S3: The evaluation shows additional examples showing the semantics of the neurons and image editing.

Weaknesses:
- W1: It is unclear how to get to Eq 1. From the GILL paper, it uses a regular OPT/GPT architecture. How did you get the linear addition of the attention vector instead of the typical multi-head self-attention? Also, normalization should be applied after a self-attention block/linear layer, making a linear relation from an intermediate layer to the last layer / penultimate embedding impossible.
- W2: In the semantic relevance evaluation, relevant words are extracted using the same signal paths considered in the paper, while the signals can flow in other ways for the baselines. Therefore, it is unclear if we can consider these words to represent the semantics of the neurons identified by other methods. Moreover, artificial neurons are known to be polysemantic, and neuron groups form a certain semantic rather than individual neurons in isolation.
Thus, the metric is deemed to emphasize what the proposed approach is doing by leveraging its definition, but might not be suitable for evaluating and comparing with the baselines.
- W3: The presentation can be improved in many places (e.g., Line 110, 209-210, 217-218, 260, the definition of specificity).

Technical Quality: 3
Clarity: 3

Questions for Authors:
- Q1: Are the bars in Fig 3(c) stacked or overlayed? Is the green region T \cup I - T \cap I?
- Q2: How do we know the activations of the neurons are signals? Why not lack of activations?
- Q3: Related work should be moved to the main text. What's the difference to AcU [2]? Is it the same as the activations through the residual stream in your approach (Sec 3.1.2.)?
- Q4: How do you know the neuron semantics and how do you know they are correct? How do you know they are monosemantic? Why are "dog", "shark" wrong for L28.U4786?

Overall, I find the paper has quite a bit of room to improve. However, I think this paper attacks an important problem and shows promising results. Although not all evaluation methods are convincing, this is common for a growing subfield. At least some of the evaluation results can be useful for particular use cases, and I find this work worth disseminating. I wouldn't be upset if someone wanted this paper to be more refined and required another full review round, given that there are some obvious presentation issues.

Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: They mentioned the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Dear Reviewer VfTi:

Thank you very much for your comments. We sincerely appreciate the time and effort you have dedicated to reviewing our work! Below, we provide detailed responses to your comments and outline the optimizations made to the manuscript based on your suggestions. We hope that our responses address your concerns!

>***W1: How did you get the linear addition of the attention vector in Eq 1?***

Thanks for your question. As mentioned in the Preliminary section, the vector $a$ in Eq 1 is the output of the attention block after normalization. Therefore, performing linear addition on $a$ is consistent with the structure of the transformer. Moreover, this linear addition is also supported by existing works on various downstream tasks, such as LLM editing [1] and explanation [2]. Due to space limitations, we cannot elaborate further here, but we recommend reviewing these works for additional insights. Additionally, following your feedback, we have enhanced the Preliminary section in the revised version, particularly the descriptions related to Eq 1. We believe this strengthens the rigor and readability of the theoretical parts of our manuscript. We hope our response addresses your concern!

[1] Locating and Editing Factual Associations in GPT. NeurIPS 2022
[2] Multi-modal Neurons in Pretrained Text-Only Transformers. ICCV 2023

>***W2: The semantic relevance is deemed to emphasize what the proposed approach is doing, but might not be suitable for comparing with the baselines.***

Thanks for your concern. We acknowledge that using semantic relevance for comparing different methods carries potential bias, as its theoretical foundation is similar to that of our approach, both being based on residual connections. Therefore, following your suggestion, we have revised the experimental section in the new version.
Specifically, while other metrics (*e.g.*, invariance and specificity) are retained, semantic relevance is only employed to validate the theoretical soundness of our method. We believe this adjustment enhances the rigor of our manuscript. We hope our response addresses your concern!

>***W3: Presentation can be improved in Line 110, 209-210, 217-218, 260 and the definition of specificity.***

We apologize for the oversight that resulted in some grammatical errors in the original text. Following your feedback, we have rechecked our paper and corrected grammatical errors and unclear expressions. Here are some examples of the corrections made:
- *Line 110: To identify the contribution, we define these contribution scores as R.*
- *Line 209-210: The remaining results can be found in Appendix B.*
- *Line 217-218: For details on NAM and the baselines, please refer to Appendix A.*
- *Line 260: Figure 4 (a) presents the average invariance of concepts.*
- *Definition of specificity: Specificity is defined as the proportion of neurons that are crucial solely for a single concept.*

We believe these revisions enhance the overall quality and readability of the manuscript. Thanks once again for your valuable comment!

>***Q1: Are the bars in Fig 3(c) stacked or overlayed? Is the green region T \cup I - T \cap I?***

We apologize for any confusion caused. The bars in Fig 3 (c) are **overlayed**, and the green region represents T \cap I. To avoid this confusion, we have added this clarification to the revision.

>***Q2: How do we know the activations of the neurons are signals? Why not lack of activations?***

Thanks for your concern. Our work aims to explore which neurons play a critical role in generating multimodal content. Intuitively, neurons that are not activated do not seem to contribute to the output. Therefore, we naturally focus solely on the activated neurons.
However, as you mentioned, it is possible that non-activated neurons could contribute to the output in a subtle or indirect way. Hence, we have included this insight in the Discussion. In the future, we will endeavor to explore this point, and we look forward to contributing to this area. We hope our response addresses your query!

>***Q3: Related work should be moved to the main text. Is AcU the same as the residual stream in your approach?***

Thanks for your suggestion. In response, we have moved the Related Work into the main text. Regarding the comparison with AcU, our method differs in that we consider both the activations and the residual stream, whereas AcU only considers the latter.

>***Q4: How do you know the neuron semantics and how do you know they are correct? How do you know they are monosemantic? Why are dog and shark wrong for L28.U4786?***

We apologize for any confusion caused. Here are our clarifications:
1. **Neuron Semantics:** As mentioned in Lines 240-243, the unembedding matrix and the second linear layer of the FFN are treated as the projection from neurons to vocabulary words. In this case, the word with the highest probability can be regarded as the neuron's semantics. We cannot guarantee that these semantics are entirely accurate, but we consider it acceptable to use them as one of the references to assist in analyzing neuron characteristics.
2. **Polysemanticity:** Many existing works have demonstrated the polysemantic nature of neurons. Our paper does not deny polysemy; instead, it aims to identify the most dominant of the multiple semantics. Thus, it does not conflict with polysemy but rather builds upon it.
3. **L28.U4786:** If L28.U4786 is selected as the explanation for "cat" while its semantics are "dog" and "shark", the explanation method is inaccurate. This is the point conveyed in Table 1.

Following your feedback, we have incorporated these clarifications into the appropriate sections to improve clarity and readability.
We hope our response addresses your concerns!

*Once again, we are deeply appreciative of the time and expertise you have shared with us, and we are more than happy to add clarifications to address any additional recommendations and reviews from you!*

*Best,*
*Authors*

---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response to the review, and that many points were clarified in the paper along with the bias and weaknesses of the metrics (although I cannot confirm the change since the new version is somehow not visible). I would further encourage the authors to clarify the assumptions mentioned in the response to Q4 and state the limitations of the method as well as the evaluation. Since my concerns were generally on the presentation, which is seemingly being improved, I'm keeping my recommendation to accept this paper despite the limitations and weaknesses which are always present in any method.

---
Rebuttal 2:
Title: Additional Clarification on Reviewer VfTi's Concern
Comment: Dear Reviewer VfTi,

Thank you very much for your active engagement and valuable feedback. We are pleased to hear that our previous response addressed most of your concerns, with the exception of Q4 (*''How do you know the semantics of neurons?''*). We appreciate the opportunity to provide further clarification. **Since this year's NeurIPS does not allow submitting a new version during the rebuttal, we will provide a detailed explanation here to address your concern as thoroughly as possible within the limited space.** We assure you that more detailed enhancements have been incorporated into the new version, and we sincerely hope for your understanding and trust in this process.

***1. Residual Streams***

To address your question, let's first revisit the concept of residual streams.
The output of the $l$-th layer in the LLM can be computed as follows:
$$\mathbf{h}^l=\mathbf{m}^l+\mathbf{h}^{l-1}+\mathbf{a}^l$$
where $\mathbf{a}^l$ and $\mathbf{m}^l$ are the outputs of the attention and FFN blocks, respectively. Specifically, we have $\mathbf{m}^l=W_{out}^l \mathbf{r}^l$, where $\mathbf{r}^l$ is the vector of the intermediate layer in the FFN and $W_{out}^l$ is the output embedding matrix of the FFN. Based on these, the output of the final layer $\mathbf{h}^{L}$ can be recursively derived as:
$$\mathbf{h}^L=\sum_{l=1}^L\mathbf{m}^l+\mathbf{h}^0+\sum_{l=1}^L\mathbf{a}^l$$
This indicates that the output of any FFN module contributes directly, through linear addition, to $\mathbf{h}^L$. This linearly accumulated flow of information is referred to as the residual stream. Given the complexity of the non-linear relationships within LLMs, which are difficult to intuitively understand and control, many downstream tasks (*e.g.*, LLM editing [1,2] and interpretability [3]) focus solely on the residual stream while reasonably neglecting other non-linear flows.

***2. Neuron Contributions via Residual Streams***

Revisiting the structure of LLMs, $\mathbf{h}^L$ is passed through the unembedding matrix $W_u$ to obtain a probability distribution over the vocabulary $\mathbf{y}$, with the word corresponding to the highest probability being the final output:
$$\mathbf{y}=W_u\mathbf{h}^L$$
Since $\mathbf{m}^l$ contributes directly to the probability distribution through the residual stream, we define this contribution as $\mathbf{y}'$:
$$\mathbf{y}'=W_u\mathbf{m}^l=W_uW^l_{out}\mathbf{r}^l$$
Here, $W_uW^l_{out}$ can be considered a new unembedding matrix, and the distribution $\mathbf{y}'$ can be decomposed into contributions from each of the $k$ hidden neurons in $\mathbf{r}^l$:
$$\mathbf{y}'=W_u\sum_{i=1}^k W^l_{out,i}r_i^l$$
Thus, $W_uW^{l}_{out,i}r_i^l$ represents the distribution over the vocabulary for each hidden neuron.
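To make the decomposition above concrete, here is a minimal NumPy sketch of the per-neuron vocabulary projection, using toy, randomly initialized matrices; all sizes and variable names are our own illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_ffn, vocab = 16, 64, 100              # toy dimensions, purely illustrative
W_u = rng.standard_normal((vocab, d_model))      # unembedding matrix W_u
W_out = rng.standard_normal((d_model, d_ffn))    # FFN output matrix W_out^l of layer l
r = rng.standard_normal(d_ffn)                   # FFN hidden activations r^l

# Contribution of the whole FFN block through the residual stream: y' = W_u W_out^l r^l
y_ffn = W_u @ (W_out @ r)                        # shape (vocab,)

# Decompose into per-neuron contributions: column i of W_u @ W_out scaled by r_i.
per_neuron = (W_u @ W_out) * r                   # shape (vocab, d_ffn)

# Summing the per-neuron terms recovers the full FFN contribution.
assert np.allclose(per_neuron.sum(axis=1), y_ffn)

# "Semantics" of neuron i = vocabulary index with the highest score in its column.
neuron_words = per_neuron.argmax(axis=0)         # shape (d_ffn,)
```

Each column $i$ of `W_u @ W_out` scaled by $r_i$ is one neuron's additive contribution to the logits, and the argmax over the vocabulary axis picks that neuron's associated word, as in the derivation above.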
The word with the highest probability in this distribution is the one associated with that neuron. The intuition here is that **regardless of the magnitude of a neuron's output, its contribution will always be most inclined toward its associated word**. For further details, please refer to [1].

***3. Using These Semantics as Evaluation Metrics***

As you pointed out, this method of deriving semantics could have limitations due to potential bias (since it only considers information within the residual stream). Therefore, as mentioned in our previous response, we **did not** use this as an evaluation metric in the revision. However, given the success of focusing on the residual stream in many fundamental LLM studies and downstream tasks [1,2,3], we believe these semantics still hold reference value. As such, we included this as an interesting experimental observation for visualizing neuron semantics, and **we believe this visualization has the potential to offer readers valuable insights and perspectives on the behavior of neurons within MLLMs**.

***4. Other Evaluation Metrics***

Although we removed this metric in the new version, **the other metrics in our paper are unaffected by the residual stream and are therefore unbiased**. The results from these metrics successfully validate the superiority of our method, ensuring the effectiveness of our experimental section. Finally, we acknowledge that explainable MLLMs are still in their infancy, and there is currently no universally accepted metric. **This work not only aims to provide a new paradigm but also actively explores usable, fair, and open-source evaluation metrics to advance this field.** We humbly ask for your understanding and hope you can recognize the improvements we have made in the new version based on your feedback.

*Thank you once again for participating in the discussion!
We are ready to address any additional questions from you at any time and look forward to continuing our communication with you!*

*Best regards,*
*Authors*

[1] Locating and Editing Factual Associations in GPT. NeurIPS 2022
[2] Mass-Editing Memory in a Transformer. ICLR 2023
[3] Multi-modal Neurons in Pretrained Text-Only Transformers. ICCV 2023

---
Rebuttal Comment 2.1:
Title: Gratitude to Reviewer VfTi and Respectful Request for Re-evaluation
Comment: Dear Reviewer VfTi,

We would like to express our sincere gratitude for the valuable insights and feedback you have provided throughout the review process. As the discussion period is drawing to a close in about one day, we would really appreciate it if you could kindly raise your score in light of our follow-up responses. In the following, we summarize the contributions and responses of this paper again.

**Contributions:**
- Innovative Framework: Our work introduces a novel NAM framework which pioneers the explanation of MLLMs and ***addresses a crucial gap in MLLMs*** (Reviewers `gq3m`, `kLxz`, `VhwX`, `VfTi`).
- Robust Experiments: Our experimental design is rigorous and ***provides convincing evidence and comprehensive results*** (Reviewers `gq3m`, `VhwX`).
- Valuable Insights: Our study ***offers interesting and instructive observations for the development of MLLMs*** (Reviewers `VhwX`, `VfTi`).

**Responses to Your Concerns:**
- Neuron Semantics: We have elaborated on the theoretical reasoning, cited sources, intuitive understanding, and practical applications of our approach to neuron semantics in two rounds of responses.
- Evaluation: We have adjusted the evaluation metrics to treat semantics as a visualization offering novel insights into neuron interpretability.
- Validation of Linear Addition: We provided detailed derivations and justifications for the linear addition approach, confirming its correctness.
- Enhanced Clarity and Presentation: We have refined the presentation at several key points you mentioned (*e.g.*, Figure 3(c), Related Work, Lines 110, 209-210, 217-218, 260) to enhance readability.
- Neuronal Activations as Signals: We clarified the strong relevance of this choice to the issues explored in our paper, explaining why alternatives were not suitable for signaling in our context.

We sincerely hope that our clarifications and detailed discussions address your concerns and that you might consider supporting our submission during this final phase of the review process. Should you have any further questions or need additional clarification, please do not hesitate to contact us. We are more than willing to provide further information promptly. We wish you a nice day and thank you again.

Warm regards,
The Authors of Submission 10306

---
Rebuttal 3:
Title: Gratitude and Request for Final Review Consideration
Comment: Dear Reviewer VfTi,

We are extremely grateful for the insightful comments and the recognition you have shown toward our work. Your suggestions for optimizing the paper's presentation and experiments have significantly strengthened the rigor of our approach, making our contribution even more valuable to the community. **As our current score is still hovering around the borderline, we humbly ask if you might consider the possibility of raising your score. Your support at this stage is immensely important to us!** If you believe there are still areas in our paper that could be further improved, we would be more than happy to engage in any additional discussion to address any remaining concerns, particularly as the rebuttal period is about to conclude in just a few hours. We are fully committed to advancing the field of explainable LLMs and contributing meaningfully to the community. Your feedback and support are invaluable to us in achieving this goal. Thank you once again for your thoughtful engagement.

Warm regards,
Authors
Summary: The work introduces a novel Neuron Attribution Method (NAM) tailored for MLLMs. The NAM approach aims to reveal the modality-specific semantic knowledge learned by neurons within MLLMs, addressing the interpretability challenges posed by these models. The method highlights neuron properties such as cross-modal invariance and semantic sensitivity, offering insights into the inner workings of MLLMs.

Strengths:
1. NAM effectively differentiates between modality-specific neurons, ensuring accurate attribution of text and image outputs to the relevant neurons. This granularity aids in understanding how MLLMs process multi-modal content.
2. The observation "...T/I-neurons identified by our NAM are specialized, showing specificity across different semantics. They are not generally sensitive to multiple semantics, confirming their targeted functionality..." and the application of NAM to image editing are interesting.

Weaknesses: The NAM method relies on advanced segmentation models (like EVA02) and attribution algorithms (like Diffuser-Interpreter) tailored for specific generation modules. This dependency could limit the method's applicability and flexibility, especially when dealing with different or emerging MLLMs and generation technologies.

Technical Quality: 3
Clarity: 2
Questions for Authors: NA
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Dear Reviewer VhwX:

Thank you very much for your comments. We sincerely appreciate the time and effort you have dedicated to reviewing our work! Below, we provide responses to each of your comments and outline the optimizations made to the manuscript based on your suggestions. We hope that our responses address your concerns!

>***W1: The NAM method relies on external models (e.g., image segmentation models), which could limit the method's applicability and flexibility.***

Thank you for raising this important concern. We acknowledge that reliance on external models may introduce limitations in terms of applicability and flexibility. However, we believe that this dependence is acceptable in the context of multimodal LLM explainability for several reasons. Specifically,

1. **Employment of External Models in Multimodal LLMs:** Let's first consider the current landscape of multimodal LLMs (*e.g.*, EMU [1], Unified-IO 2 [2], DreamLLM [3], RPG [4], CM3Leon [5], GILL [6], and NExT-GPT [7]). These models often incorporate external models (*e.g.*, diffusion models) for multimodal understanding and generation. While these external models may introduce limitations in applicability and flexibility, they have not hindered the success and rapid development of multimodal LLMs. On the contrary, these mature models from well-established domains have become significant assets for multimodal research. **Hence, we believe that the benefits of utilizing external models in this domain (*i.e.*, multimodal research) far outweigh the potential limitations they might introduce.**
2. **Optimal Performance Across Diverse Task Configurations:** According to our experimental results, our method achieves optimal performance across various modalities, datasets, and metrics. This impressive outcome, achieved through the integration of external models, demonstrates the substantial advantages of cross-domain collaboration.
This indicates that the limitations introduced by external models (1) have a minimal impact on our method's application to mainstream downstream tasks, and (2) do not impede our method from achieving state-of-the-art results across various tasks. Moreover, another potential advantage of cross-domain collaboration is that as the external models continue to advance, the upper bound of our method's capabilities will also improve, expanding the range of problems it can address. **Therefore, we believe that cross-domain collaboration should not be constrained by the potential limitations introduced by external models.**

3. **Early Stage of Multimodal Explainability:** The field of multimodal LLM explainability is still in its early stages. Leveraging well-established external models to aid initial exploration is a common practice. We believe that as this field advances, the dependency on external models will gradually decrease, and domain-specific models will emerge to further propel research. We hope to contribute meaningfully toward achieving this goal.

Additionally, to address your concern, **we have emphasized the potential limitations introduced by external models in the Limitations section of the revised version, and included the above points of discussion in the Appendix**, hoping to inspire further thought among researchers in the field of LLM explainability. We hope that our response and the revisions made to the manuscript address your concern!

*Once again, we are deeply appreciative of the time and expertise you have shared with us. Your insightful suggestions have significantly enhanced the rigor of our experimental design and enriched the content of our manuscript, and we are more than happy to add clarifications to address any additional recommendations and reviews from you!*

*Best,*
*Authors*

[1] Generative Pretraining in Multimodality. ICLR 2024
[2] Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision Language Audio and Action. CVPR 2024
[3] DreamLLM: Synergistic Multimodal Comprehension and Creation. ICLR 2024
[4] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs. ICML 2024
[5] Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning (CM3Leon). arXiv 2024
[6] Generating Images with Multimodal Language Models. NeurIPS 2023
[7] NExT-GPT: Any-to-Any Multimodal LLM. arXiv 2023

---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks for the response, I have decided to keep my relatively positive score.

---
Reply to Comment 1.1.1:
Title: Thank you for your support!
Comment: Dear Reviewer,

Thank you sincerely for your valuable and positive feedback. We deeply appreciate your recognition of our efforts in addressing the concerns from the reviews. Your acknowledgment of the improvements we have made is greatly encouraging, and we are grateful for your decision to maintain the positive score based on these enhancements. We will continue to refine and develop our research to contribute meaningfully to the field. Thank you once again for your careful consideration and support.

Best regards,
Authors

---
Rebuttal 2:
Title: Gratitude and Request for Final Review Consideration
Comment: Dear Reviewer VhwX,

We are extremely grateful for the insightful comments you provided and for the recognition you have shown toward our work. Your suggestion to include additional discussion on the introduction of external modules has significantly strengthened the rigor of our approach, making our contribution even more valuable to the community. **As our current score is still hovering around the borderline, we humbly ask if you might consider the possibility of raising your score.
Your support at this stage is immensely important to us!** If you believe there are still areas in our paper that could be further improved, we would be more than happy to engage in any additional discussion to address any remaining concerns, particularly as the rebuttal period is about to conclude in just a few hours. We are fully committed to advancing the field of explainable LLMs and contributing meaningfully to the community. Your feedback and support are invaluable to us in achieving this goal. Thank you once again for your thoughtful engagement. Warm regards, Authors
Summary: This paper introduces NAM (Neuron Attribution Method), a novel approach for attributing neurons to specific semantic concepts in multimodal large language models (MLLMs). The key contributions are: (1) a method to identify modality-specific neurons (text or image) that are crucial for particular semantic concepts; (2) analysis of neuron properties like cross-modal invariance and semantic sensitivity; (3) a framework for multimodal knowledge editing based on the identified neurons. The authors evaluate NAM on the GILL and NExTGPT models, comparing it to several baseline attribution methods. They demonstrate NAM's effectiveness in identifying semantically relevant neurons, its cross-sample invariance, and its utility for targeted image editing tasks.

Strengths:
- The paper addresses an important gap in the interpretability of MLLMs, extending neuron attribution techniques from text-only models to multimodal systems.
- The authors conduct extensive experiments to validate NAM, including comparisons with multiple baselines and analysis of various neuron properties.
- The method is well-motivated, building on existing work in LLM interpretability while addressing the unique challenges of multimodal systems.
- The paper provides a detailed description of the NAM algorithm, making it potentially reproducible by other researchers.

Weaknesses:
- While the authors acknowledge this limitation, the evaluation is conducted on only two MLLMs (GILL and NExTGPT). Testing on a broader range of models would strengthen the generalizability claims.
- The experiments focus solely on text and image modalities. Exploring other modalities (e.g., audio, video) would provide a more comprehensive evaluation of NAM's capabilities.
- The method depends on external image segmentation models to remove noisy semantics. This introduces a potential source of bias and may limit the method's applicability to scenarios where such models are unavailable or unsuitable.
- The paper would benefit from ablation studies to isolate the impact of different components of the NAM algorithm, particularly the novel attribution score calculation.

Technical Quality: 2
Clarity: 2
Questions for Authors: None.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Dear Reviewer kLxz:

Thank you very much for your comments. We sincerely appreciate the time and effort you have dedicated to reviewing our work! Below, we provide responses to each of your comments and outline the modifications based on your suggestions. We hope that our responses address your concerns!

>***W1. Testing on a broader range of models would strengthen the generalizability claims.***

Thank you for your insightful suggestion. Following your feedback, in addition to GILL and NExT-GPT, we conducted experiments on two additional multimodal LLMs, EMU and DreamLLM. The results, summarized in the table below, demonstrate that our explanation method achieves optimal performance across all metrics, further validating the generalizability of our approach.

|Method|Model|CLIPScore|BERTScore|MoverScore|
|:-:|:-:|:-:|:-:|:-:|
|GraD|EMU|0.376|0.461|0.387|
|AcT|EMU|0.469|0.582|0.595|
|NAM|EMU|**0.499**|**0.625**|**0.634**|
|GraD|DreamLLM|0.382|0.449|0.362|
|AcT|DreamLLM|0.510|0.502|0.568|
|NAM|DreamLLM|**0.538**|**0.521**|**0.593**|

We have incorporated these additional results into the revised version of the paper, along with a more detailed analysis. Due to the limited time available during the rebuttal phase, we were able to include only these two sets of experiments. We hope for your understanding in this matter. Further experiments with additional multimodal LLMs are currently underway, and we plan to include them in the experimental section of the paper to enhance the robustness and comprehensiveness of our study. We hope that our response and the additional experiments meet your expectations. Thank you once again for your valuable feedback!

>***W2. Exploring other modalities would provide a more comprehensive evaluation of NAM's capabilities.***

Thanks for your valuable suggestion regarding the exploration of other modalities.
Following your recommendation, we have extended our analysis beyond the text and image modalities to the audio modality on NExT-GPT, which is capable of generating sound. The following results validate our method's cross-modal generalizability.

| Method | Specificity | Relevance | Invariance |
|:-:|:-:|:-:|:-:|
| CE | 0.217 | 0.403 | 0.274 |
| GraI | 0.264 | 0.500 | 0.337 |
| GraD | 0.263 | 0.509 | 0.355 |
| GraT | 0.317 | 0.496 | 0.347 |
| AcT | 0.346 | 0.561 | 0.406 |
| AcU | 0.349 | 0.552 | 0.396 |
| NAM | **0.361** | **0.596** | **0.428** |

Due to the time-consuming nature of generating other modalities, such as video, we were unable to complete all experiments within the short rebuttal period. However, these experiments are in progress, and we plan to include them in the updated version of the paper once completed. We appreciate your understanding, and we hope our response and the additional experiments meet your expectations!

>***W3. The method depends on external models (image segmentation models), which introduces a potential source of bias.***

Thank you for raising this important concern. We acknowledge that our method depends on external models, which could introduce potential bias. However, we believe this dependency is acceptable in the study of explainability for multimodal LLMs, for several reasons.

1. **External Models in Multimodal LLMs:** Consider the current landscape of multimodal LLMs (*e.g.*, EMU, DreamLLM, Unified-IO 2, RPG, CM3Leon, and GILL). These models rely on external models (*e.g.*, diffusion models) for multimodal generation. While these external models may introduce bias similar to the models our method employs, this has not hindered the remarkable success of multimodal LLMs. On the contrary, these well-developed models have become significant assets for multimodal research. We therefore believe the advantages of using external models far outweigh the potential bias introduced.
2.
**Optimal Performance Across Metrics:** According to our experimental results, our method achieves the best performance across all metrics. This indicates that the additional bias introduced by external models has minimal impact on accuracy and does not prevent our method from achieving state-of-the-art results. Moreover, as segmentation models advance, the upper bound of our method's capabilities will continue to improve.
3. **Early Stage of Multimodal Explainability:** The field of multimodal explainability is still in its early stages. Using well-established external models from a mature field, such as image segmentation, to aid initial exploration is inevitable. We believe that as the field advances, the dependency on external models will gradually decrease and domain-specific models will emerge to drive further progress. We hope to contribute meaningfully toward this goal.

To address your concern, we have added the above discussion to the revised version to enhance the rigor of the paper. We hope our response addresses your concern!

>***W4. The paper would benefit from ablation studies to isolate the impact of different components.***

Thank you for your comment. Following your suggestion, we have included ablation studies in the revised version, focusing on two main components: our proposed attribution score (AS) and the denoising module (DM). The results are as follows:

| Model | Method | Specificity | Relevance | Invariance |
|:-:|:-:|:-:|:-:|:-:|
| GILL | w/o AS | 0.625 | 0.752 | 0.613 |
| GILL | w/o DM | 0.660 | 0.759 | 0.635 |
| GILL | NAM | 0.674 | 0.782 | 0.644 |
| NExT-GPT | w/o AS | 0.598 | 0.643 | 0.527 |
| NExT-GPT | w/o DM | 0.611 | 0.661 | 0.532 |
| NExT-GPT | NAM | 0.624 | 0.689 | 0.546 |

These results demonstrate that each component of our design contributes to the performance of our method. We hope these additional experiments improve the overall integrity of our evaluation.
*Once again, we are deeply appreciative of the time and expertise you have shared with us. We are more than happy to add clarifications to address any additional recommendations and reviews from you!* *Best,* *Authors* --- Rebuttal Comment 1.1: Comment: Thanks for your response and additional experiments. This addresses some of my concerns and I have raised my score. --- Reply to Comment 1.1.1: Title: Heartfelt Thanks for Your Encouraging Feedback Comment: Dear Reviewer kLxz, Thank you very much for your timely feedback and for the positive adjustment to your score. We are truly delighted to see that our responses have successfully addressed your concerns. Your encouragement is incredibly valuable to our team. As you are aware, the field of explainable LLMs is still in its infancy. **Your insightful comments and suggestions throughout the review process have not only improved our work but also motivated us deeply. We are committed to continuing our research in this emerging area, striving to contribute further to its development.** Thank you once again for your essential role in refining our work and for your inspiring support. Warm regards, Authors --- Rebuttal 2: Title: Gratitude and a Request for Final Review Consideration Comment: Dear Reviewer kLxz, We are extremely grateful for the insightful comments you provided and for the recognition you have shown toward our work. Your suggestion to include additional experiments has significantly strengthened the rigor of our approach, making our contribution even more valuable to the community. **As our current score is still hovering around the borderline, we humbly ask if you might consider the possibility of raising your score.
Your support at this stage is immensely important to us!** If you believe there are still areas in our paper that could be further improved, we would be more than happy to engage in any additional discussion to address any remaining concerns, particularly as the rebuttal period is about to conclude in just a few hours. We are fully committed to advancing the field of explainable LLMs and contributing meaningfully to the community. Your feedback and support are invaluable to us in achieving this goal. Thank you once again for your thoughtful engagement. Warm regards, Authors
Rebuttal 1: Rebuttal: Dear Reviewers: We gratefully thank you for your valuable comments! We are truly encouraged by the reviewers' recognition that our work has **addressed an important gap in MLLMs** (all Reviewers), **provided interesting and instructive observations** (Reviewers VhwX and VfTi), and **conducted convincing and comprehensive experiments** (Reviewers kLxz and gq3m). Here we give point-by-point responses to your comments and have further revised our manuscript following your suggestions. We hope our responses address all your concerns and meet the expectations of the conference committee. Once again, we sincerely appreciate your time and effort in reviewing our paper. Your constructive criticism has been invaluable in refining our work, and **we are more than happy to add clarifications to address any additional recommendations and reviews from you**! Best, Authors
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Kermut: Composite kernel regression for protein variant effects
Accept (spotlight)
Summary: The authors introduce a family of Gaussian-process-based regression models for protein variant effect prediction. The "composite" kernel introduced makes use of structural information (i.e. closeness in 3D space) as well as pre-trained sequence and/or structure models like ESM2 and ProteinMPNN (via embeddings and/or amino acid preference distributions). The model shows promising predictive performance on the ProteinGym benchmark, including w.r.t. uncertainty metrics like expected calibration error (ECE). Strengths: - The authors contribute to a problem class that is of considerable interest to a fairly large slice of the NeurIPS community, inasmuch as it brings together a compelling application of ML to proteins, touches on transfer learning and representation learning, investigates uncertainty quantification in a difficult problem setting, and leverages classical methods like Gaussian processes. - By using the (ever-larger) ProteinGym benchmark and including quite a few ablations, the authors provide a relatively comprehensive empirical evaluation of their method. - The performance of the proposed method appears to be pretty good (at least for the regime where you only extrapolate out a few mutations), and could presumably improve as other sequence and/or structure models are plugged in to the same general construction. Weaknesses: - The description and discussion of the Kermut kernel is not very easy to follow and could be considerably improved. - For example, much of the discussion on lines 128-144 seems a bit beside the point, since the authors do not in the end develop a model that is linear in one-hot features. - The authors should do a much better job of foreshadowing/signposting that their construction "for single mutant variants" is but a stepping stone to a multi-mutant construction; otherwise the reader is liable to get confused about what's going on.
- Many of the choices that lead to the final kernel construction are either not motivated or only briefly discussed. Why not use, for example, a product kernel $k_{\rm struct} \times k_{\rm seq}$? Some of these alternative choices should be discussed and, ideally, included in ablations. - What does this mean? [Line 167] "preventing the comparison of different mutations at the same site" - The paper is missing a discussion of the computational complexity of computing the kernel w.r.t. the sequence length, the number of mutations from the wildtype, etc. - The discussion of prior work is inadequate, especially w.r.t. work on sequence kernels and previous applications of GPs to protein modeling. Granted there is only so much space in the main text, but the reader deserves a more detailed discussion. (Perhaps some of the discussion can be relegated to the appendix.) To name just a few examples, there is a lot of work on sequence kernels (see e.g. [A] and references therein) and it is negligent to ignore this body of work. Also the method comparison to mGPfusion [24] is inadequate. The authors did not invent sequence kernels or pioneer their application to protein modeling (e.g. [B, C, D] for more recent work) and should be much more liberal and informative in their discussion of the literature. Granted the relevant literature can be scattered (arxiv, biorxiv, etc.), but the readers deserve (much) better. Discussion of relevant work is more than a required chore: if well done it adds significant value to the reader and the literature as a whole. References: - [A] "Biological Sequence Kernels with Guaranteed Flexibility," Alan Nawzad Amin, Eli Nathan Weinstein, Debora Susan Marks, 2023. - [B] Moss, Henry, et al. "Boss: Bayesian optimization over string spaces." Advances in neural information processing systems 33 (2020): 15476-15486. - [C] Parkinson, Jonathan, and Wei Wang. 
"Scalable Gaussian process regression enables accurate prediction of protein and small molecule properties with uncertainty quantitation." arXiv preprint arXiv:2302.03294 (2023). - [D] Greenman, Kevin P., Ava P. Amini, and Kevin K. Yang. "Benchmarking uncertainty quantification for protein engineering." bioRxiv (2023): 2023-04. Technical Quality: 3 Clarity: 2 Questions for Authors: - Have the authors considered pooling strategies other than mean-pooling? - What are typical kernel hyperparameters learned across the ProteinGym benchmark? $\gamma_1$, $\pi$, etc. It would be great to see a histogram or similar. - The throwaway comment [Line 316] "While this limitation can be fixed by online protein structure prediction, the computational cost would increase significantly" seems a bit misleading without further clarification, since kernels are "pairwise" objects and so it's not immediately clear how you would incorporate multiple structures into your kernel. Can you please comment? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: To my mind the major limitation of this work is that reliance on the ProteinGym benchmark means that there is little signal on how well this class of models would perform in more challenging problem settings where the model is asked to extrapolate out many mutations away from the wild type. There are increasingly many public datasets that make this kind of evaluation possible in principle, and I encourage the authors to apply their method to more challenging scenarios. For example: - Chinery, Lewis, et al. "Baselining the Buzz. Trastuzumab-HER2 Affinity, and Beyond!." bioRxiv (2024): 2024-03. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
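One of the weaknesses raised above asks why a sum rather than a product kernel $k_{\rm struct} \times k_{\rm seq}$ is used. Both compositions are in fact valid: sums and elementwise (Schur) products of positive semi-definite kernels remain positive semi-definite, so either yields a legitimate GP covariance. A minimal numerical illustration with toy RBF kernels standing in for the two components (all names and data below are illustrative, not from the paper):

```python
import numpy as np

# Toy RBF kernels standing in for k_struct and k_seq; data is random.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))

def rbf(X, gamma):
    """Squared-exponential Gram matrix over the rows of X."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

K_struct, K_seq = rbf(X, 0.5), rbf(X, 2.0)

# Both closures preserve positive semi-definiteness, so the sum and
# the elementwise (Schur) product are both valid GP covariances.
for K in (K_struct + K_seq, K_struct * K_seq):
    assert np.min(np.linalg.eigvalsh(K)) > -1e-9
```

The choice between sum and product is then an empirical/modeling question (a sum lets either signal dominate; a product requires both to agree), which is presumably what an ablation would probe.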
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thorough insights and particularly the raised issues regarding the related works section as well as their comments on the methods section. **Weaknesses:** - "The description and discussion..." - "For example, ..." - "The authors should..." We acknowledge that section 3.3 was not as straightforward to follow as we intended and have modified it accordingly. We have essentially removed lines 128-144, which were meant to describe how the initial kernel came about. We now instead motivate why a structure kernel could be useful and that we intend to introduce a single-mutant kernel which we will later extend to multiple mutations. We then introduce the three sub-kernels sequentially, motivating the inclusion of each. We then mention the lack of epistasis, and use this and the existing literature on GPs on embeddings to motivate the addition of the sequence kernel to the structure kernel. - "Many of the choices..." Reviewer RDFU raised similar points in both weaknesses and questions and due to character constraints, we refer to our responses to this reviewer. - "What does this ..." We agree that the formulation was confusing and have modified it accordingly. Our point is that $k_H$ is incapable of distinguishing between different mutations at the same site since $d_H(x,x')=1$, when $x$ and $x'$ have mutations on the same site. - "The paper is missing a discussion of the computational complexity..." This is a good point. We've now added information on computational complexity in the appendix w.r.t. sequence length, number of mutations, and number of datapoints. Briefly put, assuming precomputed embeddings and AA-distributions, evaluating $k(x,x')$ has complexity $O(m_1 \times m_2)$ (for # mutations in $x$ and $x'$, respectively). The main bottleneck lies in the quadratic scaling of the transformer-based ESM model with sequence length, which can however be substituted with different architectures.
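The complexity argument above can be made concrete with a toy sketch: with precomputed per-site quantities, one kernel evaluation reduces to a sum over the $m_1 \times m_2$ mutation pairs. Everything below (random stand-ins for the AA distributions and coordinates, and the particular per-pair kernel forms) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

# Illustrative sketch of an O(m1 * m2) composite kernel over mutation
# sets. site_probs and coords are random stand-ins for precomputed
# per-site AA distributions and structure coordinates; the kernel
# forms are hypothetical, not the paper's exact definitions.
rng = np.random.default_rng(0)
L, A = 50, 20                                    # sequence length, alphabet
site_probs = rng.dirichlet(np.ones(A), size=L)   # stand-in AA distributions
coords = rng.normal(size=(L, 3))                 # stand-in C-alpha coords

def hellinger(p, q):
    """Hellinger distance between two discrete distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def k_pair(m1, m2):
    """Contribution of one mutation pair: site similarity x 3D proximity."""
    (s1, _), (s2, _) = m1, m2
    k_h = np.exp(-hellinger(site_probs[s1], site_probs[s2]))
    k_d = np.exp(-np.linalg.norm(coords[s1] - coords[s2]))
    return k_h * k_d

def k_struct(x1, x2):
    """Sum over all mutation pairs: m1 * m2 pair evaluations."""
    return sum(k_pair(m1, m2) for m1 in x1 for m2 in x2)

x1 = [(3, 5), (17, 2)]  # double mutant: (site, amino-acid index)
x2 = [(3, 9)]           # single mutant at site 3
k = k_struct(x1, x2)    # 2 x 1 = 2 pair evaluations
```

Note that per-pair terms depending only on the site, as in this sketch, give `k_pair == 1` for any two mutations at the same site, which illustrates why a mutation-comparison term (the $k_p$ of the rebuttal) is needed to distinguish them.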
- "The discussion of prior work..." Thank you for raising this important issue. We agree that our initial related work section was insufficient with respect to the existing literature for both kernel methods on protein sequences and uncertainty quantification and calibration. We have now revised section 2.3 on kernel methods significantly. We have furthermore added a new section on uncertainty quantification and calibration. We have additionally revised the section on local environments to more broadly capture the literature of machine learning modeling on structural environments. For transparency, we will include the revised texts as a comment to this rebuttal (due to the character limit) with included references. **Questions:** - “Have the authors considered pooling strategies other than mean-pooling?” Yes. In Table 2 in "Learning meaningful representations of protein sequences" [Detlefsen et al] they show that alternative pooling strategies can improve the predictive performance on downstream tasks, with significant gains shown by employing a bottleneck approach and to a lesser extent a concatenation as opposed to mean-pooling. While we would expect better performance using a bottleneck approach, this would require the additional computational overhead of training the bottleneck model. This would additionally decrease the flexibility of our model if different embeddings were used as the bottleneck would need to be retrained. A concatenation approach could be employed, but this would lead to significantly different embedding sizes as the protein sequences in ProteinGym range from 37 to thousands of amino acids, which would further complicate the modeling setup. One can interpret the addition of the structure kernel as a means of recapturing and emphasizing the local sequence differences between variants, which might be less conserved in the embeddings after the averaging operation. We will describe these considerations in the camera-ready version. 
- “What are typical kernel hyperparameters learned across the ProteinGym benchmark? $\gamma_1$, $\pi$, etc. It would be great to see a histogram or similar.” We agree! In the global rebuttal, we have uploaded a one-page PDF with visualizations of the hyperparameters. We show histograms for each kernel hyperparameter as well as hyperparameter vs. sequence length and number of variants for 174 ProteinGym datasets. We also show $\lambda$ vs. $\pi$ which shows whether the emphasis is on the structure or sequence kernel (or both). Due to the 1-page constraints, we have only uploaded aggregated results. We will look further into quantitative and qualitative analyses of the hyperparameter distributions for ProteinGym in the coming months. - “The throwaway comment...” We agree that the above formulation is unnecessarily vague and even slightly misleading and have now rewritten it to reflect the following: Given a variant with a large number of mutations where we have reasonable suspicion that the structure is different, we would need to predict the structure anew (as well as obtaining AA distributions via ProteinMPNN for the new structure). The difficulty, as you rightly point out, is how we can compare the structure of the multi-mutant variant with, say, a single-mutant variant. The $k_p$ and $k_H$ terms will still be comparable, provided that the local environments are extracted from the different structures. The difficulty lies with the distance kernel, $k_d$. One solution would be to align the two structures, which is $O(n^3)$ in computational complexity, and calculate the distance between the mutants, where the locations are found in their respective structures. Though not ideal, this could be a workable solution. **Limitations:** - "To my mind..." We agree that our model needs to be evaluated more thoroughly in multi-mutant settings.
We are currently looking into such benchmarks like the suggested paper or the GB1/AAV landscapes from FLIP and will include these in the revised paper. --- Rebuttal 2: Title: Revised section 2.2 Comment: *The following is the revised section 2.2 “Kernel methods for protein sequences” (formerly “Kernel methods”), followed by the newly added section 2.3 “Uncertainty quantification and calibration”, and references. We split them into separate comments due to character constraints.* **2.2 Kernel methods for protein sequences** Kernel methods have seen much use for protein modeling and specifically protein property prediction. Sequence-based string kernels operating directly on the protein amino acid sequences are one such example, where, e.g., matching k-mers at different ks quantify covariance. This has been used with support vector machines to predict protein homology [24,25]. Another example is sub-sequence string kernels, which in [26] is used in a Gaussian process for a Bayesian optimization procedure. In [27], string kernels were combined with predicted physico-chemical properties to further improve accuracy in the prediction of MHC-binding peptides and in protein fold classification. In [28], a kernel leveraging the tertiary structure for a protein family represented as a residue-residue contact map, was used to predict various protein properties such as enzymatic activity and binding affinity. In [29], Gaussian process regression (GPR) was used to successfully identify promising enzyme sequences which were subsequently synthesized showing increased activity. In [30], the authors provide a comprehensive study of kernels on biological sequences which includes a thorough review of the literature as well as both theoretical, simulated, and in-silico results. Most similar to our work is mGPfusion [31], in which a weighted decomposition kernel was defined which operated on the local tertiary protein structure in conjunction with a number of substitution matrices. 
Simulated stability data for all possible single mutations were obtained via Rosetta [32], which was then fused with experimental data for accurate ∆∆G predictions of single- and multi-mutant variants via Gaussian process regression, thus incorporating both sequence, structure, and a biophysical simulator. In contrast to our approach, the mGPfusion-model does not leverage pretrained models, but instead relies on substitution matrices for its evolutionary signal. A more recent example of kernel-based methods yielding highly competitive results is xGPR [33], in which GPs with custom kernels show high performance when trained on protein language model embeddings, similarly to the sequence kernel in our work (see Section 3). They use a set of novel random feature-approximated kernels with linear-scaling, where we use the squared exponential kernel. Moreover xGPR does not model structural environments, unlike our structure kernel. Their methods were shown to provide both high accuracy and well-calibrated uncertainty estimation on the FLIP and TAPE benchmarks. --- Rebuttal 3: Title: New section 2.3 Comment: **2.3 Uncertainty quantification and calibration** Uncertainty quantification (UQ) for protein property prediction continues to be a promising area of research with immediate practical consequences. In [34], residual networks were used to model both epistemic and aleatoric uncertainty for peptide selection. In [35], GPR on MLP-residuals from biLSTM embeddings was used to successfully guide in-silico experimental design of kinase binders and protein fluorescence, amongst others. The authors of [36] augmented a Bayesian neural network by placing biophysical priors over the mean function by directly using Rosetta energy scores, whereby the model would revert to the biophysical prior when the epistemic uncertainty was large. This was used to predict fluorescence, binding and solubility for drug-like molecules. 
In [37], state-of-the-art performance on protein-protein interactions was achieved by using a spectral-normalized neural Gaussian process [38] with an uncertainty-aware transformer-based architecture working on ESM-2 embeddings. In [39], a framework for evaluating the epistemic uncertainty of deep learning models using confidence interval-based metrics was introduced, while [40] conducted a thorough analysis of uncertainty quantification methods for molecular property prediction. Here, they highlighted the importance of supplementing confidence-based calibration with error-based calibration as introduced in [41], whereby the predicted uncertainties are connected directly to the expected error for a more nuanced calibration analysis. We evaluate our model using confidence-based calibration as well as error-based calibration following the guidelines in [40]. In [42], the authors conducted a systematic comparison of UQ methods on molecular property regression tasks, while [43] investigated calibratedness of regression models for material property prediction. In [44], the above approaches were expanded to protein property prediction tasks where the FLIP [5] benchmark was examined, while [45] benchmarked a number of UQ methods for molecular representation models. In [46], the authors developed an active learning approach for partial charge prediction of metal-organic frameworks via Monte Carlo dropout [47] while achieving decent calibration. In [48], a systematic analysis of protein regression models was conducted where well-calibrated uncertainties were observed for a range of input representations. --- Rebuttal 4: Title: References used in section 2.2 Comment: [5] Christian Dallago, Jody Mou, Kadina E. Johnston, Bruce J. Wittmann, Nicholas Bhattacharya, Samuel Goldman, Ali Madani, and Kevin K. Yang. FLIP: Benchmark tasks in fitness landscape inference for proteins, January 2022 [24] Christina Leslie, Eleazar Eskin, and William Stafford Noble. 
The spectrum kernel: A string kernel for SVM protein classification. In Biocomputing 2002, pages 564–575. WORLD SCIENTIFIC, December 2001. [25] Christina S. Leslie, Eleazar Eskin, Adiel Cohen, Jason Weston, and William Stafford Noble. Mismatch string kernels for discriminative protein classification. Bioinformatics, 20(4):467–476, March 2004. [26] Henry Moss, David Leslie, Daniel Beck, Javier Gonzalez, and Paul Rayson. Boss: Bayesian optimization over string spaces. Advances in neural information processing systems, 33:15476–15486, 2020. [27] Nora C Toussaint, Christian Widmer, Oliver Kohlbacher, and Gunnar Rätsch. Exploiting physico-chemical properties in string kernels. BMC bioinformatics, 11:1–9, 2010. [28] Philip A. Romero, Andreas Krause, and Frances H. Arnold. Navigating the protein fitness landscape with Gaussian processes. Proceedings of the National Academy of Sciences, 110(3):E193–E201, January 2013. [29] Jonathan C Greenhalgh, Sarah A Fahlberg, Brian F Pfleger, and Philip A Romero. Machine learning-guided acyl-acp reductase engineering for improved in vivo fatty alcohol production. Nature communications, 12(1):5825, 2021. [30] Alan Nawzad Amin, Eli Nathan Weinstein, and Debora Susan Marks. Biological sequence kernels with guaranteed flexibility. arXiv preprint arXiv:2304.03775, 2023. [31] Emmi Jokinen, Markus Heinonen, and Harri Lähdesmäki. mGPfusion: Predicting protein stability changes with Gaussian process kernel learning and data fusion. Bioinformatics, 34(13):i274–i283, July 2018. [32] Andrew Leaver-Fay, Michael Tyka, Steven M Lewis, Oliver F Lange, James Thompson, Ron Jacak, Kristian W Kaufman, P Douglas Renfrew, Colin A Smith, Will Sheffler, et al. Rosetta3: an object-oriented software suite for the simulation and design of macromolecules. In Methods in enzymology, volume 487, pages 545–574. Elsevier, 2011. [33] Jonathan Parkinson and Wei Wang.
Linear-scaling kernels for protein sequences and small molecules outperform deep learning while providing uncertainty quantitation and improved interpretability. Journal of Chemical Information and Modeling, 63(15):4589–4601, 2023. [34] Haoyang Zeng and David K Gifford. Quantification of uncertainty in peptide-mhc binding prediction improves high-affinity peptide selection for therapeutic design. Cell systems, 9(2):159–166, 2019. [35] Brian Hie, Bryan D. Bryson, and Bonnie Berger. Leveraging Uncertainty in Machine Learning Accelerates Biological Discovery and Design. Cell Systems, 11(5):461–477.e9, November 2020. [36] Hunter Nisonoff, Yixin Wang, and Jennifer Listgarten. Coherent blending of biophysics-based knowledge with bayesian neural networks for robust protein property prediction. ACS Synthetic Biology, 12(11):3242–3251, 2023. [37] Young Su Ko, Jonathan Parkinson, Cong Liu, and Wei Wang. Tuna: An uncertainty aware transformer model for sequence-based protein-protein interaction prediction. bioRxiv, pages 2024–02, 2024. [38] Jeremiah Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax Weiss, and Balaji Lakshminarayanan. Simple and principled uncertainty estimation with deterministic deep learning via distance awareness. Advances in neural information processing systems, 33:7498–7512, 2020. --- Rebuttal 5: Title: References used in section 2.3 Comment: [39] Fredrik K. Gustafsson, Martin Danelljan, and Thomas B. Schön. Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision, April 2020. [40] Gabriele Scalia, Colin A. Grambow, Barbara Pernici, Yi-Pei Li, and William H. Green. Evaluating Scalable Uncertainty Estimation Methods for Deep Learning-Based Molecular Property Prediction. ACS, 2020. [41] Dan Levi, Liran Gispan, Niv Giladi, and Ethan Fetaya. Evaluating and Calibrating Uncertainty Prediction in Regression Tasks, February 2020. [42] Lior Hirschfeld, Kyle Swanson, Kevin Yang, Regina Barzilay, and Connor W Coley. 
Uncertainty quantification using neural networks for molecular property prediction. Journal of Chemical Information and Modeling, 60(8):3770–3780, 2020. [43] Kevin Tran, Willie Neiswanger, Junwoong Yoon, Qingyang Zhang, Eric Xing, and Zachary W Ulissi. Methods for comparing uncertainty quantifications for material property predictions. Machine Learning: Science and Technology, 1(2):025006, 2020. [44] Kevin P Greenman, Ava P Amini, and Kevin K Yang. Benchmarking uncertainty quantification for protein engineering. bioRxiv, pages 2023–04, 2023. [45] Yinghao Li, Lingkai Kong, Yuanqi Du, Yue Yu, Yuchen Zhuang, Wenhao Mu, and Chao Zhang. Muben: Benchmarking the uncertainty of molecular representation models. Transactions on Machine Learning Research, 2023. [46] Stephan Thaler, Felix Mayr, Siby Thomas, Alessio Gagliardi, and Julija Zavadlav. Active learning graph neural networks for partial charge prediction of metal-organic frameworks via dropout monte carlo. npj Computational Materials, 10(1):86, 2024. [47] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1050–1059, New York, New York, USA, 20–22 Jun 2016. PMLR. [48] Richard Michael, Jacob Kæstel-Hansen, Peter Mørch Groth, Simon Bartels, Jesper Salomon, Pengfei Tian, Nikos S. Hatzakis, and Wouter Boomsma. A systematic analysis of regression models for protein engineering. PLOS Computational Biology, 20(5):e1012061, May 2024. --- Rebuttal Comment 5.1: Comment: I thank the authors for engaging in such depth with reviewer feedback. Accordingly I have raised my score to a 7. I look forward to reading the camera ready version of the paper.
Summary: The authors suggest a method to predict the effect of mutations given sparse data. Their method is based on identifying the similarity of different sites on a protein by embeddings from large language models and structure. They show that their method achieves state-of-the-art mutation effect prediction. They also show that their method is only slightly overconfident. Strengths: The method performs substantially better than state of the art prediction. The method combines a variety of methods to build a prior on the effect of mutations. The authors include a codebase that makes this method easy to use for practitioners. Weaknesses: Epistasis is only included through sequence embeddings; in particular, the impact of structure is purely linear. Despite not suggesting any radically new technique, this is a practical method that cleverly and effectively uses available tools. For this reason however, I think it is reasonable to expect that the authors try hard to build as strong a method as possible. In particular, this paper is missing a more thorough investigation of model choices -- what if the Hellinger kernel is replaced with something else? what if a different kernel is used to compare embeddings? what if equation 2 is replaced with a kernel with a term for every combination of the 3 kernels? what if the kernel is meta-learned on a subset of ProteinGym and applied to the rest? The paper would be substantially strengthened by applying the methodology of Duvenaud, David. 2014. “Automatic Model Construction with Gaussian Processes.” University of Cambridge. Technical Quality: 4 Clarity: 2 Questions for Authors: Could you more thoroughly investigate building an accurate kernel method on this data? For example, trying nonlinear structure kernels might help? Using something like $\tilde k(X, X')=\exp(-k(X, X'))$ or an IMQ formulation as in Amin, Alan Nawzad, Eli Nathan Weinstein, and Debora Susan Marks. 2023.
“Biological Sequence Kernels with Guaranteed Flexibility.” arXiv [Stat.ML]. arXiv. http://arxiv.org/abs/2304.03775. might be a reasonable thing to try. Or more flexible combinations of kernels as described in the weaknesses section. Confidence: 5 Soundness: 4 Presentation: 2 Contribution: 2 Limitations: Partially addressed. I would like a longer discussion about epistasis mentioned in the weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thorough insights and ideas on kernel improvements. **Weaknesses:** - “Epistasis is only included through sequence embeddings; in particular, the impact of structure is purely linear.” This is true and to some extent by design. With CoVES, Ding et al. showed impressive results for modeling mutation preferences via simple one-hot encodings and a logistic predictor (and sufficient data). We wondered whether a similar site-specific approach could lead to a performant model, i.e. without explicitly modeling epistasis to keep the model simple, which turned out to be the case. The inclusion of epistatic signals via the sequence embeddings further increased performance, as one would expect. It is certainly possible that the construction of a non-linear global kernel which directly models epistasis could lead to increased performance; particularly in multi-mutant settings. We have updated the method section to describe this. - “Despite not suggesting any radically new technique, this is a practical method that cleverly and effectively uses available tools. For this reason however..." You raise many good questions and suggestions! Prior to submitting our manuscript, our model has undergone many iterations and much testing, including some of the suggestions raised above. For the site-comparison kernel ($k_H$), we have previously experimented with using the Jensen-Shannon divergence (as opposed to the Hellinger distance) and even a simple squared exponential kernel. Here, we saw a minor decrease (non-significant, however) when using the JSD and a larger decrease when using an SE kernel. For the mutation-comparison ($k_p$) and distance kernels ($k_d$), we have experimented with both exponential and squared exponential kernels, however without significant differences in predictive performance on test sets. 
For the sequence kernel, we experimented with using Mátern kernels (5/2 and 3/2) which, once again, only led to minor differences. We have now included additional ablation results where we use the JSD (in $k_H$) and Mátern 5/2 (in $k_\text{seq}$). As for kernel composition, we formulated Kermut as a sum of the structure and sequence kernels due to the additive interpretation given by Duvenaud, where adding kernels is roughly equivalent to a logical OR operation. The model can then leverage either structure or sequence similarities, depending on the presence and strength of each, as determined through the hyperparameter fitting. Using a structured approach such as the Automated Model Construction by Duvenaud might however give rise to a better model. We have now run a set of experiments using this approach with the sum and multiplication operations on our four base kernels (the three kernels from $k_\text{struct}$ and $k_\text{seq}$) on a subset of the ProteinGym benchmark. In each round, we observed increased predictive performance on the respective test sets. The resulting model was a multiplication of all components, i.e., $k_\text{struct} \times k_\text{seq}$. We now also include ablation results for this model, where it achieves slightly better performance than our proposed kernel - however within a margin of error (0.662 vs. 0.659 in average Spearman correlation on ablation datasets). For the product kernel, we observe slightly worse calibration in terms of ECE/ENCE with 0.060/0.192 vs. 0.051/0.179 for the original formulation. **Questions:** - “Could you more thoroughly investigate building an accurate kernel method on this data?..." We agree that it would improve the paper to include both richer kernels and more exhaustive baseline comparisons. We have initiated work on implementing the IMQ-H and mGP kernels, and anticipate that these results will be ready in time for the camera-ready version. **Limitations:** - “Partially addressed. 
I would like a longer discussion about epistasis mentioned in the weaknesses section.” Good point. We have now described to what extent epistasis is (and isn’t) incorporated in the model (see the response to your first raised weakness) in the method section and reflect further on it in the discussion. --- Rebuttal Comment 1.1: Title: Response Comment: I appreciate you've included a longer discussion of epistasis. I appreciate the results in CoVES and I think you're right that including epistasis is unlikely to substantially increase values on the tested benchmarks. However, I think drawing the conclusion from CoVES that epistasis doesn't matter is not sound; rather I think that this is a result of their benchmark being "hackable". Ultimately, I similarly hope you endeavor to do more than fit single and double mutants in ProteinGym: the sorts of kernels you design could be useful in fitting more diverse data (say from an iterative design experiment) or predicting epistasis as its own end -- in these cases nonlinearity could be more useful. It seems you've done a lot of experiments. I would suggest dumping as many of those results into the appendix as is convenient for you, even if they don't result in major differences. Your additions address my main concerns about the potential to improve the method and I'm happy to recommend the paper be accepted. --- Reply to Comment 1.1.1: Comment: Thank you for your quick response - and the raised score! We agree that epistasis most likely plays a significant role when multiple mutations are introduced. We will make sure to include additional results in the final paper.
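As a concrete illustration of the sum-vs-product kernel composition discussed in the rebuttal above (Duvenaud's additive "OR" versus multiplicative "AND" interpretation), here is a minimal sketch. The feature matrices and the squared-exponential base kernels are illustrative placeholders, not Kermut's actual structure and sequence kernels.

```python
import numpy as np

# Hypothetical stand-ins for per-variant structure and sequence representations.
rng = np.random.default_rng(0)
n = 5
struct_feats = rng.normal(size=(n, 20))  # e.g. structure-derived features
seq_feats = rng.normal(size=(n, 32))     # e.g. sequence embeddings

def sq_exp(X, lengthscale):
    """Squared-exponential kernel matrix on the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

K_struct = sq_exp(struct_feats, 4.0)
K_seq = sq_exp(seq_feats, 6.0)

K_sum = K_struct + K_seq   # OR-like: similar in EITHER view -> high covariance
K_prod = K_struct * K_seq  # AND-like: must be similar in BOTH views

# Both compositions remain valid (symmetric, positive semi-definite) kernels.
for K in (K_sum, K_prod):
    assert np.allclose(K, K.T)
    assert np.linalg.eigvalsh(K).min() > -1e-8
```

The closure of kernels under addition and elementwise (Schur) product is what makes such a search over compositions well-defined: every candidate in the search tree is still a valid GP covariance.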
Summary: This paper proposes a kernel regression model to predict protein mutational effects. The model includes kernel functions crafted for the task. In specific, it includes a kernel that measures sequence similarity based on ESM-2 features, a local structure similarity kernel based on ProteinMPNN probability, and other kernels that impose priors such as spatial correlation between mutation sites. Strengths: - Kernels are carefully designed. Using ESM-2 and ProteinMPNN features to construct kernel functions is well-motivated. The effect of each kernel is well-justified via ablation studies (Table 2). - Uncertainty quantification capability of Gaussian process is valuable for making decisions in wet labs, which has been often neglected in previous work on protein variant effect prediction. This work provides such quantification and further insight into it. - The model is much faster than purely neural network-based methods which require at least one forward pass per mutation. Kermut is efficient because the sequence and structure features are computed only once for a protein sequence. - Kermut achieves significant better performance than baselines. It is a good demonstration of making use of pretrained neural network features with statistical methods when training data is not that much and interpretability is desirable. Weaknesses: - Current formulation of Kermut does not provide transferability to different protein sequences, while previous zero-shot prediction methods are capable of predicting variant effects without prior experimental data on the same sequence. Technical Quality: 3 Clarity: 3 Questions for Authors: - Why is the structure kernel function based on amino acid probability rather than the last-layer feature of ProteinMPNN? And why is the sequence kernel based on last-layer features rather than probability? What is the consideration behind these choices? - Why is it reasonable to assume that mutations on the same site have similar effects (L135)? 
It seems that the effects depend more on the amino acid types. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their efforts. **Weaknesses:** - “Current formulation of Kermut does not provide transferability to different protein sequences, while previous zero-shot prediction methods are capable of predicting variant effects without prior experimental data on the same sequence.” This is true - Kermut only works in an assay-specific, supervised manner. Kermut instead leverages transfer learning from other models (including zero-shot predictions) to drive its performance. **Questions:** - "Why is the structure kernel function based on amino acid probability rather than the last-layer feature of ProteinMPNN? And why is the sequence kernel based on last-layer features rather than probability? What is the consideration behind these choices?" Good questions! Embeddings extracted from pretrained pLMs like ESM-2 have been shown to be useful for downstream tasks in a number of studies and is thus a straightforward choice of input space to a kernel. We only use the ProteinMPNN output from the wild type protein structure. We could also have chosen to use the last layer node embeddings of ProteinMPNN for each position. This would, however, not include the knowledge of which amino acids the individual sites have been mutated to, but only which site. In other words, there would be no way for the kernel to distinguish between mutations at the same site, since that site would have a high-dimensional vector representation: the information for mutations 73A and 73V would both be represented in this vector for site 73. By using the probability distribution, we have a direct correspondence between elements in our “structure embedding” (the 20 dimensional AA distribution) and the amino acid of interest for a particular variant as modeled by the probability kernel $k_p$. 
While it is possible to use last-layer embeddings from a structure-based model for the site-comparison kernel ($k_H$), we would need to devise a different kernel for mutation comparisons ($k_p$), which, in our current approach, is handled simultaneously. The probability distribution similarly reflects amino acid preferences as shown in the literature. If two sites exhibit high probability for two distinct mutations, these mutations, according to our hypothesis, are likely to lead to a positive (or at least non-negative) change in the functional value. The probabilities therefore offer a convenient way of modeling similarity w.r.t. function values, which is central to the GP. We have now made this point clearer in the method section. - "Why is it reasonable to assume that mutations on the same site have similar effects (L135)? It seems that the effects depend more on the amino acid types." If for example an amino acid in the active site of an enzyme is mutated, we expect that most mutations would have a similar negative effect on the activity of the protein. Similarly, we might find that several mutations at a specific site on the surface of a protein might stabilize/destabilize the protein. The amino acid type to which we mutate certainly matters, which is why we model entire sites ($k_H$) and specific mutations ($k_p$). The paragraph was meant to motivate the choice of kernel, but may have been confusing, and we have thus revised it.
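The role of the per-site amino-acid distributions discussed above can be illustrated with a small sketch of a Hellinger-based site kernel. The exponentiated form and the `gamma` parameter are illustrative assumptions, not necessarily the exact formulation of $k_H$ in the paper.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (here, 20-dim
    amino-acid distributions, e.g. output by an inverse-folding model)."""
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

def site_kernel(p, q, gamma=1.0):
    """Exponentiated negative Hellinger distance between two sites.
    A hypothetical form; the paper's k_H may be parameterized differently."""
    return np.exp(-gamma * hellinger(p, q))

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(20))  # site i's amino-acid distribution
q = rng.dirichlet(np.ones(20))  # site j's amino-acid distribution

# Identical distributions give maximal similarity; since the Hellinger
# distance is bounded by 1, the kernel is bounded below by exp(-gamma).
assert site_kernel(p, p) == 1.0
assert np.exp(-1.0) <= site_kernel(p, q) < 1.0
```

Because each element of the 20-dimensional distribution corresponds to a specific amino acid, a companion mutation kernel (like $k_p$) can read off the probability of the exact substitution being compared, which is the correspondence the rebuttal argues a generic last-layer embedding would lose.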
Rebuttal 1: Rebuttal: We would first and foremost like to thank all reviewers for their constructive, high-quality reviews. Based on these valuable inputs, we have made a series of alterations to our manuscript, which we will list here. These will be described in greater detail in the individual rebuttals: - We have made a large revision/expansion of our related works section. We have specifically expanded section 2.2 on kernel methods greatly; we have added a dedicated section to uncertainty quantification and calibration for protein property prediction; and we have expanded the section on local environments to give a more transparent view of the literature. The updated section 2.2 and the new section 2.3 can be seen in full as official comments to reviewer Fowf. (Fowf) - We have changed method section 3.3 (“Kermut”) to give a more concise and straightforward introduction to our kernel, which motivates both how we present it, the individual components, and the kernel composition. (RDFU, Fowf, 1cc6) - We include in the appendix a model selection section, where we apply an automatic model selection approach on a subset of the ProteinGym datasets (see “Automatic Model Construction with Gaussian Processes” by Duvenaud, 2014). This gives rise to a $k_{\text{struct}} \times k_{\text{seq}}$-kernel which performs slightly better in predictive performance than our $k_{\text{struct}} + k_{\text{seq}}$-kernel with slightly worse calibration (the performance is within the margin of error). (RDFU) - We have expanded our ablations to include additional kernel formulations: Mátern 5/2 in sequence kernel, JSD in site-kernel, product of sequence/structure kernel instead of sum. (RDFU, Fowf) - We have added a discussion of computational complexity in the appendix. (Fowf) - We have added visualizations of hyperparameters to the appendix. See added PDF for a summarized version. 
(Fowf) - We now explicitly mention how we handle and do not handle epistasis in the methods and discussion. (RDFU) We once again thank the reviewers for their insights and suggestions and believe that the resulting manuscript has been strengthened greatly. Pdf: /pdf/d637b957d2ffa67bbe0877d698b5c20d7346c67d.pdf
NeurIPS_2024_submissions_huggingface
2024
Certified Adversarial Robustness via Randomized $\alpha$-Smoothing for Regression Models
Accept (poster)
Summary: This paper considers randomized smoothing for (unbounded) regression models. An $\alpha$-trimming procedure is proposed in order to increase certification strength. Experiments to illustrate the effectiveness of the approach are conducted on synthetic datasets as well as camera re-localization tasks. Strengths: The problem of scaling up certified robustness (e.g., via randomized smoothing-based methods) to regression models is an important one to consider, and is certainly of interest to the Neurips audience. Weaknesses: Overall, the paper could use an extensive revision and polishing. There is quite a bit of terminology used throughout the paper that is not clearly defined, and the assumptions underlying some of the mathematical results are not made explicit/clear. In particular, some language from classification seems to be infiltrating the supposed regression setting of this paper, which makes things very difficult to understand. The paper seems to be heavily dependent on [17], with the only notable novelty being the incorporation of $\alpha$-trimming into the smoothing framework. See the Questions section below. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Line 68: You mention the prior works [17, 19, 7, 4, 16] that develop smoothing methods for regression models very briefly, and assert that they all suffer from limitations. However, more explanation is needed to distinguish your contributions from theirs. I suggest including more in-depth discussion on the contributions of these works, and clarify how your method differs/overcomes their particular limitations. 2. In the introduction, you use the acronym CR to denote "certified robustness," but in Section 3, you use CR to denote "certified regression." Please only use this acronym to denote one thing throughout the paper. 3. Lines 93 and 116: "...is the lower bound on the probability of accepting prediction in ith output variable." What does this mean? 
If we're dealing with regression models, what does "accepting prediction" of a particular output variable mean? 4. Line 118: What is the "accepted region"? 5. Line 118: So does Proposition 1 not apply to $\ell_1$ threats? 6. Figure 1: I don't find this figure particularly enlightening/explanatory. For example, what is the index $m$ here? Also, what is the index $k$ here? In your explanation of $\alpha$-trimming, you use $k$ to denote an arbitrary index (after computing the order statistics). Furthermore, why is the region around $y_{1:k}$ nonconvex, but the region around $y_{m:t}$ convex? In your Theorem 2 statement, you assume that the "accepted region" (again, this is not clearly defined anywhere) for each output is convex. Finally, what is the difference between the outputs and regions ($y_{1:k}$, $y_{m:t}$, and their green regions) within the red dashed box, and those same outputs and regions outside the red dashed box? 7. Line 150: "...the following equality holds..." I think you mean inequality. 8. Line 151: What is $q$ in (9)? Is this allowed to be any real number in $[0,1]$? 9. Line 157: Is it ever possible for the change from $q$ to $I_q(n-[\alpha n], n+[\alpha n])$ to be a decrease? 10. Theorem 2: One of the powers of randomized smoothing (for classification) is that it allows us to "robustify" a non-robust base classifier. However, it seems like the strength of your robustness certificate for your smoothed classifier depends heavily on the underlying robustness of the base classifier. That is, it appears as though in order for your smoothed classifier to have a high level of robustness, you need your base classifier to have a high level of robustness. Can you please discuss and clarify these aspects? If this interpretation is correct, this could result in significant limitations. 11. Algorithm 1, Line 5: What does it mean for a continuous-valued regression prediction to be "correct"? 
Isn't the "correct" output for a given input a singleton in $\mathbb{R}^t$ and hence a measure-zero set? 12. Line 249/Section 5: I suggest moving this paragraph to around Lines 68--71 to better clarify your contributions at the outset. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
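The $\alpha$-trimming the review refers to can be sketched as a Monte-Carlo smoothed regressor. This is a hypothetical sketch assuming the usual trimmed-mean construction (sort the $n$ noisy outputs, drop the $[\alpha n]$ smallest and largest, average the rest); the paper's exact estimator and sampling scheme may differ.

```python
import random
import statistics

def alpha_trimmed_smooth(f, x, sigma=0.5, n=100, alpha=0.1, seed=0):
    """Monte-Carlo alpha-trimmed smoothing of a scalar regressor f:
    sample n Gaussian perturbations of x, sort the outputs, drop the
    [alpha * n] smallest and largest, and average the remainder."""
    rng = random.Random(seed)
    ys = sorted(f(x + rng.gauss(0.0, sigma)) for _ in range(n))
    m = int(alpha * n)
    kept = ys[m:n - m] if m > 0 else ys
    return statistics.fmean(kept)

# A toy unbounded base regressor: well-behaved near 0, but returning a
# huge value outside [-2, 2], mimicking the outliers trimming must handle.
def base_regressor(x):
    return x if abs(x) < 2.0 else 1e6

y_trimmed = alpha_trimmed_smooth(base_regressor, 0.0)
assert abs(y_trimmed) < 0.5  # any extreme outputs are trimmed away
```

A plain (untrimmed) Monte-Carlo average would be dragged arbitrarily far by a single `1e6` sample, which is why trimming matters for unbounded outputs.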
Rebuttal 1: Rebuttal: **The problem of scaling up certified robustness (e.g., via randomized smoothing-based methods) to regression models is an important one to consider, and is certainly of interest to the Neurips audience.** The authors would like to express their gratitude to the reviewer for recognizing the significance of the defined problem for the NeurIPS audience. This study is the first to apply the randomized smoothing technique to a broad class of regression models with both bounded and unbounded outputs. It provides proofs of their performance under constraints such as maximum permitted latency and limited computational resources, which restrict the number of examination points. Additionally, the results are benchmarked against the critical task of camera re-localization, where reliability is crucial for robotic vision and autonomous systems. **Overall, the paper could use ...** We appreciate the reviewer's efforts in reading the manuscript and providing feedback. Although the authors conducted several rounds of proofreading before the final submission, some typos might still be present, which will be corrected in the camera-ready revision, especially those mentioned by the reviewer in the following questions. In terms of underlying assumptions, to the best of our knowledge, there are no further assumptions needed for any of the claimed results. While the certification setup was designed for regression problems, we acknowledge that steps and model parameters are shared between classification and regression settings. The core concept of the randomized smoothing approach was initially developed for classification models and has been adapted for use in regression settings, leading to some similarities in terminology. 
As noted in the manuscript and mentioned by the reviewer, this paper extends the results in [17] by incorporating $\alpha$-trimming into the smoothing process to address a broader class of regression models, including those with unbounded outputs which is the main focus of this paper. This new smoothing approach includes an end-to-end analysis and demonstrates through various theorems that output predictions are certifiable even with limited data points, unlike in [17] which required a large discount factor. Additionally, we present a robustness analysis against $\ell_p$ attacks, extending the $\ell_2$ attack analysis in [17]. The connection between the certification of regression and classification models is also provided to give readers a deeper understanding of their similarities. **Line 68: You mention the prior works ...** We thank the reviewer for highlighting the need to clarify the distinctions between this study and those referenced in [17, 19, 7, 4, 16]. Although these differences were briefly summarized in Line 68, the authors believe that a more detailed explanation in Section 5 (Related Work) provides a precise description of each study and their respective differences or limitations compared to our work. Our study presents a universal certification technique applicable to any regression model. If the explanations in Section 5 are deemed insufficient, we are happy to add further clarification. In summary, we noted that *``The most related works to our study are [4,17] where in the former study, the object detection was investigated through the lens of certified regression, however, their analysis is relying on the scaling output of classifier models to expand the range of output values which constrains the architecture of considered models, and in the latter the certification was provided for a class of bounded output regression model in the asymptotic case. 
Compared to these methods, our approach provides a probabilistic certificate for all regression models (including models with a wide range of outputs) with a limited number of evaluations through drawing noisy samples.’’* **In the introduction, you use the acronym...** Thank you for pointing out this typo. The redundant acronym for certified regression has been removed in the revised version. **Lines 93 and 116: "...is the lower bound on ...** Please note that in the context of classification, a model is considered certifiably robust if any perturbation in the input sample does not change the output label prediction. For example, if the predicted label for the input sample $\textbf{x}$ is class A, it remains (or is highly likely to remain) class A for the perturbed sample $\textbf{x} + \boldsymbol{\delta}$. However, in the context of regression, due to the continuity of the output variable, any perturbation in the input causes changes in the output, which are directly related to the model weights and the gradient of the output with respect to the input. Therefore, the notion of robustness in regression differs from that in classification. In this setting, robustness is defined by Definition 1. In summary, it states that for each output variable, the user can define an acceptable region around the model output ($\textbf{y}_i$) for the input sample $\textbf{x}$ using a chosen measure of dissimilarity ($diss_y(.,.)$) and a corresponding radius ($\epsilon_{y_i}$). If the output variability exceeds the defined region, the model is considered non-robust. **Line 118: What is the "accepted region"?** Based on Definition 1, the accepted region for the $i^{th}$ output variable, according to the user's level of tolerance, is defined by $\{z \mid diss_y(z,\textbf{y}_i) \leq \epsilon_{y_i}\}$. This region represents a neighborhood around the observed output $\textbf{y}_i$, within which the user is satisfied if the perturbed input data generates such outputs. 
This explanation has been added immediately after Definition 1 to reduce confusion. --- Rebuttal Comment 1.1: Comment: I have read and responded to the author rebuttal. I have decided to increase my score by 1 point. --- Rebuttal 2: Comment: **Line 118: So does Proposition 1 not apply to $\ell_1$ threats?** As described in Proposition 1 and its proof sketch, this result is valid only for $p\geq 2$ which excludes $\ell_1$ threats. However, other inequalities can be used to relate $\ell_1$ to the $\ell_2$ norm which might result in a different form of certificate radii which is not within our interest in this study. **Figure 1: I don't find this figure...** This figure visualizes the general case of certification for regression models, where outputs are allowed to vary within a non-convex region around the output prediction for a given sample $\textbf{x}$. The outputs can be jointly analyzed within smaller groups, denoted by indices $1:k$ and $m:t$. However, the results provided in this manuscript address a particular case where outputs are examined separately, and their accepted regions are assumed to be convex. Throughout the paper, output accepted regions are assumed to be convex, though the general problem of certification for regression models may involve non-convex output vicinities (less likely) that can be defined by the user depending on the applications. The figure caption has been changed to *General schematic of how $\alpha$-trimming can be applied to the base regressor with $\ell_2$-norm ball (can be $\ell_p$ norm in this paper and can be any neighboring function in general) defined for input vicinity and any form of convex (in this paper) or nonconvex (in general) set for the output vicinity. Furthermore, outputs can be examined separately (in this paper) or jointly with other outputs (in the general case as denoted by $\textbf{y}_{m:t}$), etc.* to mitigate any confusion. **Line 150: "...the following equality holds..." 
I think you mean inequality.** Yes, thanks for mentioning this typo. It has been fixed in the revised manuscript. **Line 151: What is q in (9)? Is this allowed to be any real number in [0,1]?** The parameter $q$ is a real number in $[0,1]$ and serves as the lower bound on the probability of observing a valid output in the base regression model. The range of this parameter has been added to the statement. **Line 157: Is it ever possible for the change from q to $I_q(n-[\alpha n],n+[\alpha n])$ to be a decrease?** The authors assume the reviewer means changing from $q$ to $I_q(n-[\alpha n],[\alpha n]+1)$ as stated in Theorem 2. This is an excellent question, and Proposition 2 was stated to address the same question. Please note that we are dealing with an unbounded scenario, and with probability $1-q$, some outputs can exceed the output vicinity around the observed $\textbf{y}_i$. In the worst-case scenario, which is always considered in robust certification, all these out-of-range outputs could be at infinity, and the $\alpha$-trimming filter might not be able to remove all of these outliers. In such cases, the average of the remaining points would be invalid. Therefore, it is crucial to find the minimum rate of filtering ($\alpha^+$) to ensure this probability improves, as described in Proposition 2. **Theorem 2: One of the powers of randomized smoothing ...** Thank you for this insightful question. The authors believe that the performance of the base classifier always impacts the overall performance of the smoothed classifier, irrespective of whether the setting is classification or regression. In classification tasks, this strong relationship is reflected in the gap between $\underline{p_A}$ and $\overline{p_B}$. If a base classifier performs poorly under input perturbations, this gap will shrink, diminishing the effectiveness of the final certification. 
In classification settings, there is only one other parameter ($\sigma$) that can potentially compensate for this gap. In regression settings, a similar relationship applies: better performance in the base regression model results in a better value for $q$. However, as demonstrated in Theorem 3, the final performance is also influenced by $\sigma$, $n$ and importantly $\alpha$. Using an appropriate value for $\alpha$ can significantly enhance the certification, as illustrated in the examples from Line 179 to Line 189, even when the base regression model performs poorly. **Algorithm 1, Line 5: What does it mean...?** The correctness here (similar to the response to the above questions) refers to the fact that the output variable is within the defined accepted region. The term “valid” might better reflect this fact and “correct” has been changed to “valid” in the revised manuscript. Additionally, the probability value should be changed from $\underline{p_{A_i}}$ to $I_{p_{A_i}}(n-[\alpha n],[\alpha n]+1)$, because no attack was involved yet. Then further in Line 6, we estimate the radius for the perturbation of input data $\textbf{x}$ that generates acceptable results with probability at least $P$. **Line 249/Section 5: I suggest moving...** This suggestion will be taken into account for the camera-ready version of the paper. --- Rebuttal Comment 2.1: Comment: **Line 118: So does Proposition 1 not apply to $\ell\_1$ threats?** Your choice to set $r=2$ in the proof of Proposition 1, which leads to the requirement that $p\ge 2$, appears to be completely arbitrary to me. Why make this restriction? If someone wants to use your result in the case of $p=1$, then it appears as though they can't, even though, according to your proof, they can by choosing $r=1$, $p=1$, and $q=\infty$. I suggest stating and proving the result (Proposition 1) in its most generally applicable form. 
**Figure 1: I don't find this figure...** Your new caption still does not address my question " Finally, what is the difference between the outputs and regions ($y\_{1:k}$, $y\_{m:t}$, and their green regions) within the red dashed box, and those same outputs and regions outside the red dashed box?" The interpretation of the figure is still unclear. **Line 150: "...the following equality holds..." I think you mean inequality.** Thanks for fixing this. **Line 151: What is q in (9)? Is this allowed to be any real number in [0,1]?** "The range of this parameter has been added to the statement." Thanks. **Line 157: Is it ever possible for the change from $q$ to $I\_q(n-\alpha[n],n+\alpha[n])$ to be a decrease?** I see that Proposition 2 gives a sufficient condition for an increase. Based on your response and discussion after Theorem 2, it seems that it is possible for a decrease to occur, in general. You should explicitly mention this as a possible limitation (in such easily understandable terms), as it essentially boils down to your $\alpha$-trimming procedure failing to increase the robustness of the base model in such cases. It makes the most sense to me to assert this limitation (again, in simple language such as "failure to robustify") at the end of the discussion following Theorem 2, and then to move into your sufficient conditions (for "successful robustification") of Proposition 2 thereafter. **Theorem 2: One of the powers of randomized smoothing...** Thanks for your response; it gives nice comparisons between the underlying parameters of smoothing in classification versus regression, and how they relate to robustification. I suggest including such an explanation in your manuscript somewhere around Theorem 3. **Algorithm 1, Line 5: What does it mean...?** "The term *valid* might better reflect this fact and *correct* has been changed to *valid* in the revised manuscript." 
If so, then please be sure to clearly define what you mean by "valid" (i.e., to be in the "accepted region," after you clearly define that). **Line 249/Section 5: I suggest moving...** "This suggestion will be taken into account for the camera-ready version of the paper." Thanks. --- Rebuttal 3: Comment: **Line 68: You mention the prior works... / Line 249/Section 5: I suggest moving...** "This suggestion will be taken into account for the camera-ready version of the paper." Thanks. **In the introduction, you use the acronym...** Thanks for fixing this. **Lines 93 and 116: "...is the lower bound on...** If "accepting prediction in the $i$th output variable" means that "$\mathbf{f}\_{\theta}(\mathbf{x}+\mathbf{e})_i \in \mathbf{N}\_y(\mathbf{y}\_i, \epsilon\_{y\_i})$", then you should explicitly and clearly state/define this (before using the language). Otherwise, it may not be immediately clear to the reader that this is what you mean. **Line 118: What is the "accepted region"?** "This explanation has been added immediately after Definition 1 to reduce confusion." Thanks. --- Rebuttal 4: Comment: **If "accepting prediction in the ith output variable" means that $\textbf{f}_\theta(x+e)_i \in \textbf{N}_y(y_i,\epsilon_i)$, then you should explicitly and clearly state/define this (before using the language). Otherwise, it may not be immediately clear to the reader that this is what you mean.** We agree. As requested by the reviewers, the definition of the accepted region is now clearly stated in the revised manuscript immediately following Definition 1. **Your choice to set $r=2$, in the proof of Proposition 1, which leads to the requirement that $p\geq 2$, appears to be completely arbitrary to me. Why make this restriction? If someone wants to use your result in the case of $p=1$, then it appears as though they can't, even though, according to your proof, they can by choosing $r=1$, $p=1$, and $q=\infty$. 
I suggest stating and proving the result (Proposition 1) in its most generally applicable form.** We appreciate the reviewer's suggestions to enhance the validity of the results for other norms. As the reviewer correctly noted, by setting $r=1$, we can relate the $\ell_1$ norm to the $\ell_p$ norm ($p \geq 1$). However, it is important to note that one of these norms must be the $\ell_2$ norm in order to apply Theorem 1, which provides an upper bound on the input perturbation. Therefore, we must set $r=2$, and the constraint $p \geq 2$ directly follows from this choice. **Your new caption still does not address my question "Finally, what is the difference between the outputs and regions ($y_{1:k}$,$y_{m:t}$, and their green regions) within the red dashed box, and those same outputs and regions outside the red dashed box?" The interpretation of the figure is still unclear.** Apologies if this part of the question was not previously addressed. As noted in the paper, we apply the $\alpha$-trim smoothing function $\textbf{g}_\alpha(\textbf{x})$ as a wrapper around the base regression model, using the same definition of the vicinity sets in both input and output, as illustrated in Figure 1. Consequently, everything within the red dashed box is related to base regression certification and the results provided in Theorem 1 and Proposition 1. The corresponding output regions outside the red dashed box represent the certification analysis for the smoothed function and the applicability of the improved certification results shown in Theorems 2 and 3, and Proposition 2, with the accepted regions remaining consistent with those in the base regression model. This additional explanation will also be included in the camera-ready version. **I see that Proposition 2 gives a sufficient condition for an increase. Based on your response and discussion after Theorem 2, it seems that it is possible for a decrease to occur, in general.
You should explicitly mention this as a possible limitation (in such easily understandable terms), as it essentially boils down to your $\alpha-$trimming procedure failing to increase the robustness of the base model in such cases. It makes the most sense to me to assert this limitation (again, in simple language such as "failure to robustify") at the end of the discussion following Theorem 2, and then to move into your sufficient conditions (for "successful robustification") of Proposition 2 thereafter.** This is an excellent suggestion that underscores the importance of the sufficient condition derived in Proposition 2. The authors are happy to add such an explanation in simple terms right before Proposition 2 in the camera-ready version. **Thanks for your response; it gives nice comparisons between the underlying parameters of smoothing in classification versus regression, and how they relate to robustification. I suggest including such an explanation in your manuscript somewhere around Theorem 3.** Thank you for your positive feedback on our response. As suggested, we will include this note in the camera-ready version of the paper. **"The term valid might better reflect this fact and correct has been changed to valid in the revised manuscript." If so, then please be sure to clearly define what you mean by "valid" (i.e., to be in the "accepted region," after you clearly define that).** Certainly, both the accepted region and the validity of the outputs will be clearly defined in the camera-ready version, as suggested by the reviewer. **I have read and responded to the author rebuttal. I have decided to increase my score by 1 point.** Thank you for taking the time to read our responses and for raising the score to a borderline reject. 
We have carefully addressed the other questions raised by the reviewer, and we hope that these new responses, along with the improvements made to the paper based on the reviewers' constructive comments, will result in a higher score. This, combined with the evaluations from the other reviewers, could lead to the publication of this new line of research in certifying machine learning models for regression tasks. --- Rebuttal Comment 4.1: Comment: I thank the authors for responding to my remaining concerns. In light of the promised revisions, I have increased my score by 1 more point. Overall, I think that this is still a borderline paper, and therefore I defer to the meta reviewers.
Summary: Prior work extends the notion of a 'certified robustness radius' from classification tasks to a regression task. The prior state of the art for calculating these certified robustness bounds in the regression setting had a major shortfall: it exhibited major instabilities when applied to values with an unbounded range. This paper suggests a method that is better conditioned for unbounded random variables. Strengths: The authors identify a popular bound in the classification case that currently does not have a practical analog for regression. They then develop a technique for extending such a bound to regression. Weaknesses: 1. The authors write that the method suggested by prior work in [17] fails for unbounded quantities in regression because it is 'unstable'; however, they do not back up this claim. It would be nice if the authors could demonstrate this with experiments or explain why the method of [17] could not be expected to work for unbounded quantities. 2. There were some important terms that were not defined at all, like "accepted region" and "bag" in the appendix. Technical Quality: 3 Clarity: 1 Questions for Authors: Suggestions/Questions: line 19: I suggest you note that adversarial training does provide some defense. line 43: it's not clear what 'normalized' $z$ means here. lines 126-128: I couldn't understand what this meant. figure 2: I'm having a hard time interpreting b and c. What is the blue line? What is the function $g$? section 3.3: it seems that the data points between the $\alpha$th and $1-\alpha$th percentiles are what you call the "accepted region"; make sure the "accepted region" is defined! line 228: % is in the wrong place. Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 3 Limitations: It seems that there is a lot of prior work on $\alpha$ filtering in other contexts. Is it discussed when $\alpha$ filtering tends to work well / not work well? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time spent on this manuscript, as well as their comments on the contribution of the work. Below, we reply to each comment separately. **The authors write that the method suggested by prior work in [17] fails for unbounded quantities in regression because it is 'unstable' however, they do not back up this claim. It would be nice if the authors could demonstrate this with experiments or explain why the method of [17] could not be expected to work for unbounded quantities.** A common assumption in [17] was the presence of lower and upper bounds for the model output for any input data. It was shown in [17] that as the difference between these bounds increases, the probabilistic certificate becomes impractical and approaches zero. This occurs because the averaging function used in the smoothing stage is highly sensitive to a single large outlier. Even one data point at the upper or lower bounds (assuming these bounds are large) can significantly alter the averaging function's result, making the output invalid (outside the accepted range). This fact was explained in Section 3.2. Therefore, in the unbounded scenario considered in this paper, the previous certificate is impractical due to the type of smoothing function used in that study. Figure 5 (top-left) illustrates the required discount factor (200%) as defined in [17] to achieve almost the same performance, highlighting the previous technique's failure for unbounded outputs. 
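To make this concrete, here is a minimal sketch (with hypothetical numbers, not taken from the paper or [17]) of the zero breakdown point of plain averaging versus an $\alpha$-trimmed mean when a single unbounded outlier appears:

```python
import numpy as np

def alpha_trimmed_mean(samples, alpha):
    """Discard the lowest and highest floor(alpha * n) samples, then average."""
    s = np.sort(samples)
    k = int(np.floor(alpha * len(s)))
    return s[k:len(s) - k].mean() if k > 0 else s.mean()

rng = np.random.default_rng(0)
clean = rng.normal(loc=5.0, scale=0.1, size=100)  # outputs inside the accepted region
corrupted = clean.copy()
corrupted[0] = 1e6                                # one unbounded adversarial output

# Plain averaging is shifted arbitrarily far by a single outlier ...
mean_shift = abs(corrupted.mean() - clean.mean())
# ... while the alpha-trimmed mean is essentially unchanged.
trimmed_shift = abs(alpha_trimmed_mean(corrupted, 0.05) - alpha_trimmed_mean(clean, 0.05))
print(mean_shift, trimmed_shift)
```

A single corrupted sample moves the plain average by roughly $10^4$ here, while the trimmed estimate barely moves; this is the instability that makes the averaging-based certificate degrade as the assumed output bounds grow.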
**There were some important terms that were not defined at all, like "accepted region" and "bag" in the appendix.** While the term "bag" simply refers to the set of data points left over after applying the $\alpha$-trimming filter, the term "accepted region" is now clearly defined after Definition 1: *Based on Definition 1, users can define a region for the $i^{th}$ continuous output variable by $\{z \mid diss_y(z,\textbf{y}_i) \leq \epsilon_{y_i}\}$ as the accepted region in which the output prediction can fall without being considered a wrong prediction.* This region is set by the user around $f(\textbf{x})$ to determine how much deviation is acceptable. For example, in camera re-localization in a 3D scene of size $100m \times 100m$, the user might reasonably accept up to 0.5m deviations in the predictions. **line 19: I suggest you note that adversarial training does provide some defense.** Thanks for highlighting this important point. The authors believe that adversarial training is indeed one of the best strategies to defend against adversarial examples in the inference stage. This will be added to the camera-ready version for the sake of completeness. **line 43: it's not clear what 'normalized' z means here.** The normalization of $\textbf{z}$ is included because some divergences can only be applied to probability mass functions. This choice is up to the user, but in our paper, we used the $\ell_p$ norm as the measure of similarity without imposing any constraints on $\textbf{z}$. **lines 126-128: I couldn't understand what this meant.** This has been explained in the response to the above question regarding the method in [17]. **figure 2: I'm having a hard time interpreting b and c. What is the blue line? what is the function g?** In both figures (b) and (c), the blue lines show the certified range around the centre points used as the input (e.g., $x_1=2, x_2=3$) for the base regression model ($f(\textbf{x})$) that is visualized in (a).
On the other hand, $g(\textbf{x})$ is the smoothing function defined in equation (8), applied to the results of the base regression with different $\alpha$ values. Figure 2 (b) demonstrates the certificate for the $\ell_2$ attack (circles around the evaluated points) and Figure 2 (c) demonstrates the certificate for the $\ell_{\infty}$ attack (squares around the evaluated points). In both figures, smoothing using $\alpha$-trimming increases the certified area, and higher $\alpha$ values enlarge this range further. **section 3.3: it seems that the data points between the $\alpha$-th and 1−$\alpha$-th percentiles are what you call the "accepted region" make sure the "accepted region" is defined!** The accepted region around a given value $y$, motivated by Definition 1, includes points whose dissimilarity to $y$ is smaller than a threshold defined by the user. This definition has been added to the revised manuscript as the reviewer requested. **line 228: % is in the wrong place.** Thanks for mentioning this typo. It has been fixed in the revised manuscript. **It seems that there is a lot of prior work on α filtering in other contexts. Is it discussed when $\alpha$-filtering tends to work well/ not work well?** $\alpha$-trimming is a popular technique for outlier rejection in robust statistics. While the appropriate filter design might differ from one application to another depending on the criteria, Proposition 2 and Theorem 3 state how this filter should be designed in the context of certified robustness, given a desired level of robustness and the number of samples. A summary of other approaches in other contexts, such as robust parameter estimation, will be added to the camera-ready version of the paper. --- Rebuttal Comment 1.1: Comment: **Unbounded quantities**: "It was shown in [17] that as the difference between these bounds increases, the probabilistic certificate becomes impractical and approaches zero." No, this was actually not discussed at all in section 3.2.
Neither were "discount factors" (and I'm not sure what this means). I think you should add such a discussion to this section to put your paper in context. **Definition of terms** This is not a standard use of the word "bag". You should either define it or re-write this section. As for the "accepted region" --- please make it clear in that sentence that you are defining a new term that will be used throughout the paper. The current presentation of this term is rather confusing. **$\alpha$-filtering in other contexts**: Could you describe such other criteria? I think it would be a nice way to connect your paper with the existing literature. **exposition:** Based on this exchange, I think you should work on improving the quality of the exposition of your paper. Are there further changes you plan to make to the exposition? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the additional feedback. We hope this discussion addresses enough of the remaining concerns to increase the score towards acceptance. **No, this was actually not discussed at all in section 3.2. Neither were "discount factors" (and I'm not sure what this means). I think you should add such a discussion to this section to put your paper in context.** In Section 3.2, as summarized in our previous reply, it was mentioned that *even a single adversarial point (in unbounded scenarios) can entirely shift the result of averaging into the invalid zone (from the user's perspective). This behavior is known as the zero breakdown point of averaging in robust statistics.* It was also noted that *it was shown that, for some cases where the assumed output bounds are loose, the certified bound in the input becomes worse than that of the base regression model.* We therefore believe the limitations of simple averaging smoothing have already been thoroughly discussed. However, if the reviewer still believes the discussion is deficient in some way, we would appreciate a pointer to the parts requiring clarification.
The “Discount factor” parameter was introduced in [17] (Section 4.3) to propose an approach for certifying bounded models in a finite sample regime. They used this positive parameter to apply a discount, which made the accepted region wider than that of the base regression model, aiming for better analytical results in worst-case scenarios. However, in our paper, we did not use any discount factor as described in [17]. We only mentioned it as a limitation of the approach in [17] when comparing our results in the camera re-localization task (Figure 5). For the sake of completeness, the discount factor will be explained in the camera-ready version. **This is not a standard use of the word "bag". You should either define it or re-write this section. As for the "accepted region"--- please make it clear in that sentence that you are defining a new term, that will be used throughout the paper. The current presentation of this term is rather confusing.** As the reviewer suggested, we will provide a precise definition of the term "bag" in the appendix where it is used. Concerning the term "accepted region," we appreciate the reviewer's suggestion and will clarify that this term will be used throughout the paper to refer to the neighbourhood defined in the output. **$\alpha$-filtering in other contexts: Could you describe such other criteria? I think it would be a nice way to connect your paper with existing literature** One of the primary uses of the $\alpha$-trimming filter is in data preprocessing and outlier rejection within signal processing, prior to parameter estimation. The adjustment of $\alpha$ in this context typically relies on prior knowledge about the proportion of data points that deviate from the nominal distribution, such as a Gaussian distribution. While this prior knowledge may not always be accurate, it has been widely utilized in signal processing to reduce the sensitivity of estimators. 
Another method for tuning $\alpha$ in parameter estimation is to consider efficiency at the nominal density. For instance, if no outliers are present in the dataset, how should $\alpha$ be set to achieve an estimation that closely matches the performance of its maximum likelihood counterpart? The authors appreciate the reviewer's suggestion and are eager to include a brief paragraph discussing the application of the $\alpha$-trimming approach in the literature, further supporting its selection in this new context. **exposition: Based on this exchange, I think you should work on improving the quality of the exposition of your paper. Are there further changes you plan to make to the exposition?** The authors are planning to only add/change the parts that are communicated to the reviewers during the rebuttal and discussion period. This includes parts related to the definition of terms “bag”, “discount factor” and “accepted region”, a short explanation for Figure 5 (see our rebuttal), application of $\alpha$-trimming in the literature, referring to the differences with the other works in the literature at the end of section 2, caption in Figure 2, etc. The authors are open to any other suggestions which the reviewer may believe would enhance the exposition of the work. If, however, the reviewer believes that there aren’t specific further additions, then we would appreciate the opportunity to publish the paper.
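As an aside, the efficiency-at-the-nominal-density criterion mentioned above can be checked with a quick Monte Carlo sketch (the sample sizes and trimming rate are hypothetical): on outlier-free Gaussian data, the $\alpha$-trimmed mean pays only a small variance penalty relative to the sample mean, its maximum-likelihood counterpart:

```python
import numpy as np

def alpha_trimmed_mean(x, alpha):
    """Average each row after discarding its lowest and highest floor(alpha * n) entries."""
    s = np.sort(x, axis=-1)
    k = int(np.floor(alpha * s.shape[-1]))
    return s[..., k:s.shape[-1] - k].mean(axis=-1)

rng = np.random.default_rng(1)
data = rng.normal(size=(20000, 50))  # 20000 replications of n = 50 nominal Gaussian samples

# Relative efficiency at the nominal density: Var(sample mean) / Var(trimmed mean).
efficiency = data.mean(axis=-1).var() / alpha_trimmed_mean(data, 0.1).var()
print(efficiency)  # close to 1: robustness costs little when no outliers are present
```

This is the trade-off the tuning criterion formalizes: the closer the ratio is to 1 at the nominal density, the less is sacrificed for the outlier protection the trimming provides.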
Summary: This work extends current randomized smoothing to the regression task via the $\alpha$-trimming filter. A new probabilistic certificate bound is given against the $l_p$ norm attack for all regression models with unconstrained output. Comprehensive synthetic simulations and an evaluation on the real-world camera re-localization task are conducted to demonstrate the effectiveness of the proposed method. Strengths: 1. This work gives good theoretical insights and rigorous proofs. 2. The derived certification bound is valid for any regression model with bounded or unbounded output. Weaknesses: **Give more straightforward examples for equation (12).** As the main improvement brought by the $\alpha$-trimming filter lies in $I_{n,\alpha}^{-1}(P)$, it is recommended to give a more straightforward illustration of the situation when $I^{-1}_{n,\alpha}(P)$ provides better certified robustness than $P$, such as a curve or a table showing different certified bounds with different $P$ and $\alpha$. **More explanation in Figures 4 and 5.** In Figure 4, after reading the experiment part, I still feel confused about why the predicted location of the camera is a trajectory rather than a point. Moreover, I have not found any further explanation for Figure 5. It is recommended to explain the experimental results more thoroughly, which may help readers understand them more easily. **Lack of comparative results.** This work lacks comprehensive comparative results with other certified robustness methods designed for the regression model. Technical Quality: 3 Clarity: 2 Questions for Authors: I am curious about the probability calculated from equations (6) and (9). Is there a closed-form solution for the probability, or is it obtained by sampling? If by sampling, will the sampling introduced by smoothing delay the prediction process of the regression model? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See the weaknesses.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors appreciate the reviewer's recognition of the theoretical rigor in the paper. This work represents the first universal certification framework for regression models designed to defend against adversarial examples during the inference stage. Below, we provide a detailed response and further insights to support our proposed methodology, demonstrating the value of our analysis; we hope they help in receiving higher scores and increasing the chance of publication. **Give more straightforward examples for equation (12). As the main improvement brought by α− trimming filter lies on ...** Equation (12) states that for a base regression model that generates outputs within the accepted region with probability $q$, the chance of observing a valid output after applying the $\alpha$-trimming filter becomes $\textit{{I}}_q(n-[\alpha n],[\alpha n]+1)$. In order to show this is an improvement even in the worst-case scenario, we proved that the filtering rate should be greater than a threshold denoted by $\alpha^+$. The inverse process has been used in Theorem 3 to estimate the certificate radii. Therefore, a plot of $\textit{{I}}_q(n-[\alpha n],[\alpha n]+1)$ vs $\alpha$ can better visualize this improvement. This visualization can be found in the PDF attached to this rebuttal. This new visualization will be added to the final camera-ready version of the manuscript: *Figure 1 shows two different models, one with $q=0.7$ and one with $q=0.9$. After applying $\alpha$-trimming, the obtained probabilities of validity are shown in blue and orange, respectively. In both settings, $\alpha^+$ values are shown as vertical dashed lines, and it can be observed that the success rate of the prediction ($\textit{{I}}_q(n-[\alpha n],[\alpha n]+1)$) is always greater than the assumed $q$ values for $\alpha \geq \alpha^+$.
As described in the example in Lines 179-189 of the manuscript, the corresponding $\alpha^+$ values are slightly greater than $1-q$ to ensure improvement even in worst-case scenarios.* **More explanation in Figures 4 and 5. In Figure 4, after reading the experiment part, I still feel confused about why the predicted location of the camera is a trajectory but not a point. Moreover, I have not found any further explanation for Figure 5. It's recommended to explain the experiment results more which may help readers understand easier.** Please note that for all three evaluated scenes, a single camera is continuously moving along a path and taking images. For example, in the Great Court scene, as mentioned in the experiment, 760 images were taken at different locations within the scene. Therefore, when all the predicted positions are shown together, a trajectory of the camera's locations is obtained. However, for each predicted location, the estimated certification radius, which is colour-coded for each point in Figure 4, gives a measure of how sensitive the prediction is to adversarial examples around the given image. Figure 5, on the other hand, reports the certified median error, as defined in Line 230, for the scenes in comparison with the certification of models with bounded outputs. In the sensitivity analysis section (Appendix E), some aspects of these curves are explained in detail. However, the following detailed explanation has been added to the main experiment section, as the reviewer suggested, to better explain the settings and the take-home messages of the results: *As shown in these plots, the $\alpha$-trimming filter consistently decreases the certified median error (orange curve) across all input perturbation ranges ($r$) compared to the results obtained by the base regression model (blue curve).
The main reasons for this improvement are, first, the better approximation of the position parameters through outlier removal and averaging with the $\alpha$-trimming filter, and second, the better certificate radii for each image in the scene, which decrease penalization in the certified median error calculation. Leveraging the $\alpha$-trimming approach for smoothing, we no longer need to worry about the output ranges, and no further assumptions such as a large sample size or a discount factor are required to provide a valid certificate.* **Lack of comparative results. This work lacks comprehensive comparative results with other certified robustness methods designed for the regression model.** Please note that one of the main claims of the paper is that this study is the first to propose a technique that provides certification for regression models with unbounded outputs. None of the existing techniques [17,19,7,4,16] is feasible in the context of unbounded outputs or certification at the inference stage, unless some assumed bounds are set for the output, as we did for the work [17] in Figure 5 (top left), which is the most closely related work to this study. Therefore, this study should be considered a standalone work and a baseline for future studies, not an incremental improvement of previous studies. **I am curious about the probability calculated from equation (6) and (9). Is there a closed-form solution to get the probability or is the probability gotten by sampling? If is by sampling, will the sampling brought by smoothing delay the prediction process of the regression model?** Thanks for this question.
Similar to the classification counterparts, the probability ${p_A}_i$ should be estimated using sampling strategies and Monte Carlo estimation; however, the lower bound estimate $\underline{p_A}_i$ is obtained by the Clopper-Pearson interval, which gives a lower bound estimate given the number of samples and the required confidence level. Therefore, the delay caused by sampling can be traded off against the tightness of the lower bound (using a smaller number of samples), and this is fully under the user's control. --- Rebuttal Comment 1.1: Title: Response to the author Comment: Thank you for your detailed response. My concerns are mostly addressed. After carefully reviewing the comments from the other reviewers, I keep my original score as borderline accept. --- Reply to Comment 1.1.1: Comment: We thank Reviewer wW8d for acknowledging receipt of the author rebuttal, and for acknowledging that their *"concerns are mostly addressed"*. If the reviewer might point us to which concerns (if any) remain unaddressed, we would appreciate this during the discussion period, as it would help improve the paper and provide us the valuable opportunity to work with the reviewer. The reviewer has stated *"After carefully reviewing the comments from the other reviewers, I keep my original score as borderline accept"*. Approximately 1.5 days later, one of the other reviewers increased their score. Moreover, there have been rebuttals and discussion beyond the other reviews. We would therefore respectfully ask if the reviewer has any further updates to their advice or scores. With thanks, The authors
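For concreteness, the two quantities discussed in the rebuttal above, the validity probability $\textit{{I}}_q(n-[\alpha n],[\alpha n]+1)$ from equation (12) and the Clopper-Pearson lower bound, can be sketched as follows; the sample counts are hypothetical and this is an illustration, not the paper's implementation:

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import beta, binom

def clopper_pearson_lower(k, n, confidence=0.999):
    """One-sided Clopper-Pearson lower bound on a binomial proportion k/n."""
    if k == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, k, n - k + 1)

def trimmed_validity_prob(q, n, alpha):
    """I_q(n - [alpha*n], [alpha*n] + 1): probability of a valid smoothed output
    when each of n base outputs is independently valid with probability q."""
    m = int(np.floor(alpha * n))
    return betainc(n - m, m + 1, q)

# The regularized incomplete beta is exactly a binomial tail probability:
n, alpha, q = 100, 0.1, 0.9
m = int(np.floor(alpha * n))
identity_holds = np.isclose(trimmed_validity_prob(q, n, alpha), binom.sf(n - m - 1, n, q))

# Monte Carlo estimate of q from 930 valid outputs among 1000 samples, then its lower bound:
q_lb = clopper_pearson_lower(930, 1000)
print(identity_holds, q_lb)
```

Raising the trimming rate $\alpha$ monotonically raises this validity probability, since more adversarially shifted samples can be discarded, at the cost of averaging over fewer samples.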
Rebuttal 1: Rebuttal: The authors thank the reviewers for their feedback and constructive comments. Please find a visualisation of the probabilistic certificates vs $\alpha$ in the attached PDF (Reply to reviewer wW8d). Pdf: /pdf/b0def38fe6ef72be294e84979ddb87496af8e2d1.pdf
NeurIPS_2024_submissions_huggingface
2024
Exploring Behavior-Relevant and Disentangled Neural Dynamics with Generative Diffusion Models
Accept (poster)
Summary: This study presents BeNeDiff, which integrates behavior-informed latent variable models with state-of-the-art generative diffusion models to investigate neural dynamics during behavioral tasks. The methodology involves identifying fine-grained and disentangled neural subspaces and synthesizing behavior videos to provide interpretable quantifications of neural activity related to specific behaviors. The authors validate their approach using multi-session widefield calcium imaging datasets, demonstrating high levels of neural disentanglement and reconstruction quality. Strengths: 1. Using wide-field calcium imaging for neural-behavior analysis is both innovative and interesting. 2. The approach of employing a behavior LVM for latent factor disentanglement, followed by a video diffusion model for behavior encoding, is both clear and impressive. 3. In particular, using $Z$ produced by the LVM as the learning target for the classifier in the video diffusion model is quite important, avoiding the issue of "label leakage" which may be introduced by classifier-free guidance generation. 4. The paper is well-organized, and the disentanglement results and behavior encoding results are both meaningful and useful. Weaknesses: 1. This work primarily relies on a behavior LVM for neural activity disentanglement, which can be influenced by the newly introduced total correlation (TC) penalty term weighted by $\beta$. The impact of this penalty should be considered through ablation studies. **Not a Weakness**: There is significant research combining behavioral labels with latent variables of neural activity [1,2]; considering the effects of different disentanglement algorithms on behavior encoding would further enhance this study. [1] Zhou, Ding, and Xue-Xin Wei. "Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE." Advances in Neural Information Processing Systems 33 (2020): 7234-7247.
[2] Schneider, Steffen, Jin Hwa Lee, and Mackenzie Weygandt Mathis. "Learnable latent embeddings for joint behavioural and neural analysis." Nature 617.7960 (2023): 360-368. Technical Quality: 3 Clarity: 4 Questions for Authors: I have no questions. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: See the Weaknesses part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
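As background for the total correlation (TC) penalty discussed in this review: TC measures the statistical dependence among latent dimensions, and for a Gaussian latent it has a simple closed form. The sketch below is a generic illustration of that quantity, not the estimator used in the paper:

```python
import numpy as np

def gaussian_total_correlation(cov):
    """TC(z) = sum_i H(z_i) - H(z); for a Gaussian this reduces to
    -0.5 * log det(correlation matrix), which is zero iff dimensions are independent."""
    d = np.sqrt(np.diag(cov))
    corr = cov / np.outer(d, d)
    return -0.5 * np.linalg.slogdet(corr)[1]

tc_indep = gaussian_total_correlation(np.eye(3))                          # ~0: independent latents
tc_dep = gaussian_total_correlation(np.array([[1.0, 0.8], [0.8, 1.0]]))  # > 0: entangled latents
print(tc_indep, tc_dep)
```

Penalizing this quantity, weighted by $\beta$, pushes the aggregate posterior toward a factorized distribution, which is the disentanglement pressure that the requested $\beta$ ablation probes.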
Rebuttal 1: Rebuttal: Dear Reviewer HCYS, Thank you for your recognition of the work and insightful questions. We provide clarifications and new results that we have generated to address your questions below. > This work primarily relies on a behavior LVM for neural activity disentanglement, which can be influenced by the newly introduced total correlation (TC) penalty term with $\beta$ weighted. The impact of this penalty should be considered through ablation studies. We appreciate your attention to this aspect. The following are our ablation studies with various $\beta$ values. Each experiment condition is repeated with 5 runs, and the means and variances are listed.

| Metrics | $\beta = 1$ | $\beta = 2$ | $\beta = 4$ (Ours) | $\beta = 8$ |
| :--------------------------------: | :----------------: | :----------------: | :----------------: | :----------------: |
| $R^2$(%) $\uparrow$ | $79.24 (\pm 0.27)$ | $77.97 (\pm 0.19)$ | $75.41 (\pm 0.24)$ | $71.44 (\pm 0.28)$ |
| RMSE (x100) $\downarrow$ | $33.90 (\pm 0.21)$ | $34.65 (\pm 0.16)$ | $35.50 (\pm 0.17)$ | $38.84 (\pm 0.18)$ |
| $\operatorname{MIG}$(%) $\uparrow$ | $48.75 (\pm 0.26)$ | $51.37 (\pm 0.28)$ | $55.87 (\pm 0.26)$ | $58.85 (\pm 0.26)$ |

*Table 1: Hyper-parameter $\beta$ Investigation on the SSp-Left region of Session-1.*

| Metrics | $\beta = 1$ | $\beta = 2$ | $\beta = 4$ (Ours) | $\beta = 8$ |
| :--------------------------------: | :----------------: | :----------------: | :----------------: | :----------------: |
| $R^2$(%) $\uparrow$ | $74.93 (\pm 0.24)$ | $72.50 (\pm 0.20)$ | $69.59 (\pm 0.22)$ | $66.00 (\pm 0.27)$ |
| RMSE (x100) $\downarrow$ | $33.80 (\pm 0.16)$ | $35.08 (\pm 0.22)$ | $36.91 (\pm 0.18)$ | $39.99 (\pm 0.18)$ |
| $\operatorname{MIG}$(%) $\uparrow$ | $49.26 (\pm 0.28)$ | $52.76 (\pm 0.29)$ | $58.56 (\pm 0.29)$ | $61.06 (\pm 0.27)$ |

*Table 2: Hyper-parameter $\beta$ Investigation on the MOs-Left region of Session-1.*

From the results, we observed that setting $\beta = 1$ or $\beta = 2$
resulted in lower disentanglement of the latent subspace. However, increasing $\beta$ to $8$ led to substantial degradation in neural reconstruction quality. Hence, we determined that $\beta = 4$ offers a balanced trade-off between disentanglement and reconstruction. These tables will be added to the Appendix of the revised manuscript. > There is significant research combining behavioral labels with latent variables of neural activity [1,2], considering the effects of different disentanglement algorithms on behavior encoding would further enhance this study. This is an insightful point. We would like to note that the two listed research works [1, 2], which combine behavioral labels for neural latent variable learning, do not focus on the disentanglement of the neural subspace. The total-correlation (TC) term proposed here can promisingly be combined with these two approaches [1, 2] to achieve a more disentangled latent subspace. We added the TC term to their loss functions, and the results are reported in the following Tables 3 and 4. Each experiment condition is repeated with 5 runs, and the means and variances are listed.
| Metrics \ Method | pi-VAE | pi-VAE w/ TC | CEBRA | CEBRA w/ TC | | :--------------------------------: | :----------------: | :----------------: | :----------------: | :----------------: | | $R^2$(%) $\uparrow$ | $79.60 (\pm 0.22)$ | $72.10 (\pm 0.25)$ | $74.37 (\pm 0.24)$ | $70.51 (\pm 0.28)$ | | RMSE (x100) $\downarrow$ | $33.07 (\pm 0.18)$ | $36.20 (\pm 0.20)$ | $36.74 (\pm 0.22)$ | $37.82 (\pm 0.24)$ | | $\operatorname{MIG}$(%) $\uparrow$ | $40.12 (\pm 0.24)$ | $57.27 (\pm 0.23)$ | $43.98 (\pm 0.29)$ | $49.66 (\pm 0.27)$ | *Table 3: Total-correlation term effect analyses on the SSp-Left region of Session-1.* | Metrics \ Method | pi-VAE | pi-VAE w/ TC | CEBRA | CEBRA w/ TC | | :--------------------------------: | :----------------: | :----------------: | :----------------: | :----------------: | | $R^2$(%) $\uparrow$ | $72.63 (\pm 0.28)$ | $66.99 (\pm 0.29)$ | $70.73 (\pm 0.23)$ | $67.65 (\pm 0.23)$ | | RMSE (x100) $\downarrow$ | $32.14 (\pm 0.17)$ | $37.12 (\pm 0.18)$ | $35.69 (\pm 0.19)$ | $37.75 (\pm 0.25)$ | | $\operatorname{MIG}$(%) $\uparrow$ | $37.94 (\pm 0.23)$ | $60.14 (\pm 0.24)$ | $42.20 (\pm 0.28)$ | $50.21 (\pm 0.31)$ | *Table 4: Total-correlation term effect analyses on the MOs-Left region of Session-1.* After incorporating the total-correlation term, we observe that the disentanglement performance of these baseline neural LVMs improves. These results will also be added to the Appendix of the revised manuscript. We would like to emphasize that the disentangled neural LVM module is not our primary contribution. Instead, the main modeling and scientific contributions of our work both lie in the video diffusion modeling (VDM) module of BeNeDiff, which interprets the neural dynamics of each disentangled latent factor in a generative manner. 
The VDM module is crucial for clearly interpreting the neural dynamics of each latent factor, demonstrating specificity to behaviors of interest (e.g., paw-x-axis movement, jaw movement) and aligning with ground-truth behavioral trajectories. Moreover, the VDM module can be generalized to all the aforementioned neural LVM baselines with the total-correlation term for the discovery of neural dynamics in the learned neural subspace. We can also include more generated video results in an anonymized link upon request. Refs: [1] Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE. (Zhou et al., 2020) [2] Learnable latent embeddings for joint behavioural and neural analysis. (Schneider et al., 2023) --- Rebuttal Comment 1.1: Comment: Thank you for the author's response. The new information meets my expectations, and I hope this content can be added to the supplementary materials. Thus, I maintain my original score. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer HCYS, Thank you for your timely response. We will include the ablation study of the hyperparameter $\beta$ within the total-correlation term and present the results in the appendix of the revised manuscript. Additionally, we will assess the generalizability of the video diffusion modeling (VDM) module to the pi-VAE and CEBRA neural LVM baselines (with the total-correlation penalty term) to discover neural dynamics in their learned disentangled subspace. The generated video results (similar to Figures 6 and 7 of the manuscript) will be added to the supplementary materials. We again appreciate your evaluation and recognition of our work.
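As a concrete illustration for readers, the $\beta$-weighted total-correlation penalty discussed in this thread admits a simple closed-form estimate when the aggregate latent distribution is approximated as a Gaussian. The sketch below is ours, not the manuscript's implementation; the function names and the Gaussian approximation are assumptions:

```python
import numpy as np

def gaussian_total_correlation(z):
    """Estimate TC(z) = sum_i H(z_i) - H(z) under a Gaussian fit to
    samples z of shape (n_samples, n_factors). TC is zero iff the
    fitted factors are uncorrelated, and grows with their dependence."""
    cov = np.cov(z, rowvar=False)
    marginal_vars = np.diag(cov)
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (np.sum(np.log(marginal_vars)) - logdet)

def penalized_loss(recon_loss, z, beta=4.0):
    """Reconstruction loss plus the beta-weighted TC penalty,
    mirroring the beta ablation in the tables above."""
    return recon_loss + beta * gaussian_total_correlation(z)
```

Raising `beta` strengthens the independence pressure on the latent factors (higher MIG) at the cost of reconstruction quality, matching the trend in the ablation tables.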
Summary: The paper proposes BeNeDiff to learn disentangled neural trajectories together with a generative diffusion model for behavior. BeNeDiff leverages a beta-VAE for learning a disentangled latent space. An additional behavior generation module is applied for interpretation of the neural subspace. It is interesting to see that the learned latent trajectory is separated for different behavior types. The qualitative and quantitative results demonstrate the superiority of the proposed method. Strengths: 1.The disentanglement property of the neural latent space is significant compared to previous methods based on the quantitative metric. 2.The idea of leveraging a diffusion model to interpret the learned neural subspace is interesting. Weaknesses: 1.It is unclear to me if the behavior video generation part is trained together with the disentangled subspace learning. It seems true based on the figure, but I didn't see from the text how they are optimized together. 2.It is interesting to learn the disentangled neural space. Some visualization of the learned latent space would be beneficial. 3.An additional ablation study on the contributions of the different loss terms could be performed. Technical Quality: 3 Clarity: 2 Questions for Authors: 1.How are the training and testing data split? Are they session-wise or random? How can the method be generalized to different sessions or animals? 2.Does the learned latent trajectory show some clustering property, or is it somehow organized? Could you visualize it? 3.For Section 3.2, $c$ would better be described where it is first mentioned in Eq. 5. Is the number of classes equal to the feature dimension D and the same as the number of behavior classes? 4.To get Figure 4, which latent variable do you use, $\hat{Z}$ or $Z$? 5.For Figs. 6 and 7, could you show the difference from ground-truth frames with the same behavior type as a comparison? 
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: It would be better if the paper included more analysis of the learned latent trajectory to see whether it could provide scientific insight. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer CLKJ, Thank you for your valuable comments. We would like to make the following clarifications. Hopefully these will resolve most of your concerns, and they can be taken into account when deciding the final review score. > It is unclear to me if the behavior video generation part is trained together with the disentangled subspace learning. It seems true based on the figure, but didn't see how they are optimized together from text. We would like to note that given the learned disentangled neural subspace and trajectories (from the neural latent variable model), the behavioral video (generation) diffusion model (VDM) module serves as an expressive ML tool for interpreting and visualizing complex factor-wise neural dynamics. Hence, the behavioral VDM is trained after the neural LVM in two steps, as we need high-quality neural latents for training the classifier (neural encoder) in VDM. This procedure is also in line with the conventions of previous studies [1] for scientific interpretation. Regarding Figure 2, it describes the neural dynamics interpretation phase of BeNeDiff rather than the model training phase. In this phase, behavior videos are generated with guidance from the neural encoder. In the revised manuscript, we will also enhance Figure 2 for clearer visualization. > It is interesting to learn the disentangled neural space. Some visualization of the learned latent space is beneficial. Does the learned latent trajectory show some clustering property, or somehow organized, could you visualize it? It could be better the paper include more analyze on the learned latent trajectory and see if it could provide some scientific insight. We thank the reviewer for this practical suggestion. In Figure 1(B) of the attached PDF, in each subfigure, we select two neural latent factors at a time and plot their 2D trajectories. Latent-factor-3 is informed by the "Paw-x" behavior-of-interest. 
We observe that the learned dimension axes are disentangled, but the behavior dynamics are not readable from the trajectories. Figure 1(C) depicts the temporal evolution of all six latent factors, which is also difficult to interpret in terms of the behavior dynamics encoded by the neural population. In Figure 1(D) of the attached PDF, the full neural latent trajectories of trials (dimension reduction by PCA to 3D) are also hard to directly interpret or identify any clustering structure. To sum up, the learned neural subspace exhibits certain disentangled properties, but the visualization results are not intuitive or readable enough for experimentalists. Therefore, the VDM module in BeNeDiff is crucial for clearly interpreting the neural dynamics of each latent factor, which demonstrates specificity to behaviors of interest and aligns with ground-truth behavioral trajectories (e.g., Latent-factor-3 is aligned with "Paw-x" in Figure 1(E)). Please refer to Section 1.2 of the global response for detail. > Additional ablation study on different contribution of the loss function could be performed. Please refer to Section 1.4 of the global response for a detailed discussion of this point. > (1) How training and testing data are split? Are they session wise or random? (2) How the method can be generalized to different sessions or animals? (1) We train separate models for each session independently. Within each session, we use an 80/20 split for training and testing, employing cross-validation to ensure model robustness. (2) This is an interesting point. The generalizability of each method is a vital consideration. For the neural data part, the difficulty arises from the varying number of observed neurons across different sessions. Additionally, the same brain region can exhibit varying encoding for the same behavior of interest across different animals. 
One possible solution to mitigate these issues is to add a linear probing layer that performs neural alignment [2, 3] from various sessions and animals into a unified subspace, which can then be adapted to the BeNeDiff framework. Meanwhile, the VDM module can generalize well to multi-session and multi-animal settings by fitting a unified neural encoder on the aligned neural data. > (1) For Section 3.2, $c$ would better be described where it is first mentioned in Eq. (5). (2) Is the number of classes equal to the feature dimension D and the same as the number of behavior classes? (1) Thank you for this practical suggestion. We will move the description of the class label $\mathbf{c}$ from line 147 to after Eq. (5). (2) Yes. The number of class labels $\mathbf{c}$ is the same as the latent factor dimension $D$, as well as the number of behaviors of interest. > To get Figure 4, which latent variable do you use, $\hat{Z}$ or $Z$? We use each latent factor of $\mathbf{Z}$ for behavior decoding, and obtain the results in Figure 4. > For Figs. 6 and 7, could you show the difference from ground-truth frames with the same behavior type as a comparison? We apologize for the confusion and would like to clarify that the results in Figures 6 and 7 are all derived from analyzing the behavior-related neural dynamics of one target neural latent factor using two baseline latent manipulation methods and BeNeDiff. The ground-truth frames from behavior trials in the dataset do not specifically activate one neural latent factor, so the frame differences between consecutive images result in entangled behaviors. Including them would not provide a fair comparison. We have, however, provided several ground-truth behavior trials in the supplementary materials for reference. We look forward to further discussion, and are happy to answer any questions that may arise. Refs: [1] Cortical discovery using large scale generative models. 
(Luo et al., 2023) [2] Stabilizing brain-computer interfaces through alignment of latent dynamics. (Karpowicz et al., 2022) [3] Using adversarial networks to extend brain computer interface decoding accuracy over time. (Ma et al., 2023) --- Rebuttal Comment 1.1: Comment: Thanks for the responses. The responses addressed most of my concerns. But regarding Q2, if the number of latent factors $D$ and the number of behaviors of interest are the same, using the class label as guidance would obviously be helpful for learning a disentangled latent space. Directly learning a classifier and using the latents from the classifier would also yield disentangled latents. It would be good to provide the latent variables from the trained classifier to the diffusion model to see if the behavior videos could be synthesized from these variables. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer CLKJ, We appreciate your positive feedback and the adjustment to your score. As mentioned in our previous response, we will include the video results synthesized from behavior classifier latents in the supplementary materials of the revised manuscript. Thank you once again for your valuable suggestions and comments. --- Rebuttal 2: Title: Synthesizing Videos with Behavior Classifier Latents Comment: Dear Reviewer CLKJ, We thank you for the valuable response. We agree that incorporating behavior labels can be helpful for learning a disentangled neural subspace. However, we note that our neural LVM module in BeNeDiff maintains a high neural data reconstruction rate (as shown in Table 1 of the manuscript), thereby preserving the neural dynamics in the latent trajectories. It is true that we can train a probabilistic behavior classifier (a behavior decoder from neural data $\mathbf{X}$ to behavior labels $\mathbf{U}$) to approximate the conditional distribution $p(\mathbf{U} \mid \mathbf{X})$ and derive latent variables $\mathbf{Z}'$ from it. Such $\mathbf{Z}'$ would also be disentangled. 
However, such $\mathbf{Z}'$ lacks physical or semantic meaning, as there is no reconstruction of the neural data, so it cannot be used to interpret neural dynamics or provide scientific insights. We further trained a diffusion model guided by $\mathbf{Z}'$ and present the visualization results. We have sent the AC a comment linking to an anonymous repository containing these synthesized video results. In accordance with this year’s NeurIPS guidelines, please have the AC forward it to you. We observe that while the videos synthesized by activating each latent factor do show some specificity to a particular behavior of interest, **the overall video results are quite chaotic and do not align with the ground-truth behavioral trajectories.** **The behavior movements are overly drastic and contain many unnatural distortions** compared to the videos generated from the neural latent trajectories in BeNeDiff. These videos will be included in the supplementary materials of the revised manuscript. Additionally, we have included the videos generated by BeNeDiff and the ground-truth behavior videos in the anonymous repository provided to the AC for comparison. Looking ahead, our future research aims to generalize BeNeDiff to settings where the latent (feature) dimension is larger than the number of behavior classes. We hope that these responses and the above rebuttals address your concerns and will be considered when determining the final review score.
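For readers unfamiliar with the MIG(%) metric reported throughout these threads: it is the Mutual Information Gap, i.e., the entropy-normalized gap between the two latent dimensions most informative about each ground-truth factor, averaged over factors. A minimal sketch assuming the mutual-information matrix has already been estimated (names are ours, not the authors' code):

```python
import numpy as np

def mutual_information_gap(mi, factor_entropy):
    """MIG from a precomputed matrix mi[j, k] = I(z_j; v_k) and the
    entropies H(v_k) of the ground-truth factors. For each factor k,
    take the gap between the highest and second-highest MI across
    latents, normalize by H(v_k), and average over factors."""
    mi = np.asarray(mi, dtype=float)
    sorted_mi = np.sort(mi, axis=0)  # ascending within each factor column
    gaps = (sorted_mi[-1] - sorted_mi[-2]) / np.asarray(factor_entropy, dtype=float)
    return gaps.mean()
```

A perfectly disentangled code, where each factor is captured by exactly one latent, attains MIG = 1; a code in which two latents share a factor equally attains 0.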
Summary: The authors implement a VAE with behavioral labels to identify behavior-specific latent variables. They use these latents to then create a generative model for experimental video. They focus on the application of their model to wide-field cortical recordings of a mouse during an experimental visual task. They show that their model identifies disentangled, behaviorally relevant latents and can create experimental video where identifiable behavioral elements are generated from specific neural latents. Strengths: I see two primary contributions in this paper that make it appropriate for a NeurIPS audience. 1) The use of total-correlation measures to disentangle latents in a VAE framework in neuroscience. 2) Using generative video diffusion models to identify what video features relate to individual neural latents. Both of these are interesting applications of existing methods, but they are applied in a clever way to neural data, and I think these would be of interest to a computational neuroscience audience. Weaknesses: Each of the primary contributions to neural LVM type models (the use of total correlation and the diffusion modeling) is not sufficiently benchmarked against competing approaches. It is unclear how this model compares to the others used in neuroscience. If the authors wanted to strengthen the case that their model is an important novel contribution for behaviorally relevant latent disentangling, they could compare how the total-correlation VAE compares to other methods where behavioral labels are used to help disentangle the latent states. One important comparison that I would be curious to see that isn't cited would be the pi-VAE1, but the others the authors cite could be used. Specifically, these other models could be added to figures 4 and 5. The authors could also focus on how using video diffusion is a novel application in this setting. 
Again, some additional benchmarking with existing deep LVMs in neuroscience and some further discussion and emphasis as to how the diffusion methodology solves an important problem in neuroscience that perhaps existing methods cannot solve would strengthen this paper. Though the authors highlight the application of the model to this specific dataset as a contribution, this alone would be more appropriate for a more experimentally-focused venue. For neurips, the focus should be on the model advancements and their applications, not the significance of the dataset. 1 - Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE - Zhou and Wei Technical Quality: 2 Clarity: 3 Questions for Authors: NA Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors only minimally discussed the limits of the ability of their model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer o3cJ, Thank you for your detailed and constructive comments. We would like to make the following clarifications. Hopefully these will resolve most of your concerns, and they can be taken into account when deciding the final review score. > Each of the primary contributions to neural LVM type models (the use of total correlation and the diffusion modeling) are not sufficiently benchmarked against competing approaches. We would like to clarify that the contribution of BeNeDiff is two-fold: (1) The use of the total correlation term in the behavior-informed neural latent variable model (LVM) to induce a disentangled neural latent subspace. While this module is not our main modeling focus, it is effective compared to baselines (please refer to Table 1 of the attached PDF for the results). (2) The main novelty and contribution of our paper is the video diffusion modeling (VDM) module, which serves as an interpretation tool for visualizing the neural dynamics of each learned disentangled latent factor. The generated behavioral videos of VDM conditioned on activating one learned latent factor over time demonstrate that each factor shows specificity to a ground-truth behavior of interest (e.g., paw-x-axis movement, jaw movement) and aligns with ground-truth behavioral trajectories. The qualitative performance comparison of this VDM module against baselines is presented in Figures 6 and 7 of the manuscript, with detailed analyses in Sections 3.2.1 and 5.3. > If the authors wanted to strengthen the case that their model is an important novel contribution for behaviorally-relevant latent disentangling, they could compare how total correlation VAE compares to other methods where behavioral labels are used to help disentangle the latents. One important comparison that I would be curious to see would be the pi-VAE [1]. Including the suggested pi-VAE [1], we have added the following three baselines for extensive comparisons. 
Please refer to Section 1.3 of the global response for the baseline descriptions and tables with results. We also note that the VDM module of BeNeDiff can generalize to interpret the neural dynamics of the latent factors learned by all the neural LVM baselines mentioned above. > The authors could also focus on how using video diffusion is a novel application in this setting. Again, some further discussion and emphasis as to how the diffusion methodology solves an important problem in neuroscience that perhaps existing methods cannot solve would strengthen this paper. This is a highly valid point. We would like to emphasize that the *scientific question* in neuroscience we aim to answer in this paper is "*how can we enable in-depth exploration of neural latent factors with video behavior recorded, revealing interpretable neural dynamics associated with behaviors*". Therefore, the modeling goal is to develop a machine learning tool for visualizing neural dynamics encoded by each learned latent factor. This involves mapping the neural latent factors $\mathbf{Z}$ to behavior videos $\mathbf{Y}$, activating a single latent factor $\mathbf{z}^{(d)}$, and observing how the manipulation affects $\mathbf{Y}$. The induced changes in the videos reveal the dynamics encoded by $\mathbf{z}^{(d)}$. Previous manipulation methods [2, 3] often change the target $\mathbf{z}^{(d)}$ while fixing non-target subspaces to arbitrary values. However, setting arbitrary values without knowing the true distributions of non-target subspaces can lead to unnatural distortions in generated videos, complicating the interpretation and visualization of genuine animal behavioral dynamics. BeNeDiff proposes using a video diffusion model (VDM) module to explore disentangled neural dynamics in a generative manner. Specifically, the VDM module employs a neural encoder (Eq. 
(7) of the manuscript) to guide the generation of videos that maximize variance in the neural trajectory of the target latent factor $\mathbf{z}^{(d)}$ while minimizing variance in other factors' trajectories. This approach ensures that the generated behavioral videos predominantly reflect the neural dynamics of the target latent factor. It maintains naturalness by avoiding assumptions about specific values in non-target subspaces, thereby preventing the generation of videos with unnatural distortions, which constitutes the modeling novelty of BeNeDiff. As shown in Figures 6 and 7 of the manuscript, for each latent factor, the VDM module provides interpretable quantifications of its neural dynamics with specificity to the behaviors of interest. These generated video results are also available in the 'BeNeDiff-Generated-Video' folder of the supplementary materials. > Though the authors highlight the application of the model to this specific dataset as a contribution, this alone would be more appropriate for a more experimentally-focused venue. For neurips, the focus should be on the model advancements and their applications. We would like to politely disagree with the statement that the application of our proposed method is specific to this dataset. Please refer to Section 1.1 of the global response for details on how BeNeDiff can be generalized to multiple datasets. For the model advancements, please refer to our answer to the point above. We also note that the diffusion model has been employed as an interpretation tool in previous neuroscience works, resulting in a publication [4] at NeurIPS. > The authors only minimally discussed the limits of the ability of their model. Due to space limitations, please refer to Section 1.5 of the global response for details. We look forward to further discussion, and are happy to answer any questions that may arise. Refs: [1] pi-VAE. (Zhou et al., 2020) [2] Partitioning variability in animal behavioral videos using ss-vaes. 
(Whiteway et al., 2021) [3] Classifier-free diffusion guidance. (Ho et al., 2022) [4] Cortical discovery using large scale generative models. (Luo et al., 2023) --- Rebuttal Comment 1.1: Title: Thank you Comment: Dear Reviewer o3cJ, Thank you for your positive feedback and for adjusting your score. As mentioned in our rebuttal, we will incorporate the neural LVM module comparisons and the VDM module-generated video visualizations for multiple datasets into the appendix of the revised manuscript. --- Rebuttal Comment 1.2: Comment: Thank you for the response. I think with this added detail the paper's contribution is clearer, and I'd suggest emphasizing and clarifying the contribution similarly in a revised manuscript. I also appreciate the additional benchmarking with the total correlation measure. I have adjusted my score accordingly. Regarding my comment about your emphasis on the dataset - I did not mean to suggest that the analysis is specific to this dataset, but rather that your phrasing in the manuscript emphasizes the dataset in-and-of-itself as a major contribution. The end of the introduction reads "To highlight our major contributions: (1) This is the first work to explore wide-field imaging of the dorsal cortex of mice during a decision-making task using neural subspace analysis," and on line 167 "However, our work is the first to discover interpretable and disentangled latent subspaces of wide-field imaging data". I agree that your methods are generally applicable across datasets, and I believe, therefore, that the model and its generality should be what is primarily emphasized when discussing the major contributions in the manuscript. --- Rebuttal 2: Title: Thank you Comment: Dear Reviewer o3cJ, We sincerely appreciate your positive evaluation of our method's contribution and additional benchmarking. 
We apologize for the confusion in the Introduction section and will clarify the model's generalizability across datasets when discussing the major contributions in the revised manuscript. We thank you once again for the valuable feedback and suggestions.
null
null
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to all the reviewers for their insightful feedback and suggestions. We appreciate the positive comments which characterized our work as having an `"interesting and clever"` idea for leveraging video diffusion models to interpret inferred neural subspaces (o3cJ, CLKJ), `"significant and impressive"` methodological advancement (CLKJ, HCYS), being `"appropriate for NeurIPS and neuroscience audiences"` (o3cJ), with a `"well-organized presentation"` (tfQ5, 2uTS), and recognizing the experimental results of our work to be `"meaningful and useful"` (HCYS). Here, we first provide several general clarifications to enhance the overall understanding of our work. **1.1 Generalizability of BeNeDiff to Datasets** We would like to clarify that our framework, BeNeDiff, can generalize to datasets without constraints on the format of the neural data, as long as the behavioral data is in the format of temporally aligned video frames. Datasets with these properties are readily accessible in modern experimental settings. Because various brain regions, such as SSp and MOs, exhibit different neural latent trajectories related to behaviors of interest, we selected the Wide-Field Calcium Imaging dataset [1] in the paper to investigate neural dynamics within each region and to analyze discrepancies across different brain regions. Notably, our VDM module is also compatible with single-region Neuropixels data [2] and multi-region voltage imaging data [3]. We will provide the visualization results for these datasets in the appendix of our revised manuscript. We can also include the results in an anonymized link upon request. **1.2 Main Contribution** The main modeling and scientific contributions of BeNeDiff both lie in the video diffusion modeling (VDM) module, which interprets the neural dynamics of each disentangled latent factor in a generative manner. 
Specifically, the VDM module employs a neural encoder (Eq. (7) of the manuscript) to guide the generation of videos that maximize variance in the neural trajectory of the target latent factor while minimizing variance in other factors' trajectories. The generation results clearly interpret the neural dynamics of each latent factor, demonstrating specificity to behaviors of interest (e.g., paw-x-axis movement, jaw movement) and aligning with ground-truth behavioral trajectories. This approach ensures that the generated behavioral videos predominantly reflect the neural dynamics of the target latent factor. **1.3 Baselines Comparison for Neural LVM module** The following are our comparisons focusing on behaviorally relevant latent disentangling for the neural LVM module of BeNeDiff. Based on Table 1 of the manuscript, we have added the following three baseline methods for comparison: SSL [4], pi-VAE [5], and CEBRA [6], in which: * Semi-Supervised Learning (SSL) [4]: a deep generative modeling method that extends the standard variational bound with behavior labels corresponding to each data point. * pi-VAE [5]: an identifiable variational auto-encoder conditioned on task variables (e.g., motor observable states) for interpretable neural latent discovery. * CEBRA [6]: a deep neural encoding method that jointly uses behavioural and neural data with a contrastive loss for nonlinear neural dynamics discovery. Please refer to Table 1 of the attached PDF for the results. We observe that, compared to other behavior-informed baseline methods, the neural LVM module of our proposed BeNeDiff achieves the highest disentanglement performance (MIG) while maintaining a high neural reconstruction rate. As the neural LVM module is not our primary contribution, we did not allocate much text and space for it in the manuscript. We will incorporate these comparisons into the appendix of our revised manuscript. 
We will also include the corresponding results of these baselines in Figures 4 and 5 of the manuscript. **1.4 Ablation Study for Neural LVM module** Please refer to Table 2 of the attached PDF for the results of the variants of our method, demonstrating the contributions of the behavior-informed loss term and the total-correlation penalty term. We observe that both terms improve the disentanglement of the neural subspace while generally maintaining the neural reconstruction rate. Meanwhile, the neural reconstruction expectation term and the KL regularizer term are basic components of the variational bound, and removing them for an ablation study would compromise the mathematical integrity of the bound. We will incorporate these studies into the appendix of our revised manuscript. **1.5 Limitation Discussion** (1) For the neural latent variable model (LVM) module, there exists a balance between disentangling the neural subspace with behavior semantics and maintaining neural reconstruction performance. At this stage, a careful hyper-parameter search is necessary for each brain region and session to balance the weight between these two components. (2) For the generative video diffusion module, we implement the neural encoder (the classifier for guidance) in Eq. (7) of the manuscript as a linear regressor for interpretability. This linear assumption can be relaxed later for improved guidance performance. These points will be included in Section 6, 'Discussion', of the revised manuscript. Refs: [1] Single-trial neural dynamics are dominated by richly varied movements. (Musall et al., 2019) [2] Eight-probe Neuropixels recordings during spontaneous behaviors. (Steinmetz et al., 2019) [3] Widefield imaging of cortical voltage dynamics with an indicator evolved for one-photon microscopy. (Lu et al., 2023) [4] Semi-supervised learning with deep generative models. (Kingma et al., 2014) [5] Learning identifiable and interpretable latent models of neural activity using pi-VAE. 
(Zhou et al., 2020) [6] Learnable latent embeddings for joint behavioural and neural analysis. (Schneider et al., 2023) Pdf: /pdf/15b0ce602807f6319ddac97b2281d3826534caed.pdf
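To make the guidance objective described in Section 1.2 concrete: generation is steered toward videos whose decoded latent trajectory is active in the target factor and quiet elsewhere. The sketch below is our paraphrase of that idea, not Eq. (7) of the manuscript itself; the trade-off weight `lam` and the function name are assumptions:

```python
import numpy as np

def factor_guidance_score(z_traj, target, lam=1.0):
    """Score a decoded latent trajectory z_traj of shape (T, D):
    reward temporal variance of the target factor while penalizing
    variance in all non-target factors."""
    per_factor_var = z_traj.var(axis=0)
    off_target = np.delete(per_factor_var, target)
    return per_factor_var[target] - lam * off_target.sum()
```

During guided sampling, the gradient of such a score with respect to the video would be added at each denoising step, so the generated frames predominantly express the target factor's dynamics.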
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning to Control the Smoothness of GCN Features
Reject
Summary: The paper "Learning to Control the Smoothness of GCN Features" investigates the impact of activation functions, specifically ReLU and leaky ReLU, on the smoothness of node features in Graph Convolutional Networks (GCNs). It provides a geometric characterization of these effects, showing how altering the input's projection onto the eigenspace M can control the smoothness of output features. The study introduces a Smoothness Control Term (SCT) to modulate the smoothness of node features, aiming to improve node classification tasks in both homophilic and heterophilic graphs. Experimental results validate the efficacy of SCT, demonstrating significant improvements in node classification accuracy for several GCN-style models. Strengths: ## Originality - The paper introduces a novel approach to control the smoothness of Graph Convolutional Networks (GCNs) features, which is a significant departure from traditional methods. It builds upon and extends the work of Oono & Suzuki and Cai & Wang by integrating geometric insights with the message-passing process in GCNs. ## Quality - The paper provides a robust theoretical framework, including geometric characterizations and proofs, that underpins the proposed methods. - Extensive experiments validate the theoretical claims, showing significant improvements in node classification accuracy. Detailed descriptions of the experimental setup, including datasets and hyperparameter tuning, enhance the reproducibility of the results. ## Clarity - The paper is well-structured, with clear sections that logically flow from introduction to theoretical analysis, experimental validation, and conclusions. - The paper provides a comprehensive review of related work, situating its contributions within the broader context of graph neural networks research. 
## Significance - The ability to control the smoothness of GCN features addresses an interesting and important challenge in graph neural networks, with potential applications in various domains such as social network analysis, biological networks, and recommendation systems. - The proposed SCT shows improvements in real-world datasets. - The insights gained from this work could inform future research on activation functions and feature smoothness in other types of neural networks. Weaknesses: ## Weaknesses - The removal of white space in the paper makes it hard to read. The authors should really try to make the paper easier to read visually by not condensing so much math into the main text. Not only is this arguably in violation of the guidelines, but it also illustrates that the authors need to distinguish more clearly what the main contributions are and which parts in the main text can go to an appendix. - While the geometric insights provided are valuable, the complexity of the mathematical formulations is challenging for readers not well-versed in advanced geometry and spectral graph theory. Simplifying explanations or providing more intuitive examples could enhance accessibility. Moreover, I feel like the math could be made more intuitive by giving verbal explanations before the theorems. The cramming of the paper, and the lack of emphasis on the main contributions, should be improved. - Although the paper compares SCT with a few baseline models, it would benefit from a broader comparison with additional state-of-the-art methods in GCNs and GNNs to provide a more comprehensive evaluation of its effectiveness. - The experiments are primarily conducted on benchmark datasets. Incorporating more real-world applications and diverse datasets would demonstrate the practical relevance and versatility of the proposed method. - The drop in accuracy for deeper models (16 or 32 layers) is noted but not deeply analyzed. 
A more thorough investigation into the causes of this performance degradation, beyond mentioning vanishing gradients, could offer insights into potential improvements. What happens if you use techniques that combat, e.g., oversmoothing? - While the paper mentions computational efficiency, a more detailed discussion on the computational overhead introduced by SCT, including potential trade-offs between accuracy and efficiency, would be beneficial. Technical Quality: 3 Clarity: 2 Questions for Authors: ## Questions - Could you provide more intuitive examples or visual aids to help readers better understand the geometric characterization of smoothness in GCN features? - Why did you choose the specific baselines for comparison? How would SCT perform against other state-of-the-art GCN and GNN models not included in your study? - Can you elaborate on the causes of performance degradation in deeper models (16 or 32 layers)? Have you considered any specific techniques to mitigate this issue? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: ## Limitations The authors have acknowledged several limitations of their work, including the over-smoothing issue in deep GCNs and the dependence on specific activation functions, but they could improve by providing more detailed discussions on computational efficiency and potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. In what follows, we provide point-by-point responses to your comments on the weaknesses and limitations of our paper and your questions. --- **1: The removal of white space in the paper makes it hard to read.** **Response:** We appreciate your feedback and we are happy to move certain parts to the appendix to make the paper easier to read. --- **2: The math could be made more intuitive by giving verbal explanations before the theorems. The cramming of the paper, and the lack of emphasis on the main contributions, should be improved. Could you provide more intuitive examples or visual aids to help readers better understand the geometric characterization of smoothness in GCN features?** **Response:** Again, we appreciate your comment. We have summarized the main contributions in verbal explanations in Section 1.1. We also provide pointers from each contribution to the relevant sections for details. In Lines 110-122, we provide a very brief review of the results we use from spectral graph theory. Regarding the geometric insight, all we established is that there is a high-dimensional sphere associated with the input and output of ReLU and LeakyReLU. In Sections 3.1 and 3.2, we provide details on the center and radius of these spheres. --- **3: Why did you choose the specific baselines for comparison? How would SCT perform against other SOTA GCN and GNN models not included in your study? It would benefit from a broader comparison with additional SOTA methods in GCNs and GNNs to provide a more comprehensive evaluation of its effectiveness.** **Response:** We design the experiments to solidify our theoretical results and verify the effectiveness of the informed algorithm. In particular, we have the following two purposes in mind: First, SCT can avoid over-smoothing for GCN. 
Second, learning to balance the smooth and non-smooth features can improve the performance of GCN-style models that do not even suffer from over-smoothing. We choose two state-of-the-art GCN-style models - GCNII and EGNN - as the testbeds. To show the effectiveness of the proposed approach, we have comprehensively studied models with different numbers of layers on 10 different benchmark datasets, covering all datasets from the papers we have benchmarked on. Our proposed SCT can be treated as a bias term for GCN-style models and it can be integrated with many other GNNs. Studying whether the proposed SCT can improve the performance of other GNNs is an interesting direction for future work. We are happy to provide some comments in the revised paper. --- **4: Incorporating more real-world applications and diverse datasets would demonstrate the practical relevance and versatility of the proposed method.** **Response:** Our work is primarily theoretical with an informed practical algorithm. Using diverse existing celebrated benchmark tasks is a crucial step in evaluating the work. To better contrast with existing works, we used tasks from existing papers. We have further tested the performance of SCT on the peptide dataset originating from biophysics applications; our results in Table 6 of the rebuttal file further confirm the effectiveness and versatility of the proposed SCT. --- **5: The drop in accuracy for deeper models is noted but not deeply analyzed. A more thorough investigation into the causes of this performance degradation, beyond mentioning vanishing gradients, could offer insights into potential improvements. What happens if you use techniques that combat, e.g., oversmoothing? Can you elaborate on the causes of performance degradation in deeper models? 
Have you considered any specific techniques to mitigate this issue?** **Response:** The drop in accuracy for deeper models occurs for GCN and GCN-SCT but not for other models that have skip connections in their architectures, which motivates us to investigate the vanishing gradient issue. It is noted in the original ResNet paper that using skip connections can effectively alleviate vanishing gradients in training deep networks. Our results in Figure 4 in the appendix confirm that vanishing gradients occur for GCN and GCN-SCT but not for other models. The skip connection has become a celebrated technique to mitigate the vanishing gradient issue, and this is also the case for training deep GCNs. --- **6: While the paper mentions computational efficiency, a more detailed discussion on the computational overhead introduced by SCT, including potential trade-offs between accuracy and efficiency, would be beneficial.** **Response:** The computational overhead introduced by SCT is not very significant compared to the whole cost of training and deploying GNN-style models. Notice that we can avoid performing eigendecomposition by using the fact that the basis of the space $\mathcal{M}$ -- eigenspace associated with the largest eigenvalue of the message-passing matrix -- is given by the indicator functions of each connected component of the graph; see Lines 110-114. Therefore, the problem reduces to finding connected components of the graph, and an efficient approach to identifying connected components for undirected graphs is using disjoint set union (DSU). Initially, declare all the nodes as individual singleton sets, then traverse the edges; whenever an edge connects two nodes in different sets, unite the two sets. In this manner, each connected component ends up as a single set. The time complexity is near-linear, $O((V+E)\,\alpha(V))$, where $\alpha$ is the inverse Ackermann function. We provide the computational time for models with and without SCT for Ogbn-arxiv in Table 2 of the rebuttal file. 
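The DSU procedure described above can be written out in a few lines. This is a generic sketch (not the authors' implementation), assuming the graph is given as an undirected edge list:

```python
def find(parent, x):
    # Find the root of x with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def connected_components(n, edges):
    """Label the connected components of an undirected graph with n
    nodes using disjoint set union (DSU); near-linear in V + E."""
    parent = list(range(n))
    for u, v in edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv  # unite the two components
    roots = [find(parent, x) for x in range(n)]
    relabel = {r: i for i, r in enumerate(dict.fromkeys(roots))}
    return [relabel[r] for r in roots]
```

For instance, `connected_components(6, [(0, 1), (1, 2), (3, 4)])` assigns nodes 0-2 one label, nodes 3-4 another, and node 5 a third; the number of distinct labels is the dimension of $\mathcal{M}$.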
--- **7: Providing more detailed discussions on computational efficiency and potential negative societal impacts.** **Response:** We will include these in the revision. See our response to point 6 above for computational efficiency. --- Thank you for considering our rebuttal. We appreciate your feedback and are happy to address further questions on our paper. --- Rebuttal Comment 1.1: Comment: Thanks a lot for the rebuttal. I will raise my score to a 5, but really urge the authors to reformat the paper in a more readable way. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for considering our rebuttal and for your support of our work. We will reformat the paper following your and other reviewers' suggestions.
Summary: The paper addresses the challenge of balancing smooth and non-smooth features in graph convolutional networks (GCNs) for node classification. Building on previous work that highlighted the correlation between feature smoothness and classification accuracy, the authors propose a novel method to control the smoothness of node features through a geometric approach and an augmented message-passing process. Their strategy involves establishing a geometric relationship between input and output vectors of activation functions like ReLU and Leaky-ReLU, and integrating a learnable term in the graph convolutional layers to modulate feature smoothness. The paper provides an empirical study to showcase the effectiveness of the proposed method. Strengths: The paper offers a novel geometric insight into the effects of different activation functions, specifically ReLU and Leaky-ReLU, on smoothing in GCNs. Furthermore, it introduces an innovative method to enhance the message-passing framework for GCNs (and similar networks) to better control smoothing. Weaknesses: * The empirical results section is very weak and a major revision is needed in order to be able to judge the merit of the proposed method. The most significant weaknesses are: * The paper does not compare to established baselines from other competing methods. For instance, one could take a look at the tables in https://arxiv.org/abs/2202.04579 or https://arxiv.org/abs/2210.00513 and add the results to the comparison in this paper (i.e., for cornell, texas, wisconsin, squirrel, and so on). * The paper reports different (mostly weaker) performance for the baseline models such as GCN and GCNII. Again, this should be fixed and established results can be taken from the tables in the two papers mentioned above. * The improvement using the proposed SCT is very marginal in many experiments, and in particular the performance still drops (or improves very little) for higher number of layers. 
It is somewhat expected that increasing the number of layers and solving oversmoothing does not help much in homophilic tasks, i.e., the tasks considered in Table 1. However, if the method works it should lead to significant improvements on heterophilic tasks, even for increasing number of layers. Therefore, please show the results for the datasets in Table 2 in the same way as you show them in Table 1, i.e., for increasing number of layers. * It is claimed in Table 1 that the drop in performance of GCN for increasing number of layers is due to vanishing gradients and not due to oversmoothing. This claim has to be justified empirically. * Figure 4 in the appendix showing gradient norms is not meaningful. The y-axis should be displayed on a logarithmic scale. Vanishing gradients occur for gradients approaching zero (exponentially fast). It is not clear if this is the case here, since it is plotted in linear scale. * The structure and readability of this paper should be improved. For instance, it would be helpful for readers who are not very familiar with the graph-learning field, to introduce the concept of GNNs and GCNs in the introduction. It would be advisable to move the technical aspects from section 1 to section 2. In fact, none of the definitions presented at the beginning of the introduction are needed anywhere else in the remainder of the introduction section. Thus, this could be moved to section 2. Technical Quality: 2 Clarity: 2 Questions for Authors: how expensive is it to compute the eigenbasis for the smoothness modulation term (5)? How does it scale with respect to the number of nodes and edges? How long does it take to compute for the biggest graph considered here, i.e., arxiv graph? Is that a limitation for large-scale graph learning? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: * The method necessitates a pre-processing step to compute the eigenbasis in equation (5). This may not scale efficiently for very large graphs. 
* The method does not show significant improvements. Notably, there are more effective models and methods available for mitigating oversmoothing, which have not been cited or empirically compared. Examples include https://arxiv.org/abs/2202.04579, https://arxiv.org/abs/2210.00513, https://arxiv.org/abs/2206.05437, https://arxiv.org/abs/2110.14446, https://arxiv.org/abs/2206.10991, https://arxiv.org/abs/2006.11468. * While the theoretical justification of the approach is intriguing, the insights are somewhat limited. For example, there are no provided estimates for choosing the parameter alpha, which instead has to be learned through gradient descent. * The empirical results are not convincing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
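The reviewer's point about logarithmic scale can be illustrated with a toy computation (a hypothetical per-layer decay rate, not the paper's measurements): gradient norms that shrink by a constant factor per layer quickly become indistinguishable from zero on a linear axis, while in log space they form a straight line whose slope reveals the decay rate.

```python
import numpy as np

# Toy model of backprop through L layers whose Jacobians each shrink
# the gradient norm by a constant factor (hypothetical rate 0.5).
decay, layers = 0.5, 32
grad_norms = [decay ** k for k in range(layers)]

# On a linear axis everything beyond roughly layer 10 reads as "zero";
# in log space the exponential decay is a line with constant slope.
log_norms = np.log10(grad_norms)
slopes = np.diff(log_norms)
```

Plotting `grad_norms` with a logarithmic y-axis (e.g., `plt.semilogy`) makes the exponential decay visible where a linear axis would hide it.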
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. In what follows, we provide point-by-point responses to your comments on the weaknesses and limitations of our paper and your questions. --- **1: The paper does not compare to established baselines from other competing methods. E.g. tables in arxiv:2202.04579 or arxiv:2210.00513 and add the results to the comparison in this paper.** **Response:** Thank you for pointing out these papers to us. Our choice of architecture is based on the baseline of EGNN and we incorporate architectures in a comparable manner using the PyTorch Geometric framework. We provide full baseline architectures and hyperparameter details for all experiments. This enables us to consistently study the layer-wise bias introduced by SCT. The reviewer-recommended papers use different architectures -- each model has a customized architecture, making it difficult to study the effects of SCT (a layer-wise bias term) when introduced into different models. Of the reviewer-recommended works, the model that makes the most sense for comparison to our method is G2-GCN. Notice that they do not provide details regarding the optimal hyperparameters selected for each experiment in either the paper or source code. Instead, they provide a range of randomly chosen parameters in the paper. It is beyond the purview of this work to fine-tune G2-GCN and so a reasonable hyperparameter selection was chosen. We compare the G2-GCN architecture with and without SCT. The results are listed in Table 4 of the rebuttal pdf. As pointed out by Reviewer pS2J, our numerical results show that the proposed method is effective for various data sets and models, showing the versatility of the proposed method. --- **2: The paper reports different (mostly weaker) performance for the baseline models such as GCN and GCNII.** **Response:** The GCN and GCNII architectures we used are different from those in the reviewer-mentioned papers. 
The architectures we used provide a direct comparison between layer-wise operations performed by the addition of the SCT bias. We do not claim to achieve state-of-the-art performance but provide a reasonable baseline to compare methodologies. This provides a testbed for verifying our theoretical results. We utilize standard implementations of the GCN and GCNII layers using PyTorch Geometric. For additional comparison, Table 3 of the rebuttal pdf provides results for including SCT in the GCN architectures mentioned by the reviewer. Again, SCT improves the baseline model by a noticeable margin. --- **3: Please show the results for the datasets in Table 2 in the same way as you show them in Table 1, i.e., for increasing number of layers.** **Response:** In Table 5 of the rebuttal PDF file, we provide the corresponding table for the Texas dataset. We provide results for only the Texas dataset due to the space limitation; similar results hold for other datasets based on our experiments. --- **4: Justify the claim that the accuracy drop in Table 1 is due to vanishing gradients and not oversmoothing.** **Response:** We investigate the vanishing gradient issue in training deep GCN and GCN-SCT by plotting the gradient norm in Fig. 4 in the appendix, confirming that the vanishing gradient occurs for GCN and GCN-SCT but not for other models. --- **5: Figure 4 in the appendix showing gradient norms is not meaningful. The y-axis should be displayed on a logarithmic scale.** **Response:** We appreciate your feedback. In the rebuttal file, we have provided the updated figures in Figure 1. We also show the exponential decay in gradient norms. --- **6: The structure and readability of this paper should be improved.** **Response:** We are happy to incorporate your comments in the revision. --- **7: How expensive is it to compute the eigenbasis for the term (5)? How does it scale w.r.t. the number of nodes and edges? 
How long does it take to compute for the biggest graph considered here, i.e., arxiv graph?** **Response:** The computational overhead introduced by SCT is not very significant compared to the entire cost of training and deploying GNNs. Notice that we can avoid performing eigendecomposition by using the fact that the basis of the space $\mathcal{M}$ is given by the indicator functions of each connected component of the graph; see Lines 110-114. Therefore, the problem reduces to finding connected components of the graph, and an efficient approach to identifying connected components for undirected graphs is using disjoint set union (DSU). Initially, declare all the nodes as individual singleton sets, then traverse the edges; whenever an edge connects two nodes in different sets, unite the two sets. In this manner, each connected component ends up as a single set. The time complexity is near-linear, $O((V+E)\,\alpha(V))$, where $\alpha$ is the inverse Ackermann function. We provide the computational time for models with and without SCT for Ogbn-arxiv in Table 2 in the rebuttal file. --- **8: Significance of the results. There are more effective models and methods available for mitigating oversmoothing, which have not been cited or empirically compared.** **Response:** Please refer to our response to your first comment on the significance of numerical results. We appreciate the reviewer pointing out these references to us; we are happy to cite them in the revision. --- **9: There are no provided estimates for choosing the parameter alpha, which instead has to be learned through gradient descent.** **Response:** The desired smoothness that favors node classification is task-dependent and unknown. As such, we make the model learn to automatically balance smooth and non-smooth features; our empirical results show that such a learning-based strategy does improve the performance of some remarkable GCN-style models. --- Thank you for considering our rebuttal. We appreciate your feedback and are happy to address further questions on our paper. 
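To make the eigendecomposition-free route concrete: given component labels (from DSU or any graph traversal), the orthonormal basis of $\mathcal{M}$ consists of normalized indicator vectors, and node features split directly into their $\mathcal{M}$ and $\mathcal{M}^\perp$ parts. This is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def m_basis(labels):
    """Orthonormal basis of the eigenspace M: one normalized indicator
    vector per connected component (no eigendecomposition needed)."""
    labels = np.asarray(labels)
    comps = np.unique(labels)
    B = np.zeros((labels.size, comps.size))
    for j, c in enumerate(comps):
        idx = labels == c
        B[idx, j] = 1.0 / np.sqrt(idx.sum())
    return B  # columns are orthonormal

def split_features(X, B):
    """Split node features X (n x d) into the component lying in M
    and the residual in the orthogonal complement of M."""
    X_m = B @ (B.T @ X)   # projection onto M (the "smooth" part)
    return X_m, X - X_m   # residual norm measures distance to M
```

On a connected graph this reduces to subtracting the column-wise mean of the features; the norm of the residual is the distance of the features to $\mathcal{M}$.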
--- Rebuttal Comment 1.1: Comment: Unfortunately, I cannot see any of the changes the authors claim to have made. --- Rebuttal 2: Title: Further clarification Comment: Dear Reviewer 5MTH, Thank you for your response. Additional experimental results are provided in the one-page pdf file and the other changes have been made in the revised paper, which is straightforward to do. We cannot update the paper on OpenReview per the rebuttal rules. The system does not allow us to upload the revised version. Regards, Authors --- Rebuttal Comment 2.1: Title: Further clarification -- cont'd Comment: Dear Reviewer 5MTH, We first appreciate your further feedback. We are not sure if we misunderstood your comment that "unfortunately, I cannot see any of the changes the authors claim to have made." We are happy to address your further comments before the end of the discussion period. We believe we have provided detailed responses with additional experimental results in the rebuttal file - see the general response. We have also restructured the paper following the reviewers' feedback and added the additional results together with the references pointed out by the reviewer to the revised paper. Again, OpenReview does not allow us to update the paper. Moreover, the rebuttal instructions say we can only post a single-page PDF file for additional figures and tables. Regards, Authors
Summary: The paper studies how GCN smoothes node features in terms of unnormalized and normalized smoothness. The results show that adjusting the projection can alter the normalized smoothness to any desired level. Based on this, the paper proposes a new method, SCT, to let GCN learn node features with a desired smoothness to enhance node classification and verifies its effectiveness in practice. Strengths: 1. Understanding the effect of nonlinearities in GNNs is an important yet underexplored problem due to the complexity of nonlinearities. The paper offers a new perspective on it. 2. Oversmoothing is a known issue, while it is also known that some amount of smoothness is desired for graph learning. How to find the ideal amount of smoothness among node features is an important but nontrivial problem. 3. The proposed method SCT is principled and seems effective. Weaknesses: 1. While I can see that normalized smoothness has its own merit, this notion could be better motivated and connected to the literature. a. Given that the analysis in [27, 4] is asymptotic, I would not say that “over-smoothing – characterized by the distance of features to eigenspace M or the Dirichlet energy – is a misnomer”, as it is too strong of a claim to make. Those results essentially say that “very deep GCNs are bad due to oversmoothing”, which has its limitations but can be well justified by the distance of features to eigenspace M or the Dirichlet energy. b. My understanding is that normalized smoothness could be more connected to the non-asymptotic notion of oversmoothing studied in [32], which is defined based on the Bayes error of classification (the distance to the decision boundary) and hence takes the magnitude of features into account. c. 
Based on the above, the argument presented in the paper can be strengthened in the following way: the motivation of normalized smoothness should be based on a discussion on how the magnitude (and hence normalized smoothness) is more related to a non-asymptotic notion of oversmoothing, which is directly related to the classification performance of finite-depth/shallow GNNs. Based on this, the results presented in this paper are more practically relevant than the previous asymptotic results. I would suggest the authors modify the relevant text in the introduction and analysis accordingly. 2. The analysis only applies to GCNs, while whether the analysis or the proposed method can be extended to more complex GNNs such as GATs or graph transformers is unclear. 3. The analysis only applies to ReLU and LeakyReLU, which reads a bit specific. I wonder if the results can be generalized to a general family of nonlinearities. 4. For the experiments, there is a lack of baseline comparisons except the basic backbone architecture. For example, I wonder how it would compare to APPNP, which is proposed to balance the need to explore a larger neighborhood and locality, and whose implicit goal is also to produce node features with the "right" amount of smoothness. 5. Another presentation suggestion I have for the authors is that one should minimize the use of in-text math and bold or italic fonts for highlighting (such as line 167-173). Math is hard to read in-text, and when too much text is highlighted, the paper becomes even harder to read because everything seems to be emphasized, which defeats the original purpose. Technical Quality: 3 Clarity: 2 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review, valuable feedback, and endorsement. We appreciate your invaluable suggestions and will revise the paper accordingly. In what follows, we provide point-by-point responses to your comments on the weaknesses of the paper. ------ **1. While I can see that normalized smoothness has its own merit, this notion could be better motivated and connected to the literature.** **Response:** We appreciate all your invaluable suggestions and are happy to incorporate these suggestions in the revision. The normalized smoothness notion was pointed out in [4], and we adapted this notion and noticed the equivalence between Dirichlet energy and the orthogonal complement of the projection of features onto the eigenspace. The analysis in [27,4] is asymptotic, but both the normalized and unnormalized smoothness notions can characterize the smoothness of node features for GCN with a finite number of layers. A particular motivation for our work is the empirical correlation between the accuracy of GCN and a normalized smoothness-related quantity studied in [27]. As such, we propose a practical approach to control the smoothness of GCN features based on our observed geometric relationship between the input and output of activation functions. ------ **2: The analysis only applies to GCNs, while whether the analysis or the proposed method can be extended to more complex GNNs such as GATs or graph transformers is unclear.** **Response:** It has recently been proved in [A] that attention-based GNNs also suffer from over-smoothing. Specifically, [A] demonstrates that the node features converge to the same eigenspace as those in GCN. While using a different smoothness notion, they show that their measurement is equivalent to the one used by Oono & Suzuki (ICLR, 2020) and our work. Consequently, since the characterization of smoothness is the same, our technique can be applied to these frameworks to control normalized smoothness. 
In particular, in our analysis, $\bf z$ can be the feature obtained through any graph convolution, including attention-based methods. We will incorporate these discussions in the revision. The main reason we did not include these models in our work is that, within the scope of "controlling smoothness" (which is not the same as mitigating over-smoothing), to the best of our knowledge, we only found EGNN to be an example in this branch. This is why we included it in our comparisons. Moreover, since it is built upon GCN and GCNII, we also included these two models. [A] Wu, Xinyi, et al. "Demystifying oversmoothing in attention-based graph neural networks." NeurIPS, 2023. ------ **3: The analysis only applies for ReLU and LeakyReLU, which reads a bit specific. I wonder if the results can be generalized to a general family of nonlinearities.** **Response:** The geometric characterization of the effects of activation functions in Section 3 uses some particular piecewise linear properties of ReLU and LeakyReLU. We expect a similar analysis can be applied to some other piecewise linear activation functions. Some analyses of the achievable smoothness using the proposed SCT (in Section 4) can be extended to more general activation functions like ELU and SELU; see our discussion in Lines 233-235. ------ **4: For the experiments, there is a lack of baseline comparisons except the basic backbone architecture. For example, I wonder how it would compare to APPNP, which is proposed to balance the need to explore larger neighborhood and locality and the implicit goal is also to produce node features with the "right" amount of smoothness.** **Response:** We have incorporated APPNP into our experimental results. We train APPNP using the optimal hyperparameters as reported by the APPNP paper. The results are reported in Table 4 of the rebuttal file. 
------ **5: Another presentation suggestion I have for the authors is that one should minimize the use of in-text math and bold or italic fonts for highlighting (such as line 167-173).** **Response:** We appreciate your suggestion and will account for this in the revision. ------ Thank you for considering our rebuttal. We appreciate your feedback and are happy to address further questions on our paper. --- Rebuttal Comment 1.1: Title: Thank you Comment: I thank the authors for responding to my review. One difficulty I can see regarding extending to GATs or GTs is that unlike graph convolutions considered in this paper, attention matrices are in general not symmetric and the current analysis techniques developed for symmetric matrices might not be trivially applied. Nonetheless, the rebuttal has addressed the rest of my concerns. I will keep my score for now and stay on the positive side. --- Reply to Comment 1.1.1: Title: Thank you for considering our rebuttal Comment: Thank you for your further feedback and your support of our work. We would like to clarify that while graph attention networks (GATs) may employ asymmetric graph convolutions, our work relies solely on the characterization of oversmoothing. As demonstrated in [1], GATs with asymmetric attention matrices can still exhibit oversmoothing, and our oversmoothing measurement is equivalent to that used in [1]. Importantly, our proposed method, smoothness control term (SCT), analyzes the impact of activation functions on the smoothness of $\bf Z = \bf W\bf H\bf G$, by considering only the input $\bf Z$ and output $\sigma(\bf Z)$ of these functions. This abstraction allows us to isolate the effects of the weight matrices $\bf W$ and graph convolution $\bf G$, providing an understanding of how SCT controls the smoothness of $\sigma(\bf Z)$. [1] Wu, Xinyi, et al. "Demystifying oversmoothing in attention-based graph neural networks." NeurIPS, 2023. Thank you again for considering our rebuttal.
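As a small numerical illustration of the point discussed in this thread (that the smoothness of the activation output can be steered by the input's projection onto $\mathcal{M}$), the sketch below assumes, for simplicity, a connected graph (so $\mathcal{M}$ is spanned by the all-ones vector) and uses the ratio $||{\bf Z}\_{\mathcal{M}^\perp}||/||{\bf Z}||$ as a stand-in smoothness measure; the paper's exact normalized-smoothness definition may differ.

```python
import numpy as np

def nonsmoothness(Z):
    """||Z_perp|| / ||Z|| with M = span of the all-ones vector,
    i.e., Z_perp is Z minus its node-wise (column) mean."""
    Z_perp = Z - Z.mean(axis=0, keepdims=True)
    return np.linalg.norm(Z_perp) / np.linalg.norm(Z)

rng = np.random.default_rng(0)
n, d = 50, 8
Z_perp = rng.normal(size=(n, d))
Z_perp -= Z_perp.mean(axis=0, keepdims=True)  # component in M_perp

# Fix Z_{M_perp} and sweep the size of the component in M; the
# smoothness of the ReLU output changes accordingly.
outputs = []
for alpha in [0.0, 1.0, 10.0, 100.0]:
    Z = alpha * np.ones((n, d)) + Z_perp
    H = np.maximum(Z, 0.0)  # ReLU
    outputs.append(nonsmoothness(H))
```

As alpha grows, ReLU acts as the identity on the (now mostly positive) input and the output becomes relatively smoother, with the ratio shrinking toward 0, mirroring the idea that a learnable term along $\mathcal{M}$ can modulate smoothness.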
Summary: This paper first shows that in GCN, the output of ReLU or LeakyReLU lies on a sphere whose center and radius are characterized by the input's components parallel and perpendicular to $\mathcal{M}$, the space spanned by eigenvectors for the maximum eigenvalue of a graph. As a corollary, this paper shows that these activation functions do not increase the component of the feature vector perpendicular to $\mathcal{M}$. Furthermore, this paper defines the normalized smoothness and evaluates how its range varies with the activation functions. Based on this discussion, this paper proposes an SCT that learns the parallel components of the feature vectors. The proposed method is applied to GCN, GCNII, and EGNN, and its effectiveness is verified on node prediction tasks with various degrees of heterophily. Strengths: - The theorems presented in the theoretical analysis (Propositions 3.2 and 3.3) enable a unified treatment of ReLU and Leaky ReLU. - Numerical experiments show that the proposed method is effective for various data sets and models, showing the versatility of the proposed method. Weaknesses: - I need help understanding the explanation in Section 3.3. More specifically, it is difficult to understand that the *independence* of the inequality means that the upper bound of the inequality does not depend on the value of $\boldsymbol{Z}_{\mathcal{M}}$. I suggest writing it explicitly. - P7, L.265: It seems strange that although SCT changes its architecture depending on whether the underlying GNN is GCN or GCNII, it has the same name. I suggest naming the SCT for GCN and the SCT for GCNII differently. Technical Quality: 3 Clarity: 2 Questions for Authors: P5, L.196: $\boldsymbol{e}$ -> $\boldsymbol{e}_{1}$ P6, L.209: What is the value of $a$ of $\sigma_a$? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors discuss in the conclusion section that the proposed method's limitation is that it assumes the model's oversmoothing. 
However, I do not think this is a limitation because if the model does not cause oversmoothing, there is no need to use the proposed method. Rather, I recommend evaluating whether SCT has bad effects when the model does not cause oversmoothing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review, valuable feedback, and endorsement. In what follows, we provide point-by-point responses to your comments on the weaknesses and limitations of our paper and your questions. ------ **1: I need help understanding the explanation in Section 3.3. More specifically, it is difficult to understand that the independence of the inequality means that the upper bound of the inequality does not depend on the value of $\mathbf{Z}_{\mathcal{M}}$. I suggest writing it explicitly.** **Response:** To understand this better, consider two inputs ${\bf Z}, {\bf Z}'\in \mathbb{R}^{d\times n}$ such that ${\bf Z}\_{\mathcal{M}^\perp}={\bf Z}'\_{\mathcal{M}^\perp}$ but ${\bf Z}\_{\mathcal{M}} \neq {\bf Z}'\_{\mathcal{M}}$. Moreover, let ${\bf H}=\sigma({\bf Z})$ and ${\bf H}'=\sigma({\bf Z}')$ with $\sigma$ being ReLU or leaky ReLU. Based on our geometric results in Sections 3.1 and 3.2, we have $||{\bf H}||\_{\mathcal{M}^\perp}\leq ||{\bf Z}||\_{\mathcal{M}^\perp}$ and $||{\bf H}'||\_{\mathcal{M}^\perp}\leq ||{\bf Z}'||\_{\mathcal{M}^\perp}$. Since ${\bf Z}\_{\mathcal{M}^\perp}={\bf Z}'\_{\mathcal{M}^\perp}$, it follows that $||{\bf H}'||\_{\mathcal{M}^\perp}\leq ||{\bf Z}||\_{\mathcal{M}^\perp}$. This inequality indicates that altering ${\bf Z}$ to any ${\bf Z}'$ that preserves ${\bf Z}\_{\mathcal{M}^\perp}$ does not change the fact that $||{\bf H}'||\_{\mathcal{M}^\perp}$ is upper bounded by $||{\bf Z}||\_{\mathcal{M}^\perp}$; that is, the bound does not depend on the value of ${\bf Z}\_{\mathcal{M}}$. We have revised our paper per your suggestion. ------ **2: P7, L.265: It seems strange that although SCT changes its architecture depending on whether the underlying GNN is GCN or GCNII, it has the same name. I suggest naming SCT for GCN and SCT for GCNII differently.** **Response:** Thank you for your suggestion; we will name them to specify the use of different SCT architectures in the revised paper.
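The inequality discussed in response 1 can also be checked numerically. Below is a minimal sketch (our own illustration, not code from the paper): we draw random feature vectors, take $\mathcal{M}$ to be spanned by the normalized all-ones vector for illustration (a connected graph), and verify that ReLU and leaky ReLU never increase the component perpendicular to $\mathcal{M}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
e = np.ones(n) / np.sqrt(n)  # unit vector spanning M (illustrative choice)

def perp_norm(z):
    """Norm of the component of z perpendicular to M = span(e)."""
    return np.linalg.norm(z - (e @ z) * e)

for _ in range(1000):
    z = rng.normal(size=n)
    for sigma in (lambda v: np.maximum(v, 0.0),            # ReLU
                  lambda v: np.where(v > 0, v, 0.2 * v)):  # leaky ReLU, slope 0.2
        h = sigma(z)
        # the perpendicular component never grows under these activations
        assert perp_norm(h) <= perp_norm(z) + 1e-9
print("perpendicular component never increased")
```

Note that leaky ReLU with slope $a\in(0,1)$ can be written as $a\,{\bf z} + (1-a)\,\mathrm{ReLU}({\bf z})$, so its case follows from the ReLU case by the triangle inequality, consistent with the unified treatment in Propositions 3.2 and 3.3.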
------ **3: P5, L.196: ${\bf e}\rightarrow {\bf e}\_1$ P6, L.209: What is the value of $\alpha$ of $\sigma_{\alpha}$?** **Response:** L196: For notational simplicity, we denote ${\bf e}_1$ by ${\bf e}$, as pointed out in L193-194. We are happy to denote it as ${\bf e}_1$. L209: $\alpha$ is a smoothness control parameter, and we vary this parameter from -1.5 to 1.5. ------ **4. I recommend evaluating whether SCT has bad effects when the model does not cause oversmoothing.** **Response:** Thank you for your suggestion. Indeed, we have evaluated the effect of SCT on GCNII and EGNN - two state-of-the-art GCN-style models that do not suffer from over-smoothing. Our numerical results show that SCT can further improve the performance of these models. Though GCNII and EGNN do not suffer from over-smoothing, the node features learned by these two models do not necessarily have a good balance between smooth and non-smooth components that is optimal for node classification. In fact, we can improve the performance of these models by letting the model automatically learn a good balance between smooth and non-smooth features. On the one hand, it is crucial to avoid over-smoothing for node classification. On the other hand, as noticed empirically by Oono & Suzuki (ICLR, 2020), the ratio between smooth and non-smooth features is highly correlated with the classification accuracy. Based on this empirical observation and our established geometric insights, we have proposed SCT to let GCN-style models automatically learn features with a desired smoothness to improve node classification. ------ Thank you for considering our rebuttal. We appreciate your feedback and are happy to address further questions on our paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for responding to my review comments. Their responses answered my questions appropriately. So, I want to keep my scores. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for considering our rebuttal.
Rebuttal 1: Rebuttal: Dear reviewers, We appreciate your thoughtful reviews and valuable feedback, which have helped us significantly improve the paper. We thank the reviewers for their praise of the originality, quality, clarity, and significance of our work. We are encouraged that reviewers found that our proposed approach is well-founded in theory and furthers the understanding of over-smoothing in GNNs. Moreover, numerical results show that the proposed method is effective for various datasets and models, showing its versatility. We address some common comments from reviewers in this general response, and we provide additional results in the rebuttal PDF file. --- **1. The computational overhead of SCT** **Response:** The overhead introduced by SCT is not very significant compared to the entire cost of training and deploying GNN-style models. Notice that we can avoid performing eigendecomposition by using the fact that a basis of the space $\mathcal{M}$ is given by the indicator functions of the connected components of the graph; see Lines 110-114. Therefore, the problem reduces to finding the connected components of the graph, and an efficient way to identify connected components of an undirected graph is the disjoint set union (DSU) data structure. Initially, declare all the nodes as individual subsets and then traverse the graph; when a new unvisited node is encountered, unite it with the subset of the node from which it was reached. In this manner, a single component is visited in each traversal, and the time complexity is linear in the size of the graph. --- **2: Significance of the experimental results and some additional results** **Response:** We design the experiments to solidify our theoretical results and verify the effectiveness of the proposed algorithm. In particular, we have the following two purposes in mind: First, the proposed SCT can avoid over-smoothing for GCN.
Second, by learning to balance the smooth and non-smooth features, SCT can improve the performance of GCN-style models even when they do not suffer from over-smoothing. We choose two state-of-the-art GCN-style models - GCNII and EGNN - as the testbeds. To showcase the effectiveness of the proposed approach, we have comprehensively studied models with different numbers of layers on 10 benchmark datasets, including all datasets from the papers we benchmark against. In the rebuttal file, we provide a few additional numerical results to further confirm the effectiveness of our proposed approach: - We perform hypothesis testing in Table 1 to test the significance of the accuracy improvement for the cases where the accuracy does not have a large increase in absolute value. - We compare the number of parameters and the computational time of models with and without SCT in Table 2. - We show SCT can improve the performance of the model mentioned by Reviewer 5MTH in Table 3. - We further benchmark against APPNP in Table 4. - We provide more detailed results to complement the results reported in Table 2 of our paper; cf. Table 5 in the rebuttal file. - We further conduct experiments on the peptide dataset to diversify the application scenarios; cf. Table 6. - We provide results to further verify the vanishing gradient issue in training deep GCN and GCN-SCT; cf. Figure 1 in the rebuttal PDF file. ----- We are glad to answer your further questions on our submission. Regards, Authors Pdf: /pdf/341cba51c5b85b18a643e06ed23944b9645aeb52.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper deals with (over-)smoothing in Graph Neural Networks. While it was previously known that GCN-type GNNs oversmooth, this paper reexamines the case for GCN-type architectures with ReLU-type activation functions in terms of *normalized* smoothness. The authors show that the convergence behaviour of GCNs can be split into two parts, a "smooth" part and a "non-smooth" part, and that by manipulating the smooth part, one can influence the normalized smoothness of the signal. From these theoretical insights, the authors propose a new system that uses a learnable parameter that modulates the smooth part, making the model able to learn the most beneficial normalized smoothness for the problem at hand. Strengths: - The paper furthers the understanding of oversmoothing in GNNs - The proposed approach is well-founded in theory Weaknesses: - The experiments are unconvincing. - There is a slight improvement to be found in models with SCT, but this is not too surprising, as these models also have more parameters, and mostly the improvements are quite slim. - You choose GCN, GCNII and EGNN, two of which have built-in skip connections, which are known to help with oversmoothing. This also coincides with the models that cope quite well with a larger number of layers. The vanilla GCN takes a huge hit in all benchmarks apart from ogbn-arxiv. So learning the normalized smoothness of features does not actually seem to help in the case of GCN. - The other two models don't suffer from oversmoothing to begin with Technical Quality: 3 Clarity: 2 Questions for Authors: - The parameter $\alpha$ can be chosen such that the normalized smoothness does not diminish asymptotically. However, the unnormalized smoothness does; does this mean that the norm of the features $||z||$ tends towards 0? Doesn't this mean that features are also unusable in deeper layers? - Is this analysis extendable to other architectures? E.g.
GAT and other attention-based methods are also known to oversmooth, can a similar trick be applied there? - Table 5 details the hyperparameters that were tried for each model. These seem to be very many combinations. Is it correct that you tried approx. 1.2 Million hyperparameter combinations for EGNN? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations are not well-discussed. The only limitation the authors claim for this work is, that: "without this condition [that oversmoothing happens], SCT cannot ensure performance guarantees." There are no performance guarantees given for SCT. This is the only limitation discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. In what follows, we provide point-by-point responses to your comments. ------ **1. There is a slight improvement to be found in models with SCT, but this is not too surprising, as these models also have more parameters, and mostly improvements are quite slim.** **Response:** The accuracy improvement is task-dependent, and it is quite substantial for more challenging heterophilic datasets. As shown in Table 2 of our paper, the accuracy improvements are often more than 10%. In the rebuttal file, we have conducted hypothesis testing on the significance of the accuracy improvement due to the proposed SCT. The results in Table 1 of the rebuttal file confirm the statistical significance of accuracy improvement. The accuracy improvement using SCT is not because of using more parameters: 1) Table 1 shows the baseline model with SCT and 2 layers can often achieve better accuracy than the baseline model without SCT but has 4, 16, or 32 layers. This is especially true for larger graphs like Coauthor-Physics and Ogbn-arxiv. 2) Table 2 shows that the baseline model with SCT and a shallow architecture can significantly outperform the same model without SCT but with a deeper architecture. ------ **2. Learning the normalized smoothness of features does not actually seem to help in the case of GCN.** **Response:** The accuracy drop for GCN and GCN-SCT with a large number of layers is because of the vanishing gradient issue rather than over-smoothing. We investigate the vanishing gradient issue in training deep GCN and GCN-SCT by plotting the gradient norm in Figure 4 in the appendix. The drop in accuracy for deeper models does not occur for other models that have skip connections in their architectures. It is noted in the original ResNet paper that using skip connections can effectively alleviate vanishing gradients in training deep networks. 
Controlling the smoothness of node features cannot help solve the vanishing gradient issue. ------ **3: GCNII and EGNN don't suffer from oversmoothing to begin with.** **Response:** On the one hand, avoiding over-smoothing for node classification is crucial. On the other hand, as noticed empirically by Oono & Suzuki (ICLR, 2020), the ratio between smooth and non-smooth features is correlated with the classification accuracy. Based on this observation and our established geometric insights, we have proposed SCT to let GCN-style models automatically learn features with a desired smoothness to improve node classification. GCNII and EGNN are two state-of-the-art GCN-style models. We apply SCT to these models to show that learning to balance smooth and non-smooth features can improve the performance of GCN-style models. Though GCNII and EGNN do not suffer from over-smoothing, the learned node features do not necessarily have a good balance between smooth and non-smooth components that is optimal for node classification. In fact, we can improve the performance of these models by letting the model automatically learn a good balance between smooth and non-smooth features. ------ **4: Does the norm of the features $||z||$ tend towards 0? Doesn't this mean that features are also unusable in deeper layers?** **Response:** Under GCN dynamics, $||z||\rightarrow 0$ as the number of layers goes to infinity. However, it does not mean that features are unusable in deeper layers. Notice that $||z||\neq 0$ for GCN with any finite number of layers, and the classification results do not change when multiplying features by a constant. The feature norm itself does not capture the full spectrum of the smoothness of node features; the ratio between smooth and non-smooth features is an alternative smoothness notion. ------ **5: Is this analysis extendable to other architectures? E.g.
GAT and other attention-based methods are also known to oversmooth, can a similar trick be applied there?** **Response:** It has recently been proved in [1] that attention-based GNNs also suffer from over-smoothing. Specifically, [1] demonstrates that the node features converge to the same eigenspace as those in GCN. While using a different smoothness notion, they show that their measurement is equivalent to the one used by Oono & Suzuki (ICLR, 2020) and our work. Consequently, since the characterization of smoothness is the same, our technique can be applied to these frameworks to control normalized smoothness. In particular, in our analysis, $\bf z$ can be the feature obtained through any graph convolution, including attention-based methods. We will incorporate these discussions in the revision. The main reason we did not include these models in our work is that, within the scope of "controlling smoothness" (which is not the same as mitigating over-smoothing), to the best of our knowledge, we only found EGNN to be an example in this branch. This is why we included it in our comparisons. Moreover, since it is built upon GCN and GCNII, we also included these two models. [1] Wu, Xinyi, et al. "Demystifying oversmoothing in attention-based graph neural networks." NeurIPS, 2023. ------ **6: Table 5 details the hyperparameters that were tried for each model. These seem to be very many combinations. Is it correct that you tried approx. 1.2 Million hyperparameter combinations for EGNN?** **Response:** Thank you for raising this point about the clarity of Table 5. Table 4 of [2] lists the optimal hyperparameters for EGNN, and we particularly examine tuning over $c\_{max}$, $\alpha$, and $\theta$. We tune GCN using the upper section of Table 2 and use the same selection of values for GCNII but adjust $\alpha$ and $\beta$. There are only 324 combinations, and the runtime is under 60s per test.
[2] https://arxiv.org/pdf/2107.02392 ------ Thank you for considering our rebuttal. We appreciate your feedback and are happy to address further questions on our paper.
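As a small numerical supplement to response 4 above: with a bias-free linear readout, multiplying the features by any positive constant leaves the predicted class unchanged, since all logits scale by the same factor. The sketch below uses a hypothetical weight matrix `W` and feature vector `z` (our own illustration, not values from the paper); a readout with a bias term would break this exact invariance, which is one reason the smoothness ratio, rather than the raw norm, is the more informative quantity.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 64))   # hypothetical bias-free classifier head
z = rng.normal(size=64)         # hypothetical node feature

base = np.argmax(W @ z)
for c in (1.0, 1e-3, 1e-9):     # shrinking feature norm, as in deep GCNs
    # logits scale by c > 0, so the argmax (predicted class) is unchanged
    assert np.argmax(W @ (c * z)) == base
print("predicted class is invariant to positive rescaling")
```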
Summary: This paper studies how ReLU and Leaky ReLU affect the smoothness of node features in graph convolution layers. The authors demonstrate that adjusting the input projection onto the eigenspace $\mathcal{M}$ of the node feature matrix can achieve any desired normalized smoothness. Additionally, they propose a Smoothness Control Term (SCT) to enhance node classification in Graph Convolutional Networks, validated on both homophilic and heterophilic graphs. Strengths: 1. From a geometric perspective, the authors prove how ReLU and Leaky ReLU affect the smoothness of node features in graph convolution layers. 2. The experimental results validate the theory proposed by the authors. Weaknesses: Equation 5 implies that using SCT requires performing eigendecomposition. It is unclear how this paper avoids the high time complexity associated with eigendecomposition, especially when the number of nodes in a graph is very large. Technical Quality: 3 Clarity: 3 Questions for Authors: None Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review, valuable feedback, and endorsement. Regarding the pointed-out weakness, we can avoid performing eigendecomposition by using the fact that a basis of the space $\mathcal{M}$ -- the eigenspace associated with the largest eigenvalue of the message-passing matrix -- is given by the indicator functions of the connected components of the graph; see Lines 110-114. Therefore, the problem reduces to finding the connected components of the graph, and an efficient way to identify connected components of an undirected graph is the disjoint set union (DSU) data structure. Initially, declare all the nodes as individual subsets and then traverse the graph; when a new unvisited node is encountered, unite it with the subset of the node from which it was reached. In this manner, a single component is visited in each traversal, and the time complexity is linear in the size of the graph. ------ Thank you for considering our rebuttal. We appreciate your feedback and are happy to address further questions on our paper.
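A minimal union-find sketch of the procedure described above (our own simplification for illustration, not the authors' implementation): each node starts as its own subset, every edge unites the subsets of its endpoints, and the surviving roots identify the connected components whose indicator vectors span $\mathcal{M}$.

```python
def components(n, edges):
    """Connected components of an undirected graph via disjoint set union."""
    parent = list(range(n))

    def find(x):                       # root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:                 # unite the endpoints of every edge
        parent[find(u)] = find(v)

    comp = {}
    for v in range(n):                 # group nodes by their root
        comp.setdefault(find(v), []).append(v)
    return list(comp.values())

print(components(5, [(0, 1), (1, 2), (3, 4)]))  # [[0, 1, 2], [3, 4]]
```

With union by rank or path compression (path halving above), the total cost is near-linear in the number of nodes and edges, so no eigendecomposition is needed to obtain a basis of $\mathcal{M}$.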