SAMoSSA: Multivariate Singular Spectrum Analysis with Stochastic Autoregressive Noise
Accept (poster)
Summary: The authors propose SAMoSSA, an algorithm that combines deterministic trend estimation via mSSA with estimation of an autoregressive component of a time series. They provide error rates for trend estimation, estimation of the AR coefficients, as well as the prediction error. In addition, they consider real and simulated examples and show that the proposed method offers very competitive performance.

Strengths:
- While the time series literature is vast and well-developed, it does appear that theoretical results analyzing the effect of trend estimation on AR estimates and the prediction error were quite limited. Detrending a time series and then modeling using ARMA or VAR is a common routine, so this is a practically important problem.
- The derived error bounds appear to be novel.
- The paper is well-written. Assumptions and implications of theorems are for the most part explained well.

Weaknesses:
- The authors could do a more thorough literature review when it comes to inference for trends. For example, there is a statistics literature on testing for trends. See Chen and Wu (2019): Testing for Trends in High-Dimensional Time Series and references therein.
- While it is a nice result, the implications for real-world data are questionable due to the conditions on the trend as well as the assumed autoregressive structure.

Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The more general ARMA model is first discussed in the introduction. What challenges does estimating the MA components pose for the analysis?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do acknowledge some limitations of the studied framework, including assuming stationarity of the stochastic component.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and valuable questions. Below we address the specific questions and comments they raise.

> The more general ARMA model is first discussed in the introduction. What challenges does estimating the MA components pose for the analysis?

Thank you for this great question. If an ARMA process is stable, then it can be represented as an infinite MA process whose coefficients are absolutely summable. Hence, we believe that the non-stationary deterministic component can be learned (with a bound similar to the one in Theorem 4.1). However, more careful analysis is needed to determine how well we can learn the ARMA parameters and how well we can forecast. This is indeed an interesting avenue for future work. We will add a discussion about ARMA in the revision.

> While it is a nice result, the implications for real-world data are questionable due to the conditions on the trend as well as the assumed autoregressive structure.

We believe that the model we provide is rich. In terms of the deterministic part, many standard functions that model time series dynamics satisfy Assumptions 2.1-2.2 either exactly or approximately. Indeed, as [3] shows, these functions include any finite sum of products of harmonics, low-degree polynomials, and exponential functions. In terms of the stochastic part, it is worth noting that, in theory, stationary stochastic processes can be approximated by AR processes, suggesting that AR processes are also quite rich. However, there are valid concerns about the limitations we set in our model. In future work, we aim to also explore settings where the stochastic part is:
1. an ARMA process;
2. an integrated process (e.g., ARIMA), as we discuss in the limitations.
These are indeed great extensions to further increase the richness of the model. That being said, the model we are considering now is indeed rich. The improvements of SAMoSSA over mSSA and other baselines, as well as the analysis we provide in the rebuttal, strongly suggest the richness and practicality of our model. We again thank the reviewer for their comments and constructive feedback. We hope the reviewer will take these clarifications into account in their revised scores.

---
Rebuttal Comment 1.1:
Comment: Thank you for your response. It is interesting to hear about the potential extension to ARMA models. While there are results stating that even non-stationary processes may be approximated well by AR processes with growing order under certain conditions, it is likely that certain analyses with autoregressive models cannot be extended using such arguments. Nonetheless, I believe that this is a nice contribution, and in light of the authors' comments during the rebuttal/discussion phase, I will raise my score by one point.
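As an editorial illustration of the model class discussed in this exchange (not the authors' code; all names and parameter values are invented for the sketch), the following numpy snippet generates a series $y(t) = f(t) + x(t)$ with a harmonic-plus-trend deterministic part and stationary AR(1) noise, then shows that once the deterministic part is removed, a one-lag OLS regression recovers the AR coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20_000
t = np.arange(T)

# Deterministic part: a harmonic plus a slow trend, the kind of
# component covered by Assumptions 2.1-2.2.
f = np.sin(2 * np.pi * t / 50) + 0.5 * (t / T)

# Stochastic part: a stationary AR(1) process x(t) = 0.6 x(t-1) + eps(t).
beta = 0.6
x = np.zeros(T)
for i in range(1, T):
    x[i] = beta * x[i - 1] + rng.normal(scale=0.5)

y = f + x  # the observed series

# After (here: exact) detrending, one-lag OLS recovers the AR coefficient.
resid = y - f
beta_hat = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
print(beta_hat)  # close to 0.6
```

Under the ARMA extension discussed above, x(t) would carry an additional MA component, and this one-lag OLS step alone would no longer suffice.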
Summary: This paper proposes SAMoSSA, a two-stage procedure that effectively handles mixtures of deterministic non-stationary and stationary AR processes with minimal model assumptions. The authors analyze SAMoSSA's ability to estimate non-stationary components under stationary AR noise, the error rate of AR system identification via OLS under observation errors, and a finite-sample forecast error analysis.

Strengths:
1. This paper is well-written and easy to follow.
2. The theoretical results are solid.

Weaknesses:
1. The theoretical contributions are somewhat incremental.
2. The experiments are insufficient.

Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors:
1. The biggest difference between this paper and [3] lies in the different noise setting, i.e., this paper replaces the i.i.d. noise with stationary AR noise. However, the theoretical derivation in this article is almost the same as that in [3]. What technical difficulties did the authors encounter during the proof process?
2. How limited are Assumptions 2.1 and 4.2 in practice?
3. In the experiments, more baselines are needed.
4. How can one verify that the dataset (especially the real dataset) satisfies the assumptions?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and constructive feedback. Below we address the specific questions and comments they raise.

> The biggest difference between this paper and [3] lies in the different noise setting, i.e., this paper replaces the i.i.d. noise with stationary AR noise. However, the theoretical derivation in this article is almost the same as that in [3]. What technical difficulties did the authors encounter during the proof process?

The different noise setting raised the following three difficulties:
- **Establishing spectral properties of AR noise.** Given that the noise is not i.i.d., both reconstructing and forecasting the deterministic part ($f_n(t)$) using mSSA required establishing an upper bound on the operator norm of the "Page" matrix of AR processes; a result that may be of interest in its own right (see Lemma A.2).
- **Analysis of parameter identification of AR processes under arbitrary noise.** To establish the out-of-sample forecasting results for the observations $y_n$, we had to establish a finite-sample bound for forecasting both $f_n$ and $x_n$ (whereas in the i.i.d. noise setting, $f_n$ was the only goal). This requires learning the AR parameters (system identification) under arbitrary and bounded observation noise.
- **Dependence between learned parameters and future noise terms.** The forecasting analysis in [3] benefits from the i.i.d. assumption. For example, the fact that $\hat{\beta}$ is independent of future noise terms $x_n(t), t > T$ is key to canceling a few of the terms when proving the forecasting error theorem.

> How limited are Assumptions 2.1 and 4.2 in practice? How can one verify that the dataset (especially the real dataset) satisfies the assumptions?

Assumption 2.1 implies that if you construct a matrix $M$ where $M_{ij} = f_i(j)$, then rank($M$) $\leq R$. We would argue that this assumption is actually common in high-dimensional time series analysis (see [1]). Indeed, TRMF, and also mSSA, have been shown to perform very well in practice on a variety of high-dimensional multivariate time series, which suggests that the assumption is not limiting in practice.

To verify whether the spatio-temporal model we assume (Assumptions 2.1 and 2.2) holds in practice, one can use the diagnostic test for the spatio-temporal model furnished in [2]. This test verifies whether mSSA is likely to succeed based on the spectral properties of the observed matrix. Specifically, the test measures the effective rank of the Page matrix associated with the multivariate time series with parameter $L \sim \sqrt{NT}$. If the effective rank does not scale much slower than $L$, then mSSA is unlikely to be effective. We will add a reference and discussion of this test in the revised version.

Assumption 4.2 will hold in practice if the underlying deterministic part of the time series (i.e., $f_i(\cdot)$) does not change, assuming that we have observed enough samples to capture the time series' complexity. We believe this assumption is both necessary and natural for our setting. Note that typically, to establish generalization error, modern statistical estimators assume the data-generating process to be i.i.d. Herein, and in line with the work in [3], we make a less restrictive assumption and rely on a purely linear-algebraic condition that is natural for the proposed model.

> In the experiments, more baselines are needed.

As other reviewers correctly point out, the theoretical characterization of multi-stage learning algorithms in this setup is missing in the literature, and this is the main focus and deliverable of our paper. That said, based on the collective feedback from reviewers, we added a new baseline to our experiments (DeepAR, see attached pdf), and we will attempt to add TRMF [1] in our revised manuscript. Interestingly, DeepAR only outperforms SAMoSSA on the traffic dataset.

We again thank the reviewer for their comments and constructive feedback. We hope the reviewer will take these clarifications into account in their revised scores.

**References**
[1] Yu, Hsiang-Fu, Nikhil Rao, and Inderjit S. Dhillon. "Temporal regularized matrix factorization for high-dimensional time series prediction." Advances in Neural Information Processing Systems 29 (2016).
[2] Agarwal, Anish, Abdullah Alomar, and Devavrat Shah. "On multivariate singular spectrum analysis and its variants." ACM SIGMETRICS Performance Evaluation Review 50.1 (2022): 79-80.
[3] Agarwal, Anish, Devavrat Shah, and Dennis Shen. "On model identification and out-of-sample prediction of principal component regression: Applications to synthetic controls." arXiv preprint arXiv:2010.14449 (2020).
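To make the diagnostic concrete, here is an editorial sketch (the exact test in [2] may differ) using a common effective-rank proxy, the smallest number of singular values capturing 90% of the squared spectral energy, applied to a stacked Page matrix with $L \sim \sqrt{NT}$:

```python
import numpy as np

def page_matrix(series, L):
    """L x (T//L) Page matrix: column j holds series[j*L:(j+1)*L]."""
    T = (len(series) // L) * L
    return series[:T].reshape(-1, L).T

def effective_rank(M, energy=0.9):
    """Smallest k whose top-k singular values capture `energy` of the
    squared spectral mass (one common proxy for effective rank)."""
    s = np.linalg.svd(M, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy)) + 1

rng = np.random.default_rng(1)
N, T = 5, 2_500
L = int(np.sqrt(N * T))  # L ~ sqrt(NT), as in the diagnostic

t = np.arange(T)
series = []
for n in range(N):
    # Shared harmonic (low-rank across channels) plus AR(1) noise.
    x = np.zeros(T)
    for i in range(1, T):
        x[i] = 0.5 * x[i - 1] + rng.normal(scale=0.3)
    series.append((n + 1) * np.sin(2 * np.pi * t / 40) + x)

# Column-wise stacking of the per-channel Page matrices.
P = np.hstack([page_matrix(s, L) for s in series])
# An effective rank far below L suggests mSSA-style methods can succeed.
print(effective_rank(P), L)
```

For a series with no low-rank structure, the effective rank would instead grow with $L$, which is the failure mode the test is meant to flag.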
Summary: This paper proposes a two-stage approach based on multivariate Singular Spectrum Analysis (mSSA) to estimate the non-stationary components of a time series in the presence of correlated stationary AR noise, which is subsequently estimated from the residual time series. Theoretical results on the performance of the algorithm in this novel setting are established, along with a finite-sample forecasting consistency bound. Empirical results demonstrate significant improvements in forecasting performance due to identification of the AR noise structure, across various benchmark datasets.

Strengths: mSSA allows estimation of non-stationary deterministic components without domain knowledge or fine-tuning; however, it cannot handle additive correlated stationary noise. The paper deals with the important problem of estimating non-stationary deterministic components in the presence of correlated stationary noise, through a unified approach via mSSA. Theoretical results for mSSA are established beyond the i.i.d. noise setting (specifically with AR noise). The paper also provides theoretical results on the out-of-sample forecasting error for the proposed two-step algorithm. Overall, I found the contribution to be sufficiently novel and of high quality, addressing a crucial gap in the literature.

Weaknesses: Overall, the paper is well-written but is difficult to follow in some parts and is lacking in conveying the key ideas. For example, in the presentation of the algorithm (Section 3): the details of the algorithm are presented well; however, the main idea behind the approach is missing. A few sentences to convey the big picture behind the steps would have been very helpful.

Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
The abbreviation SAMoSSA is used in the abstract and introduction without being explicitly defined (I can guess what it is through the title but not sure why the order is reversed). Can you please state this explicitly?
Assumption 2.1: Please provide some intuitive reasoning behind this assumption and what you mean by `fundamental' time series.
In Assumption 2.2, L<=T. Why does it become L<=sqrt(T) in line 168 on p.4?
Section 3: The paper mentions that the proposed algorithm builds on the work of A. Agarwal, et al. [3]. It is not clear how the algorithm in Section 3 is different from what exists in the literature; it would be helpful to state this clearly.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: Yes, limitations of the approach are included in the final section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and constructive feedback. Below we address the specific questions and comments they raise.

> ... A few sentences to convey the big picture behind the steps would have been very helpful in the algorithm section.

Thank you, we will revise the algorithm description in the revised manuscript to convey the key ideas.

> The abbreviation SAMoSSA is used in the abstract and introduction without being explicitly defined (I can guess what it is through the title but not sure why the order is reversed). Can you please state this explicitly?

Thank you for this question. We think of SAMoSSA more as a pseudo-acronym than as an abbreviation. As you may have surmised, "SAMoSSA" is a combination of the terms "Stochastic", "Autoregressive", and "Multivariate Singular Spectrum Analysis" (mSSA).

> Assumption 2.1: Please provide some intuitive reasoning behind this assumption and what you mean by `fundamental' time series.

Another way to state this assumption is: $f_n, \forall n \in [N]$, is such that if you construct a matrix $M$ where $M_{ij} = f_i(j)$, then rank($M$) $\leq R$. That is, you can factorize the matrix $M$ into "channel/time series" factors and temporal factors. These $R$ temporal factors (in $\mathbb{R}^{R\times T}$) are what we call fundamental time series. This assumption, while worded differently, is quite common in high-dimensional time series analysis (e.g., see [1]).

> In Assumption 2.2, L<=T. Why does it become L<=sqrt(T) in line 168 on p.4?

The $L\leq\sqrt{T}$ condition in the algorithm is used to simplify the exposition of the analysis but is not crucial for the results. More precisely, it is included to guarantee that the number of rows in the Page matrix is fewer than the number of columns. For consistency, we will fix the wording of Assumption 2.2 to be $L\leq\sqrt{T}$.

> The paper mentions that the proposed algorithm builds on the work of A. Agarwal, et al. [3]. It is not clear how the algorithm in Section 3 is different from what exists in the literature; it would be helpful to state this clearly.

We will revise the algorithm section to make this clearer. In brief, the first three steps in our algorithm (see Figure 1) are the same as in [3], but steps 4 and 5 are different. In particular, SAMoSSA, in addition to imputing and forecasting $f(\cdot)$, also learns the AR process $x(\cdot)$ and exploits that learned structure to forecast $x(t)$ for $t>T$. This additional step, as illustrated in the empirical experiments, helps increase forecast accuracy. We would like to note, though, that a few technical challenges arise in the analysis with our model and algorithm compared to that of [3]. Specifically:
- Given that we assume the noise process to be an AR process, most of the analysis presented in [3] breaks down. Specifically, we now have to (1) establish spectral properties of the Page matrix of the correlated AR noise, a result that may be of interest in its own right (see Lemma A.2); and (2) establish a forecasting error bound when the learned parameters (i.e., $\hat{\beta}$) and the stochastic terms (the $x_i$'s) are correlated.
- Further, as we attempt to learn the AR structure in the stochastic component, we had to improve over current results in system identification to accommodate the case of arbitrary and bounded observation noise (due to the two-stage nature of the algorithm). In that sense, ours can be viewed as a robust generalization of previous work in AR system identification.

We again thank the reviewer for their comments and constructive feedback.

**References**
[1] Yu, Hsiang-Fu, Nikhil Rao, and Inderjit S. Dhillon. "Temporal regularized matrix factorization for high-dimensional time series prediction." Advances in Neural Information Processing Systems 29 (2016).
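A schematic numpy sketch of the two-stage idea described above (editorial and univariate for brevity; the paper's actual algorithm operates on the stacked Page matrix and includes imputation and forecasting steps): a truncated SVD of the Page matrix estimates the deterministic component, and one-lag OLS on the residual series estimates the AR coefficient.

```python
import numpy as np

rng = np.random.default_rng(2)
T, L = 10_000, 100

# Observed series: harmonic deterministic part plus AR(1) noise.
t = np.arange(T)
f = 2 * np.sin(2 * np.pi * t / 60)
x = np.zeros(T)
for i in range(1, T):
    x[i] = 0.7 * x[i - 1] + rng.normal(scale=0.4)
y = f + x

# Stage 1 (mSSA-style): rank-truncated SVD of the Page matrix
# estimates the deterministic component.
P = y.reshape(-1, L).T                  # L x (T//L) Page matrix
U, s, Vt = np.linalg.svd(P, full_matrices=False)
k = 2                                   # a single harmonic's Page matrix has rank 2
f_hat = ((U[:, :k] * s[:k]) @ Vt[:k]).T.reshape(-1)

# Stage 2: one-lag OLS on the residual series estimates the AR coefficient.
r = y - f_hat
beta_hat = (r[:-1] @ r[1:]) / (r[:-1] @ r[:-1])
print(beta_hat)  # close to the true 0.7
```

A forecast of y(T+1) would then combine the extrapolated deterministic part with beta_hat times the last residual, which is the extra step SAMoSSA adds over [3].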
Summary: This is a comprehensive work on a new extended variant of multivariate Singular Spectrum Analysis (mSSA), which manages to handle time series with deterministic trend/seasonality plus stationary AR components, with rigorous theoretical guarantees. The algorithm is a natural extension of the variant of mSSA using the Page matrix. It further provides guarantees on the bound of the estimation error of non-stationary components, the bound of the AR parameter estimation error, and the bound of the out-of-sample forecasting error. And its superior performance is shown in experiments on four datasets.

Strengths: The extension of the algorithm follows naturally by adding the second step of AR estimation. It is impressive that this paper manages to show all three of these bounds on estimation/forecasting errors. What is surprising is that, for such a two-stage approach, the bound on the out-of-sample forecasting error could be attained.

Weaknesses: The numerical experiments could be richer to show the performance boundary of the proposed method, considering it is dealing with a general signal composed of the non-stationary plus the AR components. The types of non-stationary components may affect the performance; in particular, instead of complicated ones, simple deterministic components that could also be captured by (close to marginally stable) AR parameters may raise issues. And the signal-to-noise ratio, the intensity of the AR process compared to the deterministic part, etc. may also be tested in experiments. All these simulations may help us further understand the strengths and weaknesses of the proposed method. The reason why the reviewer expects more numerical testing comes from concerns about the power of the truncated SVD of the Page matrix for approximately splitting the deterministic/non-stationary and the AR components. If this step fails completely, the whole would fail. We would like to know its performance boundary and reliability. Another concern may be taken into account by the area chair: this paper is more typical of the fields of econometrics or statistics. The reviewer is not sure if it fits NeurIPS well.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Besides the questions in the "Weaknesses" part, there could be more deserving the authors' attention.
1. Is it correct that we consider such a simplified multivariate time series modelling problem that each channel/variable (i.e., y_n) is driven by an independent AR process (x_n)? Together with Assumption 2.1, we can actually model each channel signal y_n separately. This also allows the stacking of Page matrices for an extension from the univariate to the multivariate case.
2. Following the previous question, the analysis of SAMoSSA for the multivariate case has nothing essentially different from the univariate case, right? Maybe we missed some technical challenges.
3. In the proof of the out-of-sample forecasting error bound, what is the particular issue that your problem has compared to the analysis in [3]?
4. For time-series forecasting, there have been plenty of SOTA methods using deep learning (like Informer, PatchTST, etc.). We are curious about the comparative results. Indeed, yours may be less accurate than DL SOTA, while yours has additional explainability. The performance gap might tell the cost of explainable modelling.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: We do not see any potential negative societal impact of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and valuable questions. Below we address the specific questions and comments they raise.

> The numerical experiments could be richer ... if this step fails completely, the whole would fail. We like to know its performance boundary and reliability.

We agree that it is important to characterize the performance of the algorithm under different settings. However, we maintain that our analytical approach provides a deeper understanding of the algorithm's behavior compared to adding more numerical experiments, which can only cover a limited number of scenarios. For example, although it can be conveyed more explicitly, the analysis we provide helps characterize the performance in terms of the signal-to-noise ratio. In particular, we capture the noise effect through the dependence on the term $\sigma_x$ in the various bounds; on the other hand, the signal (for which one can use the sum of the first $k=RG$ singular values of the stacked Page matrix of $f$ as a reasonable estimate) is assumed to obey a lower bound, as the balanced spectra assumption states. We will add a discussion of how the various bounds can be stated as a function of the signal-to-noise ratio, as suggested. Thank you for the great suggestion.

> This paper is more typical in the fields of econometrics or statistics. The reviewer is not sure if it fits NeurIPS well.

While we acknowledge the reviewer's concerns, we respectfully disagree that our paper does not fit NeurIPS. Time series papers, such as the examples cited in references [1-4], have been prominently featured at NeurIPS, demonstrating a range of technical flavors relevant to our work. [1] in particular is a paper of a very similar flavor (about mSSA for change point detection and of similar technical depth).

> Is it correct that we consider such a simplified multivariate time series modelling problem that each channel/variable (i.e., y_n) is driven by an independent AR process (x_n)? Together with Assumption 2.1, we can actually model each channel signal y_n separately. This also allows the stacking of Page matrices for an extension from the univariate to the multivariate case. Following the previous question, the analysis of SAMoSSA for the multivariate case has nothing essentially different from the univariate case, right? Maybe we missed some technical challenges.

The analysis of SAMoSSA encompasses studying both the deterministic and stochastic components of the time series. For the deterministic part, the analysis of the multivariate case has its own challenges compared to that of the univariate case (i.e., SSA), and it characterizes how one can exploit Assumption 2.1 to learn across different time series, which results in the better scaling (w.r.t. $N$). For the stochastic part, given that we assume that each channel is driven by its own stochastic process, the analysis of the AR parameter identification for the multivariate case is the same as for the univariate case. Considering a VAR-like model (or others), where there exists some dependence between the stationary processes $x_1, \dots, x_N$, is indeed an interesting direction for future work.

> In the proof of the out-of-sample forecasting error bound, what is the particular issue that your problem has compared to the analysis in [3]?

The difference between our setting and that of [3] is the noise setting, which raises the following difficulties when establishing the results of Theorem 4.4:
1. **Spectral properties of AR noise.** Given that the noise is not i.i.d., both reconstructing and forecasting the deterministic part ($f_n$) required establishing an upper bound on the operator norm of the Page matrix of AR processes; a result that may be of interest in its own right (see Lemma A.2).
2. **Analysis of parameter identification of AR processes under arbitrary noise.** To establish the out-of-sample forecasting results for the observations $y_n$, we had to establish finite-sample bounds for forecasting both $f_n$ and $x_n$ (whereas in the i.i.d. setting, $f_n$ was the only goal). This requires learning the AR parameters (system identification) under arbitrary and bounded observation noise (due to the reconstruction error).
3. **Dependence between learned parameters and future noise terms.** The forecasting analysis in [3] benefits from the i.i.d. assumption, as learned parameters remain independent of future "noise" terms $x_n$. For example, the fact that $\hat{\beta}$ is independent of future noise terms $x_n(t), t > T$ is key when proving the forecasting error theorem in [3].

> We are curious about the comparative results. Indeed, yours may be less accurate than DL SOTA, while yours has additional explainability. The performance gap might tell the cost of explainable modelling.

Based on the collective feedback from reviewers, we added a new baseline to our experiments (DeepAR, see attached pdf), and we will attempt to add TRMF in our revised manuscript. Interestingly, DeepAR only outperforms SAMoSSA on the traffic dataset. We again thank the reviewer for their comments and constructive feedback. We hope the reviewer will take these clarifications into account in their revised scores.

**References**
[1] Alanqary, Arwa, Abdullah Alomar, and Devavrat Shah. "Change point detection via multivariate singular spectrum analysis." Advances in Neural Information Processing Systems 34 (2021): 23218-23230.
[2] Mi, Xuelong, et al. "BILCO: An efficient algorithm for joint alignment of time series." Advances in Neural Information Processing Systems 35 (2022): 36270-36281.
[3] Liu, Yong, et al. "Non-stationary transformers: Exploring the stationarity in time series forecasting." Advances in Neural Information Processing Systems 35 (2022): 9881-9893.
[4] Wang, Zhiyuan, et al. "Learning latent seasonal-trend representations for time series forecasting." Advances in Neural Information Processing Systems 35 (2022): 38775-38787.

---
Rebuttal Comment 1.1:
Comment: Thanks for your responses and sound clarifications on the points we raised! We appreciate your detailed comments on our questions about the differences of your work from the univariate case and from [3]. The answers to Q3 are clear and highly appreciated, thanks! And the added experiments are good. We think you have soundly addressed the issues we raised. For the answers to Q1+2, the "stochastic" part is easy to understand, and we agree on it. Excuse us for one more question: we didn't really get the "deterministic" part of your answer to Q1+2, which is quoted below:

> For the deterministic part, the analysis of the multivariate case has its own challenges compared to that of the univariate case (i.e., SSA) and it characterizes how one can exploit assumption 2.1 to learn across different time series which results in the better scaling (w.r.t. N).

Could you further tell us what exactly the "challenges" are, or what is mathematically new in your presented results or proofs? Thanks a lot for your further clarification.

---
Reply to Comment 1.1.1:
Comment: We again thank the reviewer for their constructive feedback, and we're pleased to learn that our responses have been found clear and sound.

> Could you further tell us what exactly the "challenges" are, or what is mathematically new in your presented results or proofs? Thanks a lot for your further clarification.

We will address two aspects, to make sure that our contribution is appropriately communicated.

**First: Under our settings, how does the analysis for estimating the deterministic part in the multivariate case differ from the univariate case?** In addressing this, it is pivotal to highlight that the first step of the algorithm differs depending on the case in question. Specifically, in the multivariate case, the algorithm constructs a stacked Page matrix of the observations (refer to eq. (5)). Conversely, in the univariate case, only a single Page matrix is constructed. Given that, the analyses of the two algorithms will naturally be different. For example, to establish the estimation error bound in the multivariate case, we had to analyze both the rank of the *stacked* Page matrix induced by the deterministic components $f_1, \dots, f_N$ and the spectral properties of the *stacked* Page matrix induced by the stationary noise processes $x_1, \dots, x_N$. This is clearly different from analyzing the spectral properties of the Page matrix of a single component. This careful analysis of the stacked Page matrices results in a better scaling for the estimation error ($1/\sqrt{NT}$ vs. $1/\sqrt{T}$ if the univariate algorithm is applied to each time series individually).

**Second: How is our analysis for estimating the deterministic part different from that of [3]?** The analysis in [3] assumes an independent noise process. Here, we assume that the noise is a correlated stationary AR process. To establish the consistency of estimating the deterministic part, we had to establish certain spectral properties of the stacked Page matrix induced by the stationary noise processes we assumed. This is precisely what we have done. The spectral properties of such random matrices with dependent entries should be of interest in their own right.

We hope these two points address your concern. Please let us know if you have any further questions.
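The stacking described here can be illustrated with a small numpy sketch (editorial; the shapes and the R = 2 harmonic basis are invented for the example). With N channels whose deterministic parts mix two shared fundamental harmonics, the column-wise stacked Page matrix has rank at most 4 (two per harmonic), independent of N:

```python
import numpy as np

def page_matrix(series, L):
    """L x (T//L) Page matrix: column j holds series[j*L:(j+1)*L]."""
    T = (len(series) // L) * L
    return series[:T].reshape(-1, L).T

rng = np.random.default_rng(3)
N, T, L = 4, 1_200, 30
t = np.arange(T)

# R = 2 shared fundamental time series (two harmonics); each channel's
# deterministic part is a random mix of them, per Assumption 2.1.
basis = np.stack([np.sin(2 * np.pi * t / 50), np.cos(2 * np.pi * t / 80)])
weights = rng.normal(size=(N, 2))
F = weights @ basis                     # N x T deterministic components

# Column-wise stacking of the per-channel Page matrices.
stacked = np.hstack([page_matrix(F[n], L) for n in range(N)])
print(stacked.shape, np.linalg.matrix_rank(stacked))  # rank stays <= 4
```

The low rank of this wider matrix, with its N-fold increase in columns, is what drives the improved $1/\sqrt{NT}$ scaling mentioned above.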
Rebuttal 1: Rebuttal: We thank all the reviewers for constructive feedback. Here's a succinct highlight of our paper's key contributions, beyond our individual responses to reviewers: The main contribution of this paper is to showcase the effectiveness of a simple multi-stage algorithm in time series forecasting, for which we do the following: 1. To establish its effectiveness in theory, we had to extend prior work in the following ways: - We extend the analysis of mSSA in [3] to accommodate the case of correlated autoregressive noise, a prevalent noise structure in time series analysis. In particular, we establish that one can effectively estimate and forecast the *deterministic* part of the signal under correlated autoregressive noise. - We extend the analysis of AR parameter estimation in [14] to accommodate the case when the process is observed under arbitrary bounded noise. Our results can be thought of as a robust generalization of [14], which derives similar results but without any observation noise. Thus, we establish that one can effectively forecast the *stochastic* part of the signal. That is, we overcame two key challenges in analyzing the proposed multi-stage algorithm, significantly building upon recent prior work. 2. We showcase empirically, through limited but representative datasets and baselines, that SAMoSSA outperforms mSSA. This showcases the effectiveness of the multi-stage process we propose and that the model we proposed is indeed reasonable (see Figure 1 in the rebuttal for further evidence). We believe that further extensions (e.g., considering a VAR-like process for the stochastic component) can help achieve even better performance -- which we will consider in future work. Pdf: /pdf/fce9369f228519d7fbfa9d796e5fa8e7e31176a4.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper extends previous work on multivariate Singular Spectrum Analysis (mSSA) to observations with autoregressive (AR) noise. The method constructs a sliding window representation of the target univariate or multivariate time series called the Page matrix and learns the deterministic non-stationary component (with a linear model) from the singular value decomposition of this representation. The model then produces forecasts by first estimating the parameters of a linear model for the deterministic component followed by the AR parameter estimates of the AR model in the second step. The authors provide theoretical bounds for the error estimates under AR noise of this method, the AR model parameters under perturbation, and the out-of-sample forecasting error. Finally, the authors compare the performance of their method with other time series models (including the mSSA method that is the basis for the presented algorithm). Strengths: Originality From an algorithmic point of view, the proposed method is a straightforward extension of the previously developed mSSA model to incorporate autoregressive noise and use a two-step model to learn its parameters (and the parameters of the deterministic part). Of course, I think the main contribution lies in the theoretical error bound results but I am not familiar enough with the theory to judge the originality of these and will leave this to other reviewers. Quality Again, I am not an expert on the underlying theory, so I cannot judge the correctness of the theoretical results. The presented method is an interesting approach to time series forecasting and the theoretical guarantees are appealing. The empirical evaluation on synthetic data confirms the established error bounds on that synthetic example, and the paper also contains an (albeit brief) empirical evaluation with different baselines. Clarity The paper and its contribution are clearly written.
However, the paper is very much written from the Singular Spectrum Analysis point of view. I would have appreciated more background and discussion of related approaches in the classical time series literature (which I will go into in detail in Weaknesses). Significance Time series methods that perform well in practice and also have provable error bounds are not very common in the recent ML literature that has focused mostly on deep learning models that lack these guarantees. As such, the method presented in this paper is significant, particularly for high-stakes applications where provable error bounds are required. Weaknesses: The paper is brief on related work, especially on related approaches in classical time series forecasting (STL decomposition, simple exponential smoothing (SES) with drift). I think it would be useful to the reader to contrast the presented method to these approaches. Another class of related models are matrix factorization models that are not discussed either. Again, I think it would be useful to the reader to discuss this class of methods. I understand that the main focus of the paper is the derivation of the theoretical results and I appreciate the effort here. However, the quantitative evaluation is very brief and does not give much insight into the performance of the method relative to either closely related models (other than mSSA) or modern deep learning methods. I would appreciate it if the authors would at least compare their method against approaches that are conceptually or algorithmically similar. To this end, I would propose to compare against SES with drift and at least one matrix factorization method for forecasting (TRMF for example: https://dl.acm.org/doi/abs/10.5555/3157096.3157191). A comparison with deep learning baselines such as DeepAR or Fedformer (https://arxiv.org/abs/2201.12740) would also be interesting to understand how the method compares to currently used deep learning models.
Simple baselines are also missing (seasonal naive for Traffic/Electricity; the naive method proposed in Bergmeir et al., KDD 2022 (https://link.springer.com/article/10.1007/s10618-022-00894-5) for Exchange). I would also appreciate more insight on the non-stationary deterministic part learned by SAMoSSA. I’m actually wondering on the added benefit of that part on the Electricity and Traffic datasets, which are quite stationary (but this might be up to debate). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Why is there such a large improvement from SAMoSSA over mSSA for Electricity? I would appreciate any insight on why the proposed method performs much better here. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are briefly discussed (and the discussion is sufficient in my opinion). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and constructive feedback. In the following sections, we address each of the questions and comments they have raised. > However, the quantitative evaluation is very brief and does not give much insight into the performance of the method relative to either closely related models (other than mSSA) or modern deep learning methods. As the reviewer correctly points out, the main focus of this paper is to address the missing theoretical characterization of commonly used multi-stage learning algorithms. That said, based on the collective feedback from reviewers, we added a new baseline to our experiments (DeepAR, see attached pdf), and we will attempt to add TRMF in our revised manuscript. Interestingly, DeepAR only outperforms SAMoSSA in the traffic dataset. > Why is there such a large improvement from SAMoSSA over mSSA for Electricity? I would appreciate any insight on why the proposed method performs much better here. I would also appreciate more insight on the non-stationary deterministic part learned by SAMoSSA. I’m actually wondering on the added benefit of that part on the Electricity and Traffic datasets, which are quite stationary (but this might be up to debate). We thank the reviewer for this great question, as it gives us the chance to clarify our results and highlight the advantage of our proposed method. To explain the improvement, let us first describe how the two forecast estimates (mSSA vs. SAMoSSA) are different. 
In mSSA, assuming the univariate case, the forecast estimate is $$\hat{y}(t+1) = \hat{f}(t+1) = \sum_i \hat{\beta}_i y(t+1-i)$$ Whereas in SAMoSSA, the forecast estimate is (using the same $\hat{\beta}$ above) $$\hat{y}(t+1) = \hat{f}(t+1) + \hat{x}(t+1) = \sum_i \hat{\beta}_i y(t+1-i) + \sum_i \hat{\alpha}_i \hat{x}(t+1-i)$$ That is, mSSA as described in [3] overlooks any potential structure in the stochastic process $x(\cdot)$ as it assumes an i.i.d. mean-zero noise process, while in SAMoSSA the structure of $x(\cdot)$ is captured through the learned AR process. Given that, the difference in performance would be attributed to the structure in the (estimated) stochastic process $\hat{x}(\cdot)$. That is, if there is an AR structure in $\hat{x}(\cdot) = y(\cdot) - \hat{f}(\cdot)$, then we expect SAMoSSA to perform better. Interestingly, we indeed find this to be the case -- in the electricity dataset, the partial autocorrelation coefficient of $\hat{x}(\cdot)$ at lag $1$ is significant (we focus on lag $1$ for brevity and since it is the highest coefficient on average). In particular, Figure 1 in the attached PDF shows that on average (across the different univariate time series in the electricity dataset), the partial autocorrelation coefficient at lag $1$ equals $0.2$. We see a similar but weaker partial autocorrelation in the traffic dataset, with an average partial autocorrelation coefficient (at lag 1) of $0.1$. This could also explain why the improvement in the traffic dataset is relatively smaller. > The paper is brief on related work, especially on related approaches in classical time series forecasting (STL decomposition, simple exponential smoothing (SES) with drift). I think it would be useful to the reader to contrast the presented method to these approaches. Another class of related models are matrix factorization models that are not discussed either. Again, I think it would be useful to the reader to discuss this class of methods.
We thank the reviewer for their great suggestions. We agree that the related work should discuss both topics and we will add them in the revised version. We again thank the reviewer for their comments and constructive feedback. We hope the reviewer will take these clarifications into account in their revised scores. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response, additional experiments and additional discussion of the related work in the revised manuscript. I am increasing my score. I still would like to ask the authors to consider SES with drift as a baseline in the revised paper. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our efforts and the constructive suggestions. We recognize the potential value of SES with drift as a baseline. We will duly consider your suggestion as we finalize our paper.
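The lag-1 partial autocorrelation diagnostic and second-stage AR correction discussed in the rebuttal above can be sketched as follows; this is our own toy numpy illustration (variable names and the AR(1) restriction are our assumptions), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_coefficient(x):
    # Lag-1 least-squares fit; for an AR(1) process this estimates alpha,
    # and at lag 1 it coincides with the partial autocorrelation.
    x = x - x.mean()
    return float(x[1:] @ x[:-1]) / float(x[:-1] @ x[:-1])

# Toy residual series x(t) = 0.5 x(t-1) + noise, standing in for
# x_hat = y - f_hat after the first (mSSA) stage.
T = 20000
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()

alpha_hat = ar1_coefficient(x)               # close to the true 0.5
f_hat_next = 0.0                             # trend forecast from stage one
y_hat_next = f_hat_next + alpha_hat * x[-1]  # SAMoSSA-style correction
```

If the residual series had no AR structure (alpha near zero), the correction term would vanish and the two forecasts would coincide, which matches the rebuttal's explanation of where the improvement comes from.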
Summary: The paper discusses a two-stage algorithm for time series analysis, which involves estimating deterministic, non-stationary trend and seasonality components, followed by learning the residual stochastic, stationary components. The first stage involves using multivariate Singular Spectrum Analysis (mSSA) to estimate the non-stationary components, even in the presence of a correlated stationary Autoregressive (AR) component. The AR component is then learned from the residual time series in the second stage. The authors provide a finite-sample forecasting consistency bound for SAMoSSA, which is data-driven and requires minimal parameter tuning. The paper also presents empirical studies that validate the superior performance of SAMoSSA compared to existing baselines. Notably, SAMoSSA's ability to account for AR noise structure yields improvements ranging from 5% to 37% across various benchmark datasets. The authors also provide a detailed explanation of the model and assumptions used in their analysis, including the spatial and temporal structure of the time series and the properties of the AR processes. They then describe the algorithm for the univariate case and explain how it decomposes the observations into estimates of the non-stationary and stationary components. Strengths: 1. This paper assesses the importance and novelty of the proposed SAMoSSA methodology and discusses the paper's originality in the use of mSSA and AR processes to jointly learn the deterministic non-stationary and stationary stochastic components of time series. 2. The paper addresses a gap in the literature, namely the theoretical underpinning of multi-stage learning algorithms involving deterministic and stationary components. 3. The results demonstrate significant performance improvements over existing baseline approaches. Weaknesses: 1. 
The complexity of the paper could be a potential barrier, making it difficult for people without theoretical interests in the area of time series analysis to understand. The paper could provide clearer intuitions and simplifications alongside the complex definitions and theorems to make it more approachable. 2. The empirical evaluation could be slightly limited. The variety of benchmark baselines used to demonstrate the efficacy of the method could be extended. The analysis of the results should be discussed more carefully, providing insight into why SAMoSSA outperforms other models. It is unclear whether the assumptions made in the paper hold in some of the typical practical scenarios like the electricity and health datasets. 3. The paper should discuss in more depth where these assumptions come from and under what conditions they might not hold. It should also explore how the method could perform if some of the assumptions are violated. The method proposed seems tailored for a specific kind of problem and it is unclear how generalizable it is. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors should provide simpler explanations or visual aids alongside the more complex mathematical definitions and proofs to make the paper more accessible. More extensive empirical evaluation should be carried out with a broader array of datasets to prove the efficacy of the method. The impact of the violation of some assumptions should be discussed, indicating potential limitations of the proposed method. The authors should provide more discussion on how this method could be generalized or modified to tackle different types of time series. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss but do not highlight the potential limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for their positive and insightful feedback. In what follows, we address the specific questions and comments raised. > The authors should provide simpler explanations or visual aids alongside the more complex mathematical definitions and proofs to make the paper more accessible. Thank you for the feedback. While we have included some discussion and aimed to provide intuition for the various theorems/definitions, we will revise them to make them more approachable. Can the reviewer kindly help us identify particular definitions/theorems that were particularly complex and inaccessible? Thank you again! > More extensive empirical evaluation should be carried out. As the reviewer correctly points out, the theoretical characterization of multi-stage learning algorithms in this setup is missing in the literature, and this is the main focus and deliverable of our paper. That said, based on the collective feedback from reviewers, we added a new baseline to our experiments (DeepAR, see attached pdf), and we will attempt to add TRMF [1] in our revised manuscript. Interestingly, DeepAR only outperforms SAMoSSA in the traffic dataset. > The impact of the violation of some assumptions should be discussed, indicating potential limitations of the proposed method. We thank the reviewer for raising this important point. To make our contributions more practical, we will add a discussion of a diagnostic test (developed in [2]) that can verify whether the main assumptions we make are valid. This will hopefully indicate the limits of the proposed method and allow practitioners to know when to use or not use the method. The test verifies whether mSSA is likely to succeed, given the observed data. In particular, the test measures the effective rank of the Page matrix associated with the multivariate time series with parameter $L \sim \sqrt{NT}$.
If the effective rank does not scale much slower than $L$, then mSSA is unlikely to be effective. We will add a reference and discussion of this test in the revised version. > The authors should provide more discussion on how this method could be generalized or modified to tackle different types of time series. In the limitation section, we identify two areas of future work that can generalize the current method. The first is extending the model to include non-stationary stochastic processes (e.g., integrated processes). The second is considering a VAR-like model where there exists some dependence between the stationary processes $x_1, \dots, x_N$. We again thank the reviewer for their comments, constructive feedback and suggestions. We hope the reviewer will take these clarifications into account in their revised scores. **References** [1] Yu, Hsiang-Fu, Nikhil Rao, and Inderjit S. Dhillon. "Temporal regularized matrix factorization for high-dimensional time series prediction." Advances in Neural Information Processing Systems 29 (2016). [2] Agarwal, Anish, Abdullah Alomar, and Devavrat Shah. "On multivariate singular spectrum analysis and its variants." ACM SIGMETRICS Performance Evaluation Review 50.1 (2022): 79-80. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts to address most of my concerns. Based on the current version of the work as well as the discussion from other reviewers, I would like to keep my score.
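A minimal sketch of the effective-rank diagnostic described in the rebuttal above; the 90% spectral-energy cutoff is one common proxy for effective rank and is our assumption, not necessarily the exact definition used in [2]:

```python
import numpy as np

def effective_rank(M, energy=0.90):
    # Smallest k such that the top-k singular values capture `energy`
    # of the total squared spectral energy of M.
    s = np.linalg.svd(M, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy) + 1)

rng = np.random.default_rng(0)
L = 50
low_rank = rng.standard_normal((L, 2)) @ rng.standard_normal((2, L))
noise = rng.standard_normal((L, L))
# Effective rank much smaller than L: mSSA is likely to help.
# Effective rank comparable to L: mSSA is unlikely to be effective.
r_low, r_noise = effective_rank(low_rank), effective_rank(noise)
```

In practice one would run this on the stacked Page matrix with $L \sim \sqrt{NT}$ and compare the resulting effective rank against $L$, as the diagnostic suggests.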
Score-based Generative Models with Lévy Processes
Accept (spotlight)
Summary: Score-based generative models (SBGMs) generally employ Brownian motion, also known as the Wiener process, for noise injection. However, using Brownian motion in SBGMs often leads to issues such as mode collapse or slow sampling. To address these problems, the authors propose SBGMs with an isotropic α-stable Levy distribution named Levy-Ito Model (LIM). The α-stable Levy distribution exhibits a heavy-tail property, enabling the Levy-Ito Model to achieve improved mode estimation, sample diversity, and faster convergence in terms of neural function evaluations (NFEs) compared to SBGMs employing Brownian motion. For the first time, this paper proves the time-reversal formula of stochastic differential equations (SDEs) with Levy processes. It also establishes the sampling equation and presents a numerical solver known as v-Euler-Maruyama for LIM. Furthermore, the paper introduces fractional denoising score matching, which elucidates the training process of LIM. Empirically, the proposed method shows superior mode estimation, diversity in image generation and imputation, and faster convergence of LIM compared to previous SBGMs. Strengths: In contrast to the previous works, this paper presents a valuable contribution for non-Gaussian distributions where a rigorous proof of the exact time-reversal formula for stochastic differential equations (SDEs) is provided. The provided derivation establishes a solid foundation for the proposed method. The paper provides clear and comprehensive proofs and demonstrations of the reverse SDE formula, sampling equation, numerical solver (v-Euler-Maruyama), and fractional denoising score matching. These findings effectively illustrate the construction, training, and utilization of the Levy-Ito Model (LIM) in the context of sampling. Figure 2 visually compares Brownian motion and the Levy process, clearly highlighting the advantageous characteristics of the Levy process, including its notable large jumps. 
Several toy examples effectively illustrate the superior capabilities of the proposed method for mode estimation compared to previous Diffusion Models (DMs). The paper is well-written and easily readable. Weaknesses: Enhanced diversity is the primary improvement of the Levy-Ito Model over previous Diffusion Models (DMs). Conducting more experiments on mode estimation and sample diversity would further strengthen the proposed method. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The primary advantages of LIM appear to arise from employing heavy-tailed noise injections, an approach already investigated in prior works. What are the specific benefits of utilizing an isotropic α-stable distribution compared to other potential heavy-tailed distributions? Training and sampling from the ImageNet dataset with SBGMs are challenging due to its vast diversity. Considering LIM's improvement in generation diversity compared to the previous DMs, I expect that LIM will meaningfully outperform DMs on ImageNet. Although the paper lacks experiments on ImageNet, future research in this area would be intriguing. Some minor typos: - line 156: the the Wasserstein-1 -> the Wasserstein-1 - line 158: v-Euler-Maruyamais -> v-Euler-Maruyama is - It$\bar{o}$ -> It$\hat{o}$ Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing valuable and keen insights to enhance the completeness of our paper. > Question 1 > LIM, Heavy-tailed DSM [Deasy et al., 2021], and Denoising Diffusion Gamma Models [Nachmani et al., 2021] all share the advantage of a faster convergence rate for sampling compared to DDPM [Ho et al., 2020]. Additionally, both LIM and Heavy-tailed DSM excel in mode estimation for imbalanced datasets, as mentioned in [Deasy et al., 2021]. However, LIM significantly outperforms other models due to 1) its ability to find the exact drift term in the time-reversal formula, 2) precise and stable training, and 3) guaranteed convergence for reverse sampling. Since all three distributions follow a Lévy process [Dytso et al., 2018][Wang et al., 2012], the drift term for the time-reversal formula can be computed according to [Conforti et al., 2021]. For isotropic $\alpha$-stable Lévy processes, which follow symmetric and stable distributions, an exact drift term can be computed from the time-reversal formula (Theorem 4.1). Moreover, training can be stable and accurate without the need for integration, as described in Theorem 4.3. Furthermore, with the knowledge of the exact drift term, a reverse sampling formula ensuring convergence can be derived (Corollary E.1). In contrast, the drift terms used in Heavy-tailed DSM and Denoising Diffusion Gamma Models appear as an integral, making it difficult to derive an exact form. Even if integration is used during training or approximations are applied to the drift term, accumulating errors lead to inaccurate and unstable training. Moreover, when using an approximation for the drift term to make training practical, there's no assurance that the distance between the distribution $p_{\text{data}}$ of actual data and the distribution $p_{\theta}$ obtained from reverse sampling remains within a certain range.
In fact, for Heavy-tailed DSM, despite proposing a modified score function corresponding to the Generalized Gaussian distribution and providing a training method, there is no theoretical foundation proving that the modified score function converges to $p_{\text{data}}$ through reverse sampling. Similarly, Denoising Diffusion Gamma Models also fails to propose an exact drift term and presents limitations by not providing theoretical evidence for the convergence of reverse sampling using the proposed score function. In fact, the FID performance of these two methods is significantly worse than that of the existing DDPM. We will incorporate the clear distinctions from the previously mentioned existing heavy-tailed noise methods into the paper revision. > Question 2 > Utilizing ImageNet, which has a larger number of classes compared to CIFAR10, to showcase the diversity of LIM through experimental results seems to be a convincing and effective approach. We conducted performance comparisons between the Diffusion model [Song et al., 2020] and LIM using the ADM architecture [Dhariwal et al., 2021] on ImageNet 64x64. The results are summarized in the following table: | Model | FID ($\downarrow$) | Precision ($\uparrow$) | Recall ($\uparrow$) | | --- | --- | --- | --- | | Diffusion model [Song et al., 2020] | 14.23 | 0.6711 | 0.6932 | | LIM ($\alpha=1.8$) | $\textbf{12.97}$ | $\textbf{0.6782}$ | $\textbf{0.6937}$ | The experimental results confirm that LIM performs better than the Diffusion model [Song et al., 2020] in terms of FID, precision, and recall. However, the difference in recall is smaller than we expected.
This might be due to only using $\alpha=1.8$, and we can expect more improvement of LIM on ImageNet by searching over better $\alpha$ values. In the revised paper, we will further compare LIM's performance with heavy architectures like NCSN++ [Song et al., 2020] or DiT [Peebles et al., 2021], using the high-resolution dataset ImageNet 256 with varying $\alpha$. > Question 3 > We sincerely appreciate your thorough review of our paper and your detailed feedback on areas that need improvement. All the typos you mentioned have been corrected within our paper, and we will carefully inspect for any other potential errors to enhance the overall quality and completeness of the paper. Thank you once again. --- [Conforti et al., 2021] Time reversal of markov processes with jumps under a finite entropy condition (Stochastic Processes and their Applications, 2021) [Deasy et al., 2021] Heavy-tailed denoising score matching [Dhariwal et al., 2021] Diffusion Models Beat GANs on Image Synthesis [Dytso et al., 2018] Analytical properties of generalized Gaussian distributions [Ho et al., 2020] Denoising Diffusion Probabilistic Models (NeurIPS 2020) [Nachmani et al., 2021] DENOISING DIFFUSION GAMMA MODELS [Peebles et al., 2021] Scalable Diffusion Models with Transformers [Song et al., 2020] Score-Based Generative Modeling through Stochastic Differential Equations [Wang et al., 2012] Lévy Measure Decompositions for the Beta and Gamma Processes --- Rebuttal Comment 1.1: Title: Thanks. Comment: We appreciate your response. We keep our score the same.
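For readers wanting to experiment with the $\alpha$-stable noise discussed in this thread, here is a hedged numpy sketch of sampling symmetric $\alpha$-stable variates via the Chambers-Mallows-Stuck method (for $\alpha \neq 1$, $\beta = 0$); this is our own illustrative implementation, not the authors' code, and it simply makes the heavy-tail ("large jumps") behaviour visible empirically:

```python
import numpy as np

def sym_alpha_stable(alpha, size, rng):
    # Chambers-Mallows-Stuck sampler for symmetric alpha-stable noise
    # (beta = 0, alpha != 1): V ~ Uniform(-pi/2, pi/2), W ~ Exp(1).
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(0)
stable = sym_alpha_stable(1.8, 100_000, rng)
gauss = rng.standard_normal(100_000)
# The alpha-stable sample puts far more mass beyond |x| > 4 (large jumps),
# which is the heavy-tailed behaviour the forward process injects.
heavy = np.mean(np.abs(stable) > 4)
light = np.mean(np.abs(gauss) > 4)
```

Smaller $\alpha$ gives heavier tails (with $\alpha = 2$ recovering a Gaussian up to scale), which is why searching over $\alpha$ trades off tail heaviness against noise variance.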
Summary: The paper introduces the Levy-Ito Model, a novel score-based generative model (SBGM) that utilizes the isotropic $\alpha$-Levy process as perturbation noise. The authors highlight that their proposed method is the first continuous-time SBGM to incorporate a heavy-tailed process. They aim to leverage the advantages of heavy-tailed processes observed in various cases. For instance, such processes enhance convergence speed in Markov chain Monte Carlo (MCMC) methods like the Langevin algorithm. Additionally, heavy-tailed processes offer faster mode-hopping behavior compared to Gaussians. First, the authors establish the exact time-reversed stochastic differential equations (SDEs) with Levy process perturbation noise. The authors emphasize that unlike the Wiener process, whose sample paths should be continuous, the sample paths of the Levy process may be discontinuous (jumps) at multiple locations. Consequently, common differential equation techniques may not be applicable. Addressing several technical challenges associated with this SDE formulation, the paper introduces the "fractional score function," replacing the conventional score function in the drift term of the reverse process. Furthermore, the paper proposes fractional denoising score matching (fractional DSM) to approximate the fractional score in the reverse process. This approach may serve as a more general version of DSM. Additionally, the authors derive a probability flow formulation for the reverse SDE, similar to conventional SBGMs. This demonstrates that the proposed method can leverage other fast sampling techniques developed for SBGMs, such as advanced integrators. Finally, the paper presents several experiments to showcase the proposed methods' effectiveness. For instance, mode estimations and sample diversities are evaluated to compare the proposed method to the previous Wiener process-based approaches. 
Strengths: Overall I find that the writing is clear, concise, and well-structured, making it easy for readers to follow the arguments and understand the key points. The authors have succeeded in providing a fresh perspective on the topic, shedding new light on the subject matter and offering valuable contributions to the machine learning communities. Weaknesses: In light of the overall quality of the paper, the experimental parts would benefit from further refinement. First of all, the discussion on convergence rate can be improved. For example, regarding the results in Figure 7, the convergence rate may be influenced by various aspects, such as network architectures, noise scheduling, or the quality of the trained models. In this aspect, analysis on toy datasets would be more beneficial to provide better evidence. Secondly, there is room for improvement in the experiments related to Figure 3. It is important to note that FID utilizes classifiers trained on ImageNet datasets. This raises questions about the suitability of FID in analyzing the results for mixtures of Gaussians. I believe a distance metric like MMD may be more appropriate for this purpose. Lastly, there should be more in-depth discussions about the experiments on image generation benchmark datasets. In Table 1, the transition from Wiener to Levy process in the CIFAR-10 results shows minimal improvement in "recall." However, considering that other values exhibit more significant variations due to network architecture, it becomes necessary to determine the significance of this difference. It is also worth exploring whether the observed differences could be attributed to variations in noise scheduling. Similar patterns also emerge in the results for the CelebA dataset, warranting further analysis and discussion. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It needs to be clarified what evidence supports the claim that conventional SBGM exhibits mode collapsing, as mentioned in lines 28-32 of the paper. - In lines 213-214, it is stated that noise scheduling was performed using the cosine function. However, later in the paper, it is mentioned that VP (presumably referring to a different noise scheduling method) was used. This inconsistency needs to be clarified. - Section 5.2 should precede Section 5.1 to better explain the arguments the paper aims to address. By presenting Section 5.2 before Section 5.1, the authors can provide a clearer context and explanation of the differences they intend to discuss. This reordering would enhance the coherence and logical flow of the paper. - The positioning of the images in Figure 5 and the corresponding columns in the table appear to be reversed, leading to confusion. The inconsistency between the image positions and table columns can create difficulties in understanding and interpreting the data. - There is a mismatch between the order in Tables 2 & 3 and their actual sequence in the document. This inconsistency can cause confusion and make it challenging for readers to locate and reference the correct tables. - Regarding the differences presented in Tables 2 & 3, it is unclear whether they can be considered statistically significant. Further statistical tests or discussion of the observed variations are necessary to determine the significance of the findings. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to the comments provided in the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Weaknesses > We sincerely appreciate your detailed suggestions for possible improvements. As you suggested, we will include detailed experimental results on how the convergence rate varies based on 1) architecture and 2) noise scheduling in the next paper revision. While we aimed to investigate the impact of 1) and 2) using the toy dataset, the limited dimensionality of the dataset hindered us from obtaining significant differences. Therefore, due to time and resource limitations, we regret that we could not provide detailed experimental results with real datasets in this rebuttal. However, we are happy to provide supplementary explanations regarding Figure 3. The FID in Figure 3 is calculated directly from the FID formula without utilizing the embeddings of a specific model. MMD is additionally measured directly from the samples. Below is a table summarizing the FID and MMD values measured for diffusion models and LIM: | | Diffusion models | LIM | | --- | --- | --- | | FID ($\downarrow $) | 8.312 $\pm$ 0.904 | $\textbf{0.663 $\pm$ 0.376}$ | | MMD ($\downarrow $) | 0.026 $\pm$ 0.003 | $\textbf{0.02 $\pm$ 0.002}$ | It can be observed that the distribution of generated samples $p_{\theta}$ from LIM is closer to the ground-truth distribution $p_{\text{data}}$. We will incorporate these details during the paper revision period. > Question 1 > <Mode-collapse issue> It has been pointed out that diffusion models show significant degradation in terms of fidelity and diversity when dealing with imbalanced datasets where the number of samples per class varies [Qin et al., 2023]. This issue is especially pronounced for tail classes. > Question 2 > We appreciate your feedback on the terminology that might cause confusion in our paper. 
In [Song et al., 2020], the VP-SDE formula is defined as: $dX_t = -\frac{\beta(t)}{2}X_t dt +(\beta(t))^{\frac{1}{2}}dB_t$ and in that paper, $\beta(t)$ is specifically set to a linear function $\beta(t) = (\beta_1-\beta_0)t+\beta_0$. On the other hand, we extended [Song et al., 2020]'s formula to an isotropic $\alpha$-stable Lévy process as: $dX_t = -\frac{\beta(t)}{\alpha}X_t dt +(\beta(t))^{\frac{1}{\alpha}}dL^{\alpha}_t$ The specific $\beta(t)$ in our paper is derived from the cosine schedule proposed in [Nichol et al., 2021] and tailored to LIM as $\beta(t) = -\alpha\frac{d \log(\cos(\frac{\pi t}{2}))}{dt}$. We will make sure to provide a clearer version in the revised manuscript. > Question 3 > We sincerely appreciate your thorough review of our paper. Highlighting the motivating task for LIM before introducing sample quality indeed appears far more reasonable. We believe your editorial suggestions give us guidance toward a better direction, allowing us to improve both the readability and persuasiveness of the paper. During the revision period, we will incorporate your feedback and revise the paper accordingly. > Question 4 > As you mentioned, the locations of the image and table in Figure 5 have been causing confusion. Therefore, we have rearranged them to make Figure 5 more understandable. Additionally, we will provide further detailed explanations for Figure 5 to enhance the quality of the paper. We sincerely appreciate your valuable insights. > Question 5 > Thank you for your valuable feedback. We will reposition Table 2 and Table 3 as you recommended to improve readability and clarity. > Question 6 > We sincerely appreciate your keen insights and comments on our paper. We have done our best to answer your valuable questions. 
It seems that there might have been a lack of experiments demonstrating LIM's superiority in mode estimation for multi-modal datasets, which is especially evident in Tables 2 and 3. Unfortunately, due to limited resources and time, we were unable to incorporate the feedback into the current rebuttal with actual results, and we apologize for this limitation. However, in order to validate the hypothesis that LIM can generate high-fidelity and diverse samples for imbalanced datasets, we are committed to conducting further experiments as soon as possible. We will conduct additional experiments and measure additional metrics to supplement the experimental results and address your valuable feedback. --- [Chen et al., 2022] Approximation of the invariant measure of stable SDEs by an Euler–Maruyama scheme (2022) [Nichol et al., 2021] Improved denoising diffusion probabilistic models (2021) [Qin et al., 2023] Class-Balancing Diffusion Models (CVPR 2023) [Song et al., 2020] Score-Based Generative Modeling through Stochastic Differential Equations (ICLR 2020)
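The cosine-derived $\beta(t)$ discussed in the rebuttal above has a simple closed form. As a quick sanity check (our own illustration, not the authors' code; variable names are ours), the numpy sketch below verifies numerically that $\beta(t) = -\alpha\frac{d}{dt}\log\cos(\frac{\pi t}{2})$ equals $\alpha\frac{\pi}{2}\tan(\frac{\pi t}{2})$:

```python
import numpy as np

# Sketch under stated assumptions: check the closed form of the
# cosine-derived schedule beta(t) = -alpha * d/dt log(cos(pi*t/2)),
# which differentiates to alpha * (pi/2) * tan(pi*t/2).
alpha = 1.5
t = np.linspace(0.05, 0.9, 200)  # stay away from t = 1, where tan blows up
h = 1e-6                         # central-difference step

log_cos = lambda s: np.log(np.cos(np.pi * s / 2))
beta_numeric = -alpha * (log_cos(t + h) - log_cos(t - h)) / (2 * h)
beta_closed = alpha * (np.pi / 2) * np.tan(np.pi * t / 2)

assert np.allclose(beta_numeric, beta_closed, rtol=1e-5)
```

Note that $\beta(t)$ diverges as $t \to 1$, which is why the grid stops short of the endpoint.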
Summary: Prior score-based/diffusion generative models have been defined via Brownian motion. This paper proposes a method of replacing the continuous Gaussian process with a different process depending on the characteristic exponent value. The heavy-tail property of the Lévy process allows a higher chance of making larger steps, thus inducing more diverse and faithful samples of the true distribution than its Wiener-process counterpart. The authors show that the model converges faster because the Lévy process can make large jumps during the forward and reverse processes. The resulting model displays promising qualitative and quantitative results on datasets such as CIFAR-10 and CelebA as well as on synthetic datasets. Strengths: - Overall, the paper is well written/organized with a clear explanation of the theoretical foundation. - Experiments are very well thought out and very useful in showing the advantages of utilizing a Lévy stable distribution. - Image metrics show extremely promising results (i.e., FID score). Weaknesses: - Table 5 in the appendix shows promising FID scores, but they are heavily dependent on the optimal alpha value. It would be nice to see FID results for all alpha values in Table 1, not just the optimal one. This would help us understand whether the Lévy process performs generally better (i.e., when alpha is between 1 and 2) than the baseline diffusion model (i.e., alpha equal to 2), or whether alpha is a sensitive hyperparameter that must be tuned for best results. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - As seen in Figure 3, why are diffusion models bad at faithfully learning the true data distribution? It would be nice to have more insight into the lackluster performance of diffusion models based on the Wiener-process interpretation. Theoretically, it seems like they should perform just as well as the Lévy process; could it be due to the number of steps taken by the ODE solver? 
- Can the score function learned from the Lévy noising process be compatible with other models, e.g., using this learned score function for DDIM inference? - Why doesn't the heavy-tailed distribution create instability along the diffusion model's path? Lines 53-54 state that the variation becomes smaller as it approaches the sample space, but wouldn't the heavy tail also have an adverse effect on finer pixel detailing? (I have limited knowledge of Lévy processes, so I may just have misunderstood.) (I am willing to raise my score.) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Adequately addresses all limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Weakness 1 > Thank you for your valuable feedback. We conducted additional experiments regarding $\alpha$-selection for CIFAR10 and CelebA. Below are the FID results for different values of $\alpha$: | $\alpha$ | CIFAR10 (32x32) | CelebA (64x64) | | --- | --- | --- | | 1.2 | 5.15 | 2.99 | | 1.5 | 2.86 | 1.57 | | 1.8 | 2.44 | 2.85 | | Diffusion Model [Song et al, 2020] | 2.44 | 3.21 | The proposed method's performance may differ with different $\alpha$, which means that the best $\alpha$ can vary across datasets or resolutions. Indeed, this seems intuitively natural, since low-resolution images benefit less from large jumps, while high-resolution images with multiple modalities can benefit from the exploration that large jumps provide. Within image datasets of similar resolution, there should be a reasonable range of $\alpha$, and here it is found to be between 1.5 and 1.8. Table 5 uses the DDPM architecture [Ho et al., 2020], whereas the table above employs the NCSN++ deep architecture [Song et al., 2020]. Although the overall tendency of declining FID with lower $\alpha$ values remains consistent, differences in FID itself arise when using different architectures on CIFAR10. > Question 1 > Thank you for your insightful comments. The Wiener process follows a light-tailed distribution and has a continuous path, resulting in slow convergence rates. Consequently, smaller step sizes are necessary to reach the correct sample, requiring more steps in total. Diffusion models can converge theoretically to the data distribution $p_{\text{data}}$ when a sufficiently large number of steps are taken. However, during reverse sampling, instead of using the score function directly, a score model is used to approximate it. Theoretically, the quality of the score model and the NFE determine the quality of the generated samples. 
However, since diffusion models use the score model for reverse sampling, they struggle with accurate mode estimation due to the limited exploration range of the noise during score-model training, compared to LIM. You can observe this tendency in Figure 3. Successful mode estimation can be achieved by training the score models to capture $p(c)$ for each class $c$ of $p_{\text{data}}$. To do this, the (fractional) score function of the perturbed distribution $p_t(\mathbf{x})$ should be learned. The ability of the forward process to explore the sample space effectively determines the score model's capacity for mode estimation, as the ratio between the true distribution $p_{\text{data}}(c)$ and the model's predicted distribution $p_{\theta}(c)$ for each mode is proportional to $\frac{p_{\theta}(c)}{p_{\text{data}}(c)} \propto \frac{\int_{\mathbf{x}\in\mathbb{R}^d} p_{\theta}(\mathbf{x},c) d\mathbf{x}}{\int_{\mathbf{x}\in\mathbb{R}^d}p_{\text{data}}(\mathbf{x},c)d\mathbf{x}}$ [Qin et al., 2023]. In a balanced dataset, where $p_{\theta}(\mathbf{x},c)$ is learned similarly for each $c$, the convergence of $\frac{p_{\theta}(c)}{p_{\text{data}}(c)}$ to 1 is easier to obtain. However, in the case of imbalanced datasets, the exploration range of the noised data under Brownian motion limits the frequency of learning for the minor class $c$. In contrast, the heavy-tailed and discontinuous Lévy process has a wider exploration range than Brownian motion, particularly benefiting the accurate learning of $p_{\theta}(\mathbf{x},c)$ for the minor class (Figure 4). > Question 2 > Thank you for the insightful question. We can indeed apply DDIM inference to our model. This is possible because we have derived the probability ODE represented by the fractional score function (Theorem C.1). By applying the Euler method to Theorem C.1, we arrive at a sampling formula with a structure similar to the fractional-score-function form of DDIM inference (Corollary E.2). 
When employing a pre-trained model on the CelebA dataset, the FID scores obtained by LIM-DDIM for different NFEs are presented in the table below. | NFE (FID) | 20 | 50 | 100 | 200 | 1000 | | --- | --- | --- | --- | --- | --- | | LIM-DDIM ($\alpha =1.5$) | 6.73 | 4.80 | $\textbf{3.95}$ | - | - | | DDIM | 6.64 | 5.23 | - | 4.78 | 4.88 | > Question 3 > The extended version of the forward process for the given $\alpha$ in the general VP-SDE framework is given as follows: $dX_t = -\frac{\beta(t)}{\alpha}X_tdt + (\beta(t))^{\frac{1}{\alpha}}dL^{\alpha}_t$ -(1) Here, the weak solution $X_t$ of (1) is given as $X_t = a(t)X_0 +\gamma(t)\epsilon$, where $\epsilon \sim \mathcal{S}\alpha\mathcal{S}(1)$. For this, $a(t)$ is set to the cosine schedule $a(t) = \cos(\frac{\pi}{2}t)$ independent of $\alpha$. According to the VP-SDE structure, $\tilde\gamma_{\alpha}(t)$ becomes $\tilde\gamma_{\alpha}(t) = (1-a^{\alpha}(t))^{\frac{1}{\alpha}}$. Therefore, when $\alpha_1<\alpha_2$, it holds that $\tilde\gamma_{\alpha_1}(t) \le \tilde\gamma_{\alpha_2}(t)$. In other words, LIM has sufficient ability to move from the sample space to the noise space in spite of the small noise coefficient $\tilde\gamma_\alpha(t)=(1-a^{\alpha}(t))^{\frac{1}{\alpha}}$, thanks to large jump noises. If we instead fix $\gamma(t)$ for each $\alpha$ and adjust $\tilde a_{\alpha}(t)$ to follow the VP-SDE structure in the forward process such that $\tilde a_{\alpha}(t)=(1-\gamma^{\alpha}(t))^{\frac{1}{\alpha}}$, then for $\alpha_1<\alpha_2$, it holds that $\tilde a_{\alpha_1}(t) \le \tilde a_{\alpha_2}(t)$. Therefore, as $\alpha$ decreases, a gradual degradation effect occurs for the mean of the forward process $X_t$, namely $a(t)X_0$. Regardless of whether we keep $a(t)$ fixed or fix $\gamma(t)$ for various $\alpha$ values, LIM can reach the noise space. 
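The monotonicity claim in the Question 3 answer above ($\tilde\gamma_{\alpha_1}(t) \le \tilde\gamma_{\alpha_2}(t)$ for $\alpha_1 < \alpha_2$ when $a(t)$ is the cosine schedule) is easy to confirm numerically. The sketch below is our own illustration under those stated assumptions, not the authors' code:

```python
import numpy as np

# With a(t) = cos(pi*t/2) fixed, the VP-style noise scale
# gamma_alpha(t) = (1 - a(t)**alpha)**(1/alpha) is non-decreasing in alpha,
# so a smaller alpha gives a smaller noise coefficient -- the large jumps
# of the Levy noise compensate for it, per the rebuttal's argument.
t = np.linspace(0.0, 1.0, 101)
a = np.cos(np.pi * t / 2)

def gamma(alpha):
    return (1 - a**alpha) ** (1 / alpha)

g12, g15, g20 = gamma(1.2), gamma(1.5), gamma(2.0)
assert np.all(g12 <= g15 + 1e-12) and np.all(g15 <= g20 + 1e-12)
# Endpoints agree for every alpha: gamma_alpha(0) = 0 and gamma_alpha(1) = 1.
assert np.isclose(g12[0], 0.0) and np.isclose(g12[-1], 1.0)
```

The same check applied to $\tilde a_{\alpha}(t)=(1-\gamma^{\alpha}(t))^{\frac{1}{\alpha}}$ with $\gamma(t)$ fixed confirms the symmetric claim for the mean coefficient.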
--- [Ho et al., 2020] Denoising Diffusion Probabilistic Models (NeurIPS 2020) [Qin et al., 2023] Class-Balancing Diffusion Models (CVPR 2023) [Song et al., 2020] Score-Based Generative Modeling through Stochastic Differential Equations (ICLR 2020) --- Rebuttal Comment 1.1: Comment: The authors have satisfied all my questions; I will raise my score.
Summary: This paper presents a new score-based generative model called the Lévy-Itō Model (LIM) that tackles the challenges of slow convergence in terms of the number of function evaluations (NFE) and mode collapse in diffusion models when applied to imbalanced data. The model leverages isotropic $\alpha$-stable Lévy processes. Initially, the authors derive an exact reverse-time stochastic differential equation driven by the Lévy process, and subsequently establish the corresponding fractional denoising score matching technique. The proposed generative model harnesses the advantageous heavy-tailed characteristics of the Lévy process. The experimental findings demonstrate that LIM achieves faster and more diverse sampling, while maintaining exceptional fidelity when compared to existing diffusion models across a range of image datasets. Strengths: 1. This paper investigates a novel non-Gaussian stochastic process (isotropic $\alpha$-stable Lévy processes) for injecting noise, and proves the exact time-reversal formula of SDEs driven by the Lévy process. 2. Based on isotropic $\alpha$-stable Lévy processes, this paper proposes a novel score-based diffusion model called the Lévy-Itō Model (LIM). 3. Compared to existing diffusion models, LIM offers faster and more diverse sampling capabilities while maintaining high fidelity across a range of image datasets. Weaknesses: 1. In P2-L31, the authors make claims about the slow convergence rate and mode-collapse issues of previous score-based diffusion models without providing sufficient explanation or evidence to support these claims. 2. In Figure 3, the authors don't explain the significance of and differences between the two blue clusters in each of the three subplots. Also, how is the FID calculated? 3. The related-work section provides limited information and lacks a detailed investigation of developments in the last two years. 4. In P5-L156, there are grammatical errors, such as repeated words and incorrect prepositions. 
These should be corrected for clarity and readability. 5. Indeed, the experiments seem somewhat limited as they only validate the results on low-resolution datasets such as CIFAR-10 (32x32) and CelebA (64x64). 6. In Appendix C, on P23, Equation 112 appears to be incorrect. The authors should revise the equation or provide the correct version to ensure the accuracy of the paper. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors adequately discussed the limitations of this paper. However, potential negative societal impacts are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your feedback on our paper. We have done our best to answer your keen questions. > Weakness 1 > <Slow convergence> The reason for the slow convergence of diffusion models is that Brownian motion follows a light-tailed distribution and has a continuous path. Various methods have been proposed to improve convergence while maintaining a fast sampling process and high fidelity [Liu et al. 2022]. Other efforts have been made to use distillation to reduce the number of steps [Salimans et al., 2022]. <Mode-collapse issue> It has been pointed out that diffusion models show significant degradation in terms of fidelity and diversity when dealing with imbalanced datasets where the number of samples per class varies, with mode collapse particularly affecting tail classes [Qin et al., 2023]. > Weakness 2 > A mixture of two Gaussians is the simplest form of an imbalanced dataset. In subplot (b), it can be seen that for the diffusion model, the generated samples have a ratio of 5.6:1, whereas for LIM the generated samples have a ratio of 11.1:1, which is relatively close to the ground truth. This similarity can be observed not only in the ratio but also in FID and MMD. The FID and MMD values for each model are summarized in the table below. | | Diffusion models | LIM | | --- | --- | --- | | FID$ (\downarrow )$ | 8.312 $\pm$ 0.904 | $\textbf{0.663 $\pm$ 0.376}$ | | MMD ($\downarrow $) | 0.026 $\pm$ 0.003 | $\textbf{0.02 $\pm$ 0.002}$ | FID and MMD are calculated directly, without using any embeddings, from the sets of true samples and generated samples. LIM demonstrates that the distribution $p_{\theta}$ of its generated samples is similar to the ground-truth distribution $p_{\text{data}}$. We will incorporate these insights and make the necessary revisions accordingly. 
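Computing FID "directly from the formula without embeddings," as described in the rebuttal above, amounts to evaluating the Fréchet distance between Gaussian fits of the two sample sets. The sketch below is our own hypothetical illustration, not the authors' code; for simplicity it assumes diagonal covariances so the matrix square root is elementwise, and the helper name `fid_diag` is ours:

```python
import numpy as np

# Hedged sketch: treat two sample sets as Gaussians and compute the
# Frechet distance FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*(S1*S2)^{1/2})
# directly from sample moments, with no pretrained embedding network.
# Diagonal-covariance assumption: the matrix square root is elementwise.
def fid_diag(x, y):
    mu1, mu2 = x.mean(0), y.mean(0)
    v1, v2 = x.var(0), y.var(0)  # diagonal covariance entries
    return np.sum((mu1 - mu2) ** 2) + np.sum(v1 + v2 - 2 * np.sqrt(v1 * v2))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(10_000, 2))   # stand-in for true samples
close = rng.normal(0.1, 1.0, size=(10_000, 2))  # slightly shifted generator
far = rng.normal(2.0, 1.0, size=(10_000, 2))    # badly shifted generator

assert fid_diag(real, close) < fid_diag(real, far)
assert np.isclose(fid_diag(real, real), 0.0)
```

For full covariances one would use a matrix square root (e.g. via an eigendecomposition) in place of the elementwise `np.sqrt`.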
> Weakness 3 > Diffusion models have shown performance advancements through various approaches such as incorporating guidance [Kim et al., 2023][Song et al., 2021], introducing new architectures [Peebles et al., 2023], and proposing novel training methods [Hang et al., 2023]. Additionally, various attempts to improve convergence have been proposed, such as ODE solvers [Lu et al., 2022], Fourier neural operators [Zheng et al., 2022], and distillation [Salimans et al., 2022]. Despite these advancements, diffusion models still face inherent limitations such as slow convergence rates and the mode-collapse issue on imbalanced datasets [Qin et al., 2023]. A few papers have explored the use of noise other than Brownian motion, such as Denoising Diffusion Gamma Models [Nachmani et al., 2021] and Heavy-tailed DSM [Deasy et al., 2021]. Both approaches share the common aspect of utilizing DDPM formulas and heavy-tailed distributions. Heavy-tailed DSM employs a generalized Gaussian distribution and claims to have strengths for imbalanced tasks. Denoising Diffusion Gamma Models use a Gamma distribution for noise injection and highlight the advantage of faster convergence for sampling. However, they face challenges in terms of performance compared to standard diffusion models. We will add this information and modify the related-work section accordingly. > Weakness 4 > We appreciate your feedback regarding the grammatical errors present throughout the paper. We will correct all of them and address the factors that could compromise the readability of the text. > Weakness 5 > To compare performance on a high-resolution dataset, we chose the DDPM architecture [Ho et al., 2020] and trained LIM and diffusion models [Song et al., 2020] on the CelebA-HQ dataset (256x256). We measured and compared the FID score for each model. 
| Model | FID | | --- | --- | | Diffusion model [Song et al] (NFE = 1000) | 11.87 | | LIM (NFE = 500) | $\textbf{7.76}$ | This shows that LIM outperforms diffusion models on CelebA-HQ. However, since this experiment was limited to the DDPM architecture and conducted only on CelebA-HQ, we plan to conduct additional experiments in the next paper revision. > Weakness 6 > Thank you for pointing out the inconsistencies in our proofs. We will correct the error in Equation 112 and carefully revise all other formulas to make sure everything is correct. --- [Deasy et al., 2021] Heavy-tailed denoising score matching [Hang et al., 2023] Efficient Diffusion Training via Min-SNR Weighting Strategy (2023) [Ho et al., 2020] Denoising Diffusion Probabilistic Models (NeurIPS 2020) [Kim et al., 2023] Refining Generative Process with Discriminator Guidance in Score-based Diffusion Models (ICML 2023) [Liu et al. 2022] Pseudo numerical methods for diffusion models on manifolds (2022) [Lu et al., 2022] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps (NeurIPS 2022) [Nachmani et al., 2021] Denoising Diffusion Gamma Models (2021) [Qin et al., 2023] Class-Balancing Diffusion Models (CVPR 2023) [Salimans et al., 2022] Progressive distillation for fast sampling of diffusion models (ICLR 2022) [Song et al., 2020] Score-Based Generative Modeling through Stochastic Differential Equations (ICLR 2020) [Song et al., 2021] Denoising diffusion implicit models (ICLR 2021) [Peebles et al., 2023] Scalable Diffusion Models with Transformers (2023) [Zheng et al., 2022] Fast Sampling of Diffusion Models via Operator Learning (NeurIPS 2022) --- Rebuttal Comment 1.1: Title: Response to Authors Comment: The authors have addressed my concerns, so I raised my initial rating.
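The heavy-tail argument running through this exchange (Lévy noise explores a wider range than Brownian motion) can be illustrated with a toy simulation. The sketch below is our own illustration, not the authors' code: it samples symmetric $\alpha$-stable noise via the standard Chambers-Mallows-Stuck method and compares tail mass against the $\alpha = 2$ (Gaussian) case; the helper name `sas_samples` is ours:

```python
import numpy as np

# Hedged toy illustration: draw symmetric alpha-stable (S-alpha-S) samples
# with the Chambers-Mallows-Stuck construction. For alpha = 2 this reduces
# to a Gaussian N(0, 2); for alpha < 2 the tails are polynomial, so large
# jumps are far more frequent -- the "wider exploration range" of Levy noise.
def sas_samples(alpha, n, rng):
    u = rng.uniform(-np.pi / 2, np.pi / 2, n)  # uniform angle
    w = rng.exponential(1.0, n)                # unit-mean exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(0)
n = 200_000
heavy = sas_samples(1.5, n, rng)  # heavy-tailed Levy case
gauss = sas_samples(2.0, n, rng)  # Gaussian case

tail = lambda x: np.mean(np.abs(x) > 10)
assert tail(heavy) > tail(gauss)  # many more large jumps for alpha = 1.5
```

For a Gaussian of variance 2, excursions beyond 10 are vanishingly rare, while the $\alpha = 1.5$ stable law places on the order of a percent of its mass there, matching the intuition the authors give for mode coverage on minor classes.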
NeurIPS_2023_submissions_huggingface
2,023
No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions
Accept (poster)
Summary: This paper studies online learning in adversarial MDPs where the loss functions and transition functions are chosen by a malicious adversary. While previous algorithms achieving $O(\sqrt{T})$ regret with fixed transition functions could not handle adversarial transitions, the authors propose new algorithms that can handle both adversarial losses and transitions, with regret degrading smoothly based on the degree of maliciousness $C^P$. The first algorithm achieves $O(\sqrt{T} + C^P)$ regret, where $C^P$ measures how adversarial the transitions are and can be at most $O(T)$. Second, a black-box reduction approach is introduced to remove the requirement of knowing $C^P$. The algorithm is further refined to adapt to easier environments and achieve $O(U + \sqrt{U C^L} + C^P)$ regret, where $U$ is a gap-dependent coefficient and $C^L$ represents the amount of corruption on losses. Strengths: - A variant of the UOB-REPS algorithm is suggested to achieve regret in completely adversarial environments when the adversarial transition parameter, $C^P$, is known. This is accomplished by using an enlarged confidence set with the log-barrier regularizer and introducing a novel amortized bonus term. - The requirement of knowing the adversarial transition parameter, $C^P$, is eliminated by proposing a black-box reduction approach. This approach provides the same guarantee (up to logarithmic factors) even if $C^P$ is unknown. - The algorithms are further refined to simultaneously adapt to the maliciousness of the loss functions and achieve low regret. This enables the algorithms to handle various degrees of adversarial behavior in the loss functions while maintaining their regret performance. Weaknesses: - The organization of the paper makes it difficult to follow; presenting the main algorithm in the main paper would improve clarity. 
Since the paper covers multiple algorithms, it would be helpful to focus on explaining how they differ from existing methods, the challenges they address, and the novelty they bring to the field. Providing a dedicated conclusion section would enhance the understanding of the paper by summarizing the key findings and contributions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Given that this paper is primarily focused on theoretical aspects, its direct impact on society is expected to be neutral. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your helpful feedback. Please see our response below: *** **Q:** Issues of paper's organization. **A:** Thanks for the suggestion. In the submission phase, we were unable to squeeze the algorithms and the conclusion section in the main texts because of the space limit. We will use the extra page granted in the final version to address these issues.
Summary: This paper studies online reinforcement learning in tabular MDPs when the losses and transitions can be adversarially changing from round to round. They show that one can achieve regret guarantees which are $O(\sqrt{T} + C^P)$ where $C^P$ measures the degree to which transitions are changing. Specifically they - devise an algorithm which achieves the aforementioned guarantee when the value of $C^P$ is known. - apply further reductions to get an algorithm which does not need to know the value of $C^P$. - get gap-dependent regret bounds when the value of $C^P$ is known. Strengths: - The results seem very impressive. To my knowledge this is the first paper which studies adversarial MDPs with changing transitions. The authors quantify the notion of changing transitions and prove a regret bound. - The algorithms and techniques seem very novel, and might be useful for future work. - References and connections to previous work are stated clearly. Weaknesses: - The writing is a bit unclear, specifically regarding the technical details. This paper is a very technical paper with long proofs, but I do think it would be helpful to state the algorithms (or some simplified versions of the algorithms) in the main text and relegate some of the finer details to the appendix. - Upon first reading, the proof ideas in Section 3 onwards did not make much sense, because they were trying to cover a lot of fine details, even though the overall proof sketch was not really discussed much. I'd encourage the authors to put more effort into improving readability. Typos: - Line 116: should this be $S$ instead of $X$? Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: 1. This paper considers transitions and losses which are specified ahead of time by an adversary. Do you think it is possible to consider adversarial losses/transitions which the adversary can suggest after seeing a history of the learner's decisions? (a harder setting) 2. 
What is the role of the splitting into epochs feature of the algorithm? What goes wrong if the algorithm updates the empirical transition in every round, as opposed to in every epoch? 3. Is there some intuition for why the algorithm uses the upper occupancy bound following line 161, as opposed to just using the occupancy measure associated with $\bar{P}$? 4. What is the message of Lemma 3.1? Why is the inequality $\sum_t C_t^P/u_t(s) \le \sum_t b_t(s)$ useful? 5. This might be a standard technique, but why are the costs in Algorithm 3 offset by a reward $r_\tau$? What does this capture? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 4 excellent Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your helpful feedback. Please see our responses below: *** **Q1:** Issues of writing, organization, and presentation. **A1:** Thanks for your suggestions. We will consider re-organizing the content in a future version. *** **Q2:** Line 116: should this be $S$ instead of $X$? **A2:** We thank the reviewer for spotting the typo. It will be fixed in the final version. *** **Q3:** Do you think it is possible to consider adversarial losses/transitions which the adversary can suggest after seeing a history of the learner's decisions? (a harder setting) **A3:** We do believe that our algorithm can handle the standard adaptive adversary (i.e., one that decides the transition/loss in round $t$ based on the history up to round $t-1$). We consider the oblivious adversary in the paper just for simplicity. For a stronger adversary that can decide the transition/loss in round $t$ based on the action chosen in round $t$, when the total corruption is unknown, previous work by [He et al., 2022] has shown that it is impossible to achieve $O(C^P)$ regret. [He et al., 2022] Jiafan He, Dongruo Zhou, Tong Zhang, Quanquan Gu. Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions. NeurIPS 2022. *** **Q4:** What is the role of the splitting into epochs feature of the algorithm? What goes wrong if the algorithm updates the empirical transition in every round, as opposed to in every epoch? **A4:** Since FTRL does not deal with varying decision sets easily (as mentioned in lines 328-329), splitting time steps into epochs guarantees that the occupancy measures within an epoch come from the same decision set. *** **Q5:** Is there some intuition for why the algorithm uses the upper occupancy bound following line 161, as opposed to just using the occupancy measure associated with $\bar{P}$? **A5:** It is important to ensure that "optimism" holds (i.e., $q^{P,\pi_t}(s,a) \leq \mu_t(s,a)$) in the analysis. *** **Q6:** What is the message of Lemma 3.1? 
Why is the inequality $\sum_t C^P_t/u_t(s) \leq \sum_t b_t(s)$ useful? **A6:** Because of the transition corruption, we have a regret overhead of $\mathbb{E}[\sum_t \sum_s q^{P,\mathring{\pi}}(s)\frac{C^P_t}{u_t(s)}]$ (Line 183) that can be prohibitively large. By incorporating the bonus term $b_t$ into the policy update, we get an additional regret term $\mathbb{E}[\sum_t \langle q^{P,\pi_t} - q^{P,\mathring{\pi}},b_t\rangle]$. The first part of Lemma 3.1 shows that the overhead term $\mathbb{E}[\sum_t \sum_s q^{P,\mathring{\pi}}(s)\frac{C^P_t}{u_t(s)}]$ can be cancelled by the negative part of the additional regret, $-\mathbb{E}[\sum_t \langle q^{P,\mathring{\pi}},b_t\rangle]$. The second part of Lemma 3.1 shows that the positive part of the additional regret, $\sum_{t} \langle q^{P, \pi_t}, b_t\rangle\approx \sum_t \langle \widehat{q}_t, b_t\rangle$, can be bounded by the order of $C^P\log T$. *** **Q7:** Why are the costs in Algorithm 3 offset by a reward $r_\tau$? **A7:** Introducing a "bonus term" $r_\tau$ is a standard technique in the model-selection literature, where the goal is to use a meta bandit algorithm to learn over a set of base bandit algorithms and perform as well as the best base algorithm running alone. This bonus technique appears in [Foster et al., 2020] and [Luo et al., 2022]. The reason to introduce the bonus is that when running the model-selection algorithm, every base algorithm is only chosen and updated with a certain probability (because at each round, the meta algorithm can only select one base algorithm to execute), which degrades the base algorithm's performance compared to when it runs alone. To compensate for this, the meta algorithm adds a larger bonus to base algorithms that are chosen with smaller probability. [Foster et al., 2020] Dylan J. Foster, Claudio Gentile, Mehryar Mohri, Julian Zimmert. 
Adapting to Misspecification in Contextual Bandits. 2020. [Luo et al., 2022] Haipeng Luo, Mengxiao Zhang, Peng Zhao, Zhi-Hua Zhou. Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits. 2022. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for your detailed answers. I believe this is a good work and I will keep my score.
Summary: The authors consider no-regret learning in adversarial MDPs, when the dynamics may change adversarially across episodes. They design an algorithm which provides a regret guarantee of $\tilde{O}(\sqrt{T} + C)$ where $C$ is the total deviation from some fixed transition function, and the benchmark is the best fixed Markov policy in hindsight. The authors show that this result can be obtained even when $C$ is unknown to the algorithm, by constructing a black-box reduction from an algorithm which knows $C$. The authors also consider the stochastically constrained adversarial setting and provide a gap-dependent regret guarantee, but in this setting knowing $C$ is required. Strengths: * The results established in the paper improve upon previous results in corruption-robust reinforcement learning, and in particular constitute the first $\tilde{O}(\sqrt{T} + C)$-type bounds even when the losses are adversarial. * The techniques presented in the paper seem novel and interesting, and may be of independent interest in various RL scenarios. * The authors provide an overview of the analysis presenting the main challenges and ideas, making it easier to understand their main contributions and high-level techniques. * The black-box reduction from an algorithm which knows $C$ to one which doesn't need to know $C$ seems particularly interesting to me, and may be of interest in other non-stationary scenarios when trying to design algorithms that adapt to some corruption budget. Weaknesses: * Since the dynamics of the MDP change adversarially across episodes, it makes less sense to set the benchmark as a Markov policy, as it is no longer the case that the optimal policy on a sequence of MDPs is WLOG Markov. The authors do not address this point in the current version of the paper, and I would like to hear from them why this assumption is needed and whether or not competing against a more general benchmark policy is hard.
* At first, it may seem that $C$ measures the total variation of the transition dynamics across time, as is the case in other non-stationary RL formulations. However, the authors define $C$ to be the sum of variations from a single fixed transition function. This is a weaker definition, and in particular the results presented in this paper do not include a setup where the dynamics have a drift of $1/ \sqrt{T}$ per episode, and for every fixed transition $P'$ the total deviation from it would be $\Omega(T)$. In such a setting, the authors' bounds would be meaningless even though obtaining sublinear regret (even dynamic regret) is indeed possible. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See my remarks above under "Weaknesses" - I'd appreciate it if the authors could address the limited generality of the benchmark policy, as well as the problem of handling drift in the transitions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable feedback. Please see our responses below: *** **Q1:** Since the dynamics of the MDP change adversarially across episodes, it makes less sense to set the benchmark as a Markov policy, as it is no longer the case that the optimal policy on a sequence of MDPs is WLOG Markov. The authors do not address this point in the current version of the paper, and I would like to hear from them why this assumption is needed and whether or not competing against a more general benchmark policy is hard. **A1:** This is indeed a very good point. While we do not have a definite answer regarding how hard it is to compete with the best non-Markovian policy in general, we remark that in the case when corruption is ``binary'' (that is, an episode is either corrupted or not, so $C^P$ counts the number of corrupted episodes), the best non-Markovian policy would not perform better than the best Markovian one on those uncorrupted episodes, and since we already have $\sqrt{T}+C^P$ regret against the best Markovian policy, the same bound would thus hold for competing against the best non-Markovian one as well. We also emphasize again that our regret notion, though restricted to competing with Markov policies only, is more general than the standard one used in the literature of learning corrupted MDPs, as we argue in L73-L78. *** **Q2:** At first, it may seem that $C$ measures the total variation of the transition dynamics across time, as is the case in other non-stationary RL formulations. However, the authors define $C$ to be the sum of variations from a single fixed transition function. **Q3:** In particular, the results presented in this paper do not include a setup where the dynamics have a drift of $1/\sqrt{T}$ per episode, and for every fixed transition $P'$ the total deviation from it would be $\Omega(T)$. In such a setting, the authors' bounds would be meaningless even though obtaining sublinear regret (even dynamic regret) is indeed possible.
**A2 and A3:** (For Q2 and Q3) We agree that the amount of variation $V^P\triangleq \sum_{t=2}^T \|P_t-P_{t-1}\|$ can be much smaller than our $C^P$. It is an interesting question whether the regret (compared to a fixed policy) can be made linear in $V^P$ instead of $C^P$. Previous work in non-stationary RL shows that a dynamic regret of $(V^P)^{1/3}T^{2/3}$ is tight, which is incomparable to our static regret bound of $C^P$. However, there do exist examples (as the reviewer points out) where $(V^P)^{1/3}T^{2/3}$ is much smaller than $C^P$, so getting the best of both is also an interesting direction. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. Q1: The fact that if the corruption is binary then competing against the best fixed Markov policy is sufficient does alleviate some of my concerns regarding the benchmark. I have no further questions about the paper.
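The gap between the two corruption measures discussed in A2 and A3 is easy to see numerically. The following is an illustrative sketch (not from the paper) that uses a single drifting scalar as a stand-in for one transition probability: with a drift of $1/\sqrt{T}$ per episode, the successive variation $V^P$ stays $O(\sqrt{T})$, while the total deviation $C^P$ from any single fixed value is $\Omega(T)$.

```python
import numpy as np

T = 10_000
step = 1.0 / np.sqrt(T)            # drift of 1/sqrt(T) per episode

# one transition probability, zigzagging in [0, 1] with the allowed drift
p = np.empty(T)
x, direction = 0.0, 1.0
for t in range(T):
    p[t] = x
    x += direction * step
    if not 0.0 <= x <= 1.0:        # bounce at the boundary
        direction *= -1.0
        x += 2.0 * direction * step

V = np.abs(np.diff(p)).sum()                                  # successive variation
C = min(np.abs(p - q).sum() for q in np.linspace(0, 1, 101))  # best fixed value on a grid

assert V <= 2 * np.sqrt(T)    # V^P = O(sqrt(T)): small
assert C >= T / 8             # C^P = Omega(T): large
```

This is exactly the drift scenario from Q3: $V^P \approx \sqrt{T}$ is tiny, yet no fixed transition is ever close to the whole sequence.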
Summary: This paper studies the problem of reinforcement learning under adversarial rewards and transitions. When evaluating the regret against the best fixed policy in hindsight, the algorithm proposed in this paper achieves the optimal regret $O(\sqrt{T} + C^P)$, which is followed by other favorable extensions, including being agnostic to the corruption amounts $C^P$ and $C^L$, as well as gap-dependent regret bounds. Strengths: The theoretical results presented in this paper are solid. Several new techniques, including a new model selection framework that allows adversarial environments, are of independent interest. Weaknesses: The main weakness is the lack of justification for the specific type of regret studied in this paper. To the best of my knowledge, this is the only paper that studies the regret against the best policy in hindsight in a mild-corruption setting, i.e. $C^P \leq o(T)$, so some words justifying this new notion would be helpful. I'm not sure when such a regret notion is desirable to study in this scenario, since if $C^P \leq o(T)$, the best policy in hindsight will converge to the optimal policy in the underlying uncorrupted MDP, so the two notions would converge anyways. Minor: There is a missing reference for corruption-robust RL 1. Zhang, X., Chen, Y., Zhu, X., & Sun, W. (2021, July). Robust policy gradient against strong data corruption. In International Conference on Machine Learning (pp. 12391-12401). PMLR. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your helpful feedback. Please see our responses below: *** **Q1:** Lack of justification for the specific type of regret studied in this paper. **A1:** First, when the MDPs are time-varying, the ``underlying uncorrupted MDP'' is not always well-defined. On the other hand, the best-in-hindsight policy is always well-defined, so adopting it as the benchmark is more direct and requires no additional assumptions. Second, our regret bound that scales with $\sqrt{C^L}$ reflects that our algorithm is still doing a meaningful job (i.e., performing as well as the best fixed policy) even when $C^L=\Omega(T)$ and $C^P=o(T)$. In contrast, the regret notion in [Lykouris et al., 2019] and [Wei et al., 2022] will become vacuous in this scenario. Thus, our notion of regret can more clearly decouple and distinguish the hardness coming from adversarial losses and from adversarial transitions. Lastly, using the best-in-hindsight policy as the benchmark is a convention in the adversarial online learning literature. As mentioned in the paper (L73-L78), our regret notion always implies the other one, but not the other way around. Therefore, we never fail to capture scenarios that are captured in the other notion of regret. *** **Q2:** A missing reference [Zhang et al., 2021] for corruption-robust RL. **A2:** We thank the reviewer for highlighting this work. We will make sure to include it in the final version of this manuscript and have a more complete overview of the literature. [Zhang et al., 2021] Zhang, X., Chen, Y., Zhu, X., Sun, W. (2021, July). Robust policy gradient against strong data corruption. In International Conference on Machine Learning (pp. 12391-12401). PMLR. --- Rebuttal Comment 1.1: Title: Thank you for addressing my questions. Comment: I have no more questions about the paper
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studies learning algorithms for adversarial MDPs with adversarial transition functions. The authors developed an algorithm that enjoys $O(\sqrt{T} + C^P)$ regret, where $C^P$ measures how adversarial the transition functions are. The developed algorithm can work without knowing $C^P$. Finally, the authors show that further refinements of the algorithm can adapt to easier environments. Strengths: This paper tackles an important question of how to learn adversarial MDPs with adversarial transition functions. Prior work shows that learning with adversarial transitions is information-theoretically impossible without paying exponential dependence on the episode length. This paper provides the first algorithm, based on a modification of UOB-REPS, which allows the regret bound to degrade gracefully with the amount of corruption. Furthermore, it also resolves the question of an unknown amount of adversarial transition corruption and considers the question of adaptivity. Weaknesses: NA Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive comments.
Improved Frequency Estimation Algorithms with and without Predictions
Accept (spotlight)
Summary: This paper studied frequency estimation and learning-augmented frequency estimation. CountMin and CountSketch are the most popular algorithms for this task. With the addition of learning augmentation, an algorithm is given access to a learned prediction, in this case the prediction of the heavy hitters. This paper focuses on streams drawn from a Zipfian distribution, which are well-studied, well-motivated distributions with heavy tails. In the learning-augmented algorithm, if an element is predicted to be heavy, it is given a unique bucket so that a more accurate frequency can be computed for it. If it isn't predicted to be heavy, it is simply input into a sketching algorithm. They prove bounds on the weighted error of algorithms, including CountSketch, CountMin, and a novel algorithm. For CountSketch and CountMin, the paper gives a tight analysis. The new algorithm is studied both with and without predictions, though predictions give the largest advantage in low-space settings. Experiments confirm that the theory is predictive of performance. Strengths: - Learning-augmented frequency estimation is itself a very nice question; I was looking forward to reading this paper in my pile. - The algorithm is clean and straightforward. I believe the results are correct. - The paper is grammatically well-written. Weaknesses: - I am confused about the prediction model. Normally, in learning-augmented algorithms, we measure an algorithm's performance based on the error in the prediction. Here, as far as I could tell, all of the theoretical results only held when one assumed the predicted heavy hitters were correct. I expected to see some trade-off between the quality of prediction and the weighted error bounds. The experiments briefly mentioned that the prediction quality might be poor, thus leading to worse empirical performance (as expected), but there was no theory discussing the robustness of the predictions.
Robustness to prediction error is what differentiates learning-augmented algorithms from all these other BWCA frameworks (data-driven algorithms, algorithms with advice, etc.). Perhaps because of the heavy-tail distribution assumptions, it's reasonable to assume that one learns the heavy hitters perfectly? Or can you offer another explanation for this choice in the model? - This paper does not clearly lay out its improvements on prior work. I would like to see a lot more comparison to the most relevant previous work [Hsu et al. 2019]. Can this be more clearly stated in the introduction? Concretely, it would help to have previously known results listed in a column in your Table 1 so that we can see your improvement. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: (see Weaknesses, please) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your thorough review and your comments. Below we address your questions and concerns. >I am confused about the prediction model. Normally, in learning-augmented algorithms, we measure an algorithm’s performance based on the error in the prediction. Here, as far as I could tell, all of the theoretical results only held when one assumed the predicted heavy hitters were correct. We would like to first point out that our algorithm without predictions (Theorem 2.1) already outperforms the learning-augmented version of the standard CountSketch algorithm supplied with perfect predictions, in low-space regimes. Secondly, while it is true that the best bounds obtained in Theorem 3.1 assume access to perfect predictions, our learned version does indeed possess worst-case guarantees, even if *all* predictions are incorrect. The worst-case guarantees follow directly from the design of the algorithm: it explicitly keeps track of the frequencies of a select number of elements (those deemed the top O(B) heavy elements by the predictor). For these elements we incur no error, even if they don’t turn out to be true heavy elements. For the other elements, our algorithm inputs them into our improved version of CountSketch without predictions, whose guarantees are listed in Theorem 2.1 (see Lemma 3.3 for a general version with no Zipfian assumptions). This type of worst-case guarantee, as well as the prediction model, is identical to prior work on learning-augmented algorithms for frequency estimation [1]. In addition, it mirrors the consistency/robustness guarantees that appear in several algorithms-with-predictions papers (see the survey [2]), which bound the performance with perfect predictions (consistency) and with arbitrary predictions (robustness). Lastly, we believe our best error guarantees, which are given in Theorem 3.1, can also be obtained with weaker prediction models.
Our algorithm is more robust to false positives, meaning light elements which are classified as heavy, than to false negatives, which are heavy elements classified as light. Therefore, it is likely that our “learned version” results extend to the case where the prediction’s accuracy is tied to the true heaviness of the frequency. Nevertheless, as demonstrated by our superior empirical results, our algorithm generalizes to noisy real-world predictions. This demonstrates that the prediction model (used in our work and [1]) is a useful model for developing novel algorithms. >I expected to see some trade-off between the quality of prediction and the weighted error bounds. The experiments briefly mentioned that the prediction quality might be poor, thus leading to worse empirical performance (as expected), but there was no theory discussing the robustness of the predictions. Robustness in the prediction error is what differentiates learning-augmented algorithms from all these other BWCA frameworks (data-driven algorithms, algorithms with advice, etc). Perhaps because of the heavy tail distribution assumptions, it’s reasonable to assume that one learns the heavy hitters perfectly? Or can you offer another explanation for this choice in the model? We can in fact provide some trade-offs between the quality of the prediction and the weighted error bounds, specifically for the (Learned) CountSketch algorithm. We left these out of the submission but will consider including them in the paper. For expected error, these results give a smooth trade-off between the bounds for the classic and corresponding learned algorithms (rows 2 and 4 of Table 1). The results assume that the predictor may misclassify an element with some probability $\delta$. If $\delta=O(\ln(n/B)/\ln(n))$, then it turns out that we asymptotically obtain the same expected error as with the learned variant which has access to perfect predictions.
For the (Learned) CountMin algorithm (rows 1 and 3 of Table 1), a similar trade-off is presented in [1], where it is also the case that when $\delta=O(\ln(n/B)/\ln(n))$, the learned variant of the algorithm has the same asymptotic expected error as with a perfect predictor. As indicated above, there is more interesting work to be done on the prediction error. Especially exploring the asymmetry in the robustness to respectively false positives and negatives is a direction for future work. >This paper does not clearly lay out its improvements on prior work. I would like to see a lot more comparison to the most relevant previous work [Hsu et al. 2019]. Can this be more clearly stated in the introduction? Concretely, it would help to have previously known results listed in a column in your table 1 for that we can see your improvement. The suggested table is already in the appendix and, following the reviewer’s suggestion, we will move Table 2 into the main body and integrate it into Table 1. To summarize, [Hsu et al. 2019] only analyzed CM and its learned variant. [1] Chen-Yu Hsu, Piotr Indyk, Dina Katabi, and Ali Vakilian. Learning-based frequency estimation algorithms. ICLR 2019. [2] Michael Mitzenmacher and Sergei Vassilvitskii. 2022. Algorithms with predictions. Commun. ACM 65, 7 (July 2022), 33–35. https://doi.org/10.1145/3528087 --- Rebuttal Comment 1.1: Comment: Authors: thank you very much for the thoughtful response. I apologize for the delay in this reply; I will be much prompter to continue discussion for the rest of the response period, if needed. Your responses more than adequately addressed my concerns about the trade-offs between the quality of the prediction and the weighted error bounds and my confusion about the definition of the quality of the prediction. I would encourage you to include your first two responses to me in the paper, if there's room.
For someone very familiar with learning-augmented algorithms, but less so with sketching problems, these worst-case/robustness guarantees that you obtain (though pretty simple to explain!) were not obvious to me. And obviously they heavily impacted my understanding of the paper's contribution in the algorithms-with-predictions space. I will be updating my score accordingly. --- Rebuttal 2: Title: Update check Comment: Dear Reviewer iUr9, Did we address all your concerns satisfactorily, in particular your comments about the prediction model and our improvements over prior works? If your concerns have not been resolved, could you please let us know which concerns were not sufficiently addressed so that we have a chance to respond before the deadline? Many thanks, The authors
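The worst-case mechanism described in the rebuttal above (exact counters for elements the predictor calls heavy, a sketch for everything else) can be outlined in a few lines. This is a hypothetical minimal sketch: plain CountMin stands in for the paper's improved CountSketch variant, and the hash constants are arbitrary fixed values chosen so the example is deterministic.

```python
from collections import defaultdict

class LearnedSketch:
    """Toy learning-augmented estimator: exact counters for elements the
    predictor calls heavy, a CountMin sketch for everything else.
    (Illustrative only; not the paper's actual algorithm.)"""

    def __init__(self, predicted_heavy, width, depth):
        self.exact = {x: 0 for x in predicted_heavy}   # one bucket each, no error
        self.width, self.depth = width, depth
        self.rows = [defaultdict(int) for _ in range(depth)]

    def _bucket(self, r, x):
        # fixed odd multipliers: a cheap, deterministic per-row hash (illustrative)
        return ((2 * r + 3) * x + r) % 1_000_003 % self.width

    def update(self, x, delta=1):
        if x in self.exact:                            # predicted heavy: exact count
            self.exact[x] += delta
        else:                                          # everything else: sketch
            for r in range(self.depth):
                self.rows[r][self._bucket(r, x)] += delta

    def estimate(self, x):
        if x in self.exact:
            return self.exact[x]
        return min(self.rows[r][self._bucket(r, x)] for r in range(self.depth))

s = LearnedSketch(predicted_heavy=[7], width=64, depth=3)
for _ in range(100):
    s.update(7)        # predicted-heavy element: counted exactly
s.update(11)
s.update(13)           # light elements share the sketch (may be overestimated)
```

Even if element 7 were a false positive (not truly heavy), its exact counter stays correct; the sketch side only ever overestimates, which is the robustness argument from the rebuttal.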
Summary: Summary of the Paper ================== * This work follows (Hsu Indyk Katabi Vakilian 2019) in trying to improve the performance of hashing-based frequency estimation algorithms (such as Count-Min, CountSketch) by making use of "advice" in the form of a learning model's predictions which classify the input elements as "heavy-hitters" or otherwise based on the input distribution. * Just as in (HIKV2019), the theoretical analysis assumes a Zipfian (heavy-tail) property for the data distribution, and they provide guarantees for the expected weighted estimation error $\frac{1}{N} \sum_{i=1}^{n} f_i \cdot |f_i - \hat{f}_i|$. * They improve on the (HIKV2019) analysis of Count-Min and Learned-Count-Min algorithms to get tight bounds on the expected estimation error when there are multiple hash functions ($k \geq 2$). * They also provide tight bounds for the expected estimation error of CountSketch, with and without learning. * Finally, they propose a better frequency estimation algorithm --- both plain (Algorithm 1&2) and learning-augmented (Algorithm 3&4) --- and prove bounds on the expected estimation error in both cases, showing that the learning-augmented algorithm outperforms both Plain-CS and Learned-CS in all regimes, whereas the plain (no-learning) algorithm outperforms the Plain-CS algorithm in the low-space regime ($B = {\rm polylog}(n)$). * They also propose a parsimonious variant of the algorithm (limited number of queries) and do an experimental evaluation. Strengths: * The problem setting is already studied in the literature and thus the improvements shown in this work are clearer. The Zipfian (heavy-tail) property for the data distribution is known to hold for many real world datasets (if approximately). * This work provides tight bounds for the expected estimation error of CountSketch and CountMin, both with and without learning. In the case of CountMin, it improves upon the existing bounds from (HIKV2019).
* The proposed "better frequency estimation algorithm" provides tangible improvements over CS and CM, both with and without learning augmentation. * They also consider a variation of the algorithm with worst-case guarantees, even when the data distribution is not Zipfian, and the variant nicely generalises from the Zipfian case. * The work includes the implementation of the algorithms and experimental evaluation. * A reasonable level of proof sketches is provided in the main paper. Weaknesses: * The experiments should ideally have also considered the worst-case variant of the algorithm (Algorithm 6 in the supplementary) in both the Zipfian and non-Zipfian cases. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: None Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
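For concreteness, the expected weighted estimation error $\frac{1}{N} \sum_{i=1}^{n} f_i \cdot |f_i - \hat{f}_i|$ quoted in the review above is easy to compute directly. The following toy illustration (hypothetical numbers and estimators, not the paper's algorithms) shows why exact counters for the head of a Zipfian frequency vector reduce this metric:

```python
import numpy as np

n, B = 1000, 50
f = np.floor(10_000 / np.arange(1, n + 1))   # Zipfian frequencies: f_i ~ 1/i
N = f.sum()

def weighted_error(freq, freq_hat):
    # the metric from the review: (1/N) * sum_i f_i * |f_i - fhat_i|
    return float((freq * np.abs(freq - freq_hat)).sum() / N)

# hypothetical estimators: both are off by 5 on every element they sketch,
# but the "learned" one stores the top-B frequencies exactly
plain = f + 5.0
learned = f.copy()
learned[B:] += 5.0

assert weighted_error(f, learned) < weighted_error(f, plain)
```

Because the metric weights each error by $f_i$, removing the error on the few heaviest elements removes most of the weighted error, which is the intuition behind giving predicted heavy hitters their own buckets.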
Rebuttal 1: Rebuttal: We are happy to hear that you found our paper interesting and thank you for your time and comments.
Summary: Authors study the frequency estimation algorithms CountMin and CountSketch and propose modifications of them tailored to heavy-tailed distributions. They first analyze CountMin and CountSketch, showing that the second one achieves better theoretical bounds on such distributions, which explains experimental results in previous work. They propose a different algorithm with significantly better performance bounds on heavy-tailed distributions which also satisfies worst-case guarantees (for the case when the input does not come from the considered heavy-tailed distribution) which are comparable to those of CountSketch. They also propose an ML-augmented variant of their algorithm which assumes that there is an oracle which correctly identifies half of the heavy hitters. This algorithm also works in a parsimonious setting where it is allowed to receive only a few predictions. Strengths: * They consider a problem important in both theory and practice, in a setting which occurs often in practice * They show limitations of the existing algorithms and design new ones overcoming these limitations * The ML-augmented version of their algorithm can work in a parsimonious regime: only very few predictions are needed, and I believe that this is a good sign of usability in practice Weaknesses: * I did not see lower bounds for the problem in their setting. It is not clear whether better algorithms are possible * It is not clear how their algorithm's performance depends on the precision of the predictor, e.g., what if it identifies too many or too few items as heavy hitters Technical Quality: 3 good Clarity: 3 good Questions for Authors: * if your algorithm reports too many items as heavy hitters, what does your algorithm do? * the requirement that the predictor perfectly identifies the top B/2 heavy hitters seems rather strict. Can it be made weaker, e.g. that it identifies 90% of the top B heavy hitters, or that it correctly identifies the $i$th heavy hitter with some probability depending on $i$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: assumptions clearly stated in the theoretical results Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >I did not see lower bounds for the problem in their setting. It is not clear whether better algorithms are possible Proving lower bounds for learning-augmented frequency estimation, or even frequency estimation under our expected error metric, is an interesting future research direction. >It is not clear how their algorithm's performance depend on precision of the predictor, e.g., what if it identifies too many or too few items as heavy hitters Our learned version has worst case guarantees, even if the predictions are totally incorrect. This is because the algorithm explicitly keeps track of the frequencies of a select number of elements (those deemed the top O(B) heavy elements by the predictor). For these elements we incur no error, even if they don’t turn out to be true heavy elements. For the other elements, our algorithm inputs them into our improved version of CountSketch without predictions, whose guarantees are listed in Theorem 2.1 (see Lemma 3.3 for a general version with no Zipfian assumptions). This is the same type of worst-case guarantees given by prior works such as Hsu et al. >If your algorithm reports too many items as heavy hitters, what does your algorithm do? The requirement that the predictor perfectly identifies the top B/2 heavy hitters seems rather strict. Can it be made weaker, e.g. that it identifies 90% of the top B heavy hitters, or that it correctly identifies ith heavy hitters with some probability depending on i? For our best error bounds given in Theorem 3.1, we do assume that the predictor correctly identifies the top O(B) heavy elements. This is the same prediction model used in the prior work of Hsu et al. It is indeed likely that the prediction model can be relaxed to obtain similar improvements. Our algorithm is more robust to false positives, meaning light elements which are classified as heavy, than false negatives, which are heavy elements classified as light. 
Therefore, it is likely that our best results extend to the case where the prediction’s accuracy is tied to the true heaviness of the frequency. Nevertheless, as demonstrated by our superior empirical results, our algorithm generalizes to noisy real world predictions, demonstrating the versatility of the prediction model used in our work and Hsu et al. --- Rebuttal Comment 1.1: Comment: thank you for your explanation.
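The asymmetry between false positives and false negatives described in the rebuttal above can be made concrete with a toy single-row CountMin (fixed hash constants and element ids here are hypothetical, chosen so the demo is deterministic): a false negative sends a heavy element into the shared sketch and inflates every light element in its bucket, while a false positive merely occupies one exact counter.

```python
WIDTH, A, OFFSET, P = 50, 3, 7, 1_000_003   # fixed constants, purely illustrative

def bucket(x):
    # universal-style hash with fixed parameters for reproducibility
    return ((A * x + OFFSET) % P) % WIDTH

def countmin(stream, queries):
    row = [0] * WIDTH                        # a single CountMin row
    for x in stream:
        row[bucket(x)] += 1
    return {q: row[bucket(q)] for q in queries}

heavy = 1_000_000                            # one heavy element: 10,000 occurrences
light = list(range(500))                     # 500 distinct light elements, once each

# false negative: the heavy element is missed by the predictor, enters the
# sketch, and grossly inflates every light element sharing its bucket
est = countmin([heavy] * 10_000 + light, light)
badly_inflated = sum(1 for q in light if est[q] > 1_000)

# with the heavy element (correctly) held in its own exact counter, the
# sketch contains only light elements and no estimate blows up
est_clean = countmin(light, light)
```

With these constants, the ten light elements hashing to the heavy element's bucket are each estimated at 10,010 instead of 1, whereas in the clean run every estimate is just the light collision level of 10.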
Summary: The authors present a new error analysis for Count-Sketch (CS) and Count-Min Sketch (CMS) for heavy-tailed distributions. They propose a novel Count-Sketch-based algorithm and its learned variant to estimate the frequencies of items in a data stream. Empirically, they show that both algorithms outperform the standard CS and LCMS of Hsu et al. (ICLR 2019) in terms of weighted and unweighted estimation errors on various datasets. In addition, they introduce a parsimonious version of their learning-based algorithm which performs a limited number of queries to the oracle. Strengths: The paper is concerned with the fundamental problem of estimating the frequencies of items in a data stream. It introduces tight error guarantees for Count-Min sketch and Count-Sketch algorithms as well as their learned variants for Zipfian distributions. The authors propose a novel Count-Sketch-based algorithm and its learning-augmented variant that significantly outperform the baseline algorithms on two real-world datasets and a synthetic Zipfian dataset. Weaknesses: The results section of the paper mentions that the prediction quality for the CAIDA dataset was relatively poor, however, the work of Hsu et al. (ICLR 2019) states that the AUC score of identifying the top 1% heavy hitters for CAIDA was 0.1 higher than for the AOL dataset. Hence, it seems that the experimental results are not quite consistent with those of Hsu et al. as their LCMS offered more significant advantages than the basic CMS on the CAIDA dataset as compared to AOL. Furthermore, the experimental section does not include results for the parsimonious algorithm and less heavy-tailed distributions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The learned models for the CAIDA and AOL datasets are very complex and rather expensive to train, and the space allocation of the learned variant in the given plots does not seem to include the space reserved for the oracle. 
Therefore, it is unclear whether the novel learning-based algorithm offers any advantages over using its non-learned counterpart. It would be great to have a comparison of these two algorithms with equal space allocations which includes the space for the learned model. In addition, it would be helpful if the accuracies of identifying top B/2 frequent items of the learned oracles for the AOL and CAIDA datasets were given in the paper as well as a comparison of the B/2 value to the thresholds used in LCMS of Hsu et al. (ICLR 2019) to investigate why the learning-based variant of the novel CS algorithm does not offer similar advantages for the CAIDA dataset as LCMS. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It would be helpful if the limitation of the parsimonious algorithm due to having to estimate the length of the data stream was stated more clearly in the paper as well as that of Algorithm 2 for non-heavy-tailed distributions due to having to estimate the tail of the frequency vector. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
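For concreteness, the Count-Min sketch that this review and the LCMS of Hsu et al. build on can be written in a few lines. This is an illustrative toy (the class name `CountMin` and the parameters `d`, `w`, `seed` are arbitrary choices, not the paper's algorithm):

```python
import random

class CountMin:
    """Toy Count-Min sketch: d rows of w counters each.

    Each row hashes an item to one counter; the estimate is the minimum
    over rows, so it can only overestimate (collisions add mass)."""

    def __init__(self, d=3, w=256, seed=0):
        rng = random.Random(seed)
        self.w = w
        self.tables = [[0] * w for _ in range(d)]
        self.salts = [rng.random() for _ in range(d)]

    def update(self, item, count=1):
        for salt, table in zip(self.salts, self.tables):
            table[hash((salt, item)) % self.w] += count

    def estimate(self, item):
        return min(table[hash((salt, item)) % self.w]
                   for salt, table in zip(self.salts, self.tables))
```

For a heavy item, the returned value is its true count plus whatever lighter items collide into the same counters; the expected size of that error term under a Zipfian frequency assumption is exactly what the paper's bounds characterize.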
Rebuttal 1: Rebuttal: We thank you for your interest in our paper and your comments. We address questions and concerns below. >It appears that for non-Zipfian distributions, the non-simplified Algorithm 2 would have to perform two passes over the data stream since Algorithm 6 would need to first output an estimate of the L2 norm of the tail of the frequency vector which presents a major drawback.  We believe there is a misunderstanding and we hope our comment clarifies it. All of our algorithms only require \emph{one pass} over the stream. Algorithm 2 outputs an estimate of the frequencies \emph{after} the stream has ended. Furthermore, the output of Algorithm 6 is only used to compute approximate frequencies after the stream has ended. Thus, both Algorithm 1, our main streaming algorithm, and Algorithm 6, which estimates the tail norm of the frequency stream, can be run in one pass in parallel. Later, after the stream ends, they can be combined to output approximate estimates.  >The results section of the paper mentions that the prediction quality for the CAIDA dataset was relatively poor, however, the work of Hsu et al. (ICLR 2019) states that the AUC score of identifying the top 1% heavy hitters for CAIDA was 0.1 higher than for the AOL dataset. Hence, it seems that the experimental results are not quite consistent with those of Hsu et al. as their LCMS offered more significant advantages than the basic CMS on the CAIDA dataset as compared to AOL.  Thanks for pointing out this discrepancy. Indeed, the predictions we have for CAIDA are worse than those for AOL (the accuracy of recovering, say, the top 1000 items is around 0.4 for AOL and 0.2 for CAIDA). We are working to investigate why this discrepancy exists and will update the paper accordingly, as well as include quantitative details on the prediction quality. 
>The learned models for the CAIDA and AOL datasets are very complex and rather expensive to train, and the space allocation of the learned variant in the given plots does not seem to include the space reserved for the oracle. Therefore, it is unclear whether the novel learning-based algorithm offers any advantages over using its non-learned counterpart. It would be great to have a comparison of these two algorithms with equal space allocations which includes the space for the learned model. Accounting for the space used for learning is an important consideration. As pointed out in Hsu et al., in the setting where we are sketching many subsequent streams of data (e.g., different days), the space cost for storing a learned model can be amortized over time. We will add a brief discussion of this point to the paper including quantitative details. >In addition, it would be helpful if the accuracies of identifying top B/2 frequent items of the learned oracles for the AOL and CAIDA datasets were given in the paper as well as a comparison of the B/2 value to the thresholds used in LCMS of Hsu et al. (ICLR 2019) to investigate why the learning-based variant of the novel CS algorithm does not offer similar advantages for the CAIDA dataset as LCMS. See above. >It would be helpful if the limitation of the parsimonious algorithm due to having to estimate the length of the data stream was stated more clearly in the paper as well as that of Algorithm 2 for non-heavy-tailed distributions due to having to estimate the tail of the frequency vector. Thanks for the comment. We agree that it would be useful to have this stated more clearly in the paper, and will do so in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed clarification.
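The Count-Sketch variant that the review and rebuttal revolve around differs from Count-Min by attaching random signs, so collision noise cancels in expectation and the median over an odd number of rows is taken. A toy version (the parameters `d`, `w`, `seed` are illustrative; this is not Hsu et al.'s learned variant):

```python
import random

class CountSketch:
    """Toy Count-Sketch: d rows, each with a bucket hash and a +/-1 sign hash.

    Signed collisions cancel in expectation, so unlike Count-Min the
    estimate can be too low as well as too high; the median over an odd
    number of rows tames outliers."""

    def __init__(self, d=3, w=256, seed=0):
        rng = random.Random(seed)
        self.d, self.w = d, w
        self.tables = [[0] * w for _ in range(d)]
        self.salts = [(rng.random(), rng.random()) for _ in range(d)]

    def _bucket(self, r, item):
        return hash((self.salts[r][0], item)) % self.w

    def _sign(self, r, item):
        return 1 if hash((self.salts[r][1], item)) % 2 else -1

    def update(self, item, count=1):
        for r in range(self.d):
            self.tables[r][self._bucket(r, item)] += self._sign(r, item) * count

    def estimate(self, item):
        ests = sorted(self.tables[r][self._bucket(r, item)] * self._sign(r, item)
                      for r in range(self.d))
        return ests[self.d // 2]  # median (d is odd)
```

Note that light items still receive noisy, possibly negative estimates from this structure, which is the behavior the novel truncating algorithm in the paper is designed to suppress.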
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The authors study frequency estimation in a streaming setting using CountMin and CountSketches, both their classic and learning-augmented variants. They prove tight theoretical bounds for the expected error when the frequencies follow the Zipf distribution. They also introduce and analyze a new algorithm with lower error that returns 0 for low frequencies instead of the noisy estimates of classic CountSketch. Furthermore, they also introduce a parsimonious version of their algorithm that avoids consulting the potentially much slower machine-learned model for each item of the stream, using Poisson sampling to provably invoke it a small number of times only. Several experiments with two real-world and synthetic data sets support the claims, albeit the implemented algorithm is much simpler than the one analyzed and a simple modification of the classic CountSketch also yields substantial improvements. Strengths: 1) Problem and techniques studied are extremely well motivated and widely used. 2) Solid theoretical analysis and tight new lower and upper bounds. 3) Introduces multiple new algorithm variants. 4) Substantial error reduction in the experiments. 5) Paper is well written and structured. Weaknesses: 1) No experiments with the theoretically analyzed algorithm, no theory for the simpler variant in the experiments. 2) I would love to see some experiments with the parsimonious algorithm as well. 3) When its truncation threshold is properly tuned, the experimentally evaluated simplified algorithm is more accurate than returning max {0, CountSketch's estimate}. However, the best threshold is dataset-dependent and the wrong threshold underperforms the non-negative CountSketch (i.e. threshold = 0). Section 3.2 proposes a theoretical construction based on the Alon-Matias-Szegedy sketch to adaptively tune and set the threshold; nevertheless, this variant is not evaluated in the experiments either. 
It would be good to evaluate a hyper-parameter-free variant that works (well) on any data out of the box or explicitly leave it as future work. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Alg 1: What's median of 4? The proof section carefully requires odd number of rows. Could you please clarify, or since it's only for the sake of theory make it 3 (or 5) to keep it simple? Alg 2: Could you discuss why it's essential (or not) to take median of medians instead of using a single CountSketch with O(T) rows as a filter? Could you also discuss whether your results hold (strengthen or weaken) for more general power laws where f_i ~ (1/i)^p (or log-normal) beyond f_i ~ 1/i Zipf, similarly to Elbert Du, Franklyn Wang, and Michael Mitzenmacher, "Putting the 'Learning' into Learning-Augmented Algorithms for Frequency Estimation," ICML 2021? Probably it's best worked out and discussed after lines 222-223. Could you also measure and disclose the power law exponent for the CAIDA and AOL datasets? Figures 2-5: Could you use the same color for best Our (C=..) line in the left and right sub-plots? Lines 249-250: three columns and varying number of rows -> 3 rows and varying number of columns (typo). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes, it's absolutely forthcoming and adequate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are glad to hear you found our paper interesting and appreciate your comments! We address them below: >No experiments with the theoretically analyzed algorithm, no theory for the simpler variant in the experiments. The specific setting of parameters for the theoretical algorithm (number of CS tables, threshold for heavy/light elements) are chosen to achieve the best asymptotic bounds we were able to prove. In practice, our belief (backed up by the experimental results) is that the core algorithmic idea of truncating low estimates to zero will yield benefits, but that the specific parameters/setup best for the asymptotics are likely not the best when you process a specific dataset. It is a very nice question whether a simplified algorithm could be shown to achieve the same bounds as our algorithm which uses $O(\log \log n)$ tables. > I would love to see some experiments with the parsimonious algorithm as well. We are glad to hear your interest and will try to include experiments using the parsimonious version of the learned algorithm in the final paper. >When its truncation threshold is properly tuned the experimentally evaluated simplified algorithm is more accurate than returning max {0, CountSketch's estimate}. However the best threshold is dataset dependent and the wrong threshold underperforms the non-negative CountSketch (i.e. threshold = 0). Section 3.2 proposes a theoretical construction based on the Alon-Matias-Szegedy sketch to adaptively tune and set the threshold, nevertheless this variant is not evaluated in the experiments either. It would be good to evaluate a hyper-parameter free variant that works (well) on any data out of the box or explicitly leave it as future work. This is a fair point, and we will make this explicit in the paper. 
At least for the learned algorithms, as the high-level idea for applications is that the user is processing similar data over time (and therefore can learn some structure), we believe it is reasonable to also think that they can tune hyperparameters on past data. This may not be the case in applications of the non-learned algorithms, though, and we will mention this. >Alg 1: What's median of 4? The proof section carefully requires odd number of rows. Could you please clarify, or since it's only for the sake of theory make it 3 (or 5) to keep it simple? Thank you for your comment and pointing out the typo. Indeed, the 4 should be a 3 (or any fixed odd integer at least 3). We will fix the typo in the updated version of the paper. > Alg 2: Could you discuss why it's essential (or not) to take median of medians instead of using a single CountSketch with $O(T)$ rows as a filter? This is a crucial but subtle point, and we thank you for pointing it out. We hope the following explanation is informative. First, we know that a *single* CountSketch (CS) table cannot achieve the guarantees of our best algorithm. Indeed, Table 1 (and our Theorem C.4) gives the tight error bound for a single CS table. The intuition for why a single CS table is not sufficient is roughly as follows: a large portion of the error is incurred due to elements whose true frequencies are much smaller than the expected error guarantees of CS (more rows only improve the concentration of the error, not the expected value!). Thus, we cannot simply rely on estimates from a CS table and must do something different. Indeed, our algorithm goes beyond CS by employing a two-step procedure. The first step can be thought of as a filtering step like you stated. This filtering step informs us whether to use the estimate of the very last (large) CountSketch table or output 0. 
Multiple tables in the first step simply reduce the probability that we accidentally use the estimate of the last CS table for these ‘tiny’ frequencies (where the ‘right’ answer is to output 0). It turns out that it’s enough to ensure this failure probability is at most $1/poly(\log n)$, so using $O(\log \log n)$ smaller CS tables in the filtering step suffices and gives a clean way to prove our improved bounds. It could be true that far fewer CS tables suffice for the filtering step, and this is an interesting question for future research. >Could you also discuss whether your results hold (strengthen or weaken) for more general power laws where f_i ~ (1/i)^p (or log-normal) beyond f_i ~ 1/i Zipf, similarly to Elbert Du, Franklyn Wang, and Michael Mitzenmacher, "Putting the 'Learning' into Learning-Augmented Algorithms for Frequency Estimation," ICML 2021? Probably it's best worked out and discussed after lines 222-223. We did indeed consider questions of this type, and for the first four algorithms of Table 1 (for power law distributions), we do have (nearly) tight bounds. For the remaining two algorithms, we have some partial analysis, which needs some polishing. We will consider including these bounds in the final version of the paper. It is an interesting direction for future work to consider other distributions like log-normal as you suggested. >Could you also measure and disclose the power law exponent for the CAIDA and AOL datasets? The frequency plots of both CAIDA and AOL datasets are given in Hsu et al., and we will provide a reference to their discussion on power law parameters of these datasets. >Figures 2-5: Could you use the same color for best Our (C=..) line in the left and right sub-plots? Thank you for the suggestion. We will update the figure. >Lines 249-250: three columns and varying number of rows -> 3 rows and varying number of columns (typo). Thank you for pointing out the typo. It will be fixed in the updated version. 
--- Rebuttal Comment 1.1: Comment: Thanks for the detailed reply, convincing explanations, and sharing further work in progress results. I'll update my final review accordingly.
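The two-step idea the authors defend in this exchange — small filter tables decide whether an item is "light" (output 0) or "heavy" (use one large Count-Sketch table's estimate) — can be caricatured as follows. This is a loose sketch of the idea only; the helper names and the parameters `n_filters`, `w_small`, `w_large`, `thresh` are all made up for illustration and do not match the paper's Algorithm 2:

```python
import random
from statistics import median

def cs_row(w, seed):
    """One Count-Sketch row: (update, estimate) closures over a w-counter table."""
    rng = random.Random(seed)
    table = [0] * w
    b_salt, s_salt = rng.random(), rng.random()
    bucket = lambda x: hash((b_salt, x)) % w
    sign = lambda x: 1 if hash((s_salt, x)) % 2 else -1

    def update(x, c=1):
        table[bucket(x)] += sign(x) * c

    def estimate(x):
        return table[bucket(x)] * sign(x)

    return update, estimate

def truncating_sketch(n_filters=3, w_small=64, w_large=1024, thresh=8):
    """Filter step + one large table: output 0 for apparently light items."""
    filters = [cs_row(w_small, s) for s in range(n_filters)]
    big_update, big_estimate = cs_row(w_large, n_filters)

    def update(x, c=1):
        for upd, _ in filters:
            upd(x, c)
        big_update(x, c)

    def estimate(x):
        # Median over the small filter tables decides heavy vs light; only
        # apparently heavy items get the (more accurate) large-table estimate.
        if median(est(x) for _, est in filters) < thresh:
            return 0
        return big_estimate(x)

    return update, estimate
```

The filter tables only need to keep the probability of misclassifying a tiny-frequency item small, which is the role the $O(\log \log n)$ tables play in the proof sketched above.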
null
null
null
null
null
null
A Fast and Accurate Estimator for Large Scale Linear Model via Data Averaging
Accept (poster)
Summary: This paper studies the linear regression problem and proposes a new sketching method based on data averaging. Strengths: Please see the "questions" section. Weaknesses: Please see the "questions" section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: This topic falls outside my current expertise. My review below is pretty limited. - Comparison against some standard sketching methods such as Gaussian sketch could be valuable. Gaussian sketch is computationally expensive to apply but easier to analyze. I'm curious how the convergence rates would compare. - I wonder if some plots would help improve the presentation. Perhaps something similar to Figure 7 of Pilanci and Wainwright [2016]? Minor: - There is a latex-related issue with the reference numbering in the appendix. - typo in line 166: "no mater" Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see the "questions" section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comment:** We sincerely appreciate your valuable comments and suggestions, particularly the positive feedback regarding the proposed method. Below, we will provide a response that centers around **questions** related to our paper. The relevant comments are indicated with italicized text in the following. We believe and hope that our revisions will ultimately meet your satisfaction, and that the paper will have the opportunity to be published in NeurIPS. **Questions:** * *Comparison against some standard sketching methods such as Gaussian sketch could be valuable. Gaussian sketch is computationally expensive to apply but easier to analyze. I'm curious how the convergence rates would compare.* **Reply:** Thanks a lot for your suggestion. Specifically, sketching methods solve the sketched least squares problem \begin{eqnarray} \min_{\boldsymbol{\beta} \in \mathbb{R}^{p+1}}||\mathbf{O}^{\top} \mathbf{y}-\mathbf{O}^{\top} \mathbf{X} \boldsymbol{\beta}||^2 \end{eqnarray} where $\mathbf{O} \in \mathbb{R}^{N \times n}$ is a sketching matrix with i.i.d. standard Gaussian entries. Suppose $p \ll n \ll N$. From Theorem 1 of Ahfock et al. (2021), under the conditions of our Theorem 5, the conditional mean squared error of the Gaussian sketching least squares estimator has convergence rate $\sigma_{\varepsilon}^2 p / n$, which is the same as the uniform sampling method. To ensure the least squares estimator based on the sketched data can be computed within $O(n p + p^3)$ time, we let $n \asymp N / p$. In this case, the convergence rate of Gaussian sketching is $O(\sigma_{\varepsilon}^2 p^2 / N)$, while the convergence rate of the proposed method, which is $(1 + o_P(1)) \frac{p^2 \sigma_{\varepsilon}^2}{2 \log(2p) N}$, is faster than that of Gaussian sketching by a factor of order $\log(p)$. * *I wonder if some plots would help improve the presentation. 
Perhaps something similar to Figure 7 of Pilanci and Wainwright [2016]?* **Reply:** Thank you for your suggestion. We will consider improving the presentation of our paper. * *There is a latex-related issue with the reference numbering in the appendix. Typo in line 166: "no mater"* **Reply**: Thank you for pointing out the typos. We will correct the typos we found and fix the reference numbering problem. **Reference** Ahfock, D., Astle, W. J., and Richardson, S. Statistical properties of sketching algorithms. 2021. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. I don't have further questions. --- Reply to Comment 1.1.1: Comment: Dear Review A4mU : Thank you for your response. We appreciate your feedback and suggestion.
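The sketch-and-solve comparison described in the reply above is easy to simulate. In the toy below, the sizes `N`, `n`, `p` are arbitrary illustrative choices, and `S` plays the role of $\mathbf{O}^{\top}$ in the reply's notation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, p = 5000, 500, 5           # full sample, sketch size, dimension
X = rng.standard_normal((N, p))
beta = np.arange(1.0, p + 1.0)
y = X @ beta + 0.5 * rng.standard_normal(N)

# Gaussian sketch: compress the N rows down to n random linear combinations,
# then solve the (much smaller) sketched least squares problem.
S = rng.standard_normal((n, N)) / np.sqrt(n)
beta_sketch = np.linalg.lstsq(S @ X, S @ y, rcond=None)[0]
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

print(np.linalg.norm(beta_full - beta))    # full-data error
print(np.linalg.norm(beta_sketch - beta))  # sketched error, roughly sqrt(N/n) larger
```

Note that forming `S @ X` alone costs $O(Nnp)$ time here, which illustrates why the reviewer describes the Gaussian sketch as computationally expensive to apply even though it is easy to analyze.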
Summary: This paper considers a new estimation method for a large scale linear regression model. Specifically, the regression coefficients are estimated by least squares estimation of averaged observations for which data are partitioned via a method similar to the information-based optimal subdata selection (IBOSS) algorithm proposed in Wang et al. [2019]. The paper develops both lower and upper bounds on the mean squared error of the estimator and compares it with the existing methods theoretically as well as numerically. Strengths: - The paper proposes a new estimation method for linear regression, which is arguably one of the most important estimation problems in statistics and machine learning, although the new method shares lots of similarities with the information-based optimal subdata selection (IBOSS) algorithm proposed in Wang et al. [2019]. - The paper contains a number of interesting theoretical results. Weaknesses: - The main text of the paper does not contain any experimental results, although numerical results in the supplement are quite impressive. It would benefit the readers if the paper contained a concise summary of key numerical results in the main text. - It is very important to compare the proposed method with the information-based optimal subdata selection (IBOSS) algorithm proposed in Wang et al. [2019] because data selection is very similar between the two methods. The paper includes lots of remarks on the IBOSS algorithm; nonetheless, it is still unclear in exactly what sense they are different. It seems to me that the averaging method in the paper induces a different type of weights across included observations relative to the IBOSS algorithm, but the exact link between the two methods is elusive. - On page 25 in the supplement, a different algorithm is introduced. 
There are some possible typos: (i) a variant of Algorithm ??, which I presume refers to the algorithm in the main text; (ii) $r \leftarrow\left\lfloor\frac{n}{2 r}\right\rfloor$: perhaps $2p$ in the denominator. Please check them. This variant of Algorithm ?? looks much simpler but it is only introduced in the supplement. It might be useful to include this in the main text to improve the understanding of the proposed method. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - The expression of "fine-grained lower bounds" in the abstract is a bit misleading because regressors are assumed to be jointly normal with mean zero vector and an identity covariance matrix. It might be better to say this limitation more explicitly in the abstract. - Line 115: $r > 0$ is introduced but it seems that it is not defined before. - The paper focuses on the normal regressor case. Would it be possible to extend to sub-Gaussian or other more general cases? - In view of the remarks after Theorem 5, the proposed method has the convergence rate $p^2/(\log(2p)N)$ for the mean squared error; whereas, the convergence rate for the sampling methods is $p^2/N$. Thus, the difference is only by $1/\log(2p)$, which may not matter much when $p$ is relatively small. The abstract states that "our theoretical results show that the proposed method can achieve a faster convergence rate than the optimal convergence rate for sampling methods." However, the difference is only at the log rate and so it might be better to explicitly state that the faster rate is only up to the log factor. - On the other hand, if we look at the numerical results in the supplement, there are huge differences between the proposed method and existing methods in terms of mean squared errors. It would be good to know where this large difference comes from. For example, is this because the subsample size $n = N/p$ in the existing methods is chosen to be too small? Some clarification would be helpful. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: - One limitation is that the paper does not provide any guidance regarding how to conduct statistical inference, e.g., construction of confidence intervals. It would be helpful to provide a method for inference if possible and difficulties if not readily available. - The supplementary material does not include replication files. It would be desirable to provide them if the paper is accepted. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments and suggestions, particularly the positive feedback regarding the proposed method. Below, we will provide a response that centers around three main aspects: **weaknesses, questions, and limitations** related to our paper. We have taken note that your feedback primarily emphasizes the presentation and explanation of the paper. We believe and hope that our revisions will ultimately meet your satisfaction, and that the paper will have the opportunity to be published in NeurIPS. **Weaknesses:** * *The main text...* **Reply:** Thank you for your suggestions. Due to space constraints, all experimental results were deferred to the appendix in the original paper. In response to your feedback, we will address this by moving some content from the main paper to the appendix. If space permits, we will include Table 1 and Table 5 in the main text and include a concise summary of key experimental results. * *It is...* **Reply:** Thank you for your suggestion. As you noted, the proposed method has a selection method similar to that of IBOSS. However, the two algorithms have essential differences, which result in different convergence rates. Below we elaborate on the key differences. The discussion in our paper will be improved in the camera-ready version. 1. The selected observations of IBOSS are only a subset of the full data. This is an inherent limitation of sampling methods: sampling methods only select a (typically small) subsample of the full data; the other samples are discarded. In comparison, the proposed averaging method uses all samples: it clusters the full data into $2p$ clusters (via the IBOSS way), and averages within each group. Hence the averaged samples are computed from the full data, and no observation is discarded. 2. The proposed method and IBOSS have different convergence rates. Theorem 6 implies that under certain technical conditions, the convergence rate of IBOSS can reach the lower bound in Theorem 2. 
On the other hand, Theorem 5 implies that under certain technical conditions, the convergence rate of the proposed method can break this barrier and fall below the lower bound in Theorem 2. 3. The proposed method and IBOSS have different degrees of data reduction. IBOSS reduces $N$ samples to $n$ samples where $n\asymp N / p$ to achieve $O(Np + p^3)$ computing time. In comparison, the proposed method reduces $N$ samples to merely $2p$ observations. * *On page 25...* **Reply:** We deeply apologize for these typos, and will improve the presentation of our paper. The "Algorithm ??" should refer to the IBOSS algorithm. The denominator of $\frac{n}{2r}$ should be $2p$ instead of $2r$. This algorithm is coupled with the IBOSS algorithm to prove Theorem 6. Since it is not related to the proposed algorithm, we decided not to put it in the main text. **Questions:** * *The expression...* **Reply:** Thank you for your reminder. We have addressed this limitation in the revised paper by clarifying it in the abstract. * *Line 115...* **Reply:** Sorry, $r = \frac{N}{2p}$ is defined later. We will fix this mistake. * *The paper...* **Reply:** Theorem 1 may be generalized to the non-Gaussian case. We do not think Theorem 2 can be easily generalized to non-Gaussian distributions since in that case, one may need to deal with the concentration inequality of more general order statistics. Theorems 3 and 4 do not assume $\textbf{Z}$ is Gaussian. They impose some conditions on the tail probability of $\textbf{Z}$. But the imposed conditions may not be easily verified for sub-Gaussian data. Theorems 5 and 6 rely on Gaussian assumptions. There are $\log$ terms in the statements of Theorems 2 and 3. These $\log$ terms come from the Gaussian assumption. For sub-Gaussian data, the $\log$ terms may not be correct. 
* *In view...* **Reply:** Following your advice, we will make clear that our theoretical results demonstrate that the proposed method can achieve a faster convergence rate, up to a $\log(p)$ factor, compared with the optimal convergence rate of sampling methods. We would like to argue that the $\log(p)$ term may not be as weak as it appears. In fact, from our theoretical results, the $\log(p)$ improvement can already break the lower bound of sampling methods stated in Theorem 2. * *On...* **Reply:** In fact, for large $p$, the term $2\log(2p)$ in Theorem 5 may not be a small number. For example, if $p = 200$, then $2\log(2p)$ is approximately 12, which may look "huge". In this view, the good performance of the proposed method is reasonable. Our experimental results show that compared with UNI, the proposed method has particularly good performance in Case 3 and Case 4. Note that in Case 3 and Case 4, the data distribution has heavy tails. Theorem 5 says nothing about the heavy-tailed case. Heuristically, since the proposed method relies on order statistics, it may be expected that it has excellent performance when the data distribution is heavy-tailed. In this view, the results in Cases 3 and 4 are reasonable. **Limitations:** * *One limitation...* **Reply:** It is convenient to use the reduced $2p$ observations to conduct statistical inference since these observations also satisfy the linear model; see the formula between lines 179 and 180 of the main text. From that formula, it can be seen that the reduced observations satisfy a regression whose error term $\bar{\varepsilon}_j$ is a mean of independent stochastic errors. Hence, by the central limit theorem, it can be expected that $\bar{\varepsilon}_j$ behaves just like a normal random error. In this view, classical statistical methods for Gaussian data can be used, which may produce asymptotically correct inference results. * *The supplementary...* **Reply:** Thank you for your suggestion. 
We promise that if the paper is accepted, we will open-source our code. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I very much appreciate the rebuttal by the authors. I agree that the authors' plan of restructuring the paper sounds good. Still I would like to mention that Gaussianity is a very strong assumption, as pointed out by other reviewers. Also, I am not fully convinced that the $2\log(2p)$ factor is important at least in terms of theory, especially given that $p$ is relatively small compared with $n$. I am open to changing my rating but will keep it as it is now because of these concerns. --- Reply to Comment 1.1.1: Comment: Thank you for your comment and feedback. 1. Regarding non-Gaussian distributions. In experiments, it is shown that for some non-Gaussian distributions, the proposed method can perform very well. In theory, we have pushed ourselves to prove some theoretical results regarding non-Gaussian distributions (Theorems 3 and 4). But these results may not be reader-friendly, so we would like to move them to the appendix. Perhaps the general behavior of the proposed method is too complicated to have a simple characterization. A key difficulty toward a unified theory for non-Gaussian distributions is that the proposed method involves order statistics, and the concentration behavior of order statistics may not be unified in a simple way. 2. Regarding the $\log(p)$ improvement. As mentioned in your review, it can be observed that the proposed method achieves "huge" improvements over competing methods in experiments. As replied in our feedback, a log term can indeed look "huge" when the dimension is moderately large, e.g., $p=200$. So there should be no disagreement that the log term can have observable impact in practice. Now we would like to argue that *in terms of theory*, the log term may not be as weak as it first looks either. 
In fact, based on our theoretical results, with the same order of computation time, the $\log(p)$ improvement can break the lower bound of sampling methods stated in Theorem 2. So the log improvement is not relative to a specific method, but rather to a class of methods. This class includes some recent methods, such as IBOSS. Even if the $\log(p)$ term is not important in itself, it at least proves that breaking the lower bound of Theorem 2 *is possible*. Previously, it may have been unclear whether there was a need to consider methods beyond sampling methods. Our theoretical work implies that considering methods beyond sampling methods can indeed have benefits. We think this information may be important in theory. We appreciate your openness to changing your rating based on these concerns. If you have any further questions or need additional clarification, please let us know.
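The selection-then-averaging scheme debated in this thread — IBOSS-style selection of extreme covariate values into $2p$ groups, but averaging every group member instead of subsampling — can be sketched as below. This is a simplified reconstruction from the discussion (the function name and the exact group-assignment details are guessed for illustration), not the authors' precise algorithm:

```python
import numpy as np

def averaged_estimator(X, y):
    """Partition rows into 2p groups by extreme values of each covariate
    (IBOSS-style), average within groups, then run OLS (with intercept)
    on the 2p averaged observations."""
    N, p = X.shape
    r = N // (2 * p)                          # rows per group
    used = np.zeros(N, dtype=bool)
    groups = []
    for j in range(p):
        order = np.argsort(X[:, j])
        order = order[~used[order]]           # only rows not yet assigned
        for idx in (order[:r], order[-r:]):   # smallest and largest values
            used[idx] = True
            groups.append(idx)
    # Averaged observations still satisfy the linear model exactly,
    # with the noise shrunk by averaging within each group.
    Xbar = np.array([X[g].mean(axis=0) for g in groups])
    ybar = np.array([y[g].mean() for g in groups])
    A = np.column_stack([np.ones(len(groups)), Xbar])
    coef, *_ = np.linalg.lstsq(A, ybar, rcond=None)
    return coef                               # [intercept, slopes...]
```

Because every observation contributes to some group average, no sample is discarded — the property the authors contrast with sampling-based reductions such as IBOSS.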
Summary: This paper gives lower bounds on the conditional mean squared error for sketching methods. The authors focus on the least squares estimator and show that when the problem dimension is sufficiently large, the optimal error rate among all sampling reductions is achieved by uniform sampling. They also propose a sketching method based on data averaging that achieves better performance under some scenarios. A fast implementation of their proposal is also given in the paper. Strengths: This paper gives a careful analysis of the lower bound on the estimation error incurred by sampling methods, which is tight in many cases. It also proposes a simple yet useful sketching algorithm that has better statistical guarantees than sampling-based algorithms in many interesting regimes. The authors note that the associated optimization problem could be hard to solve, and they come up with a reasonable alternative approach. This work gives nice theoretical insights into data averaging algorithms, which could help with the design of data reduction methods in practice. Weaknesses: This paper considers solely least squares estimators after sketching, which could restrict its applicability in comparison to results that involve more general estimators. In the paper, the authors propose a sketching method based on data averaging that is supposed to be more efficient and achieve better accuracy. However, they didn't compare their proposal with sketching algorithms other than sampling-based methods. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Line 80 "ruduce -> reduce" 2. In the equation between lines 123 and 124, does X stand for the training data or the testing data? 3. Why focus on mean squared error instead of prediction error? The latter objective seems more interesting to me. If the authors have a specific reason for that, it would be helpful to state it in the paper. 4. The Gaussian assumption is a bit restrictive.
Does the theory work for other light-tailed distributions, for example sub-Gaussian ones? 5. I might have missed this, but where is $\hat\beta_I$ in Theorem 6 defined? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes. Societal impact not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comment:** We sincerely appreciate your valuable comments and suggestions, particularly the positive feedback regarding the proposed method. Below, we provide a response centered on two main aspects: the **weaknesses and questions** related to our paper. The relevant passages are indicated in italics below. We believe and hope that our revisions will ultimately meet your satisfaction, and that the paper will have the opportunity to be published at NeurIPS. **Weaknesses:** * *This paper considers solely least squares estimators after sketching, which could restrict its applicability in comparison to results that involve more general estimators. In the paper, the authors propose a sketching method based on data averaging that is supposed to be more efficient and achieve better accuracy. However, they didn't compare their proposal with sketching algorithms other than sampling-based methods.* **Reply:** As you pointed out, in this work we confine ourselves to least squares estimators after sketching, and do not consider more general estimators such as iterative methods. There are two benefits to considering such a class of estimators. First, the considered class, although restricted, is important. Indeed, within this class of estimators there are not only some recent sampling and sketching methods, but also some new methods such as the proposed one. Even for iterative methods, the initial point is often chosen as a sketched least squares estimator, which lies in the considered class. Second, the considered class allows for theoretical analysis. The class of estimators is not too large, so it is possible to prove some theorems that characterize certain properties of the class, as we did in Section 2. Such results may be very difficult to obtain if one considers a larger class including more general estimators such as iterative methods.
Our theoretical analysis does provide insights: it shows that sampling methods may be sub-optimal. Indeed, in the supplementary material, we provide comparisons with the sketched least squares estimator based on the subsampled randomized Hadamard transform (SRHT). For sketched least squares estimators with other sketches, the convergence property may behave similarly in view of the lower bound of Pilanci and Wainwright (2016). **Questions:** 1. *Line 80 ''ruduce $\rightarrow$ reduce"* **Reply:** Thank you for pointing out the typo; we have corrected it in the updated version. 2. *In the equation between lines 123 and 124, does $X$ stand for the training data or the testing data?* **Reply:** Thank you for raising this question. In lines 123-124, the symbol $X$ refers to the training data set. 3. *Why focus on mean squared error instead of prediction error? The latter objective seems more interesting to me. If the authors have a specific reason for that, it would be helpful to state it in the paper.* **Reply:** Mean squared error (MSE) and prediction error (within the training data) can both measure the performance of an estimator. MSE is more convenient in our setting. In fact, the prediction error involves the data matrix $\textbf{X}$, which is random, and there is also randomness in $\hat \beta\_{A}$; these two sources of randomness are not independent, which makes the analysis of the prediction error complicated. For some other methods, either $\textbf{X}$ is assumed to be constant or $\textbf{X} (\hat \beta\_A - \beta)$ can be simplified via an explicit formula for $\hat \beta\_A$. For our method, $\textbf{X}$ is random and $\hat \beta\_A$ does not have a simple formula, so we choose MSE for convenience. 4. *The Gaussian assumption is a bit restrictive. Does the theory work for other light-tailed distributions, for example sub-Gaussian?* **Reply:** Theorem 1 may be generalized to the non-Gaussian case.
We do not think Theorem 2 can be easily generalized to non-Gaussian distributions, since in that case one may need to deal with concentration inequalities for more general order statistics. Theorems 3 and 4 do not assume $\textbf{Z}$ is Gaussian; they impose some conditions on the tail probability of $\textbf{Z}$, but the imposed conditions may not be easily verified for sub-Gaussian data. Theorems 5 and 6 rely on Gaussian assumptions. There are $\log$ factors in the statements of Theorems 2 and 3; these factors come from the Gaussian data assumption, and for sub-Gaussian data they may no longer be correct. 5. *I might have missed this, but where is $\hat{\beta}\_I$ in Theorem 6 defined?* **Reply:** $\hat{\beta}_I$ denotes the estimator of IBOSS. We have also added a diagram that demonstrates IBOSS more clearly. **Reference** Mert Pilanci and Martin J. Wainwright. Iterative Hessian sketch: fast and accurate solution approximation for constrained least-squares. Journal of Machine Learning Research, 17: 1–38, 2016. --- Rebuttal Comment 1.1: Comment: Thank you so much for the response! I do not have any further questions. --- Reply to Comment 1.1.1: Comment: Dear Reviewer DaqC: Thanks for your response. We appreciate your comments and suggestions.
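To make the distinction in Question 3 above concrete, here is a hypothetical sketch (not from the paper; the uniform-subsampling reduction and all problem sizes are illustrative) of the two error metrics for a sketched least squares estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, n = 5000, 5, 200
beta = rng.normal(size=p)
X = rng.normal(size=(N, p))          # random design, as in the paper's setting
y = X @ beta + rng.normal(size=N)

# Hypothetical sketching step: uniform subsampling of n rows, then OLS.
idx = rng.choice(N, size=n, replace=False)
beta_hat, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)

# Mean squared error of the estimator: ||beta_hat - beta||^2.
mse = float(np.sum((beta_hat - beta) ** 2))

# In-sample prediction error: ||X (beta_hat - beta)||^2 / N.  Note that the
# random X enters both beta_hat (through the sampled rows) and this expression,
# and the two are dependent -- the difficulty the reply describes.
pred_err = float(np.mean((X @ (beta_hat - beta)) ** 2))
```

With an isotropic design the two quantities are closely related in expectation, but analyzing the prediction error requires handling the dependence between $\textbf{X}$ and $\hat\beta$, which is why the MSE is the more tractable target here.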
Summary: This submission studies the asymptotic estimation risk of linear regression with various data sketching strategies. The authors start by refining an existing lower bound to show that sketching by uniform sub-sampling is only minimax optimal when the feature dimension is very large; otherwise, improvement is possible, and the authors prove the previously proposed IBOSS algorithm is minimax optimal. In order to circumvent their own lower bound, the authors then exit the random sketching framework to define a new method based on data averaging. This method is shown to outperform uniform random sketching and IBOSS both theoretically and empirically. Strengths: This submission has two main strengths: - It completes the landscape of minimax lower bounds for random sketching applied to linear regression, establishing optimality of IBOSS in a new regime where the number of features is not too large. - It improves on the performance of sketching methods by developing a new framework based on data averaging. This framework is novel, achieves faster (theoretical) convergence of the estimation risk, and outperforms sketching methods in practice. It is also comparable to IBOSS in computation time. I cannot comment on the correctness of the theoretical developments or the novelty of the proofs since this paper is out of my research area. Related work seems appropriately referenced. Weaknesses: The main weakness of this paper is the presentation, which obscures the novelty of the results and makes much of the theoretical development hard to follow. In particular, - The discussion is dense and mathematical, and does not give any intuition into the bounds proved or the assumptions required. This makes it difficult to evaluate the novelty and utility of some results, in particular Theorems 3 and 4. No effort is made to compare the risk bounds given throughout the paper, although it would be very useful for Theorems 5 and 6.
- The experimental results, which make a strong argument for the utility of the proposed data averaging method, are all deferred to the appendix. - The description of the data averaging method in Section 3.1a is difficult to follow, so that Algorithm 1 is more useful than the text itself. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - Assumption 1: I don't see where $r > 0$ is used in the assumption. - Theorem 1/2: How restrictive is the requirement that $Z$ be normally distributed? Does the result of Pilanci and Wainwright (2016) require similar distributional assumptions on $Z$ in order to derive minimax lower bounds? - Theorem 2: Why does $\mathbf{E}[O (O^\top O)^{-1} O^\top | Z]$ being diagonal allow a tighter lower bound to be derived? In particular, is there a chance that sketching matrices not satisfying this condition can out-perform the lower bound in Theorem 2? - Displays after lines 187 and 208: Where do these expressions come from? The lower bound on the trace after line 208 is used to guide the development of Algorithm 1. Since it plays a critical role, some intuition into where this expression comes from is important for the reader to follow the argument. - Theorem 3: It is somewhat strange that $r = N / 2p$ is said to be an integer when the conditions on $p$ and $N$ require that $r \rightarrow \infty$. I suppose what you really mean is that the divergent sequences $N_i$, $p_i$ are chosen so that $r_i = N_i / 2p_i$ is always an integer? Is this condition simply for convenience when working with Algorithm 1, i.e., to avoid working with floor operators? Remark 1: I think some additional comments on the tail bounds in (5)-(7) must be provided. Firstly, what is $\mathcal{A}$? I only see the definition of $\mathcal{A}_j$ in lines 236/237. Is this any measurable set? Secondly, what do the tail bounds imply about the original dataset $Z$?
It seems that they say something like "tail examples concentrate faster as dimension increases," but it's hard to tell since the $\mathcal{A}$ sets also depend on $p$. Do these conditions make sense for general data? Can you give an example of a data-generating distribution for which these conditions hold, e.g. sub-Gaussian distributions? Theorem 4: So, essentially $z$ follows a symmetric, isotropic distribution with strict conditions on the tails? Equation 9 implies that the tails need to be sufficiently large, but not too large (Equation 10); how do these conditions compare to (5)-(7)? Again, how are they satisfied by real data? I cannot understand the impact of Theorems 3/4 without understanding how and when these tail conditions are satisfied. Theorem 5: Logarithmic improvement in the convergence when the number of features tends to infinity seems quite weak. Asymptotic regimes where $N \rightarrow \infty$ are somewhat justified since more data can be collected, but rarely can more features be collected. With that said, I do like that the $\log(N / n) \ll p$ condition is not required, since this implies even more unrealistic asymptotics for $p$. Additionally, why not just assume $Z$ is standard normal from the start, rather than developing Theorems 3, 4 under awkward tail assumptions? Theorem 5 is much cleaner, and Theorems 1, 2 already require the standard normal assumption. Theorem 6: It would be nice to remind the reader of the difference between $\beta_I$ and $\beta_A$ here. I think $\beta_I$ is the IBOSS estimator, but this should probably be stated in the Theorem. Tables 1/2: It seems like every method but the one proposed experiences catastrophic failure when $p = 200$, performing about as well as the naive VDA method. IBOSS performs slightly better than competitors in Cases 3,4, but otherwise it also fails. I have two questions here: (i) Why do all other methods approach the same MSE for $p = 200$?
Is this the error of a zeroth-order estimator, e.g. the mean? (ii) The baseline methods are all sketching-based, meaning Theorem 2 would lower-bound their risk, correct? If so, does their poor performance reflect this lower bound, and does the good performance of the proposed method rely on circumventing it by exiting the sketching framework? Line 83 (Appendices): Using commas to separate methods and commas to denote the thousands place makes this list difficult to read. I suggest putting the results in a standard table with one row per method for improved clarity. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: As mentioned in the "Weaknesses" section, the major limitation of this paper is the presentation and quality of the writing. Specifically, - The assumptions in Theorems 3 and 4 need to be clarified before their utility is clear to me. Since Theorem 5 makes a standard normal assumption just as Theorem 2 does, it's not clear to me why it is interesting to present Theorems 3/4 in the main paper when they require specific and unjustified tail bounds on the data distribution. - The explanation of Algorithm 1 should be improved. I suggest including a figure illustrating Algorithm 1, if possible. - The rates in Theorems 5 and 6 should be compared and some commentary given. In general, more explanation of the theorems ought to be given. - I strongly suggest reducing the technicality of the main paper and introducing some of the empirical results from the appendix to clarify the utility of the proposed averaging method. The method works well in practice, which should be highlighted rather than hidden.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments and suggestions. Due to space constraints, we place the discussion about non-normal cases, the presentation, and the tables in the global rebuttal. Below, we provide responses to the main **questions and limitations**. **Questions:** * *Assumption 1...* **Reply:** In the latter part of the text, we define $r = \frac N{2p}$, and the condition $r>0$ mathematically excludes the case $N<2p$. We would like to delete this condition and make clear, where necessary, that the case $N<2p$ is not considered. * *Theorem 1/2...* **Reply:** The response to the first question can be found in the global rebuttal. As for the second question, the minimax result of Pilanci and Wainwright (2016) does not assume $Z$ is random, and hence is indeed more general. However, as we pointed out in the main text, their condition on $\mathbf{E}_o = \mathbf{E}\left[O(O^{\top} O)^{-1}O^{\top}|Z\right]$ is restrictive and has a different focus. As a result, the corresponding conditions for establishing the lower bound differ. * *Theorem 2...* **Reply:** The diagonal condition on $\mathbf{E}_o$ restricts the class of admissible sketching methods, and it is possible to obtain fine-grained results for a restricted class of methods. Technically, in the proof of Theorem 2, the diagonal condition is used to obtain that $\text{tr}[\textbf{X}^{\top}\mathbf{E}_o\textbf{X}]=\sum\_{i=1}^Nd_i||X\_i||^2$. This equality does not generally hold if $\mathbf{E}_o$ is not diagonal, so there is indeed a chance that sketching matrices not satisfying this condition can outperform the lower bound in Theorem 2. The proposed data averaging method is such a method: its corresponding $\mathbf{E}_o$ is not diagonal in general, and according to Theorem 5, the proposed method can outperform the lower bound established in Theorem 2. This is the merit of the proposed method: its existence proves that it is possible to go beyond the lower bound of Theorem 2.
* *Displays after lines 187 and 208...* **Reply:** The details of the formula derivations can be found in the PDF file attached to the global rebuttal. * *Theorem 3...* **Reply:** Yes, what we really mean is that the divergent sequences $N_i,p_i$ are chosen so that $r_i=N_i/2p_i$ is always an integer, for convenience. We will clarify this in the main text. * *Remark 1...* **Reply:** Yes, $\mathcal{A}$ is any measurable set in $p$-dimensional Euclidean space, and $\{\mathcal{A}: \Pr(Z\in\mathcal A)\leq\frac 1p\}$ represents the class of measurable sets $\mathcal{A}$ satisfying $\Pr(Z\in\mathcal A)\leq\frac 1p$. Roughly speaking, the tail bounds (5)-(7) require that the dimension $p$ is not too large and/or the tail of $Z$ is sufficiently light. They are satisfied by the standard Gaussian distribution under some conditions on the dimension; see lines 429-437 in the Appendix. These bounds may also be satisfied by more general distributions, e.g., sub-Gaussian distributions, but establishing such results may require substantial mathematical work. We would like to follow the suggestion and confine ourselves to the Gaussian setting. * *Theorem 4...* **Reply:** Equation 9 is a weak condition; roughly speaking, it merely says that no $z_j$ will shrink to $\mu_j$. Equation 10 is another bound on the tail. So conditions (5)-(7) and (10) all require the tail to be not too heavy; however, there seems to be no simple condition that unifies these three conditions. Under certain conditions on the dimension, these conditions are satisfied by the standard normal distribution. We would like to confine ourselves to the normal distribution setting. * *Theorem 5...* **Reply:** We agree that the logarithmic improvement is quite weak. But its merit is that it proves the improvement is possible: we now know there are indeed methods with good computing time, other than sampling, that may break the lower bound of Theorem 2. So if the theoretical contribution is taken into account, the present work may be meaningful.
Previously, one of our goals was to make the results general and go beyond the normal distribution; it turned out that the cost was greatly reduced readability. * *Theorem 6...* **Reply:** Yes, $\beta_I$ is the estimator of the IBOSS algorithm. * *Tables 1/2...* **Reply:** For question (1): For the competing sketching methods, we take $n=N/p$ so that the typical computing time is $O(Np+p^3)$. In this case, the error rate is of order $p^2/N$. In Tables 1 and 2, $N=8\times10^4$, $p=200$ and $n=400=2p$. For the VDA method, all data are reduced to $2p$ averaged observations. In this view, the results in Tables 1 and 2 are reasonable. For question (2): The baseline methods are bounded by either Theorem 1 or Theorem 2. Yes, their poor performance reflects the lower bounds of Theorems 1 and 2. The newly proposed method can also be treated as a sketching method, but it violates the key conditions of Theorems 1 and 2, so it is able to break the lower bounds of Theorems 1 and 2, resulting in better performance. **Limitations:** * *The rates...* **Reply:** Theorem 6 implies that, under some technical conditions, IBOSS achieves the lower bound given by Theorem 2, and hence the bound of Theorem 2 is tight. One may not expect a sampling method significantly better than IBOSS. For the typical implementation of sampling methods, one needs to take $n\asymp N/p$ to keep the computing time within $O(Np+p^3)$. Theorem 5 implies that, under some technical conditions, the proposed method converges faster than sampling methods with comparable computing time, including IBOSS. Roughly speaking, the proposed method breaks the lower bound in Theorem 2; hence, better is possible. --- Rebuttal Comment 1.1: Comment: Thanks for responding to my review. I read the global author response and I think that the proposed changes to the organization of the text will greatly improve the paper.
I understand and sympathize with the desire to go beyond Gaussian data; Theorems 3/4 may turn out to be valuable contributions, but their complexity means they are more suited to the appendix than to the main paper. I will discuss the submission with the other reviewers before updating my score. The changes to the manuscript also address some of the concerns put forward by Reviewer sefb, so I am hopeful we can reach a positive consensus on this submission. --- Reply to Comment 1.1.1: Comment: Dear Reviewer EUfX: Thank you for your review and feedback. We appreciate your kind response and value your insights on improving the overall quality of our paper through better organization of the text. We acknowledge your support for exploring beyond Gaussian data and agree that Theorems 3/4, while valuable, are better suited for the appendix due to their complexity; they will be moved there in the updated version. Thank you for engaging in discussion with the other reviewers; we hope it will lead to a positive consensus. We sincerely appreciate your valuable insights, and your updated score and feedback are highly valuable to us. Thank you for your time, effort, and contribution to improving our paper.
Rebuttal 1: Rebuttal: We warmly thank all reviewers for the time you took to review and understand our paper. Most reviewers pointed out that the presentation of the paper should be improved: the discussion is dense and mathematical, and due to space constraints, the experimental results are all deferred to the appendix. Following the reviewers' suggestions on presentation, we would like to move some details of the derivations for the theorems to the supplemental material and introduce some of the empirical results from the appendix to clarify the utility of the proposed averaging method. Specifically, the primary objective of presenting Theorems 3 and 4 in the main paper was to provide a comprehensive understanding of the theoretical framework. It is important to note that Theorem 5 assumes a standard normal distribution, which can be seen as a specific case of the more general results provided by Theorems 3 and 4. In the camera-ready version, we would like to move Theorems 3 and 4 to the appendix. Then we can include a concise summary of key experimental results. If space permits, we will move Table 1 and Table 5 to the main text. To make the presentation of the results clear, we plan to add Table 1 in the PDF file, which summarizes the theoretical performance of the proposed method and compares it with the ideal sampling method implied by Theorem 2 and with the IBOSS algorithm. In short, the merit of Theorem 6 is that it shows that IBOSS can match the lower bound provided by Theorem 2; the merit of Theorem 5 is that it shows that there exists a method (i.e., the proposed method) which can break the lower bound of Theorem 2. Reviewer EUfX, Reviewer DaqC and Reviewer sefb noted that the Gaussian assumption is a bit restrictive. Firstly, we clarify the assumptions in the theorems. Theorem 1 may be generalized to the non-Gaussian case.
We do not think Theorem 2 can be easily generalized to non-Gaussian distributions, since in that case one may need to deal with concentration inequalities for more general order statistics. Theorems 3 and 4 are more general than Theorem 5 and are used in its proof. These two theorems do not assume a Gaussian distribution; the cost is some involved characterizations of the tail behavior. While these two theorems may shed some light on the behavior for non-Gaussian distributions, we did not rigorously give results for concrete non-Gaussian distributions. They impose some conditions on the tail probability of $\textbf{Z}$, but the imposed conditions may not be easily verified for sub-Gaussian data. Theorems 5 and 6 rely on Gaussian assumptions. There are $\log$ factors in the statements of Theorems 2 and 3; these factors come from the Gaussian data assumption, and for sub-Gaussian data they may no longer be correct. Previously, one of our goals was to make the results general and go beyond the normal distribution. It turned out that the cost was greatly reduced readability, which may not be a good trade-off. Pdf: /pdf/f2d9d8fa6f2fdcde54e89b55af72e5e0c812294b.pdf
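As a rough illustration of the data-averaging idea discussed in these rebuttals, here is a hypothetical sketch in which $N$ observations are reduced to $n = 2p$ averaged observations before running OLS. The consecutive-block grouping below is a simplification (along the lines of the vanilla VDA baseline mentioned above); the paper's actual method chooses the groups more carefully, and all sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 8000, 10
beta = rng.normal(size=p)
X = rng.normal(size=(N, p))
y = X @ beta + rng.normal(size=N)

# Reduce N observations to n = 2p averaged observations: partition the rows
# into n consecutive blocks of size r = N // n and average within each block.
# (Simplified grouping rule, not the paper's partition.)
n = 2 * p
r = N // n
X_avg = X[: n * r].reshape(n, r, p).mean(axis=1)
y_avg = y[: n * r].reshape(n, r).mean(axis=1)

# The averaged data still satisfy the linear model
# y_avg = X_avg @ beta + eps_avg, so OLS on the reduced data estimates beta.
beta_avg, *_ = np.linalg.lstsq(X_avg, y_avg, rcond=None)
```

The key property used in the regularization discussion above is visible here: the reduced $2p$ observations again follow a linear regression model in the same $\beta$, so any downstream estimator (regularized or not) can be applied to them.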
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studies linear regression where the number of samples N is much larger than the number of predictors p (N >> p), which is computationally costly due to large N. The paper investigates lower bounds for existing sampling-based methods, and proposes a novel sketching method based on data averaging which reduces the original data to a few averaged observations. Theoretical results show that the method has a faster convergence rate than the optimal convergence rate for sampling methods, and experiments show that the method reduces mean squared error over previous methods. Strengths: - The proposed method is a new sketching method for large-scale linear regression with large N, and it is original. - The method is well-motivated from a theoretical perspective, and the bounds for the proposed method under certain assumptions are better than those of recent sampling-based methods. - The method has $\mathcal{O}(Np + p^3)$ complexity, which is favorable when $N>>p$. Weaknesses: - There are no experimental results in the main paper, which are quite important for a sketching paper whose claims are improved computing time with reduced mean squared error. The results in the supplement are not extensive, and there is only one real data example, which makes it difficult to quantify the benefits of the proposed method. - The setup of the method is constrained to the setting with no regularization, and only the setting where $N>>p$. It might be good to add a discussion on how applicable this method is to other settings for linear models with different N and p, and with regularization. - The presentation of the paper is quite difficult to follow. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Questions: - How does the memory usage compare for competing methods? Please add the CPU memory usage either in Table 5 or in a new table. Suggestions: - Minor grammatical and typographical errors should be fixed with proofreading.
The citation style also does not match the NeurIPS style, which should be fixed. - One example of the poor presentation is that the matrix $A$ is defined after it is referenced in the text as $\beta_A$. Fixing such issues would help the paper. - The presentation of the paper can be improved by deferring some of the related work on page 2, and details of the derivations for the theorems, to the supplemental material. The generated space can be used to add numerical results, which could include some more real data examples. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There are no clear potential negative societal impacts of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments and suggestions, particularly the positive feedback regarding the proposed method. Below, we provide a response centered on three main aspects: the **weaknesses, questions, and suggestions** related to our paper. The relevant passages are indicated in italics below. We have noticed that your feedback primarily focuses on the presentation of the paper. We believe and hope that our revisions will ultimately meet your satisfaction, and that the paper will have the opportunity to be published at NeurIPS. **Weaknesses:** * *There are no results in the main paper...* **Reply:** Thank you for your suggestions. In the original paper, experimental results were deferred to the appendix due to the space constraint. We will consider including some experimental results in the main text. The present paper is mainly a theoretical/exploratory paper. The merit of the newly proposed method is that it breaks the ice and proves that there indeed exists a method that can break the lower bound in Theorem 2. This proves: better is possible. This fact may be more important than the practical performance of the proposed method. While the proposed method may be far from optimal, it may shed light on future developments. * *The setup of the method is constrained to the setting with no regularization...* **Reply:** Our results carry particular significance in the case of $N>>p$, where the proposed estimation method exhibits a faster convergence rate. You raise two interesting directions: regression with regularization, and the case beyond $N>>p$. These directions may be highly nontrivial. We will consider adding brief discussions on these topics in the paper. For regularization, a straightforward approach is to use the proposed method to reduce the observations to $2p$, and then use the $2p$ observations to perform regularized regression.
One may immediately obtain some theoretical results, since the reduced $2p$ observations also satisfy the linear regression model, but much effort would be needed to seriously explore the fine-grained behavior of such an approach. For the case beyond $N>>p$, the methodology may be entirely different, since in this case reducing $N$ may not be a good strategy. In fact, perhaps most existing work on sketching methods for the least squares problem can only work well in the case $N>>p$, since the core idea is reducing $N$. Overall, these two directions may be good topics to consider in the future. * *The presentation of the paper is quite difficult to follow.* **Reply:** Thank you for your comments. We will improve the presentation of our paper based on your suggestions. Specifically, the presentation can be improved by deferring some of the related work on page 2, some theoretical results, and some details of the derivations for the theorems to the supplemental material. The saved space can be used to show numerical results. **Questions:** * *How does the memory usage compare for competing methods? Please add the CPU memory usage either in Table 5, or in a new table.* **Reply:** We now report the memory usage results. The setting is as follows: the CPU frequency is 3.1 GHz; the memory is 16 GiB; the operating system is Ubuntu 22.04.2; the compiler is gcc 11.4.0. The data type is double. We record the maximum resident set size (maxrss) used (in kilobytes), which is obtained via the function getrusage in sys/resource.h. Since the result differs slightly in each run, we report the average of 10 runs. Note that the data itself takes $N \times (p+1) \times 8$ bytes of memory. For the case of $N = 8 \times 10^4$ and $p = 50$, the data takes 31875 KiB of memory. For the case of $N = 6.4 \times 10^5$ and $p = 400$, the data takes 2005000 KiB of memory. It can be seen that, except for SRHT, the memory cost is mainly for storing the data.
For SRHT, we need to transform the data, so additional memory is used. In our experiments, the full data resides in main memory; the present work is mainly theoretical and is confined to this case. One interesting scenario is when the full data is too large to fit in main memory. This is a topic worth exploring in the future.

| $N$ | $p$ | NEW | VDA | UNI | SRHT | LEV | IBOSS | FULL |
| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| $8 \times 10^4$ | $50$ | 36242 | 36516 | 35989.6 | 89050.8 | 36839.2 | 36260 | 35456 |
| $6.4 \times 10^5$ | $400$ | 2017482 | 2019960.4 | 2012389.6 | 5300062.8 | 2025016 | 2015057.6 | 2009964 |

**Suggestions:** * *Minor grammatical and typographical errors should be fixed with proofreading. The citation style also does not match the NeurIPS style, which should be fixed.* **Reply:** Sorry for that! We will address the presentation issues carefully. Thank you for pointing out the issue with the citation style; we will fix it in the camera-ready version. * *One example of the poor presentation is that the matrix $A$ is defined after it is referenced in the text as $\beta\_A$. Fixing such issues would help the paper.* **Reply:** Thank you for pointing out the confusion about notation. The subscript of $\beta\_A$ was intended to be the initial letter of "average". We will consider improving the notation in the camera-ready version. * *The presentation of the paper can be improved...* **Reply:** Thank you for providing detailed suggestions on the paper's presentation. We will improve it in the camera-ready version.
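The quoted data sizes follow directly from the $N \times (p+1) \times 8$-byte footprint of an array of doubles; a quick sanity check of the arithmetic (illustrative only, not part of the rebuttal's benchmark code):

```python
# Back-of-envelope check of the data-memory figures quoted above:
# an N-by-(p+1) array of doubles (8 bytes each), reported in KiB.
def data_kib(n: int, p: int) -> float:
    return n * (p + 1) * 8 / 1024

print(data_kib(8 * 10**4, 50))    # -> 31875.0
print(data_kib(64 * 10**4, 400))  # -> 2005000.0
```

Both figures match the rebuttal's numbers exactly, confirming that the maxrss overhead beyond these values is the per-method working memory.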
Language Models Implement Simple Word2Vec-style Vector Arithmetic
Reject
Summary: The paper presents evidence that LMs sometimes use a computational mechanism similar to traditional word embeddings, specifically using simple vector arithmetic to encode abstract relations. Experiments show that this mechanism is specific to tasks that require retrieval from pretraining memory rather than from local context. In sum, this paper sheds light on the inner workings of LMs and provides insights into their interpretability. Strengths: - The paper is well-written, easy to follow, and a pleasure to read. - It is appreciated that the paper uses LMs of a range of sizes to investigate the arithmetic mechanism. - The paper provides a novel way to isolate the function application. By adding vectors $\vec{o}\_{city}$, $\vec{o}\_{upper}$ and $\vec{o}\_{past}$ to a synthesized dataset, the authors show that the model can generate outputs of the intended function. - The results and analysis provide insights into how LMs work on a set of tasks that require retrieval from their pretraining memory, which is interesting to the community. Weaknesses: - The authors focus on the country-capital task, either because it is more interesting or due to the page limit. However, it is encouraged (if space allows) to also investigate how the early-decoding output token changes layer by layer for the other two tasks. - Beyond when and why LMs produce correct outputs, it is also important to know when and why they produce incorrect ones. However, the paper does not include such an in-depth discussion. Thus, I would encourage the authors to include some examples of when LMs do not correctly output the capital of a country, the upper case of a word, or the past tense of a verb. Importantly, the authors should provide some insights into the hard cases (if they exist): i.e., cases where LMs can correctly output the capital of a country but cannot when adding $\vec{o}\_{city}$ to the corresponding country vector in synthesized data.
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Line 126-128: the author says that the model prepares the argument to the ***get_capital*** function in layers prior to the one in which the function is applied. From Figure 2, the model only prepares this “Poland” argument after layer 15; could you provide some intuition on what the model is doing before layer 15? Why do the outputs seem random before layer 15? - I would be very interested in the process for the cases where the model predicts the wrong answer for the capital. In the case of a failure, does the model also exhibit argument-function processing? If it does, did the model form a wrong argument, or a correct argument but a wrong answer? - Line 186: o vectors -> $o$ vectors - Figure 4: it would be better to move the figure near the text where it is first discussed in the camera-ready version. - Line 144: the author claims that when the model enters saturation, it ceases updating the token representation. This argument could be further strengthened by constructing a sequence, e.g., “table mug free Beijing table mug free Beijing”, getting the residual stream for “Beijing”, and adding $\vec{o}\_{city}$ to it, to see if “Beijing” will still be produced. That is, instead of applying ***get_capital*** to a country, apply it directly to a capital. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The paper does not include the Limitations or Broader Impact sections. Therefore not applicable.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
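For readers unfamiliar with the mechanism at issue, the word2vec-style vector arithmetic the reviews discuss can be illustrated with a toy, hand-built embedding space (all names and vectors below are invented for illustration; this is not the paper's code or data):

```python
# Toy embedding space in which each capital sits near (country + o_city),
# mimicking the word2vec-style offset mechanism under discussion.
E = {
    "France": [0.0, 0.0], "Paris":  [1.0, 2.1],
    "Poland": [5.0, 1.0], "Warsaw": [6.1, 3.0],
    "Japan":  [2.0, 6.0], "Tokyo":  [3.0, 8.0],
}

def nearest(query, exclude):
    # decode by nearest neighbour (squared Euclidean distance)
    return min((w for w in E if w not in exclude),
               key=lambda w: sum((a - b) ** 2 for a, b in zip(E[w], query)))

# estimate the country -> capital offset from one pair ...
o_city = [p - c for p, c in zip(E["Paris"], E["France"])]
# ... and apply it to a held-out country
query = [c + o for c, o in zip(E["Poland"], o_city)]
print(nearest(query, exclude={"Poland"}))  # -> Warsaw
```

The paper's claim is that, for certain one-to-one relations, an FFN output vector in a mid-to-late layer plays the role of `o_city` in the residual stream.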
Rebuttal 1: Rebuttal: Thank you for the thorough review and questions. We agree that a more in-depth error analysis would be helpful and can prepare this for the camera-ready version. We received similar feedback from other reviewers and address this and related points in our rebuttal and accompanying pdf. The cases you pointed out specifically, like whether error cases arise because of a wrong argument or a correct argument but a wrong answer, are interesting, and we will include these. To answer this one: we saw that the majority were correct arguments with incorrect answers, but we will provide specific numbers on these. Thank you again for the review, ideas, and for pointing out typos. We will incorporate this feedback into the camera-ready version. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. All my questions have been appropriately answered.
Summary: The paper offers new findings on interpreting the internal processes of language models. Specifically, the authors identify a particular mechanism that is similar to word2vec-style vector arithmetic. By decoding the next token after each attention layer and FFN layer, they examine the structure within the embedding space along the residual stream. They base their findings on three tasks: get_capital for factual knowledge recall, and uppercase/past_tense for changing word morphology. The experiments cover three pretrained language models of varying sizes: GPT-J, GPT2-Medium, and BLOOM. Interestingly, the authors find that the information flow along the residual stream can be decomposed into identifiable stages: argument preparation, function application, and saturation. Moreover, they also identify that a particular FFN is responsible for the function application, where the output vector of this FFN can be used independently in different contexts to replace the FFN. Strengths: This paper provides novel insights into the internal processing of large language models. The topic of making LLMs trustworthy through the interpretation of their internal processes is timely and should attract a large audience at NeurIPS. I particularly appreciate the creative design of the intervention experiments. The discovery of a word2vec-style arithmetic operation in the FFN output vector is intriguing. It suggests that certain FFNs produce context-invariant vectors that can serve as specific functions. Overall, the experiments appear to be well-conducted and the limitations are adequately discussed. Weaknesses: I appreciate the work being done in the field of interpretability, but I have some concerns that I hope the authors can address. Specifically, I'm curious about whether the interpretability analysis holds for various models that have been pre-trained or fine-tuned on different datasets.
While it’s possible to identify interesting patterns by studying information flow and neuron activation, I wonder how much of this is influenced by random initialization or dataset noise. Should our conclusions be conditioned on specific pre-training/finetuning datasets, architectures, and learning algorithms? In addition to these general concerns, I’m also interested in whether the conclusions of this paper can be extended to other tasks. The authors focused on world capitals, upper-casing, and past-tensing, but there are many other factual relations that could be investigated, such as `get_president_of`, `get_university_of`, and `get_son_of`. Why did the authors choose to focus only on `get_capital_of` instead of exploring these other relations? Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: Q1: What is the difference between word2vec and language models if they both implement vector arithmetic? What advantages do language models offer over word2vec, given that they appear to perform the same function but with more computational resources? Q2: Is it possible to quantify the sensitivity of the model choice to the findings presented in this paper? Q3: There is a minor typo on line 67 where the arrow should be above x, not over x_i+1. Q4: Is the term `process signature` an established terminology or was it coined by the authors? Q5: How does the pretraining objective impact the finding? Can we find similar things with BERT-like models? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations. I appreciate this a lot. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thorough review and questions. The question about the choice of relations is a good one, so we added results for six additional tasks in the rebuttal. We received similar concerns from the other reviewers and addressed those in the rebuttal and accompanying pdf. We find that there is a clear distinction between one-to-one and many-to-one/many-to-many relations, with the one-to-one relations exhibiting a spike in the argument reciprocal rank and the many-to-* relations not. It is possible there is also a relationship between the frequency of the relation in the pretraining data and the likelihood of this occurring, but without a full analysis of the pretraining data (which would be out of the scope of this paper) we cannot say whether or not this is true. Although we only study decoder-only models, each model family has a slightly different architecture, and we are able to observe the effect across model sizes from hundreds of millions of parameters to >100B. Each of the three families (GPT2, GPTJ, and BLOOM) was also trained on a different pretraining corpus. While we agree it would be interesting to know if this extends to different pretraining objectives and/or encoder-only/encoder-decoder models, we believe that deeply understanding the phenomenon without introducing too many extra variables can more efficiently demonstrate the point which we’ve shown does generalize across models. Thank you again for your review, ideas, and for pointing out typos. We address some of your more specific questions below: \>\>Q1: What is the difference between word2vec and language models if they both implement vector arithmetic? We did not mean to claim that language models use this strategy to solve every task, so we will make that clearer in the intro. There is work showing that language models are able to carry out much more complex behaviors (e.g., https://arxiv.org/abs/2211.00593).
The purpose of this paper is to show that this effect can occur within a language model and that we can detect it and edit the model according to the predictions we make about its behavior. We are interested in exploring this further. \>\>Q4: Is the term process signature an established terminology or was it coined by the authors? “Processing signature” is a general term often used in psychology and cognitive science (e.g., to refer to measurable outcomes such as eye tracking or reaction time which are indicative of the underlying processing that the human does but might be independent of the final behavior, such as the answer they give to a question). We are not aware of this term being used elsewhere in the NLP or mechanistic interpretability literature, but it's too general a term for us to claim to have coined this phrase. --- Rebuttal Comment 1.1: Comment: Thank you for your response. After reading your response and Reviewer fqRy's comment, I am not sure about the soundness of the work. I reduced my scores accordingly. First, it seems flawed to claim "language models implement word2vec like vector arithmetic" if we are only cherry-picking a few relations instead of having a systematic study over more relations. For example, fig 17 says "Non-injective tasks show no evidence of argument-function processing on average.", where we only have one example of this type of relation (`get_nationality_of`). Is this single example sufficient to justify the claim? I am not sure. Moreover, for the many-to-many relations, e.g. `get_animal_hypernym`, the example in Table 2 is using 'anaconda' as the subject. What will happen if you use other animals as the subject? What is the impact of different subjects? Regarding Tab 3, why do `get_animal_hypernym` and `get_Country_to_Language` have very different one-shot accuracies, despite the fact that they belong to the same task type? Second, I still think the analysis should be extended to at least MLM-pretrained encoder models.
The next-token-prediction objective used in decoder-only models inevitably promotes the answer tokens in later layers, which is no surprise. However, this is not true for encoder-only models. It remains unknown whether the described phenomenon will generalize to encoder-only models. Therefore I disagree with what the authors state in their rebuttal: > While we agree it would be interesting to know if this extends to different pretraining objectives and/or encoder-only/encoder-decoder models, we believe that deeply understanding the phenomenon without introducing too many extra variables can more efficiently demonstrate the point which we’ve shown does generalize across models. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback, but we believe there are a few misunderstandings about the results in the rebuttal document which may have affected the change in score. \>\>"Non-injective tasks show no evidence of argument-function processing on average.", where we only have one example of this type of relation (get_nationality_of). Is this single example sufficient to justify the claim? Non-injective refers to all three non-one-to-one relations that we add, not just one. We agree that a more systematic study gives a better perspective on the relations encoded this way, but in responding to the reviewers' feedback, we wanted to show that our results extend in predictable ways. Reviewer oNPw speculated that this relationship would not hold for non-injective relations, and we show that this indeed seems to be the case, while also showing that our hypothesis extends to new one-to-one tasks. If we extended this analysis to the entire dataset of BATS relations, would you consider this sufficient? Also with regards to: \>\> it seems flawed to claim "language models implement word2vec like vector arithmetic" if we are only cherry-picking a few relations instead of having a systematic study over more relations. What claim would you be comfortable accepting based on our results?
Static word embeddings do not encode a structured relationship for all relations, and yet it is generally accepted that vector arithmetic is a valid technique for exploring relations that are encoded this way. We show that language models employ this relationship for a class of relations, but we don't claim that all relations are solved this way. Is the claim in the title only true if a language model uses vector arithmetic to solve all relations? If this is just a matter of communication, then that is valid, and we are happy to discuss how to better scope the claims in the title to reflect the findings if there are concerns about that. \>\>Table 2 is using 'anaconda' as the subject. What will happen if you use other animals as the subject? What is the impact of different subjects? Could you explain what you mean? We show the results aggregated over the subjects in the BATS dataset; we are not just showing a single example, but we are not sure if that is what you are asking about. The results for many subjects are shown aggregated in Figure 17 and exhibit the same pattern. \>\> why get_animal_hypernym and get_Country_to_Language have very different one-shot accuracies, despite the fact that they belong to the same task type? As a human, do you find these tasks equally difficult? It is natural that a model shows better performance on some tasks than others, even if the relation family is the same, depending on the information available in pretraining. We don't find this to be a fair critique of the results we show. \>\>I still think the analysis should be extended to at least MLM pretrained encoder models… This is understandable. Whether this processing signature is present in both model architectures is useful information for the community.
But to confidently conclude whether this is present or not in encoder models would require extensive experimentation that could not all fit into the main paper, and we think this deserves its own study.
Summary: This paper proposes the conjecture that Transformer-based large language models also implement vector arithmetic (namely vector subtraction and addition) for word analogy tasks, similar to the well-known property of word embeddings. Experiments on three word analogy tasks support the conjecture. Such findings lead to a better understanding of Transformer-based models. Strengths: - Reasonable hypothesis about the implicit mechanisms of pretrained Transformers. - Comprehensive and fairly convincing experiments to verify the hypothesis. - Clear writing. Weaknesses: While it is well known that word embeddings exhibit the vector-addition mechanism for word analogy, the choice of such a mechanism is somewhat arbitrary. There is work, e.g., [this EMNLP 2019 paper](https://aclanthology.org/D19-1354.pdf), demonstrating that analogical relations can be represented by vector rotation too. This paper only investigates the mechanism of vector addition and does not compare with other potential ways to represent analogy---in fact, given the Transformers' residual architectures, I feel the vector arithmetic mechanism is better motivated for Transformers than for word embeddings. In addition, a few points (see questions) are not perfectly clear to me. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Have you thought about other mechanisms, e.g., vector rotation, to represent word analogy? - In the figures, e.g., Figure 2, how did you decode from the intermediate layer representations? Have you applied any transformation before comparing to the vocabulary? - (relevant to the question above) In Figure 2, it is surprising to see that the decoding output only starts to be "Poland" as late as layer 15, and all previous layers' outputs look like random tokens. Do you have any thoughts on the reason for this? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes, the authors have discussed the limitation of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. \>\>Have you thought about other mechanisms, e.g., vector rotation, to represent word analogy? We did not, but could you please explain why we would do this? As you mentioned, the residual structure of the transformer architecture naturally implements ‘vector arithmetic’ (hence the analogy we draw). To perform other operations, we would have to change the transformer architecture, but if you see a compelling reason to do this we would be interested to understand why. \>\>it's surprising to see the decoding output starts to be "Poland" as late as Layer 15, and all previous layers' outputs look like random tokens. It's a good question, because we don't know what the model is doing there. Our working hypothesis is that it is parsing the task and “moving information around” (i.e., attention is routing information onto the relevant tokens for later use) and thus is not yet making meaningful updates to the next-token prediction. But this is just our theory; we don't have good evidence for or against it, and it will require some careful experimental design to test. This is one of the areas we are most interested in pursuing in future work. --- Rebuttal Comment 1.1: Comment: Other mechanisms: This is not a particularly important point. My intuition is that although vector addition is intuitive, Ethayarajh (2019) has shown other mechanisms may also be able to explain the functions, and investigation of other mechanisms could make the paper stronger. The authors didn't answer my second question, and I assume early decoding didn't involve any transformation. It would be better if a clearer description of early decoding (in equations) could be presented. The authors seem not to have a thorough enough understanding of the early decoding algorithm, the key method used in this paper. There's not enough reason to expect reasonable output by decoding directly from early layer representations without any transformation.
After author response, I changed my rating to 4---the paper presents an intuitive and interesting viewpoint for understanding LLMs, but the contribution is somewhat limited, and the phenomena shown in the paper are not well understood. --- Reply to Comment 1.1.1: Comment: \>\> I assume early decoding didn't involve any transformation The final layer norm ($\text{LN}_f$) is applied before going through the unembedding matrix, as is typically done in the literature. We can include this in the camera-ready version of the paper. Here is the equation: $U\,\text{LN}_f(x)$, where $x$ is the hidden-state vector and $U$ is the unembedding matrix. \>\>”There's not enough reason to expect reasonable output by decoding directly from early layer representations without any transformation.” What do you mean by reasonable output? The results we provide suggest that we can use the outputs from early decoding to make predictions about model behavior. Intervention and ablation experiments based on these observations support the idea that the outputs can be interpreted as reflective of what the model is doing, and this is supported by previous work, which we cite throughout the paper ([Geva, et al. 2022](https://aclanthology.org/2022.emnlp-main.3/), [Dar, et al. 2022](https://arxiv.org/abs/2209.02535), [nostalgebraist, 2020](https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens)). Without a transformation, we cannot interpret the very earliest layers, but for the purposes of our results we don't need to, as we observe phenomena that happen close to the final layer. If we could not make reasonable predictions about the outputs of these intermediate layers, then it does not seem likely that our interventions and other experiments would work.
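The early-decoding ("logit lens") step described in this exchange — final layer norm followed by the unembedding projection, then reading off the top token — can be sketched as follows (toy, hand-picked numbers; the vocabulary, unembedding matrix, and hidden state are invented for illustration and are far smaller than a real model's):

```python
import math

def layer_norm(x, eps=1e-5):
    # Final layer norm LN_f with identity gain/bias.
    m = sum(x) / len(x)
    v = sum((xi - m) ** 2 for xi in x) / len(x)
    return [(xi - m) / math.sqrt(v + eps) for xi in x]

def early_decode(x, U, vocab):
    # logits = U · LN_f(x); return the top-ranked token.
    h = layer_norm(x)
    logits = [sum(u * hi for u, hi in zip(row, h)) for row in U]
    return vocab[max(range(len(logits)), key=logits.__getitem__)]

vocab = ["Paris", "Warsaw", "Tokyo"]
U = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # toy unembedding
print(early_decode([0.2, 1.5, -0.3], U, vocab))  # -> Warsaw
```

In practice $U$ is the model's own unembedding matrix and $x$ is a residual-stream vector taken after an intermediate layer, which is the reviewer's point: the projection is only meaningful in later layers.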
Summary: This paper investigates how a large language model (LLM) computes the vector representation of an output token. In particular, the authors focus on tasks in which the LLM is required to output a token that is related to an input token in a certain kind of relation (e.g. Country-Capital relation). The authors show that an LLM implements this computation using Word2Vec-like vector arithmetic where the vector corresponding to the relation is computed by the feed-forward network in a mid-to-late layer of the transformer. The authors also show that the computed vector is mostly context independent and can be applied to different input sequences. They also show that this mechanism is not observed when the required output token is found in the input sequence. Strengths: - The paper presents interesting findings that should help understand the inner workings of LLMs. - The methods used by the authors to investigate the mechanism seem to be technically sound and non-trivial. - The findings may be used to develop a better architecture for LLMs. Weaknesses: - The paper presents interesting findings, but does not really give explanations as to why they are what they seem to be. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Would it be possible to add possible explanations or hypotheses for the findings? Is it because they are presumably the most efficient way to accomplish the task? Minor comments: - Line 15: Intro -> Introduction? - Line 161: Warsaw. -> Warsaw.” ? - Line 241: suggest -> suggest that? - Line 306: LMs -> LMs’ ? - Line 328: Appendix A), -> Appendix A) ? - Lines 392 and 395: duplicated? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and for pointing out typos. \>\> Would it be possible to add possible explanations or hypotheses for the findings? We received similar feedback from other reviewers and addressed it in the rebuttal and pdf. To summarize, we found that this behavior does not extend to many-to-many or many-to-one tasks. We think this suggests that the behavior emerges only for one-to-one relations, where a clear structured relationship can be defined across input-output pairs. In the camera-ready version, we will expand on these new results and perform a more thorough error analysis of the positive results. --- Rebuttal Comment 1.1: Comment: Thank you for your response and update. I have also read the other reviewers' comments pointing out the potential problems with the paper. I still think the paper is a nice contribution to the community and keep my original score.
Rebuttal 1: Rebuttal: Thank you to the reviewers for thorough and thoughtful reviews. The main concern raised by all reviewers was when/if this behavior extends to other relations and how this explains the argument-function processing signature. We attempt to address these concerns for all reviewers below. First, we should clarify that we did not intend to claim that this mechanism can explain all of language models' behaviors. Rather, this is one particularly interesting pattern that is observable across different inputs and draws a possibly informative connection to earlier NLP models and analyses. We will update the intro to make it clearer that the mechanism is interesting within the contexts in which it arises, but that it is not meant as an overarching explanation for why LLMs work the way they do in all contexts. Second, we appreciate the reviewers' suggestions and questions about additional experiments and, in particular, about reporting negative results. In fact, we did not observe any negative results before submission, as these were the first and only three datasets we tried. We were excited to see the consistent positive results ourselves! But we agree with the reviewers that the chosen tasks occupy a narrow range of one-to-one relations that do not give the full picture of possible tasks. To address this, we are adding six tasks that provide some evidence that the observed behavior is likely specific to one-to-one relations and does not extend to many-to-many or many-to-one relations. We like reviewer oNPw's suggestion to investigate how the type of relation affects this behavior. Using relations from The Bigger Analogy Test Set (BATS), **we found that on non-injective relations, there is no evidence of the argument token spiking before switching to the answer token**, which precludes the model from using the vector arithmetic strategy.
On three additional injective tasks, we find that two mostly support the argument-answer spiking pattern shown in the original paper, and one shows mostly negative results (noun plurals). There is not room here, but we can discuss what's going on in the noun-plurals task in the revised paper. Overall, these results suggest that our findings extend to other one-to-one tasks, and perhaps only to these types of relations. We will perform the additional intervention experiments to check whether the vector arithmetic pattern holds for the camera-ready version. Although we cannot conclude why exactly this behavior forms within the scope of this paper, these additional findings allow us to expand our discussion of why negative results occur, as well as of the limits (e.g., error analysis) of the positive results. We think these topics are an exciting direction for future work. We appreciate the reviewers' comments for helping us improve the core claims of the paper. Pdf: /pdf/4407be7fee3bc6d3ee4560cbcf47fc06fa970f39.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper examines whether the residual representations in GPT models obey word2vec-style arithmetic. For three different (head, relation, tail) relations, the authors find evidence that the transformer: 1. Writes the head into the residual stream 2. Transforms the head into the tail, observed via a sudden "inflection" point in the reciprocal rank of the tail. 3. Saturates in its prediction of the tail. For those three relations, they find that (2) can be replaced by a simple vector addition, therefore indicating word2vec-style arithmetic. Strengths: * Paper is very well written and easy to follow. * The conclusion, if true, would be very interesting. * The study of internal mechanisms within LLMs is an important and meaningful problem. We are in dire need of simple, elegant, and useful models that interpret their internals. Weaknesses: Most of my concerns have to do with whether these findings generalize beyond the three tasks they are derived from. I'm generally a fan of simple, elegant models, but it feels like an $\vec{a} + \vec{b} = \vec{c}$ framework might be _too_ simple to hold up in practice. I'd love to be convinced otherwise! * I empathize with the fact that it's hard to come up with simple, clean evaluation tasks that GPT is proficient at, but I'd like to see more relations. It seems possible that you've just stumbled upon a very small class of relations for which, maybe, the affine operator implemented by the layers is simply a vector addition. * In the three tasks presented, the mappings are mostly bijective, with the exception that some countries have multiple capital cities (which I'm not sure is accounted for in evaluations). How does vector arithmetic work with one-to-many, many-to-many, and many-to-one relations? If the delta applied by the operator has no dependence on the input, then this won't hold up, right? * It would be great to include negative results as well, if you have them. 
There should be some analysis on what types of relations this vector addition model struggles with — and whether you can find any interesting patterns describing what works and what doesn't. * How is this $\vec{o}$ computed by the transformer, do you know? Is it by recalling knowledge in FFN weights? Copying information from other tokens? How do you know the best $\vec{o}$ comes from FFN and not attention? * I'd like to see more discussion of the implications of these results. The story is simple and enticing, but what does this tell us? What lessons are generalizable and transferrable to other areas of interpretability? Can you definitively say that the FFNs are recalling knowledge at the last token? How are other tokens involved? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Just to make sure I understand correctly: in Figure 4, $\vec{o}_{city}$ is only injected at layer 18, right? And not any other layers. * What is your interpretation of why, in Figure 5, the intervention works much better for uppercasing than for more complex relations. What explains the fact that MRR is 0.3-0.4 for capital cities and past tense, as opposed to 0.8-0.9? * In Section 5.2 where FFNs are ablated: * Did you try ablating the attention modules as well? I wonder if it'll also harm abstractive relations, since the FFNs need to recall information from a previous token, perhaps even to compute the $\vec{o}$ vector. * Did you ablate all FFNs or just when processing the final token? If the former, don't you have confounding with previous tokens? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. We received similar feedback from the reviewers about negative results and additional relations to test, so we address this in the rebuttal and accompanying pdf. We liked your idea to test tasks according to relation type and indeed find that many-to-one and many-to-many relations do not exhibit this behavior. We also show preliminary evidence of this behavior extending to 3 new one-to-one tasks, which we can expand on in the camera-ready paper. We appreciate your feedback and address a few of your specific questions below: \>\> How is this o computed by the transformer, do you know? Is it by recalling knowledge in FFN weights? Copying information from other tokens? The ablation results in Figure 7 seem to suggest that it comes from FFNs on these tasks; however, we cannot rule out the role of attention. \>\> How do you know the best o comes from FFN and not attention? Anecdotally, we tried using attention updates and the intervention did not work, but https://arxiv.org/abs/2304.14767 suggests that attention can play a role in extracting factual information from other tokens (see also: ROME, MEMIT). We also see that attention layers can be responsible for the function application. These mechanisms are not mutually exclusive, however, and we focus on studying cases in which we observe the FFN playing a role. \>\> In Figure 4, o is only injected at layer 18, right? We replace every FFN at and after layer 18 with o. We discuss this choice in Appendix D and show the effect that intervening on individual layers has on the reciprocal rank in Figure 15. --- Rebuttal Comment 1.1: Comment: Thank you for all the updates! I appreciate the expedient exploration of non-one-to-one relations, the addition of new bijective ones, as well as the general update from earlier today. I think most of my previous concerns had more to do with framing & conclusions than technical soundness.
Indeed, I think it's an interesting & noteworthy finding that the FFNs transforming an input token to an output token will sometimes learn a function that is (i) somewhat predictable in what it does, and (ii) consistent across a bunch of subjects, and (iii) surprisingly implemented as a vector addition. I'll adjust my scores accordingly. FYI I would still like to see more discussion of this: > I'd like to see more discussion of the implications of these results. The story is simple and enticing, but what does this tell us? What lessons are generalizable and transferrable to other areas of interpretability? Can you definitively say that the FFNs are recalling knowledge at the last token? How are other tokens involved? What are the future works you see coming out of this?
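The word2vec-style arithmetic under discussion — estimating a relation offset $\vec{o}$ from a few (head, tail) pairs and applying it to a hidden state by plain addition — can be illustrated with a toy numerical sketch. Everything below (dimensions, data, function names) is illustrative, not the paper's actual model, layers, or tasks:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy hidden-state dimension

# Toy representations for (head, tail) pairs of a one-to-one relation,
# constructed so that tail ~= head + o for a shared offset o.
o_true = rng.normal(size=d)
heads = rng.normal(size=(5, d))
tails = heads + o_true + 0.01 * rng.normal(size=(5, d))

# Estimate the relation vector o as the mean difference over known pairs,
# analogous to deriving o from observed FFN updates on a few examples.
o_hat = (tails - heads).mean(axis=0)

def intervene(h, o):
    """Replace the layer's transformation with a plain vector addition of o."""
    return h + o

# The intervention maps an unseen head close to its tail only because the
# relation really is (by construction) a fixed offset.
h_new = rng.normal(size=d)
t_new = h_new + o_true
err = np.linalg.norm(intervene(h_new, o_hat) - t_new)
print(err)  # small residual when the fixed-offset assumption holds
```

As the reviewer notes, this breaks for many-to-one or many-to-many relations: a single input-independent offset cannot map several heads to distinct tails.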
Efficient Diffusion Policies For Offline Reinforcement Learning
Accept (poster)
Summary: This paper focuses on the improvement of computation efficiency of Diffusion-QL by adopting the property of marginal distribution in the diffusion model and the variance control scheme proposed by DPM-Solver. Besides, this paper extends the scope of compatibility with other offline RL methods, from value-based to policy gradient methods. This is achieved by directly approximating the clean examples from corrupted examples at arbitrary diffusion time steps. Strengths: - This paper is well-written, and the proposed method is easy to understand. - The evaluation of value at risk seems interesting. - The technique seems sound, and the results seem strong. Weaknesses: - The discussion of limitations seems to be neglected. - A clearer discussion is needed to highlight the contribution of this paper and show the novelty. - The proposed algorithm seems to significantly increase the inference computation complexity by introducing 10000 iterations for each action. - More experiments are needed to support the contributions sufficiently. - Some typos need to be fixed through further proofreading. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See my detailed comments below. ==Major concerns== - In lines 233-235 of Section 4.5: according to Equation (9), the action approximation is the mean value of the action Gaussian distribution, so why can adopting $\hat{a}^0$ not reduce the high variance? - In Section 4.4, the improvement of sample efficiency (from $K=1000$ to $K\approx 15$) comes from DPM-Solver rather than the action approximation. Firstly, if the authors adopt action approximation, does $a^K$ come from a normal Gaussian distribution? If so, the noise prediction function $\epsilon_\theta(a^K, K; s)$ should predict the action based on the state and the Gaussian noise $a^K$. Intuitively, generating proper actions based on states seems more difficult, i.e., like the classical policy function.
- As for the log-likelihood, why can we not obtain the exact log-likelihood of $\pi_{\theta}(a|s)$ through $\log p(a^K)+\sum_{k=1}^{K}\log p_{\theta}(a^{k-1}\mid a^{k}, s)$? - Does the action approximation improve the computation efficiency of the diffusion policy? - I hope the authors explain how they use the action approximation during training. - During the sampling stage, the DPM-Solver speeds up generation rather than action approximation. If action approximation just works on Equation (10), I think it is very similar to the “Target Policy Smoothing Regularization” technique in Section 5.3 of the TD3 paper. I want to know why the authors use the predicted $\hat{a}^0$ rather than $a^0+\epsilon$. ==Minor concerns== - How do the authors avoid the effects of out-of-distribution actions on the Q function? - DPM-Solver directly makes me understand the sample efficiency of EDP, but I cannot understand why the training efficiency can also be improved, so I hope for more description of the reason. - In lines 318-320 of Section 5.1: during training of the diffusion model, we sample a step $t\sim [1, K]$, then train the diffusion model according to Equation (5). If $K$ is large, we can still train the diffusion model. So I hope the authors explain this claim more clearly. ==Typos== - Line 188: Eqn. 7 -> Eqn. (7). - Line 176: Eqn. 1 -> Eqn. (1). - Line 201: Eqn. 11 -> Eqn. (11). - Line 292: Eqn. 4 -> Eqn. (4). - Line 518 of Appendix: Eqn. 5 -> Eqn. (5). - Line 519 of Appendix: Eqn. 9 -> Eqn. (9). Similar typos can also be found in many places in the Appendix. - Line 565 of Appendix: Tab. ??. - Line 575 of Appendix: Tab. ??. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not discuss the limitations or broader impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
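The training recipe the review refers to — sample a step $t \sim [1, K]$, corrupt the clean action, and regress the model's noise prediction onto the true noise — is the standard DDPM objective. A minimal sketch with an illustrative linear noise schedule (not the paper's exact hyper-parameters); `eps_model` stands in for the noise-prediction network:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 1000
# Illustrative linear noise schedule (not the paper's exact values).
betas = np.linspace(1e-4, 2e-2, K)
alpha_bar = np.cumprod(1.0 - betas)

def ddpm_training_step(a0, eps_model):
    """One denoising training step: draw k ~ Uniform{1,...,K}, corrupt the
    clean action a^0 to a^k, and compute the noise-regression loss."""
    k = int(rng.integers(1, K + 1))
    eps = rng.normal(size=a0.shape)
    ak = np.sqrt(alpha_bar[k - 1]) * a0 + np.sqrt(1.0 - alpha_bar[k - 1]) * eps
    loss = float(np.mean((eps_model(ak, k) - eps) ** 2))
    return loss, k

# Dummy predictor that always outputs zero noise, so loss ~ E[eps^2] = 1.
a0 = rng.normal(size=4)
loss, k = ddpm_training_step(a0, lambda ak, k: np.zeros_like(ak))
print(loss, k)
```

Note this loss only ever queries the network once per step, regardless of K — which is why plain diffusion training scales to large K, in contrast to backpropagating through a full sampling chain.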
Rebuttal 1: Rebuttal: > Q1. In lines 233-235 of Section 4.5, why can adopting $\hat{a}^0$ not reduce the high variance? A1. Here is an intuitive explanation. Given an actual action $a$, the action approximation $\hat{a}^0$ represents the mean of the action Gaussian distribution that can be denoised from $a^k$. Therefore, it is related to a specific example drawn from a multi-modal distribution. $\hat{a}^0$ can be used to guide the optimization gradient for sample $a$, but cannot represent the whole data distribution. > Q2. The improvement of sample efficiency comes from DPM-Solver rather than action approximation. A2. Yes, the sample or inference efficiency solely comes from DPM-Solver. However, action approximation can greatly boost the training or policy improvement efficiency, as discussed in Sec. 4.2 and Sec. 4.4. This is because one needs to backpropagate through the whole sampling chain of an action to optimize the policy. Instead of sampling an action through multiple steps of denoising diffusion, action approximation just needs to forward the network once. > Q3. Does $a^K$ come from a normal Gaussian distribution? A3. Yes, when $k=K$, $a^k$ is drawn from a normal Gaussian distribution. However, $k$ is uniformly sampled from the discrete set $\{1,2,\dots,K\}$, which means $k=K$ occurs only in a small portion of samples. > Q4. As for the log-likelihood, why can we not obtain the exact log-likelihood of $\pi_{\theta}(a|s)$ through $\log p(a^K)+\sum_{k=1}^{K} \log p_{\theta}(a^{k-1}\mid a^{k}, s)$? A4. This is because $\pi_\theta$ is actually written in the following form, $$\pi_\theta(a|s) = \int_{a^K}\int_{a^{K-1}}\dots\int_{a^1} p(a^K|s)\, p_\theta(a^{K-1}|a^K, s) \dots p_\theta(a^0|a^1,s)\, da^K\, da^{K-1}\dots da^1$$ where the integrations are intractable. > Q5. Does the action approximation improve the computation efficiency of the diffusion policy? I hope the authors explain how they use the action approximation during training. A5. Sure, action approximation can boost the training efficiency greatly. 
Recall the policy optimization procedure in TD3. One needs to first draw a sample from the policy, then feed it into the Q-network to get a Q estimation. Then, the policy is optimized to maximize the Q estimation. This procedure requires backpropagation through the action sampling process. Without action approximation, it takes K queries of the diffusion network to sample an action, and thus K backpropagation steps. However, with action approximation, we just need to forward the denoising diffusion network once, which greatly saves GPU memory and optimization time. > Q6. I want to know why the authors use the predicted $\hat{a}^0$, rather than $a^0 + \epsilon$. A6. This is because the Q-network takes a clean example as input. Instead, $a^0 + \epsilon$ is a noisy one; feeding it into the Q-network cannot give a proper estimation of the corresponding Q value. > Q7. How do the authors avoid the effects of out-of-distribution actions on the Q function? A7. Apart from the RL objective, we also have a behavior cloning term (i.e., the diffusion objective), which restricts the policy to be similar to the behavior policy. > Q8. Why can the training efficiency also be improved? A8. Please refer to A5. More discussion on this problem is welcome. > Q9. Why can we not train Diffusion-QL when K is large, but can train a diffusion model? A9. The difference is that Diffusion-QL needs to sample an example and backpropagate through its sampling chain at training time, but normal diffusion models do not. When K is large, the sampling chain will involve thousands of network evaluations, resulting in a super huge computation graph. Based on our empirical experience, it is even computationally intractable to compile such a big graph with JAX. Feel free to try with our code by setting K=1000 and disabling action approximation. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses that clarified my questions. 
After reviewing their explanations, this paper possesses the necessary qualities for acceptance. --- Reply to Comment 1.1.1: Comment: Thank you so much for your positive feedback and for acknowledging the merit of our work. We truly appreciate the time and effort you have dedicated to comprehensively reviewing our manuscript. As for all the typos you raised, we will definitely fix all of them in a future version. We would be extremely grateful if you could kindly update your rating.
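The one-forward-pass action approximation discussed in this thread follows the standard DDPM identity $\hat{a}^0 = \big(a^k - \sqrt{1-\bar\alpha_k}\,\epsilon_\theta(a^k, k; s)\big)/\sqrt{\bar\alpha_k}$. A minimal sketch with an illustrative noise schedule — the schedule values and function names are assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 1000
# Illustrative linear noise schedule (not the paper's exact values).
betas = np.linspace(1e-4, 2e-2, K)
alpha_bar = np.cumprod(1.0 - betas)

def approx_action(ak, k, eps_pred):
    """One-shot estimate of the clean action a^0 from a noisy a^k,
    replacing the K-step denoising chain with a single evaluation."""
    return (ak - np.sqrt(1.0 - alpha_bar[k - 1]) * eps_pred) / np.sqrt(alpha_bar[k - 1])

# Sanity check: with the true noise plugged in, the identity inverts the
# forward corruption exactly (up to floating-point error).
a0 = rng.normal(size=4)
k = 500
eps = rng.normal(size=4)
ak = np.sqrt(alpha_bar[k - 1]) * a0 + np.sqrt(1.0 - alpha_bar[k - 1]) * eps
a0_hat = approx_action(ak, k, eps)
print(np.max(np.abs(a0_hat - a0)))
```

In training, `eps_pred` would be the network's noise prediction, so $\hat{a}^0$ costs one forward pass and one backward pass no matter how large K is — the efficiency point made in A2/A5 above.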
Summary: The authors propose a method to efficiently train diffusion based policies in the offline-RL setting. The authors suggest three main tricks to enable this: 1) Removing the need to backpropagate through the diffusion sampling chain to update the policy by using what the authors call action approximation; 2) Replacement of intractable policy log likelihoods in reinforcement learning objectives with the ELBO, which allows using diffusion policies generally with many popular offline RL algorithms; 3) Using a fast DPM-Solver for action sampling during policy execution. Altogether, these tricks allow for relatively fast training of expressive multimodal policies which achieve state of the art results across the D4RL benchmark suite. Strengths: * The paper is generally well written and easy to follow. * The use of action approximation facilitates training diffusion policy models with much larger diffusion noising time K, with the authors using K=1000. The RL community has recently taken great interest in the prospect of using the expressiveness of diffusion models for policies, and this simple trick seems to make diffusion policy training practical without any noticeable drawbacks. * The authors also demonstrate that we can use diffusion policies with other common offline RL algorithms such as TD3+BC, CRR and IQL by simply replacing the log likelihood with the diffusion ELBO. This is a valuable contribution to the field, as it could open the door to use diffusion policies to work with many general RL algorithms, a strict improvement over simple gaussian policies. * The authors show strong empirical performance across the D4RL benchmark suite. EDP policies are especially strong compared to the FF counterparts on the more challenging antmaze and kitchen suite, which involve high multimodality due to undirected demonstrations. 
Weaknesses: * The main proposal to efficiently train diffusion policies hinges on action approximation, which uses a much higher variance estimate of $\hat{a}^0$. There are no ablations to show how this affects the final diffusion policy as compared to one trained without action approximation. There is a comparison against Diffusion-QL in the paper, but with a very large difference in diffusion timesteps K, so the effect of action approximation itself is difficult to gauge. I have listed some questions related to this point below, which if clarified would be helpful. * The use of DPM-Solver is highlighted as a major contribution in speeding up the sampling process compared to DDPM sampling in Diffusion-QL. It is a well-known fact that faster diffusion sampling methods than DDPM exist which can be used for sampling, and Diffusion-QL could have used this as well. While it is useful that the authors show the time savings of using DPM-Solver over DDPM sampling, this is not a novel contribution. * There isn’t discussion about important hyperparameters related to the method. A very important hyperparameter that could be discussed more is the diffusion timesteps K (I have noted this as a question below as well). Are EDPs more brittle to train than their FF counterparts in algorithms like IQL with regards to changing hyperparameters? Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Generally, diffusion models generate higher quality samples with larger diffusion timesteps K, and the authors note in Tab. (1) that training EDP with K=1000 matched or beat Diffusion-QL, which was trained with much smaller K~5 to 100. The authors cite this ability to train diffusion models with higher K without having to backprop through the chain as one of the main advantages of EDP. I am curious how much of an impact K actually has in these offline RL benchmark tasks, given that Diffusion-QL has similar scores. 
I am interested to see an ablation study (at least in some environments) where EDP is trained with different values of K starting at K=1, to see if generally increasing K actually significantly improves policies in these domains. * Energy-based Action Selection (EAS) seems necessary to come close to the performance of DQL using the proposed EDP policies. The appendix shows that other ways to sample actions from the diffusion model result in much worse performance. DQL does not, to my knowledge, require EAS to produce good action samples from its diffusion policy. Is this due to the effect of action approximation resulting in training a worse diffusion model? * As an extension to the above point, have the authors considered using sampling guidance techniques like Classifier-Free Guidance (CFG) to reduce the variance of the final policy? * The authors allow for training diffusion policies with popular offline RL algorithms which require policy log likelihoods by replacing the likelihood with the ELBO. Could this be done with lower bounds of simpler deep generative models, such as the ELBO of a Variational Autoencoder? I am not aware of prior work that has done this. It could be that diffusion policies are overkill for the expressiveness needed to learn high value policies in the tasks under consideration, in which case these same tricks could be applied to train simpler VAE policies, for example. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors have not addressed technical limitations of their work. They have briefly addressed the societal impact of better reinforcement learning methods in the appendix. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. The effect of action approximation A1. We compare with and without action approximation on the following three environments using the OMS metric (Table 4). In the following table, the DDPM column forwards and backwards a policy network 100 times at training time, while action approximation needs to do so only once. We can observe that action approximation slightly harms the performance when the same number of diffusion steps is used. However, it supports training diffusion policies with larger K (e.g. 1000), while Diffusion-QL does not. Increasing K avoids the performance drop, as evidenced by the last column of Table 1 and Table 4. | OMS K=100 | DDPM | Action Approx | |:-------------------------:|:-----:|:-------------:| | walker2d-medium-v2 | 86.9 | 85.5 | | walker2d-medium-replay-v2 | 96.3 | 93.3 | | walker2d-medium-expert-v2 | 111.5 | 111.1 | > Q2. Using DPM-Solver is not a novel contribution. A2. We totally agree that simply replacing the DDPM sampler with DPM-Solver is not a novel contribution. We will modify the statement accordingly to highlight that DPM-Solver is able to improve efficiency but was not developed by us. A diffusion policy that can be efficiently trained and a general diffusion policy compatible with likelihood-based methods are the main contributions. > Q3. Are EDPs more brittle to train than their FF counterparts in algorithms like IQL with regards to changing hyperparameters? A3. In our experiments, we do not tune the IQL parameters when it is combined with EDP. For the hyper-parameters of the diffusion part, we just adopt the ones used in TD3+EDP. Therefore, we believe that EDPs are quite stable to train with IQL. > Q4. If generally increasing K actually significantly improves policies in these domains. A4. As discussed in Q1, when K is small, action approximation generally performs worse than its full-chain counterpart. Moreover, we analyze the effect of K with EDP on three gym-locomotion tasks. 
We can conclude that increasing K improves the performance steadily. EDP often fails when K is smaller than 10, but becomes steady when K is larger than 50. | EDP K | 1 | 2 | 3 | 5 | 10 | 50 | 100 | |:-------------------------:|:----:|:----:|:----:|:-----:|:-----:|:------:|:------:| | walker2d-medium-v2 | 1.08 | 4.65 | 5.78 | 10.05 | 81.22 | 84.08 | 85.50 | | walker2d-medium-replay-v2 | 0.45 | 1.76 | 3.10 | 4.01 | 18.89 | 89.18 | 93.03 | | walker2d-medium-expert-v2 | 0.02 | 2.00 | 4.73 | 8.34 | 80.92 | 109.90 | 111.00 | > Q5. Is the usage of EAS due to the effect of action approximation resulting in training a worse diffusion model? A5. This is a really great question. Actually, Diffusion-QL suffers from the same high-variance problem, and their code involves a technique similar to EAS, which is not discussed in their paper. Please refer to their official `diffusion-rl` repo belonging to the organization `Twitter` for details. The code snippet is located at lines 169-176 of the file `agents/ql_diffusion.py`. > Q6. Using Classifier-free guidance (CFG) to reduce the variance of the final policy. A6. That is a really great suggestion. Unfortunately, we did not try this before, mainly for the following reasons. First, it is hard to decide what conditioning information to use; probably we could use the normalized return as a condition. Second, we would need to retrain a diffusion policy such that it can take a conditioning signal or a null signal as input. Instead, EAS can serve as a plug-in component for evaluation without modifying the training procedure. > Q7. Can simpler deep generative models be used as policies? A7. Sure, the answer is yes. BCQ [R1] has successfully trained a conditional VAE as the policy. Diffusion-QL also studies different forms of generative models as policies and compares them with diffusion models. The conclusion is that diffusion policies show superior ability in capturing data with strong multi-modalities. 
So, when the data distribution is not so complicated, simpler generative models are definitely a promising choice. [R1] Fujimoto, Scott, David Meger, and Doina Precup. "Off-policy deep reinforcement learning without exploration." International conference on machine learning. PMLR, 2019. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response to my questions. I will maintain my score recommending acceptance of the paper. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: We appreciate your detailed review and suggestions. We will incorporate the comments and involve new results into the final revision. Thank you again!
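The energy-based action selection discussed in A5 above is commonly implemented by drawing several candidate actions from the policy and picking among them with probabilities proportional to $\exp(Q)$. A hedged sketch of that general idea — not the paper's or Diffusion-QL's exact procedure, and all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_based_selection(candidates, q_values, temperature=1.0):
    """Pick one of N candidate actions with probability proportional to
    exp(Q / temperature), so higher-value candidates are favored."""
    logits = np.asarray(q_values, dtype=float) / temperature
    probs = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs /= probs.sum()
    idx = rng.choice(len(candidates), p=probs)
    return candidates[idx]

# Toy usage: 4 candidate actions, one with a clearly dominant Q value.
candidates = np.arange(4)
q = np.array([0.0, 0.1, 0.2, 10.0])
picks = [energy_based_selection(candidates, q) for _ in range(100)]
print(np.bincount(picks, minlength=4))  # candidate 3 dominates
```

Because the selection happens only at evaluation time, it plugs into an already-trained policy without changing the training procedure, which matches the "plug-in component" framing in A6.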
Summary: The paper proposes to learn a policy for several offline RL tasks in the D4RL benchmark by parameterizing a policy with a diffusion model. The authors claim that their approach is computationally efficient and more compatible with several other RL approaches, like maximum-likelihood-based approaches, when compared to Diffusion-QL, which is the primary baseline in their experimental studies. Strengths: 1) The paper is well written and easy to read 2) The authors give a clear and concise preface to the method and results in the introduction part of the paper which sets the flow for the rest of the paper. 3) The authors compare their approach with the Diffusion-QL method which introduced the idea of formulating policies as a diffusion model. They explain the issues with the Diffusion-QL approach and try to overcome those shortcomings through EDP 4) Their approach is compatible with max-likelihood based approaches such as IQL (which is one of the SoTA in Offline RL) 5) They run extensive experiments and ablations on the D4RL benchmark and the results reported are consistently better than Diffusion-QL Weaknesses: 1) Line 206: The authors claim that both policy approximations are empirically similar when calculating the objective, though there is no evidence provided for the same. Would be good to know their intuition on that as well. 2) There is no discussion of why EDP + IQL works significantly better in some tasks like kitchen and adroit, whereas it under-performs in the locomotion tasks (where EDP + TD3 performs better). Would be good to know why one works better than another in these cases. Is there any investigation done in that direction? 3) There is barely any discussion of the limitations of the approach. 4) Would the approach be robust to multimodal data? There is a mention of why that would be an issue with Gaussian policies, but there are no details on that in the latter parts of the paper. 
5) Would the method work on more complex offline-RL tasks like CARLA? (Why was that excluded from the investigation? I understand that Diffusion-QL does not test their method on this task, but was it ever tried out?) Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to some of the questions mentioned in the weakness section; Minor comments: 1) Is speed the only reason for using the DPM-Solver? Were there other alternatives considered? 2) Please rephrase line 274 (I do not like phrasing the evaluation protocol of another peer-reviewed paper as cheating) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I do not see any discussion of the limitations of the approach or other assumptions made. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. Eqn. (12) and Eqn. (13) are empirically similar to each other. No evidence. A1. We ran experiments with Eqn. (13) on three environments with TD3 as the base algorithm. The two approximations are compared in the following table. We can observe that they indeed perform similarly. | Base algorithm: TD3 | Eqn. (12) | Eqn. (13) | |:-------------------------:|:---------:|:---------:| | walker2d-medium-v2 | 86.9 | 86.9 | | walker2d-medium-replay-v2 | 94.8 | 94.9 | | walker2d-medium-expert-v2 | 110.3 | 110.2 | > Q2. Why does EDP-IQL outperform in kitchen, adroit and antmaze, but underperform in locomotion? A2. Thank you for raising this valuable question. EDP-IQL actually shares the same observation as IQL. But why does this phenomenon happen to IQL? Our hypothesis is that, compared to locomotion tasks, the other three types of tasks have sparser rewards. As a result, the out-of-distribution issue is amplified in these tasks. IQL performs in-distribution value estimation without querying any out-of-distribution actions, making it less prone to overestimation in such sparse tasks, resulting in better performance. > Q3. Barely any discussion of the limitations of the approach. A3. Thank you very much for pointing out this problem. We will add the following discussion in the revision. Though EDP is a much more efficient class of diffusion policies, it is still inefficient compared to a feedforward policy network. For example, the training time on walker2d for a feedforward policy network is just 2 hours, while EDP takes 5 hours. Moreover, the implementation of diffusion policies is much more complicated than that of their feedforward counterparts. > Q4. Would the approach be robust to multimodal data? A4. Sure, diffusion models/policies are superior in modeling multimodal data. We do not provide a detailed experimental analysis of this, as Diffusion-QL has already conducted a comprehensive study. 
If you are interested, please refer to Figure 1 of Diffusion-QL for more details. > Q5. Would the method work on more complex offline-RL tasks like CARLA? A5. Sorry, we just tested our methods following conventions. Extending diffusion policies to visual/pixel-based environments is itself a research topic. Thank you for the valuable suggestion! We agree that CARLA is definitely an interesting domain for testing the capability of EDP on more complex tasks. However, in this paper, we only focus on prototyping a general diffusion policy for offline RL; handling the vision inputs of CARLA and the imbalanced distribution in self-driving scenarios is out of the scope of this paper. We will leave it for future study. > Q6. Is speed the only reason for using the DPM-Solver? Were there other alternatives considered? A6. Yes, that is the only reason. We actually tried DDIM; it performs similarly to DPM-Solver, but is a bit slower. Therefore, we only report DPM-Solver in our paper. > Q7. Please rephrase line 274 (I do not like phrasing the evaluation protocol of another peer-reviewed paper as cheating) A7. Definitely, we will remove the “cheating” statement and focus on the difference between evaluation protocols only. --- Rebuttal Comment 1.1: Comment: Thank you for the answers to my questions/concerns. I believe most of my questions were answered comprehensively. I hope you make the necessary changes as you have mentioned in the above rebuttal. I will maintain my score of acceptance with these changes being included. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: We appreciate your detailed comments and suggestions. We will polish our paper further and incorporate the new changes into the final revision. Thank you again!
Summary: The focus of this paper is to enhance the diffusion policies introduced in Diffusion-QL for offline reinforcement learning. The authors address the challenges of training and sampling efficiency by incorporating action approximation and employing an advanced ODE solver for diffusion policies. They conducted extensive experiments to demonstrate the effectiveness of their proposed approach, known as Efficient Diffusion Policy (EDP). Strengths: 1. This paper is well structured and easy to follow. 2. The conducted experiments are of sufficient quantity and quality. Weaknesses: > I think some claims need to be more careful or accurate. “… reduce the diffusion policy training time from 5 days …” “However, it takes tens to hundreds more time to train a diffusion policy than a diagonal Gaussian one” I tried Diffusion-QL before and I don’t think it takes about 5 days for training in their default setting (K=5). In my case, it is only about 20 hours, so I am not sure what setting is inferred here to be 5 days. Moreover, “In comparison, our training scheme only passes through the network once an iteration, no matter how big K is.” This statement is incorrect. According to Algorithm 1 and the description in the paper, “To improve the efficiency of policy evaluation, we propose to replace the DDPM sampling in Eqn. (4) with DPM-Solver”, the policy evaluation part still needs the action samples from the full reverse process, which takes a number of steps even with DPM-Solver. > The novelty is somewhat limited. In short, EDP tries two things: 1. replace the real action samples generated by the full reverse process in the policy improvement step with a one-shot action prediction. 2. replace the original DDPM solver with DPM-Solver. > Energy-based Action Selection (EAS) is not new and has been studied by other works, such as “Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling (ICLR 2023)”. 
> Following diffusion model conventions, I use $x$ to represent $a$ here. This paper uses $\hat{x}_{0,t}$ to replace $x_0$ in the policy improvement step. Note that using $\hat{x}_{0,t}$ to replace $x_0$ is not a common thing in diffusion models, since $\hat{x}_{0,t}$ has a different meaning from $x_0$. $\hat{x}_{0,t}$ usually represents the mean of all $\hat{x}_0$ that could go back to the true $x_0$ distribution from time step $t$. Hence, $\hat{x}_{0,t}$ can be noisy and heavily averaged when $t$ is large. I am not sure why $\hat{x}_{0,t}$ can replace $x_0$ in Eq (7) here. If so, why can this procedure not be applied to Eq (6) as well? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. About the training time of Diffusion-QL. A1. Thank you for bringing this valuable problem into the discussion. We noticed that in Diffusion-QL’s official code repo, the default number of diffusion steps is 100 (K=100). Please refer to their official `diffusion-rl` repo belonging to the organization `Twitter` for details. The hyper-parameter is located at line 187 of the file `run_offline.py`. Therefore, to make sure we can reproduce their results, we stick to this setting throughout the experiments. It indeed takes 5 days to finish one experiment. > Q2. Incorrect statement about “our training scheme only passes through the network once an iteration, no matter how big K is”. A2. Sorry for the confusion caused. In lines 218-220, we are discussing the policy evaluation efficiency; there we say that DPM-Solver reduces the number of steps to 15. However, in lines 221-225 (where the quoted sentence appears), we are talking about the training efficiency. For training, EDP just needs to forward and backward through the network once per iteration, thanks to the proposed action approximation technique. Please see Sec. 4.2 for more details. > Q3. Limited novelty; EDP tried two things: 1) action approximation, 2) DPM-Solver. A3. Thank you for opening this discussion on the novelty. We do agree that our proposed methods are quite simple. We’d like to emphasize that the contribution/novelty of our paper is two-fold: 1) a class of diffusion policies with superior training efficiency, which relies on two techniques, **i.e.**, action approximation and DPM-Solver; 2) a more general class of diffusion policies that is compatible with both DDPG-style and likelihood-based RL methods (see Sec. 4.3). > Q4. EAS has been studied by “Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling (ICLR 2023)”. A4. We thank the reviewer for reminding us of this important work. We will discuss this paper in a future version.
We’d like to emphasize that our main contribution is proposing an efficient and general diffusion policy, as tested with three different RL algorithms. EAS is a minor contribution for reducing the variance of the learned policy. We agree that the formulation of EAS is not novel, as similar forms have been explored for policy improvement in MPO [R1,R2]. However, EAS is developed for a very specific and different purpose. Moreover, we found that EAS differs from SfBC in the following aspects: - Motivation: EAS is developed to reduce the variance of the learned diffusion policy. SfBC aims to model a diverse policy. - Usage time: EAS is used at evaluation time only. SfBC is used during training. - Role: EAS is for policy execution. SfBC is for value estimation. [R1] Abdolmaleki, Abbas, et al. "Relative entropy regularized policy iteration." arXiv preprint arXiv:1812.02256 (2018). [R2] Abdolmaleki, Abbas, et al. "Maximum a posteriori policy optimisation." arXiv preprint arXiv:1806.06920 (2018). > Q5. Why $\hat{x}_{0,t}$ can be used in policy improvement, but not policy evaluation. A5. The intuitive explanation is that the action plays a different role in these two steps. Policy evaluation aims to estimate the Q-value (cumulative reward) of a given action starting from a state. Therefore, a correctly paired action-Q data point is important. However, in policy improvement, we have two targets: 1) behavior cloning via the diffusion objective, 2) Q guidance via the RL objective (Eq. (7)). The RL objective is used to guide the policy optimization towards actions with high return. Though $\hat{x}_{0,t}$ is not the precise action, it can still provide valuable guidance for policy optimization. Moreover, we can draw connections to classifier guidance in diffusion models. The Q-network here is a bit like the classifier used in diffusion models. Instead of providing guidance on noisy intermediate steps at inference time, the Q-network and action approximation give guidance at training time.
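To make the role of $\hat{x}_{0,t}$ concrete, below is a minimal NumPy sketch of the standard one-step DDPM posterior-mean prediction that action approximation relies on. This is our own illustrative mock-up with hypothetical names and a toy noise schedule, not the paper's actual code.

```python
import numpy as np

def predict_x0(x_t, t, eps_model, alpha_bar):
    """One-shot estimate of the clean action x0 from its noised version x_t.

    Inverts the forward process
        x_t = sqrt(alpha_bar[t]) * x0 + sqrt(1 - alpha_bar[t]) * eps
    using the predicted noise, instead of running the full K-step
    reverse chain.
    """
    eps_hat = eps_model(x_t, t)
    return (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alpha_bar[t])

# Toy check with an oracle noise predictor: the inversion is exact.
alpha_bar = np.array([0.99, 0.5])            # hypothetical schedule
x0 = np.array([0.5, -0.2])                   # "clean" action
eps = np.array([0.1, 0.3])                   # noise added at step t
t = 1
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
x0_hat = predict_x0(x_t, t, lambda x, step: eps, alpha_bar)
```

With an imperfect learned $\epsilon_\theta$, $\hat{x}_{0,t}$ is only an approximation of the true action, which is exactly the point debated in this thread.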
--- Rebuttal Comment 1.1: Comment: I'm disappointed by the authors' feedback. - The Diffusion-QL paper repeatedly states its use of K=5 in experiments, e.g., “We found N = 5 performs well on D4RL (Fu et al., 2020) datasets, which is also a small enough value for cost-effective training and deployment.”, and “In the following D4RL tasks, we set a moderate value, N = 5, to balance the performance and computational cost.”. Additionally, the Twitter-associated Diffusion-QL repository doesn't label itself as "official." You specifically mentioned the Twitter-associated repo, so you should know they provided another official repo. It's misleading to overstate improvements by selecting hyperparameters for baseline methods. Claims in this paper, especially on efficiency, should be factual. - The response does not make sense to me. In Algorithm 1, there is one section saying, “Sample next actions with DPM-Solver”; the action samples here need a full reverse process. The statement “our training scheme only passes through the network once an iteration, no matter how big K is” is incorrect. - The authors admit their proposed methods are simple. Actually, there is nothing specifically new in the design of the diffusion policy. The authors mainly propose to use $\hat{a}_{0,t}$ to replace the actual action samples. The likelihood-based extension is not real likelihood estimation. - You know other works have studied EAS, but I didn’t see any references in Section 4.5. - The intuitive explanation does not convince me. You suggest that policy improvement does not need precise action samples. The observation is intriguing but seems counterfactual. In light of these points, my concerns remain. While I recognize the empirical performance, it's vital for claims to be accurate. I'm eager to hear further from the authors.
--- Reply to Comment 1.1.1: Title: Official Comment by Authors [1/2] Comment: Dear Reviewer TjdE: Thank you very much for your effort in reviewing our paper and your valuable comments. We now address your concerns below. > R2-Q1. You specifically mentioned Twitter-associated repo so you should know they provided another official repo. It's misleading to overstate improvements by selecting hyperparameters for baseline methods. We’d like to clarify that we originally intended to paste a URL pointing to the code snippet here. However, as external links are strictly prohibited, we broke the URL down into indicators like organization, repo and file. When we started this project, the only available codebase was the one under the Twitter organization (the other one was not yet publicly available). We apologize that we did not notice the newly released one has different default parameters. In addition, we’d like to clarify that in our experiments, we always emphasize **the configuration of our reimplemented baseline methods** and our improvement over that baseline. Moreover, we’d like to emphasize that action approximation brings the following intriguing benefits, which should not be neglected: - It enables training with large diffusion steps (1000 steps in EDP versus 5 steps in Diffusion-QL). - Large diffusion steps give a significant performance boost on kitchen, adroit and antmaze. - Large diffusion steps allow us to use one set of hyperparameters for environments from the same domain. We will add further clarifications about the default configurations of Diffusion-QL and the main benefits of using action approximation in our revisions to avoid confusion. > R2-Q2. Incorrect statement about “our training scheme only passes through the network once an iteration, no matter how big K is”. In terms of this statement, we respectfully disagree. We refer the reviewer to section 4.4, lines 217 to 227, where a detailed explanation of the efficiency is provided.
A typical actor-critic RL algorithm alternates between policy evaluation and policy improvement, both of which affect the training efficiency. Specifically, *policy evaluation* is used to estimate the state or state-action values for the current policy, *i.e.*, Eqn (6). *Policy improvement* instead focuses on improving the current policy based on the value estimation, as shown in Eqn (7). In section 4.4, we specifically mention that EDP can only reduce the number of time steps to 15 for policy evaluation (lines 218-220). But for policy improvement, EDP does not need backpropagation through the whole sampling chain and only needs to forward and backward the network once per iteration. Please note that our claim is for policy improvement, as explained in lines 220-226. > R2-Q3. “Actually, there is nothing specific new in the design of diffusion policy.” “The likelihood-based extension is not real likelihood estimation.” We’d like to emphasize that our training strategy enables training diffusion policies with much larger steps, *e.g.*, 1000, which brings clear performance gains on the adroit and kitchen environments. Moreover, we are not aiming at estimating the likelihood. Our target is making diffusion policies compatible with likelihood-based RL methods, so that we can approximately maximize the policy likelihood when a diffusion policy is trained with such algorithms. As a result, EDP works nicely with IQL and CRR, which further boosts the performance on antmaze, adroit and kitchen. We argue that the importance of our contributions should not be disregarded. > R2-Q4. “You know other works have studied EAS, but I didn’t see any references in Section 4.5.” We thank the reviewer for bringing up this discussion again. We apologize for the confusion caused. Nevertheless, we respectfully disagree with the reviewer, and would like to further clarify this point.
The Energy-based Action Selection (EAS) involves two distinct steps: 1) sampling $K$ actions from $\pi_\theta(a\mid s)$ and 2) selecting the action with the highest $e^{Q(s, a)}$. In fact, the exact form of EAS has been less discussed in the literature, and people use it for different purposes. In our case, we use it to reduce the variance of a diffusion policy. Mathematically, such a sampling scheme can be treated as sampling from a non-parametric distribution $p'(a\mid s)\propto e^{Q(s, a)}\pi_\theta(a\mid s)$. Here, we refer to MPO **only to provide an intuitive explanation of why EAS improves the final performance**, because $p'(a\mid s)$ can be considered a one-step improved policy given the **current** policy $\pi_\theta(a\mid s)$ and the **estimated** Q values $Q(s, a)$. We **do not suggest that MPO uses EAS for policy improvement**. Specifically, MPO performs policy improvement by minimizing the difference between the current learning policy $\pi_\theta(a\mid s)$ and the non-parametric one-step improved policy $e^{Q(s, a)}\pi(a\mid s)$. Such a scheme is completely different from EAS. We will further clarify this and improve the presentation in our revisions. --- Reply to Comment 1.1.2: Title: Official Comment by Authors [2/2] Comment: > R2-Q5. “The intuitive explanation does not convince me. You suggest that policy improvement does not need precise action samples. The observation is intriguing but seems counterfactual.” We’d like to emphasize that our intuitive explanation in A5 of the rebuttal is actually two-fold. We first explain why action approximation cannot be used for policy evaluation, then explain why it can be used for policy improvement.
We did not claim “policy improvement does not need precise action samples”; our point is: compared to policy evaluation, “Though $\hat{x}_{0,t}$ is not the precise action, it can still provide valuable guidance for policy optimization.” There is some empirical evidence for this from the field of learning from demonstration [R3-R5]. These papers show that when some imperfect demonstration data are added for RL optimization, the performance can be greatly boosted. Moreover, we also draw a connection between our method and classifier guidance, which we believe helps to better understand why action approximation works. In the end, we do admit that this intuitive explanation is based on our understanding of RL algorithms, learning from demonstration, offline RL and diffusion models. We do not have theoretical verification for it, only empirical support that action approximation does not hurt performance much, and that it enables training diffusion models with much larger diffusion steps. [R3] Gao, Yang, et al. "Reinforcement learning from imperfect demonstrations." arXiv preprint arXiv:1802.05313 (2018). [R4] Jing, Mingxuan, et al. "Reinforcement learning from imperfect demonstrations under soft expert guidance." Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 04. 2020. [R5] Kang, Bingyi, Zequn Jie, and Jiashi Feng. "Policy optimization with demonstrations." International conference on machine learning. PMLR, 2018. If you have further questions, please do not hesitate to let us know. Authors of Paper 6867
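For concreteness, the two-step EAS procedure described in R2-Q4 above (sample $K$ candidate actions from the policy, then execute the one with the highest $e^{Q(s,a)}$) can be sketched as follows. This is an illustrative NumPy mock-up with hypothetical function names, not the authors' implementation; note that since $e^{Q}$ is monotone in $Q$, selecting by energy is the same as selecting by Q-value.

```python
import numpy as np

def eas_select(state, sample_action, q_value, num_candidates=10):
    """Energy-based Action Selection: draw candidate actions from the
    policy and return the one with the highest Q-value
    (argmax of e^Q is the argmax of Q)."""
    candidates = [sample_action(state) for _ in range(num_candidates)]
    scores = np.array([q_value(state, a) for a in candidates])
    return candidates[int(np.argmax(scores))]

# Toy check: with a Q-function preferring small-magnitude actions,
# EAS picks a low-variance action among the sampled candidates.
rng = np.random.default_rng(0)
best = eas_select(
    state=None,
    sample_action=lambda s: rng.normal(size=2),
    q_value=lambda s, a: -np.sum(a**2),
)
```

Sampling this way can be read as drawing from the reweighted distribution $p'(a\mid s)\propto e^{Q(s,a)}\pi_\theta(a\mid s)$ discussed above, which is why it reduces the variance of the executed policy.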
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for recognizing the novelty and contributions of our work, as well as for providing valuable questions for discussion and constructive suggestions. As there are few questions shared across reviewers, we address them individually in the corresponding responses. Specifically, as requested by Reviewer 44Ux and Reviewer SXfY, we added additional experiments to which we'd like to draw your attention, including - ablating the policy approximation by comparing Eqn. 12 and Eqn. 13 in likelihood-based policy optimization (requested by Reviewer 44Ux). - ablating the effect of action approximation (requested by Reviewer SXfY). - ablating the effect of the diffusion timesteps K (requested by Reviewer SXfY). We have directly appended the updated tables to our responses for the respective reviewers. We value your feedback; please kindly let us know if you have any further questions or concerns. Sincerely, Authors
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes EDP to address the existing limitations of diffusion policies in offline RL. EDP relies on an action approximation to construct actions from corrupted ones, thus avoiding running the Markov chain for action sampling during training. The paper shows that EDP achieves a 25+ times speedup over Diffusion-QL at training time on the gym-locomotion tasks in D4RL. Extensive experiments training EDP with various offline RL algorithms, including TD3, CRR, and IQL, justify the superiority of the proposed method. Strengths: * The proposed method EDP focuses on improving the Diffusion-QL baseline in two respects, computational cost and limited applicability to different RL algorithms, which is the major contribution of this paper. It provides a new way to apply a complicated generative model in RL. * Extensive evaluation on datasets from different domains is performed. EDP can outperform several widely used algorithms by replacing the originally used Gaussian policy with the refined diffusion policy. And both the training and sampling speed of EDP are way faster than those of Diffusion-QL. Besides, it seems EDP is easier to tune than Diffusion-QL, as it uses the same hyperparameters for tasks belonging to the same domain and still achieves satisfactory performance. * The unreasonableness of the OMS evaluation protocol is pointed out, and RAT is adopted to justify the performance of the proposed method and baselines. Weaknesses: * The compared baselines in Table 2 and Figure 1 are a little bit weak, which cannot demonstrate the superiority of EDP. As the authors claim that the proposed method outperforms pre-sota methods, more powerful model-free baselines should be included, such as EDAC[1] and RORL[2] on locomotion tasks, and X-QL[3], InAC[4], BPPO[5] on other domains. Otherwise, I believe the superiority of the proposed method is overclaimed to some extent. [1] An, Gaon, et al. "Uncertainty-based offline reinforcement learning with diversified q-ensemble."
*Advances in neural information processing systems* 34 (2021): 7436-7447. [2] Yang, Rui, et al. "Rorl: Robust offline reinforcement learning via conservative smoothing." *Advances in Neural Information Processing Systems* 35 (2022): 23851-23866. [3] Garg, Divyansh, et al. "Extreme Q-Learning: MaxEnt RL without Entropy." *The Eleventh International Conference on Learning Representations*. 2022. [4] Xiao, Chenjun, et al. "The In-Sample Softmax for Offline Reinforcement Learning." *The Eleventh International Conference on Learning Representations*. 2022. [5] Zhuang, Zifeng, et al. "Behavior Proximal Policy Optimization." *The Eleventh International Conference on Learning Representations*. 2022. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * It seems there is a mismatch between the pre-sota results in Figure 1 and Table 2? * Line 227 points out that Tab 1 reflects the benefits of increasing K. However, it seems Tab 1 doesn't include this information? * The GPU memory needed by Diffusion-QL is quite large. I'm curious whether EDP has the same problem? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. The baselines are a little bit weak. The superiority might be overclaimed. A1. Thank you for reminding us of these important works. We’d like to clarify that our paper focuses on policy representation in offline RL, which is orthogonal to the algorithmic developments of these related works. This means that our EDP can be further integrated into their methods for better policy modeling. Moreover, we tried our best to make a fair comparison with these methods during the rebuttal period. Unfortunately, we failed to do so due to mismatches in evaluation protocols, network architectures and hyper-parameters. Taking BPPO as an example, in their Figure 3 (c), the running average score for hopper-medium-expert-v2 is around 90-100, while in Table 1, the OMS score is reported as 112.8. In light of the aforementioned challenges and your insightful suggestion, we are happy to adjust our claim to make it more precise. We will restrict the sota claim to the policy representation scope. > Q2. Mismatch of pre-sota methods between Figure 1 and Table 2. A2. Thank you so much for pointing out this typo. We will update Figure 1 to make the pre-sota of Adroit 65.9 instead of the current 56.9. > Q3. The benefits of increasing K. A3. Sorry for the confusion. We thank you for the valuable question and will definitely improve this in our revision. In Table 1, the Diffusion-QL results are directly copied from the original paper, which are obtained by setting K to 5. DQL(JAX) is our reimplementation of Diffusion-QL but with K equal to 100. EDP is our final version, whose training speed is constant with respect to K (thanks to action approximation). Therefore, EDP uses K = 1000. On all 4 domains, EDP surpasses Diffusion-QL, which means larger K brings better performance. > Q4. The GPU memory needed by Diffusion-QL is quite large; how about EDP? A4. EDP does not suffer from this problem.
The reason Diffusion-QL needs large GPU memory is that it needs to forward and backward through the policy network K times to sample an action at each training iteration. EDP avoids this problem by introducing the action approximation technique, which only forwards and backwards through the network once, thus greatly saving GPU memory usage and training time. --- Rebuttal 2: Title: Reviewer Input Needed Comment: Hello Reviewers, The authors have made efforts to address your comments on their work via the rebuttal. Part of the NeurIPS review process is participating meaningfully in the rebuttal phase to help ensure quality. Please read and respond to the authors' comments today, or tomorrow at the latest, to give everyone time to respond and reach proper conclusions. Thank you all again for your assistance in making NeurIPS a great conference for our community. --- Rebuttal Comment 2.1: Title: Reviewer Response Needed Comment: Hello Reviewer, The authors have made efforts to address your comments on their work via the rebuttal. Part of the NeurIPS review process is participating meaningfully in the rebuttal phase to help ensure quality. Please read and respond to the authors' comments today, or tomorrow at the latest, to give everyone time to respond and reach proper conclusions. Thank you for your assistance in making NeurIPS a great conference for our community. -- Your AC
Pruning vs Quantization: Which is Better?
Accept (poster)
Summary: This submission conducts a series of empirical experiments and analyses comparing neural network pruning and quantization. It first uses statistical methods to compare pruning/quantization. Then it measures the per-layer error based on a post-training compression framework. Finally, it conducts experiments using LSQ for quantization and iterative magnitude-based pruning. Strengths: The statistical method for compression comparison, as a non-parametric and non-trainable method, is interesting. Weaknesses: 1. The main drawback of the submission is that it barely proposes a new algorithm or insight. The submission is a collection of empirical experiments and analyses. 2. Comprehensive empirical experiments can be regarded as a contribution. However, the compression methods used in the submission are limited and the experimental setting is not solid. The submission basically focuses on magnitude-based pruning and uniform quantization. The authors may refer to [1] for some examples. [1] https://arxiv.org/pdf/1810.05270.pdf Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: N.A. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We answer each of them below. **W1**. We agree that our work does not propose a new quantization or pruning method. However, we respectfully disagree that our paper does not bring new insights. We refer to the general comment on novelty above for details on our contribution. In our view, a paper with novel insights is also a valuable contribution. In fact, the reference [1] mentioned by the reviewer is a conference paper which also does not suggest any new pruning method. **W2**. For our comparison, we used gradual magnitude pruning. First, it is not clear which pruning method exactly is the state-of-the-art. For example, the work of Gale et al. 2019 claims that magnitude pruning gives state-of-the-art results, and the work [1] suggested by the reviewer also indicates that the choice of the pruning method might not be the key to pruned model accuracy. Regarding magnitude pruning accuracy, we in fact report better results for pruning than the reference [1] mentioned by the reviewer. The best Resnet50 ImageNet results are reported in Table 6 of [1], for example 76.09% validation accuracy at 60% sparsity, while in our experiments we obtain 76.5% accuracy at 62.5% sparsity (see Table 1, Resnet-50 at 6 bits). We also used uniform quantization for our comparison as a simple and the most widely used technique, which is very competitive. Indeed, using non-uniform quantization formats such as the FP8 format could lead to slightly better results (see e.g. https://arxiv.org/abs/2208.09225 and https://arxiv.org/abs/2303.17951); however, that would only increase the number of cases where quantization is preferable and thus would not change the overall conclusions of our paper. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: I agree that "a paper with novel insights is also a valuable contribution".
And my point is that the submission is lacking in insights, for the following reasons: 1. Only magnitude-based pruning and uniform quantization are compared. 2. The submission can be summarized as: (a specific form of) pruning and quantization are compared under the same budget, and we find that quantization performs better. However, for example, pruning (with the same method) can lead to very different performance under different settings (initialization, training strategy), as shown in [1] (provided by me). Not even to mention quantization. Novel insights do not come from listing numbers but from thoughtful comparison and analysis of the differences. Besides, I don't think reporting a SOTA metric is important or contributes novelty to an empirical comparison. Pruning and quantization are two parallel (not mutually exclusive) methods; their performance under the same budget can be compared (in complete settings) and is indeed important for decisions, but their applicability is also very important. The authors may focus more on: 1. Thoughtful comparison (including but not limited to initialization, training, applicability) between pruning and quantization, if the submission's ultimate goal is still finding which (pruning / quantization) is better in most cases. 2. More analysis of the statistical methods, and their relation to final performance. Overall, I keep my scores. --- Reply to Comment 1.1.1: Comment: Dear reviewer, We believe there might be some misunderstanding, and we encourage you to thoroughly read our paper, rebuttal and the papers you cite again. We have made it very clear that a change in quantization scheme would not change the outcome of the results in the paper. We also commented that the gradual magnitude-pruning scheme is state-of-the-art or close to it, and many methods that claim to improve over it do so only by small margins, not sufficiently to refute our results. (See e.g., https://arxiv.org/pdf/1902.09574.pdf or https://arxiv.org/pdf/2202.01290.pdf).
The conclusion of the paper you cite [1] is that the pruning method hardly matters, and that networks can be trained from scratch to similar performance. The second part of this conclusion is in direct contradiction with later and concurrent work such as https://arxiv.org/pdf/2009.08576.pdf and https://arxiv.org/pdf/1902.09574.pdf. The most extensive study on pruning methods and their lineage, https://proceedings.mlsys.org/paper_files/paper/2020/file/6c44dc73014d66ba49b28d483a8f8b0d-Paper.pdf, also came to the conclusion that most differences reported by ‘new pruning methods’ in the literature were most likely due to different experimental setups (which we keep the same in our study). We do not have any reason to believe that the gradual magnitude-based pruning we employ is not state-of-the-art, or that it produces results so different from other methods as to negate the conclusions in the paper, especially given the large discrepancy between the pruning and quantization results. Our paper does not only list numbers to come to a conclusion. Most of our paper is a very thoughtful comparison: we compare pruning and quantization theoretically as much as we can, show the difference for either method without data, show the regimes in which one overtakes the other, and analyze their performance with algorithmic bounds that give mathematical guarantees. We also analyze the training of compressed models and share our insights in Appendix H. The conclusions and findings in the paper stay the same from the theoretical to the practical results, giving strong credibility to our comparisons. We also show the relation between our theoretical and practical analysis on a per-layer basis, and the link between the per-layer MSE measures and final performance is well known and shown again in the paper.
Moreover, we provide further evidence for our conclusion in the rebuttal, where we generated results for the combination of pruning and quantization, meaning our conclusions likely generalize to this setting as well. Finally, our discussion section covers in depth the applicability of the methods, and what would happen in ‘a complete setting’ as you mention. If this is not sufficient for you, please indicate technically where you think our reasoning is lacking.
Summary: The authors compare the performance of post-training quantization and pruning methods with the same compression ratio using a signal-to-noise metric, a kurtosis metric, and, ultimately, model accuracy. They study the expected performance analytically and in simple toy problems, i.e., Gaussian- and Student's t-distributed weights. They also run experiments on real pre-trained models from the PyTorch Model Zoo. They conclude that post-training quantization alone generally outperforms post-training pruning alone based on their per-layer weight fidelity metrics and full-model accuracy. Strengths: - The paper represents a complete analysis of post-training pruning alone vs. post-training quantization alone when targeting the same compression ratio. - The authors analyze the problem at multiple levels (more theoretical/analytical and more empirical/experiment driven) and find the two viewpoints complement and support each other. - The toy model analysis, though straightforward, illuminates the general issues at play for pruning vs. quantization. Weaknesses: - The SNR appears to be well-motivated, but it would be ideal to demonstrate explicitly that it maps to model accuracy, e.g. by adding a subfigure with accuracy to Fig. 3 and/or adding the SNR metric to Table 1. - The authors note that the quantized models naturally exhibit a larger compression ratio than the pruned models because they have a larger fraction of real 0s, which is a minor caveat. - Some additional references could be added to the related work, e.g., - combining pruning and quantization: https://arxiv.org/abs/2102.11289, - quantization-aware training: https://arxiv.org/abs/2006.10159, https://arxiv.org/abs/1905.03696 - In the Impacts section, there is no discussion of the potential decrease in model robustness and generalization when pruning or quantization. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Why does the SNR metric use weights and not activations? 
- How does the layer-wise SNR metric you evaluate look for the real models in Table 1? Does it match the conclusions based on accuracy? - Can you indicate how computationally costly the pruning/quantization methods you employ are? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors discuss the limitations that they do not consider combinations of pruning and quantization, hardware considerations, and other quantization or pruning schemes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments and additional references; please find our comments below. **W1**. The relation between SNR and model accuracy for both pruning and quantization is demonstrated in appendix D (see figure 6). **W2**. We agree that the naturally appearing sparsity in the quantized values makes the direct comparison between pruning and quantization more nuanced; however, taking the extra zeros into account would only improve the quantization results and would not change the conclusions of our paper. We refer the reviewer to the general comment on joint pruning and quantization above. **W3**. We thank the reviewer for the references and are happy to include them in the related work. **W4**. We thank the reviewer for the comment. We will add a discussion of the additional bias introduced by both compression techniques in the next revision of the paper. **Q1**. We use SNR on activations whenever possible (figures 4, 5, 7, 8), while we begin our presentation with weight SNR experiments (figures 1, 2, 3) in order to build up the study step by step, from the distribution analysis to the full-model results. **Q2**. The per-layer SNR metric correlates well with accuracy in post-training quantization or pruning, as shown in figure 6 of appendix D. In this case each compressed layer is optimized based on a local SNR objective. However, the per-layer SNR does not necessarily make sense when the models are fine-tuned. First, there is a list of SNR values, one per layer. Second, when we fine-tune the model we effectively train a new model by minimizing the loss function, and the new model is not constrained to be close to the original model (see appendix H for the related discussion). In this case the SNR metric might not be related to accuracy. **Q3**. The computational complexity of the pruning and quantization techniques we consider is linear in the number of weights. 
This overhead is relatively low compared to the computational complexity of per-layer optimization (solving the quadratic program) or fine-tuning. In the latter case the straight-through estimator is used for gradient computation, so the computational complexity of fine-tuning is similar to that of the original model training. **L1. Combining quantization and pruning**. We refer the reviewer to the general comment above for a discussion on combining quantization and pruning. **L2. Other pruning or quantization schemes**. We do not expect different quantization or pruning schemes to change the conclusions of our study. For example, 2:4 sparsity or structured sparsity generally performs worse than the unstructured sparsity considered in our work (we discuss this in section 6, "other types of pruning"). In our experiments we confirmed that at 50% sparsity, switching from unstructured to 2:4 block sparsity decreases the validation accuracy by 0.5% for Resnet18, 0.3% for Resnet50, 1.4% for MNV2, and 1.2% for EfficientNet. Using other quantization schemes such as FP8, in turn, might only improve the results in some cases (https://arxiv.org/abs/2303.17951). --- Rebuttal Comment 1.1: Comment: Thank you for the response. I believe some of this information should be included in the paper or supplementary material (for example, the arguments/studies about why different quantization/pruning schemes won't change the main conclusions). Overall, I stand by my original score.
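The weight-SNR comparison at equal compression ratios that runs through this thread can be sketched on synthetic Gaussian weights. This is a minimal illustration only: `quantize` and `prune` below are simplified stand-ins (plain max-range symmetric rounding and magnitude pruning), not the paper's MSE-clipped schemes.

```python
import numpy as np

def snr_db(w, w_hat):
    """Signal-to-noise ratio of a compressed tensor, in dB."""
    return 10 * np.log10(np.sum(w**2) / np.sum((w - w_hat)**2))

def quantize(w, bits):
    """Symmetric uniform quantization with a naive max-based range."""
    scale = np.max(np.abs(w)) / (2**(bits - 1) - 1)
    return np.round(w / scale) * scale

def prune(w, sparsity):
    """Magnitude pruning: zero out the smallest-magnitude fraction."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) > thresh, w, 0.0)

rng = np.random.default_rng(0)
w = rng.standard_normal(100_000)

# Equal 4x compression vs FP16: 4-bit quantization vs 75% sparsity.
print("4-bit SNR:", snr_db(w, quantize(w, 4)))
print("75% pruning SNR:", snr_db(w, prune(w, 0.75)))
```

For Gaussian weights the quantized tensor comes out with a clearly higher SNR than the equally compressed pruned tensor, consistent with the paper's light-tailed-distribution analysis.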
Summary: This paper sets out to answer the question of whether quantization or pruning is better. It first provides an analytical comparison of the two methods in terms of signal-to-noise ratio (SNR) and establishes an early relationship between kurtosis and SNR. It then provides a mathematical breakdown of the compression error in both methods. Next, it empirically compares the two methods on 46 models from the PyTorch model zoo to validate the early analytical results. The authors then compress and fine-tune a set of vision models to compare the two methods after fine-tuning, and show that quantization still outperforms in this regime. Finally, it explores hardware considerations, since dense pruned models can be difficult to accelerate on most hardware. Strengths: The authors use the reasonable FP16 baseline instead of the more common FP32. This paper makes an interesting connection between kurtosis and the relative performance of quantization over pruning. The observation that pruning tends to recover the original representation but quantization builds new ones is intriguing. The empirical evaluation spans image classification and detection tasks. Weaknesses: Especially when considering hardware, it would be useful to consider block sparsity like 2:4, which is supported in some modern GPUs. However, element-wise sparsity presumably has strictly better accuracy/memory performance, so block sparsity would likely be worse. Given the recent success of transformers, it may be helpful to include more of them (in addition to ViT) in the empirical evaluation. That said, I do not expect this would change the conclusion. You correctly point out that weight distributions are typically quantized symmetrically, but they also typically use channel-wise quantization, which is especially helpful at low bitwidths. I expect channel-wise quantization could further shrink the region where pruning is preferred. 
I am slightly concerned about the novelty of this comparison, but I will do a better literature search later in the review process. One other potential critique may be that this conclusion may not be too surprising given the exponential decay in the importance of bits. This decay likely does not hold for individual weights. Minor: L87 is a little confusing for the definition of T, which is more the error of the magnitude pruning, as opposed to the magnitude pruning itself. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Did you consider clipping to make the distribution less sensitive to outliers? This seems standard with weight quantization too since weights are easy to profile and optimize the quantization settings. Figure 1 (right) is computed with the analytical equations with the normal distribution substituted? Are there any regions when the combination of the two techniques outperforms the techniques individually? I'm confused about Appendix H, where the conclusion is that quantization tends to relearn different representations but pruning recovers the previous ones. The figure shows that the distance between the original and new activations is larger for pruning, which should imply that it doesn't relearn the original distributions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and useful suggestions. Please find our answers and comments below. **W1**. Block sparsity is a subset of structured sparsity, and therefore we indeed expect it to have strictly worse accuracy at the same model size. This was confirmed in our experiments: at 50% sparsity, switching from unstructured to 2:4 block sparsity decreases the validation accuracy by 0.5% for Resnet18, 0.3% for Resnet50, 1.4% for MNV2, and 1.2% for EfficientNet. Hence, more structured sparsity does not change the conclusions of our paper. Structured sparsity would be worse accuracy-wise, in exchange for a more efficient hardware implementation. We will include this discussion in the next revision of the paper. **W2** We fully agree that adding more transformers is beneficial for our empirical comparison. Below we add SQNR values for the tensors of 3 large language models (Bloom-3b, Opt-2.7b, OpenLlama-3b). The conclusions are similar to our experiment in figure 3. We are also working on including quantized and pruned Bloom-560m results with fine-tuning. We will include these results in the next revision of the paper. | Avg. SQNR | 8b | 7b | 6b | 5b | 4b | 3b | 2b | |--------------|----------|----------|----------|----------|----------|----------|----------| | Quantization | 33.8 $\pm$ 4.7 | 29.7 $\pm$ 4.2 | 25.7 $\pm$ 3.5 | 21.6 $\pm$ 2.8 | 17.3 $\pm$ 2.0 | 12.7 $\pm$ 1.3 | 7.5 $\pm$ 0.8 | | Pruning | 12.7 $\pm$ 2.3 | 10.8 $\pm$ 2.1 | 9.2 $\pm$ 1.7 | 7.7 $\pm$ 1.4 | 6.3 $\pm$ 1.1 | 4.9 $\pm$ 0.8 | 3.5 $\pm$ 0.7 | **W3**. We absolutely agree. We avoided channel-wise quantization throughout the study on purpose, in order to clearly demonstrate the region where pruning is preferred. For example, if channel-wise quantization were used for 8-bit quantization, the validation accuracy would increase by 0.2% for Resnet18 and 0.3% for MNV2. 
Indeed, the region where pruning is preferred would shrink. **W4**. To the best of our knowledge this comparison has not been presented in the literature; we refer to the general comment above for the details of our contribution. We are eager to cite any relevant work if found. **W5**. Thank you for the comment; we will clarify the notation in L87. **Q1**. Yes, we use clipping with the range estimated by minimizing the MSE, which is common practice in the quantization literature. **Q2**. Yes, we use the analytical equations for computing the expected error for quantization and pruning; the details are given in appendix A. **Q3**. A combination of quantization with some mild pruning outperforms quantization to lower bit-widths in some cases. However, the picture changes if the natural sparsity in quantized tensors is taken into account; we refer to the comment on combining pruning and quantization above for further details. **Q4**. The CKA measure ranges from zero to one (the higher, the closer), so a larger value for pruning means its distributions are closer to the original ones. We kindly ask the reviewer to clarify the question further in case our comment needs more explanation. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. I believe there is value in a large empirical evaluation like this and I'll raise my original score accordingly.
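The analytical expected-error computation referred to in Q2 can be illustrated for magnitude pruning of standard-normal weights. This is a hedged sketch, not the paper's appendix A derivation: it uses the closed form E[w^2; |w| < a] = (2*Phi(a) - 1) - 2*a*phi(a) for a truncation threshold a, checked against Monte Carlo.

```python
import math
import random

def _norm_ppf(p, lo=-10.0, hi=10.0):
    """Standard-normal quantile via bisection on the CDF."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def pruning_mse_analytic(sparsity):
    """Expected per-weight squared error when magnitude-pruning a
    standard-normal tensor to the given sparsity level."""
    # Threshold a such that P(|w| < a) = sparsity.
    a = _norm_ppf((1 + sparsity) / 2)
    phi = math.exp(-a * a / 2) / math.sqrt(2 * math.pi)
    # E[w^2 ; |w| < a] = (2*Phi(a) - 1) - 2*a*phi(a), and 2*Phi(a)-1 = sparsity.
    return sparsity - 2 * a * phi

# Monte Carlo check at 75% sparsity.
random.seed(0)
w = [random.gauss(0, 1) for _ in range(200_000)]
s = 0.75
thresh = sorted(abs(x) for x in w)[int(s * len(w))]
mc = sum(x * x for x in w if abs(x) < thresh) / len(w)
print(pruning_mse_analytic(s), mc)  # both close to 0.276
```

Since the signal power of a standard normal is 1, the corresponding pruning SNR at 75% sparsity is roughly 1/0.276, i.e. about 5.6 dB, matching the gap seen in the tensor-level comparisons above.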
Summary: In this paper, the authors try to answer whether pruning or quantization is better for network compression. The paper starts by analyzing the quantization and pruning errors under a standard normal distribution and heavy-tailed distributions. Full-model comparisons between quantization and pruning are done on a set of 8 models. In most cases, quantization performs better than pruning, and it is much better under extreme compression targets. Strengths: 1. The paper is well-written. The writing is to the point, and the content is well organized, which makes the paper easy to understand. 2. A thorough full-model comparison is done for a set of 8 models. Weaknesses: 1. The starting point of this paper might be biased. Quantization and pruning are not two competing methods; each has a specific application scenario, and they can be used solely or jointly. According to the experiments, quantization is almost always better than pruning. Yet, this does not mean that pruning is not useful. 2. The comparison is done such that the model sizes after pruning and quantization are the same. It is not clear whether this comparison is proper. 3. Even without the investigation in this paper, quantization is already used more often than pruning. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Quantization and pruning are two different network compression techniques, but this paper compares them under the setting of the same model size. Nevertheless, model size is not the most important factor for efficient applications. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The novelty of this paper is quite limited. It is more like an empirical study. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and suggestions. We answer each point below. **W1**. We fully agree that pruning is useful, and we did not state the opposite in our paper. Rather, for the setups where both methods are supported, using quantization leads to more accurate models. As we mention in the general comment above, quantization is more accurate even in the joint case. Pruning is useful, for example, when it is necessary to adjust the model size precisely: the quantized model size is proportional to the integer bit-width, which makes decrements in the model size relatively large, but this can easily be tackled with pruning or a combination of quantization and pruning. **W2/Q1**. For this work, one might consider at least three metrics: the model size, the bit-operations count (BOPs), or the run-time on real hardware. The model size is the simplest of the three and is suitable for a direct comparison. In many cases pruning only targets the model size and the resulting memory-transfer overheads, unless sparse kernels are implemented (see section 6 for the discussion of additional pruning overheads). For this reason the BOPs count might not always be relevant. Finally, as we discuss in section 6, using run-time on real hardware would make the comparison dependent on the specific hardware architecture. That being said, we are open to suggestions if there are more suitable metrics. **W3/Limitation**. We do agree with the reviewer that in practice quantization is used more often than pruning. However, in our view this is not because there are prior extensive and conclusive studies concluding that quantization is better in terms of accuracy. More likely it is because quantization is more beneficial from a hardware perspective (cf. the discussion section) and more widely supported on common hardware. 
We respectfully disagree that the novelty of our paper is limited and refer to the general comment above. --- Rebuttal 2: Title: final discussions Comment: Dear Reviewer, As discussions come to an end soon, this is a polite reminder to engage with the authors in discussion. Please note we take note of unresponsive reviewers. Best regards, \ SAC
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful comments and useful feedback. We are happy to see that they found the paper well written (S4q6), with a thorough empirical evaluation on various tasks (S4p6, jvrg) performed on multiple levels (XswT), and that they found our results interesting, such as attributing quantization/pruning performance to the kurtosis values (jvrg) and the analytical analysis of distributions (B8sD). We address some of the common comments below. ## Combining pruning and quantization While the original scope of the paper was to compare quantization to pruning, we agree that the paper would greatly benefit from also considering their combination. Below we share results on combining pruning and quantization for MobileNet-V2 and Resnet18. Each column corresponds to a compression ratio (measured in average bit-width), which can be achieved using different combinations of quantization and pruning; e.g., a model quantized to INT6 has the same size as an INT8 model with 25% sparsity. Each diagonal value corresponds to quantization without pruning. We mark the best combination in each column in bold. 
| MobileNet-V2 | 8b | 7b | 6b | 5b | 4b | 3b | 2b | |--------------|----------|----------|----------|----------|----------|----------|----------| | INT8+pruning | **71.9** | 71.8 | 59.1 | 47.5 | 35.6 | 0.0 | 0.0 | | INT7+pruning | - | **72.0** | 71.6 | 70.2 | 66.0 | 51.7 | 19.0 | | INT6+pruning | - | - | **71.8** | 71.4 | 69.1 | 60.1 | 31.7 | | INT5+pruning | - | - | - | **71.6** | **70.9** | 67.7 | 53.4 | | INT4+pruning | - | - | - | - | **70.9** | **70.1** | 64.8 | | INT3+pruning | - | - | - | - | - | 68.6 | **67.9** | | INT2+pruning | - | - | - | - | - | - | 59.1 | | Resnet-18 | 8b | 7b | 6b | 5b | 4b | 3b | 2b | |--------------|----------|----------|----------|----------|----------|----------|----------| | INT8+pruning | **70.5** | 70.3 | 70.1 | 69.3 | 69.0 | 64.3 | 61.3 | | INT7+pruning | - | **70.5** | 70.2 | 70.0 | 69.5 | 68.0 | 63.9 | | INT6+pruning | - | - | **70.6** | 70.1 | 69.8 | 68.9 | 65.9 | | INT5+pruning | - | - | - | **70.3** | **70.0** | **69.6** | 68.5 | | INT4+pruning | - | - | - | - | **70.0** | **69.6** | **68.9** | | INT3+pruning | - | - | - | - | - | 68.9 | 68.4 | | INT2+pruning | - | - | - | - | - | - | 67.3 | As these experiments show, quantization alone is better than combinations with pruning in most cases. Only for compression ratios below 4 bits can quantization to 3 or 4 bits with added sparsity be better than the corresponding 2- or 3-bit quantization. However, even in the low-bit cases the picture changes when natural sparsity is taken into account (cf. the discussion section and appendix C). 
Below we give the natural sparsity values for quantization: | Natural sparsity | 8b | 7b | 6b | 5b | 4b | 3b | 2b | |--------------|----------|----------|----------|----------|----------|----------|----------| | MobileNet-V2 | 1.3% | 2.4% | 4.4% | 7.7% | 13.4% | 23.5% | 39.9% | | Resnet18 | 3.4% | 5.3% | 8.5% | 13.3% | 20.3% | 31.9% | 43.4% | As the tables show, in some low bit-width cases the best accuracy is achieved by combining quantization with mild pruning; for example, a MN-V2 model quantized to INT4 with 33.3% pruning outperforms the INT3 model. However, a major part of the reason is the natural sparsity in quantized tensors. For example, MN-V2 at INT4 is already 13.4% sparse, while it only needs to be 25% sparse to be compressed down to INT3; at INT3 it has a sparsity of 23.5% out of the 33.3% needed to compress it down to INT2 with pruning. For Resnet18 the natural sparsity values are higher, i.e., 20.3% at INT4 and 31.9% at INT3, so the natural sparsity almost achieves what pruning does, without further accuracy loss. For example, Resnet18 at INT3 pruned down to INT2 has an accuracy of 68.4%. If we compare this model to INT3 quantization, the latter has a much higher accuracy of 68.9% with only 2% fewer zeros. And if we compare INT3 with 33.3% pruning to the INT2 model, the latter has a sparsity of 43%, which makes it even more compressible. We further note that pruning only appears beneficial at lower bit-widths, where the relative overheads of pruning, such as storing the sparsity mask, which we neglected in this discussion, are potentially much higher (see section 6 of the paper for further details). We would be happy to include this discussion in the next revision of the paper. ## Comments on the novelty of the paper, the contribution and its value Some of the reviewers mention a lack of novelty in our paper. We respectfully disagree that the novelty of the paper is limited. 
To the best of our knowledge, the insights from the analytical analysis, the lower bounds for post-training pruning/quantization, and the large-scale comparison between pruning and quantization across several tasks have not been published before. The closest related work we are aware of is [30], which is limited to a small-scale empirical comparison. It is important to note that these insights are very valuable for ML practitioners as well as HW engineers deciding which methods to support and where to invest their time when making networks more efficient.
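The natural-sparsity effect discussed in the rebuttal above (weights that round to exactly zero under quantization) can be reproduced qualitatively on synthetic weights. This is a rough sketch assuming plain symmetric max-range quantization of Gaussian weights; real networks use optimized clipping ranges and have non-Gaussian weights, so the exact percentages will differ from the table.

```python
import numpy as np

def natural_sparsity(w, bits, clip=None):
    """Fraction of weights that round to exactly zero under
    symmetric uniform quantization with the given bit-width."""
    r = clip if clip is not None else np.max(np.abs(w))
    scale = r / (2**(bits - 1) - 1)
    q = np.round(np.clip(w, -r, r) / scale)
    return float(np.mean(q == 0))

rng = np.random.default_rng(0)
w = rng.standard_normal(1_000_000)
for bits in (8, 4, 2):
    print(f"{bits}-bit natural sparsity: {natural_sparsity(w, bits):.1%}")
```

The key qualitative behavior matches the rebuttal's tables: the lower the bit-width, the wider the zero bin relative to the weight distribution, and hence the larger the fraction of "free" zeros.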
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Improved Communication Efficiency in Federated Natural Policy Gradient via ADMM-based Gradient Updates
Accept (poster)
Summary: This paper proposes a communication-efficient algorithm, FedNPG-ADMM, for federated natural policy gradient, based on a reformulation of the underlying quadratic problem. It reduces the communication complexity from $\mathcal{O}(d^2)$ to $\mathcal{O}(d)$. A convergence analysis is provided accordingly. Strengths: 1. The paper is well-written and clearly presented. 2. The reformulation of the original quadratic problem into a distributed form (eq 14) is a smart move. 3. The ADMM-based global direction estimation reduces the communication complexity to $\mathcal{O}(d)$. 4. Stationary convergence in the non-convex case is provided, of the same order as for the standard FedNPG. Weaknesses: As this is a federated learning (FL) setting, I am curious about the main differences between the FL setting considered in this paper and classic distributed learning. 1. Data heterogeneity is one of the key features of FL, but how it impacts model performance is not clearly discussed in the paper, either in theory or in experiments. 2. To reduce communication costs, local updates are often used within each agent in FL algorithms. But in the FedNPG-ADMM algorithm proposed in this paper, there is no local update step for each agent. Could local steps be directly applied in FedNPG-ADMM to further reduce the communication cost while maintaining the same convergence? Given the above, to be precise, the setting of this paper is a distributed learning setting, rather than FL with data heterogeneity and local updates. The major contribution is the communication reduction, as claimed in the paper, so I expect the experiments to include more direct comparisons of communication costs. 1. A direct comparison in terms of communication cost should be given. In other words, results with communication cost as the x-axis are also desired. 2. I would appreciate it if baselines other than FedNPG could be included. 
I am curious about the actual communication costs among first-order methods, second-order methods (FedNPG), and approximated versions of second-order methods (FedNPG-ADMM). Technical Quality: 3 good Clarity: 3 good Questions for Authors: see above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1.a**. Yes, data heterogeneity is a key issue in FL. However, this issue only exists in model aggregation methods (with local updates). As proven in [43], gradient aggregation methods are *immune* to whether the collected data is i.i.d. or not. In summary, data heterogeneity does not influence our results, as they are based on gradient aggregation. **Q1.b**. This can be taken as the same question as Q1.a. FL is a problem, and it can be solved in several ways. The key difference between distributed learning and centralized federated learning is the star network [52] in federated learning, where a parameter server collects the updates and broadcasts information to all agents. In classical distributed learning, clients can usually communicate with each other, while they cannot in the federated setting. Local updates (FedAvg [24]) were proposed for model aggregation methods. The debate between local updates and gradient aggregation (called local SGD and minibatch SGD, respectively, in the distributed optimization community) has been going on for years. As proven in [43], gradient aggregation methods are immune to data heterogeneity, and [46] compares the convergence rates of the two paradigms, showing that local SGD is actually not better. As gradient aggregation is not generally worse than local updates and is immune to data heterogeneity, we decided to use it. This paper does not aim to settle the differences between them, but to find an efficient way to perform gradient aggregation in RL. It is worth mentioning that the well-known FL algorithms FedNL [34] and FedPD [62] also use one-step updates. Our algorithm can be extended to multiple local steps. However, multiple local steps may slow down convergence, as mentioned in the standard FL literature [52]. 
In this case, it becomes unclear whether FedNPG-ADMM can further reduce the communication cost while maintaining the same convergence. This is a good question and could be future work. **Q2**. Our original figures follow the convention of using the number of iterations as the x-axis. We thank the reviewer for the valuable suggestion: direct communication comparisons are indeed better, as communication reduction is our main contribution. Thus, communication comparisons have been added in Figure 4 (see the added pdf), where communication overhead is measured by the number of transmitted parameters with double precision in each agent. FedNPG-ADMM keeps the communication overhead at the level of first-order methods, while FedNPG has much larger costs. The ADMM method reduces the cost by $4$ orders of magnitude in the Swimmer-v4 task and by about $6$ orders of magnitude in the Humanoid-v4 task. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal, and I will keep the score.
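The communication gap discussed in Q2 can be sanity-checked with a simple per-round parameter count. This is an illustrative sketch only: `fednpg_comm` and `fednpg_admm_comm` are hypothetical helpers counting transmitted scalars per agent per round under a naive accounting (naive FedNPG ships the $d \times d$ Fisher matrix plus gradient; the ADMM variant only exchanges $d$-dimensional vectors), ignoring constants such as multiple ADMM iterations per round.

```python
def fednpg_comm(d: int) -> int:
    """Naive FedNPG: upload a d x d Fisher matrix plus a d-dim gradient,
    then receive the d-dim NPG direction broadcast."""
    return d * d + d + d

def fednpg_admm_comm(d: int) -> int:
    """ADMM-based variant: upload and receive one d-dim vector each."""
    return d + d

for d in (10**3, 10**6):
    ratio = fednpg_comm(d) / fednpg_admm_comm(d)
    print(f"d = {d:>8}: overhead ratio ~ {ratio:.1e}")
```

The ratio grows like $d/2$, so for policies with $10^6$ parameters the naive scheme transmits roughly five orders of magnitude more data per round, which is consistent with the 4 to 6 orders of magnitude reported for the MuJoCo tasks above.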
Summary: The paper studies how to train a global policy from distributed data in reinforcement learning. The authors propose a distributed natural policy gradient method that employs ADMM to approximately compute the natural policy gradient direction. The communication complexity is linear in the dimension of the policy parameters. The authors also prove a sublinear iteration complexity for reaching a stationary point. The effectiveness of the proposed method is verified in MuJoCo experiments. Strengths: **originality** High communication complexity is a bottleneck for implementing many RL algorithms in federated training. I believe the novelty of this work lies in a new application of ADMM to a popular RL algorithm: natural policy gradient. This method achieves better communication complexity than the naive method. I am not aware of this method in the literature. **quality** - The authors provide a stationary-point analysis of the proposed method in a distributed training setting. - The authors verify the use of the proposed method in experiments. **clarity** The proposed method and theory are clearly explained. **significance** - Distributed training algorithms are important for applying RL in distributed systems. - A stationary-point convergence guarantee is provided. Weaknesses: - The naive federated natural policy gradient method only has quadratic communication complexity with respect to the dimension of the policy parameter. This does not seem to be a real bottleneck. - Directly extending centralized RL algorithms is not very significant, since fault-tolerance and robustness issues are more important to distributed systems. - It is important to compare the communication complexity with the literature, e.g., Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee. - Some assumptions can be strong in practice, e.g., the invertibility of the Hessian. It would be useful to provide examples of when the Hessian is invertible. 
- The provided convergence property can be arbitrarily sub-optimal, and it is not clear how to check all the assumptions. The stationary-point analysis has a gap with respect to the convergence rate analysis of natural policy gradient in the literature. - The experimental setting is artificial, since it was not originally designed for federated learning. There are no comparable baselines in the experiments. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Some questions are raised in Weaknesses. Here are some other questions. - Can the authors provide robustness or fault-tolerance guarantees? - Can the ADMM-based update be applied to other policy gradient methods? It would be useful to discuss generalizability. - Since global convergence holds for many policy gradient methods, can the authors strengthen the convergence guarantees? - Can the authors compare the proposed method with other baselines, e.g., Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1**. This is actually an important bottleneck, for the following reasons. Generally, for a method to scale to large problems, its complexity should be at most $O(n)$ [53]. Thus, the $O(n^2)$ complexity of the naive method is not acceptable in large-scale FL. In DRL, a policy is approximated by NNs, and the sizes of NNs are generally large. For toy tasks in MuJoCo [54] or language agents [55], the parameter sizes can reach the $10^6$ level, so the sizes of the Hessians are on the order of $10^{12}$. In practice, the NNs used in RL keep getting larger. With transformers, parameter sizes reach $2.2\times 10^{10}$ [56], whose square is $4.8\times 10^{20}$ (about $4$ zettabytes in double precision). Transmitting such amounts of data from each agent in each iteration is not realistic. **W2&Q1**. First, our paper is not a direct extension of centralized RL. Compared to the centralized setting, the first critical bottleneck is the limited communication budget in FL [17, 52]; it is necessary to design FedRL methods that balance communication costs and learning performance. Our FedNPG-ADMM utilizes 2nd-order information but only requires its approximation, without sending Hessians. As NPG (2nd-order) is more stable and converges faster than PG (1st-order), we rigorously prove the convergence of FedNPG-ADMM. Second, we agree that fault tolerance is important in distributed systems. However, FedRL is a new and emerging field with a limited number of theoretical publications at this moment. Adapting centralized RL algorithms to FL while balancing communication cost and learning efficiency remains a challenging task, and it is the first step; our paper thoroughly solves this problem. Fault-tolerance theory is a good direction for the next step. Notably, our method is orthogonal to several malicious-detection methods, e.g., the attack stealth [57]. To expand our work, we add experiments with agent selection in Figure 5. 
In the Swimmer task, we randomly select $75\%$ and $50\%$ of agents in each iteration, and performance only drops slightly (final rewards drop by less than $6\%$). Thus, our proposed method is robust to agent disconnection in practice. **W3&Q4**. We thank the reviewer for pointing out the valuable work (FedBR) [58], and will add it to our introduction. However, that approach is designed for Byzantine attacks, while we focus on reducing communication costs while maintaining performance. First, our algorithm converges *faster* than FedBR. FedBR focuses on PG (1st-order); our work uses NPG (2nd-order). [22] thoroughly compares the convergence performances of PG and NPG. Measured in the same metric, our convergence rate is ${O}(1/\epsilon N)$, versus ${O}(1/\epsilon^{\frac{5}{3}}N^{\frac{2}{3}})$ for FedBR. Further, our method improves on the communication cost of FedBR by $O(d\epsilon^{\frac{2}{3}}/N^{\frac{1}{3}})$. Second, FedBR does not show that the sample complexity of FedRL can scale linearly with the number of agents, i.e., $\alpha =0$ in Table 1 [58]. In contrast, our paper shows that *the sample complexity scales linearly* with $N$, which comes from the benefit of collaboration, and a thorough comparison supporting this is given in Figure 2. In summary, NPG updates policies according to the Riemannian metric and enjoys stable on-policy exploration. We therefore do not think a direct comparison between the two is appropriate. **W4**. In practical computation, if one takes an $n\times n$ symmetric matrix with uniformly distributed entries, it is almost surely invertible. The inverse expression is widely used in the original NPG (Theorem 1) [16] and TRPO (Appendix C) [35]. It can be replaced by the Moore-Penrose pseudoinverse as in [2, 35]. It is worth mentioning that in practice, it is expensive, in both memory and computation, to solve $Ax=b$ by computing $A^{-1}$; a conjugate gradient method is used in DRL to approximate the result [33]. **W5&Q3**.
Our assumptions are widely used and verified in previous works; see Q2 in the response to Reviewer q8Jc for details. The convergence guarantee of FedNPG-ADMM is to fixed-point optimality, and our work concerns NPG rather than vanilla PG. In FL, even without ADMM, it is still challenging to provide a global guarantee for NPG without additional strong assumptions. The global guarantee of NPG is attributed to the gradient domination property [2]. Without it, convergence can only be assured to local optima instead of a global optimum in the nonconvex setting. However, this property is limited to the centralized setting with $N=1$, which is *the reason that most global guarantees are established in the centralized setting*. Even if each function in $\\{f_i\\}^N_{i=1}$ satisfies the property, the sum $\sum_{i=1}^N f_i$ might not. We recognize the importance of a global guarantee for the ADMM approach; finding sufficient (and then necessary) assumptions for FedNPG (even without ADMM) is a direction for future work. Practically, our experiments verify that FedNPG-ADMM achieves the same convergence as FedNPG. **W6**. First, we follow the standard setting in FL. The original FL work [24] uses classical image and language tasks; in FL research, the usual practice is to make classical tasks federated instead of designing new tasks [15, 52]. Second, FedRL is a new field and has not yet established experimental baselines. Datasets were published last year for labeled images [59] and health images [60], but they are not for RL and have not yet produced baselines. We would be glad to see open-source tasks or baselines designed for FL and widely accepted by the community, and better still if they are designed for FedRL. **Q2**. The ADMM method can be applied to any 2nd-order PG. Regarding other 2nd-order methods, there is a Hessian-aided PG attempt [61], but it still concludes that NPG achieves better performance. NPG itself forms the basis for several variants [2, 22, 35].
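As a side note on the W4 point (solving $Ax=b$ without forming $A^{-1}$), a minimal conjugate-gradient sketch for a symmetric positive-definite system looks as follows; the names and the toy system are illustrative only, not taken from our implementation:

```python
import numpy as np

def conjugate_gradient(A_mv, b, iters=50, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A, given only the
    matrix-vector product A_mv(v). Neither A nor A^{-1} is ever
    materialized, which is the point of the CG approach in NPG/TRPO."""
    x = np.zeros_like(b)
    r = b.copy()                  # residual b - A x (x starts at 0)
    p = r.copy()                  # conjugate search direction
    rs_old = r @ r
    for _ in range(iters):
        Ap = A_mv(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy system: A = M M^T + I is symmetric positive definite (and, echoing
# the W4 remark, a random symmetric matrix is almost surely invertible).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)
b = rng.standard_normal(5)
x = conjugate_gradient(lambda v: A @ v, b)
assert np.allclose(A @ x, b, atol=1e-6)
```

In FedNPG-ADMM itself the Hessian-related terms are handled through the ADMM reformulation; the snippet only illustrates the standard matrix-free alternative to explicit inversion.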
--- Rebuttal 2: Comment: Dear Reviewer, We just wanted to check again to see whether our comments have addressed your concerns. We are happy to provide any additional clarifications that may be needed. Thank you again for your time spent reviewing our paper. Best, Authors
Summary: This paper applies the ADMM technique to the Fed-NPG algorithm in reinforcement learning and reduces the communication cost from $O(d^2)$ to $O(d)$, where $d$ is the number of parameters, while nearly maintaining the convergence results of Fed-NPG. Empirical results verify the theoretical analysis. Strengths: The communication cost reduction is impressive, since in the federated learning setting communication cost is one of the bottlenecks, and the convergence results are nearly the same as Fed-NPG's. The idea of combining natural policy gradient with ADMM-style methods (or the penalty functions) is interesting (and at least novel to me). Weaknesses: I am not very familiar with reinforcement learning, and thus may not find potential weaknesses. However, I suggest comparing with the original policy gradient methods in the empirical section and even in the main content (show that Fed-NPG-ADMM converges much faster than the policy gradient variant in federated learning) and plotting all the communication costs for Fed-NPG, Fed-NPG-ADMM, and policy gradients. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weakness section. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**. Thank you for these constructive suggestions. We will add PG to the main content, and communication comparisons are added in Figure 4 (see the added pdf), where communication overhead is measured by the number of transmitted parameters, in double precision, for each agent. FedNPG-ADMM keeps the communication overhead at the level of the first-order methods, while FedNPG has much higher costs. The ADMM method reduces the cost by $4$ orders of magnitude in the Swimmer-v4 task and by about $6$ orders of magnitude in the Humanoid-v4 task. --- Rebuttal Comment 1.1: Comment: Thanks for your response. Since I am not very familiar with the RL background, I will maintain my score and confidence.
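To make the double-precision accounting in Q1 concrete, here is a quick back-of-the-envelope sketch; the parameter counts below are hypothetical stand-ins, not the actual sizes of our policy networks:

```python
def comm_bytes(d, send_hessian):
    """Bytes one agent transmits per iteration at double precision (8 B/value)."""
    entries = d * d if send_hessian else d
    return 8 * entries

# Hypothetical parameter counts for a small and a larger policy network.
for d in (10_000, 1_000_000):
    naive = comm_bytes(d, send_hessian=True)    # O(d^2): full Hessian
    admm = comm_bytes(d, send_hessian=False)    # O(d): parameter-sized messages
    print(f"d={d}: naive {naive:.2e} B vs ADMM-style {admm:.2e} B ({naive // admm}x)")
```

The ratio between the two is exactly $d$, which is why the saving grows with the network size.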
Summary: The work proposes a new algorithm for federated policy optimization. It uses a primal-dual update to replace the primal update of FedNPG, so that the communication cost is reduced from $d^2$ to $d$. The proposed method enjoys the same rate of convergence as FedNPG under certain assumptions, and numerical experiments demonstrate the efficacy of the algorithm. Strengths: The paper is well-rounded and the presentation is clear. The proposed method clearly enjoys superiority in terms of communication cost. Numerical experiments are presented to show the efficacy of the algorithm. Weaknesses: (Please reply to the Questions section directly) The motivation of the proposed algorithm seems somewhat less significant. Moreover, the convergence theory requires a bounded policy gradient, which seems quite strong. In the numerical experiments, more results could be included to further show the efficiency of the proposed algorithm. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Here are my questions and suggestions: 1. Concerns on the motivation: the idea of combining reinforcement learning and federated learning is interesting and also largely unexplored in the literature. It is good to see exploration in this direction. An intriguing question is whether we can carry existing reformulations of TRPO, such as the KL-penalized objective in [1], over to the federated setting. If we get rid of the constraint, we don't need to communicate the Hessian in the first place (and a FedAvg-type algorithm may also enjoy a similar rate of convergence in theory). In summary, the authors propose to study policy optimization in the federated setting, but eventually end up with a less significant improvement over FedNPG (and this improvement essentially comes from constrained federated learning, not from RL or policy optimization), which makes me less confident about the contribution to the RL field; 2.
In terms of the assumptions for the convergence analysis, the authors assume a bounded policy gradient and Lipschitz continuity (Assumption 4.1) and claim that these are fairly standard in the existing literature. I'm actually not very sure about the boundedness of the policy gradient. Could the authors point to the specific assumptions in the existing literature? Same question for Assumption 4.2. 3. The penalty parameter $\rho$ enforces that each local $y_i$ reaches consensus with the global $y$. In the experiments $\rho$ is simply a small number such as 0.1 or 0.01. My concern is: during the update, does $y_i - y\rightarrow 0$ actually hold? The authors could also plot the consensus errors. 4. In the experiments, for example Figures 2 and 3, FedNPG-ADMM seems no better than FedNPG in terms of rewards. It would be interesting to see why this is the case. I know that the main claim of FedNPG-ADMM is its communication efficiency, but I think the authors could also plot the curves with respect to CPU time. References: [1] Schulman, John, et al. "Proximal policy optimization algorithms." arXiv preprint arXiv:1707.06347 (2017). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are well stated in the weakness and question sections. The authors also include a discussion of limitations at the end of the work. I'm not aware of any potential negative social impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**. This is a thoughtful suggestion; we had the same thought at the beginning. However, local updates in FedAvg-type methods might not bring advantages compared to gradient aggregation [46, 43]. On the other hand, unlike supervised learning, local updates in RL produce different **local policies**, and we do not think it is a good idea to use **on-policy** algorithms on samples collected by different local policies. Second, although the soft penalty approach has a simpler form, it can bring worse performance than constrained methods, e.g., on the Swimmer [37] and Humanoid [10] tasks in MuJoCo environments. With a hard constraint, NPG enjoys stable training performance and has solid theoretical guarantees [2, 22]. In Figure 3, our experiments also verify that FedPPO performs worse. Third, as for first-order PG, [22] thoroughly compares the convergence performances of PG and NPG. Furthermore, recent works have demonstrated that the convergence guarantees of NPG with KL divergence constraints are superior to those of PG [20, 22], motivating closer inspection for practical use. In summary, this work gives second-order methods the same per-iteration communication complexity as first-order methods, while achieving stable and higher rewards in the federated setting. **Q2**. The bounded score function assumption is used in Assumption 4.1 [47], Assumption 5.1 [48], Assumption 4.1 [45], Assumption 4.2 [22], and Assumption 3 [49]. The positive definite assumption is used in Assumption 2.1 [22]. The continuity assumption is used in Assumption 4.1 [47], Assumption 5.1 [48], Assumption 4.1 [45], Assumption 4.2 [22], Assumption 2 [49], and Assumption 6.4 [2] (and the following lemmas). The above assumptions are verified for simple policy parameterizations such as Gaussian policies [47, 50, 51]. **Q3**. True. We track the averaged difference $\frac{\sum_{i=1}^{N}\lVert y_i - y \rVert^2}{N}$ with respect to the number of iterations.
The differences gradually decrease to $0$. Over the course of training, policies become more deterministic and less random, so it is not surprising that $y_i - y \rightarrow 0$ as policies become stable. **Q4**. In the experiments, we run each task ten times with random seeds from $0$ to $9$. Sometimes FedNPG-ADMM achieves higher rewards than FedNPG, while the averaged results drop slightly in the Humanoid task. As the ADMM approach is, in fact, an approximation of the exact aggregation, it is not surprising that it sometimes drops a little. In any case, the two generally have the same convergence rate and similar performance on these tasks. We thank the reviewer for the computing-time suggestion. However, communication time dominates the cost in federated learning [3, 15, 52]. To make the comparisons clearer, with reference to the comments of two other reviewers, we add the communication overhead of each method in Figure 4. The communication time (byte transmission) is reduced by $4$ orders of magnitude in the Swimmer-v4 task and by about $6$ orders of magnitude in the Humanoid-v4 task. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses to my comments and questions. I believe the paper is a useful contribution to the field, and the rebuttal addressed my biggest concern. I would appreciate it if the authors could modify the paper accordingly. I'll raise my evaluation to 6. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your consideration. We will be sure to modify the paper accordingly.
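The averaged consensus error from Q3, $\frac{\sum_{i=1}^{N}\lVert y_i - y \rVert^2}{N}$, can be tracked with a few lines; the shapes and the simulated contraction below are an illustrative sketch, not our training code:

```python
import numpy as np

def avg_consensus_error(local_ys, global_y):
    """Average squared deviation of local variables from the global one:
    (1/N) * sum_i || y_i - y ||^2, as tracked in the Q3 response."""
    diffs = local_ys - global_y            # shape (N, d)
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

rng = np.random.default_rng(0)
N, d = 8, 4
global_y = rng.standard_normal(d)
# Simulate local iterates contracting toward the global variable over time.
errors = []
for t in range(5):
    noise_scale = 0.5 ** t                 # shrinking disagreement
    local_ys = global_y + noise_scale * rng.standard_normal((N, d))
    errors.append(avg_consensus_error(local_ys, global_y))
assert errors[-1] < errors[0]              # the error decays toward 0
```

As policies stabilize during training, the measured curve behaves like this simulated one, decreasing toward zero.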
Rebuttal 1: Rebuttal: Dear Area Chairs and Reviewers, We appreciate your organization and valuable feedback. In the one-page pdf, we add communication costs in Figure 4 and agent selection in Figure 5. In the rebuttals, references [1-45] are from the main paper, and references [46-62] are listed as follows: --- [46] Woodworth, B., Patel, K.K., Stich, S., Dai, Z., Bullins, B., Mcmahan, B., Shamir, O., Srebro, N.: Is local SGD better than minibatch SGD? ICML (2020) [47] Papini, M., Binaghi, D., Canonaco, G., Pirotta, M., Restelli, M.: Stochastic variance-reduced policy gradient. ICML (2018) [48] Xu, P., Gao, F., Gu, Q.: An improved convergence analysis of stochastic variance-reduced policy gradient. In: Proceedings of The 35th Uncertainty in Artificial Intelligence Conference (2020) [49] Ding, D., Zhang, K., Basar, T., Jovanovic, M.: Natural policy gradient primal-dual method for constrained Markov decision processes. NeurIPS (2020) [50] Cortes, C., Mansour, Y., Mohri, M.: Learning bounds for importance weighting. NeurIPS (2010) [51] Pirotta, M., Restelli, M., Bascetta, L.: Adaptive step-size for policy gradient methods. NeurIPS (2013) [52] Li, T., et al: Federated learning: Challenges, methods, and future directions. IEEE signal processing magazine 37(3), 50–60 (2020) [53] Bottou, L., et al: Optimization methods for large-scale machine learning. SIAM Review (2018) [54] Liu, X.Y., et al: Stationary deep reinforcement learning with quantum k-spin Hamiltonian regularization. ICLR Workshop on Physics for Machine Learning (2023) [55] Zhong, V., et al: Improving policy learning via language dynamics distillation. NeurIPS (2022) [56] Dehghani, M., et al: Scaling vision transformers to 22 billion parameters. ICML (2023) [57] Bhagoji, A.N., et al: Analyzing federated learning through an adversarial lens. ICML (2019) [58] Fan, X., et al: Fault-tolerant federated reinforcement learning with theoretical guarantee. 
NeurIPS (2021) [59] Song, C., et al: Federated learning annotated image repository. NeurIPS (2022) [60] Ogier du Terrail, J., et al: Datasets and benchmarks for cross-silo federated learning in realistic healthcare settings. NeurIPS (2022) [61] Shen, Z., et al: Hessian aided policy gradient. ICML (2019) [62] Zhang, X., Hong, M., Dhople, S., Yin, W., Liu, Y.: FedPD: A federated learning framework with adaptivity to non-iid data. IEEE Transactions on Signal Processing 69, 6055–6070 (2021) Pdf: /pdf/dbade10f9fe3c51b8400bdd75fc94073eaff712c.pdf
NeurIPS_2023_submissions_huggingface
2023
SaVeNet: A Scalable Vector Network for Enhanced Molecular Representation Learning
Accept (poster)
Summary: This paper introduces a novel molecule representation network that enhances the learning capacity and scalability through the integration of innovative initialization techniques and activation functions for vector features. The conducted experiments validate the network's proficiency in three distinct molecule representation tasks, demonstrating its capability to effectively learn from data while maintaining scalability. Strengths: - A new equivariant network capable of learning scalar and vector features. - Superior performance and scalability compared to previous methods. Weaknesses: - The presentation of the method can be improved. The symbols and equations employed are unclear, such as the presence of a comma in Eq. 4 and the excessive use of unnecessary new symbols in Eq. 5. Furthermore, the substitution of the symbol for multiplication with the letter 'x' in line 187 could be clarified. Additionally, it is suggested that the authors consider using a figure to illustrate the message passing of scalar and vector features, similar to previous methods like DimeNet. - The core difference between the proposed network and previous networks for vector features, such as PaiNN, remains unclear. While it appears that the primary distinction lies in the different processing of features, it would be beneficial to highlight and discuss this core difference in greater detail. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Regarding the direction noise, it would be helpful to clarify whether theta and phi are deterministic functions of the node type. If they are, applying a rotation to the graph would not change the node type, thereby maintaining the invariance instead of the equivariance of VEN(F_i). So why can VEN(F_i) be an equivariant vector initialization? Further explanations and intuitions could be added to address this point and enhance understanding. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer FRcS, Thank you for your thorough assessment of our work. We value your feedback and would like to address your concerns and suggestions as follows: > Presentation and Equation Clarifications 1. **Eq. 4 Ambiguity**: We've refined Eq. 4 to clarify the scalar-vector tuple representation. $$\begin{split} e_s(s_j, V_j, {r}\_{ij}, \vec{\beta}\_{ij}) =& \phi_s(s_j) \odot \eta\_s(r\_{ij}) \\\\ e_v(s_j, V_j, {r}\_{ij}, \vec{\beta}\_{ij}) =& \text{VA}\big(\phi\_b(\vec{\beta}\_{ij}) \odot \phi_d(s\_j) \odot \eta_d(r\_{ij}) + V_{j} \odot \phi\_v(s\_j) \odot \eta_r(r\_{ij})\big)\end{split}$$ 2. **Concerns about Eq. 5**: We concur that minimizing the use of extraneous symbols is crucial for clarity. Hence, we've revisited Eq. 5 and streamlined the notation. We've also ensured that the scalar and vector interaction pathways are defined with precision, offering coherence with the revised Eq. 5. $$\begin{split} \text{IA}(s, V, r, \vec{\beta}) &= (s\_i, V\_i) + \sum\nolimits\_{j\in\mathcal{N}\_i}{e(s\_j, V\_j, {r}\_{ij}, \vec{\beta}\_{ij})} = (s'\_i, V'\_i) \\\\ s'\_i &= s\_i + \sum\nolimits\_{j\in\mathcal{N}\_i}{e\_{s}(s\_j, V\_j, {r}\_{ij}, \vec{\beta}\_{ij})} \\\\ V'\_i &= V\_i + \sum\nolimits\_{j\in\mathcal{N}\_i}{e\_{v}(s\_j, V\_j, {r}\_{ij}, \vec{\beta}\_{ij})}\end{split}$$ 3. **Symbol for Multiplication in Line 187**: Thank you, this has been corrected as '$\times$'. 4. **Illustrative Figure for Message Passing**: We have incorporated an illustrative figure in uploaded PDF Figure 2. > The difference between SaVeNet and previous networks We'd like to elucidate on the core differences as follows: - **Embedding:** Our approach to feature embedding is fundamentally novel. Rather than relying on fewer 3D features, our goal is to craft embeddings that relay lossless information for the 3D molecular graph and resonate with SaVeNet's design. 
To illustrate, while several existing methods, including SphereNet, ComENet, PaiNN, ET, and EQGAT, tap into diverse features like distance, torsion angle, and directional vectors, our unique feature sets, detailed in Section 3.1, set us on a distinct trajectory right from inception. Furthermore, we've provided both theoretical (Theorem 1) and empirical validation, underpinning the efficacy of our representations in determining the 3D structure. - **Initialization:** Contrary to conventional practice in existing works, where vector features are often initialized to zero, forcing the network to derive these representations intrinsically, our method sidesteps this potential computational pitfall. Our initialization scheme is tailored to be both efficient and resource-conscious. - **Architectural Innovations in Message Passing**: SaVeNet introduces a unique message-passing paradigm, elaborated in Sections 3.1 and 3.2. We've appended Figure 2 as suggested. Our design incorporates our proposed vector activation techniques, which not only distinguish our model from its predecessors but also fortify its reliability and scalability. To provide perspective, while models like PaiNN were conceived primarily for molecular property prediction, their inherent design may not extend seamlessly to more diverse tasks. A case in point is PaiNN's convergence issues on certain demanding targets [1]. Our proposed vector activation function not only alleviated PaiNN's limitations but also enriched its capabilities, as showcased in Table 4's performance comparison. - **Comparative Performance Analysis**: Our evaluations underscore SaVeNet's superiority, which was evident when models like PaiNN and ET were benchmarked on N-Body tasks. The results clearly attest to the innovations and the efficacy SaVeNet brings to computational molecular science.
In summary, from the initial motivations, through feature design and architecture, to real-world application performance, SaVeNet has been meticulously crafted to push the boundaries of what is possible in the domain, while addressing the limitations of its predecessors. > Regarding the direction noise Let us provide clarity on the concerns raised: 1. **Initialization Concerns with $\theta$ and $\phi$**: - Both are intrinsically linked to node types. This design choice is deliberate, taking into account the potential noise from simulations, e.g., DFT. By associating the noise with atomic numbers, our model achieves robustness against variances in node coordinates. 2. **Alternative Vector Initialization Techniques**: - **Direction to the Center of Mass (MCM)**: $$\text{MCM}(i, G) = \sum_{j \in \mathcal{G}}{\frac{c_{j}}{N}} - c_{i}$$ Demonstrably equivariant under rotations, since $$\text{MCM}(i, RG) = \sum_{j \in \mathcal{G}}{\frac{Rc_{j}}{N}} - Rc_{i} = R \cdot \text{MCM}(i, G).$$ - **Direction to Neighborhood Center of Mass (NCM)**: $$\text{NCM}(i, G) = \sum_{j \in \mathcal{N_{i}}}{\frac{c_{j}}{|\mathcal{N_{i}}|}} - c_{i}$$ Provides a localized node representation. - **Direction to the Nearest Node (NN)**: $$\text{NN}(i, G) = c_{j'} - c_{i}, \quad j' = \text{argmin}_{k\in \mathcal{N}} ||c_k - c_i||$$ Relies on the fact that the identity of the closest node is invariant under rotations. 3. **Empirical Results on Molecule3D**:

| Method | Homo | Gap |
|----------|--------|--------|
| MCM | 0.0213 | 0.0321 |
| MCM + DN | 0.0208 | 0.0313 |
| NCM | 0.0213 | 0.0327 |
| NCM + DN | 0.0199 | 0.0301 |
| NN | 0.0211 | 0.0321 |
| NN + DN | 0.0205 | 0.0318 |
| DN | 0.0190 | 0.0290 |

This table elucidates the relevance of introducing directional noise: across initialization techniques, the integration of DN consistently elevates model performance. We're grateful for your insightful comments and hope that these amendments address your concerns effectively. [1] A. Morehead and J.
Cheng, ‘Geometry-Complete Perceptron Networks for 3D Molecular Graphs’, _AAAI Workshop on Deep Learning on Graphs: Methods and Applications_, 2023. --- Rebuttal 2: Title: Response to reviewer FRcS Comment: Thank you for taking the time to provide insightful comments. We are glad that you recognize the soundness of our work as `excellent` and our contribution as `good`. We revised our paper based on your suggestions to improve the `presentation` of our work and addressed your questions in our previous response. Please let us know if there are any additional questions or comments, we would be more than happy to address them. Thank you!
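As a numerical complement to the equivariance discussion in the thread above, the rotation-equivariance of the center-of-mass style initialization can be checked directly. The sketch below uses hypothetical coordinates, an unweighted center of mass, and a random rotation; it is an illustration, not SaVeNet's actual code:

```python
import numpy as np

def mcm(i, coords):
    """Direction from node i to the (unweighted) center of mass:
    MCM(i, G) = (1/N) * sum_j c_j - c_i."""
    return coords.mean(axis=0) - coords[i]

def random_rotation(rng):
    """A random 3x3 rotation matrix via QR decomposition."""
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q *= np.sign(np.diag(R))        # fix column signs for a well-defined Q
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1               # ensure det = +1 (rotation, not reflection)
    return Q

rng = np.random.default_rng(0)
coords = rng.standard_normal((6, 3))    # 6 hypothetical atom positions
Rm = random_rotation(rng)

# Equivariance: rotating the graph rotates the initialization vector,
# MCM(i, R G) = R * MCM(i, G), exactly as in the rebuttal's derivation.
for i in range(len(coords)):
    assert np.allclose(mcm(i, coords @ Rm.T), Rm @ mcm(i, coords), atol=1e-10)
```

The same check applies to the NCM and NN variants, since each is a difference of positions that transform identically under the rotation.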
Summary: This paper proposes an effective and efficient equivariant graph neural network for geometric learning on molecules. The model encodes 3D graphs with node types and coordinates and outputs scalar and vector representations. The message passing process is purely scalar-based, which enjoys more efficiency than baselines using irreducible representations and high-order tensor objects. A vector initialization based on the spherical coordinate system is utilized for better numerical stability. Experiments on QM9, N-body and Molecule3D demonstrate the superiority of the proposed model over the baselines. Strengths: 1. The proposed scalar-based message passing scheme enjoys theoretically better efficiency and empirically better performance compared to existing baselines. 2. The experiments are thorough, including three popular benchmarks and a variety of baselines. 3. The empirical analysis on efficiency is comprehensive, covering time for forward/backward pass, FLOPs, MACs and memory consumption. Weaknesses: 1. Several points claimed in the contribution are not well supported: - The authors state that the vector initialization can ensure numerical stability and speed up model convergence. However, no theoretical analysis of why it can ensure numerical stability is provided, and neither experimental evidence nor theoretical proof of speeding up convergence is provided. - The scaled versions of the baselines should also be compared with SAVENET-L to show that the proposed model benefits more from the scaling (i.e., has better scalability). Moreover, it is not shown whether the scalability of the proposed model holds as the model size keeps increasing. 2. The presentation of the model structure could be improved (e.g., add an overview figure). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What is the $h_j$ in Eq.5? 2. What do $n, l_e, l_d$ represent in the theoretical complexity analysis? 3. Are the coordinates fixed during message passing? 4.
Why can the directional initialization improve numerical stability? What is the reason for using the spherical coordinate system as initialization? What is the difference if we use random vectors as initialization and anneal their norm to zero during training? 5. I think the computational scalability of GNNs is mainly restricted by fully connected message passing, which has quadratic complexity with respect to the number of nodes. Therefore, in large graphs (e.g. proteins), people often use k-nearest neighbors to reduce the complexity to be linear in the number of nodes. However, the expressiveness of scalar-based equivariant GNNs can only be ensured in nearly fully connected message passing [1]. So can you show that the proposed models maintain their superiority over the baselines when adapted to larger graphs like proteins, where edges might have to be much sparser than in fully connected graphs? Or, under the current experimental settings, will the superiority be maintained if the edges get sparser (e.g., using k-nearest neighbors for message passing and decreasing $K$, or decreasing the cutoff in the radial graph)? 6. Will the scalability hold as the hidden size and the number of layers keep increasing? [1] Scalars are universal: Equivariant machine learning, structured like classical physics Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: When scaling to larger graphs (e.g. proteins), the edges may have to be sparser (e.g. K-nearest neighbors), and the scalar-based message passing will potentially have worse expressiveness.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer jP4K, Firstly, we extend our gratitude for the careful review and insightful feedback on our manuscript. We address each comment in detail to offer a clearer understanding of our work. > Numerical stability and model convergence 1. **Relevance of Vector Initialization**: Recent works such as SchNet, DimeNet, SphereNet, and ComENet have established that enhancing the richness of features, can significantly boost performance in molecular tasks. Initializing with vectorial features, as opposed to empty or sparse values, provides the network with a richer context from the outset. This better informed start allows the model to begin its learning process from a more advantageous position. 2. **Stability and Convergence**: PaiNN's challenges with the N-Body dataset highlight instability issues seen in some models. Rather than simply failing to converge, these models can have error spikes during training due to numerical instability. This often arises in networks using vector representations, especially when merging vector and scalar values, like in architectures such as GVP, PaiNN, ET, and EQGAT. If these networks perform vector-scalar combinations too early, before directional data is introduced, they can become unstable, as observed with PaiNN. This is particularly problematic when the sum of neighboring vectors is close to zero, leading to gradient explosions. Hence, initiating robust representations is pivotal, especially when early vector-scalar operations are involved. > Scaling the baselines To address this question, we conducted experiments on the QM9 dataset to scale high-performing baselines to investigate their scalability compared to SaVeNet-L. **1. Baseline Selection**: We intentionally chose to scale both vectorial and spherical representation models to provide a broad-based comparison. - **ET**, which excels in performance with aggregated std. MAE and log MAE metrics, representing vectorial models. 
- **ComENet** was chosen for its efficiency advancements over SphereNet, representing spherical models. **2. Metrics and Findings from the Comparison**: From our experiments below, a few key insights emerge:

|Model|std.|log|Batch|Memory|Latency|Samples/s|FLOPs|Param.|MACs|
|---|---|---|---|---|---|---|---|---|---|
|ET|0.84|-5.90|418|3.66|49.5|1280|42.98|6.86|21.40|
|ET-L|0.84|-5.96|287|5.33|74.9|832|63.36|10.00|31.55|
|ComENet|0.93|-5.69|1174|0.96|26.4|2368|14.36|3.81|7.18|
|ComENet-L|0.97|-5.74|748|4.45|35.7|1728|39.50|11.30|19.75|

Observing the comparison, it is evident that SaVeNet not only demonstrates superior performance with the base model but also capitalizes on its design to scale effectively, as reflected in SaVeNet-L's results. > Illustration of the model structure As suggested, we uploaded PDF Figure 2 for an illustration. > What is $h_j$ in Eq.5? The $h_j$ denotes the latent representation of a node $j$, computed by applying a linear transformation without bias to the scalar representations $s$. The linear transformation is applied to distinguish between a source node $j$ and a destination node $i$. However, recognizing that the transformation functions $\phi_s$ and $\phi_d$ can encapsulate this linear transformation, we decided to simplify our equation by omitting $h_j$. The revised version of the equations is as follows: $$\begin{eqnarray} e_s(s_j, V_j, {r}\_{ij}, \vec{\beta}\_{ij}) &=& \phi_s(s_j) \odot \eta\_s(r\_{ij}) \\\\ e_v(s_j, V_j, {r}\_{ij}, \vec{\beta}\_{ij}) &=& \text{VA}\big(\phi\_b(\vec{\beta}\_{ij}) \odot \phi_d(s\_j) \odot \eta_d(r\_{ij}) + V_{j} \odot \phi\_v(s\_j) \odot \eta_r(r\_{ij})\big) \end{eqnarray}$$ > What do $n$, $l_e$, $l_d$ represent in the theoretical complexity analysis? - $n$: denotes the number of nodes. - $l_e$: denotes the number of encoder layers in the model. - $l_d$: denotes the number of decoder layers. > Are the coordinates fixed during message passing?
In our method, both invariant and equivariant features are extracted from the input graph's coordinates. While these features, including distance, direction information $\vec{\beta}$ (with components {$\vec{d}\_{ij},\vec{t}\_{ij},\vec{o}\_{ij}$}), and node types are utilized during message passing, the input graph's coordinates and their representations remain unchanged throughout the process.

> Clarifications on directional noise

We appreciate your in-depth examination of our directional initialization approach. Let's unpack the underlying motivations and empirical results:

1. **Spherical Coordinate System for Initialization**: Using the spherical coordinate system reduces parameters. By initializing with angles $\phi$ and $\theta$, we bypass the need to set three distinct values for x, y, and z, ensuring efficiency and minimizing redundancy.
2. **Random Vectors vs. Spherical Initialization**: Raw random vectors require normalization to be meaningful. In high dimensions, adjusting a single axis might have a negligible impact, potentially introducing noise.
3. **Additional experiment**: As suggested, testing our approach against random initialization without the assistance of a spherical coordinate system yielded:

| Method | Homo | Gap |
|----------|--------|--------|
| Random | 0.0223 | 0.0342 |
| DN (Ours) | 0.0190 | 0.0290 |

Clearly, performance drops without spherical-based initialization, showing that handling raw noise is less effective than adjusting angular values.

> Experiments with large graphs, sparser graphs, and scaling SaVeNet further

We appreciate the reviewer's experimental recommendations. Given the rebuttal's constraints, detailed descriptions and results of these experiments can be found in the general response sections: Experiments 1, 2, and 3. We believe our responses elucidate the specific design choices and methodologies in our paper. We hope that our clarifications address your concerns adequately.
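For concreteness, a minimal sketch of angle-based direction initialization as described above, using the standard spherical-to-Cartesian conversion. Function and variable names are ours, not the paper's; the exact sampling distribution used in SaVeNet is not specified in this thread.

```python
import numpy as np

def init_directions(num_nodes, rng=None):
    """Sample per-node direction vectors from two angles (theta, phi).

    Only two values per node are drawn instead of three Cartesian
    components, and the resulting vectors are unit-norm by construction,
    so no separate normalization step is needed.
    """
    rng = np.random.default_rng(rng)
    theta = rng.uniform(0.0, np.pi, size=num_nodes)    # polar angle
    phi = rng.uniform(0.0, 2 * np.pi, size=num_nodes)  # azimuthal angle
    return np.stack(
        [np.sin(theta) * np.cos(phi),
         np.sin(theta) * np.sin(phi),
         np.cos(theta)],
        axis=-1,  # shape (num_nodes, 3)
    )

vecs = init_directions(8, rng=0)
assert np.allclose(np.linalg.norm(vecs, axis=-1), 1.0)  # unit directions
```

This also illustrates the contrast with raw random vectors, which would require an explicit normalization pass before they carry usable directional information.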
--- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thanks for your detailed response, which addressed some of my concerns. I will maintain my rating.
Summary: This paper proposes an SE(3)-equivariant model called SaVeNet, designed to accommodate various geometric requirements. The proposed framework can effectively scale with the introduction of directional noise. Theoretical analysis and empirical results on several datasets are provided to validate the efficiency and expressiveness of SaVeNet.

Strengths:
* The proposed method achieves outstanding performance across several synthetic and real-world datasets.
* SaVeNet exhibits better efficiency than baseline methods.

Weaknesses:
* Many claims about the proposed method and related works lack supportive evidence. For example,
  * Line 40, "Novel approaches for initializing ... achieving a balance between maintaining numerical stability, facilitating faster convergence, and enhancing the model’s ability to generalize to new datasets." This sentence states that the proposed approach can maintain numerical stability and facilitate faster convergence, but there is a lack of convincing empirical ablation studies or theoretical evidence supporting these claims.
  * Line 58-63, "one primary problem...limited performance or scalability and cannot equally contribute towards performance improvements in the network." These sentences make informative claims about the limitations of existing methods, but the authors do not provide detailed discussions or analyses to support these points, weakening the motivation for SaVeNet.
  * Line 63-68, the authors propose to tackle the scalability challenge by stacking multiple encoder layers, which is a widely adopted operation in equivariant models. However, it is unclear why other baseline models (especially those vector-based models such as PaiNN) cannot improve scalability using the same operations.
* The methodology section (Section 3) is poorly organized and written.
  * Some notations are not well defined. E.g., $IA$, $s_i$ and $e$ in Equation 4 are not defined. $h_j$ in Equation 5 is not defined.
* Some equations are difficult to understand. E.g., the procedure of updating vector $V'_i$ from Equation 4 is unclear. It seems that some details of the equation are missing, making it difficult to integrate Equation 5 and Equation 4. * The overall novelty of SaVeNet is limited. Some techniques in the framework (e.g., the scalar/vector-based equivariant operations) have been studied in related works such as PaiNN, and ET. The newly proposed techniques, direction noise and vector activations, account for the main contribution of this paper. However, the ablation results in Table 6 show that removing these two modules only results in a slight performance drop, which cannot validate their importance. Accordingly, the technical contribution of SaVeNet is limited. Considering that many details in the methodology section are unclear, all comments are made based on the current version. * There are some grammatical mistakes. * Line 47, `model's` should be `model` * Line 187, $R^{Cx3}$ should be $R^{C \times 3}$ In summary, while the proposed method demonstrates exceptional performance across various downstream tasks and exhibits promising scalability, the limited technical novelty and issues with the presentation render the current version unsuitable for acceptance in NeurIPS. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * The metrics std. MAE and log MAE lack detailed descriptions. How did the authors calculate these metrics? * The proposed direction noise module's SE(3)-equivariance is unclear, as there is no provided theoretical proof. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors well clarify the limitations and potential negative societal impact of their work in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer QLtV,

Thank you for taking the time to review our manuscript and for your invaluable feedback. We acknowledge the importance of clarity in our presentation, and we address your concerns as follows.

> The metrics std. MAE and log MAE

The standardized MAE is derived by normalizing the model's MAE with the standard deviation of each target and averaging across targets. Formally:

$$\text{std. MAE}(D, x, y) = \frac{1}{T} \sum_{t}^T \Big(\frac{1}{D} \sum_{i}^{D}\frac{|x_{t,i}-y_{t,i}|}{\sigma(y_{t})} \Big) $$

Here, $D$ is the dataset size, $T$ is the number of targets, and $x_{t,i}$ and $y_{t,i}$ represent the model's prediction and the ground truth for target $t$ of sample $i$, respectively. We also introduce the log MAE metric:

$$ \text{log MAE}(D, x, y) = \frac{1}{T} \sum_{t}^T \log\Big(\frac{1}{D} \sum_{i}^{D}\frac{|x_{t,i}-y_{t,i}|}{\sigma(y_{t})} \Big) $$

The log MAE smooths the scores so that the errors are not dominated by a few difficult targets such as $\epsilon_{homo}$.

> Symmetrical properties of the direction noise module

We appreciate your query about the SE(3)-equivariance of our direction noise module. Briefly, our direction noise is constructed based on node types, introducing SO(3) invariance. This allows for latent-space traversal typically inaccessible to equivariant models. The model retains its equivariance during inference as the noise decays over training. Its introduction heightens the learning challenge, aiding model scaling and countering issues like over-smoothing [2]. Therefore, the module rests on the principle of SO(3) invariance and is bolstered by practical results. In addition, we conducted an ablation study with three distinct equivariant initialization techniques. Please refer to our response to reviewer FRcS regarding the experiment using `alternative vector initialization techniques`.
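The two metrics defined above can be computed directly from the formulas; a sketch assuming predictions and targets are stacked as `(D, T)` arrays (samples by targets):

```python
import numpy as np

def std_mae(pred, true):
    """Standardized MAE: per-target MAE normalized by the target's
    standard deviation, then averaged over targets.
    pred, true: arrays of shape (D, T)."""
    err = np.abs(pred - true) / np.std(true, axis=0)  # (D, T)
    return err.mean(axis=0).mean()

def log_mae(pred, true):
    """log MAE: the log is taken per target *before* averaging over
    targets, so a few difficult targets cannot dominate the score."""
    err = np.abs(pred - true) / np.std(true, axis=0)
    return np.log(err.mean(axis=0)).mean()
```

Note the only difference between the two is whether the per-target means are averaged directly or in log space.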
___ > Clarification on the introduction section - **Line 40: Ablation Study and Support to Generalize New Datasets:** SaVeNet was designed to handle large-scale real-world data without sacrificing representation quality. Beyond mere model depth, it's tailored for large molecules and datasets. As evidenced in our evaluation on Molecule3D, SaVeNet accelerates training and inference, improving task performance by up to 125 times. This is crucial in today's ML landscape with growing data scales. Our focus is on maintaining model effectiveness as data scales increase. Referring to Table 6, we conducted an ablation study examining the effects of DN, VA and layer variations on the QM9 dataset. - **Line 58-63 - Limitations of Existing Methods**: We provided a detailed comparison with the related methods in lines 223-245, highlighting specific limitations of cited works, and more explicitly connecting them with the motivation for SaVeNet. We are happy to discuss the novelty of our work during the reviewer-author discussion period. - **Line 63-68: Scalability through Multiple Encoder Layers:** While stacking multiple encoder layers is a general approach for enhancing model scalability, not all models benefit equally from this operation. Our reference to [1] underscores this very point – the challenges in numerical stability faced by PaiNN and other existing methods. As demonstrated in Table 4, PaiNN exhibited convergence issues for complex tasks, such as ES(20), G+ES(20), and L+ES(20) [1]. This suggests that merely stacking layers might not necessarily lead to improved scalability for all models. **This lack of convergence was not a mere reflection of the model's depth but of inherent limitations in its architecture when tasked with these complex objectives.** > **Clarification on Notations:** - $\text{IA}$: This denotes the interatomic interactions within the molecule. - $s_i$: Represents the scalar representations associated with node $i$ in our graph. 
- $e$: denotes the message functions for scalar and vector tuples, specifically formulated as $e = (e_{s}, e_{v})$.
- $h_j$: This notation was intended to delineate the linearly transformed version of the scalar representations, $s$.

2. **Revised Equation for Enhanced Clarity**: Following your feedback, we reevaluated our notations in Section 3 and implemented amendments to further the clarity and coherence of our presentation. Additionally, we observed that the $h_j$ operation can be subsumed since $\phi_s$, $\phi_d$, and $\phi_v$ inherently encapsulate this linear transformation. This realization enabled us to simplify our representation further. Our revised equations, integrating these clarifications, are articulated as follows:

$$\begin{eqnarray} e_s(s_j, V_j, {r}\_{ij}, \vec{\beta}\_{ij}) &=& \phi_s(s_j) \odot \eta\_s(r\_{ij}) \\\\ e_v(s_j, V_j, {r}\_{ij}, \vec{\beta}\_{ij}) &=& \text{VA}\big(\phi\_b(\vec{\beta}\_{ij}) \odot \phi_d(s\_j) \odot \eta_d(r\_{ij}) + V_{j} \odot \phi\_v(s\_j) \odot \eta_r(r\_{ij})\big) \end{eqnarray}$$

Eq. 4 and 5: The disconnect between Equations 4 and 5 is clarified by expanding the definitions. We split Equation 4 into two parts for clarity.

$$\begin{split} \text{IA}(s, V, r, \vec{\beta}) &= (s\_i, V\_i) + \sum\nolimits\_{j\in\mathcal{N}\_i}{e(s\_j, V\_j, {r}\_{ij}, \vec{\beta}\_{ij})} = (s'\_i, V'\_i) \\\\ s'\_i &= s\_i + \sum\nolimits\_{j\in\mathcal{N}\_i}{e\_{s}(s\_j, V\_j, {r}\_{ij}, \vec{\beta}\_{ij})} \\\\ V'\_i &= V\_i + \sum\nolimits\_{j\in\mathcal{N}\_i}{e\_{v}(s\_j, V\_j, {r}\_{ij}, \vec{\beta}\_{ij})}\end{split}$$

> grammatical mistakes

Thank you, these have been corrected.

We thank the reviewer for the detailed analysis of the proposed method. **We revised the paper and incorporated all the feedback in the related sections.** We believe that the revised version represents the proposed method effectively.

[1] A. Morehead and J. Cheng, "Geometry-Complete Perceptron Networks for 3D Molecular Graphs" [2] D. Chen, Y. Lin, W.
Li, P. Li, J. Zhou, and X. Sun, “Measuring and Relieving the Over-Smoothing Problem for Graph Neural Networks from the Topological View” --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: The responses address some of my questions, such as 'the metrics std. MAE and log MAE' and 'Revised Equation for Enhanced Clarity'. However, some concerns still remain: 1. To further substantiate the claim that other vector-based baseline models (e.g., ET, PaiNN) face greater challenges than SAVENET in enhancing scalability by stacking multiple encoder layers, I recommend the authors enlarge the baseline methods on some molecular datasets and provide a direct comparison of scalability between the different approaches. Although some evidence in the N-Body experiment suggests PaiNN's instability, there is still a lack of direct scalability comparison. Moreover, the data size of the N-Body experiment may not be sufficient to support such analysis. 2. I remain unclear about the response regarding the 'Symmetrical properties of the direction noise module.' For the claim, 'The model retains its equivariance during inference as the noise decays over training,' does this mean that the noise initialization will be reset to null vectors during the inference to ensure SE(3)-equivariance? If not, the invariant direction noise will disrupt the SE(3) symmetry of vector representations, as also mentioned by reviewer FRcS. 3. I maintain my view that the overall technical novelty of SaVeNet is limited. The authors have not provided further evidence to demonstrate the significance of the newly proposed techniques (direction noise and vector activations) in enhancing the model's expressiveness and scalability. Accordingly, I will maintain my rating score. 
--- Reply to Comment 1.1.1: Title: Thanks for the feedback - Novelty and Contribution Comment: > Some techniques in the framework (e.g., the scalar/vector-based equivariant operations) have been studied in related works such as PaiNN, and ET. > The novelty of SaVeNet We genuinely appreciate your feedback and the opportunity to further discuss the novelty and contributions of our work on SaVeNet. We would like to address your concerns in detail. 1. **On the Scalar/Vector-Based Equivariant Operations**: We concur that our approach utilizes some scalar/vector-based equivariant operations such as vector transformations that share weights for spatial dimensions, mixing scalar and Euclidean norm of vector representations, similar to PaiNN and ET. Starting with GVP [1], seminal works in this domain have **consistently adopted these scalar/vector-based equivariant operations**. The uniqueness of these frameworks, PaiNN included, is *not* primarily in introducing novel techniques but rather in their **distinct message-passing schemes tailored for specific research motivations**. This observation underpins the essence of our assertion: **while the operations may appear analogous, the broader application and integration offer differentiating contributions**. 2. **On the Novelty and Motivation of SaVeNet**: The gap we identified in existing literature revolves around the **dual challenges of scalability and effectiveness** in molecular representation learning, an issue not explicitly addressed by prominent works like PaiNN or ET. **We believe this research gap is of significant importance and we proved it is not easy to scale a model** with additional experiments suggested by the reviewer (provided in our following response). 3. **Delineating the Novelty and Contributions of SaVeNet**: - **Novel Study Motivation and Insights**: SaVeNet is rooted in addressing the balance between expressiveness and scalability in molecular representation learning. 
As shown in the updated Figure 1, current SOTA methods either show high latency or lower effectiveness. **This focus differentiates our approach from other models in the current literature.**
- **Unique Methodological Approaches**: While SaVeNet does incorporate foundational scalar and vector features, it diverges from existing work in its **unique message-passing scheme**:
  - `Efficient embedding and novel initialization`: Unlike existing models \[1]\[2]\[3]\[4], SaVeNet leverages a unique feature set, which is elaborated in Section 3.1 and sets us on a distinct trajectory right from inception. These features, supported by Theorem 1, allow us to represent 3D molecular graphs adeptly. Further, our unique initialization strategy avoids the pitfalls associated with null-vector initializations, commonly observed in current models. It may look simple, but it is significant (ours is the first work in this line to propose it). This also **demonstrates the applicability of initialization and noise schemes** and paves the way for future investigations by the community, potentially leading to even more efficient and effective modeling techniques.
  - `Distinctive architecture and message passing`: Central to SaVeNet is a **distinctive message-passing architecture**, detailed in Sections 3.1 and 3.2. Our design incorporates our proposed vector activation techniques, which not only delineate our model from predecessors but also fortify its flexibility and scalability. To provide perspective, while models like PaiNN were conceived primarily for molecular property prediction, their inherent design may not extend seamlessly to more diverse tasks and datasets.
  - `Insightful performance metrics with comprehensive evaluations`: Beyond conventional metrics, our evaluations delve deep into stability and scalability, particularly on equivariant tasks. **The new modules in SaVeNet are architected for both effectiveness and scalability, rather than mere performance enhancement**.
SaVeNet's assessments are broad-based, considering training speed, model intricacy, and memory overhead, aiming to redefine benchmarks in the domain (first work in the field). In summation, SaVeNet's novel message-passing scheme, and its focus on both scalability and effectiveness in molecular representation learning, distinctively positions it in the current literature. We trust this elaboration addresses your concerns and shines light on the nuances of our contributions. [1] J. Gasteiger, J. Groß, and S. Günnemann, “Directional Message Passing for Molecular Graphs,” [2] Y. Liu _et al._, “Spherical Message Passing for 3D Molecular Graphs,” [3] L. Wang, Y. Liu, Y. Lin, H. Liu, and S. Ji, “ComENet: Towards complete and efficient message passing for 3D molecular graphs,” [4] K. Schütt, O. Unke, and M. Gastegger, “Equivariant message passing for the prediction of tensorial properties and molecular spectra,” --- Reply to Comment 1.1.2: Title: Further scalability comparison Comment: > To further substantiate the claim that other vector-based baseline models (e.g., ET, PaiNN) face greater challenges than SAVENET in enhancing scalability by stacking multiple encoder layers, I recommend the authors enlarge the baseline methods on some molecular datasets and provide a direct comparison of scalability between the different approaches. Although some evidence in the N-Body experiment suggests PaiNN's instability, there is still a lack of direct scalability comparison. Moreover, the data size of the N-Body experiment may not be sufficient to support such analysis We appreciate your feedback concerning the comparison of scalability between our model, SaVeNet, and other baseline models, particularly PaiNN. Here, we aim to address the points you raised with clarity and precision: 1. **Revisiting Scalability Tests**: Following your suggestion, we revisited our scalability experiments for SaVeNet. 
This was **further expanded** by insights from Reviewer jP4K, where scaling was undertaken for notable and more recent baselines such as ComENet and ET. Our findings revealed that when subjected to scaling, these baselines showed either a **degradation in performance or only marginal improvements when juxtaposed with SaVeNet**.

2. **Scaling Tests for PaiNN**: While we acknowledge the merit in understanding how other models scale relative to ours, it's worth noting that foundational works such as PaiNN and ET **didn't focus on scalability in their respective publications**. One of our primary contributions is the scalable variant of SaVeNet. Regarding scaling, the PaiNN model, as you correctly noted, poses distinct challenges. During our meticulous scaling tests, the outcomes were often muddled by the sporadic appearance of 'NaN' outputs. This not only hindered a direct comparison but also indicated underlying instability in the model when subjected to intensive scaling.

3. **Further Evaluation with PaiNN on Molecule3D Dataset**: PaiNN demonstrated impressive performance on molecular property prediction on the QM9 dataset. However, the flexibility of the network was not examined. In our experiments, we found that **direct application of PaiNN is unable to generalize to difficult datasets such as the large-scale property prediction dataset, Molecule3D**. To provide a comprehensive understanding, **we conducted rigorous evaluations with PaiNN on the large-scale Molecule3D dataset**. Our evaluation methodology was thorough, encompassing an exhaustive grid search that also mirrored the scales used for SaVeNet. Specific parameters were:

|Parameter|Search Space|
|---|---|
|Layers|3, 6, 8|
|Hidden dims|64, 128, 196, 256|
|Learning rate|1e-4, 2e-4, 5e-4|
|Output dims|128, 256|

**These observations were consistent with those from the N-Body dataset**, where gradient explosions occurred prematurely.
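The grid search over the table above can be enumerated straightforwardly; a sketch where `train_and_eval` is a hypothetical placeholder for a full training run, not a real API:

```python
from itertools import product

# Search space from the table above.
search_space = {
    "layers": [3, 6, 8],
    "hidden_dims": [64, 128, 196, 256],
    "lr": [1e-4, 2e-4, 5e-4],
    "output_dims": [128, 256],
}

# Cartesian product of all hyperparameter choices.
configs = [dict(zip(search_space, values))
           for values in product(*search_space.values())]
assert len(configs) == 3 * 4 * 3 * 2  # 72 runs per model

# for cfg in configs:
#     score = train_and_eval(model="PaiNN", dataset="Molecule3D", **cfg)
```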
We trust this response provides clarity and reaffirms our commitment to scientific rigor. We are eager to consider any further recommendations to enhance the value of our work. --- Reply to Comment 1.1.3: Title: Further clarifications Comment: > I remain unclear about the response regarding the 'Symmetrical properties of the direction noise module.' For the claim, 'The model retains its equivariance during inference as the noise decays over training,' does this mean that the noise initialization will be reset to null vectors during the inference to ensure SE(3)-equivariance? If not, the invariant direction noise will disrupt the SE(3) symmetry of vector representations, as also mentioned by reviewer FRcS. Thank you for your meticulous attention to the SE(3)-equivariance in relation to our direction noise module. We acknowledge the concerns raised regarding the symmetrical properties of our module, particularly in the context of its potential to disrupt SE(3) symmetry during inference. To address the primary concerns: 1. **Over-Smoothing in GNNs**: Message-passing-based Graph Neural Networks (GNNs) face challenges associated with over-smoothing \[5]\[8]. These phenomena restrict the expressivity of the network and, subsequently, its scaling capabilities. The over-smoothing problem, in particular, can lead to homogenized node representations, rendering them indistinguishable and limiting model performance. 2. **Direction Noise as a Solution**: To counteract these challenges and fulfill our vision for a scalable vector network, we introduced direction noise. This is conceptually analogous to introducing regularization in traditional neural networks, wherein certain modifications during training lead to enhanced generalization during inference, even though they might seem counterintuitive initially. 
For stability, it provides initial directions during the early stages of training and allows the model to explore a larger search space by not imposing equivariance restrictions during training. Therefore, the trained model is more robust and can handle unseen directional information.

3. **Equivariance Preservation**: The direction noise, though introduced during training, is designed to decay as training progresses. By the end of training, this noise approaches zero, ensuring that the model, when used for inference, starts with null vector representations. Hence, the noise does not disrupt the inherent SE(3)-equivariance of the model during actual application, preserving the desired symmetry.

4. **Analogy with Regularization Techniques**: A parallel can be drawn with regularization techniques employed in various machine learning models. Techniques like Dropout [6], DropNode [7], DropEdge [8], and NoisyNodes [9] introduce perturbations during training to prevent over-fitting and over-smoothing, yet these perturbations are absent during inference, ensuring the model's intended functionality is undisturbed. Similarly, our direction noise serves its purpose during training and gracefully recedes to ensure SE(3)-equivariance during inference.

In light of the aforementioned clarifications, our model effectively leverages the direction noise only as a transient training mechanism to enhance its learning capacity without compromising its symmetry properties in practical applications. **We revised Sec 1 & 3.1 of the paper to make the above clearer to readers.** We hope this response elucidates our design rationale and addresses your concerns.

[5] Q. Li, Z. Han, and X. Wu, “Deeper Insights Into Graph Convolutional Networks for Semi-Supervised Learning,” [6] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” [7] T. H. Do, D. M. Nguyen, G. Bekoulis, A. Munteanu, and N.
Deligiannis, “Graph convolutional neural networks with node transition probability-based message passing and DropNode regularization,” [8] Y. Rong, W. Huang, T. Xu, and J. Huang, “DropEdge: Towards Deep Graph Convolutional Networks on Node Classification,” [9] J. Godwin _et al._, “Simple GNN Regularisation for 3D Molecular Property Prediction and Beyond,”
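The transient decay described in point 3 can be sketched as a simple schedule. The linear form below is an illustrative assumption, as the exact schedule is not specified in this thread:

```python
def direction_noise_scale(step, total_steps):
    """Linear decay of the direction-noise magnitude from 1 to 0 over
    training (illustrative; the paper's actual schedule may differ).
    At scale 0, the noise vanishes and the model effectively starts
    from null vectors, so SE(3)-equivariance holds at inference."""
    return max(0.0, 1.0 - step / total_steps)

# Applied to the sampled directions each step, e.g.:
# noisy_init = direction_noise_scale(step, total_steps) * sampled_directions
assert direction_noise_scale(0, 100) == 1.0    # full noise at the start
assert direction_noise_scale(100, 100) == 0.0  # no noise at the end
```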
Summary: This paper proposes a framework called SaVeNet for geometric representation learning of molecules. The paper includes theoretical analysis and empirical experiments to demonstrate the superiority of SaVeNet over existing methods in terms of efficiency and expressiveness.

Strengths:
1. This paper proposes an efficient geometric encoding, a novel directional noising strategy in the spherical coordinate system, and a novel vector activation function.
2. The experimental results show that SaVeNet achieves very good performance over multiple invariant and equivariant molecular tasks.
3. As an equivariant model, the efficiency of SaVeNet is pretty good.

Weaknesses: It will help readers to better understand if the authors can add a figure of the proposed SaVeNet’s architecture, including an illustration of the components described in Sec 3.1 and 3.2.

Technical Quality: 4 excellent
Clarity: 2 fair

Questions for Authors:
1. I’m wondering if the direction noise and vector activation are also applicable to other equivariant models?
2. Can you add more equivariant baselines like PaiNN to Figure 1 for efficiency comparison?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer T8Du, Firstly, we'd like to extend our gratitude for your detailed review and the constructive feedback provided on our manuscript. We would like to address your questions as follows: > Applicability of direction noise and vector activation to other equivariant models We appreciate your thought-provoking query about the potential adaptability of our proposed direction noise and vector activation to other equivariant models. At its core, the novelty of our approach in integrating direction noise and vector activation is designed with scalability and robustness in mind. This adaptability, as you rightly inferred, opens the door to improving the performance of other existing models. For instance, take the case of PaiNN. Prior works, such as [1], couldn't report PaiNN's results due to issues related to numerical stability. In response, we took the initiative to enhance PaiNN by incorporating our proposed vector activation functions into its interaction layers. This intervention not only resolved the aforementioned stability concern but also empowered us to compare PaiNN's performance with our model, as demonstrated in our performance comparison in Table 4. While our methods indeed present a promising avenue for refining other models, we acknowledge that custom tailoring might be essential. Achieving optimal results could require model-specific adjustments that respect the original architecture's nuances and intent. Our proposed framework, with its innovative features, serves as a template for upcoming research. While the direct adaptation of our innovations—namely the direction noise and vector activation—on other models was beyond this study's purview, we are confident that our findings lay the groundwork for subsequent explorations. We envisage a landscape where researchers, inspired by our findings, dive deeper into harnessing these concepts, eventually unearthing even more potent modeling techniques. 
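The paper's exact VA operator is not reproduced in this thread. As an illustration of why activations built on invariant quantities preserve equivariance, a common pattern is to gate each vector channel by a nonlinearity of its norm (an illustrative stand-in, not the authors' VA):

```python
import numpy as np

def gated_vector_activation(V):
    """Gate each vector channel by a sigmoid of its Euclidean norm.

    V: array of shape (num_nodes, channels, 3). The gate depends only on
    the rotation-invariant norm, so rotating the input rotates the
    output identically, i.e. equivariance is preserved.
    """
    norms = np.linalg.norm(V, axis=-1, keepdims=True)  # invariant scalars
    gate = 1.0 / (1.0 + np.exp(-norms))                # sigmoid nonlinearity
    return gate * V
```

Because the gate is a function of norms only, applying any rotation before or after the activation yields the same result, which is the property that makes such activations safe inside equivariant interaction layers.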
> Addition of more equivariant baselines to Figure 1 Thank you for your constructive suggestion regarding the expansion of equivariant baselines in our efficiency comparison. We concur that broadening the range of baselines would offer a richer, more comprehensive insight into our analysis. While our initial evaluations were anchored around top-performing models, we appreciate the significance of juxtaposing our approach with well-established benchmarks, especially those that hold high regard in the community, like PaiNN. In response, we've undertaken supplementary efficiency tests, **incorporating two additional equivariant baselines: PaiNN and ET [2]**. These enhancements have been reflected in Figure 1 of the uploaded PDF, and we ensure their inclusion in the subsequent version of our manuscript. Upon analysis, an intriguing observation emerges from Figure 1: PaiNN, despite boasting the best latency among all contenders, unfortunately, falls short in expressiveness. This reiterates a pivotal point we emphasize in our work — the delicate balance between expressiveness and efficiency. We staunchly believe that **efficiency should not be pursued in isolation but should harmoniously coexist with effectiveness**. Again, we appreciate your feedback and hope that our enhanced comparison provides a more holistic view of our contribution in the context of the current landscape. > SaVeNet architecture illustration We thank the reviewer for the suggestion. We agree with you and believe this illustration can provide readers with a clearer mental model of our architecture, amplifying their comprehension and facilitating deeper engagements with our work. The illustration of the overall architecture including the components in Sec 3.1 and 3.2 is provided in the uploaded PDF, titled Figure 2. **Conclusion:** In conclusion, we sincerely thank you for the time and effort you invested in understanding our work and proposing suggestions. 
We remain open to further feedback and are committed to making all necessary improvements to serve the scientific community better. [1] A. Morehead and J. Cheng, ‘Geometry-Complete Perceptron Networks for 3D Molecular Graphs’, AAAI Workshop on Deep Learning on Graphs: Methods and Applications, 2023. [2] P. Thölke and G. D. Fabritiis, “Equivariant Transformers for Neural Network based Molecular Potentials,” presented at the International Conference on Learning Representations, Oct. 2021. --- Rebuttal Comment 1.1: Comment: Thanks for authors' response and I'd like to maintain my score.
Rebuttal 1: Rebuttal: Dear Reviewers, Firstly, we'd like to extend our sincere gratitude for your diligent review of our work and your invaluable feedback. Based on your insights and suggestions, we have undertaken significant efforts to improve our manuscript, making it both more comprehensive and accessible to the wider community. **Enhanced Clarity & Methodological Details**: 1. **Refined Equations**: We have meticulously revised our equations, ensuring greater clarity and eliminating any ambiguity. This exercise was geared towards providing a clear analytical exposition of our methodology and aiding the reader's comprehension. 2. **Improved Manuscript Clarity**: To provide a holistic overview of our proposed SaVeNet model, we've incorporated a detailed framework illustration, showcasing individual components. This can be referenced in Figure 2 of the uploaded PDF. This graphical representation complements our textual explanations, creating a more immersive understanding of SaVeNet's operation. 3. **Expanded Comparisons**: We value the importance of a comprehensive comparative study. In line with this, we've included additional baseline models in Figure 1, offering readers a broader perspective of SaVeNet's standing in the current research landscape. 4. **Additional Experiments**: In our effort to evaluate SaVeNet thoroughly, as suggested by reviewer jP4K, we conducted three additional experiments. These experiments aimed to validate SaVeNet's capability in various scenarios and provide a comprehensive view of its performance in graph-based neural network applications. **Experiment 1: Large graphs** We test the scalability of our model to larger graphs, such as protein graphs. For this purpose, we conducted the experiments on a protein-ligand complex dataset known as LBA [1] to compare with the baseline models.
| Method | RMSE | Pearson | Spearman |
|-----------------------------------------------------|-------|---------|----------|
| Atom3D-3DCNN [1] | **1.416** | .550 | _.553_ |
| ProNet-All-Atom [2] | 1.463 | _.551_ | .551 |
| SaVeNet | _1.438_ | **.572** | **.559** |

SaVeNet displayed a remarkable RMSE of 1.438, only narrowly surpassed by Atom3D-3DCNN, highlighting its efficacy even on larger sparse graphs. Furthermore, SaVeNet emerged as the leader in both correlation metrics. This demonstrates not only its robustness but also its adaptability across various metrics. **Experiment 2: Experiments on sparse graphs** We explored SaVeNet's performance on sparser graphs using the Molecule3D dataset, which contains larger molecular structures. The baseline models primarily worked with radius graphs built with a cutoff radius of $6$. Reducing this radius to $3$ led to a model, SaVeNet3-B, operating on graphs with edge counts decreasing from **553** to **259**. Reducing the radius presents challenges. For example, one-hop neighbors can become multi-hop neighbors, complicating data access between nodes. To address this, we adapted SaVeNet3-B with 160 (SaVeNet3-B+) and 192 (SaVeNet3-B++) hidden dimensions to better handle multi-hop information flow.

| Model | std. MAE | log MAE | Batch Size | Memory | Latency (ms) | Samples/s |
|--------------|---------|---------|------------|---------|--------------|-----------|
| SaVeNet3-B | .0205 | -3.949 | 824 | 1.29 | 17.3 | 3648 |
| SaVeNet3-B+ | .0175 | -4.112 | 642 | 1.62 | 19.0 | 3328 |
| SaVeNet3-B++ | .0147 | -4.283 | 518 | 1.93 | 19.9 | 3200 |
| SaVeNet-B | .0156 | -4.226 | 329 | 2.84 | 27.8 | 2304 |

SaVeNet3-B shows strong performance compared to the baselines, even with a smaller receptive field. SaVeNet3-B+ and SaVeNet3-B++ both outperform the baselines, with SaVeNet3-B++ achieving the best results.
A reduced receptive field offers enhanced efficiency, particularly in memory and latency, highlighting SaVeNet's scalability and resilience. We thank the reviewer for the insightful prompt. **Experiment 3: Scalability of an even larger SaVeNet** We extended SaVeNet's architecture to SaVeNet-XL, adopting the same principles used for the previous scale, SaVeNet-L. We conducted this experiment on a larger dataset, Molecule3D. For SaVeNet-XL, we applied scaling principles similar to those used for SaVeNet-L, namely 256 hidden dimensions across 10 layers.

| Method | $\mu$ | Homo | Lumo | Gap |
|-------------|--------|--------|--------|--------|
| SaVeNet-B | 0.0183 | 0.0190 | 0.0173 | 0.0290 |
| SaVeNet-L | 0.0136 | 0.0159 | 0.0143 | 0.0239 |
| SaVeNet-XL | 0.0112 | 0.0136 | 0.0129 | 0.0209 |

Notably, there is a consistent improvement across all metrics when transitioning from SaVeNet-B to SaVeNet-L and then to SaVeNet-XL. **Concluding Remarks**: Our revisions, experiments, and methodological refinements were driven by a commitment to academic rigor and dedication to the community's betterment. We believe these improvements will address concerns and illuminate the significant contributions of our work. Our hope is that the revised manuscript provides a clearer and more insightful understanding of SaVeNet's potential. Given the character constraints of the rebuttal, we've endeavored to address all concerns succinctly. If any queries remain or further clarification is required, we welcome the opportunity to respond further. Thank you once again for your time and invaluable feedback. Sincerely, Authors [1] R. J. L. Townshend _et al._, “ATOM3D: Tasks on Molecules in Three Dimensions,” [2] L. Wang, H. Liu, Y. Liu, J. Kurtin, and S. Ji, “Learning Hierarchical Protein Representations via Complete 3D Graph Networks,” Pdf: /pdf/fadd04ab067623713fc205f82bb8d64e34bf8bb5.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors propose an efficient and scalable equivariant GNN (SaVeNet) for 3D molecular conformations. The architecture follows an encoder-decoder style framework, where the encoder is composed of "Inter-atomic Interactions" and "Atom-wise blocks" learning mechanisms. Several other modeling augmentations are proposed, such as 1-hop molecule embeddings in the feature space, non-null vector initializations ("Direction Noise"), and norm-aware vector activation functions ("Vector Activations"). Strengths: The thrust of this paper is clear: designing GNNs with SO(3)/SE(3) equivariance can introduce complexity into the model. This slows down inference-time latency and may inhibit scaling the model to larger sizes (i.e., an RNN with parameter capacity equal to a Transformer may have similar performance, but Transformers scale significantly better). Accordingly, the authors develop an equivariant GNN that uses only one-hop neighborhood vector embeddings (i.e., cheap & scales), but (assuming a connected graph, which is the case for molecules) this feature-space representation is sufficient to reconstruct the entire molecule. The empirical results align with the theory, and state-of-the-art results are attained on a variety of 3D benchmarks. To summarize, I believe the paper (a) tackles a relevant problem, (b) proposes a relevant solution, and (c) demonstrates theoretically and empirically that the solution works. Therefore, I recommend acceptance. I'd also like to note the punctilious empirical evaluation; efficiency analyses can be especially difficult, but it's my opinion that the authors perform a fair comparison among methods to constitute their empirical findings. Weaknesses: I am not an expert on equivariant GNNs and am not familiar with much of the related work. I am also unable to recognize any method/evaluation-specific "red flags" which may appear and defer to the expertise of the other reviewers.
One facet that stood out, however, is that the writing at times feels long. It could be significantly shortened in many areas. For instance, the abstract is 19 lines and could be made more "pointed". I would also suggest creating a "Figure 1" that is a visual depiction of the SaVeNet architecture to supplement the material in Section 3.2 on learning mechanisms. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: N/A Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes -- the discussion in Section 5 was appreciated and meaningful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
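The reviewer's observation that one-hop relative vectors on a connected graph suffice to reconstruct the full 3D structure (up to a global translation) can be sketched as follows. This is a generic illustration of the principle, not SaVeNet's actual decoder; the toy graph and helper names are hypothetical:

```python
from collections import deque
import numpy as np

def reconstruct_positions(n, edges, rel_vec):
    """Recover node positions (up to a global translation) from one-hop
    relative vectors rel_vec[(i, j)] = pos[j] - pos[i] on a connected graph."""
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    pos = {0: np.zeros(3)}            # anchor node 0 at the origin
    queue = deque([0])
    while queue:                      # BFS over a spanning tree
        i = queue.popleft()
        for j in adj[i]:
            if j not in pos:
                d = rel_vec[(i, j)] if (i, j) in rel_vec else -rel_vec[(j, i)]
                pos[j] = pos[i] + d
                queue.append(j)
    return np.stack([pos[i] for i in range(n)])

# toy "molecule": 5 atoms on a connected graph
rng = np.random.default_rng(0)
true = rng.normal(size=(5, 3))
edges = [(0, 1), (1, 2), (1, 3), (3, 4)]
rel = {(i, j): true[j] - true[i] for i, j in edges}
rec = reconstruct_positions(5, edges, rel)
print(np.allclose(rec - rec[0], true - true[0]))  # True: recovered up to translation
```

Since each edge vector fixes one neighbor's position relative to an already-placed node, connectivity is exactly the condition under which the whole conformation is determined.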
Rebuttal 1: Rebuttal: Dear Reviewer 6Z9X, We would like to express our gratitude for taking the time to review our manuscript and providing detailed and constructive feedback. We appreciate your positive reception of our work; we have carefully considered each of your points and would like to address your comments as follows: **Strengths:** We're grateful that you recognized the relevance and significance of the problem we tackled, as well as our solution's novelty and empirical strength. Your acknowledgment of our empirical evaluation further motivates us to maintain high standards in our research endeavors. **Weaknesses:** 1. **Length and Clarity:** We acknowledge your concern regarding the length of certain sections. In light of your feedback, we are committed to revising the manuscript to make the content more concise, especially in places like the abstract, to enhance readability and clarity. 2. **Visual Depiction of SaVeNet:** We have taken your suggestion to create a figure visually representing the SaVeNet architecture. We believe that such a depiction will undoubtedly aid readers in grasping the nuances of our model more intuitively. You can find the architecture illustration in Figure 2 of the uploaded PDF. We will ensure to include this visual aid in our revised submission. **Limitations:** Thank you for noting the value of the discussion in Section 5. We always strive for transparency in our work and believe that addressing limitations is crucial for both the integrity of research and future work in the area. Once again, we thank you for your positive feedback, constructive criticism, and thoughtful suggestions. We're excited about the potential of our work in advancing the field, and your feedback plays an integral role in refining our contribution. --- Rebuttal Comment 1.1: Comment: I have read the authors' response and maintain my score.
null
null
null
null
null
null
Gaussian Mixture Solvers for Diffusion Models
Accept (poster)
Summary: The authors point out that $q(x_s|x_t)$ is not necessarily Gaussian when $t$ is significantly bigger than $s$, and propose to use a mixture of Gaussians to better model the reverse process when the number of integration steps is not large. This choice guarantees that, with an increasing number of steps, the mixture of Gaussians will converge to a single Gaussian. To estimate the parameters of the Gaussians constituting the mixture, the authors estimate the diagonals of higher-order moments of the mixture through higher-order denoising and plug them into the generalized method of moments to estimate the desired parameters. Strengths: The paper is well motivated, as $q(x_s|x_t)$ is not necessarily Gaussian when $t$ is significantly bigger than $s$. The experimental results show some improvements when the same number of steps is used. Weaknesses: **Main Weaknesses:** 1) The paper's presentation is not apt for publication. The manuscript contains several sentences that are notably difficult to interpret. For instance, the authors' intent remains unclear in the sentence beginning with 'These modeling...' in line 158 and concluding on 160. This aspect only gains clarity in Equation 9, which is located in a different section. A multitude of vital elements are left undefined. For instance, the parameters $\theta$ and $\theta^*$ are not properly concretized in the Gaussian mixture context, and furthermore, the function $h$ in Algorithm 2 lacks a definition in the main manuscript and Appendix B. The paper's structure is counter-intuitive. In Algorithm 2, the authors initially compute the moments of the Gaussian mixture (a), and subsequently utilize them to calculate the mixture's defining parameters (b). Contrarily, in Section 3, the authors commence with (b) and proceed to (a).
Since this order is not natural, it makes the paper difficult to read, and the situation is exacerbated since at this point in (a) all the definitions of parameters and moments are general and do not relate to the problem at hand. The connection between these general definitions and the specific problem tackled in the paper is never explicitly stated. It is important to show the connection between the parameters $\theta$ and moments $M_n^{(GM)}(\theta)$, $M_n(x_i)$ on one hand, and $\pi_k, \mu_k, \sigma_k, \hat{M}_n$ on the other. This would enable writing the loss in Equation 8 explicitly, which is not done. 2) Sampling times (in seconds) are not provided. Considering that the premise of the paper is enabling generation in fewer steps (that is, speeding up the sampling process), the sampling time curves of the sampling process for different numbers of steps should be provided in the main paper (close to Table 1), for both the proposed method and the benchmarks. That is, a figure such as Figure 5 in Appendix E6 should be extended for numbers of steps above 50 and placed in the main paper. The values of the curves presented should be in seconds, instead of presenting the ratio between methods. Furthermore, it would be interesting to compare against the stochastic sampler in [1]. **Minor Weaknesses:** 1) Equation 9 is not derived in Appendix B. 2) Using a mixture of only two Gaussians is a bit underwhelming considering all the material. 3) The authors state that the "higher-order correlation between two pixels is assumed to be zero". While it is true that this premise aligns with the work of Bao et al., the authors should justify this in the paper. 4) The Figure in the introduction is misplaced. It does not elucidate the method. It presents results of the performance of the method, hence it should go either to the Experimental section or to the Appendix. I am willing to increase my score if the authors extend Figure 5 (e.g.
100 200 1000 as in Table 1), and are willing to follow the presentation advice provided in Main Concern 1. Also, there are grammar/spelling mistakes in the paper, and the authors should ensure they are corrected. [1] Karras et al., Elucidating the Design Space of Diffusion-Based Generative Models, NeurIPS 2022. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: What are the advantages of GMS in light of recent methods such as consistency models [2]? Have the authors performed experiments to measure how the sampling time per step increases when the number of Gaussians in the mixture (number of moments estimated) increases? [2] Song et al., Consistency Models, ICML 2023 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: While limitations are presented, the authors could elaborate more on this topic. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive review and suggestions. ***Main Weakness 1: Presentation*** We appreciate your careful reading of our paper and your helpful suggestions. We will address these issues in the final version, including but not limited to: 1. We will revise the sentence in line 158 into "With such a network, the moments under the $q(x_s|x_t)$ measure can be decomposed into moments under the $q(x_0|x_t)$ measure, so that sampling any $x_t$ to $x_s$ requires only a network whose inputs are $x_t$ and $t$." to make it clearer. 2. The variable $\theta^*$ denotes the potential optimal parameters of the Gaussian mixture model given the first three orders of moments, and $\theta$ represents the variables to be optimized, which are denoted as $\mu^{(1)}_t$, $\mu^{(2)}_t$ and $\sigma^2_t$ in the backward transition kernel $p(x_s|x_t) = \frac{1}{3}\mathcal{N}(\mu^{(1)}_t,\sigma^2_t)+ \frac{2}{3}\mathcal{N}(\mu^{(2)}_t,\sigma^2_t)$. 3. The function $h$ is defined as $h(f^3_{[1]}(x_t,t),f^3_{[2]}(x_t,t),f^3_{[3]}(x_t,t))=(M_1(f^3_{[1]}(x_t,t)),M_2(f^3_{[2]}(x_t,t)),M_3(f^3_{[3]}(x_t,t)))$, where $M_1(f)=E[x]$ in Eq. (22), $M_2(f)=\mathrm{Cov}(x)$ in Eq. (23) and $M_3(f)=\mathrm{Ske}(x)$ in Eq. (24). 4. We will re-organize the paper, explicitly state the connection between these general definitions and the specific problem tackled, and write the loss in Equation 8 explicitly. We will also thoroughly correct the grammar/spelling mistakes in our paper. ***Main Weakness 2a: Sampling times (in seconds) are not provided*** We have included the sampling time curves of the sampling process for different numbers of steps in Fig. C in the rebuttal PDF. Under the same number of sampling steps, our GMS incurs approximately 10% additional time compared to SN-DDPM due to the extra optimization for Gaussian mixture solving. This optimization, however, contributes to improved quality, as shown in Table 1.
Besides, we show the optimization time curve required by GMS as the batch size and number of optimization steps vary in Fig. C. For a more intuitive comparison, we also plot the FID curve w.r.t. the sampling time (in seconds) in Fig. A of the rebuttal PDF. Across the same computational budget (sampling time in seconds), we consistently observe improved performance over SN-DDPM. We will include these figures and results in the revision. ***Main Weakness 2b: Comparison with more baselines*** Following your suggestions, we compare GMS with two additional SDE-based solvers: EDM [a] and SEED [b], with the latter being a higher-order SDE solver. We note that EDM in [a] uses an $x_0$-prediction network, which differs from ours and is thus not directly comparable. The symbol $^*$ indicates that the network differs from the discrete-time noise network.

| Method/NFE | 10 | 20 | 40 | 50 | 100 | 200 | 1000 |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| GMS | 17.43 | 7.18 | 4.52 | 4.16 | 3.26 | 3.01 | 2.76 |
| EDM [a]$^*$ | 35.07 | 14.04 | 9.56 | 5.12 | 2.99 | | |
| SEED-2 [b] | 481.09 | 305.88 | 51.41 | 11.10 | 3.19 | | |
| SEED-3 [b] | 483.04 | 462.61 | 247.44 | 62.62 | 3.53 | 3.08 | |

We observe that our GMS consistently outperforms within 50 NFEs. We will add these results in the revision. ***Minor Weakness 1: Derivation of Eq. (9)*** Thanks for the comment. We will derive Eq. (9) step by step in Appendix B in the final version. ***Minor Weakness 2: Reasons for using a mixture of only two Gaussians*** It is worth noting that in GMS, we use a mixture of two Gaussians (i.e., a two-mode distribution) to effectively capture the reverse transition kernel per sampling step. Intuitively, this choice can potentially encompass exponentially many modes across the entire trajectory. Empirically, we consistently observe that employing a mixture of two Gaussians yields favorable results across all settings in our experiments.
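As an aside on why fixing the weights to $(\frac{1}{3}, \frac{2}{3})$ with a shared variance makes the problem well-posed: under that restriction the three unknowns $(\mu^{(1)}, \mu^{(2)}, \sigma^2)$ are exactly determined by the first three moments and can even be recovered in closed form. A minimal sketch (illustrative only; the paper instead solves a generalized-method-of-moments optimization, which is more robust to noisy moment estimates):

```python
import numpy as np

def fit_two_gaussian_mixture(mean, var, third_central):
    """Recover (mu1, mu2, sigma2) of (1/3) N(mu1, s2) + (2/3) N(mu2, s2)
    from its mean, variance, and third central moment (closed form).

    With d2 = mu2 - mean (so mu1 - mean = -2 d2), the mixture satisfies
    var = s2 + 2 d2^2 and third_central = -2 d2^3.
    """
    d2 = np.cbrt(-third_central / 2.0)   # offset of the 2/3-weight component
    sigma2 = var - 2.0 * d2 ** 2
    return mean - 2.0 * d2, mean + d2, sigma2

# forward check: compute the mixture's analytic moments, then invert them
mu1, mu2, s2 = -1.0, 0.5, 0.2
m = (mu1 + 2 * mu2) / 3
d1, d2 = mu1 - m, mu2 - m
var = s2 + (d1**2 + 2 * d2**2) / 3
m3 = (d1**3 + 3 * d1 * s2 + 2 * (d2**3 + 3 * d2 * s2)) / 3
print(fit_two_gaussian_mixture(m, var, m3))   # recovers (-1.0, 0.5, 0.2)
```

With free weights, by contrast, the mixture has four unknowns but only three moment equations, which is the underdetermination the rebuttal mentions in Weakness (b) below for Reviewer 3.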
***Minor Weakness 3: Assumption of zero higher-order correlation*** Estimating full higher-order moments results in escalated output dimensions (e.g., quadratic growth for the covariance and cubic growth for the third-order moments) and thus imposes substantial computational demands. We therefore consider only the diagonal higher-order moments in our method for computational efficiency, similar to Bao et al. [c]. We will clarify this in the revision. ***Minor Weakness 4: The placement and the function of Figure 1*** The primary purpose of including Fig. 1 in the introduction was to provide readers with visual insight into the motivation behind our method. The figure showcases the diminishing effectiveness of Gaussian reverse transition kernels (SN-DDPM) when using fewer discretization steps, which motivates the choice of a non-Gaussian transition kernel in this paper. We are also open to further discussion on this issue and willing to re-organize it for better presentation. ***Question 1: Consistency models*** Consistency models [d] focus on distilled ODE-based solvers, while our work is an SDE-based solver. Notably, SDE-based solvers offer distinct advantages in various downstream tasks, including stroke-based synthesis, image translation, and image manipulation, as outlined in Sec. 1. We will discuss the relation with [d] in the final version. ***Question 2: How does the sampling time per step increase when the number of Gaussians in the mixture increases?*** In our experiments, we did not try increasing the number of Gaussians in the mixture (or the number of moments estimated) for the reason mentioned in our response to ***Minor Weakness 2***. ***References*** [a] Karras et al. Elucidating the Design Space of Diffusion-based Generative Models. 2022\ [b] Gonzalez et al. SEEDS: Exponential SDE Solvers for Fast High-Quality Sampling from Diffusion Models. 2023\ [c] Bao et al. Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models.
2022\ [d] Song et al. Consistency Models. 2023 --- Rebuttal Comment 1.1: Comment: Thank you for your response. My main concerns have been addressed and hence I adjusted the score accordingly. As you intend to discuss the relation with [d] in your final version, it is important to note that consistency models trained in isolation are not focused on distilled ODE-based solvers (Theorem 2 and Algorithm 3). --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: Thank you for your detailed comments. We will discuss the relation between our work and consistency models [d] trained both via distillation (i.e., CD) and in isolation (i.e., CT) in the final version. We appreciate your time and effort in reviewing our paper. Thank you!
Summary: The authors address the efficiency-effectiveness dilemma faced by existing SDE-based solvers in diffusion models during inference. They observe that the Gaussian assumption in the reverse transition kernel is frequently violated, even with a limited number of discretization steps. To overcome this limitation, the authors propose a new class of SDE-based solvers called Gaussian Mixture Solvers (GMS). In this approach, they estimate the first three-order moments and optimize the parameters of a Gaussian mixture transition kernel using generalized methods of moments. They present empirical results that demonstrate that GMS outperforms other SDE-based solvers in terms of sample quality for image generation and stroke-based synthesis in various diffusion models. Strengths: The authors provide a solid theoretical foundation for their approach, highlighting the discrepancy between the empirical data and the assumptions made for the Gaussian transition kernel commonly used in models like SN-DDPM. They tackle this challenge by proposing a novel and effective solution tailored to address this specific issue. Their innovative approach not only takes into account theoretical benefits but also considers computational constraints. Weaknesses: Despite introducing GMS as a potential resolution to the efficiency-effectiveness dilemma, the authors fail to provide compelling evidence in their presentation to substantiate their assertion. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Can you provide further clarification regarding the claim made in the abstract and various sections of the text that a limited number of discretization steps amplifies the violation of assumptions for the Gaussian transition kernel used in SN-DDPM? How does this relate to the performance of GMS compared to SN-DDPM when considering a small number (e.g., 100) versus a larger number (e.g., 1000) of discretization steps? 2. 
In Table 1 and Figure 3, it is apparent that the relative (or absolute) improvement of GMS over SN-DDPM is not significantly different when comparing a small number of discretization steps to a larger number. Can you address this discrepancy and explain why the expected noticeable improvement in GMS performance for a smaller number of discretization steps is not observed? 3. The authors mention that GMS is superior to SN-DDPM when considering computational cost (line 273). Could you provide more clarity on this claim and its implications? If GMS does indeed offer computational advantages, please elaborate on this aspect in the text and present corresponding results. Conversely, if this claim is not supported by evidence, please clearly address this discrepancy. 4. Considering the theoretical value of this work, a crucial question arises: Why would one choose to utilize GMS if comparable results can be obtained within the same computational budget using a simpler method like SN-DDPM? Please provide a compelling rationale for using GMS, taking into account its potential advantages and drawbacks compared to alternative approaches. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors emphasize that a limited number of discretization steps exacerbates the violation of assumptions for the Gaussian transition kernel in SN-DDPM, as stated in the abstract and various sections of the text (e.g., line 39, 50, 117, and 290). However, it is surprising to note that the expected significant improvement of GMS over SN-DDPM for a smaller number (e.g., 100) versus a larger number (e.g., 1000) of discretization steps is not observed, as evident from table 1 and figure 3. This discrepancy requires clarification. 
Additionally, the authors vaguely assert that GMS outperforms SN-DDPM when considering computational cost (line 273). If this claim holds true, it is crucial to provide clear explanations and present corresponding evidence in the text and results. Conversely, if this claim is not substantiated, it should be explicitly addressed. Although this work holds theoretical value, the fundamental question remains: What is the justification for using GMS if comparable results can be achieved within the same computational budget using a simpler method like SN-DDPM? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and questions. ***Question 1: Clarification regarding the claim that a limited number of discretization steps amplifies the violation of assumptions for the Gaussian transition kernel used in SN-DDPM*** Thanks for the insightful comment. We clarify this issue and confirm our claim from both theoretical and empirical standpoints. Theoretically, we apply Bayes' rule to the posterior distribution $q(x_t|x_{t+\Delta t})$ as follows: $q(x_t|x_{t+\Delta t}) = \dfrac{q(x_{t+\Delta t}|x_{t})\,q(x_t)}{q(x_{t+\Delta t})} = q(x_{t+\Delta t}|x_{t})\exp(\log q(x_t)-\log q(x_{t+\Delta t})) \propto \exp\left(-\dfrac{\left\| x_{t+\Delta t}-x_t-f_t(x_t)\Delta t \right\|^2 }{2g_t^2\Delta t} +\log q(x_t)-\log q(x_{t+\Delta t})\right)$, where $\Delta t$ is the step size and $q(x_t)$ is the marginal distribution of $x_t$. When $x_{t+\Delta t}$ and $x_{t}$ are close enough, using a Taylor expansion for $\log q(x_{t+\Delta t})$, we obtain: $\log q(x_{t+\Delta t}) \approx \log q(x_t)+(x_{t+\Delta t}-x_t)^\top\nabla_{x_t} \log q(x_t)+\Delta t\, \dfrac{\partial}{\partial t} \log q(x_t)$, so that $q(x_t|x_{t+\Delta t})\propto \exp\left(-\dfrac{\left\| x_{t+\Delta t}-x_t-[f_t(x_t)-g_t^2\nabla_{x_t} \log q(x_t)]\Delta t \right\|^2 }{2g_t^2\Delta t} +O(\Delta t)\right)$. By ignoring the higher-order terms, the reverse transition kernel is a Gaussian distribution. However, as $\Delta t$ increases, the higher-order terms in the Taylor expansion cannot be disregarded, which causes the reverse transition kernel to deviate from a Gaussian distribution. We will include these clarifications in the revision. Empirically, from Fig. 2 in our paper, we observe that as the number of sampling steps decreases, the backward transition kernel increasingly deviates from a Gaussian distribution. For more details, please refer to Appendix D.2 in our paper.
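The claim above, that a larger step makes the reverse kernel deviate from a Gaussian, can be checked numerically. The following sketch (a toy illustration, not the paper's experiment) uses a VP-type forward process on a two-point data distribution $x_0 \in \{-1, +1\}$ and measures the excess kurtosis of samples from $q(x_s|x_t \approx 0)$: near zero (roughly Gaussian) for a small step from $t$ to $s$, and strongly negative (bimodal) for a large step:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
abar_s = 0.9  # cumulative alpha-bar at the earlier time s (close to the data)

def reverse_kernel_excess_kurtosis(abar_t):
    """Monte Carlo excess kurtosis of q(x_s | x_t ~ 0) under a VP forward
    process with two-point data; 0 for a Gaussian, < 0 for a bimodal kernel."""
    x0 = rng.choice([-1.0, 1.0], size=N)                     # two-point data
    x_s = np.sqrt(abar_s) * x0 + np.sqrt(1 - abar_s) * rng.standard_normal(N)
    r = abar_t / abar_s                                      # alpha_{t|s}
    x_t = np.sqrt(r) * x_s + np.sqrt(1 - r) * rng.standard_normal(N)
    cond = x_s[np.abs(x_t) < 0.1]     # condition on x_t in a thin bin near 0
    z = (cond - cond.mean()) / cond.std()
    return float((z ** 4).mean() - 3.0)

k_small = reverse_kernel_excess_kurtosis(abar_t=0.89)  # small step t -> s
k_large = reverse_kernel_excess_kurtosis(abar_t=0.20)  # large step t -> s
print(k_small, k_large)  # the large step is markedly more non-Gaussian
```

The constants (0.9, 0.89, 0.20, the bin width) are arbitrary illustrative choices; conditioning near $x_t = 0$ is simply the most ambiguous point, where both data modes remain plausible.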
Please see our response to ***Question 2*** for an analysis of the empirical performance in terms of FID. ***Question 2: About the improvement in a limited number of steps*** As shown in Tab. 1 of the submission (we also plot a curve in Fig. B of the rebuttal PDF for a clearer illustration), GMS achieves an FID improvement of >4.0 with a small number of steps (e.g., 10) and <0.5 with a large number of steps (e.g., 200). However, we respectfully clarify that due to the nonlinear nature of the FID metric, the relative/absolute FID improvements are not directly comparable across different numbers of steps. We will make this clearer in the final version. ***Question 3: About the claim that GMS is superior to SN-DDPM when considering computational cost*** The intended statement here is: "(In the SDEdit task,) GMS exhibits superior performance compared to SN-DDPM **under the same** computational cost." We indeed observed a performance-time curve similar to that of the unconditional generation experiments (Fig. A in the rebuttal PDF), providing evidence for our intended claim. Thank you for pointing this out, and we will correct the original inaccurate claim. ***Question 4: Why choose GMS if comparable results can be obtained within the same computational budget using a simpler method like SN-DDPM?*** We would like to clarify that GMS can achieve superior performance under the same computational budget. To provide an intuitive comparison, we present the CIFAR-10 FID curve against the computation time in Fig. A of the rebuttal PDF, where GMS consistently improves over SN-DDPM when varying the computational budget. Besides, we provide more results compared to other advanced SDE-based solvers, including EDM [b] and SEED [c]; GMS still outperforms within a limited number of discretization steps. See details in our response to ***Main Weakness 2b*** of Reviewer iSEj. ***References*** [a] Song et al. Score-based generative modeling through stochastic differential equations.
2020\ [b] Karras et al. Elucidating the Design Space of Diffusion-based Generative Models. 2022\ [c] Gonzalez et al. SEEDS: Exponential SDE Solvers for Fast High-Quality Sampling from Diffusion Models. 2023 --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. The authors have addressed most of my concerns, and hence I have adjusted the score. A notable lingering weakness pertains to whether the relatively modest enhancements can be justified by the heightened intricacy of the approach. This, indeed, warrants additional exploration. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: Thank you for your feedback and for raising the score. We will discuss the limitation related to the added intricacy in the final version. We appreciate your time and effort in reviewing our paper. Thank you again!
Summary: Sampling from diffusion models is equivalent to solving the reverse diffusion SDEs or the corresponding probability flow ODEs. In comparison, SDE-based solvers can generate samples of higher quality and are suited for image translation tasks. However, during inference, existing SDE-based solvers are severely constrained by the efficiency-effectiveness dilemma. To overcome this limitation, this paper introduces a novel class of SDE-based solvers called Gaussian Mixture Solvers (GMS) for diffusion models. Experimental results validate the motivation and effectiveness of GMS solvers. ---Post-rebuttal: The authors addressed some of my concerns. However, I strongly encourage the authors to conduct more experiments. I would like to keep my score. Strengths: a. This paper systematically examines the assumption of the Gaussian transition kernel and reveals that it can easily be violated under a limited number of discretization steps, even in the case of simple mixture data. To this end, the authors propose a new type of SDE-based solver called Gaussian Mixture Solvers. b. This paper presents an approach for estimating the high-order moments utilizing noise networks. Weaknesses: a. In Sec. 3.2, the authors claim that the designed Gaussian mixture model can degenerate to a Gaussian; however, it is not clear how to design such a Gaussian mixture model. b. As the weight in Eq. 7 is manually set, how should one select an optimal set of weights for a specified testing benchmark? c. In Table 1, in addition to the FID score, the extra computational cost and running time should also be compared to verify the effectiveness of the proposed GMS solvers. d. From Table 1, we can observe that the improvement of GMS is not so obvious when compared with SN-DDPM. Especially from Table 9 in the supplemental material, we see that with the same computation cost, the gap between GMS and SN-DDPM is small. e.
In the experiments, the resolution of the datasets is small; how much would GMS solvers improve on larger-scale data? f. The experimental section is not enough. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive review and valuable comments. ***Weakness (a): Design of the Gaussian mixture model & How can it degrade to a Gaussian*** Our design choice of the Gaussian mixture model for the reverse transition kernel is $p(x_s|x_t)=\frac{1}{3}\mathcal{N}(\mu^{(1)}_t(x_t),\sigma^2_t(x_t))+\frac{2}{3}\mathcal{N}(\mu^{(2)}_t(x_t),\sigma^2_t(x_t))$ (as described in L182 of Sec. 3.2), where $\mu^{(1)}_t(x_t)$, $\mu^{(2)}_t(x_t)$ and $\sigma^2_t(x_t)$ are the three parameters to be optimized given the first three moments. It degenerates to a Gaussian when $\mu^{(1)}_t(x_t)=\mu^{(2)}_t(x_t)$, in which case we have $p(x_s|x_t)=\mathcal{N}(\mu^{(1)}_t(x_t),\sigma^2_t(x_t))$. We will clarify this in the revision. ***Weakness (b): How to select an optimal set of weights?*** For a Gaussian mixture with two components (i.e., our design choice), determining optimal component weights and parameters from only the first three moments is underdetermined. To tackle this, we pragmatically set the weights to ($\frac{1}{3}$, $\frac{2}{3}$) and observe that this choice consistently yields superior performance across different datasets. Identifying optimal weight values remains a promising direction for further enhancing GMS. We will clarify this in the revision. ***Weakness (c & d): Computational cost and running time should also be compared & The improvement when considering the extra computation time*** Thank you for your suggestions. As our GMS enhances sample quality at the expense of additional computation, we provide further insight by presenting the CIFAR10 FID curve plotted against sampling time (in seconds) in Fig. A of the rebuttal PDF, which should be more intuitive than Table 9. Notably, a consistent and demonstrable improvement over SN-DDPM emerges within the same computational budget (sampling time in seconds), showing the superiority of our method. 
Specifically, we observe an FID improvement of ~0.9 (~0.6, ~0.3, resp.) at sampling times of 40s (60s, 200s, resp.). We will incorporate this figure adjacent to Table 1 in the revision. Other analyses and results pertaining to computational cost and running time can be found in Appendix E.6. Specifically, Fig. 5 shows that GMS incurs approximately 10% higher computational time per step compared to SN-DDPM, while Fig. 6 offers a breakdown of time allocation across various components. ***Weakness (e): Experiments on large-scale data*** Given the limited time available for rebuttal, we were unable to finish experiments on larger images. Nonetheless, we are actively engaged in experiments on ImageNet 256*256, and we are committed to delivering our best efforts in this regard. ***Weakness (f): The experimental section is not enough.*** We have supplemented the following experiments and results to better demonstrate the effectiveness of GMS: - we present the CIFAR10 FID curve plotted against sampling time (in seconds) for SN-DDPM and GMS in Fig. A of the rebuttal PDF. - we present the sampling time (in seconds) curve plotted against the NFE (Number of Function Evaluations) for SN-DDPM and GMS in Fig. C of the rebuttal PDF. - we compare GMS with more SDE-based solvers such as EDM [a] and SEEDS [b]. Results show that GMS consistently outperforms them within 50 NFEs. Please see details in the response to ***Main Weakness 2b*** of ***Reviewer iSEj***. We are open to conducting additional experiments if they would contribute to a more comprehensive evaluation. Thank you for your feedback. ***References*** [a] Karras et al. Elucidating the Design Space of Diffusion-based Generative Models. 2022 [b] Gonzalez et al. SEEDS: Exponential SDE Solvers for Fast High-Quality Sampling from Diffusion Models. 2023 --- Rebuttal 2: Title: Providing additional results for Reviewer iBkM Comment: Thanks for your feedback. 
In this reply, we add experiments at larger resolution (ImageNet 256$\times$256). Combined with the content of the first rebuttal, we have supplemented all the experiments according to your comments. In particular, we chose one of the SOTA latent diffusion models on ImageNet 256$\times$256, called U-ViT; we use the U-ViT-Huge from [c] as our backbone network and additionally train the second-order and third-order noise prediction heads, each with two transformer blocks, on top of the frozen backbone (details in line 226). With the same sampling parameters, such as the total number of denoising time steps and the value of cfg, the conclusions of the experiments remain unchanged: GMS outperforms SN-DDPM within the same number of steps.

FID results of class-conditional image generation on ImageNet 256$\times$256:

| Method/# Steps | 15 | 20 | 25 | 40 | 50 | 100 | 200 |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| DDPM | 6.48 | 5.30 | 4.86 | 4.42 | 4.27 | 3.93 | 3.32 |
| SN-DDPM | 4.40 | 3.36 | 3.10 | 2.99 | 2.97 | 2.93 | 2.82 |
| GMS | 4.01 | 3.07 | 2.89 | 2.88 | 2.85 | 2.81 | 2.74 |

FID results of unconditional image generation on ImageNet 256$\times$256:

| Method/# Steps | 25 | 40 | 50 | 100 |
| :-: | :-: | :-: | :-: | :-: |
| DDPM | 8.62 | 6.47 | 5.97 | 5.04 |
| SN-DDPM | 8.19 | 5.73 | 5.32 | 4.60 |
| GMS | 7.78 | 5.42 | 5.03 | 4.45 |

***References*** [c] Bao et al. All are worth words: A ViT backbone for diffusion models. CVPR 2023.
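As an illustrative aside (not the authors' code), the two-component mixture with shared variance and fixed weights ($\frac{1}{3}$, $\frac{2}{3}$) described in the rebuttal can be checked numerically: the sketch below computes the first three raw moments of such a mixture and shows that equal means collapse it to a single Gaussian, matching the degeneracy argument in Weakness (a). All names are hypothetical.

```python
import numpy as np

W = np.array([1/3, 2/3])  # fixed component weights, as chosen in the rebuttal

def mixture_moments(mu1, mu2, var):
    """First three raw moments of W[0]*N(mu1, var) + W[1]*N(mu2, var)
    (two Gaussian components sharing one variance)."""
    mus = np.array([mu1, mu2])
    m1 = np.dot(W, mus)                                  # E[X]
    m2 = var + np.dot(W, mus ** 2)                       # E[X^2] = var + sum_i w_i mu_i^2
    m3 = 3 * var * np.dot(W, mus) + np.dot(W, mus ** 3)  # E[X^3] of N(mu, var) is mu^3 + 3*mu*var
    return m1, m2, m3

# Equal means: the mixture degenerates to the single Gaussian N(0.5, 2.0),
# whose raw moments are mu, var + mu^2, and mu^3 + 3*mu*var.
m1, m2, m3 = mixture_moments(0.5, 0.5, 2.0)
```

With distinct means, the same three moments instead pin down the asymmetric two-mode shape that a single Gaussian cannot represent, which is the point of the GMS parameterization.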
Summary: The paper proposes to weaken the Gaussian assumption on the transition probability in the reverse SDE used in deep diffusion models. The authors first illustrate how and when the Gaussian assumption is wrong. They then suggest approximating the non-Gaussian transition probability by a Gaussian mixture, which is adjusted with the method of moments. Finally, they illustrate the superiority of their method on different standard datasets (CIFAR, ImageNet64) using a standard metric (FID). Strengths: - The manuscript is well-written, and the problem is clearly stated and illustrated. - The proposed method allows reducing the number of time steps. - The results are superior in terms of sample quality (measured by FID). Specifically appreciated: - The authors account for the computational cost of their method in the supp material. - The authors are transparent about the real-time applicability of their method. Weaknesses: - It is unclear what the interest of the proposed method is beyond mathematical and empirical curiosity. - The proposed method weakens an assumption at an extra computational cost. When accounting for this extra cost, improvements are lowered. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I did not fully understand why the method of moments is preferred. To me, the issue of estimating a mixture model that depends on both s and t also applies to the estimation of moments. What is the advantage of using the method of moments? The EM algorithm is easy to implement, and I feel like you might spare computational resources compared to the calculation of higher-order moments. Have you tried EM? Fig 1: You should plot the log density to make the differences between the two curves more visible. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: ok Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive review and valuable suggestions. ***Weakness 1: The interest of the proposed method*** We understand that your concern may relate to the theoretical contributions and practical benefits of our method. (We acknowledge the possibility of potential misunderstanding and welcome any further questions.) We find that the Gaussian assumption on the reverse transition kernel can be violated, as shown by our theory and experiments (Fig. 1). Building on this, we introduce GMS, a new approach that improves the efficiency of SDE-based solvers. SDE-based solvers have notable advantages over ODE-based solvers in various downstream applications. Noteworthy examples include stroke-based synthesis, expounded upon in Section 4.2, and image translation, as detailed in [a]. These instances underscore the superior efficacy of SDE-based solvers. Concurrently, when an ample number of sampling steps is employed, SDE-based solvers exhibit superior outcomes in both unconditional and conditional sampling scenarios (as demonstrated in [b]). Our experiments in Fig. A of the rebuttal PDF show that **GMS achieves superior performance to competitive SDE solvers under the same computational budget**. Therefore, we believe that GMS is promising in the applications investigated in [a,b]. ***Weakness 2: Extra cost during inference*** Our GMS achieves superior performance under the same computational budget. To provide an intuitive comparison, we present the CIFAR10 FID curve plotted against sampling time (in seconds) in Fig. A of the rebuttal PDF. Notably, a consistent improvement margin over SN-DDPM emerges within the same computational budget (sampling time in seconds). Specifically, we observe an FID improvement of ~0.9 (~0.6, ~0.3, resp.) at sampling times of 40s (60s, 200s, resp.). This observation corroborates our claim that the Gaussian reverse transition kernel deviates when employing fewer discretization steps. 
We will clarify this in the revision. ***Question 1: Advantages of the method of moments compared to the EM algorithm*** The EM algorithm is not well suited to our method for the following reasons: 1. Nontrivial training and loss modifications: Learning the reverse transition kernel $p(x_s|x_t)=\sum_{i=1}^{M}w_i \mathcal{N}(x_s|\mu_i(x_t),\Sigma_i(x_t))$ ($\sum_i w_i=1$) with the EM algorithm requires alternately performing expectation and maximization steps, where the latter are not differentiable. Moreover, it requires sampling time-step pairs $(s, t)$ and many-to-one pairs of $x_s$ and $x_t$, incurring computational costs. In contrast, our approach seamlessly extends the noise prediction loss within the diffusion framework and maintains (training) efficiency. 2. Substantial model architecture changes: EM necessitates nontrivial architectural changes for handling double time inputs $(s, t)$ in modeling $p(x_s|x_t)$. This poses higher model capacity demands and complex design challenges. We will include these in the revision. ***Question 2: Re-plot Figure 1*** Thank you for your valuable suggestion. We will update it in the final revision. ***References*** [a] Zhao et al. Egsde: Unpaired image-to-image translation via energy-guided stochastic differential equations. 2022 [b] Karras et al. Elucidating the Design Space of Diffusion-based Generative Models. 2022 --- Rebuttal Comment 1.1: Title: post-rebuttal response Comment: Thanks for the feedback. Why do you say that the M-step is not differentiable in Gaussian mixture models? I am not sure of that, because the M-step in a GMM is in closed form, so it feels like it could be differentiable, no? --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. We appreciate the opportunity to provide more detailed clarification on the challenge of utilizing the EM algorithm to learn the reverse transition kernel. 
Let $p(z_s=i|x_s,x_t)$ denote the posterior probability that the point $x_s$ belongs to mixture component $i$, where $i\in\{1,\cdots,M\}$. In general, parameterizing the mixture distributions $p(x_s|z_s=i,x_t), i=1,\cdots,M$, with neural networks will lead to an intractable $p(z_s=i|x_s,x_t)$ and challenges in the expectation steps. To tackle this, common approaches like Monte Carlo EM approximate the expectation steps through sampling from $p(z_s|x_s,x_t)$, often involving non-differentiable processes like MCMC sampling. The non-differentiability of these samples can hinder a fully differentiable learning process, necessitating iterative updates. However, our specific case benefits from the Gaussian form of $p(x_s|z_s=i,x_t)$, making both the expectation and maximization steps tractable and differentiable. We will properly address this in the revision. Nevertheless, the need for paired samples, the required loss modifications (rather than the simple noise prediction loss), and potential nontrivial architectural adjustments remain key obstacles to the seamless integration of the EM algorithm into our method.
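To make the closed-form, differentiable E- and M-steps discussed in this thread concrete, here is a minimal one-dimensional EM sketch for a two-component Gaussian mixture with shared variance. This is generic textbook EM, not the authors' setting: the conditioning on $x_t$ and the time pair $(s, t)$ is omitted, and all names are illustrative.

```python
import numpy as np

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_step(x, w, mu, var):
    """One EM iteration for a 1-D Gaussian mixture with shared variance.
    E-step: closed-form posterior responsibilities p(z=i|x).
    M-step: closed-form weighted-average updates; both steps are differentiable."""
    # E-step: responsibilities, shape (N, M)
    dens = w[None, :] * normal_pdf(x[:, None], mu[None, :], var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form maximizers of the expected complete-data log-likelihood
    nk = resp.sum(axis=0)
    w_new = nk / len(x)
    mu_new = (resp * x[:, None]).sum(axis=0) / nk
    var_new = (resp * (x[:, None] - mu_new[None, :]) ** 2).sum() / len(x)
    return w_new, mu_new, var_new

# synthetic data: two well-separated components with weights (1/3, 2/3)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 1000)])
w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), 1.0
for _ in range(50):
    w, mu, var = em_step(x, w, mu, var)
```

In this toy case the recovered weights approach (1/3, 2/3) and the means approach (-2, 2); the practical obstacles the reply raises (paired samples, loss and architecture changes) are about embedding such iterations inside diffusion training, not about the steps themselves.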
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable and constructive feedback, and we have responded to each reviewer individually. We have also uploaded a rebuttal PDF that includes: - **Fig. A**: The relation between sample quality (in FID) and sampling time (in seconds) of GMS and SN-DDPM on CIFAR10. - **Fig. B**: The reduction in FID on CIFAR10 for GMS compared to SN-DDPM when sampling with different numbers of steps. - **Fig. C**: The relation between the sampling/optimizing times (in seconds) and the sampling/optimizing steps of GMS on CIFAR10. In addition, we have extended our experimental comparisons to include more baselines, such as EDM from [a] and SEEDS from [b]. ***References*** [a] Karras et al. Elucidating the Design Space of Diffusion-based Generative Models. 2022 [b] Gonzalez et al. SEEDS: Exponential SDE Solvers for Fast High-Quality Sampling from Diffusion Models. 2023 Pdf: /pdf/3a992462aa67d82989de249498d309eb48f19cdf.pdf
NeurIPS_2023_submissions_huggingface
2023
ScaleLong: Towards More Stable Training of Diffusion Model via Scaling Network Long Skip Connection
Accept (poster)
Summary: They state that diffusion models using a UNet suffer from unstable training and oscillations of features and gradients. They also state that this is sensitive to the coefficients scaling the skip connections of the UNet. They set out to provide an explanation and more robust scaling methods for the skip connections to address this. Their methods are constant scaling, based exponentially on the depth of the skip connection, and learnable scaling. The methods seem to consistently reduce FID versus the same model with regular scaling. Strengths: I believe this paper's contribution is simple and meaningful, and written clearly. The experiments seem thorough with respect to the changes made and the questions you would have reading it. I cannot fully speak either way to novelty in this area, but it seems like a sound paper. Weaknesses: I did not see meaningful weaknesses in the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **`(1)` Thank you for your encouragement and taking the time to review our article.** We will further improve our paper based on the suggestions of other reviewers. --- Rebuttal Comment 1.1: Title: Rebuttal read Comment: I confirm I have read the rebuttal and would like to keep my score.
Summary: U-Net is the most popular neural network backbone for diffusion models. In U-Net, the long skip connections (LSCs) link long-distance information near the input to the intermediate network outputs. However, training suffers from instability, which is resolved by scaling down the LSC coefficients. This paper addresses the theoretical aspects of stabilizing LSCs, robustifies the training process, and accelerates it. Both hand-crafted and learning-based parameterizations are proposed. Strengths: * The organization of the paper is clear, and the figures of the motivating experiments are well presented. * The method is simple yet efficient, improving both training speed and sampling performance while requiring only some hand-crafted hyperparameters (CS) or a lightweight learning module (LS). Weaknesses: * This work does not deal with more recent diffusion model baselines such as EDMs, which are much more powerful than the methods compared in this work. * Prior related works are not introduced, especially for those who are not familiar with the scaling of skip connections in other network architectures. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * This work focuses on scaling the skip connection coefficients of U-Net-based diffusion models. Does this technique also hold for other U-Net-based methods rather than diffusion models? * Can you add some figures or tables that address the optimal value of $\kappa$ in the CS case, like the $\kappa$ from LS in Figure 7? * Please write the full names of IE and SE in line 330. * Please state the number of sampling steps in each method. * Does parameterizing LS with some additional module (Figure 4) and training the network end-to-end adequately learn $\kappa_i$? =========== Corrections * Line 85: serir --> series * Line 86: UNettraining --> UNet training Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors adequately addressed the limitations and introduced further directions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful and positive comments! In the following, we provide our point-by-point response and hope it helps address your concerns. We also look forward to the subsequent discussion, which may further help resolve the current issues. **`(1)` About the baseline EDMs.** Thank you for your suggestion. Since our method focuses on the neural network architecture for the diffusion model, it could be applicable to EDMs, which also use a UNet for training. However, this needs to be thoroughly tested. Since EDMs require much longer training time, such as 16 GPU days for CIFAR10 and over 300 GPU days for ImageNet64, it is really challenging for us to finish the experiments within the rebuttal phase. We will try our best to evaluate our method on EDMs and report the results in the revision. **`(2)` About the related works.** Thank you for your suggestion. At present, due to the limited space (9 pages), we have to briefly introduce the related works in Lines 22-26 and Lines 101-128, and spend much space introducing three theoretical analyses from three different aspects, as well as our proposed methods. We very much appreciate your suggestion about introducing more related works, including the previous scaling methods for skip connections, and will try to discuss more in the revision, since the final version often allows an extra page. **`(3)` Our technique may not directly improve other UNet-based tasks.** Our analysis is based on diffusion models. For example, our mathematical derivation relies on the particularities of the diffusion model, such as the approximately normal distribution of the network's predictions (i.e., noise). For other typical UNet-based scenarios, such as image segmentation and depth prediction, the neural network's output consists of segmentation masks or depth maps, which may not fully satisfy our analysis needs. 
Since our theory cannot be directly applied to other tasks, our technique, especially the experimental setup based on our theory, may not be directly transferable to other UNet-based scenarios. In the future, we will test our proposed LS and CS in some common UNet-based scenarios, such as image segmentation, to further enhance the versatility of our approach. **`(4)` The optimal value of $\kappa$ in the CS case.** Please refer to rebuttal PDF Fig. 1 (c). The values of $\kappa$ in CS are 0.5 (for CIFAR10), 0.5 (for CelebA), 0.8 (for MS-COCO) and 0.95 (for ImageNet64). **`(5)` About the full names of IE and SE.** Thanks for your suggestion. We will add the full names of IE (Instance enhancement batch normalization [1]) and SE (Squeeze-and-excitation networks [2]) in the revision. **`(6)` About the sampling steps in each method.** We follow the default and official sampling settings in UViT. Specifically, for CIFAR10 and CelebA64, the experiments used the "Euler-Maruyama SDE" method with 1000 sampling steps. For MS-COCO and ImageNet64, the experiments used the "DPM solver" method with 50 sampling steps. **`(7)` About the LS parameterization.** Apologies, we may not fully understand your question. Let us try to clarify: - If you are asking whether LS can be used to end-to-end adaptively learn $\kappa_i$, then according to the explanation in Section 4.2, our LS is designed precisely for adaptive learning of $\kappa_i$, rather than using a fixed value as in CS. - If you are inquiring about the feasibility of applying LS with additional (other) modules for adaptive learning of $\kappa_i$, Table 2 explores the use of alternative modules for learning $\kappa_i$. However, the results indicate that the current structure already achieves satisfactory performance. We hope this response addresses your issue. If not, please let us know. **`(8)` For the typos,** per your suggestions, we will fix them in the revision. [1] Senwei et al. 
Instance enhancement batch normalization: an adaptive regulator of batch noise. AAAI 2020. [2] Jie Hu et al. Squeeze-and-excitation networks. TPAMI 2017. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thank you for the response. For the LC parameterization, my question was the latter: for some cases, the CS case adequately achieves good performance for learning $\kappa_i$, and is it beneficial to additively learn with some adaptive module? The rebuttal resolved most of the questions. As I expect this work to contribute to the architectural design of the well-performing diffusion model, I am raising the review score.
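For readers unfamiliar with the CS scheme discussed in this thread, the following toy sketch illustrates the core idea of scaling the $i$-th long skip connection by $\kappa^i$ in a symmetric encoder/decoder. The block functions, fusion by addition, and skip-pairing order are hypothetical stand-ins for exposition, not the paper's architecture.

```python
import numpy as np

def cs_unet_forward(x, enc_blocks, dec_blocks, kappa=0.8):
    """Toy symmetric encoder/decoder with constant scaling (CS) of long
    skip connections: the i-th skip is multiplied by kappa**i before fusion."""
    skips = []
    h = x
    for f in enc_blocks:                 # encoder: stash each block output
        h = f(h)
        skips.append(h)
    for i, g in enumerate(dec_blocks):   # decoder: fuse scaled skips, deepest first
        scale = kappa ** (i + 1)         # exponent grows with the skip index
        h = g(h + scale * skips[-(i + 1)])
    return h

# toy "blocks": simple affine maps standing in for UNet blocks
enc = [lambda h: 0.9 * h + 0.1, lambda h: 0.8 * h - 0.2]
dec = [lambda h: 1.1 * h, lambda h: h + 0.05]
out = cs_unet_forward(np.ones(4), enc, dec, kappa=0.5)
```

With $\kappa < 1$ the contribution of each skip shrinks geometrically with its index, which is the mechanism the rebuttal credits for damping feature oscillations; LS replaces the fixed $\kappa^i$ with learned per-skip scales.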
Summary: This paper proposes to scale the skip connections in diffusion model UNets by an exponential factor. The authors show that the feature norms of vanilla UNets oscillate across batches, and that their proposed method results in much smaller feature output oscillations. They conjecture that this method stabilizes training, and prove bounds on the oscillation which their method decreases. The proposed method shows faster convergence and improvements across multiple models, datasets, and model sizes. Strengths: 1. The authors' method follows prior work on scaling the block output and/or the skip connection, where such scaling methods have been shown to increase stability/robustness and result in Lipschitz continuous models. 1. The learnt scales in the LS method seem to somewhat mirror the authors' theoretical exponential CS method, providing further validation. 1. The authors' method results in faster convergence and shows improvements across multiple models, datasets, and model sizes, and compares favourably to some other scaling methods. 1. The authors' proposed method can be very easily adapted to existing models. Weaknesses: 1. The authors show that feature norms oscillate across samples, and they say this implies parameters must also be oscillating (line 47, line 172). While the authors demonstrate that feature norms do oscillate (and I am willing to agree the same may be true of gradient norms), this may not directly cause the parameters/updates to oscillate. 1. The authors' work (and proposed exponential CS solution) is remarkably similar to [1], where a scaling of $b^l$ is proposed for the block output (compared to this paper's scaling of the skip connection by a similar value). An experimental comparison with this method should be added to Table 3, or perhaps the similarities/differences discussed. 1. Discussion of prior related works should be expanded. 
For example, [2] proposes scaling of the skip connection and/or the block output, and has similar exponential bounds. 1. The constants in Theorem 3.1 are ignored (assumed to be 1) in line 270 to derive the values for the scaling parameter. If this constant is, for example, 10 or 0.1, that will dramatically change the proposed values for CS. Further investigation into this constant is required. [1] Hanin, B., & Rolnick, D. (2018). How to Start Training: The Effect of Initialization and Architecture. Advances in Neural Information Processing Systems, 31, 571-581. [2] Balduzzi, D., Frean, M., Leary, L., Lewis, J.P., Ma, K.W. & McWilliams, B. (2017). The Shattered Gradients Problem: If resnets are the answer, then what is the question? Proceedings of the 34th International Conference on Machine Learning, PMLR 70:342-350. Available from https://proceedings.mlr.press/v70/balduzzi17b.html. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The authors derive bounds for oscillations (Eq. 4), and they show that their method decreases this bound, but how tight is this bound? Comparing the minimum and maximum norms predicted by this formula with those in Figure 1 is important to show whether this bound is indeed useful, or whether their proposed method has some other side effect that results in the performance improvements. 1. Direct evidence of parameter oscillation (mentioned in lines 47 and 172) can be shown, perhaps by comparing per-parameter Adam-style "m" and "v" across the baseline method and the authors' method. If the oscillation of parameters is indeed smaller, one would perhaps expect the norm of "m/v" to be larger in the authors' case. Is this indeed the case? 1. 
Given the exponential change in scaling with increasing total number of layers, does the performance of the method suffer compared to $\frac{1}{\sqrt{2}}$-CS as the number of layers increases to greater depths (as large depth will cause the skip connection to essentially be scaled to zero)? Minor presentation issues (authors are not expected to respond to these, as they are minor issues that can be fixed later): 1. Line 85 "serir" should perhaps be "series"? 1. In Table 1, comparisons to the authors' method "x+LS/CS" should perhaps be placed directly below "x", to make it easier to compare scores. 1. The values of CS for Table 1 are mentioned in the supplementary, but perhaps they should be mentioned at line 272 or in 315. 1. Quotes around 'CS.py' in the supplementary above Figure 2 are inverted. 1. $\frac{1}{\sqrt{2}}-CS$ reads as "1 by square root 2 minus CS", and for a moment I was trying to think what $CS$ the authors were subtracting from $\frac{1}{\sqrt{2}}$. Perhaps a better notation may be something along the lines of $(CS)\frac{1}{\sqrt{2}}$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Rebuttal 1: Rebuttal: Thank you for the insightful and positive comments! In the following, we provide our point-by-point response and hope our response helps address your concerns. We also look forward to the subsequent discussion which may further help solve the current issues. **`(1)` For parameter oscillation,** we indeed observe it as discussion below, and our approach effectively mitigates this oscillation. As shown in Fig.2 in the attached PDF, taking UViT on CIFAR-10 as an example, we select several parameters from both the encoder and decoder's final layers. We then computed the average sliding variance (using a window size of 5) of these parameters during training for the three methods (org, LS, CS). This approach provides more direct observation of parameter oscillation compared to monitoring the norm of "m/v." From the visualization results, one can observe that parameter oscillation does occur, and our method effectively mitigates this oscillation to stabilize model training. **`(2)` For the scaling method on the block output in the work [1],** we compare it with ours in Table 2 in the attached PDF, where the scaling method in [1] is denoted as $b^l$. We follow three settings from the work [1] by respectively setting $b=0.90, 0.75, 0.50$, and find that this kind of scaling method on the block output cannot achieve good performance improvement. Therefore, for stable training of the diffusion model, we believe that applying scaling on long skip connections, similar to our proposed CS, would be more effective. We will include this comparison into our revision. **`(3)` In the revision, we will discuss more related works.** For the work [2], our work mainly differs from it in two notable differences. - (**Architecture**) Our method focuses on the UNet architecture, particularly the long skip connections, which are not considered in [2]. Most of the discussions and analyses about ResNet (Kaiming He 2016) from [2] cannot be directly applied to UNet. 
- (**Scaling location**) The work [2] also primarily focuses on scaling at the block output, rather than scaling at skip connections. As mentioned in Section 3.3 of paper [2], scaling at skip connections may not work well in practice for ResNet (Kaiming He 2016). On the contrary, in the scenario of UNets and diffusion models, we notice that scaling at long skip connections is more effective than scaling at the block output. Our work's analysis and conclusions contribute to an expanded understanding of scaling strategies in neural network architectures within the community. In the revision, we will include the above discussion to further strengthen our work. **`(4)` For the constant $c_1$ in Theorem 3.1,** as mentioned in Lines 270-272, we use a rough estimate of the constant $c_1 \approx 1$, which is valid in many experiments. Moreover, per your suggestion, we conducted a statistical analysis on different datasets such as CIFAR10, CelebA, and ImageNet64, and found that the real range of the constant $c_1$ is $(0.752, 1.060)$ for the analysis in Lines 270-272, which is close to our rough estimate $c_1\approx 1$ and will not dramatically change the proposed values for CS. In the revision, we will modify this section to improve the writing. **`(5)` For the oscillation bound in Eq. (4),** due to the use of some approximations and the consideration of worst-case scenarios, our bound may not be very tight. It is also not easy to visualize in Figure 1, since some parameters are hard to measure directly and quantitatively during training, e.g., $c_2$. In this paper, Eq. (4) serves as a qualitative estimate of feature oscillations, providing inspiration for and explanation of our proposed scaling method from a feature-map perspective. This indicates that the estimate in Eq. (4) is meaningful and insightful. Moreover, we are the first to conduct such an analysis of the UNet architecture in diffusion models. 
In future work, we will attempt more sophisticated analytical approaches to obtain tighter bounds. **`(6)` The comparison between $\frac{1}{\sqrt{2}}$-CS and ours on very deep networks.** If the number of layers is large enough, our scaling will indeed tend to zero. However, this situation is relatively rare in practical diffusion-model tasks using UNet. On the most popular image or text generation tasks, a UNet usually does not have a large number of long skip connections. Therefore, in this practical real-world setting, our scaling factor will not be scaled to zero, and our methods are more effective than $\frac{1}{\sqrt{2}}$-CS. As technology advances, our proposed scaling strategy will need further refinement to handle potential cases where diffusion models are employed in extremely deep networks. Additionally, if a long skip connection approaches zero but performance is maintained, this is not necessarily a bad thing, as such connections can be almost entirely removed to save memory and improve model inference speed without sacrificing performance. **`(7)` For the minor presentation issues**, per your suggestions, we will fix them in the revision. [1] Hanin, B. et al. How to Start Training: The Effect of Initialization and Architecture. NeurIPS 2018. [2] Balduzzi, D. et al. The Shattered Gradients Problem: If resnets are the answer, then what is the question? ICML 2017. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. In particular, I am satisfied with your empirical validation of the constant $c_1$, and the Table 2 comparisons to prior works. The charts in Figure 2 of the global author rebuttal are also welcome. The figure does indeed show the "oscillations" the authors hypothesize and try to fix by bounding. As such, I am raising my review score.
Summary: In this paper, the authors focus on the challenge of the instability arising from the commonly adopted U-Net architecture for diffusion models. In particular, the authors start by theoretically analyzing the influence of the coefficients of long skip connections in U-Net-based diffusion models, specifically on the stability of the forward and backward propagation, along with the robustness of the network. Motivated by the theoretical discussions, the authors propose two corresponding scaling methods that adapt the coefficients of the long skip connections for more stable training: a constant scaling method and a learnable scaling method, which are straightforward and conceptually similar to ideas in dynamic neural networks and meta-learning, but backed by solid theoretical analysis. Experiments on four datasets validate the effectiveness of the two proposed scaling schemes, with key code provided in the supplement. Strengths: - The proposed constant and learnable scaling methods are very straightforward, but come with solid theoretical analysis and guarantees. The results shown in Fig. 1 are also promising, successfully alleviating the oscillations. - Extensive ablation studies, including on the robustness of LS to the network architecture, are provided. - Key code is provided in the supplementary material for handy reproducibility. Weaknesses: - Method: The second part of the learnable scaling method in this paper is algorithmically related to meta-learning and dynamic neural networks. However, there is a lack of discussion of related works. Also, it would be great if the authors could discuss some alternative ways for learnable scaling by borrowing ideas from meta-learning and dynamic neural networks. - Experiments: The authors are encouraged to provide some qualitative comparative results with existing methods, as a complement to the quantitative results reported in, for example, Tab. 1. 
The instability issue in diffusion models with U-Net is not my primary research area. As such, I may not be capable of correctly evaluating the novelty of this paper. I will refer to the comments of other reviewers for my final justification. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please kindly refer to the weakness section for more details. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed in Sect. 6 of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful and very positive comments! In the following, we provide our point-by-point response and hope it helps address your concerns. We also look forward to the subsequent discussion, which may further help resolve the remaining issues. **`(1)` Per your suggestion, here we discuss some alternative ways for learnable scaling** by borrowing ideas from meta-learning and dynamic neural networks. - For dynamic neural networks, modules like LS can be viewed as attention modules. The work [1] suggests that such modules can be considered adaptive feature-regulating dynamic neural networks. So, by replacing or designing improved attention modules, we can effectively borrow ideas from dynamic neural networks to enhance the performance of diffusion models. Indeed, in our manuscript, we have already explored different attention modules. For example, Table 2 in the main text also presents the performance of some other alternative modules. - For meta-learning, we can loosely follow the steps inspired by MAML [2]. First, we can regard the scaling module $\mathcal{M}$ as a meta-learner in meta-learning, and regard each sample $\mathcal{T}_ i$ as a task. For the inner loop, given a task (sample) $\mathcal{T}_ i$, the vanilla diffusion loss can be used to tune $\mathcal{M}$ via one step of gradient descent, adapting $\mathcal{M}$ to the task $\mathcal{T} _ i$ and yielding a new task-specific scaling module $\mathcal{M}_ {T_ i}$. In the outer loop, we feed the sample $\mathcal{T}_ i$ into the diffusion model with the task-specific scaling module $\mathcal{M}_ {T_ i}$ again, and update the diffusion model and the scaling module $\mathcal{M}$ via the gradient. The benefit of this meta-learning approach is that we can learn a meta-model $\mathcal{M}$ as the scaling module and adapt it to a specific sample for generating LSC weights. 
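The inner/outer loop just described can be illustrated with a toy, first-order sketch: the scaling module is reduced to a single scalar, and each per-sample "task" gets a quadratic loss with an analytic gradient. All names and numbers here are illustrative placeholders, not taken from the paper.

```python
# Toy first-order sketch of the MAML-style loop above: the scaling
# module is a single scalar M, and each task t_i has the quadratic
# loss (M - t_i)^2 with analytic gradient 2*(M - t_i).
tasks = [0.5, 1.0, 1.5]        # stand-ins for per-sample targets t_i
M = 0.0                        # meta-parameter of the scaling module
inner_lr, outer_lr = 0.1, 0.05

for _ in range(200):
    meta_grad = 0.0
    for t in tasks:
        # Inner loop: one gradient step adapts M to task t.
        M_t = M - inner_lr * 2 * (M - t)
        # Outer loop (first-order approximation, skipping the Hessian):
        # accumulate the gradient of the adapted loss at M_t.
        meta_grad += 2 * (M_t - t)
    M -= outer_lr * meta_grad / len(tasks)

print(M)  # M converges towards the mean of the task targets
```

The full second-order MAML update would differentiate through the inner step, which is where the Hessian cost comes from; the first-order variant sketched here simply drops that term.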
Though this idea is promising, this meta-learning method requires computing the Hessian matrix and thus suffers from a high computational cost, which is not well suited to diffusion models, whose model size is often large. In the revision, we will supplement the above discussion and try to provide further validation experiments and analysis. [1] Han, Y. et al. Dynamic Neural Networks: A Survey. TPAMI, 2021. [2] Finn, C. et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. ICML 2017. **`(2)` For qualitative comparative results,** we have already provided some visualizations of our method (in Tab. 1) in the Appendix for qualitative analysis. From the visual results (namely, the generated images), our methods can generate high-quality images. Per your suggestion, we will include more comparisons with other methods in the revision. --- Rebuttal Comment 1.1: Title: Reviewer Ttzr Comment: Dear Reviewer Ttzr, Could you please comment on whether the rebuttal addresses your comments and concerns? Best, AC
Rebuttal 1: Rebuttal: We provide the necessary charts for the rebuttal stage in the attached PDF. Reviewers are kindly requested to refer to them. Pdf: /pdf/4ab6cb4d89415647e2a0a994b1c6a0a6d00f92ee.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper discusses the stability issues observed while training UNet in diffusion models, and theorizes on the role of Long Skip Connections (LSCs) in causing this instability. Diffusion models (DMs), lauded for their ability to model realistic data distributions, involve a forward and a reverse diffusion process. Most DMs use UNet as their backbone due to its use of LSCs, which facilitate long-distance information aggregation and prevent vanishing-gradient issues. However, despite the use of a shared UNet for predicting the injected noise at each step, instability is noticed during training. The paper's main contributions revolve around investigating this instability and deriving effective (and efficient) methods to address it. Specifically, it is theoretically proven that the coefficients of LSCs in UNet significantly affect the stability of forward and backward propagation as well as the robustness of UNet. The paper also proposes two coefficient scaling methods, Constant Scaling (CS) and Learnable Scaling (LS), designed to adjust the coefficients of LSCs in UNet for training stability. CS involves setting the coefficients as a series of exponentially decaying constants, while LS uses a small shared network to predict the scaling coefficients for each LSC. Strengths: - One of the key strengths of the paper is its rigorous theoretical analysis. It provides a comprehensive understanding of the instability of UNet in diffusion models by focusing on the significant impact of the coefficients of Long Skip Connections (LSCs). It's an important insight that broadens the understanding of UNet's performance in diffusion models. - The paper proposes two novel scaling methods, Constant Scaling (CS) and Learnable Scaling (LS), designed to enhance the stability of the UNet training process. These simple methods address the identified problem and provide tangible ways to improve training stability, indicating a proactive approach to problem-solving. 
Weaknesses: - The paper primarily focuses on the role of Long Skip Connections (LSCs) in causing training instability in UNet. While LSCs are a significant part of the model, the authors might have explored other potential contributing factors to this instability as well. Broadening the scope of their investigation could potentially have led to a more comprehensive understanding of the issue. - The placement of Figure 1 doesn't align well with its corresponding description, causing some disconnection. Ideally, figures on the first page should offer readers a quick, intuitive understanding of the paper's content. In its present state, Figure 1 seems to lack sufficient information to accomplish this. If this positioning is due to space constraints, please consider relocating it to a more suitable page. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful and positive comments. In the following, we provide our point-by-point response and hope it helps address your concerns. We also look forward to the subsequent discussion, which may further help resolve the remaining issues. **`(1)` In addition to LSCs, we have also explored other potential contributing factors.** We found that the decoder of UNet is also a significant part of the model. We have extensively explored applying a scaling strategy to the output of each block in the decoder, hoping to stabilize model training. However, we find that this approach cannot achieve satisfactory performance for diffusion models, and we are exploring the reasons behind this. **`(2)` For the layout of the figures and the corresponding text,** per your suggestion, we will arrange them in the revision so that the figures and the corresponding text are close to each other, improving readability. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' thoughtful response. I've carefully read the other reviews and the authors' rebuttals, and I've decided to stick with my original score. Based on the feedback (particularly concerning typos, figures, and writing), I believe the work can be further polished.
Summary: The paper presents a study and algorithm for scaling UNet's long-range connections such that convergence and stability can be improved. The results are strong in the setup of training diffusion models. Strengths: - The paper is very well written. The text and graphs are polished and it's easy to follow the idea and contributions. - The proposed modification of using an exponential scale over depth is simple and effective, and could be directly applicable to any UNet architecture. - The experiments are thorough and the results are strong. The proposed CS and LS algorithms are shown to be better than the vanilla UNet and the heuristic $1/\sqrt{2}$ scaling rule in terms of stability and performance. Weaknesses: - Explanation of the algorithm: Although the CS and LS algorithms are shown to be effective, an intuitive explanation of why they work is missing. - For CS, how is the direction of applying the exponentially decayed scale determined? For instance, can we do $\kappa_i=\kappa^{D-i-1}$? In the explanation in L251-259, I don't see anything specific to the direction or order. - For LS, I'm assuming there is no additional supervision on the calibration network. In that case, is there any explanation of why it can discover a reasonable scaling rule (Fig. 7)? Since both the weights and the scale contribute to the final output of each block, I think decoupling these would be challenging without additional regularization and supervision. Is there any explanation why LS discovers a decaying scaling curve similar to the CS heuristics, but not an increasing scale? - Experiments - For most of the experiments, the results are reported for "Org" (no scaling) and "CS-$1/\sqrt{2}$". However, Fig. 5 does not contain results for "CS-$1/\sqrt{2}$". How does the proposed method compare to it in terms of convergence? 
- Significance - Although the paper primarily focuses on training diffusion models, many of the analyses seem general enough to be applicable to other scenarios that use UNets, e.g., depth prediction and segmentation. I wonder if this is true and if there is a reason for only showing results on generation tasks. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Minor: - L86: There should be a space between Unet and training. - Fig. 3 (a): What does the unit m mean on the x-axis? I'm also missing the point this figure is trying to make, as referred to in L161-L163. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful and very positive comments! In the following, we provide our point-by-point response and hope it helps address your concerns. We also look forward to the subsequent discussion, which may further help resolve the remaining issues. **`(1)` For the direction of the exponentially decayed scale in CS**, it is derived from our theory. Specifically, if we use the reverse direction, namely $\kappa_i = \kappa^{N-i+1} \ (\kappa<1)$, the stability bound in Theorem 3 becomes extremely large. The main term of the stability bound in Theorem 3 can be written as $\mathcal{S}_{\text{r}} =\sum_{i=1}^N\kappa_i M_0^i = \kappa^NM_0 + \kappa^{N-1}M_0^2+...+ \kappa M_0^N$, which can be very large, since $M_0^N$ is large when $N$ is large and scaling it by a single factor $\kappa$ cannot sufficiently control its magnitude (here $M_0>1$; please see Line 230 in the manuscript). In contrast, our default setting $\kappa_i=\kappa^{i-1}$ of CS controls the main term in the stability bound well: $\mathcal{S} = \sum_{i=1}^N\kappa_i M_0^i = M_0 + \kappa^{1}M_0^2+...+ \kappa ^{N-1}M_0^N$, where the larger terms $M_0^{i+1}$ are weighted by smaller coefficients $\kappa^{i}$. In this way, $\mathcal{S}$ is much smaller than $\mathcal{S}_{\text{r}}$, which shows the advantage of our default setting. Besides, the following Table 1 also compares the above two settings using UViT on CIFAR-10 (batch size = 64), and shows that our default setting exhibits significant advantages. 
| Training step | 5k | 10k | 15k | 20k | 25k | 30k | 35k | 40k | 45k |
|---|---|---|---|---|---|---|---|---|---|
| $\kappa_i = \kappa^{N-i+1}$ | 67.26 | 33.93 | 22.78 | 16.91 | 15.01 | 14.01 | 12.96 | 12.34 | 12.26 |
| $\kappa_i=\kappa^{i-1}$ (ours) | 85.19 | 23.74 | 15.36 | 11.38 | 10.02 | 8.61 | 7.92 | 7.27 | 6.65 |

**`(2)` There are two possible reasons why LS discovers a decaying scaling curve similar to CS.** - On one hand, from a theoretical view, as discussed in our reply (1), for the $i$-th long skip connection $(1\leq i \leq N)$, the learnable $\kappa_i$ should be smaller to better control the magnitude of $M_0^i$, so that the stability bound, e.g., in Theorem 3, is small. This directly yields the decaying scaling strategy, which is also learned by the scaling network. - On the other hand, we can also analyze this observation in a more intuitive manner. Specifically, considering the UNet architecture, the gradient that travels through the $i$-th long skip connection during backpropagation influences the updates of both the first $i$ blocks in the encoder and the last $i$ blocks in the decoder. As a result, to ensure stable network training, it is advisable to slightly scale down the gradients on the long skip connections involving more blocks (i.e., those with larger $i$ values) to prevent potential gradient explosion. **`(3)` For the convergence curve of $1/\sqrt{2}$-CS**, we do not include it in Fig. 5 for a clearer comparison, since too many curves may lead to an unclear comparison and possible misunderstanding. But in Figure 1 of the original manuscript, we have compared with $1/\sqrt{2}$-CS and find that our methods can better stabilize UNet training. Moreover, in Table 1, our methods also show better synthesis performance than $1/\sqrt{2}$-CS. Per your suggestion, we have compared with $1/\sqrt{2}$-CS in terms of convergence. 
Taking the setting of UViT on CIFAR-10 as an example, Fig. 1 (a) and (b) in the rebuttal PDF show that our methods exhibit much faster convergence than $1/\sqrt{2}$-CS. **`(4)` Our analyses are based on diffusion models, and may not be directly applicable to other applications.** For example, our mathematical derivation relies on particularities of the diffusion model, such as the approximately normal distribution of the network's predictions (i.e., the noise). For other typical scenarios, such as image segmentation and depth prediction, the neural network's output consists of segmentation masks or depth maps, which do not fully satisfy the requirements of our analysis. While our theory cannot be directly applied to other tasks, there is potential to bridge these gaps through more carefully calibrated analyses. This could be an important direction for future work. **`(5)` For the value of $m$ in Fig. 3 (a)**, it denotes the feature dimension of $\mathbf{x}_t$ (Line 158). The small subfigure in Fig. 3 (a) experimentally verifies the distribution of $\mathbf{x}_0$ required in Lemma 3.2. The larger subfigure validates the conclusion of Lemma 3.2 that $\mathbb{E}(||\mathbf{x}_t||_2^2)$ is of the order $\mathcal{O}(m)$. In conclusion, the theoretical analysis and findings in Lemma 3.2 are consistent with the real statistical results on three datasets (CIFAR, CelebA, and ImageNet) in Fig. 3. **`(6)` For the typos**, per your suggestions, we will fix them in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for the clear explanation. I'd like to keep my original rating of acceptance.
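As a quick numerical check of the bound comparison in point (1) of this rebuttal, the two main terms $\mathcal{S}$ and $\mathcal{S}_{\text{r}}$ can be summed directly. The values of $N$, $M_0$, and $\kappa$ below are illustrative only, not taken from the paper.

```python
# Compare the main terms S (default decaying order) and S_r (reversed
# order) of the stability bound from point (1). N, M0 and kappa are
# illustrative values only: M0 > 1, kappa < 1.
N, M0, kappa = 10, 1.5, 0.7

# Default CS: kappa_i = kappa**(i-1), so the large M0**i terms
# receive the small weights.
S = sum(kappa ** (i - 1) * M0 ** i for i in range(1, N + 1))

# Reversed order: kappa_i = kappa**(N-i+1), so the largest term M0**N
# is damped only by a single factor kappa.
S_r = sum(kappa ** (N - i + 1) * M0 ** i for i in range(1, N + 1))

print(round(S, 2), round(S_r, 2))  # S is several times smaller than S_r
assert S < S_r
```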
Universal Prompt Tuning for Graph Neural Networks
Accept (poster)
Summary: The paper introduces a universal prompt-based tuning method called Graph Prompt Feature (GPF), and its variation (GPF-plus), for pre-trained Graph Neural Network (GNN) models. GPF is a universal method that can be applied to any pre-trained GNN model under any pre-training strategy. It operates on the input graph's feature space and can achieve an effect equivalent to any form of prompting function. GPF introduces a learnable vector $p$ of dimension $F$, which is added to the node features, where $F$ corresponds to the dimensionality of the node features. The authors also propose GPF-plus, which assigns an independent learnable vector $p_i$ to each node $u_i$ in the graph, instead of a single vector $p$. Experimental results show that GPF outperforms fine-tuning, with an average improvement of about 1.4% in full-shot scenarios and 3.2% in few-shot scenarios. Strengths: 1) Theoretical Guarantees: The authors provide rigorous derivations and theoretical analyses to demonstrate the universality and effectiveness of GPF. They prove that GPF can achieve an effect equivalent to any form of prompting function and can outperform fine-tuning in certain cases. This theoretical foundation strengthens the credibility of their proposed method. 2) Experimental Validation - Reproducibility: The paper includes extensive experiments conducted under various pre-training strategies, including both full-shot and few-shot scenarios. The experimental results consistently show that GPF outperforms fine-tuning, achieving an average improvement of about 1.4% in full-shot scenarios and 3.2% in few-shot scenarios. Furthermore, GPF surpasses existing specialized prompt-based tuning methods designed for specific pre-training strategies. Moreover, the availability of the source code enhances the reproducibility of the experiments. 
Weaknesses: 1) A weakness of the paper is the lack of a clear explanation or motivation regarding why the addition of the learnable vector p to the input features in the GPF model leads to better results compared to linear probing with a trainable layer in the final layer. While the Appendix demonstrates the superior performance of GPF, the paper does not provide a comprehensive analysis or reasoning behind this improvement. The absence of a clear explanation may leave readers questioning the underlying mechanisms and factors that contribute to the observed performance gain. Without a proper understanding of the motivations and justifications for the proposed approach, it becomes challenging to assess the significance and generalizability of the findings. 2) Theorem 2 assumes a simple linear 1-layer GNN (without activations). While the theorem provides theoretical insights into the convergence properties of this specific type of GNN, it may not accurately capture the behavior of more complex GNN architectures commonly used in practice. In real-world applications, GNNs often incorporate activation functions to introduce non-linearity and capture more intricate patterns in graph data. By focusing solely on a linear 1-layer GNN without activations, the theorem may limit the generalizability of its conclusions and overlook important aspects of GNN models commonly used in practical scenarios. Therefore, the applicability of the theorem's findings to more sophisticated GNN architectures with non-linear activations remains uncertain. 3) One weakness of the paper is the exclusive use of Graph Isomorphism Network (GIN) as the backbone GNN for finetuning. While GIN is a widely used GNN architecture, it may not be the most expressive or optimal choice for every graph-related task[1,2]. The paper does not provide a comprehensive exploration or comparison of different backbone GNN architectures in the finetuning process. 
Moreover, maybe a more powerful GNN would be able to achieve better performance in the fine-tuning stage. Therefore, it would be interesting for the authors to examine if their approach leads also to better results when a more powerful GNN is used. [1] Frasca, Fabrizio, et al. "Understanding and extending subgraph gnns by rethinking their symmetries." Advances in Neural Information Processing Systems 35 (2022): 31376-31390. [2] Morris, Christopher, et al. "Weisfeiler and leman go neural: Higher-order graph neural networks." Proceedings of the AAAI conference on artificial intelligence. Vol. 33. No. 01. 2019. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1) Explanation of the relationship between GPF and linear probing: The paper compares GPF with linear probing, which involves a trainable layer in the final layer. It would be beneficial if the authors could discuss the relationship between these two approaches and explain why GPF, with its learnable vector added to the input features, achieves superior performance compared to linear probing. Is there any inherent advantage or characteristic of GPF that enables it to outperform linear probing? 2) Generalizability of Theorem 2: The paper presents Theorem 2, which focuses on a simple linear 1-layer GNN without activations. However, many practical GNN architectures incorporate activation functions to introduce non-linearity. It would be helpful if the authors could discuss the generalizability of the theorem's conclusions to more complex GNN models commonly used in real-world applications. Can the findings of Theorem 2 be extended to GNNs with non-linear activations? If not, what are the limitations or implications of the theorem in the context of practical GNN architectures? 3) Have you considered or experimented with using more powerful GNN architectures, such as subgraph GNNs or higher order GNNs, as the backbone GNN for the finetuning stage? 
If so, how does the performance of GPF and GPF-plus compare when utilizing these alternative GNN architectures? I will be happy to increase my score for the paper, if the authors adequately address the mentioned weaknesses and provide satisfactory explanations and improvements in response to the questions and suggestions raised. *** After rebuttal increased score to "Borderline Accept" ** Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Response** Dear reviewer KwHM, We hope our point-by-point responses can address your concerns and provide you with better clarification. 1. (Weakness 1 \& Question 1) Comparison with linear probing. The difference between linear probing and GPF lies in the introduction of additional learnable parameters in GPF that modify the input graph. While linear probing focuses solely on training the last linear head $\theta$ of the model, GPF not only trains $\theta$ but also introduces a learnable vector $p$ into the input graph's feature space. To emphasize the theoretical advantage of GPF over linear probing, we define the pre-trained model as $f$, the input graph as $G$, the linear head as $\theta$, and the target matrix as $Y$. Let $f(G)=H$ represent the representation matrix obtained from the pre-trained model. In linear probing, the goal is to find the optimal $\theta$ that minimizes the discrepancy between $H\theta$ and $Y$. Meanwhile, GPF incorporates additional learnable parameters $p$ into the input graph $G$, resulting in $f(G+p)=H^\prime$. When applying GPF, we can simultaneously adjust $H^\prime$ and $\theta$ to make $H^\prime\theta$ close to $Y$. This simultaneous adjustment yields better results than only modifying $\theta$, thus establishing the theoretical superiority of GPF over linear probing. Notably, linear probing cannot modify the representations $H$ obtained from the pre-trained model. In contrast, GPF addresses this limitation by manipulating the input to modify the representations obtained by the pre-trained model. Additionally, the comparison between GPF and linear probing can be regarded as an extension of Theorem 2. When the GNN model does not contain any learnable parameters, Theorem 2 highlights the theoretical advantage of GPF over linear probing. 
Furthermore, akin to the success of prompt tuning in the NLP domain, the merit of GPF can also be attributed to its capacity to bridge the disparity between pre-training and downstream tasks through input transformation. Consequently, compared to linear probing, GPF exhibits advantages in both intuition and theory. 2. (Weakness 2 \& Question 2) Generalizability of Theorem 2. Theorem 2 can also be applied to multi-layer GNN models, albeit with slight variations in some coefficients during the derivation process. However, incorporating non-linear activation functions presents a significant challenge for theoretical analysis. The crux lies in accurately estimating the expressive power of the multi-layer perceptron (MLP) architecture within the model. The universal approximation theorem describes the upper bound on the expressive power of MLP, stating that MLP networks with infinite depth or infinite width can theoretically approximate any function. Consequently, comparing the theoretical optimal expressive power of any network with an MLP architecture becomes meaningless. However, in practical scenarios, especially for GNN models with limited dimensions and depth, the real impact of multi-layer linear transformations with non-linear activation functions is not significantly different from that of single-layer linear transformations [1,2]. Therefore, similar to many existing works that analyze the theoretical capacity of models [3,4], we omit the consideration of non-linear activation functions in Theorem 2. Nonetheless, the presented derivation can be generalized to practical applications with only minor discrepancies. 3. (Weakness 3 \& Question 3) Results on more powerful GNN backbones. Your suggestion is rather rational, and we have included additional experimental results using more powerful GNN backbones. The results can be found in B.1 of the global response. 
From the experimental results, we can find that our method still achieves better results on these powerful models. We hope our explanation can dispel your concerns. If you have any other questions or concerns, please feel free to let us know. **Ref** [1] Simplifying Graph Convolutional Networks [2] DFG-NAS: Deep and Flexible Graph Neural Architecture Search [3] Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution [4] In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning --- Rebuttal Comment 1.1: Title: Response to Authors Comment: I would like to thank the authors for their rebuttal and the new experimental results. I increase my score to "borderline accept". --- Reply to Comment 1.1.1: Comment: Thanks for your support in our work. Your valuable feedback has made our work better.
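The core GPF operation discussed in this exchange — adding a learnable prompt to the node features before the frozen pre-trained model — can be sketched as follows. This is a minimal NumPy illustration; the shapes and values are made up, and the training of the prompt parameters (jointly with the linear head $\theta$) is omitted.

```python
import numpy as np

# Minimal sketch of the GPF / GPF-plus input transformation.
# n nodes with F-dimensional features; all values are placeholders.
rng = np.random.default_rng(0)
n, F = 5, 8
X = rng.normal(size=(n, F))   # node-feature matrix of the input graph G

# GPF: a single learnable vector p of dimension F, shared by all nodes.
p = np.zeros(F)               # in practice, trained jointly with the head
X_gpf = X + p                 # broadcast: the same p is added to every node

# GPF-plus: an independent learnable vector p_i for each node u_i.
P = np.zeros((n, F))
X_gpf_plus = X + P            # row i of P is added to node u_i's features

# The frozen pre-trained GNN f then consumes the prompted graph,
# f(G + p) = H', and only the prompt and the linear head are trained.
assert X_gpf.shape == (n, F) and X_gpf_plus.shape == (n, F)
```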
Summary: This paper proposes a universal prompt-based tuning method, called GPF, for pre-trained GNN models. The idea is to operate on the feature space of the downstream input graph. The authors theoretically showed that GPF can achieve results equivalent to any prompting function, and is not weaker than full fine-tuning. Experiments showed competitive results. Strengths: 1. This paper proposes a universal graph prompting approach, which basically operates on the input graph feature space. As far as I know, the idea of augmenting input graph node features for prompting is novel. 2. The authors provide theoretical analysis, which showed that GPF can achieve results equivalent to any prompting function, and is not weaker than full fine-tuning. 3. Experiment results showed the performance gains. Weaknesses: 1. The motivation for universal prompting, and its advantages over specialized prompting approaches, need to be better explained. 2. Comparison with other prompting methods is only performed on the models pre-trained by edge prediction. Some minor comments: 1. It is not clear to me how the Avg. in Table 1 and Table 2 is obtained and what it means. Is it just the average across all datasets? 2. I suggest the authors include the detailed pretraining settings in experiments (e.g., datasets used) instead of in the appendix. 3. Fig. 1 is not informative. The authors should use a figure to better illustrate how GPF works, and the main differences with existing prompting approaches. 4. Table captions should be on top of tables instead of below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Potential limitations of the proposed GPF not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Response** Dear reviewer dbJu, We really appreciate your comments on our work. We hope our response can address your concerns. 1. (Weakness 1) The motivation and advantages of universal prompting. The universal graph prompt tuning method that we proposed offers three main advantages over specialized prompting methods: a. Generalization without knowledge of pre-training details: One of the primary advantages of our method is that it eliminates the need for an in-depth understanding of the pre-training strategies employed by the pre-trained GNN models. In practical applications, gathering such detailed knowledge can be challenging and inconvenient. Our universal method can achieve satisfactory results without requiring extra information about the model's pre-training details. b. Applicability to complex graph pre-training tasks: Existing graph prompt tuning methods predominantly focus on the pre-training task of link prediction. However, when dealing with more abstract and complex graph pre-training tasks, such as graph infomax and graph contrastive learning, it becomes challenging for researchers to design appropriate prompt templates intuitively. Our universal method circumvents the need for an explicit characterization of concrete prompt templates, thus making it applicable to models trained on any graph pre-training task. c. Practical and theoretical effectiveness: Our universal method is easy to apply in practice and comes with strict theoretical guarantees of effectiveness. It offers valuable insights for future investigations in this field. We appreciate your suggestion to include additional discussion, and we have added similar content in our latest revision. 2. (Weakness 2) Comparing existing graph prompt tuning methods under alternative pre-training strategies. 
We compare our approach with existing graph prompt tuning methods in the context of edge prediction because both GPPT and GraphPrompt explicitly state in their papers that they are designed specifically for the pre-training task of link prediction. These methods explicitly transform node classification tasks into link prediction tasks, which limits their ability to bridge the gap between upstream and downstream tasks when applied to other pre-training strategies. To provide more evidence, we present additional experimental results that indicate the limitations of these methods in dealing with other pre-training tasks. As an example, the table below shows the ROC-AUC scores (%) of these methods on the model pre-trained by graph infomax. | | | BBBP | Tox21 | ToxCast | SIDER | ClinTox | BACE | | ------- | ----------- | -------------- | ------------- | -------------- | -------------- | -------------- | -------------- | | Infomax | FT | **67.55±2.06** | 78.57±0.51 | 65.16±0.53 | 63.34±0.45 | 70.06±1.45 | 81.32±1.25 | | | GPPT | 56.92±0.46 | 59.37±0.84 | 53.73±0.92 | 48.23±0.79 | 53.22±0.84 | 56.22±0.16 | | | GPPT(w/olo) | 62.87±0.05 | 71.50±0.70 | 57.55±0.13 | 55.77±0.19 | 54.49±0.40 | 75.37±0.30 | | | GraphPrompt | 62.95±0.89 | 61.65±0.40 | 54.98±0.32 | 51.21±0.21 | 47.53±0.12 | 54.77±0.99 | | | GPF | 66.83±0.86 | 79.09±0.25 | 66.10±0.53 | **66.17±0.81** | 73.56±3.94 | 83.60±1.00 | | | GPF-plus | 67.17±0.36 | **79.13±0.70** | **66.35±0.37** | 65.62±0.74 | **75.12±2.45** | **83.67±1.08** | 3. Collection of minor comments. (a) You are right: the Avg. in Tables 1 and 2 presents the average values across all datasets. (b) Thanks for your suggestion. We have moved the detailed experiment settings into the main text. (c) Thanks for your suggestion. We have adjusted the last two figures to make their meaning clearer. (d) Thanks for your detailed advice, and we have resolved this mistake. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. 
I have read comments from other reviewers, as well as rebuttals, and I think my score is reasonable and will keep my rating. --- Reply to Comment 1.1.1: Comment: Thanks for your support in our work. Your valuable feedback has made our work better.
Summary: This paper aims for efficient adaptation of pre-trained graph neural networks to downstream tasks. A simple prompt tuning method (i.e. GPF) is proposed for adaptation and is applicable to GNNs pretrained with any objective. The main idea of GPF is to add a learnable vector on all the node features in the input graph, while an improved version (i.e. GPF-plus) leverages a set of learnable vectors to compose the bias added to each node feature. Experiments show that GPF and GPF-plus achieve better results than the full finetuning. Strengths: 1. Figure 1 illustrates the difference between the proposed method and existing methods. 2. Provide a theoretical analysis of the proposed method. 3. The proposed method is simple and easy to understand. Weaknesses: Major 1. The novelty of GPF may be limited because it just adds a bias term to the input of the pretrained network and there is nothing specific to the graph structure. However, bias-term finetuning is not a new idea [a, b]. 2. The authors claim that the developed method is universal for all graph learning. However, the theoretical proof is only valid for graph classification downstream tasks where the node features are sum-pooled as the representation. There is no evidence showing that GPF is applicable to other types of downstream tasks. 3. Baselines are missing in Tables 1 & 2. For example, Linear Probing is a standard baseline, and the residual adapter in [c] can also be a baseline. 4. Given the variance shown in Tables 1 & 2, the improvement of GPF-plus over GPF is marginal. Why is it an enhanced version? If GPF is theoretically feasible to address the problem (section 3.4), why is GPF-plus needed? [a] BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. ACL’22. [b] Tinytl: Reduce memory, not parameters for efficient on-device learning. NeurIPS’20. [c] CLIP-Adapter: Better Vision-Language Models with Feature Adapters. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The author is suggested to address the concerns in the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Limitation is not discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
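The prompting mechanism summarized in this review — GPF adds one shared learnable vector to every node feature, while GPF-plus composes a per-node bias from a set of k learnable basis vectors — can be sketched in a few lines. This is an illustrative plain-Python sketch only; the attention-style weighting in `gpf_plus` and all names are our assumptions based on the review and rebuttal text, not the authors' implementation.

```python
import math

def gpf(node_features, p):
    """GPF: add one shared learnable prompt vector p to every node's feature."""
    return [[xd + pd for xd, pd in zip(x, p)] for x in node_features]

def gpf_plus(node_features, bases, attn_vecs):
    """GPF-plus: each node gets its own bias, an attention-weighted combination
    of k learnable basis vectors (attn_vecs[j] scores basis j against a node)."""
    prompted = []
    for x in node_features:
        scores = [sum(a * xd for a, xd in zip(attn_vecs[j], x))
                  for j in range(len(bases))]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]   # stable softmax over k bases
        total = sum(exps)
        alphas = [e / total for e in exps]
        bias = [sum(alphas[j] * bases[j][d] for j in range(len(bases)))
                for d in range(len(x))]
        prompted.append([xd + bd for xd, bd in zip(x, bias)])
    return prompted
```

Note how the parameter counts differ, which matches the discussion in the rebuttals: `gpf` has exactly one vector of the feature dimension, while `gpf_plus` scales with the chosen number of bases k.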
Rebuttal 1: Rebuttal: ### **Response** Dear reviewer Cfab, We hope our point-to-point responses can address your concerns and better clarify the contributions and value of our work. 1. (Weakness 1) The novelty of GPF and the comparison with existing methods. The main contributions of our work compared to existing methods can be summarized as follows: a. We have pioneered the general prompt tuning method on pre-trained GNN models, offering rigorous derivations to fill the gap in the lack of theoretical validity for the effectiveness of graph prompt tuning. Specifically, GPF can be seen as a general graph template theoretically applicable to any pre-training task. Despite its relatively simple form, GPF's effectiveness in bridging the gap between pre-training and downstream tasks is theoretically guaranteed (Theorem 1). While the bias-term tuning methods you mentioned are designed for parameter efficiency, our method stands out by considering the influence of the pre-training task, which distinguishes our method from conventional parameter tuning techniques. b. Our work provides valuable insights that can guide future investigations in this field. We aim to find a universal prompt tuning method that can provide effective prompts for any graph pre-training strategy. Our paper introduces GPF and GPF-plus as the most intuitive and concise options for universal graph prompt tuning. Based on our deduction and analysis, designing more complex prompt tuning methods is possible. We highlight a recent work, "All in One: Multi-task Prompting for Graph Neural Networks", which received the best research paper award at SIGKDD 2023 and successfully incorporated our innovative ideas with multi-task meta-learning. Our work serves as its theoretical foundation, providing indispensable theoretical guarantees for their proposed method. 
Our method, GPF, operates on the feature space of the input graph, but this does not mean that we overlook modifications to the graph structure. While we focus on adding extra learnable vectors to the feature space, it is important to note that we have demonstrated the ability of this operation to achieve an equivalent effect to any structure modification. Detailed proofs regarding this equivalence can be found in Propositions 4 and 5 in the appendix. Consequently, our method does not need explicit structural modifications. There are several advantages to this simplification. The adjacency matrix, an N*N matrix, is much larger than the feature matrix. By working on the feature matrix, our method significantly reduces resource consumption and minimizes training difficulties. 2. (Weakness 2) Applicability of theoretical proof and other downstream tasks. The theoretical proof provided can also be extended to node-level tasks. From a subgraph aggregation perspective, the node representations can be considered as the subgraph representations of the central nodes. This means that obtaining node representations can be transformed into obtaining subgraph representations. To further explain the application of GPF on node-level tasks, please refer to Section A.1 in the appendix. Furthermore, in Section 3.4 of the main text, we demonstrate that GPF achieves high flexibility and effectiveness in generating graph representations. This implies that when applied to node-level tasks, GPF can also obtain sufficiently flexible and effective node representations. We apologize for the confusion and want to assure you that we have included a theoretical discussion on the effectiveness of GPF for node-level tasks in our latest revision. To address your concerns, we also provide additional empirical results on node classification tasks, which can be found in B.1 of the global response. 
Regarding the sum pooling assumption in the theorem derivation, it is not a necessary condition for the theorem to hold. Our analysis can be extended to accommodate any weighted aggregation readout function, including average pooling, max/min pooling, and classical hierarchical pooling. These pooling methods are all encompassed within our analysis. We use the sum pooling assumption to facilitate a straightforward and comprehensible analysis. And we have included additional discussions on other pooling techniques in our latest revision. 3. (Weakness 3) The comparison with linear probing and residual adapter. We have already conducted a thorough performance comparison of our method with linear probing, and the corresponding results can be found in Section B.5 within the appendix. Additionally, we have evaluated our method against other classical tuning methods, and the corresponding results can be found in Section B.6 within the appendix. Regarding your suggestion to compare our method with the residual adapter, we have taken it into consideration and provided the results in B.3 of the global response. 4. (Weakness 4) Advantages of GPF-plus over GPF. We introduce GPF-plus to enhance the expressiveness and scalability of our method. Both GPF and GPF-plus are theoretically guaranteed to be universal. However, GPF-plus offers greater flexibility by allowing for different prompted features to be provided for each node. When GPF and GPF-plus are employed to approximate a specific graph template, GPF-plus possesses a larger set of feasible solutions than GPF due to its enhanced flexibility. Consequently, GPF-plus has a greater chance of obtaining better solutions during the practical training stage. Additionally, while the number of parameters in GPF is fixed according to the dimensionality of the node features, GPF-plus provides the option to adjust the number of parameters freely by selecting the number of prompt feature bases. 
This adaptability enables GPF-plus to flexibly adjust to the characteristics of downstream datasets. We hope our explanation can dispel your concerns. If you have any other questions or concerns, please feel free to let us know. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thanks to the authors for providing the rebuttal. My concerns have been addressed and the score will be increased. --- Reply to Comment 1.1.1: Comment: Thanks for your support in our work. Your valuable feedback has made our work better.
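The rebuttal's claim that sum, average, and even max pooling are all instances of a weighted-aggregation readout can be made concrete with a small sketch. This is our own plain-Python illustration (all names are hypothetical), not the paper's formalism: sum and mean use fixed weights, while max pooling corresponds to input-dependent weights that place 1 on the argmax node per dimension.

```python
def weighted_readout(node_reprs, weights):
    """Generic weighted-aggregation readout: out[d] = sum_i w_i * h_i[d]."""
    dim = len(node_reprs[0])
    return [sum(w * h[d] for w, h in zip(weights, node_reprs))
            for d in range(dim)]

def sum_pool(H):
    return weighted_readout(H, [1.0] * len(H))            # all weights 1

def mean_pool(H):
    return weighted_readout(H, [1.0 / len(H)] * len(H))   # all weights 1/N

def max_pool(H):
    """Max pooling as input-dependent weights: per dimension, weight 1 sits on
    the argmax node and 0 everywhere else."""
    n, dim = len(H), len(H[0])
    out = []
    for d in range(dim):
        j = max(range(n), key=lambda i: H[i][d])          # argmax node for dim d
        weights = [1.0 if i == j else 0.0 for i in range(n)]
        out.append(sum(weights[i] * H[i][d] for i in range(n)))
    return out
```

Since each readout is a weighted sum over node representations, an analysis stated for sum pooling can, as the rebuttal argues, be carried through with the corresponding weight choices.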
Summary: This paper introduces the Graph Prompt Feature (GPF) approach, which aims to adapt pre-trained Graph Neural Networks (GNNs) for downstream tasks by appending tunable embeddings onto the frozen node embeddings. By doing so, the authors achieve a significant reduction in the number of parameters required for the downstream task compared to full fine-tuning. The experimental results demonstrate promising performance in binary classification tasks on Molecular and Protein datasets when compared to full fine-tuning. Strengths: The paper is clearly written, the contribution is easy to understand and provide theoretical proof on effectiveness. Weaknesses: There are several concerns that I would like to see addressed: 1) In the formulation of GPF, its relation to the concept of prompting seems unclear. It is challenging to connect this approach to the motivating concept of prompting in NLP. Instead, GPF appears to resemble task-specific adaptive fine-tuning in transformers, such as LoRA[1]. It would be helpful if the authors could elaborate more on where the similarity between GPF and prompting lies. 2) The choice of downstream tasks seems to be limited to small binary classification tasks, and the comparison is only made against fine-tuning as a baseline. Why are there no results presented for multi-class classification tasks? This decision weakens the potential of the proposed method. 3) Unlike previous graph prompt methods, GPF does not make any modifications to the graph's structure. It would be beneficial if the authors could explain how graph classification highlights the strength of GPF, as the additional results provided may seem somewhat redundant. 4) There is no detail provided on the number of features appended into the embedding space for each node. Since GPF-plus suggests a simple size range, does this mean that the authors used different settings for GPF-plus for each dataset? 
This is not discussed in the experimental settings and might make the results not directly comparable. [1] https://arxiv.org/abs/2106.09685 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: see weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The author did not address any limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Response** Dear reviewer GUF8, We hope our point-to-point responses can address your concerns and provide you with better clarification of the contributions and value of our work. 1. (Weakness 1) The relationship between our methods and prompting methods. Our method, GPF, is a general prompt tuning method that provides theoretically guaranteed prompts for any pre-training task within the context of GNNs. What distinguishes GPF from conventional parameter tuning techniques, such as LoRA, is its consideration of the influence of the pre-training task. Specifically, in the field of NLP, prompt tuning methods transform downstream tasks into sentence completion tasks to align them more closely with the pre-training tasks. Similarly, in the field of graphs, researchers aim to bridge the gap between downstream and pre-training tasks for GNN models. Graph prompt tuning involves selecting suitable graph templates based on the pre-training tasks, and this process is formally described in Section 3.2 of our paper. All existing graph prompt tuning methods, such as [1,2], satisfy our definition, with the only difference lying in the selection of distinct $\psi_i(\cdot)$ in Formula 3. Built upon the principles of graph prompt tuning, our method, GPF, is a general graph template that is theoretically applicable to any pre-training task. While the form of GPF may appear relatively simple, its effectiveness in bridging the gap between pre-training tasks and downstream tasks is theoretically guaranteed (Theorem 1). Traditional parameter tuning methods are primarily designed to optimize parameter efficiency. However, our method is designed with the primary goal of bridging the gap between pre-training tasks and downstream tasks. Therefore, our method should be categorized as a prompt tuning method that also offers advantages in terms of parameter efficiency. 2. (Weakness 2) The results for multi-class classification tasks. 
Not all experiments involved are conducted on binary classification tasks. The IMDB-M dataset illustrates a multi-class classification task on graphs, and you can refer to the corresponding results in Table 10 in the appendix. We have included additional results in B.2 of the global response to address your concerns about GPF's performance on multi-class tasks. These experimental results showcase the satisfactory performance of our methods in handling multi-class classification tasks, and we hope they can dispel your concern. In addition to comparing the results with fine-tuning, we have also conducted a comparison between GPF and existing graph prompting methods, as presented in Table 2. Furthermore, we have assessed GPF's performance against linear probing, as shown in Table 7, and compared GPF with other model tuning methods, as presented in Table 8. Through these extensive comparisons, we aim to evaluate GPF's performance comprehensively. 3. (Weakness 3) The modifications to the graph structure. Our method, GPF, operates on the feature space of the input graph, but it does not imply that we overlook modifications to the graph structure. While we focus on adding extra learnable vectors to the feature space, it is important to note that we have demonstrated the ability of this operation to achieve an equivalent effect to any structure modification. Detailed proofs regarding this equivalence can be found in Propositions 4 and 5 in the appendix. Consequently, our method does not need explicit structural modifications. There are several advantages to this simplification. The adjacency matrix, an N*N matrix, is much larger than the feature matrix. By working on the feature matrix, our method significantly reduces resource consumption and minimizes training difficulties. Furthermore, it is worth noting that our method, GPF, is not restricted to graph classification tasks alone. 
Section A.1 in the appendix provides detailed elaboration on how GPF extends to encompass node-wise tasks (node classification and link prediction). To address your concern regarding GPF's performance on node-wise tasks, we have included additional experimental results in B.1 of the global response. 4. (Weakness 4) Details about the experiment settings. The dimension of the basic prompt features matches that of the input graph node features. In our experiments, when applying the GCC framework, the dimension of the prompt features is set to 64, while for other frameworks, it is set to 300. You can find the relevant description in the main text at line 203 and Section B.4 in the appendix. Regarding the hyper-parameter $k$ of GPF-plus, it determines the number of prompt feature bases utilized. This parameter can be adjusted manually and may vary across different datasets. You can find a detailed description of this parameter in Equation 9 and lines 300-301 of the main text. We apologize for the confusion, and we have included an additional hyper-parameter list in the latest revision to provide more clarity on these parameters. We hope our explanation can dispel your concerns. If you have any other questions or concerns, please feel free to let us know. **Ref** [1] GPPT: Graph Pre-training and Prompt Tuning to Generalize Graph Neural Networks. [2] GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks. --- Rebuttal Comment 1.1: Comment: I'd like to express my gratitude to the authors for their thorough response. After careful consideration, I'm convinced that the authors have adeptly addressed all the concerns I raised. In light of this, I will revise my score to a 5. Thank you for your diligent efforts. --- Reply to Comment 1.1.1: Comment: Thanks for your support in our work. Your valuable feedback has made our work better.
Rebuttal 1: Rebuttal: ### **Global Response** Dear all reviewers, We appreciate your valuable comments on our work. We provide the following clarification and additional experimental results based on feedback. **A. Contributions and influence** We propose a universal graph prompt tuning method that can be applied to any pre-trained GNN models, and its effectiveness is theoretically guaranteed. By considering the impact of the pre-training task, our method is distinguished from conventional parameter tuning techniques. Importantly, our work pioneers rigorous theoretical analysis of the effectiveness of graph prompting methods, offering valuable insights for future investigations in this field. We highlight a recent work, "All in One: Multi-task Prompting for Graph Neural Networks", which received the best research paper award at SIGKDD 2023 and successfully incorporated our innovative ideas with multi-task meta-learning, leading to remarkable performance improvements. Our work serves as its theoretical foundation, providing indispensable theoretical guarantees for their proposed method. **B. Additional experiment results** 1. The performance of advanced backbone models on node classification. We conduct experiments with advanced backbone models SUN(EGO+) [1] and 1-2-GNN [2] with subgraph aggregation on four node classification datasets pre-trained by Infomax. The following table presents the accuracy (%). 
| SUN(EGO+) | Cora | CiteSeer | PubMed | Ogbn-arxiv | | :-------: | :------------: | :------------: | :------------: | :------------: | | FT | 83.15±0.23 | 71.10±0.27 | 79.11±0.21 | 70.89±0.11 | | GPF | 83.46±0.33 | 72.81±0.67 | 79.91±0.42 | 70.92±0.09 | | GPF-plus | **83.70±0.45** | **72.89±0.53** | **80.79±0.95** | **70.95±0.17** | | 1-2-GNN (Subgraph) | Cora | CiteSeer | PubMed | Ogbn-arxiv | | :----------------: | :------------: | :-----------: | :------------: | :------------: | | FT | 83.47±0.49 | 71.33±0.84 | 78.27±0.95 | 70.32±0.97 | | GPF | 83.85±0.25 | 71.29±0.76 | 79.87±0.44 | **71.03±0.05** | | GPF-plus | **84.01±0.87** | **71.56±0.10** | **79.95±0.20** | 70.82±0.40 | 2. The performance on multi-class classification. We conduct experiments on the multi-class dataset Ogbg-ppa, and the following table presents the accuracy (%). | Ogbg-ppa | Infomax | EdgePred | AttrMasking | ContextPred | GCL | | :------: | :------------: | :------------: | :------------: | :------------: | :------------: | | FT | 69.17±0.52 | 69.59±0.82 | 69.02±0.55 | 69.35±0.97 | 68.43±0.66 | | GPF | 69.55±0.43 | **70.03±0.37** | 69.76±0.69 | 69.40±0.34 | 69.03±0.51 | | GPF-plus | **69.95±0.27** | 69.73±0.37 | **69.88±0.43** | **69.51±0.37** | **69.11±0.63** | Besides, some datasets in Table 1 and Table 2 of our paper contain multiple binary classification tasks. When we combine these binary classifications into a multi-class task and conduct experiments, we obtain the following ROC-AUC scores (%). 
| | | Tox21 | ToxCast | SIDER | ClinTox | | :------: | :------: | :------------: | :------------: | :------------: | :------------: | | Infomax | FT | 63.84±0.80 | 52.18±0.19 | 52.55±0.51 | 60.11±0.95 | | | GPF | 64.17±0.95 | 53.35±0.47 | **55.21±0.19** | 63.81±0.33 | | | GPF-plus | **64.28±0.13** | **53.54±0.28** | 54.81±0.54 | **65.38±0.94** | | EdgePred | FT | 63.75±0.72 | 52.36±0.09 | 53.45±0.75 | 59.30±0.44 | | | GPF | 64.85±0.18 | 52.78±0.55 | 56.35±0.05 | **59.60±0.51** | | | GPF-plus | **65.13±0.47** | **52.95±0.99** | **56.61±0.30** | 58.82±0.45 | 3. Comparison with the adapter. We conduct experiments with an adapter [3] before the final linear head. The final output representation dimension of the adapter is the same as the output representation dimension of the pre-trained model. The following table presents the ROC-AUC scores (%). | | | BBBP | Tox21 | ToxCast | SIDER | ClinTox | BACE | | :------: | :------: | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | Infomax | Adapter | 64.19±0.46 | 76.30±0.02 | 63.21±0.20 | 62.14±0.39 | 69.33±0.01 | 80.51±0.26 | | | GPF | 66.83±0.86 | 79.09±0.25 | 66.10±0.53 | **66.17±0.81** | 73.56±3.94 | 83.60±1.00 | | | GPF-plus | **67.17±0.36** | **79.13±0.70** | **66.35±0.37** | 65.62±0.74 | **75.12±2.45** | **83.67±1.08** | | EdgePred | Adapter | 63.95±0.25 | 75.21±0.41 | 63.72±0.26 | 61.78±0.23 | 67.12±0.28 | 79.45±0.40 | | | GPF | **69.57±0.21** | 79.74±0.03 | 65.65±0.30 | 67.20±0.99 | **69.49±5.17** | 81.57±1.08 | | | GPF-plus | 69.06±0.68 | **80.04±0.06** | **65.94±0.31** | **67.51±0.59** | 68.80±2.58 | **81.75±2.09** | Due to limited time, some experiments are not validated under all pre-training strategies and datasets. We will complete them later. **Ref** [1] Understanding and extending subgraph gnns by rethinking their symmetries. [2] Weisfeiler and leman go neural: Higher-order graph neural networks. [3] Parameter-Efficient Transfer Learning for NLP.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes the Graph Prompt Feature (GPF) to improve Graph Neural Networks (GNNs) performance amidst scarce labeled data and low out-of-distribution generalization. GPF, a universal prompt-based tuning method, operates on the input graph's feature space and is applicable to any GNN architecture. The authors also present a stronger variant, GPF-plus, providing diverse prompted features for different nodes. Both methods show better performance than fine-tuning, validated through theoretical analyses and extensive experiments. Strengths: 1. GPF and GPF-plus are universal, model-agnostic solutions that can be applied to any pre-trained GNN model, enhancing their general applicability in diverse scenarios. 2. The paper provides theoretical guarantees for the effectiveness of GPF and GPF-plus, strengthening the scientific rigor of the presented methods. 3. The authors demonstrate that their proposed methods outperform existing fine-tuning strategies and specialized prompt-based tuning methods in both full-shot and few-shot scenarios, showcasing their practical effectiveness. Weaknesses: 1. The effectiveness of GPF and GPF-plus largely depends on the quality and representation of the feature space, which may not be optimally prepared in all real-world applications. 2. While the paper provides theoretical analysis for the methods' effectiveness, more detailed explanation or examples of the theoretical proofs could enhance the clarity and comprehensibility. 3. The real-world applicability and performance of the proposed methods could be further substantiated with experiments on a wider range of tasks and datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How do GPF and GPF-plus perform in scenarios where the feature space is noisy or inadequately represented? Are there strategies to mitigate these potential issues? 2. 
Could you provide more detailed explanation or real-world examples to further illustrate the theoretical guarantees of GPF and GPF-plus? 3. How would GPF and GPF-plus integrate with other advancements in GNNs or machine learning in general? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not really. Dear authors, please enumerate some limitations of your work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Response** Dear reviewer aqDC, We appreciate your comments and your support for our work. We hope our response can address your concerns. Please find our detailed response below. 1. (Weakness 1 \& Question 1) Dealing with scenarios where the feature space is noisy or inadequately represented. Our proposed GPF and GPF-plus methods operate on the feature space of the input graph. We understand that your concern may lie in the fact that node features can be severely corrupted or contain unacceptable levels of noise in certain datasets. Notably, reducing dependence on node features often leads to better results in such cases. In these situations, we suggest replacing the raw features with synthetic features, such as positional encoding features [1,2] or Gaussian features [3], before applying GPF and GPF-plus to the generated node features. In Appendix B.7, we present experimental results on GCC that demonstrate the effectiveness of GPF on synthetic features. In this particular experiment, all node features are generated as positional encodings, and in this scenario, GPF still outperforms fine-tuning. 2. (Weakness 2 \& Question 2) Detailed explanation or examples of the theoretical proofs. Our theoretical analysis ensures the universal capability and effectiveness of our method. We appreciate your suggestion to include additional examples, and we have incorporated concrete examples in our latest revision to illustrate the theorems more comprehensively. Regarding Theorem 1, which establishes the universal capability of GPF, we have enhanced the theorem proofs by incorporating specific examples. These examples encompass existing graph templates like GPPT as well as intuitively designed graph templates. By demonstrating the equivalence of these templates to specific forms of GPF, we aim to provide readers with a deeper understanding of the universal capability of our method. 
As for Theorem 2, which showcases the effectiveness of GPF, we have manually designed a specific graph task to illustrate how GPF outperforms fine-tuning by providing a more optimal solution. This example serves as evidence that GPF can achieve superior performance in certain scenarios. Thank you for your detailed suggestion; we believe that these enhancements and additional examples strengthen our theoretical claims. 3. (Weakness 3 \& Question 3) The integration with other advancements in GNN models. Your suggestion is indeed reasonable. We have included additional experimental results utilizing more powerful backbone models. You can find these results in Section B.1 of the global response. **Ref** [1] Equivariant and Stable Positional Encoding for More Powerful Graph Neural Networks. [2] On the Equivalence Between Positional Node Embeddings and Structural Graph Representations. [3] Random Features Strengthen Graph Neural Networks. --- Rebuttal 2: Comment: I read the authors' response, and maintain the same rating of 7. --- Rebuttal Comment 2.1: Comment: Thank you for your support of our work. Your valuable feedback has made our work better.
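To make the feature-space prompting mechanism discussed in this thread concrete, here is a minimal NumPy sketch in the spirit of GPF (one shared prompt vector) and GPF-plus (per-node attention over several basis prompts). The function names, tensor shapes, and the scoring parameterization are our illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def _softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gpf(X, p):
    """GPF-style prompting: one shared learnable vector p is added to
    every node feature before the frozen GNN is applied.
    X: [n, d] node features, p: [d]."""
    return X + p

def gpf_plus(X, P, A):
    """GPF-plus-style prompting: each node receives an attention-weighted
    combination of k learnable basis prompts.
    P: [k, d] basis prompts, A: [d, k] scoring weights (shapes assumed)."""
    attn = _softmax(X @ A)   # [n, k] per-node attention over prompts
    return X + attn @ P      # [n, d] prompted features
```

In a fine-tuning-free setup, only `p` (or `P` and `A`) would be trained with the downstream loss while the pre-trained GNN stays frozen.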
null
null
null
null
null
null
Dynamic Non-monotone Submodular Maximization
Accept (poster)
Summary: This work studies non-monotone submodular maximization subject to a cardinality constraint in a fully dynamic setting, i.e., maintaining a good solution as elements are inserted and deleted from the "current" ground set. Studying non-monotone submodular maximization in this model is the natural follow-up to the works of: - Monemizadeh (NeurIPS 2020): $(0.5 - \varepsilon)$-approximation for *monotone* dynamic submodular maximization with amortized update time $O(k^2 \varepsilon^{-3} \log^{5} n)$. - Lattanzi et al. (NeurIPS 2020): $(0.5 - \varepsilon)$-approximation for *monotone* dynamic submodular maximization with amortized update time $O(\log^4(k) \log^2 (n) / \varepsilon^{7})$. - Chen and Peng (STOC 2022): proves query complexity lower bounds for approximation ratios strictly greater than $0.5$ for monotone dynamic submodular maximization. Gives $(1-1/e-\varepsilon)$-approximation for "insertion-only" streams for matroid constraints. This work gives a $(1/8-\varepsilon)$-approximation for the *non-monotone* fully dynamic cardinality-constrained problem with amortized query complexity $O(\varepsilon^{-3} k^3 \log^3 n \log k)$ queries per update. The proposed algorithm builds on connections to thresholding algorithms for *monotone* submodular maximization and then alters these solutions to get guarantees for non-monotone functions. The authors include experiments and compare their algorithm to the Sample-Streaming algorithm of Feldman-Karbasi-Kazemi (NeurIPS 2018). Strengths: - Gives $(1/8 - \varepsilon)$-approximation for fully-dynamic non-monotone submodular maximization subject to a cardinality constraint, answering an open question in Chen-Peng (STOC 2022). - Builds on thresholding techniques commonly used for monotone submodular maximization. This helps connect the toolkits for each problem type. Some missing references on line 54 when discussing thresholding: 1. "Submodular Optimization in the MapReduce Model" (Liu-Vondrak, SOSA 2019) 2. 
"Submodular Maximization with Nearly Optimal Approximation, Adaptivity and Query Complexity" (Fahrbach-Mirrokni-Zadimoghaddam, SODA 2019) 3. "Fully Dynamic Algorithm for Constrained Submodular Optimization" (Lattanzi et al., NeurIPS 2020) 4. "Practical and Parallelizable Algorithms for Non-Monotone Submodular Maximization with Size Constraint" (Chen-Kuhnle, arXiv:2009.01947, 2022) Weaknesses: - The biggest weakness of this paper is the experiments. They almost check all of the boxes, but they aren't very motivating. This paper investigates: 1. video frame summary on the entire set, which doesn't use any aspect of streaming but is a reasonable starting place to show what the algorithms output. 2. sliding window model of length $W$, which is fully dynamic but not in the most interesting way (though possibly the most practical way). 3. only compares against the Sample-Streaming algorithm of Feldman-Karbasi-Kazemi (NeurIPS 2018). It would be nice to include comparisons to the three papers discussed in the abstract, too, even though they are for monotone submodular functions. - Given that there is randomness in this paper's Update algorithm (line 4 of SubsetSelection), it is important for these experiments to be averaged over several trials with standard deviation error bars. Both the oracle calls and objective value plots appear somewhat noisy and non-monotone. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: **Questions** - [line 74] This is less of a question and more of a comment: The authors are correct in questioning whether thresholding works in a non-monotone setting when a randomly sampled set is added to the current solution, i.e., $f(S_t) \ge k \tau$. See Chen-Kuhnle (2022) for an example where this property does not hold. However, also see Chen-Kuhnle (2022) and Fahrbach-Mirrokni-Zadimoghaddam (ICML 2019, arXiv:1808.06932v3) for a method to circumvent this problem. 
**Typos and suggestions** - [line 27] Two relevant missing works for non-monotone submodular maximization: 1. "Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity" (Fahrbach-Mirrokni-Zadimoghaddam, ICML 2019) 2. "Practical and Parallelizable Algorithms for Non-Monotone Submodular Maximization with Size Constraint" (Chen-Kuhnle, arXiv:2009.01947, 2022) - [line 38] suggestion: The description of the dynamic setting "... set of elements that are inserted but not deleted after their last insertion time till time $t$" is not clear and should be improved. The description in Section 1.1 is better, but could still be improved (i.e., saying that $V_t$ is the set of ``active'' elements at time $t$). - [line 71] typo: "The only such a result" --> "The only such result" - [line 184] suggestion: Consider using $\text{Insert}_{i}(v, \tau)$ to avoid a double subscript. - [line 313] typo in Figure 3 description: "submdoular" Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
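For readers unfamiliar with the thresholding technique this review refers to, a minimal sketch of a descending-thresholds greedy for *monotone* cardinality-constrained submodular maximization (in the spirit of Badanidiyuru-Vondrák, and of the algorithms the dynamic works build on) might look as follows; the function names and parameters here are illustrative, not taken from the paper:

```python
def threshold_greedy(f, V, k, eps):
    """Descending-thresholds greedy: achieves (1 - 1/e - eps) for a
    monotone submodular f under the cardinality constraint |S| <= k.
    `f` maps a frozenset to a real value."""
    S = set()
    d = max(f(frozenset({v})) for v in V)  # largest singleton value
    tau = d
    # sweep the threshold down geometrically; add any element whose
    # marginal gain still clears the current threshold
    while tau > (eps / len(V)) * d:
        for v in V:
            if len(S) >= k:
                return S
            if v not in S and f(frozenset(S | {v})) - f(frozenset(S)) >= tau:
                S.add(v)
        tau *= (1 - eps)
    return S
```

The "$\tau$-thresholding" property discussed in this thread is exactly about solutions of this shape: either the solution is full with every added element having cleared the threshold $\tau$, or every remaining element has marginal gain below $\tau$.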
Rebuttal 1: Rebuttal: Thank you for your valuable comments. We will make sure to fix the issues you pointed out and incorporate your suggestions in the revised version of the paper. --- Rebuttal Comment 1.1: Comment: I read all the reviews and author rebuttals, and will keep my rating the same. I would like confirmation from the authors: Will you add standard deviation error bars to your plots in Figure 3? Better yet, would you include them in a one-page pdf response if the upload option is still available? --- Reply to Comment 1.1.1: Comment: Sure; as confirmation, the LaTeX source of our experiments, including the plots with error bars, is given below. In case you have any difficulty compiling the LaTeX file, we can provide a link to access the plots. Unfortunately, the NeurIPS guidelines prohibit link sharing, but if there is any other way we can provide links to the plots, we will be happy to do so. In addition, we would like to mention that our previous experiments for the max-cut problem, with error bars, are given in Appendix E.1 (see Figure 4). 
\begin{figure*}[h]
\centering
\begin{tabular}{@{}c@{}}
\begin{tikzpicture}
\pgfplotsset{width=8cm,compat=1.9}
\begin{axis}[
  xlabel={$k$},
  ylabel={oracle calls},
  ymin=0, ymax=1000,
  xtick={2, 4, 6, 8, 10},
  legend pos=north west,
  ymajorgrids=true,
  grid style=dashed,
  legend style={nodes={scale=0.5, transform shape}},
]
\addplot[
  color=red, mark=square,
  error bars/.cd, y dir=both, y explicit,
  error bar style={color=red}
] coordinates {
  (1, 331.20000) += (0, 106.80000) -= (0, 123.20000)
  (2, 373.60000) += (0, 108.40000) -= (0, 143.60000)
  (3, 346.80000) += (0, 142.20000) -= (0, 64.80000)
  (4, 371.00000) += (0, 77.00000) -= (0, 124.00000)
  (5, 329.80000) += (0, 94.20000) -= (0, 89.80000)
  (6, 309.50000) += (0, 50.50000) -= (0, 50.50000)
  (7, 368.80000) += (0, 80.20000) -= (0, 127.80000)
  (8, 352.70000) += (0, 54.30000) -= (0, 74.70000)
  (9, 367.60000) += (0, 112.40000) -= (0, 104.60000)
  (10, 385.60000) += (0, 169.40000) -= (0, 94.60000)
};
\addlegendentry{\textsc{Sample-Streaming}}
\addplot[
  color=blue, mark=star,
  error bars/.cd, y dir=both, y explicit,
  error bar style={color=blue}
] coordinates {
  (1, 461.40000) += (0, 9.60000) -= (0, 15.40000)
  (2, 662.30000) += (0, 90.70000) -= (0, 99.30000)
  (3, 529.50000) += (0, 91.50000) -= (0, 64.50000)
  (4, 447.80000) += (0, 31.20000) -= (0, 54.80000)
  (5, 323.90000) += (0, 20.10000) -= (0, 24.90000)
  (6, 235.40000) += (0, 16.60000) -= (0, 12.40000)
  (7, 179.20000) += (0, 9.80000) -= (0, 16.20000)
  (8, 178.50000) += (0, 18.50000) -= (0, 10.50000)
  (9, 189.80000) += (0, 16.20000) -= (0, 16.80000)
  (10, 238.30000) += (0, 14.70000) -= (0, 18.30000)
};
\addlegendentry{\textsc{Our Dynamic Algorithm}}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\pgfplotsset{width=8cm,compat=1.9}
\begin{axis}[
  xlabel={$k$},
  ylabel={f},
  ymin=0, ymax=8,
  xtick={2, 4, 6, 8, 10},
  legend pos=north west,
  ymajorgrids=true,
  grid style=dashed,
  legend style={nodes={scale=0.5, transform shape}},
]
\addplot[
  color=red, mark=square,
  error bars/.cd, y dir=both, y explicit,
  error bar style={color=red}
] coordinates {
  (1, 1.62818) += (0, 0.44989) -= (0, 0.57743)
  (2, 2.14678) += (0, 0.65564) -= (0, 0.41891)
  (3, 2.48939) += (0, 0.33780) -= (0, 0.40127)
  (4, 2.39747) += (0, 0.34600) -= (0, 0.64868)
  (5, 2.28432) += (0, 0.52396) -= (0, 0.47845)
  (6, 2.30636) += (0, 0.45690) -= (0, 0.35722)
  (7, 2.43273) += (0, 0.48661) -= (0, 0.59661)
  (8, 2.39775) += (0, 0.46000) -= (0, 0.46214)
  (9, 2.28745) += (0, 0.42656) -= (0, 0.32351)
  (10, 2.39145) += (0, 0.50267) -= (0, 0.74555)
};
\addlegendentry{\textsc{Sample-Streaming}}
\addplot[
  color=blue, mark=star,
  error bars/.cd, y dir=both, y explicit,
  error bar style={color=blue}
] coordinates {
  (1, 2.46816) += (0, 0.07933) -= (0, 0.42347)
  (2, 2.15307) += (0, 0.32738) -= (0, 0.17542)
  (3, 2.29888) += (0, 0.09777) -= (0, 0.19832)
  (4, 2.19944) += (0, 0.43743) -= (0, 0.58491)
  (5, 2.39385) += (0, 0.40503) -= (0, 0.38268)
  (6, 2.36480) += (0, 0.17151) -= (0, 0.35363)
  (7, 2.47486) += (0, 0.06145) -= (0, 0.17877)
  (8, 2.49553) += (0, 0.03520) -= (0, 0.02626)
  (9, 2.46536) += (0, 0.07095) -= (0, 0.25866)
  (10, 2.26592) += (0, 0.20335) -= (0, 0.26033)
};
\addlegendentry{\textsc{Our Dynamic Algorithm}}
\end{axis}
\end{tikzpicture}
\end{tabular}
\small (a) Video 106 total oracle calls and average output
Summary: In this paper, the authors consider the non-monotone submodular maximization problem under the cardinality constraint and dynamic model. Here, the dynamic model means that the ground set of the submodular function changes every time step, where one element is inserted into or deleted from the ground set, and the update is controlled by an oblivious adversary. They show a reduction from the non-monotone case to the monotone case, and then obtain a dynamic algorithm with an $(8+\epsilon)$-approximation ratio. They further test their algorithm on some real-world data sets. Strengths: 1. In the dynamic setting, they provide a constant approximation algorithm for the non-monotone submodular maximization problem under a cardinality constraint. 2. The reduction between the non-monotone dynamic algorithm and the monotone dynamic algorithm shows some connection between these two cases. Weaknesses: 1. I'm confused about the relation between the parameter $\tau$ and the results in Theorem 1.2. The current description makes it look like there is no relationship between the two. But, if $\tau$ is large enough, it seems we can satisfy Definition 1.1 by keeping $S_t$ an empty set. For example, we can assume without loss of generality $f(u)\leq 1$ for any $u$ and set $\tau =1$. In this case, we have a trivial $\tau$-thresholding dynamic algorithm. Then, what does Theorem 1.2 look like? 2. Although the paper tries to show the connection between the monotone case and the non-monotone case by the reduction theorem, it seems that this connection is weak. Firstly, showing that a monotone dynamic algorithm is a $\tau$-thresholding algorithm is not an easy task, so the reduction is not easy to use. More importantly, the reduction does not mean that a better monotone dynamic algorithm implies a better non-monotone dynamic algorithm (compared to the reduction result in [35]). The better approximation ratio of the monotone dynamic algorithm does not help. 
It is also not clear whether a lower amortized update time can help or not. The authors' response has clarified my main concern here, so I would like to raise my score. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness 1 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Please see below for our answers to your comments: >I’m confused with the relation between parameter $\tau$ and the results in theorem ... Thank you for pointing out this issue. It appears that the writing here has been confusing, and you may have misunderstood the theorem statement. The assumption of the theorem is that there exists a $\tau$-thresholding algorithm for **any** given value of $\tau$. In other words, $\tau$ is an input parameter. Therefore, one cannot simply set $\tau=1$ and obtain an algorithm for this value; the algorithm should work for *any* value of $\tau$. Specifically, our reduction sets $\tau = \frac{OPT}{k(3+1/(2\alpha))}$, where OPT denotes our guess for the optimal value. Please let us know if you have any additional questions regarding the theorem. >Although the paper tries to show the connection between monotone case and non-monotone case by ... We answer different parts of your comment separately to cover all the points you made, with the hope of resolving all of your concerns. Firstly, regarding your concern about the difficulty of showing that an algorithm is $\tau$-thresholding: We agree that for any monotone algorithm, the $\tau$-thresholding property would need to be formally verified (e.g., we do this in our paper in Appendix D). However, all dynamic algorithms known for monotone submodular maximization under cardinality constraint $k$ have been based on the thresholding algorithm proposed in [1], and it is not unreasonable to expect current and even future dynamic algorithms to be $\tau$-thresholding. Secondly, regarding your concern about improvements in monotone dynamic algorithms: Chen and Peng in [2] show that $\frac{1}{2}$ is a tight approximation guarantee for the dynamic submodular maximization problem. Therefore, we do not expect any improvement in the approximation guarantee of monotone dynamic algorithms. 
However, the query complexity of dynamic algorithms for the problem of monotone submodular maximization under cardinality constraint can be improved. As we observe in Theorem 1.2, if such improvements are obtained using a $\tau$-thresholding algorithm, this will improve the update time (measured by query complexity) of our dynamic algorithm for the non-monotone version as well. It is worth mentioning that a new dynamic algorithm for monotone submodular maximization has been proposed in [3] (also mentioned by another reviewer). We believe it can be shown that this new and improved algorithm is also a $\tau$-thresholding algorithm, which would plug into our reduction and immediately yield a better dynamic algorithm for non-monotone submodular maximization under cardinality constraint $k$. Lastly, regarding the reduction in [35] ([4] below), we should mention that their reduction does not seem to be fully general either. In their paper, they claim that the solution obtained by approximation algorithms for monotone submodular functions often satisfies $f(S) \geq \alpha f(S \cup C^∗)$, where $1 \geq \alpha > 0$, and $C^∗$ is the optimal solution. (See Section Streaming Local Search for a collection of independence systems on page 3.) They use this as an assumption in their proof of their Theorem 1. (See equation (2) in the proof of Theorem 1 in the supplementary materials.) Hence, we believe their reduction also only works for monotone algorithms with that particular property and not for arbitrary algorithms. [1] Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, Andreas Krause: Streaming submodular maximization: massive data summarization on the fly. KDD 2014: 671-680 [2] Xi Chen, Binghui Peng: On the complexity of dynamic submodular maximization. STOC 2022: 1685-1698 [3] Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, Morteza Monemizadeh: Dynamic Algorithms for Matroid Submodular Maximization. 
CoRR abs/2306.00959 (2023) [4] Baharan Mirzasoleiman, Stefanie Jegelka, Andreas Krause: Streaming Non-monotone Submodular Maximization: Personalized Video Summarization on the Fly. http://arxiv.org/abs/1706.03583 --- Rebuttal Comment 1.1: Comment: Dear reviewer, We were wondering whether our response has addressed your concerns, especially regarding the statement of Theorem 1. We'd be happy to provide any additional clarifications.
Summary: The paper studies the dynamic submodular maximization problem and gives the first efficient dynamic algorithm that maintains a constant approximation solution. In the problem of dynamic submodular maximization, there is a sequence of updates (insertions/deletions) of ground set elements, and one wants to maintain a subset (of size at most $k$) that obtains the maximum possible function value. Previous work provides a tight approximation guarantee when the function is monotone, while this paper extends to non-monotone submodular functions (albeit with a worse approximation guarantee). The main result of this paper is an $(8+\epsilon)$ approximation algorithm with $\mathsf{poly}(k, \log n, \epsilon^{-1})$ amortized update time per update. The algorithm is obtained through a clever reduction from dynamic algorithms for monotone functions (that satisfy a certain threshold property). The overall idea is to simulate the monotone threshold algorithm twice and combine it with double greedy or random sampling. The first run guarantees that the marginal gains are small for the second algorithm if it performs badly. Besides theoretical results, the authors also conduct an empirical study and verify its practicality. -------------- I have read the authors' response and I keep my positive evaluation of the paper. Strengths: The paper provides the first sublinear dynamic algorithm for non-monotone submodular maximization. The theoretical contribution is novel and interesting from my perspective. The experiment result is also promising. Weaknesses: There is no major weakness. Some minor issues are listed below. (1) Line 201, it seems one can directly use the double greedy algorithm to extract $S_1'$; I don't see why you mention the 3-approximation deterministic algorithm. (2) Line 242, delete "what" (3) The related work is quite extensive; there are a few missing references that are relevant to the paper. 
The first two references below seem to have appeared on arXiv later than the NeurIPS submission deadline, but I encourage the authors to add the references in a later version. The third reference gives a lower bound for white-box dynamic submodular maximization. [1] Dynamic Algorithms for Matroid Submodular Maximization. Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, Morteza Monemizadeh [2] Fully Dynamic Submodular Maximization over Matroids. ICML 2023 Paul Dütting, Federico Fusco, Silvio Lattanzi, Ashkan Norouzi-Fard, Morteza Zadimoghaddam [3] Dynamic influence maximization. NeurIPS 2021 Binghui Peng Technical Quality: 3 good Clarity: 3 good Questions for Authors: . Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for pointing out the issues and listing the extra references. We will make sure to incorporate them in the revised version of the paper.
Summary: The authors consider a non-monotone submodular optimization problem in the fully dynamic setting. They propose an $(8+\epsilon)$-approximation algorithm by combining several existing methods. Strengths: - The technique behind the proposed algorithm is interesting. - Proven approximation guarantee. - A nice continuation of the existing literature in submodular optimization. Weaknesses: - $8 + \epsilon$ is a rather weak approximation guarantee. - The other algorithm with $10 + \epsilon$ approximation uses a weak algorithm, so the algorithm is largely pedagogical. - Presentation needs improvement. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: The authors should improve the presentation of the paper - The abstract is probably better without references to Neurips or STOC - Line 20-22. Provide citations. - Page 2: remove grey outline boxes. - Line 51, remove bolding - Property 2. queries is not defined. - Reword/remove metatheorem. - Line 98: . That is -> , that is, - Line 178: algorithms -> algorithm - Named paragraphs starting on Page 5 need connecting text. - There is a follow-up paper of [7] that derandomizes the algorithm of [7] (and keeps the 2-approximation). Can this algorithm be used? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: Probably the biggest limitation of the method is a rather weak approximation guarantee. It is not clear whether a stronger approximation is possible. This is not discussed by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Please see below for our answers to your comments: > 8 + epsilon is a rather weak approximation guarantee. Our primary goal in this paper was to obtain the first dynamic constant-factor approximation algorithm for non-monotone submodular maximization. This answers affirmatively an open question posed by Chen and Peng in [11]. Indeed, developing a dynamic algorithm for this problem with a better approximation factor than $(8+\epsilon)$ is an interesting future direction in this area. Yet, it is worth mentioning that non-monotone submodular maximization is far more difficult than monotone submodular maximization. Monotone submodular maximization under a cardinality constraint has a tight $\frac{e}{e-1}$ approximation algorithm in the offline setting and nearly tight $2 + \epsilon$ approximation algorithms for both streaming and dynamic settings. However, for the non-monotone version, there is a hardness result that says it is impossible to obtain a $2.04$ (i.e., $0.491$) approximation algorithm for this problem even in the offline setting, and to the best of our knowledge, the current state-of-the-art algorithms for this problem have $2.6$ and $3.6$ (i.e., $0.385$ and $0.2779$) approximation guarantees for the offline and streaming settings, respectively. Hence, we believe our algorithm, with its $(8+\epsilon)$ approximation guarantee, is valuable as the first non-monotone algorithm in the dynamic setting. > Probably the biggest limitation of the method is a rather weak approximation guarantee. It is not clear whether a stronger approximation is possible. This is not discussed by the authors. The reduction resulting in the $(10 + \epsilon)$-approximation algorithm is interesting because of its potential to develop a dynamic algorithm with polylog(n) oracle queries for non-monotone submodular maximization. 
In particular, if one can show that a dynamic algorithm for monotone submodular maximization with polylog(n) oracle queries is a $\tau$-thresholding dynamic algorithm, our reduction immediately provides a counterpart dynamic algorithm for non-monotone submodular maximization with polylog(n) oracle queries. > The abstract is probably better without ... Thank you for your comments on the presentation. We will incorporate them in the revised version of the paper. > There is a follow-up paper of [7] that derandomizes the algorithm of [7] (and keeping 2-approximation). Can this algorithm be used? The dynamic algorithm from [38] that we use as our $\tau$-thresholding algorithm is a randomized algorithm. Therefore, even if we use the derandomized version of the algorithm of [7], our dynamic algorithm would remain randomized. > Probably the biggest limitation of the method is a rather weak approximation guarantee. It is not clear whether a stronger approximation is possible. This is not discussed by the authors. Please see our response to the first comment.
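As background for the double greedy subroutine mentioned in this thread, a minimal sketch of the (unconstrained) double greedy of Buchbinder et al. might look as follows; the interface and names are our illustrative assumptions. The randomized variant is a 1/2-approximation in expectation, while the deterministic rule "take the larger gain" gives the 1/3-approximation (i.e., the 3-approximation deterministic algorithm referenced by a reviewer):

```python
import random

def double_greedy(f, V, randomized=True, rng=random):
    """Double greedy for unconstrained non-monotone submodular
    maximization. `f` maps a frozenset to a real value."""
    X, Y = set(), set(V)  # X grows from empty, Y shrinks from full
    for u in V:
        a = f(frozenset(X | {u})) - f(frozenset(X))  # gain of adding u to X
        b = f(frozenset(Y - {u})) - f(frozenset(Y))  # gain of dropping u from Y
        if randomized:
            ap, bp = max(a, 0.0), max(b, 0.0)
            p = 1.0 if ap + bp == 0.0 else ap / (ap + bp)
            take = rng.random() < p
        else:
            take = a >= b
        if take:
            X.add(u)
        else:
            Y.discard(u)
    return X  # X == Y when the loop ends
```

Submodularity guarantees $a + b \ge 0$ at every step, which is what makes the probability $p$ well defined in the randomized rule.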
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Multi-Agent Meta-Reinforcement Learning: Sharper Convergence Rates with Task Similarity
Accept (poster)
Summary: This paper studies the interdependence between the convergence of MARL and the quality of policy initialization. Strengths: 1. It proposes a new algorithm that has an initialization-dependent convergence guarantee. 2. It establishes several theoretical results that connect policy initialization and convergence of MARL in various types of games. Weaknesses: No Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: No. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the appreciation of our work. We would be happy to discuss if the reviewer has any questions about the paper.
Summary: The paper introduces a meta-learning method to initialize the OOMD algorithm. Combined with the introduced initialization-dependent convergence guarantees, the authors can then show faster convergence when the meta-learned initialization is close. Strengths: The paper is well written, and the prior work is well referenced. The authors also provide extensive theoretical analysis of their algorithms. I think the most interesting result is Theorem 1 (other theorems often seem like an application of Theorem 1). I believe that the result is novel and interesting. I also appreciate that the authors analyze two-player zero-sum Markov games as well as Markov potential games. Weaknesses: First, I agree that "closest to this work" is Harris, Keegan, et al. "Meta-learning in games." arXiv preprint arXiv:2209.14110 (2022). More recently, a closely related paper (also on meta-learning in games and regret minimization) presents an algorithm combining meta-learning and regret guarantees: Sychrovsky, David, et al. "Learning not to Regret." arXiv preprint arXiv:2303.01074 (2023). I believe that paper should be included in the related work. My biggest issue though is the empirical analysis (Section 6 - Simulations). There are no details in the main text and one can only find them in the appendix. It is fine to move details to the appendix, but the authors do not even mention the size of the games in the main text. Looking into the appendix, I think the games are trivially small (2x2 matrix games). Furthermore, the games in the meta-learning sequence seem very similar (the epsilon noise seems very small). I think this is rather confirmed by Figure 1c. It is then not a big surprise that the convergence is so fast, when the games are so small and the Nash Equilibria very similar/close. I think the authors should evaluate substantially more interesting/diverse/larger games (e.g., some epsilon/noise parametrization of small poker games). 
If the authors add experiments in larger games than the trivially small ones currently included, I am happy to move my score to "Accept", as otherwise I think the paper is good. ---------------------------------------------------------------------------------------------------- Authors addressed my comments and included more experiments, increasing the score. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Did you have a chance to run the algorithm on larger games than the small ones included in the current version of the submission? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Sufficient Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. Our detailed responses are as follows. 1. We thank the reviewer for the appreciation of our Theorem 1, which is indeed one of our most interesting results. While we agree that Theorem 2 can be considered as an application of Theorem 1 (as the reviewer mentioned), we would like to point out that our Theorem 4 (for potential games) and Theorems 5/6 (for general-sum games) are built upon quite different techniques and might be of interest to the reader as well. 2. We thank the reviewer for sharing the recent reference [Sychrovsky et al., 2023]. It focuses more on the empirical side of meta-learning in games, which is a bit different from our theoretical focus but is still a relevant and very interesting work. We have properly included [Sychrovsky et al., 2023] in our Related Work section. In addition, in this rebuttal, we have also added new simulations similar to the “kuhn_poker” example considered in [Sychrovsky et al., 2023]. The kuhn_poker task in [Sychrovsky et al., 2023] only considers 3 poker cards while our new simulation considers a full set of poker with 52 cards, which leads to a significantly larger state space. We would like to refer the reviewer to our “global” response for the details. 3. We appreciate the reviewer’s concern about our empirical analysis and the suggested alternative experimental setups. We would like to remark that the focus of our work is mostly theoretical, and the simulations were primarily used as proof of concept. But given the reviewer’s interest in our empirical performance, we have added new and larger-scale simulations in this rebuttal to further evaluate the empirical performance of our algorithms. These new simulations include a task with more than 1.7 million states and a task with 4 agents, 625 states and 81 joint actions, both of which are significantly larger than our previous simulations. 
Please refer to our “global” response to all the reviewers for details of the new simulations. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response, and I think the larger-game experiments are a great addition. Increasing my score to "Rating: 8: Strong Accept", nice paper!
Summary: The authors proposed a meta-learning approach based on MAML for multi-agent domains in which tasks with similar NE policies, when learned sequentially, converge faster to the desired equilibrium solutions. Strengths: * Originality The authors investigated theoretical convergence properties of multi-agent learning through the lens of meta-learning over a sequence of similar tasks. This is novel, and the problem setting is relevant, as non-stationarity is inherent to multi-agent learning and may benefit from the MAML family of methodologies. * Quality I find the theoretical results thorough, but the work could benefit significantly from more substantial results on a few more standard benchmark domains. * Clarity The paper is generally well written and easy to follow. Weaknesses: * Task similarity metric: my main reservation with the proposed approach is related to the proposed task similarity metric (Sec 3.2), which translates to measuring similarity between different tasks' NE policies. Wouldn't selecting tasks in such a way naturally promote faster convergence to NE policies when learning on "similar" tasks sequentially? I would appreciate further clarification on why this metric is used and whether alternatives have been considered. * Multi-agent learning inherently deals with non-stationarity during learning, as the environment dynamics (from the perspective of any one player) are non-stationary. Quite often such non-stationarity does not translate to similar NE policies, though (e.g. rock-paper-scissors, where NE policies would be quite different). Could you clarify if the proposed method should be applicable to "similar tasks" of this nature? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: L82-83: " ... weaker solution concepts such as (C)CE ...", perhaps clarify that in general-sum games NE would not allow for coordination and is therefore restrictive in such games? By "weaker" perhaps you meant computationally tractable beyond two-player zero-sum games?
L49: "... convergence guarantees for MARL.", do you mean convergence to equilibria specifically? Perhaps clarify in writing that the convergence bounds are w.r.t. equilibria upfront? Related Works: "meta-learning" is overloaded in the RL literature, and another line of work in meta-learning follows from prior works such as [1-3], where a policy is conditioned on a prior belief over tasks and can infer Bayes-optimally through interaction at test time to adapt to different tasks. This idea is then extended to the multi-agent setting [4-5], with a focus on faster convergence / transfer learning via shared representation learning. Would it make sense to include a brief discussion of this line of meta-learning works in related works? [1] Meta-learning of Sequential Strategies: https://arxiv.org/abs/1905.03030 [2] Meta reinforcement learning as task inference: https://arxiv.org/abs/1905.06424 [3] Meta-Learning with Memory-Augmented Neural Networks: https://proceedings.mlr.press/v48/santoro16.pdf [4] NeuPL: Neural Population Learning: https://arxiv.org/abs/2202.07415 [5] Simplex NeuPL: https://arxiv.org/abs/2205.15879 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please see my comments on the choice of task similarity metric, which may be a limitation on when and where the proposed method could be effectively applied. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments and the valuable suggestions on improving our work from multiple different perspectives. Our detailed responses are as follows. 1. We appreciate the reviewer’s concern about our task similarity metric. In Section 3.2, we choose the closeness of the NE policies of different games as the similarity metric because we believe it is the most natural and comparatively less restrictive. Such a metric also resonates with existing work [Harris et al., 2022] on matrix games, which allows for direct comparisons. An important alternative metric that we have considered is the $L_1$ norm of the transition function together with the $L_{\infty}$ norm of the reward function; that is, two games are considered “similar” under this metric if their transition and reward functions are pointwise close. However, we believe that such a metric is more restrictive than the NE closeness metric we consider. This is because, intuitively, in finite zero-sum games every saddle-point solution (NE) in mixed strategies is the solution to a linear program, and the optimal solutions of two LPs are very likely to be attained at neighboring points if these two LPs have similar constraints and objectives. In this sense, closeness of reward functions tends to imply closeness of NE, but not vice versa, because if the optimal solutions of two LPs happen to be attained at the same point, this does not imply that their constraints or objectives are similar. Our NE closeness metric is hence less restrictive in this sense. We would be happy to learn if the reviewer can find any other natural and possibly less restrictive metric, which would be greatly helpful for improving our work. 2.
For the “non-stationarity” part, if we understand it correctly, the reviewer’s concern is that there could be multiple NE in a game that are quite different, and there is no guarantee which one the learning algorithm will converge to due to the “non-stationary” learning dynamics. Our method should not be affected by such a case because our similarity metric only assumes one of the NE (not all of them) to be close to those of other games. In fact, the exact advantage of meta-learning is to be able to quickly find a particular NE that is close to the meta-initialization. 3. L82-83: By “a weaker solution concept”, we simply mean that every NE is automatically a (C)CE but not vice versa. As the reviewer has pointed out, NE is in general not computationally efficient beyond two-player zero-sum games, but (C)CE requires an additional public signal to correlate the players’ policies, which might not exist in practical scenarios. L49: Yes, we mean convergence to equilibria specifically. We thank the reviewer for calling attention to these points, and we have revised the paper to make these points clear. 4. We thank the reviewer for sharing the references. These works tackle “meta-learning” from a slightly different perspective than ours, but we totally agree that they are very relevant and can help provide a more comprehensive viewpoint of the background to the readers. We have included a discussion of this line of research in our Related Work section. 5. We would also like to refer the reviewer to our “global” response section for new simulation results added during the rebuttal based on the feedback from the other reviewers. These new simulations include a task with more than 1.7 million states and a task with 4 agents, 625 states and 81 joint actions, both of which are significantly larger than our previous simulations. --- Rebuttal Comment 1.1: Comment: Thank you for your explanation. My two questions are related to each other. 
Let me clarify: In rock-paper-scissors, the BR may differ significantly, depending on the opponent strategy. However, the NE of the (variations of, say by scaling all payoffs linearly) RPS game is the same, so according to your similarity metric the tasks would be similar. > Could you clarify if the proposed method should be applicable to such "similar tasks" of this nature? Quickly adapting to a different opponent strategy in this case would be interesting and useful, which is why I was wondering if your proposed method would be beneficial in such a setting, though I understand this is a different problem setting from yours, if I understood correctly. This is also why I referred to a different class of "meta-learning" algorithms in my comment, which tackles this setting. My main reservation, then, is still whether the proposed similarity metric makes the sequential learning problem "easy" by definition. For instance, solving for NE in a sequence of re-scaled RPS games would be immediate and trivial. As such I would keep my initial rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the follow-up comments, which are helpful for us to better understand your questions. The reviewer is correct that our similarity metric is “behavior-dependent” in the sense that it depends on the best response policies of the players of each game rather than the exact NE: The former depends on the opponent strategy, but the latter only relies on the inherent properties of the game and is generally preferred. While our current solution only applies to the former case, we believe that extensions to the latter setting would be fairly straightforward. In particular, this is essentially the extension from Theorem 3.1 to Theorem 3.2 for meta-learning in normal-form games in [Harris et al., 2022], where their Theorem 3.1 assumes a similarity metric in terms of the optimum-in-hindsight strategies and Theorem 3.2 only depends on the similarity of any NE from each game.
A similar extension can also be made possible in our setting for Markov games. We note that Theorem 3.2 of [Harris et al., 2022] requires an additional assumption that after the termination of each game the players can obtain an exact NE of that game, which is also needed in our setting to make such extensions. When such an assumption does not hold, we can still use the approximate NE learned by the players at the end of each game, but our meta-learning convergence rate will suffer an additional term associated with the “inexactness” of the learned NE. We totally agree that “quickly adapting to different opponent strategies” is indeed a very interesting and useful setting, and we thank the reviewer for sharing the references, but we believe that it is a different problem than what we considered. Using the second similarity metric as described in the paragraph above, the “different opponent behavior” does not affect the convergence behavior of our meta-learning methods, because our algorithms only depend on the properties of the game itself but not on the players’ behavior. One of our main contributions is to formally and quantitatively show “why” and “how” the similarity metric makes the sequential learning problem “easier”.
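To make the payoff-rescaling example in this thread concrete, here is a small numpy sketch (an editorial illustration, not the paper's algorithm): both players run entropic mirror descent (multiplicative weights) in rock-paper-scissors, and in zero-sum games the time-averaged strategies of such no-regret dynamics approach a Nash equilibrium. Linearly rescaling all payoffs, a "similar task" under the NE-closeness metric, leaves the unique uniform NE unchanged. The step size and horizon are arbitrary choices.

```python
import numpy as np

A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])          # rock-paper-scissors payoff for the row player

def omd_selfplay(A, T=20000, eta=0.01, x0=None):
    """Both players run entropic mirror descent (multiplicative weights).
    In zero-sum games, the time-averaged strategies converge to a Nash equilibrium."""
    n, m = A.shape
    x = np.ones(n) / n if x0 is None else np.asarray(x0, dtype=float)
    y = np.ones(m) / m
    x_sum, y_sum = np.zeros(n), np.zeros(m)
    for _ in range(T):
        gx = A @ y                     # row player's payoff gradient (ascent direction)
        gy = -A.T @ x                  # column player's payoff gradient
        x = x * np.exp(eta * gx); x /= x.sum()
        y = y * np.exp(eta * gy); y /= y.sum()
        x_sum += x; y_sum += y
    return x_sum / T, y_sum / T

# start away from the NE; the time average still approaches uniform [1/3, 1/3, 1/3]
x_avg, _ = omd_selfplay(A, x0=[0.8, 0.1, 0.1])
# linearly rescaled payoffs (a "similar task") have the same unique NE
x_avg_scaled, _ = omd_selfplay(3.0 * A, x0=[0.8, 0.1, 0.1])
```

Warm-starting `x0` near the (shared) NE of previously solved games shortens this averaging phase, which is the meta-learning intuition the rebuttal describes.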
Summary: This paper establishes theoretical results for meta-learning in a wide range of fundamental MARL settings, including learning Nash equilibria in two-player zero-sum Markov games and Markov potential games. Numerical results are shown to demonstrate the advantages of meta-learning. Strengths: This paper establishes theoretical results for meta-learning in a wide range of fundamental MARL settings, including learning Nash equilibria in two-player zero-sum Markov games and Markov potential games. Numerical results are shown to demonstrate the advantages of meta-learning. Weaknesses: The simulation part is relatively simple since only toy examples are given. The advantages of meta-learning are not fully demonstrated. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The simulation part is relatively simple since only toy examples are given. Is it possible to have much more sophisticated examples (with at least more than two players) to demonstrate the theoretical results? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. The focus of our work is mostly theoretical, and the simulations were primarily used as proof of concept. But given the reviewer’s interest in the empirical performance of our results, we have added new and larger-scale simulations in this rebuttal to further evaluate the empirical performance of our algorithms. These new simulations include a task with more than 1.7 million states and a task with 4 agents, 625 states and 81 joint actions, both of which are significantly larger than our previous simulations. Please refer to our “global” response to all the reviewers for details of the new simulations. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for taking the time to run additional experiments and answer the questions. I believe these additional experiments are very valuable in validating the theoretical results.
Rebuttal 1: Rebuttal: We thank all the reviewers for the insightful feedback. In this “global” response, we would like to share some new and larger-scale simulations that we conducted in this rebuttal phase following some of the reviewers’ advice. We believe these new simulations can help address the reviewers’ questions on the empirical performance of our algorithms. We added two sets of new simulations. The first task is a Poker endgame similar to the one considered in [Harris et al., 2022]. We use a public River endgame that was released by the authors [Brown and Sandholm, 2018] in the Brains vs AI competition. This task is a zero-sum game with 2 players, ~1.7 million states, and 2 actions (calling or folding) for each player. Poker is a partially observable game, but we found that our algorithm still performs well if each agent simply uses its local observation as the state. We generated similar games by adding $\mathcal{N}(0, 0.5)$ perturbations to the normalized stack amounts of the players, which essentially perturbs the reward function. Figure 1 in the attached PDF file shows that our method can handle such a large state space well, and our meta-learning method can converge to an approximate NE policy faster than individual learning. In the second task, we consider a 1D linear-quadratic tracking problem where each agent tries to track and stay close to the other agents. We adopt the discrete setting as has been utilized in a few recent works [Perrin et al., 2020; Laurière et al., 2022]. The tracking task we consider is a Markov potential game with 4 players, 625 states, and a joint action space of size 81. For each agent $i$, its location transition is given by $s_{t+1,i}=s_{t,i}+a_{t,i} \Delta_t +\epsilon_t \sqrt{\Delta_t}$, where $\Delta_t$ is the time duration, and $\epsilon_t$ is the i.i.d. noise taking values from $\{-2, -1, 0, 1, 2\}$ following a discretized normal distribution. Let $\mu_t$ denote the empirical mean of all the agents’ locations at time $t$.
The reward function for agent $i$ is specified as $(-\frac{1}{2}a_{t,i}^2-\frac{1}{4}(\mu_t - s_{t,i})^2)\Delta_t$. Intuitively, this reward function incentivizes agents to track and stay close to the population (despite the random drift $\epsilon_t$), but discourages agents from taking large-magnitude actions. We generated similar games by adding $\mathcal{N}(0, 0.5)$ perturbations to the location transition drift magnitude and the reward functions. Figures 2 and 3 in the attached PDF file demonstrate that our meta-learning method achieves faster NE-gap and value convergence. References: M. Laurière, S. Perrin, S. Girgin, P. Muller, A. Jain, T. Cabannes, G. Piliouras, J. Pérolat, R. Élie, O. Pietquin, et al. Scalable deep reinforcement learning algorithms for mean field games. arXiv preprint arXiv:2203.11973, 2022. S. Perrin, J. Pérolat, M. Laurière, M. Geist, R. Elie, and O. Pietquin. Fictitious play for mean field games: Continuous time analysis and applications. Advances in Neural Information Processing Systems, 33:13199–13213, 2020. Pdf: /pdf/732e798d3a9dfb717becd0210901a66b4b3ec99e.pdf
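The tracking dynamics and reward described in this global response can be sketched in a few lines of numpy. The value of $\Delta_t$ and the discretized-normal weights over $\{-2, \dots, 2\}$ are not stated in the rebuttal, so both are placeholder assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)
DT = 0.1                                    # time duration Δt (placeholder value)
NOISE = np.array([-2., -1., 0., 1., 2.])    # support of the i.i.d. drift noise ε_t
# discretized-normal weights over the support (assumed; not given in the rebuttal)
PROBS = np.array([0.05, 0.25, 0.4, 0.25, 0.05])

def step(s, a):
    """One transition of the 1D tracking game for all agents at once.
    s, a: arrays of per-agent locations and actions."""
    eps = rng.choice(NOISE, size=s.shape, p=PROBS)
    s_next = s + a * DT + eps * np.sqrt(DT)         # s_{t+1,i} = s_{t,i} + a_{t,i}Δt + ε_t√Δt
    mu = s.mean()                                   # empirical mean location μ_t
    r = (-0.5 * a**2 - 0.25 * (mu - s)**2) * DT     # per-agent reward
    return s_next, r

s = np.zeros(4)                                     # 4 agents
a = np.array([1.0, -1.0, 0.5, 0.0])
s_next, r = step(s, a)
```

Note that both reward terms are non-positive: the action-cost term discourages large-magnitude actions, while the tracking term penalizes distance from the population mean, matching the intuition stated above.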
NeurIPS_2023_submissions_huggingface
2,023
Summary: The authors introduce theoretical results on Model-Agnostic Meta-Learning in a multi-agent reinforcement learning setting. In particular, they show that meta-learning can achieve stronger convergence guarantees than an RL baseline when tasks are similar. The results hold for zero-sum, potential and general-sum games. In order to establish this, they provide theoretical results on convergence for online mirror descent that depend on initial conditions. Finally, they conduct some small-scale experiments to validate the faster convergence empirically. Strengths: - Well motivated and timely article: meta-learning is becoming increasingly prevalent in the multi-agent setting, and theoretical results are lagging behind empirical ones, to the best of my knowledge. - The notation is consistent and clear throughout. - The theoretical results seem convincing, although I haven't checked the proofs in detail. - There is an interesting and intuitive definition of game similarity, which may well find use in other algorithms in the future. Weaknesses: - The empirical results are quite small scale, and it would be useful to have a little more explanation about the environment in the main text. - It would be nice to have at least one sketch proof in the main text. Perhaps there is a reorganisation of some other material to the Appendix that might permit this. - There are a few places where more discussion or justification would be beneficial, for example: why is this "without loss of generality" on line 102; to what extent is it standard to neglect the logarithmic terms in line 201, and what is the reason that these terms show up. - The limitations / future work section could have a little more detail. Would the authors be happy to comment on scalability, and to discuss how similar results might be obtained for other meta-learning approaches outside MAML?
Technical Quality: 3 good Clarity: 3 good Questions for Authors: On lines 94-95, the authors may want to cite another recent paper using meta-learning in the context of a distribution over multi-agent cooperative tasks: https://arxiv.org/abs/2301.07608. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the appreciation of our work and the valuable feedback. Our detailed responses are as follows. 1. We thank the reviewer for the comments on our empirical results. The focus of our work is mostly theoretical, and the simulations were primarily used as proof of concept. But given the reviewer’s interests in the empirical performances of our results, we have added new and larger-scale simulations in this rebuttal to further evaluate the empirical performances of our algorithms. These new simulations include a task with more than 1.7 million states and a task with 4 agents, 625 states and 81 joint actions, both of which are significantly larger than our previous simulations. Please refer to our “global” response to all the reviewers for details of the new simulations. 2. We appreciate the reviewer’s advice on the organization of the paper. We will follow these guidelines to include more discussions of the simulation setup as well as a proof sketch in the main text when preparing for the camera-ready version (where one extra page is allowed). 3. In Line 102, our results readily extend to the setting where the initial state is sampled from an arbitrary distribution. Assuming a fixed initial state is without loss of generality because one can imagine that there exists an extra step $h=0$ where the agent starts from a fixed state $s_0$. The transition probabilities from $s_0$ to any other state $s_1$ can be made arbitrary, which equivalently allows us to start from an arbitrary state distribution at $s_1$. This is a very common assumption in existing works, e.g., [Jin et al., 2022; Song et al., 2022]. In Line 201, suppressing logarithmic terms is also common in the literature as the log terms grow much more slowly than the polynomial terms. In fact, most existing results do have this logarithmic term, e.g., [Zhang et al., 2022]. 
In our work, the direct reason for this log term is that we divide the $T$ iterations into $\bar{\tau}=O(\log T)$ stages. We have added more discussion in our paper to make these points clear to the reader. 4. For the limitations / future work section, we believe that our methods will not suffer much from scalability issues because our algorithms are essentially designed to be decentralized (in the same sense as discussed in [Mao et al., 2022]): The agents only use local information to update their policies and rarely need to exchange information with others. In addition, while our proofs do not directly generalize to other meta-learning approaches outside MAML, we believe that our analytical methodology (properly defining a similarity metric and then tracking how the policy trajectories on different tasks deviate according to the metric) could still be helpful. 5. We thank the reviewer for reminding us of the recent paper that applies meta-learning to cooperative tasks. We have properly included the reference in the Related Work section. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: I thank the authors for their rebuttal. They have adequately addressed my points. I am particularly pleased to see the strong results of their algorithm on a larger game. Therefore I will increase my score to "Strong Accept".
Efficient Potential-based Exploration in Reinforcement Learning using Inverse Dynamic Bisimulation Metric
Accept (poster)
Summary: This paper introduces a novel approach that combines bisimulation metrics with inverse dynamics modeling to formulate potential functions for reward shaping. The integration of these techniques offers potential-based exploration, and the paper provides theoretical analyses highlighting the benefits of this proposed method. Experiments show its robustness and scalability in various tasks. Strengths: 1. A proper potential-based reward-shaping method that can preserve state differences based on task-specific features. 2. The main claim and the method in this paper are presented in a clear and straightforward manner, making them easy to comprehend. 3. The proposed exploration bonus does not rely on prior human knowledge. 4. Utilizing bisimulation-based metrics as an exploration bonus is interesting and worth investigating. Weaknesses: 1. Although the empirical studies have demonstrated successes in Mujoco tasks and Atari games, it would be beneficial to conduct additional experiments that compare exploration methods. The authors should also evaluate their method on DMC [1] tasks, such as Humanoid tasks, which are known to pose more challenging exploration scenarios. Utilizing benchmarks such as URLB [6] could provide helpful insights for evaluating its exploration ability in reward-free settings. 2. There are some confusing notations in the manuscript. For instance, in Section 3, the reward is defined as $r = \mathcal{R}(s, a)$; however, the modified reward function $\mathcal{R}' = \mathcal{R} + \mathcal{F}$ introduces a discrepancy in the arguments, where $\mathcal{F}(s, a, s') = \gamma\Phi(s') - \Phi(s)$. This inconsistency is also present in the reward function of Theorem 4 in the Appendix. To maintain clarity, the notations should be consistent throughout the entire manuscript.
Additionally, Equation 29 appears unusual, as the expectation is computed by sampling the next state $s'$ while there is another $s'$ as the subscript of the max term within the expectation. 3. Many of the theoretical analyses (Theorems 1-3) closely resemble previous works ([2, 3, 4, 5]), which limits the novelty of these contributions. Furthermore, there appear to be issues with the proof of Theorem 4 (see point 2 above), which undermines the soundness of the paper. It is important to address these concerns to strengthen the overall manuscript. [1]: Tassa, Yuval, Doron, Yotam, Muldal, Alistair, Erez, Tom, Li, Yazhe, Casas, Diego de Las, Budden, David, Abdolmaleki, Abbas, Merel, Josh, Lefrancq, Andrew, et al. Deepmind control suite. arXiv preprint arXiv:1801.00690, 2018. [2]: Amy Zhang, Rowan Thomas McAllister, Roberto Calandra, Yarin Gal, Sergey Levine: Learning Invariant Representations for Reinforcement Learning without Reconstruction. ICLR 2021 [3]: Norm Ferns, Prakash Panangaden, and Doina Precup. Metrics for finite markov decision processes. In UAI, volume 4, pages 162–169, 2004. [4]: Norman Ferns, Doina Precup: Bisimulation Metrics are Optimal Value Functions. UAI 2014: 210-219 [5]: Pablo Samuel Castro: Scalable Methods for Computing State Similarity in Deterministic Markov Decision Processes. AAAI 2020: 10069-10076 [6]: Michael Laskin, Denis Yarats, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang, Lerrel Pinto, Pieter Abbeel: URLB: Unsupervised Reinforcement Learning Benchmark. NeurIPS Datasets and Benchmarks 2021 Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In lines 185-189, it is unclear why exploration solely based on bisimulation metrics would lead to meaningless exploration. The bisimulation metric is derived from the reward function defined for the task.
If the reward function effectively captures the task's objectives (e.g., collecting coins has a higher reward than attacking monsters), it can be assumed that the features learned by the bisimulation metric would also be meaningful. Further clarification is needed to address this potential contradiction. 2. It is unclear whether $\theta$ represents the parameters of the inverse dynamics model or the policy network. To avoid confusion, the authors should use different notations to distinguish between these two entities explicitly. 3. Equation 4 introduces the L1-norm between $a_i$ and $a_j$ as the discrepancy in action, but it is not clear why this specific norm is chosen instead of alternatives such as the L2-norm or some measurements of the distribution distance like W-distance. The authors should provide explanations or justifications for the selection of the L1-norm to enhance understanding and reasoning behind this decision. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: As the proposed method requires permuting the batch to compute the metric (similar to DBC), it may require higher computational complexity (wall-clock time) compared to the other potential-based methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
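For context on the shaping function $\mathcal{F}(s, a, s') = \gamma\Phi(s') - \Phi(s)$ discussed in this review, the following numpy sketch checks the classic potential-based-shaping invariance on a toy 4-state chain MDP (the MDP and potential here are illustrative choices of this note, not the paper's setup): the shaped optimal Q-values differ from the originals by exactly $\Phi(s)$, so the greedy policy is unchanged.

```python
import numpy as np

def q_iteration(P, R, gamma=0.9, iters=500):
    """Tabular Q-iteration for a deterministic MDP where P[s, a] -> s'."""
    nS, nA = P.shape
    Q = np.zeros((nS, nA))
    for _ in range(iters):
        Q = R + gamma * Q[P].max(axis=-1)   # Q[P] has shape (nS, nA, nA)
    return Q

# toy 4-state chain: action 0 moves left, action 1 moves right (clipped at the ends)
nS, nA = 4, 2
P = np.array([[max(s - 1, 0), min(s + 1, nS - 1)] for s in range(nS)])
R = np.zeros((nS, nA)); R[2, 1] = 1.0       # reward for stepping into state 3

phi = np.array([0., 1., 4., 9.])            # arbitrary potential function Φ(s)
gamma = 0.9
R_shaped = R + gamma * phi[P] - phi[:, None]  # F(s, a, s') = γΦ(s') - Φ(s)

Q = q_iteration(P, R, gamma)
Q_shaped = q_iteration(P, R_shaped, gamma)
```

At the fixed points, $Q'^*(s, a) = Q^*(s, a) - \Phi(s)$, a per-state constant shift, so the argmax policy is identical under any choice of $\Phi$; this is why potential-based shaping preserves policy invariance.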
Rebuttal 1: Rebuttal: Thank you for your insightful reviews; for the citations in the response please refer to the reference list in the global comment. *W1: The authors should also evaluate their method on DMC tasks... are known to pose more challenging exploration scenarios...* **Response**: Thank you for your suggestions; the DMC tasks and the URLB benchmark are very interesting, and we are eager to evaluate our algorithm on these benchmarks if time permits. The importance of challenging exploration scenarios has also been discussed in [M] and [N], where the MuJoCo environment with a delayed reward setting is acknowledged as a challenging benchmark for exploration problems. To further address your concern, we additionally include results in another challenging exploration scenario, the goal-conditioned environments suggested by the 3rd Reviewer pk3V, where the reward is only given upon reaching the goal. The results can be found in Figure 3 of the global comment; we can see that our method still achieves the best performance in the challenging tasks of the widely used goal-conditioned environments [K, L]. *W2: There are some confusing notations in the manuscript... in Section 3, the reward is defined...This inconsistency is also present in the reward function of Theorem 4...Equation 29 appears unusual...* *Q2: It is unclear whether $\theta$ represents the parameters of the inverse dynamics model or the policy network. To avoid confusion, the authors should use different notations to distinguish between these two entities explicitly.* **Response**: Thank you for your valuable comments. We use $\theta_I$ to distinguish the parameters of the inverse dynamics model from the policy network, and we will consider selecting another notation to replace $\theta_I$ to avoid confusion. We will make all the notations of the reward function in Sec. 3 and Thm. 4 consistent, e.g., $\mathcal{R}^{\prime}(s,a,s^{\prime})=\mathcal{R}(s,a,s^{\prime})+\mathcal{F}(s,a,s^{\prime})$.
Furthermore, considering that Eq. (29) represents the Bellman optimality equation and $V_{M}^*(s)$ denotes the optimal value function which means $\max_{s^{\prime} \in \mathcal{S}} V_{M}^*(s^{\prime}) = V_{M}^* (s^{\prime})$, we will eliminate the max term to alleviate the potential confusion, so Eq. (29) becomes $V_M^*(s, a)=\mathrm{E}_{s^{\prime}\sim \mathcal{P}(\cdot \mid s, a)}[\mathcal{R}(s, a, s^{\prime})+\gamma V_M^*(s^{\prime})]$. The notation problem will be fixed in the revised version. *W3: Many of the theoretical analyses (Theorem 1-3) closely resemble previous works ([2, 3, 4, 5]), which limits the novelty of these contributions...* **Response**: Previous works [2, 3, 4, 5] (citations refer to the review) mainly emphasize state representation learning and the computation and evaluation of bisimulation metrics. Specifically, [2] uses the bisimulation metric to learn the state representation, and [5] proposes an algorithm for computing on-policy bisimulation metrics. [3, 4] propose the theoretical framework for the bisimulation metric, and the theoretical guarantees of [2] and [5] are also based on [3, 4]. However, these approaches differ significantly from our work, which focuses on boosting exploration under the reward shaping framework. We are the first to propose the **inverse dynamic** bisimulation metric for exploration, and the novelty is evidenced by the other 3 reviewers pk3V, dnFb and naKC (strength points). All the theoretical analyses are based on the **reward shaping** framework, which is significantly different from **representation learning** and **metric computation**. Specifically, our metric incorporates the inverse dynamics on top of the bisimulation metric, making Thm. 1 crucial in ensuring the convergence of our approach. The theoretical analyses in Thms. 2-4 mainly focus on how our method effectively promotes **efficient** exploration. We will improve the emphasis on our contribution in Sec. 5, and the notation issue in Thm. 4 is fixed in the revised version.
*Q1: ..it is unclear why exploration solely based on bisimulation metric would lead to meaningless exploration...features learned by the bisimulation metric would also be meaningful..* **Response**: Firstly, we mean that the **shaping reward** calculated solely from the bisimulation metric may result in a meaningless exploration bonus. While we acknowledge the value of the features learned through the bisimulation metric within the framework of **state representation learning**, it is important to clarify that we only use the metric to calculate state differences as a shaping reward to assist exploration, rather than using it to train the state representation. We mean that an exploration bonus calculated by the bisimulation metric cannot detect state changes resulting from other environmental factors. For example, if the agent Mario is navigating a level with randomly moving monsters and a changing background, it will visit a vast number of different states and collect a large cumulative bonus without taking actions. The agent will learn to stay in the same position without taking any useful actions, since the shaping reward remains high in this case. Thus we introduce the inverse dynamic module into the metric to identify whether state changes are caused by actions, so the agent can explore more effectively. *Q3: Equation 4 introduces the L1-norm between $a_i$ and $a_j$ as the discrepancy in action, but it is not clear why this specific norm is chosen instead of alternatives such as the L2-norm or some measurements of the distribution distance like W-distance...* **Response**: We followed well-known previous work [E, F] that includes the inverse dynamic module. The L1 norm has the lowest computational cost, since the action output of the inverse dynamic module is a deterministic scalar or vector in RL environments. We believe that a more detailed discussion of the choice between the L1 and L2 norms, specific to the algorithm and environment, is a direction for future work.
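For concreteness, the L1-trained inverse dynamics idea discussed above can be sketched in a few lines of NumPy. This is purely illustrative and not the paper's implementation: the linear model, the toy dynamics $s' = s + a$, and all variable names are our own assumptions.

```python
import numpy as np

# Toy sketch: a linear inverse dynamics model I(s, s') -> a trained with the
# L1 loss via subgradient descent. The dynamics s' = s + a and all names are
# illustrative assumptions, not the paper's code.
rng = np.random.default_rng(0)
s = rng.normal(size=(512, 1))
a = rng.normal(size=(512, 1))            # scalar action, as is common in RL envs
s_next = s + a                           # toy ground-truth transition

x = np.concatenate([s, s_next], axis=1)  # inverse model input: (s, s')
W = np.zeros((2, 1))

for _ in range(500):                     # subgradient descent on mean |I(s, s') - a|
    resid = x @ W - a
    W -= 0.05 * x.T @ np.sign(resid) / len(a)

l1_error = float(np.mean(np.abs(x @ W - a)))
print(f"L1 action error after training: {l1_error:.3f}")
```

Since the true action here is exactly linear in $(s, s')$, the L1 error drops close to zero; per-sample cost is a single absolute difference, which is the low-computational-cost point made above.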
--- Rebuttal Comment 1.1: Comment: After carefully reviewing all the responses from the authors, I have several questions and suggestions for improvement: 1. > $V_M^*\left(s\right)$ denotes the optimal value function which means $\max _{s^{\prime} \in S} V_M^*\left(s^{\prime}\right)=V_M^*\left(s^{\prime}\right)$ The statement regarding $\max _{s^{\prime} \in S} V_M^*\left(s^{\prime}\right)$ and $V_M^*\left(s\right)$ is incorrect. $V_M^*\left(s\right)$ represents the optimal value of state $s$, while $\max _{s^{\prime} \in S} V_M^*\left(s^{\prime}\right)$ represents the maximum value over all states. Thus, they are not equivalent. 2. > All the theoretical analyses are based on reward shaping framework, which is significantly different from representation learning and metric computation. Theorems 1-3 (specifically from Line 498 to Line 550 in Appendix C) are not relevant to reward shaping. This should be clarified. 3. > For example, the agent Mario is navigating a level with random moving monsters and changing background, it will visit a vast number of different states and collect lots of cumulative reward without taking actions. How can the agent collect cumulative rewards without taking any actions, even if the background keeps changing? Rewards are typically obtained by collecting coins, hitting monsters, or reaching the flag. It is unclear how rewards can be collected by staying in the same position. Please provide further clarification on this point. 4. > We followed previous famous work([E,F]) which includes the inverse dynamic module. I was unable to find the specific module referred to in paper [F] (specifically, the L1-norm setting). I would suggest the authors provide the exact equation or lines of description for more information. Besides, as far as I know, [1] has already used bisimulation as a bonus term. From this perspective, the main contribution of this paper is to integrate the inverse dynamics model into bisimulation to design exploration bonuses.
Consequently, it is important to clearly emphasize the specific reasons for choosing each module, particularly the non-trivial design of the inverse dynamic module. [1]: Dadashi R, Rezaeifar S, Vieillard N, et al. Offline reinforcement learning with pseudometric learning[C]//International Conference on Machine Learning. PMLR, 2021: 2307-2318. 5. I partially agree with Q3 from Reviewer naKC. > **Response to Q3 from naKC**: The shaping reward will have no impact on training, so the agent stops exploring after the convergence of optimal policy No, it just means the bisimulation metric converges to its fixed point, rather than meaning that the potential function is zero. $\gamma d_\text{inv}(s',s_0) - d_\text{inv}(s,s_0)$ would still be large if $s$ and $s'$ are totally different states. This bonus will then remain non-zero even after the policy has converged. 6. > **Response to Q2 from dnFb**: So the primary focus of the shaping reward is on capturing the difference between $s_t$ and $s_{t+1}$. I noticed that in Equation 24, the derivation shows that $|V^{\pi}(s)-V^{\pi}(s_0)|=|V^{\pi}(s)|-C_1$. Consider two different cases, $V^{\pi}(s)=5$ and $V^{\pi}(s)=1$, with $V^{\pi}(s_0)=2$. The values should be $|5-2|=3$ and $|1-2|=1$, respectively. However, the equation suggests $|V^{\pi}(s)|-C_1$. I am confused about this discrepancy and hope the authors can help me address it. Overall, I believe addressing these questions and suggestions will help improve the clarity and accuracy of the paper. I will keep the score as-is. --- Reply to Comment 1.1.1: Title: Emphasizing the contribution and addressing notation issues Comment: Thanks for your comment; we believe your main question concerns the contribution of our work.
Apart from including the inverse dynamic module in the bisimulation metric, we want to emphasize that our key contribution is that we are the **first** exploration method to address **(1)** the lack of guaranteed convergence to an optimal policy, **(2)** the lack of scalability, and **(3)** the reliance on prior knowledge, while promoting learning efficiency (achieving SOTA in competitive environments). Please refer to lines 28-46, Table 3 in Appendix E, and Appendices D.3 and D.4, where we clarify and verify the weaknesses of typical exploration methods. This contribution is also acknowledged by reviewers dnFb and pk3V. The detailed responses are as follows. Q1: ..$\max _{s^{\prime} \in S} V_M^*(s^{\prime})$ and $V_M^*(s)$ are not equivalent.. A1: Yes, $\max _{s^{\prime} \in S} V_M^*(s^{\prime})$ is not equivalent to $V_M^*(s)$; we mean that Eq(29) is based on the **Bellman optimality equation**, in which the max over states should be omitted: $V_M^*(s)=\max _a \underset{s^{\prime} \sim P}{\mathrm{E}}[R(s, a)+\gamma V_M^*(s^{\prime})]$. The max is taken over actions, and we will fix this in the revision. Q2: Theorems 1-3 (specifically from Line 498 to Line 550 in Appendix C) are not relevant to reward shaping. This should be clarified. A2: Our shaping reward is calculated by a bisimulation metric-based potential function, so the convergence of the metric (Thm 1) ensures that our shaping reward meets the form of potential-based reward shaping (Eq (1)). Thm 2 demonstrates the property of our shaping reward that explains why it encourages agents to explore states with a high value difference. Thm 3 serves Thm 4 by explaining how our shaping reward function changes the optimal value function between the original and modified MDPs. These analyses are all conducted within the framework of **potential-based reward shaping** and aim to elucidate the reasons behind the superior efficiency of our shaping reward (please also refer to the first response to reviewer pk3V).
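To make the Bellman optimality point in A1 concrete, here is a minimal value-iteration sketch on a made-up two-state, two-action MDP (all numbers invented for illustration). It shows that the max in $V_M^*(s)=\max_a \mathbb{E}_{s'\sim\mathcal{P}}[\mathcal{R}(s,a)+\gamma V_M^*(s')]$ is taken over actions, and that repeated backups converge to a fixed point:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP illustrating the Bellman optimality
# backup V*(s) = max_a E_{s'~P(.|s,a)} [R(s,a) + gamma * V*(s')].
P = np.array([[[0.9, 0.1],   # P[s, a, s']: transition probabilities
               [0.2, 0.8]],
              [[0.7, 0.3],
               [0.1, 0.9]]])
R = np.array([[0.0, 1.0],    # R[s, a]: immediate rewards
              [0.5, 0.0]])
gamma = 0.9

V = np.zeros(2)
for _ in range(500):          # value iteration: repeated Bellman optimality backup
    V = np.max(R + gamma * P @ V, axis=1)   # max over actions, not states

print("V* =", V)              # at the fixed point, another backup leaves V unchanged
```

Because the backup is a $\gamma$-contraction, 500 iterations bring `V` to the fixed point up to numerical precision, which is the convergence behavior Thm 1 establishes for the metric operator in the paper.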
Q3: ..It is unclear how rewards can be collected by staying in the same position. Please provide further clarification on this point. A3: The reward you mentioned is the original reward function $R$ fed back from the environment. In this work, we reshape the reward as $R^{\prime} = R + F$ (see line 115), where $F(s_t, a, s_{t+1})=\gamma d_{i n v}(s_{t+1}, s_0)-d_{i n v}(s_t, s_0)$. When the background state changes, the shaping reward $F$ collected by the agent is high, so the agent accumulates numerous rewards even while staying in place. Q4: ..I was unable to find the specific module referred to in paper [F]..[1] has already used bisimulation as a bonus term..it is important to clearly emphasize the specific reasons for choosing each module. A4: Please refer to the embedding network (the inverse dynamic module) in Section 2 of [F], which is built upon a Siamese network [Koch et al.] using the L1 norm to maximize likelihood; in addition, please see the L1 norm of the inverse dynamics model used in Eq (2) of [Hong et al.], and many other works use the L1 norm in this module (see Section 2 and Eq (6) in [Meier et al.]). The intuitive reason for the L1 norm is its low computational cost, since the action output is mostly a **scalar** in RL environments. To further address your concern, we will add more citations on the choice of L1 and discuss its low computational cost in the revision. The bonus term in [1] is not a "reward", which is very different from ours. We use the metric to calculate an exploration reward, whereas [1] uses it to restrict exploration without modifying the reward (the authors of [1] detail the difference in Sec 3.1). We want to emphasize again that our **primary** contribution lies in being the first exploration method to ensure the convergence of an optimal policy and enhance learning efficiency, without relying on prior knowledge. Q5: ..this bonus will keep being non-zero even if the policy is converged...
A5: The policy $\pi^*(s)$ has converged; in this case, although the bonus (the shaping reward $F$) remains non-zero, it will not affect how the agent chooses actions, since the input of the policy $\pi^*(s)$ is solely the state and not the reward, so our bonus has no impact on training after convergence. Q6: ..I am confused about this discrepancy.. A6: Sorry, this is a confusion caused by a line break; we mean $d_{inv}(s, s_0) \geq |V^\pi(s)-V^\pi(s_0)| \geq |V^\pi(s)| - |V^\pi(s_0)|$. Since $s_0$ is the initial state (*env.reset()*), fixed within each episode, $|V^\pi(s_0)|$ is a constant $C_1$. Thanks for pointing out the issue; we will fix it in the revision. We hope that your concerns are well addressed. * Reference > [Koch et al.] Koch G, Zemel R. Siamese neural networks for one-shot image recognition. ICML 2015. > > [Hong et al.] Hong Z W, Fu T J, et al. Adversarial active exploration for inverse dynamics model learning. CoRL 2020. > > [Meier et al.] Meier F, Kappler D, et al. Towards robust online inverse dynamics learning. IROS, IEEE, 2016.
Summary: This paper focuses on the topic of reward shaping in reinforcement learning to encourage exploration. Unlike previous methods that rely heavily on a count-based episodic term in the exploration bonus, they provide an end-to-end potential-based exploration bonus. This paper proposes to use the bisimulation metric in potential-based reward shaping. Specifically, they propose the Inverse Dynamic Bisimulation Metric to avoid meaningless exploration that is not caused by actions. They provide rigorous proof demonstrating the convergence of their method to a fixed point under certain assumptions. Experimental results on MuJoCo locomotion tasks and Atari games show that their method outperforms other reward-shaping algorithms by a large margin. Strengths: 1. The paper is well-written and easy to follow. Figure 1 gives an illustrative example to explain the meaning of the exploration bonus. 2. The idea of using the bisimulation metric to identify the difference between states is novel; it is intuitive but has not been explored before. 3. Experiments on both continuous control and discrete Atari games are comprehensive and thorough. Weaknesses: 1. There are 4 theorems in the main text without enough discussion about their necessity and importance, particularly Theorem 3 and Theorem 4. 2. The algorithm may need to be evaluated in sparse reward settings, for example, goal-conditioned tasks. The delayed reward setting of MuJoCo is still not challenging enough. 3. A minor thing: Curves in Figure 5 could be smoothed for better visualization. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The observation in the Atari experiments shows that increasing the horizon h from 1 to 2 causes a significant decline for the proposed method. Does this mean that the algorithm is unstable? In some real-world or complicated tasks, the influence of an action may be delayed and consecutive states may show no differences.
Therefore, we may need to select a proper h to calculate the exploration bonus. If h has such a large influence, sometimes using a fixed h may also cause problems. 2. Reward shaping that encourages exploration is usually used for goal-conditioned tasks that have very sparse rewards. I am curious about the performance of the proposed method in such scenarios. The MuJoCo locomotion and Atari game environments may not be challenging enough. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: One potential limitation may come from potential-based reward shaping, which is the basis of the proposed method. Sometimes it is hard to select the horizon for calculating the difference between states. The experimental results also show that the algorithm is sensitive to the horizon. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful reviews. We have included the required experiments regarding W2 and Q2 in the PDF of the global comment. For all the references mentioned in this response, please find the reference list in the global comment. *W1: There are 4 theorems in the main context without enough discussion about their necessity and importance, particularly theorem 3 and theorem 4* **Response**: The theoretical analysis is critical in supporting the evidence for our contribution (see lines 60-66). It serves to explain how our shaping reward guarantees policy invariance and enables more efficient exploration compared to other exploration methods. The purpose of Theorem 1 is to demonstrate that our inverse dynamic bisimulation metric converges to a fixed point, ensuring that it does not interfere with the convergence of the policy. Theorem 2 provides compelling evidence supporting the intuition behind the bisimulation metric and explains Figure 2; it guarantees that our metric establishes a bound on value differences. Theorem 4 introduces the relation between the optimal value function of the original MDP $M$ and that of the modified MDP $M^{\prime}$: $V^*_{M^{\prime}}=V_{M}^*-\Phi(s)$, where $\Phi(s)$ is the potential function, which explains the **necessity** of a good choice of potential function. Theorem 3 provides the reason why our potential function $d_{inv}$ can accelerate training: since $d_{inv}$ approximates the absolute value of the optimal value function, the value function of the modified MDP, $V^*_{M^{\prime}}$, can be learned efficiently by focusing only on the non-zero V-values. In a nutshell, Theorems 3 and 4 analyze how our method promotes efficiency from the viewpoint of value function learning. Ours is the first work to achieve exceptional exploration capabilities while also boosting training efficiency by incorporating value function learning.
The analysis of the four theorems is imperative to substantiate the use of the term **efficient** in the title. Thank you for the comment; we will further enhance the discussion in the revised version. *W2: The algorithm may need to be evaluated on sparse reward settings, for example, goal-conditioned tasks. The delayed reward setting of mujoco is still not challenging enough* *Q2: Reward shaping that encourages exploration is usually used for goal-conditioned tasks that have very sparse rewards. I am curious about the performance of the proposed method in such scenarios. The mujoco locomotion and Atari game environment may not be challenging enough* **Response**: Thank you for your suggestion. The Atari games are among the most widely acknowledged benchmarks for exploration methods (ICM [E], RND [J], NGU [F]), and the MuJoCo benchmark is one of the most widely used [M] for continuous control problems (the actions are continuous). The delayed-reward setting makes the reward sparser as the number of delayed steps increases. Following your suggestion, we include results on the widely used [K, L] goal-conditioned environments (robot-arm control environments) FetchPickAndPlace, FetchSlide, and FetchPush. The results can be found in Figure 3 of the PDF in the global comment; our method LIBERTY still achieves the best performance on the challenging goal-conditioned tasks compared with the other baselines, and we will include these results in the revision. *W3: A minor thing: Curves in Figure 5 could be smoothed for better visualization.* **Response**: Thank you for the suggestion; we will fix this in the revised version. *Q1: ...increasing the horizon h from 1 to 2 causes a significant decline of the proposed method. Does this mean that the algorithm is unstable? In some real-world tasks or complicated tasks, the influence of action may be delayed and the consecutive states may have no differences.
Therefore, we may need to select a proper h to calculate the exploration bonus. If h has such a large influence, sometimes using a fixed h may also cause problems.* *Limitations: Sometimes it is hard to select the horizon to calculate the difference between states. The experimental results also show that the algorithm is sensitive to the horizon* **Response**: Thank you for your comments. We emphasize that the study of the length of the state sequence $h$ is an ablation experiment; in our algorithm, $h$ is set to 1 by default to meet the form of potential-based reward shaping (see Eq(1)). Recent studies [G, H] have explored the incorporation of diversity between neural networks and multi-agents to promote exploration, and they conducted ablation studies to investigate the factors that influence diversity within the system. In our method, the length of the state sequence $h$ is the factor that influences the diversity between state sequences, so we carry out the ablation study following [G, H]. In real-world or complicated tasks, where the influence of an action may be delayed and consecutive states may show no differences, we agree with you that a fixed $h$ may not be the best fit; we could train a dynamic $h$ as a parameter using meta-gradients, like the dynamic $\gamma$ in [I], and we think this is another interesting direction worth exploring in the future. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thanks for addressing my concerns. I don't have further questions. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our contribution. We believe that your valuable feedback will improve the quality of our paper.
Summary: This paper proposes the automatic construction of a potential function for policy-invariant reward transformation. The basic idea is adding the discrepancy in action outcomes from the inverse dynamic model to the on-policy bisimulation metric proposed by Castro [2020]. The authors then propose a method to train the metric, named the inverse dynamic bisimulation metric. The authors show that it can be used as a potential function for reward shaping and prove that the proposed metric bounds the value difference. Experimental results show that the proposed method outperforms several baseline methods, such as ICM, RND, NGU, RIDE, and DPBA. Strengths: - Originality: The inverse dynamic bisimulation metric is novel, although it is a simple extension of the on-policy bisimulation metric. - Quality: The experimental results support the claims and the proposed method. - Clarity: The paper is written well and easy to follow. - Significance: As the authors point out, previous studies designed the potential function based on some domain knowledge. By contrast, this study provides a new research direction for automatic construction. Weaknesses: - The proposed method is a kind of model-based reinforcement learning (RL) because it explicitly estimates the transition and reward functions. Therefore, comparing the proposed method with model-based approaches is important, but the authors do not discuss this point. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Major comments: - If my understanding is correct, the proposed method estimates the transition function explicitly. In addition, I think that the reward function is also estimated explicitly because $r_i^\pi$ is computed by $\mathbb{E}_{a \sim \pi}[\mathcal{R}(s_i, a)]$. This suggests that the proposed method can be interpreted as a model-based approach. Is the proposed method more efficient than model-based RL? Please discuss this point.
- Definition 3 is interesting, but it is unclear how $s_0$ is determined. If the simulation always starts from the same state, the potential function (6) is fine. However, it is unclear how $\Phi(s)$ works when $s_0$ is sampled from some initial state distribution. - I do not fully understand why $d_{inv}(s, s_0)$ is a good approximation of the optimal value function because the optimal value function is not necessarily non-negative. For example, it is negative if the reward function is negative. However, $d_{inv}$ is non-negative. - I understand that the inverse dynamic model $I: \mathcal{S} \times \mathcal{S} \to \mathcal{A}$ is widely used in this field, but it implicitly assumes that the action that makes a transition from $s_t$ to $s_{t+1}$ is determined uniquely. The inverse dynamic model does not work well if multiple actions make the same state transitions. Would you discuss what happens if the actions are redundant? - The proposed inverse dynamic bisimulation metric (4) uses the 2-Wasserstein metric, while the on-policy bisimulation metric (2) uses the 1-Wasserstein metric. Although the difference is discussed in Appendix B, it is unclear how the difference between 1- and 2-Wasserstein metrics affects the learning process. Minor comments: - The authors explain the behavior of the exploration bonus reward in SuperMarioBros in Figure 1. However, the proposed method is evaluated on the MuJoCo and the Atari environments. It would be nice to show the results in SuperMarioBros. - Line 124: I think that $\Phi: \mathcal{S} \to \mathcal{R}$ should be $\Phi: \mathcal{S} \to \mathbb{R}$. - Line 224: $R$ -> $\mathbb{R}$. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Minor comment: - The authors provide researchers with a broader impact of this study but do not discuss this work's potential negative social impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful reviews. For all the references mentioned in this response, please find the reference list in the global comment. *W1: ...because it explicitly estimates the transition and reward functions. Therefore, comparing the proposed method and model-based approaches is important...* *Q1: It suggests that the proposed method can be interpreted as a model-based approach. Is the proposed method more efficient than model-based RL? Please discuss this point...* **Response**: First, we clarify that our method only estimates the transition function, not the reward function, which differs from model-based approaches. Model-based approaches [D] involve learning the environment's transition function $\mathcal{P}(s,a)$ and reward function $\mathcal{R}(s,a)$ to facilitate planning in policy learning. In our case, the computation of $r_i^{\pi}=\mathbb{E}_{a \sim \pi}[\mathcal{R}(s_i, a)]$ is done explicitly without the need to learn the reward function $\mathcal{R}(s, a)$ directly, because $\mathcal{R}(s, a)$ is derived from the environment's feedback. Consequently, we do not draw a direct comparison with model-based approaches. Our approach focuses on enhancing exploration by estimating the transition function as a component of the state difference. Several key advantages set our method apart. Firstly, our approach is **end-to-end**, eliminating the prerequisite of learning the environment model in advance. Secondly, our method has been mathematically proven to converge (refer to Theorem 1), while model-based approaches [D] often lack guaranteed convergence of their learned models. Thirdly, in sparse reward scenarios, model-based approaches often yield reward functions that remain close to 0 for extended periods, which hampers their effectiveness in addressing sparse reward challenges. In contrast, our method excels in sparse reward problems, as evidenced in Table 1 of the experiments.
*Q2: ...but it is unclear how $s_0$ is determined. If the simulation always starts from the same state, the potential function (6) is fine. However, it is unclear how $\Phi(s)$ works when $s_0$ is sampled from some initial state distribution.* **Response**: Good catch. $s_0$ is determined by *env.reset()* during training. For environments like autonomous driving, where $s_0$ is sampled from some initial state distribution, $s_0$ is sampled at the beginning of each episode, and $\Phi(s)=d_{inv}(s, s_0)$ serves as the potential function for calculating the shaping reward for the transitions in that episode. When the episode ends, its transitions are added to the buffer, a new initial state $s_0^{new}$ is resampled for the beginning of the next episode, and the potential function is set to $\Phi(s)=d_{inv}(s, s_0^{new})$ for the new episode. It is noteworthy that a different initial state $s_0$ has minimal impact on the training process: in the shaping reward function $F = \gamma d_{inv}(s_{t+1}, s_0) - d_{inv}(s_t, s_0)$, the initial state $s_0$ acts as a baseline, so the primary focus of the shaping reward is on capturing the difference between $s_t$ and $s_{t+1}$. *Q3: I do not fully understand why $d_{i n v}\left(s, s_0\right)$ is a good approximation of the optimal value function because the optimal value function is not necessarily non-negative...* **Response**: Thank you for pointing out this issue. As shown in the proof of Theorem 3 in Appendix C, $d_{inv}$ is an approximation of the *absolute value* of the optimal value function; we will fix this typo in the revised version. *Q4: I understand that the inverse dynamic model $I: \mathcal{S} \times \mathcal{S} \rightarrow \mathcal{A}$ is widely used in this field, but it implicitly assumes that the action that makes a transition from $s_t$ to $s_{t+1}$ is determined uniquely.
The inverse dynamic model does not work well if multiple actions make the same state transitions. Would you discuss what happens if the actions are redundant?* **Response**: Good question. We clarify that the uniqueness of the action output in the inverse dynamic module is widely assumed in previous work [E, F]. When multiple actions make the same state transition, the inverse dynamic model can first output the probability of each redundant action $a_i$ as $p(a_i|s,s^{\prime})$ and then sample a deterministic action $a_j$ as output; when the actions are continuous, the final action can be sampled from a distribution (e.g., a Gaussian). Consequently, the redundancy of actions has no impact on the training process in our method, owing to the uniqueness of the action output. *Q5: ...it is unclear how the difference between 1- and 2-Wasserstein metrics affects the learning process* **Response**: As shown in Lemma 1 in Appendix B, for any two distributions $\mu, \lambda$, $W_1(\mu, \lambda) \leq W_2(\mu, \lambda)$. During the learning process, the $W_2$ metric therefore offers a greater amount of shaping reward for the same transition. This proves particularly advantageous for exploration, especially in sparse reward settings where the external reward remains zero most of the time. Furthermore, the closed-form solution for the $W_2$ metric between Gaussians substantially reduces the computational cost of estimating the term "$W_2(d_{inv})(\mathcal{P}^\pi(\cdot \mid s), \mathcal{P}^\pi(\cdot \mid s^{\prime}))$" compared to $W_1$. Thanks for your comment; we will provide a more detailed discussion in the revised version. *Minor comments* **Response**: Thank you for your valuable feedback. We would like to emphasize that this work does not have any potential negative social impacts. Furthermore, in the revised version, we will incorporate the results obtained from SuperMarioBros and fix the notation issues in Lines 124 and 224.
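As a small numerical illustration of the Q5 point above ($W_1 \leq W_2$, and the Gaussian closed form for $W_2$), here is a toy NumPy check. The two Gaussians and sample sizes are invented for illustration; in 1-D, both Wasserstein distances can be estimated via the sorted-sample (quantile) coupling, and for 1-D Gaussians $W_2^2 = (\mu_1-\mu_2)^2 + (\sigma_1-\sigma_2)^2$.

```python
import numpy as np

# Toy check (illustrative numbers): estimate W1 and W2 between two 1-D
# Gaussians via the sorted-sample quantile coupling, and compare W2 against
# its Gaussian closed form W2^2 = (mu1 - mu2)^2 + (sigma1 - sigma2)^2.
rng = np.random.default_rng(0)
mu1, sig1, mu2, sig2 = 0.0, 1.0, 2.0, 0.5
n = 200_000

x = np.sort(mu1 + sig1 * rng.normal(size=n))   # empirical quantiles of N(mu1, sig1^2)
y = np.sort(mu2 + sig2 * rng.normal(size=n))   # empirical quantiles of N(mu2, sig2^2)

w1 = float(np.mean(np.abs(x - y)))             # W1 via the quantile coupling
w2 = float(np.sqrt(np.mean((x - y) ** 2)))     # W2 via the same coupling
w2_closed = float(np.sqrt((mu1 - mu2) ** 2 + (sig1 - sig2) ** 2))

print(f"W1 estimate: {w1:.3f}, W2 estimate: {w2:.3f}, closed-form W2: {w2_closed:.3f}")
```

The inequality `w1 <= w2` holds deterministically here (it is Jensen's inequality applied to the same coupling), and the sampled `w2` matches the closed form, which is the computational shortcut the response refers to.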
Summary: This paper proposes to use the inverse dynamic bisimulation metric for potential-based reward shaping (PBRS). Specifically, the authors introduce the inverse dynamic bisimulation metric, which augments the bisimulation metric with an inverse dynamics term to account for state differences caused by actions. They then use the inverse dynamic bisimulation distance between the initial state and the current state as the potential function for PBRS. Compared to the L2 distance used in standard PBRS, the inverse dynamic bisimulation metric prioritizes visitation of states with higher TD error. Moreover, this bisimulation metric enjoys numerous theoretical guarantees, including convergence to a fixed point and connections to the value difference. The authors validate the superiority of their method compared to prior curiosity-based and potential-based exploration methods across a suite of MuJoCo and Atari tasks. Strengths: - The idea of using the bisimulation metric for potential-based exploration is original. In particular, by augmenting the bisimulation metric with an inverse dynamics term, the agent is less incentivized to visit states with similar action outcomes. - The experiments are substantive and exhaustive, demonstrating an overall improvement over prior exploration methods. - The theoretical results are solid. Weaknesses: - While using the bisimulation metric for exploration is appealing, the intuition for doing so remains largely unclear. From my understanding, the bisimulation metric offers a means to partition the state space via the similarity of reward and transition. However, it is unclear how incentivizing the agent to visit states that are different relates to incentivizing the agent to explore states that are novel. - The theoretical results seem a bit irrelevant.
While it is nice to know the connection between the inverse dynamic bisimulation metric and the value difference / optimal value function, it remains elusive how this could be beneficial to exploration. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - If the intuition behind bisimulation-based exploration is that the potential function corresponds to TD error, then how is it better than an exploration method which directly incentivizes the visitation of transitions with large TD error? Where does the improvement come from? - How does the value difference bound in Theorem 2 relate to TD error? The TD error includes the current timestep reward and the discount factor, but the value difference does not. - It seems that even at convergence, the bisimulation metric would still be large between adjacent but critically different states (e.g. states that have vastly different rewards). How does the method know when to stop exploring in this case? - How does the method perform so well in the sparse reward setting, when the bisimulation metric explicitly includes a reward term? - Can you include comparisons with the variant of the algorithm w/o inverse dynamics on Atari games? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors mention that their method may encounter limitations when tackling prolonged and hard exploration. But this is a rather vague statement. It would help to elaborate on specific settings that their method struggles with. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful reviews. The requested experiments have been included in the PDF file. For all the references mentioned in the response, please find the reference list in the global comment. *W1: ...the intuition of the bisimulation metric for doing so remains largely unclear... However, it is unclear how incentivizing the agent to visit states that are different relates to incentivizing the agent to explore states that are novel...* **Response**: Recent studies, such as RIDE [A] and PBRS [B], have investigated encouraging agent exploration through state differences. However, these methods encounter issues of inefficiency and restricted scalability, compounded by their reliance on pre-existing knowledge (see lines 28-46 for reference). We endeavor to tackle these challenges in our work by introducing a perspective rooted in bisimulation metric-based state differences. A visual representation of this concept can be observed in Figure 2 (lines 158-184): the agent can discover novel states by additionally considering the value difference, and this is acknowledged by reviewers dnFb (Strength point 3), pk3V (Strength point 1) and AtiU (Strength point 2). As shown in Figure 1, spikes in the curve (high state difference) correspond to pivotal moments of Mario's exploration, like jumping, boarding, and raising the flag. These actions lead to substantial state changes, including Mario reaching novel states like getting on the hoverboard. Conversely, in the 2nd and 3rd frames, we observe that Agent Mario can become stuck, with the state remaining almost unchanged during this interval. *Q1: If the intuition ... is that the potential function corresponds to TD error, how is it better than an exploration method which directly incentivizes the visitation of transitions with large TD error? Where does the improvement come from?* **Response**: A summary of exploration methods can be found in Sec 2 and Table 3 in Appendix E. 
Even with the prioritization of visitation for transitions with large TD error, these methods fail to ensure *policy invariance* of the original MDP and lack *scalability* when compared to our approach (please refer to lines 36-46 and lines 60-69). Additionally, we employ Prioritized Experience Replay [C] to prioritize the visitation of transitions with large TD error in the most competitive baseline, RIDE; the results can be found in Figure 2 in the PDF of the global comment. The performance of RIDE with PER declines or remains relatively unchanged across the six tasks. The reason is that RIDE then only learns to explore state pairs with high TD error, which significantly restricts exploration. For other exploration methods, there is a trade-off between promoting the agent's exploration and prioritizing visitation of transitions with large TD error. Note that our approach takes into account the TD error between states derived from the metric. This means that we do not require prioritization of transitions with high TD error, as our model can autonomously assess its own performance. So we can achieve excellent exploration (see Figures 4 and 6) as well as accelerated convergence. *W2: The theoretical results seem a bit irrelevant... it remains elusive how this could be beneficial to exploration* **Response**: The theoretical analysis is essential in supporting our contributions, *policy invariance* and *more efficient exploration* (see lines 60-66). Thm 1 offers evidence of the convergence of our metric. Thm 2 guarantees that our method achieves **efficient** exploration by considering the value difference, so that training efficiency is improved. Thms 3 and 4 analyze the relationship between our potential function and the optimal value functions of the modified MDP and the original MDP. Since Eq(10) is satisfied, the learning of the optimal value function can be more efficient by focusing on the non-zero V-values. 
Thank you, and we will enhance this part in the revised version. *Q2: How does the value difference bound in Theorem 2 relate to TD error...* **Response**: The TD error is defined as: $\delta_t=R_{t+1}+\gamma V\left(S_{t+1}\right)-V\left(S_t\right)$. Intuitively, since $\gamma$ is a constant during training, if the value difference between states $S_t$ and $S_{t+1}$ is large, the shaping reward $F$ evaluated by the potential function will be large, which means $R = R^e + F$ will be large ($R^e$ is the external reward from the environment). So the TD error of the transition will be large if the value difference between adjacent states is large. *Q3: ...even at convergence, the bisimulation metric would be large between adjacent different states... How does the method know when to stop exploring in this case?* **Response**: Please refer to Sec 3 (lines 122-126): our shaping reward is potential-based, and our metric is proven to converge to a fixed point (see Thm 1), so it will not affect the optimal policy of the original MDP. As the policy converges to the optimal policy, the shaping reward will have no impact on training, so the agent *stops* exploring after convergence of the optimal policy, even in the case of adjacent states with vastly different rewards. *Q4: How does the method perform so well in the sparse reward setting, when the bisimulation metric explicitly includes a reward term?* **Response**: Please refer to Eq(4). In the sparse reward setting, the shaping reward mainly relies on the last two terms in Eq(4), which measure the difference between transition distributions and action outcomes. The agent will therefore try to maximize the reward by exploring actions and transition distributions that differ greatly; agents are encouraged to take more diverse actions to maximize the last two terms in Eq(4), so exploration is effectively promoted. 
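To make the Q2 response concrete, here is a minimal runnable sketch of potential-based reward shaping and its effect on the TD error. Only the PBRS form $F = \gamma\Phi(s') - \Phi(s)$ from Ng et al. [B] and the TD error definition above come from the source; the potential and value numbers are illustrative placeholders, not the paper's learned bisimulation-based potential.

```python
# Sketch of potential-based reward shaping (PBRS) and its effect on the
# TD error. Values are illustrative placeholders, not learned quantities.

GAMMA = 0.99  # discount factor, assumed constant during training

def shaping_reward(phi_s, phi_s_next, gamma=GAMMA):
    # PBRS form F(s, s') = gamma * Phi(s') - Phi(s); this form preserves
    # the optimal policy of the original MDP (Ng et al. [B]).
    return gamma * phi_s_next - phi_s

def td_error(r_ext, v_s, v_s_next, phi_s, phi_s_next, gamma=GAMMA):
    # Shaped reward R = R^e + F; the TD error delta = R + gamma*V(s') - V(s)
    # therefore grows with the potential difference between adjacent states.
    r = r_ext + shaping_reward(phi_s, phi_s_next, gamma)
    return r + gamma * v_s_next - v_s

# A transition with a large potential (value) difference yields a larger
# shaping reward, and hence a larger TD error, than one with a small gap.
large_gap = td_error(0.0, v_s=1.0, v_s_next=1.0, phi_s=0.0, phi_s_next=2.0)
small_gap = td_error(0.0, v_s=1.0, v_s_next=1.0, phi_s=0.0, phi_s_next=0.1)
```

This matches the argument above: the TD error of a transition is large exactly when the potential (value) difference between its adjacent states is large.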
*Q5: Can you include comparisons with the variant of the algorithm w/o inverse dynamics on Atari games?* **Response**: Yes, we have included it in Figure 1 in the PDF of the global comment. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my comments and providing additional results. I am convinced that this work has a substantial contribution and will adjust my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our contribution. Your valuable suggestions will undoubtedly enhance the quality of our paper.
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable comments, and we summarize the major concerns raised by the reviewers as follows: ### Sparse reward setting and more challenging environments According to the review of the 1st reviewer naKC, there are questions about how our method can achieve good performance in the sparse reward setting, and the 3rd reviewer pk3V suggests that we should evaluate our method on a more sparse-reward goal-conditioned environment. The 4th reviewer AtiU suggests that we should also evaluate our method in more challenging exploration scenarios. To address this, we have included additional experiments in more challenging environments in the PDF and provide an analysis of how our method can perform well in the individual responses. ### Theorem connection and discussion The 1st reviewer naKC has some concerns about how the theoretical analysis is beneficial to exploration, the 2nd reviewer dnFb has some questions on Theorem 2, and the 3rd reviewer pk3V suggests that we should improve the discussion about the necessity and importance of our theoretical results. Lastly, the 4th reviewer AtiU is concerned that Theorems 1-3 closely resemble previous work. To address this, we have detailed the contribution and necessity of our theoretical analysis in each individual response. ### Intuition and questions of model-based approaches The 1st reviewer has concerns about the intuition behind our method, and the 2nd reviewer suggests that we should make a comparison with model-based approaches. To address this, we have explained our intuition referring to Section 4, and we have clarified that our method is not model-based and detailed the comparison with model-based approaches in the individual responses. We thank all the reviewers again for putting time and care into reviewing the paper, and we have answered all the reviewers' questions and minor comments in the individual responses. 
To facilitate cross-referencing, we present all the references utilized in the response here. ### Reference list > [A] Raileanu R, Rocktäschel T. RIDE: Rewarding Impact-Driven Exploration for Procedurally-Generated Environments. ICLR 2020. > > [B] Ng A Y, Harada D, Russell S. Policy invariance under reward transformations: Theory and application to reward shaping. ICML 1999. > > [C] Schaul T, Quan J, Antonoglou I, et al. Prioritized experience replay. arXiv 2015. > > [D] Kaiser L, Babaeizadeh M, Milos P, et al. Model-based reinforcement learning for Atari. ICLR 2020. > > [E] Pathak D, Agrawal P, Efros A A, et al. Curiosity-driven exploration by self-supervised prediction. ICML 2017. > > [F] Badia A P, Sprechmann P, Vitvitskyi A, et al. Never give up: Learning directed exploration strategies. ICLR 2020. > > [G] Sheikh H, Phielipp M, Boloni L. Maximizing ensemble diversity in deep reinforcement learning. ICLR 2021. > > [H] Li C, Wang T, Wu C, et al. Celebrating diversity in shared multi-agent reinforcement learning. NeurIPS 2021. > > [I] Xu Z, van Hasselt H P, Silver D. Meta-gradient reinforcement learning. NeurIPS 2018. > > [J] Burda Y, Edwards H, Storkey A, et al. Exploration by random network distillation. arXiv 2018. > > [K] Zhao R, Gao Y, Abbeel P, Tresp V, Xu W. Mutual Information State Intrinsic Control. ICLR 2021. > > [L] Eysenbach B, Zhang T, Levine S, et al. Contrastive learning as goal-conditioned reinforcement learning. NeurIPS 2022. > > [M] Zheng Z, Oh J, Singh S. On learning intrinsic rewards for policy gradient methods. NeurIPS 2018. > > [N] Mazzaglia P, Catal O, Verbelen T, et al. Curiosity-driven exploration via latent Bayesian surprise. AAAI 2022. Pdf: /pdf/74cc58283e0d0fa81606919a0474cf6d50ed6cce.pdf
NeurIPS_2023_submissions_huggingface
2023
AR-Diffusion: Auto-Regressive Diffusion Model for Text Generation
Accept (poster)
Summary: This paper presents a diffusion model for text generation. The idea is generally interesting. It performs both sentence-level and token-level diffusion, where the latter is diffused with dynamic movement speeds. Its experiments are well-designed and its empirical results are strong. Strengths: 1. The method is interesting, with a perspective to discuss autoregressive and non-autoregressive diffusion models. 2. The skipping mechanism is useful to accelerate the generation process. 3. The empirical results are strong. Weaknesses: 1. The paper is hard to follow, and the writing should be improved. 2. Limited diffusion model baselines and some missing related baselines. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: 1. I wonder if the skipping mechanism can also be applied to other diffusion models. 2. In Algo2 line 6, where does n come from? Do you enumerate n? 3. In Diffusion-LM, there is a rounding operation as in Algo 2 line 9 at each diffusion step. It seems AR-Diffusion does not require such a rounding operation at each step. If true, would DPM-Solver help accelerate the diffusion process? 4. GENIE is the main (almost the only) diffusion model baseline in the experiments, but some related baselines are missing, such as DiffuSeq [1], CDCD [2], Difformer [3], DINOISER [4] and RDM [5]. It seems the authors already noticed some of them in the related work section, and I think it would be helpful to include more complete baselines. If my concerns are addressed, I'm willing to raise my score. References 1. Gong et al. Diffuseq: Sequence to sequence text generation with diffusion models 2022 2. Dieleman et al. Continuous diffusion for categorical data 2022 3. Gao et al. Difformer: Empowering Diffusion Models on the Embedding Space for Text Generation, 2022 4. Ye et al. DINOISER: Diffused Conditional Sequence Learning by Manipulating Noises 2023 5. Zheng et al. 
Reparameterized Discrete Diffusion for Text Generation 2023 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our paper. In response to your concerns, we give the following explanations. **Q1: The paper is hard to follow, and the writing should be improved.** A1: Our approach is designed to apply the inherent sequential features of natural language to diffusion language models. AR-Diffusion ensures that the generation of tokens on the right depends on the tokens generated on the left, by employing a dynamic number of denoising steps that vary according to the token position. We will polish the writing in the next version and open-source our code so that other researchers can reproduce it. **Q2: I wonder if the skipping mechanism can also be applied to other diffusion models.** A2: Yes. Our skipping mechanism can be seamlessly applied to other diffusion models. As depicted in Figure 2(c), we also apply the skipping mechanism to GENIE, which yields superior results compared to DDIM + GENIE. **Q3: In Algo2 line 6, where does n come from? Do you enumerate n?** A3: $n$ in Algo2 line 6 indicates the n-th token, which is explained in Line 100, Page 3. ($n$ is in $\\{1,...,N\\}$, where $N$ is the target sentence length.) In the process, we assign different token-level timesteps to each token according to the sentence-level timestep and its position. **Q4: In Diffusion-LM, there is a rounding operation as in Algo 2 line 9 at each diffusion step. It seems AR-Diffusion does not require such a rounding operation at each step. If true, would DPM-Solver help accelerate the diffusion process?** A4: Since we follow Diffusion-LM and GENIE, we also implement the rounding operation at each step, i.e., the map-to-nearest operation in Algo2 line 9. Although we would very much like to use DPM-Solver to speed up the diffusion process, it does not seem to be directly adaptable to our setting. 
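As an illustration of the position-dependent timestep assignment described in A3, the sketch below uses a hypothetical linear schedule. The function name, the `slope` parameter, and the clamping to [0, T] are assumptions made for illustration only; the paper's actual movement-speed function may differ.

```python
def token_timestep(t_sent, n, T, slope=1.0):
    # Hypothetical assignment of a token-level timestep from the
    # sentence-level timestep t_sent and the 1-indexed token position n:
    # right-side tokens (larger n) get larger timesteps, i.e. they remain
    # noisier and are denoised later than left-side tokens.
    t = t_sent + slope * (n - 1)
    return int(max(0, min(T, t)))

# At the same sentence-level step, the leftmost token is closest to being
# fully denoised while the rightmost still carries the most noise.
left = token_timestep(10, n=1, T=100)    # leftmost token
right = token_timestep(10, n=20, T=100)  # rightmost of a 20-token sentence
```

The point of the sketch is only the monotonicity: at any sentence-level step, tokens further to the right sit at larger (noisier) timesteps, which is what induces the left-to-right generation behavior.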
**Q5: GENIE is the main (almost the only) diffusion model baseline in the experiments, but some related baselines are missing, such as DiffuSeq [1], CDCD [2], Difformer [3], DINOISER [4] and RDM [5]. It seems the authors already noticed some of them in the related work section, and I think it would be helpful to include more complete baselines.** A5: Thank you for your valuable suggestions. 1. In Appendix Table 8, we compared the results on the IWSLT14 dataset with those in the DINOISER and Diffusion-LM papers. 2. For the latest baselines you mentioned, CDCD, Difformer and DINOISER were not open-sourced before the NeurIPS submission, so we could not reproduce them accurately. Recently, DINOISER and DiffuSeq have released their code, and we also ran their code on summarization, namely CNN/Daily Mail and XSum. You can refer to A1 in the [Author Rebuttal by Authors](https://openreview.net/forum?id=0EG6qUQ4xE&noteId=vkeE6NZjqk) for more details. We will add all the comparisons in the next version. 3. Only GENIE has reported experiments on the CNN/DM and XSum datasets and released its code, so we compare with GENIE. We hope our answers have resolved your concerns. If you have any other concerns, please feel free to let us know. Thanks again for your review. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' updates. The results look promising. I will raise my score. --- Reply to Comment 1.1.1: Comment: We want to express our sincere gratitude for your thorough review of our paper. Your deep expertise has truly enhanced the quality of our work, and we are committed to incorporating your suggestions as we revise. Thank you once again for recognizing our efforts.
Summary: This paper presents AR-DIFFUSION, a diffusion model that displays auto-regression-like generation behavior. The primary contributions of this work can be summarized as follows: 1) A multi-level diffusion strategy is proposed, encompassing both sentence-level and token-level diffusion. 2) A skipping mechanism is introduced, which works in tandem with the multi-level diffusion strategy to expedite the process. 3) The superiority of the model over existing diffusion language models is verified in terms of text generation tasks and inference efficiency. Strengths: The author presents an approach for integrating an auto-regressive-like mechanism into the diffusion model for text generation and has conducted comprehensive experiments to validate the efficacy of the proposed method. The idea of incorporating autoregressive dependency into the diffusion model is captivating. Weaknesses: I find the experiments in this paper insufficiently convincing. My primary concern is that the main gains appear to be derived from MBR. With the skipping mechanism, the per-sentence generation steps are reduced from 2000 to 20. This makes it possible to use K=500 for MBR, which is really large, as MBR is N^2 in computation. The NFE metric is somewhat misleading, as it only considers the number of model forwards and does not account for the MBR process. It would be more appropriate to report the runtime speed of all methods for a fair comparison. The comparison with baselines is not exhaustive. To substantiate the claim that their method benefits from introducing auto-regressive dependency back into text diffusion models, the authors should compare additional diffusion baselines, such as DiffuSeq and SeqDiffuSeq, with the same diffusion steps and candidates used in MBR, rather than just GENIE. Moreover, GENIE itself reported a ROUGE-L score of 41.2 on CNN/Daily Mail, while the table in this paper shows only 32.1. This discrepancy should be clarified in the main text. 
If the lower score is due to a smaller parameter size, why not increase it to match GENIE's level? If the proposed method is effective, it could potentially be comparable to models like BART. As it stands, the parameter size of AR-Diffusion is too small to demonstrate its superiority. Furthermore, the comparison with NAR methods is not solid enough. Baselines such as CMLM and LevT were proposed three years ago. More recent methods like DA-Transformer should be included for a more comprehensive comparison. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please answer the questions mentioned in the Weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our paper, and we will explain your concerns in detail below. **Q1: My primary concern is that the main gains appear to be derived from MBR. With the skipping mechanism, the per-sentence generation step is reduced from 2000 to 20. This gives you chances to use K=500 for MBR, which is really large, as MBR is N^2 in computation. The NFE metric is somewhat misleading, as it only considers the number of model forwards and does not account for the MBR process. It would be more appropriate to report the runtime speed of all methods for a fair comparison.** A1: Please let us address your primary concern by breaking it down into two parts. 1. NFE is a common way to compare inference speed within diffusion-based language models. 1. When generating the same number of candidate samples (k), the time taken to calculate MBR between different diffusion models is considered identical. The most critical factor at this point is the number of function evaluations (NFE), i.e., the number of model forwards, enabling a relatively fair theoretical comparison among different models. 2. In the following table, it can be observed that the time taken to calculate MBR (\~0.1s) is 20 times shorter than the time taken for NFE (\~2s) when generating k≤50 samples. Therefore, in this situation, we consider it to be negligible. 3. Table 5 presents a comparison of the inference efficiency between AR-Diffusion and GENIE, using the same number of generated candidate samples (k). In Table 3, our model with only 20 inference steps (the 5th line) outperforms SeqDiffuSeq with 2000 steps (the 4th line) when generating 1 candidate sample. Thus, we claim that our model achieves a speed improvement of 100 times compared to SeqDiffuSeq in machine translation and is 600 times faster than GENIE. 4. 
We present k=500 to illustrate that if resources and time are sufficient, the performance can still be gradually improved as the number of generated candidate samples increases. 2. We report the runtime speed of GENIE and AR-Diffusion in the following table. 1. To avoid randomness, we select 50 samples from the CNNDM dataset, and each sample generates K candidates. We then divide the total sampling time by the number of samples (50) to get the per-sample time for the model forwards (NFE), and then calculate the time required to pick the best candidate by MBR. All experiments use one A100-40G GPU and 50 CPUs. 2. From the table, we can observe that when k≤50, the MBR time is very small, which is negligible compared to NFE. So in this case, we can measure the decoding speed of the model through the NFE indicator. In addition, when k=500, step=20, although the time has increased, AR-Diffusion is still nearly twice as fast as GENIE (k=10, step=2000). In particular, the number of function evaluations (NFE) is actually the number of forward passes of the model. | |GENIE|AR-Diffusion|AR-Diffusion|AR-Diffusion|AR-Diffusion| |:----|:----|:----|:----|:----|:----| |K|10|10|10|50|500| |Steps|2000|3|20|20|20| |Speed of Model Forward (NFE) (s/it)|47.54s/it|0.25s/it|0.61s/it|2.12s/it|21.03s/it| |Speed of MBR (s/it)|0.02s/it|0.02s/it|0.02s/it|0.08s/it|7.14s/it| |Total Speed (s/it)|**47.56s/it**|**0.27s/it**|**0.63s/it**|**2.20s/it**|**28.17s/it**| |ROUGE-2 in XSum|8.78|8.68|9.32|10.1|10.6| **Q2: The comparison with baselines is not exhaustive. To substantiate the claim that their method benefits from introducing auto-regressive dependency back into text diffusion models, the authors should compare additional diffusion baselines, such as DiffuSeq and SeqDiffuSeq, with the same diffusion steps and candidates used in MBR, rather than just GENIE.** A2: Please check A1 in the [Author Rebuttal by Authors](https://openreview.net/forum?id=0EG6qUQ4xE&noteId=vkeE6NZjqk). 
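For context, the minimum Bayes risk (MBR) selection timed above can be sketched generically as below. The `similarity` callback is a placeholder for a metric such as ROUGE or BLEU; this is an illustration of the quadratic cost in the candidate count, not the paper's exact implementation.

```python
def mbr_select(candidates, similarity):
    # Minimum Bayes Risk decoding: return the candidate with the highest
    # total similarity to all other candidates. For k candidates this takes
    # k * (k - 1) similarity evaluations, which is why MBR time becomes
    # noticeable only at large k (e.g. k = 500).
    best, best_score = None, float("-inf")
    for i, cand in enumerate(candidates):
        score = sum(similarity(cand, other)
                    for j, other in enumerate(candidates) if j != i)
        if score > best_score:
            best, best_score = cand, score
    return best

# Toy usage with numbers and negative distance as the "similarity":
# the most central candidate wins.
picked = mbr_select([1, 2, 9], lambda a, b: -abs(a - b))
```

With k=500 the inner loop performs roughly 500 × 499 similarity evaluations per sample, consistent with the timing table showing MBR cost growing from negligible at k≤50 to several seconds at k=500.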
**Q3: GENIE itself reported a ROUGE-L score of 41.2 on CNN/Daily Mail, while the table in this paper shows only 32.1. This discrepancy should be clarified in the main text. If the lower score is due to a smaller parameter size, why not increase it to match GENIE's level? If the proposed method is effective, it could potentially be comparable to models like BART. As it stands, the parameter size of AR-Diffusion is too small to demonstrate its superiority.** A3: In line 142 of Section 4.2, we mentioned that GENIE selects the best sample by scoring each generated sample against the ground truth and taking the maximum, leading to an unfair comparison. To ensure a fair comparison with our method, we re-implement GENIE and use the MBR method to select the best sample. Since our model has not undergone pre-training, a fair comparison with the pre-trained BART is currently challenging. Nevertheless, we are currently developing a pre-trained model and intend to conduct a comprehensive comparison with BART in the next version. **Q4: Furthermore, the comparison with NAR methods is not solid enough. Baselines such as CMLM and LevT were proposed three years ago. More recent methods like DA-Transformer should be included for a more comprehensive comparison.** A4: Thanks for your suggestion. We will supplement these experiments in the next version. We hope our answers have resolved your concerns. If you have any other concerns, please feel free to let us know. Thanks again for your review. --- Rebuttal Comment 1.1: Comment: Thanks for your response. After reading it, I choose to keep my score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your thorough review of our paper. Your feedback suggests a preference for maintaining your perspective. It's possible that our initial response did not fully address your concerns. Therefore, if there are any remaining points we haven't yet covered, we kindly request your further insights. 
Your valuable input will undoubtedly assist us in enhancing our work. Finally, thank you again for your dedicated efforts in this review process.
Summary: This paper introduces a diffusion method optimized for the autoregressive text generation scheme. They employ different movement speeds for denoising with respect to the token positions. Specifically, they apply a lower movement speed to right-side tokens to guide models to reflect information in left-side tokens. Based on the dynamic movement speed method, they also propose a skipping mechanism during inference for efficient decoding. Experimental results show that the proposed method outperforms the previous diffusion-based approach at the same NFE, and the average performance drop is much lower in an extremely limited number of inference steps. In ablation experiments, they show that both AR-diffusion and the skipping mechanism are effective and the skipping mechanism can be applied to the other diffusion-based model. Strengths: - The methodology is highly intuitive and well-motivated. - The proposed method is simple while mathematically supported and powerful. - They conduct experiments in various text-generation tasks to show not only consistent performance but also the efficiency of their method. - The skipping mechanism can be effectively applied to the other diffusion model. Weaknesses: - Additional case studies comparing with GENIE or (N)AR models would provide further insights. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Have you compared the decoding speed between AR-Diffusion with an inference step of 2 or 3 and NAR models? - Would AR-Diffusion also be robust to infilling tasks? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Since the model configuration of AR-Diffusion is based on Transformer-base in the paper, it would be possible to conduct a scalability study for various sizes in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your valuable suggestions, and we will reply to your questions one by one below. **Q1: Additional case studies comparing with GENIE or (N)AR models would provide further insights.** A1: The following two tables are the results generated by GENIE and AR-Diffusion for the same case. It can be seen that AR-Diffusion has a clear tendency to generate from left to right, while GENIE is generated irregularly. |GENIE Case| |:----| |[unused487] ع in [unused673] [unused285] response constituted ##司 ##iaceae [unused744] ##ː hart ##elial - annapolis yep trent in [unused302] support | |stability hit 博 tyne the helping embassy unbeaten former knesset australian and ##play [unused99] interacting have short sickness the struggle of one by syria .| |leave only withdrawn from the uk embassy of the built australia and ##play [unused99] benton in hour controversial the killing of the uk russia .| |from only withdrawn from midfielder uk embassy of reasonable built australia and with who were aground active controversial the killing of the in syria .| |britain has withdrawn from the uk embassy in london , australia and israel who were among in for the killing of the in countries .| |AR-Diffusion Case| |:----| |co₂ stomped out [unused673] ##δ did boxed ##ɣ ##iaceae kannada cheap hart avoided [unused285] [unused654] ##® 崎 in| |britain has withdrawn from midfielder uk of reasonable built trent ##play [unused99] benton aground pondered tightening| |britain has withdrawn from midfielder uk embassy in london and australia with blah tramway lowlands parana 忄##orescence [SEP] ##?| |britain has withdrawn from the uk embassy in london and australia after israel who were have intercontinental ی [unused174] £5 戸| |britain has withdrawn from the uk embassy in london and australia after israel who were in dubai for the killing of the country .| **Q2: Have you compared the decoding speed between AR-Diffusion with an inference step of 2 or 3 and NAR 
models?** A2: The different frameworks of AR-Diffusion and NAR models hinder a fair comparison. However, in theory, due to the parallel decoding mechanism of both NAR models and AR-Diffusion, their decoding speed mainly depends on the number of function evaluations (NFE), in other words, the number of model forward computations. Thus, if a NAR model uses the same number of inference steps, or decoding iterations in NAR research [1], then the decoding speeds are comparable. If the NAR model requires many steps, then AR-Diffusion is faster. [1] A Survey on Non-Autoregressive Generation for Neural Machine Translation and Beyond. **Q3: Would AR-Diffusion also be robust to infilling tasks?** A3: We are now trying to pretrain AR-Diffusion for text generation. One of the pre-training tasks we chose is infilling, as in T5 and UL2. From the observed loss curve, the loss is indeed gradually decreasing, indicating that AR-Diffusion can also be applied to infilling tasks. We are currently conducting the experiment and will release these results in the next version. **Q4: Since the model configuration of AR-Diffusion is based on Transformer-base in the paper, it would be possible to conduct a scalability study for various sizes in future work.** A4: We will conduct scalability research on various scales of AR-Diffusion in the next version. We hope our answers have resolved your concerns. If you have any other concerns, please feel free to let us know. Thanks again for your review. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications and additional results. I look forward to the next version of the paper. I have read the rebuttal and will keep the score. --- Reply to Comment 1.1.1: Comment: Thanks for your feedback and for taking the time to review our responses! We'll be happy to address any remaining questions or concerns. Moreover, we will incorporate your suggestions into our next version.
Summary: This work introduces left-to-right sequential characteristics into diffusion models, enhancing the text generation performance of diffusion models. By considering the AR model as a diffusion model with two states, to be decoded and already decoded, AR-Diffusion defines a continuous diffusion model with decreasing diffusion speeds from left to right. Experiments on various text generation tasks show that AR-Diffusion achieves improvements over existing diffusion models. Strengths: 1. The idea of introducing the left-to-right inductive bias into diffusion models for text generation is straightforward and reasonable. 2. By controlling the number of inference steps and generation candidates, AR-Diffusion can achieve a tradeoff between quality and efficiency, which is more flexible than the Transformer. 3. Compared with the autoregressive model (BART), the generation of AR-Diffusion is more diverse. Weaknesses: 1. I think the authors overclaim the decoding speedup. First of all, most diffusion baseline models in the paper have no advantage in either generation quality or efficiency compared with the Transformer. Thus, AR-Diffusion should be compared with the Transformer for decoding efficiency. Besides, existing diffusion models can achieve competitive results with much fewer steps. For example, Difformer [1] and Dinoiser [2] can achieve competitive scores with 20 steps, and Diff-GLAT [3] can even generate high-quality sequences with only 3 steps. Therefore, I think more comprehensive experiments should be conducted to claim a decoding speedup. 2. Although AR-Diffusion achieves better BLEU scores than the Transformer in Table 3, the results in Table 8 of Appendix C show that the Transformer is still better than AR-Diffusion. Why are the results in the two tables contradictory? As SacreBLEU is a more standard metric for machine translation [4], do the results indicate that AR-Diffusion still lags behind the Transformer by a certain gap? 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Which decoding method does the Transformer use, greedy search or beam search? If beam search is used, what is the beam search size of each reported Transformer result? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: AR-Diffusion requires a large number of candidates to achieve better results. Although generating hundreds of samples has large generation overhead, AR-Diffusion achieves promising results with MBR decoding. I think the large number of candidates is not a serious issue in the current stage. Reference [1] Gao, Z., Guo, J., Tan, X., Zhu, Y., Zhang, F., Bian, J., & Xu, L. (2022). Difformer: Empowering diffusion model on embedding space for text generation. arXiv preprint arXiv:2212.09412. [2] Ye, J., Zheng, Z., Bao, Y., Qian, L., & Wang, M. (2023). Dinoiser: Diffused conditional sequence learning by manipulating noises. arXiv preprint arXiv:2302.10025. [3] Qian, L., Wang, M., Liu, Y., & Zhou, H. (2022). Diff-glat: Diffusion glancing transformer for parallel sequence to sequence learning. arXiv preprint arXiv:2212.10240. [4] Post, M. (2018, October). A Call for Clarity in Reporting BLEU Scores. In Proceedings of the Third Conference on Machine Translation: Research Papers (pp. 186-191). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful review; we elaborate on each of your concerns below. **Q1: I think the authors overclaim the decoding speedup. First of all, most diffusion baseline models in the paper have no advantage in both generation quality and efficiency compared with the Transformer. Thus, AR-Diffusion should be compared with the Transformer for decoding efficiency. Besides, existing diffusion models can achieve competitive results with much fewer steps. For example, Difformer[1] and Dinoiser[2] can achieve competitive scores with 20 steps, and Diff-GLAT[3] can even generate high quality sequences with only 3 steps. Therefore, I think more comprehensive experiments should be conducted to claim decoding speedup.** A1: 1. We claim in line 58 that AR-Diffusion is 100× faster than SeqDiffuSeq in machine translation and 600× faster than GENIE; this can be verified by the NFE in Figure 3 (AR-Diffusion and SeqDiffuSeq) and Figure 5 (AR-Diffusion and GENIE). 2. To compare the decoding efficiency of AR-Diffusion and AR (Transformer), we conduct additional experiments in which they share the same model architecture and size. For the comparison, we randomly select 50 samples from the CNN/Daily Mail test set. The beam size of AR (Transformer) is set to 5, which achieves the best performance. We also generate 50 candidate samples using AR-Diffusion with 20 inference steps. Additionally, the computation of MBR is included in the time cost of AR-Diffusion. We use 1 A100-40G GPU and 50 CPUs for the experiment. The running time is averaged over cases, and we report the time in seconds per case (s/it) for comparison. The results show that AR takes 5.30 s, whereas AR-Diffusion takes only 2.18 s. Therefore, it is evident that AR-Diffusion is faster than AR. 3.
We run DINOISER on CNN/Daily Mail and XSum; please see A1 in [Author Rebuttal by Authors](https://openreview.net/forum?id=0EG6qUQ4xE&noteId=vkeE6NZjqk) for more details. When generating 50 candidate samples, the performance of DINOISER is worse than that of AR-Diffusion, even worse than AR-Diffusion with 2 steps and 10 candidate samples (Table 5). As for Difformer, its code has not been released. 4. Diff-GLAT is not a continuous-diffusion-based language model. The main contribution of Diff-GLAT is a residual glancing strategy for NAR, and it is common for NAR models to generate sentences in 3 steps. **Q2: Although AR-Diffusion achieves better BLEU scores than the Transformer in Table 3, the results in Table 8 of Appendix C show that the Transformer is still better than AR-Diffusion. Why are the results in the two tables contradictory? As SacreBLEU is a more standard metric for machine translation[4], do the results indicate that AR-Diffusion still lags behind the Transformer by a certain gap?** A2: While our model's SacreBLEU score on IWSLT14 is not as strong as AR's, AR-Diffusion outperforms AR on various other datasets, particularly on summarization tasks such as CNN/Daily Mail and XSum. Additionally, AR-Diffusion demonstrates notably higher diversity than the auto-regressive model, as indicated by Table 6. It is foreseeable that researchers may devise improved selection strategies beyond MBR, leading to much better samples selected from these diverse candidates. **Q3: Which decoding method does the Transformer use, greedy search or beam search? If beam search is used, what is the beam search size of each reported Transformer result?** A3: The AR results in the paper use beam search with a beam size of 5. We hope our answers have resolved your concerns. If you have any other concerns, please feel free to let us know. Thanks again for your review.
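For reference, MBR selection over candidate samples, as used throughout this rebuttal, can be sketched as follows (a hedged toy sketch: the function names and the token-overlap similarity are illustrative assumptions; a real system would use a BLEU/ROUGE-style utility and is not the authors' implementation):

```python
def mbr_select(candidates, similarity):
    """Minimum Bayes Risk selection: return the candidate with the highest
    total similarity to all other candidates (the 'consensus' sample)."""
    best, best_score = None, float("-inf")
    for i, c in enumerate(candidates):
        score = sum(similarity(c, o) for j, o in enumerate(candidates) if j != i)
        if score > best_score:
            best, best_score = c, score
    return best

def token_overlap(a, b):
    # Toy utility: Jaccard overlap of token sets (stand-in for BLEU/ROUGE).
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)

# The candidate closest to the others wins, not necessarily the first one.
choice = mbr_select(["the cat sat", "the cat sat down", "a dog"], token_overlap)
```

Note that MBR cost grows quadratically in the number of candidates k, which is why it is counted in the timing comparison above.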
--- Rebuttal Comment 1.1: Comment: Thanks for your response. After reading the reviews and responses, I have decided to maintain my score. From my perspective, the remaining concerns lie in the performance on machine translation benchmarks and the absence of comparisons with stronger NAR models. --- Reply to Comment 1.1.1: Comment: Thank you very much for your patient review. Regarding the two concerns you raised in your response, we provide detailed explanations below. **Q1: The absence of comparisons with stronger NAR models.** A1: On the one hand, as far as we know, NAR models are still slightly behind AR models on most NLG tasks in terms of performance. Therefore, we primarily selected AR for comparison. On the other hand, certain NAR models like BANG[1] and MIST[2] are pre-trained, making a fair and direct comparison infeasible. Similarly, some NAR models such as SUNDAE[3] and INSNET[4] have not provided results on datasets like XSum or IWSLT14. The most recent DA-Transformer[5,6] achieved results comparable to AR, but those results were obtained by ensembling the best five checkpoints and using a large beam size of 200. They did not provide results without ensembling, making it difficult for us to compare. Additionally, NAR models like latent-GLAT[7] and CMLMC[8] reported BLEU scores for the IWSLT14 De->En dataset in their papers, as indicated in the following table.

|Pattern|Model|IWSLT14 De->En|
|:----|:----|:----|
|AR|Transformer|34.74|
|NAR|GLAT[2021]|29.07|
| |CNAT[2021]|29.81|
| |CMLM[2021]|31.80|
| |latent-GLAT[2022]|32.31|
| |CMLMC[2022]|34.81|
|Diffusion|AR-DIFFUSION ($k$ = 50)|34.95|
| |AR-DIFFUSION ($k$ = 500)|35.62|

As seen from the table above, our method outperforms all the NAR models listed at $k$ = 50, and its performance is even stronger at $k$ = 500. Furthermore, Tables 1, 3, and 4 in our paper present results across various NAR models (such as CMLM, LevT, CNAT, ConstLeven) for reference.
Nevertheless, we deeply value your suggestions. We are currently engaged in pre-training an AR-Diffusion model. Consequently, in our upcoming version, we intend to implement your recommendations and incorporate comparisons with stronger NAR models. **Q2: The performance on machine translation benchmarks.** A2: As addressed in Q2 of our rebuttal, the SacreBLEU metric on the translation dataset is indeed slightly lower than AR's. However, across other metrics and datasets, we have achieved results comparable to AR. Overall, the performance is on par with AR. Once again, we truly appreciate your diligent efforts. We hope our response addresses your concerns. Furthermore, we will incorporate all the suggestions you mentioned into the appendix and related work. If you have any further questions, please feel free to reach out to us at your convenience. [1] Qi W, Gong Y, Jiao J, et al. Bang: Bridging autoregressive and non-autoregressive generation with large scale pretraining[C]//International Conference on Machine Learning. PMLR, 2021: 8630-8639. [2] Jiang T, Huang S, Zhang Z, et al. Improving non-autoregressive generation with mixup training[J]. arXiv preprint arXiv:2110.11115, 2021. [3] Savinov N, Chung J, Binkowski M, et al. Step-unrolled Denoising Autoencoders for Text Generation[C]//International Conference on Learning Representations. 2021. [4] Lu S, Meng T, Peng N. Insnet: An efficient, flexible, and performant insertion-based text generation model[J]. Advances in Neural Information Processing Systems, 2022, 35: 7011-7023. [5] Huang F, Ke P, Huang M. Directed Acyclic Transformer Pre-training for High-quality Non-autoregressive Text Generation[J]. arXiv preprint arXiv:2304.11791, 2023. [6] Huang F, Zhou H, Liu Y, et al. Directed acyclic transformer for non-autoregressive machine translation[C]//International Conference on Machine Learning. PMLR, 2022: 9410-9428. [7] Bao Y, Zhou H, Huang S, et al.
latent-GLAT: Glancing at latent variables for parallel text generation[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022: 8398-8409. [8] Huang X S, Perez F, Volkovs M. Improving non-autoregressive translation models without distillation[C]//International Conference on Learning Representations. 2021.
Rebuttal 1: Rebuttal: **Q1: Compare with more diffusion language models.** A1: We have compared AR-Diffusion with SeqDiffuSeq in Table 3, and with DINOISER and Diffusion-LM in Appendix Table 8. Furthermore, we enrich the comparison with more baselines in the following table. Due to the unavailability of source code for CDCD and Difformer, we choose to perform experiments with DiffuSeq and DINOISER on the CNN/Daily Mail and XSum datasets. 1. For DINOISER, we employ the same architecture as iwslt_base_postnorm in the DINOISER code and adhere to the same hyperparameters used for IWSLT. Training is performed with bf16. For CNN/Daily Mail, DINOISER is trained for a total of 65 hours spanning 200 epochs; for XSum, the training duration is 37 hours for 200 epochs. During inference, we follow the procedure described in the DINOISER paper: we generate 50 candidate samples and select the best one using MBR. 2. For DiffuSeq, training takes approximately 33 hours with fp32, which equates to roughly 22 epochs. Furthermore, because DiffuSeq's generation process involves 2000 steps, generating one candidate takes approximately 7.5 hours. Consequently, we restrict the generation to only 5 candidate samples. To ensure a fair comparison, we also include AR-Diffusion with k=5 in the results. 3. All experiments were executed on 8 A100-40G GPUs. 4. The table shows that AR-Diffusion outperforms the alternative diffusion language models, whether at k=5 or k=50. This observation substantiates the effectiveness of our proposed methodology.
|Model|Step|CNN/Daily Mail ROUGE-1|ROUGE-2|ROUGE-L|XSUM ROUGE-1|ROUGE-2|ROUGE-L|
|:----|:----|:----|:----|:----|:----|:----|
|DiffuSeq (k=5)|2000|18.1|3.1|16.1|25.5|5.3|19.6|
|AR-Diffusion (k=5)|20|**23.8**|**10.3**|**22.1**|**30.5**|**8.9**|**23.5**|
|GENIE (k=50)|20|29.3|8.3|21.9|34.4|12.8|32.1|
|DINOISER (k=50)|20|24.5|7.1|19.2|35.7|13.4|33.2|
|AR-Diffusion (k=50)|20|**31.7**|**10.1**|**24.7**|**39.6**|**16.3**|**37.1**|
NeurIPS_2023_submissions_huggingface
2023
Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent Representations
Accept (poster)
Summary: The paper presents a new framework for uncertainty estimation in Bayesian neural networks. The core contribution is formulating uncertainty estimation as a multiple high-dimensional hypothesis testing problem and deriving the necessary test statistics. The paper then presents multiple empirical results, showing good performance on well-established benchmarks and an excellent ablation study of relevant hyperparameters of the proposed method. Strengths: Originality: The core idea of formulating uncertainty estimation as hypothesis testing seems novel. The paper essentially combines already established methods, BNNs and hypothesis testing (both of which are well known), into a new, exciting direction for this line of work. Quality: The paper is of high quality, both in outlining the method and in its experimental section. For example, it is nice to see standard deviations added for nearly all results and many relevant baselines to compare against. The inclusion of the DRD dataset, to test the method on a more "real-life" dataset, is also a welcome addition and really shows that the method does not only work on benchmark datasets. Clarity: The paper is well written and easy to follow. Significance: The paper does open a new sub-direction within uncertainty estimation, regarding the use of hypothesis testing. Only time will tell how influential this will be. I am not truly convinced that the ARHT testing size is the correct way to go, as it still involves an assumption of sub-Gaussianity on its input. The core strength of the paper is Section 5 regarding the ablation studies. The framework presented in the paper includes a lot of "moving parts", and therefore, without a thorough investigation of this many-layered method, it would be impossible to say which parts had a significant impact on the performance of the framework. The ablation study does a good job highlighting how each component feeds into the framework.
Also, thanks to the authors for providing anonymised code, which helped in understanding parts of the paper. Weaknesses: In L56-58 the authors claim that "We formulate uncertainty estimation as a multiple high-dimensional hypothesis testing problem, and propose a Bayesian deep learning module to address better the aleatoric and epistemic uncertainties when learning feature distributions.". While I agree with the first part of the sentence, I would say the second part is not a contribution, as to my knowledge the BNN framework used in the paper is not novel, which there is nothing wrong with, but it should not be listed as a contribution. In that regard, I think the authors could make it more explicit that the first part of their framework is not unique in any sense, and therefore could be any BNN. Regarding results, similarly, the authors should be more honest about their results. L206-206 state that their method is "superior"; however, if accounting for standard deviations, the results only seem significant in a single case. Thus overall, the authors should adjust the claims in their paper to better reflect the work presented. A small correction/question: From L130-139, the authors provide a number of equations, where the choice of parentheses seems very arbitrary. Sometimes {} is used, sometimes [] is used, sometimes () is used. It is confusing. Additionally, in L139 there seems to be a missing "/" between what I assume is the numerator and denominator of a fraction. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * Have the authors any thoughts on the influence of n_1, e.g. the size of the training set? For all datasets tested in the paper this is fixed, but the number of samples in the training distribution compared to the test distribution seems to be important for this method to work. * Regarding Figure 5: Have the authors tried this kind of ablation study on other dataset combinations to see if they observe a similar pattern?
MNIST-FMNIST is one of the easier cases for OOD, and I wonder if this has something to do with the observed robustness of the method. * Regarding Figure 5: The results from varying n_2 are strange to me. I would assume that as we sample more test datapoints we become more certain about the test distribution, making it easier to distinguish the test distribution; however, this does not seem to be the case. Do the authors have an explanation for this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: As the authors state in L300-302, BNNs are still hard to scale, which seems to be the main limitation of this line of work. That said, assuming that this gets solved in the future, the method seems very easy to apply. Another limitation of the method seems to be that it is limited to OOD, or at least to tasks where distributions are compared. OOD is an important task; however, just estimating the predictive uncertainty of a single sample is similarly important, which the framework does not seem to support. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your positive comments on our manuscript and your insightful summary of the impact and novelty of our work. We take great pleasure in responding to the intriguing discussions you raised, as follows. >**Q1. Improving Clarity of Claims.** Thank you for your suggestions on improving the clarity of several critical claims in our manuscript. For the use of BNNs, we agree that our contribution is to adopt BNNs to generate better latent distributions of samples (in contrast to conventional uncertainty estimation methods assuming parametric distributions), rather than proposing a new BNN learning framework. We will revise this claim and specify that our method adopts existing BNN advances instead of proposing a new BNN framework, for better clarity. Also, we agree that the word "superior" may be inaccurate for describing the experimental results, and thus we will adjust the claims with a more careful choice of words (e.g., removing "superior") in a future revision. >**Q2. Influence of $n_1$.** Thank you for pointing this out; we think this is an interesting discussion on future designs of the sample sizes $n_1$ and $n_2$, and it is also related to your question about the surprising performance pattern when varying $n_2$. The size $n_1$ is controlled by the hyperparameter $s$ in our framework. We have conducted experiments with $s$ ranging from 1 to 5; please see Figure 1 in the rebuttal file for the results. The pattern shows that the performance decreases as $s$ increases (i.e., with more embedding samples from the in-distribution dataset). This demonstrates that the covariance structure affects ARHT more as $s$ increases, and the contribution of the testing embeddings is weighted less, which leads to slightly decreasing performance. >**Q3. Influence of $n_2$.** Figure 2 in the rebuttal file presents the OOD detection results under a more complex setting (CIFAR10 to SVHN). We can observe a similar pattern to the one shown in the manuscript.
This shows that the sample covariance is more strongly influenced by the $n_1$ training/in-distribution samples, making the test statistics reflect the training distribution more (hence the overall consistent pattern). Future work on variance-adjusted test statistics may put more weight on the feature distributions of the testing samples, in which case we may more easily observe improving performance as $n_2$ increases. >**Q4. Support of Single-Sample Uncertainty.** Since the training set (as the in-distribution set) is available for most problems (at least for training or fine-tuning the encoder), one can use samples from the training set together with the testing samples to compute ARHT. One significant exception is the zero-shot case, when we only have the pre-trained encoder without the original data (i.e., no in-distribution sample). In this case, most uncertainty estimation methods may not work, since they require at least some in-distribution data to fit their parametric assumptions (e.g., concentration rates of the Dirichlet distributions in classification problems). However, one may still obtain ARHT as an uncertainty estimate using methods that reconstruct/generate pseudo-training data from the pretrained models, which is not the focus of our work but an interesting future direction. >**Q5. Use of Brackets.** Thank you for the suggestions. We will make the use of brackets consistent in a future revision, given that it is not possible to update the manuscript at this stage. We have double-checked that the expression in L139 is correct (i.e., $\hat{\rho}_2(-\lambda, \gamma)$ is the product of the two expressions $(1 + \gamma \hat{\Theta}_1(\lambda, \gamma))$ and $(p^{-1}{\rm tr} (\boldsymbol{S_n}) - \lambda \hat{\rho}_1(-\lambda, \gamma))$). --- Rebuttal Comment 1.1: Title: Answer to authors Comment: I thank the authors for the response to my questions. I am glad that the authors are willing to change some of the language in their paper to better reflect the paper's contributions.
Additionally, I am happy with the response to my questions regarding some of the hyperparameters of the method. I hope some of this will be included in the camera-ready version or appendix in the future. Based on the response from the authors and the other reviews, I will be keeping my score for now. --- Reply to Comment 1.1.1: Comment: Dear reviewer HkqB, We are glad that you are happy with our response. Yes we will include the discussion and additional results in our future version. Please let us know if there are any further questions on our manuscript or response and we are pleased to answer. Thank you again for your precious time and effort on our manuscript. Best Regards, Authors
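To make the ARHT discussion in this thread concrete, here is a minimal sketch of the underlying ridge-loaded (regularized) Hotelling statistic, $d^\top (S_n + \lambda I_p)^{-1} d$ up to scaling constants (a hedged sketch: the function name, the omitted scaling, and the toy inputs are illustrative assumptions, not the paper's code):

```python
import numpy as np

def regularized_hotelling(x, mean0, S, lam):
    """Ridge-loaded Hotelling statistic: d' (S + lam*I)^{-1} d.

    Loading lam*I keeps the covariance positive definite when S is
    singular, which is the motivation for the regularization. The
    normalization constants of the full ARHT are omitted in this toy.
    """
    p = S.shape[0]
    d = np.asarray(x, dtype=float) - np.asarray(mean0, dtype=float)
    # Solve the linear system instead of forming an explicit inverse.
    return float(d @ np.linalg.solve(S + lam * np.eye(p), d))

# With S = I and lam = 1, (S + lam*I) = 2I, so the value is ||d||^2 / 2.
val = regularized_hotelling([2.0, 0.0], [0.0, 0.0], np.eye(2), 1.0)
```

A larger value indicates the test point is farther, in the regularized Mahalanobis sense, from the in-distribution mean.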
Summary: This paper proposes a framework to detect out-of-distribution (OOD) data via high-dimensional testing on latent representations. The proposed framework consists of: - a Bayesian Neural Network that, for any input, can produce an ensemble of latent representations by sampling the posterior of the weights - a hypothesis testing procedure that computes the adaptable regularized Hotelling's $T^2$ score (ARHT) as a measure of uncertainty, and classifies the input as OOD when the ARHT score is larger than a threshold calibrated by the Benjamini-Hochberg (BH) procedure The paper demonstrates the superior performance of the proposed framework on standard OOD detection benchmarks and a medical image dataset. Strengths: - The idea of using a Bayesian NN to generate an ensemble of latent representations, and then leveraging the ARHT score as a measure of uncertainty, is novel, interesting, and useful for many practical applications - The paper is generally well written and easy to follow. Weaknesses: - A major weakness of this paper is that, for readers who do not have a statistical background, the ARHT score, i.e., equations (1)-(4), appears to come from nowhere. In particular, I think the paper will be much easier to appreciate if comments and explanations could be provided about: - What is the intuitive explanation of the Hotelling's $T^2$ test statistic $T$? Why should we believe it is a good metric for separating in-distribution and OOD samples? - What is the motivation for loading a scaled identity matrix onto $T$? In the related work section (which appears at the end of the paper) you seem to state that this was due to the potential singularity of the matrix $S_n$. I believe you should state this earlier, when you present equations (2) and (3). - Why does the ARHT score $T(\lambda)$ satisfy a Gaussian distribution $\mathcal{N}(0,1)$? This claim seems to come from reference [24]. I believe you need to briefly explain this to the reader.
- Section 3.4 describes the B-H procedure without intuitive explanation or detailed derivation. I think more content is needed to explain this, at least in the supplementary materials. - There are many typos in the manuscript. The authors should carefully review the grammar if the paper is accepted. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - The paper says that $T(\lambda) \sim \mathcal{N}(0,1)$, but looking at Figure 4, the scores do not appear to be Gaussian, and the scale of the scores grows to several thousands. Is my understanding of Fig. 4 wrong, or is there some mistake in plotting the figure? - To compute $RHT(\lambda)$ in equation (3), one needs to invert the matrix $S_n + \lambda I_p$, which has size $p$, the dimension of the latent representation. What if $p$ grows to, say, one million? Would it still be possible to compute the score? Also, the paper does not seem to clearly state the runtime of the proposed algorithm. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your positive and insightful comments on our manuscript. We address your comments as follows. >**Q1. More Explanation of ARHT.** We understand that readers without a solid statistical background may find it difficult to understand the proposed ARHT. We provide more explanations of ARHT as follows and will revise the manuscript for a clearer explanation of the key statistical components for the general audience. * **Why ARHT?** One of our motivations is to formulate uncertainty estimation as a hypothesis testing problem, where we interpret high-dimensional test statistics as **distance measures**. ARHT can be viewed as a distance measure between each testing sample and the in-distribution sample, where a larger ARHT indicates the sample is more likely to be OOD. The detection of OOD is then determined by a threshold (we presented the Benjamini--Hochberg (BH) procedure to compute the optimal threshold). One advantage of ARHT is that it operates directly on the sampling distributions of the latent features, and hence does not require parametric assumptions on the latent features or logits (e.g., a Dirichlet distribution on class probabilities), which makes it a more robust metric for uncertainty estimation. * **Motivation for Loading $\lambda I$.** This is a standard way to ensure the covariance matrix is positive definite (and thus invertible) and hence to improve numerical stability. We will state this earlier in the article to avoid confusion. * **Why does ARHT follow a standard normal?** The major part of Li *et al.* is to prove that $T(\lambda)$ follows the standard Gaussian distribution. The proof is complex and hence is not the focus of this paper.
Intuitively, $T(\lambda)$ can be viewed as a "standardized" version of RHT (a regularized Mahalanobis distance), standardized by its theoretical mean and SD (the derivations of which are also specified in detail by Li *et al.*). * **Intuitive Explanation and Detailed Derivation of the BH Procedure.** We agree that the BH procedure may not be intuitive to the general audience, although it is well known in the statistics community for multiple testing. Intuitively, we consider the OOD detection procedure for each testing image as a single hypothesis testing problem. Then, such a procedure for the whole testing set can be viewed as a multiple-testing problem (i.e., conducting many tests). However, applying a universal threshold for all tests (e.g., $\alpha = 0.05$) is too conservative and leads to many false discoveries. Hence, the BH procedure is applied to assign a threshold adaptively to each sample according to the $p$-values of all tests, such that the false discovery rate (FDR) can be controlled. >**Q2. Plot of $T(\lambda)$.** Yes, this observation is inconsistent with the theoretical property of $T(\lambda)$. In fact, this plot is very similar to the $F$-distribution of the Mahalanobis distance. We believe this results from a partial violation of the assumptions of ARHT (most likely, the covariance scale is heterogeneous between the training and testing distributions, i.e., unequal-variance testing). However, other empirical evaluations show that the violation of the assumption does not significantly affect performance, and we believe this violation only affects the scale of the test statistic. We will further develop a variance-adjusted version of ARHT as an attempt to resolve this issue. >**Q3. Dimension of $p$.** Since we focus on testing on latent representations, it is uncommon for the dimension of the latent representation to be too large (say, one million).
Typical dimensions of embeddings range from 64 to 1024, and we have conducted ablation studies on how this affects performance. Indeed, the cost of inversion increases significantly with the dimension. Therefore, we rely on the assumption that the encoder can generate good feature distributions such that our method is less sensitive to the feature dimension (i.e., a feature dimension of 128 can be sufficient). The runtime of the algorithm is dominated by computing the inverse of the sampled covariance matrix and the multiple forward passes of the Bayesian Neural Network (i.e., obtaining distributions of the latent representation). The inference runtime is around 20 seconds per batch (size: 1024) of samples when $n_2=200$ and $p=64$. >**Q4. Typos in the Manuscript.** Thank you for pointing out the issues. We have conducted a thorough review of the typos and grammar and will update the corrected manuscript at a later stage. --- Rebuttal Comment 1.1: Comment: Thanks for the response, I maintain my original score.
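As a concrete companion to the BH explanation in this rebuttal, here is a minimal pure-Python sketch of the standard Benjamini-Hochberg step-up rule (illustrative only; the function name and loop structure are assumptions, not the authors' implementation):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up: per-test reject/accept decisions that
    control the false discovery rate (FDR) at level alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha.
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k = rank
    # Reject the k hypotheses with the smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k
    return reject

# The effective per-test threshold adapts to the whole set of p-values,
# rather than applying a single universal cutoff.
decisions = benjamini_hochberg([0.001, 0.02, 0.04, 0.5], alpha=0.05)
```

Here a rejected hypothesis corresponds to flagging a test sample as OOD.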
Summary: The paper proposes BNN-ARHT, an uncertainty estimation framework that uses high-dimensional hypothesis testing in the feature space of a network. The key idea is to use ARHT to separate inliers vs. outliers in a feature space, so it is generic and broadly applicable to any kind of task and network type trained with supervised or unsupervised objectives. In particular, the paper relies on a Bayesian encoder, trained with Stochastic Variational Inference (SVI), to compute the ARHT on train and test data. The test statistic is Hotelling's $T^2$, of which the adaptable and regularized version is used. The threshold $\lambda$ is tuned from a pre-defined set for each sample, and for each task separately, which is argued to produce the best results. Strengths: * ARHT seems to be an interesting and novel statistic from the uncertainty estimation point of view. * An appealing aspect of the proposed approach is that it can work in arbitrary feature spaces, which vastly expands its applicability. This is also relatively understudied compared to most approaches that compute uncertainty in the output space. * Predictive performance of the networks tested is preserved, while retaining the benefits of OOD detection. Good ablation studies are performed to understand the characteristics of the method. * The finding that the method can be used with an encoder trained with a supervised or unsupervised objective is very interesting, and raises several intriguing questions. This can be studied in more detail — perhaps a related work is "Predictive Inference with Feature Conformal Prediction" from ICLR 2023. https://arxiv.org/abs/2210.00173 Weaknesses: * The paper only compares with a few uncertainty estimation methods for the OOD detection setup, whereas I think it should compare more comprehensively with the wide range of state-of-the-art techniques that have achieved high performance on all the benchmarks considered here.
* The lack of more realistic OOD benchmarks is a limitation. The experiments are only done on CIFAR10/MNIST with simple networks (ResNet18), whereas it is essential to validate these interesting claims on more realistic benchmarks.
* The generality of the approach needs to be rigorously evaluated — UQ for regression and tasks other than classification can be evaluated and compared against existing methods.
* On UQ -- why is OOD the only application considered? OOD benchmarks are often artificial and can have a lot of artifacts which can aid in detection (see, e.g., Semantically Coherent Out-of-Distribution Detection, ICCV '21). Tasks like Bayesian function optimization can shed more light on the quality of the uncertainties, and on performance in regression.
* Scalability needs to be addressed — a big limitation is the scalability of the proposed approach to bigger and more realistic datasets and networks. Even for the ResNet18 considered here, only a single layer is replaced with its BNN equivalent. It is unclear why the last layer is optimal, or how this choice can be made in other settings. Further, the threshold needs to be identified each time for each task, which is a computational burden that other competing methods do not have.

Technical Quality: 3 good Clarity: 3 good Questions for Authors: My main concerns are the lack of a more comprehensive evaluation for OOD detection, using better baselines, larger-scale benchmarks, and more modern architectures. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for your positive comments on the novelty and the appealing aspects of our method, and thank you for indicating an intriguing future direction for this work using conformal inference. We will address your concerns and questions as follows.

>**Q1. More Comprehensive Evaluation.** Following your comments, we conducted additional experiments to more extensively evaluate our method, including adding more baselines, experiments on more diverse datasets, and experiments with larger architectures.

* **More baselines.** We have added additional uncertainty estimation baselines for comparison: (1) I-EDL [1], which uses the Fisher information matrix to measure the informativeness of evidence carried by each sample; and (2) RKL-PN [2], prior networks trained with the reverse KL divergence. **Table 1 in the rebuttal file** presents the results. We will comprehensively compare these baselines on more datasets in the future.
* **More datasets.** We validated our framework on a more realistic dataset in medical imaging --- the Diabetic Retinopathy Detection (DRD) dataset (see Table 2 in the manuscript for the results) in addition to standard datasets like CIFAR and MNIST. Additionally, we performed uncertainty estimation on a larger and more diverse dataset (as suggested by reviewer vwXa) with TinyImageNet as the OOD dataset. The results show that our model is still robust when generalized to a larger dataset. Please see the **general comments** for the results and a detailed discussion.
* **More tasks.** We chose OOD detection as the benchmark task since it is the most common benchmark for uncertainty estimation. We additionally constructed a regression setting with two multivariate Gaussian distributions with different means and variances representing the different distributions. Most uncertainty estimation frameworks cannot be applied in this setting since they only work in classification settings. The following table presents the results.
The experiment demonstrates that our method also achieves satisfactory performance in the **regression settings**, showing its generalizability to other tasks.

Table: The OOD detection performance (in \%) of our method compared with various competitors, using the ResNet50 architecture. We constructed a simulated regression setting where $\mathcal{N}(\boldsymbol \mu, \boldsymbol \Sigma)$ is the distribution of in-distribution data and $\mathcal{N}(-\boldsymbol \mu, \boldsymbol \Sigma)$ is the distribution of OOD data. The auxiliary regression task is to predict the norm of the sampled vector with a 2-layer MLP. We choose $\boldsymbol \mu = [0.5, 0.5, \ldots, 0.5]^\top$ and $\boldsymbol \Sigma = 9 I$.

| Model | AUROC | AUPR |
| ---- | ---- | ---- |
| MC Dropout | 62.12 | 63.35 |
| Deep Ensembles | 73.18 | 70.45 |
| Kendall and Gal | 67.00 | 70.00 |
| BNN-ARHT (Ours) | **73.52** | **72.99** |

>**Q2. More Modern Architecture \& Scalability Concerns.** Following your suggestion, we additionally tested the performance of our ARHT using ResNet50 with some layers replaced by their Bayesian counterparts. The results show that our method still performs well when scaled to a large model architecture. Note that our method does not necessarily rely on a Bayesian encoder. In fact, a large frequentist network (e.g., ViT) can also generate ideal latent distributions, while our experiments focus on the BNN to emphasize its capability to generate good latent distributions. Detailed results and discussion can be found in the **general comments**.

>**Q3. Replacing Some Layers to Create Deep Bayesian CNNs.** The rationale for replacing some layers (either convolutional or linear) with Bayesian ones is to introduce variance into the parameters.
The empirical performance demonstrates that our method is **less sensitive to which layers are replaced** (we experimented with several combinations of layers), as long as the total variance in the parameters is not too large (such that the BNN does not underfit to a random guess).

>**Q4. Identification of Threshold $\lambda$.** We consider the threshold $\lambda$ a characteristic of our algorithm that makes the uncertainty estimation adaptive. We understand the concern about the computational burden. However, since the computation of the inverse of the covariance matrix dominates the algorithm, the computation of the optimal $\lambda$ is relatively cheap. Moreover, from the ablation study, we observe that the framework is robust to changes in $\lambda$, and thus one can also set a universal $\lambda$ in practice.

>**Q5. Uncertainties with Conformal $p$-Values.** We would like to thank the reviewer for highlighting an interesting direction of uncertainty estimation with conformal inference. Currently, our setting assumes the samples drawn from the latent distribution are IID (also an assumption of ARHT). While this may be true for the in-distribution data, it is likely that the IID assumption is violated for testing samples (as latent features are drawn from a single testing image, which leads to highly correlated features). This is also a possible reason why Figure 4 is not strictly the standard normal (see the discussion with reviewer VpvZ). With conformal inference relaxing the IID assumption to exchangeability, we may be able to develop more robust $p$-values as uncertainty estimation measures in future work (a related work is "Testing for Outliers with Conformal $p$-values", S. Bates *et al.*, 2021).

[1] Deng *et al.* Uncertainty estimation by Fisher information-based evidential deep learning. ICML 2023
[2] Malinin and Gales. Reverse KL-divergence training of prior networks: Improved uncertainty and adversarial robustness.
NeurIPS 2019

---

Rebuttal Comment 1.1:
Title: response
Comment: Dear authors, thank you for the rebuttal. This answers some of my questions, but many still remain. First, I still think scaling is a big weakness of the proposed approach. While it is encouraging to see it being scaled to ResNet-50, I find the performance quite surprising. Weaker models on CIFAR-10/SVHN have performed far superior to the numbers reported here -- can the authors explain why? For example, Deep Ensembles has an AUROC of only 65 here, whereas previously published work has shown even a 3-model ensemble with ResNet-18 to have an AUROC of 90+ (see Table 3, http://proceedings.mlr.press/v119/van-amersfoort20a/van-amersfoort20a.pdf). Is this because of the scoring function used? Even a simple MSP with ResNet-18 should give high AUROC on CIFAR-10/SVHN, which makes the justification of the proposed approach weaker. Next, by scalability of datasets, I meant the in-distribution datasets (such as ImageNet) and not necessarily OOD sets like TinyImageNet -- though this is still good evidence to have in support of your technique. Next, by regression tasks, I was primarily talking about uncertainty in regression settings, which is much broader (such as function optimization), but it is encouraging to see the synthetic experiment nevertheless.

---

Reply to Comment 1.1.1:
Comment: Thank you for your prompt reply. We will address your concerns as follows.

> **Q1. Better Performance Observed for Weaker Models.** We are also aware of this phenomenon in our experiments. There may be several reasons. First, CIFAR-10 is a relatively small dataset (i.e., the training set is small compared to its complexity, as also mentioned in [2]), so it is not easy to optimize large model architectures with such a small dataset. Second, a deeper architecture may not always generate a good latent distribution.
Although the mean generated is close to the true mean (with a deeper architecture), the covariance structure of the embedding may not be well approximated. Hence, ARHT may obtain lower OOD detection performance even with a very good frequentist encoder. We have discussed this (i.e., the difference between frequentist and BNN encoders) in detail in the discussion section of the manuscript. Moreover, we performed additional experiments using TinyImageNet as the training/in-distribution dataset and CIFAR-10 as the OOD dataset to show a case where a larger architecture has an advantage. We observe that a larger architecture (i.e., ResNet50) obtains better performance in this case. Furthermore, since ARHT is determined by many factors, one may not always observe monotonically improving performance with larger architectures. We have conducted a thorough **ablation study in the manuscript and in the discussion with reviewer HkqB** on the material factors affecting the performance. We will perform a more thorough analysis of the effect of deeper architectures in future work, beyond the analysis in **Table 1**.

Table 1: The OOD detection performance (in \%) of our method using different architectures. We use TinyImageNet as the in-distribution dataset and CIFAR-10 as the OOD dataset.

| Model | AUROC | AUPR |
| ---- | ---- | ---- |
| LeNet | 69.27 | 71.54 |
| ResNet50 | 71.50 | 72.68 |

> **Q2. Performance of ResNet-based Deep Ensembles.** Thank you for introducing the related work. For the implementation of deep ensembles, we adopt the original implementation and its scoring function [1], while [2] re-implemented the scoring function with their own design. We think there are several reasons for this phenomenon. Firstly, [2] improves on the classic uncertainty measure for classification (i.e., entropy) by using the distance to the closest latent centroid, which addresses the epistemic uncertainty in the model.
Secondly, they also regularize the learning process with a gradient penalty so that the model (especially a large model) is less likely to overfit, making the ResNets in the ensemble better trained. Thirdly, the method proposed in [2] is limited to classification problems only, so it is expected that its uncertainty estimation performance in classification settings is better. These factors greatly improve the OOD detection performance of the original deep ensemble implementation. Hence, it is reasonable that the refined deep ensemble obtains high performance.

[1] Lakshminarayanan et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. NeurIPS 2017.
[2] Amersfoort et al. Uncertainty Estimation Using a Single Deep Deterministic Neural Network. ICML 2020.

> **Q3. ImageNet as the In-Distribution Dataset.** Thank you for your clarification. Per your suggestion, we conducted additional experiments with ImageNet as the in-distribution dataset and CIFAR-10 as the OOD dataset. Please see the table below for the results. The results validate that our method can also generalize well when the feature encoder is trained on a more diverse and complex dataset (i.e., TinyImageNet).

| Model | AUROC | AUPR |
| ---- | ---- | ---- |
| MC Dropout | 64.36 | 60.47 |
| Deep Ensembles | 66.41 | 63.97 |
| Kendall and Gal | 58.54 | 55.29 |
| EDL | 50.37 | 70.39 |
| DPN | 59.59 | 59.87 |
| BNN-ARHT (Ours) | **69.27** | **71.54** |

> **Q4. Generalization of Tasks.** Thank you for your appreciation of our simulation experiments and for suggesting Bayesian optimization as a potential application of our work. We are aware that Bayesian optimization is a well-known field of optimization. However, its definition of uncertainty is slightly different from ours. To the best of our knowledge, Bayesian optimization is concerned with the uncertainty of the **optimizer** instead of the uncertainty from the data and model [3].
The model quantifies the uncertainty as the **posterior distribution** given the trajectory at step $t$ (please see [3] for a formal definition), which we think can be better approximated by a BNN than by a distance measure (e.g., ARHT or the Mahalanobis distance). Extending our method to Bayesian optimization is possible but non-trivial and is not the focus of our paper, although we agree this is an interesting and exciting direction for future work.

[3] You et al. Bayesian Modeling and Uncertainty Quantification for Learning to Optimize: What, Why and How. ICLR 2022
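To make the statistic discussed throughout this thread concrete, here is a minimal sketch of a regularized Hotelling's $T^2$-type OOD score computed from training features. This is a simplified illustration, not the ARHT implementation: the actual ARHT additionally recentres and rescales the statistic and tunes $\lambda$ adaptively per sample, and the function name and interface here are assumptions.

```python
import numpy as np

def regularized_t2(x, train_feats, lam=1.0):
    """Regularized T^2-type score of one test feature vector against the
    training feature distribution (larger => more OOD-like)."""
    n, p = train_feats.shape
    mu = train_feats.mean(axis=0)
    sigma = np.cov(train_feats, rowvar=False)  # sample covariance, (p, p)
    # The ridge term lam*I keeps the inverse well-conditioned when p is
    # large relative to n; this inversion dominates the inference cost.
    prec = np.linalg.inv(sigma + lam * np.eye(p))
    d = x - mu
    return float(n * d @ prec @ d)
```

An in-distribution test point should receive a lower score than a point far from the training mean, which is what the OOD decision exploits.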
Summary: This paper proposes an OOD detection procedure that applies the adaptable regularized Hotelling's T-square (ARHT) test [24] to the feature representation of learned BNN networks. The authors introduce the application of ARHT to a BNN encoder, and propose a procedure to adaptively calibrate the detection threshold based on the Benjamini–Hochberg (BH) procedure. On a suite of low-dimensional image experiments (CIFAR, MNIST, OMNIGLOT, SVHN) and a medical image benchmark (DRD), using small architectures (e.g., LeNet, AlexNet, ResNet18), the authors illustrate that the proposed method clearly outperforms previous methods in terms of OOD AUC. A thorough ablation study is conducted.

Strengths:
* A novel approach to OOD detection leveraging multivariate tests and associated calibration procedures.
* The authors conducted thorough experiments on academic benchmarks to show the advantage of the approach.

Weaknesses:
* Some descriptions of the previous literature may be incorrect. For example, in lines 43-48, the authors mention that (1) BNNs perform poorly when the dimension of the output is high, and (2) BNNs are limited to classification problems. I am not sure that either of these is true for modern BNNs. For example, [rank-1 BNN](https://arxiv.org/pdf/2005.07186.pdf) outperforms its deterministic counterpart on ImageNet, and regression is a rather standard BNN task (e.g., conducted on UCI benchmarks).
* The experiments are done on rather simplistic architectures and benchmarks. Therefore the generalization of this approach (in terms of both quality and computational feasibility) to more realistic data settings (e.g., ImageNet) and nontrivial modern large models is less clear.
* I find the point "our framework is sensitive to the quality of the encoder" (lines 260-266) to be rather important.
Three suggestions regarding this point:
* This seems to contradict the paper's claim that ARHT is superior to traditional BNNs whose "performances heavily rely on the feature encoder and are poor when the features are of poor quality" (line 48), as ARHT seems to suffer from a similar issue, albeit to a lesser extent. I recommend adjusting the descriptions in lines 48-50 correspondingly so they are not misleading (maybe refocus (3) in terms of sample efficiency).
* It would be interesting to have a more thorough investigation of the relationship between encoder quality vs. OOD detection performance, where encoder quality can be quantified in terms of a standard representation learning metric such as linear probing accuracy. This can be done via a scaling study over different sample sizes and/or architectures. I think a study like this could show that ARHT performance is associated with encoder quality, but achieves stronger OOD performance compared to other methods at the same accuracy.
* Please consider discussing this in the limitation section.

Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments. We will address your questions and concerns as follows.

>**Q1. A More Accurate Description of the Literature.** Thank you for the suggestions. We believe there might be some misunderstanding: we focused on comparing previous uncertainty estimation methods rather than BNNs. Most **uncertainty estimation methods** (with or without BNNs) are limited to classification problems. They do not operate on latent distributions, and hence perform poorly when the dimension of the output is high (e.g., high-dimensional regression). Hence, we highlight that our proposed uncertainty estimation method can address these common limitations. We will revise the description in our manuscript for a clearer presentation to avoid confusion.

>**Q2. Generalization to More Realistic Datasets and Modern Architectures.** We have included additional experiments on TinyImageNet and ResNet50. The results still outperform the baseline methods, validating the performance of our framework when scaled to larger datasets and architectures. Please see the **general comments** for the results on TinyImageNet and the **comments to reviewer jk7P** for results with ResNet50.

>**Q3. Influence of the Quality of the Encoder.** Thank you for the suggestions. We are also aware of the effect of encoder quality, and thus we fixed the encoder (say, LeNet) when comparing our framework with the current SOTA methods. Since the baseline methods are trained on classification problems (e.g., CIFAR-10 image classification and the DRD auxiliary task), we adopt the **training accuracy at different epochs** to measure the quality of the encoder, and we observe that the OOD detection performance improves overall monotonically with training accuracy (i.e., encoder quality). See **Figure 3 in the rebuttal file** for the visualization.
More specifically, we claim that ARHT outperforms traditional **uncertainty estimation methods**, which heavily rely on encoder quality. Since ARHT operates on latent representations, it still requires a good encoder to generate ideal latent distributions. We will adjust the claim in a later version of the manuscript and clarify this point in the limitation section.
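As a side note on evaluation, the AUROC numbers quoted throughout these threads can be computed directly from per-sample scores via the Mann–Whitney formulation; a minimal, library-free sketch, assuming higher scores mean more OOD-like (this is a generic illustration, not the authors' evaluation code):

```python
import numpy as np

def auroc(scores_in, scores_out):
    """AUROC for separating OOD (positive) from in-distribution samples.

    Equals the probability that a randomly chosen OOD sample scores higher
    than a randomly chosen in-distribution sample; ties count as 0.5.
    """
    s_in = np.asarray(scores_in, dtype=float)
    s_out = np.asarray(scores_out, dtype=float)
    diff = s_out[:, None] - s_in[None, :]  # all OOD-vs-in-distribution pairs
    return float(((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size)
```

A perfect detector yields 1.0, a random one about 0.5; an AUROC below 0.5 means the score ranks in-distribution samples above OOD ones.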
Rebuttal 1:
Rebuttal: We thank all the reviewers for your time and effort on our manuscript. Following the reviewers' comments, we conducted additional experiments to more extensively evaluate our method, including adding more baselines, experiments on more diverse datasets, and experiments with larger architectures. Experiment results are summarized in the rebuttal file and in tables in the respective threads.

>**Q1. More realistic datasets (jk7P, vwXa).** We have conducted additional experiments on OOD detection, with CIFAR-10 as the in-distribution dataset and TinyImageNet as the OOD dataset. The result is presented in Table 1 below and shows that the uncertainty estimation performance remains satisfactory when generalized to larger datasets.

Table 1: The OOD detection performance (in \%) of our method compared to various competitors, using the LeNet architecture. We use CIFAR-10 as the in-distribution dataset and TinyImageNet as the OOD dataset.

| Model | AUROC | AUPR |
| ---- | ---- | ---- |
| MC Dropout | 66.98 | 64.46 |
| Deep Ensembles | 66.41 | 63.97 |
| Kendall and Gal | 63.23 | 63.06 |
| EDL | 51.64 | 66.31 |
| DPN | 64.68 | 58.33 |
| BNN-ARHT (Ours) | **67.77** | **66.74** |

>**Q2. Scalability (jk7P, vwXa, HkqB).** We aim to use the Bayesian counterpart of a smaller architecture to demonstrate the capability of BNNs to generate latent feature distributions (for details please see the discussion section in the main text). Note that one can generate ideal feature distributions using very large frequentist vision models (e.g., ViT), which, however, incurs greater complexity in training and inference. A related ablation experiment comparing the frequentist and Bayesian architectures is presented in the main text. We have additionally conducted an experiment with the Bayesian model architecture scaled up to ResNet50. Table 2 below presents the results and shows that our method still performs satisfactorily when scaled up to a larger architecture.
We further conducted an example experiment using the frequentist ResNet50 (the hypothesis test reduces to a one-sample test), and the testing AUROC is 72.77. These results validate the scalability of our method to large and modern vision architectures.

Table 2: The OOD detection performance (in \%) of our method compared with various competitors, using the ResNet50 architecture. We use CIFAR-10 as the in-distribution dataset and SVHN as the OOD dataset.

| Model | AUROC | AUPR |
| ---- | ---- | ---- |
| MC Dropout | 68.32 | 78.24 |
| Deep Ensembles | 65.13 | **82.19** |
| Kendall and Gal | 72.24 | 81.43 |
| EDL | 51.21 | 73.78 |
| DPN | 62.33 | 79.11 |
| Detectron | 73.16 | 82.5 |
| BNN-ARHT (Ours) | **73.46** | 78.27 |

Pdf: /pdf/0a8d0783b95d2e5387665ba25b887cda4a1519f1.pdf
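For reference, the Benjamini–Hochberg step-up procedure mentioned in the reviews as the basis of the adaptive threshold calibration can be sketched generically as follows. This assumes per-sample $p$-values are already available from the test statistic and is not the authors' exact calibration code:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of hypotheses rejected (flagged as OOD) at FDR level q.

    Step-up rule: sort p-values ascending, find the largest k with
    p_(k) <= (k/m) * q, and reject hypotheses 1..k.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = int(np.nonzero(passed)[0].max())  # last index meeting the rule
        reject[order[: k + 1]] = True
    return reject
```

Unlike a fixed per-sample cutoff, the step-up rule controls the false discovery rate across the whole batch of test samples.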
NeurIPS_2023_submissions_huggingface
2,023
Partial Label Learning with Dissimilarity Propagation guided Candidate Label Shrinkage
Accept (poster)
Summary: In this submission, the authors propose a novel partial label learning method named DPCLS that exploits the dissimilarity relationship. They develop semantic similarity and dissimilarity matrices which form an adversarial relationship, which is further utilized to shrink the solution space of the label confidence matrix and promote the dissimilarity matrix.

Strengths:
1. Partial label learning is an interesting topic.
2. A novel partial label learning approach is proposed.
3. Experiments validate the effectiveness of the proposed approach.

Weaknesses:
1. The motivation is unclear and the novelty might be limited.
2. The technical details of the proposed approach could be further discussed.
3. More experiments could be conducted to validate the effectiveness of the proposed approach.

Technical Quality: 2 fair Clarity: 2 fair

Questions for Authors:
1. The submission does propose a partial label learning approach, but the motivation for proposing it is unclear. Why a similarity and a dissimilarity matrix? From this perspective, the novelty is very limited.
2. Why the $l_1$ norm in Eq.(3)? Are there some comparative experiments?
3. The hyper-parameter $\sigma$ in Eq.(4) is not given.
4. According to Figure S1(a) and Figure S1(b) in the supplementary material, the smaller the value of $\alpha$, the better the performance (for some $\beta$). Thus, it is suggested to do further ablation studies where $\lambda$, $\alpha$, $\beta$ in Eq.(6) are respectively set to zero.
5. As shown in Algorithm 1, the proposed approach works in an iterative manner, so should a convergence analysis be conducted? How many rounds does the approach take to converge?
6. As shown in Table 1, why does the traditional baseline CLPL achieve relatively superior performance while the newest baselines only achieve relatively ordinary performance? Do the data sets matter?

Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are not discussed in the current version. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: **Thank you for your time and effort in reviewing our paper.**

---

**W1: The motivation is unclear and the novelty might be limited.**

**Answer to W1:**
* **Motivation**: please refer to the **Global Response** for the **Motivation** of our work; in the final version, we will improve the introduction to better describe the motivation of our work.
* **Novelty**: The main novelties of our work are summarized in the response to **W1** of **Reviewer ysvn**.

---

**W2: Discuss the technical details of the proposed approach.**

**Answer to W2:** Please refer to **Q2**.

---

**W3: More experiments.**

**Answer to W3:** Following your suggestion, we compared our method with an additional SOTA PLL method, PICO [1], on the real-world data sets in **Table R1** of the **Global Response (PDF)**. PICO is designed for image classification (it involves data augmentation for images, such as image rotation, image resizing, etc.), and cannot be directly evaluated on the real-world data sets (as the commonly used real-world PLL data sets are non-image data sets). To use PICO on non-image data sets, its encoder is changed from a ResNet to a multi-layer perceptron, and the image augmentation is changed to randomly masking 20% of the features as part of the augmented data. Compared with PICO, DPCLS achieves better performance in 5/6 cases.

---

**Q1: Motivation of our paper.**

**Answer to Q1:** Please refer to **W1 Motivation**.

---

**Q2: Why the $\ell_1$ norm in Eq.(3)?**

**Answer to Q2:** We use the $\ell_1$ norm mainly because of its excellent computational properties. Specifically, as the values of both the dissimilarity matrix $D$ and the similarity matrix $FF^\top$ are limited to $[0,1]$, the seemingly non-smooth $\ell_1$ norm can be written as the trace of a matrix product ($\|D \odot FF^\top\|_1 = \mathrm{trace}(F^\top D F)$), making the associated optimization over $D$ and $F$ feasible.
A well-known alternative, the Frobenius norm, would lead to a fourth-order optimization problem, which is quite difficult to optimize. Another alternative is the $\ell_0$ norm, which is both non-convex and non-smooth. Therefore, we selected the $\ell_1$ norm. Besides, minimizing the $\ell_1$ norm fits the requirement of the adversarial prior, i.e., a larger element in $D$ indicates a smaller element in the corresponding position of $FF^\top$ and vice versa.

---

**Q3: The hyper-parameter $\sigma$ in Eq.(4) is not given.**

**Answer to Q3:** Thank you for the reminder. Following LALO [1], the hyper-parameter $\sigma$ is determined by $\sigma = \frac{1}{m}\sum_{i=1}^{m}\|x_i - x_{ik}\|_2$, where $x_{ik}$ denotes the $k$-th nearest neighbor of $x_i$. The setting of $\sigma$ is included in the code file, but as suggested, we will state it in the second-to-last paragraph of **Section 3**.

---

**Q4: Further ablation studies.**

**Answer to Q4:** In **Fig. S1 (a) and (b)**, a brighter color indicates a larger value. It can be observed that when $\alpha$ ($\beta$) decreases, the classification accuracy of DPCLS drops. To better present the experimental results, we converted the images into table format; please refer to **Table R2 and Table R3** in the **Global Response (PDF)**, which show that with $\alpha=0.01, \beta=0.001$, DPCLS achieves the best performance on the Lost and MSRCv2 data sets. In the following **Table RA4**, we show the performance of our model with $\alpha=0$, $\beta=0$, and $\lambda=0$, respectively (in fact, **Table 4** of our paper has already shown the case of $\alpha=\beta=0$, which is denoted DPCLS-DP). The results in **Table RA4** prove the effectiveness of each term of our method. (Note that the experimental results of DPCLS with $\alpha=0$ and with $\beta=0$ are the same, mainly because when $\alpha=0$, the enhanced dissimilarity matrix will not affect the label confidence.
And since the initial dissimilarity matrix and the similarity matrix are complementary, i.e., they are both constructed from the candidate label set, when $\beta=0$ the dissimilarity matrix will also not affect the label confidence matrix. For more details please refer to line 98 of our paper.)

**Table RA4: Ablation study**

|Data set |FG-NET| Lost |MSRCv2|BirdSong|Malagasy|
|:----:|:----:|:----:|:----:|:----:|:----:|
|DPCLS|.077$\pm$.009|.770$\pm$.024|.557$\pm$.014|.751$\pm$.009|.676$\pm$.004|
|DPCLS $\lambda=0$|.047$\pm$.011|.267$\pm$.088|.110$\pm$.039|.393$\pm$.110|.245$\pm$0.11|
|DPCLS $\alpha=0$|.073$\pm$.010|.687$\pm$.027|.466$\pm$.018|.721$\pm$.014|.612$\pm$.011|
|DPCLS $\beta=0$|.073$\pm$.010|.687$\pm$.027|.466$\pm$.018|.721$\pm$.014|.612$\pm$.011|

---

**Q5: Convergence analysis of DPCLS.**

**Answer to Q5:** Our algorithm solves the optimization problem in Eq. (7) of the paper with four blocks of variables; however, to the best of our knowledge, there is no general convergence proof for the IALM algorithm with more than two blocks of variables [2]. Fortunately, since each subproblem can be solved efficiently, our algorithm empirically converges well. The empirical curves can be found in **Fig. R2** of the **Global Response (PDF)**. They show that our method converges within about 60 iterations.

---

**Q6: Why CLPL achieves relatively superior performance.**

**Answer to Q6:** According to **Table 1**, CLPL achieves relatively good performance on the Orl and Amazon data sets, but its performance is not as good as that of the newest algorithms on the other data sets (see **Figure 1** and **Table 2** of the paper). Therefore, we think the observation in **Table 1** is due to the characteristics of the data sets. We will analyze this observation in **Section 5** of the final version.
---

[1] 2022-ICLR-PICO: Contrastive Label Disambiguation for Partial Label Learning
[2] 2011-Foundations and Trends in Machine Learning-Distributed optimization and statistical learning via the alternating direction method of multipliers.

---

Rebuttal Comment 1.1:
Comment: Thanks for the response. In my opinion, this is a borderline paper. I would like to hear from the other reviewers.

---

Rebuttal Comment 1.2:
Comment: Thanks for the response. After reading all the comments of the other reviewers, I would like to keep my rating for the moment.

---

Reply to Comment 1.2.1:
Comment: Thank you for your response; we would like to express our gratitude for your willingness to accept our paper.
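The trace identity invoked in the Answer to Q2 above, $\|D \odot FF^\top\|_1 = \mathrm{trace}(F^\top D F)$ for matrices with entries in $[0,1]$, is easy to verify numerically; here is a quick standalone check with randomly drawn matrices (illustrative only, with assumed shapes):

```python
import numpy as np

rng = np.random.default_rng(42)
m, q = 6, 4
# D and F have entries in [0, 1], as in the rebuttal, so every entry of
# D * (F @ F.T) is non-negative and the entrywise l1 norm collapses to a trace.
D = rng.uniform(size=(m, m))
F = rng.uniform(size=(m, q))

lhs = np.abs(D * (F @ F.T)).sum()   # ||D \odot F F^T||_1 (entrywise sum)
rhs = np.trace(F.T @ D @ F)         # trace(F^T D F)
assert np.isclose(lhs, rhs)
```

The identity follows from $\sum_{ij} D_{ij}(FF^\top)_{ij} = \sum_k \sum_{ij} F_{ik} D_{ij} F_{jk}$, i.e., summing the diagonal of $F^\top D F$, and it is exactly what makes the $\ell_1$ term tractable in the optimization.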
Summary: This paper proposes a new approach to partial label learning, called DPCLS, which learns similarity and dissimilarity matrices in an adversarial relationship to improve labeling accuracy. The proposed method is compared to several existing methods on a variety of datasets, and the results demonstrate its superior performance in most cases. The paper also includes a theoretical proof of the rationality of the proposed adversarial prior and a visualization of the enhanced similarity and dissimilarity matrices on a real-world dataset.

Strengths:
1. This paper introduces a new partial-label learning method called DPCLS, which leverages similarity and dissimilarity matrices in an adversarial relationship to effectively tackle the challenges associated with partial-label learning.
2. A theoretical proof of the rationality of the proposed adversarial prior is included to further validate the proposed method's effectiveness.
3. Extensive experiments conducted on both real-world and artificial partial label datasets showcase the efficacy of the proposed method.

Weaknesses: The paper focuses on an interesting partial label learning problem and has several issues that could be improved: the datasets adopted in this paper are small-scale, and the motivation of this paper should be explained in detail.

Technical Quality: 3 good Clarity: 3 good

Questions for Authors:
1. In Fig. 1(c), compared with the co-occurring probability of 0.7, it confused me that the proposed method seems to get better performance at a co-occurring probability of 0.8.
2. The datasets adopted in the paper are simple and small-scale, which is hardly convincing; moderately sized datasets, for example CIFAR-100, should be adopted to validate the effectiveness of the proposed method.
3. I am confused about the reported comparison results in Table 2: for the comparison method CAVL, there is a big performance drop between the reported experimental results and the CAVL paper.
Can you also compare the proposed method with the representative PLL method PiCO? 4. For hyper-parameter sensitivity, the authors should provide the variance in the figure. 5. The motivation of the paper should be further explained in detail. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to the weakness and limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for your time and effort in reviewing our paper.** --- **W1: Small-scale data sets and motivation of this paper** **Answer to W1** * **Motivation**: Thank you for your valuable comments and suggestions. Please refer to the **Global Response** for the detailed **Motivation** of our work; in the final version, we will improve the introduction to better describe it. * **Data sets**: To the best of our knowledge, _the largest real-world partial label data sets_ are Soccer Player (17472 samples and 279 dimensions) and Yahoo! News (22991 samples and 163 dimensions), which were both evaluated in our paper. Moreover, some recent partial label learning works adopt larger-scale synthetic data sets. Therefore, in this rebuttal, we further evaluated our work on CIFAR-100 (60000 samples and 1024 dimensions). Please refer to **Q2** for the detailed experimental results on CIFAR-100. --- **Q1: In Fig. 1 (c), it is confusing that the proposed method seems to achieve better performance at a co-occurring probability of 0.8 than at 0.7.** **Answer to Q1:** \ Thanks for your careful reading. Ecoli is a small-scale data set with 336 samples. Due to the influence of random factors in producing the partial labels, the classification accuracy may fluctuate as the co-occurring probability ($\epsilon$) increases. Still, the overall trend is clear: as the co-occurring probability increases, the classification accuracy decreases. Note that a similar phenomenon also occurs in the original papers of LALO [1], SURE [2] and AGGD [3]. --- **Q2: Data sets.** **Answer to Q2:** \ Based on your suggestion, we conducted experiments on the CIFAR-100 data set (50000 training samples and 10000 testing samples).
As CIFAR-100 is not a partial label data set, following the settings of the recent works [4, 5], we randomly flipped some negative labels into candidate labels with probability $q$ ($q$ was set to 10%, 20%, and 30%). We then compared our method with two state-of-the-art deep learning based PLL algorithms: PICO [4] and CAVL [5]. As CIFAR-100 is an image data set, a reasonable feature representation is needed to classify it well. Therefore, we used PICO to extract the features of CIFAR-100 and reduce its dimensionality from 1024 to 128, and then applied the learned representation to our model. The comparison is shown in the following **Table RA3**, where we can see that our DPCLS still achieves better performance than the SOTA PLL methods on CIFAR-100. Specifically, as $q$ increases, more false positive labels exist in the candidate label sets and PLL becomes harder. Accordingly, the accuracies of the PLL algorithms decrease rapidly, but the performance advantage of our method becomes more salient, which shows that our method handles more challenging PLL tasks well. --- **Table RA3: Classification accuracy (mean$\pm$std) on CIFAR-100.**

|Data set |CIFAR-100 $q=10$%|CIFAR-100 $q=20$%|CIFAR-100 $q=30$%|
|:----:|:----:|:----:|:----:|
|DPCLS|**70.07**|**64.60**|**34.23**|
|PICO|68.60|62.50|27.55|
|CAVL|58.80|21.83|12.27|

--- **Q3: Experiment results of CAVL and comparison with PICO.** **Answer to Q3:** * **CAVL**: In the original CAVL [5] paper, **ten-fold cross-validation** was used on the real-world data sets, i.e., 90% of the samples for training and 10% for testing. In our paper, **ten runs of 50%/50% random train/test splits** were performed on each data set, i.e., 50% of the samples for training and 50% for testing. Different experimental settings result in different classification accuracies.
* **PICO**: PICO [4] is designed for image classification (it involves data augmentation for images, such as image rotation and image resizing) and cannot be directly evaluated on real-world data sets (the commonly used real-world PLL data sets are non-image data sets). In order to apply PICO to non-image data sets, we changed the encoder of PICO from a ResNet to a multi-layer perceptron suitable for real-world data sets, and replaced the image augmentation (image rotation, image resizing, etc.) with randomly masking 20% of the features as part of the augmented data. The experimental results are shown in **Table R1** of the **Global Response (PDF)**. Compared with PICO, DPCLS achieves higher classification accuracy on the real-world data sets in 5/6 cases. --- **Q4: For hyper-parameter sensitivity, the authors should provide the variance in the figure.** **Answer to Q4:** \ Thank you for your suggestion; we added the **standard deviation** in the **Global Response (PDF) Table R2, Table R3 and Fig. R2 (c) and (d)**, and we will add them to the **supplementary file** in the final version. --- **Q5: The motivation of the paper should be further explained in detail.** **Answer to Q5:** \ **Motivation**: Please refer to the **Global Response** for the detailed **Motivation**; in the final version, we will improve the introduction to better describe the motivation of our work. --- [1] 2018-IJCAI-Leveraging Latent Label Distributions for Partial Label Learning \ [2] 2019-AAAI-Partial Label Learning with Self-Guided Retraining \ [3] 2022-TPAMI-Adaptive Graph Guided Disambiguation for Partial Label Learning \ [4] 2022-ICLR-PICO: Contrastive Label Disambiguation for Partial Label Learning \ [5] 2022-ICLR-Exploiting Class Activation Value for Partial-Label Learning --- Rebuttal Comment 1.1: Comment: Thanks for the response. I will raise my score.
--- Reply to Comment 1.1.1: Comment: Thank you for your response and we would like to express our gratitude for your willingness to accept our paper.
Summary: This paper constructs a second-order similarity matrix and a semantic dissimilarity matrix. The similarity matrix is obtained by leveraging the confidence obtained from the underlying model, while the semantic dissimilarity matrix is determined based on the label candidate set and the distribution of samples in the feature space. An objective function is formulated using the adversarial relationship between the similarity matrix and the dissimilarity matrix. Additionally, the paper proposes a version of the algorithm to handle nonlinear problems by mapping the original feature space to a high-dimensional reproducing kernel Hilbert space (RKHS). Finally, the proposed method is compared with several state-of-the-art PLL algorithms and validated on ten synthetic datasets and seven real-world datasets. Strengths: 1. An adversarial relationship is constructed for label disambiguation, and in addition, two versions are proposed: linearly separable and non-linearly separable. 2. Solid theoretical analysis and an ample amount of work. Weaknesses: 1. The research motivation of the paper is not clearly stated. 2. The paper mainly focuses on describing the proposed method, without summarizing and extracting issues from existing PLL research. 3. The principle behind the construction of the loss term based on adversarial priors is not explained. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What was the motivation behind the authors' paper? What problems in PLL did it solve? 2. If the relationship between two samples is characterized by both low confidence and low dissimilarity, can the loss term based on adversarial priors handle this situation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for your time and effort in reviewing our paper.** --- **W1: The research motivation of the paper is not clearly stated.** **W2: The paper mainly focuses on describing the proposed method, without summarizing and extracting issues from existing PLL research.** **Answer to W1 and W2:** \ Thank you for your valuable comments and suggestions. Please refer to the **Global Response** for the detailed review of the **Related Work** and a summary of their issues in the **Motivation**. In the final version, we will improve the related works and motivation in the introduction. --- **W3: The principle behind the construction of the loss term based on adversarial priors is not explained.** **Answer to W3:** \ The dissimilarity matrix $D$ represents the dissimilarities between samples, i.e., if $D_{ij}$ is large, the sample $x_i$ is highly dissimilar to the sample $x_j$. Meanwhile, the similarity matrix $FF^T$ represents the similarities between samples, i.e., if the $(i,j)$-th element of $FF^T$ is large, $x_i$ is highly similar to $x_j$. Therefore, an adversarial relationship naturally exists between $D$ and $FF^T$. We, accordingly, formulate this adversarial relationship as \ $\mathop{\min}\limits_{D, F}\|D\odot FF^T\|_{1}=\mathop{\min}\limits_{D, F}\sum_{i, j=1}^{m}|D_{ij} \cdot (FF^T)_{ij}|.$ \ By minimizing the above equation, the adversarial relationship is captured and the solution space of the label confidence matrix $F$ is shrunk to achieve better label disambiguation. Besides, the quality of the dissimilarity matrix $D$ can also be enhanced. In the final version of the paper, we will explain the principle behind the adversarial prior more clearly in the second paragraph of **Section 2**. --- **Q1: What was the motivation behind the authors' paper?
What problems in PLL did it solve?** **Answer to Q1:** \ Please refer to the **Global Response** for the detailed motivation of our paper and the disadvantages of the existing PLL methods. Here we briefly summarize the motivation of our work. Exploiting the information in the label space is important for label disambiguation in PLL. The current methods leverage this information by constructing a semantic dissimilarity matrix [1, 2], like SDIM [1] and PANGOLIN [2]. However, the dissimilarity matrices constructed by them are very sparse (see **Fig. 2 (e)** of the paper: the initial dissimilarity matrix is sparse). Therefore, we propose to enhance the initial dissimilarity matrix by using the geometric structure between samples in the feature space. Moreover, the dissimilarity matrix and the similarity matrix constructed from the label confidence matrix form an adversarial relationship. We use this adversarial relationship to further enhance the dissimilarity matrix and, more importantly, shrink the solution space of the label confidence matrix to achieve better label disambiguation. Our model achieves superior performance to the compared algorithms (experiments show that our algorithm achieves significantly superior performance on real-world and synthetic data sets in 79.6% of cases) and obtains a better dissimilarity matrix (see **Fig. 2 (f)** of the paper: the dissimilarity matrix produced by DPCLS becomes denser, which is quite close to the ideal one in **Fig. 2 (d)**). Besides, we also theoretically prove the effectiveness of the adversarial term in **Section 4** of the paper. --- **Q2: If the relationship between two samples is characterized by both low confidence and low dissimilarity, can the loss term based on adversarial priors handle this situation?** **Answer to Q2:** \ Yes, our adversarial term can handle the situation where low confidence and low dissimilarity occur at the same time.
Specifically, the loss of the adversarial term can be written as \ $\mathop{\min}\limits_{D, F}\|D\odot FF^T\|_{1}=\mathop{\min}\limits_{D, F}\sum_{i, j=1}^{m}|D_{ij} \cdot (FF^T)_{ij}|$. \ When the confidence between $x_i$ and $x_j$ is low and the dissimilarity between $x_i$ and $x_j$ is also small, the values of both $(FF^T)_{ij}$ and $D_{ij}$ are small. Accordingly, the value of the above objective function is also small. Therefore, the proposed adversarial loss tolerates this uncertain case (both low confidence and low dissimilarity). \ On the contrary, our adversarial term does not allow two samples to have both high confidence and high dissimilarity, which would lead to a large objective function value. The proposed adversarial term thus adopts a conservative strategy: since PLL is a weakly supervised problem with insufficient supervision, we should allow some uncertain cases (both low label confidence and low dissimilarity) to exist. Extensive experiments have demonstrated the effectiveness of the adopted adversarial strategy, and we also theoretically prove its effectiveness in **Section 4** of this paper. --- [1] 2019-IJCAI-Partial Label Learning by Semantic Difference Maximization \ [2] 2020-CIKM-Learning with Noisy Partial Labels by Simultaneously Leveraging Global and Local Consistencies --- Rebuttal Comment 1.1: Comment: Thanks for the response. After reading all the reviewers' comments, most of the reviewers agreed that the paper's motivation was not clearly described, which puts the paper in a borderline state. If the AC needs to make a clear acceptance or rejection decision, I am inclined to give an acceptance. But for now, I will keep the score the same (borderline accept). --- Reply to Comment 1.1.1: Comment: Thank you for your response and we would like to express our gratitude for your willingness to accept our paper.
Summary: The paper proposes a new method for partial label learning called Dissimilarity Propagation guided Candidate Label Shrinkage (DPCLS). The method captures the confidence of candidate labels by constructing a constrained regression model and uses the product of the label confidence matrix and its transpose to build a second-order similarity matrix. Additionally, the method constructs a semantic dissimilarity matrix by considering the complement of the intersection of candidate label sets and propagating the initial dissimilarity relationships throughout the entire dataset using the local geometric structure of the samples. The adversarial relationship between the similarity and dissimilarity matrices is further utilized to narrow down the solution space of the label confidence matrix and facilitate the construction of the dissimilarity matrix. The method is evaluated on artificial datasets and real-world partial label datasets, demonstrating superior performance compared to existing partial label learning algorithms. Strengths: 1. The paper introduces a unique combination of dissimilarity propagation and guided candidate label shrinkage for PLL, offering a fresh perspective on the problem. 2. The authors present a detailed framework, including the construction of similarity and dissimilarity matrices, leveraging local geometric structures, and extension to a kernel version, providing a comprehensive solution for PLL. 3. The proposed method is extensively evaluated on multiple artificial and real-world datasets, demonstrating its effectiveness and outperforming existing algorithms. Weaknesses: 1. The structure and logic of the paper need further improvement. I recommend that the authors provide a clearer background and motivation in the introduction, enumerate the contributions of the paper, and provide an overview of the overall framework of the paper. 2.
Regarding the hyperparameter settings of the method, the authors do not provide a specific method or principle for parameter selection. For parameters λ, α, β, and k, the authors only mention fixed values or ranges without explaining how to choose these parameters to obtain optimal performance. It is recommended that the authors provide guidance or experimental results regarding hyperparameter selection. 3. The paper has a limited number of references and lacks a comprehensive review of the latest research in the relevant field. I recommend that the authors conduct a more thorough literature search in the relevant field and provide more background and explanations of related studies in the introduction and related work sections. 4. The proposed model involves constructing similarity and dissimilarity matrices and solving the problem using the augmented Lagrange multiplier method, which may result in higher computational complexity compared to simpler approaches. It is recommended to perform a complexity analysis of the algorithm. 5. In the conclusion section, the authors can further discuss the limitations of the proposed method and directions for future improvements to enhance the completeness of the conclusion. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See the above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for your time and effort in reviewing our paper.** --- **W1: The structure and logic of the paper need improvement.** **Answer to W1:** Thank you for your suggestion. In the final version, we will improve the introduction to make the logic clearer. Specifically, the **Background (Related Work)** and **Motivation** of our work are presented in the **Global Response**. The enumerated contributions and the overall framework of this paper are summarized as follows. **Contribution and Novelty** of our work * We propagate the initial sparse semantic dissimilarity relationships to the whole data set, obtaining a dense and information-rich dissimilarity matrix. * We form an adversarial relationship between the enhanced dissimilarity matrix and the second-order similarity matrix constructed from the label confidence matrix, which helps shrink the solution space of the label confidence matrix. * The proposed model is extended to a kernel version to fit the non-linear structure of the samples and solved efficiently by an IALM-based algorithm. * We give some theoretical analyses and guarantees on the effectiveness of the proposed model. * Extensive experiments on real-world and synthetic data sets demonstrate the effectiveness of our model. **Framework** of our paper: \ Section 1 briefly introduces PLL, the motivation, and the contributions of our work. Section 2 presents the proposed model, and Section 3 shows the optimization algorithm. Section 4 presents the theoretical analysis of the model. Section 5 reports the experiments and the associated analyses. Finally, Section 6 concludes the paper. --- **W2: Setting of hyperparameters.** **Answer to W2:** \ **First**, determining the best hyper-parameters is a challenging task for almost all machine learning models. However, as shown in the second-to-last paragraph of **Section 3**, most of our hyper-parameters are fixed, suggesting the robustness of our model.
\ **Second**, we determined the hyper-parameters based on the following principles. $k$ controls the number of k-nearest neighbors, and $\lambda$ controls the model complexity. As they are commonly used parameters in many related methods [1, 2], we directly followed the settings of the related works and set $k=10, \lambda=0.05$. $\alpha$ and $\beta$ introduce the adversarial term and the dissimilarity propagation term, and we set their values based on extensive experiments. In **Section D** of the **supplementary file**, we experimentally show their influence on the proposed model, and accordingly fixed $\beta$ = 0.001 and selected $\alpha$ from {0.001, 0.01}. \ **Finally**, if all hyper-parameters were carefully tuned, our algorithm could be further improved. Even with these fixed hyper-parameters, our model still statistically outperforms the others in 85.9% of cases (85 out of 99) on the real-world data sets. --- **W3: Limited number of references and lack of a comprehensive review of the latest works** **Answer to W3:** \ As suggested, we will enhance the references in the final version. Please refer to the **Global Response** for a comprehensive review of the related works. --- **W4: Complexity analysis.** **Answer to W4:** * Actually, we have already analyzed the computational complexity of our algorithm and compared it with other PLL methods in **Table S1** of the **supplementary file**. For your convenience, we paste **Table S1** here. DPCLS solves a QP problem with a computational complexity of $\mathcal{O}(m^3q^3)$, which is the same as many SOTA PLL methods like AGGD [2], SDIM [4], and PL-CLA [3]. * We also compared the actual running time of DPCLS with the other baselines in **Table RA2**, where DPCLS is only slightly slower than the baselines but with a significant accuracy improvement. More analyses can be found in the supplementary file.
--- **Table S1: Computational complexity comparison between the linear regression based PLL methods.**

| |AGGD|PL-CLA|SDIM|DPCLS|
|:----:|:----:|:----:|:----:|:----:|
|Computational complexity|$\mathcal{O}(m^3+mk^{3}+m^3q^3)$|$\mathcal{O}(m^3+m^3q^3)$|$\mathcal{O}(m^3+m^3q^3)$|$\mathcal{O}(2m^3+m^{2}+m^3q^3)$|

--- **Table RA2: Accuracy vs. time cost**

|Data set|Type|AGGD|PL-CLA|SDIM|DPCLS|
|----|----|----|----|----|----|
|Glass $r=1, \epsilon=0.8$|Accuracy|.491$\pm$.063|.458$\pm$.073|.508$\pm$.073|.560$\pm$.051|
| |Time|2.63s|2.13s|2.25s|2.65s|
|Ecoli $r=1, \epsilon=0.8$|Accuracy|.801$\pm$.033|.803$\pm$.031|.801$\pm$.028|.833$\pm$.014|
| |Time|3.34s|2.88s|2.88s|3.56s|
|Lost|Accuracy|.702$\pm$.024|.696$\pm$.021|.736$\pm$.023|.770$\pm$.024|
| |Time|12.22s|5.80s|7.26s|22.03s|

--- **W5: Discuss the limitations of the proposed work** **Answer to W5:** \ We agree with the reviewer that every work has some limitations. For our work, one major limitation is the computational cost on large-scale data sets. Although we have made the solving of the QP problem regarding the $F$-subproblem more scalable in **Section A** of the **supplementary file** (Eq. (5)), the dissimilarity and similarity matrices are both of size $m\times m$, with $m$ the number of training samples, so their construction and the associated computation do not scale to extremely large data sets. In practice, we can remedy this issue by handling the data sets through mini-batches or an anchor graph. Nevertheless, the heavy computational burden on large-scale data sets is still a limitation of our work. In the final version of our paper, we will discuss this limitation in the **Conclusion** section.
--- [1] 2018-IJCAI-Leveraging Latent Label Distributions for Partial Label Learning \ [2] 2022-TPAMI-Adaptive Graph Guided Disambiguation for Partial Label Learning \ [3] 2021-JCST-Partial Label Learning via Conditional-Label-Aware Disambiguation \ [4] 2019-IJCAI-Partial Label Learning by Semantic Difference Maximization --- Rebuttal Comment 1.1: Title: I would like to change the rating to borderline accept Comment: Thank you for your response. I will raise the scores. --- Reply to Comment 1.1.1: Comment: Thank you for your response and we would like to express our gratitude for your willingness to accept our paper.
Rebuttal 1: Rebuttal: Thanks to all the reviewers and the area chair for handling our paper and for the valuable comments and suggestions to improve its quality. In the initial comments, we received 4 positive recommendations (1 Accept, 1 Weak Accept, 2 Borderline Accept) and 1 negative recommendation (1 Borderline Reject). In this **"Global Response"**, we will respond to two common questions posed by several reviewers, i.e., the **"Related Work"** and **"Motivation"** of our work. --- **To reviewers sPwu, ysvn and vgJ9** **Related Work:**\ Partial label learning (PLL) [1, 2, 3] is an emerging weakly supervised learning framework. In PLL, each sample is associated with a set of candidate labels, among which only one is the ground-truth label. The key to solving the PLL problem is label disambiguation, i.e., identifying the ground-truth label of a sample from its candidate label set. We roughly divide the existing label disambiguation strategies into three categories. The first category of methods [4, 5, 6, 7] leverages the similarity relationships of samples in the feature space to achieve label disambiguation. For example, [4] made predictions by weighted voting over neighboring instances. [5] used graph regularization to disambiguate the candidate labels, i.e., if two samples are similar in features, they are likely to share the same ground-truth label. [6] adopted an adaptive graph structure to estimate label confidence, and the label with the highest confidence is regarded as the ground-truth label. [7] proposed discrimination augmentation for PLL by using the class prototypes. However, when the neighboring relationships and class prototypes of samples are inaccurate, the performance of these methods degrades. The second category of methods [8, 9, 10] uses the output of the model to guide label disambiguation. For example, [8] narrowed the candidate label set through a sparsity-based self-training procedure.
[9] and [10] disambiguate the candidate label sets by the outputs of the deep neural networks themselves. However, the model output may be inaccurate (especially in the early stages of model training), which will result in performance degradation. The third category of methods [11, 12] uses the information of the label space to achieve disambiguation. In particular, the information of the non-candidate labels, which accurately indicates that a sample does not belong to that set of labels, is exploited. For example, SDIM [11] first proposed to use the dissimilarity relationships of samples in the label space (if two samples do not share any common candidate labels, they must belong to different classes), and then maximized the label confidence between dissimilar samples to achieve label disambiguation. [12] guided the disambiguation by using the dissimilarity matrix and the class prototypes simultaneously, i.e., maximizing the label confidence between dissimilar samples and reducing the label confidence of those with large distances from the class prototypes. Although these methods use the information of the label space, the constructed semantic dissimilarity matrix is sparse and predefined, which limits their applicability. --- **To reviewers ysvn, vgJ9, 9Zpk and 6Rie** **Motivation:**\ Like SDIM [11], our method belongs to the third category. The core idea of SDIM [11] is to use the constructed semantic dissimilarity matrix $D$ to guide label disambiguation. However, the dissimilarity matrix $D$ in SDIM is sparse and predefined. Especially with larger candidate label sets, $D$ becomes extremely sparse, limiting the model performance. To solve this problem, we aim to construct a denser and information-rich dissimilarity matrix to help label disambiguation.
Specifically, we propose to propagate the initial dissimilarity relationships to the whole data set via the local geometric structure of samples in the feature space, i.e., if two samples $x_i$ and $x_j$ are close to each other in the feature space, their dissimilarity relationships should also be similar. Once the enhanced dissimilarity matrix is obtained, another problem is how to apply it to label disambiguation. As $D\in\mathbb{R}^{m \times m}$ indicates the pairwise dissimilarity relationships among samples, we further use the label confidence matrix $F$ multiplied by its transpose to construct a second-order similarity matrix $FF^T\in\mathbb{R}^{m \times m}$ among samples. The dissimilarity matrix $D$ and the similarity matrix $FF^T$ naturally form an adversarial relationship, i.e., a larger (resp. smaller) element in $D$ implies a smaller (resp. larger) element in $FF^T$. We formulate this adversarial prior as an $\ell_1$ norm minimization problem. By optimizing it, the enhanced dissimilarity matrix $D$ can shrink the solution space of the label confidence matrix $F$ to achieve label disambiguation, and meanwhile, the similarity matrix induced from the label confidence matrix also contributes to building a better dissimilarity matrix. Theoretical analysis in **Section 4** and empirical evaluation in **Section 5** demonstrate the effectiveness of the above approach.
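The adversarial prior described above can be sketched in a few lines of NumPy (a toy illustration with hand-made matrices, not the paper's actual data or implementation):

```python
import numpy as np

# Toy illustration of the adversarial prior ||D ⊙ FF^T||_1.
# F: label confidence matrix (m samples x q labels); each row is a
# distribution over that sample's candidate labels.
F = np.array([
    [1.0, 0.0, 0.0],  # sample 0: confident in class 0
    [0.9, 0.1, 0.0],  # sample 1: mostly class 0 -> similar to sample 0
    [0.0, 0.0, 1.0],  # sample 2: confident in class 2
    [0.0, 1.0, 0.0],  # sample 3: confident in class 1
])

# Second-order similarity between samples.
S = F @ F.T

# Hand-coded dissimilarity matrix: 1 where two samples are known to
# belong to different classes (the paper propagates a denser version).
D = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 0],
], dtype=float)

# Adversarial loss: sum_ij |D_ij * (FF^T)_ij|. It stays small when
# dissimilar pairs have near-zero similarity, and grows only if a pair
# is both highly similar and highly dissimilar.
loss = np.abs(D * S).sum()
print(loss)  # 0.2: only the (1, 3)/(3, 1) pair contributes 0.1 each
```

Minimizing this quantity jointly over $D$ and $F$ is what shrinks the solution space of the label confidence matrix.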
--- [1] 2023-NN-Partial label learning: Taxonomy, analysis and outlook \ [2] 2022-ICLR-PICO: Contrastive Label Disambiguation for PLL \ [3] 2022-ICLR-Exploiting Class Activation Value for PLL \ [4] 2012-Intelligent Data Analysis-Learning from ambiguously labeled examples \ [5] 2018-IJCAI-Leveraging Latent Label Distributions for PLL \ [6] 2022-TPAMI-Adaptive Graph Guided Disambiguation for PLL \ [7] 2022-KDD-Partial Label Learning with Discrimination Augmentation \ [8] 2019-AAAI-Partial Label Learning with Self-Guided Retraining \ [9] 2020-ICML-Progressive Identification of True Labels for PLL \ [10] 2021-ICML-Leveraged Weighted Loss for Partial Label Learning \ [11] 2019-IJCAI-Partial Label Learning by Semantic Difference Maximization \ [12] 2020-CIKM-Learning with Noisy Partial Labels by Simultaneously Leveraging Global and Local Consistencies Pdf: /pdf/1de0846fc9d27e2b932dbb77196ab8c0724399ab.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The manuscript delineates an innovative method, termed DPCLS, designed for partial label learning. In addressing the central issue of label disambiguation, the DPCLS method combines similarity and dissimilarity relationships in an adversarial manner, which endows it with superior performance in comparison to baseline methods. The manuscript manifests commendable quality through its comprehensive and well-crafted approach to partial label learning. The clarity is evident in the lucid and well-structured exposition of the proposed methodology. The originality lies in the adversarial learning of similarity and dissimilarity relationships, addressing the challenges posed in partial label learning. The paper's significance is underscored by its superior performance over baseline methods, along with its potential to inspire future research in this domain. Strengths: S1. The DPCLS method's novelty is encapsulated in its adversarial learning of similarity and dissimilarity relationships, effectively addressing the unique challenges in partial label learning. S2. The paper employs a sound theoretical approach with clear and well-structured explanations of the methodology and its components. S3. The evaluation is comprehensive, with comparisons to baseline methods providing a compelling demonstration of the superior performance of the DPCLS method. Weaknesses: W1. Although the paper provides a comprehensive explanation of the methodology, further technical insights regarding the implementation and specific algorithms within the DPCLS method would be beneficial. For example, does the order of Steps 4-9 have an influence on the final performance? The auxiliary matrix $A$, which is not easy to follow, needs more clarification. W2. The paper falls short in providing a detailed analysis of the limitations of the proposed approach, a factor that could be significant for future research and practical applications.
W3. Too many hyper-parameters in a method would somewhat degrade its quality. W4. A more detailed exposition of existing methods related to this work, including their characteristics and potential biases or weaknesses, would enrich the manuscript. W5. A clear motivation regarding real-world applications or potential application scenarios would strengthen the practical significance. W6. Some claims in Section 4 Theoretical Analysis need more clarification, e.g., the number of training samples and the dissimilarity matrix. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please find my comments on weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please find my comments on weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for your time and effort in reviewing our paper.** --- **W1: Steps of Algorithm 1 and explanation of the auxiliary matrix $A$** **Answer to W1:** * **The effect of different orders:** Steps 4-9 in **Algorithm 1** solve four subproblems, and empirically their order does not affect the final performance, since with any order the algorithm stops only once it has converged [1, 2]. To validate this statement, we reversed the order of Steps 4-7 and conducted experiments on the data sets Lost, Ecoli ($e$=0.8) and Steel ($e$=0.8). The experimental results are shown in **Table RA1** below, which confirms that the order of Steps 4-9 does not affect the experimental results. * **Clarification on $A$**: The variable $D$ in the initial problem (Eq. (6)) has many constraints ($\mathbf{0}_{m\times m} \leq D \leq \mathbf{1}_{m\times m}$, and $D_{ij}=D_{0,ij}$ if $D_{0,ij}=1$), making the $D$-subproblem very difficult to solve. To simplify the optimization, we introduce an auxiliary matrix $A$ with the constraint $A=D$, and transfer some of the constraints on $D$ to the variable $A$. The initial problem in Eq. (6) then equivalently becomes the problem in Eq. (7). Accordingly, the original subproblem regarding $D$ becomes two subproblems regarding $D$ (Eq. (13)) and $A$ (Eq. (15)), both of which are easy to solve. More details can be found in the code, which has already been submitted with the appendix. As suggested, we will provide more detailed explanations of the technical insights of **Algorithm 1**. 
--- **Table RA1: Classification accuracy with the original and reversed order** | Data set | Original order | Reversed order | |:----:|:----:|:----:| | Lost | .770$\pm$.024 | .770$\pm$.024 | | Ecoli (e=0.8) | .832$\pm$.014 | .832$\pm$.014 | | Steel (e=0.8) | .638$\pm$.022 | .638$\pm$.022 | --- **W2: The paper falls short in providing a detailed analysis of the limitations of the proposed approach, a factor that could be significant for future research and practical applications.** **Answer to W2:** \ We agree with the reviewer that every work has some limitations. For our work, one major limitation is the computational cost on large-scale data sets. Although in **Section A** of the **supplementary file** (Eq. (5)) we have made solving the QP problem for the $F$-subproblem more scalable, the sizes of the dissimilarity and similarity matrices are both $m\times m$, with $m$ the number of training samples, so constructing them and the associated computation do not scale to extremely large data sets. In practice, we can remedy this issue by handling the data sets through mini-batches or an anchor graph. Nevertheless, the heavy computational burden on large-scale data sets is still a limitation of our work. In the final version of our paper, we will discuss this limitation in the **Conclusion** section. --- **W3. Too many hyper-parameters in a method would somewhat degrade its quality.** **Answer to W3:** \ In the second-to-last paragraph of **Section 3**, we have discussed how to set the hyper-parameters of our method. Specifically, our method has 4 hyper-parameters $\alpha$, $\beta$, $\gamma$ and $k$, **three of which are fixed ($\lambda, k, \beta$), and $\alpha$ is selected from {0.001, 0.01}**. As a comparison, compared methods like SDIM [3] have two hyper-parameters, where one is selected from {0.001, 0.005, ..., 0.5} and the other is selected from {0.00001, 0.00005, 0.0001, ..., 0.1}. 
Therefore, our method does not need complex parameter tuning and can achieve better classification performance. Moreover, as shown in **Fig. S1** of the **supplementary file**, the performance of our method is robust to the hyper-parameters, making it quite easy to use in practice. --- **W4: A more detailed exposition of existing methods related to this work, including their characteristics and potential biases or weaknesses, would enrich the manuscript.** **Answer to W4:** \ Thank you for your suggestion. Please find the detailed **Related Work** in the **Global Response**. In the final version, we will add the detailed related works and discuss their characteristics. --- **W5: A clear motivation about real-world applications or potential application scenarios, would strengthen the practical significance.** **Answer to W5:** \ The research on PLL was initially motivated by several real-world problems. For example, in video face recognition, several persons may appear in a single frame with captions indicating their names. In this case, one person is annotated with several names as the candidate labels, and only one name is the correct label for this person [4]. Please refer to **Fig. R1** of the **Global Response (PDF)** for more detailed real-world applications. \ In fact, some data sets used in the experiments were collected from real-world scenarios, such as the data set Birdsong from the bird song classification task, FG-NET from the facial age estimation task, and Lost from automatic face naming in videos. \ In the final version, we will show more real-world PLL applications to strengthen the practical significance. --- **W6: Some claims in Section 4 Theoretical Analysis need more clarifications, e.g., the number of training samples, and dissimilarity matrix.** **Answer to W6:** \ Thank you for your suggestion. $m$ denotes the number of training samples, and $D$ is the dissimilarity matrix. 
We will add a detailed explanation for each variable in **Section 4** of the final version. --- [1] 2021-TPAMI-Partial Multi-Label Learning with Noisy Label Identification \ [2] 2023-Tcyber-Prior Knowledge Regularized Self-Representation Model for Partial Multilabel Learning \ [3] 2019-IJCAI-Partial Label Learning by Semantic Difference Maximization \ [4] 2017-TKDE-Disambiguation-Free Partial Label Learning --- Rebuttal Comment 1.1: Comment: Thanks for the response. I appreciate the idea of adversarial learning for similarity and dissimilarity relationships, but I also realize that the paper has some weaknesses, e.g., hyper-parameters and its applications in the real world. Overall, I am inclined to give an acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for your response, and we would like to express our gratitude for your willingness to accept our paper.
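For concreteness, the constraint structure on $D$ quoted in the W1 answer above (box constraints in $[0,1]^{m\times m}$ plus entries pinned to 1 wherever $D_{0,ij}=1$) can be illustrated with a small projection sketch. This is a hypothetical NumPy illustration of the constraint set only, not the authors' actual $D$/$A$ update; all names are illustrative:

```python
import numpy as np

def project_constraints(D, D0):
    """Project a candidate matrix onto the constraint set quoted above:
    0 <= D_ij <= 1, with D_ij fixed to 1 wherever the initial
    dissimilarity matrix D0 has a 1. (Illustrative only.)"""
    D = np.clip(D, 0.0, 1.0)        # box constraint [0, 1]^{m x m}
    D = np.where(D0 == 1, 1.0, D)   # keep the entries pinned by D0
    return D

D0 = np.array([[0, 1],
               [1, 0]])
D = np.array([[ 1.4, 0.2],
              [-0.3, 0.5]])
print(project_constraints(D, D0))  # -> [[1.  1. ], [1.  0.5]]
```

Splitting $D$ into $(D, A)$ with the coupling constraint $A=D$, as the rebuttal describes, then lets each subproblem enforce only part of this constraint set, which is what makes both subproblems easy to solve.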
Gaussian Process Probes (GPP) for Uncertainty-Aware Probing
Accept (poster)
Summary: This paper provides Gaussian process probes (GPP), a probabilistic method to evaluate uncertainty for a binary classification task over a (pre-trained) feature extractor. The core idea is to use a GP instead of a linear probe on the feature extractor. The use of a GP provides a natural way to estimate two uncertainty measures: aleatory and epistemic. To examine the performance of GPP, the authors conduct two experiments using synthetic and real images, showing that GPP can estimate uncertainty favorably. Strengths: 1. Novel formulation to evaluate uncertainty for linear probe problems. 2. The paper is well-written and easy to follow. A lot of illustrative figures help to understand the key concepts. 3. Solid derivation of the GP model and uncertainty estimation. Weaknesses: 1. The experiment results (Fig 5) look somewhat strange. For me, the LPE result doesn't make sense. The setting here is that only positive labels are noisy (flipped to negative with 50% probability), and the negative labels do not contain noise. In such a situation, the well-trained classifier will output 0.5 as the judged probability for positive and 0 for negative examples. This should occur when the classification problem is linearly separable on the feature space, and the number of observations is large enough. I assume both conditions are satisfied when the number of observations is 128, but I cannot see such a tendency from LPE. Also, due to the imbalanced noise, I expect to see a vertically asymmetric behavior for LPE (e.g. the upper limit of judged probability would be close to 0.5), but this is not the case. 2. This paper doesn't explicitly explain multiclass cases. Line 111 says extending GPP for multiclass problems is straightforward, but I don't feel it is trivial. Also, there are no experiments for multiclass cases. 3. Some technical descriptions are unclear. See "Questions" below. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. 
In 154, it is said the kernel k is defined as "k(a, a) = v", but the definition (3) tells us that "k(a, a) = v (||a||^2 + 1) / (||a|| + 1)^2", which is not v unless a = 0. Why does this inconsistency happen? 2. Line 144 says that GP with the kernel (3) is "equivalent to defining a distribution over linear latent functions", but then why does Fig 1 show some nonlinearity? 3. What is the training procedure of LPE in Fig 5? Did you confirm that LPE was properly trained? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors fairly address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. > LPE results in Figure 5, “the well-trained classifier will output 0.5 as the judged probability for positive and 0 for negative examples. This should occur when the classification problem is linearly separable on the feature space, and the number of observations is large enough. I assume both conditions are satisfied when the number of observations is 128, but I cannot see such a tendency from LPE.” In our experiments, we randomly flipped the labels for the positive class, but it is possible for the linear classifier to find features that linearly separate the activations with new labels. This is especially true if the dimension of the activations is larger than the number of observations. For example, a hyperplane in 3D almost always exists to separate 2 points. Since the number of dimensions for the activations is 256, and the number of observations is 128, it is very likely that there exists a hyperplane that separates the activations (unless the activations with positive labels are almost overlapping). Also note that points in high dimensions are naturally distant from each other. Roughly speaking, for logistic regression, it only requires the distance to be larger than 3 to predict the judged probability to be larger than 95% or lower than 5%. So it is not surprising that judged probability predictions of LPE are close to 0 or 1. > “due to the imbalanced noise, I expect to see a vertically asymmetric behavior for LPE (e.g. the upper limit of judged probability would be close to 0.5), but it is not.” As mentioned above, it is very likely for LPE to find a hyperplane that separates activations with the new labels. And since the queries are sampled from the same distribution as the observations with original labels, LPE is still going to output probabilities close to 0 or 1. > “ Line 111 says extending GPP for multiclass problems is straightforward, but I don't feel it is trivial. 
Also there are no experiments for multiclass cases” Extending the Beta GP component of GPP to multiclass only requires switching the Beta distribution to the Dirichlet distribution (Beta distribution is a special case of Dirichlet), and we can achieve this by using k latent functions instead of 2, where k is the number of classes. That is what we meant by being straightforward, and we will make this clearer in the paper. We aim to detect whether a model is able to represent a concept or not (probing as binary classification), so multiclass classification was not the best problem setup for us. However, we agree that the multiclass problem is a very important topic, and we need to formulate the problem carefully and spend more effort to understand and build tools for multiclass probing (in addition to the GP classification component). For example, some problems may require multiclass multilabel outputs, where the binary version of GPP can be directly plugged in for each label. But for multiclass single-label problems, we might need to redefine episteme for each label and understand the new relationship of episteme, alea and judged probability. We hope to explore the multiclass probing problem more in our future work, and we will add discussions about these considerations in the next version of this paper. > “1. In 154, it is said the kernel k is defined as "k(a, a) = v", but the definition (3) tells us that "k(a, a) = v (||a||^2 + 1) / (||a|| + 1)^2", which is not v unless a = 0. Why does this inconsistency happen?” Thank you for pointing this out. We made a typo in Equation (3). There should be a square root in the denominator to properly normalize, i.e., $k(a, a') = v\frac{a^T a' + 1}{(\lVert a\rVert^2+1)^{\frac12}(\lVert a'\rVert^2+1)^{\frac12}}$, which ensures that $k(a,a)=v$. This was implemented properly in our code (line 191 of code/gp.py, which is used by line 211 in the cosine_kernel function). We will correct this in the paper. > “2. 
Line 144 says that GP with the kernel (3) is "equivalent to defining a distribution over linear latent functions", but then why does Fig 1 show some nonlinearity?” This is because the linear latent functions ($f_\alpha$ in Equation 4) are only linear to the *normalized activations with bias terms*. The latent functions are nonlinear in the space of activations. > “3. What is the training procedure of LPE in Fig 5? Did you confirm that LPE was properly trained?” LPE is a bootstrap ensemble of linear probes. For each member of the ensemble, we trained a logistic regression classifier on a dataset sampled from the original set of observations with replacement. The size of the dataset was the same as the number of observations. Each ensemble has 100 members. We also made sure to include at least one positive and one negative example so that logistic regression is possible. We will make these clear in the new version of the paper. To validate the correctness of our LPE implementation, we performed sanity checks using LPE for model M.1 and task P.1 with no fuzzy labels. In this sanity check, LPE was able to achieve AUROC=1 with 40 or more observations. We include this result in the PDF attached to the “global response”, and the full figure can be found in Figure 15 in the appendix. Figure 14 of the appendix also illustrates that judged probabilities predicted by LPE are close to 1 for the majority of positive examples when ground truth probability is 1; if the ground truth probability is 0.25, the judged probabilities predicted by LPE are closer to 0. Collectively, these pieces of evidence support that LPE was properly trained in our experiments. --- Rebuttal Comment 1.1: Title: Raise my score Comment: Thank you to the authors for the response. The explanations and the additional experiments about LPE make sense. My major concerns are resolved, and I will raise my score.
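For concreteness, the corrected kernel normalization from this rebuttal thread can be sketched in a few lines. This is a minimal NumPy sketch under the correction stated above; the function name `cosine_kernel` echoes the one the authors mention, but the implementation here is illustrative:

```python
import numpy as np

def cosine_kernel(a, b, v=1.0):
    """Corrected kernel from the rebuttal:
    k(a, b) = v * (a.b + 1) / sqrt((||a||^2 + 1) * (||b||^2 + 1)),
    which guarantees k(a, a) = v for every activation vector a."""
    num = a @ b + 1.0
    den = np.sqrt((a @ a + 1.0) * (b @ b + 1.0))
    return v * num / den

a = np.array([3.0, -1.0, 2.0])
b = np.array([0.5, 4.0, -2.0])
print(cosine_kernel(a, a, v=0.7))              # -> 0.7 (up to floating point)
print(abs(cosine_kernel(a, b, v=0.7)) <= 0.7)  # Cauchy-Schwarz bound -> True
```

This is an ordinary cosine kernel on the bias-augmented vectors $(a, 1)$, which is consistent with the point made above about Fig. 1: the latent functions are linear in the normalized augmented activations yet look nonlinear in the raw activation space.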
Summary: This work introduces a unified framework called Gaussian process probes (GPP) for probing and quantifying uncertainty in models' representations of concepts. GPP extends linear probing methods and uses a Bayesian approach to estimate the distribution of classifiers induced by the model. This distribution measures the model's ability to represent concepts and provides a measure of confidence in the representation. GPP is a simple procedure applicable to any pre-trained models with vector representations, requiring no access to training data, gradients, or architecture details. The validation experiments on synthetic and real-world datasets demonstrate that GPP can effectively probe concept representations with a small number of examples, accurately measure both epistemic (confidence) and aleatory (fuzziness) uncertainties, and detect out-of-distribution data. Strengths: The paper addresses an important problem of understanding the inner representations of complex models. It does so by providing some estimates over two different types of uncertainty. It gives clear explanations for the epistemic and aleatoric uncertainties. The use of GPs and the study of two types of uncertainty in this particular context seems novel. Weaknesses: As a non-expert in the area of probing, I found some parts of the paper hard to follow (see Questions below). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you explain in more detail the choice of prior as outlined in Section 2.3.1. I'm not sure I understand what you mean by matching the Beta prior and matching the normal distribution in lines 125-126. I think this must impose important assumptions on the model, affecting the outcomes so it would be good to communicate it clearly. Are there any practical implications to considering the two types of uncertainty separately? Are there any cases where it might give counterintuitive results? 
Do you foresee any practical issues with using higher dimensional Dirichlet-based GPs? Can the inference still be performed in closed form? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors outlined limitations in the main part of the paper and provided a broader impact statement. From a practitioner's point of view, it would be important to outline the exact assumptions that the GP model imposes (I assume those will depend on the specific dataset/model). I believe this is done to some extent in the paper but I have further questions (as mentioned above). The quality of the writing could be improved, it requires a proof-read. Minor: Line 69 - best repeated Line 124 - uniformly distributed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. > “Can you explain in more detail the choice of prior as outlined in Section 2.3.1. I'm not sure I understand what you mean by matching the Beta prior and matching the normal distribution in lines 125-126.” The goal is to use a Log-normal distribution to approximate a Gamma distribution. Because a Beta variable can be written as a ratio of two independent Gamma variables, we can approximate the Beta variable with two Log-normal variables. The logarithms of Log-normal variables are distributed according to normal distributions. That is why we can approximate a Beta distribution with two normal distributions. Please see more details on how close the PDF approximations are in Section B.3 (especially Figure 9) in the appendix included in the supplementary material. > “Are there any practical implications to considering the two types of uncertainty separately? Are there any cases where it might give counterintuitive results?” The practical implications are that we can now distinguish between fuzziness in concepts (i.e., aleatory uncertainty) and not having enough information to reveal what the concept is (i.e., epistemic uncertainty). In the past literature, if a linear probe predicts 0.5 probability for the binary case, we would conclude that the model does not represent the concept. However, in this work, we showed that predicting 0.5 does not mean that the model cannot represent the concept, since some concepts can be intrinsically fuzzy. For example, some people think tomatoes are fruits but others don’t, but that does not mean people don’t have representations for fruits. Another situation is that the probe does not have enough observations to tell what the concept is. Please see the figure and second paragraph in the introduction for more insights. In short, the predictions from GPP can reveal much richer information than previous methods, and the rich information provides practical insights on what the model is able to represent. 
We have not seen cases where GPP gives counterintuitive results. > “Do you foresee any practical issues with using higher dimensional Dirichlet-based GPs? Can the inference still be performed in closed form?” For multiclass problems using Dirichlet-based GPs, the number of latent functions is the same as the number of classes. Hence the computation for inference grows linearly with the number of classes. The inference can still be performed in closed form, since those latent functions are independently distributed according to Gaussian processes, and the inference for each GP can be done in closed form. For higher dimensional activations, we don’t expect any issues since the GP computations rely on the kernel, and the kernel only requires computing norms or inner products of activations. > “writing” and typos Thanks for pointing this out. We will further polish the paper with additional rounds of proofreading. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I believe the paper raises interesting points and shows some appealing results. However, I am not able to evaluate the significance of the contribution in the context of existing literature, hence my low confidence stands.
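For readers curious about the Gamma-to-Log-normal approximation discussed in the rebuttal above, the standard moment-matching step (as in Milios et al., 2018) replaces a Gamma(α, 1) variable, whose mean and variance are both α, with a Log-normal whose mean and variance agree. A small sketch verifying this numerically; the formulas below are the standard moment-matching ones, and the paper's exact parameterization may differ:

```python
import numpy as np

def lognormal_params(alpha):
    """Log-normal parameters whose mean and variance both equal alpha,
    matching a Gamma(alpha, 1) variable:
      sigma2 = log(1/alpha + 1),  mu = log(alpha) - sigma2 / 2."""
    sigma2 = np.log(1.0 / alpha + 1.0)
    mu = np.log(alpha) - sigma2 / 2.0
    return mu, sigma2

alpha = 2.5
mu, sigma2 = lognormal_params(alpha)
rng = np.random.default_rng(0)
s = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=200_000)
print(s.mean(), s.var())  # both close to alpha = 2.5
```

Since a Beta variable is the ratio $X/(X+Y)$ of two such Gammas, two of these Log-normals, i.e., two Gaussians in log-space, yield the Gaussian pseudo-observations that keep GP inference in closed form.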
Summary: The manuscript proposes a Gaussian process-based probing (monitoring a layer using only the layer’s activations without influencing the model itself) method that can estimate uncertainty in prediction. In the experimental results section, the proposed method is applied to several example datasets that check whether the proposed method can correctly estimate fuzziness in concepts, and to out-of-distribution (OOD) detection. Strengths: I think that the text is easy to read and understand. This work can be significant because we need more tools to understand deep neural networks, which are black-box models. Uncertainty quantification is also a major selling point of the proposed method. However, I would have hoped that the manuscript provided monitoring and diagnostic tools to understand layers’ behaviors using estimated uncertainty. Weaknesses: 1. I think that the novelty of the proposed method is incremental. In fact, a number of works have been proposed based on GP or Deep GP models for uncertainty estimation (including decomposing uncertainty into aleatoric and epistemic uncertainty) and out-of-distribution (OOD) detection. The authors put together several ideas (from existing work) to create a new method that can be a good tool for certain problems. However, I did not find any new innovations or improvements in the methodology of the proposed method. 2. I think that the manuscript could be further improved in terms of presentation. My main concern is that the motivation for the use of Beta (Dirichlet) Gaussian processes is not clearly stated in the main text. Why not just use the original Gaussian process formulation for classification? This approach requires approximation for inference, but there already exist accurate and practical tools, e.g., expectation propagation (EP). 
If the main motivation concerns computational complexity (as in the reference paper for the Beta GP [Milios et al., 2018]), then the examples included in the experiments do not seem well designed to help readers appreciate this motivation (as the training datasets appear to be small). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In eq. (4), what is \mu(a)? I could not find its definition in the text. Regarding eq. (4) (and eq. (2)), it appears that we don’t need to consider two linear functions, W_alpha and W_beta, because they can be reduced to a single linear function f = f_alpha - f_beta = W^T \psi(x), where W = W_alpha - W_beta. What did I miss here? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes. No potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review and constructive feedback. We would like to emphasize that the novelty of this work comes from the application of different kinds of uncertainty and GPs in the context of probing. Our goal is not simply to measure the uncertainty of predictions from a neural net, where we agree GPs have previously been used. Rather, we designed GPP (Gaussian process probes) to measure uncertainty related to any "concepts" represented by a neural net from its activations -- a new way to "probe" neural nets. To the best of our knowledge, delineating different kinds of uncertainty for deep neural net representations via probing has not been studied in the past. Both intrinsic fuzziness in concepts and uncertainty as to which concept applies are ubiquitous cognitive phenomena, making it natural to ask how these quantities can be extracted from neural nets. When learning new concepts, it is natural for people to perform OOD detection (i.e., more colloquially, being able to say “I’m not sure since I haven’t learned it yet”). GPP fills the gap between the existing probing literature, which has measured the presence or absence of concepts but not uncertainty about those concepts, and people’s intuitive representation of concepts (where uncertainty of different kinds can easily be articulated). This is an important step for advancing the capabilities of explainable and interpretable AI. > “monitoring and diagnostic tools to understand layers’ behaviors using estimated uncertainty” We agree with the reviewer that deeper analyses and comparisons between layers are exciting directions for follow-on work. Since GPP can be used for any layer of activations in a deep neural network, our work builds a strong foundation from which to pursue these directions in the future. > “novelty” from a GP perspective It is straightforward to distinguish the two kinds of uncertainty in the **GP regression** framework. 
However, to the best of our knowledge, aleatory uncertainty (fuzziness of concepts) has not been studied in the **GP classification** literature. There also has not been previous work in the GP literature establishing the correspondence between human perceptions of uncertainty [Fox and Ülkümen, 2011] and GP classification predictions. > “the motivation for the use of Beta (Dirichlet) Gaussian processes”, “Why not just use the original Gaussian process formulation for classification?” Thank you for raising this point. Beta (or Dirichlet) Gaussian process classifiers have been shown to either outperform or achieve performance similar to classic GP classification approximations [Milios et al., 2018]. While our work is not limited to Beta GPs, we used the Beta (Dirichlet) GP mainly because, to the best of our knowledge, it is the state-of-the-art GP classification method. The other reason we decided to use the Beta GP is that its posterior inference can be done in closed form, so we can easily inspect the observations and posteriors to make sure the Beta GP is doing what we intended it to do. We will make this motivation clearer in the paper. > “In eq. (4), what is \mu(a)?” $\mu$ is the mean function of the Beta GP, first mentioned in the last paragraph of Section 2.2 and defined in Section 2.3.1. We will make this clearer for Eq. (4). > “Regarding eq. (4) (and eq. (2)), it appears that we don’t need to consider two linear functions, W_alpha and W_beta, because they can be reduced a single linear function f = f_alpha - f_beta = W^T \psi(x), where W=W_alpha - W_beta. What did I miss here?” That’s a very nice observation. We simplified the expression to a single linear function, but we have to use two different functions in the Beta GP, because each observed datapoint transforms into two pseudo-datapoints with heteroscedastic noise, one for $f_\alpha$ and the other for $f_\beta$ (Section 2.3.2). 
During inference, the posteriors of $f_\alpha$ and $f_\beta$ are updated accordingly. If we only use one function $f$, we cannot directly obtain the posterior of $f$ without updating $f_\alpha$ and $f_\beta$. Please see Section B.1 in the appendix for more details. *References* Craig R. Fox and Gülden Ülkümen. "Distinguishing Two Dimensions of Uncertainty," in Essays in Judgment and Decision Making, Brun, W., Kirkebøen, G., and Montgomery, H., eds. Oslo: Universitetsforlaget, 2011. Dimitrios Milios, Raffaello Camoriano, Pietro Michiardi, Lorenzo Rosasco, and Maurizio Filippone. Dirichlet-based Gaussian processes for large-scale calibrated classification. Advances in Neural Information Processing Systems, 31, 2018. --- Rebuttal Comment 1.1: Comment: I thank the authors for answering my reviews. I still think that the novelty of the submission is limited from the point of view of mythology. I will stick to my initial rating. --- Reply to Comment 1.1.1: Comment: We appreciate the discussion. In the comment, "mythology" seems to be a typo for methodology. To clarify, our method is based on adapting and applying state-of-the-art methodology in GP classification for **novel applications to solve a new problem** (uncertainty-aware probing) in the area of interpretable and explainable AI. We drew inspiration from cognitive science and probabilistic ML to define, compute and show insights on uncertainty in the context of probing. Novelty is NOT just about methodology. New problem formulations and new applications are also important criteria for novelty. In fact, the reviewer guidelines (https://nips.cc/Conferences/2023/ReviewerGuidelines) point out that originality is about new tasks or a novel combination of well-known techniques, besides new methods. 
It would be helpful if the reviewer could point out papers demonstrating that the novelty or contribution of our work (the score was 2/4) is limited. For example, which papers in the GP classification literature discussed aleatory uncertainty and its relations to judged probability and epistemic uncertainty, or have shown how to formulate and solve the problem of uncertainty-aware probing? Moreover, since the presentation was rated 2/4, it would be great if the reviewer could provide constructive suggestions for improving the presentation. It seems to us that the lack of motivation for Beta GP is not a core issue for "the writing style and clarity, as well as contextualization relative to prior work" in the description of what "Presentation" is in the reviewing guide.
Summary: - The authors propose a probabilistic probing method to understand a given pre-trained classifier. - The authors describe how looking at classifier predicted class probabilities is not enough since "0.5" in a binary task can happen for several reasons spanning the aleatoric/epistemic uncertainty spectrum - On the other hand, the proposed probabilistic method allows for posterior estimates of the classifier's predicted probabilities, which allows, for example, producing variance/entropy/etc - The particular method used is to compute functions g() of a classifier's representation a(x) for inputs x and to study the distribution of g() - The distribution of g() is defined through a hierarchical GP called the Beta GP - After giving an exposition of the model the authors introduce two sets of experiments: whether the probe can correctly identify that there is true label uncertainty in some synthetic experiments, and whether the model can correctly identify that data is OOD for a given classifier. While the proposed method is well motivated and properly defined, I have some major questions regarding evaluation (proper definition of all tasks and baselines). For now, I think the paper needs some clarification before acceptance, but would be glad to raise my score, given clarification from authors. Strengths: The paper's strengths are: - Beta-GP-based probing method is very clearly defined - All of the computations e.g. 
posterior computations are precisely stated in the appendix - The overall motivation of the work (probabilistic probing) is solid - The introduced entropy/variance based metrics that pull apart some aspects of aleatoric versus epistemic uncertainty, adapted from previous literature, seem like a nice contribution to uncertainty/probing evaluation metrics Weaknesses: There are two downstream uses of the method - checking correlation of probes' reported uncertainty versus the true label uncertainty for synthetic/semi-synthetic datasets - OOD detection The precise definitions for baselines in uncertainty estimation experiments were not given (namely LPE) and more definitions are necessary to correctly interpret the variance/entropy-based metrics. See "Questions". For OOD detection, the superiority over baselines is exemplified but the baselines are out of date. The synthetic dataset is not named, and some details are missing from the ImageNet result to understand exactly how the pretrained model checkpoints were run on the binarized data. See "Questions". Minor writing style suggestion that does not affect my review: In sentences like "There are important details in Beta GPs that require special attention: how to set the prior and how to approximate the posterior." Since there are a few distributions floating around that were recently introduced to the reader, it could be helpful to include the symbols in mid-sentence: "There are two important details in Beta GPs that require special attention: how to set the prior (mu and k(,) in GP(mu,k)) and how to approximate the posterior p(f_alpha, f_beta | D, mu, k)." And include a sentence after: "With samples from the posterior (f_alpha, f_beta) we can then approximate the distribution of g()~G with the log normals parameterized by f_alpha, f_beta." This could help the reader piece everything together. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The results for the first experimental section (uncertainty quantification) seem convincing relative to the baselines but one major issue - LPE is not cited, and the authors do not mention that they propose it. There is only a high-level description given. I was trying to look for a precise definition of the method since it is taken as one of the main baselines, but cannot find it anywhere. - Please give a citation for LPE, or precisely define it and state that this is a baseline proposed by the authors. - This is moreover important since one cannot arbitrarily compare model entropies/variances with each other for continuous models without more assumptions (same base measure, etc). And for this, precise definition of all distributions is important. For OOD detection, two main groups of concerns, task definition and baselines: Task issue 1: - there seem to be two datasets, a synthetic one and ImageNet - this section doesn't explicitly name which synthetic dataset is used. I only assume by continuity that it is the shapes dataset from the previous subsection. Please name all datasets in all sections/captions/figures they are used in rather than just refer to "synthetic data". - The authors say "we generate another set of queries that include 1024 ID images and 1024 OOD images (uniform random noise)" but don't say what the in-distribution is. Task issue 2: - For ImageNet there are also some missing details that stop me from understanding the experiment. - I understand this: "Queries and observations are sampled disjointly from the validation split of the ImageNet dataset" - I understand this + I read the appendix section: "We make 10 set of Ds using 10 binary classification tasks defined by supersets of ImageNet classes" - How are the pre-trained classifiers used? Did you specifically pull checkpoints that were also trained on the binarized problems? 
Or did you somehow pool together a multiclass classifier's probabilities for the underlying model + renormalize? What happened to the probabilities assigned to the other classes? - Please let us know and then revise the paper to clarify the exact use of the pre-trained models versus the data and what exactly was computed, including any equations if there were any transformation steps from multiclass checkpoints to the binary task, or clarify that binary classification model checkpoints were used that correspond to the same binarization that you applied to the data - Also, is the out-distribution in both datasets just uniform noise, or uniform noise added to ID data? Baselines: The baselines seem a little out of date, with the two non-LPE baselines (again, where is LPE from?) being from 2016 and 2018. It's okay to include older baselines as part of a broader evaluation, but why not also include recent methods from 2020-2023 such as any of the below? I might be missing context on this sub-area of ML, but at least in others I review actively, it's fairly rare to find a paper that only evaluates against methods from <=2018. Several well-cited recent methods are documented below. Maybe not all are applicable as baselines for your particular setup/model assumptions/method assumptions (e.g. black box vs having access to something), but please clarify. - 2018, ODIN, Enhancing the reliability of out-of-distribution image detection in neural networks. https://arxiv.org/abs/1706.02690 - 2018, A simple unified framework for detecting out-of-distribution samples and adversarial attacks. https://papers.nips.cc/paper_files/paper/2018/hash/abdeb6f575ac5c6676b747bca8d09cc2-Abstract.html - 2019 Likelihood Ratios for Out-of-Distribution Detection https://proceedings.neurips.cc/paper_files/paper/2019/file/1e79596878b2320cac26dd792a6c51c9-Paper.pdf - 2020, Energy-based out-of-distribution detection. 
https://proceedings.neurips.cc/paper/2020/hash/f5496252609c43eb8a3d147ab9b9c006-Abstract.html - 2021, REACT, React: Out-of-distribution detection with rectified activations. https://proceedings.neurips.cc/paper/2021/hash/01894d6f048493d2cacde3c579c315a3-Abstract.html - 2022, Dice: Leveraging sparsification for out-of-distribution detection. In ECCV, https://arxiv.org/abs/2111.09805 - 2022, Out-of-distribution detection with deep nearest neighbors. https://arxiv.org/abs/2204.06507 - 2022, Scaling out-of-distribution detection for real-world settings, https://arxiv.org/abs/1911.11132 - 2022, Vim: Out-of-distribution with virtual-logit matching, https://arxiv.org/abs/2203.10807 - 2022, a review: https://arxiv.org/pdf/2110.11334.pdf In short, it seems like good work, but it seems like the authors know more than the readers (about task definitions, datasets, why newer baselines were not used). Please clarify. Glad to increase my score if sufficient answers are given to these concerns. Most concerns can be answered without experiments except the "old baselines" concern, which requires either experiments or a commitment to include newer baselines. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review and constructive feedback. We would like to first clarify that our primary goal is not to perform OOD detection but to understand which concepts a model can and cannot represent (i.e., probing). We designed GPP (Gaussian process probes) to measure uncertainty of “concepts” by using activations/representations as a medium -- a new way to "probe". Probing and delineating different kinds of uncertainty for deep neural net representations has not been well studied, but both fuzziness in concepts and unsureness are ubiquitous cognitive phenomena. When learning new concepts, it is natural for people to perform OOD detection (i.e., more colloquially, being able to say “I’m not sure since I haven’t learned it yet”). GPP fills in the gap between the existing probing literature and a person’s intuitive understanding of concepts. This is an important step toward advancing the capabilities of explainable and interpretable AI. > “precise definition” and “citation for LPE” and “distributions” in LPE Thank you for pointing out this omission. Linear probe ensembling (LPE) adopts a standard bootstrap aggregating method to ensemble linear classifiers. The same method was used as a component in Kim et al., 2018. Tran et al., 2022 used ensembling and an entropy score for OOD detection. We will include both citations. Precise description of LPE: LPE is an ensemble of linear probes. For each linear probe, we train a logistic regression classifier on a dataset sampled from the original set of observations with replacement. The size of the dataset is the same as the number of observations. Each ensemble has 100 linear probes. By bootstrapping, the linear probes in LPE can be viewed as i.i.d. samples from an underlying distribution over classifiers (denoted as $g$ in the paper). 
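The LPE procedure described above (bootstrap resampling, one logistic-regression probe per resample, uncertainty from ensemble disagreement) can be sketched in a few lines. This is an illustrative numpy-only reconstruction, not the paper's code; the data, sizes, and function names are made up for the example.

```python
# Hypothetical sketch of LPE as described in the rebuttal: a bootstrap
# ensemble of linear (logistic-regression) probes trained on activations.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_linear_probe(X, y, lr=0.5, steps=300):
    """Logistic regression via plain gradient descent (bias folded into X)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        w -= lr * Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
    return w

def fit_lpe(X, y, n_probes=100):
    """Each probe is trained on a same-size resample drawn with replacement."""
    n = len(y)
    return [fit_linear_probe(X[idx], y[idx])
            for idx in (rng.integers(0, n, size=n) for _ in range(n_probes))]

def lpe_predict(probes, X):
    """Class-1 probabilities: one row per probe, one column per query."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.stack([sigmoid(Xb @ w) for w in probes])

# Toy activations: class determined by the first coordinate.
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(float)
probes = fit_lpe(X, y, n_probes=20)
P = lpe_predict(probes, X[:10])
mean_prob = P.mean(axis=0)   # ensemble prediction per query
epistemic = P.var(axis=0)    # disagreement across probes ~ epistemic uncertainty
```

Since each probe is (approximately) an i.i.d. draw from a distribution over classifiers, the variance/entropy metrics of Section 2.4 can be computed from the rows of `P` exactly as they are for GPP posterior samples.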
Since variance/entropy-based metrics, etc., are computed with classifier samples, we can use those linear classifiers in LPE to compute those metrics in the same way as GPP (Section 2.4). > “writing style suggestion” Thank you. We will polish the writing accordingly. > “Task issue 1” on names of datasets and what is in distribution We thank the reviewer for their comment, and will clarify these points in the paper. The synthetic dataset is the 3D Shapes dataset [Burgess and Kim, 2018]. The ID examples for ImageNet are images that come from the same distribution that the probe observes. For example, consider a probe trained to classify "dogs" vs "cats"; images of dogs and cats would be ID, whereas random noise would be OOD. > “Task issue 2”: “How are the pre-trained classifiers used?” The pre-trained models are off-the-shelf ImageNet classifiers trained on all 1000 fine-grained classes (e.g., "hummingbird"). The probe takes as input the model's intermediate activations (features), and is trained to classify between two coarse-grained classes (e.g., "bird" vs "cat"). These coarse-grained classes are constructed using the WordNet hierarchy (https://wordnet.princeton.edu/). > “Did you specifically pull checkpoints that were also trained on the binarized problems?” No, as mentioned above, we used off-the-shelf ImageNet classifiers trained on all 1000 fine-grained classes. These classifiers are the pre-trained models used for probing. > “did you somehow pool together a multiclass classifier's probabilities for the underlying model + renormalize? What happened to the probabilities assigned to the other classes?” No, the probe used the activations of the pre-trained model. Our goal was to see if the probe can distinguish between ID/OOD data so that it is a reliable probe. We did not need the probability outputs of the original pre-trained model for our purpose. 
> “is the out-distribution in both datasets just uniform noise, or uniform noise added to ID data?” The OOD data in both datasets is pixel-wise uniform noise, not a noisy version of the ID data. > “OOD detection baselines” For OOD detection baselines, we included MSP (maximum predicted softmax probabilities using LP) [Hendrycks and Gimpel, 2016], Maha (negative Mahalanobis distance-based score) [Lee et al., 2018] (this is the 2nd paper pointed out by the reviewer) and LPE (negative predicted variance from linear probe ensembles) [Tran et al., 2022, Kim et al., 2018]. As shown in the extensive analyses of OOD detection methods and tasks in Appendix E of Tran et al., 2022, Maha and LPE (LPE is equivalent to their “Entropy” method for ensembles) achieved top performance, surpassing the more recent Ren et al., 2021. We included preliminary results on “deep nearest neighbors” [Sun et al., 2022] pointed out by the reviewer in the PDF, and we will add it to the revised version of the paper. We thank the reviewer again for the helpful suggestions and we will address those in detail in our revised version. *References* Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning (ICML), 2018. Dustin Tran, Jeremiah Liu, Michael W Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, et al. Plex: Towards reliability using pretrained large model extensions. arXiv preprint arXiv:2207.07411, 2022. Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy, and Balaji Lakshminarayanan. A simple fix to mahalanobis distance for improving near-ood detection. arXiv preprint arXiv:2106.09022, 2021. Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. 
In Advances in Neural Information Processing Systems (NeurIPS), 2018. Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep nearest neighbors. In International Conference on Machine Learning (ICML), 2022.
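The two non-LPE baseline scores discussed in this rebuttal, MSP (maximum softmax probability) and the negative Mahalanobis-distance score with a tied class covariance, can be sketched as follows. This is our own illustrative reconstruction under standard definitions of those scores, not the paper's code; the toy features and names are made up.

```python
# Illustrative sketch of the MSP [Hendrycks & Gimpel, 2016] and Mahalanobis
# [Lee et al., 2018] OOD scores computed on feature activations.
# Higher score = more in-distribution for both.
import numpy as np

def msp_score(logits):
    """Max softmax probability per row, with a shift for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def maha_score(train_feats, train_labels, query_feats):
    """Negative Mahalanobis distance to the nearest class mean,
    using a shared (tied) covariance estimated on the training features."""
    classes = np.unique(train_labels)
    means = np.stack([train_feats[train_labels == c].mean(axis=0) for c in classes])
    centered = np.concatenate([train_feats[train_labels == c] - means[i]
                               for i, c in enumerate(classes)])
    cov = centered.T @ centered / len(centered)
    prec = np.linalg.pinv(cov)
    # Quadratic form (x - mu)^T prec (x - mu) for each query and class mean.
    d = np.stack([np.einsum('ij,jk,ik->i', query_feats - m, prec, query_feats - m)
                  for m in means])
    return -d.min(axis=0)

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))          # toy stand-in for ID activations
labels = rng.integers(0, 2, size=100)
id_queries = rng.normal(size=(5, 8))
ood_queries = rng.normal(size=(5, 8)) * 10.0  # far from the class means
id_scores = maha_score(feats, labels, id_queries)
ood_scores = maha_score(feats, labels, ood_queries)
```

On this toy data the Mahalanobis score for ID queries is higher than for OOD queries, which is the property the OOD-detection experiments rely on.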
Rebuttal 1: Rebuttal: We are encouraged that the reviewers found our work novel (**Reviewers 5Gmc, VzJz, zzXk**), significant (**3D6e**), well-motivated (**iFt6, VzJz**), well-written and easy to follow (**zzXk, 3D6e**). Moreover, **Reviewer 5Gmc** acknowledged that our measures of uncertainty for probing are interesting and useful, and enable “obtaining insights into the function of a deep learning model that could not have been obtained before”. We are also pleased that reviewers recognized that our method was well-presented (**5Gmc**), clear (**iFt6, VzJz**) and solid (**zzXk**). In the attached PDF, we included the following figures: 1. Figure 1 (for Reviewers iFt6, 5Gmc) presents additional results on OOD detection with a recent baseline method [Sun et al., ICML 2022] suggested in iFt6. 2. Figure 2 (for Reviewer zzXk) shows that LPE is a valid probing method and supports the fact that it was properly trained. We really appreciate the suggestions and questions from the reviewers, and we reply to them individually below. We will incorporate all feedback in the new version of the paper. Pdf: /pdf/d8cfbf300e808036cde04f50a83653b772443764.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors introduce Gaussian process probes, a probabilistic probing method. They use this method to obtain additional insights into the internals of deep learning models, using the concepts of aleatoric and epistemic uncertainty. Strengths: * This is an interesting and useful addition to the literature on probing methods. It enables obtaining insights into the function of a deep learning model that could not have been obtained before. * The method is novel to the best of my knowledge. * The method is well-presented. The methods section is well-structured and clear. Weaknesses: * Other parts of the paper could use better writing. In particular, I thought the intro and the experiments section could use additional polishing. * The paper could benefit from more extensive experiments. At the very least, it would be standard to test the method on more than one dataset. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * What other methods and datasets could serve as benchmarks, and how would the method perform there? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No concerns about addressing limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. > “writing” Thank you for these suggestions. We will polish both the intro and experiment sections. > “test the method on more than one dataset” In the paper, we conducted experiments on 2 standard datasets and 1 set of photographic images for demo: (1) 3D Shapes [Burgess and Kim, 2018], (2) ImageNet [Russakovsky et al., 2015], and (3) online or proprietary real world images. For (1), we constructed **3 datasets based on concept ontology of the 3D Shapes dataset** to set up experiments where we can train several CNN models to get representations for specific tasks. The 3D Shapes dataset helps us to validate the method with a full control over the ground truth labels (with different ontologies) and levels of label noise. The results are shown in Figure 4-7. For (2), we constructed **10 datasets defined by coarse-grained labels of the ImageNet dataset** (Section C.2) to verify the usefulness of epistemic uncertainty predictions from GPP, and performed experiments on OOD detection tasks. The results can be found in Figure 7. For (3), we collected **online and proprietary images** (that the models have never been trained on) to demonstrate the predictions from GPP. The demos can be found in the introduction, Figure 3 and Table 1. Hence in total, we have 14 different settings in which we evaluated the approach, using several different datasets. We will clarify this point in the paper. > “What other methods and datasets could serve as benchmarks and how would the method perform there” As mentioned above, we used 3 types of datasets (14 settings in total) and we will make this clearer in the paper. 
For baseline methods, we included established SOTA or near-SOTA methods, including LP (linear probes), SVM probes, LPE (linear probe ensembles) [Kim et al., 2018], MSP (maximum predicted softmax probabilities using LP) [Hendrycks and Gimpel, 2016], and Maha (negative Mahalanobis distance-based score) [Lee et al., 2018]. As shown in the extensive analyses of OOD detection methods and tasks in Appendix E of Tran et al., 2022, Maha and LPE (LPE is equivalent to their “Entropy” method for ensembles) achieved top performance, surpassing more recent proposals from Ren et al., 2021. We will include more OOD detection baselines such as “deep nearest neighbors” [Sun et al., 2022] in the revised version of the paper. Preliminary results on “deep nearest neighbors” can be found in the PDF from the “global” reply. We thank the reviewer again for the helpful suggestions and we will address them in our paper accordingly. *References* Chris Burgess and Hyunjik Kim. 3D Shapes dataset. https://github.com/deepmind/3dshapes-dataset/, 2018. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning (ICML), 2018. Dustin Tran, Jeremiah Liu, Michael W Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, et al. Plex: Towards reliability using pretrained large model extensions. arXiv preprint arXiv:2207.07411, 2022. Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy, and Balaji Lakshminarayanan. 
A simple fix to mahalanobis distance for improving near-ood detection. arXiv preprint arXiv:2106.09022, 2021. Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep nearest neighbors. In International Conference on Machine Learning (ICML), 2022. --- Rebuttal Comment 1.1: Comment: Thank you, I acknowledge having read the response. I continue to support acceptance of the paper. I do think the paper could benefit from an improved presentation of the results (e.g., the figure that shows ImageNet results does not include the word "ImageNet" as far as I can tell).
Stochastic Multi-armed Bandits: Optimal Trade-off among Optimality, Consistency, and Tail Risk
Accept (spotlight)
Summary: This paper tackles the problem of trading off problem-dependent and worst-case regret and "tail risk" of the regret in bandits. Here, the tail risk means the probability that the regret is larger than $\Omega(T^\delta)$ for some $\delta>0$. Recent studies have shown that the usual algorithms in bandits unavoidably have linear regret with probability $O(T^{-1})$, and that without changing the scaling in $T$ of the confidence bounds of standard UCB algorithms one cannot hope to do better than a polynomially decreasing tail risk. Knowing if an exponentially decreasing tail risk is achievable is an interesting question in my opinion, and the authors answer it favorably in the paper. Given a worst-case upper bound of $O(T^\alpha)$ for $\alpha\geq1/2$ and a problem-dependent bound of $O(T^\beta)$ for $\beta>0$, the authors determine the values $(\delta, \gamma)$ for which a bound of the kind $P(R(T) \geq cT^\delta)\leq \exp(-CT^\gamma)$ is achievable. They then provide a successive elimination strategy based on the UCB principle that matches the best guarantees established for given values of $(\alpha, \beta)$. They then provide an extension of their algorithm for a structured non-stationary setting. Strengths: The paper has several merits. In my opinion it tackles an interesting problem in an original way. Though it is a bit notation-heavy, overall every notion is clearly stated and relatively easy to understand. The theorems are also clear, both regarding the lower bounds and the upper bounds for the SEwRP algorithm. I think that the focus on this kind of successive elimination strategy is a good thing for the clarity of the paper. Overall, I have a positive opinion of the paper, even if I believe that it requires some changes and clarifications before publication (see comments below). 
Weaknesses: Even if the paper is relatively easy to follow, the message of the paper is in my opinion not so clear and some changes may largely improve its clarity. * In my opinion, starting from $(\alpha, \beta)$ to derive the bounds on the tail risk is not very natural. It seems more natural to first consider a given tail-risk constraint (e.g. imposed by a law-maker), and then try to propose an algorithm satisfying this constraint with the best regret guarantees. Furthermore, I tend to think that once we drop the possibility of logarithmic regret then the problem-dependent guarantees may not matter much. For this reason, I would tend to suggest a much simpler version of the paper where the authors would simply provide the best achievable worst-case regret bound given a tail risk. In my opinion, this would make the paper just as interesting, and much clearer. * From reading the main paper the intuition of the proofs and the technically difficult points are not very clear. I think that extended discussions on the results may bring value to the paper. * Following the previous point, to me it is not very clear that SEwRP is actually needed, and maybe a much simpler explore-then-commit strategy may satisfy all criteria. * Section 4 does not bring much insight to the table and is more suited for the appendices in my opinion. I suggest removing this section in order to provide more intuition on the results and their proofs in previous sections. * Practicality of the algorithms: it is folklore that the confidence bounds of UCB are already generally rather conservative. With such inflated confidence bounds, it is natural to wonder what happens when running the algorithm for reasonably large horizons. It is a bit disappointing that the experiment section is only in the appendix, and only presents regret distributions (which is interesting) and not standard regret curves too. 
In particular, I wonder if the regret is really sub-linear for e.g. $T=10^4$ and $9$ Bernoulli distributions with $\mu=0.5$ and one with $\mu=0.6$. Furthermore, a comparison of the performance of SEwRP and standard bandit algorithms (or just standard UCB) is necessary in my opinion to also assess the "experimental" trade-off, even if their theoretical guarantees are not comparable and it is certain that the safe algorithm will perform worse on average. The previous points are more opinions than objective statements, so I would be happy to discuss them with the authors in the discussion phase. ------------- Post-Rebuttal ---------------- The authors provided convincing answers to the previous points in their rebuttal, making me believe that none of these points is a major issue. Although I am still unsure about the practicality of the algorithm, I believe that the theoretical contribution of this work is strong enough so that it will be a nice NeurIPS paper. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See previous section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable time and review. We would like to provide responses to your comments, which are helpful and enlightening. - We’ve indeed tried adopting what you suggested, but ended up not doing so for two reasons. - The relation between the tail probability and the tail threshold is complicated even for the worst-case scenario. Since the optimal log tail probability scales linearly with $x/T^{1-\alpha}$, if we were to specify the tail-risk constraint, it is unclear how we should set the threshold $x$ and how we should make the tail probability scale with $T$. Further, our tail bound holds for *any* threshold $x$, and thus it seems we should specify a group of tail-risk constraints instead of only one, which can possibly make the problem formulation a bit artificial. - If instance-dependent consistency is considered, the tail probability becomes more complicated. We are not quite sure about the reviewer’s claim that “once we drop the possibility of logarithmic regret then the problem-dependent guarantees may not matter much”, but we would like to note that even if the instance-dependent scenario is not considered, consistency still has a significant effect on the tail probability (see Table 1). Further, considering consistency apart from optimality allows us to demonstrate how the adaptiveness of a policy to different instances affects the tail probabilities. This has not been studied in previous works. - We would like to note that the main proof ideas appear in the paper (Lines 237-269). The tail risk of an algorithm is incurred by two types of events: (i) spending too much time before correctly discarding a sub-optimal arm; (ii) wrongly discarding the optimal arm. For the first type of events, we focus on the phase $n_0$ when a sub-optimal arm is not eliminated by the optimal arm (see Line 248). For the second type of events, we focus on the phase $n_0$ when the optimal arm is eliminated by some other arm. 
In fact, it is particularly when bounding the probability of the second type of events that our new bonus design, compared to standard ones, allows light-tailed tail risk with optimal rate on the regret threshold $x$ and time horizon $T$. Finally, we note that the goal of defining $A^*$ in the worst-case scenario is to make sure our analysis aligns with our new bonus design (we have a $\sqrt{K}$ factor in the denominator) and our tail probabilities have optimal dependence on the number of arms $K$ (see also [24] in the paper and Lines 242-243). - Applying the explore-then-commit (ETC) strategy is indeed a good idea to circumvent heavy-tailed risk. However, this approach has two main issues: - From the regret expectation perspective, the ETC policy can only achieve an $O(T^{2/3})$ worst-case expected regret bound. When $\alpha\in[1/2, 2/3)$, we still have to consider other types of policies. Further, it seems unlikely that without knowing the arm gaps $\Delta_k$, the explore-then-commit policy can achieve $\tilde O(T^{\alpha})$ worst-case regret and $\tilde O(T^{\beta})$ instance-dependent regret simultaneously with $\beta < \alpha$. - From the regret tail risk perspective, the tail probability of incurring a large regret may not be optimal in the worst case. Consider a simple $2$-armed bandit case with arm gap $\Delta$. An ETC policy with $m = \Theta(T^\alpha)$ ($\alpha\in[2/3, 1)$) steps of exploration has a probability of $\Omega(\exp(-m\Delta^2))$ of committing to a wrong arm. In the worst-case scenario, let $\Delta = T^{(\alpha-1)/2}$ and the regret threshold be $x = T^{(1+\alpha)/2} / 4 \in (m\Delta, (T-m)\Delta)$. Then if we incur a regret of $x$ it means we committed to a wrong arm after the exploration, which suggests $\mathbb P(R_\theta^\pi(T)>x) = \Omega(\exp(-m\Delta^2)) = \Omega(\exp(-T^{2\alpha-1})) = \omega(\exp(-x/T^{1-\alpha}))$. 
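The two-armed ETC argument above can be checked numerically. The following Monte Carlo sketch is our own illustration (not from the paper or the rebuttal): for unit-variance Gaussian arms with gap $\Delta$, the difference of the two sample means after $m$ pulls per arm is $N(\Delta, 2/m)$, so ETC commits to the wrong arm with probability $\Phi(-\Delta\sqrt{m/2})$, which decays only like $\exp(-m\Delta^2/4)$.

```python
# Monte Carlo sketch of the ETC misidentification probability for a
# 2-armed Gaussian bandit with means (delta, 0) and unit variance.
# Exact value: Phi(-delta * sqrt(m/2)); decays like exp(-m * delta^2 / 4),
# which is the source of ETC's heavy regret tail when m * delta^2 is small.
import numpy as np

rng = np.random.default_rng(0)

def etc_wrong_commit_prob(m, delta, n_runs=20000):
    """Empirical probability that the suboptimal arm has the larger
    sample mean after m exploration pulls of each arm."""
    best = rng.normal(delta, 1.0, size=(n_runs, m)).mean(axis=1)
    other = rng.normal(0.0, 1.0, size=(n_runs, m)).mean(axis=1)
    return np.mean(other >= best)

# With a fixed gap, increasing the exploration budget m shrinks the
# wrong-commit probability, but only at the exp(-m * delta^2 / 4) rate.
p_small = etc_wrong_commit_prob(m=10, delta=0.2)    # ~Phi(-0.447) ~ 0.33
p_large = etc_wrong_commit_prob(m=200, delta=0.2)   # ~Phi(-2)     ~ 0.023
```

Plugging in the rebuttal's worst-case choices $m = \Theta(T^\alpha)$ and $\Delta = T^{(\alpha-1)/2}$ gives $m\Delta^2 = \Theta(T^{2\alpha-1})$, matching the $\Omega(\exp(-T^{2\alpha-1}))$ lower bound on the tail probability stated above.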
- Section 4 shows that our policy design reaches beyond the standard MAB case and is able to handle (structured) non-stationarities. We regard the results in Section 4 as an addition to Section 3 that demonstrates the generality of our results. In the next version, we are considering adding discussions about the proofs in Section 3 and why ETC cannot apply in our setting, and briefly describing the experiments. - On the practicality of the algorithms: - We note that in both Figures 1 and 2, the first column mimics the standard SE policy. This is because we set $\beta=0$, which makes $\text{rad}(n)$ dominated by its second term ($\propto\sqrt{\ln T/n}$). In Figures 1 and 2, one can observe that there is an experimental trade-off: larger $\alpha$ (more sub-optimality) allows more light-tailed behavior on extreme values, and larger $\beta$ (more inconsistency) gives rise to more concentration. - Even with inflated bonus terms, the expected regret is comparable to standard policies. This phenomenon is indicated by comparing the blue distributions between the first and last columns in Figures 1 and 2. We also ran the experiment suggested by the reviewer. We take $\alpha\in\{1/2, 2/3\}$ and $\beta\in\{0, 1/6, 1/3, 1/2\}$. For each fixed $\alpha$ and $\beta$, we traverse $\eta_1$ and $\eta_2$ through $\{0.05, 0.1, 0.2, 0.4, 0.8\}$. For each $(\alpha, \beta, \eta_1, \eta_2)$, we run 1000 simulations and record the empirical mean regret, so for each $(\alpha, \beta)$ we have 25 numbers. Then for each $(\alpha, \beta)$, we take the minimum of the 25 numbers (that is, we choose the best-performing $(\eta_1, \eta_2)$). The results are listed below. We note again that when $\beta=0$, our policy can be regarded as the standard SE policy. As we can see from the results, for any fixed $\alpha$, as we increase $\beta$ and put more emphasis on the first term, the expected regret stays approximately the same, suggesting that our policy does not sacrifice expected regret. 
| $\alpha$ \ $\beta$ | 0 | 1/6 | 1/3 | 1/2 |
| --- | --- | --- | --- | --- |
| 1/2 | 169.287 | 154.451 | 156.414 | 163.058 |
| 2/3 | 271.661 | 276.071 | 272.956 | 271.200 |

--- Rebuttal Comment 1.1: Title: Post-rebuttal comment Comment: I acknowledge reading the other reviews and the rebuttal. Thank you very much for your insightful answers, which encouraged me to revise my score and vote for acceptance. More precisely, * Thank you for the precision on the presentation of the guarantees; I see your point and now agree that your presentation may indeed be the better option. * Thank you for detailing the technical contribution. * Thank you for your detailed answer on ETC, which properly motivates using your method instead. * I understand your point of view on Section 4; my comment was a minor issue. To be completely honest, I am still quite unconvinced about the practical aspect of the algorithms, and still believe in my initial intuition. Although I appreciate your effort in providing experimental results, I believe that a more extensive set of experiments may be necessary to convince me (and the problems that you consider in your short experiment section are rather easy). However, I do not believe that this is a major issue, and the theoretical contribution of this work is enough to make it a good NeurIPS paper (so please don't spend time on this). --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and comments. We appreciate your comment on the practical aspect as well as the consideration. Indeed, our experiments provide simple illustrations of algorithm performance, and we hope they may represent the performance for a wider range of scenarios. We look forward to exploring a more extensive set of experiments in future work, as suggested by the reviewer. Thanks again!
Summary: This paper explores the stochastic multi-armed bandit (MAB) problem. The authors investigate the relationship between three main properties of policy design: worst-case optimality, instance-dependent consistency, and light-tailed risk. - The authors characterize the interplay between worst-case optimality, instance-dependent consistency, and light-tailed risk, showing that relaxing the worst-case or instance-dependent regret order can lighten the regret tail in an information-theoretic sense. - A novel policy is designed that achieves an optimal trade-off among worst-case optimality, instance-dependent consistency, and light-tailed risk. - The theory is generalized to a MAB model that allows for non-stationary baseline rewards, i.e., a reward component common to all arms in each time period. Strengths: This paper makes contributions to the understanding of the stochastic multi-armed bandit problem and presents a novel policy design, which provides further insight into worst-case optimality, instance-dependent consistency, and light-tailed risk. The authors have provided a detailed characterization of the interplay among worst-case optimality, instance-dependent consistency, and light-tailed risk. They successfully illustrate how different levels of these properties affect tail risk, and have determined an optimal trade-off among them. The authors propose a novel policy, Successive Elimination with Random Permutation (SEwRP), that achieves the optimal regret tail risk for any regret threshold, exhibiting desirable qualities in both worst-case and instance-dependent scenarios. This policy builds upon the concept of successive elimination, introducing novel bonus terms to balance the three key properties of policy design. 
For any given $\alpha$ and $\beta$, the proposed policy obtains optimal worst-case regret and instance-dependent regret, while also achieving the best possible regret tail probability for both scenarios. The authors have successfully generalized their analysis to include a stochastic multi-armed bandit problem with non-stationary baseline rewards. This extension could prove useful in various applications dealing with structured non-stationarity. Weaknesses: Though the paper presents a novel and intriguing perspective, its primary reliance on theoretical analysis may be viewed as a limitation. Incorporating empirical validation of the proposed algorithm within the main body of the text could offer a more persuasive argument by substantiating the theoretical outcomes. One limitation of this study is the unresolved question of whether the presented results can be extended to 'any-time' scenarios, where the policy has no prior knowledge of the time horizon $T$. The inability to validate the presented approach in the 'any-time' setting, a more realistic and complex scenario, leaves a gap in the research. This complexity should be addressed in future work to enhance the general applicability and robustness of the proposed algorithm. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The paper is well-written, and the authors make the content accessible and easy to understand. Given your paper's discussion in L228 on the phase transition regarding the size of the confidence interval in the design of the novel bonus term, could you elaborate on its interpretation and implications? Specifically, how does this phase transition impact the overall performance and robustness of the proposed policy? Additionally, could you provide further insight into the practical significance of this transition from the dominance of the second term to the first term in real-world applications? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors seem to have adequately addressed the limitations of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable time and review. We would like to provide responses to your comments and questions, which we find very helpful to us. - Incorporating empirical validation: In the appendix, we provide detailed numerical experiments. We would like to emphasize that in both Figures 1 and 2, the first column indeed corresponds to the standard SE policy. This is because we set $\beta=0$, which makes $\text{rad}(n)$ dominated by the second term $\propto \sqrt{\ln T/n}$. One can observe that there is an experimental trade-off: larger $\alpha$ (more sub-optimality) allows more light-tailed behavior on extreme values, and larger $\beta$ (more inconsistency) gives rise to more concentration. - Insight of the phase transition: The phase transition suggests that in the first phase, where the second term dominates the first term, we focus more on exploration within the consistency constraint; in the second phase, where the first term dominates the second term, we focus more on exploitation within the optimality condition. While this is distinct from the explore-then-commit paradigm, where in the first phase we do pure exploration and in the second phase we do pure exploitation, our policy design suggests that in real-world practice, to achieve more light-tailed risk, it might be beneficial to have two different phases in the policy design: more exploration at the beginning, and more exploitation afterwards. --- Rebuttal Comment 1.1: Title: Post-rebuttal comment Comment: I appreciate the response from the authors. The response addressed all my questions and concerns, and I will keep my initial score for the paper. --- Reply to Comment 1.1.1: Comment: Thanks again for your time and comments!
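To make the two-phase picture concrete, here is a toy numerical illustration. The two terms below are stand-ins chosen only to reproduce the qualitative behavior described in the rebuttal (the exploration term $\sqrt{\ln T/n}$ dominating early, a $\beta$-controlled term taking over late); they are not the paper's actual bonus formulas, and the constants are arbitrary.

```python
import math

def rad(n, T, beta, eta1=1.0, eta2=1.0):
    # Hypothetical two-term bonus: term2 is the usual exploration radius,
    # term1 is a beta-controlled floor. With beta = 0 the floor is tiny and
    # term2 dominates for every n <= T, mimicking standard SE behavior.
    term1 = eta1 * math.sqrt(math.log(T) / T ** (1 - beta))  # dominates for large n
    term2 = eta2 * math.sqrt(math.log(T) / n)                # dominates for small n
    return max(term1, term2)

T = 100_000
for beta in (0.0, 1 / 3):
    # first n at which the floor overtakes the exploration radius, i.e.
    # the phase transition; None means term2 dominates over the whole horizon
    crossover = next((n for n in range(1, T)
                      if math.sqrt(math.log(T) / T ** (1 - beta))
                      >= math.sqrt(math.log(T) / n)), None)
    print(f"beta={beta}: first term takes over at n={crossover}")
```

With these stand-in forms the crossover sits at roughly $n \approx T^{1-\beta}$, so $\beta = 0$ never transitions while larger $\beta$ switches to the conservative (exploitation-guarding) phase earlier.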
Summary: The submission studies stochastic multi-armed bandits. The arm-selection policy is required to be worst-case optimal and instance-dependent consistent, and to have low tail risks (worst-case and instance-dependent) simultaneously. Lower bounds for achieving these goals at the same time are provided in Theorem 3.1. The submission proposes a policy (SEwRP) that matches the lower bounds (Theorem 3.3 and Proposition 3.4). The main results are also generalized to non-stationary scenarios of baseline rewards (Theorem 4.1 and Proposition 4.2). Strengths: - (a) Studying the interplay between worst-case bounds, consistency, and risk provides a new perspective on understanding MAB. - (b) A novel confidence interval (rad(n)) is designed to address different criteria at different phases of the learning process. - (c) Besides the rigorous analysis, the treatment of the confidence interval and the choice of tail event to tighten the bound constitute the technical contributions. - (d) Fluent arguments and clarifying intuitions provide a comfortable reading experience. Weaknesses: - (e) The submission, with its supplementary material, is a complete and self-contained paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - (f) In Algorithm 1, does i represent the number of elimination iterations, and t the number of arm pulls? - (g) Is it correct that the reward's formulation (Line 124) covers the common Gaussian/normal reward? - (h) What is the role of the empirical regret? It is defined in Line 130 but does not appear in the key results and derivations. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable time and review. We would like to provide responses to the three questions (f), (g), and (h), which we find very helpful. - Yes, indeed you are correct. We will add more discussion for illustration in the next version. - Yes. Our model assumes sub-Gaussian noise, and thus includes the special case of pure Gaussian/normal rewards. - We define the empirical regret for completeness in describing the formulation. In practice, the DM can only observe the empirical reward $\sum_{t}r_{t, a_t}$, and thus the empirical regret can be naturally defined. We then focus on the pseudo-regret by arguing that the sum of noise terms (genuine noise) is in general negligible in the worst-case scenario or inevitable in the instance-dependent scenario (Lines 141-145). --- Rebuttal Comment 1.1: Title: Post-rebuttal comment Comment: I want to thank the authors for their feedback on all reviews and all reviewers' comments. The feedback clarifies all my questions and provides insights from the other reviews' comments. I would like to keep my original decision.
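The pseudo-regret vs. empirical-regret distinction in the rebuttal above can be illustrated with a small sketch. The arm means, horizon, and action sequence below are arbitrary placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([0.9, 0.5])                  # arm means; arm 0 is optimal
T = 1000
actions = rng.integers(0, 2, size=T)       # some arbitrary action sequence
rewards = rng.normal(mu[actions], 1.0)     # observed (noisy) rewards

pseudo_regret = np.sum(mu.max() - mu[actions])   # quantity used in the analysis
empirical_regret = T * mu.max() - rewards.sum()  # what the DM actually experiences

# The two differ only by a sum of zero-mean noise terms, which is why the
# theoretical analysis can safely focus on the pseudo-regret.
print(pseudo_regret, empirical_regret)
```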
Summary: This paper presents an insightful investigation into the trade-off between optimality, measured by expected regret, and risk, defined as the probability of large regret, in the context of algorithm design for the multi-armed bandit problem. The authors have made several significant contributions: * They have demonstrated an "impossibility" result (Theorem 3.1), which indicates that a lower expected regret and a reduced rate of large regret cannot be achieved simultaneously. * The authors have proposed an algorithm capable of achieving a Pareto trade-off between expected regret and tail rate. * They have also extended these findings to a structured non-stationary bandit setting. Strengths: I find the authors' results compelling and their technical contributions to be of high value. This is indeed a noteworthy piece of work. The results could provide a general guideline (what can and cannot be achieved) for MAB algorithm design. Weaknesses: NA Technical Quality: 3 good Clarity: 3 good Questions for Authors: * It would be beneficial to provide a clearer explanation of the intuition behind the "impossibility" result. * Please elucidate the necessity of the random permutation within the proposed algorithm. * It would be interesting to explore the influence of the non-stationary constant $b_t$ on the performance of the algorithm. While Lemma 4.3 provides valuable insight, it would be helpful to understand why the result in Lemma 4.3 holds true even when $b_t$ follows an arbitrary (even adversarially chosen) sequence. * A numerical example illustrating the superior tail behavior of regret under the proposed algorithm, as compared to the conventional successive elimination algorithm, would be a beneficial addition. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable time and review. We would like to provide responses to the four questions, which we find very helpful for improving our work. - Thanks for the question. In Table 1, we provide the critical values of the log tail probabilities, which serve as a more intuitive way to illustrate the impossibility results in Theorem 3.1 and Corollary 3.2. In the worst-case scenario, the log tail probability can be approximately regarded as $-x/T\cdot\text{(worst-case expected regret)}$, while in the instance-dependent scenario, the log tail probability can be approximately regarded as $-\text{(instance-dependent expected regret)}$. The intuition is that when the regret expectation becomes less optimal and less consistent, more room is left for alleviating tail risks. - The randomization step is crucial to hedge against the baseline rewards $\{b_t\}$, against which deterministic algorithms may fail. The uniformly random permutation leads to two advantages that facilitate the analysis: (1) in each phase, the baseline rewards are transformed into "random noises''; (2) after each phase, the numbers of times any two arms in the active set have been pulled are the same. With uniform permutation, although the estimate for any single arm is still biased, the difference between the estimates for two different arms becomes unbiased. - As suggested in the paragraph above, the randomization step is independent of the baseline rewards, and so it protects the policy against any potentially adversarial choice of $b_t$. In fact, the randomization idea is useful and necessary for hedging against an adversarial environment (see, e.g., [1]). - In the appendix, we provide a range of numerical experiments. We would like to emphasize that in both Figures 1 and 2, the first column indeed mimics the standard SE policy. This is because we set $\beta=0$, which makes $\text{rad}(n)$ dominated by the second term $\propto \sqrt{\ln T/n}$. 
One can observe that there is an experimental trade-off: Larger $\alpha$ (more sub-optimality) allows more light-tailed behavior on extreme values and larger $\beta$ (more inconsistency) gives rise to more concentration. [1] Krishnamurthy A, Wu ZS, Syrgkanis V (2018) Semiparametric contextual bandits. International Conference on Machine Learning, 2776–2785 (PMLR) --- Rebuttal Comment 1.1: Comment: Your responses addressed all my questions. I would like to keep my score.
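The unbiasedness-of-differences point made in the rebuttal above can be checked with a small simulation. The constants are illustrative, and the rewards are noiseless so that all randomness comes from the permutation itself.

```python
import numpy as np

rng = np.random.default_rng(0)
K, phases = 3, 2000
mu = np.array([1.0, 0.5, 0.2])              # arm-specific mean rewards
b = rng.uniform(-5, 5, size=phases * K)     # arbitrary baseline sequence b_t

est = np.zeros(K)
for p in range(phases):
    order = rng.permutation(K)              # uniformly random pull order per phase
    for slot, arm in enumerate(order):
        t = p * K + slot
        est[arm] += mu[arm] + b[t]          # reward = arm mean + baseline
est /= phases

# Each individual estimate is biased by the average baseline, but the
# pairwise difference concentrates around the true gap mu[0] - mu[1] = 0.5,
# because each arm occupies each within-phase slot with equal probability.
print(est[0] - est[1])
```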
NeurIPS_2023_submissions_huggingface
2023
Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models
Accept (oral)
Summary: This paper studies the "task vectors" framework, where the weights of models can be perturbed in specified directions corresponding to tasks, which results in improvements on those tasks. They attribute the success of this framework to "weight disentanglement", which means that adding a task vector for task i does not change how the network behaves when seeing input for task j != i. In addition, their investigation reveals that task vectors are not a consequence of fine-tuning occurring in the NTK regime (post-hoc linearization underperforms), but explicitly linearizing fine-tuning improves the task vectors framework, as it "amplifies" weight disentanglement. Strengths: Overall I thought this paper was very interesting, its claims are mostly precise and thoroughly investigated, and its results are impressive (+5.8 for task addition). I thought Figure 5 was particularly nice evidence of the authors' hypothesis. Weaknesses: - Limited experiments for many tasks. I would be very curious to see what happens in e.g. Table 1 if you scale up the number of tasks. - Limited attempt to falsify the hypothesis -- for instance, can you consider two tasks which share the same input images? Would task vectors still work? (See question in questions section.) - Throughout the paper the authors mention variations of "Specifically, we probe the hypothesis presented in Wortsman et al. [80] that task arithmetic is possible thanks to the fact that models inherently operate in a linear regime", but I cannot actually find anywhere in [80] where this hypothesis is stated. I thought that Wortsman et al. [79, 80] were observing that ensembles behaved roughly similarly to averages in the fine-tuning regime, and simply used the NTK regime as an example of where this would occur, but noted that there were still differences between the two (which is not predicted by the NTK regime). 
How do the authors feel about the hypothesis that averaging models (~= task vectors) ~= output-ensembling + terms-which-may-be-small (e.g., Section 4 of [79])? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Do the authors' findings point to any "better" way of adding task vectors rather than just adding them for all weights in the network? - How does weight disentanglement change with scale? - Can the authors think of an experiment which might falsify their hypothesis? E.g., do they think that task vectors might still "work" if two tasks share the same input space? Here's a quick thought: get one task vector which corresponds to the problem "first 5 classes in CIFAR10 or second 5 classes" and another corresponding to the task "traditional CIFAR10". If you apply both these task vectors and then look at CIFAR10 performance on just the first 5 classes, I think performance would still be as good as just applying the CIFAR10 task vector. - Should the community switch to linearized fine-tuning? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Would have been nice to see a limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
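For concreteness, the task-vector operations this review discusses (addition and negation, as introduced by Ilharco et al. [39]) can be written in a few lines. The weight vectors below are tiny illustrative placeholders, not real model checkpoints.

```python
import numpy as np

# Minimal sketch of task arithmetic on raw weight vectors.
theta_pre = np.array([0.0, 1.0, -1.0])      # pre-trained weights (toy)
theta_taskA = np.array([0.5, 1.0, -1.0])    # fine-tuned on task A (toy)
theta_taskB = np.array([0.0, 1.4, -1.0])    # fine-tuned on task B (toy)

tau_A = theta_taskA - theta_pre             # task vector for A
tau_B = theta_taskB - theta_pre             # task vector for B

# Task addition: edit the pre-trained model toward both tasks at once
# (0.5 is a mixing coefficient usually tuned on held-out data).
theta_edit = theta_pre + 0.5 * (tau_A + tau_B)   # -> [0.25, 1.2, -1.0]

# Task negation (forgetting): move away from task A.
theta_forget = theta_pre - tau_A                 # -> [-0.5, 1.0, -1.0]
```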
Rebuttal 1: Rebuttal: We appreciate that the reviewer reported that our paper is very interesting and has impressive results, and their engagement to improve it. Below, we address their comments. **Adding more tasks** For the paper, we are using the experimental setting of Ilharco et al. [39], where task arithmetic was introduced. Namely, our task-addition experiments already involve a set of 8 different vision tasks. Yet, while the scalability of task arithmetic across a larger number of tasks is an appealing question, it goes beyond the paper’s objective and is left for future investigations. In the revised version of our manuscript, we will also include results on NLP tasks and using other convolutional architectures. **Disjoint task support hypothesis** Our theoretical analysis specifically focuses on non-overlapping scenarios, consistent with the case which is most studied in the empirical literature. In fact, the hypothesis of having disjoint task supports is satisfied in both Ilharco et al. [39] and our work. Encompassing overlapping task supports – which might require a slight change in the definition of weight disentanglement – remains an interesting problem. However, we do not see studying only the non-overlapping case as a weakness. Indeed, in the context of vision-language tasks, the input space is the Cartesian product of all images and captions, which is a high-dimensional space in which the overlap between tasks might be minimal. We will ensure to add this justification in the revised version. **Previous linearity hypothesis** Wortsman et al. [80] observe in Appendix F that even though averaging weights and averaging output functions in practice are not exactly the same, these two methods are not as different as they appear. In particular, they report that these methods are equivalent in the NTK regime and this regime might be an accurate approximation of fine-tuning. Similarly, in Appendix A of Ilharco et al. 
[39], the authors justify their results on task arithmetic based on the NTK hypothesis, for which they also cite Wortsman et al. [80], which shares many of the same authors. Concerning the question on the hypothesis introduced in [79], we do not have arguments to refute it. In fact, we believe that weight disentanglement might play a role in this setting as well if different models are specializing to different regions of the input space. In that case, the terms-which-may-be-small would be the disentanglement error. **Effect of scale on weight disentanglement** Our results reveal that by scaling the number of model parameters, the performance of linearized fine-tuning becomes closer to that of standard non-linear fine-tuning. As commented briefly in Appendix D.1, one plausible interpretation is that larger models, which are more over-parameterized, inherently induce a stronger kernel behavior during fine-tuning. Namely, since the models have more parameters, each parameter has to change less to fit the training examples. As a result, they tend to stay closer to the NTK approximation, closing the gap with linearized models and benefiting from the better weight disentanglement of the models lying in the tangent space. As also suggested by Reviewer [DjkZ](https://openreview.net/forum?id=0A9f2jZDGW&noteId=v6Lz3dFhri), we conducted supplementary experiments that visualize how weight disentanglement varies as the model scale changes (see Figure R.2 of the *Author Response document* attached [here](https://openreview.net/forum?id=0A9f2jZDGW&noteId=7E6o5YEkJw)). Consistent with our results, larger models exhibit stronger weight disentanglement, as highlighted by the larger light region in the right panel of the first row of Figure R.2. Yet, interestingly, the linearly fine-tuned models are always more weight-disentangled than the non-linearly fine-tuned ones, highlighting the strength of linearized models for model editing. 
We will add this discussion and the new results to the paper. **Adoption of linearized fine-tuning** Linearized fine-tuning, as shown in our paper, gives multiple benefits for, e.g., ensembling, composition, and forgetting (negation). Notably, for convex losses, this method also enjoys a convex optimization landscape and could be used to provide further theoretical guarantees in the future. However, we want to remark that studying the performance of linear vs. non-linear models is an ongoing line of research (see, e.g., references in Related Work – Linear vs non-linear regime). In fact, linearized fine-tuning may not be uniformly superior in all settings and architectures, although it seems to be the case for the models we studied in the present work. All in all, we hope that our findings will motivate further empirical exploration to discern in which cases linearized fine-tuning competes effectively with standard non-linear fine-tuning. **Limitations** Our limitations are clearly reported in the paper. In particular, we devoted a section in the Appendix to the constant-factor computational complexity increase of linearized models and discussed the limitations of Proposition 1 as a remark in the main body of the text. Following the reviewer's suggestion, we will emphasize those points further in the revised version of our work. We thank the reviewer for their valuable feedback and remain available to answer further questions or provide more clarifications regarding the previous points.
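The linearized (tangent-space) fine-tuning discussed in this thread replaces the network by its first-order Taylor expansion around the pre-trained weights, $f_{\text{lin}}(x;\theta) = f(x;\theta_0) + \nabla_\theta f(x;\theta_0)^\top(\theta-\theta_0)$. Below is a minimal numerical sketch with a toy two-parameter "model" (not the paper's implementation; the finite-difference Jacobian is for illustration only).

```python
import numpy as np

def f(x, theta):
    # tiny nonlinear "network" with scalar output
    return np.tanh(theta[0] * x) + theta[1] * x

def jacobian(x, theta, eps=1e-6):
    # central finite-difference gradient with respect to the parameters
    grads = []
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        grads.append((f(x, theta + d) - f(x, theta - d)) / (2 * eps))
    return np.array(grads)

theta0 = np.array([0.3, -0.2])   # "pre-trained" weights (toy)
theta = np.array([0.35, -0.15])  # "fine-tuned" weights (toy)
x = 1.7

# first-order Taylor expansion of f around theta0, evaluated at theta
f_lin = f(x, theta0) + jacobian(x, theta0) @ (theta - theta0)

# close to the true output for small parameter changes
print(f_lin, f(x, theta))
```

In practice this Jacobian-vector product is computed with automatic differentiation (e.g. forward-mode `jvp`) rather than finite differences.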
Summary: This paper presents a comprehensive theoretical and empirical analysis of task arithmetic for model editing, where adding different task vectors (obtained by taking the difference between fine-tuned and pretrained model checkpoints) could improve the model’s performance on these tasks and vice versa. The authors propose weight disentanglement to investigate the underlying principles of task arithmetic, which involves decomposing the learned model function into a sum of localized components with disjoint supports. Specifically, the authors compare **regular** and **post-hoc linearized** fine-tuned models and find that weight disentanglement significantly contributes to the ability of task arithmetic. Based on these insights, the authors then propose to directly employ the linearized model for fine-tuning, obtain optimized task vectors, and lead to improved task arithmetic performance. Further analyses are conducted to reveal the connection between task arithmetic and weight disentanglement. Strengths: - The paper is clearly written and well organized. - This work presents a neat analysis of task arithmetic based on the use of linearization and neural tangent kernels, which is novel to my knowledge and interesting, offering a fresh perspective on understanding the geometry of pre-trained checkpoints’ weights. - The proposed method to further improve task arithmetic is simple yet effective, supported by extensive empirical findings. Weaknesses: As the authors already mentioned, one potential weakness is the introduced computational overhead during training. However, since the main focus of this work is a comprehensive analysis of task arithmetic, this is not a significant issue in the context of this work. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. How does model scale impact weight disentanglement, and thus the benefits of linearized fine-tuning? 2. 
(Minor) Can the findings about task arithmetic be generalized to natural language tasks as well? By intuition, it seems that the setting of natural language texts is more straightforward for weight disentanglement due to the inherent discreteness of text tokens. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
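Weight disentanglement, the property this review highlights, can be illustrated with a function that is disentangled by construction: each parameter only influences one region of the input space, so a "task vector" touching one parameter leaves the other task's outputs untouched. This is a toy construction for intuition, not the paper's model or its Eq. 6 metric.

```python
import numpy as np

def f(x, theta):
    # theta[0] only affects negative inputs, theta[1] only non-negative ones
    return theta[0] * np.tanh(x) * (x < 0) + theta[1] * np.tanh(x) * (x >= 0)

theta0 = np.array([1.0, 1.0])
tau_A = np.array([0.5, 0.0])         # "task vector" touching only theta[0]

xs_B = np.linspace(0.1, 3.0, 50)     # task B supported on positive inputs
err = np.max(np.abs(f(xs_B, theta0 + tau_A) - f(xs_B, theta0)))
print(err)   # 0.0: editing with tau_A does not change task-B outputs
```

A real network is only approximately disentangled, and the paper's disentanglement error quantifies how far this idealization is from the truth.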
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's recognition of our work and their engagement in improving it! Below, we address their comments. **Effect of scale on weight disentanglement** Our results reveal that by scaling the number of model parameters, the performance of linearized fine-tuning becomes closer to that of standard non-linear fine-tuning. As commented briefly in Appendix D.1, one plausible interpretation is that larger models, which are more over-parameterized, inherently induce a stronger kernel behavior during fine-tuning. Namely, since the models have more parameters, each parameter has to change less to fit the training examples. As a result, they tend to stay closer to the NTK approximation, closing the gap with linearized models and benefiting from the better weight disentanglement of the models lying in the tangent space. As also suggested by Reviewer [DjkZ](https://openreview.net/forum?id=0A9f2jZDGW&noteId=v6Lz3dFhri), we conducted supplementary experiments that visualize how weight disentanglement varies as the model scale changes (see Figure R.2 of the *Author Response document* attached [here](https://openreview.net/forum?id=0A9f2jZDGW&noteId=7E6o5YEkJw)). Consistent with our results, larger models exhibit stronger weight disentanglement, as highlighted by the larger light region in the right panel of the first row of Figure R.2. Yet, interestingly, the linearly fine-tuned models are always more weight-disentangled than the non-linearly fine-tuned ones, highlighting the strength of linearized models for model editing. We will add this discussion and the new results to the paper. **Generalization to NLP tasks** Task arithmetic and our findings on weight disentanglement can be readily generalized to NLP tasks. 
In order to show the generality of weight disentanglement, we conducted a new experiment on a pre-trained T5 base model from Hugging Face, fine-tuned on two benchmark NLP tasks (sentiment analysis on movie reviews and question answering). The results, illustrated in the right panel in Figure R.1 of the *Author Response document*, show a notable region around the pre-trained checkpoint characterized by low disentanglement error. This finding echoes the ability of T5 to perform task arithmetic as demonstrated in Ilharco et al. [39] (Appendix D.6), thereby reinforcing the robustness of our conclusions. We will report this result in the revised version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed feedback. After carefully reading through the clarification as well as the other reviews, most of my concerns have been addressed, and these new experimental results further strengthen the presented analyses. I thus raised my score.
Summary: This paper theoretically and empirically investigates the reasons why task arithmetic (an emerging technique for editing pre-trained neural networks) works. The paper shows that, contrary to previous hypotheses [39,79,80], linearity of the fine-tuning on individual tasks is not sufficient to fully explain the success of task arithmetic (Sec 3). Instead, the authors propose the related and straightforward idea of weight disentanglement (Eq 4) as an explanation (Sec 4), also providing a measure of the disentanglement error for two tasks (Eq 6). This idea is leveraged to improve task arithmetic performance by constraining the fine-tuning to linearized models - which importantly is shown to be an improvement over post-hoc linearization, with marginal additional computational complexity (Sec 5). Finally, an additional connection between task arithmetic and the eigenspace of the Neural Tangent Kernel is used to argue that weight disentanglement (and hence the capacity for task arithmetic) are properties learned during pre-training, not inherent properties of parameterization or architecture (Sec 6). The empirical results focus on the CLIP image-text Vision Transformer model family, and many experimental details are reported, including in the additional material in the appendix. Strengths: This paper is the best kind of NeurIPS paper; beautifully written and a delight to read, the authors have considered a timely and pertinent problem in an emerging machine learning domain, applied a methodical theoretical investigation; clarified prior hypotheses in the literature; used the theoretical findings to propose a simple novel methodology, and compellingly evidenced the subsequent effectiveness with an appropriate evaluation protocol. As a cherry on the cake, the methodology is a 'drop in' method that can be applied easily to existing approaches, and actual code is provided without breaking review anonymity in the supplementary material. Bravo. 
### Originality * There are multiple practical take-aways that are immediately useful; the task disentanglement error metric (Eq 6), intuition for why task vector coefficients should be << 1 (Ln 181), simple drop-in code in the appendix that enables reproduction and immediate application of the ideas (Listing 1 in the appendix). ### Quality * The work appears to be of high quality; the metrics and empirical results appear to support the theoretical findings and claims. ### Clarity * This paper is superbly well written, and the logical argument and flow is well constructed. This paper was a delight to read and think about. ### Significance * Multiple compelling and genuine avenues for future research are identified (Ln 239, Ln 286, Ln 315, Ln 354) * Provides a timely and much-needed (i) overview of the emerging and rapidly evolving technique of task arithmetic with pre-trained models, and (ii) the beginnings of a theoretical grounding and understanding for why this technique is possible. Weaknesses: * The main weakness of the paper in its present form is that the empirical results are limited to ViT models (specifically, CLIP). Within this modality, the empirical results are compelling (e.g. using multiple model families and 8x distinct datasets/tasks), and provide evidence for the theoretical findings; however, the paper's authority and strength would be enhanced by demonstrating that some of the key findings hold with another architecture or data type. E.g. I would love to see the results in Table 3 (Sec 6.2) replicated, even in a small way, with GPT or BERT on a text dataset. This would provide further strength for the empirical claims around eigenfunction localization. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Ln 336 on 'Feature disentanglement'; to what degree is feature disentanglement enabled by over-parameterization / very large numbers of parameters? If this is the case, what is the parameter count relative to - i.e. 
some measure of the complexity of the input space or the function space represented by the data? Is there a way to quantify with a metric the 'capacity' for feature disentanglement of a given NN architecture? These could be useful areas for investigation. 2. Related to the above point, Ln 233 notes that the advantage of the linearized model task arithmetic approach diminishes as the number of model parameters increases. I find this curious and would have expected the opposite result - do you have any intuition why this is the case? 3. Pseudo-code or actual PyTorch code for Eq 6 would be a very helpful (and I assume straightforward) addition to your supplementary material - this would be of immediate practical help to other researchers working on practical task arithmetic applications. 4. Related to the above, Ln 297 - can the degree of local linear independence of the NTK eigenfunctions be measured/computed with a simple metric to test this condition in other models / datasets? This would likewise be a useful contribution of the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: * It's not clear to me if there are situations where the linearization technique won't be readily applicable. E.g. are some activation functions or more esoteric neural network architectures (e.g. recurrent NNs; long-term memory components) going to be problematic to linearize? Some negative examples of where NTK linearization can't be used and/or approximations or work-arounds could be a useful addition in this regard. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
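Question 3 above asks for code computing the disentanglement error of Eq. 6. As a purely illustrative sketch (a hypothetical two-parameter toy model, not the paper's actual implementation), the metric — the prediction-space distance between the jointly edited model and each single-task edit, averaged over each task's data and summed over tasks — could look like:

```python
import math

def f(x, theta):
    # hypothetical toy model standing in for a neural network's output
    return math.tanh(theta[0] * x + theta[1] * x * x)

def disentanglement_error(theta0, tau1, tau2, a1, a2, D1, D2):
    """Sketch of Eq. 6: xi(a1, a2) sums, over the two tasks, the mean
    distance on that task's data between f(.; theta0 + a1*tau1 + a2*tau2)
    and the single-task model f(.; theta0 + a_t*tau_t)."""
    def edit(theta, scaled_taus):
        return [p + sum(a * tau[i] for a, tau in scaled_taus)
                for i, p in enumerate(theta)]
    joint = edit(theta0, [(a1, tau1), (a2, tau2)])
    solo1 = edit(theta0, [(a1, tau1)])
    solo2 = edit(theta0, [(a2, tau2)])
    err1 = sum(abs(f(x, joint) - f(x, solo1)) for x in D1) / len(D1)
    err2 = sum(abs(f(x, joint) - f(x, solo2)) for x in D2) / len(D2)
    return err1 + err2
```

Evaluating this on a grid of (a1, a2) values yields the kind of disentanglement heatmap discussed in the rebuttals; the error is zero exactly when applying both task vectors leaves each task's predictions unchanged.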
Rebuttal 1: Rebuttal: We really appreciate the reviewer’s enthusiasm and acknowledgment of the significance of our work and their engagement to improve it! Below, we address their comments. **Generality of our results beyond CLIP/ViT models** We would like to emphasize that all our theoretical results are directly applicable to any model which satisfies Property 1 (Task arithmetic) regardless of pre-training scheme or architecture. Notably, any model satisfies Property 1 if and only if it is weight disentangled. As suggested, in order to show that our results are robust to the architecture choice and input modality, we have replicated our experimental analysis using convolutional neural networks and large language models: - We have repeated all our experiments using a CLIP ConvNeXt model pre-trained on LAION-400M. Remarkably, also for this architecture weight disentanglement is stronger in the tangent space to the pre-trained checkpoint (see left panels in Figure R.1 of the *Author Response document* attached [here](https://openreview.net/forum?id=0A9f2jZDGW&noteId=7E6o5YEkJw)) and linearized fine-tuning enhances task arithmetic (see Table R.1 of the *Author Response document*). We will present the complete set of new results in the revised version. - In addition, in order to investigate the effects of pre-training, we are repeating the same analysis for non-contrastively pre-trained (closed-vocabulary) ViT and ConvNeXt models, and we will add these results to the final version as well. As suggested, we will also make explicit what architectures we consider, both in the abstract and in the main text. - To substantiate the generality of weight disentanglement, we conducted a new experiment on a pre-trained T5-Base model from Hugging Face, fine-tuned on two benchmark NLP tasks (sentiment analysis on movie reviews and question answering). 
The results, illustrated in the right panel in Figure R.1 of the *Author Response document*, show a notable region around the pre-trained checkpoint characterized by low disentanglement error. This finding echoes the ability of T5 to perform task arithmetic as demonstrated in Ilharco et al. [39] (Appendix D.6), thereby reinforcing the robustness of our conclusions. We will report this strong result in the paper. Finally, while exploring whether linearization improves weight disentanglement and task arithmetic for different modalities – such as language – is undoubtedly captivating, it goes beyond the current scope of the paper, and we reserve it for subsequent investigations. **Disentanglement and overparameterization** We are not aware of any results showing to what extent feature disentanglement is enabled by overparameterization. Intuitively, achieving a distinct disentanglement in feature space, as well as in weight space in the case of weight disentanglement, demands a sufficient number of model parameters. A systematic study of these quantities as the model size scales lies beyond our current scope. However, we concur that this is a fascinating question for future research. **Linear vs non-linear** Our intuition for the observation of the diminishing advantage of linearized-model task arithmetic with increasing model size is that larger models tend to stay closer to the NTK approximation. In other words, they tend to behave linearly without being explicitly constrained to do so (see, e.g., Figure 2). One explanation for this is that larger models, being more over-parameterized, inherently induce a stronger kernel behavior during fine-tuning, i.e., having more parameters, each parameter requires smaller adjustments to fit the training examples. 
As a result, being closer to their linearized counterparts, larger models have better weight disentanglement and can perform task arithmetic similarly to linearized models (see also the answer Effect of Scale on Non-Linear Advantage to reviewer [DjkZ](https://openreview.net/forum?id=0A9f2jZDGW&noteId=PEwcdQxgup) and the new results in Section 3 of the *Author Response document* displaying weight disentanglement as a function of model size). **Pseudo-code for the weight disentanglement error** We thank the reviewer for their suggestion of providing the PyTorch code for computing the disentanglement error in the Appendix. This addition will enhance reproducibility, and we intend to implement it in the revised version. **Eigenfunction localization** Approximating the eigenfunctions of the NTK is a costly operation since it requires computing the kernel matrix and diagonalizing it. Hence, in general, measuring localization or other properties of the eigenfunctions is challenging. Moreover, in practice, all these properties are not displayed exactly, so a sound investigation should take into account this fact as well. All in all, this precludes an exhaustive exploration within the confines of this paper. Yet, we agree and believe that studying the spectral properties of the NTK for different datasets, architectures, and modalities holds promise for future research. We will acknowledge the importance of this avenue in our manuscript, reflecting your input. **Linearization of other architectures** Linearization readily works with the majority of architectures currently used, encompassing convolutional and transformer architectures with both smooth and non-smooth activation functions (as shown by our new experiments using ConvNeXts). 
Although our current framework doesn't explicitly address recurrent architectures, we believe that implementing linearization for recurrent architectures is possible. We will explicitly outline the range of applicability of our procedure in Appendix B (Implementation aspects of linearized models). We thank the reviewer for their valuable feedback and remain available to answer further questions or provide more clarifications regarding the previous points. --- Rebuttal Comment 1.1: Title: Acknowledgement of author responses Comment: I thank the authors for their responses and the further experiments. These results further strengthen what is already a great paper. Well done.
Summary: This paper presents a comprehensive analysis of task arithmetic using pre-trained CLIP models. It challenges the early hypothesis that task arithmetic arises from linear fine-tuning in the NTK regime and introduces weight disentanglement as a necessary condition for enabling task arithmetic. Further experiments demonstrate that linearized fine-tuning of pre-trained CLIP models exhibits stronger weight disentanglement and improved task arithmetic compared to standard, non-linear fine-tuning. This result is accompanied by an analysis of the NTK spectrum to facilitate the understanding of weight disentanglement in linearized models. Lastly, empirical evidence suggests that weight disentanglement emerges from large-scale pre-training. The findings of the paper may shed new light on the effective adaptation of foundation models for downstream applications. Strengths: - This paper advances the theoretical understanding of task arithmetic. In particular, it introduces weight disentanglement as a strong indicator of task arithmetic, and demonstrates that linearized fine-tuning in the tangent space of pre-trained weights promotes weight disentanglement. It further analyzes the NTK spectrum of linearized models and presents a sufficient condition for weight disentanglement. - In the meantime, the findings of the paper are significant on the practical side. The experiment results justify an emerging fine-tuning scheme for adapting foundation models. - Overall, this paper strikes a good balance between theory (NTK) and practice (fine-tuning pre-trained models). Embracing a broad audience is a key strength of the paper. - Finally, the paper is very well-written. The flow of presentation is very easy to follow. Weaknesses: - Overall, I found the theoretical analysis motivating and the experiment results convincing. 
That said, all conclusions of the paper are drawn from CLIP fine-tuning, which makes me wonder whether the same findings are valid for other pre-trained models that similarly exhibit task arithmetic. To this end, I encourage the authors to report results on a second pre-trained model that differs in network architecture (e.g., ResNet), learning objective (e.g., MAE) or input modality (e.g., natural language), in order to establish the generality of their findings. - The authors observed that increasing model size (ViT-B/32->ViT-L/14) closes the gap between linearized and non-linear fine-tuning. Is it a consequence of stronger weight disentanglement because the fine-tuning of larger models approaches the NTK regime? Visualization of weight disentanglement throughout the paper only considers the smallest ViT-B/32 model. Similar visualizations for larger models could be informative. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: These questions are likely outside the scope of the paper, yet I think addressing any of them via an empirical analysis could strengthen the work. - Task arithmetic makes the strong assumption that datasets from different tasks have disjoint support. This is often not true in practice. Does task addition still yield cooperative behavior when datasets overlap? How does linearized fine-tuning compare to standard, non-linear fine-tuning, especially when the assumption of non-overlapping data is violated? - In linearized fine-tuning, the optimization is restricted to the tangent space of pre-trained weights. Notably, this is conceptually similar to LoRA and adaptor-based fine-tuning, where a task vector exhibits low-dimensional structure. A natural question to ask is whether these fine-tuning approaches also produce favorable weight disentanglement / task arithmetic. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The paper discussed two main limitations. First, linearized fine-tuning is currently implemented using the JVP algorithm, which doubles the cost of a forward pass as compared to standard fine-tuning. Second, the spatial localization of NTK eigenfunctions is a sufficient (yet not necessary) condition for enabling task arithmetic. While the linearized models indeed respond to disjoint spatial regions in the experiments, this is not a must for task arithmetic to hold. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
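The JVP-based linearization mentioned in the limitations amounts to a first-order Taylor expansion of the network around the pre-trained weights, f_lin(x; θ) = f(x; θ0) + ∇θ f(x; θ0)·(θ − θ0). A minimal illustrative sketch on a hypothetical scalar toy model (a real implementation would compute the exact JVP, e.g. via `torch.func.jvp`, rather than finite differences):

```python
import math

def f(x, theta):
    # hypothetical toy model standing in for a network
    return math.tanh(theta[0] * x) + theta[1] * x

def jvp(x, theta0, v, eps=1e-6):
    # directional derivative (d/ds) f(x; theta0 + s*v) at s = 0,
    # approximated here by central finite differences
    plus = [p + eps * u for p, u in zip(theta0, v)]
    minus = [p - eps * u for p, u in zip(theta0, v)]
    return (f(x, plus) - f(x, minus)) / (2 * eps)

def f_lin(x, theta, theta0):
    # first-order Taylor expansion of f around theta0
    v = [p - p0 for p, p0 in zip(theta, theta0)]
    return f(x, theta0) + jvp(x, theta0, v)
```

Fine-tuning f_lin with respect to θ is the linearized (tangent-space) fine-tuning studied here; each evaluation needs both f(x; θ0) and a JVP, which is why the limitations note roughly double the cost of a standard forward pass.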
Rebuttal 1: Rebuttal: We appreciate that the reviewer recognized that our paper is well-written, our analysis is motivating, and our experiments are convincing and their engagement to improve it. Below, we address their comments. **Generality of our results beyond CLIP/ViT models** We would like to emphasize that all our theoretical results are directly applicable to any model which satisfies Property 1 (Task arithmetic) regardless of pre-training scheme or architecture. Notably, any model satisfies Property 1 if and only if it is weight disentangled. However, in order to show that our results are robust to the architecture choice and input modality, we have replicated our experimental analysis using convolutional neural networks and large language models: - We have repeated all our experiments using a CLIP ConvNeXt model pre-trained on LAION-400M. Remarkably, also for this architecture weight disentanglement is stronger in the tangent space to the pre-trained checkpoint (see left panels in Figure R.1 of the *Author Response document* attached [here](https://openreview.net/forum?id=0A9f2jZDGW&noteId=7E6o5YEkJw)) and linearized fine-tuning enhances task arithmetic (see Table R.1 of the *Author Response document*). We will present the complete set of new results in the revised version. - In addition, in order to investigate the effects of pre-training, we are repeating the same analysis for non-contrastively pre-trained (closed-vocabulary) ViT and ConvNeXt models, and we will add these results to the final version as well. As suggested, we will also make explicit what architectures we consider, both in the abstract and in the main text. - To substantiate the generality of weight disentanglement, we conducted a new experiment on a pre-trained T5-Base model from Hugging Face, fine-tuned on two benchmark NLP tasks (sentiment analysis on movie reviews and question answering). 
The results, illustrated in the right panel of Figure R.1 of the *Author Response document*, show a notable region around the pre-trained checkpoint characterized by low disentanglement error. This finding echoes the ability of T5 to perform task arithmetic as demonstrated in Ilharco et al. [39] (Appendix D.6), thereby reinforcing the robustness of our conclusions. We will report this strong result in the paper. Finally, while exploring whether linearization improves weight disentanglement and task arithmetic for different modalities – such as language – is undoubtedly captivating, it goes beyond the current scope of the paper, and we reserve it for subsequent investigations. **Effect of scale on non-linear advantage** The reviewer correctly highlights that by scaling the number of model parameters, the performance of linearized fine-tuning becomes closer to that of standard non-linear fine-tuning. To the best of our knowledge, this is a novel observation. As commented briefly in Appendix D.1, one plausible interpretation is that larger models, which are more over-parameterized, inherently induce a stronger kernel behavior during fine-tuning. Namely, since the models have more parameters, each parameter has to change less to fit the training examples. As a result, they tend to stay closer to the NTK approximation, closing the gap with linearized models and benefiting from the better weight disentanglement of the models lying in the tangent space. In response to the reviewer’s suggestion, we conducted supplementary experiments that visualize how weight disentanglement varies with model scale (see Figure R.2 of the *Author Response document*). Consistent with our results, larger models exhibit stronger weight disentanglement, as highlighted by the larger light region in the right panel of the first row of Figure R.2. 
Yet, interestingly, linearly fine-tuned models are always more weight disentangled than their non-linearly fine-tuned counterparts, highlighting the strength of linearized models for model editing. We will add this discussion and the new results to the paper. **Open questions** We thank the reviewer for raising these stimulating questions, although we agree that these are beyond our current scope. In particular, the disjoint support assumption adopted in our work stems from the way in which task arithmetic was first introduced in Ilharco et al. [39], wherein the datasets are indeed disjoint. Yet, we acknowledge that studying task arithmetic in other settings is an interesting direction (see also the answer Disjoint Task Support Hypothesis to Reviewer [eVrq](https://openreview.net/forum?id=0A9f2jZDGW&noteId=QWXTFZW72i)). Similarly, studying the sparsity of task vectors and the effects of LoRA training are exciting open questions and avenues that warrant separate exploration. We thank the reviewer for their valuable feedback and remain available to answer further questions or provide more clarifications regarding the previous points. --- Rebuttal 2: Title: Updated rating Comment: The rebuttal addressed my concerns. The new results on a convolutional backbone and NLP tasks empirically confirmed the generality of the analysis. Overall, this work is well-positioned to inspire both the theory and practice of foundation model adaptation. I thus raised my rating from weak accept to strong accept.
Rebuttal 1: Rebuttal: We kindly thank all the reviewers for their time and for providing valuable feedback on our work. We appreciate that reviewers have pointed out that our work is interesting (Reviewer [eVrq](https://openreview.net/forum?id=0A9f2jZDGW&noteId=kbSkLPUU32)), intriguing (Reviewer [qb7d](https://openreview.net/forum?id=0A9f2jZDGW&noteId=MpKbXwmZKw)), and very well written (Reviewers [qb7d](https://openreview.net/forum?id=0A9f2jZDGW&noteId=MpKbXwmZKw), [DjkZ](https://openreview.net/forum?id=0A9f2jZDGW&noteId=v6Lz3dFhri), [sE5a](https://openreview.net/forum?id=0A9f2jZDGW&noteId=CDcYUaXFFc)), and that our results are solid (Reviewer [qb7d](https://openreview.net/forum?id=0A9f2jZDGW&noteId=MpKbXwmZKw)), impressive (Reviewer [9cnX](https://openreview.net/forum?id=0A9f2jZDGW&noteId=CetQ5EK9GW)), and impactful (Reviewers [DjkZ](https://openreview.net/forum?id=0A9f2jZDGW&noteId=v6Lz3dFhri), [qb7d](https://openreview.net/forum?id=0A9f2jZDGW&noteId=MpKbXwmZKw)). In response to the reviews, we ran a series of **new experiments** to show the generality of our findings. Specifically, - We have replicated our experimental analysis using a **convolutional architecture**. Our new results, reported in Table R.1 and Figure R.1 of the *Author Response document* (see attached pdf) reveal that also for this architecture weight disentanglement is stronger in the tangent space and linearized fine-tuning enhances task arithmetic. - We have extended the experimental results on weight disentanglement to language by demonstrating that a T5-Base model, fine-tuned on two distinct **NLP tasks**, exhibits a region around its pre-trained initialization with low weight disentanglement error (Figure R.1, right panel). 
- We have analyzed the **effect of model scale** on weight disentanglement, showing that larger models are more weight disentangled, but not as much as their linearized counterparts (Figure R.2). We hope that these new results and the clarifications detailed in the individual comments given to each reviewer will effectively address the concerns raised during the review process. We remain available for engaging in any further discussions that may arise, and we thank you once again for your comments. Pdf: /pdf/ae2c3b932867b5ffece2ee6a4e1e5bfb9bd604fb.pdf
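For reference, the basic editing operation underlying all of the threads above — building task vectors τ_t = θ_t − θ_0 from fine-tuned and pre-trained weights, then applying θ_edit = θ_0 + Σ_t α_t τ_t — can be sketched in a few lines (weights as flat lists of floats, a simplification of real model checkpoints):

```python
def task_vector(theta_ft, theta0):
    # tau_t = theta_t - theta_0: difference between fine-tuned and
    # pre-trained weights (the task vector of Ilharco et al. [39])
    return [ft - p0 for ft, p0 in zip(theta_ft, theta0)]

def apply_task_arithmetic(theta0, taus, alphas):
    # theta_edit = theta_0 + sum_t alpha_t * tau_t
    edited = list(theta0)
    for alpha, tau in zip(alphas, taus):
        edited = [p + alpha * t for p, t in zip(edited, tau)]
    return edited
```

With mixing coefficients below 1 (as one review notes around Ln 181), the edited weights remain close to the pre-trained checkpoint, the region where weight disentanglement is observed to hold.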
NeurIPS_2023_submissions_huggingface
2023
Summary: This work challenges the prevailing belief regarding the origin of task arithmetic in CLIP models. While it is commonly attributed to the linear nature of fine-tuning, the authors argue that the critical factor lies in the "weight disentanglement" that occurs between tasks during the fine-tuning process. The paper presents a theoretical analysis and extensive empirical validation to support this claim. Strengths: This paper explores an intriguing topic with promising real-world applications. The authors exhibit strength in establishing a solid theoretical foundation, formulating straightforward research questions, and providing well-justified explanations that strike a remarkable balance between accessibility and rigor. The content is commendably clear and understandable, facilitating understanding for readers from diverse backgrounds. Personally, I was able to (at least, I think) grasp portions of the paper (Section 6) which required prerequisite knowledge I was not already familiar with. Notably, this paper achieves a synergy between its experimental design and the overarching claim that "Task arithmetic is not merely a result of linear fine-tuning," but rather depends on "weight disentanglement of the model with respect to the fine-tuning task set." The authors successfully validate the performance of their proposed method through rigorous experiments (and ablations) while simultaneously providing substantial support for key theoretical hypotheses. Overall, this paper represents a remarkable contribution, embodying the meticulousness, clarity, and scholarly standards I would expect in top-tier NeurIPS submissions. I’m starting my recommendation at an 8; given my not-high confidence, I’ll adjust it according to the rebuttal discussion. Weaknesses: One aspect that requires attention is the clarification of the paper's applicability beyond CLIP models. 
Although there are specific references throughout the text (e.g., lines 38-49 in the introduction) emphasizing the focus on CLIP models, the overall messaging may appear too general regarding the broader applicability of the proposed method and study. To address this, I recommend at least modifying the abstract to explicitly state the paper's objective as "… a comprehensive study of task arithmetic in **CLIP** models…" instead of using the term "vision-language models" which could be misleading. Furthermore, exploring preliminary tests or investigations concerning the weight disentanglement of different pre-trained models would be valuable. For instance, considering the citation of various architectures in Section 6.2 (lines 301-303), extending the analysis to include some of them (e.g., convolutional neural networks) would be advantageous, strengthening the paper's applicability and relevance to a broader range of models and areas. Additionally, while Tables 1 and 2 provide average performance across tasks, it would be beneficial to include the standard deviation to provide a more comprehensive understanding of the variability in performance and, therefore, the method's robustness. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I’ve seen that in the supplementary material, there are some code snippets, but do the authors plan to release the full codebase upon acceptance? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors adequately addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 10: Award quality: Technically flawless paper with groundbreaking impact, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s recognition of our work and their engagement to improve it! In what follows, we address their comments. **Generality of our results beyond CLIP/ViT models** We would like to emphasize that all our theoretical results are directly applicable to any model which satisfies Property 1 (Task arithmetic) regardless of pre-training scheme or architecture. Notably, any model satisfies Property 1 if and only if it is weight disentangled. However, in order to show that our results are robust to the architecture choice and input modality, we have replicated our experimental analysis using convolutional neural networks and large language models: - We have repeated all our experiments using a CLIP ConvNeXt model pre-trained on LAION-400M. Remarkably, also for this architecture weight disentanglement is stronger in the tangent space to the pre-trained checkpoint (see left panels in Figure R.1 of the *Author Response document* attached [here](https://openreview.net/forum?id=0A9f2jZDGW&noteId=7E6o5YEkJw)) and linearized fine-tuning enhances task arithmetic (see Table R.1 of the *Author Response document*). We will present the complete set of new results in the revised version. - In addition, in order to investigate the effects of pre-training, we are repeating the same analysis for non-contrastively pre-trained (closed-vocabulary) ViT and ConvNeXt models, and we will add these results to the final version as well. As suggested, we will also make explicit what architectures we consider, both in the abstract and in the main text. - To substantiate the generality of weight disentanglement, we conducted a new experiment on a pre-trained T5-Base model from Hugging Face, fine-tuned on two benchmark NLP tasks (sentiment analysis on movie reviews and question answering). 
The results, illustrated in the right panel in Figure R.1 of the *Author Response document*, show a notable region around the pre-trained checkpoint characterized by low disentanglement error. This finding echoes the ability of T5 to perform task arithmetic as demonstrated in Ilharco et al. [39] (Appendix D.6), thereby reinforcing the robustness of our conclusions. We will report this strong result in the paper. Finally, while exploring whether linearization improves weight disentanglement and task arithmetic for different modalities – such as language – is undoubtedly captivating, it goes beyond the current scope of the paper, and we reserve it for subsequent investigations. **Robustness of our method** In regard to the variability in performance in Tables 1 and 2, it is important to consider that the diverse nature of the tasks results in distinct levels of difficulty. Hence, standard deviations are likely more affected by this variability in task difficulty than the robustness of a given method. Nevertheless, we concur with the reviewer that solely looking at averages might not convey the whole picture. To address this, we showed that the improvements in performance are consistent across tasks in Appendix D.2. This clarification will be further stressed in the revised version. **Code release** We confirm that we will release the complete codebase in a public GitHub repository once the work undergoes deanonymization. We thank the reviewer for their valuable feedback and remain available to answer further questions or provide more clarifications regarding the previous points. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for their thorough response to every reviewer and for addressing all of my concerns. Having closely examined the other reviews and the authors' responses, it's clear they've put in substantial effort to incorporate reviewer feedback and improve their work (which was already of high level). 
During the rebuttal period, the authors added more experiments and clarifications that strengthened the paper's contributions and impact across fields, significantly extending its applicability to different models and fields. Given the unanimous acclaim for the paper's strengths (i.e., exceptional clarity in writing, substantial theoretical and experimental components, and valuable implications) and the authors' diligent revisions, I am confident in upgrading my recommendation to a score of 10.
Online Ad Procurement in Non-stationary Autobidding Worlds
Accept (poster)
Summary: This work studies an advertiser's online high-dimensional lever decision problem with long-term constraints under limited bandit feedback for different input models. The authors' main contributions include: (1) model formulation; (2) proposing an algorithm universally applicable across input models; (3) theoretical regret analysis. There is an advertiser that repeatedly interacts with an ad platform during a time horizon $T$, aiming to maximize her total conversions subject to multiple constraints. At each time $t$, the advertiser needs to make a multi-dimensional lever decision and observes her realized conversion as well as her multi-dimensional realized cost. The authors propose an algorithm with universally good performance and provide regret lower bounds for this problem with respect to different input procedures. Strengths: 1. The proposed algorithm is oblivious to input models, such that it can achieve high performance without knowing which setting the decision-maker is in. 2. It is novel to adopt the random perturbation approach and the expert-based decision-making in online autobidding. The regret analysis requires non-trivial insights and techniques. Particularly, the proof of Lemma 4.6 that bounds the primal ascent regret stands out from standard primal-dual frameworks. 3. The organization of this paper is great. Weaknesses: 1. The idea of solving a problem in many worlds is not new. In addition to the commonly studied stochastic and adversarial input models, [1] also considers $\delta$-corrupted, periodic, and ergodic input models. Actually, the proofs for the latter three cases are analogous to the stochastic case as they still assume stationary distributions to some extent. [1] Santiago Balseiro, Haihao Lu, and Vahab Mirrokni. The best of many worlds: Dual mirror descent for online allocation problems. 2. Some mistakes are spotted. Some statements are not clear enough. See the questions below.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Line 13, multi-dimension -> multi-dimensional. 2. Line 14. What does "uncertain" mean throughout this paper? The bidder should know their constraints at the beginning, e.g. a known budget. 3. Line 168, a redundant "of". 4. Line 219, realzied -> realized. 5. Line 283, violates constraints violation? 6. Line 664, missing values for $a$ and $b$. 7. Line 288, what does $\alpha$ do in Algorithm 1? To my understanding, it makes sure that the opinion of an expert lies in the interior of the domain. However, the value of $\alpha$ is not specified. It seems fatal if $\alpha$ is close to 1. 8. Line 299, the number of experts $N$ seems to be negative when $T$ goes to infinity. 9. Line 301, N experts or (N+1) experts? 10. Line 539, the first term in the RHS of equation (15) is not bounded after multiplying by $T$. 11. Line 542, $\tau$ -> $t$. The authors should check $t\in [T]$ and $\tau\in [t]$ throughout the appendix. The order of $\beta$ also seems wrong in equation (16). 12. Line 626, observe that the sum operator is over $t\in [\tau_A]$, so the optimal $x$ does not vary with $t$. Does it imply that all $y_t$ are the same? 13. Line 629, arer -> are. 14. Line 636, according to Theorem 4.2 one has $\gamma_{0} = K^{-1/6} (1+DT)^{1/2} T^{-3/4}$ and $\gamma_{N} = 1$. Then $\gamma_{0}$ is not the largest element in the stepsize set since it goes to $0$. I am willing to raise my rating if it is confirmed that there is no technical flaw. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive and valuable feedback. Response to weakness: The key difference between our paper and [1] is that the reward function and the constraint functions are known in [1] before making the decision (i.e. [1] studies the full information setting), while in this paper we study the bandit setting where the reward and constraint functions are unknown before a decision is made. This makes the analysis much more difficult than that of [1], as we need to carefully handle unknown per-round constraint violation (before making decisions) to satisfy long-term constraints, while not allowing regret to grow too much. Our proposed algorithm requires a much more complex design and analysis than that of [1], which simply employs a mirror descent approach. We would also like to point out that although our proposed algorithm is inspired by well-established approaches, it is interpretable and implementable and thus can be easily adopted in practical setups. We also believe that our proposed algorithm can inspire future works to develop more complex algorithms and improve performance guarantees (to match the lower bounds for the full-information setting). Response to questions: We apologize for the typos in the paper, and we will carefully review the paper to ensure the accuracy of our statements and results. In the following we address the major technical questions raised by the reviewer, namely questions 2, 6, 7, 8, 9, 10, 11, 12, 14: (2). “Uncertain” refers to the fact that the amount of per-round constraint violation each period is unknown. Take for example a single long-term budget constraint which says total spend cannot exceed a certain amount.
In our bandit setting, the incurred cost for making a decision is only observed after the decision is made, and hence there are two sources of uncertainty: (a) The decision maker does not know how much budget is consumed before making any decision; (b) The decision maker may not necessarily satisfy her long-term budget constraint if she does not act intelligently. (6). In line 664, we utilize Eq. (45) and choose $a = -\frac{d}{\rho}\left(\bar{F}+ K \frac{\bar{F}}{\beta} \bar{G}\right)\cdot (1-\alpha) D$ and $b = \frac{d}{\rho}\left(\bar{F}+ K \frac{\bar{F}}{\beta} \bar{G}\right)\cdot (1-\alpha) D$, while the original term of $\frac{\epsilon^2}{8}$ will be replaced by $\frac{\epsilon^2(b-a)^2}{8}$. We remark that the rest of the proof follows with this small correction. (7). You are absolutely correct that $\alpha$ ensures the opinion of an expert lies in the interior of the domain. In the input line of Algorithm 1 on page 7, we do require choosing $\alpha \in (0,1)$. In particular, in the analysis (specifically for Lemma 4.6), we set $\alpha$ to be a parameter of the same order as $\rho$; i.e., $\alpha = K^{1/3}T^{-1/4}$. We will add the requirements on $\alpha$ explicitly in the statement of Theorem 4.2. (8). Thank you for catching this typo! Instead of the previous definition of $N$, we should set $N = \max (1, \lceil -\log_2 (K^{-\frac{1}{6}}(1+D T)^{\frac{1}{2}}T^{-\frac{3}{4}}) \rceil ) = O(\log(T))$, which is always positive. (9). Thank you for pointing this out. We should have $N+1$ experts. (10). We take $\alpha = K^{1/3}T^{-1/4}$ as mentioned in the answer to Question 7. This ensures that the regret due to this term after $T$ rounds is $O(T^{3/4})$, which matches our final regret bound. We will emphasize this explicitly in the revised version of our paper. (11). Thank you for carefully reading our paper and catching this typo.
We will fix all these typos as the reviewer mentioned, including $(\tau, t)$ and the order of $\beta$ in Equation (16) (which should be squared). Note that this does not impact the final regret bound as $\beta$ is in the order of $1/\log(T)$. (12). Thank you for your comment. Yes, all $y_t$ are the same in the stochastic setting, given the nature of i.i.d. randomness. We remark that to the best of our knowledge, it is not clear for the other settings what structural properties the $y_t$ sequences possess, as they may vary significantly depending on the specific (unknown) underlying reward distribution sequences. Mathematically, this is one aspect that makes the problem we are tackling extremely challenging. Please see our response to Question 2 from reviewer EvVQ for more detailed discussions. (14). We believe this issue will be fixed after changing the value of $N$ as we suggested in the answer to Question 8. The primal ascent step sizes will then be decreasing as $i$ increases. We hope the above response addresses the reviewer’s concerns about technical details, and we would be more than happy to answer other questions should they arise. Given our above responses to technical concerns, we would greatly appreciate it if the reviewer could re-assess the contributions of this submission as well as the corresponding rating. Again, we sincerely thank the reviewer for the comments, suggestions and questions. --- Rebuttal Comment 1.1: Comment: Line 634-635. With the new definition of $N_t$, how do you get the result of equation (38) by using equation (39)? By the choice of $\gamma_i$, $$O\left(\frac{1+P}{\gamma_i}\right) = O((1+P)^{1/2} \cdot T^{3/4}) \leq O(T^{1/2} \cdot T^{3/4}),$$ which is greater than $O(T)$. --- Reply to Comment 1.1.1: Comment: We apologize for the lack of clarity here, which is related to the earlier typos pointed out in the review, and we remark that the final regret bound should indeed be $(1-1/\chi)OPT + T^{3/4}\sqrt{P}$ as pointed out by the reviewer.
Despite the fact that when $P$ is at least the order of $\sqrt{T}$ we get $T^{3/4}\sqrt{P} \geq O(T)$, which leads to a non-meaningful regret bound, we remark that to the best of our knowledge even in the no-constraint setting, there does not exist any algorithm that is able to remove the term $T^{3/4}\sqrt{P}$ in the bandit, multi-dimensional, single-point feedback online optimization setup with dynamic regret (see detailed discussion in Remark 4 of [R1]). The most widely studied approach to achieve a sharper bound involves querying twice each round (i.e. two-point feedback), but this is not the practical setup we consider in the paper in the context of online advertising. Further, as described in [R1], the term $P$ can be viewed as a problem instance-dependent factor that measures the hardness of the problem. Note that the term $(1-1/\chi)OPT$ is the regret lower bound and is unavoidable [R2], and our bound still bears value in the case where $P = o(\sqrt{T})$. Nevertheless, we acknowledge that this adversarial bound is not satisfactory, and we believe that a key future direction would be to improve this bound. Finally, we will definitely add more discussion in the paper on the limitations of our bounds in the adversarial setup, and complement the paper with numerical studies to illustrate that our algorithm performs well in practical settings. [R1] Zhao, Peng, et al. "Bandit convex optimization in non-stationary environments." The Journal of Machine Learning Research. [R2] Balseiro, Santiago R., Haihao Lu, and Vahab Mirrokni. "The best of many worlds: Dual mirror descent for online allocation problems." Operations Research
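As a side note on the parameter correction in the rebuttal above: the fixed expert count $N = \max(1, \lceil -\log_2(K^{-1/6}(1+DT)^{1/2}T^{-3/4}) \rceil)$ from the authors' answer to Question 8 can be sanity-checked numerically. The sketch below is illustrative only; the values of $K$, $D$, and $T$ are arbitrary placeholders, not taken from the paper:

```python
import math

def num_experts(K: int, D: float, T: int) -> int:
    """Corrected expert count from the rebuttal:
    N = max(1, ceil(-log2(K^(-1/6) * (1 + D*T)^(1/2) * T^(-3/4))))."""
    inner = K ** (-1 / 6) * (1 + D * T) ** 0.5 * T ** (-3 / 4)
    return max(1, math.ceil(-math.log2(inner)))

# Placeholder problem sizes: K constraints, constant D, horizon T.
for T in (10**4, 10**6, 10**8):
    N = num_experts(K=5, D=1.0, T=T)
    assert N >= 1  # always positive, unlike the original definition
    print(T, N)
```

Consistent with the claimed $N = O(\log T)$, the quantity stays positive as $T \to \infty$ and grows roughly like $\tfrac{1}{4}\log_2 T$.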
Summary: This work studies the problem of dynamic online allocation under constraints with bandit feedback, and derives a generic algorithm applicable to various input settings (stochastic, adversarial, $\delta$-corrupted, ergodic, periodic). It recovers regret rates close to those of the lower bounds in each of these settings. The algorithm uses a dual gradient descent over $\lambda_t$ to decouple the decisions over time by considering the Lagrangian, a gradient ascent for the optimal choice of $x_t$ (that also uses the technique from Flaxman et al (2004) to handle the bandit feedback), and finally a multiplicative weight update to adapt the learning rates used to the correct input setting. Strengths: - The algorithm generalizes, to multiple input settings and general constraints, the problems related to bandits with knapsack constraints and online learning with constraints (to be clear, this is an important strength of this paper!) - The problem is well motivated through the consideration of running multiple ad campaigns Weaknesses: - The assumption that there exists some safe action of level $\beta$ does simplify the problem of constraint satisfaction by guaranteeing that the constraint can be satisfied at the end - Some of the upper bounds (stochastic, periodic, corrupted) are a bit loose compared to their respective lower bounds - The writing can be improved, in particular in the proofs, which lack discussion of their main ideas and intuition. As an example, page $8$ of the appendices is almost only a sequence of inequalities and is hard to follow. For instance, the inequalities could be split into multiple parts, and some comments could be added to better explain the goal of the proof. There are also some typos. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Do the $T^{2/3}$ rates come from the use of the method to handle bandit feedback?
How would a generic gradient feedback affect the rates? Would we be able to derive tight rates with respect to the lower bounds presented? I think it might have been better to first present the results with gradient feedback, and mention in the appendix that bandit feedback can be handled by a standard technique. This would allow the main part of the paper to focus on the new contributions. - Could you give some intuition on what this optimal dynamic sequence looks like in the various settings? For instance, if my understanding is correct, the optimal dynamic sequence in the stochastic setting is simply a unique point (because the data is i.i.d). Could the optimal sequence for the ergodic setting be a function of $\kappa$ close to the unique optimal point with respect to the stationary measure? - How much does the meta algorithm degrade the regret? (compared to assuming that we know the input setting) - l 657 Why is it $... +D \dots$ and not $...+(t-1)D\dots $ ? Comments/Typos: - L304 and l324 stochstic -> stochastic - I find the notation of $\lambda \in [0,F e/ \beta]$ confusing; is it $\lambda \in [0,F/\beta]^K$? - L553 ‘which states $\max_{x \in \mathcal{X}} f(x)$’: is a word missing? - L608 I am not sure I understand this statement; could you include a reference for this result? Is the sup taken over $(g_{\tau},f_{\tau})$ - over $[t]$ or $[T]$? - I think it would be nice to cite the paper of Mannor et al (Online learning with sample path constraints), which deals with a very similar problem and started the line of work on online convex optimization with varying constraints - Equation below l643: $v$ -> $\Vert v \Vert$, and the last equality should be an inequality (as it is in the ball, not the sphere) - Equation below l655: I think some of the gradients are missing $\Vert \nabla_t \Vert^2$ and $x^i_{t+}$ -> $\tilde{x}^{i}_{t}$. Why are some of the gradients bolded and not the others?
- Same thing for the gradients below l657; in addition, I am not sure where $P(y_{1:T})$ is defined. - L 664 ‘$a=$ and $b=$’: it is unfinished. I have not read through all the proofs, but I would recommend reading them again to look for additional typos. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes, the authors address some limitations in Section 5 of the paper regarding the lower bounds. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive and valuable feedback. Regarding Weakness 1 on the safe action: We would like to point out that the existence of a “safe action” is quite common in online advertising. Take for example the simple case where an advertiser only has a long-term budget constraint, and the only lever used is the budget set each round (i.e. a single decision variable that represents the maximum spend in each round/ad campaign). In this case, the safe action is to simply set the per-round budget to 0, so the advertiser would always acquire some positive constraint balance (i.e. limit spend to ensure expenditure does not exceed the long-term budget after $T$ rounds). Another example is the simple case where the advertiser only has a long-term ROI constraint, which states that total reward exceeds total spend after $T$ rounds, and again assume the single decision variable of interest is the per-round budget. Then, by setting a small per-round budget, ad platforms tend to procure the most “lucrative ads” on behalf of the advertiser that have a large value-to-cost ratio, which generates a positive ROI balance per round that helps satisfy the long-term ROI constraint after $T$ rounds. Similar assumptions are made in many related works; see, e.g., [R1, R2]. Without the safe action, there is no guarantee that the problem is feasible, and performance guarantees may be intractable. [R1] Deng, Yuan, et al. "Multi-channel autobidding with budget and roi constraints." [R2] Feng, Zhe, Swati Padmanabhan, and Di Wang. "Online Bidding Algorithms for Return-on-Spend Constrained Advertisers." Regarding Weakness 2 on loose upper bounds: We would like to point out that our setting is related to the more general problem of bandit online constrained optimization in continuous high dimensions with single-point feedback (i.e. the decision maker can only make a decision once per round and observe a single feedback).
To the best of our knowledge, there is no existing algorithm that achieves optimal bounds even in purely stochastic or adversarial environments (see discussions in [R3]). Our paper considers an even more complex setup than these “pure environments”, as we demand a single algorithm that yields tractable performance across various non-stationary environments. Indeed, many of the lower bounds stated in the paper are for a more restrictive class of problems under full-information feedback (i.e., the reward and constraint functions are known before making any decision, while we study bandit feedback). To the best of our knowledge, lower bounds for bandit feedback problems are unknown, so we present those for full-information feedback. The gap may come from the intrinsic difference between these two classes of problems. Nevertheless, we believe that this work has the potential to inspire future research to develop more sophisticated algorithms and close the upper-lower bound gap. [R3] Zhao, Peng, et al. "Bandit convex optimization in non-stationary environments." Regarding Weakness 3: We thank the reviewer for pointing this out and agree that the exposition of the paper can be improved. We will revise our paper and include more discussion accordingly, both in the main body and in the technical results/proofs in the appendix. Response to Question 1: The presented non-optimal rates are indeed due to the need to handle bandit feedback. In the full-information setup, where the decision maker can first observe the reward and constraints in each period before making a decision, it has been shown in [R4] that a simple mirror descent approach can achieve optimal rates. Nevertheless, we point out that the bandit setup considered in this paper is much more complex and requires our proposed techniques to effectively balance regret minimization and constraint satisfaction in a limited-information bandit environment. See the response to Weakness 2 for more details.
[R4] Balseiro, Santiago R., Haihao Lu, and Vahab Mirrokni. "The best of many worlds: Dual mirror descent for online allocation problems." Response to Question 2: The reviewer’s insight is correct that in the stochastic setting the optimal dual variable sequence is a fixed point, and the optimal primal variables of course would depend on the observation. Nevertheless, to the best of our knowledge, it is not clear for the other settings what structural properties the optimal sequences possess, and the optimal sequence may vary significantly depending on the specific (unknown) underlying reward distribution sequences. Take an extremely simplistic example in the $\delta$-corrupted environment, where there is only a single round at which the reward distribution is corrupted and the advertiser only has a long-term budget constraint. If the corrupted expected reward is very large (e.g. in the order of $T$), then she should spend most of her budget during that single round in the optimal decision sequence, and spend nearly nothing in other rounds; on the other hand, if the reward in the corrupted round is 0, then the setting reduces to the stochastic setting, and the optimal action in each round should be some fixed point. Nevertheless, we believe we can produce experimental simulations to characterize the structure of optimal sequences in simple settings, and we would include relevant discussions/results in the revised version of our paper. Response to Question 3: We will include a summary of the best existing upper bounds for each non-stationary environment in our revision of the paper. Response to Question 4: We apologize for the typo in line 657. In the second equality (within the large parentheses), we should have $y_{t}^\top\tilde{x}_{t+1}^i$ (which is independent of $\tau$ that we sum over). This term can then be bounded by $D$ (as opposed to $(t-1)D$). Finally, we thank the reviewer for the comments and for pointing out typos.
We will carefully review the paper and make corrections accordingly. --- Rebuttal 2: Comment: I thank the authors for their detailed and careful responses. If I understand correctly the reply regarding the safe action, this assumes that the budget scales with $T$ (e.g. $B=\rho T$), so that $g_t(x_t)=\text{cost}_t*x_t-\rho$ and thus $g_t(0)=-\rho<0$? If so then indeed it is reasonable; I think it would be nice to include the examples mentioned by the authors to justify the existence of such a safe action. I think the novelty of the technical contributions would be clearer if in the proof sketches this work was more directly compared to [9] and its proof techniques, as well as which arguments need special care in the combination of the bandit gradient estimation technique and the mirror descent for $\lambda_t$. Many typos were found in the proofs (in particular by reviewer bowu), and as such I will keep my current grade. Otherwise, all my questions have been clearly addressed. I believe that the paper should be accepted, assuming that the comparison with [9] is made clearer, and that this work is proofread again (including the appendix) to catch any additional typos. --- Rebuttal Comment 2.1: Comment: Regarding your understanding/example for the safe action (in the case of budgets), you are completely correct. We will indeed include such examples in the revision of the paper to provide intuition for existence of the safe action. We will also present more comparisons with [9] in our revision to clearly convey our contributions and novelties, and will carefully review the paper to correct typos. Again, we sincerely thank the reviewer for taking the time to offer all the constructive feedback!
Summary: This paper concerns a two-stage autobidding scenario, such as an advertising platform environment. Each advertiser wants to maximize value received (e.g., clicks) subject to long-run constraints (e.g., budget or ROI). As actions, the advertiser can specify certain instructions to an autobidding agent (e.g., a spend rate or ROAS target) and observe the results. The advertiser observes bandit feedback, and wishes to tune their choice of actions/instructions to solve their long-run optimization problem. Because the autobidders themselves are learning over time and potentially facing changing market conditions, the evolution of payoffs observed by the advertiser may not be stationary. The paper considers a variety of different payoff evolution models, including partial adversarial corruption, periodic, and ergodic payoffs. The main result is a universal learning method for the advertiser that achieves good regret for each of these payoff evolution models. The idea is to combine dual descent methods for constraints with a modified online convex optimization approach to adequately explore lever decisions given the dual variables. This combination leads to vanishing regret in each of the settings considered. The resulting regret rates are then compared with known lower bounds for each of these settings (many of which apply in relaxed settings, such as full feedback rather than bandit feedback). Strengths: I like this paper. The modeling framework that separates true "long-run" objectives from "short-term" directive levers is extremely natural and, as far as I'm aware, novel. It also tracks my understanding of how autobidding works in practice: advertisers need not keep their specified constraints (to the autobidder) fixed over time, but can manipulate them online as tunable knobs. The assumption of a safe action is likewise very reasonable, and the proposed algorithm makes use of it to good effect.
The proposed algorithm combines multiple well-established ideas from the online optimization literature in a reasonable way. The fact that this comes together into a unifying framework is an appealing feature, as is the need for only bandit feedback. The regret rate suffers somewhat compared to known bounds, but not by too much --- what amounts to a rate of $T^{3/4}$ for each of the non-fully-adversarial settings is quite good, while leaving room for future work to improve. Getting these regret rates down to $\sqrt{T}$ (or showing this isn't possible) is a nice open challenge. Weaknesses: The biggest question for me is how "real" autobidding (e.g., competitive uniform bidding) falls into this scenario. For instance: even in a stationary environment in terms of competitors, the relationship between levers and outcomes is not necessarily stationary for advertiser $i$ because the underlying autobidder is learning over time how best to satisfy the directive communicated by a given lever setting. So my understanding is that this scenario would fall under the ergodic setting. But what if the competitors are not stationary, but are learning as well? Can these theoretical frameworks be linked back to the motivating setting of autobidders that simultaneously learn? Either way, it would be nice to have a more thorough discussion of this in the body of the paper. Another potential weakness is that the technical contributions largely synthesizes known approaches, so the marginal technical contribution is not extremely high. I therefore view the conceptual and modeling contributions as the main selling points for the submission. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Are there natural conditions under which a scenario with mutually competing autobidders would fall into one of the analyzed worlds? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: I feel that the paper is sufficiently up-front about its limitations, and the authors do an adequate job of describing what their paper does and does not do. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive and positive feedback! Regarding Weakness 1 and Question 1: The game-theoretic interaction among multiple competing agents is indeed an interesting yet challenging future direction. For this work, one can view the various environments of interest (namely stochastic, adversarial, periodic, Markovian, and corrupted) as “aggregate market dynamics” driven by competitor algorithms. We believe that this is a reasonable aggregate view of real-world algorithmic bidding in online advertising markets: for example, the environment is mostly stationary (or i.i.d.) when looking at a short time period, such as one hour (see [R1] for practical evidence); for longer time horizons, aggregate algorithmic interactions may exhibit periodicity, such as more aggressive and active bidding during non-work hours (see [R2] for practical evidence); and competing algorithms may occasionally be driven by adversarial behavior due to competition [R3]. The beauty of the proposed algorithm is that it can achieve good performance without needing to know this “aggregate view” of competitor algorithmic behaviors. We would also like to point out that most of the previous theoretical works on online learning in advertising focus on either the stochastic i.i.d. model, which is too optimistic in practice, or the adversarial model, which is too pessimistic. Nevertheless, we completely agree that our paper can benefit from including discussions on relevant multi-agent learning topics, and we will also support such discussions with practical evidence from the literature. [R1] Feldman, Jon, et al. "Online stochastic packing applied to display ad allocation." [R2] Yuan, Shuai, Jun Wang, and Xiaoxue Zhao. "Real-time bidding for online advertising: measurement and analysis." [R3] Golrezaei, Negin, et al. "Learning product rankings robust to fake users."
Regarding Weakness 2: We would like to respectfully respond to the reviewer’s comment “the technical contributions largely synthesizes known approaches, so the marginal technical contribution is not extremely high”. The most relevant previous theoretical work on this topic may be [R4], which studies the best-of-many-worlds setting for online allocation problems. The key difference is that the reward function and the constraint functions are known in [R4] before making the decision (i.e. [R4] studies the full information setting), while in this paper we study the bandit setting where the reward and constraint functions are unknown before a decision is made. This makes the analysis much more difficult than that of [R4], as we need to carefully handle unknown per-round constraint violation (before making decisions) to satisfy long-term constraints, while not allowing regret to grow too much. Our proposed algorithm requires a much more complex design and analysis than that of [R4], which simply employs a mirror descent approach. We would also like to point out that although our proposed algorithm is inspired by well-established approaches, it is interpretable and implementable and thus can be easily adopted in practical setups. We also believe that our proposed algorithm can inspire future works to develop more complex algorithms and improve performance guarantees (to match the lower bounds for the full-information setting). Finally, as the reviewer describes, another key contribution of the paper is the modeling aspect, as to the best of our knowledge this is the first work that studies a universal algorithm that yields reasonable performance in various non-stationary autobidding setups under bandit feedback. [R4] Balseiro, Santiago R., Haihao Lu, and Vahab Mirrokni. "The best of many worlds: Dual mirror descent for online allocation problems." --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for the thorough response.
Your point about the difficulties in extending to a fully game-theoretic setup is well-taken. I am happy to hear about your plan to include a discussion about these connections. Your response about the relationship to [R4] was very helpful. I agree with your assessment of the additional challenges; your note about the bandit feedback setup is especially clear in this regard. I agree that the practicality/implementability afforded by this setup is a strength.
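For readers less familiar with the [R4] baseline discussed in this thread, here is a minimal sketch of the dual-descent idea (mirror descent with a Euclidean mirror map) for pacing a single long-term budget constraint. This is a generic, textbook-style illustration under a toy win/cost model, not the paper's algorithm; every name and parameter below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T, budget = 1000, 200.0
rho = budget / T               # per-round spend target
lam, eta = 0.0, 0.05           # dual variable and its step size
spend = 0.0
for t in range(T):
    value = rng.uniform()              # value of the current impression
    bid = value / (1.0 + lam)          # bid shaded by the dual variable
    cost = bid if rng.uniform() < bid else 0.0   # toy win/cost model
    spend += cost
    lam = max(0.0, lam + eta * (cost - rho))     # dual (sub)gradient step
```

When realized spend runs above the per-round target `rho`, `lam` grows and bids shrink; when spend runs below it, `lam` shrinks and bids recover. In the full-information setting of [R4] the cost is observable before the decision; under bandit feedback it is revealed only afterward, which is the extra difficulty the rebuttal highlights.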
Summary: The paper proposes a universally constrained online learning framework for ad procurement in non-stationary autobidding worlds. The paper contributes to the field by addressing the challenges of ad procurement in non-stationary autobidding worlds and developing a unified algorithm that performs well in the autobidding world while satisfying long-term constraints. Strengths: - The paper addresses the challenges of ad procurement in non-stationary autobidding worlds and develops a unified algorithm that performs well in the autobidding world while satisfying long-term constraints. - The paper makes contributions to the field by developing an algorithm that yields good performance guarantees under different procedures. - The paper is well-written and clearly presents the problem and the proposed solution. Weaknesses: - The paper does not have experiments to validate the theoretical findings. - The paper could provide more insight into the practical implications of the proposed algorithm and how it can be applied in real-world settings. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What are the challenges with applying the algorithm to obtain experimental results? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive and valuable feedback. Regarding Weakness 1 and Question 1 on experimental results: we agree that experimental results would strengthen our paper’s key messages as well as its contributions. We did not include experimental results in our paper due to space constraints, and we will add relevant discussions/results in our revision of this paper. We would like to point out that there are no major challenges in applying our algorithm to real or synthetic data to obtain experimental results, as the proposed algorithm is quite clean and easy to implement in practice. We would also like to mention that the main contributions of the paper lie in the modeling aspect (for autobidding under realistic non-stationary and limited-information environments) as well as the theoretical results. Regarding Weakness 2 on practical implications of our proposed algorithm: Online advertisers nowadays face a large array of advertising platforms such as search engines, social media platforms, web publisher displays, etc. Determining how to set advertising goals on each of these platforms (e.g., allocating total budget across different campaigns on a platform, setting a target cost-per-click, etc.), especially in non-stationary environments due to changing user behavior, competition, etc., is essential for online advertisers to optimize ad conversion outcomes. Our proposed algorithm presents a rigorous methodology that helps online advertisers set advertising goals on these ad platforms under non-stationary markets. As described above, we will also include experimental results in our revision to showcase the practicality of our proposed methodology for real-world settings.
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
On the Powerfulness of Textual Outlier Exposure for Visual OoD Detection
Accept (poster)
Summary: The paper notices that while outlier exposure has shown promising potential in improving OoD detection performance, all previous studies on outlier exposure have been limited to utilizing visual outliers. The paper uncovers the benefits of using textual outliers by replacing real or virtual outliers in the image domain with textual equivalents. Then, it proposes various ways of generating preferable textual outliers. The extensive experiments demonstrate that generated textual outliers achieve competitive performance on large-scale OoD and hard OoD benchmarks. Strengths: 1. The paper is written well and is easy to understand. 2. The studied problem is very important. Section 3 is quite interesting. 3. The results seem to outperform the state of the art. Weaknesses: 1. I am curious about Section 4.1.1 and Section 4.1.3: why do the authors choose to use the images from the validation set of the ID dataset for generation? 2. It is not clear, if the class labels of the outlier dataset are missing, how we can generate description-level supervision in Section 4.1.2. 3. I am curious about the performance of the method without the large-scale pretrained models (CLIP) as the classification backbone. The current approach seems to be highly model-specific, which hinders its general usage. 4. The comparison with a reasonable baseline, i.e., NPOS [1], is missing. [1] Leitian Tao, Xuefeng Du, Jerry Zhu, and Yixuan Li. Non-parametric outlier synthesis. In The Eleventh International Conference on Learning Representations, 2023. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: see above Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for all of the constructive feedback and suggestions. In particular, we appreciate your recognition of the novelty and efficiency of our work and the insightfulness of our analyses. Here, we show additional experimental results to address your concerns. > W1. Why do the authors choose to use the images from the validation set of the ID dataset for generation? Due to computational constraints, utilizing the entire training set for generating textual outliers proves time-consuming. Our findings demonstrate that employing only a validation set yields sufficiently favorable performance. > W2. How can we generate description-level supervision in section 4.1.2? Our method operates without the need for class labels from the outlier dataset. Instead, we leverage the class labels from the in-distribution data to generate textual outliers, excluding the class label from the descriptions. Descriptions without class labels can be identified as outliers. Illustrative examples can be found in Appendix C, Figure 2. > W3. I am curious about the performance of the method without the large-scale pretrained models (CLIP) as the classification backbone. Our method is designed to leverage the power of a joint embedding space that contains both visual and textual embeddings. By definition, such an embedding space is only available in vision-language models, and thus our method is contingent on the use of VLMs. Regarding your concern with the scalability of our method to smaller models, it is worth noting that CLIP offers smaller model options, such as ResNet50. Our method still demonstrates comparable performance even on models of reduced size (176M vs. 77M). For this experiment, we utilized ImageNet1K as the in-distribution dataset. The results are obtained from description-level textual outliers.
| | **parameter size** | **iNaturalist** | **SUN** | **Places** | **Textures** | **AVG** |
|----------|:------------------:|:---------------:|:-------:|:----------:|:------------:|:-------:|
| ViT-B/32 | 176M | 92.07 | 85.24 | 77.40 | 79.77 | 83.62 |
| RN50 | 77M | 91.01 | 82.44 | 74.95 | 81.70 | 82.52 |

In the revised manuscript, we will incorporate these results to offer a more comprehensive set of experiments.

> W4. The comparison with a reasonable baseline, i.e., NPOS [1] is missing.

Following the suggestion of the reviewer, we experimented with NPOS and the results are as follows.

| | **iNaturalist** | **SUN** | **Places** | **Textures** | **AVG** |
|------|:---------------:|:-------:|:----------:|:------------:|:-------:|
| NPOS | 99.14 | 98.06 | 97.69 | 98.19 | 98.27 |
| ours | 98.86 | 97.93 | 98.29 | 98.48 | **98.39** |

We employed the ImageNet100 dataset as the in-distribution data and utilized CLIP-L/14, which incorporates a ViT-L/14 Transformer as the image encoder, for both approaches. The outcome for our method is derived from caption-level outliers. The average AUROC values across all four OoD datasets highlight that our textual outlier approach outperforms NPOS. In the revised manuscript, we will incorporate these results to offer a more comprehensive set of experiments.
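As a rough sketch of how textual outliers could enter an outlier-exposure objective on top of a frozen joint embedding space (our reading of the setup; the random vectors below merely stand in for CLIP embeddings, and the dimensions, batch size, and weight `lam_oe` are hypothetical), the ID term is standard cross-entropy while the outlier term pushes predictions on textual outliers toward the uniform distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_classes, batch = 512, 100, 32
W = 0.01 * rng.normal(size=(num_classes, dim))   # linear head on frozen features

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# stand-ins for frozen embeddings of ID images and of textual outliers
x_id = rng.normal(size=(batch, dim))
y_id = rng.integers(num_classes, size=batch)
x_out = rng.normal(size=(batch, dim))            # e.g. description-level outliers

p_id = softmax(x_id @ W.T)
ce_loss = -np.mean(np.log(p_id[np.arange(batch), y_id] + 1e-12))

p_out = softmax(x_out @ W.T)
oe_loss = -np.mean(np.log(p_out + 1e-12))        # cross-entropy to uniform

lam_oe = 0.5                                     # hypothetical trade-off weight
total_loss = ce_loss + lam_oe * oe_loss
```

With a near-zero head both terms start near log(num_classes); training would then decrease `ce_loss` on ID embeddings while keeping `oe_loss` close to its uniform-prediction optimum on the outlier embeddings.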
Summary: The paper proposes a new method that takes textual outliers to help the model better detect out-of-distribution samples. Specifically, they build the pipeline on top of CLIP models with a classifier and use different representations of the text sample to synthesize outliers in the CLIP space, then use them to train the model with OOD awareness. The authors provide extensive experiments and analysis of the method. Strengths: I think the paper is well-written and flows well. Figures and text are clear and easy to follow. The authors also give a good rationale and background for the problem. The authors approach the OOD problem with textual outliers and the help of CLIP foundation models, which is a novel and plausible method. The authors provide extensive experiments and detailed analysis of the problem and extensively worked on different types of textual outliers. Weaknesses: The authors build the classifiers on top of CLIP, which is a large model with 151 million parameters (~88 million in the image encoder) and implies a large hidden feature space. Comparing this with previous methods may lead to unfair comparisons. The use of CLIP may limit this to other fields like object detection/segmentation. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How can you prove the OOD robustness comes from your method and not from CLIP itself? In the appendix, only the larger CLIP model is studied. However, in the open-sourced CLIP API, they provide smaller models. I wonder how much impact it would have on the metrics as the CLIP shrinks. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The method seems to be evaluated only on classifiers.
The reliance on the CLIP image encoder makes it hard to extend to other problems like object detection, semantic segmentation, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging the novelty of our method and the extensive experimental results. Here, we show various newly conducted experimental results to further verify the effectiveness of our method. > Q1 (W1). How can you prove the OOD robustness comes from your method and not from CLIP itself? In order to demonstrate that the success of our method cannot be attributed solely to the use of CLIP, we have provided results of using CLIP without outlier exposure (labeled as 'None' in Table 5) to substantiate that the performance enhancement is not merely a result of the inherent bias within the CLIP embedding space. Notably, without the integration of textual outliers, the AUROC score is only 51.72, whereas our caption-level textual outliers achieve an AUROC score of 85.53. This disparity underscores that CLIP, without supplementary techniques tailored for OoD detection, yields significantly lower AUROC scores, further attesting that the success of textual outliers is not merely a byproduct of the CLIP embedding space. NPOS [1] is another work that leverages CLIP's embedding space. NPOS synthesizes outliers from in-distribution data and fine-tunes the CLIP image encoder using these synthesized examples. In the table below, we compare the performance of NPOS vs. ours. For this comparison, we employed the ImageNet100 dataset as the in-distribution data and utilized CLIP-L/14, which incorporates a ViT-L/14 Transformer as the image encoder, for both approaches. The outcome for our method is derived from caption-level outliers. Our approach achieves slightly superior average AUROC values across all four OoD datasets, all without necessitating resource-intensive fine-tuning steps. This implies that textual outliers are potent tools for advancing visual OoD detection.
| | **iNaturalist** | **SUN** | **Places** | **Textures** | **AVG** |
|------|:---------------:|:-------:|:----------:|:------------:|:-------:|
| NPOS | 99.14 | 98.06 | 97.69 | 98.19 | 98.27 |
| ours | 98.86 | 97.93 | 98.29 | 98.48 | **98.39** |

In the revised manuscript, we will incorporate these results to offer a more comprehensive set of experiments. [1] Leitian Tao, Xuefeng Du, Jerry Zhu, and Yixuan Li. Non-parametric outlier synthesis. In The Eleventh International Conference on Learning Representations, 2023.

> W2. The use of CLIP may limit this to other fields like object detection/segmentations.

The primary objective of our paper is to improve OOD detection in the context of classification. However, the CLIP encoder can also be extended to object detection or segmentation tasks. Notably, there are approaches designed for object detection [2] and segmentation [3] that are built upon the CLIP architecture. [2] Yiwu Zhong, et al. RegionCLIP: Region-based Language-Image Pretraining. CVPR 2022 [3] Chong Zhou, Chen Change Loy, and Bo Dai. Extract Free Dense Labels from CLIP. ECCV 2022

> Q2. I wonder how much impact it would have on the metrics as the CLIP shrinks.

Following the suggestion of the reviewer, we conducted an ablation study using a smaller model (ViT-B/32 vs ResNet50). For this experiment, we utilized ImageNet1K as the in-distribution dataset. Remarkably, our textual outlier exhibited comparable performance levels (AUROC) even with a significantly smaller model size (176M vs 77M). The results are obtained from description-level textual outliers.
| | **parameter size** | **iNaturalist** | **SUN** | **Places** | **Textures** | **AVG** |
|----------|:------------------:|:---------------:|:-------:|:----------:|:------------:|:-------:|
| ViT-B/32 | 176M | 92.07 | 85.24 | 77.40 | 79.77 | 83.62 |
| RN50 | 77M | 91.01 | 82.44 | 74.95 | 81.70 | 82.52 |

In the revised manuscript, we will incorporate these results to offer a more comprehensive set of experiments. --- Rebuttal Comment 1.1: Comment: Thank you for your clarification. I think it's a good paper and hope it finds its way into the conference.
Summary: This paper studies visual OOD detection by introducing textual outliers under the outlier exposure paradigm. Different from previous research focused on utilizing visual outliers, this work explores the benefits of textual outliers in the image domain. Specifically, they propose different ways to generate textual outliers based on the powerful GPT model. Comprehensive experiments have been conducted to demonstrate the effectiveness of textual outliers in OOD detection. Strengths: 1. From the perspective of originality, to the best of our knowledge, this paper is the first work to explore the potential of textual exposure with multimodal neural networks for outlier exposure. 2. The overall presentation of this work is good, with intuitive illustrations and clear organization. The framework and the proposed textual outlier generation are easy to understand. 3. Instead of straightforwardly utilizing the LLM under the multimodal paradigm for visual OOD detection, this paper investigates the practical adaptation of different textual outlier types in the outlier exposure framework. 4. Experiments from different perspectives have been conducted to demonstrate the effectiveness of description-level textual outliers. Weaknesses: 1. Regarding the technical level, exploring textual information (like description level or caption level) under the multimodal setting for the image domain has been studied in [1]. The general paradigm, e.g., using the LLM to generate the corresponding description or caption and then utilizing them in the image task, is similar. 2. Compared with conventional outlier exposure, the training objective and test-time OOD detection show limited new design beyond previous methods. 3. Compared with the generation part, the filtering part seems to be more important, as it has a closer relationship with the data quality for outlier exposure. However, the current filtering process is a little bit heuristic. 4.
Considering the performance, the improvement compared with other advanced methods of using pure image outliers is not very significant (like Table 5 compared with DOE). Could the authors provide more explanation or discussion? I generally appreciate the idea of introducing and excavating the potential of textual outliers. I hope the previous comments or the later questions can help the current contribution to be clearer and enhanced. [1] Sachit Menon and Carl Vondrick. Visual classification via description from large language models. In The Eleventh International Conference on Learning Representations, 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Regarding the current version of the draft, I have the following question: 1. Could the authors discuss more the unique contribution of OOD detection beyond the general paradigm of utilizing textual information? 2. Could the authors provide more examples or system comparisons of the three different types of textual outlier generation? 3. Could the authors discuss more why the performance of textual OE is not consistently performing better than visual OE? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This paper has discussed the limitation, e.g., the proposed textual outlier method involves a heuristic selection process to refine the outputs of the generative model and also provided the potential solution in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing constructive feedback and suggestions. In particular, we appreciate your recognition of the novelty and efficiency of our work and the insightfulness of our analyses. Here, we hope that a more detailed explanation can address your concerns. > Q1 (W1, W2). Could the authors discuss more the unique contribution of OOD detection beyond the general paradigm of utilizing textual information? Our contribution is primarily centered around the proposition of textual outliers. Through a comprehensive analysis, we illuminate the potential of textual outliers as valuable indicators for visual OoD detection (as elaborated in Section 3). We propose a range of textual outlier categories (outlined in Section 4) and identify key attributes of effective textual outliers (discussed in Section 5). As the reviewer pointed out, the distinctiveness of our method does not lie in the process of generating text through the LLM. Our focus is on utilizing these texts as textual outliers for the OoD detection task. Identifying informative textual outliers within the generated text demands a nuanced and intricate approach (W1). Our textual outlier approach is not constrained to using only uniform loss or energy scores, as demonstrated in our paper; it can also be applied with other objective functions or OoD scores. This flexibility enhances the scalability of our method (W2). Going beyond the utilization of textual outliers, our method exhibits computational efficiency by achieving competitive performance solely through training of a linear classifier. This advantage in computational efficiency can come in particularly handy when new OoD instances emerge frequently and outlier exposure must be performed repeatedly. > W3. The current filtering process is a little bit heuristic.
While our filtering approach includes heuristic elements, we address this by conducting ablation studies for each method to determine the optimal parameters (Appendix B.4, B.5). > Q2. Could the authors provide more examples or system comparisons of the three different types of textual outlier generation? Here are a few examples of textual outliers for ImageNet. We offer three distinct types of textual outliers for each class. Class: Warplane * word: “This is a photo of army”, “This is a photo of airport”, “This is a photo of airliner” * desc: “a photo of large and powerful", "a photo of designed for carrying weapons and other military equipment", "a photo of typically has a camouflage paint job", * caption: “a small plane with a window on the side”, “a silver airplane”, “a plane with a large window on the front”, “a plane taking off” Class: Greater Swiss Mountain Dog * word: “This is a photo of green”, “This is a photo of puppy”, “This is a photo of pets” * desc: "a photo of large, fluffy white dog", "a photo of black or brown markings on the face, ears, and tail", "a photo of long, thick coat" * caption: “two dogs are playing with each other”, “a dog with a white face”, “a dog with its tongue out” Based on the provided examples, our word-level outliers convey more abstract concepts. Similarly, caption-level outliers mostly contain descriptions of background elements or lack class-specific attributes. Description-level outliers include class-relevant information, but when the class label is omitted, they become very vague and difficult to interpret. In the revised manuscript, we will incorporate additional examples to enhance understanding. > Q3 (W4). Could the authors discuss more why the performance of textual OE is not consistently performing better than visual OE? The results of using pure image outliers are presented in Table 5 under the label 'OE,' where our textual outlier approach notably outperforms the OE baseline. 
It should be noted that DOE is not strictly categorized as a pure image outlier method. Therefore, the fact that textual outliers do not consistently outperform DOE does not imply that textual outliers do not consistently outperform pure image outliers. While DOE utilizes model perturbations, which are computationally demanding, our approach only requires training a lightweight linear classifier, making it computationally efficient. A key advantage of textual outlier exposure over DOE, albeit insufficiently highlighted in the current manuscript, lies in its computational efficiency. We will elaborate on this discussion in the revised manuscript to further underscore the benefits of adopting textual outlier exposure. --- Rebuttal Comment 1.1: Title: Thanks for the response! Comment: Thanks for the detailed response! Most of my concerns are well addressed by the further clarification and I appreciate the idea of exploring textual outliers in OOD detection.
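For reference, the energy score mentioned in this thread (shown here in its standard form from the OoD-detection literature; the temperature default is illustrative, not necessarily the paper's exact setting) scores a sample by the negative log-sum-exp of its logits, with lower energy indicating a more in-distribution sample:

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    # E(x) = -T * logsumexp(logits / T); lower energy => more in-distribution
    z = np.asarray(logits, dtype=float) / temperature
    m = z.max()  # subtract the max for numerical stability
    return float(-temperature * (m + np.log(np.sum(np.exp(z - m)))))

peaked = energy_score([10.0] + [0.0] * 9)   # confident, ID-like prediction
flat = energy_score([0.0] * 10)             # uninformative, OoD-like prediction
```

A peaked logit vector yields a much lower (more negative) energy than a flat one, so thresholding the energy separates in-distribution from outlier inputs.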
Summary: This paper addresses the challenge of detecting Out-of-Distribution (OOD) data by introducing "textual outlier exposure" as an alternative to visual outliers. Instead of relying on visual examples, the authors explore the benefits of using textual equivalents in OOD detection. They propose various methods for generating textual outliers, which are validated through extensive experiments on large-scale OOD benchmarks. The findings demonstrate that the generated textual outliers outperform visual outliers and establish criteria for effective textual outliers, including their proximity to the data distribution, descriptiveness, and incorporation of visual semantics. The contributions of this work include investigating the potential of textual outlier exposure with multi-modal neural networks, utilizing large-language models for generating textual outliers at different levels of detail, and validating their effectiveness in various OOD detection scenarios. The paper presents a novel and promising approach to OOD detection, showcasing the advantages of textual outliers over visual counterparts and providing valuable insights for designing impactful textual outliers. Strengths: - The paper presents an innovative and compelling idea, offering valuable insights for open-world learning, Out-of-Distribution (OOD) detection, and online learning. The findings of this study have significant implications for various research studies in these domains. - The clarity and coherence of the paper are commendable, making it easy to comprehend and follow the authors' methodological approach. - The inclusion of illustrative examples effectively enhances the understanding of key concepts. - The figures and plots presented in the paper are visually clear, aiding in the visualization and interpretation of the experimental results. - The paper offers comprehensive experimental studies, demonstrating the efficacy of the proposed method. 
The promising performance observed in these experiments further strengthens the validity and potential impact of the proposed approach. Weaknesses: While the paper presents detailed experimental studies, there are two notable weaknesses that should be addressed: - The focus of the experimental studies is primarily limited to outlier exposure methods, neglecting the exploration of other types of Out-of-Distribution (OOD) detection methods. It would be valuable to consider and compare the proposed approach against alternative techniques in order to provide a more comprehensive evaluation. - The omission of an ablation study to assess the impact of network architecture on the proposed methodology is a notable oversight. Investigating the influence of different network architectures on the performance of the proposed approach would enhance the understanding of its strengths and limitations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The experimental evaluation in your paper is commendable. However, I have two inquiries regarding the experiments: - It appears that the recent state-of-the-art methods, such as ASH [1] and GradNorm [2], have not been included in your experimental comparisons. Could you provide insights into the rationale behind this omission? It would be valuable to report and compare the performance of your proposed method with these state-of-the-art approaches to provide a comprehensive assessment. - Additionally, considering the impact of different network architectures on your method is crucial. Could you elaborate on how varying network architectures affect the performance and effectiveness of your proposed approach? Investigating this aspect would provide deeper insights into the robustness and generalizability of your method. [1] Djurisic, Andrija, et al. "Extremely simple activation shaping for out-of-distribution detection." arXiv preprint arXiv:2209.09858 (2022). [2] Huang, Rui, Andrew Geng, and Yixuan Li. 
"On the importance of gradients for detecting distributional shifts in the wild." Advances in Neural Information Processing Systems 34 (2021): 677-689. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging the novelty of our method and the extensive experiments. Here, based on your comments, we show various newly-conducted experimental results to further verify our method’s effectiveness. > Q1 (W1). It appears that the recent state-of-the-art methods, such as ASH [1] and GradNorm [2], have not been included in your experimental comparisons. Following the suggestion of the reviewer, we have compared our method with ASH and GradNorm, demonstrating superior performance in comparison to both. The table below presents the results of OoD detection performance, evaluated using AUROC, for a model trained on ImageNet1K across four OoD datasets. We would like to note that all the results are derived from the ResNet-101 architecture (BiT-S-R101 for GradNorm and ASH, CLIP-RN101 for our approach). In this experiment, we solely present results based on caption-level outliers for our method. In the revised manuscript, we will incorporate these results to offer a more comprehensive set of experiments.

| | **iNaturalist** | **SUN** | **Places** | **Textures** | **AVG** |
|----------|:---------------:|:-------:|:----------:|:------------:|:-------:|
| GradNorm | 90.32 | 89.02 | 84.82 | 81.07 | 86.30 |
| ASH | 94.84 | 88.72 | 86.61 | 88.40 | 89.64 |
| ours | **95.54** | **91.84** | **88.42** | **91.70** | **91.93** |

Additionally, for your reference, we have conducted and reported a comparative analysis of our method with post-hoc OoD detection methods in Appendix A.3, presented in Table 4.

> Q2 (W2). Could you elaborate on how varying network architectures affect the performance and effectiveness of your proposed approach?

Following the suggestion of the reviewer, we conducted experiments to compare the performance of our method across two different image encoder architectures, ResNet and ViT, the two architectures offered by CLIP.
Importantly, our method consistently produces encouraging outcomes, even when applied to CLIP models built upon the ResNet architecture. We selected RN50x4, designed with a parameter size akin to that of ViT-B/32 (174M vs 176M). The performance analysis reveals a comparable trend between RN50x4 and ViT-B/32, resulting in AUROC scores of 86.43 and 87.55, respectively. For this experiment, we utilized ImageNet1K as the in-distribution dataset. The results are obtained from caption-level textual outliers.

| | **parameter size** | **iNaturalist** | **SUN** | **Places** | **Textures** | **AVG** |
|----------|:------------------:|:---------------:|:-------:|:----------:|:------------:|:-------:|
| RN50x4 | 174M | 91.69 | 85.19 | 82.93 | 85.94 | 86.43 |
| ViT-B/32 | 176M | 94.55 | 86.59 | 81.51 | 79.48 | 87.55 |

In the revised manuscript, we will incorporate these results to offer a more comprehensive set of experiments. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing responses to my questions and concerns. Would you please provide the FPR values of your reported experiments, as it is a very important metric of OOD method performance? --- Reply to Comment 1.1.1: Comment: Thank you for expressing interest in further results. Presented below are tables summarizing the FPR and AUROC values for the two experiments: the comparison with recent state-of-the-art methods and the comparison across architectures.
> for Q1

| | **iNaturalist** | | **SUN** | | **Places** | | **Textures** | | **AVG** | |
|----------|:-----------------:|:-------:|:---------:|:-------:|:------------:|:-------:|:--------------:|:-------:|:---------:|:-------:|
| | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC |
| GradNorm | 50.03 | 90.32 | 46.48 | 89.02 | 60.86 | 84.82 | 61.41 | 81.07 | 54.69 | 86.30 |
| ASH | 30.95 | 94.84 | 56.17 | 88.72 | 59.17 | 86.61 | 52.75 | 88.40 | 49.76 | 89.64 |
| ours | **20.19** | **95.54** | **46.17** | **91.84** | **52.23** | **88.42** | **50.62** | **91.70** | **42.30** | **91.93** |

> for Q2

| | **iNaturalist** | | **SUN** | | **Places** | | **Textures** | | **AVG** | |
|:--------:|:---------------:|:--------:|:-------:|:--------:|:----------:|:--------:|:------------:|:--------:|:-------:|:-------:|
| | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC |
| RN50x4 | 51.53 | 91.69 | 64.13 | 85.19 | 66.29 | 82.93 | 57.93 | 85.94 | 59.97 | 86.43 |
| ViT-B/32 | 32.92 | 94.55 | 55.68 | 86.59 | 70.54 | 81.51 | 74.29 | 79.48 | 58.35 | 87.55 |

Indeed, the FPR results exhibit a similar trend to the AUROC results.
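For readers unfamiliar with these metrics, FPR95 (false positive rate at 95% true positive rate) and AUROC can both be derived from per-sample detection scores. The following is an illustrative sketch, not the authors' code; the score convention (higher score = more likely in-distribution) and the function names are assumptions:

```python
# Minimal sketch of the two OoD detection metrics reported above.
# Convention (assumed): higher score = more likely in-distribution (ID).

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR on OoD samples, with the threshold set so that at least
    95% of ID samples score above it (i.e. TPR >= 0.95 on ID)."""
    thresh = sorted(id_scores)[int(0.05 * len(id_scores))]
    return sum(s >= thresh for s in ood_scores) / len(ood_scores)

def auroc(id_scores, ood_scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    random ID sample outscores a random OoD sample (ties count 0.5)."""
    wins = sum((i > o) + 0.5 * (i == o) for i in id_scores for o in ood_scores)
    return wins / (len(id_scores) * len(ood_scores))
```

A perfect detector separates the two score distributions completely, giving FPR95 = 0 and AUROC = 1; the tables above report these numbers as percentages.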
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewers' time and invaluable feedback. The unanimous consensus among reviewers highlights our paper's insightful contribution on textual outliers (T6K7, Lz76, JSZe, yebH). The reviewers also commend the novelty and significance of the addressed problem (T6K7, JSZe, yebH), along with acknowledging the method's effectiveness through extensive experiments (T6K7, Lz76, JSZe). Additionally, we are pleased that the reviewers found the paper clear and easily understandable (Lz76, JSZe, yebH). We respond to each reviewer's comments in detail below. We will incorporate the reviewers' suggestions into the manuscript revisions, which we believe will significantly enhance the paper's strength.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models
Accept (poster)
Summary: The authors performed an extensive experimental study on various image-based generative models. The study showed that no existing metric strongly correlated with human evaluations. The authors also included alternative self-supervised feature extractors for evaluation. Additionally, data memorization was investigated. The experiments revealed limitations of existing evaluation metrics for generative models. Strengths: The authors implemented extensive experiments to evaluate various image-based generative models from different perspectives, e.g., encoders, human evaluations, diversity, and memorization. The evaluation was performed using several state-of-the-art evaluation metrics including humans. The comparison results are expected to have a high impact on evaluating existing generative models. Weaknesses: The experiments showed that no existing metric strongly correlated with human evaluations. There is some concern about how the human error rate is calculated. The reviewer wonders whether the conclusion would change by improving the calculation of the human error rate. See detailed comments under Questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Human error rate: this metric is calculated as the fraction of images which were incorrectly classified. The incorrectly classified cases include both real-->fake and fake-->real. From the reviewer's perspective, these two cases should be treated separately, and the fake-->real case should be more important for evaluation. For example, if there are a lot of real-->fake cases from one human participant, this probably indicates that this human participant has some judgment issues and his/her results cannot be trusted. One thing we probably can try is to use the real-->fake case to evaluate whether this human participant can be trusted or not, and then use the fake-->real case as the human error rate. Of course, there are definitely other ways to improve that.
2. Memorization: how should pixel-wise memorization and reconstructive memorization be evaluated mathematically? Based on these two metrics, how should the threshold values be set to determine memorized samples? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations of their work, and will probably address them in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review, we appreciate the time and effort that went into evaluating our work, and we are glad you found that the “results are expected to have a high impact on evaluating existing generative models”. - On concerns with the “human error rate” metric: The human error rate metric was established and explored in [91] which we build on in our work. We agree that the individual real-->fake and fake-->real metrics are useful for various diagnostic purposes, but there are a number of conceptual reasons why we prefer the full human error rate as in [91] for ranking generative models by human preference. Still, we included the three metrics for all generative models in Appendix C.3 as the “Error Rate”, “R Error Rate“, “F Error Rate”, respectively, and they are all very highly correlated. - On a conceptual basis, the average of R Error Rate and F Error Rate, what we simply call Error Rate, is our preferred choice to rank models as it has a few properties that neither R Error Rate nor F Error Rate have in isolation. Focusing on the suggestion that “the fake-->real case should be more important for evaluation”, we note that F Error Rate does not detect a scenario of “hyper-realistic generation”, where fake images look more realistic than the dataset on which a model is trained. For example, on our ImageNet and LSUN-bedroom datasets, some images from the training set look “less realistic” due to effects of data curation (lower resolution, aspect ratios, etc.). Nevertheless, these are real images that the model was trained on, and a generative model which has learned the true data distribution should be capable of generating images with these features. The F Error Rate metric does not capture this, but R Error Rate does, and hence Error Rate does as well. 
- On a quantitative basis, we note that R Error Rate, F Error Rate, and Error Rate are very highly correlated with each other - *Pearson’s correlation coefficient between F Error Rate and Error Rate over all models is 0.99, and replacing the Error Rate with the F Error Rate results in no change to the trends seen in Figure 4 or the rest of the results*. One implication of the high correlation is that R Error Rate is not constant across experiments using the same training dataset, but different generated datasets, as one might suppose *a priori*. Instead, when the generated images are more realistic, humans make more mistakes on the real images as well. This phenomenon was previously discussed in [91], so we only discussed it in Appendix C.3. - On a diagnostic basis, when analyzing individual participant trials, we found no evidence of a significant divergence of the measures (i.e. we saw no evidence of a high R Error Rate and low F Error Rate, which would be evidence of a participant simply selecting “fake” for every image). With the observation above that R Error Rate is higher when generated images are more realistic, we cannot filter out participants simply for having a high R Error Rate, because this is a normal outcome on the more challenging tasks. Thus, using this metric as a diagnostic to rule out poor performing participants would introduce a bias into the final results, as we would have to set a model-dependent threshold on the R Error Rate guided by prior knowledge. - On memorization: The questions you raise on defining the concepts of reconstructive memorization mathematically are very interesting, but to our knowledge there is no consistent formal definition for this in the literature and it is an open-ended question that deserves further study. While pixel-wise memorization is more direct to evaluate, it still requires a choice of threshold values to determine a sample as memorized. 
We followed the examples set by [11] and [58] for our analyses in Section 4.3, and included full details for reproducibility in Appendix B.2. We performed the memorization analysis as we felt it was necessary to determine if models with good FD_DINOv2 scores (i.e. mostly diffusion models) are just memorizing the training data, and thus “cheated” their human alignment; indeed we found that this is not the case. To our knowledge a memorization analysis has not been performed on the datasets in our paper. --- Rebuttal Comment 1.1: Title: comments on responses Comment: Thanks. The reviewer has read all responses, which has generally addressed the concerns. By reading other reviewers' comments, the reviewer decided to keep the current score.
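The combined Error Rate and the Pearson correlation discussed in the rebuttal above are straightforward quantities; the following is a minimal illustrative sketch (the per-model rates below are made-up numbers, not the paper's data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-model rates: fraction of real images judged fake
# (R Error Rate) and fraction of fake images judged real (F Error Rate).
r_err = [0.10, 0.20, 0.30]
f_err = [0.15, 0.30, 0.40]
# The combined Error Rate is their average, as described in the rebuttal.
err = [(r + f) / 2 for r, f in zip(r_err, f_err)]
```

On data like this, where both component rates rise together across models, `pearson_r(f_err, err)` comes out close to 1, mirroring the 0.99 correlation the authors report between F Error Rate and Error Rate.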
Summary: This paper tries to address the limitations of current evaluation metrics for generative models, focusing on the perceptual fidelity of diffusion models. The authors conduct an extensive study using a wide range of image-based generative models across diverse datasets. They employ psychophysics to measure human perception of image realism and compare it with existing evaluation metrics. The paper reveals that commonly used metrics, such as FID, do not align with human evaluations of perceptual realism in diffusion models. The authors attribute this discrepancy to over-reliance on the Inception-V3 network. They propose using alternative self-supervised feature extractors to improve the evaluation of generative models. Additionally, the paper explores the issue of data memorization in generative models and highlights the limitations of current metrics in detecting memorization accurately. The authors release the generated image datasets, human evaluation data, and a library to compute 15 common metrics for 8 different encoders, facilitating further research in this area. Strengths: - The authors comprehensively point out the issues with the existing evaluation metrics for generative models. - The article presents a considerable number of insights and involves a substantial amount of work. Weaknesses: - Figure 1 lacks intuitiveness, and it is recommended to further improve it so that readers can grasp its meaning within a short period of time. - The conclusions of the article are extensive, making it challenging to grasp the main points. It is suggested to enhance the writing by highlighting key conclusions, particularly in the introduction section.
- If possible, I suggest conducting a bias analysis on the 1000 paid participants, such as examining whether these participants are all students or individuals involved in the field of artificial intelligence, or if the age distribution is concentrated within a specific range, like 20-22 years old, and so on. This preliminary examination of bias can enhance the reliability of the experimental results. Of course, privacy concerns should be carefully addressed during the analysis. Overall, I greatly appreciate this work and recommend acceptance of the article. Furthermore, I strongly suggest the authors share the code for analyzing the figures and charts in the article to facilitate further analysis by other researchers. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see weakness. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Despite the considerable experimental analysis and insights presented by the authors in this paper, I suggest that they provide a detailed discussion of the limitations of the content being discussed. It would be preferable to dedicate a separate section at the end of the article specifically for this purpose. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review, we appreciate the time and effort that went into evaluating our work, and we are glad you found it “extensive” and “comprehensive”. Please see our general rebuttal for an answer to your question about potential bias of the participants, where we provide an additional examination of demographics. In short, the paid participants were very diverse in terms of age, gender, and ethnicity, but there is no significant connection between the demographic attributes and performance on our tests. We plan to release the demographic data along with the performance data, where we have the participants’ consent to do so, to instill confidence in the data we collected. As to your other points: - On Figure 1 lacking intuitiveness: Could you please elaborate on what in the figure was unclear? We strive to provide our results in an easy-to-understand format and will happily update this figure accordingly to improve the clarity of the final paper. - On the main takeaways not being highlighted enough: Thank you for pointing this out, we will reformat the introduction in the final version of our paper so as to better summarize the key results of our paper. --- Rebuttal Comment 1.1: Title: re: Rebuttal by Authors Comment: Thank you for your reply and I have also read the comments of other reviewers. I tend to keep my score.
Summary: In this paper, the authors conduct a thorough investigation into the limitations of the Frechet Inception Distance (FID) metric for evaluating generative models. They address this issue by performing human evaluation and proposing a superior alternative for automatic generative model evaluation. Through dedicatedly designed experiments, the authors empirically demonstrate that FID, which relies on a pre-trained InceptionV3 model, exhibits a weak correlation with human evaluation. However, by replacing the InceptionV3 with a self-supervised model like DINO, the automatic evaluation becomes more closely aligned with human evaluation. The insights presented by the authors offer valuable guidance on the appropriate approach for evaluating image generative models. Strengths: [1] This paper addresses a highly significant issue within the generative modeling community. The authors' findings, demonstrating the lack of correlation between FID and human fidelity judgment, are particularly intriguing to me. [2] The proposed alternative, FD_{DINO}, appears to be a sensible solution and holds promise for future evaluations of generative models. [3] I think the provided benchmark tables are helpful for researchers trying to evaluate their generative models. [4] In my opinion, the paper is well-written and makes a noteworthy contribution to the community. Weaknesses: [1] One potential concern is that all interpretations in the paper are based on the assumption that human evaluation is entirely correct, which may introduce some inherent risks. [2] Providing further explanations of the evaluation metrics would enhance the paper's accessibility for individuals who are not experts in this particular field. [3] Adding more detailed explanations about the differences between the authors' paper and closely related work [R1] would be beneficial for better understanding the novelty and contributions of the presented work. [R1] M. Yang, C. Yang, Y. Zhang, Q. Bai, Y. Shen, and B. 
Dai. Revisiting the evaluation of image synthesis with GANs. arXiv preprint arXiv:2304.01999, 2023. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: please refer to the weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I believe this paper delivers a crucial message to the community by emphasizing the inadequacy of FID as a metric for evaluating image generative models. Additionally, the authors propose a substantial improvement by changing from InceptionV3 to DINO as the backbone, which significantly mitigates this issue. While it may not completely resolve the problem, it remains a commendable contribution. Considering these contributions, I would give a score of 7 to this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed and helpful review. We are thrilled to hear you frame our insights as “valuable guidance on the appropriate approach for evaluating image generative models.” You rightly mentioned that including more details on the metrics in our paper will make it more accessible - please see the general response where we address that. We address your other concerns below: - “All interpretations in the paper are based on the assumption that human evaluation is entirely correct”: This is essentially true. We carried out this study with the premise that human alignment is superior because humans are the users of the generated output. If a particular neural net architecture were the intended viewer, for instance, we would need to evaluate images differently. This is the premise of GANs, and as we demonstrate in our work, generative models that can fool a neural net classifier need not look the most realistic to humans. Taking a practical perspective, we do believe that human perception is the best metric available for image fidelity. We highlight that we did our best to capture it with the cleanest crowd-sourced data possible through our experimental design informed by best practices from the literature [20, 19], namely by including a long training period and financial incentives for correct answers. This was also the reason why we structured our experiment to provide direct comparison between generated and training images, and only compared generative models that were trained on identical sets of images, as we wanted to separate out notions of fidelity to those of “style”. We will include this discussion in the final version of our paper. - "Further explaining the differences between our work and Yang et al. (2023) would be helpful": We believe lines 172 to 181 of our manuscript do contain the fundamental differences between our work and Yang et al. 
(2023), which was concurrent with our study (the first public version was released April 4, less than two months before the NeurIPS submission deadline). Namely, this work’s trials targeted much more ambiguous tasks than our two-alternative forced-choice task. Rather than a direct measurement of the ability of humans to distinguish real and fake images, their main user study instead scored whether each generated image was “photorealistic”, yet participants had no knowledge of or familiarity with data samples from the training set, and received no training or compensation. This lack of alignment between their human evaluation study and the goal of FID and similar generative metrics of quantifying distributional distances between real and generated images makes their analysis difficult to interpret. This difficulty of interpretation, coupled with a much smaller and less rigorous human evaluation setup (they used fewer models and collected responses from 10 times fewer participants), constitutes a fundamental difference. Nevertheless, we are excited to see that others see value in evaluating generative metrics. --- Rebuttal Comment 1.1: Title: re: Rebuttal by Authors Comment: The authors addressed all my concerns in the rebuttal. I am inclined to maintain my original score of 7.
Summary: This paper initially demonstrates which embedding space is similar to human evaluation criteria by utilizing various datasets and image generation models. It reveals that the embedding space of DINOv2 aligns most closely with the tendencies identified through a large-scale human survey. Moreover, it highlights the fact that most of the commonly used metrics do not align with human preferences. Additionally, the paper proposes approaches to address the issue of memorization, assessing whether the models are simply reproducing existing images. Strengths: The limitations of the traditional FID metric were widely recognized, and several papers were already trending towards using CLIP FD as the primary metric. While FID captures typical trends well, its clear limitations prompted this paper to explicitly demonstrate these constraints and advocate for the use of specific models' embedding spaces. The paper made significant contributions by conducting large-scale surveys to create metrics for human evaluation and by publicly sharing the generated images used in subsequent analyses, which is particularly valuable considering the time-consuming nature of Diffusion models. Furthermore, the paper addressed the issue of memorization and provided a quantitative representation of it. Weaknesses: The setting of using DINOv2's gradCAM is inevitably beneficial. It would be advantageous to include comparisons with metrics such as LPIPS as well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is there an intention to make all the features publicly available? Additionally, what distinguishes this paper from the next paper? https://dreamsim-nights.github.io/ Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The novelty may not be significant; however, I believe the experiments conducted in this paper hold sufficient value. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review and helpful feedback! To address your questions: 1. *Is there an intention to make all the features publicly available?* Yes, we will be making all the features of the code publicly available, and all of the datasets. Hopefully this will help facilitate further research into evaluating deep generative models. (If you meant feature representations of the images, we did not intend to release these, but they are easy to compute using the code and images we will provide.) 2. *What distinguishes this paper from https://dreamsim-nights.github.io/?* In short, our paper is about evaluating deep generative models, while theirs focuses on image-to-image similarity, similar to LPIPS. To be specific, they create a dataset of human-perceived image-to-image similarities and design an encoder that reflects these human judgements. On the other hand, our human trials focus on image quality rather than similarity, and our subsequent analysis uses these human judgements to evaluate current metrics for generative models. In other words, DreamSim focuses on obtaining an encoder which maps images that humans assess as similar to nearby points in latent space, whereas we focus on finding an encoder where distances between probability distributions on its latent space, such as FD, correlate with human judgment. It is natural to wonder whether their improved DreamSim encoder might provide a better representation space for evaluating generative models than current self-supervised models such as DINOv2; however, when the paper came out, we ran a preliminary comparison and found it did not. For example, we found that the OpenCLIP-DreamSim fine-tuned encoder had nearly identical human/FD alignment as the original OpenCLIP encoder, and that their ensemble of encoders did not align with our human experiments as well as DINOv2 does.
Unfortunately the DreamSim paper was first publicly released June 15, well after NeurIPS submissions had closed, so we were unable to describe these results in the paper (but will in the final version). --- Rebuttal Comment 1.1: Comment: Thank you. I decide to keep the current score.
Rebuttal 1: Rebuttal: We thank all the reviewers for their thoughtful feedback and the time they spent assessing our work. We are very encouraged by the largely positive feedback, including that our paper provides “significant contributions” (uifn), “presents a considerable number of insights” (uTCz), and “delivers a crucial message to the community” (6HZb). Below we address themes raised by more than one reviewer, and otherwise reply to each reviewer individually. 1. On selection, diversity, and potential bias of participants (xpCe, uTCz): Thank you for bringing this up. We agree that it is relevant to discuss, and that doing so will enhance our paper. Our attached rebuttal pdf includes demographic information about the participants (only those who explicitly agreed to provide and to allow us to use this information are included). It is clear that the set of participants is diverse, and that the results of the human evaluation experiment exhibited no biases from any of the diverse groups of participants (as the difference in means of normalized error rates between demographic groups is much smaller than the corresponding standard deviations). While our participants were recruited from a global crowd-sourcing platform, we did not explicitly select participants based on diversity due to various reasons: - In the first experiments we ran (on CIFAR-10), we found no systematic difference in participant’s performance based on demographics, and we thus decided to not filter based on demographics on subsequent experiments. This is indeed confirmed once again in the post-hoc analysis (on all datasets) included on the rebuttal pdf. - Not only do we not observe an empirical difference in responses based on demographics, but there is also no theoretical reason to believe there might be one. The dominant perspective in visual cognition is that vision is impenetrable, which is another way of saying bottom-up, or not influenced by cognition [A,B]. 
Another way of thinking about this would be to imagine the architecture of the visual system as a CNN. The processing is predominantly in the direction of the forward pass, i.e., from the retinae to the primary visual cortex to areas of higher cognition. Backward connections exist but serve the stability of perception rather than the content. In other words, a person’s culture or ethnicity is unlikely to affect the forward pass. - [A] Pylyshyn, Z. (1999). Is vision continuous with cognition?: The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22(3), 341-365. - [B] Firestone, C., & Scholl, B. J. (2016). Cognition does not affect perception: Evaluating the evidence for “top-down” effects. Behavioral and Brain Sciences, 39, e229. - Conventionally, studies in psychophysics do not report demographic information on the participant pool beyond age and gender. This largely reflects widespread belief in the views expressed in the point above. As of August 4th 2023, a cursory examination of the 20 most recent open access articles in *Visual Cognition*, a popular journal dedicated to studies in visual psychophysics, confirms this. 17 of the 20 were empirical studies. Of these, all report age and gender. Only two report any demographic information on the participants’ ethnicity. Both were studies on face perception, which is a subfield where exposure to faces of different ethnicities has known effects on performance. In other words, unless there is an *a priori* reason to expect an effect of ethnicity or another demographic variable (other than age and gender), it would be unusual to report them. Nonetheless, we are happy to report additional demographic information anyway in the final version of our paper. In summary, our participants are diverse in spite of diversity not being a source of concern for bias in our study, and we find no systematic difference in participants’ performance based on demographics. 2.
On including more detailed definitions of the metrics used in our paper (xpCe, 6HZb): Thank you for pointing out this area of improvement. In the final paper we will expand upon the current descriptions to add the full mathematical descriptions and explanations for each of the metrics used throughout the paper. Pdf: /pdf/cb26ddd46d94bcbd8aef052af6880df4c6318921.pdf
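For reference, the Fréchet distances discussed throughout this thread (FID, FD_DINOv2) fit a Gaussian to real and to generated feature sets and compare the two. The general formula is $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2})$; the sketch below is a simplification restricted to diagonal covariances so the matrix square root reduces to elementwise operations (real implementations use full covariance matrices, and the function name here is made up):

```python
def frechet_distance_diag(mu1, sig1, mu2, sig2):
    """Squared Frechet distance between Gaussians N(mu1, diag(sig1^2)) and
    N(mu2, diag(sig2^2)). With diagonal covariances the trace term
    Tr(S1 + S2 - 2(S1 S2)^0.5) reduces to sum_i (sig1_i - sig2_i)^2."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum((s - t) ** 2 for s, t in zip(sig1, sig2))
    return mean_term + cov_term
```

The distance is zero only when the two fitted Gaussians coincide; the debate in this paper is not about this formula but about which encoder produces the features to which it is applied.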
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper constructs an image dataset sampled from various generative models and scored by human participants in terms of their fidelity, and argues that existing metrics do not correlate well with this notion of fidelity. Then, it investigates how different choices of embedding space (i.e. different encoders) affect the metrics in terms of their ability to measure diversity and fidelity, and concludes that DINOv2 can be a superior choice of encoder. Finally, the paper studies whether existing memorization metrics can effectively measure memorization in high resolution datasets, and argues that none are reliable. Strengths: The paper asks interesting and important questions regarding the reliability of existing metrics of generative performance, and provides many valuable experiments with unique insights on the limitation of existing metrics for measuring fidelity and diversity of generative models. The proposed dataset of human evaluation can be a useful addition to the existing datasets for measuring and improving generative performance. Weaknesses: My main criticism of this paper is that it is not one coherent paper, rather parts of three different papers: one paper on constructing a dataset of human evaluation of generative models, a second paper on how the choice of encoders affects existing metrics, and a third paper on measuring memorization in high resolution datasets. As a result of this, many important results are shifted to Appendix (Appendices B and D in particular), and several observations are poorly explored and studied. I think each of these three directions deserves its own focused paper, carefully considering the caveats, and avoiding broad unjustified claims. I’ll elaborate on my specific concerns below. ***Regarding the dataset:*** 1. I think the human experiment is flawed in the sense that “choose the fake sample” could result in choosing the unlikely samples as fake (confusing fakeness with unlikeliness). 
For example, a contorted low quality image of the common crow can be rated as “real” whereas a high quality image of an exotic rare parrot can be rated as “fake”. Therefore, a diverse model can receive a lower human error rate compared to a model of lower quality but less diversity. The experiments must propose a way to control for the effect of rarity, otherwise the dataset can be misleading. 2. Another issue with the human experiment is that it is unclear how the participants were selected. For example, if participants are mostly of one ethnicity, it is possible that they penalize diversity in place of fakeness. I understand that this issue is not easy to solve, but at the very least, I expect to see some effort in making sure the human participants are of sufficiently diverse backgrounds and origins. 3. For the results in Table 1, the difference being statistically significant alone is not enough, the amount of difference is itself of value. For example, a 1 percent difference is not as important, albeit statistically significant, as a 10 percent difference in human error rates. Reporting the actual numbers in a table similar to Table 1 will clarify this matter. 4. I am not sure how the main claim of “unfair treatment of diffusion models” in the title is justified. The brief explanation in L198-199 is too broad: “Coupling the results in Table 1 with the FID rankings in Figure 2, we conclude that current diffusion models produce the most realistic images, but are unfairly downranked by FID”. Could you be more specific about the connection from evidence to this claim? ***Regarding the metrics and encoders*** 5. L225 claims that “We find that Inception does not perceive a holistic view of images even on its ImageNet training set”, however I don’t think the qualitative results in Fig 3 can support such a strong statement. 
A quantitative experiment is required to be able to claim this finding (I don’t see how the quantitative results in Appendix D.0.2 can support this claim, if so, please elaborate.) 6. L299 claims that: “We see this as strong evidence of a fundamental limitation of the use of the Inception network when computing FID”. I don’t see how this shows a fundamental limitation. To me, it only shows that for some generative models, FID seems to not correlate well with per class Vendi score. If the intention is to show that FID does not correlate with Vendi in general, why not explicitly report correlation? But even then, can you elaborate what “fundamental” means in this context? I do understand that different embeddings focus on different features, that is given by design, but that one embedding is fundamentally weaker than another needs a more formal argument. 7. The conclusion that DINOv2 is better than Inception is not well justified to me. The main evidence for this strong claim seems to be Figure 6, which is at best a motivating observation (reporting the correlation coefficients could help make Figure 6 more substantial). A systematic ablation study for the encoder on the effect of a) the number of training classes, b) various loss functions, and c) various architectures (only changing one factor at a time and fixing the other factors), on the performance of metrics can be more conclusive regarding which encoder is more reliable. 8. For a paper about metrics, the definition of considered metrics should be restated to facilitate the clarity and readability of the paper. This can be done for the main metrics in the body of the paper (FD, P/R, Vendi) and for others in the appendix. The current definitions in Appendix B lack the exact mathematical definitions except for FD. ***Regarding memorization*** 9. 
The paper does not discuss why each of the considered memorization metrics fails, that is, what assumptions in the definition of each metric deviate from practice and potentially cause the observations. The lack of such discussions and follow-up experiments to pinpoint the cause makes the empirical results inconclusive in general. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: See the weaknesses section, particularly on the collection of the dataset. Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. We are happy our paper “provides many valuable experiments with unique insights on the limitation of existing metrics”. We believe the replies below address your concerns, and kindly ask that, if you agree, you consider raising your score. On our work being “three different papers”: Given the long history of generative evaluation, any proposed metric must be demonstrably superior on a wide range of quantitative and qualitative evaluations. The three components of our paper are a cohesive whole to accomplish this goal, and removing any would make the paper insufficient to build community trust and support. The first part – constructing the dataset – would be an incomplete contribution in terms of novelty, as we follow (yet improve on) the methodology of HYPE [91]. The resulting novel insight that FID does not correlate well with human judgment demands a subsequent investigation of alternate encoders and metrics. This second part is the most natural way to search for a new evaluation metric that correlates better with human evaluators. The third part resolves two critical caveats: Section 4.2.1 dismisses the concern that FID fails due to humans not measuring diversity, and Section 4.3 establishes whether the best generative models “cheated” their human alignment through memorization (a valid concern since FD does not detect it). We thank you for pointing out an area where we can improve our writing and will refine this narrative in the final version of our paper. 1\. We respectfully disagree that this is a flaw in our experimental design - our methodology accounts for and negates such an effect: each participant sees a total of 250 images from ImageNet (which has 1000 distinct classes), making it essentially impossible to learn any diversity within any class, and thus forcing a focus on fidelity. 
Using your crow/exotic parrot example, *in ImageNet these would be different classes, and generative models synthesized class-conditional images that closely resemble the semantic information of a class* (see Figure 20 for proof). Therefore, the crows/parrots from different generative models will have varying degrees of fidelity, which will be measured by the human error rate. Additionally, if participants were confusing fakeness with unlikeliness, *this effect would be consistent across models and therefore would not alter rankings*. Another control for any such effects is the training phase (Figure 18), in which participants gain a sense for the diversity of images before starting the test. Finally, we highlight that our experimental design is highly similar to the widely adopted HYPE design, and was aided by experts in psychophysics precisely to avoid these types of flaws and measure human perception of realism accurately. 2\. & 8. See general rebuttal. 3\. & 4. We believe that the titular claim of “unfair treatment of diffusion models” is justified. Figure 2 clearly shows that diffusion models score the highest human error rates, and that GANs often score lower human error rates yet achieve a better FID ranking. Table 1 summarizes the statistical significance, and we report the exact values and error bars in Table 11. We agree that the word “unfair” in lines 198-199 is not yet justified at that part of the paper though, and will remove it. Nonetheless, the remainder of the paper investigates potential causes for this misalignment of FID and human evaluation, and ultimately builds up sufficient evidence to support the “unfair” conclusion. 5\. Thank you - we agree that this is a qualitative claim, which we will soften in the final version. We point out however that it is consistent with our quantitative analysis and that in Appendix C of [47], which we will also make clear in the paper. 6\. 
We believe this viewpoint stems from a misunderstanding of the goal of our diversity experiments, which we will clarify in the final version of our paper. The lack of correlation between FID and human evaluators could be due to either (a) FID being fundamentally flawed, or (b) FID accounting for diversity where human evaluators do not - perhaps diffusion models trade diversity for fidelity. The goal of our diversity experiments is to rule out option (b) (i.e. checking that differences in Vendi score do not explain flips in ordering between FID and human evaluators – this is not equivalent to correlation). Our experiments show that images from diffusion models have good diversity, which, combined with results from our human experiments, allows us to rule out option (b) and conclude option (a). 7\. On Fig 6 please see the point above. On an ablation study: what you describe would require retraining self-supervised foundational models on internet-scale data, which would be prohibitively expensive (DINOv2 alone required 200k GPU-days [58]) and out of scope for this work. We also note that the self-supervised foundational encoders that we use were trained with a variety of objectives and architectures. 9\. Given the increasing attention placed on memorization of generative models [11, 58] we included our results which show that memorization metrics can not be relied on at this stage, which presents a natural call for further study. Recent work [C] (made publicly available after the NeurIPS submission deadline) has shown that various pathologies associated with precision and recall are due to the curse of dimensionality as a byproduct of nearest neighbors. These findings are consistent with our own observations of precision and recall, and we hypothesize that this could cause the failure of memorization metrics. We will include this discussion in the final version of our manuscript. 
[C] Emergent Asymmetry of Precision and Recall for Measuring Fidelity and Diversity of Generative Models in High Dimensions. ICML 2023. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Regarding “dataset”, “metric” and “memorization” sections being part of a coherent paper, I respectfully disagree with the authors: a) the “metric investigation” section uses the dataset to study several metrics, but rather than focusing on why existing metrics do not correlate with Human Error and whether that reveals a limitation in the Human Error metric itself or the existing metrics (since these metrics are backed by their own user studies and theories), it quickly takes Human Error as ground truth and moves on to improving metrics (hence a separate paper in my opinion); b) the “memorization” section does not use the proposed dataset or the Human Error metric; it does not even mention them. 1.1. You claim “each participant sees a total of 250 images from ImageNet (which has 1000 distinct classes), making it essentially impossible to learn any diversity within any class, and thus forcing a focus on fidelity”; the issue is not that your participants will learn to focus on diversity or not, it is that they might already associate rarity with fakeness, so you need to provide evidence for the claim that “... forcing a focus on fidelity”: one way to do so is to report the Human Error rate on rare real samples versus common real samples (e.g. using the Rarity Score, Han 2022), and see if the error rate is the same on both sets. The lack of such experiments, and of any discussion of this potential issue in the paper, is why I do not think you can claim that “our methodology accounts for and negates such an effect” at present. 1.2. The parrot/crow is of course just an extreme example to clarify my point; within each ImageNet class you can find unlikely as well as likely samples. 1.3. 
You claim “if participants were confusing fakeness with unlikeliness, this effect would be consistent across models and therefore would not alter rankings”, but I don’t see why the effect would not alter rankings: if participants confuse fakeness with rarity, they will make fewer mistakes on models that generate more rare samples (because they will just flag them as fake based on the samples being unlikely), so your score would unfairly rank models that are more diverse worse. 1.4. I don’t think being similar to HYPE or aided by psychophysicists answers any of my very specific concerns. What is lacking here is quite straightforward; as I explained in 1.1, you should provide a control experiment, otherwise the dataset – which I want to emphasize I think is valuable and interesting – will incentivise a series of misleading and incorrect follow-up works. 2. I don’t understand what this means: “The Normalized Error Rate accounts for the varying difficulty of tasks over different combinations of (dataset, model).”; please elaborate. 3.1. You claim “Figure 2 clearly shows that diffusion models score the highest human error rates, and that GANs often score lower human error rates yet achieve a better FID ranking”, but in Figure 2 CIFAR10 the diffusion model ranks best in terms of FID too, so FID is not unfair. Same in ImageNet. My point is that by just looking at Figure 2, there is no concrete evidence to back “unfairness”. You need to be more specific about the mathematical definition of “unfair” in your work, and how you measure it. For example, you could define unfair as low correlation between FID and Human Error Rate, and then report the correlation coefficients and show that the correlation is higher for GANs, but lower for diffusion models. 3.2. You claim “Table 1 summarizes the statistical significance”, yet you report no significance test results, and it is unclear what the ordering in this table is based on. 
Table 11 also does not clearly show “unfairness” towards diffusion models. 6. I understand your motivation, but what I still do not understand is what “fundamental” means in this context. In any case, I do not consider this a main concern; I acknowledge that whether something is a “fundamental” flaw, or simply a lack of correlation between some metrics, is subjective. I appreciate the additional explanations by the authors. 7. If computational restrictions do not allow you to sufficiently support the claim that DINOv2 is better than Inception, please avoid making that claim in your abstract. If Figure 6 is able to sufficiently support this claim, please elaborate. 8. I agree that there might be interesting connections, but the lack of a study on those connections makes the memorization results inconclusive. Han, Jiyeon, et al. "Rarity score: A new metric to evaluate the uncommonness of synthesized images." ICLR 2022. --- Reply to Comment 1.1.1: Title: Second Reply (1/2) Comment: Thank you for replying quickly and engaging in discussion; we appreciate the added clarification and hope to continue discussion on any lingering concerns. 0. You are correct that we take human assessment of fidelity as ground truth; we will make this more explicit in the final version of our paper. We believe this is an extremely reasonable assumption though, as humans are the end users of these models. The goal of our diversity experiments was to verify if this assumption holds: to see if diversity (which we do not believe humans are particularly good at detecting, nor were our experiments designed to detect) could potentially explain discrepancies between FID and human error rate (HER; see point 6 of our rebuttal). The memorization part of the paper indeed does not use our FD_DINOv2 score or HER because neither FD metrics nor HER are expected to detect memorization (note we include the value of all memorization metrics for DINOv2 in Table 11). 
Our main goal with these experiments is not to further validate the FD_DINOv2 score itself, but rather to ensure models at the top of the FD_DINOv2 leaderboard did not “cheat” their way in by memorizing training data. We see this as a relevant step to establish community trust in the leaderboard. We emphasize once again that we will update our manuscript to better convey this narrative. 1. Thank you for clarification, and for suggesting the Rarity Score (RS) of Han et al. 2022. We agree this analysis, which we have now performed, provides more evidence for whether participants confuse fakeness with unlikeliness. Our rebuttal pdf cannot be updated and no further attachments can be added, so here we describe the results in detail and outline what we will add to the final paper. *Experiment*: We focus on your suggestion of “Error rate on rare real samples versus common real samples”. For each of the 2000 real images we used from each dataset (evaluated by an average of 13 humans), we determined the fraction of humans that labeled it as fake, as well as its RS. We performed the calculation using both Inception and DINOv2 to quantify the dependency of RS on the embedding space. *Results*: We find no correlation between HER and RS on ImageNet and FFHQ. On CIFAR-10 and LSUN-Bedroom we find a small (e.g. see Table 1 in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3576830/) but statistically significant correlation, which we identify as driven by dataset issues: the non-zero correlation is caused by a very small percentage of “real” images which are clearly taken from 3D-generated scenes (instead of bedroom photographs in LSUN-Bedroom), or from 2D-generated scenes or low quality (extremely blurry) images (in CIFAR-10). These results show that 1.) humans are more likely to label generated scenes as fake/generated (LSUN-Bedroom, CIFAR-10), and 2.) humans are more likely to label low-quality images as fake/generated (CIFAR-10). 
Such images have a higher than average RS, and hence *the small correlation between human evaluation and RS on CIFAR10 and LSUN-Bedroom is due to humans properly identifying these dataset issues*. We find that removing just 6% of the “fakest” (as measured by humans) real images on LSUN-Bedroom removes the correlation of RS and HER - quantitative proof that the small correlation is driven by dataset issues, and not due to humans associating diversity with fakeness. Rare defects in the training set are not enough to affect our results: being so rare (~6%) means they barely affect the average error rate; and *this training-set effect is the same for every generative model evaluation and thus does not change their rankings*. We have prepared a few additional scatter-plots and image visualizations for the final version of our paper (the lack of correlation is visually evident), and summarize the correlations and their significance in the table below. We will also include the rarity score in our public codebase.

Table 12: Pearson correlation of the fraction of humans that labeled a real image as fake, and the Rarity Score (RS; Han 2022) of that image. The RS can only be determined for images that fall “on manifold”; columns marked (94%) report values after removing the 6% of real images most frequently labeled fake by humans.

| Dataset      | Encoder   | % on manifold | r     | p-value | r (94%) | p-value (94%) |
|--------------|-----------|---------------|-------|---------|---------|---------------|
| CIFAR10      | Inception | 76            | 0.28  | 0.00    | 0.216   | 0.000         |
|              | DINOv2    | 88            | 0.062 | 0.01    | 0.00    | 0.86          |
| ImageNet     | Inception | 82            | -0.03 | 0.16    | -0.00   | 0.92          |
|              | DINOv2    | 93            | 0.01  | 0.73    | -0.01   | 0.62          |
| LSUN-Bedroom | Inception | 70            | 0.11  | 0.00    | 0.05    | 0.11          |
|              | DINOv2    | 90            | 0.10  | 0.00    | 0.02    | 0.39          |
| FFHQ         | Inception | 79            | -0.03 | 0.29    | -0.02   | 0.36          |
|              | DINOv2    | 90            | -0.02 | 0.43    | -0.04   | 0.10          |

--- Reply to Comment 1.1.2: Title: Second Reply (2/2) Comment: 2\. 
Tasks (defined as dataset/model pairs) have varying difficulties - if a generative model is very poor, humans have low error rates as they easily distinguish fake samples from real ones. Thus we cannot average error rates across tasks, as these numbers are not directly comparable. To avoid this issue, if $x_i$ is the error rate of participant $i$ at task $t(i)$ (the task performed by participant $i$), the normalized score is given by $(x_i - \mu_{t(i)}) / \sigma_{t(i)}$, where $\mu_{t(i)}$ and $\sigma_{t(i)}$ are the mean and standard deviation, respectively, of the error rate at task $t(i)$ across participants. Normalized scores are comparable across participants and tasks. 3\.1 Note that we discuss CIFAR-10 being an exception in Fig 2 on L249-252. Nonetheless, our argument is *not* that Fig 2 shows unfair treatment of diffusion models by FID, but the figure does show that FID and human error rate (HER) are uncorrelated (except on CIFAR-10). This is what we meant in point (3 & 4) of our rebuttal: we agree that at this point of the paper, there is not yet enough evidence to call the lack of correlation between FID and HER unfair. However, we believe that in the rest of the paper, we present enough evidence to call this unfair: (a) the Inception network focuses on the wrong aspects of images (again, we do assume here that humans provide ground truth, and understand “wrong” as “unlike humans”), which is shown in Sec 4.1; (b) metrics which *do* focus on the correct parts of an image, like DINOv2, have a much stronger correlation with HER, which is shown in Sec 4.2; and (c) diversity is *not* the reason FID is uncorrelated with HER, which is shown in Sec 4.2.1. Together, all this evidence does allow us to say the treatment given by FID to diffusion models is indeed unfair, even if we have no formal mathematical definition of what “unfair” means. Again, we will more clearly convey this narrative in the final version of the paper. 
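The per-task normalization from point 2 above can be sketched as follows (a minimal illustration with hypothetical error rates and task assignments; NumPy assumed):

```python
import numpy as np

# Hypothetical data: x[i] is the error rate of participant i,
# t[i] is the task (dataset/model pair) that participant performed.
x = np.array([0.30, 0.45, 0.50, 0.10, 0.20, 0.40])
t = np.array([0, 0, 0, 1, 1, 1])

def normalize_per_task(x, t):
    """Z-score each participant's error rate within their own task:
    (x_i - mu_t(i)) / sigma_t(i)."""
    z = np.empty_like(x, dtype=float)
    for task in np.unique(t):
        mask = t == task
        z[mask] = (x[mask] - x[mask].mean()) / x[mask].std()
    return z

z = normalize_per_task(x, t)
# Within each task, the normalized scores now have zero mean and unit
# variance, so they are comparable across participants and tasks.
```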
3\.2 We apologize for poor phrasing in our rebuttal. Table 11 (sorted in decreasing order of FD_DINOv2) summarizes the mean and standard error of the HER (which is what you asked us for in your original review), not the statistical significance of the tests from Table 1 (which, as mentioned in L191-193, are all highly significant). Again, we do not see Table 11 as showing diffusion models are treated unfairly by FID: we believe our paper as a whole establishes this conclusion, not just the lack of correlation between FID and HER or the numbers in this table. 6\. We will happily change the phrasing from “fundamental” to something less ambiguous. 7\-1 We believe the claims in our paper allow us to say that DINOv2 is better than Inception, *but not just because of Fig 6*: (a) Figs 2 and 4 show that FID does not correlate with HER, whereas FD_DINOv2 does; (b) Fig 3 shows that DINOv2 focuses on more human-relevant aspects of images than Inception; (c) Fig 5 shows that diversity does *not* explain differences between HER and FID: for example, let’s focus on LDM (a diffusion model) and StyleGAN-XL (a GAN). LDM has both a better HER than StyleGAN-XL (Fig 2) and diversity that more closely matches the data’s (Fig 5), yet has a worse FID (Fig 2). FD_DINOv2 does not exhibit this “unfair” ranking of LDM and StyleGAN-XL. (d) Fig 6 shows that diversity *does* explain some differences between HER and the FD_DINOv2 score (e.g. DiT-guided has a better HER than DiT, yet has a worse FD_DINOv2 score as its diversity is a much worse match to the data’s than DiT’s). Together, all this evidence does let us say DINOv2 is better than Inception. Again, we will make sure this line of thinking is clearer in the final version of our paper. 
7\-2 Our claims about computational restrictions in the rebuttal are simply about the impossibility of a full ablation study of our own foundational models to understand exactly *why* they outperform Inception, and have no bearing on whether the evidence in our paper is enough to “sufficiently support the claim that DINOv2 is better than Inception”. We specifically designed the selection of encoders in our study to shed light on what components of training/architectures benefit generative evaluation. As discussed in L214-240, an improved supervised model (ConvNeXt) indicates whether more modern supervised models share the same issues as Inception; we chose both self-supervised CNNs and ViTs trained with different objectives, as well as models trained on ImageNet and on internet-scale datasets; and we performed a study of the effect of architecture in App B.4.1 (6 varieties of CLIP and 4 of DINOv2). 8\. We agree our experiments on memorization are inconclusive *in terms of establishing the reason or a solution for the failures of memorization metrics*, but not in terms of establishing that models at the top of the DINOv2 leaderboard did not “cheat their way to the top” through memorization, which was the main objective of these experiments. We will further clarify this in the paper.
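For reference, both FID and the FD_DINOv2 score debated throughout this thread are instances of the Fréchet distance between Gaussian fits of real and generated embeddings; a minimal sketch of that computation (synthetic stand-in embeddings; NumPy and SciPy assumed):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    covmean = np.asarray(covmean).real  # drop tiny imaginary numerical noise
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Synthetic stand-ins for encoder embeddings of real and generated images.
rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 8))
fake = rng.normal(loc=0.3, size=(1000, 8))

fd = frechet_distance(real.mean(0), np.cov(real, rowvar=False),
                      fake.mean(0), np.cov(fake, rowvar=False))
# fd is near 0 for identical distributions and grows as they diverge.
```

Swapping the encoder (Inception vs DINOv2) changes only the embeddings fed into this computation, which is why the choice of embedding space is the crux of the disagreement above.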
Knowledge Diffusion for Distillation
Accept (poster)
Summary: The authors propose to explicitly eliminate the noise in the student feature with a diffusion model to reduce the discrepancy between the student and teacher models for better knowledge distillation. Specifically, they build a lightweight diffusion model to reduce computation cost and introduce an adaptive noise matching module to align the student feature with the approximate noise level of the intermediate diffusion step. Strengths: 1. The paper adopts a diffusion model to model the teacher feature and denoise the student feature, reducing the discrepancy between the student and teacher models. 2. Quantitative experiments on multiple high-level tasks are provided to evaluate the effectiveness of the proposed method. 3. The paper is well organized and presented. Weaknesses: 1. For methods: (1) The technical contribution of this work is not significant. (2) It is intuitive to regard the student as a noisy version of the teacher, but this lacks theoretical analysis. Also, the NFEs of the diffusion model depend on the capacity of the student model through the different noise levels. (3) The design principle of adaptive noise matching is not explained clearly. For different student models, the final learned γ should be analyzed. (4) It is still unclear how to determine the initial timestep for the student model's reverse diffusion. Reverse diffusion normally starts from pure Gaussian noise, so how is inference performed on the feature output from the noise adapter? Would γ be near 0? 2. For experiments: (1) For Table 2 and Table 3, it seems that the proposed method achieves marginal performance improvement for students trained with strong strategies. The reason should be further discussed and analyzed. Besides, it only obtains comparable performance on the object detection task, which does not show obvious advantages over other methods. (2) More visualized results should be provided, such as the affinity matrix. 
(3) Can other generative models, e.g., GANs or flow-based models, be adopted to model the discrepancy between the student and teacher models? This should be discussed and compared to demonstrate the effectiveness of the diffusion model. (4) In Table 6, Params and FLOPs are not shown completely. (5) Since model capacity determines the discrepancy between the student and teacher models, how do the NFEs differ for different student models? (6) Some important details are missing, such as the total number of timesteps for training the diffusion model and the initial timestep for the student model's reverse diffusion. 3. Some writing typos, e.g., Line 13: adpative -> adaptive; Line 136: We -> we. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See the Weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The paper is incrementally novel. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1.1 The technical contribution of this work is not significant. We respectfully believe our contribution is sufficient. Directly applying diffusion models to KD is difficult and not straightforward, and we provide an effective solution with the following adaptations. (1) We assume the student feature is a noisy version of the teacher feature, so that the DM is naturally used in a teacher-training and student-denoising manner. (2) The original diffusion model is computationally expensive; we introduce an efficient DM and autoencoder that effectively reduce the cost. (3) We design an adaptive noise matching module to solve the issue of unknown noise levels in student features. > 1.2.1 Theoretical analysis of regarding the student as a noisy version of the teacher. In KD, the student's objective is approximating the teacher outputs, i.e., $L_{kd} := d(F^{(s)}, F^{(t)})$. However, in practice, this approximation cannot reach the teacher feature exactly ($L_{kd}=0$). This indicates that $F^{(s)} - F^{(t)} = \delta$ always holds, with $\delta$ being an arbitrary non-zero tensor. We regard the student as an additive noise model [1] where its output equals the teacher output plus the noise $\delta$. For a more direct optimization of the approximation in KD, we want to discard the bias $\delta$ with a function $f$ so that it is possible to achieve the optimum $F^{(t)} = f(F^{(s)})$, where $f(F^{(s)}) = F^{(s)} - \delta$. The transformation function $f$ needs to eliminate the additive noise $\delta$, which is similar to the task of image denoising. Therefore, we adopt diffusion models, a popular variant of generative models that can effectively generate clean images or features from noisy ones. **References** [1] Hoyer, Patrik, et al. "Nonlinear causal discovery with additive noise models." NIPS 2008. > 1.2.2 NFEs of the diffusion model depend on the capacity of student models for different noise levels. 
Actually, in conventional diffusion models, any number of NFEs is valid for generating images, but larger NFEs usually result in higher generation quality. Similarly, we can use the same NFEs for different settings in DiffKD, and we empirically find that 5 NFEs obtain promising results for all the settings (see response 2.5). > 1.3 Analysis of the final learned $\gamma$ of different student models. We measured the average final learned $\gamma$ of different student and teacher models, as shown in Table 1 of the rebuttal PDF. We can see that $\gamma$ tends to be larger when the accuracies or architectures of teacher and student are more divergent, as the noise in the student feature is already large and less additional noise is needed. > 1.4 Design principle of ANM & how to determine the initial timestep for the student model. The student features would have different noise levels for different samples, which may not always match the fixed initial noise level. Therefore, we propose adaptive noise matching (ANM) to match the noise level to our defined level. The ANM is optimized with the distillation loss, since the denoising effect becomes optimal when the noise level is exactly matched. > 2.1.1 Marginal performance improvements for students trained with strong strategies. The students trained with strong strategies have very competitive accuracies, so it is difficult to obtain significant improvements over them. But we still achieve 0.3%, 0.5%, and 0.3% gains on ResNet-34, MobileNetV2, and ResNet-50 compared to the previous SOTA DIST, which we believe is sufficient to show our superiority. > 2.1.2 DiffKD only obtains comparable performance on the object detection task. DiffKD is a general KD method that applies to various tasks. 
Unlike the compared methods such as FGD, which are specifically designed for the object detection task and have to use the ground-truth bounding boxes to assist the distillation, our method only uses features as input with a simple MSE loss, yet achieves consistent improvements on all models. We think this is sufficient to show our superiority. > 2.2 More visualizations. See the rebuttal PDF. > 2.3 Can other generative models be adopted to model the discrepancy? Applying a diffusion model is more natural since we treat the student feature as a noisy teacher feature. Besides, we could also use other generative models to predict the denoised student feature, but they may face some issues. For example, GANs are known to suffer from a variety of issues such as non-convergence and training instability, and autoencoder and flow-based models may have inferior performance. In contrast, the DM offers desirable properties such as distribution coverage, a stationary training objective, and easy scalability, which is better suited to our task. > 2.4 In Table 6, Params and FLOPs are not shown completely. Sorry for the confusion. Actually, the values of Params and FLOPs inside the table apply to all compared methods since the models are the same. We will refine this table. > 2.5 Different NFEs for different student models? To explore whether we should adopt different NFEs for different model settings, we added experiments showing the performance of various models under multiple NFEs, as summarized in Table 2 of the rebuttal PDF. Note that due to the limitation of our computational resources, we only trained the EfficientNet-B0 model for 100 epochs with the B1 strategy. We can see that all three settings have relatively high performance at 5 NFEs, and their preferences for NFEs are similar. 
However, we acknowledge that using different NFEs for different settings or even samples could achieve a better performance-efficiency trade-off, and this is a valuable direction for future improvements. > 2.6 Some important implementation details are missing. The total range of timesteps for training is 1000, and the initial timestep for denoising is 500. We will report more details and release our code upon publication. --- Rebuttal 2: Title: Discussion to Reviewer AKTb Comment: Dear Reviewer AKTb, We sincerely thank you for your efforts in reviewing our paper. We have provided corresponding responses and results, which we believe cover your concerns. We hope to further discuss with you whether your concerns have been addressed. Please let us know if any part of our work remains unclear. Best, Authors --- Rebuttal 3: Title: Additional Experiments with GANs Comment: Dear Reviewer AKTb, We extend our sincere gratitude for your dedicated effort and invaluable feedback aimed at enhancing the quality of our paper. We are pleased to provide an update on our progress, specifically concerning the incorporation of GANs in the context of KD. As the author-reviewer discussion period approaches its final days, we wish to encourage open and constructive dialogue. If you have any concerns, insights, or suggestions, we are available for discussion, and we are committed to further refining our method based on your expertise. --- **Experiments on GANs.** We conducted experiments that integrated mainstream GAN methods in lieu of our DM to transform student features. Specifically, we employed a generator with an architecture identical to our DM. The refined student features it generated were then optimized using a discriminator and an adversarial loss within the GAN framework.
The discriminator is trained with the refined student feature (fake data) and the teacher feature (real data). For a comprehensive evaluation, we compared multiple adversarial losses commonly used in GANs, including DCGAN [1], LSGAN [2], and hinge loss [3]. We then compared the outcomes of these GAN variants with our DiffKD approach in the table below. |Method|Teacher|Student|ImageNet ACC (%)| |--|:--:|:--:|:--:| |MSE baseline|ResNet-50|MobileNetV1|72.39| |DCGAN|ResNet-50|MobileNetV1|72.89| |LSGAN|ResNet-50|MobileNetV1|72.22| |Hinge Loss GAN|ResNet-50|MobileNetV1|71.24| |DiffKD|ResNet-50|MobileNetV1|**73.62**| Our findings show that the GAN-based methods we investigated consistently yielded inferior performance compared to DiffKD. Notably, the performance of these GAN variants was sensitive to the choice of adversarial loss. This susceptibility can be attributed to the well-recognized challenges of non-convergence and training instability associated with GANs. --- **References** [1] Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. [2] Mao, X., Li, Q., Xie, H., Lau, R. Y., Wang, Z., & Paul Smolley, S. (2017). Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2794-2802). [3] Lim, J. H., & Ye, J. C. (2017). Geometric GAN. arXiv preprint arXiv:1705.02894. --- Rebuttal Comment 3.1: Comment: I appreciate the authors carefully answering my questions, which covers my concerns. The motivation and contributions could be polished further. I would upgrade the score.
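For reference, the three adversarial objectives compared in the table above can be written out explicitly. The following is a minimal NumPy sketch of the discriminator losses only (the generator side and the feature-level training loop are omitted, and the function names are ours, not the authors' code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator objectives for the three adversarial losses compared above,
# as functions of the raw discriminator scores on real (teacher) and fake
# (refined student) features. Lower is better for the discriminator.

def dcgan_d_loss(d_real, d_fake):
    # DCGAN: binary cross-entropy on sigmoid outputs.
    return -np.log(sigmoid(d_real)) - np.log(1.0 - sigmoid(d_fake))

def lsgan_d_loss(d_real, d_fake):
    # LSGAN: least-squares targets of 1 for real and 0 for fake.
    return 0.5 * ((d_real - 1.0) ** 2 + d_fake ** 2)

def hinge_d_loss(d_real, d_fake):
    # Hinge loss (geometric GAN): margin of 1 on both sides.
    return np.maximum(0.0, 1.0 - d_real) + np.maximum(0.0, 1.0 + d_fake)
```

With a perfectly separating discriminator (e.g. `d_real = 1`, `d_fake = -1`) the hinge loss is already zero, while the DCGAN loss only saturates asymptotically; such differences in gradient behavior are one plausible reason the variants train differently.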
Summary: This paper proposes a novel method of knowledge distillation. It uses a diffusion model to denoise the student model features, reducing the gap between the teacher and student models. An autoencoder is also designed to reduce the computational cost, and an adaptive noise module improves the denoising effect. The method is extensively evaluated and achieves SOTA results in image classification, object detection, and semantic segmentation. Strengths: This paper is innovative in applying a diffusion model to knowledge distillation to reduce the gap between the teacher and the student model. The method proposed in this paper is highly applicable and can be used for different types of features. It works well for a variety of tasks such as image classification, object detection, and semantic segmentation. Weaknesses: Some of the hyperparameters are not described clearly. Line 177 says λ1=λ2=1; does this indicate that λ3 is set differently? Line 220 says λ1=λ3=1. Are distillation methods sensitive to different backbones/downstream tasks? More evaluations and clarifications of this could be added. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Is the training of the autoencoder and diffusion model done at the same time as the distillation training (according to Equation 9)? Does inadequate training of the teacher features and denoising at the beginning of training interfere with student model learning? 2. Why are the compared methods in Tables 2 and 3 different? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the efforts in reviewing our paper and the positive evaluation. Our responses to the reviewer's comments are summarized as follows. --- > 1. Hyperparameters of $\lambda_1$, $\lambda_2$, and $\lambda_3$. There is a typo in line 220: $\lambda_3$ should be $\lambda_2$. We clarify our hyperparameters as follows. All our experiments use $\lambda_1=\lambda_2=1$. For the image classification and semantic segmentation tasks, $\lambda_3$ is also set to 1. For the object detection task, we set $\lambda_3=1$ for Faster RCNN students, while for other detection frameworks, $\lambda_3$ is adjusted according to their loss values relative to Faster RCNN (details are summarized in lines 498~501 of the appendix). --- > 2. Are distillation methods sensitive to different backbones or downstream tasks? Most recent advanced distillation methods are designed specifically for one task. For example, DKD [1] and DIST [2] are designed for distilling logits on the classification task, while feature-level distillation is more effective on downstream tasks such as object detection (DIST obtains 40.4 mAP on Faster RCNN-R50, while the state-of-the-art FGD [3] reaches 42.0 mAP). Nevertheless, FGD is specifically designed for the object detection task and requires ground-truth bounding boxes to compute the loss, so it cannot be directly applied to other tasks. One recent work, MGD [4], proposes a feature-level distillation loss and conducts experiments on classification, detection, and segmentation tasks, but its performance on ImageNet classification is not as competitive as that on downstream tasks (e.g., MGD reaches 71.58% on ResNet-18, while DIST achieves 72.07% accuracy). In contrast, our work does not focus on customizing the loss function, but aims to design a general method for all feature types and tasks that gains consistent improvements with the most common distillation losses. --- > 3.
Is the training of the autoencoder and diffusion model done at the same time as distillation training? Does this cause inadequate training of teacher features and a weak denoising effect at the beginning of training? Yes, the autoencoder and diffusion model are trained simultaneously with distillation training; we found that they converge very quickly and have only a minor impact on training stability. We added experiments with a ResNet-50 teacher and MobileNetV1 student to verify this. As shown in the table below, we train DiffKD with a pretrained and fixed AE, DM, and AE & DM, respectively, and all settings have similar accuracies. |Ours|Pretrained AE|Pretrained DM|Pretrained AE & DM| |:--:|:--:|:--:|:--:| |73.62|73.68|73.51|73.55| --- > 4. Why are the compared methods in Tables 2 and 3 different? The settings in Table 2 form the most common benchmark for ImageNet distillation; we select the recent state-of-the-art methods and directly report the results from their papers for basic comparison. The settings in Table 3 use the stronger models and strategies proposed by DIST [2], and we regrettably do not have enough computational resources to re-implement all the methods of Table 2 in these settings. --- **References** [1] Zhao, Borui, et al. "Decoupled knowledge distillation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [2] Huang, Tao, et al. "Knowledge distillation from a stronger teacher." Advances in Neural Information Processing Systems 35 (2022): 33716-33727. [3] Yang, Zhendong, et al. "Focal and global knowledge distillation for detectors." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [4] Yang, Zhendong, et al. "Masked generative distillation." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. --- Rebuttal 2: Title: Discussion to Reviewer 78h4 Comment: Dear Reviewer 78h4, We sincerely thank you for your efforts in reviewing our paper.
We have provided corresponding responses and results, which we believe cover your concerns. We hope to further discuss with you whether your concerns have been addressed. Please let us know if any part of our work remains unclear. Best, Authors --- Rebuttal 3: Comment: Dear Reviewer 78h4, We express our sincere gratitude for your insightful feedback and thorough evaluation of our manuscript. We have taken careful consideration of your queries and provided comprehensive responses to address each of them. We are eager to ascertain whether all your concerns have been adequately addressed. Additionally, we would appreciate your input on whether any new concerns have arisen as a result of our responses. Thank you once again for your time and expertise in reviewing our manuscript. Regards, Authors
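As a side note on the hyperparameter response above: the overall objective in Eq. (9) is a weighted sum of the task loss and three auxiliary losses. The sketch below is only an illustration; which $\lambda$ multiplies which auxiliary term is our assumption, since the rebuttal only states the values used:

```python
def diffkd_total_loss(l_task, l_diff, l_ae, l_diffkd,
                      lam1=1.0, lam2=1.0, lam3=1.0):
    # Weighted sum of the task loss and the three auxiliary losses.
    # All experiments reportedly use lam1 = lam2 = 1; lam3 is 1 for
    # classification/segmentation and adjusted per detection framework.
    return l_task + lam1 * l_diff + lam2 * l_ae + lam3 * l_diffkd
```

For example, halving `lam3` simply halves the contribution of the KD term to the total, which is how the authors rescale it for detection frameworks with different loss magnitudes.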
Summary: The paper introduces a new knowledge distillation (KD) method named DiffKD, which aims to bridge the representations between teacher and student features via a diffusion model. The motivation is based on the finding that the student feature is noisier than the teacher feature, and therefore diffusion models can be leveraged to denoise the student feature. Additionally, an efficient diffusion model and a noise matching module are proposed for better efficiency and accuracy. The authors conduct KD on image classification, object detection, and semantic segmentation tasks to validate the method. Strengths: - The idea of reducing the gap between teacher and student features is an emerging and important topic in KD. This paper discusses the discrepancy between teacher and student, then proposes using diffusion models to reduce the discrepancy, which is interesting and straightforward. - It is good to see that the method can generalize to various feature types and tasks. Unlike existing methods that often focus on specific tasks and design complex loss functions, this method can be used in various tasks and achieve advanced performance with simple loss functions. - The technical contribution is evident. Instead of directly adopting classical diffusion models, this paper introduces a lightweight architecture with an autoencoder to speed up the model and a noise-matching module to improve performance. - The improvements are significant in image classification, detection, and segmentation tasks. Weaknesses: - The method of denoising the student feature with a teacher-trained diffusion model is a bit strange. According to my understanding, the diffusion model is a generative model. Why don't the authors directly train a diffusion model to take the student model as input and generate a feature that is similar to the teacher for distillation? - In Table 3, compared to DIST, the improvement of DiffKD on Swin-T is not as strong as that on smaller models. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In Table 8, why is the performance of DiffKD without AE worse than DiffKD with AE of 512, 1024, 2048 channels? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the efforts in reviewing our paper and the positive evaluation. Our responses to the reviewer's comments are summarized as follows. --- > 1. Training the diffusion model with teacher features vs. student features. Training the diffusion model requires the target features to be stable and consistent; since the student feature changes throughout training, it is inappropriate to use it as the generative target. We added experiments to validate this intuition. |DiffKD|Student target| |:--:|:--:| |73.62|72.78| We can see that training the diffusion model with the student target causes a significant performance drop of 0.84%, which supports our intuition. --- > 2. Improvement on Swin-T is not as strong as that on smaller models. Swin-T is a strong model that obtains high ImageNet accuracy with standalone training, so gaining an improvement on it is more difficult than on small models with much lower accuracies. Besides, the discrepancy between Swin-T and Swin-L is not as large as in the settings with small students, so the benefit of denoising is also limited. --- > 3. Why is the performance without AE worse than DiffKD with AE of 512, 1024, or 2048 channels? We think the AE may have a purifying effect on the features, focusing on the factors that help reconstruct the input features. Therefore, distillation on AE-encoded features can guide the student to learn more from the valuable components of the features. --- Rebuttal 2: Title: Discussion to Reviewer hqhp Comment: Dear Reviewer hqhp, We sincerely thank you for your efforts in reviewing our paper. We have provided corresponding responses and results, which we believe cover your concerns. We hope to further discuss with you whether your concerns have been addressed. Please let us know if any part of our work remains unclear.
Best, Authors --- Rebuttal Comment 2.1: Title: Response to the authors' rebuttal Comment: Thanks for the authors' responses. All my concerns have been resolved. I tend to keep my initial rating and vote for accepting this paper.
Summary: This paper presents a novel knowledge distillation (KD) approach. The difference from existing methods lies in the computation of the discrepancy between the teacher and student signals. This paper formulates it using a diffusion model and uses a denoising procedure to reconstruct the teacher features from the student features. Experiments are performed on image classification, object detection, and semantic segmentation datasets. Strengths: + The idea of measuring the discrepancy using diffusion models is novel. + The paper is well-written. Weaknesses: - Although using a diffusion model to model the discrepancy is a reasonable idea, this paper lacks sufficient analysis of the essential benefit of using diffusion models for this purpose. Does this paper mean that the diffusion procedure finds the shortest path in a distorted feature space (rather than the plain Euclidean space), which is better for measuring the discrepancy? If yes, are there any validations (metrics, visualizations, etc.) for the statement? - The proposed method mixes a large number of loss terms (Eqn 9). In the ablation part, the contribution of each term is not thoroughly ablated. - The experiments for image classification, object detection, and semantic segmentation are mostly conducted on weak student models. It is questionable whether the method works well on strong student models (because improvement becomes more difficult). - The improvement beyond some competitive results is not strong enough (e.g., a 0.1-0.3% gain on object detection, given that the student model is relatively weak). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the concerns above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Overall, I think the idea of this paper is interesting. However, there are insufficient validations on either the performance (experiments are not strong enough) or the principle. I am looking forward to further results and/or explanations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the efforts in reviewing our paper and the positive evaluation. Our responses to the reviewer's comments are summarized as follows. --- > 1. Does this paper mean that the diffusion procedure finds the shortest path in a distorted feature space (rather than the plain Euclidean space) which is better in measuring the discrepancy? If yes, are there any validations (metrics, visualizations, etc.) for the statement? Yes, the denoising yields a distorted feature space that is better for optimizing the discrepancy. Compared to a simple linear transformation or other generative models such as GANs, the diffusion model (DM) offers desirable properties such as distribution coverage, a stationary training objective, and easy scalability. To validate the statement, we measure the discrepancies of the original student feature and the denoised student feature to the teacher feature, respectively. The discrepancy metrics are mean squared error (MSE) and peak signal-to-noise ratio (PSNR, higher is better). As shown in the following table, the denoised student feature has a smaller discrepancy compared to the original feature, which indicates that distillation with the denoised feature has a shorter path in minimizing the discrepancy. |Feat|MSE|PSNR| |:--:|:--:|:--:| |Original student|4.62|24.32| |Denoised student|2.87|33.99| --- > 2. The contribution of each loss term. In Eq. (9), our loss is composed of the task loss, diffusion loss $L_\mathrm{diff}$, autoencoder loss $L_\mathrm{ae}$, and KD loss $L_\mathrm{diffkd}$. We added experiments to validate the contribution of the loss terms by removing each loss in the equation, as shown in the following table. |w/o KD|MSE baseline|Ours (Eq.
(9))|Ours w/o $\mathcal{L}_\mathrm{diff}$|Ours w/o $\mathcal{L}_\mathrm{ae}$| |:--:|:--:|:--:|:--:|:--:| |70.13|72.39|73.62|72.37|72.68| We can infer that both the diffusion loss $L_\mathrm{diff}$ and the autoencoder loss $L_\mathrm{ae}$ are important to performance. --- > 3. Performance on strong student models. We have conducted experiments for stronger student models (ResNet-50, Swin-T) on ImageNet classification in Table 3. We can see that DiffKD improves the standalone training of ResNet-50 and Swin-T by 2.0% and 1.2%, respectively, which is significant. --- > 4. The improvement beyond some competitive results is not strong enough (e.g., COCO dataset). Our method achieves consistent improvements on all tasks, which we believe is sufficient to demonstrate its advantage. Besides, our method uses simple distillation losses such as KL divergence and MSE, and our improvements over these losses are fairly significant. For the COCO dataset, unlike the compared methods such as FGD, which are specifically designed for the object detection task and use ground-truth bounding boxes to assist distillation, our method takes only features as input with a simple MSE loss, yet achieves consistent improvements on all models. --- Rebuttal Comment 1.1: Title: Post-rebuttal comments Comment: I read the authors' rebuttal and other reviewers' comments. I think the rebuttal addressed part of my questions. The technical contribution of this paper is not so significant but is sufficient to get published at NeurIPS. Two more concerns. - Regarding the answer to my Q2, I am a bit curious why removing $L_\mathrm{diff}$ can downgrade the results below the MSE baseline. - For Q3, I meant that a stronger student model (e.g. Swin-B) should be tested and reported. Swin-T is not acceptable. I choose to keep my original rating for now. If new results on Swin-B (or comparable models) are not reported, I will consider downgrading my score.
--- Reply to Comment 1.1.1: Title: Response to post-rebuttal comments Comment: Dear Reviewer pbtu, Thank you for your follow-up discussion. Below, we have refined our responses to your queries: > 5. Regarding the answer to my Q2, I am a bit curious why removing $L_\mathrm{diff}$ can downgrade the results below the MSE baseline. The $L_\mathrm{diff}$ term plays a crucial role in optimizing the diffusion model within our DiffKD framework. By removing $L_\mathrm{diff}$, the parameters of the diffusion model remain in a randomly initialized state. Consequently, the model cannot accurately predict the noise in the student features and may even introduce new noise into the features. This faulty denoising of the features significantly degrades the performance of DiffKD, yielding results below the MSE baseline. > 6. For Q3, I meant that a stronger student model (e.g. Swin-B) shall be tested and reported. Swin-T is not acceptable. We have initiated experiments to train Swin-B with Swin-L. Due to the larger size of these models, the experiments require a longer time to complete. We assure you that we will include the results of these experiments in a new comment as soon as they are available. --- Rebuttal 2: Title: Discussion to Reviewer pbtu Comment: Dear Reviewer pbtu, We sincerely thank you for your efforts in reviewing our paper. We have provided corresponding responses and results, which we believe cover your concerns. We hope to further discuss with you whether your concerns have been addressed. Please let us know if any part of our work remains unclear. Best, Authors
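The MSE/PSNR discrepancy metrics used in response 1 of this thread can be computed as in the following sketch. This is a hedged illustration rather than the authors' evaluation code; in particular, the peak value used for PSNR on feature maps (here, the maximum absolute value of the teacher feature) is our assumption:

```python
import numpy as np

def feature_mse(f_student, f_teacher):
    # Mean squared error between two feature maps of the same shape.
    return float(np.mean((f_student - f_teacher) ** 2))

def feature_psnr(f_student, f_teacher):
    # PSNR = 10 * log10(peak^2 / MSE); higher means smaller discrepancy.
    # On images, the peak is the maximum pixel value (e.g. 255); for
    # feature maps we assume the peak of the reference (teacher) feature.
    mse = feature_mse(f_student, f_teacher)
    peak = float(np.max(np.abs(f_teacher)))
    return 10.0 * np.log10(peak ** 2 / mse)
```

Under this convention, a student feature that is 10x closer to the teacher in RMSE gains 20 dB of PSNR, which is why the denoised feature's higher PSNR indicates a smaller residual noise level.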
Rebuttal 1: Rebuttal: Dear Reviewers, We thank all the reviewers for their valuable comments and efforts in reviewing our paper. We are delighted to see that Reviewers tFex, pbtu, hqhp, and 78h4 stated that our method is interesting and novel, and that Reviewers hqhp and 78h4 acknowledged that our method is widely applicable and has an evident technical contribution. We have also responded to all the reviewers' concerns, such as the effectiveness of the proposed modules, evidence for our statements, and the significance of the improvements, with additional experiments, visualizations, and explanations. The rebuttal PDF containing the new figures and tables is attached to this comment for your reading. Regards, Authors Pdf: /pdf/022098cd28764dcd88fbce83d7dc87603eb0132f.pdf
NeurIPS 2023
Summary: The authors propose DiffKD, a knowledge distillation technique based on the hypothesis that the student's feature is a noisy version of the teacher's feature. Based on this assumption, they use a diffusion model to iteratively denoise the student's features before matching them with the teacher's. In addition, they propose a linear autoencoder to make the diffusion more efficient to compute, and a noise adapter module to predict the student's noise level. Strengths: 1. Treating the student's feature as a noisy version of the teacher's feature is an interesting idea. 2. The authors conducted experiments on three tasks and showed improvements for DiffKD. Weaknesses: 1. The authors did not show strong evidence for the claim that the student's features are a noisy version of the teacher's features. While Figure 2 provides a visualization, it would be best if this could be analyzed quantitatively, as it is the central hypothesis that DiffKD is based on. 2. Figure 4 shows that denoising may not be very important, since the performance is not much better even if more denoising steps are taken. This suggests that the noise in the student may be very weak, which is further supported by the experiments showing very little improvement over existing methods (less than 1 point), despite the heavy machinery of the diffusion model. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Could you please share some statistics about the predicted noise level from the noise adapter? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Limitations are discussed in the Appendix.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the efforts in reviewing our paper and the positive evaluation. Our responses to the reviewer's comments are summarized as follows. --- > 1. Evidence for the claim that the student feature is a noisy version of the teacher feature. In knowledge distillation, given the student's objective of imitating the teacher's outputs, i.e., $\mathcal{L}_\mathrm{kd} := d(\boldsymbol{F}^{(s)}, \boldsymbol{F}^{(t)})$, we can infer that the student feature is an approximation of the teacher feature. However, in practice, the student's approximation cannot be identical to the teacher feature. This means that the relation $\boldsymbol{F}^{(s)} - \boldsymbol{F}^{(t)} = \delta$, with $\delta$ an arbitrary non-zero noise, always holds. For empirical analysis, we take mean squared error (MSE) and peak signal-to-noise ratio (PSNR) as the metrics to evaluate the noise ratio of the original and the denoised student features w.r.t. the teacher features. As shown in the following table, the original student feature has a small PSNR value, while the denoised student feature has a higher PSNR than the original feature, indicating that the original student feature contains noise and that our DiffKD can effectively reduce the noise ratio. |Feat|MSE|PSNR| |:--:|:--:|:--:| |Original student|4.62|24.32| |Denoised student|2.87|33.99| --- > 2. Denoising may not be very important, since the performance is not much better even if more denoising steps are taken. We find that just a few denoising steps (e.g., 5) can achieve good KD performance, since generating the cleaned student feature is not as difficult as generating an image with a DM. However, this does not mean that the denoising is unimportant. We want to clarify that our performance gain does not come solely from the denoising process of the diffusion model (DM), but also from the training strategy of the DM.
The training target of the DM is to predict noise at different levels (from zero to full Gaussian noise) in noisy features, so the DM is more robust and accurate in denoising and aligning the student feature to the teacher feature than directly using the student feature to predict the teacher feature. As shown in the following table, we compare DiffKD with a simple transformation (we use the same architecture as the DM to transform the student feature and compute the distillation loss on the transformed feature, instead of using the denoising process of the DM). The results show that DiffKD significantly outperforms the transformation even when they have the same FLOPs (NFEs=1). |DiffKD (NFEs=5)|DiffKD (NFEs=1)|Transformation| |:--:|:--:|:--:| |73.62|73.36|72.33| --- > 3. Statistics about the predicted noise level from the noise adapter. We first show the distribution of the noise weight $\gamma$ in Fig. 1 (a) of the rebuttal PDF. Recalling that the student feature is noised as $\boldsymbol{Z}^{(stu)}_T = \gamma\boldsymbol{Z}^{(stu)} + (1 - \gamma)\boldsymbol{\epsilon}_T$, a larger $\gamma$ denotes smaller additional noise. We can see that a large proportion of values lies in the range $\gamma > 0.9$, indicating that the student feature itself contains non-negligible noise and only requires a small amount of additional noise to match the initial noise level, while there also exist some cleaner samples that require larger noise. We also plot the curve of the average $\gamma$ per epoch during training. Fig. 1 (b) indicates that at the beginning of training the student feature contains more noise, so only a small weight of noise should be added; as the model converges, the noise in the student feature becomes smaller and $\gamma$ decreases accordingly to match the noise level. --- Rebuttal Comment 1.1: Title: Reply to the authors Comment: Firstly, thank you for the detailed reply. I really appreciate it! 1.
I think you are right that for any two features, we can say $\boldsymbol{F}^{(s)} - \boldsymbol{F}^{(t)} = \delta$. However, I feel this is different from whether $\delta$ is truly a random Gaussian noise. I think this is also not something that MSE and PSNR can show. 2. The result shows one-step diffusion already achieves good performance. This makes me question whether diffusion is truly necessary. On the other hand, a simple linear transformation may be too simple. Do you know how this performs if you apply a GAN, for example? 3. I could not find the updated figure about the distribution of the noise levels. I'm sorry if I was looking at the wrong place. Again, thanks for the response! --- Reply to Comment 1.1.1: Comment: Thank you for your kind reply and follow-up discussion. Our responses to your queries are as follows. > 4. I feel this is different from whether $\delta$ is truly a random Gaussian noise. We want to clarify that the noise $\delta$ in our diffusion models is not limited to being a Gaussian noise. In fact, it is a mixture of Gaussian noises, which has been shown to be capable of approximating almost all distributions [1, 2]. This choice allows us to effectively model and denoise complex data distributions. To provide further clarity, in the forward diffusion process, "_the diffusion process is fixed to a Markov chain that gradually adds Gaussian noise to the data according to a variance schedule $\beta_1,...,\beta_T$_" (DDPM [3]). In the reverse denoising process, the model recursively predicts Gaussian conditions over multiple timesteps (the same timesteps as in the forward diffusion process of the original DM). Therefore, by utilizing DMs, we can effectively eliminate $\delta$ of almost any distribution and generate a clean, denoised student feature that aligns with the teacher feature. > 5. The result shows one-step diffusion already achieves good performance. This makes me question whether diffusion is truly necessary.
On the other hand, a simple linear transformation may be too simple. Do you know how this performs if you apply GAN, for example? We apologize for any confusion caused by our previous response. To clarify, the `transformation` mentioned in our table is not a simple linear transformation. Instead, we adopt the same architecture as the diffusion model to transform the student feature. By making this comparison, we aim to demonstrate that the training manner of diffusion models provides a benefit to the distillation performance, even without the multistep denoising process. Furthermore, we have included results for different NFEs on other models in Table 2 of the rebuttal PDF. These results indicate that with ResNet-18 and EfficientNet-B0 students, the improvement achieved with 3 NFEs compared to 1 NFE is significant. Regarding your suggestion of using GANs for transformation, we have initiated an experiment to transform the student feature using a GAN. We are currently in the process of training and evaluating the results. Once the training is completed, we will report the outcomes and include them in our paper. > 6. I could not find the updated figure about the distribution of the noise levels. I'm sorry if I was looking at the wrong place. In accordance with the guidelines for this year's NeurIPS conference, authors are not allowed to update the submitted manuscript or appendix directly. Instead, we have provided a rebuttal PDF that includes additional information and clarifications. You can find the rebuttal PDF in our common response to all the reviewers, titled "Author Rebuttal by Authors," marked with an orange label. In the rebuttal PDF, Figure 1 presents the statistics of the noisy levels ($\gamma$) that you were looking for. --- **References** [1] Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer. Chapter 9 [2] McLachlan, G. J., & Krishnan, T. (2007). The EM Algorithm and Extensions. Wiley. Chapter 8 [3] Ho, J., Jain, A., & Abbeel, P. 
(2020). Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33, 6840-6851. --- Rebuttal 2: Title: Discussion to Reviewer tFex Comment: Dear Reviewer tFex, We sincerely thank you for your efforts in reviewing our paper. We have provided corresponding responses and results, which we believe cover your concerns. We hope to further discuss with you whether your concerns have been addressed. Please let us know if any part of our work remains unclear. Best, Authors
Summary: This work proposes using a diffusion model to denoise the noisy features of the student. It tackles the issues of employing diffusion models for knowledge distillation, namely heavy computation and the inexact noise level of student features. To tackle these issues, it proposes a light-weight diffusion model consisting of two bottleneck blocks from ResNet and adopts a linear autoencoder to compress the teacher feature. It also proposes an adaptive noise matching module, which adaptively measures the noise level of each student feature and applies a corresponding Gaussian noise to the feature so that it matches the correct noise level at initialization. Strengths: The proposed idea is simple. It shows the problems of directly applying diffusion models to knowledge distillation (expensive computation cost, inexact noise level of student features). It conducts experiments on various vision tasks (classification, object detection, semantic segmentation) to show that the proposed method is widely applicable. It shows good performance gains on several benchmarks. Weaknesses: Applying a diffusion model to denoise the student feature seems a somewhat obvious approach, in other words, not novel. The fact that the student's features are noisier than the teacher's is already well studied by other papers, as mentioned in lines 109-117. In my opinion, the contribution of this paper comes from showing the practical problems of applying the diffusion process to KD. It identifies two problems (expensive computation cost, inexact noise level of student features). However, I am not sure the paper has done enough experiments to show that it solved these problems. There is not enough analysis to prove the proposed method is efficient (Table 8 and Figure 4 seem to be the only ones). How is the diffusion process applied at the logit level? The effect of ANM is not really convincing; there is only a marginal performance gain.
It would be more convincing to show how ANM helps denoise the student feature by analyzing the student feature with and without the noise adapter. Visualization of the student feature with/without ANM could help to better understand the effect of ANM. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to the Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their efforts in reviewing our paper. Our responses to the reviewer's comments are summarized as follows. --- > 1. Applying diffusion model to denoise the student feature is not novel. We summarize our novelties as follows. (1) Existing methods address the discrepancy between teacher and student by improving the loss functions or training strategies, while ours is the first work to take the new perspective of distilling valuable knowledge from the teacher by eliminating the noise in the student feature. (2) There was no prior practice of involving diffusion models (DMs) in KD. We assume the student feature is a noisy version of the teacher feature, so the DM is naturally incorporated in a teacher-training, student-denoising manner. The DM enjoys better generation performance than the transformation modules in previous methods. (3) The original diffusion model follows a heavy UNet architecture, which is computationally expensive and heavily slows down the training speed of KD; we introduce an efficient DM and autoencoder that effectively reduce the computation cost. (4) We design an adaptive noise matching module to solve the issue of unknown noise levels in student features. --- > 2. Experiments to show the efficacies of the efficient model and ANM. **(1) Efficacy of the efficient diffusion model.** We train a RetinaNet R50 student with an R101 teacher on a 1x schedule on COCO, and compare our Efficient DM with the original UNet from DDPM within DiffKD. As shown in the following table, with feature shape (256, 80, 124), the original UNet has far more parameters and GFLOPs, and thus leads to ~3x the training time. The original UNet only achieves performance similar to our Efficient DM, as generating features is easier than generating images, and a small model suffices.
|DM|Params|GFLOPs|Training time|AP|AP50|AP75|
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
|UNet (DDPM)|13.62 M|650.77|1.80*8 GPU Days|39.1|58.0|41.9|
|Efficient DM (Ours)|0.21 M|132.76|0.59*8 GPU Days|39.2|58.1|42.0|

**(2) Efficacy of ANM.** Actually, for the simple training strategy B1, the 0.28 performance gain of ANM on MobileNetV1 is not marginal, since 73.6% accuracy is high w.r.t. its capacity and it is challenging to obtain a significant increment with the simple 100-epoch training strategy. To validate the efficacy of ANM more thoroughly, we further conducted experiments on more model settings and training strategies, as summarized in the following table.

|Student|Teacher|Strategy|w/ ANM|w/o ANM|
|:--:|:--:|:--:|:--:|:--:|
|MobileNetV1|ResNet-50|B1|73.6|73.3|
|ResNet-18|ResNet-34|B1|72.2|71.7|
|MobileNetV2|ResNet-50-SB|B2|74.9|73.7|

We can see that, with a stronger strategy and teacher, the improvement from ANM is more significant (1.2% improvement on MobileNetV2 compared to 0.3% and 0.5% improvements on MobileNetV1 and ResNet-18). One possible reason is that, when the augmentations and teacher become stronger, the noise gaps between the predicted features of teacher and student become more varied, and therefore ANM is more effective in matching the noise levels. --- > 3. How is the diffusion process applied on the logit-level? We elaborate the implementation details and the comparison of DiffKD at the feature level and logit level in lines 481~488 of our supplementary material. The predicted logits can also be regarded as a feature with only one dimension, compared to the intermediate feature of 3 dimensions (height, width, and channels), and we can likewise use the diffusion model (DM) to denoise the student logits. To achieve this, we replace the convolutional networks of our DM with an MLP, and the goal of the DM is to predict the 1-D noise of the logits. --- > 4. The performance gain of ANM is marginal. See response 2 (2). --- > 5.
Analyzing the student feature with or without ANM. **(1) Statistics of the learned noise weight $\gamma$.** To analyze the effectiveness of ANM, we first show the distribution of the noise weight $\gamma$ in Fig. 1 (a) of the rebuttal PDF. Recalling that the student feature is noised via $\boldsymbol{Z}^{(stu)}_T = \gamma\boldsymbol{Z}^{(stu)} + (1 - \gamma)\boldsymbol{\epsilon}_T,$ a larger $\gamma$ means less additional noise is added. We can see that a large proportion of values lies in the range $\gamma > 0.9$, indicating that the student feature itself contains non-negligible noise and only requires a small amount of added noise to match the initial noise level, while there also exist some cleaner samples that require larger noise. We also plot the curve of the average $\gamma$ per epoch during training. Fig. 1 (b) indicates that, at the beginning of training, the student feature contains more noise, so only a small weight of noise should be added. As the model converges, the noise in the student feature becomes smaller and $\gamma$ decreases to match the noise level accordingly. **(2) Comparison of features with or without ANM.** To validate how much ANM improves the denoised student features, we measure the mean distance between teacher (ResNet-50) features and denoised student (MobileNetV1) features with and without ANM on the ImageNet validation set, as summarized in the following table.

|Type|Metric|Distance w/ ANM|Distance w/o ANM|
|:--:|:--:|:--:|:--:|
|Intermediate feature|MSE|2.87|3.29|
|Logits|KL div.|0.21|0.35|

We can infer that ANM effectively reduces the discrepancy between teacher and student features, and therefore leads to better distillation performance. --- Rebuttal 2: Title: Discussion to Reviewer NwaZ Comment: Dear Reviewer NwaZ, We sincerely thank you for your efforts in reviewing our paper. We have provided corresponding responses and results, which we believe cover your concerns.
We hope to further discuss with you whether your concerns have been addressed. Please let us know if any part of our work remains unclear. Best, Authors --- Rebuttal 3: Comment: Thank you for your kind responses. Some of my concerns are now somewhat resolved. Another question arose while reading your response and the paper again. I understood that minimizing eq.(7) would not only optimize the parameters of the student but also the gamma of ANM, but does it also affect the diffusion model during training? Is the DM only trained by eq.(2), or is it also affected by eq.(7)? Thank you. --- Rebuttal Comment 3.1: Comment: Thank you for your kind reply. Below, we have refined our responses to your queries: > 6. I understood that minimizing eq.(7) would not only optimize the parameters of student but also gamma of ANM, but does it also affect the diffusion model during training? Is DM only trained by eq.(2) or is it also affected by eq.(7)? In our early experiments, we explored both options: * (a) updating the DM with the teacher feature (eq. (4)) and the distillation loss (eq. (7)); * (b) updating the DM with only the teacher feature (eq. (4)), while discarding the gradients produced by the distillation loss (eq. (7)). Interestingly, we observed that both options (a) and (b) yielded almost identical performance. However, for the sake of simplicity and efficiency of implementation, we have chosen option (a) in our final code.

|Option|Teacher|Student|ACC (%)|
|:--:|:--:|:--:|:--:|
|(a)|ResNet-34|ResNet-18|72.22|
|(b)|ResNet-34|ResNet-18|72.18|
|(a)|ResNet-50|MobileNetV1|73.62|
|(b)|ResNet-50|MobileNetV1|73.68|

Thanks, Authors --- Rebuttal 4: Comment: After reading the authors' rebuttal, some of my concerns are resolved, so I changed my rating to 'Borderline accept'.
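The adaptive noise matching step discussed in the thread above admits a compact sketch. The following minimal numpy illustration (hypothetical names; in the actual method $\gamma$ is a learned per-sample weight rather than a fixed scalar) implements the interpolation $\boldsymbol{Z}^{(stu)}_T = \gamma\boldsymbol{Z}^{(stu)} + (1-\gamma)\boldsymbol{\epsilon}_T$:

```python
import numpy as np

def adaptive_noise_match(z_student, gamma, rng):
    """Blend an (already noisy) student feature with fresh Gaussian noise:
    Z_T = gamma * Z + (1 - gamma) * eps. A larger gamma adds less extra
    noise, for features that are already close to the initial noise level."""
    eps = rng.standard_normal(z_student.shape)
    return gamma * z_student + (1.0 - gamma) * eps

rng = np.random.default_rng(0)
z = rng.standard_normal((4, 16))            # toy student feature batch

z_same = adaptive_noise_match(z, 1.0, rng)  # gamma = 1: no noise added
assert np.allclose(z_same, z)

z_noised = adaptive_noise_match(z, 0.5, rng)
assert z_noised.shape == z.shape
```

With $\gamma = 1$ the feature passes through unchanged, matching the rebuttal's observation that most samples already contain non-negligible noise and need only a small amount of added noise.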
Exploiting hidden structures in non-convex games for convergence to Nash equilibrium
Accept (poster)
Summary: This paper proposes a preconditioned hidden gradient descent that provides strong formal convergence guarantees in a general class of multi-agent settings referred to as hidden monotone games. Theoretical analyses and synthetic experiments are also provided. Strengths: 1. The method seems novel in handling the proposed problem of hidden monotone games. 2. Theoretical analyses and synthetic experiments are provided to demonstrate the effectiveness of the proposed method. Weaknesses: 1. Experiments on real-world datasets (e.g., MNIST or multi-agent reinforcement learning environments) are absent. 2. Experiments on adversarial generation, adversarial attack, and adversarial transfer learning are absent. 3. Computational complexity analyses are absent. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Is it feasible for this method to be applied to common neural networks, such as ResNet50, BERT, etc? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your input and positive evaluation. We reply to your questions point-by-point below: 1. **Discussion about the experiments included in the paper:** Our work stands out for its depth and novelty in experimental design within the realm of hidden games. A review of recent studies, such as [1,2,3], showcases their focus on continuous dynamics, offering only a fleeting glance at experiments with discrete step-sizes. Specifically: - Their experiments, although illustrative, were limited to synthetic setups, often featuring a single hidden variable and straightforward mappings, like the invertible sigmoid in games such as Rock-Paper-Scissors. - [3] ventured slightly further, introducing a single hidden layer, but fell short in detailing the nuances of converting continuous dynamics to a discrete format using the Euler method. In stark contrast, our experiments probe deeper, navigating games with multiple hidden layers using our distinctive method. Furthermore, our appendix introduces a seminal benchmark for Nash computation in expansive policy domains, exemplified by the El Farol Bar (see [4] and references therein). 2. **Discussion about real-world applications, MARL, or adversarial training:** Our primary objective was to devise a game-theoretically applicable algorithm for computing Nash equilibria in structured and ML-inspired non-convex settings. Unlike prior works, ours introduces theory and examples with intricate hidden maps. MARL training and the other applications are outside the scope of the current paper. However, the strength of our theoretical analysis distinctly underscores its applicability.
In particular, when evaluating works such as [5] and [6] (highlighted by another reviewer for a comparative examination of related work), our framework and algorithm not only address challenges in variational policy methods within MARL and the econometric literature but do so under a broader range of assumptions than those outlined in [5,6]. Thus, by establishing the theoretical basis for PHGD, we anticipate inspiring engineers and practitioners to adapt PHGD to large-scale challenges. 3. **On the computational complexity of Hidden Games:** The computational complexity of Hidden Games is an open and captivating topic. First, it is essential to focus on loss functions/utilities and hidden maps with a succinct polynomial representation under certain well-behaved arithmetic circuits [7]. Without this, defining a meaningful complexity result inside NP for continuous strategy games becomes challenging. When the arithmetic circuits that describe the losses are generic, the problem becomes FIXP-hard [15]. When equipped with these succinct models, we can efficiently compute value, gradient, and Jacobian oracles for hidden maps in polynomial time. Within this modeling framework, the problem can be shown to be at least as hard as PPAD (the standard complexity class for computing an ε-Nash equilibrium). On the other hand, proving membership in PPAD for general hidden maps — without escalating higher in the computational hierarchy of TFNP — remains an open question. While the authors of [8] outlined a meta-method using a version of Kakutani's theorem, the specifics are yet to be fleshed out. *All this underscores why the assumptions we stipulated are vital for a positive outcome in computing Nash equilibria in non-concave general games.* Furthermore, when more structure, like diagonal convexity/monotonicity, is introduced, our results provide an algorithm for finding the Nash equilibrium.
**Using our algorithm, PHGD, and given certain value/gradient oracle access, the problem actually lies in P, based on our iteration complexity results.** The discussion on computational complexity doesn't end here. If we assume only black-box access to the objective, [9-11] derive unconditional lower bounds for general hidden games. However, a more detailed account goes well beyond the scope of the current work. 4. **Regarding the utilization of more general neural networks like BERT/ResNet:** In essence, the crux of our theoretical requirements can be distilled to: *if the neural networks are highly expressive, our method can reliably uncover the NE in the associated games.* The empirical evidence surrounding the aforementioned networks suggests they meet the prerequisites our theorems mandate. Developing a theoretical framework for such a Deep Learning Game Theory would likely necessitate an average/smoothed complexity analysis. Recent findings related to learning models with NNs [12,16] and computing Nash equilibria in generalized random games [14] might provide an intriguing trajectory for subsequent research. Given the potential interest in this topic within the community, we will aim to incorporate elements of this discussion into the appendix or the camera-ready version, focusing on future and open challenges. [1] https://arxiv.org/abs/1910.13010 [2] https://arxiv.org/abs/2101.05248 [3] https://openreview.net/forum?id=bsycpMi00R1 [4] https://arxiv.org/abs/2106.01285 [5] https://arxiv.org/abs/2007.02151 [6] https://arxiv.org/abs/2205.01774 [7] https://arxiv.org/abs/2011.01929 [8] https://arxiv.org/abs/2207.07557 [9] https://arxiv.org/abs/2009.09623 [10] Exponential lower bounds for finding Brouwer fixed points [11] Problem complexity and method efficiency in optimization [12] https://arxiv.org/abs/2302.07426 [13] Computational Complexity (book).
[14] https://arxiv.org/abs/2007.10857 [15] https://arxiv.org/abs/2111.06878 [16] https://arxiv.org/abs/2211.03975
Summary: It is known that convergence guarantees exist for monotone games. However, most games are not monotone. This paper considers a new scenario where the monotone structure is present in a latent space. It then proposes 'Preconditioned Hidden Gradient Dynamics' to design the preconditioned hidden gradient descent algorithm. It is demonstrated that the proposed algorithm will converge to the desired Nash equilibrium. The theoretical result is also verified with empirical experiments. Strengths: Novelty: Existing works have shown the difficulty of finding the equilibrium in hidden-monotone games. Other results usually require additional assumptions on the structure. This work proposes a new perspective to design the preconditioned hidden gradient descent and solves a problem that has not been addressed before. Significance: Most games are not monotone. Studying the hidden-monotone structure will significantly extend the existing literature on monotone games, which makes me believe this work opens a new direction of research. The positive result obtained in this work and the new design of the continuous flow will also help the understanding of the hidden structure. Weaknesses: This paper is very well-written. But it may lack some real-world examples to help the reader understand the importance of studying hidden games. Technical Quality: 3 good Clarity: 3 good Questions for Authors: This paper has mentioned many times "minimal assumptions" or "without additional assumptions". I am interested in how strong the other assumptions are, such as separability or low-dimensionality assumptions. Do you have any simple examples where these assumptions are not satisfied? Figure 1 is very helpful for understanding the concept of hidden games. Do you have any practical or real-world example that would have such a Rock-Paper-Scissors hidden structure? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: This is a purely theoretical work so no negative impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your support and your helpful comments. 1. **This paper is very well-written. But it may lack some real-world examples to help the reader understand the importance of studying the hidden games.** We are glad to hear that you found our paper very well written! A natural example is the competition between two companies fighting for market share. Specifically, consider the competition between two ride-share platforms such as Uber and Lyft. Recent work has produced explicit models of such competition under the assumption that the demand for each company at each location depends linearly on the set of posted prices by each company [1]. Furthermore, they show that these models correspond to a strongly monotone game and then apply dynamics to solve them. Clearly, the linear demand model is a simplifying assumption. By introducing a non-linear model (that is linear only on some space of latent variables) we can capture much more complex, real-world dependencies in the elasticity of demand while satisfying hidden monotonicity. [1] Narang, Adhyyan, et al. "Learning in stochastic monotone games with decision-dependent data." International Conference on Artificial Intelligence and Statistics. PMLR, 2022. 2. **This paper has mentioned many times "minimal assumptions" or "without additional assumptions". I am interested in how strong the other assumptions are, such as separability or low-dimensionality assumptions. Do you have any simple examples where these assumptions are not satisfied?** Sure thing: consider, e.g., the weighted softmax function $\chi(\theta_1,\theta_2) = \log\left[(1+\theta_2^2)e^{\theta_1} + (1+\theta_1^2)e^{\theta_2}\right]$. This is neither one-dimensional nor separable. 3. **Figure 1 is very helpful to understand the concept of hidden games.
Do you have any practical or real-world example that would have such a Rock-Paper-Scissors hidden structure?** We kindly refer you to our answer to the first point of our discussion, as this is a special case. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the detailed examples! My concerns have been well addressed. I've also gone through the comments from other reviewers, but none raised any new concerns for me. So I will maintain my positive rating.
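The non-separability of the weighted softmax example in the rebuttal above can be verified numerically. A minimal sketch (not from the paper): for any separable map $\chi(\theta_1,\theta_2)=f(\theta_1)+g(\theta_2)$, the mixed difference $\chi(a,c)-\chi(a,d)-\chi(b,c)+\chi(b,d)$ vanishes identically, whereas for this $\chi$ it does not:

```python
import numpy as np

def chi(t1, t2):
    # Weighted softmax from the rebuttal:
    # chi(t1, t2) = log[(1 + t2^2) e^{t1} + (1 + t1^2) e^{t2}]
    return np.log((1 + t2 ** 2) * np.exp(t1) + (1 + t1 ** 2) * np.exp(t2))

# The mixed difference is zero for every separable f(t1) + g(t2).
a, b, c, d = 0.0, 1.0, 0.0, 1.0
mixed = chi(a, c) - chi(a, d) - chi(b, c) + chi(b, d)
assert abs(mixed) > 1e-6   # nonzero, so chi is not separable
```

Here `mixed` evaluates to roughly -0.023, confirming the authors' claim that the map is not separable.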
Summary: The paper uses hidden structure to provide continuous-time and algorithmic theoretical learning guarantees for certain non-convex games. Specifically, the authors provide Preconditioned Hidden Gradient Flow and its discrete-time variant Preconditioned Hidden Gradient Descent, which can be proved to converge when analyzed under a specific Lyapunov function. Strengths: The theoretical contribution of the paper is significant; compared to past work, the setting is significantly more general. The paper is very well presented, with figures explaining the basic theoretical ideas behind the paper. The theory is solid and rigorous; while I have not checked the proofs line by line, they seem to be correct. Furthermore, significant motivation is provided both visually and by argumentation for the proof strategies employed. Experiments are also provided, and in toy examples the algorithm is demonstrably better than alternatives. Overall, the paper solves an important problem in algorithmic game theory. Weaknesses: I think this is a strong paper with a solid contribution. Some possible weaknesses: * The relevance of the faithfulness assumption on the hidden map could be discussed. For instance, the experiments conducted seem to be on particular toy problems; it's not immediately clear if there are large classes of problems with hidden structure that satisfy the given conditions. * The authors make an effort to present the ideas behind the proofs, which are the main contributions of the paper. Still, the presentation in the main body could benefit from a condensed roadmap of the proof strategy. While Section 4 contains the main ideas, the overall picture was difficult to grasp. * The assumption on the representation map (e.g. faithfulness) is difficult to locate; the assumption could be made more explicit for ease of reading.
* Experiments are limited to somewhat simple cases; an evaluation of PHGD on, for instance, GAN training or other relevant problems in practice could be a stronger empirical argument. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The examples mentioned in the experiments are interesting but also seem to be restricted to very specific cases. Are there larger classes of problems besides Examples 2.1, 2.2 that could be relevant in applications? For instance, has there been work on when a neural network parameterization could satisfy the faithful map condition? While possibly out of the scope of the main contribution of the paper, a discussion of these points could be interesting. As a possible limitation, is the implied access to the Jacobian of the representation map standard in the literature? The paper mentions dimension reduction as a motivation of the representation map. It is not immediate to me that dimension reduction could be achieved while simultaneously preserving the faithfulness assumptions, due to topological considerations. A discussion of this point could be interesting. For instance, does the El Farol Bar game parameterization satisfy the faithfulness assumptions? *Minor questions:* Line 195 -> Is the message that $\mathbf{P}_i$ will be designed to achieve certain properties or was it defined before? Was not clear on the first read. Lines 177 - 189 -> The connection between the two paragraphs was not clear. Was NHGD shown to not require the mentioned assumptions? *Possible minor typos:* Line 116: $\Theta$ -> $\Theta_i$? Line 164: For the Jacobian notation, should it depend on $\theta_i$? Line 172: Here, it is possible that the notation is confusing with the previous separation of the control variables of each player. As far as I understand, the individual coordinates of the controls of one player are meant here. Line 254: an closed -> a closed Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: No major limitations. As mentioned above: empirical validation could be extended to relevant problems in application. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging remarks and your positive evaluation. We reply to your questions point-by-point below: 1. **A discussion of the relevance of the faithfulness assumption on the hidden map could be discussed.** We will be happy to add a discussion; however, we believe it could be more useful to see our setting as more prescriptive than descriptive - in the sense that we provide an intuitive set of conditions that allow for strong theoretical guarantees. Our contributions can help the design of practical systems with better performance and stability. 2. **The presentation in the main body could benefit from a condensed roadmap of the proof strategy.** Sure thing: if accepted, we would be happy to take advantage of the extra page allowed in the revision to include such a roadmap. Thank you for the suggestion! 3. **The assumption on the representation map (e.g. faithfulness) is difficult to locate, the assumption could be made more explicit for ease of reading.** Of course, we will be happy to collect it with the rest of the assumptions in the revision stage. 4. **An evaluation of PHGD on GAN training or other relevant problems in practice could be a stronger empirical argument.** Although GAN training is indisputably an important problem, we would like to point out that in a game-theoretic setup it corresponds to a 2-player Hidden Convex-Concave Game. Some preliminary experimental results in this setup can be traced to the work of Mladenovic et al. GAN training, however, is a 2-player game, while in this paper we would like to provide a method that goes beyond the 2-dimensional barrier. The examples we present build toward that goal, starting with examples that provide good intuition and visual feedback, and eventually arriving at the El Farol Bar game, a hard $n$-player benchmark. 5.
**Are there larger classes of problems besides Examples 2.1, 2.2 that could be relevant in applications?** We kindly refer the reviewer to Appendix D, where we provide empirical evidence in a variety of examples, including a hidden El Farol Bar structure, which is a highly non-trivial $n$-player game. Furthermore, we would like to point out that the complexity of the type of problems we study arises from at least two factors, namely, the complexity of the base game and the complexity of the associated latent game. For example, in a Deep Learning setup, while the complexity of the base game played in the input layer of the MLPs is high, the latent game, played at, or near, the output layers of the MLPs is comparatively lower. In the examples of Section 5 and Appendix D, we provide evidence of the applicability of PHGD in setups where the complexity of the base game is high relative to the capabilities of our own equipment, while the associated latent games gradually increase in difficulty across the examples. 6. **Has there been work on when neural network parameterization could satisfy the faithful map condition?** We kindly refer you to our answer to the first point of our discussion. 7. **As a possible limitation, is the implied access to the Jacobian of representation map standard in the literature?** Yes, indeed: in the literature on hidden games, the latent structure - that is, the map $\chi$ or its equivalent - is assumed known, so the Jacobian is accessible by the same tenet. 8. **The paper mentions dimension reduction as a motivation of the representation map. It is not immediate to me that dimension reduction could be achieved while simultaneously preserving the faithfulness assumptions, due to topological considerations. A discussion of this point could be interesting.
For instance, does the El Farol Bar game parameterization satisfy the faithfulness assumptions?** Any surjective linear map $A:\mathbb{R}^m \to \mathbb{R}^d$ (i.e., with full row rank) is faithful, so, if $m>d$, generic linear maps are faithful. Then, taking a composition with a faithful activation function (for instance, a leaky ReLU) leads to a faithful representation, so there are no topological obstacles in this regard. [Note that "faithful" does not mean injective / "1-1" in our context; that would indeed be topologically impossible.] 9. **Line 195 -> Is the message that $P_i$ will be designed to achieve certain properties or was it defined before? Was not clear on the first read.** The idea was that $\mathbf{P}$ would be designed to satisfy certain key properties; apologies if this was not clear, we will clarify it in the revision round. 10. **Lines 177 - 189 -> The connection between the two paragraphs was not clear. Was NHGD shown to not require the mentioned assumptions?** In our discussion, the work of Mladenovic is, in a sense, "sitting on the fence". To elaborate, they sidestep some of the more restrictive assumptions articulated in the introductory paragraph of Section 3. Notably, their portrayal of NHGD eschews any necessity for a one-dimensional representation, instead honing in on the latent space rather than the parameter domain. To recast this in our notation for clarity: Mladenovic et al. refrain from prescribing a distinct structure to the latent maps, denoted as $\chi_i(\theta_i)$ for $i=1,2$. This approach marks a progression from the strategies adopted by Vlatakis et al. However, it remains imperative to recognize the confines of Mladenovic et al.'s methodology. Among various limitations, their model is primarily tailored for a 2-player scenario.
Additionally, upon examining their convergence justification (as delineated in Proposition 4.1 of their paper) and subsequent demonstrations regarding NHGD, their analysis only applies to one-dimensional problems. 11. **{Minor typos}** Will fix them, thanks for bringing them to our attention! --- Thank you again for your input and positive evaluation - and please let us know if you have any further questions. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications. Regarding the matter of dimension reduction: Assumption 1 still requires that the Jacobian has singular values lower bounded away from zero. By the inverse function theorem, we can then find (at least) a neighborhood where the representation map is a local continuous bijection onto the image of the neighborhood, implying that an open neighborhood of the parameter space and one of the latent space are homeomorphic and have the same dimension. In fact, in the example given by the authors, the linear map would have a non-empty nullspace (implying a 0 singular value for the Jacobian). Or am I missing something? --- Reply to Comment 1.1.1: Comment: Thanks for your follow-up, we understand now the source of the confusion! To make things absolutely clear, consider first as an example the linear transformation $J\colon\mathbb{R}^2 \to \mathbb{R}$ with matrix $$ J = \begin{pmatrix} 1 & 2 \end{pmatrix} $$ The singular values of $J$ are defined either as the square roots of the eigenvalues of $J^\top J$ or as the square roots of the eigenvalues of $JJ^\top$, depending on convention. In the former convention, the singular values of $J$ are $\sqrt{5}$ and $0$; in the latter, there is only one, $\sqrt{5}$. The latter convention is more parsimonious for linear transformations that are surjective, so this is the convention we tacitly employed throughout (precisely because the regime of interest is when $m \geq d$). 
[When there is danger of confusion, the latter convention is sometimes referred to as "thin singular value decomposition" in the literature] With this in mind, the relevant part of Assumption 1 reads more explicitly as $$ \sigma_{\min}^2 \leq \mathrm{eig}(J_\theta J_\theta^\top) \leq \sigma_{\max}^2 $$ so there is no issue if $m>d$: in this case, $J_\theta \colon \mathbb{R}^m \to \mathbb{R}^d$ is surjective (or, equivalently, $J_\theta$ has full rank as a matrix). In particular, the inverse function theorem *does not apply* for $m>d$ (since $J_\theta$ cannot be injective if it is surjective and $m>d$), but the *implicit* function theorem does, and it implies that $\chi$ locally looks like a projection (in suitable coordinates around the point under study). We hope this clears things up. To avoid any risk of ambiguity or confusion and make things more explicit, we will restate Assumption 1 directly in terms of the eigenvalues of $J_\theta J_\theta^\top$. Thank you again for the follow-up - and please let us know if you have any further questions! Kind regards, The authors
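As a concrete check of the "thin" SVD convention discussed in the thread above, the following short NumPy snippet (our own illustration, not part of the original exchange) reproduces the numbers for the $1\times 2$ example matrix: NumPy's `svd` returns only the $\min(m,d)$ singular values, matching the "thin" convention, and the eigenvalues of $J J^\top$ are the squared singular values, as in the restated Assumption 1.

```python
import numpy as np

# The 1x2 matrix from the example above: J maps R^2 -> R (here m = 2, d = 1).
J = np.array([[1.0, 2.0]])

# "Thin" convention: numpy returns min(m, d) singular values -- here just one.
singular_values = np.linalg.svd(J, compute_uv=False)
print(singular_values)  # [2.23606798] == sqrt(5)

# Equivalently, the eigenvalues of J J^T (a 1x1 matrix) are the squared
# singular values, matching sigma_min^2 <= eig(J J^T) <= sigma_max^2.
eigs = np.linalg.eigvalsh(J @ J.T)
print(eigs)  # [5.]
```

Under the other convention, `np.linalg.eigvalsh(J.T @ J)` would instead return both `0.` and `5.`, which is exactly the $\sqrt{5}$-and-$0$ discrepancy the reply addresses.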
Summary: The paper focuses on studying non-convex games with hidden structures, where latent variables can be seen as a function of control variables and are decoupled. The authors propose a discrete algorithm called Preconditioned Hidden Gradient Descent (PHGD) to exploit the hidden structure and achieve convergence to Nash equilibrium. The method establishes a connection with natural gradient methods through the selection of gradient preconditioning schemes. The paper provides the first discrete convergence analysis on hidden convex concave games and extends the separable assumption to a more general multi-player setting. Convergence is proven under minimal assumptions, both in deterministic and stochastic environments. Strengths: - Originality: The paper introduces a novel algorithm, PHGD, to address non-convex games with hidden structure, providing the first discrete convergence analysis in this context. The connection established with natural gradient methods adds an original perspective to the research. - Quality: The paper demonstrates rigorous analysis by providing explicit convergence rates under minimal assumptions. It extends the previous work by considering a more general multi-player setting. - Clarity: The summary provided is clear and understandable, indicating good clarity in the paper. - Significance: The paper addresses an important problem by leveraging hidden structure in non-convex games, which has implications in various domains. The developed algorithm and its convergence properties contribute to the understanding and applicability of game theory. Weaknesses: - Lack of diverse examples: The paper could benefit from including additional examples beyond the matching Pennies zero-sum game to illustrate the applicability of the proposed method in different settings. The example shown in this paper can be included within the previous 1-dimensional assumptions presented in Mladenovic et al. 
Are there significant instances that this paper covers while previous results do not? - Lack of comparison with existing methods: It would be helpful if the authors provided a more detailed explanation of the technical challenges associated with discrete analysis compared to continuous ones. For the continuous dynamics, does it reduce to the exact natural gradient flow in hidden convex-concave games of Mladenovic et al.? - It seems that the faster convergence is at the expense of high computational complexity. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Preconditioning method and its relation to natural gradient: The authors claim that "a judiciously chosen gradient preconditioning scheme bears an unexpected connection to natural gradient method". However, the connection has not been well explained in the main context. What is the literature of the preconditioning method? Why is the previous natural gradient approach not seen as preconditioning? - Technical difference in extending to the multi-dimensional case: What are the main technical differences or challenges encountered when extending the approach to the multi-dimensional case? - Minor issue regarding notation: In Line 195, $P_i(\theta_i)$ appears before introducing what $P_i$ represents. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: The limitations of this paper have not been addressed adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your input and detailed remarks. We address each of your questions point-by-point below and we will revise our manuscript accordingly at the first revision opportunity. 1. **Lack of diverse examples: The paper could benefit from including additional examples beyond the matching Pennies zero-sum game to illustrate the applicability of the proposed method in different settings.** We kindly refer you to Appendix D where we demonstrate the algorithm's applicability in a series of different examples. 2. **The example shown in this paper can be included within previous 1-dimensional assumptions presented in Mladenovic et al. Are there significant instances that this paper covers while previous results do not?** The focus of Mladenovic et al (ICLR 2021) is a continuous-time setup. In our work, although we begin by developing a more general continuous-time method than that of Mladenovic et al (Section 3.1), we continue by developing a bona fide, discrete-time algorithm; more importantly, we then go on to provide theoretical guarantees about its convergence, and its rate of convergence (Section 3.2 onwards). This is a crucial distinction from Mladenovic et al, who do not provide an algorithmic analysis of the proposed dynamics. 3. **Lack of comparison with existing methods: It would be helpful if the authors provided a more detailed explanation of the technical challenges associated with discrete analysis compared to continuous ones.** We should first point out that, to the best of our knowledge, all dynamics that have been proposed for solving hidden games evolve in *continuous* time; any iterative algorithm that can be programmed in a computer would have to de facto evolve in *discrete* time, so there are no other *algorithms* to compare to. 
Now, as for the difficulty of going from continuous to discrete time, this has been the driving force for the development of stochastic approximation theory, starting from the original works of Robbins & Monro in the 1950s to the classical textbooks of Kushner and co-authors in the 1970s, and Benaïm in the 1990s. To make an extremely long story extremely short, when performing a Lyapunov/energy analysis of a continuous-time system, the chain rule of ordinary calculus suffices; however, when the analysis needs to be performed at discrete time steps, the step size plays a crucial role, as it introduces further error terms that propagate and require completely different techniques to handle. Then, when noise is also present in the mix, the situation becomes even more complicated because one needs to employ martingale limit theory to control the various stochastic errors that arise - and which are completely absent in the continuous-time flow limit (which is de facto deterministic). 4. **For the continuous dynamics, does it reduce to the exact natural gradient flow in hidden convex-concave games in Mladenovic et al.?** The dynamics featured in Mladenovic et al can indeed be viewed as a special case of Preconditioned Hidden Gradient Flow. However, note that Mladenovic et al feature the aforementioned formula for the case of a single datapoint (i.e., one-dimensional systems). Their general formula is not directly comparable. 5. **It seems that the faster convergence is at the expense of high computational complexity.** We are not sure what you mean here: the per-iteration complexity of the algorithm remains the same throughout, so a faster convergence rate would be synonymous with lower computational complexity. Could we kindly ask you to elaborate if needed? 6. **Preconditioning method and its relation to natural gradient: The authors claim that "a judiciously chosen gradient preconditioning scheme bears an unexpected connection to natural gradient method". 
However, the connection has not been well explained in the main context. What is the literature of the preconditioning method? Why is the previous natural gradient approach not seen as preconditioning?** Preconditioning simply means finetuning the search direction of an iterative optimization algorithm to make more informed gradient steps. It is a technique that permeates the optimization literature and is the cornerstone of some of the most widely used methods - like L-BFGS, CG, and the like; see e.g., the classical textbook of Himmelblau, "*Applied Nonlinear Programming*". Natural gradient methods essentially redefine the gradient operator, so they can be seen as a special case of preconditioning. What we found unexpected was the fact that the choice of preconditioner which was the most appropriate for our analysis ended up being itself a natural gradient method (and included the continuous-time method of Mladenovic et al.). We were constrained by space in our original submission but, if the paper is accepted, we will use the extra page allowed in the camera-ready phase to explain all this in more detail. 7. **Technical difference in extending to the multi-dimensional case: What are the main technical differences or challenges encountered when extending the approach to the multi-dimensional case?** The original approach of Mladenovic et al (ICLR 2021) is inherently one-dimensional and requires separability: if either condition fails, we see no way of extending it. In this regard, the main technical difficulty was in guessing the form of the preconditioner that would allow us to carry out a Lyapunov analysis. 8. **Minor issue regarding notation: In Line 195, $P_i(\theta_i)$ appears before introducing what $P_i$ represents.** Good catch, we will add a reference to equation PHGF on that line. --- Please let us know if any of the above is not sufficiently clear. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. 
I believe the contribution of this work is good, but there is still room for improvement in its presentation. I will raise my score to 5. --- Reply to Comment 1.1.1: Comment: Thank you for your positive re-evaluation, we will update our paper according to your detailed input and remarks. Regards, The authors
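To make the preconditioning discussion in the thread above concrete, here is a minimal sketch on a toy *linear* hidden problem. The quadratic latent loss and the preconditioner choice $P(\theta) = (J_\theta J_\theta^\top)^{-1}$ are our own illustrative assumptions, chosen so that the latent iterate $x_t = \chi(\theta_t)$ performs an exact gradient step on the latent loss; this is not claimed to be the exact scheme analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden problem: control variables theta in R^3, latent x = chi(theta) in R^2.
A = rng.standard_normal((2, 3))        # chi(theta) = A @ theta, full row rank (m=3 > d=2)
x_star = np.array([1.0, -2.0])         # latent minimizer of F(x) = 0.5 * ||x - x*||^2

def chi(theta):
    return A @ theta

def grad_F(x):                         # gradient of the latent loss
    return x - x_star

theta = rng.standard_normal(3)
gamma = 0.5
for _ in range(100):
    J = A                              # Jacobian of chi (constant, since chi is linear)
    # Illustrative preconditioner P = (J J^T)^{-1}: with this choice, the latent
    # iterate x_t = chi(theta_t) follows a plain gradient step x <- x - gamma * grad_F(x),
    # even though the update is performed entirely in the control variables theta.
    P = np.linalg.inv(J @ J.T)
    theta = theta - gamma * J.T @ (P @ grad_F(chi(theta)))

print(chi(theta))  # converges to x_star = [1., -2.]
```

Note that at no point is $\chi$ inverted: the update only requires the Jacobian, which is consistent with the authors' point that inverting $\chi$ may be intractable even when its derivatives are cheap.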
Rebuttal 1: Rebuttal: Dear AC, dear reviewers, We are sincerely grateful for your time and constructive input. To streamline the discussion phase, we reply to each reviewer’s questions in a separate point-by-point thread below. Kind regards, The authors
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studies a hidden preconditioned stochastic gradient descent method for finding Nash equilibrium in games with hidden monotone structures. It demonstrates non-asymptotic convergence bound of the proposed algorithm. The complexity bound of the hidden monotone game matches that of monotone games. The topic is very interesting as it provides a way to handle non-monotone stochastic games, which motivates me to read in details. Overall, it is an interesting paper. However, the presentation quality needs serious improvement. It is hard to judge the correctness with so many typos. Strengths: The motivation of the hidden gradient descent design is clear. The convergence rate result for hidden monotone stochastic games seems to be new and matches the optimal complexity bound for stochastic games. Weaknesses: The presentation of the paper is poor with lots of typos. Without the Errata provided by the authors in the Appendix, when reading the main text for the first time, I found many mathematical mistakes. After reading the authors' own corrections, there still remain many typos, making it hard to understand. The notation system is very confusing. I have tried my best to go through the proof. The overall idea seems correct but I cannot guarantee details. See more comments below. The examples used to motivate the problems seem to be too simple. Example 2.1 is too simple. For Example 2.2, it is unclear why one does not first solve the bilinear problem and find a corresponding mapping, which is not even needed actually. The example provided in figure 1 does not really need a neural network to approximate the probability space. For Assumption 1, the paper claims that $\theta$ and $x$ do not need to be within the same space, i.e., their dimensions can differ. I wonder if $\Theta$ is the whole space $\mathbb{R}^m$ and if $\mathcal{X}$ is the whole space $\mathbb{R}^d$. 
Please specify the conditions under which the Jacobian has positive and finite minimum and maximum singular values when $m\not=d$. Please give more concrete examples when the Jacobian can be exactly computed as required in the PHGD algorithm. If the Jacobian can be exactly computed, why does one not solve the hidden problem to optimality first and then find a mapping from control to optimal decision directly? Missing literature: there exists some literature on hidden convex structures in optimization and reinforcement learning with non-asymptotic convergence guarantee. The hidden gradient descent also appears previously. See literature [1-2] and references therein. [1] Zhang, Junyu, et al. "Variational policy gradient method for reinforcement learning with general utilities." Advances in Neural Information Processing Systems 33 (2020): 4572-4583. [2] Chen, Xin, et al. "Efficient Algorithms for Minimizing Compositions of Convex Functions and Random Functions and Its Applications in Network Revenue Management." arXiv preprint arXiv:2205.01774 (2022). Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: P2 Line 77, the "[]" should be deleted. P3 Line 99, here $F$ is defined over $\theta$; $F$ is later defined over $x$, making it super confusing. Although $F$ represents $l$ here, later $f$ is also used to denote $l$. Another notation $L$ is used to denote the stochastic counterpart of $l$. And then $f$ is used to denote a stochastic counterpart of $L(\chi)$. Please read these notations to see how confusing they are. P5 Line 195, it is unclear why $P$ is defined over $\theta_i$ rather than $\theta$. P5 Lemma 1, what is a hidden smooth structure? What assumptions are exactly needed on the functions $F$ and $\chi$ to ensure the form of $P$? P6, in Lemma 2, I suppose $\chi(\theta)=x$ by definition? P6 Line 230, the statement about the first moment is totally wrong. P6 Lines 238 & 239, $f_i$ is previously defined as a deterministic function. 
Now $f_i$ becomes a random function. What exactly is the algorithm that one should run? I suppose that one should run PHGD with $\hat g_{i,t}$ defined by $L$ rather than $f_i$, right? Line 308, there should be a space between "games" and "This". Lemma 6, the notation is broken. Equation (C.5j) is missing a 1/2 in the third line. There are many mismatched notations between the main text and the appendix. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: The motivation for using the method is not adequately discussed. Some more explanation on the limitation of some assumptions is definitely welcomed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your input. Before our point-by-point replies, we would only like to respectfully point out a potential misunderstanding: the fact that the latent structure may be known to the players does not imply that the problem can be solved in the latent space and the solution transferred back to the control layer. Specifically, even if $\chi$ is known (and its derivatives computable), it may still be computationally hard to invert it, so mapping latent variables to control variables is, in general, not possible. We detail this issue in our replies below. 1. **The [paper has] lots of typos.** We spotted a number of typos in our original submission, which we fixed in the supplement (cf. Erratum in Appendix A). We trust that our replies below will clear up any remaining confusion. 2. **Example 2.1 is too simple.** This only served as a gentle start to ease the reader into the model, hence its simple (but not simplistic) nature. 3. **Example 2.2, it is unclear why one does not first solve the bilinear problem and find a corresponding mapping.** Inverting $\chi$ might be an intractable problem, as hard as (or harder than) the original game. Since the aim is to find a solution in $\theta$, solving in $x$ and then inverting is not a viable approach. 4. **Figure 1 does not need a neural network.** This was only meant to illustrate a standard simple example in the spirit of Mladenovic et al. We will add a pointer to clarify. 5. **I wonder if $\Theta$ [and $\mathcal{X}$] is the whole space. [...] Please specify the conditions for [the singular values of] the Jacobian.** To simplify the presentation, we assumed that $\Theta = \mathbb{R}^{m}$ from the second line of Section 2. By contrast, the latent space $\mathcal{X}$ **need not be** all of $\mathbb{R}^d$ (and, indeed, in many cases of interest, it isn't). 
For the singular value requirement for $\chi$, this is a mild topological condition: for example, if $\chi$ is a linear map, this simply posits that it has full row rank. The general case is the nonlinear version of this requirement, namely that $\chi$ can be uniformly approximated at each point by a full-rank linear map. 6. **If the Jacobian can be computed, why not solve the hidden problem and then find a mapping from control to optimal decision?** Even though the Jacobian can be computed exactly, this does not mean that $\chi$ can be inverted. As a toy example, let $\chi(\theta_1,\theta_2) = \log\left[(1+\theta_2^2)e^{\theta_1} + (1+\theta_1^2)e^{\theta_2}\right]$: the Jacobian is trivial to calculate, but the inversion $\chi(\theta_1,\theta_2) = x$ can be very hard to compute. The difficulty of inverting $\chi$ scales exponentially in $m$ and $d$ so, in general, backsolving is not a feasible approach. 7. **[Missing literature]** Thanks for bringing these papers to our attention. We will certainly discuss them, but we should also stress that our primary focus and results are notably distinct from these references. On [1]: hidden games can indeed serve as a model for expressive neural nets. However, Assumption 4.1 of [1] stipulates the invertibility of the policy's hidden map, an assumption that we explicitly avoid. [Inverting a function represented by a neural net can be computationally prohibitive, whereas the Jacobian is typically available in closed form.] On [2]: Assumption 2.1(b) in [2] calls for the hidden map to be non-decreasing across segments of hidden parameters. This is an inherently one-dimensional consideration, while our model has been explicitly designed to treat high-dimensional problems. 8. **L99: $F$ is defined over $\theta$ [and later] over $x$.** Please disregard L99, this was a typo; see L144 for the proper definition as the pseudo-gradient of $f$. 
As for $\ell$ and $f$, since we introduce stochasticity after defining them, randomness was implied from the context. We will update notation to make this clear. 9. **L195, it is unclear why $P$ is defined over $\theta_i$ rather than $\theta$.** $\mathbf{P}_i$ should involve only the control variables of player $i$; otherwise, each player would have to know everyone else's control variables. 10. **Lemma 1: what is a hidden smooth structure? What assumptions are needed on $F$ and $\chi$?** Apologies, "smooth" should read "monotone". Other than that, Lemma 1 was only stated as a stepping stone to achieve the desideratum right above and to derive the form of $P$. 11. **In Lemma 2, I suppose $\chi(\theta) = x$?** Yes - cf. Fig. 1, the "Notation" paragraph at the end of Section 2, the beginning of Section 3, etc. 12. **Line 230, about the statement for the first moment is totally wrong.** Yes, this statement was incorrect; please see the Errata in Appendix A. 13. **Lines 238 & 239, $f_i$ is previously deterministic. Now $f_i$ becomes random.** To lighten the notation, we treated $f$ as a variadic function, taking an extra stochastic argument when needed. We did this to minimize notational clutter, but we realize the confusion, so we will correct this in the revision. 14. **One should run PHGD with $\hat g_{i, t}$ defined by $L$ rather than $f_i$ right?** Yes, the second equality in L239 was only intended as a reminder. 15. **Lemma 6, the notation is broken.** This was fixed in the Erratum (App. A). 16. **The motivation for using the method is not adequately discussed.** As we explained above, the motivation is straightforward: knowledge of the hidden structure map $\chi$ does not mean that it is possible to invert it. PHGD retains all the good convergence properties one could expect in the latent, monotone space, without needing to solve a computationally prohibitive nonlinear system. 17. **{Minor typos}** Thank you. We will fix them all in the revision. 
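To complement point 6 above, the following small numerical check (our own illustration, not from the paper) shows that the Jacobian of the toy map is available in closed form and agrees with finite differences, even though solving $\chi(\theta_1, \theta_2) = x$ for $\theta$ has no closed form.

```python
import math

def chi(t1, t2):
    # Toy map from point 6 above: the Jacobian is trivial, inversion is not.
    return math.log((1 + t2**2) * math.exp(t1) + (1 + t1**2) * math.exp(t2))

def grad_chi(t1, t2):
    # Closed-form Jacobian (a 1x2 row vector), obtained by direct differentiation.
    S = (1 + t2**2) * math.exp(t1) + (1 + t1**2) * math.exp(t2)
    d1 = ((1 + t2**2) * math.exp(t1) + 2 * t1 * math.exp(t2)) / S
    d2 = (2 * t2 * math.exp(t1) + (1 + t1**2) * math.exp(t2)) / S
    return d1, d2

# Sanity check against central finite differences at an arbitrary point.
t1, t2, h = 0.3, -0.7, 1e-6
fd1 = (chi(t1 + h, t2) - chi(t1 - h, t2)) / (2 * h)
fd2 = (chi(t1, t2 + h) - chi(t1, t2 - h)) / (2 * h)
g1, g2 = grad_chi(t1, t2)
print(abs(g1 - fd1) < 1e-6, abs(g2 - fd2) < 1e-6)  # True True
```

By contrast, recovering $(\theta_1, \theta_2)$ from a target value $x$ would require numerical root-finding over a one-dimensional solution set, which is exactly the kind of backsolving the reply argues against.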
Thanks again for your input and please let us know if any of the above is not clear. --- Rebuttal Comment 1.1: Comment: Thank you for the response. Despite the readability issue caused by typos, I appreciate the technical novelty in the paper. With regard to point 16, it seems that the proposed method is most useful when one cannot control the decisions made in the latent monotone games. Otherwise, there is no point in learning from the control layer, as one can directly play a monotone game in the hidden latent space. In this regard, the examples provided do not support the motivation well enough. --- Reply to Comment 1.1.1: Comment: First, we would like to sincerely thank you for your quick response and input. The first thing we would like to clarify is that, in line with the established literature on hidden games (see below for a representative list of references), the agents *do not play the hidden game directly*: the agents' decisions are their control variables, not the latent variables. The latent variables merely represent a modeling abstraction and are best thought of as virtual/auxiliary variables that capture a certain structure in the players' payoff functions. In this regard, identifying a solution in terms of latent variables is not meaningful from the players' viewpoint unless they can also find the control/decision variables that realize said solution. We should also note that this model is not due to our paper but is the standard model for hidden games in the literature, see references [1-4] below. Given that this is an established model in the literature, the main focus of our paper was to develop a suite of learning algorithms and tools that would allow the effective solution of such problems. As for additional instances/examples of our model, another interesting class of problems that can be cast in the framework of hidden games is that of team zero-sum games [5], where there exist two competing teams of agents. 
For each possible outcome of the game, the payoffs of all members within a single team are equal to each other and represent the payoff of the team. The sum of the payoffs of two teams is equal to zero. Several variations of these models have been the object of recent study [6-8] and they readily fall within a hidden game framework where the space of latent variables is the set of (mixed extensions) of the strategy outcomes of each team. We hope that the above clarifies further the positioning of our work in the existing literature on hidden games. We thank you again for your input and please let us know if you have any further questions. > [1] Vlatakis et al. Poincare recurrence, cycles and spurious equilibria in gradient-descent-ascent for non-convex non-concave zero-sum games. (NeurIPS 2019) > [2] Flokas et al. Solving min-max optimization with hidden structure via gradient descent ascent. (NeurIPS 2021) > [3] Mladenovic et al. Generalized natural gradient flows in hidden convex-concave games and gans. (ICLR 2021) > [4] Pattathil et al. Symmetric (optimistic) natural policy gradient for multi-agent learning with parameter convergence. (AISTATS 2023) > [5] Schulman et al. The duality gap for two-team zero-sum games. Games and Economic Behavior (2019) > [6] Anagnostides et al. Algorithms and Complexity for Computing Nash Equilibria in Adversarial Team Games. (EC 2023). > [7] Kalogiannis et al. Teamwork makes von Neumann work: Min-max optimization in two-team zero-sum games. (ICLR 2023) > [8] Kalogiannis et al. Efficiently Computing Nash Equilibria in Adversarial Team Markov Games. (ICLR 2023)
Robustness Guarantees for Adversarially Trained Neural Networks
Accept (poster)
Summary: This paper studies the optimization convergence of adversarial training in two-layer neural networks. This paper also proposes a reflecting loss which searches for a better attack. Strengths: The paper is clear and easy to understand. Weaknesses: My major concern regarding this paper is that the condition on the attack strength $\nu$ is too strong. The constraint $\nu\leq \beta/(2(1-\alpha)\kappa\sqrt{m})$ is exactly used in Page 13 (in the appendix) so that the attack on the current model $f_{W,t}$ is close to the one for the model $f_{W}$. However, in such a scenario, there is almost no difference between clean training and adversarial training. The authors need to relax this condition and provide more insights on the role of the attack strength and the difficulty when it gets larger. Besides the major concern on the theoretical contribution, there are some issues in the experiments. First, the experiment on CIFAR-10 does not obtain a significant improvement. In Table 2, counting the colored cells, PGD takes up 4 and R-PGD takes up 3, i.e., PGD may be even better than R-PGD in some cases. Second, there is no error bar information in this paper. Given that the difference between benchmarks and the proposed method is only of a small quantity, the authors need to run experiments multiple times to obtain a standard error of the accuracies to verify that the improvements are statistically significant. Finally, in addition to the weakness of the assumption on the attack strength, there is also another gap between the theoretical study and the experiments: the authors may consider theoretical justification of the generalization performance of adversarial training on linearly separable data. I would suggest that the authors investigate the effect of $\beta$ on the generalization performance. 
Much of the existing literature studies the generalization performance of simple models (e.g., Gaussian mixture models, linear regression), but there is limited understanding of linearly separable data. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please try to address my major concern (the first paragraph). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No ethical concern. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. *Regarding attack strength*, the constraint $\nu \leq \beta/(2 (1-\alpha) \kappa \sqrt{m})$ is not too small since **$\kappa = 1/\sqrt{m}$; see line 134**, in which case the constraint reads $\nu \leq \beta/(2(1-\alpha))$. In fact, it is exactly what we should hope for. Consider, for example, the setting with $\alpha=0$ (i.e., linear unit). Then the constraint simplifies to $\nu \leq \beta/2$. If we think of $\beta$ as a hard margin between two classes and assume that the class conditional marginals are supported on hyperplanes that are a distance of $\beta$ from each other, then any perturbation of size larger than $\beta/2$ will result in robust test loss equal to one. We know that our results are tight since we recover several prior results when we set the parameters accordingly; see the remarks following Theorem 3.4. 2. *Regarding experiments*, the goal here is not to improve upon standard adversarial training. We know that adversarial training is quite successful in practice. This is what we state in the opening paragraph of the experimental results section (**see lines 232 — 235**). However, we cannot hope to give convergence guarantees for adversarial training as it involves maximizing a convex function in the inner loop, which is not tractable. This is why we use a concave lower bound on the 0-1 loss. **The question that remains is whether changing adversarial training in this way does worse in practice. The answer is no.** It does not change the practice in any significant way, but we gain new insights by establishing computational learning guarantees. 3. *Regarding linear separability*, we recall that the goal here is to give a bound on the computational complexity of adversarial training, i.e., how much time is needed to train a model that generalizes robustly. 
**Without linear separability, we know that empirical risk minimization is computationally hard even for simple models such as linear predictors.** So, if we cannot hope to say anything about the runtime of learning in a clean setting, there is little hope we can do so in the adversarial setting without assuming linear realizability. We know that it is a strong assumption, but unfortunately, there is not much to say in terms of runtime otherwise. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' feedback. Regarding the attack strength, thanks for pointing out my misunderstanding. However, could you please provide some more intuitions on why the proof can work? From $\kappa=1/\sqrt{m}$, following paper Ba, J., Erdogdu, M., Suzuki, T., Wu, D., & Zhang, T. (2020, April). Generalization of two-layer neural networks: An asymptotic viewpoint. In International conference on learning representations. and Xing, Y., Song, Q., & Cheng, G. (2021, March). On the generalization properties of adversarial training. In International Conference on Artificial Intelligence and Statistics (pp. 505-513). PMLR. My understanding is that the neural network is expected to behave similar to a linear network, in which the attack simply follows the one in linear model. Or is there any difference in the intuition? --- Reply to Comment 1.1.1: Title: Responding to the follow-up comment from Reviewer 8NN7 Comment: Thank you for reading our rebuttal and for the follow-up questions. We will be sure to discuss these works in the revision. Note though that there are significant differences between the settings/results of our work compared to those shared by the reviewer. The paper of Ba et al. focuses on the least squares regression problem with two-layer networks trained using gradient flow in a high dimensional setting wherein samples n, features d, and neurons h tend to infinity. 
We, on the other hand, focus on the problem of binary classification with two-layer networks trained using SGD. There is a fair bit of difference between the settings of the two papers. Furthermore, the paper of Ba et al. seeks to understand the double descent phenomenon and the role of inductive bias in different settings involving various initialization schemes, which is a very different focus from that of ours. We are not sure what connection we need to identify to be able to gain insights into why our proof works based on the work of Ba et al. The paper of Xing et al. focuses on statistical aspects of adversarial training for regression problems and makes some connection with the Lasso problem. They consider a lazy training setting and leverage the fact that the dynamics of training a two-layer neural network in such a setting is close to the dynamics of training a linear predictor. We, on the other hand, focus on binary classification, do not consider a lazy regime, and focus on computational aspects of the learning problem (i.e., how much runtime is needed for robust learnability rather than the statistical complexity). There is very little connection between the settings and tools and methods used in the two works. Regarding the reviewer’s question about why the proof works: our proof techniques do not actually leverage the lazy regime or somehow exploit the fact that the neural network behaves as a linear network. We make that amply clear in the paper in several places (e.g., see lines 39 — 42, lines 73-74, lines 154 — 156). The proof follows the convergence proof of the Perceptron algorithm after making appropriate changes for the nonlinear architecture of the neural networks. There are three key ideas: first, showing that although the objective is non-convex, every critical point is a global minimum; second, showing that SGD converges to a global minimum after performing at most a certain number of non-zero updates.
Finally, showing that as long as the attack size is smaller than the margin, robust training is not much harder than standard training. The reviewer may also want to check the following well-cited paper by Alon Brutzkus, Amir Globerson, Eran Malach, and Shai Shalev-Shwartz which laid the foundation for this work: https://arxiv.org/pdf/1710.10174.pdf. --- Rebuttal Comment 1.2: Comment: Hi Reviewer 8NN7, Please let me know whether the response addressed your concern. Best AC
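To make the rebuttal's central trick concrete, here is a minimal sketch of projected gradient ascent on a concave lower bound of the 0-1 loss, for a linear predictor under an $\ell_2$ budget. The specific lower bound used here (a reflection of the base-2 logistic loss) and the linear model are illustrative assumptions; the paper's exact reflected loss and network architecture may differ.

```python
import numpy as np

def logistic(z):
    # convex UPPER bound on the 0-1 loss 1[z <= 0] (base-2 logistic)
    return np.log2(1 + np.exp(-z))

def reflected(z):
    # concave LOWER bound on the 0-1 loss: 1 - logistic(-z)
    # (one plausible construction of a "reflected" loss)
    return 1 - np.log2(1 + np.exp(z))

def pgd_attack(w, x, y, nu, steps=200, lr=0.1):
    """Projected gradient ASCENT of the reflected loss over the l2 ball of radius nu."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = y * w @ (x + delta)
        # d reflected / d z, then chain rule through z = y * w @ (x + delta)
        grad_z = -np.exp(z) / ((1 + np.exp(z)) * np.log(2))
        delta = delta + lr * grad_z * y * w   # ascent step
        norm = np.linalg.norm(delta)
        if norm > nu:                         # project back onto the l2 ball
            delta = delta * (nu / norm)
    return delta

w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 0.0, 1.0])
y = 1.0
nu = 0.4
delta = pgd_attack(w, x, y, nu)
# For a linear predictor the worst-case l2 attack has the closed form below,
# so the sketch is easy to sanity-check:
opt = -nu * y * w / np.linalg.norm(w)
```

Because the ascent direction for a linear predictor is always collinear with $w$, the projected iterates land exactly on the closed-form worst-case perturbation $-\nu y w/\|w\|$; the concavity of the reflected loss is what makes this inner maximization tractable, in contrast to maximizing a convex surrogate.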
Summary: The paper studies robust training in 2-layer networks. This problem is studied as a two-step procedure: finding adversarial samples, and training on these samples. The main result is that, given a linearly separable dataset, a 2-layer network with leaky-ReLU activation trained robustly with SGD converges to a robust network in polynomial time. Several experiments are given to support the theoretical results. Strengths: - I think it is interesting to study robust optimization in a rigorous way, and the main result of this paper, namely that robust SGD converges to a robust network in polynomial time, is nice. - The empirical result complements the theoretical results, while also generalizing to a multi-class setting. - I think the proof sketch gives a nice and intuitive explanation of the main result Weaknesses: - The use of the reflected loss seems like a crucial part of the proof, although I don’t think it is motivated enough. Is it used just for some technical part of the proof, or is there more to it? I think if it is indeed a crucial part, then there should be a more in-depth explanation of why it is used. - The model analyzed is a 2-layer network without biases. It is not clear whether removing the bias is done just for simplicity, or whether it is indeed necessary. I think the proof uses homogeneity w.r.t. x, which may be the cause for the bias-less network, but if so, can the results be generalized to networks with bias terms? - More of a suggestion: Can the convergence results of Algorithms 1 & 2 be decoupled? I.e., will Theorem 3.4 about the convergence of Algorithm 2 hold when, instead of Algorithm 1, we use any other attack? If so, I think it is good to mention it. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Will the proof work also on networks with bias terms? - Why is the reflection loss used in the analysis? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately address the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [Q1] We think that the proof should go through even with bias terms, but it may require some extra work. In particular, one could build on some of the recent work (e.g., see https://arxiv.org/pdf/2102.11840.pdf and https://arxiv.org/pdf/2301.00327.pdf). The idea here is similar to how the proof goes through in the NTK setting — i.e., ensuring that the weights and the bias terms do not move too far from the initialization, which is indeed the case in many settings as these works show. [Q2] The reason that the reflected loss is used in the analysis is that it is a concave function that is a lower bound on the 0-1 loss. See, the technical challenge is that the inner loop of adversarial training for finding an adversarial attack amounts to maximizing a convex function, which is computationally intractable. It is also unclear why one would use an upper bound on the 0-1 loss function if one were trying to maximize it. So, instead, we use a concave lower bound, which is intuitive and computationally tractable. We will add a discussion to that effect. Regarding your suggestion of decoupling the results of the two algorithms, that is indeed the case. That is actually how we structured Section 3. Theorem 3.2 is for Algorithm 1 and Theorem 3.4 is for Algorithm 2, which involves Algorithm 1. It is also possible to use a different procedure other than PGD — as long as that algorithm returns a good enough attack vector in polynomial runtime, we can still give analogs of Theorem 3.4. We will think more about other alternatives. Thanks for your suggestion. --- Rebuttal Comment 1.1: Title: Re: Rebuttal Comment: I thank the authors for the response.
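The decoupling discussed above, where the outer training loop only needs *some* attack oracle returning a norm-bounded perturbation, can be sketched as follows. Everything concrete below (the 2-D toy data, the hidden width, the step size, and a one-shot normalized-gradient attack standing in for Algorithm 1) is an assumption for illustration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky(z, a=0.1):
    return np.where(z > 0, z, a * z)

def dleaky(z, a=0.1):
    return np.where(z > 0, 1.0, a)

# linearly separable 2-D toy data (label carried by the first coordinate)
X = np.array([[2.0, 1.0], [1.5, -1.0], [-2.0, 0.5], [-1.0, -1.5]])
Y = np.array([1.0, 1.0, -1.0, -1.0])

m = 8                                                        # hidden width
v = np.where(np.arange(m) % 2 == 0, 1.0, -1.0) / np.sqrt(m)  # fixed outer layer

def f(U, x):
    # two-layer leaky-ReLU network without bias terms
    return v @ leaky(U @ x)

def dmargin_dU(U, x, y):
    # gradient of the margin y * f(U, x) w.r.t. the first-layer weights
    return np.outer(y * v * dleaky(U @ x), x)

def attack(U, x, y, nu):
    # stand-in oracle: one normalized gradient step that decreases the margin
    g = y * (v * dleaky(U @ x)) @ U            # d(y f)/dx
    n = np.linalg.norm(g)
    return -nu * g / n if n > 0 else np.zeros_like(x)

def robust_sgd(U, attack_fn, nu, lr=0.3, epochs=300):
    # outer loop: SGD on the logistic loss at the perturbed inputs;
    # the attack procedure is an interchangeable argument
    for _ in range(epochs):
        for x, y in zip(X, Y):
            xp = x + attack_fn(U, x, y, nu)
            s = 1.0 / (1.0 + np.exp(y * f(U, xp)))   # -dlogistic/dmargin
            U = U + lr * s * dmargin_dU(U, xp, y)
    return U

U = robust_sgd(rng.normal(scale=0.1, size=(m, 2)), attack, nu=0.1)
margins = np.array([y * f(U, x) for x, y in zip(X, Y)])
```

Swapping `attack` for any other procedure (multi-step PGD, or the zero perturbation recovering standard training) leaves `robust_sgd` unchanged, which is exactly the decoupling the reviewer asked about.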
Summary: This paper studies the convergence of adversarial training of two-layer neural networks with a gradient descent ascent (GDA) type algorithm. The algorithm considered here solves the inner max problem on a surrogate concave loss, and solves the outer minimization problem on a log-exp loss. Finally, the authors show convergence to an $\epsilon$-weaker robust error within no more than $O(1/\epsilon^2)$ iterations. The proof contains two parts: first they show that solving the surrogate concave loss will yield a perturbation which is almost as good as the true optimal adversarial perturbation, and then they borrow the existing analysis of two-layer classification neural network training to show that, under the solved perturbation, the outer minimization problem can converge to a global minimum. To the best of my knowledge, this is the first work towards analyzing the behavior of GDA on adversarial neural network training. Strengths: The convergence of adversarial neural network training is a longstanding problem. The most relevant work is [Gao et al 2019], where they assume the inner max problem is solved by some oracle, not PGD, and hence they only need to perform minimization convergence analysis on the adversarial loss. This paper, for the first time, considers algorithm-based convergence, studying the case where the inner max problem is solved by projected gradient ascent. Weaknesses: 1. My main concern is that the proof seems to depend heavily on previous results, and the main technical novelty is the idea of running PGD on a concave surrogate and showing the convergence of PGD. 2. Running PGD on a concave surrogate loss is not widely used in practice. In practice, the gradient descent and ascent steps are usually performed on the same loss function. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I notice that [Allen-Zhu and Li 2022] also studies adversarial training, where they consider the fast gradient method to solve the inner max problem.
Could the authors do some comparison with this paper? Allen-Zhu, Zeyuan, and Yuanzhi Li. "Feature purification: How adversarial training performs robust deep learning." In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pp. 977-988. IEEE, 2022. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: One limitation as I mentioned is that the algorithm used here is not exactly consistent with what people use in practice. I do not see any societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [W1] Regarding the “*main technical novelty is the idea of running PGD on a concave surrogate and showing the convergence of PGD*”. That is a fair characterization of the main contribution. The computational learning guarantee for the end-to-end adversarial training follows as a corollary once we have the result for the PGD attack. Note though that the result for the PGD attack is novel and it required us to rethink adversarial training. While, in hindsight, things look easy once you know the trick, it was not quite straightforward. We also believe that simple ideas are elegant and often powerful. [W2] Regarding the comment that “*Running PGD on a concave surrogate loss is not widely used in practice. In practice, the gradient descent and ascent steps are usually performed on the same loss function.*” That is true. However, we can never hope to analyze that approach since maximizing a convex function is computationally intractable. What we do show is that running PGD on a concave surrogate is principled and does not change the practice by much. This is evidenced by our empirical results. Both approaches are actually quite comparable. [Q1] Yes, we would be happy to add a discussion of how our approach and results compare with those of Allen-Zhu and Li. Note though that the two settings are somewhat incomparable as Allen-Zhu and Li make various distributional assumptions, e.g., the sparse coding model. Nonetheless, we do agree that some comparisons on how the two works handle the inner loop maximization are in order. --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal Comment: Thanks for providing explanations for my concerns. Now I see why the authors consider maximizing the surrogate loss, since maximizing a convex loss is not feasible to analyze. I change my score to 6 since this is the first work analyzing the GDA type of algorithm on neural network training, and I recommend to accept.
I also suggest that the authors discuss more relevant works, such as Allen-Zhu and Li's FOCS paper, regarding the differences in handling the inner maximization problem.
Summary: This paper investigates the adversarial training of two-layer neural networks on linearly separable data. The authors propose to reflect the commonly used convex surrogate loss during the inner loop that generates the adversarial attack via the PGD method, and derive a guarantee on the convergence of the attack. Meanwhile, this paper also provides theoretical results on the iteration complexity of adversarial training on linearly separable data that hold for any width and initialization of the network. Numerical studies are conducted to show the performance. Strengths: 1. The investigated topic of adversarial training of neural networks is important and has many applications in the real world, while the theoretical aspects have not been well-studied. The results presented in this paper are a non-trivial contribution to the field, and might help with better understanding of adversarial training of neural networks in more complicated settings. 2. The theoretical analysis seems solid. The authors introduce some interesting terms such as $\beta$-effective attack and $\beta$-robust to help the analysis, which might help in other studies. 3. Overall, the paper is well-organized and easy to follow. Weaknesses: 1. The empirical results with MNIST and CIFAR-10 do not show a significant difference between the performance of standard adversarial training and adversarial training using the proposed reflected loss function. 2. There is some room for improvement regarding the experiments. Specifically, please consider the following: (1) The result presented in Figure 2 does not seem convincing to me because (a) the setting is too simple and hand-crafted, while real data can behave very differently; (b) only one specific data point ($x=[3,2,1]$) is considered. It is doubtful whether it is cherry-picked or not.
(2) In Table 1, the robust testing error for the standard training model under the standard PGD attack (0.033) is much smaller than under FGSM (0.286), which is unlikely since PGD usually generates a stronger attack through multiple iterations. Please double-check the correctness. (3) In Table 1, please consider including the evaluation of different models over clean testing data (un-attacked) for better comparison. Also, for Tables 1 and 2, please consider including the evaluation under other attacks, e.g., the CW attack [1] and AutoAttack [2]. Please also consider my questions in the next part. [1] Carlini, N., & Wagner, D. (2017, May). Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP) (pp. 39-57). IEEE. [2] Croce, F., & Hein, M. (2020, November). Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International Conference on Machine Learning (pp. 2206-2216). PMLR. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. What feature of the reflected loss function enables the derivation of the theoretical results, or why does the standard loss function make the analysis difficult? Can one replace the reflected loss function with a more general loss function? 2. Theorem 3.4 says that "in at most $T_{tr}\cdots$ iterations, Algorithm 2 $\cdots$ $\textit{finds an iterate}$ $\tau$” with a certain property. What does that mean exactly? Is it guaranteed that the model at the end of iteration $T_{tr}$ maintains such a property? Or is an early-stopping criterion necessary in order to guarantee the property? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [W1] Regarding “*The empirical results with MNIST and CIFAR-10 do not show significant difference between the performance of standard adversarial training and adversarial training using proposed reflected loss function.*” That is actually what we were hoping for. The goal here is not to improve upon standard adversarial training. We know that adversarial training is quite successful in practice. This is what we state in the opening paragraph of the experimental results section (**see lines 232 — 235**). However, we cannot hope to give convergence guarantees for adversarial training as it involves maximizing a convex function in the inner loop, which is not tractable. This is why we use a concave lower bound on the 0-1 loss. **The question that remains is whether changing adversarial training in this way does worse in practice. The answer is no.** It does not change the practice in any significant way, but we gain new insights by establishing computational learning guarantees. [W2] (1) Regarding “*improving experiments*”, we note that the point of the experiment in Figure 2 was to help compare the utility of the surrogate loss vs. the reflected loss at finding the attack vectors. **It is meant to be a toy example so that we can carefully understand what is going on rather than simply report numbers that often are not very insightful. The goal here is not to have a leaderboard competition, but to unravel the inner workings of adversarial training.** [W2] (2) It is perhaps not very informative to compare performance across different attacks since the noise budget is not quite “calibrated.” In other words, it is unclear what the noise perturbation budget for an $\ell_2$ attack should be for a certain noise budget corresponding to an $\ell_\infty$ attack or the FGSM attack. [W2] (3) Sure, we will be happy to do so.
If you look at the code we provide, it is actually quite straightforward to run the corresponding experiments for other attacks and respective reflected losses. [Q1] The feature of the reflected loss that allows our analysis is that it is a concave function that is a lower bound on the 0-1 loss. See, the technical challenge is that the inner loop of adversarial training for finding an adversarial attack amounts to maximizing a convex function, which is computationally intractable. It is also unclear why one would use an upper bound on the 0-1 loss function if one were trying to maximize it. So, instead, we use a concave lower bound, which is intuitive and computationally tractable. That is the key feature. [Q2] That is a poor way of writing. We should have written “after $T_\textrm{tr}$ iterations”. In any case, what we mean is that after $T_\textrm{tr}$ iterations we find a model that is guaranteed to have a small robust error. --- Rebuttal Comment 1.1: Title: Follow-up comment on W2-2 Comment: I thank the authors for the clarification during the discussion period. Most of my concerns have been addressed. Here is my follow-up comment regarding W2-2. I think the significant difference between the robust testing error for the standard training model under the standard PGD attack vs. the FGSM attack (0.286) is not due to the perturbation budget. Note that the reported robust testing errors for PGD-$\infty$ (0.033) and PGD-2 (0.003) are both much smaller than the 0.286 reported for the FGSM attack. Considering that PGD takes multiple iterations compared to FGSM and is more powerful in searching for a strong attack, I still do not think this difference is reasonable. Did you repeat this experiment a few times? --- Reply to Comment 1.1.1: Title: Regarding follow-up comment from Reviewer avt7 Comment: Yes, we did repeat the experiments multiple times and we do not believe there is a problem with the experiments. We provide the code as part of the supplement and it is fairly simple to check and run the code.
We do note that the size of $\nu$ is an order of magnitude different for FGSM vs. PGD. Given our simple setting, the difference here could have been effectively even larger. Please refer to lines 267 -- 269 where we write: "The perturbation size for FGSM, PGD-$\infty$, and BIM (and their corresponding reflected version) is set to $\nu = 0.1$. For PGD-2 and R-PGD-2, we let a larger perturbation size of $\nu = 2$ as recommended in the Adversarial ML Tutorial."
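The calibration point in this thread can be illustrated on a toy linear predictor (an assumption for illustration; the paper's experiments use networks): for a linear model, the one-shot FGSM step is already the worst-case $\ell_\infty$ attack, and the same numeric budget $\nu$ buys a margin reduction of $\nu\|w\|_1$ under $\ell_\infty$ but only $\nu\|w\|_2$ under $\ell_2$, so raw budgets across attack families are not directly comparable.

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5, 0.0])   # linear predictor
x = np.array([0.5, -0.5, 1.0, 2.0])
y = 1.0
nu = 0.1

margin = y * w @ x
# Worst-case margin drop under an l_inf budget is nu * ||w||_1,
# achieved exactly by the one-shot FGSM step x - nu * y * sign(w)
fgsm = x - nu * y * np.sign(w)
drop_inf = margin - y * w @ fgsm          # equals nu * ||w||_1
# Worst-case drop under an l_2 budget of the SAME numeric nu: nu * ||w||_2
drop_l2 = nu * np.linalg.norm(w)
```

Since $\|w\|_1 \ge \|w\|_2$, an $\ell_\infty$ budget of $\nu$ is strictly "stronger" than an $\ell_2$ budget of the same $\nu$ on this toy model, which is why the rebuttal uses different perturbation sizes for PGD-2 than for FGSM/PGD-$\infty$.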
Dataset source: NeurIPS_2023_submissions_huggingface
Conference year: 2023
Can Neural Networks Improve Classical Optimization of Inverse Problems?
Reject
Summary: In this paper the authors explore whether better optimization solutions can be found by jointly optimizing several inverse problems together. These inverse problems share a connection as they all can be formulated through a differentiable function $F(\xi_i | x_i)$. The authors implement the joint optimization through a set of NN parameters $\theta$ which connect all the different inverse problem variables $\xi_i = \hat{\xi}_i (\theta)$. Upon reading the authors' rebuttal to the questions I posed, I am inclined to adjust my score accordingly. Strengths: * The main problem that the paper is trying to solve is interesting and relevant. As the authors mention in lines 101-105, "generic optimizers often fail to find the global optimum due to local optima, flat regions, or chaotic regions". * Also, the setting expressed in Section 3 is not typical, which opens up much potential future work leveraging this setting. * The experimental results are interesting in how they all improve the solutions by increasing $n$, which does provide evidence of cross-talk between the problems. Weaknesses: * The experiments generate inverse problems synthetically. I would like to see a real-life example of a problem that follows the setting exposed in Section 3. I understand that several of the equations have practical applications, but in this case I'm referring to a real-life problem that has a naturally occurring (not sampled from a known distribution) set of inverse problems that can be connected through a function $F$. * The method is not applicable to many problems. I'm not familiar with any ML application that has similar optimization problems that can be pooled together. Put differently, it is unclear to me how restrictive the setting in Section 3 is, namely having a function $F(\xi_i | x_i)$ that expresses all the inverse problems. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * I understand that BFGS is used as a high-accuracy optimizer.
However, in ML, SGD is the workhorse as it seems to avoid getting stuck in undesirable parts of the solution space. Could you replicate the experiments in Section 3, also showing the loss evolution vs. steps for SGD? * What are the runtimes of the experiments that you ran in Section 4? I would appreciate a runtime comparison between the baselines BFGS and Neural adjoint (not only steps, as the steps could be much more expensive for some methods). I know you make mention of this on lines 346-348, but a breakdown for each of the problems and baselines would be ideal. How different are the trained NNs from their initialization? * Flexible ML models like NNs require several examples to extract the patterns from the data. It is unclear to me how one can learn an NN from 2 to 256 examples (line 313), as in several problems the neural networks are pretty large too (lines 203-205). * Could you provide results without the refinement stage (lines 155-161)? I would like to understand how crucial this last step is to the results obtained. I understand that this stage is done to achieve high accuracy; however, it is unclear to me whether this stage is actually doing all the work, simply starting from a better initialization coming from the previous stage, or whether it is actually the opposite and the second stage only improves the results marginally while the first stage gets relatively close to the optimal solution. * "Supervised learning can sidestep many difficulties that complex loss landscapes pose, such as local minima, alternating gradient directions, or zero-gradient areas" (lines 321-323). Supervised learning is a paradigm of ML where the model's parameters are trained on a dataset with specified targets. As such, supervised learning has nothing to do with complex loss landscapes, etc. Could you elaborate on this line and rephrase it in the paper?
* How would you ensure reproducibility of the code, given that the README.md reads: "The code in this directory is preliminary and requires unreleased versions of some libraries"? It is unclear to me what these "unreleased versions" are and whether they would be available, as well as your code, upon publication (lines 357-358). Could you also add a requirements.txt or setup.py? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: * The limitations that I see are encapsulated in the questions that I raised above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > I'm not familiar with any ML application that has similar optimization problems that can be pooled together. We have addressed this point in our general rebuttal above. Our approach can be applied whenever an experiment is performed multiple times or records multiple instances, such as time series data. > Could you replicate the experiments in section 3 also showing the loss evolution vs steps for SGD? Thank you for the suggestion! We have now run gradient descent (GD) with adaptive step size as an additional baseline. Example learning curves are shown in Fig. 1 on the attached PDF page. It turns out that GD generally performs worse than BFGS in our experiments. The following table shows the fractions of examples in which GD performs better than or equal to BFGS.

| Experiment             | GD better | Equal to BFGS |
|------------------------|-----------|---------------|
| Wave packet fit        | 18.4%     | 24.6%         |
| Billiards              | 0.8%      | 30.9%         |
| Kuramoto–Sivashinsky   | 7.0%      | 0.4%          |
| Incompr. Navier-Stokes | 3.9%      | 0.0%          |

Consequently, our method shows a bigger improvement over GD than over BFGS, if only slightly. In the incompressible fluids experiment with $n=128$, for example, the reparameterized and refined optimization outperforms GD in 88.3% of cases vs. 86.0% when compared against BFGS. > What are the runtimes of the experiments that you ran in section 4? We have put together a table that includes all training and optimization times. The table can be found in the general rebuttal above. We further discuss the performance comparison to BFGS in our response to reviewer nXp8. > How different are the trained NNs from their initialization? This is a very interesting question. We plotted the weight change by layer and data set size in Fig. 2 of the attached PDF page. We can observe that, in all experiments, the weight change grows with increasing data set size, the only real outlier being the final bias of the KS experiment.
Partly convolutional networks (wave packet, KS, fluids) concentrate the weight change on the final layers while fully-connected networks (billiards) primarily change the initial layers. We will add these plots along with a discussion to the appendix of our paper. > It is unclear to me how can I learn a NN out of 2 to 256 examples as in several problems the neural networks are pretty large too. This question touches on the frontiers of our understanding of ML. Technically, a neural network can be trained with a single example by repeatedly feeding it that data point and backpropagating the residual. Eventually, the network will output the correct result. While overparameterization often leads to overfitting and poor generalization in classical optimization, this does not seem to be the case regarding neural networks, see e.g. https://arxiv.org/abs/2105.14368 . In our setting, we do not require the networks to generalize well, as there is no separate test set. However, the fact that increasing the data set size improves performance in our experiments shows that the networks generalize to some extent. > Could you provide results without the refinement stage? Table 2 in the appendix lists the results without refinement. Also, Figs. 1, 4, 7, 10, 11, and 12 in the appendix show results both with and without the refinement stage. > it is unclear to me if this stage is actually doing all the work [...] or if it is actually the opposite and the second stage only improves marginally [...]. We have compiled a table listing the fraction of the total loss improvement performed by the network fit. It can be found in our general rebuttal above. The table shows that, in all experiments, the first stage (network fit) is responsible for the bulk of the improvement and the refinement stage improves the loss much less overall. 
We can also see that for larger data set sizes $n$, the network fit contributes even more to the overall improvement while for small data set sizes, the refinement stage is more important. > I would like to see a real-life example of a problem that follows the settings exposed in section 3. We agree that this is a very interesting avenue of research. However, we believe that applying this approach to real-life problems would go beyond the scope of this paper. We have plans to pursue this direction in the future. > (lines 321-323). [...] Could you elaborate on this line and rephrase in the paper? What we meant here is that the gradients which supervised learning receives are not passed through the simulator and, thus, do not directly depend on the loss landscape that the classical optimizer sees. Supervised learning here simply is linear regression for a given set of training inputs and labels under an $L^2$ loss. With overparameterized networks, this objective almost always results in smooth and stable convergence towards zero training loss. We will reformulate this in the paper. > How you would you ensure reproducibility of the code as in the README.md it reads [...] Could you also add a requirements.txt or setup.py. We had to use nightly builds of one library during development, but this has now been resolved with the latest release. We will add a requirements.txt and provide a simple API to run our method on custom inverse problems. In the meantime, you should be able to reproduce our experiments with the source code from the supplementary material after installing the following packages: `torch==2.0.0 tqdm==4.64.1 phiflow==2.4.0 dataclasses==0.6 matplotlib==3.5.1` --- Rebuttal Comment 1.1: Comment: I thank the authors for running some additional experiments which have clarified some questions that I had about the effect of refinement, the actual run times (not only steps) and the evolution of the NN's parameters. I have revisited my score. 
--- Reply to Comment 1.1.1: Comment: We are grateful for your careful reading of our rebuttal and your revised score! We are glad we could answer your questions and alleviate your concerns.
Summary: This paper develops a novel approach to gradient-based non-convex optimization. The proposed methodology begins with the reparameterization of the parameter space utilizing neural networks, followed by the application of classical techniques such as BFGS, or alternative Neural Network surrogate models for the forward function, to accomplish the optimization task. The efficacy of these methods is verified through four distinctive experiments, which include applications to the Kuramoto-Sivashinsky (K-S) equation and the Incompressible Navier-Stokes equation. The results indicate that the reparameterized optimizer delivers enhanced convergence overall. Strengths: The paper presents a compelling concept of reparametrizing the parameter space of the optimization problem using a neural network and accomplishing optimization via a two-step process that employs the trained/optimized neural network as a preconditioner. However, it remains unclear to me why such a reparameterization is likely to benefit the non-convex optimization procedure. Despite this, the experimental results appear to indicate an enhancement, as evidenced by the four case studies investigated by the authors. Weaknesses: My primary concern regarding this paper pertains to its lack of rigor. The authors do not clearly define the inverse problem that they are attempting to solve, nor do they provide cogent proofs or insights explaining why the introduced reparameterization would aid the optimization process. It is quite plausible that the limited experimental studies offered in this paper lack generalizability, and it's conceivable that there are counterexamples where optimizers, without the incorporation of reparameterization, achieve superior convergence. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: In all four cases, the authors do not explicitly define the inverse problem they're attempting to solve, or the specific loss function utilized.
This lack of clarity greatly complicates the task of comprehending their methodology. The paper falls short in normalizing the loss functions: without this normalization, it becomes exceedingly difficult to ascertain whether a loss value, such as "10", carries any physical significance. The optimization curves presented in the four experiments exhibit some anomalous behaviors. For instance, in Figure 3, the loss of the neural adjoint method escalates with increasing steps, while the supervised method demonstrates oscillation. Could this be attributed to improper tuning of the optimizer's step size? The paper doesn't provide a direct comparison between the optimized solutions obtained and the ground truth, except for the reported loss function. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: While the authors recognize that they have not extended the method to constrained optimization problems, I believe there is an inherent limitation in the approach, as it lacks a mechanistic understanding of why such a method would be effective. This comprehension is fundamental for a method to be broadly applicable and reliable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The authors do not clearly define the inverse problem that they are attempting to solve Our inverse problems have very simple definitions: in all experiments, the objective is the $L^2$ loss between the target and the simulation output from the solution estimate, see Eq. 2. In the wave packet experiment, the difference is between the estimated and target waveform. In the Billiards experiment, it’s the (x,y) difference between the final position of the ball and the target. In the KS and fluid experiments, it’s the difference in final states. We will add the specific loss functions for each experiment to the experimental details section. > [...] nor do they provide cogent proofs or insights explaining why the introduced reparameterization would aid the optimization process Our method works better than classical optimizers for the same reason neural networks learn to generalize the training data. This “unreasonable effectiveness of neural networks” has been demonstrated empirically many times but our theoretical understanding lags behind. It is likely that future progress made in understanding why neural networks generalize can be directly applied to our results. > it's conceivable that there are counterexamples where optimizers, without the incorporation of reparameterization, achieve superior convergence. First, we would like to point out that our experiments cover a broad range of scenarios, and we have never encountered a worsening of results due to reparameterization. But looking at this from a more theoretical point of view, it also seems unlikely. Remember that classical optimization is also performed as a second stage with our method. To yield worse results, cross-talk between examples would have to move the solution estimates towards more shallow local optima, overcoming the pull along the negative gradient. 
Realistically, this can only occur if the initial guesses were already well-chosen, in which case our method is superfluous anyway. While not impossible, we deem it unlikely that the reparameterized and refined solutions would be worse than those found directly by a classical optimizer in realistic settings. > The paper falls short in normalizing the loss functions Our loss functions, namely $L^2$ losses, are widely used in machine learning research, as well as science in general. Reporting absolute loss values is standard practice, and it simplifies replication in future work. Furthermore, there is no one correct way to normalize our loss values since they are unbounded and can be zero. The differences in initial loss stem from the initial guess being closer to or further from a minimum and do not directly correlate with the difficulty of the specific example. > in Figure 3, the loss of the neural adjoint method escalates with increasing steps, while the supervised method demonstrates oscillation. Could this be attributed to improper tuning of the optimizer's step size? We can confirm that neither behavior is caused by an improper step size. You can find the corresponding parameter evolution curves in Fig. 6 of the appendix. **Neural adjoint**: In all cases, the neural adjoint method converges to a single position in solution space where the optimization reaches machine precision accuracy and then terminates. The actual optimization loss, measured by the surrogate model, decreases during the optimization. However, this is only a proxy for the real loss, which increases. **Supervised**: While the supervised method does show certain fluctuations in its output, this is because the network is evaluated on a different data set than it is trained on. This is a common phenomenon, related to double descent. Note that the “oscillation” occurs over the course of hundreds to thousands of iterations. A too-large learning rate would result in much higher-frequency oscillations.
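The behavior described above for the neural adjoint method, where the surrogate loss keeps decreasing while the true loss increases, can be illustrated with a deliberately simple sketch. The two quadratics below are hypothetical stand-ins for the true objective and an imperfect surrogate; they are not the models used in the paper:

```python
def true_loss(x):
    return (x - 1.0) ** 2        # true objective, optimum at x = 1

def surrogate_loss(x):
    return (x - 3.0) ** 2        # imperfect surrogate, optimum at x = 3

def surrogate_grad(x):
    return 2.0 * (x - 3.0)

x = 2.0                          # initial guess, true loss = 1
for _ in range(200):             # gradient descent on the surrogate only
    x -= 0.1 * surrogate_grad(x)

# The surrogate loss reaches machine-precision zero, yet the true loss
# has grown from 1 to roughly 4: the surrogate minimum was only a proxy.
print(surrogate_loss(x), true_loss(x))
```

Optimizing the surrogate terminates at the surrogate's own minimum regardless of where the true minimum lies, which mirrors the escalation seen in Fig. 3.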
> The paper doesn't provide a direct comparison between the optimized solutions obtained and the ground truth, except for the reported loss function. The distances from ground truth values for all optimization parameters can be found in Figs. 3, 9, and 14 in the appendix, where the dashed gray line represents the ground truth solution. These figures cover three of our experiments, but we do not have ground truth solutions for the billiards experiment due to how the examples are generated. Note, however, that the distance in solution space is not a good indicator of the loss. In the wave packet experiment, for example, the loss oscillates with increasing distance from the ground truth. By definition, the loss value measures the goodness of solutions, and it should be the primary metric for comparing methods. --- Rebuttal Comment 1.1: Comment: I thank the author for the detailed explanation and revision. I have updated my score. --- Reply to Comment 1.1.1: Comment: Thank you for your careful consideration of our rebuttal! We hope we have resolved your questions and worries.
Summary: The manuscript presents a method to reparameterize and solve multiple inverse problems jointly using neural networks. The manuscript tests the proposed method on multiple inverse problems (including some chaotic problems) and compares against Neural Adjoint and BFGS baselines to show measurable performance improvements. Strengths: * The method is simple, and the authors haven't tuned architecture for problems, which would allow their usage as drop-in replacements. * Comparison against baselines shows the method provides noticeable improvements. Weaknesses: * The main downside of these methods is the added training cost (which the authors have mentioned in the limitations section) * I would recommend adding the training wall clock times + solving times in a table to give potential users of this method a proper estimate. * Adding benchmarks for the same problems used in the Neural Adjoint paper would strengthen the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * The paper uses BFGS to refine the solution. Would it be possible to use the solver as a predictor-corrector and incorporate BFGS in the training pipeline? Similar to methods used in Deep Equilibrium Networks [1] [2]. [1] [Neural Deep Equilibrium Solvers](https://openreview.net/forum?id=B0oHOwT5ENL) [2] [Continuous Deep Equilibrium Models: Training Neural ODEs Faster by Integrating Them to Infinity](https://arxiv.org/abs/2201.12240) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: All limitations are clearly stated. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > I would recommend adding the training wall clock times + solving times in a table to give potential users of this method a proper estimate. We have assembled a table that includes all training and optimization times. It is provided in the general rebuttal above. > Adding benchmarks for the same problems used in the Neural Adjoint paper would strengthen the paper. That is a good point. However, the Neural Adjoint paper did not use the functional form of the forward process, i.e. the true simulation. When using it, the problems presented there become trivial to solve. As a sanity check, we have now replicated the robotic arm experiment. Fig. 3 in the attached PDF page shows the results. BFGS and gradient descent (GD) manage to reach machine precision accuracy within a couple of iterations while the network approaches take longer to fit the data (3c). The refinement stage then optimizes all examples to machine precision accuracy (3d). While reparameterized fitting successfully solves these experiments, there is no need to use it since they can be solved perfectly with classical optimizers. All inverse problems in our manuscript exhibit non-trivial features, such as local optima, zero-gradient regions, or chaotic behavior. > The paper uses BFGS to refine the solution. Would it be possible to use the solver as a predictor-corrector and incorporate BFGS in the training pipeline? Similar to methods used in Deep Equilibrium Networks [1] [2]. That’s an interesting idea, but there are two major challenges to overcome. First, the training would be computationally expensive as each network optimization step would require a BFGS solve that itself calls the simulation many times. Second, assuming BFGS converges to a local optimum, the output will always have zero gradient w.r.t. the loss function, so there is no residual to back-propagate. 
However, as similar approaches have been successfully used in related work, this would be an interesting topic for future work. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal and the updated timings in the rebuttal. > assuming BFGS converges to a local optimum, the output will always have zero gradient w.r.t. the loss function That is a fair point that I missed during my paper review. Thanks for clarifying. I have updated my score. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our rebuttal and for updating your score. We appreciate your constructive feedback and are glad that our points were helpful in addressing your concerns.
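The two-stage pipeline discussed in this thread, a joint reparameterized fit followed by per-example refinement, can be sketched on a toy problem. Everything below is a hypothetical stand-in: the "simulator" is a scalar square function with a zero-gradient point at the origin, the network is reduced to a shared linear map, and plain gradient steps stand in for the BFGS refinement:

```python
import numpy as np

rng = np.random.default_rng(0)

def sim(x):
    return x ** 2                            # toy forward process

y = rng.uniform(1.0, 4.0, size=16)           # per-example targets

def loss(x):                                 # per-example L2 objective
    return (sim(x) - y) ** 2

def grad(x):                                 # dL/dx, elementwise
    return 2.0 * (sim(x) - y) * 2.0 * x

# Stage 1: joint "network fit". The reparameterization x_i = w * y_i + b
# shares the parameters (w, b) across all examples.
w, b = 0.1, 0.1
for _ in range(2000):
    g = grad(w * y + b)
    w -= 1e-3 * np.mean(g * y)               # chain rule: dx/dw = y
    b -= 1e-3 * np.mean(g)                   # chain rule: dx/db = 1
x_fit = w * y + b

# Stage 2: per-example refinement. Plain gradient steps stand in for
# the BFGS refinement used in the paper.
x_ref = x_fit.copy()
for _ in range(2000):
    x_ref -= 1e-3 * grad(x_ref)

print(loss(x_fit).mean(), loss(x_ref).mean())
```

The shared parameters couple the examples during stage 1, while stage 2 polishes each solution independently, mirroring the division of labor reported in the fraction-of-improvement table.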
Summary: This paper discusses a novel approach to finding model parameters from data, a crucial task in science. Traditional iterative optimization algorithms like BFGS can accurately solve simple inverse problems, but their reliance on local information can limit their effectiveness in complex situations with local minima, chaos, or zero-gradient regions. To overcome these issues, the study proposes the idea of jointly optimizing multiple examples. The authors use neural networks to reparameterize the solution space and utilize the training procedure as an alternative to classical optimization. This method is as versatile as traditional optimizers and does not require additional information about the inverse problems, making it compatible with existing general-purpose optimization libraries. The paper evaluates the effectiveness of this novel approach by comparing it to traditional optimization on a variety of complex inverse problems involving physical systems, such as the incompressible Navier-Stokes equations. The findings show significant improvements in the accuracy of the solutions obtained, suggesting that this method could be a powerful tool for tackling complex inverse problems. Strengths: 1. This paper points out a potential new use case of neural networks and deep learning for optimization instead of existing learning to optimize (L2O) methods, that is, to use neural networks as part of classic optimization, trying to learn unknown common structures among problem instances of interest. A major difference is that generalization to unseen instances does not matter. 2. The paper is written with crystal clarity, with detailed information about the methods, the experiments, and the results. The authors discuss the results of different methods for each setting. Limitations and an outlook on future work are also faithfully discussed. 3. Improvements without refinements look impressive on all settings.
Improvements after refinements still look great on the first three settings. Weaknesses: 1. For the 4th setting, Incompressible Navier-Stokes, the proposed reparameterization method gives much higher mean losses than BFGS despite the fact that the majority of problems actually improve over BFGS. Could the authors elaborate more on the potential reasons specific to this experiment setting? 2. Since the mean losses could change with different dataset sizes because of the varying instance difficulty, would relative error or relative loss be a better presentation of the results? (A relative improvement could also be better for results like Figure 4 in the Appendix.) 3. 3~6 times more computational cost for the first three settings and up to 22 times for the fluids could be too high for the benefits achieved. This could be subjective, but I hope the authors could provide some discussions or justification. 4. Would the "similarity" requirement be too strict to make the proposed method practically useful? For example, in the wave packet localization setting, the parameters A and $\sigma$ are fixed. Is there a practical application scenario that corresponds to this setting? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See the weaknesses part. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: High computation cost and potential lack of practical applicability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > For the 4th setting, [...] reparameterization method gives much higher mean losses than BFGS despite the fact that the majority of problems actually improve over BFGS. This is an artifact of the L2 loss. A few examples with a high loss can dominate the mean even though most examples have a lower loss. You can see this in Figs. 10 and 11 in the appendix, where we have plotted the distribution of loss values. For the reparameterized optimization, it’s primarily the examples with a large left or right initial velocity $\vec v_0$ that get stuck during the optimization and are resistant to refinement. Despite the 30% higher mean loss, the reparameterized fit finds lower loss solutions in 64% of examples for $n=128$. > [...] would it be a better presentation of results with relative error or relative loss? (also a relative improvement could be better for results like Figure 4 in the Appendix.) This is a good idea, but it’s hard to realize in practice since it requires a baseline for the difficulty of each problem. The final losses can be zero, so they are not a good reference. Using the initial loss of each problem is possible, but note that the network starts with a slightly different initial guess than the classical optimizer due to the non-zero weight initialization. Nevertheless, we will add the relative loss plots based on the initial guess of BFGS to the appendix but will keep the absolute loss values in the main text because they also facilitate replication of our results. > 3~6 times more computational cost for the first three settings and up to 22 times for the fluids could be too high for the benefits achieved. This could be subjective but I hope the authors could provide some discussions or justification. The computational cost is a limitation of our current implementation. However, we would like to point out that we compared our method to a very efficient parallel BFGS implementation that runs the forward process on the GPU.
Most users employing a classical solver would likely run a CPU version, looping over the individual examples. For the KS experiment, sequential BFGS solves took 8x longer than the batched solve for $n=16$ and 76x longer for $n=256$. Running this on the CPU increased runtime by an additional 50-60%. Compared to the sequential CPU approach at $n=256$, our method is 18x faster than BFGS. In any case, our main goal was achieving better results than classical optimizers given access to the same information. We always ran the classical optimizers to convergence, meaning they cannot further improve their solutions given more time. We believe that finding better solutions is worth the extra computation time, especially since BFGS is usually not used in time-critical applications. Furthermore, we trained the reparameterization networks long enough for the loss to decrease significantly. This is not strictly necessary. For example, training the KS network on $n=256$ for just one-quarter of the training time cited above reduces the fraction of examples with better solutions than BFGS by just 0.3% from 69.1% to 68.8%. However, network fitting (162s) and refinement (129s) together now only take about 60% longer than the pure BFGS optimization (180s). > Would the "similarity" requirement be too strict to make the proposed method practically useful? For example, in the wave packet localization setting, the parameters A and σ are fixed. Is there a practical application scenario that corresponds to this setting? We address the applicability of our method in our general comment above. As for the wave packet experiment, we chose to fix A and σ to emulate a constrained optimization task with local minima. It is quite common in science applications to fit a function derived from theory with only a handful of free parameters, e.g.:
* fitting voltage and current data to models such as Ohm’s law, Kirchhoff’s laws, etc.
* fitting temperature and heat exchange data to models such as Fourier’s law, Stefan-Boltzmann law, etc.
* fitting vibration and noise data to models of damping, resonance, etc.
* fitting drug concentration and cell response to models of inhibition, activation, etc.
* fitting trajectories of objects to models of gravity, air resistance, etc.
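As a concrete toy instance of fitting a theory-derived function with a handful of free parameters, the constrained wave-packet setting can be sketched as follows. The Gabor-like functional form and all constants below are assumptions for illustration, not necessarily the exact model used in the paper; the point is that the oscillatory carrier creates several local minima in the $L^2$ loss over the position parameter:

```python
import numpy as np

A, sigma, k = 1.0, 1.0, 10.0                 # A and sigma held fixed
t = np.linspace(-5.0, 5.0, 1001)

def packet(x0):                              # wave packet at position x0
    return A * np.exp(-((t - x0) / sigma) ** 2) * np.cos(k * (t - x0))

target = packet(0.0)                         # ground-truth position 0

# Scan the L2 loss over candidate positions.
x0s = np.linspace(-2.0, 2.0, 801)
losses = np.array([np.sum((packet(x) - target) ** 2) for x in x0s])

# Count interior local minima of the scanned landscape: the carrier
# oscillation produces spurious minima around the true position.
d = np.diff(losses)
n_local_minima = int(np.sum((d[:-1] < 0) & (d[1:] > 0)))
print(n_local_minima)
```

A purely local optimizer started in the wrong basin converges to one of the spurious minima, which is the kind of failure mode discussed in the paper.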
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable feedback and helpful comments! In answering the reviewers’ questions, we performed additional experiments and created many new figures and tables. The attached PDF page shows gradient descent as an additional baseline for all our experiments (Fig 1), visualizes the network weight changes during fitting (Fig 2), and shows new results for the robotic arm experiment from the neural adjoint paper ([Ren et al., 2020](https://arxiv.org/abs/2009.12919)). Some reviewers raised concerns about the applicability of our method since it relies on multiple similar experiments. This was only briefly discussed in the paper, and we’re happy to clarify this point now. Being able to pool experiments is quite common if you consider that most real-world experiments are performed multiple times. E.g., practically all particle physics experiments collect similar data many times. If one detector collects multiple measurements, they are all influenced by the detector characteristics, and these characteristics could be learned during the optimization. The same is true for many other fields of research, e.g., wind tunnel experiments are typically performed multiple times for a single model, and measurements of different models could also be pooled. Measurements over time, like PIV, can be split into multiple snapshots to be reconstructed jointly. The setting of solving many similar inverse problems has also been considered in previous ML research, e.g., https://arxiv.org/abs/2205.11912 https://arxiv.org/abs/2206.07681 https://openreview.net/forum?id=HaZuqj0Gvp2 . Some reviews noted that we had not specified wall-clock training and inference times. We will add the following table, which shows all relevant times in seconds for the largest tested data set size $n$.

| Experiment | Parallel BFGS | Network fit | Refinement | Supervised fit | Supervised Refinement | Surrogate fit | Neural Adjoint | N.A. Refinement |
|---|---|---|---|---|---|---|---|---|
| Wave packet fit | $15.1 \pm 1.0$ | $46.9 \pm 0.2$ | $15.0 \pm 0.2$ | $24.0 \pm 0.2$ | $13.8 \pm 0.7$ | $46.5 \pm 0.2$ | $13.1 \pm 0.5$ | $13.8 \pm 1.7$ |
| Billiards | $21.5 \pm 1.0$ | $115.2 \pm 0.6$ | $25.3 \pm 0.2$ | $8.8 \pm 0.1$ | $20.9 \pm 1.7$ | $12.9 \pm 0.1$ | $16.5 \pm 2.5$ | $21.8 \pm 1.1$ |
| Kuramoto–Sivashinsky | $152.8 \pm 11.8$ | $638.8 \pm 3.7$ | $109.3 \pm 7.2$ | $11.4 \pm 0.9$ | $121.5 \pm 64.2$ | $16.4 \pm 0.8$ | $13.7 \pm 2.3$ | $147.6 \pm 12.3$ |
| Incompr. Navier-Stokes | $1858 \pm 95$ | $29510 \pm 637$ | $1270 \pm 205$ | $212 \pm 7$ | $1390 \pm 333$ | $195.6 \pm 0.1$ | $8.7 \pm 1.2$ | $1451 \pm 63$ |

From our manuscript, it was not clear how much of the improvement in loss is made by the reparameterization network fit and how much the secondary refinement stage contributes. In the following table, we have compiled the fraction of the total loss decrease achieved by the network fit. The remaining improvement is made by the refinement stage using BFGS. The given fractions are computed per example and then averaged.

| Experiment | $n=4$ | $n=8$ | $n=32$ | $n=128$ |
|---|---|---|---|---|
| Wave packet fit | $78.5\% \pm 17.8\%$ | $89.1\% \pm 8.8\%$ | $92.4\% \pm 3.7\%$ | $91.7\% \pm 4.5\%$ |
| Billiards | $88.9\% \pm 13.0\%$ | $86.8\% \pm 14.0\%$ | $92.9\% \pm 11.2\%$ | $98.1\% \pm 2.0\%$ |
| Kuramoto–Sivashinsky | $93.4\% \pm 9.8\%$ | $96.2\% \pm 5.9\%$ | $96.0\% \pm 2.5\%$ | $95.9\% \pm 1.1\%$ |
| Incompr. Navier-Stokes | $100.0\% \pm 0.0\%$ | $99.4\% \pm 0.5\%$ | $96.6\% \pm 3.4\%$ | $96.8\% \pm 2.5\%$ |

We address all other questions and comments raised by the reviews below.
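For clarity, the per-example fraction reported in the second table can be computed as below. The loss values in this snippet are hypothetical placeholders, not numbers from our experiments:

```python
import numpy as np

# Hypothetical per-example loss values at the three stages.
loss_initial = np.array([10.0, 5.0, 8.0])    # before any optimization
loss_after_fit = np.array([2.0, 1.0, 4.0])   # after the network fit
loss_final = np.array([1.0, 0.5, 2.0])       # after BFGS refinement

# Fraction of the total loss decrease achieved by the network fit,
# computed per example and then averaged.
frac = (loss_initial - loss_after_fit) / (loss_initial - loss_final)
print(frac.mean())
```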
Pdf: /pdf/0e0e0187388ed91c6a4cc9a26fa4c202fd44f092.pdf
NeurIPS_2023_submissions_huggingface
2023
Complexity Matters: Rethinking the Latent Space for Generative Modeling
Accept (spotlight)
Summary: This work investigates what constitutes a good latent space for generative models, and proposes a new training paradigm for generative models – DAE. Simply put, with DAE generative models are trained as an autoencoder in two stages. First, a relatively weak decoder is employed, whose purpose is to aid the encoder in learning meaningful representations. Second, the weak decoder is replaced with the “actual” decoder and training is continued. The second decoder is the one eventually evaluated as a generative model. Strengths: 1. The paper is very well written, easy and interesting to follow. 2. The questions of what constitutes a good latent space and how one is constructed are often overlooked. Most works follow longstanding paradigms of using predetermined distributions (e.g., Gaussian) or whatever is learned from an autoencoder. Posing these questions and formulating the setting, on its own, is impactful. 3. The proposed method DAE is empirically proven effective, and since it is rather simple to implement, I conjecture it might have a strong impact on future generative modeling works. Weaknesses: While the mathematical formulation in the paper looks sound to me, I think it does not benefit the paper, and significant portions of it could be moved into the appendix. As the authors acknowledge in the Discussion section, the formulation “serves mainly illustrative purposes” and is “not proven mathematically”. I don’t see it as an issue, as much of Generative Modeling research (and ML in general) is empirical in nature. The proposed method, DAE, could be introduced as an empirically-supported design, while some of the mathematical formulation could be described to serve as intuition (e.g., Theorem 4.1). However, dedicating over three pages to it seems excessive to me. In my opinion, most readers would greatly benefit if the experiments in Appendix B were present in the main paper instead of some of the mathematical formulation.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors please elaborate on “there are other algorithms to implement the decoupled 2-stage training, and the evaluations in this work are not comprehensive”? Why would the authors not include such evaluations in the paper if they find them relevant? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Authors addressed limitations. One of them (non comprehensive evaluation) raises questions. I would appreciate a clarification on that. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable comments and questions. Below we address them separately: ### 1. "The proposed method, DAE, could be introduced as an empirically-supported design, while some of the mathematical formulation could be described to serve as intuition ... In my opinion, most readers would greatly benefit if the experiments in Appendix B were present in the main paper instead of some of the mathematical formulation." > Thanks for the advice. > You are correct that DAE could be introduced as an empirically-supported design. However, we want to bring more insights into the latent distribution for generative modeling. > It is worth emphasizing that the DAE approach is an outcome and empirical verification of our investigation of the ideal latent distribution for generative modeling. > * Motivated by the GAN training objective, we first introduced $D^G$ to measure the closeness between the latent and the data in distribution. To the best of our knowledge, this work is the first to provide a characterization of the optimal latent distribution from the perspective of minimizing model complexity. Our proposed characterization has its own interest and may shed light on other applications as well. > * By utilizing an encoder, i.e., $P_z=P_{f(x)}$, we argue that minimizing $D^G(P_{f(x)}, P_x)$ with respect to $f$ will give rise to the optimal latent. Notice that here we are considering matching in distribution, rather than sample-wise reconstruction. Therefore, we are advocating the use of adversarial training techniques as in VQGAN to train the encoder. > * We then identified the trade-off between the encoder and decoder, accompanied by a rigorous linear case analysis (Theorem 4.1). To address the trade-off, we proposed the decoupled approach DAE. > The experiments in Appendix B are certainly interesting.
We will try to compress the mathematical formulation in the first part of the paper and put some of the results from Appendix B in the main text, to improve the reading experience. ### 2. Elaboration of "there are other algorithms to implement the decoupled 2-stage training, and the evaluations in this work are not comprehensive?" > The gist of our DAE modification is to make the decoder relatively weak in the first stage when training the encoder. By other algorithms, we meant that there are various ways to construct a weak decoder. We only explored reducing the number of channels and 2D Dropout. Other candidates include reducing the number of layers, adding $l_2$ regularization, etc. Due to the computational budget, we only experimented with two common ways of regularization. The improvement is consistent in our evaluations, and we believe other forms will also work. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal! It addresses the remarks I raised in my review.
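The decoupled two-stage training described in this rebuttal can be sketched schematically with linear maps standing in for the encoder and the two decoders. This is only a structural illustration under a plain $L^2$ reconstruction loss; the actual DAE uses deep networks and adversarial (VQGAN-style) training, and the weak decoder is obtained by reducing channels or 2D Dropout rather than by the weight decay used here:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 8, 2, 500
X = rng.standard_normal((d, n))              # data, one sample per column

# Stage 1: train the encoder E jointly with a deliberately weak
# decoder D1 (weakness emulated here by strong weight decay on D1).
E = 0.1 * rng.standard_normal((k, d))
D1 = 0.1 * rng.standard_normal((d, k))
lam, lr = 0.5, 5e-3
for _ in range(4000):
    Z = E @ X
    R = D1 @ Z - X                           # reconstruction residual
    D1 -= lr * (R @ Z.T / n + lam * D1)      # gradient step + weight decay
    E -= lr * (D1.T @ R @ X.T / n)
err1 = np.mean((D1 @ E @ X - X) ** 2)

# Stage 2: freeze the encoder and fit a fresh, unconstrained decoder D2
# (closed-form least squares here; gradient training in practice).
Z = E @ X
D2 = (X @ Z.T) @ np.linalg.inv(Z @ Z.T)
err2 = np.mean((D2 @ E @ X - X) ** 2)
print(err1, err2)
```

Because the second-stage decoder is fit with the encoder fixed, its reconstruction error can only match or improve on the weak decoder's, while the latent learned under the weak decoder is kept.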
Summary: This paper presents an approach with theoretical analysis to explore a more suitable latent distribution for generation. For this purpose, this paper proposes a novel distance between the latent and target distributions and tries to minimize it to obtain the optimal data-dependent latent distribution. In practice, a two-stage training strategy called Decoupled Autoencoder (DAE) is introduced to leverage a superior latent distribution to improve the generative performance. The experiments on VQGAN and DiT show the effectiveness of the proposed method. Strengths: 1. The paper is well-structured and easy to follow. 2. The methodology is supported by theoretical analysis. Weaknesses: 1. The proposed methodology is somewhat similar to AE-OT-GAN [1], which also learns a latent distribution via an autoencoder and then utilizes the learned latent to train the GAN. The authors should explicitly highlight the advantages of their approach over [1]. [1] AE-OT-GAN: Training GANs from Data Specific Latent Distribution. ECCV 2020. 2. Some important related works are missing, particularly the line of works that explore improved latent sampling in GANs, e.g., [1] [2] [3] [4]. In particular, AdvLatGAN [2] seeks to adjust and discover a more suitable latent distribution via adversarial learning in both the post-training sampling and the training, and it has already introduced the concept of an optimal latent distribution, denoted $p_z^{op}$, in its theory. [2] Improving Generative Adversarial Networks via Adversarial Learning in Latent Space. NeurIPS 2022. [3] Discriminator optimal transport. NeurIPS 2019. [4] Your gan is secretly an energy-based model and you should use discriminator-driven latent sampling. NeurIPS 2020. 3. The experimental results are insufficient. The comparison basically focuses on the performance gain beyond the vanilla backbones. The authors should compare with the other efforts on improving the latent sampling/distribution, e.g., the previously mentioned works.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. As discussed in [2], the natural image distribution lies upon multiple disjoint manifolds, which potentially requires a discontinuous latent distribution (Prop. 3.2). Does the proposed method adhere to this requirement? 2. In Fig. 2 (a), it is actually hard to discern noticeable performance improvements in the comparison. 3. I am curious why this method can be applied to diffusion models, for which the Gaussian is the stable distribution for noise addition. If the latent distribution is altered, how does the mechanism of diffusion models still function? Please also see the discussion of weaknesses. If the issues are resolved, I would be inclined to reassess and potentially raise my rating. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please see above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time and effort in reviewing our work. There might be some misunderstanding, so we have summarized and clarified the contributions of this work in the overall response. Hopefully, it can address some of your concerns. It is worth noting that although our motivation involves vanilla GAN models, our proposed methodology has an encoder-decoder structure that can be potentially adopted by a wider range of generative models that utilize a low-dimensional latent space. ### 1."The proposed methodology is somehow similar to AE-OT-GAN" > **The focus of our work and AE-OT-GAN is different**. > To clarify, we first summarize the procedure of AE-OT-GAN: > - (1) Train an autoencoder to embed data into the latent space (with distribution $\mu$) by minimizing the reconstruction loss; > - (2) Construct $T$ via a semi-discrete OT map such that $T$ maps the uniform $U[0, 1]^d$ to $\mu$; > - (3) Train a GAN from $\mu$. > Our work shares the motivation of AE-OT-GAN but we investigate specifically the first step of AE-OT-GAN, i.e., the way to find a better latent. In comparison, the AE part in AE-OT-GAN follows standard procedures. In this sense, AE-OT-GAN can be seen as another example of a generative model that utilizes a latent distribution (induced by the encoder). Hence, our DAE modifications can also be applied to AE-OT/AE-OT-GAN. In the first step, we can utilize an auxiliary decoder to train a better autoencoder, potentially benefiting the subsequent OT/GAN modeling. > We will add discussions to the AE-OT-GAN work in our revision. ### 2. "Some important related works are missed, particularly the line of works that explore improved latent sampling in GANs" > For the benefit of other viewers, let us first briefly summarize the related work [2-4]. Both DOT [3] and DDLS [4] can be viewed as post-sampling latent space mining of GANs, by exploiting the trained discriminator (as discussed in the related work section of [4]).
AdvLatGAN [2] further considered the GAN training process and proposed adding an implicit latent transform before the mapping function to improve the latent from its initial distribution. The implicit latent transform is trained by adversarial sample mining methods in a bi-level optimization fashion. > All aforementioned works can improve the performance of GAN models through latent space optimization. However, they do not characterize the **optimal latent distribution** and do not consider using an encoder network to parameterize/estimate it. As an encoder-decoder structure is popularly used in large-scale text-to-image generation models, such as Stable Diffusion, DiT, Muse, Parti, etc., we believe our investigation could bring new insights to them. ### 3."The experimental results are insufficient...Authors should compare the other efforts on improving the latent sampling/distribution..." > Our work focuses on improving the latent distribution of generative models with a low-dimensional latent space. The key practical contribution is the 2-stage DAE training of the autoencoders. For the baseline autoencoders, both popular training methods, vector-quantized and KL-regularized, are evaluated in this work, to demonstrate the effectiveness of our new understanding. > Decoder-only GAN models are not our main target and [1-4] do not consider the autoencoder structure. Nevertheless, we have conducted a DCGAN experiment on CIFAR-10 in Section 6.2 and the results are reported in Table 2. > Treating DCGAN as a baseline, we introduced 3 modifications. By DCGAN vs. DCGAN-SimCLR, we want to demonstrate that learning a latent for generative modeling is different from standard self-supervised learning methods for discriminative tasks. By DCGAN vs. VAEGAN, we want to demonstrate that a latent learned from data can be better than a data-agnostic Gaussian, and VAEGAN can be further improved by our modified DAE-VAEGAN. > For larger-scale experiments, we considered VQ-GAN and DiT.
### Q1: "... natural image distribution lies upon multiple disjoint manifolds...Does the proposed method adhere to this requirement?" > Yes. If the data distribution lies upon multiple disjoint manifolds, the continuous encoder is capable of mapping them to multiple disjoint manifolds in the latent space. The resulting $P_{f(x)}$ can be multi-modal as well. > This is not a special property of our method. In fact, any AE-type method can handle the challenge, as discussed in AE-OT. Our work made further improvements over the standard AE training and can result in a better latent distribution. ### Q2: "In Fig. 2 (a), actually it is hard to discern noticeable performance improvements in the comparison." > The figures are highly subjective and serve as a complement to the reported FID. Please mainly refer to the FIDs for a quantitative evaluation. We will move Fig 2(a) to the appendix and add more figures to showcase the difference. ### Q3: "I am curious why this method can be applied to diffusion models..." > In our work, the latent distribution $P_z$ refers to the **low-dimensional** distribution to be mapped to the data distribution $P_x$. The stable distribution of the forward process of diffusion models is sometimes referred to as the "latent distribution" as well. However, we do not call it the latent distribution in this work. We apologize for the confusion and we will add a remark to clarify the difference. > As reiterated above, our work focuses on finding a better latent distribution for generative modeling. Given the latent space, the generative modeling itself, whether it is autoregressive or diffusion, is not the focus of this work. For latent diffusion models, such as Stable-Diffusion and Diffusion Transformers, the diffusion modeling is on the latent space $P_z$, instead of $P_x$. Hence, our DAE modifications can be directly applied, not just to latent diffusion models, but potentially all generative models with an autoencoder structure.
--- Rebuttal Comment 1.1: Title: Disjointed manifolds Comment: For your answer to Q1, there is also a second part: the latent modeling has to be compatible with multi-modal distributions. I think this is more crucial than the auto-encoder part which, being continuous, keeps the original topology of the natural images unchanged. But this is clearly not a limitation of the proposed method, it is rather a limitation of some of the existing generative models. --- Reply to Comment 1.1.1: Title: Author Response to Reviewer 2zRJ and 14Q5 Comment: We thank Reviewer ssMC for your valuable input in the discussion! We agree with your point that latent modeling should be compatible with multi-modal distributions. As pointed out by Reviewer 2zRJ, in the standard VAE literature, a decoder that is too powerful will result in **posterior collapse**, that is, the stochastic encoder maps data to the prior Gaussian distribution, which is uni-modal and not informative. From this perspective, our DAE approach utilizes a relatively weak decoder when learning the latent, with the aim of better retaining the original manifolds in the latent space. As noted in our original response to Q1, if the data distribution is based on multiple disjoint manifolds, the encoder can map them to separate, disjoint manifolds in the latent space.
In [2]'s Section 3.2.1, the concept of optimal latent distribution (noted $p_z^{op}(G)$ in its theory) has already been included, and [2] claims its optimization could minimize the distance between the initial and optimal latent distribution. The motivation has already been explored to some extent by these previous works. 2."These methods do not consider using an encoder network to parameterize/estimate it." I understand the methodology difference. Nevertheless, the motivation for discovering an optimized latent distribution is consistent across these endeavors. This line of research should not be missed in the discussion. --- Reply to Comment 1.2.1: Title: Further clarification on related work Comment: Thanks for the reply! We are glad that most of your concerns have been addressed. Below, we further address the additional questions/comments. ## Relationship to AdvLatGAN [2] We apologize that our summary of [2] was not comprehensive enough and some important discussions were not included in our initial response. First, we would like to emphasize the key differences between [2] and our work. To clarify, we stick to our notations and use lower-case $g$ to denote a generator and use upper-case $G$ to denote a family of generators. ### The **concept of optimality** is different * The so-called optimal latent $Z^{op}(g)$ in [2] is **conditioned on a fixed generator $g$** and the key property of $Z^{op}(g)$ is that $G[Z^{op}(g)]=x_r$ (Definition 3.1). In comparison, our optimal latent does not depend on any pre-defined generator and is characterized as the **closest in distribution to the data distribution $P_x$**. Although our definition involves capacity/complexity constraint on the family of generators $G$, the resulting optimal $P_z^*$ that minimizes $D^{G}(P_z, P_x)$ aims to directly reflect properties of $P_x$. * The definition $Z^{op}(g)$ contains **no granularity** and does not offer **quantitative measurement** of how close a latent is to the optimal one. 
For some cases, e.g., $g(z)\equiv 0$, no latent distribution can satisfy $G[Z^{op}(g)]=X_r$ and $Z^{op}(g)$ can be an empty set. In comparison, our distance-between-distributions characterization $D^G(P_z, P_x)$ provides a quantitative measurement for the goodness of a given latent $P_z$. Therefore, our concept of optimal latent distribution **is not included** in [2]. Although both works discussed "optimal latent", **the definition and scope are fundamentally different**. ### The **parametrization of the optimal latent** is different. * In [2], the latent is modeled by a transformation $z^*$ from the original latent, e.g., standard Gaussian. In the proposed AdvLatGAN, there are constraints on $z^*(z)$ such that it cannot deviate too much from the identity map ($d(z_0, z)\le \epsilon$ in equation 5). In comparison, we proposed to model it by an encoder $f$ mapping from the original data, i.e., $P_z=P_{f(x)}$. We believe our parametrization is more flexible and may better capture the underlying structures of the data. ## Benefits of an Encoder As reiterated in both works, mapping a uni-modal Gaussian to multi-modal distributions with disjoint supports is difficult and requires the mapping function to be highly complicated. Therefore, in [2], the ideal $z^*(z_0)$ (equation 5 in [2]) might also be highly complicated and hard to learn. We think this is an **inherent difficulty** when modeling the latent by transforming a pre-defined data-agnostic distribution. On the other hand, using an encoder can be more efficient in recovering the optimal latent. It is advocated in our work that the encoder-decoder structure should be used for generative modeling. On a side note, the sampling process of diffusion models also transforms a data-agnostic distribution to the target (in the same metric space). The inherent complexity is also an issue. However, the sampling process consists of multiple steps, and the complexity requirement can be easier to satisfy.
Nonetheless, existing works on improving GANs, such as [2], are related to our work. You are totally correct that "the motivation for discovering an optimized latent distribution is consistent across these endeavors". **We will add detailed discussions to them in the revision**. Thanks again for the questions! Hopefully, we have clarified the uniqueness of our work.
Summary: The paper first proposes a new framework to analyze latent spaces in the context of generative models. This framework takes inspiration from prior results about GANs, which allowed interpreting the min-max training objective as computing a distance between distributions to be minimized, to define a similar distance between the latent distribution and the data distribution. From their analysis, they derive a simple two-step training for auto-encoders to learn better latents and reconstruction, in which they first train the encoder with a weak decoder to extract good latents, and then train a larger decoder to get better reconstructions. They perform experiments in a simple toy case, and then with commonly used models such as GANs, VQGAN, and DiTs. Strengths: - The paper is easy to follow. - It proposes a novel view on latent codes, including an explanation of some different properties between SSL and generative latent codes, and a theoretical framework to describe why a powerful encoder/decoder pair can't learn a good latent code. - The proposed practical solution is simple and their experiments show that it can improve performances in a variety of generative settings. Weaknesses: - While the fresh view on latent codes is interesting, it doesn't provide any theoretical guarantees. As far as I understand, the main conclusion is that in order to obtain a good latent code the encoder and decoder should not be too powerful, but it is not able to give an indication of how to find the correct balance. Note that in the context of Variational Autoencoders, a similar conclusion had been reached before: that a too powerful decoder is a good explanation for the phenomenon called *posterior collapse* that describes a state in which the latent code is completely uninformative [1]. It is good that it is formally extended to other auto-encoders, but unsurprising.
- Because of this lack of quantification of optimal complexity, it is quite unclear whether the positive results obtained in VQGAN and DiT settings are actually related to the theoretical conclusions or not. It could very well be completely unrelated and just confirmation bias. - The related work section presents different generative models with loose links to the proposed method, but does not discuss any study about the latent spaces of generative models. In addition to links with posterior collapse in VAEs [1], I would also be very interested to read what the authors have to say about sparsity and disentanglement properties of VAE [2], and PCA directions in GAN space [3] among other things. Moreover, I have to point out that contrary to the paragraph in related works, Masked Autoencoders have been explored for their generative capabilities [4]. --- [1] Fixing a Broken ELBO. ICML 2018. Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, Kevin Murphy. [2] Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations. ICML 2019. Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem [3] GANSpace: Discovering Interpretable GAN Controls. NeurIPS 2020. Erik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, Sylvain Paris [4] MaskGIT: Masked Generative Image Transformer. CVPR 2022. Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, William T. Freeman Technical Quality: 3 good Clarity: 2 fair Questions for Authors: One minor thing that was not clear to me is Remark 3.5. Could the authors make explicit why scaling invariance would be a problem in this context? In any case, despite its limitations, I still find the paper interesting for both its theoretical and practical contributions. I trust the authors will fix the related work section, and my current rating anticipates that the authors will demonstrate their willingness to do so.
----- Post-rebuttal update: Having taken into consideration the rebuttal, discussions and comments from the other reviewers, I updated my score from WA to Accept. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors are very open regarding the limitations of their submission. Potential negative societal impacts of generative models are not discussed. Widely acknowledged ones include amplification of biases, potential misuse for propagating fake information, and concerns about lack of attribution and copyright infringements. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable comments. Below we address them separately: ### 1. "While the fresh view on latent codes is interesting, it doesn't provide any theoretical guarantees." > Thanks for the question. You are correct that the main idea of our DAE approach is to balance the encoder and the decoder in different stages and our analysis did not offer precise algorithmic guidelines on how to achieve the optimal balance. > However, it is worth emphasizing that the DAE approach is an outcome and empirical verification of our investigation of the ideal latent distribution for generative modeling. > * Motivated by the GAN training objective, we first introduced $D^G$ to measure the closeness between the latent and the data in distribution. To the best of our knowledge, this work is the first to provide a characterization of the **optimal latent** distribution from the perspective of minimizing model complexity. Our proposed characterization has its own interest and may shed light on other applications as well. > * By utilizing an encoder, i.e., $P_z=P_{f(x)}$, we argue that minimizing $D^G(P_{f(x)}, P_x)$ with respect to $f$ will give rise to the optimal latent. **Notice that here we are considering matching in distribution, rather than sample-wise reconstruction**. Therefore, we are advocating the use of adversarial training techniques as in VQGAN to train the encoder. > * We then identified the trade-off between the encoder and decoder, accompanied by a rigorous linear case analysis (Theorem 4.1). To address the trade-off, we proposed the decoupled approach DAE. > Making practical improvements to existing latent generative models is an important goal of this work. However, models such as VQGAN and DiT are too complicated for rigorous analysis. Instead, we draw inspiration from simple (linear) cases and verify the effectiveness on real scenarios mainly through experiments. ### 2. Relationships to VAE literature > This is a good point.
As you pointed out, the fact that a too-powerful decoder can result in posterior collapse is well known in the VAE literature. Here we clarify the key differences from our work. > * Different target: This work focuses on the **ideal latent** distribution for generative modeling from the perspective of **minimizing complexity**. We consider mainly **deterministic** encoder and decoder where the decoder is the generator, and the encoder is what we used to parameterize the latent distribution. In our work, a good latent distribution is the primary target while in VAE, the latent distribution is often chosen in advance to be a data-agnostic prior. > * Deterministic vs stochastic: To characterize the ideal latent distribution, we introduced $D^G(P_z, P_x)$, which is closely connected to GANs (Integral Probability Metrics or $f$-divergence). In comparison, VAEs usually maximize likelihood or mutual information $I$. It's worth noting that **while $I(x, z)$ can also serve as a "distance" between distributions in different dimensions, it will not work for our benefit** since we are considering deterministic encoders and decoders. In the deterministic encoder case, we have the conditional entropy $H(f(x)|x)=0$ and $I(x, f(x))=H(f(x))-H(f(x)|x)=H(f(x))$, where $H$ denotes entropy. That is, the mutual information would always be the entropy of the latent, hence a uniformly distributed latent would always be ideal. > In Fixing a Broken ELBO [1], the authors stated that obtaining a good ELBO is not enough for good representation learning. In practice, the authors proposed to target a desired rate $R^*$, so as to achieve a better balance between the informativeness of the latent and the reconstruction quality. As a result of the last bullet point, this approach cannot extend to the deterministic case. > In comparison, we focus on the complexity of the generator and we provided a characterization of the optimal latent.
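As an aside, the deterministic-encoder identity $I(x, f(x)) = H(f(x))$ invoked above can be checked numerically on a discrete toy example (purely illustrative; the data and the encoder $f$ are arbitrary choices, standard-library Python):

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

# Toy data and a deterministic encoder f (illustrative choice).
xs = [0, 1, 2, 3, 4, 5, 6, 7]
def f(x):
    return x % 4          # deterministic, so H(f(x) | x) = 0

zs = [f(x) for x in xs]

# I(x, z) = H(x) + H(z) - H(x, z); since z is a function of x,
# the joint entropy equals H(x), leaving I(x, f(x)) = H(f(x)).
joint = entropy(list(zip(xs, zs)))
mi = entropy(xs) + entropy(zs) - joint
assert abs(mi - entropy(zs)) < 1e-12
```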
> [1] also discussed realizability and their optimality is hinted to be the tightest achievable sandwiched bound within the parametric family. However, no formal statement is given. > Nevertheless, the VAE literature is related and we will add discussions to it in our revision. ### 3. "Related work presents different generative models with loose links to the proposed method, but does not discuss any study about the latent spaces of generative models." > Thanks for the suggested related works [1-4]. We will add a paragraph in the related work section to discuss this line of study in our revision. > Here we briefly discuss their relationships to our work. > Representation disentanglement is discussed in [2], which is not covered in our work and would be an interesting future direction to explore. [3] also follows this line of work and investigates the disentangled or interpretable latent variables or directions in GAN generation to control different aspects of the image, e.g., viewpoint, aging, etc. > In MaskGIT, the authors investigated how to better model the tokenized images, rather than finding better latent spaces to do masked modeling. Our insights about the optimal latent distribution and the DAE approach can also be beneficial in their settings. ### 4. "It is quite unclear whether the positive results obtained in VQGAN and DiT settings are actually related to the theoretical conclusions or not." > To empirically evaluate our DAE approach, we conducted a series of experiments from Gaussian mixture data, to DCGAN on CIFAR-10, to VQGAN and DiT on larger datasets. The improvements are consistent across different settings. > In practice, we provide a general rule-of-thumb on how to weaken the decoder in the first stage (cut the channels in half or add Dropout at strength 0.5). 
Such configurations work across different settings and we have not done (and cannot afford) much fine-tuning, which supports the hypothesis that our theoretical insights are suitable and effective when applied to real-world model architectures. --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers that address the main points of my review: - The rebuttal convincingly discusses the relation to other works. - I am still not entirely convinced that the empirical results are a strong validation of the theory but the proposed links are reasonable. Also, both parts are valuable enough regardless. Ideally, I would still appreciate a clarification regarding my question on Remark 3.5. --- Reply to Comment 1.1.1: Title: Author Response to Reviewer 2zRJ: Explanation of Remark 3.5 Comment: Thanks for the feedback! We are glad that we have addressed most of your concerns. We are terribly sorry that we didn't explain Remark 3.5. In the following, we first discuss the more general scaling problem, which was asked by Reviewer ssMC, and then specifically explain the arguments in Remark 3.5. The scaling problem might be easier to understand in the encoder-decoder case. Consider the optimal reconstruction case where $f=g^{-1}$ and let $C(\cdot)$ be the Lipschitz constant. Notice that $f_\lambda(x) := \lambda\cdot f(x)$ and $g_\lambda(z):=g(z/\lambda)$ have the same reconstruction error, i.e., $g(f(x))=g_\lambda(f_\lambda(x))$ for any $\lambda\ne 0$. In this case, $C(f_\lambda)$ can be arbitrarily large by increasing $\lambda$. This scaling problem remains for the more general cases where $C(f)$ is not invariant to scaling. To address the scaling problem, we considered the generalized $D^{ae}(\cdot, \cdot)$, where both $C(f)$ and $C(g)$ are considered and we want their sum to be small. In this case, $C(f^*)=C(g^*)=1$ is the optimal choice since $C(f^*) + C(g^*) = C(f^*) + 1/C(f^*)\ge 2$ is minimized when $C(f^*)=1$. Now we go back to Remark 3.5.
Let us use $P_f$ to represent $P_z$ and the generator has Lipschitz constant less than c, i.e., $g\in G_c$. If we change $P_f$ to $P_{f_\lambda}$ for some $\lambda>1$, we can correspondingly consider $g_\lambda$, which will result in the same reconstructed distribution, i.e., $P_{g(f)}=P_{g_\lambda(f_\lambda)}$. Notice that $C(g_\lambda) < C(g)$ and is also in $G_c$. Therefore, $D^G(P_{f_\lambda}, P_x)$ is non-increasing with $\lambda$ and the optimal $P_{f_\lambda}$ might require $\lambda\to\infty$, which is ill-posed. For a more rigorous illustration, we refer to our Toy Example 1 in Appendix A.1. Please let us know whether we have sufficiently clarified Remark 3.5.
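The scaling argument in the reply above can be reproduced numerically in a linear toy case (an illustrative sketch; the particular $f$, $g$, and constants are arbitrary choices, not from the paper):

```python
# Numeric sketch of the Remark 3.5 scaling argument:
# f_lam(x) = lam * f(x) and g_lam(z) = g(z / lam) reconstruct identically,
# while the Lipschitz constants trade off as C(f)*lam and C(g)/lam.
# The penalized sum c + 1/c is minimized at c = 1 (AM-GM inequality).

def f(x, lam=1.0):
    return lam * 2.0 * x       # Lipschitz constant 2 * lam

def g(z, lam=1.0):
    return (z / lam) / 2.0     # Lipschitz constant 1 / (2 * lam)

x = 3.7
for lam in (1.0, 10.0, 0.1):
    # reconstruction is unchanged for every nonzero lam
    assert abs(g(f(x, lam), lam) - x) < 1e-9

# C(f) + C(g) = c + 1/c >= 2, with equality exactly at c = 1
sums = {c: c + 1 / c for c in (0.5, 1.0, 2.0)}
assert min(sums, key=sums.get) == 1.0
```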
Summary: This paper proposes an asymmetric training scheme for auto-encoders that double as image generators. Based on analytical insights that the decoder should have less capacity than the encoder for the encoder to correctly capture the data distribution, they propose a first training cycle where a strong encoder and a weak decoder are trained jointly. This produces a latent distribution able to better capture the data distribution. Then, in a second stage crucial for the end application, the encoder is frozen and a strong decoder is trained using the "good" latent distribution. Experiments are carried out on face datasets with VQGAN and class-conditional ImageNet with a diffusion model and show the proposed training scheme is promising. Strengths: The strengths of this paper are: - It is a simple method, yet effective. As such, it should be easy to reproduce. The improvements are maybe questionable (no error bars, test against the training set - as is traditional in this domain), but given that they are reported with widely different architectures and on different datasets, there is more confidence than usual that it actually works. - The analysis part is really nice. The linear example in particular sheds some light as to why under a constrained budget, the complexity of the encoder should exceed that of the decoder to properly minimize the projected distance (divergence) to the data distribution. Weaknesses: The main weakness is that most of the paper is about the analytical part, which is a bit handwavy (acknowledged in the conclusion), and lacks a good structure. The reading could be improved by having a clear outline of the work such that one does not wonder where the text is headed after each section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: l.146 The link between the ideal $P_z$ and a low-capacity generator is tenuous at best. There is no formal proof of that statement as far as I understand the paper.
I can see a structural risk argument (i.e., among all generators that equally minimize $D(p_x, p_z)$, choose the lowest-complexity one), but it is more a principle and not a formal proof. Correct? l. 190 $D(P_z, P_x)$ will be significantly larger in case of arbitrarily chosen $P_z$. This you don't know. If $P_x$ and $P_z$ have the same topology, then there exists a continuous mapping between the two that can be perfectly approximated by a sufficiently large neural network. In practice, maybe $D$ will be larger, but not necessarily in theory. l. 198 I do not agree that $C(F) = C(G)$ is intuitive. For optimal reconstruction, we have $f = g^{-1}$ (assuming all that is required) and the Lipschitz constant of $g^{-1}$ can be arbitrarily large while that of $g$ stays bounded. For example, with $g(x) = x^2$ and $f(x) = \sqrt{x}$ on $(\epsilon, 1)$, the Lipschitz constant of $f$ can be made arbitrarily large. And vice versa. I think this depends on whether it is easier (i.e., the mapping has lower Lipschitz) to map from $x$ to $z$ or from $z$ to $x$. But what does it depend on? The size of $z$? l. 312 This is a bit of cheating. You should learn an encoder that inverts the (frozen) Gaussian space and evaluate on reconstruction for DCGAN. l. 354 Why is the Lipschitz of the encoder lower than that of the decoder for the proposed method? Isn't it completely contradictory to the whole story of the paper? A2. Fix notation errors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The conclusion is a list of limitations that is very honest.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable comments and questions. Below we address them separately: ### 1. "... most of the paper is about the analytical part ... lacks a good structure." > Thanks for the feedback. As is acknowledged, the analytical part mainly serves illustrative purposes and the proposed DAE is not proven mathematically. However, the DAE approach is an outcome and empirical verification of our investigation of the ideal latent distribution for generative modeling. > Making practical improvements to existing latent generative models is an important goal of this work. However, models such as VQGAN and DiT are too complicated for rigorous analysis. Instead, we draw inspiration from simple (linear) cases and verify the effectiveness on real scenarios through experiments. We conducted a series of experiments starting from toy Gaussian mixture data, to DCGAN on CIFAR-10, to VQGAN and DiT on larger datasets. > To improve readability, we summarized the contributions of this work in the overall response and we will incorporate them at the end of the Introduction section to provide an overview. ### 2. "l.146 The link between the ideal $P_z$ and a low capacity generator is tenuous ..." > The ideal $P_z^*$ is characterized as the one that minimizes $D^G(P_z,P_x)$. The link between its definition and the capacity of the generator **can be made mathematically sound with an extra assumption**. > First, recall that $G_c:=\{g\in G: C(g)\le c\}$ where $C(g)$ is defined as some complexity measurement. It is easy to see that for any $c_2 \ge c_1>0$, $G_{c_1} \subset G_{c_2}$. Therefore, $D^{G_{c_2}}(P_z, P_x) \le D^{G_{c_1}}(P_z, P_x)$. > Assumption 1: For any $P_z$ and $P_x$, $D^{G_c}(P_z, P_x)$ is continuous and monotonically decreasing with c. 
> Consider a generator family $G$ with bounded complexity, and suppose our goal is to have the generated distribution close to the data at level $\epsilon$ measured by $D$, i.e., $D(P_x, P_{g(z)})\le \epsilon$ for some $P_z$ and $g\in G$. > For a certain (non-degenerate) $P_z$, there exists $c>0$ such that $D^{G_c}(P_z, P_x) = \epsilon$; that is, by using this latent $P_z$, we need at least complexity $c$ to achieve the $\epsilon$ goal. If $P_z$ is not ideal, by definition we know that there exists $P_z^*$ such that $D^{G_c}(P_z^*, P_x)=\epsilon^* < \epsilon$. By Assumption 1, we know there exists $c^\prime< c$ such that $D^{G_{c^\prime}}(P_z^*, P_x) = \epsilon$. This means that the goal can be achieved using a lower-complexity generator if the latent is ideal. > Assumption 1 is not provable for general $D(\cdot, \cdot)$ and $C(\cdot)$. You are correct that this is more of a principle than a formal proof. ### 3. "l.190 $D(P_z,P_x)$ will be significantly larger in case of arbitrarily chosen $P_z$..." > You are right that if $P_x$ and $P_z$ have the same topology with a continuous mapping between the two, $D^{ae}(P_z,P_x)$ may not be significantly larger. > We will modify the statement to "$D^{ae}(P_z,P_x)$ will be significantly larger in case of poorly chosen $P_z$ (e.g., uni-modal $P_z$ to multi-modal $P_x$)" to make it clearer. ### 4. "l.198 I do not agree that $C(F)=C(G)$ is intuitive... l.354 Why is the Lipschitz of the encoder lower than that of the decoder for the proposed method?" > In the case of optimal reconstruction, we have $f=g^{-1}$ and the Lipschitz constant of $f$ can be arbitrarily large. This is related to our Remark 3.5 about scaling. Let $C(\cdot)$ be the Lipschitz constant: the rescaled pair $k\cdot f(\cdot)$ and $g(\cdot/k)$ has the same reconstruction error for any $k>0$. > To address the scaling problem, we considered the generalized $D^{ae}(\cdot, \cdot)$, where both $C(f)$ and $C(g)$ are considered and we want their sum to be small. 
In this case, $C(f^*)=C(g^*)=1$ is the optimal choice since $C(f^*) + C(g^*) = C(f^*) + 1/C(f^*)\ge 2$ is minimized when $C(f^*)=1$. > Going back to line 198, it is worth emphasizing that, as stated in line 133, $C(\cdot)$ can be as specific as the Lipschitz constant, or as general as the size (width, depth, etc.) of the network. We will add remarks before line 198 to clarify that the complexity $C(\cdot)$ here is the latter, general case. In practice, the encoder-decoder pair is often designed with a symmetric architecture, with the same number of blocks and parameters, e.g., up-sampling vs. down-sampling, convolution vs. deconvolution, etc. The $C(F)=C(G)$ statement is not mathematically rigorous and we apologize for the confusion. > For the line 198 argument, we refer to **Figure 4** of the Appendix, where we can see that our DAE-modified encoder (orange line) is closer to one than the baseline (blue line). The same is true for the decoder. Given that the ideal case is that both are one, our DAE is effective at reducing the complexity. > As to whether the encoder or the decoder has a larger Lipschitz constant, we hypothesize that in the initial stage, it may be closely related to the dimensions of the mapping. If the $x_i$'s and $z_i$'s are of the same scale, $\|x\|_2/\|z\|_2\approx \sqrt{d/d_z}$. Hence, from high-dimensional $x$ to low-dimensional $z$, the Lipschitz constant might be smaller. This can be seen in the early stage of our Figure 4, where $C(f)\approx 0.2$. > However, we do not have a full explanation for why the encoder ends up with the larger Lipschitz constant after training. ### 5. "l.312 this is a bit cheating." > Treating DCGAN as a baseline, we introduced 3 modifications and the results are shown in Table 2. By DCGAN vs. DCGAN-SimCLR, we want to demonstrate that learning a latent for generative modeling is different from standard self-supervised learning methods for discriminative tasks. 
By DCGAN vs. VAEGAN, we want to demonstrate that a latent learned from data can be better than a data-agnostic Gaussian, and that VAEGAN can be further improved by our DAE approach. ### 6. "A2. Fix notation." > Thanks for pointing this out. We have fixed the singular value notation to $\lambda_i$ in line 576. --- Rebuttal Comment 1.1: Comment: Thanks for the lengthy rebuttal! I think it covers the remarks I had in my review well. --- Reply to Comment 1.1.1: Title: Author Response to Reviewer ssMC Comment: Thanks for the feedback! We are glad that we have addressed your concerns.
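The scaling argument in point 4 above can be checked numerically. The following is a minimal sketch (not from the paper) with a linear encoder $f_k(x)=kx$ and decoder $g_k(z)=z/k$: reconstruction is perfect for every $k>0$, so reconstruction error alone cannot pick a scale, while the summed Lipschitz constants $k + 1/k \ge 2$ single out $k=1$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

def recon_error(k):
    # Linear encoder f_k(x) = k*x (Lipschitz constant k) and
    # decoder g_k(z) = z/k (Lipschitz constant 1/k): the pair
    # reconstructs perfectly for every k > 0.
    z = k * x
    x_hat = z / k
    return float(np.max(np.abs(x_hat - x)))

ks = [0.1, 0.5, 1.0, 2.0, 10.0]
errors = [recon_error(k) for k in ks]
complexities = [k + 1 / k for k in ks]  # C(f_k) + C(g_k)

# Reconstruction quality cannot distinguish the scalings...
assert all(e < 1e-9 for e in errors)
# ...but k + 1/k >= 2 (AM-GM) is minimized exactly at k = 1.
assert min(complexities) == complexities[ks.index(1.0)]
```

This mirrors why the generalized $D^{ae}$ penalizes $C(f)+C(g)$ rather than reconstruction alone.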
Rebuttal 1: Rebuttal: # Overall Response: We thank all the reviewers for their time and efforts in reviewing our work. Before we address the questions and concerns of each reviewer, we would like to provide a summary of our work. Our work aims to **characterize the ideal/optimal low-dimensional latent distribution for latent generative modeling**, from the perspective of minimizing the required **complexity** of the generator. The main contributions can be summarized as follows: - Inspired by the training objective of GANs, we first propose a pseudo-distance $D^G$ between $P_x$ and $P_z$ and **characterize the optimal latent distribution $P_z^*$** as the one that is closest to $P_x$ in terms of $D^G$. (Section 3.1) - After characterizing $P_z^*$, we adopt the popular encoder parameterization of the latent distribution, i.e., $P_z = P_{f(x)}$ (Section 3.2). We then analyze the interplay between the encoder and the decoder (generator) and identify the **trade-off between the quality/informativeness of the latent distribution and the capability of the decoder**. (Section 4.1) - To address the trade-off, we propose a **two-stage training scheme** called DAE that results in better latent spaces for more efficient generative modeling (Section 4.2). To verify our claims, we conducted experiments on various models such as GAN, VQGAN, and DiT, and achieved consistent improvements (Section 6). Although our motivation involves vanilla GAN models, our proposed methodology has an encoder-decoder structure that can potentially be adopted by a wider range of generative models that utilize a low-dimensional latent space. To the best of our knowledge, this work is the first to provide a characterization of the optimal latent distribution from the perspective of minimizing model complexity. Our DAE approach is an outcome of our investigation along this line and an empirical verification of its practical effectiveness. 
The proposed characterization has its own interest and may shed light on other applications as well.
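As a rough illustration of the two-stage structure summarized above (first learn a latent space, then fit a generative model in it), here is a toy sketch using a PCA "encoder/decoder" and a Gaussian fit on the codes. This is not the proposed DAE (which additionally shapes the latent distribution to reduce generator complexity); it is only a minimal stand-in for the pipeline, and all sizes and the noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy data: a 2-dimensional Gaussian blob embedded in 20 dimensions.
n, D, d = 2000, 20, 2
W = rng.normal(size=(D, d))
x = rng.normal(size=(n, d)) @ W.T + 0.05 * rng.normal(size=(n, D))

# Stage 1: learn the latent space (here, a linear autoencoder via PCA).
mu = x.mean(axis=0)
U, s, Vt = np.linalg.svd(x - mu, full_matrices=False)
enc = lambda x_: (x_ - mu) @ Vt[:d].T  # encoder f
dec = lambda z_: z_ @ Vt[:d] + mu      # decoder g

# Stage 2: fit a simple generative model in the latent space
# (here, a Gaussian on the codes) and sample through the decoder.
z = enc(x)
z_mu, z_cov = z.mean(axis=0), np.cov(z.T)
samples = dec(rng.multivariate_normal(z_mu, z_cov, size=500))

# Reconstructions are near-perfect and samples match the data scale.
assert np.mean((dec(enc(x)) - x) ** 2) < 0.01
assert abs(samples.std() - x.std()) < 0.5
```

The DAE question is, in effect, which stage-1 latent makes stage 2 easiest for a low-complexity generator.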
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Finite Population Regression Adjustment and Non-asymptotic Guarantees for Treatment Effect Estimation
Accept (poster)
Summary: In this paper, the authors present regression-adjusted estimators for estimating the average treatment effect under the Bernoulli design. In particular, they show that by using leverage scores and a ridge regression adjustment, favorable finite sample bounds on the (conditional) variance may be obtained. This contributes nicely to a long-standing debate about the use of regression adjustment in randomized controlled trials. Finally, simulations are performed which corroborate the theoretical results. Strengths: The main strength of this paper is a finite sample analysis of regression-adjusted estimators under the Bernoulli design. There has been much debate about whether regression-adjusted estimators are appropriate for estimation of the average treatment effect [12, 20]. However, that entire literature has been studied under asymptotic analyses, which provide little insight into finite sample performance. (Harshaw et al, 2019) have shown that under a sophisticated experimental design, favorable finite sample bounds on the variance may be obtained. Moreover, in large samples (using a well-chosen parameter), the Gram--Schmidt Walk design achieves the same asymptotic variance as Lin's regression adjustment. This puts regression adjustment on the defensive and raises a natural question: can regression-adjusted estimators achieve similar finite sample results? This paper does a really remarkable job of answering this question in the affirmative, up to some minor details. It will be of interest not only to the NeurIPS community, but also to the statistical causal inference community more broadly. Weaknesses: The paper has a few minor issues, but I believe they can all be addressed by the authors in a satisfactory way without too much work. ## Weakness 1 -- Conditional Analysis The major weakness is a certain ambiguity that arises in a few of the theorems. For example, Theorem 3 does not analyze the estimator in Algorithm 1 unconditionally. 
Rather, it analyses it conditioned on a high probability event (i.e. "with probability $1 - \delta$, Algorithm 1 computes an unbiased estimator of the average treatment effect with variance...") Although this is natural in computer science, it is very unnatural in causal inference. The reason is that we place larger value on unconditional statistical inference; the validity / width of confidence intervals depends on the bias and variance, not a conditional bias or variance. Moreover, Theorem 3 suggests that in order to get small variance, we should be setting $\delta$ very large; however, this will incur a large bias, which is not discussed in the Theorem nor in the paper. In order for the causal inference community to fully appreciate these results (and they will!), the authors should derive bounds on the unconditional bias / variance. This will have the additional benefit of giving experimenters advice on how to choose $\delta$ to minimize the MSE. This critique applies to Theorem 4 as well. ## Weakness 2 -- Constant $C$ As it stands, the constant in Theorem 7 is presently unspecified. This means that Algorithm 1 cannot be run! Authors should give a precise constant here. ## Weakness 3 -- ITE Results are Weak Theorem 2 shows that the collection of ITEs can be estimated up to a constant error. This is unsurprising because ITE estimation is considered impossible due to the fundamental problem of causal inference. I find this estimator and these results not very exciting because it's not clear what a practitioner would do with such a guarantee -- not a single ITE is really known. If authors are pressed on space, I recommend moving this to the appendix; otherwise, it's fine to leave in the paper. ## Weakness 4 -- Comparisons to GSW I think that the comparison to (Harshaw et al 2019) is great and highlights the relevance of this work. However, some of the comparisons are not relevant or could be improved. 
Here is a list of improvements: - (Line 137): Authors write that the variance of GSW is proportional to "XYZ". However, that's missing the important aspect of (Harshaw et al 2019), which is that these terms can be traded off! So, I think you have to address (if only very briefly) that GSW-Design allows for a trade-off of these terms, but that for $\phi = 1/2$ it is proportional to "XYZ". - (Lines 33, 39, 141, etc): Authors write that GSW Design does not have an online analogue (which is currently true), but I don't think that this is how I would compare your estimator. The practical benefit of your regression adjustment estimator is that the data collection process is just so much simpler -- the Bernoulli design is decentralized, asynchronous, you name it! So although (Harshaw et al 2019) raised an online design as an open problem, it's really only interesting for covariate balancing. Being able to use such a simple design is beneficial much beyond online assignment. I think focusing on the practical simplicity of the Bernoulli design over the practical complexities of the GSW Design is a much stronger argument in favor of your estimator, not the fact that the Bernoulli design can be implemented online. - (Line 264): Authors write that GSW Design "is a much slower algorithm". This doesn't seem true. (Harshaw et al 2019) give an implementation of GSW-Design that runs in $O(n^2 d)$, which would also seem to be required for the linear system solve needed to compute the ridge regression and scores. ## Minor Misc Comments The following are minor miscellaneous comments - (Line 126): "work" seems like the wrong word, a typo. - (Line 242): The notation $\mathbf{P}_{\mathcal{P}}$ is used but not defined. I think it's fine to just remove the subscript. - (Line 203): I think it is confusing / inappropriate to call Algorithm 1 "leverage score sampling" because that seems to suggest that the assignment to treatment and control (i.e. "sampling") is based on leverage scores. 
Can you call this something different? Perhaps "cross leverage score adjustment" or something? - (Section 4.2): authors don't specify the value of $\phi$ used for the Gram--Schmidt Walk Design. - It's a shame that most of the algorithms are in the appendix. If possible, I would recommend bringing them to the main body for the camera-ready version, perhaps at the cost of putting the ITE estimation results in the appendix. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can you address my comment on conditional versus unconditional analysis? - Can you address my comment on the constant $C$ that is used? - How does the Regression Adjusted Horvitz--Thompson estimator relate to double robust estimators? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
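For readers less familiar with the setup, the following toy simulation sketches a regression-adjusted Horvitz--Thompson ATE estimate under the Bernoulli design with assignment probability 1/2. It uses a plain within-arm ridge fit rather than the paper's cross adjustment with ridge leverage score sampling (the ingredient that preserves unbiasedness), so it is illustrative only; the synthetic data, the ridge parameter, and the tolerance are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 5
X = rng.normal(size=(n, d))
beta1, beta0 = rng.normal(size=d), rng.normal(size=d)
y1 = X @ beta1 + rng.normal(size=n)  # potential outcomes under treatment
y0 = X @ beta0 + rng.normal(size=n)  # potential outcomes under control
tau = float(np.mean(y1 - y0))        # sample ATE (unobservable in practice)

# Bernoulli design: each unit is independently treated with probability 1/2.
z = rng.random(n) < 0.5

def ridge_fit(A, b, lam=1.0):
    """Ridge regression coefficients via the normal equations."""
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)

b1 = ridge_fit(X[z], y1[z])    # outcome model fit on treated units
b0 = ridge_fit(X[~z], y0[~z])  # outcome model fit on control units

# Regression-adjusted Horvitz-Thompson estimate: model-predicted ATE plus
# inverse-probability-weighted residuals (1/p = 2 since p = 1/2).
tau_hat = float(
    np.mean(X @ (b1 - b0))
    + (2 * np.sum(y1[z] - X[z] @ b1) - 2 * np.sum(y0[~z] - X[~z] @ b0)) / n
)

assert abs(tau_hat - tau) < 0.5  # close to the sample ATE on this draw
```

The adjustment shrinks the residuals the Horvitz--Thompson term has to absorb, which is the source of the improved variance bounds discussed in the review.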
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment of our work and our contributions to finite sample analysis of regression adjustment. We thank them also for their detailed suggestions, especially those on the presentation, which we will address in a revised version of the paper. > The major weakness is a certain ambiguity that arises in a few of the theorems. For example, Theorem 3 does not analyze the estimator in Algorithm 1 unconditionally... Thank you for pointing this out. We also believe that stating unconditional bounds would improve the paper. We will add the following discussion to the paper. The only reason the $1-\delta$ probability appears is that leverage score sampling only gives a high probability guarantee for the regression error. Moreover, our variance bound only depends on the regression error, and the estimator is always unbiased, even if the leverage score sampling does not give the $(1+\epsilon)$ approximation guarantee on the regression. This "bad event" happens with probability at most $\delta$. Therefore, it is enough to bound the regression error when the bad event happens. If $A^T S^T S A$ is not a spectral approximation of $A^T A$ (this can be checked since we have access to the matrix $A$, and it only happens with probability at most $\delta$), we just set $b=0$ and in this case, $||Xb-\mu||\_2^2 + \lambda ||b||\_2^2=||\mu||\_2^2$. Otherwise, let $\hat{X}$ be the matrix $X$ concatenated with the identity, $\hat{\mu}$ be the vector $\mu$ concatenated with a zero vector, and $b^*:=\arg\min||\hat{X}b-\hat{\mu}||\_2^2=\arg\min ||Xb-\mu||\_2^2+\lambda||b||\_2^2$. Let $\tilde{X}$ be the matrix $SX$ concatenated with the identity, $\tilde{\mu}$ be the vector $S\mu$ concatenated with a zero vector, and $\tilde{b}:=\arg\min || \tilde{X} b - \tilde{\mu} ||\_2^2$. 
Then since $\tilde{X}$ is a spectral approximation of $\hat{X}$, $$ || \hat{X} \tilde{b} - \hat{\mu} ||\_2 \leq || \hat{X} b^* - \hat{\mu} ||\_2 + || \hat{X} \tilde{b} - \hat{X} b^* ||\_2 \leq || \hat{X} b^* - \hat{\mu} ||\_2 + \frac{1}{1-\epsilon}|| \tilde{X} \tilde{b} - \tilde{X} b^* ||\_2 $$. Moreover $$||\tilde{X}\tilde{b}-\tilde{X}b^*||\_2\leq ||\tilde{X}\tilde{b}-\tilde{\mu}||\_2+||\tilde{X} b^*-\tilde{\mu}||\_2\leq 2\cdot||\tilde{X}b^*-\tilde{\mu}||\_2$$ Now since leverage score sampling preserves the norms in expectation, we have $\mathbb{E}\_S||\tilde{X}b^*-\tilde{\mu}||\_2=||\hat{X}b^*-\hat{\mu}||\_2$. Then by the law of total expectation, $$\mathbb{E}[(\widehat{\tau} - \tau)^2] \leq (1-\delta) \left( \frac{32d}{n^2} \cdot || y^{(1)} - y^{(0)}||\_{\infty}^2 + \frac{8 (1+\frac{2}{1-\epsilon})}{n^2} \min\_{b} \left(||X b - \mu||\_2^2 + 100 \log(n/\delta) \cdot \zeta^2 \cdot ||b||\_2^2 \right) \right) + \delta \cdot || \mu ||\_2^2.$$ > As it stands, the constant in Theorem 7 is presently unspecified. This means that the Algorithm 1 cannot be run! Authors should give a precise constant here. The analysis from other papers (see [9]) shows that it is enough to pick $c$ about 10. We add a note about this to the paper. > Theorem 2 shows that the collection of ITEs can be estimated up to a constant error. This is unsurprising because ITE estimation is considered impossible due to the fundamental problem of causal inference. I find this estimator and these results not very exciting because it's not clear what a practitioner would do with such a guarantee -- not a single ITE is really known. Although we agree that single ITEs cannot be estimated using our approach (and because of the fundamental problem of causal inference), we note that such bounds can be used to estimate other summary statistics. For example, a variance bound for ATE can be derived from our ITE bounds using the Cauchy-Schwarz inequality. 
In addition, bounds on median ITE or quantiles can be derived from our approach. Exploring these directions more is an interesting avenue for future work. > Authors write that the variance of GSW is proportional to "XYZ". However, that's missing the important aspect of (Harshaw et al 2019), which is that these terms can be traded-off! So, I think you have to address (if only very briefly) that GSW-Design allows for trade-off of these terms, but that for $\phi=1/2$ it is proportional to "XYZ". Thanks for this suggestion. We will add a note about the trade-off of GSW design to the paper. > Authors write that GSW Design does not have an online analogue (which is currently true), but I don't think that this is how I would compare your estimator. The practical benefit of your regression adjustment estimator is that the data collection process is just so much simpler -- the Bernoulli design is decentralized, asynchronous, you name it! ... Thank you for recognizing the generality and applicability of our results. We appreciate the suggestions and will add this discussion to the paper. > Authors write that GSW Design "is a much slower algorithm"... Thanks – we will add a discussion about the running time of our algorithm. Roughly, leverage scores can be computed in input-sparsity time. More specifically in $O(\text{nnz}(X) + d^{\omega})$ where $\text{nnz}(X)$ is the number of nonzero entries of matrix $X$ and $\omega$ is the matrix multiplication exponent. In the worst case, when $X$ is a dense matrix, this is $O(nd + d^{\omega})$. Note that this is the bulk of our computation since the running time of other steps of our algorithms is also $O(nd)$. In comparison, the running time of GSW design is $O(n^2 d)$. Note that since typically $n \gg d$, our running time is much better than GSW. In addition, since our operations involve matrix-matrix or matrix-vector multiplications, this is naturally parallelized in computers. 
In comparison, GSW design requires a random walk with $n$ steps, which makes it less parallelizable. Therefore, in practice, one might see even more dramatic differences in terms of running time. --- Rebuttal Comment 1.1: Title: response to authors Comment: I thank the authors for their thoughtful response to my review. I have a few more questions for them. 1. Given the above work which provides unconditional bounds on the mean squared error in terms of $\delta$, can authors give practical suggestions for the value of $\delta$ to pick that might make this unconditional bound small? This seems crucial to the relevance of the results. 2. Authors write "The analysis from other papers (see [9]) shows that it is enough to pick $c$ about 10. We add a note about this to the paper.". However, this is confusing. The way the paper is written, it seems that there exists a particular constant and that the results only go through for this precise constant. If so, it seems imperative to find *exactly* this constant, rather than understanding it "approximately". Am I misunderstanding the results or their presentation? --- Reply to Comment 1.1.1: Title: Clarification regarding parameters of leverage score sampling Comment: Thank you for the second round of comments! > Given the above work which provides unconditional bounds on the mean squared error in terms of $\delta$, can authors give practical suggestions for the value of $\delta$ to pick that might make this unconditional bound small? This seems crucial to the relevance of the results. Note that the dependence of the first term on $1/\delta$ is only logarithmic, while the second term depends linearly on $\delta$. Therefore it is reasonable to consider $\delta=1/\text{poly}(n)$. 
For example, if we take $\delta=1/n^2$, then we get the following bound, $$\mathbb{E}[(\widehat{\tau} - \tau)^2] \leq (1-\frac{1}{n^2}) \left( \frac{32d}{n^2} \cdot || y^{(1)} - y^{(0)}||\_{\infty}^2 + \frac{8 (1+\frac{2}{1-\epsilon})}{n^2} \min\_{b} \left(||X b - \mu||\_2^2 + 300 \log(n) \cdot \zeta^2 \cdot ||b||\_2^2 \right) \right) + \frac{1}{n^2} \cdot || \mu ||\_2^2.$$ > Authors write "The analysis from other papers (see [9]) shows that it is enough to pick $c$ about 10. We add a note about this to the paper.". However, this is confusing. The way the paper is written, it seems that there exists a particular constant and that the results only go through for this precise constant. If so, it seems imperative to find exactly this constant, rather than understanding it "approximately". Am I misunderstanding the results or their presentation? The constant $C$ is an oversampling parameter needed to obtain the guarantees of leverage score sampling. Note that taking any larger constant is sufficient as well, i.e., we need sufficient oversampling. Therefore we do not need its exact value. An upper bound for $C$ essentially arises from the matrix concentration inequalities used to prove the guarantees of leverage score sampling. Then taking any oversampling parameter larger than or equal to this upper bound is enough to obtain the guarantees. In particular, Lemma 4 of [9] is as follows. Lemma 4 (Spectral Approximation via Leverage Score Sampling). Given an error parameter $0 < \epsilon < 1$, let $u$ be a vector of leverage score overestimates, i.e., $\tau_i(A) \le u_i$ for all $i$. Let $\alpha$ be a sampling rate parameter and let $c$ be a fixed positive constant. For each row, we define a sampling probability $p_i = \min\\{1, \alpha \cdot u_i c \log d \\}$. 
Furthermore, let $\text{Sample}(u, \alpha)$ denote a function which returns a random diagonal matrix $S$ with independently chosen entries $S_{ii} = \frac{1}{\sqrt{p_i}}$ with probability $p_i$ and $0$ otherwise. If we set $\alpha = \epsilon^{-2}$, $S = \text{Sample}(u, \epsilon^{-2})$ has at most $\sum_i \min\\{1, \alpha \cdot u_i c \log d \\} = O(\alpha c ||u||\_1 \log d)$ nonzero entries and $\frac{1}{\sqrt{1+\epsilon}}SA$ is a $\frac{1+\epsilon}{1-\epsilon}$-spectral approximation for $A$ with probability at least $1 - d^{-c/3}$. To have a probability of success of at least $1-\delta$, we need to take $c=\frac{3\log(1/\delta)}{\log d}$. Then an upper bound for $C$ is the coefficient of $\frac{\log(1/\delta)}{\log d}$, which is $3$. Therefore we can take any oversampling parameter greater than or equal to $3$, and the result still holds. Similar results apply to ridge leverage score sampling. We will add a note about this to the paper and will make sure the statement of theorems presents this clearly.
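The guarantee in Lemma 4 can be illustrated with a small simulation; the matrix sizes, $\epsilon$, and the oversampling constant $c=10$ (the ballpark value suggested earlier in this thread) are illustrative choices, not prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 5000, 10
A = rng.normal(size=(n, d))

# Exact leverage scores via the thin QR factorization:
# tau_i = a_i^T (A^T A)^{-1} a_i = ||q_i||_2^2, and sum_i tau_i = d.
Q, _ = np.linalg.qr(A)
tau = np.sum(Q**2, axis=1)

eps = 0.5
c = 10.0  # oversampling constant; "about 10" per the discussion above
p = np.minimum(1.0, eps**-2 * tau * c * np.log(d))  # sampling probabilities

keep = rng.random(n) < p
SA = A[keep] / np.sqrt(p[keep])[:, None]  # sampled, rescaled rows

# Spectral-approximation check: the eigenvalues of
# (A^T A)^{-1/2} (SA)^T (SA) (A^T A)^{-1/2} should lie near [1-eps, 1+eps].
L = np.linalg.cholesky(A.T @ A)
M = np.linalg.solve(L, np.linalg.solve(L, SA.T @ SA).T)
evals = np.linalg.eigvalsh(M)
assert evals.min() > 1 - eps and evals.max() < 1 + eps
```

Note the sketch keeps only a few hundred of the 5000 rows while preserving the Gram matrix spectrally, which is what makes the regression-error guarantees in the theorems go through.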
Summary: This paper focuses on estimation of individual and average treatment effects (ITE and ATE) with regression adjustment, which is combined with the method of ridge leverage score sampling in order to obtain the desired variance bounds. The leading case is Algorithm 1, which estimates the ATE with leverage score sampling and cross adjustment. In this algorithm, for each observational unit, either the treated or the control outcome is observed with equal probability. The observed outcomes are then adjusted via cross regression adjustments, for which the regression coefficients are computed using ridge leverage score samples. The paper provides a number of theoretical results regarding the variance bounds and reports experimental results using both synthetic and real-world datasets. Strengths: - The main strength of the paper is that it provides finite sample variance bounds for estimating the sample mean, individual treatment effects, and the average treatment effect with regression adjustment. - The research question addressed in the paper is of a general nature and is important in many applications. Weaknesses: - To my reading of the paper, the most important issue is that it is unclear whether the advances in this paper are substantial relative to Harshaw et al [16]. It seems that theoretically, the variance bounds are comparable between the proposed methods in this paper and the Gram-Schmidt walk (GSW) design method of Harshaw et al [16]. Furthermore, the right panel of Figure 1 shows that GSW outperforms the current paper. - The paper criticizes the GSW design, saying that it is not suitable for online experimental design (line 33 and lines 139-142). However, the proposed method is not immediately suitable for online settings. For example, it is unclear how to implement ridge leverage score sampling in a fully online fashion because the leverage scores change as more observations become available. 
- The one advantage of the proposed method over GSW might be computational speed in view of Table 6 in the supplement. However, there is no theoretical analysis in terms of computational aspects. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Is there any chance to obtain lower bounds for the variance? It would complement the existing results on the upper bounds. - Would it be possible to estimate the variance bounds? - Line 192, is the transpose missing for the last $X$? - Steps 5 and 6 in Algorithm 1 may contain typos. I think these steps are meant for leverage score sampling but the stated steps are unclear. Please check them. - Line 264. It might be helpful to provide a summary of running times in the main text. - There is very little discussion of the experimental results in section 4. Probably this is due to the page limit. It would be helpful if there were a short summary of numerical findings in the main text. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: One important limitation that is not mentioned in the paper is how to conduct statistical inference on the ATE. This is typically done using asymptotic arguments. For example, Harshaw et al [16] provides asymptotic normality and suggests a confidence interval for the ATE based on asymptotic normality. Although this paper focuses on finite sample variance bounds, it might be fair to mention the lack of inference methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer very much for their review and their questions/suggestions, which we address below. > To my reading of the paper, the most important issue is that it is unclear whether the advances in this paper are substantial relative to Harshaw et al [16]. It seems that theoretically, the variance bounds are comparable between the proposed methods in this paper and the Gram-Schmidt walk (GSW) design method of Harshaw et al [16]. Furthermore, the right panel of Figure 1 shows that GSW outperforms the current paper. Note that although the performance of GSW is similar to our approach, it is much slower than our algorithm. For example, on the twins dataset with about 32 thousand samples, the GSW design is about 500 times slower than our algorithm. > The paper criticizes the GSW design, saying that it is not suitable for online experimental design (line 33 and lines 139-142). However, the proposed method is not immediately suitable for online settings. For example, it is unclear how to implement ridge leverage score sampling in a fully online fashion because the leverage score changes as more observations are available. We would like to emphasize that we did not intend to criticize the GSW design. It certainly is an important work at the intersection of statistics and linear algebra. Our goal was only to address the differences between our approach and the GSW design. We will make sure this is reflected in our writing. Regarding the online setting, we would like to note that since we are using ridge leverage score sampling, as we pick the regularization (ridge) parameter, we make sure that the probability of selection of each row for any of the potential outcomes is at most 0.5. More specifically, note that in leverage score sampling, we only require upper bounds on the leverage scores (not the exact leverage scores), as stated in Theorem 7. 
Then, by the constants we have picked in our algorithms/theorems, we make sure that the probability of picking a unit for outcome 0 or outcome 1 is less than 0.5. Then essentially, the algorithm only needs to toss a fair coin for each unit arriving in the online setting. The only requirement here is that the largest singular value of the covariate matrix be bounded during the online experiment (because of our constant). We note that this is a very mild assumption. We will add a discussion about this. A similar argument applies to our subsampling (partial observation) approach as well. In addition, note that leverage scores can be computed in an online manner. See the following paper. Cohen, Michael B., Cameron Musco, and Jakub Pachocki. "Online row sampling." Theory of Computing 16, no. 1 (2020): 1-25. > The one advantage of the proposed method over GSW might be computational speed in view of table 6 in the supplement. However, there is no theoretical analysis in terms of computational aspects. Thanks – we will add a discussion about the running time of our algorithm. Roughly, leverage scores can be computed in input-sparsity time. More specifically, in $O(\text{nnz}(X) + d^{\omega})$ where $\text{nnz}(X)$ is the number of nonzero entries of the matrix $X$ and $\omega$ is the matrix multiplication exponent. In the worst case, when $X$ is a dense matrix, this is $O(nd + d^{\omega})$. Note that this is the bulk of our computation since the running time of the other steps of our algorithms is also $O(nd)$. In comparison, the running time of the GSW design is $O(n^2 d)$. Note that since typically $n \gg d$, our running time is much better than GSW. In addition, since our operations involve matrix-matrix or matrix-vector multiplications, this is naturally parallelized on computers. In comparison, the GSW design requires a random walk with $n$ steps, which makes it less parallelizable. Therefore, in practice, one might see even more dramatic differences in terms of running time. 
> Is there any chance to obtain lower bounds for variance? It would complement the existing results on the upper bounds. We will add a discussion about the terms in the error bounds, but one general way of looking at our bounds is that regression adjustment replaces the population variance (which is equivalent to using a vector of all zeros for adjustment) with the error of the best linear fit. We note that the Bernoulli design is min-max optimal (see Lemma 4.1 of [16]), and the following works provide hardness bounds on the discrepancy part of the GSW design. We will add a discussion regarding these to the paper. Zhang, Peng. "Hardness Results for Minimizing the Covariance of Randomly Signed Sum of Vectors." arXiv preprint arXiv:2211.14658 (2022). Charikar, Moses, Alantha Newman, and Aleksandar Nikolov. "Tight hardness results for minimizing discrepancy." In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1607-1614. Society for Industrial and Applied Mathematics, 2011. > Would it be possible to estimate the variance bounds? Since the variance bounds depend on the regression error of vectors to which we do not have access (for example, $y^{(1)}+y^{(0)}$, since for each $i$, we can only observe either $y^{(1)}_i$ or $y^{(0)}_i$), it seems non-obvious how to obtain meaningful variance estimates. However, this is an interesting suggestion to study in future work. --- Rebuttal Comment 1.1: Title: Further comments Comment: - I very much appreciate the authors' rebuttal; especially, I like the argument regarding a discussion about the running time of their algorithm vs. GSW. - It seems that the authors did not reply to all of my points (e.g., there seems to be no discussion of the lack of inference methods). I presume that the authors did not respond to each of them because some of them are rather straightforward. 
- I believe that the paper could be improved a lot after one or two rounds of careful rewriting and so I raised my rating by one more point from 4 to 5. --- Reply to Comment 1.1.1: Title: Inference Comment: Since we are using the Bernoulli design, we believe that the central limit theorem (CLT) results hold in this case, allowing for inference as well. Both Lin [20] (Lemma 6 in Supplementary Material) and Freedman [*] (Theorem 1) have provided CLT results for regression adjustment. We believe these can be adapted to our results, but we defer this task to subsequent works. However, we will include a discussion mentioning this as future work and note that Harshaw et al. [16] have also provided such results. [*] Freedman, David A. "On Regression Adjustments in Experiments with Several Treatments." The Annals of Applied Statistics (2008): 176-196.
Summary: This paper explores the design and analysis of randomized experiments for treatment effect estimation, which is an important problem in causal inference. The goal of treatment effect estimation is to estimate the effect of a specific treatment on individual subjects or the average effect in the population, using only one of the two potential outcomes (either treatment or control) per subject. The objective is to obtain a precise estimator of the treatment effect, which is unbiased and has a smaller variance. However, it is often also desirable to minimize the number of subjects exposed to the experiment due to practical or ethical considerations. To address this, recent research has focused on leveraging linear algebraic techniques to design optimal experiments and estimate treatment effects. The authors identify three main limitations in previous approaches: (1) inefficiency resulting from using the entire population in the experiment, (2) sub-optimal variance bound of the resulting estimator, and (3) inflexibility in experimental design, particularly in online settings. To overcome these challenges, the authors propose combining subsampling techniques from numerical linear algebra with classical statistical methods of regression adjustment. This integrated approach aims to obtain an unbiased estimator with a variance comparable to the optimal value, while also being suitable for online experiment settings. The paper is organized as follows. In Section 2, the authors demonstrate the effectiveness of their proposed approach by considering four types of problems: (1) mean estimation of a vector, (2) individual treatment effect (ITE) estimation, (3) average treatment effect estimation (ATE) with full observation of both potential outcomes, and (4) ATE with partial observation, where only one potential outcome per subject is available. 
The primary focus is on problems (2) and (4), while problems (1) and (3) serve as precursors to facilitate the exposition of ideas. In Section 3, the authors describe the key techniques employed in their approach, providing a brief outline of the proofs for some theorems. Subsequently, they present experimental results from synthetic datasets (Section 4.1) and real-world datasets (Section 4.2) to validate the effectiveness of their proposed method. Strengths: The paper demonstrates several strengths that contribute to its overall quality and significance. Firstly, the problem of treatment effect estimation addressed in the paper holds substantial importance within the field of causal inference. The research questions explored are highly relevant and carry practical implications, underscoring the significance of the study. Secondly, the proposed approach of integrating subsampling techniques rooted in randomized numerical linear algebra with regression adjustment is a well-founded and suitable strategy for tackling the problem at hand. Additionally, the authors' systematic approach in Section 2 effectively illuminates the challenges associated with addressing the problem and provides a rationale for the proposed approach. Furthermore, the technical tools employed in the paper are well-established and widely utilized in the field. The authors' adept development of methods and analytical arguments reflects a strong grasp of these tools, bolstering the credibility of their findings. In summary, the strengths of the paper lie in the significance of the addressed problem, the logical and appropriate nature of the proposed approach, and the authors' proficient utilization of standard technical tools. These strengths collectively enhance the quality and validity of the research presented in the paper. Weaknesses: While the paper demonstrates strengths, there are several areas where improvements can be made to enhance clarity and strengthen the overall presentation. 
The weaknesses can be broadly categorized into two main areas: clarity and organization, as well as the soundness of the arguments. I. Concerns regarding clarity and organization Firstly, the paper lacks clarity, potentially resulting from a lack of organization. To improve clarity, the authors should clearly identify the specific problems they are focusing on and indicate which parts of the paper address these issues. This could be achieved by adding a dedicated subsection or table in the Introduction that summarizes the problems addressed, the solutions proposed, and references to specific sections and theorems. Additionally, restructuring sections would contribute to better organization and understanding. Merging Sections 2.1 and 2.2 into a single subsection titled "ITE Estimation" and merging Sections 2.3 and 2.4 under the title "ATE Estimation" would streamline the presentation and emphasize the two treatment effect estimation problems. Furthermore, Section 3 would benefit from reorganization and clearer subsection headings to provide explicit explanations of the specific techniques being described. For example, Section 3.1, currently titled "Random vector regression," contains a combination of an auxiliary theorem (Theorem 8) used to prove Theorem 2 (specifically, the display equation in line 113) related to ITE estimation, as well as a proof sketch of Theorem 3 in Section 2.3 related to ATE estimation. The rationale for grouping these two contents together is unclear, and it would be beneficial to clarify this and consider separating them into distinct subsections. Moreover, the paper contains numerous theorem statements that, although likely mathematically correct, pose challenges in interpretation for several reasons. Many of these theorems reference algorithms that are either presented in later parts of the paper or not included in the main text at all. 
This creates logical inconsistencies and forces the reader to navigate back and forth, hindering comprehension. Furthermore, the authors do not provide explicit remarks on the optimal variance or the origin of the terms used in the error bounds, particularly, neither in close proximity to the theorem statements nor in a designated discussion section. Consequently, it is difficult to grasp the significance of the error bounds in terms of their semantic interpretation and their implications, including how tight or loose they are compared to optimal errors. II. Concerns regarding soundness Moreover, there are concerns regarding the soundness of the work, which may be further exacerbated by the clarity issues. The authors claim that their proposed method successfully addresses the aforementioned three challenges of previous approaches, but this claim lacks adequate support. To strengthen their arguments, the authors should provide explicit discussions or remarks, either following relevant technical propositions or in a dedicated discussion section. Specifically, they should elaborate on the following aspects in clear and comprehensible language: 1. Clarify why previous approaches, such as [16], require the entire population in the experiment, while the proposed method does not. 2. Discuss the optimal values or lower bounds of variance and how the authors' error bounds can be considered "comparable to the optimal." 3. Explain how the proposed method allows for adaptation to the online experiment design setting. Additionally, it would be beneficial to appropriately compare the proposed method to baseline approaches, such as classical regression adjustment estimators, Lin's interacted regression adjustment [20], or the crude difference-in-means estimator. Conducting such comparisons, particularly when the sample size is equal to the entire population, would provide a clearer understanding of the strengths and limitations (if any) of the proposed approach. 
Addressing these weaknesses would significantly enhance the clarity, soundness, and overall quality of the paper. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. I would like to request the authors to address the concerns raised in the “Weaknesses” section.
 2. It appears that the error bound in Theorem 2 may be trivial and not particularly informative, because $ d \| y^{(1)} + y^{(0)} \|_{\infty}^{2} \geq \| y^{(1)} + y^{(0)}\|_{2}^{2} \geq \| y^{(1)} - y^{(0)}\|_{2}^{2} = \| t \|_{2}^{2} $, assuming $\| y^{(1)} - y^{(0)}\|_{2} \leq \| y^{(1)} + y^{(0)}\|_{2}$. It would be beneficial for the authors to clarify the significance and value of Theorem 2, particularly in relation to its practical implications.
 3. In Section 4.1, it would be helpful to provide information about the population size (n) for each dataset used in the experiments. Additionally, it would be valuable for the authors to evaluate the methods by quantifying the required sample size (i.e., the size of the sub-population) to achieve comparable performance to the method based on the entire population. This analysis would provide insights into the practical efficiency of the proposed method and its advantages in terms of minimizing the required sample size.
 4. In Section 4.2, it appears that the classic regression adjustment method outperforms all the other methods, including the authors' proposed method. In light of these results, it would be important for the authors to provide a clear justification for the value and effectiveness of their proposed method, especially in comparison to the superior performance of the classic regression adjustment method.
 5. In line 267, the authors mention that the classic regression adjustment technique is biased. While this is true, it is worth noting that there are at least two standard unbiased techniques available, namely the difference-in-means method and Lin's interacted regression adjustment [20]. It would be valuable for the authors to include these unbiased techniques in their experiments to provide a comprehensive comparison and analysis of the proposed method against these established approaches. Addressing these concerns would greatly enhance the understanding, validity, and practical applicability of the research presented in the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: It would be valuable for the authors to provide a short discussion on the technical limitations of their methods and offer insights into the potential adverse consequences that may arise when applying these approaches in real-world settings, specifically in experiment design and treatment effect estimation. Nonetheless, it is important to note that given the paper's primary focus on theoretical aspects, a comprehensive and extensive examination of these limitations may not be deemed critical. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
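For concreteness, the two unbiased baselines the review mentions in point 5 can be sketched as follows. This is a minimal numpy sketch under a synthetic setup with linear potential outcomes and a constant treatment effect, not code from the paper; all names are illustrative.

```python
import numpy as np

def difference_in_means(y_obs, z):
    """Crude ATE estimate: mean treated outcome minus mean control outcome."""
    return y_obs[z == 1].mean() - y_obs[z == 0].mean()

def lin_interacted(y_obs, z, X):
    """Lin's interacted regression adjustment [20]: OLS of the observed
    outcome on an intercept, the treatment indicator z, centered
    covariates, and their interactions with z; the coefficient on z
    estimates the ATE."""
    Xc = X - X.mean(axis=0)
    design = np.column_stack([np.ones(len(z)), z, Xc, z[:, None] * Xc])
    beta, *_ = np.linalg.lstsq(design, y_obs, rcond=None)
    return beta[1]

# Synthetic check with linear potential outcomes and a constant ATE of 2.
rng = np.random.default_rng(1)
n, d = 4000, 3
X = rng.standard_normal((n, d))
z = rng.integers(0, 2, size=n)            # Bernoulli(1/2) design
y0 = X @ np.array([1.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(n)
y1 = y0 + 2.0
y_obs = np.where(z == 1, y1, y0)          # only one outcome observed per unit
ate_dim = difference_in_means(y_obs, z)
ate_lin = lin_interacted(y_obs, z, X)
```

When the outcomes are nearly linear in the covariates, as here, the interacted adjustment removes most of the covariate-driven variance that the difference-in-means estimator retains.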
Rebuttal 1: Rebuttal: We thank the reviewer for their in-depth review and valuable suggestions, especially those regarding improving the clarity of the writing and paper organization, which we will incorporate in a revised version. > To improve clarity, the authors should clearly identify the specific problems they are focusing on and indicate which parts of the paper address these issues. We have stated the specific problems in Section 1.2. We consider 4 problems: 1) mean estimation; 2) ITE; 3) ATE with full observation (i.e., for each unit, exactly one of the outcomes is observed); 4) ATE with partial observation (i.e., for each unit, at most one of the outcomes is observed). > Furthermore, the authors do not provide explicit remarks on the optimal variance or the origin of the terms used in the error bounds… We will add a discussion about terms in the error bounds. One general way of looking at our bounds is that regression adjustment replaces the population variance with the error of the best linear fit. We note that the Bernoulli design is min-max optimal (see Lemma 4.1 of [16]). > Clarify why previous approaches, such as [16], require the entire population in the experiment, while the proposed method does not. For [16], this is essentially because of the random walk approach and is inherent to the algorithm. Generally, for previous methods in the literature, including [16], one can uniformly subsample the population, design an experiment on the subsample, and form an estimate, but this leads to worse bounds compared to ours (see the discussion in Section 2.4). Essentially, the main reason that our subsampling approach works is the guarantees that leverage score sampling provides for minimizing linear regression error, and the fact that we show the variance of the population can be replaced with the error of the best linear fit. > Explain how the proposed method allows for adaptation to the online experiment design setting. 
Note that in leverage score sampling, we only require upper bounds on the leverage scores (not the exact leverage scores), as stated in Theorem 7. Then, by the constants we have picked in our algorithms/theorems, we make sure that the probability of picking a unit for outcome 0 or outcome 1 is less than 0.5. Then essentially, the algorithm only needs to toss a fair coin for each unit arriving in the online setting. The only requirement here is that the largest singular value of the covariate matrix stays bounded during the online experiment (because of our constant). We note that this is a very mild assumption. We will add a discussion about this. A similar argument applies to our subsampling (partial observation) approach as well. > It appears that the error bound in Theorem 2 may be trivial and not particularly informative because $d\|y^{(1)}+y^{(0)}\|_{\infty}^2\geq\|y^{(1)}+y^{(0)}\|_2^2\geq\|y^{(1)}-y^{(0)}\|_2^2=\|t\|_2^2$ assuming $\|y^{(1)}-y^{(0)}\|_2^2\leq\|y^{(1)}+y^{(0)}\|_2^2$. I wonder if the authors clarify the value of Theorem 2? Note that $d$ is the number of covariates/features. $y^{(1)}$ and $y^{(0)}$ are $n$-dimensional vectors, where $n$ is the population size. Therefore, the first inequality you mentioned does not hold. Typically, we have $d \ll n$. Thus, $d\|y^{(1)}+y^{(0)}\|_{\infty}^2$ is much smaller than $\|y^{(1)}+y^{(0)}\|_2^2$. As an example, pick $\alpha=\frac{d}{\sqrt{n}}$. In this case, the number of observed units will be $O(d\log(d/\delta)/\epsilon^2+d\sqrt{n})$ and the mentioned term in the variance bound is $\sqrt{n}\|y^{(1)}+y^{(0)}\|_{\infty}^2$, which could be much smaller than $\|y^{(1)}+y^{(0)}\|_{2}^2$ when $n$ is large. In addition, note that $y^{(1)}+y^{(0)}$ is indeed the vector $\mu$, which is the vector appearing in the Horvitz-Thompson estimator and the Gram-Schmidt walk. 
In practice, the size of the population is usually much larger than the number of features/covariates, and we believe that in such settings our bound is very useful. > In Section 4.2, it appears that the classic regression adjustment method outperforms all the other methods, including the authors' proposed method... We would like to note that, as opposed to classic regression adjustment, we present non-asymptotic bounds for our approach. To our knowledge, the guarantees on classic regression adjustment are asymptotic. Moreover, as you mentioned, classic regression adjustment is biased. Finally, note that it performs very poorly on our synthetic dataset. In general, we believe that no single approach performs well on all datasets and in all settings (similar to the no-free-lunch theorem in machine learning). Our approach is most effective in scenarios where the regression errors are small, i.e., where the treatment effect is well predicted linearly by the covariates. > In line 267, the authors mention that the classic regression adjustment technique is biased. While this is true, it is worth noting that there are at least two standard unbiased techniques available, namely the difference-in-means method and Lin's interacted regression adjustment [20]... Thank you for pointing this out. We have added experiments comparing the performance of the unbiased versions of classic regression adjustment in the PDF accompanying our response. Interestingly, Lin’s approach works very well on all except one of the datasets. However, on the twins dataset, it performs very poorly (but this might be due to numerical stability issues). Note that Lin’s approach is only asymptotically unbiased, while the focus of our work is to provide non-asymptotic guarantees and unbiasedness. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in providing a comprehensive rebuttal and clarifications to address the concerns raised by both myself and other reviewers. 
As a result of this process, I have gained a deeper understanding of the paper's contributions, notably aided by the insightful inquiries posed by Reviewer UdXn and the subsequent elucidations provided by the authors. It appears that the paper holds promise as a valuable addition to the conference. However, I maintain that substantial rewriting is necessary for the paper to effectively meet the required standards and convey its messages to the audience. Consequently, my assessment has shifted slightly towards the positive side, while still retaining some reservations, should a definitive stance be required.
Summary: The paper addresses the problem of ATE and ITE estimation in the presence of covariates. In particular, the authors provide finite-sample variance bounds for regression adjustment method-based estimators and novel variants thereof. The core of the methodology is using leverage scores, a randomized numerical linear algebra technique. This approach has been previously employed in selecting a sample of the population on which to perform an experiment. Strengths: To the best of my knowledge, using leverage scores for regression adjustment and offering finite-sample variance guarantees are novel contributions to the ATE and ITE estimation literature. The paper is strong in its technical aspects, as many of the theoretical contributions are robustly presented and accompanied by thorough proofs. Weaknesses: The idea of the paper is overall good, but the execution is poor. The main issue is that the paper is very technically dense and is largely unreadable without the appendix. It reads like a stream of (highly technical) consciousness rather than a scientific report. It also looks like the authors sacrificed intuition in favor of technical details of questionable relevance. Other notes: * The algorithm links are broken (as they are in the appendix, but that is not mentioned anywhere) * The algorithms referenced from the appendix contain quantities not yet defined, which makes the paper difficult to read * Many details in the experimental section are relegated to the appendix, so it is difficult to assess its soundness. In my opinion, while the paper has potential to be a valuable addition to the conference, it would require substantial rewriting in order to meet the required standards and expectations. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * How is the random vector in Key Challenge 2 built given that we don't observe both $\textbf{y}^{(0)}$ and $\textbf{y}^{(1)}$? 
* It looks like both potential outcomes are observed in the experiments, is that correct? Again, the experimental section is difficult to read. * It appears that, in several experiments, the performance of the proposed method is comparable to that of the leverage scores method. In light of this observation, what is the significance or unique contribution of this approach? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I was unable to find a discussion by the authors about the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive assessment of our technical contributions and for recognizing the novelty of applying leverage scores to regression adjustment. We agree with the reviewer that the paper can be made more readable and appreciate the reviewer’s suggestions towards doing so, which we will address in the revised version. > How is the random vector in Key Challenge 2 built given that we don't observe both $y^{(0)}$ and $y^{(1)}$? To construct the random vector v, we only need to observe one of the outcomes $y^{(0)}$ or $y^{(1)}$ for each unit. In particular, for each unit $i$, we toss a fair coin. If it lands on heads, we observe $y^{(0)}_i$ and set $v_i=-2y^{(0)}_i$. If it lands on tails, we observe $y^{(1)}_i$ and set $v_i=2y^{(1)}_i$. > It looks like both potential outcomes are observed in the experiments, is that correct? No, for each unit, we observe at most one of the potential outcomes. The only method that uses both observations is the baseline for ITE, which is used for comparison. > It appears that, in several experiments, the performance of the proposed method is comparable to that of the leverage scores method. In light of this observation, what is the significance or unique contribution of this approach? Our goal was to develop a treatment effect estimation approach that is fast and unbiased, works on finite populations, gives non-asymptotic guarantees, requires only a subset of the population (instead of the whole population), and can be used in different settings (such as online experimental design). As we mention in the introduction, each previous method has some shortcomings. So the goal of our method is to address all of these shortcomings simultaneously. For example, the GSW-design is very slow, as our experiments (presented in the appendix) suggest — on the twins dataset with about 32 thousand samples, the GSW design is about 500 times slower than our algorithm. 
Another example is that classic regression adjustment and using leverage scores to learn two different vectors are biased approaches.
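The coin-toss construction of the random vector $v$ described in the rebuttal above can be sanity-checked numerically. This is a minimal numpy sketch (names are illustrative): each unit reveals exactly one potential outcome, yet the draw is entrywise unbiased for the ITE vector $y^{(1)}-y^{(0)}$.

```python
import numpy as np

def sample_v(y0, y1, rng):
    """One draw of the random vector v: a fair coin per unit decides which
    potential outcome is revealed, and v_i = 2*y1_i when y1_i is revealed,
    v_i = -2*y0_i when y0_i is revealed, so E[v_i] = y1_i - y0_i."""
    reveal_y1 = rng.integers(0, 2, size=len(y0)) == 1
    return np.where(reveal_y1, 2.0 * y1, -2.0 * y0)

rng = np.random.default_rng(2)
n = 200
y0 = rng.standard_normal(n)
y1 = y0 + 1.5                 # constant individual treatment effect of 1.5
v_bar = np.mean([sample_v(y0, y1, rng) for _ in range(20000)], axis=0)
# v_bar approximates the ITE vector y1 - y0 entrywise
```

The factor of 2 is exactly the inverse observation probability, so this is a Horvitz-Thompson-style correction: averaging many independent draws of `v` recovers `y1 - y0` even though no single draw observes both outcomes for any unit.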
Rebuttal 1: Rebuttal: We would like to thank all reviewers for the positive assessment of our contributions to finite-population treatment effect estimation and its non-asymptotic analysis. We appreciate the comments on the presentation and organization of the paper. We believe these would make the paper stronger, and we will address these comments in the next version of the paper. Below we address the comments and questions of reviewers individually. The attached PDF contains some extra experiments requested by the reviewers. More specifically, it contains the comparison with the difference-in-means method and Lin's interacted regression adjustment approach. Pdf: /pdf/5cf790c6db8f303b7db069f83e26a4ef15d7bebb.pdf
NeurIPS_2023_submissions_huggingface
2023